This week’s prompt is to comment on AI tracking for eye gaze, posture, and other indicators of attention. Employers who need to monitor my indicators of attention are not employers I want to work for. For me, it’s not a privacy issue; it’s a respect issue.

For a professional knowledge worker, of course there are many ways to track how much work a person does. I’ve spent the last 18 years as a remote worker, and most of that as a contractor. There are ways an employer could track minute metrics of my productivity: they could look at my meetings, my Slack messages, my files. However, usually we don’t do those things. Instead, we trust that the person hired to do a job actually does the job. Some workers are slackers, and we try to identify those folks and move them out of the organization. With this in mind, I am insulted by the idea of tracking my adult learners’ gaze and posture.

I could see a scenario in which it might work. Say I wanted to A/B test two versions of a learning experience to improve my offerings, and say I had an AI tracking tool that was reasonably priced and easy to put in place. The tracking could provide useful information. In that scenario, learners would know they were being tracked during the test, and they would also know why.

That scenario requires quite a lot of effort: both learning experiences must be developed enough to test, the AI tool must be implemented, and learners must be informed about what is happening and why. The big question is whether it would be worth it. Would that information be more valuable than what we can get from other feedback? My suspicion is that the answer is no, and that it’s best to leave that type of testing to cognitive psychologists. As Clark (2020) says:

If you want insights into how people actually learn, set some time aside and look at the existing research in cognitive science.
You will do better looking at what the research actually says and then redesigning your online learning around that science. Remember that these scientific findings have already gone through a process of controlled studies, with a methodology that statistically attempts to get clean data on specific variables.

References
Clark, D. (2020). Artificial Intelligence for Learning: How to use AI to support employee development. Kogan Page Limited.
I’m reflecting on the question of how AI-supported sentiment analysis changes how decisions are made about curriculum, activities, assessments, or other educational aspects in my organization. In the company I’m contracting for, as well as in the broader customer education community, no one is talking about sentiment analysis in an educational context (yet).
I did find one mention of an off-the-shelf tool that uses sentiment analysis, and it’s a case where people may already be using sentiment analysis without necessarily calling it that. The tool is SurveyMonkey. SurveyMonkey is already a popular tool, though mostly in the context of things like product feedback, employee engagement, and market research. For organizations that consider their curriculum, learning activities, and assessments products (as my current organization does), it’s not a far leap to see using a survey tool like this to get further insights into learners’ sentiment.

We already use surveys to collect feedback on learning experiences. We usually have a small amount of data to work with, so the value of having some of that analysis automated by AI is limited. It would very much depend on asking the right kinds of open-ended questions in the survey itself. In this context, sentiment analysis would mostly benefit the Product Owner and Instructional Designer (as well as other stakeholders) during early testing, for the purpose of knowing the most important ways to improve a learning experience. That would presumably benefit learners as well, as we work to make the most effective learning experiences.

Why would I trust SurveyMonkey? For me it wouldn’t take much. A quick glance through their site leads me to believe they have done their due diligence with regard to privacy and compliance with existing regulations. For me personally, this would probably be enough, because the impact of decisions would be more about company success than anything with long-term consequences for learners. However, the cost of the tool would have to be justified for the organization, and as I mentioned earlier, that would only be true if we needed to analyze a large amount of data.

This assignment poses three questions related to using diagnostics and decision-making.
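Before turning to those questions, here is a minimal sketch of what sentiment scoring of open-ended survey responses might look like. The word lists and scoring rule are toy assumptions for illustration only, not how SurveyMonkey’s feature actually works:

```python
# Toy lexicon-based sentiment scoring of open-ended survey responses.
# The word lists and scoring rule are invented for illustration.

POSITIVE = {"clear", "helpful", "engaging", "useful", "great", "easy"}
NEGATIVE = {"confusing", "boring", "slow", "unclear", "frustrating", "hard"}

def score_response(text: str) -> str:
    """Label a free-text response as positive, negative, or neutral."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

responses = [
    "The onboarding course was clear and helpful.",
    "I found the module confusing and slow.",
    "It covered the basics.",
]
for r in responses:
    print(score_response(r), "-", r)
```

A commercial tool would use a trained language model rather than fixed word lists, but the basic workflow, turning free text into a label a Product Owner can aggregate, is the same.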
Question 1

Given what you understand about how AI can be used for learning diagnostics and decision making in educational spaces, how do you think educators and administrative staff should employ it and why?

The Director of HubSpot Academy recently said in a community discussion that one of the most popular topics with Customer Education leaders is evaluating the success of their programs (Sembler, 2023). I agree with her point that it’s important to be very clear about what you are trying to accomplish. Where Customer Education differs from other domains of education is that evaluating the impact for learners is usually secondary to evaluating the impact on the business. It’s not that learner outcomes are unimportant. But for a domain that constantly needs to prove its value to other functions of a business, it doesn’t matter what learners know or can do if it doesn’t support the business goals. However, the two are intertwined. If a customer submits too many support tickets because they didn’t go to the onboarding training, the business is not reaching the goals it had in providing that training. The learner hasn’t reached the outcome, but the business hasn’t either.

Question 2

What are potential practical pitfalls with relying on AI for these tasks?

AI for diagnostics in this context could be a great time saver, simplifying the work of connecting what, when, and how customers learn with the impact that has on their product adoption, expansion of licenses, and renewal of subscriptions. The pitfalls include problems similar to those in other domains of education, like having biased or incomplete data. The biggest pitfall, though, is getting buy-in from the business for investing in developing this capability.

Question 3

What are potential ethical challenges with relying on AI for assessment tasks?

Assessments are a different issue. There is a broad range of interpretation of assessment in Customer Education.
Some companies have exams for certifications; some do not. I doubt the quality of the exams (as developed and delivered) is consistent from one company to the next. In general, I would lean on the aspects of responsible AI to answer this question. I would want to know that the AI assessment is trustworthy, especially in keeping the data private. I would want to know how the AI made its determinations so that I can verify its accuracy (and provide the details for any learners with complaints). And I would want to know that the AI is following our laws and regulations and not harming anyone.

References

Sembler, C. (2023). Evaluating Success: Learner Data vs. Business Metrics [Post]. Customer Education Community.

In Customer Education, there are always conversations about how to better harness data to make decisions about programs that educate the customers of the business. As a fledgling function at many businesses, and having been subject to one reduction-in-force after another in the last two years, Customer Education is keen on proving its value as a function vital to (especially subscription-based) business success.
In the Learning Analytics section of Artificial Intelligence for Learning, Clark (2020) makes a key point: “…the goal is not to improve training but to improve the business” (p. 183). Analyzing the data is key in making that connection. However, the data can be so difficult to get. In my experience, request after request to integrate the LMS data with key business data, or to have a share of a data analyst’s time, frequently fell on deaf ears. So I had to tell the best story I could with the data I had available.

I was working in a hypergrowth software company that had just started investing in Customer Education. The story that I wanted to tell was that “trained” customers onboarded more quickly, took fewer customer success and customer support resources, and adopted the product more quickly and thoroughly. I had access to data like when they became customers (via Salesforce), as well as LMS data like learners’ view times and dates for content titles, percent completion of courses, and number of visits to the learning site. Without spending too much time on the data, I had an intuitive sense that we were on the right track. However, AI-supported analytics could have confirmed that intuition. For example, was the curriculum solving the problem it intended to solve regarding onboarding? By tying the completion of a set of courses to product adoption metrics and account license usage, we could have confirmed this. We might also have learned that customers needed to learn and practice beyond that initial onboarding content, which would have required further investment. As another example, I sensed that the self-paced online instruction methods were generally working, based on learner time spent and number of courses completed. But there was little to compare to, since the company wasn’t investing in other instruction methods.
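The kind of analysis described above (tying course completions to account adoption metrics) could start with a simple join and comparison. All account names, fields, and numbers below are hypothetical, invented purely for illustration of the approach:

```python
# Hypothetical sketch: join LMS completion records with product adoption
# records at the account level, then compare adoption between "trained"
# and "untrained" accounts. All data and field names are invented.

lms_records = [
    {"account": "acme", "onboarding_complete": True},
    {"account": "globex", "onboarding_complete": False},
    {"account": "initech", "onboarding_complete": True},
]
adoption_records = [
    {"account": "acme", "licenses_used_pct": 85},
    {"account": "globex", "licenses_used_pct": 30},
    {"account": "initech", "licenses_used_pct": 70},
]

# Index adoption data by account for the join.
adoption = {r["account"]: r["licenses_used_pct"] for r in adoption_records}

def mean_adoption(completed: bool) -> float:
    """Average license usage for accounts with the given completion status."""
    pcts = [adoption[r["account"]] for r in lms_records
            if r["onboarding_complete"] == completed]
    return sum(pcts) / len(pcts)

print("trained accounts:", mean_adoption(True))    # 77.5
print("untrained accounts:", mean_adoption(False)) # 30.0
```

In practice this join is exactly what the repeated requests for LMS-to-business-data integration were about; AI-supported analytics would automate it across many more dimensions than two toy fields.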
Perhaps by analyzing community posts, conversations with customer success, and other sources, we could have confirmed that the limited forms of instruction we offered weren’t enough for all but the most motivated learners. That is a perfect task for AI-enabled learning analytics.

Let’s look at Clark’s framework of four goals for learning analytics (describe, analyze, predict, and prescribe) in the context of Customer Education (Clark, 2020). Describing the who, what, where, and when of learning is marginally useful, but this goal is much more valuable when learning data is tied to the who, what, where, and when of product usage and advocate behaviors. Analyzing is where AI could save Customer Educators time, not only in making decisions about curriculum and needed course improvements, but also in connecting learning with business impact. The predicting goal is different in this context: grades and dropouts are less relevant if customers are achieving their goals without the measured learning. When it comes to prescribing, some Customer Education products already incorporate engines to recommend learning based on a customer’s other learning or performance on an assessment. The real win would be recommendations based on anticipating learning needs during customers’ work, offering the appropriate bite-size learning to get them started, followed up by other relevant experiences to deepen their learning.

References

Clark, D. (2020). Artificial Intelligence for Learning: How to use AI to support employee development. Kogan Page Limited.

Recently, I gave Google’s NotebookLM a try, and I’m excited about the possibilities. With this product, you upload the sources relevant to a specific project. Then in that notebook, you can engage with the chatbot in several ways. Google markets the tool as “Do your best… brainstorming…thinking…note-taking…creating…learning” (Google, 2023).
You can also start typing anything in the input field, which works similarly to ChatGPT. You can ask it to create an outline, write questions, draft a blog post, or whatever type of output you would like. Here’s an example of what I got from the example notebook (where the sources describe using NotebookLM) when I asked, “help me draft a blog post describing using NotebookLM in an educational context.”

The natural language processing model isn’t trained on the data you upload. In fact, NotebookLM doesn’t even save the information past your notebook session. According to author Tiago Forte (2024), the “software just shuttles your inputs into its context window temporarily so it can answer factually based on that information. Once you end your session, the information you entered is wiped from the model's memory so your data is secure.” That means your information stays private.
Before I asked for help on educational uses for NotebookLM, I was mostly considering it as a self-directed learning tool. What I find helpful as a learner is that I can upload the articles I read (or the notes I took on them) and use those sources to help consolidate the findings of those articles. Since my studies have me reading a large amount each week, this is a helpful step in my own learning process. However, seeing the suggestions it provided opens additional possibilities. Especially in writing or research-related learning activities, I could see incorporating NotebookLM into a lesson to help develop learners’ critical thinking skills. For example, the lesson could encourage students to generate ideas or content, report the output, and then evaluate the output, such as by pointing out what could be missing or biased. In any case, incorporating it into an educational setting would take careful design to implement appropriately and effectively. But working with generative AI content is clearly a skill that future workers will need.

References

Google. (2023). NotebookLM Experiment. Retrieved March 1, 2024, from https://notebooklm.google/

Forte, T. (2024, February 15). How to Use NotebookLM (Google’s New AI Tool) [Video]. YouTube. Retrieved March 1, 2024, from https://www.youtube.com/watch?v=iWPjBwXy_Io

The prompt this week is to consider technology interfaces (such as learning management systems) that I believe could benefit from the use of AI. I think in the world of extended enterprise training, especially for software, much is happening already in terms of personalization and adaptive learning.
While personalized and adaptive learning would definitely improve the experience for learners by meeting them with what they need to learn when they need to learn it, I think immediate feedback is a key area where AI could really revolutionize learning.

Do you like multiple choice questions (MCQs) for assessments? As an instructional designer, I find it quite challenging and time-consuming to write MCQs that really assess what learners know, versus what they can merely or vaguely recognize, versus what they can guess. As a student, I usually find MCQs fall into one of two categories: either they are insulting to my intelligence or they are unnecessarily tricky (which could be because I don’t know the answer, but could also be just because it’s a bad question). Either way, multiple choice questions rarely require that a learner pull something out of memory the way active recall does. Clark (2020) reports that, going back to Gates’ research in 1917, we’ve known about the importance of active recall for over a century. However, we keep using multiple choice questions in technology-based learning because they are easy to grade instantly. Of course, we can program generic feedback for each answer choice (although I rarely see that done). We’ve been able to program short answer questions for a while as well, but without AI, we have to program every possible iteration of a correct answer, including capitalization and punctuation.

With AI, what if instead of MCQs, we could have a reflection question? Reflection questions require learners to use active recall, not just for a word, but for an understanding of a concept, an application of a process, and so on. Then the AI provides realistic, accurate, and useful feedback that corrects misunderstandings, supports learners in their journey with appropriate resources for revisiting a concept, and helps consolidate learning.
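As a rough illustration of the idea, here is a toy sketch of automated feedback on a reflection answer that checks which key concepts from a rubric the learner mentioned. Real AI grading would use a language model rather than keyword matching; the rubric entries and feedback wording are invented for this example:

```python
# Toy sketch of automated feedback on a reflection answer: check which
# rubric concepts the learner mentioned and respond with targeted hints.
# The rubric and feedback text are invented for illustration; a real
# system would use a language model, not keyword matching.

RUBRIC = {
    "retrieval": "pulling information from memory strengthens it",
    "spacing": "practice spread over time beats cramming",
}

def feedback(answer: str) -> str:
    """Return encouragement or hints based on rubric concept coverage."""
    lower = answer.lower()
    missing = [concept for concept in RUBRIC if concept not in lower]
    if not missing:
        return "Good -- you covered all the key concepts."
    hints = "; ".join(f"{c}: {RUBRIC[c]}" for c in missing)
    return f"Consider revisiting: {hints}"

print(feedback("Retrieval practice with spacing improves long-term memory."))
print(feedback("Rereading my notes seems to help."))
```

The point is the shape of the interaction: the learner recalls freely, and the system responds with feedback tied to what was and wasn’t demonstrated, rather than a binary right/wrong.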
I will add that implementing AI grading of reflection questions would require curating and training on (or at least referencing) the relevant resources, which has its own difficulties. And of course, MCQs do have their place when they are well done. In fact, I used an MCQ for an undoing activity that I thought was particularly useful.

References

Clark, D. (2020). Artificial Intelligence for Learning: How to use AI to support employee development. Kogan Page Limited.

This week, I’ll take a look at some general uses of Artificial Intelligence in Education (AIED).
Intelligent Tutoring Systems (ITS)

Because of the improvements in learning when students have one-on-one tutoring, the goal of ITS is to provide one-on-one tutoring at scale. Examples of ITS include MATHia, ASSISTments, and alta (Holmes et al., 2019). These programs provide optimal step-by-step tutorials for students and adjust the level of difficulty according to the students’ needs. This approach appears beneficial for well-defined domains, such as math and physics. Studies suggest that these systems “have not yet quite achieved parity with one-to-one teaching” (Holmes et al., 2019) but have had positive outcomes. In some contexts, this could be a good assist for teachers in supporting students who need additional help, and a great complementary learning activity for students who want additional practice in the domain they are studying. My concern about using AI here is forgetting that it’s a complement to other forms of teaching and learning.

Dialogue-Based Tutoring Systems (DBTS)

Building on the idea of Intelligent Tutoring Systems, Dialogue-Based Tutoring Systems engage students in conversations about a topic. Again, this is intended to be a complement to other forms of teaching and learning, as students should have already attended lectures and done the reading to cover the domain content. This is another system where students work step-by-step through tasks (such as for math, computer science, and physics). Examples of DBTS include AutoTutor and Watson Tutor (Holmes et al., 2019). Holmes and his co-authors (2019) report that evaluations of AutoTutor show it does help students achieve higher learning gains, especially for deep learning of concepts, on par with having a human tutor in some cases. I have a similar concern about the use of AI here: ensuring that it’s a good complement to other forms of teaching and learning as additional practice.
Exploratory Learning Environments (ELEs)

ELEs use a constructivist approach and a learner model that encourages students to explore to build their own knowledge (Holmes et al., 2019). They provide automatic guidance and feedback, including addressing misconceptions and proposing alternate approaches. Examples include Fractions Lab, Betty’s Brain, Crystal Island, and ECHOES (Holmes et al., 2019). The evaluations seem to have mixed results, depending on the study and the individual program’s goals. Since the goal is more open-ended, I can see where it would be more difficult to tell whether an ELE is a helpful support to teaching and learning. My concern is that it would need to be used in the right context.

Automatic Writing Evaluation

Automatic Writing Evaluation programs like Intelligent Essay Assessor, WriteToLearn, e-Rater, and Revision Assistant provide feedback and/or assessment for student writing assignments (Holmes et al., 2019). This can be helpful to the teacher, as grading writing assignments can be time-consuming. It’s helpful for learners because the system can provide immediate feedback. An interesting evaluation finding for WriteToLearn is that students completed more revisions using the system (Holmes et al., 2019).

Additional Concerns

AI is such a popular idea that administrators may be rushing to get improvements any way they can. Working outside of our local school district, I’ve had a difficult time discovering how exactly they may be using AI. Companies who want some of the budget may have marketing hype that is full of promise, but the ethics, safety, and effectiveness may not yet be fully tested.

References

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.

When considering how to implement AI in education to ensure equal access and opportunities, it’s important to consider educational goals.
In Artificial Intelligence in Education, the authors point out that primary and secondary education lays the foundation for future learning, especially as it relates to economic, civic, and personal goals (Holmes et al., 2019). It’s important for anyone involved in AI education development to remember that information is not education. We don’t need bots to help do the work of cramming more knowledge into students’ heads; we need to support the development of skills relevant to their adult lives (one of which is knowing how to learn). With that in mind, the need to ensure equal access and opportunities is a challenging one. This week’s reading and my previous experience don’t provide a solid answer for the inequalities in education. But I do know that any implemented solution needs to incorporate plans for the hardware and energy requirements, which may be much more difficult to meet in some locations.
How will the role of teachers change?

I really appreciated this quote: “content knowledge may be the least important thing that students retain from their schooling” (Holmes et al., 2019). Humans’ ability to build relationships is better than a machine’s (Holmes et al., 2019), no matter how well we implement the AI. In that case, teachers’ ability to create a supportive and inclusive classroom where students feel valued and respected is an important foundation for their learning experiences. Teachers also serve as facilitators who foster collaboration between students as they learn. Collaboration is an important component of the learning process. It’s also an important skill for students to develop for their professional lives.

References

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.

Over the past few years, I’ve been somewhat aware of the increasing integration of AI into many aspects of my life. What first caught my attention was the map app on my first iPhone (around 2012?), providing directions and real-time feedback on traffic conditions. It was transformational for me in getting around in Dallas. Recently, I’ve been using a pair of apps in my own learning, especially relevant in my graduate studies. Reader and Readwise are connected apps that serve as a “read-it-later” app and a service to help with remembering what you read.
I use Reader to handle digital content. Whether it’s a webpage or a PDF document, I add it to Reader for a more focused reading experience. The app allows me to highlight key points that resonate with me and add notes for additional context. I also use it to highlight headings and tag them accordingly, creating a structured outline of the content. The AI capabilities of Reader have been particularly useful, offering features such as summarizing the document and answering questions about it.

I like Readwise for its daily review emails. If my original reading wasn’t digital, I can manually enter my notes from the physical book into Readwise. The app has algorithms to resurface my highlights and notes, and even questions I choose, to remind me of (and help me recall) what I’ve read. The control settings allow me to manage how often certain types of content make it into the email, including articles, books, and even books I’ve marked as read but not highlighted.

My interest in AI for learning and teaching primarily stems from my curiosity about how technology can leverage good learning science. I’ve observed how quickly businesses can adopt new technology, and I’m intrigued to see how this can be implemented in an educational context. As my experience with Reader and Readwise illustrates, I believe AI can offer specific solutions that enhance the learning experience.

I do have some concerns about the potential harm from AI. The way it is being used in elementary schools in China (The Wall Street Journal, 2019) brings to mind dystopian images of Big Brother from George Orwell’s novel 1984 (Orwell, 2021). However, privacy and misuse are my primary concerns at this point.

I like calling AI software, rather than any kind of “intelligence.” The comment Clark (2020) cites from Roger Schank, that “AI is merely software and that we should in fact just call it software,” really resonated with me.
Having worked in software for many years now, I understand that software can help solve problems, and that iteration and constant improvement are part of its DNA. It’s imperative that we bring human intelligence to the oversight of what problems we use computers to solve and how they solve them.

References

Clark, D. (2020). Artificial Intelligence for Learning: How to use AI to support employee development. Kogan Page Limited.

Orwell, G. (2021). Nineteen Eighty-Four. Penguin Classics.

The Wall Street Journal. (2019, October 1). How China is Using Artificial Intelligence in Classrooms [Video]. YouTube. https://youtu.be/JMLsHI8aV0g?si=Jr52iLn3DbShz4F7
Author

Michele Wiedemer has worked in software as an “accidental instructional designer” for many years. She is currently completing the MS in Learning Technologies at The University of North Texas. This blog represents reflections on specific assignments in the coursework.

Archives
February 2024