This week, I’m reflecting on how AI can be used for learning diagnostics and decision-making in educational spaces. Since the educational space I’m currently most familiar with is customer education for technical software companies, I’ll speak to that context.
AI has immense potential to revolutionize the field of customer education. I don’t think AI should fully own the capability of diagnosing a potential learner’s needs; that should be grounded in a thorough analysis of the company’s specific audiences and the goals the educational program is trying to achieve. But based on an instructionally sound assessment, or on surveillance and analysis of customers’ behavior in the software tools (such as looking at where people get stuck), AI could recommend appropriate learning experiences for those learners. The real power would be in using AI to determine whether users are doing the tasks in the software that experts know are required for customers to have success with the product (a minimal sketch of what such a recommendation layer might look like appears at the end of this post).

However, it’s important to be aware of the practical pitfalls that can arise when relying on AI for these tasks. AI algorithms are only as good as the data they’re trained on, and if the data is biased or inaccurate, it can lead to incorrect diagnoses and decisions. For example, an AI might incorrectly categorize a customer as a beginner based on their interactions with the company’s products, even though they may actually be an experienced user. This pitfall wouldn’t be as problematic with adult learners as it might be for a young learner in primary grades, but it could still damage the customer’s opinion of the product, ultimately leading to churn (lack of renewal).

Privacy is the main ethical concern, even though we all sign end-user license agreements when we adopt new software. The question is whether organizations are clear about what data is collected, as well as how it is stored and used. They must also be vigilant about protecting customer privacy. Companies need to be aware of the pitfalls and challenges as well as the benefits. For both the company and its customers, they can weigh the costs and risks against the potential benefits, just as with any other business decision.
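As a thought experiment, here is a minimal sketch of that kind of recommendation layer. Everything in it is hypothetical: the milestone names, the course titles, and the mapping between them are stand-ins for what a company’s subject-matter experts would actually define.

```python
# Hypothetical sketch only: the event names, milestones, and course titles
# are illustrative, not from any real product or vendor API.

# Map each expert-defined "success milestone" to the learning content
# that teaches it.
MILESTONE_COURSES = {
    "created_first_project": "Getting Started with Projects",
    "invited_teammate": "Collaborating in the Platform",
    "configured_integration": "Connecting Your Tools",
    "ran_first_report": "Reporting Fundamentals",
}

def recommend_courses(completed_events):
    """Suggest courses for milestones the customer has not yet reached."""
    return [course for milestone, course in MILESTONE_COURSES.items()
            if milestone not in completed_events]

# Example: this customer created a project but has done nothing else,
# so the other three courses are recommended.
print(recommend_courses({"created_first_project"}))
```

The hard part, as noted above, is the expert analysis that defines the milestones in the first place, not the code that checks them.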
This week, I’m continuing my reflection on the process of developing my semester-long course. There is not much time left before I deliver it for a grade, and I am feeling confident that the development is complete. I’ve got fifteen modules (over a sixteen-week semester) scheduled, with a good balance of discussions, group assignments, and individual assignments. I’ve added rubrics, quizzes, and videos to support some of the topics. I have a job aid for the instructor. What I have left to do is any final editing and recommended revisions.
As for challenges, there is never enough time. Period. I’ve found that to be true throughout my career. We always have to make trade-offs, because there is always more we could do. (In fact, one of the assignments in the course is on that topic!) I’m a big believer that less is more in online learning, so I wouldn’t want to add too much, either as content or activities. But there are probably a hundred small things I could do to improve it. For example, the videos I made for three of the modules enhance the reading. I would love to make videos for two or three other modules, but I have run out of time for that.

One of the challenges I faced was the job aid. I think of a job aid as a one-page checklist a learner uses to help implement a new skill. However, this is mostly a semantic issue. Once I thought about how I could help an instructor succeed in teaching this course, it became easier to understand what to include. In my job, we’ve had a similar issue. We are innovating a new type of blended learning that includes less instructor-delivered content. It’s a big shift from a lecture-based format. Our testing revealed that some instructors are better at facilitating activities than others. We created a facilitator enablement guide to make our expectations of their task very clear.

I won’t be implementing this course in the foreseeable future. I would love to teach this class at some point, but I don’t even know how to go about getting a new course approved at a university. When I embarked on this assignment, I considered choosing a different modality. I have in mind a more professional-friendly schedule that I could deliver as a series of workshops. However, I wanted the practice of developing an undergraduate course. At some point, I may use this as a springboard to that format, which I’ll have more control over implementing commercially. In doing that, I will offer an early access discount in exchange for feedback so that my early learners can help with evaluating the course.

This week’s prompt is to comment on AI tracking of eye gaze, posture, and other indicators of attention. Employers who need to follow my indicators of attention are not employers I want to work for. For me, it’s not a privacy issue; it’s a respect issue. For a professional knowledge worker, of course there are many ways to track how much work a person does. I’ve spent the last 18 years as a remote worker, most of that as a contractor. There are ways an employer could track minute metrics of my productivity. They could look at my meetings; they could look at my Slack messages; they could look at my files. However, usually we don’t do those things. Instead, we trust that the person hired to do a job actually does the job. Some workers are slackers. We try to identify those folks and get them out of the organization as much as possible. With this in mind, I am insulted by the idea of tracking my adult learners’ gaze and posture.

I could see a scenario in which it might work. Say I wanted to A/B test two versions of a learning experience to improve my offerings, and say I had an AI tracking tool that was reasonably priced and easy to put in place. The tracking could provide useful information. In that scenario, learners would know they were being tracked during the test, and they would know why. That scenario requires quite a lot of effort, though: both learning experiences developed enough to test, the AI tool implemented, and learners informed about what is happening and why.
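If all of that were in place, the comparison itself would be simple. Here is a minimal sketch, assuming the hypothetical tracking tool exports one aggregate attention score per learner (the scores below are invented for illustration):

```python
# Hypothetical sketch: compare attention scores from an A/B test of two
# versions of a learning experience. Assumes the (hypothetical) tracking
# tool exports one aggregate attention score per learner.
from scipy import stats

version_a = [0.72, 0.65, 0.80, 0.58, 0.77, 0.69]  # invented scores
version_b = [0.81, 0.74, 0.88, 0.70, 0.79, 0.85]  # invented scores

# Two-sample t-test: is the difference in mean attention likely real,
# or just noise from a small sample?
t_stat, p_value = stats.ttest_ind(version_a, version_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Even this little sketch hints at the real cost: the statistics are trivial compared with the effort of collecting trustworthy scores in the first place.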
The big question to ask is whether it would be worth it. Would that information be more valuable than what we can get from other feedback? My suspicion is that the answer is no, and that it’s best to leave that type of testing to cognitive psychologists. As Clark (2020) says: “If you want insights into how people actually learn, set some time aside and look at the existing research in cognitive science. You will do better looking at what the research actually says and then redesigning your online learning around that science. Remember that these scientific findings have already gone through a process of controlled studies, with a methodology that statistically attempts to get clean data on specific variables.”

References
Clark, D. (2020). Artificial Intelligence for Learning: How to use AI to support employee development. Kogan Page Limited.

I’m reflecting on the question of how AI-supported sentiment analysis changes how decisions are made about curriculum, activities, assessments, or other educational aspects in my organization. In the company I’m contracting for, as well as in the broader customer education community, no one is talking about sentiment analysis in an educational context (yet).
I did find one mention of an off-the-shelf tool that uses sentiment analysis, and it’s a case where people may already be using sentiment analysis without necessarily calling it that. The tool is SurveyMonkey. SurveyMonkey is already a popular tool, but in contexts like product feedback, employee engagement, and market research. For organizations that consider their curriculum, learning activities, and assessments products (as my current organization does), it’s not a far leap to use a survey tool like this to gain further insight into learners’ sentiment.

We already use surveys to collect feedback on learning experiences. We usually have a small amount of data to work with, so the value of having some of that analysis automated by AI is limited, and it would very much depend on asking the right kinds of open-ended questions in the survey itself (the sketch at the end of this post shows the general technique). In this context, sentiment analysis would mostly benefit the Product Owner and Instructional Designer (as well as other stakeholders) during early testing, by identifying the most important ways to improve a learning experience. That would presumably benefit learners as well, as we work to make the most effective learning experiences.

Why would I trust SurveyMonkey? For me, it wouldn’t take much. A quick glance through their site leads me to believe they have done their due diligence regarding privacy and compliance with existing regulations. For me personally, this would probably be enough, because the impact of decisions would be more about company success than anything with long-term consequences for learners. However, the cost of the tool would have to be justified for the organization, and as I mentioned earlier, that would only be true if we needed to analyze a large amount of data.
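For what it’s worth, the underlying technique is not exotic. Here is a minimal sketch using NLTK’s VADER analyzer on some invented survey responses; a commercial tool like SurveyMonkey would presumably run a more sophisticated version of this behind the scenes.

```python
# Hypothetical sketch: automated sentiment scoring of open-ended survey
# responses. The responses below are invented for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

responses = [
    "The hands-on labs were fantastic and really cemented the concepts.",
    "I got lost in module 3; the instructions were confusing.",
    "Good pacing overall, but the quizzes felt disconnected from the videos.",
]

for text in responses:
    # compound ranges from -1 (most negative) to +1 (most positive)
    score = analyzer.polarity_scores(text)["compound"]
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:8} ({score:+.2f})  {text}")
```

In practice, the value would come from running this across hundreds of responses and surfacing the most negative ones for review, which is exactly where a small data set limits the payoff.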
I have quite a lot of writing experience outside academia. I’ve written lessons for students that included creative writing examples. I’ve written policies and procedures. I’ve written documentation for programmers. I’ve written help articles for end users of technical products. I’ve written a memoir, as well as discarded drafts for three or four novels. I’ve written hundreds of video scripts. I’ve written online instruction for dozens of lessons. I wrote blog posts on a regular basis. This is what 30+ years as a professional writer looks like.

None of that writing is quite like what we see in scholarly and academic writing. According to Zhihui Fang, writing for Routledge (2021), “Academic writing is a means of producing, codifying, transmitting, evaluating, renovating, teaching, and learning knowledge and ideology in academic disciplines.” The author also stresses the importance of academic writing for disciplinary learning and academic success. As an English major in my undergraduate days, I wrote scholarly literature reviews and critiques. Those papers required research and appropriate citations. In my Master of Science writing assignments, however, the purpose is more about capturing and sharing empirical knowledge in a particular field. This is an important way to share learnings with other researchers. I’ve also written quite a few blog posts to reflect on my learnings in the program, as you can see on this site.

According to Eveleth, writing for Smithsonian Magazine (2014), about 1.8 million articles were being published each year in 28,000 journals (and the number has likely grown since then). Eveleth also reports a study from 2007 claiming that most papers are read only by their authors, referees, and journal editors (Eveleth, 2014). To me, this represents an enormous missed opportunity. If the purpose of scholarly writing is to share new knowledge, or to make new connections between existing knowledge, something is off if hardly anyone reads it. Maybe it’s the tone, which is much more formal than what you see in this blog. Maybe authors frequently fail to have a clear goal for why they are writing the paper and whom they are writing it for. Eveleth (2014) implies that the incentive structure in academia is to blame for an attitude that publishing something of poor quality is better than not publishing at all. If the incentive structure is the root cause, maybe authors are only writing to fulfill a quota.

I have had many academic journal articles assigned as reading in my Master’s level courses in Learning Technologies. In some ways, scholarly writing is a great way to share knowledge about the newer field of online learning, especially where an appropriate textbook doesn’t yet exist. However, I have found the quality of articles uneven at best, and a poor avenue for learning most of the time. That is not to say that scholarly writing isn’t an important format. For student assignments, it’s a structured way to show what you’ve learned and a thorough way to demonstrate your own critical thinking about a subject. I’m proud of my own work; one of my samples represents my research interests quite well.

Academics may be in for a major evolution, however. As AI makes it easier to write (and review), I hope the quality of scholarly writing will improve. As an assignment, instructors will need to be careful about how they structure student work that could be completed or assisted by AI. However, AI can now also assist with grading and feedback, so perhaps we can enhance the learning experience for students.

References
Eveleth, R. (2014, March 25). Academics Write Papers Arguing Over How Many People Read (And Cite) Their Papers. Smithsonian Magazine. Retrieved April 5, 2024, from https://www.smithsonianmag.com/smart-news/half-academic-studies-are-never-read-more-three-people-180950222/
Fang, Z. (2021, June 15). What is Academic Writing? (and Other Burning Questions About It). Routledge Taylor & Francis Group. Retrieved April 5, 2024, from https://www.routledge.com/blog/article/what-is-academic-writing-and-other-burning-questions-about-it
As of this writing, my course is complete except for a few details. I still need to add my final evaluation, which will take the form of a survey and focus groups.
From a technology perspective, the main challenge I had was during development. I wanted to move quickly from one page to another, and I often wanted to see what a previous direction or week’s assignment had specified. I overcame this challenge (partly) by having multiple tabs open. Canvas still makes it challenging to navigate between pages while designing a course. I suppose another way of addressing this would be to complete the full text of all of the course’s assignments before putting them into Canvas. However, beyond seeming like extra work, sometimes the structure of the course highlights something you might not have seen when drafting the text in a document.

The people challenge I see is having an editor. In my current role at Scaled Agile, all products have a formal editing and quality assurance review before being released. This is a little different from the peer feedback we’re doing as part of the coursework. An editor has a specific skill set and is able to look beyond the design at details the designer may have missed. For example, this week I noticed that I have week numbers both spelled out and in numeral form. Beyond that inconsistency, it would be worth a conversation with an editor about whether week numbers are the best way to label the modules. It feels important as a designer, because that’s how I divided the content. However, as a student, I learned after Spring Break that week numbering can be problematic: my two classes are now on different week numbers, because one class numbered Spring Break and one didn’t. I considered putting dates in the headers instead for easy student reference. However, that introduces tech debt requiring updates every semester. That may be a useful exercise, but it’s also unrealistic, depending on an instructor’s workload.

My experience working with professional deadlines helped me manage expectations for what I could accomplish in the timeline. I would have liked to add two additional videos, but I recognized that I simply didn’t have the time to create them. As a fan of working in an Agile way, though, I believe the course could be improved every semester, given what I would learn by teaching it, as well as from student feedback. Additional videos could be one of the improvements I make in future iterations. I’m sure there are things I can improve as a designer. However, as a capstone project for my Master’s, I’m quite proud of this accomplishment. It’s a course that has promise in the customer education community.

As the semester comes down to the last few weeks, I still have about a quarter of my course development to complete. Since the course is sixteen weeks, I divided the development into four-week chunks to keep from feeling overwhelmed by all the small things that need to be considered or included.
I have weeks 13-16 left to develop. This section includes an assignment that improves a previous assignment based on peer feedback. It has two more discussion assignments and another quiz to write and develop, as well as the final assignment to complete. I considered two more videos for this section, but I think I will save those for a future improvement of the course, as I am running out of time. I also have the final assessment and evaluation plans to develop. That includes writing a survey for students to self-assess their progress, as well as guidance for the small focus groups the instructor would hold with volunteers after the course to gain learner insights on the subject matter and learning activities. The development needs to be complete next week. It’s not a small amount, but I believe I have a good pattern of development and the time set aside to complete those tasks.

Some of the challenges I’ve faced stem from turning the sometimes sketchy plans in the design document into real assignments. The main example I found this week was providing the structure for peers to give feedback and be graded on it. I’m still not sure whether that changes the point structure of the course, but I hope to check that next week.

One of the questions I’ve considered is whether I’ll be able to implement the course. I would love to teach this course at a university, but there are unknown hurdles in implementing that plan. I’m working on my own career goals to have the qualifications to do so, but I’m not sure what it takes to get a new course approved. I do have a possibility simmering for a pilot, and that is where I could evaluate the current design for this format.

This assignment poses three questions related to using AI for diagnostics and decision-making.
Question 1: Given what you understand about how AI can be used for learning diagnostics and decision-making in educational spaces, how do you think educators and administrative staff should employ it, and why?

The Director of HubSpot Academy recently said in a community discussion that one of the most popular topics among Customer Education leaders is evaluating the success of their programs (Sembler, 2023). I agree with her point that it’s important to be very clear about what you are trying to accomplish. Where Customer Education differs from other domains of education is that evaluating the impact on learners is usually secondary to evaluating the impact on the business. It’s not that learner outcomes are unimportant. But for a domain that constantly needs to prove its value to other functions of a business, it doesn’t matter what learners know or can do if it doesn’t support the business goals. The two are intertwined, however. If a customer submits too many support tickets because they didn’t go to the onboarding training, the business is not reaching its goals by providing that training. The learner hasn’t reached the outcome, but the business hasn’t either.

Question 2: What are potential practical pitfalls with relying on AI for these tasks?

AI for diagnostics in this context could be a great time saver, simplifying the connections between what, when, and how customers learn and the impact that has on their product adoption, expansion of licenses, and renewal of subscriptions. The pitfalls include problems similar to those in other domains of education, like biased or incomplete data. The biggest pitfall, though, is getting buy-in from the business for investing in developing this capability.

Question 3: What are potential ethical challenges with relying on AI for assessment tasks?

Assessments are a different issue. There is a broad range of interpretation of assessment in Customer Education. Some companies have exams for certifications; some do not. I doubt the quality of the exams (as developed and delivered) is consistent from one company to the next. In general, I would lean on the aspects of responsible AI to answer this question. I would want to know that the AI assessment is trustworthy, especially in keeping the data private. I would want to know how the AI made its determinations so that I can verify its accuracy (and provide the details for any learners with complaints). And I would want to know that the AI is following our laws and regulations and not harming anyone.

References
Sembler, C. (2023). Evaluating Success: Learner Data vs. Business Metrics. [Post]. Customer Education Community.

To reflect on the impact the Master’s degree program has had on my career goals, I need to back up about five years.
In 2019, I had been a freelancer for many years. At the time, I was splitting my time between two projects, one developing content for Idaptive Academy and one for Rapid7 Academy. I enjoyed it, but I sometimes felt like all I did was create videos. I decided that in order to continue developing professionally, I would need to get a full-time job. It took several months (with the help of COVID) before I landed at Snyk. My hope was that by working at a start-up, I could build a customer education function from the ground up, and my leadership skills would develop as the function matured and expanded. That plan went great for about eight months. I won’t detail the specifics, but it became clear over the following months that I’d never reach the goals I’d set for myself by taking that job. I did accomplish some great things. I was nominated by the growing customer education community as a “rising voice” for the field in late 2021 and again in 2022, and the customer “academy” I built won an award in 2022.

My initial interest in graduate school had two parts. First, I hoped to fill the gaps in my skills as they related to being an education function leader at a software company. Second, I’d wanted (very vaguely, without action) to be a university professor when I was an undergraduate. I didn’t have the commitment, in time, money, or energy, to pursue that then, especially when I found there were other jobs I enjoyed that used my writing skills. But on the personal side, as my mother started declining rapidly in late 2021, she inspired me in a warped way. She never realized her educational dreams. When she died, I had a strong impulse not to let my own dreams wither.

I remember the day I came to UNT for the graduate school preview in March 2021. The speaker told a story about his wife, who had just finished her PhD at age 62 after 30 years in nursing practice. I felt like the story was told for my personal benefit, given my similar years of experience and age. The next few months were quite stressful, and I had a somewhat fuzzy picture of what it would mean to go to graduate school. But I was also experiencing an empty nest. I’d taken my kids on multiple college visits, and every time, I wanted to go myself.

In my UNT MS application, I stated that I wanted to fill gaps in my knowledge to help me in my current career path as Manager of Customer Education at a start-up software company, but that I had a longer-term goal. I wanted to focus on research that differentiated university learning, learning and development within a workforce, and professional learning as a customer of a product. I knew that my experience as an “accidental instructional designer” would make me a good instructor for future customer educators. Over the course of the program, I’ve not changed that long-term goal. In fact, after some internal struggles, I recognized that leading wasn’t actually what I was after, and that it would be a better fit to think about how I could help move the customer education field forward and teach others in that space. I did, however, change my academic plan. I started with the Teaching and Learning focus and later switched to what is now Instructional Design and Technology. I started with one class at a time, but by the second semester decided to speed up. Spring of 2023 was very difficult. I lost my dad as well, and finally decided to leave the job without a clear plan for what was next. I took time off to finally grieve, and focused on my classes.
In the summer, I found a lovely part-time contract in which I was a more confident instructional designer than I’d ever been. It balanced perfectly with taking a full load of classes, and it gave me the space to focus on my PhD applications. Through the process of talking to many people at both SMU and UNT, my PhD plans came into focus. That goal I had more than 30 years ago now seems possible, although I’m aware that it may evolve more over the next four years. I have the beginnings of a business plan if I choose to go that route. But whether I obtain a university faculty position or start my own business, I feel at peace with the goal of teaching future customer educators, and I am very excited about the next step.

In Customer Education, there are always conversations about how to better harness data to make decisions about the programs that educate the customers of the business. As a fledgling function at many businesses, and having been subject to one reduction-in-force after another in the last two years, Customer Education is keen on proving its value as a function vital to (especially subscription-based) business success.
In the Learning Analytics section of Artificial Intelligence for Learning, Clark (2020) makes a key point: “…the goal is not to improve training but to improve the business” (p. 183). Analyzing the data is key to making that connection. However, the data can be so difficult to get. In my experience, request after request to integrate the LMS data with key business data, or to have a share of a data analyst’s time, fell on deaf ears. So I had to tell the best story I could with the data I had available.

I was working in a hypergrowth software company that had just started investing in Customer Education. The story I wanted to tell was that “trained” customers onboarded more quickly, consumed fewer customer success and customer support resources, and adopted the product more quickly and thoroughly. I had access to data like when they became customers (via Salesforce), as well as LMS data like learners’ view times and dates for content titles, percent completion of courses, and number of visits to the learning site. Without spending too much time on the data, I had an intuitive sense that we were on the right track in some respects. However, AI-supported analytics could have confirmed that intuition. For example, was the curriculum solving the problem it was intended to solve regarding onboarding? By tying the completion of a set of courses to product adoption metrics and account license usage, we could have confirmed this (see the sketch at the end of this post). We might also have learned that customers needed to learn and practice beyond that initial onboarding content, which would have required further investment. Another example: I sensed that the self-paced online instruction methods were generally working, based on learner time spent and number of courses completed. But there was little to compare to, since the company wasn’t investing in other instruction methods. Perhaps by analyzing community posts, conversations with customer success, and other sources, we could have confirmed that the limited forms of instruction we offered weren’t enough for all but the most motivated learners. That is a perfect task for AI-enabled learning analytics.

Let’s look at Clark’s (2020) framework of four goals for learning analytics, describe, analyze, predict, and prescribe, in the context of Customer Education. Describing the who, what, where, and when is marginally useful, but this goal is much more valuable when learning data is tied to the who, what, where, and when of product usage and advocate behaviors. Analyzing is where AI could save Customer Educators time, not only in making decisions about curriculum and needed course improvements, but also in connecting learning with business impact. The predicting goal is different in this context: grades and dropouts are less relevant if customers are achieving their goals without the measured learning. When it comes to prescribing, some Customer Education products already incorporate engines that recommend learning based on a customer’s other learning or performance on an assessment. The real win would be recommendations based on anticipating their learning needs during their work and offering the appropriate bite-size learning to get them started, followed up by other relevant experiences to deepen their learning.
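As promised above, here is a rough sketch of what tying LMS completions to business data might look like. The file names, column names, and the 80 percent completion threshold are all assumptions for illustration, not from any real system.

```python
# Hypothetical sketch: join LMS completion data with product adoption data
# to check whether "trained" customers onboard faster. All file and column
# names are invented for illustration.
import pandas as pd

lms = pd.read_csv("lms_completions.csv")   # account_id, onboarding_pct_complete
crm = pd.read_csv("crm_accounts.csv")      # account_id, days_to_first_value,
                                           # licenses_used_pct

df = lms.merge(crm, on="account_id")

# Compare onboarding speed for accounts that finished the onboarding
# curriculum versus those that did not (assumed 80% completion threshold).
df["trained"] = df["onboarding_pct_complete"] >= 80
print(df.groupby("trained")["days_to_first_value"].agg(["mean", "count"]))

# A simple correlation between training completion and license adoption.
print(df["onboarding_pct_complete"].corr(df["licenses_used_pct"]))
```

Real AI-supported analytics would go well beyond a single join and correlation, but even this level of connection between learning data and business data would have strengthened the story.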
References
Clark, D. (2020). Artificial Intelligence for Learning: How to use AI to support employee development. Kogan Page Limited.