The prompt this week is to consider technology interfaces (such as learning management systems) that I believe could benefit from the use of AI. I think in the world of extended enterprise training, especially for software, much is happening already in terms of personalization and adaptive learning.
While personalized and adaptive learning would definitely improve the experience by meeting learners with what they need to learn when they need to learn it, I think immediate feedback is a key area where AI could really revolutionize learning.

Do you like multiple choice questions (MCQs) for assessments? As an instructional designer, I find it quite challenging and time-consuming to write MCQs that really assess what learners know, versus what they can merely or vaguely recognize, versus what they can guess. As a student, I usually find MCQs fall into one of two categories: either they are insulting to my intelligence or they are unnecessarily tricky (which could be because I don't know the answer, but it could also be just because it's a bad question). Either way, multiple choice questions rarely require that a learner pull something out of memory the way active recall does. Clark (2020) reports that from Gates' research in 1917, we've known about the importance of active recall for over a century. Yet we keep using multiple choice questions in technology-based learning because they are easy to grade instantly. Of course, we can program generic feedback for each answer choice (although I rarely see that done).

We've been able to program short answer questions for a while as well. But without AI, we have to program every possible iteration of a correct answer, including variations in capitalization or punctuation. With AI, what if instead of MCQs, we could have a reflection question? Reflection questions require learners to use active recall, not just for a word, but for an understanding of a concept, an application of a process, and so on. Then the AI provides realistic, accurate, and useful feedback that corrects misunderstandings, supports learners in their journey with appropriate resources for revisiting a concept, and helps consolidate learning.
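To illustrate the pre-AI approach I'm describing, here is a minimal sketch of rule-based short-answer grading (the question and the answer list are hypothetical). Every acceptable variant must be anticipated and enumerated by the designer; anything outside the list is marked wrong, no matter how reasonable:

```python
import string

# Hypothetical accepted answers for: "What does the A in ADDIE stand for?"
ACCEPTED_ANSWERS = {"analyze", "analysis", "analyze phase", "the analyze phase"}

def normalize(response: str) -> str:
    """Lowercase, trim, and strip punctuation so 'Analyze!' matches 'analyze'."""
    cleaned = response.strip().lower()
    return cleaned.translate(str.maketrans("", "", string.punctuation)).strip()

def grade_short_answer(response: str) -> bool:
    """Return True only if the normalized response is one of the
    designer-enumerated variants; unanticipated phrasings fail."""
    return normalize(response) in ACCEPTED_ANSWERS

print(grade_short_answer("Analyze"))         # True
print(grade_short_answer("  ANALYSIS. "))    # True
print(grade_short_answer("needs analysis"))  # False, though arguably defensible
```

The last case shows the brittleness: the matching logic can only ever be as generous as the variant list, which is exactly the gap that AI-based evaluation of free-form reflections could close.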
I will add that implementing AI grading of reflection questions would require curating and training on (or at least referencing) the relevant resources, which has its own difficulties. And of course, MCQs do have their place when they are well done. In fact, I used an MCQ for an undoing activity that I thought was particularly useful.

References

Clark, D. (2020). Artificial Intelligence for Learning: How to Use AI to Support Employee Development. Kogan Page Limited.
In this week's post, I'll reflect on the first stage of developing my first full undergraduate course. It's called Developing Effective Customer Education. It's meant as an introduction to the field, both for people who want to create and lead a customer education function and for students targeting other roles where customer education could be a key collaboration (such as Business and Marketing students).
Although the requirements of the assignment made me reconsider how I wanted to develop this course, I'd been thinking about the possibilities for a long time, which has helped the development go smoothly. You may have seen my LinkedIn posts summarizing chapters of The Customer Education Playbook (Quick & Kelly, 2022) after the book was first published. I knew the 12 steps in this book would make a perfect framework for a 16-week course. The incredible community participation in my series of LinkedIn posts contributed to my decision to include lots of discussions in the course. You can read more about my use of Connectivism as the overarching learning theory behind my design here.

Peer feedback is another important tool in learning. My peer provided some good insights on things that I could clarify or modify from my original design. For example, she commented that the course seemed to require a lot of writing. Writing is a good way to reflect and consolidate learning, but it makes sense that, as a writer, I would default to it as an activity. In response to her comments, I removed some of the blog assignments. I also gave all of the discussion, blog, and report assignments word count requirements to clarify them, and provided the option to complete them as videos instead. The intent is that these are reflections.

This course is the first time I've developed learning activities in Canvas, but I have the benefit of having worked with several other learning management systems. Even though I needed to learn some functionality specific to Canvas, my knowledge of other systems transferred easily. Canvas is significantly easier to learn than the system I used back in 2015, which required frequent collaboration with a WordPress developer. One thing I especially appreciated in Canvas was the duplication option, which helped me set up the Modules framework.
Currently, there are lots of copies of the first week, which I'm updating as I go through the development process. The main challenge I've had in developing is knowing how much is enough to support the audience's learning. I've been reading Donald Clark (Clark, 2020) for my other class, and I love his frequent reminders that 'less is more' in learning design. So I have weekly directions and a smattering of videos to complement the reading, discussions, and other assignments.

I wanted to make it very clear what students need to do. I have taken classes where it's so hard to figure out what I'm supposed to do that the cognitive load gets in the way of learning. So I modeled my course on one of the online classes I've taken in this program that I thought did this well. I settled on two headings in the Modules: Learn and Activities.

The course is complete through the first four weeks. I believe I can use that to speed development of the next sections.

References

Clark, D. (2020). Artificial Intelligence for Learning: How to Use AI to Support Employee Development. Kogan Page Limited.

Quick, D., & Kelly, B. (2022). The Customer Education Playbook. Wiley.

This week, I'll take a look at some general uses of Artificial Intelligence in Education (AIED).
Intelligent Tutoring Systems (ITS)

Because of the improvements in learning when students have one-on-one tutoring, the goal of ITS is to provide one-on-one tutoring at scale. Examples of ITS include Mathia, ASSISTments, and alta (Holmes et al., 2019). These programs provide optimal step-by-step tutorials for students and adjust the level of difficulty according to students' needs. This approach appears beneficial for well-defined domains, such as math and physics. Studies suggest that these systems "have not yet quite achieved parity with one-to-one teaching" (Holmes et al., 2019), but have had positive outcomes. In some contexts, an ITS could be a good assist for teachers in supporting students who need additional help, and a great complementary learning activity for students who want additional practice in the domain they are studying. My concern about using AI here is forgetting that it's a complement to other forms of teaching and learning.

Dialogue-Based Tutoring Systems (DBTS)

Building on the idea of Intelligent Tutoring Systems, Dialogue-Based Tutoring Systems engage students in conversations about a topic. Again, this is intended as a complement to other forms of teaching and learning, as students should have already attended lectures and completed readings covering the domain content. This is another system where students work step-by-step through tasks (such as in math, computer science, and physics). Examples of DBTS include AutoTutor and Watson Tutor (Holmes et al., 2019). Holmes and his co-authors (2019) report that evaluations of AutoTutor show it does help students achieve higher learning gains, especially for deep learning of concepts, in some cases on par with having a human tutor. I have a similar concern about the use of AI here: ensuring that it remains a complement to other forms of teaching and learning as additional practice.
Exploratory Learning Environments (ELEs)

ELEs use a constructivist approach and a learner model that encourages students to explore to build their own knowledge (Holmes et al., 2019). They provide automatic guidance and feedback, including addressing misconceptions and proposing alternate approaches. Examples include Fractions Lab, Betty's Brain, Crystal Island, and ECHOES (Holmes et al., 2019). The evaluations seem to have mixed results, depending on the study and the individual program's goals. Since the goal is more open-ended, I can see where it would be more difficult to tell whether an ELE is a helpful support to teaching and learning. My concern is that it would need to be used in the right context.

Automatic Writing Evaluation

Automatic Writing Evaluation programs like Intelligent Essay Assessor, WriteToLearn, e-rater, and Revision Assistant provide feedback and/or assessment for student writing assignments (Holmes et al., 2019). This can be helpful to the teacher, as grading writing assignments can be time-consuming. It's helpful for learners because the system can provide immediate feedback. An interesting evaluation finding for WriteToLearn is that students completed more revisions using the system (Holmes et al., 2019).

Additional Concerns

AI is such a popular idea that administrators may be rushing to get improvements any way they can. Working outside of our local school district, I've had a difficult time discovering how exactly they may be using AI. Companies who want some of the budget may have marketing hype that is full of promise, but the ethics, safety, and effectiveness may not yet be fully tested.

References

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
As an instructional designer for (mostly) short adult learning experiences, I was only fully aware of two instructional design models: ADDIE (Analyze, Design, Develop, Implement, and Evaluate) and SAM (Successive Approximation Model). My MS in Learning Technologies program classes have all seemed to focus on my ability to use ADDIE.
But here's my hot take: ADDIE is not an instructional design model. Both ADDIE and SAM describe the process for designing instruction. Ideally, designers would work to include effective instruction as part of the analysis and design phases. Ideally, they would also make improvements based on the evaluation. However, the models themselves don't include the direct guardrails that would ensure effective instructional practices are included from the start.

This week, I've been reading research related to the ARCS Model of Motivational Design, which John Keller created in the 1980s. ARCS is an acronym for Attention, Relevance, Confidence, and Satisfaction (Francom & Reeves, 2010). The model is named for the ARCS components of motivation, and it includes several motivational strategies to address each of those components. The point of the model is to ensure instruction is motivating for learners, so they will be willing to put in the effort required for effective learning. In an asynchronous online context, motivation is a key consideration. I sometimes get the impression that the suggestion for addressing motivation is always a single sound bite: gamification. I'm glad to know there is much more to it.

The analysis, design, development, and evaluation phases of the motivational design process are, of course, quite similar to ADDIE. The ARCS process includes ten steps in total to improve the motivational appeal of learning experiences (Francom & Reeves, 2010). One theory that the ARCS model is based on is expectancy-value theory (Small, 2000).

There is a difference between a theory and an instructional design model. A theory is more about understanding how learning occurs. An instructional design model, on the other hand, is about creating effective learning experiences. This distinction is important. We cannot create effective learning experiences without some understanding of how learning happens. But does that distinction matter for a client?
I've worked with many clients, and I can't remember anyone mentioning either learning theories or instructional design models. My experience may be skewed toward an audience of startup software companies, who may be least likely to care about either. But regardless of whether the theory or the model is important to them, they want results. And to get the best results, learning experiences need to be grounded in effective strategies. That's where a model like ARCS shines.

References

Francom, G., & Reeves, T. C. (2010). John M. Keller: A Significant Contributor to the Field of Educational Technology. Educational Technology, 50(3), 55-58. https://www.jstor.org/stable/44429809

Small, R. (2000). Motivation in Instructional Design. Teacher Librarian, 27(5), 29.

This week I got feedback from my professor and a peer on my design for a semester-long course for undergraduates on customer education. This post reflects on that feedback and changes I may implement to improve the design.
One of the challenges with customer education is that it is rapidly evolving as the field matures. Because of that, I want the course to be more about building a foundation of awareness and skills, rather than about students becoming experts in a specific set of knowledge. So I was very happy to discover George Siemens' theory of Connectivism (Siemens, 2005).

One question my peer reviewer brought up that I'd like to explore is in the Theoretical Basis of Design section of my design document. I'd like to explain more about why I chose Connectivism over the Behaviorism, Cognitivism, and Constructivism theories that are all mentioned in the theory paper. To me, Connectivism is the ideal approach for this course for many of the same reasons that Siemens gives for why these older theories aren't able to address the learning needs of today. Two notable questions that Siemens (2005) asks in relation to the limits of the other theories are: "How are learning theories impacted when knowledge is no longer acquired in the linear manner?" and "How do learning theories address moments where performance is needed in the absence of complete understanding?"

The challenge is in delivering such a learning experience in a 100% online, asynchronous manner. As it is intended to be delivered through a university (such as the University of North Texas' Learning Technologies program), it is a cohort-based course. That means that discussions can still be a key component of the course.

My peer brought up that the design includes many writing assignments. I realize that, as a writer, writing is my go-to option for reflection. To address this, I'm going to give students options for turning in their reflections: writing a blog or recording a video blog. The design hinges on the diversity of opinions introduced in the discussions, so the reflections just need to be accessible to students' classmates.
There were several other details that I want to clarify in the design, such as the intended audience and the grading of some of the listed optional activities. These details highlight the fact that instructional designers can always improve their designs by getting at least one other opinion before they start building.

References

Siemens, G. (2005). Connectivism: A Learning Theory for the Digital Age. International Journal of Instructional Technology & Distance Learning, 2(1). Retrieved January 25, 2024, from https://itdl.org/Journal/Jan_05/article01.htm

When considering how to implement AI in education to ensure equal access and opportunities, it's important to consider educational goals. In Artificial Intelligence in Education, the authors point out that primary and secondary education is a foundation for future learning, especially as it relates to economic, civic, and personal goals (Holmes et al., 2019). It's important for anyone involved in AI education development to remember that information is not education. We don't need bots to help cram more knowledge into students' heads; we need to support the development of skills relevant to their adult lives (one of which is knowing how to learn).

With that in mind, the need to ensure equal access and opportunities is a challenging one. This week's reading and my previous experience don't provide a solid answer for the inequalities in education. But I do know that any implemented solution needs to incorporate plans for the hardware and energy requirements, which may be much more difficult to meet in some locations.
How will the role of teachers change?

I really appreciated this quote: "content knowledge may be the least important thing that students retain from their schooling" (Holmes et al., 2019). Humans are better at building relationships than machines are (Holmes et al., 2019), no matter how well we implement the AI. In that case, teachers' ability to create a supportive and inclusive classroom where students feel valued and respected is an important foundation for learning experiences. Teachers also serve as facilitators who foster collaboration between students as they learn. Collaboration is an important component of the learning process. It's also an important skill for students to develop for their professional lives.

References

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
Author

Michele Wiedemer has worked in software as an "accidental instructional designer" for many years. She is currently completing the MS in Learning Technologies at The University of North Texas. This blog represents reflections on specific assignments in the coursework.