DEEPER ELEARNING DESIGN: PART 6 – PUTTING IT ALL TOGETHER

This is the final post of the "Deeper eLearning Design" series by Clark Quinn.


This is the sixth and final post in a series of six that covers Deeper eLearning. The goal of this series is to build upon good implementations of instructional design, and go deeper into the nuances of what makes learning really work. It is particularly focused on eLearning, but much of what has been mentioned also applies to face-to-face or virtual instruction. We’ve covered objectives, practice, concepts, examples, and the emotional component. Here we’re talking about putting it all together.

While the elements indicated for Deeper eLearning are really a minimum if you want eLearning that optimally achieves your outcomes, totally abandoning your existing processes is likely to be a daunting challenge. Instead, we should look at ways we can modify existing processes to accomplish the same goals. To start, we will walk through a sample process as an example; you will have to work out how to adapt it to your own context.

The typical process starts with the source of an overarching objective. Whether from market research or client request, at some point an initial goal is determined. This is then fleshed out with a combination of conversations with an SME and/or a suite of documents and presentations that constitute subject matter knowledge. Typically, an instructional designer uses these resources to determine the objective(s) of the course and the major content to be developed, as well as the assessment. The designer creates a storyboard, which is reviewed and refined before being passed on to developers to build. Once the course is developed, another review is undertaken to fix any errors. The course is then released. We’ll use this as a basis for suggesting changes.

Process

I was a grad student back when the usability field was beginning to explore different design methodologies. The watchwords then were iterative, situated, and participatory (testing and refining, in context, with the users involved). Fast forward a few decades (shhh), and the agile manifesto arises for software development, focusing on the same elements. Now we’re hearing about these principles being proposed for learning design, and I couldn’t be happier.

What this means in practice is we work in teams, developing core elements in conjunction with stakeholders, in tight cycles, testing and elaborating. There are core reasons why this is better. When people work together the output is better (particularly if you have the right culture and process). Tight cycles of testing and refinement end up addressing emergent issues that waterfall models (set requirements and develop to meet) can’t. And involving stakeholders ensures that their viewpoints are included, and as such facilitates their support.

I’m largely talking design here, and I’m fine with storyboards before production, but you should have at least your designer and developer collaborating at various checkpoints throughout the design and development process, and similarly designer and SME having several touchpoints. You can (and should) get more agile and actually develop iterative versions of your final result.

Also implied is a better process for working with SMEs to get objectives. I’ve found that with a fairly astute design team, you can anticipate what the core skills are likely to be and prepare a draft before talking with SMEs. Having end user stakeholders (e.g. those who employ the recipients of the training) involved helps focus the process on real outcomes. It also helps to focus SMEs on decisions and skills, not knowledge, working with them as partners rather than treating them as a fount of knowledge. Instilling approaches such as these into your process early on, and then having regular cycles of development and feedback, increases the likelihood of having an impact.

We should, at the same time we set objectives, also set our criteria for success. This shouldn’t be (just) how much they like it (though that is not a bad thing to evaluate), but how effective the outcome is. The question is: “What is an independent evaluation of the success of the learning experience?” And of course we need to measure our time and expenses in getting there. We should determine that the expense to develop justifies the benefits obtained by a good outcome. Doing all this may be more expensive, but it will also be effective, which is more than can be said for most eLearning. We will want to test against our metrics so we have a basis to refine our approaches, by testing against our goals as well as with our stakeholders and representative users.

From good objectives, we need to start designing a learning experience. Immediately after the objectives are stipulated, you should design the final practice or assessment. Then align other elements, such as the ones listed below, to assist the learner to succeed on the final practice.

  • What are the minimal intermediate practices they need to pass this final challenge and be prepared to address the real life challenges they’ll face?
  • What are the minimal examples that will facilitate transfer to the assessment (and beyond)?
  • What is the core model (or models) that they’ll need to be able to recall or regenerate, and how can we provide (no more than) sufficient exposure so they’ll appropriately abstract and transfer?
  • How can we hook and maintain their attention with minimal media usage?

Notice the focus on minimalism to achieve these goals. We want to minimize the time they spend (and our resources) to develop their ability. We will want to balance that with the appropriate use of media for both the message and variety, but we want the minimal set of content (not everything) on principle. We’ll also need to document our initial estimates to achieve the ultimate solution, and then review afterwards to improve our estimates as well as our outcomes.

Then we need templates. Here I don’t mean templates for specific interactions. I’m talking about templates that ensure we detail the successful elements for the components of learning.

  • Our practices need to be contextualized, meaningful as well as challenging, and include and address misconceptions.
  • Our examples need to refer to the model, include explication of the underlying thinking, emotionally engage, and provide sufficient scope.
  • Our models need to provide an appropriate basis for the inference of the correct actions, use appropriate media, and have sufficient re-representations.
  • Our introductions need to emotionally as well as cognitively open the experience just as our exits need to close the experience.

Our cognitive architecture means that we’re likely to miss some elements once in a while (particularly if we’ve previously done it otherwise). Templates and checklists are tools that help us avoid skipping steps and ensure that we’re on track. Creating job aids for ourselves and other forms of support is a quality control procedure.
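As one illustration, the template criteria above can be encoded as a simple checklist that a design review runs against each storyboard component. This is a hypothetical sketch (the component types and criteria names are my own shorthand for the bullets above), not a prescribed tool:

```python
# Review checklists derived from the template criteria for each
# learning component; a design review flags unmet criteria.
CHECKLISTS = {
    "practice": ["contextualized", "meaningful", "challenging",
                 "addresses misconceptions"],
    "example":  ["refers to model", "explicates thinking",
                 "emotionally engages", "sufficient scope"],
    "model":    ["supports inference of actions", "appropriate media",
                 "sufficient re-representations"],
}

def review(component_type: str, satisfied: set) -> list:
    """Return the criteria this component has not yet satisfied."""
    return [c for c in CHECKLISTS[component_type] if c not in satisfied]

# A practice item that is contextualized and challenging, but nothing else yet:
missing = review("practice", {"contextualized", "challenging"})
```

Even a job aid this small makes the review step explicit instead of relying on memory, which is the point of the checklist as quality control.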

We also want to ‘bake in’ creativity. We should have a brainstorming process to kick off a project and come up with some good ideas for settings. Good brainstorming includes having time to process the challenge individually before we come together. We need to diverge (e.g. no premature evaluation) before we converge, everyone’s opinion should be heard, and we should deliberately work to get wilder than we think we can get away with. We should also be addressing the overall learning experience, and looking for ways to make that engaging.

Team composition is important too. There needs to be someone on the team who has a sufficient background in learning. The designer can’t just be someone who has delivered training, let alone a tool-equipped SME. Even if one has had a formal background in training, there are differences between face-to-face and eLearning. And too often, ID certificates focus on process and objectives, but skip the nuances on things like examples versus concepts. Someone needs to be aware of the elements covered in the previous posts in this series.

An ideal team also has expertise for all the component media: just because you can storyboard doesn’t mean you’re a competent dialog writer, just because you’re good at graphic design doesn’t mean you’re an interaction designer, and so on. Of course, in the real world you may find people who have several skills, or you might have one expert in media who serves several teams, and, yes, sometimes someone can be ‘good enough’, but do pay attention to what constitutes good principles in all the component areas.

And the team should ideally be composed of professionals. This means not just being able to do their jobs, but also being members of the community of practice in their area and continuing to update their skills. The organization needs to provide and support their attendance at appropriate events, face-to-face or online, conduct internal review sessions and even internal education sessions, as well as support time for reflection.

This is part of learning to learn, and not only should this happen for the professionals, but it can and should be layered onto the courses being developed as well. Include assignments that review other people’s performance in the area, to help the learners develop self-evaluation skills. Provide requirements to do research on their own, so they learn how to research in the area. Require them to develop tools to scaffold their own performance, as a way for them to understand what the required performance is in a new way.

Finally, work with tools that support the above, not hinder it. Resist tools or templates that provide tarted up drill-and-kill for knowledge. You’ll need them from time to time, but not nearly as much as you’ll need tools that provide the opportunities to make better decisions, whether better-written multiple choice questions or, preferably, branching scenarios. Also avoid tools that only have one response for all the wrong answers, as you want each wrong choice to represent a misconception, and you want to address them individually.
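To make the last point concrete, here is a minimal sketch of a question model in which every wrong choice carries feedback aimed at its specific misconception, rather than one generic “incorrect” response. The data model and the sample question are hypothetical illustrations, not any particular authoring tool’s format:

```python
from dataclasses import dataclass

@dataclass
class Choice:
    text: str
    correct: bool
    feedback: str  # for a wrong answer, addresses its specific misconception

@dataclass
class Question:
    stem: str
    choices: list  # list[Choice]

    def respond(self, index: int) -> str:
        """Return choice-specific feedback, not a generic right/wrong message."""
        choice = self.choices[index]
        prefix = "Correct. " if choice.correct else "Not quite. "
        return prefix + choice.feedback

q = Question(
    stem="A learner fails the same practice item twice. What should the course do?",
    choices=[
        Choice("Repeat the identical item", False,
               "Repetition without variation encourages memorizing the answer, "
               "not developing the skill."),
        Choice("Present a varied item on the same objective", True,
               "Varied practice against the same objective supports transfer."),
        Choice("Skip ahead to keep the learner moving", False,
               "Skipping leaves the underlying misconception unaddressed."),
    ],
)
```

When you evaluate authoring tools, checking whether their question model can represent per-distractor feedback like this is a quick litmus test.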

In the longer term, you’re also going to want to stop developing monolithic courses and instead make the elements distinct and individually accessible. This is initially valuable to support content management, as you should want to have a lifecycle and evaluation process for every bit of content you develop. You’re also going to want to separate out what it says and does from how it manifests on screens, to support delivering on a wider variety of platforms. Going forward, with more discrete and described content, you’re going to be able to assemble courses by description and rule, not by hardwiring. This is the core to customization and personalization, and also supports mobilization.
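The idea of discrete, described content assembled by rule can be sketched in a few lines. In this hypothetical illustration, each element carries a type and an objective tag, a course is selected and sequenced by rule rather than hardwired, and rendering is a separate, swappable step (so a mobile renderer could differ from a desktop one):

```python
# Hypothetical content store: discrete elements described by metadata.
elements = [
    {"id": "m1", "type": "model",    "objective": "negotiation",
     "body": "Interests versus positions..."},
    {"id": "e1", "type": "example",  "objective": "negotiation",
     "body": "A vendor pricing dialog, with the thinking made explicit..."},
    {"id": "p1", "type": "practice", "objective": "negotiation",
     "body": "Choose your opening move in this scenario..."},
    {"id": "m2", "type": "model",    "objective": "forecasting",
     "body": "Trend versus seasonality..."},
]

def assemble(objective, order=("model", "example", "practice")):
    """Select and sequence elements by description and rule, not hardwiring."""
    pool = [e for e in elements if e["objective"] == objective]
    return [e for t in order for e in pool if e["type"] == t]

def render_text(element):
    """One possible renderer; presentation is kept separate from content."""
    return f"[{element['type'].upper()}] {element['body']}"

course = assemble("negotiation")
```

Because assembly is driven by the metadata, swapping the rule (say, practice-first for experienced learners) or the renderer requires no change to the content itself, which is what makes customization and multi-platform delivery tractable.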

Not all of this will likely happen at once, but it gives you a vision of the possibilities. It’s up to you to choose where you want to go first, but you want to start moving in this direction. Identify where you are, what things are good first moves for you, and create your learning experience design strategy. This is really the only viable path to creating eLearning that actually sticks. I implore you to start the journey, and wish you the best of luck on the path.

And on a personal closing note, thanks to Learnnovators for the opportunity to write on something I feel very strongly about. Having come out of a deep immersion in the science of learning, I’ve been dismayed about what I’ve seen perpetrated under the guise of training. I’m on a campaign to try to raise our game, and I hope this series has helped and that you’ll join in.

x—–x—–x—–x—–x

Here are links to all six parts of the “Deeper eLearning Design” series:

1. Deeper eLearning Design: Part 1 – The Starting Point: Good Objectives
2. Deeper eLearning Design: Part 2 – Practice Makes Perfect
3. Deeper eLearning Design: Part 3 – Concepts
4. Deeper eLearning Design: Part 4 – Examples
5. Deeper eLearning Design: Part 5 – Emotion
6. Deeper eLearning Design: Part 6 – Putting It All Together

x—–x—–x—–x—–x

Written by Clark Quinn

