DEEPER ELEARNING DESIGN: PART 2 – PRACTICE MAKES PERFECT


This is the second post in a series of six that covers Deeper eLearning. The goal of this series is to build upon good implementations of instructional design, and go deeper into the nuances of what makes learning really work. It is particularly focused on eLearning, but almost all of what is mentioned also applies to face-to-face or virtual instruction. We started with objectives and we’ll continue on through concepts, examples, emotional elements, and putting it together, but here we’re talking about good practice for learning.

Once you’ve established good objectives, the next thing you do is make meaningful practice that aligns with those objectives. If you’ve gotten the right objective, with a clear statement of what the learner needs to be able to do, in what context, and what performance measure you’ll use to evaluate them, you’ve got the basis for creating the practice that will develop the learner’s abilities. The goal is to develop learners to the point where you can have confidence that they’re prepared to perform in the workplace in the ways they need to have an impact on organizational outcomes.

The formal goals for learning are twofold:

1. Retention over time until the learning is needed, and
2. Transfer to all appropriate (and no inappropriate) situations.

Practice is key to both. While you can support transfer with concepts and examples, practice is essential for retention, and so the practices we design have to be optimized to achieve both of those goals.

I like to add a third:

3. Confidence on the part of the learner that they can perform as desired.

A certain level of uncertainty may be okay, but anxiety is not. These principles will continue to resonate through the coming posts in this series.

If what you’re asking learners to be able to do is make better decisions (and I will suggest that making better decisions is likely to be the most valuable component of organizational success), then that’s what they should be doing in the learning practice. That is, the practice should involve putting them in a story where a situation precipitates the requirement for the learner to make a decision. Let’s tease this out.

When I say “put them in a story”, what I mean is to put them into a context where that decision gets made. Contextualized decision-making leads to better learning outcomes than abstract decisions. We may use real contexts or fantastic ones (such as running a business in the wild west or learning about project management through terraforming planets), but we need a setting. The point is to situate the decision in an environment where the importance of getting the decision right in the learning experience is made to seem as crucial as it will be in real performance.

The more widely you want learners to apply the learning, the more different contexts you will want to use (or build a variety of contexts into one practice arena). If what they are learning is to deal with a wide variety of contractors, for instance, you’ll want practice dealing with some widely different contracting situations. If you’re developing coaching skills, you’ll want to have learners coach a suite of employees that mimics the breadth of employees they’re likely to see. In certain circumstances, say when you want far transfer because the task applies widely, you might even use fantastic settings – e.g. the wild west, medieval times, or science fiction – to facilitate abstraction and transfer. The goal is to have the practice be broad enough to create a generalizable basis for performance outside the learning experience.

This likely will mean you need more than one practice. If it matters, we have to get beyond the ‘practice until they get it right’, and allow them to ‘practice until they can’t get it wrong’. Several elements go into this.

1. Give them a simple challenge to work through first, and gradually increase the difficulty level

First, they may not initially be prepared for the full decision you propose, so you may have to scaffold them along the way. You might simplify the problems they face, or have some parts of the task performed for them (make how that happens natural, so the story stays plausible). You gradually increase the challenge until they’re performing the full task. For instance, in learning to diagnose trouble in a system, you might present obvious problems first, and then gradually build up to multiple interacting problems.

2. Create the most difficult challenge first, and then work backwards

The first practice you create should be the last practice they will undertake, as this practice serves as the final test of the learner’s ability. From there, you’re likely to need preliminary practice to get the learner ready to undertake that full performance. The point is that you work backwards from the final performance, determining the necessary intermediate practice, until you reach a level the learners should be able to perform initially (based upon your audience analysis).

3. Include a variety of practice situations that are representative of real world situations

In addition, they may need a lot of practice to be able to deal with all the situations they’ll encounter, given the transfer we hope to see. You don’t get this by being exhaustive; a more pragmatic solution is to select sample problems that span the space of possibilities (and just outside it, where the skill doesn’t apply), to support appropriate abstraction and facilitate transfer. Still, there has to be sufficient practice to develop the ability to apply the emergent representation to new situations. This is tightly coupled to the concepts you use as the basis to guide learner performance.

4. Space out the challenges, and provide them over a period of time

You also need to recognize that ‘massed practice’, giving all the learning in a short period of time, isn’t effective in leading to retention. What makes learning stick is spacing out the practice so that there is time for the brain to rest and then get reactivated. At the neural level, what happens in learning is that links between neurons get strengthened to create patterns of activation. There’s only so much strengthening that can happen in a day before rest is needed. Once that saturation limit is reached, the learner must rest (e.g. sleep) before more strengthening can begin. So the ‘event’ model of learning isn’t the most effective approach to creating persistent change.
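To make the spacing concrete, here is a minimal sketch (purely illustrative, in Python; the intervals are my own assumption, not a prescription from the research) of how a learning platform might schedule reactivations at expanding intervals rather than massing them into a single event:

```python
from datetime import date, timedelta

def spaced_schedule(start: date, intervals_in_days=(1, 3, 7, 14, 30)):
    """Return calendar dates for practice sessions at expanding intervals.

    The specific intervals are illustrative; the point is that each
    reactivation happens after a rest period, rather than all at once.
    """
    return [start + timedelta(days=d) for d in intervals_in_days]

if __name__ == "__main__":
    for i, session in enumerate(spaced_schedule(date(2024, 1, 8)), start=1):
        print(f"Practice {i}: {session.isoformat()}")
```

The exact intervals matter less than the principle: each subsequent practice arrives after a rest, so the strengthening has a chance to consolidate.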

This might seem like a lot of practice, but it is one area where most eLearning has fallen short. The point is to give learners a basis from which to perform in the workplace. One way to achieve this is to trade off content in favor of practice, i.e., less content and more practice. You have to match the amount of practice to the complexity of the situation, the variability of likely situations, and the time that is needed to apply it. However, do provide sufficient practice to ensure that your learners have a high likelihood of success. If it matters (and if it doesn’t, why are you bothering?), do it right!

I’m ecumenical about whether this is practice or assessment; if they’re performing the task they have to do in the workplace, with whatever trappings, we have a basis to assess their ability. Any practice you track is assessment, so the final practice is inherently a summative assessment. If you give them feedback about their performance that they can incorporate to improve, it’s formative assessment as well. You can create separate practice that is used merely to determine their competency, but it’s still practice, and you’d be hard-pressed to explain why not giving feedback on any practice is a good idea.

5. Provide challenging alternative choices based on common misconceptions

Also, in designing practice, make sure that the alternatives to the right answer are not just obvious or silly, but reflect ways in which learners go wrong. Learners (typically) don’t make random mistakes, but instead they make patterned mistakes that reflect misconceptions. You want to give them the opportunity to get it wrong in the learning experience rather than when it counts, and you want to have feedback specific to the misunderstanding (which is why I rail against any quiz tool which doesn’t have specific feedback available for each wrong answer). The choices should be challenging, not easy. Challenge is both more effective in learning and more engaging in experience.
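If you’re building this in a quiz or scenario tool, the underlying structure is simple: each wrong option carries feedback targeted at the misconception it represents. Here is a hypothetical sketch (the question, options, and feedback text are invented purely to show the structure, not drawn from any real course):

```python
from dataclasses import dataclass

@dataclass
class Option:
    text: str
    correct: bool
    feedback: str  # targeted to the misconception this choice reflects

# A hypothetical scenario question with per-distractor feedback.
question = {
    "stem": "A client's project is two weeks behind schedule. What do you do first?",
    "options": [
        Option("Add more people to the project", False,
               "A common misconception: adding people late often slows work further "
               "because of coordination overhead. Revisit the scheduling concept."),
        Option("Re-examine the critical path with the team", True,
               "Right: you diagnose before acting, focusing where delay actually matters."),
        Option("Ask everyone to work overtime", False,
               "This treats the symptom, not the cause, and tends to erode quality."),
    ],
}
```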

6. Provide meaningful, and where possible, intrinsic feedback

Feedback is critical; not only should it be specific to the way they went wrong, it should also include the consequences of their choices. Before they hear ‘right’ or ‘wrong’, they should see how their choice plays out. This is easy for simple responses, e.g. multiple choice, but we likely want richer responses as we near final practice.

7. Make the practice richer by reflecting the real world

One way to make practice richer is to have learners provide freeform responses. These can be hard for us to assess, but we can help learners assess their own performance, and this develops their self-monitoring too. So, for example, we can show them a model response to compare their own against, and give them a rubric to support evaluating their result. The goal is to gradually hand responsibility for self-monitoring over to the learners.
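As a simple illustration of handing over that self-monitoring, here is a hypothetical rubric a learner might apply to their own freeform response; the criteria and point values are invented for the sketch:

```python
# A hypothetical self-assessment rubric a learner compares their freeform
# response against after seeing a model answer; criteria are illustrative.
rubric = [
    ("Identifies the root cause, not just symptoms", 3),
    ("Considers at least two alternative actions", 2),
    ("Justifies the recommendation with evidence", 3),
]

def self_score(checked: list[bool]) -> int:
    """Sum the points for each criterion the learner judges they met."""
    return sum(points for (_, points), met in zip(rubric, checked) if met)

print(self_score([True, False, True]))  # -> 6 of a possible 8
```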

We can also develop additional skills, so-called 21st-century (21C) skills, by layering certain repeated activities across the practice. We can, for example, have different practices require creating a presentation, a document, or some other artifact that uses the kind of technology they’ll have to use in the real world. We can require them to create their own checklists or tools as well. Similarly, we can require them to work in groups, collaborate, and role-play. Each of these develops not only the domain skills, but also specific skills such as technology use and working with others, provided that a) we assess them in aggregate across the learning practice and b) we provide support for these skills, too.

8. Allow learners to progress at their ability level

Adapting the challenge is something games do well, starting with simple challenges and gradually adding more complex requirements based upon learner choices. This optimizes learning, allowing learners to progress at their ability level. Let’s be clear: I’m talking about serious games, where we’ve got a simulation underpinning the action and have layered story and adaptive challenge on top. This is not gamification, which adds extrinsic game mechanics; serious games instead tap into intrinsic interest. The benefits of a model-driven interaction include essentially infinite replay, allowing learners to practice to their level of comfort. Next to mentored live performance (which is problematic because mistakes in live performance can be costly and individual mentoring doesn’t scale well), serious games are the best learning practice.
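As a toy illustration of adaptive challenge (not how any particular serious game actually works; a real one would drive difficulty from its underlying simulation model), a rule like this adjusts the next challenge level from recent performance:

```python
def next_difficulty(current: int, recent_results: list[bool],
                    max_level: int = 10) -> int:
    """Pick the next challenge level from recent success/failure.

    A toy heuristic: step up after consistent success, step down after
    consistent failure, otherwise stay put. Serious games would derive
    this from the simulation model rather than a simple counter.
    """
    window = recent_results[-3:]
    if len(window) == 3 and all(window):
        return min(current + 1, max_level)
    if len(window) == 3 and not any(window):
        return max(current - 1, 1)
    return current

print(next_difficulty(4, [True, True, True]))     # -> 5
print(next_difficulty(4, [False, False, False]))  # -> 3
print(next_difficulty(4, [True, False, True]))    # -> 4
```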

This may seem like a lot, but with practice and support, designing better practice becomes automatic over time. Templates that focus on scaffolding design (rather than on tarting up meaningless quizzes), and processes that are aligned to learning will help, and I’ll suggest that meaningful practice is the most valuable improvement you can make in your learning design.

So, please, start with good objectives, and then align practice to be sufficient and appropriate. That is, make it meaningful.

x—–x—–x—–x—–x

Here are links to all six parts of the “Deeper eLearning Design” series:

1. Deeper eLearning Design: Part 1 – The Starting Point: Good Objectives
2. Deeper eLearning Design: Part 2 – Practice Makes Perfect
3. Deeper eLearning Design: Part 3 – Concepts
4. Deeper eLearning Design: Part 4 – Examples
5. Deeper eLearning Design: Part 5 – Emotion
6. Deeper eLearning Design: Part 6 – Putting It All Together

x—–x—–x—–x—–x

Written by Clark Quinn
