GETTING STARTED WITH LEARNING IMPACT MEASUREMENT

L&D teams often take an all-or-nothing approach to learning impact measurement. If you’re already measuring a range of parameters, great. But if you’re measuring nothing, here are some pointers to get started.


Ever lamented the mysterious algorithm that powers your LinkedIn feed? As you’re probably aware, your feed is curated based on a variety of factors, including posts you previously engaged with, the people you’re connected to, and even potential topics of interest drawn from LinkedIn’s billion-strong user base. That’s big data at work for you.

However, if you think data is useful only at that scale, you couldn’t be more wrong. This belief is precisely why many L&D teams don’t measure the outcomes of their work at all: they either assume it takes a huge amount of data to get the information they need, or they’re intimidated by data to begin with. You don’t have to be.

Have you ever been on a diet or exercise program? What’s the first thing you do, even before the program starts? Measure your weight, right? And, as the program progresses, you keep weighing yourself to see how much you’ve lost. Measuring the outcome of your learning interventions could be as simple as this.

So, here are a few points to get you started, ranked by ease of measurement and in reverse order of importance. Start small and work your way up. And remember: having some measures in place is better than having none at all.

1. Did they open the learning?

This is an easy one. In fact, it may even sound too trivial to measure, but what’s the point of understanding impact if people didn’t even open the learning intervention you put together for them? If you find that the learning has very few or no takers, it could mean one of two things:

  • Users thought the topic was not relevant or useful to them
  • They didn’t find it interesting enough, based on prior experience with other courses

Either of these can be fixed with better upfront communication, a.k.a. marketing for learning. You can do this by creating a campaign of sorts and drumming up excitement about the program before it’s released.
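The metric itself is easy to compute if your LMS can export an enrollment report. Here’s a minimal sketch in Python, assuming a hypothetical CSV with one row per enrolled learner and a first_launch_date column that is empty when the learner never opened the course (your LMS’s report will differ, so adjust the names accordingly):

import pandas as pd

# One row per enrolled learner; 'first_launch_date' is blank (NaN)
# for anyone who never opened the course.
enrolments = pd.read_csv("enrolments.csv")

opened = enrolments["first_launch_date"].notna().sum()
open_rate = opened / len(enrolments)
print(f"Open rate: {open_rate:.0%} ({opened} of {len(enrolments)} enrolled)")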

2. Did they complete it?

Okay, let’s say people came in droves to the learning. What next? Did they actually complete it? This can give us a lot of input into the design of the learning itself. Was it too easy, or too difficult? Or was it boring?
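Completion rate itself is just as easy to pull. Here’s a sketch, assuming the same hypothetical export now carries a status column with values like "not_started", "in_progress", and "completed" (real LMS exports use their own labels):

import pandas as pd

df = pd.read_csv("enrolments.csv")
total = len(df)
opened = (df["status"] != "not_started").sum()
completed = (df["status"] == "completed").sum()

# The funnel: enrolled -> opened -> completed
print(f"Opened:    {opened / total:.0%}")
print(f"Completed: {completed / total:.0%}")
print(f"Of those who opened, {completed / opened:.0%} went on to finish")

A big gap between "opened" and "completed" is your cue to ask the design questions above, which leads us into the next point…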

3. Did they engage with it?

Completion data is really easy to obtain from an LMS, but if you want to dig deeper (that’s the whole point of this exercise, right?), you can look into factors like:

  • Popular pages: Are there some pages that are more popular than others? If yes, what is causing this?
  • Drop-off points: Is there a particular point at which more people are dropping off the learning?
  • Question attempts: Are people able to answer questions correctly? If not, where are they struggling the most?

This could lead to some interesting insights into the design of the learning, and should help you tweak it to improve your results.
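If your platform records page-level events (most xAPI or SCORM setups can produce something like this), drop-off points fall out of a simple group-by. A sketch, assuming a hypothetical event log with one row per page view:

import pandas as pd

# One row per page view: learner_id, page_number
events = pd.read_csv("page_views.csv")

# The last page each learner reached approximates where they stopped;
# spikes well before the final page are your drop-off points.
last_page = events.groupby("learner_id")["page_number"].max()
drop_offs = last_page.value_counts().sort_index()
print(drop_offs)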

4. How well did they perform?

An analysis of people’s performance in the assessment can reveal a lot about the design of the course. We can see which questions most people got right, and which ones many people failed to answer correctly.

If there is a pre-test, comparing performance across the two assessments gives us an immediate clue about the efficacy of the program.

And as mentioned above, digging deeper can yield far more interesting insights (the sketch after this list shows one way to pull such numbers). For example:

  • Is there a particular question that users are wrestling with?
  • Is there a topic that they are struggling in?
  • How many attempts are they taking to ‘pass’ the test?
  • If they don’t get a set of questions right, do they go back and review the content before attempting again?
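Here’s a rough sketch covering the pre/post comparison and the “hardest questions” check, assuming a hypothetical answers file with one row per question attempt: learner_id, question_id, phase ("pre" or "post"), and correct (1 or 0):

import pandas as pd

answers = pd.read_csv("assessment_answers.csv")

# Each learner's average score per phase, then the pre-to-post gain
scores = answers.pivot_table(index="learner_id", columns="phase",
                             values="correct", aggfunc="mean")
print(f"Average pre-to-post gain: {(scores['post'] - scores['pre']).mean():.0%}")

# Post-test questions answered correctly least often
post = answers[answers["phase"] == "post"]
accuracy = post.groupby("question_id")["correct"].mean().sort_values()
print(accuracy.head(5))  # the five hardest questions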

5. Did they think it was useful?

Up to this point, we have been looking at the data we can gather from the LMS without specifically asking users for their opinion about the learning. Here, we want to know what they think.

However, please be aware that learners are poor judges of learning efficacy, so basing your impact measurement on a learner survey alone is a dangerous game. But when the survey is used along with the other metrics, it can be extremely useful. For instance, if users report the course as helpful but many do not pass the test, it probably means that the difficulty of the assessment has to be dialed down a notch. Or perhaps the course needs more guided activities to better prepare learners for the assessment.
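Cross-referencing the survey with assessment results makes this concrete. A sketch, assuming you’ve merged the two sources into one hypothetical table with yes/no found_useful and passed columns:

import pandas as pd

df = pd.read_csv("survey_and_results.csv")

# Share of learners passing/failing within each survey response;
# a large "useful but failed" cell points at the assessment, not the content.
print(pd.crosstab(df["found_useful"], df["passed"], normalize="index"))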

6. Would they recommend it to friends?

This is another indicator of users’ impressions of the course. If many users are confident about recommending it to their colleagues or friends, they likely find it engaging and valuable. Here again, we should consider this information alongside the other metrics, not in isolation.
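If you ask this question on a 0–10 scale, a Net Promoter-style calculation is one familiar way to summarize the answers (the scale and column name here are assumptions, not something your LMS gives you directly):

import pandas as pd

ratings = pd.read_csv("survey.csv")["recommend_score"]  # 0-10 scale

# Share of promoters (9-10) minus share of detractors (0-6)
promoters = (ratings >= 9).mean()
detractors = (ratings <= 6).mean()
print(f"Net Promoter-style score: {(promoters - detractors) * 100:.0f}")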

7. Has their behavior changed? And if yes, does the change last?

What we have seen until now involves data that can be drawn from the LMS or an associated service. Measuring behavior, however, will most likely take us out of the learning realm and into the real world.

For instance, for a course on safety, we want to know whether people continue to slack off on safety protocols, or whether there is a positive change after taking the course. This information is likely to be available from the department whose people were enrolled in the learning, or perhaps in a central repository, depending on the organization.

8. Is there a business impact? Can it be measured?

The last point looked at change from the users’ point of view; this one looks at the same information from the business perspective.

In our example, has the number of reported safety incidents come down after the course was rolled out? If yes, by how much?
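Once the safety team shares the numbers, the comparison itself is simple. A sketch, using a hypothetical monthly incident-count file and a made-up rollout date (and bear in mind that other factors can move this number too):

import pandas as pd

incidents = pd.read_csv("incidents.csv", parse_dates=["month"])
rollout = pd.Timestamp("2024-06-01")  # assumed course rollout date

# Average monthly incidents before vs. after the rollout
before = incidents.loc[incidents["month"] < rollout, "count"].mean()
after = incidents.loc[incidents["month"] >= rollout, "count"].mean()
print(f"Monthly incidents: {before:.1f} before vs {after:.1f} after")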

Note that in some cases, this information may not be available or measurable. And that’s okay. The idea is to glean as much data as possible, so we have a meaningful understanding of how our efforts went, and what we can do to improve our work.

Most of the data we’ve talked about can be obtained from the LMS itself, or from a simple user survey. For the last two points, obtaining the data might seem daunting, but this is where we have the chance to shine and show that our work creates value for the organization.

And remember: Just like you decided that you want to reduce your weight (and started measuring it) before embarking on your diet or workout regimen, decide on your metrics of measurement before you start designing the learning solution.

Good luck!


Written by Srividya Kumar, Co-Founder @ Learnnovators

