WILL THALHEIMER – CRYSTAL BALLING WITH LEARNNOVATORS (PART 1)

In this exclusive interview with Learnnovators, Will Thalheimer shares his insights on the disruptive trends that are currently influencing corporate L&D.


ABOUT WILL THALHEIMER:

Will Thalheimer, PhD, does research-based consulting focused on learning evaluation and presentation design in workplace learning. He’s available for keynotes, speaking, workshops, evaluation strategy, smile-sheet rebuilds, and research benchmarking.

Founder of The Debunker Club, author of the award-winning book Performance-Focused Smile Sheets, creator of LTEM, the Learning-Transfer Evaluation Model, creator and host of the Presentation Science Online Workshop, and co-host of the Truth in Learning podcasts, Will tweets as @WillWorkLearn and blogs and consults at Work-Learning Research, where he also publishes extensive research-to-practice reports—and makes them available for free.

Will holds a BA from the Pennsylvania State University, an MBA from Drexel University, and a PhD in educational psychology: human learning and cognition from Columbia University.

ABOUT THIS INTERVIEW SERIES:

Crystal Balling with Learnnovators is a thought-provoking interview series that attempts to gaze into the future of e-learning. It comprises stimulating discussions with industry experts and product evangelists on emerging trends in the learning landscape.

Join us on this exciting journey as we engage with thought leaders and learning innovators to see what the future of our industry looks like.

THE INTERVIEW:

1. LEARNNOVATORS: We are great fans of your work, Dr. Thalheimer. You are a leading voice in the learning industry who translates important research findings for practitioners to understand and implement in the workplace. It’s an honor to have you here to discuss the past, present, and future of learning science and learning evaluation!

The community considers you a learning science expert and a thought leader. For many years now, in your role as a learning scientist, you have been inspiring the community with insights drawn from research-based innovations in the practice of learning evaluation. As the President of Work-Learning Research, Inc., what do you see as the disruptive trends that are presently influencing corporate L&D? And how, in your view, are learning science and evaluation evolving in sync with these disruptions?

WILL THALHEIMER: One thing real “learning science experts”—as you describe me—know is that learning is one of the most wondrous, complex, and important areas of human functioning. With this complexity should come humility. I’ve been translating research on learning, memory, and instruction for over two decades now and I still feel I have much more to learn. Still, I take pride in having earned the wisdom that I have, despite my limitations.

What disruptive trends are coming in Learning and Development? Great question! Let me say first that disruptions can be helpful or harmful. Let me start with some harmful trends.

As a field, we are still getting inappropriately gobsmacked by sexy new artifacts, buzzwords, memes, and mythologies. Here are a few: Neuro-Everything is a current problem that will continue for the foreseeable future. Anything with a hint of neuroscience or brain science attracts our attention even though—as of this time—there is zero contribution from neuroscience to practical learning recommendations. What we’ve learned from neuroscience was previously known through regular old learning science. I’ve written about this extensively here: https://www.worklearning.com/2016/01/05/brain-based-learning-and-neuroscience-what-the-research-says/.

Artificial intelligence, big data, and machine learning are also a worry. While these tools certainly will have an influence, we are going to muck things up for at least a decade before we get useful improvements from them. Also, we will be bamboozled by vendors who say they are using AI but are not, or who are using just 1% AI and claiming that their product is AI-based.

Learning Analytics is poised to cause problems as well. People are measuring all the wrong things. They are measuring what is easy to measure in learning, but not what is important. I highly recommend Jerry Muller’s book The Tyranny of Metrics to learn how data can be good, but also very, very bad.

That’s enough depressing news; now on to the good disruptions. The explosion of different learning technologies beyond authoring tools and LMSs is likely to create a wave of innovations in learning. Organizations now—and even more so in the near future—will use many tools in a Learning-Technology Stack. These will include (1) platforms that offer asynchronous cloud-based learning environments that enable and encourage better learning designs, (2) tools that enable realistic practice in decision-making, (3) tools that reinforce and remind learners, (4) spaced-learning tools, (5) habit-support tools, (6) insight-learning tools (those that enable creative ideation and innovation), et cetera.

Organizations will begin using better learning-evaluation approaches and models such as LTEM (the Learning-Transfer Evaluation Model). This will propel better learning designs. One important aspect of this is that we’ll be able to leverage two aspects of learning evaluation: the traditional one, where we gather information about what’s working and what’s not, and the newer idea of “stealth messaging,” where our evaluation approaches send messages about what is important.

Finally, note that you are asking me to predict the future. It’s been said that the best way to predict the future is to create it. So, let me make a great leap of faith here—with some optimistic hubris—and say that I’m hoping that my forthcoming book will push some changes and innovations in the L&D ecosystem. We’ve been swimming in the same stagnant L&D fishpond for decades—and we don’t even notice the logjams in our thinking. We can’t even breathe in the fetid waters behind these logjams, and yet we persist in doing the same weak things, just sometimes with more vigor.

Note about the book: The working title is: The CEO’s Guide to Training, eLearning & Work: Reshaping Learning into a Competitive Advantage. People can learn about the book here and sign up to get an early copy by clicking on this link.

In the book, I’m going to attempt—as best I can—to explode some of these logjams: Our senior managers don’t understand learning; they think it is easy, so they don’t support L&D like they should. Because our L&D leaders live in a world where they are not understood, they do stupid stuff like pretending to align learning with business terminology and business-school vibes—forgetting to align first with learning. We are under-professionalized because of this dynamic and it must change if we expect to be effective in our learning-and-performance work.

We lie to our senior leaders when we show them our learning data—our smile sheets and our attendance data. We then manage toward these superstitious targets, causing a gross loss of effectiveness. This logjam must change.

We just do training. Sometimes in small ways we add prompting tools and on-the-job learning, but too poorly and inadequately. We must break this logjam. Part of this is our fault because we have terrible learning-request processes. In the book, I will offer one that expands beyond training. We must explode our sphere of influence beyond training.

Our CEOs don’t know how to judge their learning leaders or learning teams, so they revert to listening to inappropriate signals. If they hear hints of business terminology, they grade on a curve. If they get a beautiful dashboard of meaningless smile-sheet data, they smile. If their learning team wins learning industry awards, they assume all is well—even though these awards largely come through weak review processes. As long as we in learning are judged on the wrong criteria, we will continue to spew out inadequate learning and performance products and practices. Again, by breaking this logjam, we will move forward.

I’m probably overly optimistic about moving L&D to a better, healthier relationship with our organizational sponsors, but dammit if it’s not worth trying. If successful, the future looks very different from the present. We as an industry will take a robust, more scientific approach to learning. We will do A-B testing, we will develop ways to create virtuous cycles of continuous improvement, we will professionalize and hire and recruit people versed in the science of learning and research methods who can use these in practical ways. We will be judged more on effectiveness than on our silver tongues. Our tools will change to encourage a pilot-testing, comparison-testing mindset. We will train and educate ourselves more rigorously. We will earn the right to call ourselves L&D professionals.
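To make the A-B, comparison-testing mindset concrete, here is a minimal sketch in Python. The scores, group labels, and the choice of Welch’s t-test are illustrative assumptions, not anything Thalheimer prescribes; in practice the comparison would use results from a real scenario-based assessment across two pilot groups.

```python
# Minimal sketch of an A-B pilot comparison, assuming hypothetical
# decision-making assessment scores (0-100) from two course designs.
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

design_a = [62, 70, 58, 75, 66, 61, 73, 68]  # current course design
design_b = [71, 79, 74, 83, 77, 70, 81, 76]  # redesign with realistic practice

t = welch_t(design_b, design_a)
print(f"Design B mean {mean(design_b):.1f} vs. A mean {mean(design_a):.1f}; t = {t:.2f}")
```

The point is the mindset rather than the statistic: pilot two designs, measure something meaningful, and let the comparison drive the next iteration.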

2. LEARNNOVATORS: You are known for ‘reinventing the smile sheet,’ having developed a radical new approach to gathering data about learning effectiveness. Known as the ‘smile-sheet whisperer,’ you lead organizations across the world in rebuilding their smile sheets to gather better-quality data on learning effectiveness. Your book, ‘Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form,’ published in 2016, is touted as one of the best books on smile-sheet design and implementation. The 2019 recommended list of learner-survey (aka smile-sheet) questions that you recently released based on the Performance-Focused Smile Sheet approach is like gold dust for learning designers. What are the key challenges in implementing a ‘performance-focused’ approach in our learning interventions, and what would be your advice for handling them?

WILL THALHEIMER: Thank you for your kind words about my smile-sheet innovations! First let me say that none of us should rely on smile-sheet data alone—not even using my Performance-Focused Smile Sheet methods.

What I’ve seen from working with clients over the last several years—and from improving my tactics, approaches, and questions based on real-world feedback—is that even small improvements in our smile-sheet questions can produce amazing results. I was talking with a client recently who had offered a new set of smile-sheet questions to his trainers (his SMEs). One question asked the learners specifically whether the trainer utilized Yammer (an intra-organization social-media app) after training. One of the trainers went on a long rant about how terrible Yammer was and how stupid the question was. My client spoke with this trainer and explained the proposed value of using Yammer after training. The trainer was not moved. A week later, the same trainer apologized after rethinking his approach. He added a post-training element to his training—something the transfer research shows very clearly has benefits. So, we can see what happens when we add smile-sheet questions that nudge us—send messages to us—about the factors of learning effectiveness.

You ask about obstacles, and there are some, but they are surmountable. The big hurdle is tradition and convincing key stakeholders to make a change. People love Likert-like scales and questions about learner satisfaction and course reputation, even though these have been shown to be meaningless in terms of learning effectiveness. Also, some of the biggest learning-evaluation evangelists still recommend poor practices, as do many vendors.

The key to getting beyond these is twofold. First, a short introduction to the futility of current approaches sways 90% of people—even resisters. Second, pilot testing the new smile sheets—even with a direct comparison to traditional smile sheets—always wins the day. The data is infinitely better, and where traditional smile sheets encourage stagnation and paralysis, the new ones engender curiosity, change, and innovation in learning design and deployment.
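For readers who want to see what the better data looks like, here is a hypothetical sketch of a performance-focused question, loosely modeled on the approach in the book. The question wording, answer options, and standard labels are invented for illustration and are not taken from Thalheimer’s published question sets; the idea shown is that each answer option is concrete and mapped to a standard, so results are reported against expectations rather than as an average rating.

```python
# Hypothetical performance-focused smile-sheet question. Unlike a 1-5
# Likert item, each answer option is concrete and mapped to a standard.
question = {
    "text": "How able will you be to put what you learned into practice?",
    "options": {
        "I can apply it on the job without further support": "superior",
        "I can apply it if I first review my notes or job aids": "acceptable",
        "I understood the concepts but need more practice": "unacceptable",
        "I am not yet able to apply what was taught": "unacceptable",
    },
}

# Hypothetical responses from one pilot session.
responses = [
    "I can apply it on the job without further support",
    "I can apply it if I first review my notes or job aids",
    "I understood the concepts but need more practice",
    "I can apply it if I first review my notes or job aids",
]

# Report the share of learners meeting the standard, not a mean rating.
at_standard = sum(
    question["options"][r] in ("superior", "acceptable") for r in responses
)
print(f"{at_standard} of {len(responses)} learners at or above standard")
```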

3. LEARNNOVATORS: You refer to yourself as a ‘Research-Inspired Innovator in Learning Presentation Science’, and teach presentation skills based on the science of learning. What does research recommend for crafting impactful presentations (human-centered ones that connect, resonate, and inspire the audience)?

WILL THALHEIMER: Well, I’ve got a 12-hour online workshop on this, so there is a lot I could say about my Presentation-Science approach—but let me briefly tell the story. Again, let me start with a recognition that giving a presentation is an unbelievably complex enterprise; so much so that there are many paths to wisdom about giving presentations. Indeed, I am a very good presenter myself, but I have much more to learn.

My Presentation Science approach is unique in focusing on our audience members as learners. No matter what type of presentation we give, we want our audience to engage, we want them to comprehend something deeply, and we want them to be able to go away and do something with what they’ve learned.

The workshop—and the approach—then focus on helping presenters to help their audience to ENGAGE, LEARN, REMEMBER, and ACT. So, for example, we know from the science of human cognition that when people encounter visual stimuli, their eyes move rapidly from one object to another and back again trying to comprehend what they see. I call this the “eye-path phenomenon.” So, because of this inherent human tendency, we as presenters—as learning designers too!—have to design our presentation slides to align with these eye-path movements. In the workshop I talk about many ways to do this, but here are two. First, we want to rid our slides of logos and decorative graphics that do nothing for learning except draw the eye-path attention of our audience members. Second, we need to show objects on our slides one at a time—not all at once!

The workshop is offered right now at introductory pricing of $149, even though learners who have gone through the workshop have said it’s worth $10,000 and $2,000 and $349. By the middle of 2020, I will probably raise the price. People can learn more about the course at https://www.presentationscience.net/.

4. LEARNNOVATORS: There are many, including Geoffrey James, who argue that they ‘hate PowerPoint’ because it hampers learning. In one of his recent articles, titled It’s Official: PowerPoint Is the Worst Productivity Tool in All Creation, he puts forward arguments defending his views about the tool. There are even a few who suggest Prezi as a better tool for effective presentations. However, others question this argument, saying that such views (against PowerPoint or any other tool, for that matter) are not backed by science or research findings. As a ‘Research-Inspired Innovator in Learning Presentation Science,’ it is interesting to hear you suggest that we take the responsibility ourselves, and that the tool we use has no bearing on the effectiveness of the presentation. What does recent research have to say about this? Can you share your thoughts with our readers, please?

WILL THALHEIMER: People have complained about PowerPoint for years, but they are wrong to blame PowerPoint, Prezi, Keynote, or Google Slides. Most of the problems are in the designs we use. I agree that the templates these tools push on us are poor—mostly because they utilize bullet points, but we don’t have to use the templates. Indeed, in the workshop I suggest we use bullet points on less than 5% of our slides and I offer four methods to deal with the dangers of bullet points.

5. LEARNNOVATORS: In your research report Spacing Learning Events Over Time: What the Research Says, you write, “It might be helpful to think of spaced retrieval practice as the aspirin of instructional design. Like a miracle drug, it has multiple benefits and very few negative side effects.” We too, like many out there, are quite convinced that spacing learning over time enhances learning effectiveness. However, as we know, many learning solutions are still being designed as ‘one-time events.’ What would be your advice to learning designers to break away from traditional design approaches and support spaced learning?

WILL THALHEIMER: Did I write that? What a nice turn of phrase! Thanks for reminding me. SMILE. Here’s the deal: spaced learning—in all its varieties—is a proven, research-based methodology to support learners in learning and remembering. Over 400 scientific studies! We can do several things to utilize spaced learning—and by the way, when we talk spaced learning, we are talking spaced repetitions (not verbatim repetitions but conceptual repetitions). We can space content repetitions over time; we can also space practice over time. We can insert delays between repetitions, or we can interleave other topics with the current one. We can provide feedback as a form of repetition, delaying feedback in some instances. Immediate feedback is better when people are beginning to comprehend a topic, but delayed feedback can be better when people are looking to support their remembering.

Fortunately, today there are many tools that enable spaced repetitions and spaced practice. We can utilize a subscription-learning model to augment event-based learning or possibly as a replacement. We are still learning what works and for whom and in what situations.
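As a concrete illustration of one common spacing scheme, here is a minimal Python sketch of an expanding-interval schedule. The starting gap and multiplier are arbitrary illustrative values; the research supports spacing and conceptual repetition as such, not any one specific interval pattern or tool.

```python
from datetime import date, timedelta

def expanding_schedule(course_end, first_gap_days=2, multiplier=2.0, repetitions=5):
    """Schedule conceptual repetitions at expanding intervals.

    The defaults are illustrative, not research-prescribed values.
    """
    schedule, gap, current = [], first_gap_days, course_end
    for _ in range(repetitions):
        current = current + timedelta(days=round(gap))
        schedule.append(current)
        gap *= multiplier  # each gap is longer than the last
    return schedule

# Example: five spaced repetitions following a course that ends 2020-01-06.
for review_date in expanding_schedule(date(2020, 1, 6)):
    print(review_date.isoformat())
```

Each scheduled repetition could be a quiz question, a practice scenario, or a reminder delivered through a subscription-learning tool.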

6. LEARNNOVATORS: To quote Bob Mosher, “It is not about what they know; it is about what they can do.” Evaluating the real outcomes of our learning interventions has always been challenging, the confusion being what type of data to collect and how. You say, “As for the data, there are infinite ways to collect it. Mostly we default to learner surveys and attendance, but so much more can be gathered.” In this regard, your ‘Learning-Transfer Evaluation Model (LTEM)’ – a new learning-evaluation framework that provides precise recommendations for what to collect (and what not to) – looks very promising. Can you let our readers know why you feel LTEM is better than the Kirkpatrick-Katzell Four-Level Model for building effective learning interventions and validating learning results, and how you compare it with other new evaluation models in the field?

WILL THALHEIMER: Our models should propel our actions and our thinking toward appropriate end points. Our learning-evaluation models should nudge us NOT to accept attendance or learner activity as valid metrics. Our learning-evaluation models should NOT nudge us to evaluate “Learning” with mere knowledge checks. Unfortunately, that is exactly what the Kirkpatrick-Katzell Four-Level Model has done for six decades. It has been silent about the dangers of validating learning by measuring attendance, and so we in the learning field see attendance as a valuable metric. Even most industry awards judge applicant organizations on how many people were trained. Completely bogus! The Four-Level Model also puts “Learning” into one bucket. It’s supposed to be a learning-evaluation model, but it doesn’t differentiate between (a) regurgitation of meaningless trivia, (b) recognition of irrelevant knowledge, (c) recognition and/or recall of meaningful information, (d) decision-making competence, and (e) task competence. So, what do we do when we evaluate using the Kirkpatrick-Katzell Four-Level Model (or the five-level ROI model or any of these variants)? We say, “Okay, we need a Level 2 learning evaluation; let’s use a knowledge check.” Oh, the damage this has done! We have been nudged to use inadequate metrics of learning—even though learning is where we learning professionals have the most leverage.

LTEM is designed from the ground up to focus on what messages we want an evaluation model to send. It’s got eight tiers instead of four levels, keeping it relatively simple while also being more definitive about learning. Organizations are using LTEM not just for evaluation. They are finding it useful as one of the centerpieces of their learning strategy because it gets all of us thinking about what we really need to accomplish.

Another great thing is that LTEM and the report that accompanies it are available for free at https://www.worklearning.com/ltem/.
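For readers who think in code, here is a small sketch of the eight tiers as a data structure, with a toy check that flags evaluation plans stuck at the weakest tiers. The tier names are paraphrased from the published LTEM report; the critique function and its messages are invented here for illustration, not part of LTEM itself.

```python
from enum import IntEnum

class LTEMTier(IntEnum):
    """The eight LTEM tiers (names paraphrased from the LTEM report)."""
    ATTENDANCE = 1
    ACTIVITY = 2
    LEARNER_PERCEPTIONS = 3
    KNOWLEDGE = 4
    DECISION_MAKING_COMPETENCE = 5
    TASK_COMPETENCE = 6
    TRANSFER = 7
    EFFECTS_OF_TRANSFER = 8

def critique(plan):
    """Toy check: how strong is the evidence an evaluation plan can produce?"""
    top = max(plan)
    if top <= LTEMTier.LEARNER_PERCEPTIONS:
        return "Tiers 1-3 only: no evidence that anyone actually learned."
    if top == LTEMTier.KNOWLEDGE:
        return "Knowledge checks alone do not demonstrate competence."
    return f"Reaches Tier {int(top)} ({top.name}): stronger evidence."

# A typical plan: attendance records plus a smile sheet.
print(critique([LTEMTier.ATTENDANCE, LTEMTier.LEARNER_PERCEPTIONS]))
# A stronger plan that includes scenario-based decision-making.
print(critique([LTEMTier.LEARNER_PERCEPTIONS, LTEMTier.DECISION_MAKING_COMPETENCE]))
```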

7. LEARNNOVATORS: It is inspiring to see your recent research review on learning transfer, which you shared freely with learning practitioners and the community. As we understand it, this research-backed report contains numerous recommended transfer factors and is a valuable tool for anyone in the training field to review and put into practice. We appreciate your efforts in bringing out this report, and your initiative to make it freely available to the learning community. How do you think it will challenge training professionals to cast a more critical eye on their current programs and future designs with respect to learning transfer?

WILL THALHEIMER: Thanks for recognizing this report! It took me two years to research and write—and I should acknowledge that the real force behind the report is the dozens and dozens of researchers whose work I compiled. Also, a special shout-out to Emma Weber and Lever Transfer of Learning (https://transferoflearning.com/), who funded my early research efforts. I should also say up front that one of the key findings of the report is that the current state of transfer science is not where it needs to be. In short, too much of the research is based on learners’ subjective reflections on their learning and transfer—a problem, because we know subjective responding is usually not fully accurate.

Still, there is enough good research on transfer to highlight some of the most critical factors for learning transfer. You ask how this research-to-practice report will benefit trainers, learning architects, and elearning developers (and their managers, of course). Great question! First, it’s helpful for all of us to remember that transfer is our ultimate goal in workplace learning. If our learners don’t transfer their learning in ways that are useful to them in their work, then we as learning architects have failed. The report highlights several findings we should keep in mind. Transfer is great, but we must design effective learning interventions. Without good learning, there is not likely to be transfer. But training is not usually enough to support transfer either. We need additional supports, including things like reminders, management support, learning design that enables remembering, triggered action planning, etc. By looking at the report, we in L&D can better plan our learning-to-performance strategies.

To be continued… (Part 2 coming soon; stay tuned.)

