KIRKPATRICK’S FOUR LEVELS OF EVALUATION


It was while writing his thesis in 1952 that Donald Kirkpatrick became interested in evaluating training programs. In a series of articles published in 1959, he prescribed a four-level model for evaluating training programs, but it was not until 1994 that he published “Evaluating Training Programs: The Four Levels”. According to Kirkpatrick, evaluating training programs is necessary for three reasons:

1. To decide whether to continue offering a particular training program
2. To improve future programs
3. To validate your existence and job as a training professional

The four-level model developed by Kirkpatrick is now widely used in gauging training effectiveness. As per the model, evaluation should always start with level one, followed by levels two, three, and four as time and budgets permit. Information from each level serves as the foundation for the next level’s evaluation, offering, in stages, an increasingly accurate reading of the effectiveness of the training program.

Level 1 Evaluation – Reactions:

Level one serves as the gauge: it evaluates how participants/trainees react to the training program or learning experience. It tests the waters by attempting to understand participants’ perceptions – Did they like the training program? Was the training material relevant? Was the method of delivery effective? The reaction evaluation tools and methods used at this stage are feedback forms, post-training surveys, and questionnaires, which are quick and easy to administer and inexpensive to analyse. Often called a “smile sheet”, this type of evaluation, according to Kirkpatrick, should be an inherent feature of every training program, for it offers pointers to the ways in which the program can be improved. Secondly, it builds the base for level two, as the participants’ reactions serve as an indicator of whether learning is possible. Even though a positive reaction does not in itself guarantee learning, a negative reaction to the training program reduces its chances significantly.
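To make the idea concrete, here is a minimal sketch of how level-one “smile sheet” data might be summarized. The questions, the 1–5 rating scale, and the sample responses are illustrative assumptions, not anything prescribed by Kirkpatrick’s model.

```python
# A minimal sketch of summarizing level-one "smile sheet" feedback.
# The question labels, the 1-5 rating scale, and the sample responses
# are illustrative assumptions, not part of Kirkpatrick's model.
from statistics import mean

responses = [
    {"liked_program": 4, "material_relevant": 5, "delivery_effective": 3},
    {"liked_program": 5, "material_relevant": 4, "delivery_effective": 4},
    {"liked_program": 3, "material_relevant": 4, "delivery_effective": 2},
]

for question in responses[0]:
    avg = mean(r[question] for r in responses)
    print(f"{question}: average rating {avg:.2f} / 5")
```

A summary like this only flags whether participants reacted well; as the article notes, a positive average does not guarantee that learning occurred.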

Level 2 Evaluation – Learning:

Level two measures the increase in knowledge before and after the training program. To do this, tests are conducted on participants before training (pre-test) and after training (post-test). At this stage, evaluation moves beyond participants’ reactions to the knowledge, skills, and attitudes the learners have newly acquired, if any. What is important to note is that this stage does not merely verify the skills/knowledge learnt but the extent to which participants have advanced with regard to new knowledge. This stage calls for more rigorous procedures, ranging from formal and informal testing to team assessment and self-assessment. The most common learning evaluation tools are assessments or tests conducted before and after the training. Interviews and observation are also not uncommon, as they are simple to set up and specific.
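As a simple illustration of the pre-test/post-test comparison described above, the sketch below computes each participant’s score gain and the group average; the names and scores are invented for the example.

```python
# A minimal sketch of a level-two pre-test / post-test comparison.
# The participant names and scores are made-up illustrative data.
pre_scores = {"Asha": 55, "Ben": 70, "Chitra": 40}
post_scores = {"Asha": 80, "Ben": 85, "Chitra": 65}

# Per-participant knowledge gain (post-test minus pre-test).
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
average_gain = sum(gains.values()) / len(gains)

for name, gain in gains.items():
    print(f"{name}: {pre_scores[name]} -> {post_scores[name]} (gain {gain})")
print(f"Average knowledge gain across participants: {average_gain:.1f} points")
```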

Level 3 Evaluation – Transfer:

The third level assesses the change that has occurred in participants’ behaviour as a result of the training program. At this stage, evaluation focuses on the core question – Are the newly acquired skills, knowledge, or attitudes being used by the learners in their everyday work? Did the trainees use the relevant skills and knowledge? Was there a significant and measurable change in the trainees’ performance once they were back on the job? Was the change in behaviour retained? Would the trainees be able to successfully transfer the knowledge to someone else? Many trainers view this level as the most accurate assessment of a training program’s success. However, this stage raises questions of when, how often, and how to evaluate, as it is nearly impossible to predict when learners will exhibit their newly acquired skills and behaviour. Hence, at level three, observation and interviews over a period of time are required to measure the change, its relevance, and its sustainability. Arbitrary, subjective assessments are unreliable because people change differently at different times. Evaluation at this level is challenging and is possible only through the support and involvement of both line managers and trainees.

Level 4 Evaluation – Results:

The fourth and final level assesses the training in terms of business results – for example, determining whether sales transactions improved after the sales staff were trained. In essence, it is the acid test. Frequently regarded as the “bottom line”, level four measures how successful a training program is in a context that managers and executives readily understand – better production levels, improved quality, lower costs, higher sales, staff turnover and attrition rates, failures, wastage, non-compliance, quality ratings, growth, retention, and increased profits or return on investment. From a business point of view, this is the overall reason for providing a training program in the first place. Because results expressed in financial terms are difficult to measure and hard to link directly to training, it is of utmost importance to establish accountability with the trainees at the very start of the training, so that they understand what is to be measured. Failure to do so greatly reduces the chances that results can be attributed to the training program. It should be noted that, taken individually, results evaluation is not difficult; it poses a challenge, however, when it has to be done across an entire organization.

Since Kirkpatrick introduced his original model, other theorists, such as Jack Phillips, have referred to a fifth level, namely Return On Investment (ROI).
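Phillips’ ROI level is commonly expressed as net program benefits divided by program costs, stated as a percentage. Below is a minimal sketch of that calculation; the benefit and cost figures are made-up illustrative numbers, and monetising benefits (e.g. from higher sales or lower wastage) is the hard part in practice.

```python
# A minimal sketch of the ROI calculation commonly associated with
# Phillips' fifth level: ROI (%) = (net benefits / program costs) * 100.
# The benefit and cost figures below are made-up illustrative numbers.
def roi_percent(program_benefits: float, program_costs: float) -> float:
    """Return ROI as a percentage of program costs."""
    net_benefits = program_benefits - program_costs
    return (net_benefits / program_costs) * 100

# Example: a program costing $50,000 credited with $80,000 in monetised benefits.
print(f"ROI: {roi_percent(80_000, 50_000):.0f}%")  # -> ROI: 60%
```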

Written by Banshori Bhattacharya
