Most learning and development (L&D) practitioners are concerned about how well they understand the impact of learning. Effective L&D evaluation needs to be strongly linked to identified performance gaps. The L&D strategy will outline the organisation’s evaluation approach and describe how the impact of any interventions will be measured.
This factsheet defines evaluation in an organisational L&D context. It explores typical evaluation methods, from post-training questionnaires to development metrics and quantitative survey methods. It also reinforces why learning must be aligned with business objectives.
The CIPD is at the heart of change happening across L&D, supporting practitioners in providing insights and resources. We are proud to be at the 'epicentre' of this changing world of L&D.
What is evaluating learning and development?
The quality and effectiveness of learning and development (L&D) activities can be assessed both formally and informally, showing alignment with organisational performance. Our new Profession Map encourages practitioners to view evaluation in terms of learner engagement, transfer and impact.
Links with learning and development strategy
A learning and development strategy driven by the organisation’s strategic goals and needs is widely recognised as important to business success. To effectively evaluate L&D, it’s first necessary to have clearly identified organisational performance targets and subsequent learning needs, and agree what measures of success will look like. Evaluation covers the impact of learning provision, how that is transferred as well as the engagement of employees undertaking L&D activities and the engagement of wider stakeholders in the process.
Coverage of learning and development evaluation
Our learning cultures research gives advice on evaluating the learning environment across the whole organisation, within particular teams and at an individual level.
Whilst the majority of organisations carry out some evaluation of learning activities, our 2019 Professionalising learning and development report showed that about a quarter of respondents struggle to understand L&D’s impact. Our 2020 Learning and skills at work report showed that nearly a third of respondents don’t systematically evaluate L&D initiatives.
Evaluation activities can include:
Impact – where L&D can work with the organisation to show how the learning interventions have impacted on performance – these can include links to key performance indicators (financial and operational).
Transfer – where L&D can work with the organisation to show how any learning undertaken on L&D events has been transferred back into the employee’s role and work area – these can include performance goals and how new skills and knowledge have been used.
Engagement – where L&D can demonstrate how stakeholders are engaged with learning, this can be at an organisational level where a positive learning environment is the goal, at team levels or at an individual level (the ‘happy sheet’ is an individual reaction to an individual event).
As L&D practice moves from solely offering formal face-to-face training to embrace ‘social learning’, where sharing, impact and kudos are less tangible measures, measurement can become more challenging. See more on evolving L&D practice.
Common learning and development evaluation methods
The Kirkpatrick model
The seminal model for L&D evaluation, first published in the 1950s by US academic Don Kirkpatrick, remains influential today. However, research conducted by Thalheimer indicates this model was first introduced by Raymond Katzell.
It outlines four levels for evaluating learning or training:
- Reactions – reaction to a learning intervention that could include ‘liking or feelings for a programme’.
- Learning - ‘principles, facts etc absorbed’.
- Behaviour - ‘using learning gained on the job’.
- Results - ‘increased production, reduced costs, etc’.
This was helpful guidance when launched. However, in the 1980s Alliger and Janak found that the relationships between the levels were weak because each level is not always linked positively to the next.
Various surveys from the Association for Talent Development have found that most attention is focused on evaluation of learning at the reactions level because of the difficulties and time costs of measuring the other three levels. Thalheimer suggests eight recognised levels of learning evaluation, including some listed above, but he argues that some of these are highly ineffective.
Brinkerhoff success case method
A key criticism of Kirkpatrick’s evaluation model is that changes to performance cannot solely be linked to learning. The Brinkerhoff success case method (SCM) addresses this challenge by proposing a wider focus on systems.
Firstly, an SCM evaluation involves finding likely ‘success cases’ where individuals or teams have benefited from the learning. These typically come from a survey, performance reports, organisational data or the ‘information grapevine’. Those representing potential ‘success cases’ are interviewed and ‘screened’ to find out if they genuinely represent verifiable success with corroborating evidence from other parties. Factors that contribute to success beyond the learning intervention are also explored.
Secondly, an SCM evaluation looks at ‘non-success cases’ to discover those who have found little or no value from the learning. Exploring the reasons why can be very illuminating.
The approach asks four questions:
- How well is an organisation using learning to improve performance?
- What organisational processes/resources are in place to support performance improvement?
- What needs to be improved?
- What organisational barriers stand in the way of performance improvement?
Following analysis, the success and non-success ‘stories’ are shared.
SCM should not be seen as a comprehensive evaluation method because of the nature of its sampling, but it offers a manageable, cost-effective way to surface success insights and areas for improvement.
Phillips’ return on investment model
Phillips and Phillips built on the Kirkpatrick model by adding return on investment (ROI) as a fifth level. However, much ROI evaluation is carried out after a project ends and does not build from a baseline. Another problem is the arithmetic of ROI: when a low-cost learning intervention is set against a large project benefit, the resulting percentage can look superficially impressive.
Some commentators ask whether a financial model represents the best way to address the effectiveness of learning. Does stating an ROI of x% help an organisation address its performance gaps and allow the L&D team to communicate their impact?
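The distortion described above can be shown with a simple worked example. All figures below are invented for illustration, using the standard ROI formula of net benefit over cost:

```python
# Hypothetical illustration of the ROI arithmetic problem:
# all figures are invented, not drawn from any real evaluation.

def roi_percent(attributed_benefit: float, cost: float) -> float:
    """Standard ROI formula: (benefit - cost) / cost * 100."""
    return (attributed_benefit - cost) / cost * 100

training_cost = 5_000          # modest spend on a learning intervention
attributed_benefit = 120_000   # project benefit attributed (rightly or wrongly) to the training

# A small denominator makes the percentage look dramatic,
# even though the learning may be only one contributor to the benefit.
print(roi_percent(attributed_benefit, training_cost))  # prints 2300.0
```

A 2,300% ROI sounds remarkable, but it says nothing about baseline performance or the other factors that contributed to the benefit, which is exactly the criticism made above.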
In the mid-1990s Easterby-Smith drew together four main purposes of learning evaluation:
- Proving – that the training worked or had measurable impact in itself
- Controlling – for example, the time needed for training courses, access to costly off-the-job programmes, consistency or compliance requirements
- Improving – for example, the training, trainers, course content and arrangements etc
- Reinforcing – using evaluation efforts as a deliberate contribution to the learning process itself.
This model focuses on single learning programmes and was created at a time when learning and development activities were largely training events.
The value of using models to approach evaluation
Each model of evaluation outlined above offers a specific approach and most were developed to assess the value of individual training programmes.
Many organisations have an established approach for getting learners’ reaction to interventions. Commonly called a ‘happy sheet’, it looks at learner satisfaction with, for example, the facilitator, materials and venue. This is not the approach Katzell and Kirkpatrick advise in their first level of evaluation: they state the first level is the learner’s reaction to the learning itself.
Value of learning research
Our 2020 Learning and skills at work survey shows that where L&D teams have a clear vision and strategy aligned to the organisation’s performance needs, evaluation tends to be more prevalent and the data is widely used within the organisation. Wider forms of evaluation beyond reaction are also common in organisations where L&D professionals have engaged with senior leaders and there’s collective agreement on the value of learning.
L&D teams are seen as a credible business partner to the organisation when they take time to use a range of evaluation approaches that are in line with performance data.
The ‘RAM’ approach
Drawing on our research findings, we developed an approach to learning known as RAM (Relevance, Alignment, Measurement) that still has value today. It’s based on the need for:
Relevance: How existing or planned learning provision will meet new opportunities and challenges for the business.
Alignment: If the L&D strategy takes an integrated, blended approach, it’s critical for L&D practitioners to work with stakeholders to understand their performance needs and how to achieve them. Aligning with broader organisational strategy gives focus, purpose and relevance to L&D.
Measurement: L&D teams effectively and consistently measure the impact, engagement and transfer of learning activities as part of the evaluation process. It may be helpful to use a mixture of evaluation methods and broader measures of expected change and improvement such as return on expectation, and to link L&D outcomes to key performance indicators.
The RAM approach focuses on the outcome, rather than the response to a learning event (the focus of the majority of ‘happy sheets’). Our costing and benchmarking L&D factsheet has further detail on measurement.
Valuing your Talent
Valuing Your Talent research is helping employers to measure the impact of workforce skills and capabilities on their organisation’s performance. It’s being run by us in collaboration with the UK Commission for Employment and Skills (UKCES), Investors in People, the Chartered Management Institute (CMI), the Chartered Institute of Management Accountants (CIMA) and the Royal Society of Arts (RSA).
The programme is developing a common way of understanding the impact people have on the performance of their organisation. This is useful in evaluating as it provides insight to effective measurement tools.
The 70:20:10 Institute ‘Performance Approach’
The 70:20:10 Institute suggests that L&D practitioners take on ‘performance roles’. It explores the roles of ‘performance detective’ and ‘performance tracker’: the detective’s remit is to find out where data exists in an organisation, while the tracker provides insights from that data so L&D provision meets stakeholder needs.
Implications for learning evaluation
The focus on learning outcomes
An immediately obvious implication of L&D evaluation research is the need to focus on learning outcomes, broadly defined as some permanent or long-lasting change in knowledge, skills and attitudes (an output or outcome), rather than on the training itself (an input).
The ’talent analytics’ perspective
‘Talent analytics’ is about ‘mining’ a whole range of data streams to gain insight into how people learn and develop. Looking at the way we develop talent and provide future capability is a challenging area. It provides opportunities for real time evaluation close to the operational pulse of the organisation and is therefore more likely to be useful as a decision tool. Essentially, it’s an evidence-based approach to demonstrate value. Read more in our Talent analytics and big data research report.
Advice for L&D practitioners
Measuring the impact, transfer and engagement of L&D activities can’t be done just by an end of course questionnaire or post-training survey. L&D practitioners must work closely with stakeholders to agree success criteria for the whole L&D offering as well as individual programmes. L&D practitioners also need to work with the organisation to prioritise the available resources.
L&D practitioners need to question the value of traditional happy sheets along with the standard default questions they contain. Is it the learner’s responsibility to ‘rate’ the facilitator or materials? To what degree does a value on a Likert scale apply to learner reaction to the learning, the engagement of a learner or, arguably the most important element, the impact on the learner’s performance? Brinkerhoff uses an analogy: measuring satisfaction with a learning event is akin to predicting the satisfaction and longevity of a marriage from the quality of the wedding reception.
Books and reports
BEEVERS, K., REA, A. and HAYDEN, D. (2019) Learning and development practice in the workplace. 4th ed. London: CIPD and Kogan Page.
LANCASTER, A. (2019) Driving performance through learning. London: Kogan Page.
PAGE-TICKELL, R. (2018) Learning and development: a practical introduction. 2nd ed. HR Fundamentals. London: CIPD and Kogan Page.
PHILLIPS, J.J. and PHILLIPS, P. (2016) Handbook of training evaluation and measurement methods. 4th ed. New York: Routledge.
Visit the CIPD and Kogan Page Bookshop to see all our priced publications currently in print.
DERVEN, M. (2012) Building a strategic approach to learning evaluation. T+D. Vol 66, No 11, November. pp54-57.
DIAMANTIDIS, A.D. and CHATZOGLOU, P.D. (2014) Employee post-training behaviour and performance: evaluating the results of the training process. International Journal of Training and Development. Vol 18, No 3, September. pp149-170.
MATTOX, J.R. (2012) Measuring the effectiveness of informal learning methodologies. T+D. Vol 66, No 2, February. pp48-53.
PHILLIPS, J.J. and PHILLIPS, P. (2011) Moving from evidence to proof. T+D. Vol 65, No 8, August. pp34-39.
CIPD members can use our online journals to find articles from over 300 journal titles relevant to HR.
Members and People Management subscribers can see articles on the People Management website.
This factsheet was last updated by David Hayden.
David Hayden: Digital Learning Portfolio Manager, L&D
David is part of the CIPD’s Learning Development team responsible for the digital learning portfolio - he leads the design and delivery of a number of L&D-focused products and keeps his practice up to date by facilitating online events for a range of clients. David began his L&D career after taking responsibility for three Youth Trainees in 1988 as an Operations Manager, and has since gone on to work in, and headed up, a number of corporate L&D teams and HR functions in distribution, retail, financial and public sector organisations. He completed his first Masters degree specialising in CPD and has just completed his second in Online and Distance Education. David also has a background in 'lean' and has worked as a Lean Engineer in a number of manufacturing and food organisations. Passionate about learning and exploiting all aspects of CPD, David’s style is participative and inclusive. As well as authoring the CIPD L&D factsheet series, he co-authored the 4th edition of 'Learning and Development Practice in the Workplace' with Kathy Beevers and Andrew Rea.
Explore our related content
Understand how to create and implement a learning and development strategy and policy to support organisational success
Discover why talent analytics and big data are now must-have capabilities in HR
Our survey report and case studies, produced in partnership with Accenture, examine current practices and trends in learning and development (L&D)