Introduction

Many learning and development practitioners are concerned about how well they understand the impact of learning. Effective learning and development evaluation must be strongly linked to identified learning needs. The L&D strategy should set out the organisation’s evaluation approach and describe how the impact of any individual programme, or series of programmes, will be measured.

This factsheet defines evaluation in an organisational L&D context. It explores typical evaluation methods, from post-training questionnaires to development metrics and quantitative survey methods. It also underlines why learning needs to be aligned with business objectives.

The CIPD is at the heart of the change happening across L&D, supporting practitioners with insights and resources.

Our new Profession Map encourages practitioners to view evaluation in terms of learner engagement, transfer and impact. The quality and effectiveness of learning and development (L&D) activities can be assessed both formally and informally, and must be aligned to organisational performance.

Links with learning and development strategy

Implementing an effective learning and development strategy, driven by the organisation’s strategic goals and needs, is widely recognised as important to business success. It’s essential to regularly review and assess the learning and development activities that support such a strategy. To effectively evaluate L&D, it’s first necessary to have clearly identified the learning needs that flow through to the objectives of L&D programmes, and to agree what success will look like and how it will be measured.

It’s also helpful to carry out benchmarking exercises to gather evidence to support L&D activity. This may involve carefully considering the methods used, costs and ‘return on investment’ or ‘return on expectation’. See our factsheet on costing and benchmarking L&D.

Coverage of learning and development evaluation

L&D evaluation can be carried out across a whole organisation, for a particular part of it, or for a specific group within it – for example, employees identified as ‘talent’.

Whilst the majority of organisations carry out some evaluation of learning activities, our 2019 Professionalising learning and development report showed that about a quarter do not understand the impact of learning and development.

Evaluation activities can include:

  • ‘Happy sheets’ – post-training questionnaires asking participants to rate (usually on a Likert scale) how satisfied they feel.
  • Testimonies of individuals and direct observation.
  • Return on expected outcomes – for example, whether line managers testify during performance reviews that individuals are able to demonstrate those new or enhanced competencies that the learning intervention was anticipated to deliver.
  • The impact on business key performance indicators.
  • Return on investment – the financial or economic benefit that is attributable to the learning intervention in relation to the cost of the investment in learning programmes.
  • Development metrics – such as psychometrics or 360 feedback.
  • Quantitative survey methods to assess behaviour change.

While many organisations do some of these, many don’t act on the data collected, as highlighted in the report Making an impact: how L&D leaders can demonstrate value by Towards Maturity, our strategic L&D research partner. This report also showed that while almost all organisations want to improve the way they gather and analyse data on learning impact, fewer than one third are achieving it.

As L&D moves away from offering formal face-to-face training towards ‘social learning’, where sharing, impact and kudos are less tangible, measurement can become more difficult for some practitioners. However, just because something is difficult doesn’t mean it should be ignored.

The Kirkpatrick model

The seminal model for learning and development evaluation, developed and first published in the 1950s by US academic Don Kirkpatrick, remains influential today. However, research conducted by Thalheimer suggests the model was first introduced by Raymond Katzell.

It outlines four levels for evaluating learning or training:

  • reactions – reaction to a learning intervention, which could include ‘liking or feelings for a programme’
  • learning – ‘principles, facts etc absorbed’
  • behaviour – ‘using learning gained on the job’
  • results – ‘increased production, reduced costs, etc’.

This was helpful guidance when launched. However, in the 1980s Alliger and Janak found that the relationships between the levels were weak because each level is not always linked positively to the next.

Various surveys from the Association for Talent Development have found that most attention is focused on evaluation of learning at the reactions level because of the difficulties and time costs of measuring the other three levels. Thalheimer suggests eight recognised levels of learning evaluation, including some listed above, but he argues that some are highly ineffective.

Brinkerhoff success case method

A key criticism of Kirkpatrick’s evaluation model is that changes to performance cannot solely be linked to learning. The Brinkerhoff success case method (SCM) addresses this challenge by proposing a wider focus on systems.

Firstly, an SCM evaluation involves finding likely ‘success cases’ where individuals or teams have benefited from the learning. These typically come from a survey, performance reports, organisational data or the ‘information grapevine’. Those representing potential ‘success cases’ are interviewed and ‘screened’ to find out if they genuinely represent verifiable success with corroborating evidence from other parties. Factors that contribute to success beyond the learning intervention are also explored.

Secondly, an SCM evaluation looks at ‘non-success cases’ to discover those who have found little or no value from the learning. Exploring the reasons why can be very illuminating.

The approach asks four questions:

  • How well is an organisation using learning to improve performance?
  • What organisational processes/resources are in place to support performance improvement?
  • What needs to be improved?
  • What organisational barriers stand in the way of performance improvement?

Following analysis, the success and non-success ‘stories’ are shared.

SCM should not be seen as a comprehensive evaluation method because of the nature of its sampling, but it offers a manageable, cost-effective way to surface success insights and areas for improvement.

Phillips’ return on investment model

Phillips and Phillips built on the Kirkpatrick model by adding return on investment (ROI) as a fifth level. However, much ROI evaluation is carried out post-project and does not build from a baseline. Another problem is that the arithmetic of ROI means that when a low-cost learning intervention is set against the benefits of a large project, the result can look superficially impressive.
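As a simple, hypothetical illustration of this point (the figures below are invented for the example), the commonly cited ROI calculation is:

  ROI (%) = (programme benefits - programme costs) ÷ programme costs × 100

On this basis, a £5,000 learning intervention linked to a project credited with £100,000 of benefit shows an ROI of 1,900%. The figure looks impressive, but on its own it says nothing about how much of that benefit is genuinely attributable to the learning, or about the baseline it is measured against.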

However, some commentators ask whether a financial model represents the best way to address the effectiveness of learning. Does stating an ROI of x% help an organisation address its learning needs and allow the L&D team to communicate its impact?

Easterby-Smith model

By the mid-1990s, Easterby-Smith had drawn together four main purposes of learning evaluation:

  • Proving – that the training worked or had measurable impact in itself
  • Controlling – for example, the time needed for training courses, access to costly off-the-job programmes, consistency or compliance requirements
  • Improving – for example, the training, trainers, course content and arrangements etc
  • Reinforcing – using evaluation efforts as a deliberate contribution to the learning process itself.

This model focuses on single learning programmes and was created at a time when learning and development activities were largely training events.

The value of using models to approach evaluation

Each model of evaluation outlined above offers a specific approach and was developed to assess the value of individual training programmes.

Many organisations have an established approach for gathering learners’ reactions to interventions. Commonly called a ‘happy sheet’, it gauges learner satisfaction with, for example, the facilitator, materials and venue. This is not the approach Katzell and Kirkpatrick advise in their first level of evaluation: they state the first level is the learner’s reaction to the learning itself.

The Towards Maturity report, Making an impact, notes four key areas in which leading L&D organisations are approaching learning evaluation:

  • Improving the way in which they gather and analyse impact data.
  • Agreeing specific business measures as KPIs up front with business leaders.
  • Actively using learning analytics.
  • Using benchmarking as a performance improvement tool.

There's also the need to address important questions such as:

  • How is the L&D team delivering operational effectiveness?
  • How effectively is the functional capability of the workforce being developed?
  • How well are learning activities supporting the organisation’s critical success factors?
  • How does the learning strategy align to the brand and outputs of the organisation?

The ‘RAM’ approach

Drawing on our research findings, we developed an approach to learning known as RAM (Relevance, Alignment, Measurement). It's based on the need for:

  • Relevance: considering how existing or planned learning provision will meet new opportunities and challenges for the business.

  • Alignment: if the plan is to move to an integrated, blended approach, it’s critical for HR and L&D to talk to key managers and other stakeholders about what they’re seeking to deliver and how to achieve it. It’s also important to ensure that L&D activities are aligned to other key strategies such as reward, organisational development, engagement and other aspects of people management. Aligning with broader organisational strategy gives focus, purpose and relevance to L&D.

  • Measurement: ensuring HR and L&D teams effectively and consistently measure and evaluate their activities. It may be helpful to use a mixture of evaluation methods, such as return on investment and broader measures of expected change and improvement such as return on expectation, and to link L&D outcomes to key performance indicators.

This approach focuses on the outcome, rather than the process itself.

Valuing your Talent

Valuing Your Talent research is helping employers to measure the impact of workforce skills and capabilities on their organisation’s performance. It’s being run by us in collaboration with the UK Commission for Employment and Skills (UKCES), Investors in People, the Chartered Management Institute (CMI), the Chartered Institute of Management Accountants (CIMA) and the Royal Society of Arts (RSA).

The programme is developing a common way of understanding the impact people have on the performance of their organisation. This is useful for evaluation as it provides insight into effective measurement tools.

The 70:20:10 Institute ‘Performance Approach’

The 70:20:10 Institute suggests that L&D practitioners take on ‘performance roles’. It explores roles such as the ‘performance detective’, whose remit is to find out where data exists in an organisation, and the ‘performance tracker’, who provides insights from that data so that L&D provision meets stakeholder needs.

The focus on learning outcomes

An immediately obvious implication of L&D evaluation research is the need to focus on learning outcomes – broadly defined as a permanent or long-lasting change in knowledge, skills and attitudes, which is an output or outcome – rather than on the training itself, which is an input.

The ’talent analytics’ perspective

‘Talent analytics’ is about ‘mining’ a whole range of data streams to gain insight into how people learn and develop. Looking at the way we develop talent and provide future capability is a challenging area. Talent analytics provides opportunities for real-time evaluation close to the operational pulse of the organisation and is therefore more likely to be useful as a decision tool. Essentially, it’s an evidence-based approach to demonstrating value. Read more in our Talent analytics and big data research report.

Advice for L&D practitioners

Measuring the impact, transfer and engagement of L&D activities can’t be done just by a questionnaire. L&D practitioners must work closely with stakeholders to agree success criteria for the whole L&D offering as well as individual programmes. L&D practitioners also need to work with the organisation to prioritise the available resources.

L&D practitioners need to question the value of traditional happy sheets. Is it the learner’s responsibility to ‘rate’ the facilitator or materials? To what degree does a value on a Likert scale capture the learner’s reaction to the learning, the learner’s engagement or, arguably the most important element, the impact on the learner’s performance? Brinkerhoff uses an analogy that measuring satisfaction with a learning event is akin to predicting the satisfaction and longevity of a marriage based on the quality of the wedding reception.

Books and reports

BEEVERS, K., REA, A. and HAYDEN, D. (2019) Learning and development practice in the workplace. 4th ed. London: CIPD and Kogan Page.

LANCASTER, A. (2019) Driving performance through learning. London: Kogan Page.

PAGE-TICKELL, R. (2018) Learning and development: a practical introduction. 2nd ed. HR Fundamentals. London: CIPD and Kogan Page.

PHILLIPS, J.J. and PHILLIPS, P. (2016) Handbook of training evaluation and measurement methods. 4th ed. New York: Routledge.

Visit the CIPD and Kogan Page Bookshop to see all our priced publications currently in print.

Journal articles

DERVEN, M. (2012) Building a strategic approach to learning evaluation. T+D. Vol 66, No 11, November. pp54-57.

DIAMANTIDIS, A.D. and CHATZOGLOU, P.D. (2014) Employee post-training behaviour and performance: evaluating the results of the training process. International Journal of Training and Development. Vol 18, No 3, September. pp149-170.

MATTOX, J.R. (2012) Measuring the effectiveness of informal learning methodologies. T+D. Vol 66, No 2, February. pp48-53.

PHILLIPS, J.J. and PHILLIPS, P. (2011) Moving from evidence to proof. T+D. Vol 65, No 8, August. pp34-39.

CIPD members can use our online journals to find articles from over 300 journal titles relevant to HR.

Members and People Management subscribers can see articles on the People Management website.

This factsheet was last updated by David Hayden.

David Hayden

L&D Consultant/Trainer

David is part of the CIPD’s L&D Content Team. He leads on the design and delivery of a number of L&D-focused products as well as keeping his practice up to date by facilitating events for a range of clients. David began his L&D career after taking responsibility for three Youth Trainees back in 1988 as an Operations Manager, and has since gone on to work in, and head up, a number of corporate L&D teams and HR functions in distribution, retail, financial and public sector organisations. He completed his Master’s degree specialising in CPD and was Chair of our South Yorkshire Branch for two years from 2012 before joining as an employee in 2014. David also has a background in ‘lean’ and has worked as a Lean Engineer in a number of manufacturing and food organisations. Passionate about learning and exploiting all aspects of CPD, David’s style is participative and inclusive.

