By evaluating learning and development programmes, employers are better able to ensure that these initiatives are aligned with their business objectives and overall learning and development strategy. Effective evaluation requires a working knowledge of learning needs, of the broader L&D strategy, and of how L&D programmes support that strategy.

This factsheet examines the relationship between L&D programmes and the wider L&D strategy, as well as the evaluation methods typically used by employers, from post-training questionnaires and testimonies to development metrics and quantitative survey methods. The factsheet looks at traditional models of training and learning evaluation and highlights recent developments in approach, some based on CIPD research, which focus on learning outcomes, and the extent to which learning contributes strategic value and is aligned with business objectives.

CIPD viewpoint

Evaluating learning and development is crucial to ensuring the effectiveness of an organisation’s learning initiatives and programmes. Effective evaluation means going beyond the traditional ‘reactions’ focus on a simplistic assessment of learners’ levels of satisfaction with the learning or training provision. Rather, it’s important to use simplified yet sophisticated approaches such as our ‘RAM’ model (Relevance, Alignment, Measurement) to evaluate learning outcomes and the extent to which learning provision is aligned with business objectives. Such a focus helps to ensure that L&D interventions deliver value for learners and organisations alike.

Practitioners need to recognise that while levels-based evaluation, typified by Kirkpatrick and ‘return on investment’ approaches, dominates our thinking, these models are often poorly used. A focus on outputs, change and improvement is much more productive. The promise of ‘big data’ and its HR and L&D counterpart, talent analytics, presents new opportunities for effective evaluation.

For evaluation to be effective, unbiased and evidence-based, the evaluation methodology must be defined before learning takes place. Evaluation, ultimately, is all about an organisation’s context.

The CIPD is at the heart of the change happening across L&D, supporting practitioners with insights and resources.

The evaluation of learning and development (L&D) is the formal or informal assessment of the quality and effectiveness of an organisation’s learning and development provision, usually by some measure of the merit of the provision itself (the input, for example the quality of course content and presentation) and/or by monitoring its impact (the outcomes, for example improved skills/qualifications or enhanced productivity/profitability).

Links with learning and development strategy

Implementing an effective learning and development strategy is widely recognised as important to business success, so it’s essential to regularly review and assess the use of learning and training programmes in support of such a strategy.

To effectively evaluate L&D, it’s first necessary to have clearly identified the learning needs that flow through to the objectives of L&D programmes, and to agree what success will look like and how it will be measured.

It’s also helpful to conduct benchmarking exercises to gather evidence to support L&D activity. This may involve careful consideration of the methods used, costs and ‘return on investment’ or ‘return on expectation’. See our factsheet on costing and benchmarking L&D.

Coverage of learning and development evaluation

The process of evaluating L&D can be undertaken across an organisation as a whole or for a particular part of the organisation or some group within it – for example, for those employees identified as ‘talent’ – exceptionally high-performing or high-potential individuals.

Most employers carry out some evaluation of learning interventions in their organisations. Methods can include:

  • ‘Happy sheets’ – post-training questionnaires asking course participants to rate how satisfied they feel with the standard of provision.
  • Testimonies of individuals and direct observation.
  • Return on expected outcomes – for example, whether line managers testify during performance reviews that individuals are able to demonstrate those new or enhanced competencies that the learning intervention was anticipated to deliver.
  • The impact on business key performance indicators.
  • Return on investment – the financial or economic benefit that is attributable to the learning intervention in relation to the cost of the investment in learning programmes.
  • Development metrics – such as psychometrics or 360 feedback.
  • Quantitative survey methods to assess behaviour change.
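
As an illustration of the last point, a pre/post comparison on a single survey item can be summarised with simple descriptive statistics. The sketch below uses hypothetical data and a hypothetical 1–5 rating scale, purely for illustration:

```python
# Hypothetical pre/post ratings (1-5 scale) for one behaviour item,
# collected before training and again some weeks afterwards.
pre = [2, 3, 2, 4, 3, 2, 3]
post = [3, 4, 3, 4, 4, 3, 4]

# Per-respondent change, then two simple summary measures.
changes = [b - a for a, b in zip(pre, post)]
mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0) / len(changes)

print(f"mean change: {mean_change:.2f}")   # average shift on the scale
print(f"share improved: {improved:.0%}")   # proportion who moved up
```

Even this minimal summary moves beyond a satisfaction score towards evidence of behaviour change, though in practice a matched comparison group strengthens any causal claim.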

In practice it can be very difficult to measure the impact of learning, particularly its effect on business success. While many organisations use some of these methods, many do not act on the data collected, as highlighted in the report Making an impact: how L&D leaders can demonstrate value by Towards Maturity, CIPD’s strategic L&D research partner. The same report showed that while almost all organisations are looking to improve the way they gather and analyse data on learning impact, fewer than one third are achieving it.

These challenges become more pronounced as L&D moves beyond offering training days into a social learning space, where sharing, impact and kudos are less tangible measures. However, just because something is difficult to measure does not mean it should be ignored.

The Kirkpatrick model

The seminal model for learning and development evaluation, developed and first published in the 1950s by US academic Don Kirkpatrick, remains influential today (although research by Thalheimer suggests the model was first introduced by Raymond Katzell). It outlines four levels for learning or training evaluation:

  • reactions – reaction to a learning intervention, which could include ‘liking or feelings for a programme’
  • learning – ‘principles, facts etc absorbed’
  • behaviour – ‘using learning on the job’
  • results – ‘increased production, reduced costs, etc’.

This was helpful guidance, but 30 years later Alliger and Janak found that the relationships between the levels were weak: each level is not necessarily, or always, positively linked to the next.

Various surveys from the Association for Talent Development have found that most attention is focused on evaluation of learning at the reactions level because of the difficulty and time cost of measuring the other three levels. Thalheimer goes further, suggesting that there are eight recognised levels of learning evaluation, including some of those listed above, and that some are highly ineffective.
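
In practice, the four levels are sometimes operationalised as a simple evaluation plan that pairs each level with a measure and a collection point. The sketch below is a hypothetical illustration only; the measures and timings are invented, not drawn from Kirkpatrick or the CIPD:

```python
# Illustrative evaluation plan keyed by Kirkpatrick level.
# All measures and timings are hypothetical examples.
evaluation_plan = {
    "reactions": {"measure": "post-course satisfaction score", "when": "end of course"},
    "learning":  {"measure": "pre/post knowledge test",        "when": "end of course"},
    "behaviour": {"measure": "manager observation checklist",  "when": "3 months after"},
    "results":   {"measure": "team productivity KPI",          "when": "6 months after"},
}

for level, plan in evaluation_plan.items():
    print(f"{level}: {plan['measure']} ({plan['when']})")
```

Writing the plan down before the intervention runs reflects the point made earlier: the evaluation methodology should be defined before learning takes place.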

Phillips' return on investment model

By the mid-1980s calls began to emerge for return on investment (ROI) analyses of training efforts. While some studies found very satisfying figures, the key criticisms of the ROI approach remain. For example, ROI provides a snapshot at only a single point in time, whereas practitioners might want to know more about the return on learning over time. Moreover, like virtually all other approaches to training evaluation, ROI focuses primarily on the training intervention rather than any planned, concurrent activities or coincidental factors that boost ongoing learning output and outcomes.

More rigorous approaches to ROI were provided by Phillips and Phillips, building on the Kirkpatrick model by adding ROI as a fifth level. However, much ROI analysis is conducted post-project and does not build from a baseline. Another problem is the arithmetic of ROI: when a low-cost learning intervention is credited with a large share of a project’s benefit, the resulting figure can look superficially impressive.

However, some commentators question whether a financial model is the best way to address the effectiveness of learning. Does stating an ROI of x% help an organisation address its learning needs?
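
The underlying arithmetic is straightforward: Phillips-style ROI expresses net programme benefits as a percentage of programme costs. The sketch below, using invented figures, shows how crediting a low-cost intervention with a large benefit produces a superficially impressive percentage:

```python
def roi_percent(benefits, costs):
    # Phillips-style ROI (%) = (net programme benefits / programme costs) x 100
    return (benefits - costs) / costs * 100

# A 2,000 course credited with 50,000 of a project's benefit: looks spectacular.
print(roi_percent(50_000, 2_000))   # 2400.0

# The same benefit assessed against the 40,000 total project cost.
print(roi_percent(50_000, 40_000))  # 25.0
```

The percentage alone says nothing about baselines, timescales or concurrent factors, which is why a single-point ROI figure can mislead.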

Brinkerhoff success case method

A key criticism of Kirkpatrick’s evaluation model is that changes in performance cannot be attributed solely to learning. The Brinkerhoff success case method (SCM) addresses this challenge by proposing a wider focus on systems, posing key questions that need to be considered in the evaluation process:

  • How well is an organisation using learning to improve performance?
  • What organisational processes and resources are in place to support performance improvement? What needs to be improved?
  • What organisational barriers stand in the way of performance improvement?

Firstly, an SCM evaluation involves finding likely ‘success cases’: individuals or teams who appear to have benefited from the learning. These are typically identified through a survey, performance reports, organisational data or the ‘information grapevine’. Those representing potential ‘success cases’ are interviewed and screened to establish whether they genuinely represent verifiable success, with corroborating evidence from other parties. Factors beyond the learning intervention that contribute to success are also explored.

Secondly, an SCM evaluation looks at ‘non-success cases’ to discover those who have found little or no value in the learning. Exploring the reasons why can be very illuminating.

Following analysis, the ‘stories’ of successes and non-successes are shared.

Owing to the nature of the sampling process, SCM should not be seen as a comprehensive evaluation method, but it provides a manageable, cost-effective way to surface success insights and areas for improvement.

Other approaches 

CIRO model

The CIRO model developed by Warr, Bird and Rackham (see Further reading) focuses the evaluation process on context, input, reaction and outcome.

  • Context – collecting information about performance deficiencies and from this setting training objectives.
  • Input – analysing the effectiveness of the training design, planning, management, delivery and resourcing to achieve the desired objectives.
  • Reaction – analysing the reactions of learners to enable improvements to be made.
  • Outcome – evaluating what actually happened as a result of training measured at the learner, workplace, team or department and wider business level.

Easterby-Smith model

By the mid-1990s Easterby-Smith was able to draw together four main strands for the purposes of learning evaluation:

  • Proving – that the training worked or had measurable impact in itself
  • Controlling – for example, the time needed for training courses, access to costly off-the-job programmes, consistency or compliance requirements
  • Improving – for example, the training, trainers, course content and arrangements etc
  • Reinforcing – using evaluation efforts as a deliberate contribution to the learning process itself.

A Towards Maturity report, Making an impact, notes four key ways in which leading L&D organisations approach learning evaluation:

  • improving the way in which they gather and analyse impact data
  • agreeing specific business measures as KPIs up front with business leaders
  • actively using learning analytics
  • using benchmarking as a performance improvement tool.

There's also the need to address important questions such as:

  • Is the L&D function delivering operational effectiveness?
  • How effectively is the functional capability of the workforce being developed?
  • How well are learning interventions supporting critical success factors?
  • How do learning operations compare with those of other relevant organisations?

The ‘RAM’ approach

Drawing on our research findings, we developed an approach to learning evaluation known as RAM (Relevance, Alignment, Measurement), which focuses on the need for:

  • Relevance: how existing or planned training provision will meet new opportunities and challenges for the business.

  • Alignment: if the plan is to deliver a changed L&D offer, it's critical for HR and L&D to talk to key managers and other stakeholders about what they're seeking to deliver and how the function can help them achieve it. It's also important to ensure that L&D is aligned to other key strategies such as reward, organisational development, engagement and other aspects of the management of human resources. Alignment with organisational strategy and its marketing and finance strategies and other dimensions of corporate strategy gives focus, purpose and relevance to L&D.

  • Measurement: it's also critical that the HR and L&D function effectively and consistently measures and evaluates its interventions. It may be helpful to use a mixture of evaluation methods such as return on investment and broader measures of expected change and improvement such as return on expectation, and to link L&D outcomes to key performance indicators.

This approach is useful for learning and development programmes because it maintains a focus on outcomes rather than on the process itself.

Valuing your Talent

A major research and engagement programme, Valuing Your Talent, is helping employers understand how to measure the impact of workforce skills and capabilities on their organisation’s performance. It’s being run by us in collaboration with the UK Commission for Employment and Skills (UKCES), Investors in People, the Chartered Management Institute (CMI), the Chartered Institute of Management Accountants (CIMA) and the Royal Society of Arts (RSA).

The programme is developing a common way of understanding the impact people have on the performance of their organisation. This insight is useful in evaluating L&D as it's important to establish a shared language across the organisation, and have an awareness of the most effective measurement tools.

The focus on learning outcomes

An immediately obvious implication of L&D evaluation research is the need to focus on learning outcomes (broadly defined as a permanent or long-lasting change in knowledge, skills and attitudes), which are outputs, rather than on the training itself, which is an input.

The ’talent analytics’ perspective

Learning is about developing individual and organisational talent and capability, and a new perspective on this is provided by talent analytics. Simply put, this is about ‘mining’ a whole range of data streams to gain insight into how people learn and develop. Looking at issues such as how we use logic, risk and segmentation to optimise the way we develop talent and provide future capability is an exciting and challenging area of HR. It also provides the opportunity for real-time evaluation close to the operational pulse of the organisation, and is therefore more likely to be useful as a decision tool. Essentially, it’s an evidence-based approach to demonstrate value. Read more in our Talent analytics and big data research report.

Books and reports

PHILLIPS, J.J. and PHILLIPS, P. (2016) Handbook of training evaluation and measurement methods. 4th ed. New York: Routledge.

STEWART, J. and CURETON, P. (2014) Designing, delivering and evaluating L&D: essentials for practice. London: Chartered Institute of Personnel and Development.

PAGE-TICKELL, R. (2018) Learning and development: a practical introduction. 2nd ed. HR Fundamentals. London: CIPD and Kogan Page.

WARR, P., BIRD, M. and RACKHAM, N. (1970) Evaluation of management training: a practical framework, with cases, for evaluating training needs and results. Aldershot: Gower.

Visit the CIPD and Kogan Page Bookshop to see all our priced publications currently in print.

Journal articles

DERVEN, M. (2012) Building a strategic approach to learning evaluation. T+D. Vol 66, No 11, November. pp54-57.

DIAMANTIDIS, A.D. and CHATZOGLOU, P.D. (2014) Employee post-training behaviour and performance: evaluating the results of the training process. International Journal of Training and Development. Vol 18, No 3, September. pp149-170.

MATTOX, J.R. (2012) Measuring the effectiveness of informal learning methodologies. T+D. Vol 66, No 2, February. pp48-53.

PHILLIPS, J.J. and PHILLIPS, P. (2010) Confronting CEO expectations about the value of learning. T+D. Vol 64, No 1, January. pp52-57.

PHILLIPS, J.J. and PHILLIPS, P. (2011) Moving from evidence to proof. T+D. Vol 65, No 8, August. pp34-39.

CIPD members can use our online journals to find articles from over 300 journal titles relevant to HR.

Members and People Management subscribers can see articles on the People Management website.

This factsheet was last updated by David Hayden.

David Hayden

L&D Consultant/Trainer

David is part of the CIPD’s L&D Content Team. He leads on the design and delivery of a number of L&D-focused products as well as keeping his practice up to date by facilitating events for a range of clients. David began his L&D career after taking responsibility for three Youth Trainees back in 1988 as an Operations Manager, and has since gone on to work in, and headed up, a number of corporate L&D teams and HR functions in distribution, retail, financial and public sector organisations. He completed his Masters degree specialising in CPD and was Chair of our South Yorkshire Branch for two years from 2012 before joining as an employee in 2014. David also has a background in 'lean' and has worked as a Lean Engineer in a number of manufacturing and food organisations. Passionate about learning and exploiting all aspects of CPD, David’s style is participative and inclusive.
