Performance appraisal has been a common point of discussion in recent years, particularly in descriptions of how to dump, ditch, axe or abolish it. Often the arguments and case studies cited don't live up to the sensationalist headlines, it being more a case of changing performance appraisal than killing it, but changes are certainly afoot in many organisations. The main trends appear to be towards making appraisals more regular, two-way or informal and light, rather than process-heavy, time-consuming and rigid. Case studies of 'leading' practice are interesting – giving insights into what changes can be made and how – but invariably they give little if any evidence on the most important question: does it work? Do the changes being popularised have the desired effect of actually improving performance?

What kind of evidence is there, you may ask? Isn't it enough to rely on organisational data, the views of stakeholders and our professional experience? The evidence-based approach would argue not. There is often a wealth of scientific research on cause and effect, and we'd be fools not to weigh this up in a considered way.

The CIPD's research with the Center for Evidence-Based Management does just this. The report, Could Do Better?, presents a short systematic review of the research on what works in goal setting and performance appraisal, what explains the impacts, and what contextual factors we should take note of. Three things are clear: there is a lot of robust evidence, including high-quality meta-analyses; goal setting and appraisals can work well or badly; and we know some of the reasons why.

To take the beginning of the performance cycle, we find that different types of goals or targets work in different settings.
Putting the SMART acronym (typically: specific, measurable, assignable, realistic and time-related) to the test, evidence shows that targets that are both specific and challenging (but realistic) are effective when the tasks in hand are relatively straightforward and predictable. But for more complex jobs – for example, where tasks involve making data-based decisions or have a number of interlinked stages – targets are more likely to contribute to performance when they are not specific ('do your best'), and what works best of all are goals focused not on outputs but on learning or behaviour.

For appraisals themselves, a crucial point to recognise is that it's not so much what managers do, but how employees react, that makes the difference. One simple lesson we can draw from the research is to check in with employees after the appraisal. From my conversations with practitioners, this doesn't seem to be anything like common practice. It seems more common that managers, as well as their reports, are simply relieved to have got through the appraisal 'process' and have the paperwork done and dusted. But checking whether employees found it fair and useful is important. If they didn't, you know that more conversation is needed, as the whole performance management cycle could start to unravel at that point.

The proposal to ditch performance appraisal is certainly over-simplistic, but it has at least encouraged some more serious thinking about the practice. What is its function, what might be kept or scrapped, and how can it best serve its ultimate aim of improving performance? These are the sorts of questions we do well to ask.

To hear more about what the best scientific evidence tells us about performance management, join our international webinar on 14 December.