I found this an interesting article, given that I've just put in a proposal for a major tender with a global company that will be looking in detail at how we will evaluate our learning proposals. Graham O'Connell will be part of the 'great evaluation debate' in November, so I will be keen to hear what others think before then. As L&D Manager for an organisation of over 35,000 people, I always found evaluation a contentious issue. CFOs always want to know what the ROI is; however, that's not always easy to show. I think there's a tipping point (Malcolm Gladwell comes to mind) when it comes to learning and development. If I look back on my own career and development, I can highlight some pivotal moments and training courses, yet I've been on one course or another since I was 18 (a very long time ago!). So how can I say which course had the biggest impact? How can I evaluate where the turning point was, or where the organisation got its best value for money? Here's what Graham says on evaluation:


Most evaluation is flawed - deeply flawed - but is still a vital ingredient in modern L&D.


The so-called Anna Karenina principle states that every unhappy family is unhappy in its own way. So it is with evaluation; the variation in flaws is seemingly endless. Sometimes it is the lack of an essential prerequisite. For example, it is nigh on impossible to prove that leadership development adds value until you know what value leadership adds. Often it is the blind adherence to a particular doctrine, such as busily converting return on investment evidence into a numeric value, through some reassuringly complex formula, only to find that the senior team don't have faith in the results. Other times it is the sneering avoidance of the tried and tested. Some people make it a badge of honour to dismiss end-of-course evaluation sheets as not worth the paper they are printed on, as if L&D were the only service in which customer feedback is unwelcome.


So what does work? Well, the irony is that most of the advocates of evaluation - including me - are more dependent on rhetoric, albeit underpinned by experience, than on hard evidence. Where there is evidence, it is often self-justifying, conveniently proving the approach being advocated. In any event, I take a broad definition of evaluation that embraces continuous improvement, assures learning and proves worth. And different types of evaluation work for different purposes.


But it is the last area - proving worth - that attracts the headlines. This is where the debate has most heat and, occasionally, glimpses of light. For my part, I am a great believer in defining and agreeing what you are trying to achieve right up front; setting out what success looks like in business terms. Being unequivocally business-orientated and focused on business impact is vital; I don't find it remotely in conflict with my other obsessions of being focused on individuals who, after all, are the ones who do the learning, and being focused on the quality of learning design and delivery. They are the essential trinity in my book.


I am a healthy sceptic, however, when it comes to hard measurement of business impact. There are too many confounding variables and weak links in the chain from individuals' learning to ultimate business success. Pursuing outcomes - yes; getting definitive proof - unlikely.


I favour an up-front investment appraisal approach (What are we trying to achieve? What L&D is needed? How much will that cost? What are the risks and chances of success? Is it worth it?). At least at this stage your assessment can affect key decisions. Measuring after the event has the whiff of a bolted horse.


But there is more to evaluation than measurement. Qualitative evidence in the form of success stories can be persuasive in keeping your stakeholders on board, and can show a return on their expectations. Just don't forget the basics either, like the role evaluation can play in continuous improvement. That way, you might just end up with a happy family.