Within the learning and development community there is currently a great deal of interest in enhancing evaluation expertise.
A number of things are driving this, not least the pressure from management (ever mindful of budgetary concerns) for data about the effectiveness of the training spend. While most instructional designers and others involved in training and development are aware of and in favour of evaluation, they can benefit from a deeper understanding of effective evaluation techniques. Our institutions of higher education don’t provide much help – there are few courses designed specifically for practitioners in the corporate or organisational space.
Kirkpatrick is well known in the learning and development world and intuitively seems to fit the bill for most organisations; however, it has its limits, particularly when management wants to know whether programs are really helping the organisation achieve its business goals. Professor Allison Rossett, a highly respected US academic with years of corporate and government consulting experience, has also pointed out the challenges that newer modes of learning – individualised technology-based learning, informal learning and the like – pose for evaluation. Her article Leveling the Levels (published by ASTD in T+D Magazine) provides insights and food for thought: how does one apply Kirkpatrick’s evaluation approach in these situations?
Without an evaluation strategy, practitioners are largely doomed to ‘random acts of evaluation’, relying heavily on surveys and measuring reactions and application as best they can, often in a rather haphazard manner. “Have survey, will evaluate” seems to be the order of the day. A strategic approach to evaluation allows for engagement with key stakeholders, the implementation of a focussed evaluation plan, the presentation of credible and relevant data, and the ability to identify key actions to improve or leverage the success of what has been delivered.
Impact Evaluation is the evaluation that management is most interested in. They want to know whether training is contributing to the achievement of key business goals. Impact Evaluation is my terminology for Success Case Evaluation, developed by Professor Rob Brinkerhoff – business leaders understand immediately what Impact Evaluation focusses on. This is a research-based approach that can be applied to any intervention, not just training. Two non-training examples: evaluating the merger of two companies across the country, to see why the merger worked well in some areas but not others; or assessing the effectiveness of a new performance management process in an organisation. Not only does Impact Evaluation provide data on the extent to which training (or another intervention) is contributing to the achievement of business goals; it also uncovers the reasons why or why not, as well as systemic issues that can be leveraged for greater impact in the future. Brinkerhoff has written two books about Success Case Evaluation, but his article The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training sets out the nuts and bolts of his approach. It is not rocket science, for sure – no stats needed! But there are a number of traps for inexperienced evaluators, and attending an Impact Evaluation master-class is an excellent way to learn and practise the foundational skills this methodology requires.
Many people can knock together a backyard shed, given an elementary knowledge of building and a few tools. Building a house requires recognised skill and expertise that come from education and practice in the real world of building. So it is with evaluation – Kirkpatrick’s levels will get you started, but without a strategy for evaluation and some in-depth training in Impact Evaluation it will be extremely difficult to uncover what is really going on and why, and then to figure out what to do in response.
For further information about DeakinPrime's evaluation workshops: