Carol Mannion

Measuring Impact of Learning & Development

Updated: Oct 7, 2020

Throughout my 10+ years in Learning and Development, capturing the true impact of learning programmes has been a constant challenge. I’ve seen so many learning events come and go: superb energy in the room, amazing feedback from participants, but then… the abyss, as people return to their day-to-day roles and the event fades into memory.


We congratulate ourselves as learning professionals when we design and deliver a high-impact session. We can see the pennies dropping and the light bulbs igniting during our delivery. But how can we be sure that this illumination translates into action or behaviour change in the learner’s day-to-day life? Without a solid approach to collecting evidence of this learning transfer to the real world, we may never capture the true impact of our work.


To me, this is the equivalent of an architect never seeing the final building, or a painter never stepping back to see the completed canvas. How can we truly feel fulfilled in what we do if we don’t witness the true impact? Many facilitators tell stories of former participants contacting them well beyond the learning event to describe the impact it has had on them. These are usually ad hoc, feel-good moments for the facilitator. I argue that we should not leave this feedback to chance. We are missing out on seeing the impact of our work (good and bad) and, critically, we are missing an opportunity to report this impact to our stakeholders or clients.


How can we take that step back and view the completed building or canvas alongside the blueprint or sketch? In my view, it doesn’t need to be complex, but it does require us to broaden our focus beyond the lower tiers of the Kirkpatrick model.


The Kirkpatrick Model challenges us to assess the impact of learning across four levels: Reaction, Learning, Behaviour Change and Business Impact. Participant Reaction is most commonly evaluated via a participant survey, often referred to as the “happy sheet”. In my experience, the output of this type of survey usually comprises some warm and fuzzy responses about how great the facilitator was, along with some not-so-positive responses about the air conditioning or the comfort of the chairs! As facilitators, these Level 1 surveys tell us little that we don’t already know. After all, we have been present in the room throughout.


But what message are we giving our participants when we distribute a “happy sheet” at the conclusion of a learning event? “Just-in-time” feedback tools are becoming more and more common in customer service contexts. We see emoji buttons in supermarkets and at airports asking us to tell the provider how happy we are with the service or the facilities. Footfall, frequency and immediacy combine to give the provider a picture of how the service is being received by its customers. The very presence of these buttons gives us an insight into what the service provider values: customer experience.


In my view, our service as learning facilitators is too complex to rely on customer reaction or satisfaction alone as a measure. Continuously asking participants whether they “enjoyed” the learning event sends the wrong message about what we value. Far more relevant, in my opinion, is how active the participant was during the session, how challenged they were in their thinking and how motivated they now feel to apply the learning. Real learning occurs when people are pushed out of their comfort zone, so a level of disruption and discomfort is to be expected.


Moreover, what happens in the days, weeks and months after the event is critically important in building a picture of realised impact. Understanding the changes that participants have made in their real-world contexts gives us insight at the higher Kirkpatrick levels (Behaviour Change and Business Impact). However, to carry out a meaningful assessment of a learning programme’s impact after the fact, we must have clearly defined its purpose and expected learning outcomes when the intervention was conceived.


So, how can we better approach the evaluation of Learning & Development? I have two suggestions.


1. Define what success looks like.


If we are to elevate our evaluation practices to address the higher tiers of Kirkpatrick (i.e. Behaviour Change and Business Impact), we simply must identify our “end in mind”. What observable behaviour changes are we looking for? How will the business be different as a result? Is there a metric (engagement or otherwise) that we are looking to move through this learning intervention? If these questions cannot be answered up front, the relevance of the learning intervention itself must be called into question. Developing an impact map that outlines the changes in observable behaviours and business measures being sought gives the learning facilitator and the client organisation a shared understanding of the purpose of the intervention, and serves as a barometer after the fact for measuring and reporting the true impact.


2. Replace the “happy sheet” with a “motivation sheet”.


Survey your participants on their likelihood to apply what they have learned. Ask them how motivated they feel, on a scale of 1 to 10, to do something different as a result of the learning intervention. Ask them to think of specific actions they could take, or specific applications for the learning, in their real-world context.


Every learning event, regardless of its length, should incorporate some time and space for action planning. Participants need time to decide what they will do with their newly acquired knowledge or skills. The act of writing this down, or sharing it with a peer, greatly increases the chance that the participant will act on it after the event.


As learning facilitators, we must think of our role as broader than delivering an enjoyable and engaging learning event. We can design an impact assessment approach into the programme by being very clear about purpose and learning outcomes at the front end. We can increase the chances of learning transfer by asking the right survey questions of our participants. The true measure of our work is only visible in the changes participants make as a result of the learning. We can add great value to our participants and client organisations by taking a more proactive approach to tracking these changes and reporting the impact of our work. Only then can we step back to view the completed canvas!

