Why Is Learning Evaluation Hard?

I regularly speak with organizations about measuring the effectiveness of training. And I’ve noticed a common challenge among L&D teams—learning evaluation can be difficult. But it doesn’t have to be.

In this post, we'll explore how asking one question can not only uncover gaps in your training, but also reveal how to align that training with measurable outcomes.

The key to instructional design success

As we mentioned in our last learning evaluation blog post, one of the first and most important steps in the instructional design process is establishing what success looks like. This should be as simple as asking:

What goal should the training achieve?

Then, as learners complete the training, we can examine the extent to which it achieved that goal. We can also see where the training may have gone wrong.

But surprisingly often, L&D teams are unable to answer this question. This is usually because a training program has been limited to a topic and some loose learning outcomes, but there’s no defined business goal.

It’s a common story. As an L&D professional, the business asks you for a course on X and you deliver. It’s not your fault; you’re an instructional designer, so you design instruction.

But unless you understand the business drivers and design the course around those drivers, all your hard work isn’t going to have an impact on learners or the organization’s bottom line.

In other words, it’s just another course that doesn’t necessarily help people become better at their jobs. This approach isn’t good enough, and it can’t continue. It’s time for a paradigm shift. We need to move from being designers of instruction to improvers of people.

What’s wrong in these training examples?

Let’s look at some examples of corporate training that went wrong and how we can improve them.

Read each scenario and see if you can identify the reasons why evaluation might be difficult. Then, scroll down the page to compare your thoughts with our insights and recommendations.

For each scenario, consider:

  • Was the training successful?
  • What was the impact of the training?

Then reflect on your answers to those questions:

  • How confident are you in your answers?
  • If somebody disagrees, can you defend your answers?

NOTE: Any resemblance of these scenarios to events real or fictional is purely coincidental.

Scenario 1: New Hire Training

All new hires are required to complete onboarding training. This includes a message from our CEO, an eLearning course about the company’s history, and training on health and safety, information security, and equality and diversity.

We also ensure new hires get a tour of the office on their first day and have a scheduled first-week review with their line managers.

About 95% of new hires complete the online elements of the onboarding training by the end of their first month, and 79% of these employees pass all three mandatory training assessments on their first attempt. Our employee retention rates are standard for the industry.

Where do you think this training went wrong? Read our insights.

Scenario 2: Product Launch

We recently launched training for a new product called Garemoko. The product team sent presentation slides and information about the product, which we built out as an eLearning course with our authoring tool.

We designed various scenario-based elements and featured engaging videos. A scenario-based quiz at the end tested learners’ knowledge.

The course saw completion rates of 94%, which was higher than previous courses, and we had positive feedback from the sales team reflected in our NPS survey responses. In its first month, sales of Garemoko were 5% higher than those of the previous product release.

Where do you think this training went wrong? Read our insights.

Scenario 3: Diversity Awareness

A team member who works in one of our stores was accused of serious discrimination against a customer. The incident received global media coverage and caused serious reputational damage to the company.

In response, we rolled out awareness training across the company. Every staff member received face-to-face training, and we deployed an eLearning course that customer-facing employees are required to complete on an annual basis. We haven’t had another reported incident of this type since training was deployed.

Where do you think this training went wrong? Read our insights.

And now for the answers…

Now that you’ve read and considered each scenario, it’s time to compare your answers with ours.

Scenario 1: Is the new hire training successful?

It certainly has some success in that most people are completing it, but completion is a very low bar to set for success. A 79% pass rate seems good, but how easy—or hard—is the assessment?

And how can we be sure it’s assessing job competency, and not just new hires’ ability to guess the right multiple-choice answer or grab screenshots of the content while taking the course?

Or did they happen to already know all the answers before the training? Industry-average retention rates are fine, but we have no way of knowing how much, or how little, of that is down to the new hire training.

More importantly, there’s nothing to say that completion rates, assessment scores, or employee retention are the right success criteria for the new hire training. We don’t know what the training was designed to do, so we really can’t say whether it was successful.

Scenario 2: Completion and positive feedback are good, but...

Again, completions and positive feedback are good signs. If nobody completed the training, it definitely did not have an impact, and bad feedback can be useful in identifying problems with the training. But on their own, these factors don’t actually show whether the training was successful.

In this scenario, the goal is more obvious: increase sales. And on the face of it, the new product did see higher sales in the first month compared to other product releases.

The trouble is, there could be a whole range of reasons why sales were higher for Garemoko than for other products. Perhaps this launch was better timed, the company’s brand has grown in popularity, or the economy has improved since the last product release.

Or perhaps the product has enhanced features, costs less, or was targeted at a different demographic compared to previous product offerings.

Even if we can attribute the success to the work of the salesforce, can we be sure that the training was the main contributor to that improved performance? And if sales don’t do so well, is that due to an issue with the training? How can we know where the problem lies?

We don’t have information about what happened between learners completing the training and the sales figures coming in, so we can’t see the training’s impact on what the salespeople learned or how they applied it in their work. As a result, we lack the evidence to show that the training was responsible for that success.

Scenario 3: Investigate the underlying issues.

This time the goal is crystal clear: make sure an incident like this never happens again. The challenge is that these incidents are (hopefully) so rare and so serious that you don’t want to wait until after the next incident to realize the training was not effective.

The same problem exists with any training designed to prevent or prepare for a low-probability, high-impact risk. This might include training for:

  • Passenger aircraft crew members during an emergency landing
  • Bank employees in the event of a robbery
  • Health practitioners who treat patients
  • Information security professionals handling a data breach

So on the face of it, this training has been successful because there hasn’t been another incident. But based on the information in the scenario, there’s little reason to be confident there won’t be another one in the future.

To be more certain, there would need to be some investigation into the underlying issues that led to the original incident and then evidence that the training had addressed those issues.

Up Next: ADDIE Instructional Design Model

These scenarios and explanations are meant to show the difficulty of designing learning that’s effective, impactful, and measurable. And this process is even harder if you don’t have clear goals for the training and a clear plan for how the training will achieve them.

So what can you do about it? Well, keep reading this blog series to find out as we delve into several instructional design models—starting with the ADDIE method and how it can lead to the kinds of challenges we explored in this blog post.

Don’t miss out! Subscribe to Watershed Insights to have updates delivered straight to your inbox.



