Advanced Evaluation is less common than Data Evaluation, which means there’s no well-worn path to follow. Organizations venturing into this level of complexity often do so in ways that differ from one another. We see this in our Learning Analytics Research Study data, where no single analysis type has significantly more report views than the rest. That's why we’ve divided the Advanced Evaluation complexity into five analysis types, which we’ll explore in this post along with examples of how organizations are applying them.
What is Advanced Evaluation?
As a quick reminder, Advanced Evaluation applies statistical techniques, such as correlation and regression analysis, to understand not only what happened but also why it happened. This type of evaluation also builds theories about causation, which allows you to focus on what’s working best while scrapping ineffective learning.
In other words, advanced evaluation asks: Why is this happening?
Advanced Evaluation & Analysis Types
Now, it's time to learn more about the five analysis types we identified that fall under the Advanced Evaluation complexity:
Chain of Evidence Analysis

The term “chain of evidence” refers to showing the impact of training on business performance by tracking evidence along a chain of events, from the learning experience and knowledge gained through to improved performance and business impact. This process is loosely based on Kirkpatrick’s four levels of learning evaluation.
One type of advanced evaluation is to pick two links in this chain (as illustrated above) and evaluate the extent to which they are related. This analysis looks to validate the logic of the chain envisaged by the learning design and the effectiveness of the learning strategy in practice.
For example, this correlation report from Watershed shows the relationship between assessment score (a measure of learning) and customer satisfaction rating (a business KPI).
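As a rough sketch of this kind of chain-of-evidence check, you can compute Pearson's correlation coefficient between two links in the chain. The data and field names below are hypothetical, not taken from the Watershed report:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one row per salesperson.
assessment_scores = [62, 70, 75, 81, 88, 93]        # measure of learning
csat_ratings = [3.1, 3.4, 3.3, 4.0, 4.2, 4.6]       # business KPI

r = pearson_r(assessment_scores, csat_ratings)
print(f"Pearson r = {r:.2f}")  # a value near +1 suggests the two links are related
```

A strong correlation doesn't prove causation on its own, but it's evidence that the logic of the chain holds in practice.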
Drop-Off Analysis

Drop-off analysis looks at where people exit a particular process. For example, this might mean looking at:
- how far through people watch a video,
- the slides where people drop out of an e-learning course, or
- how far people get through a MOOC before they disengage.
The following example shows drop-off analysis for an xAPI-tracked game we hosted during a conference. Perhaps confusingly, there’s negative drop-off from launching the game through starting an attempt; this was because not everyone used the launcher, and because a single player could start multiple games from the same launch. We then see a massive drop-off between people starting and finishing the game.
This helped reinforce that, while the game might be great in a work context, people just didn’t have the time or interest to finish several rounds of gameplay at a large-scale conference. (And, actually, they didn’t need to play every round, as a few rounds were enough to show off the game.)
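The drop-off calculation itself is simple: for each step in the funnel, work out what share of the previous step's audience was lost. Here's a minimal sketch using made-up stage counts for a hypothetical e-learning course:

```python
# Hypothetical funnel: unique users reaching each stage of a course.
stages = [
    ("launched course", 500),
    ("completed slide 5", 420),
    ("completed slide 10", 260),
    ("finished course", 180),
]

drop_off = []
for (name, count), (_, prev) in zip(stages[1:], stages):
    pct = 100 * (prev - count) / prev  # share of users lost at this step
    drop_off.append((name, round(pct, 1)))

for name, pct in drop_off:
    print(f"{pct:5.1f}% dropped off before '{name}'")
```

The biggest percentage points you at the step worth investigating first.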
Segment Analysis

Segment analysis involves first identifying a specific group of people and then selecting that group for further analysis. For example, you might want to:
- know what learning activities are most popular amongst your top salespeople, or
- compare average scores for people who generally use mobile versus those who favor desktop.
The following example shows a scatter plot, which is one way to identify a segment. The highlighted yellow area identifies managers with high point-of-sale gross profit (POS GP) percentages and low chargeback (i.e. rebates) percentages.
This group can also be filtered in other reports for further analysis. For instance, you can compare how people with high POS GP percentages and low chargeback percentages performed on an assessment against people with low POS GP percentages and high chargeback percentages, to judge how well the assessment predicts KPI values.
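In code, defining a segment is just a filter over your records. The thresholds, names, and scores below are illustrative, not from the Watershed scatter plot:

```python
# Hypothetical manager records; thresholds are illustrative only.
managers = [
    {"name": "A", "pos_gp_pct": 14.2, "chargeback_pct": 1.1, "score": 91},
    {"name": "B", "pos_gp_pct": 8.5, "chargeback_pct": 4.3, "score": 64},
    {"name": "C", "pos_gp_pct": 13.0, "chargeback_pct": 0.8, "score": 88},
    {"name": "D", "pos_gp_pct": 7.9, "chargeback_pct": 3.9, "score": 71},
]

def in_segment(m, min_gp=12.0, max_chargeback=2.0):
    """High POS GP, low chargeback: the highlighted region of the scatter plot."""
    return m["pos_gp_pct"] >= min_gp and m["chargeback_pct"] <= max_chargeback

segment = [m for m in managers if in_segment(m)]
rest = [m for m in managers if not in_segment(m)]

def avg_score(ms):
    return sum(m["score"] for m in ms) / len(ms)

print(f"segment avg score: {avg_score(segment):.1f} vs others: {avg_score(rest):.1f}")
```

Comparing the segment's assessment scores against everyone else's is exactly the kind of follow-up analysis described above.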
Workflow Analysis

Workflow analysis looks at how learners find or access learning resources. Did they come from a search, a recommendation, or a homepage link? This kind of analysis can help determine the best way to promote new or featured content within the general information architecture of your platform.
The following example looks at the number of times different items were launched from a particular panel on a platform. The 4-Hour Workweek is clearly the most-clicked recommended item.
So, what's special about it? Perhaps it sits in the top left, has a catchy image, or everybody just wants to know about four-hour workweeks. With some further digging, you can use this information to improve the clickability of future recommendations.
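Counting launches by item and by source is a natural first step in workflow analysis. This sketch uses invented launch events (only "The 4-Hour Workweek" is mentioned in the example above; the other titles and sources are hypothetical):

```python
from collections import Counter

# Hypothetical launch events: (item, where the learner came from).
launches = [
    ("The 4-Hour Workweek", "recommendation panel"),
    ("The 4-Hour Workweek", "search"),
    ("Deep Work", "homepage link"),
    ("The 4-Hour Workweek", "recommendation panel"),
    ("Deep Work", "search"),
    ("Atomic Habits", "recommendation panel"),
]

by_item = Counter(item for item, _ in launches)
by_source = Counter(source for _, source in launches)

top_item, top_count = by_item.most_common(1)[0]
print(f"most-launched item: {top_item} ({top_count} launches)")
print(f"launches by source: {dict(by_source)}")
```

With xAPI-style tracking, each launch statement would carry the referring context, so the same counting works on real data.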
Qualitative Data Analysis

Qualitative survey responses can help you understand the reasons behind your quantitative data. Data makes much more sense when you know the context, and qualitative information provides that context. Don’t overlook it!
For example, if everybody fails question 37 and the feedback says there's a bug with question 37 (i.e. all the options are the same), you know why everyone failed that question. You can also use qualitative data to further explore significant successes and failures, as directed by Brinkerhoff's Success Case Method.
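A simple way to connect the two data sets is to flag outliers in the quantitative data and then search the free-text feedback for mentions of them. The fail rates and comments here are invented to mirror the question-37 example:

```python
# Hypothetical data: per-question fail rates and free-text feedback comments.
fail_rates = {"q36": 0.12, "q37": 0.97, "q38": 0.18}
feedback = [
    "question 37 is broken - all the options are the same",
    "loved the module overall",
    "q37 had identical answers?",
]

# Flag questions that almost everyone fails...
suspect = [q for q, rate in fail_rates.items() if rate > 0.9]

# ...then check whether any qualitative comments mention them.
explained = {
    q: [c for c in feedback if q.lstrip("q") in c]
    for q in suspect
}
print(explained)
```

Real feedback is messier than a substring match, of course, but even this crude pairing often surfaces the "why" behind a strange number.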
Currently, there’s no clear path as you move into the Advanced Evaluation complexity of learning analytics; you have to make your own. Either pick the most relevant analysis type from this post and implement it in your organization, or explore other ways to understand why things are happening.
Up Next: The Predictive and Prescriptive Complexity
Next week, we reach the dizzy heights of the “Predictive and Prescriptive” complexity. Hold on to your hats; it gets windy up there!
Getting started is easy.
Download this checklist to start tracking the analytics you already have in place and where you’d like to go next. Continue your journey by downloading the following eBook, which walks you through getting started with learning analytics.