Data evaluation is by far the most common complexity level of learning analytics we observed in our learning analytics research study. Within this complexity level, we found 16 types of analysis, which we’ve grouped under three headings and will explore in this blog post.
Analysis Types for Data Evaluation
In our last blog post, we outlined the four complexities of learning analytics—including data evaluation, which asks whether what's happening is good or bad.
At this complexity level, we’re applying high-school level math—such as means, medians, modes, and basic statistics—to aggregate the data and establish benchmarks. Most analytics fall into this basic data evaluation category, but they still offer tremendous value and opportunities for some big wins.
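That kind of aggregation is straightforward to reproduce. Here’s a minimal sketch using Python’s standard library, with invented assessment scores standing in for data pulled from a reporting tool:

```python
# Basic data-evaluation math: the scores below are hypothetical
# assessment results, not data from the research study.
from statistics import mean, median, mode

scores = [72, 85, 85, 90, 68, 85, 77]

print(round(mean(scores), 1))  # average score, a simple benchmark
print(median(scores))          # middle value, robust to outliers
print(mode(scores))            # most common score
```

A benchmark like the mean can then be compared against each learner’s score to decide whether what’s happening is good or bad.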
Here are the analysis types we found in this complexity:
1) Frequency and Completion
Frequency and completion analysis types look at how often things happen and whether they have happened. This includes:
- Attendance Analysis: How many people showed up? Are sessions and training resources well utilized?
- Certification Analysis: How many people are certified? Have people obtained their required certifications?
- Completion Analysis: How many people completed a course, video, quiz, etc.? Have people completed what’s required?
- Utilization Analysis: How often is a learning resource used? What are the most-used resources?
- Engagement Analysis: What level of engagement does a resource have (e.g. comments, likes, shares)? What’s engaged with the most?
- Frequency Analysis: How often does something happen? This is a catchall for anything that doesn’t fall into one of the analysis types covered above.
For example, the following report shows frequency analysis of mistakes during a mock code blue simulation.
This report shows that three steps of the Mock Code Blue see significantly more mistakes than the others. This is a good indication that more training may be needed in these areas.
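A frequency analysis like this amounts to counting occurrences and ranking them. The sketch below uses Python’s `collections.Counter` on an invented mistake log; the step names are illustrative, not taken from the report above:

```python
# Hypothetical frequency analysis: count mistakes per simulation step,
# then rank steps by mistake count to see where training may be needed.
from collections import Counter

mistake_log = [
    "Start compressions", "Check responsiveness", "Start compressions",
    "Attach defibrillator", "Start compressions", "Attach defibrillator",
]

counts = Counter(mistake_log)
for step, n in counts.most_common():
    print(f"{step}: {n}")
```

The steps at the top of the ranking are the candidates for additional training.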
2) Outcome
Outcome analysis looks at the outcome of things, such as how well people perform or how a learning experience is rated. This includes:
- Before-and-After Analysis: What changed before and after an event (e.g. assessment scores, job performance, etc.)?
- Learner Ratings Analysis: How did learners rate a learning experience?
- Confidence Analysis: How confident are people in their answers?
- Time Spent Analysis: How much time is spent learning?
- Progress Analysis: How far are people through an event or experience?
- Question Analysis: How well are people answering questions?
- Cost Analysis: What's the cost to provide the event or experience?
- KPI Analysis: Where do KPI values stand?
The following confidence analysis example shows a comparison of confidence against correctness.
This kind of report could be used to identify people above the line, who are more correct than they are confident, and people below the line, who are more confident than they are correct. Confident-but-wrong people can be dangerous because they may act on their incorrect knowledge.
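The comparison behind that report can be sketched as a simple delta between self-reported confidence and correctness. The learner names, values, and threshold below are invented for illustration:

```python
# Sketch of confidence analysis: flag learners whose self-reported
# confidence (0-100) exceeds their correctness (% questions right).
# All data and the 10-point threshold are hypothetical.
learners = {
    "A": {"confidence": 90, "correctness": 60},
    "B": {"confidence": 55, "correctness": 80},
    "C": {"confidence": 70, "correctness": 72},
}

overconfident = [
    name for name, m in learners.items()
    if m["confidence"] - m["correctness"] > 10  # "below the line"
]
print(overconfident)  # learners to prioritize for follow-up
```

Here learner "A" would be flagged as confident but wrong, while "B" sits above the line (more correct than confident).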
3) Other
A couple of analysis types that don’t fit into the first two collections are:
- Resource Analysis: Analysis of resource metadata (e.g. How long are pieces of content?)
- Search Analysis: What learning content are people searching for? How often is the search feature being used?
For example, the following report shows use of search by device over time:
The majority of search is happening on desktop, and there was a significant increase in both desktop and mobile search in the spring of 2018. This may be a useful prompt to dig deeper into why activity increased so dramatically or to proactively evaluate if existing content is accessible and useful on mobile devices.
What Does Data Evaluation Look Like in Practice?
While there’s a wide variety of analysis types, our research study revealed that the majority of report views within the data evaluation complexity relate to completion analysis or utilization analysis. (This broadly mirrors the picture for categories, where most views are for program and experience reports.)
Most of the completion analysis reports relate to program analytics, with some relating to experience analytics. Likewise, many of the utilization analysis reports relate to experience analytics, with some relating to learner analytics.
As we discussed in our last post, organizations are generally looking to get the basics down first, before moving on to more complex analytics around impact and effectiveness.
These basics mean reporting that people have completed the things they’re supposed to have completed, and reporting on the usage of learning resources and platforms.
Across the breadth of different analysis types, we do see some organizations moving beyond these basics, but that’s not yet the norm.
Even within data evaluation, there’s a lot you can do beyond completion and utilization, but few organizations are doing so. Think about how you can gather more data about engagement and other areas of analysis for richer insights. Use the worksheet below as a starting point.
Up Next: The ‘Advanced Evaluation’ Complexity
In our next learning analytics blog post, we'll explore some examples of the next level of complexity: Advanced Evaluation and its five analysis types.
About the author
As a co-author of xAPI, Andrew has been instrumental in revolutionizing the way we approach data-driven learning design. With his extensive background in instructional design and development, he’s an expert in crafting engaging learning experiences and a master at building robust learning platforms in both corporate and academic environments. Andrew’s journey began with a simple belief: learning should be meaningful, measurable, and, most importantly, enjoyable. This belief has led him to work with some of the industry’s most innovative organizations and thought leaders, helping them unlock the true potential of their learning strategies. Andrew has also shared his insights at conferences and workshops across the globe, empowering others to harness the power of data in their own learning initiatives.