Even the best of us can make mistakes. So, the more data sources and people involved in your learning ecosystem, the more likely someone is to break the rules and processes you’ve defined as part of your xAPI governance strategy. To avoid issues, it’s vital that you test, monitor, and enforce your rules and processes.
After all, if you don’t do it, who will? You are the law!
A cautionary L&D tale
Why are these steps important? Let’s say, for example, an organization already had good xAPI processes and rules in place.
But, when a newly hired instructional designer started building and updating courses, no one realized every course that designer touched was given the same activity ID.
This continued unnoticed for several months until users started complaining that their completions weren’t being recorded.
The resulting tracking data was unusable because there was no way to differentiate courses from one another, as they all had the same ID. That also meant the L&D team couldn’t track which courses learners actually completed.
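A simple automated check could have surfaced this kind of problem early. Here’s a minimal sketch (the statements and activity IDs are made up for illustration) that flags any activity ID associated with more than one course name:

```python
# Illustrative sketch: flag activity IDs shared by multiple course names.
# The statements and IDs below are made up, not from a real LRS.
from collections import defaultdict

statements = [
    {"object": {"id": "https://example.com/courses/onboarding",
                "definition": {"name": {"en": "Onboarding 101"}}}},
    {"object": {"id": "https://example.com/courses/onboarding",
                "definition": {"name": {"en": "Fire Safety"}}}},
    {"object": {"id": "https://example.com/courses/gdpr",
                "definition": {"name": {"en": "GDPR Basics"}}}},
]

# Collect every course name seen for each activity ID.
names_by_id = defaultdict(set)
for stmt in statements:
    obj = stmt["object"]
    names_by_id[obj["id"]].add(obj["definition"]["name"]["en"])

# An activity ID mapped to more than one course name is a red flag.
duplicates = {aid: names for aid, names in names_by_id.items()
              if len(names) > 1}
for aid, names in duplicates.items():
    print(f"Reused activity ID: {aid} -> {sorted(names)}")
```

Run periodically against fresh LRS data, a check like this would have caught the duplicated ID within days rather than months.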
Monitoring, testing, and enforcing xAPI rules and processes could have caught this problem before it caused too much damage. Even better, these steps could have prevented any problems from happening in the first place.
Use a sandbox to test xAPI data
An important part of your testing processes is checking new xAPI data sources in a sandbox account that doesn’t affect your production learning record store (LRS).
This rule applies to everything from individual eLearning courses to recently implemented platforms to other large data sources. Every time you publish a course, test the data it generates so you can resolve any errors before the course is launched to real learners.
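In practice, that means routing test statements to a sandbox endpoint rather than your production LRS. The sketch below shows the idea; the endpoint URLs and credentials are placeholders, and the actual send is left commented out:

```python
# Minimal sketch: build an xAPI statement POST aimed at a sandbox LRS
# instead of production. URLs and credentials are placeholders.
import base64
import json
import urllib.request

SANDBOX_LRS = "https://sandbox.example.com/xapi/statements"   # placeholder
PRODUCTION_LRS = "https://lrs.example.com/xapi/statements"    # placeholder

def build_request(statement, endpoint, username, password):
    """Build (but don't send) an xAPI statement POST."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        endpoint,
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",
            "Authorization": f"Basic {auth}",
        },
        method="POST",
    )

test_statement = {
    "actor": {"mbox": "mailto:test.learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "https://example.com/courses/new-course"},
}

# While testing, always point at the sandbox, never production.
req = build_request(test_statement, SANDBOX_LRS, "sandbox-key", "sandbox-secret")
# urllib.request.urlopen(req)  # uncomment to actually send
```

Keeping the endpoint as an explicit parameter makes it hard to accidentally point test content at production.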
Test xAPI data with L&D reports
Be sure to test the data using reports, rather than taking it at face value. Errors that may be missed when simply looking at JSON code will stick out like sore thumbs when they’re transformed into visualizations.
For example, when looking at a large amount of data you might easily overlook a number that’s unexpectedly large. But, it’s much more difficult to miss a bar chart with a bar that's 100 times taller than the others.
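You can also automate that “100 times taller” check. Here’s an illustrative sketch (course names and counts are invented) that flags any metric an order of magnitude above the typical value:

```python
# Illustrative sketch: flag metrics that dwarf their peers, the kind of
# error a bar chart makes obvious. Course names and counts are made up.
completions_per_course = {
    "Onboarding 101": 48,
    "Fire Safety": 52,
    "GDPR Basics": 45,
    "Security Awareness": 5200,  # suspicious: roughly 100x its peers
}

# Use the median as a robust "typical" value.
values = sorted(completions_per_course.values())
median = values[len(values) // 2]

outliers = {
    course: count
    for course, count in completions_per_course.items()
    if count > 10 * median  # an order of magnitude above typical
}
print(outliers)  # the courses worth double-checking
```

Anything flagged here isn’t necessarily wrong, but it’s worth investigating before the data feeds a real report.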
As you look at reports, ask yourself if the data seems reliable.
- Does it make sense intuitively?
- Is the data clean and well structured?
- Do different data sources use similar xAPI statement structures, and is their data represented in the same way in reports?
- Are the metrics consistent with your expectations?
- If possible, compare what you’re seeing to any existing reports. For example, do the reports from your LMS data give you the same results?
If you see discrepancies, take time to figure out where and why the data doesn’t match your expectations and then adjust the data or your expectations accordingly.
Check out our xAPI implementation guide if you notice your data doesn’t line up with your expectations and/or with existing reports.
To give you an idea of the process involved, the following graphic illustrates the kinds of checks you might make when reviewing statements from a typical e-learning course quiz using a report on that data.
Review verb and activity IDs
In addition to having processes to test new data sources, it’s helpful to have a regular process to check the identifiers being used in your account.
Create reports that list all the verbs, activity types, and/or activity IDs being used, and keep an eye out for new IDs that don’t follow your rules and processes, contain typos, or have other errors.
If you use Watershed, for example, you can configure reports to list particular types of identifiers that have appeared for the first time in the last 30 days.
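If you’re not using a tool with that feature built in, the same check is straightforward to script. This hedged sketch (not the Watershed feature itself; timestamps and verb IDs are invented) scans statements for identifiers whose first appearance falls in the last 30 days:

```python
# Hedged sketch: find verb IDs first seen in the last 30 days.
# Timestamps and verb IDs below are invented for illustration.
from datetime import datetime, timedelta, timezone

# Illustrative statements as (timestamp, verb ID) pairs.
statements = [
    (datetime(2024, 1, 5, tzinfo=timezone.utc),
     "http://adlnet.gov/expapi/verbs/completed"),
    (datetime(2024, 6, 1, tzinfo=timezone.utc),
     "http://adlnet.gov/expapi/verbs/completed"),
    (datetime(2024, 6, 2, tzinfo=timezone.utc),
     "https://example.com/verbs/complated"),  # typo worth catching
]

now = datetime(2024, 6, 10, tzinfo=timezone.utc)  # fixed "today" for the example

# Record the earliest timestamp at which each verb ID appeared.
first_seen = {}
for ts, verb in statements:
    if verb not in first_seen or ts < first_seen[verb]:
        first_seen[verb] = ts

# Anything first seen within the window is new and worth reviewing.
new_ids = [v for v, ts in first_seen.items()
           if now - ts <= timedelta(days=30)]
print(new_ids)  # candidates to check against your governance rules
```

The same pattern works for activity IDs and activity types; just swap which identifier you extract from each statement.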
Up Next: How to deal with bad xAPI data (Part 5)
So, what do you do when you uncover an issue with your live data? In our next post, we’ll explore ways you can tidy up when issues arise.
NOTE: Judge Dredd is used here only to illustrate the examples in this blog post. Watershed is not associated with, sponsored by, or affiliated with Hollywood Pictures or Cinergi Pictures Entertainment.
About the author
As a co-author of xAPI, Andrew has been instrumental in revolutionizing the way we approach data-driven learning design. With his extensive background in instructional design and development, he’s an expert in crafting engaging learning experiences and a master at building robust learning platforms in both corporate and academic environments. Andrew’s journey began with a simple belief: learning should be meaningful, measurable, and, most importantly, enjoyable. This belief has led him to work with some of the industry’s most innovative organizations and thought leaders, helping them unlock the true potential of their learning strategies. Andrew has also shared his insights at conferences and workshops across the globe, empowering others to harness the power of data in their own learning initiatives.