Assessments are essential for understanding employees’ skill levels, identifying areas for growth, and ensuring compliance across the organization. But what happens when some employees use less-than-honest means to pass those assessments? Preventing cheating can be difficult, which is why it’s just as critical to evaluate completed assessments for any instances of cheating that slipped through. In this post, we explore the business case for monitoring assessment data in order to detect and address cheating.
Cheating on learning assessments is potentially more prevalent than you might think. So it’s not surprising that several Watershed clients have uncovered cheating after looking at their data more closely. While cheating detection may not be at the top of your list for a learning analytics business case, it can be an important benefit alongside other uses.
If you’re new to this series, we encourage you to read the introduction, which provides an overview and recommendations for making the most of this series.
What Is Cheating Detection?
Cheating detection means identifying where cheating is taking place or has taken place. Historically, in-person observation has been the predominant method for proctoring quizzes and assessments. But as most learning moves online these days, we’ll focus on using data to detect cheating.
Specifically, we’ll discuss using data anomalies to identify cheating—such as learners completing tasks in an unexpected order, taking significantly longer or shorter times to complete a task, or repeating a task an unreasonable number of times.
Remember, cheating detection doesn’t prevent people from being dishonest, but instead enables you to:
- take appropriate action to address any cheating,
- find opportunities to improve learning programs by removing any “loopholes” that may contribute to cheating, and
- discourage learners from cheating if they know they are more likely to be caught.
How to Use Learning Analytics to Spot Cheating in Online Assessments
Learning assessments may be tied to certain rewards—such as prizes, recognition, or even promotions—based on skills and competencies those assessments “prove.”
In some cases, successful completion is required for continued employment. In other cases, people may be motivated to pass quickly, so they can get on with their work. Regardless of the situation, where there is motivation to pass an assessment, there also may be motivation for some learners to pass with less-than-honest methods.
The platforms you use to run assessments should be designed with features to prevent and discourage cheating. But people can be surprisingly creative. As some of the stories I’ll outline in the next section illustrate, it’s easy to miss a potential vulnerability or loophole—especially when learning, not security, is your top priority.
If you can spot cheating, you or a manager can follow up with appropriate action depending on the severity—whether that’s sending a friendly warning, asking the learner to retake the assessment, or even formally disciplining someone.
Detection is about using data to address instances of cheating with the aim of righting any wrongs and creating an environment where cheating is less likely to happen in the first place (because people expect to be found out).
Something’s Not Right: Real Examples of Online Cheating on Employee Assessments
For confidentiality reasons we won’t name names, but several Watershed clients have uncovered instances of learners completing learning and assessments in unexpected ways. Consequently, these learners potentially:
- scored better in an assessment than they otherwise would have, or
- skipped learning content they were expected to complete.
1) Skipping ahead to get ahead.
In the first example, a client used a system where learners could skip training content by passing a pre-test on their first attempt. Learners who did not pass the first time were required to work through the learning content and then complete a post-test, which was identical to the pre-test and could be repeated until they passed.
The learner score data revealed that some learners had discovered the post-test could be accessed at any time, including before they attempted the pre-test. So, to avoid the risk of failing the pre-test and having to complete the learning content, these learners took the post-test first, noted all the correct answers, and then went back to complete the pre-test, ensuring they passed and could skip the related learning content.
These reports illustrate the example above and show that Beverley Howard and Kelly Pena (not real names) have post-test scores despite scoring 100% on the pre-test. You can look at the dates of their assessments and see they both took the post-test before the pre-test.
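This kind of ordering anomaly is straightforward to check programmatically. The sketch below assumes assessment records have been exported as simple rows with learner, activity, and timestamp fields; the field names and sample data are illustrative, not Watershed’s actual schema, and a reporting tool like Report Builder can surface the same pattern without code.

```python
from datetime import datetime

# Hypothetical exported assessment records; field names are illustrative.
records = [
    {"learner": "Beverley Howard", "activity": "post-test", "timestamp": "2024-03-01T16:40:00"},
    {"learner": "Beverley Howard", "activity": "pre-test",  "timestamp": "2024-03-02T10:15:00"},
    {"learner": "Chris Ortega",    "activity": "pre-test",  "timestamp": "2024-03-01T09:00:00"},
]

def flag_post_before_pre(records):
    """Return learners whose earliest post-test attempt precedes their earliest pre-test."""
    firsts = {}  # (learner, activity) -> earliest timestamp seen
    for r in records:
        key = (r["learner"], r["activity"])
        ts = datetime.fromisoformat(r["timestamp"])
        if key not in firsts or ts < firsts[key]:
            firsts[key] = ts
    flagged = []
    for (learner, activity), ts in firsts.items():
        if activity == "post-test":
            pre = firsts.get((learner, "pre-test"))
            if pre is not None and ts < pre:
                flagged.append(learner)
    return flagged

print(flag_post_before_pre(records))  # ['Beverley Howard']
```

Anyone flagged here took the post-test before the pre-test, which is exactly the loophole described above.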
2) Why learn when you can guess?
In this example, a client used a scatter plot to identify assessments with exceptionally high attempt counts and total time spent. They found one assessment stood out with an average total time spent of more than 8.5 hours per learner and a similarly high average attempt count approaching 50.
Learners were spending a whole day or more attempting this assessment until they passed. This assessment was high stakes—learners had to pass it for health and safety reasons to continue working.
Yet passing after that many attempts over that many hours is potentially more indicative of persistence than of actual competence and learning. The assessment’s effectiveness at ensuring employees were sufficiently competent was undermined by the number of times learners could retake it.
In this example report, the Mandatory Safety Procedures assessment has a significantly higher attempt count and average time taken than the other assessments. This could be a red flag that learners don’t know the answers and have completed the assessment by trying every option until they pass.
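One simple way to surface an assessment like this from summary data is to flag any assessment whose average attempt count or time spent is far above the typical value. This is a minimal sketch with made-up numbers; in practice the averages would come from your reporting tool rather than being hard-coded, and the threshold is an assumption you would tune.

```python
from statistics import median

# Hypothetical per-assessment averages; values are illustrative only.
assessments = {
    "Fire Warden Refresher":       {"avg_attempts": 2.1,  "avg_hours": 0.5},
    "Manual Handling":             {"avg_attempts": 1.8,  "avg_hours": 0.4},
    "Data Protection Basics":      {"avg_attempts": 2.6,  "avg_hours": 0.6},
    "Mandatory Safety Procedures": {"avg_attempts": 48.0, "avg_hours": 8.5},
}

def outliers(assessments, metric, factor=3.0):
    """Flag assessments whose metric exceeds factor x the median across assessments."""
    med = median(a[metric] for a in assessments.values())
    return [name for name, a in assessments.items() if a[metric] > factor * med]

print(outliers(assessments, "avg_attempts"))  # ['Mandatory Safety Procedures']
print(outliers(assessments, "avg_hours"))     # ['Mandatory Safety Procedures']
```

Comparing against the median (rather than the mean) keeps a single extreme assessment from masking itself by dragging the average up.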
Clients have also used Watershed to identify where the completion time for content was unrealistically short. It was clear that learners were not even looking at content, but were clicking through it to generate a completion.
If there’s a business reason for learners to complete content, that same reason means they need to actually engage with it. Learners clicking through content benefits nobody and wastes valuable time.
This example bar chart shows two learners completed the course, which should have taken 30 minutes, in 2 and 4 minutes respectively. This suggests they skipped through the content rather than engaging with it.
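Unrealistically fast completions can be flagged with a simple duration check. The sketch below assumes completion records with learner and duration fields, plus an expected course duration; the 25% cutoff is an assumption you would adjust per course.

```python
# Hypothetical completion records; a 30-minute course finished in a couple of
# minutes suggests clicking through rather than engaging.
EXPECTED_MINUTES = 30
MIN_FRACTION = 0.25  # flag completions under 25% of the expected duration

completions = [
    {"learner": "Learner A", "minutes": 2},
    {"learner": "Learner B", "minutes": 4},
    {"learner": "Learner C", "minutes": 31},
]

too_fast = [c["learner"] for c in completions
            if c["minutes"] < EXPECTED_MINUTES * MIN_FRACTION]
print(too_fast)  # ['Learner A', 'Learner B']
```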
In each of these cases, Watershed was used to identify learners behaving in ways that were not expected and which undermined the value of the assessment or learning content. By identifying these issues, the organizations were able to follow up as they felt appropriate.
How Does Watershed Support Cheating Prevention for Workplace Assessments?
Watershed’s Report Builder gives you the tools to build reports that look at your learning and assessment data in detail so you can see everything learners have done beyond just a simple pass or fail. Anomalies and outliers in these reports may point to unexpected learner behaviors you need to address, including cheating.
What you do with that information to address cheating is up to you. But with Watershed, you can identify when and where it is happening to make sure your action is well informed.
Hint: When you think you’ve identified cheating, avoid reacting too quickly. Take the time to check the data carefully and talk with those involved before making accusations. It’s easy to reach the wrong conclusion from data alone, and a conversation can often help you get the whole picture before taking action.
Detecting cheating requires ongoing monitoring, and Watershed’s report dashboards are updated automatically as data flows in from your assessment tools and other learning platforms. This enables you to keep track of issues as they arise, rather than playing catch-up later.
Making the Case: Why Should the Business Care about Online Cheating?
If learners are required to pass an assessment to demonstrate skills and knowledge or to complete a piece of content, then presumably there are good business reasons behind it. These reasons might relate to compliance, health and safety, or commercial processes.
Whatever the business goals, if learners can pass an assessment without actually having the required skills and knowledge, that assessment is not going to be as effective at meeting your goals as it otherwise would be.
For these reasons, whenever there is a business case for assessments, there is also a case for preventing cheating. And as we explained above, detecting cheating is a vital part of prevention.
How can you convince stakeholders of the value?
You might need to convince stakeholders of the value of cheating detection. That’s because they might assume cheating isn’t common or that it’s not significant if it does happen.
To address the first issue, consider piloting some research on one high-stakes assessment that many people in your organization have completed, to see if you can find any evidence of cheating. Any evidence you do find could undermine the usefulness of that assessment until cheating detection is in place (since you won’t know who legitimately passed the assessment).
For the second issue, consider the purpose of that high-stakes assessment and what might happen if that purpose was subverted by cheating. If the consequences of cheating really are low impact, perhaps consider whether that assessment actually has any value in the first place! If an assessment is worth doing, it’s also worth making sure the results it generates are accurate.
Next Course: The Business Case for Skill Data and Analytics
Identifying when learners are misbehaving helps ensure the integrity of your assessments. Once you’ve done that, you can trust your assessment data to evaluate the skills and competencies in your organization.
This data is important to help you match people with certain skills to complementary roles and identify skills gaps that may require further training. In the next post, we’ll look at the business case for skill data and analytics and their significant benefits.
About the author
As a co-author of xAPI, Andrew has been instrumental in revolutionizing the way we approach data-driven learning design. With his extensive background in instructional design and development, he’s an expert in crafting engaging learning experiences and a master at building robust learning platforms in both corporate and academic environments. Andrew’s journey began with a simple belief: learning should be meaningful, measurable, and, most importantly, enjoyable. This belief has led him to work with some of the industry’s most innovative organizations and thought leaders, helping them unlock the true potential of their learning strategies. Andrew has also shared his insights at conferences and workshops across the globe, empowering others to harness the power of data in their own learning initiatives.