Big Data Challenge: How to Measure & Analyze L&D Impact [Part 3]

PART 3: How to Measure & Analyze L&D Impact

Q&A

During the webinar, attendees were encouraged to ask questions. Below are the questions and answers.

We estimate time by adding up all the audio. Captivate provides the time for you, but doesn't break it down per slide.

Great idea! For courses without audio, a common metric is one minute per slide. A lot of the time these metrics aren’t evaluated in any way for accuracy, though, so it’s interesting to compare data about estimated time with actual time taken; if you see big differences, then your metrics may not be that accurate.
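If you have both numbers, the comparison itself is simple. Here's a quick Python sketch of the idea; the field names and sample data are hypothetical, and the one-minute-per-slide figure is just the rule of thumb mentioned above:

    # Compare a rule-of-thumb time estimate with actual durations from the LMS.
    # Field names and sample data are hypothetical.
    MINUTES_PER_SLIDE = 1

    records = [
        {"course": "Fire Safety", "slides": 20, "actual_minutes": 35},
        {"course": "Data Privacy", "slides": 15, "actual_minutes": 14},
    ]

    for r in records:
        estimate = r["slides"] * MINUTES_PER_SLIDE
        ratio = r["actual_minutes"] / estimate
        if not 0.5 <= ratio <= 1.5:
            print(f"{r['course']}: estimated {estimate} min, "
                  f"actual {r['actual_minutes']} min; revisit the metric")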

How do you determine total time? Most estimates I've encountered were a guesstimate or a factor of 1 to 2 minutes per slide.

Because we received the data from the LMS, I don’t know how total time was calculated in the example given. However, a common way to estimate total time is to use a metric of one minute per slide.

Did I see something about competencies?

We didn’t cover competencies in this webinar, but take a look at the second half of “Getting Buy-In From Your Boss,” which contains examples of using Watershed for competencies.

So how flexible is the charting? Can you design your own charts? How would you specify a query for such a card?

When it comes to Watershed, charting is very flexible. You also can design your own charts using Explore, which is an easy-to-use report configuration tool for L&D professionals, managers, and other stakeholders. You don’t need to be a data scientist or programmer—you can just dive right in and start reporting.

How many of these reports or dashboards were out of the box vs. custom built?

Different learning analytics platforms offer different reporting options. In these examples, everything is out of the box and configured using Explore, Watershed's report builder.

Are these report cards created using the basic config options, or are many of them created in the Advanced Configuration?

Watershed’s Advanced Configuration allows more technical users to get “under the hood” of report configuration and access more complex functionality that isn’t available via our user interface (UI). We’re continually adding more functionality to simple configuration, while being super careful not to overload less technical users.

There was a mix of report cards shown in the webinar—some were created using simple configuration, while others were created using Advanced Configuration. While some cards were originally created with Advanced Configuration a while ago, they can now be created using simple configuration because of new functionality we’ve incorporated into the UI.

Do you have an example of a report that shows the effectiveness of a course in terms of people's ability to apply what they learned in the course back on the job and factors that either helped or hindered their ability to apply what they learned?

Yes! The medical example I shared in the webinar is a great example. Here are two more examples that deliver just that:

1) CLIENT STORY: MedStar Health
2) WEBINAR: The Art of Getting Buy-In from the Boss

Is this completion / attempts for one course or all courses?

You can report on a single course, a collection of courses, or all your courses together. There are filters in Explore that let you choose. I showed examples of one course [see recording from 11:20-16:27] and of all courses [see recording from 7:30-11:10] during the demo.

Looking forward to use cases. I hope for more than just leaderboards in gamification.

Yep! Leaderboards can be a great way for managers and others responsible for people to keep track of who’s most and least active and successful. Leaderboards also can help motivate learners in competitive contexts. We covered leaderboards in the webinar [around the 29-minute mark] alongside lots of other types of reports, including content analytics and comparing learning to performance.

It would be more helpful to see the course material than the results. What does "turn" mean in the game example? Are they dropping out before completing the game?

You can find a video of the game and more information on LEO’s website. The dropout analysis report I built shows people who drop out at each stage toward winning the game: launching from the LMS, opening the game home screen (which should happen automatically after launch), starting to play, completing the game with a win or a loss, and completing the game with a win.

In the demo data shown, a lot of people were dropping out without completing the game. In this instance, it was because we were using internal data from Watershed team members playing the game. When you’re testing a game, you don’t always have time to go right through to the end every time.
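For anyone curious how a dropout analysis like this is calculated, here's a minimal Python sketch. It simply counts distinct learners at each funnel stage; the stage names and data are invented for illustration, not the game's actual xAPI vocabulary:

    # Count distinct learners who reached each stage of the game funnel.
    # Simplified stand-ins for real xAPI statements.
    statements = [
        {"actor": "a@example.com", "stage": "launched"},
        {"actor": "a@example.com", "stage": "opened"},
        {"actor": "a@example.com", "stage": "started"},
        {"actor": "b@example.com", "stage": "launched"},
    ]

    stages = ["launched", "opened", "started", "completed", "won"]
    for stage in stages:
        learners = {s["actor"] for s in statements if s["stage"] == stage}
        print(f"{stage}: {len(learners)} learner(s)")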

I imagine you could track "On hacker playing card A, learner played card X"? That would give analytics on what learners understand and what they don't—individually, and as a population. Have you, or could you, go there?

That’s not exactly how the game works; the player’s role is more proactive in protecting his or her network than reactive (see the video linked above). But if the game did work how you imagined, then the metric you proposed could certainly be interesting to report on!

To get an idea of how much of the content the player had interacted with, we created a report that looked at:

  • which of the unique cards the player had used, and
  • which unique cards the player had seen used by the hacker.
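As a rough illustration of that "unique cards" calculation (the event structure below is invented for the example, not the game's real data model), a Python sketch might look like:

    # Tally unique cards the player used vs. unique cards seen from the hacker.
    events = [
        {"played_by": "learner", "card": "Firewall"},
        {"played_by": "hacker", "card": "Phishing"},
        {"played_by": "learner", "card": "Backup"},
        {"played_by": "learner", "card": "Firewall"},  # repeat, counted once
    ]

    used = {e["card"] for e in events if e["played_by"] == "learner"}
    seen = {e["card"] for e in events if e["played_by"] == "hacker"}
    print(f"Unique cards played: {sorted(used)}")
    print(f"Unique hacker cards seen: {sorted(seen)}")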

Is the information security game based on a simulation or accessing a VM running a virtual environment in which the game participant is operating? If the former, can the system be used to monitor a VM?

[Editor’s note: VM = Virtual Machine]

The game isn’t a simulation at all. It’s a virtual card game designed to raise awareness of cybersecurity threats and ways to avoid them. See the video above.

I would want a third pie chart showing courses appropriate for each grade.

Great idea! We could get this data from course metadata tags, for example.

You said something at the very start about links to the recordings for the previous 2 webinars. Where are those, please?

You can find all of our webinars on our Resources page.

I think percentage works better vs. exact numbers

Sometimes, yes. It depends on the data and the insights you are looking for. Both are possible with Watershed!

What is the source of the performance data? Does it feed into Watershed?

So to get more specific visualizations, does the xAPI have to be coded directly into the media object (e.g., tracking a video, or clicking on a like button for your Twitter / LinkedIn account)?

There are a few ways to get data into Watershed:

  • xAPI—either native support in the source application or as a customization to add support
  • Connector application that pulls data from the application and then pushes it to Watershed as xAPI
  • CSV import uploaded into Watershed and translated into xAPI statements
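To give a feel for the first option, here's a minimal Python sketch of sending a single xAPI statement to an LRS. The endpoint URL and credentials are placeholders; the statement structure (actor, verb, object) comes from the xAPI specification:

    import requests

    # A minimal xAPI statement: who did what to which activity.
    statement = {
        "actor": {"mbox": "mailto:learner@example.com"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {"id": "http://example.com/courses/fire-safety"},
    }

    # POST it to the LRS statements endpoint (placeholder URL and credentials).
    response = requests.post(
        "https://lrs.example.com/xapi/statements",
        json=statement,
        headers={"X-Experience-API-Version": "1.0.3"},
        auth=("key", "secret"),
    )
    response.raise_for_status()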

When you refer to many ways of importing CSV data, xAPI data, etc., are you referring to Watershed only, or are these generally the possibilities when using xAPI with an xAPI-ready LMS?

As far as I’m aware, our CSV import feature is unique to Watershed. It’s a really important feature, though, as a lot of data doesn’t exist in xAPI statement format. It’s a feature that’s unlocked a lot of doors for our clients—who would have struggled to get started with xAPI at all without it.
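As a general sketch of the idea behind a CSV import (this is not Watershed's actual pipeline, just an illustration of the translation step), each row becomes an xAPI statement:

    import csv
    import io

    # Translate rows of a simple completion CSV into xAPI-style statements.
    # Column names and IRIs are illustrative.
    csv_data = "email,course_id,score\nlearner@example.com,fire-safety,85\n"

    statements = []
    for row in csv.DictReader(io.StringIO(csv_data)):
        statements.append({
            "actor": {"mbox": f"mailto:{row['email']}"},
            "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
            "object": {"id": f"http://example.com/courses/{row['course_id']}"},
            "result": {"score": {"raw": float(row["score"])}},
        })

    print(statements)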

We are currently using iSuite and iSpringLearn for our LMS. Any information on how Watershed replaces or augments the reporting we can get from iSpringLearn would be appreciated, during or after the webinar.

I’m not familiar with iSpringLearn’s reporting in particular. But for many LMSs, we find that Watershed provides more detailed, flexible, and often nicer-looking reports than might be available within the LMS. Because reporting is all we do, we have to make it awesome; whereas for LMS vendors, reporting is one feature among many competing priorities.

The real advantage of an external Learning Analytics Platform like Watershed is when you combine data from various sources. Your LMS might have amazing reports about what’s happening inside your LMS, but that’s all. With Watershed, though, you can consolidate that LMS data and compare it with learning and performance data from other sources.

Do you know they're [the learners] launching from mobile devices because these devices are registered to the company?

The mobile app example I showed had separate apps for iPhone and iPad users and, therefore, the organization tracking the learners knew the device used based on which app was used. Another example collected device data from the user’s browser via a piece of metadata called the User Agent Header. The User Agent Header can sometimes be less reliable, as some browsers are configured to hide this information for privacy reasons.
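For the curious, device detection from a User Agent Header can be as crude as a substring check, as in this Python sketch. Real user agent strings are messy, which is part of why this method can be unreliable:

    # Very rough device detection from a User Agent header.
    def device_from_user_agent(ua: str) -> str:
        if "iPad" in ua:
            return "iPad"
        if "iPhone" in ua:
            return "iPhone"
        return "other"

    ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) "
          "AppleWebKit/605.1.15")
    print(device_from_user_agent(ua))  # iPhone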

Is there a mechanism for determining why learners are taking longer than expected? Are they a) super engaged and loving it? b) frustrated and taking forever? Or c) is the learning leadership bad at forecasting what it will take to complete those pieces?

The organization for which we created the reports you’re asking about wondered if the extra time was caused by language barriers, with people for whom English was not their first language taking longer. In fact, people whose native language wasn’t English took similar lengths of time to native English speakers.

Another avenue of investigation, if the data were available, would be to look more closely at exactly which courses, slides, and interactions people spend the most time on and see whether there are any commonalities among them.

Do you have any examples to show where you have connected data on learner answers/choices to conclusions about level of student knowledge/understanding?

We have one client (not featured in this webinar) who is comparing competency scores as determined by Storyline assessment questions with competency scores as determined by job performance. They are using this information to identify learners:

  • who have a good level of understanding, but are not applying it in the workplace, or
  • who are successful in the workplace despite a low level of understanding.
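The comparison behind that report boils down to checking each learner against two thresholds. Here's a hypothetical Python sketch (the scores and the 0.7 cut-off are invented for the example):

    # Flag the two groups described above from per-learner scores.
    THRESHOLD = 0.7  # illustrative cut-off for a "good" score

    learners = [
        {"name": "A", "assessment": 0.9, "performance": 0.4},
        {"name": "B", "assessment": 0.3, "performance": 0.8},
    ]

    for l in learners:
        knows = l["assessment"] >= THRESHOLD
        applies = l["performance"] >= THRESHOLD
        if knows and not applies:
            print(f"{l['name']}: understands, but isn't applying it on the job")
        elif applies and not knows:
            print(f"{l['name']}: succeeding despite a low assessment score")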

This was probably covered in the earlier sessions, but how does the Watershed application know what are the measures so that reports can be created?

Watershed has a feature called Measure Editor that empowers users to map the properties of an xAPI statement to aggregations—such as average, count, min, max, first, last, etc. Once created, these measures are available to everybody in the measures dropdown. Often, we begin client implementations by using the Measure Editor to create a starting set of measures.
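Conceptually, each measure is just an aggregation applied to a property pulled out of matching statements. Here's a plain Python sketch of those aggregations, using made-up score data:

    from statistics import mean

    # Scores extracted from a set of matching xAPI statements (hypothetical).
    scores = [72, 85, 91, 60]

    measures = {
        "average": mean(scores),
        "count": len(scores),
        "min": min(scores),
        "max": max(scores),
        "first": scores[0],
        "last": scores[-1],
    }
    print(measures)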

I'm unclear how this data is telling me if what's happening is good or bad. It seems like it's still basic data.

There are a few ways of visualizing in Watershed whether the data you’re looking at represents success or a need for change. In reports that show data over time, you might want metrics to be high—such as sales metrics. So, to show that success, the line chart should go up and to the right. Conversely, there are metrics you might want to be low—such as dropout rates. In that case, you’d want the line chart to go down and to the right to indicate positive progress.

You also can use benchmarks to compare with metric values. These benchmarks might be based on previous performance, project goals, or industry standards.
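The logic behind a benchmark comparison is simple enough to show in a few lines of Python; the key is knowing which direction counts as good for each metric:

    # Compare a metric against a benchmark, given the desired direction.
    def is_on_track(value, benchmark, higher_is_better):
        return value >= benchmark if higher_is_better else value <= benchmark

    print(is_on_track(120, 100, higher_is_better=True))    # sales: True
    print(is_on_track(0.15, 0.10, higher_is_better=False)) # dropout rate: False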

Do you need to use a specific xAPI profile for different analytics "blocks" to work?

xAPI profiles are a great idea, which we fully support; however, today the majority of xAPI applications do not follow a particular profile. As a result, we’ve designed Watershed in such a way that the measures and reports are flexible to work with any xAPI data. Measures can be configured to look at any xAPI statement, and reports can be filtered by any activities and verbs found in the xAPI statements sent to the LRS.
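That kind of filtering works because every xAPI statement identifies its verb and activity by IRI, so a report can match on those without knowing anything else about the data. A simplified Python sketch:

    # Filter xAPI-style statements by verb IRI, as a report filter might.
    statements = [
        {"verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
         "object": {"id": "http://example.com/courses/fire-safety"}},
        {"verb": {"id": "http://adlnet.gov/expapi/verbs/launched"},
         "object": {"id": "http://example.com/courses/data-privacy"}},
    ]

    completed = "http://adlnet.gov/expapi/verbs/completed"
    completions = [s for s in statements if s["verb"]["id"] == completed]
    print(len(completions))  # 1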

Are you getting more metrics via xAPI compared to what is tracked via SCORM-compliant [courses] (e.g., what each user answered compared to the general response for a particular question)?

SCORM 2004 actually defines quite a lot of detail when it comes to e-learning courses and assessments. The challenge is that not many courses implemented all of these features, so a lot of SCORM tracking in practice is even more limited than what the specification allows. You can get user responses with SCORM, but often you don’t.

xAPI is a lot more flexible than SCORM, so it can be used to transmit basically any data point. Of course, that doesn’t mean products that implement xAPI will send that data. In fact, there’s no minimum required data that has to be sent via xAPI.

That’s one of the reasons we have a Certified Data Sources list at Watershed. These are products that are not only xAPI conformant, but also send detailed data and are easy to connect.
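As one concrete example of the extra detail xAPI can carry, a statement can record the learner's actual answer to a question in its result. The IRIs below are illustrative:

    # An xAPI statement recording what the learner actually answered.
    statement = {
        "actor": {"mbox": "mailto:learner@example.com"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/answered"},
        "object": {
            "id": "http://example.com/assessments/q1",
            "definition": {"interactionType": "choice"},
        },
        "result": {"response": "option-b", "success": False},
    }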


