Big Data Challenge: How to Measure & Analyze L&D Impact [Part 2]

PART 2: How to Measure & Analyze L&D Impact

Q&A

During the webinar, attendees were encouraged to ask questions. Below are the questions and answers.

Are there trends in the types of companies or industries for those who are in Levels 4 & 5?

M. Rochelle: First, organizations that are at Levels 4 and 5 are more focused on behavioral and organizational outcomes. More than half of Level 1 to 3 companies say they don’t have the proper metrics to measure beyond Kirkpatrick Level 2, while only 32% of Level 4 and 5 companies say the same.

With that said, organizations at Levels 4 and 5 have better overall financial performance, lower turnover rates, and higher employee engagement, and they sit at the top of their industry sectors in competitiveness, innovation, and being named best places to work.

How are you handling security around LRSs and learning analytics of personal data?

M. Rustici: Security for an LRS is no different from security for any other modern SaaS application. When it comes to privacy and sharing of personal data, the ultimate vision of xAPI is that learners store and report on all their personal learning experiences. They could then share that data with other parties, who would have a responsibility not to use or share it in unauthorized ways.
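For readers who haven't worked with xAPI directly, here is a minimal sketch of what a single statement posted to an LRS might look like. The endpoint URL, credentials, learner identity, and activity ID below are placeholders for illustration, not values from Watershed or Visa.

```python
import requests

# A minimal xAPI statement: who (actor) did what (verb) to which activity (object).
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",                        # placeholder learner identity
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/courses/data-privacy-101",   # placeholder activity ID
        "definition": {"name": {"en-US": "Data Privacy 101"}},
    },
}

# Post the statement to a hypothetical LRS endpoint over ordinary HTTPS.
response = requests.post(
    "https://lrs.example.com/xapi/statements",            # placeholder LRS URL
    json=statement,
    auth=("lrs_key", "lrs_secret"),                       # placeholder Basic auth credentials
    headers={"X-Experience-API-Version": "1.0.3"},        # version header required by the spec
)
response.raise_for_status()
print("Stored statement ID:", response.json()[0])         # the LRS returns a list of statement IDs
```

Because a statement is simply JSON sent over HTTPS, the same controls that protect any other SaaS API (TLS, credentials, permissions) apply to an LRS.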

This vision also supports the seamless transfer of records across organizations and a kind of portable learning transcript that can follow learners wherever they go. In such a world, there are a lot of privacy issues that must be addressed.

But right now, the industry isn’t there yet. Most organizations are only collecting and analyzing data about learning that is happening within their own systems, so privacy is less of a concern.

There are some companies, countries, unions, etc., that have particular rules about which learning data can and cannot be viewed in personally identifiable ways. In those cases, that data often isn't stored or recorded at all to begin with.

Other times, a robust LRS will include the ability to anonymize views into data, show data only at an aggregate level, or limit the data that can be seen based on user permissions.

M. Rochelle: I would add that in the learning environment, the use of PII is extremely low, if not nonexistent. This is not a concern because learning is usually tracked by a single identifier, such as a username/ID or an IP address associated with the learner. Companies using PII in learning technology are creating unnecessary exposure.
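As a rough sketch of both points (a generic illustration, not a description of any particular LRS feature), learner identifiers can be replaced with a salted one-way hash, and reporting can be limited to aggregate counts:

```python
import hashlib
from collections import defaultdict

def pseudonymize(user_id: str, salt: str = "org-specific-salt") -> str:
    """Replace a username/ID with a salted one-way hash so reports carry no direct identity."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()[:12]

# Hypothetical completion records keyed by username only (no names, emails, or other PII).
records = [
    {"user": "jdoe42", "course": "Safety Basics", "completed": True},
    {"user": "asmith7", "course": "Safety Basics", "completed": False},
    {"user": "jdoe42", "course": "Sales Onboarding", "completed": True},
]

# Aggregate-level view: completion rate per course, with no individual identifiers at all.
totals, completions = defaultdict(int), defaultdict(int)
for record in records:
    totals[record["course"]] += 1
    completions[record["course"]] += int(record["completed"])

for course, count in totals.items():
    print(f"{course}: {completions[course] / count:.0%} completion across {count} learners")

# Individual-level view, where permitted, reports against a pseudonym rather than the raw ID.
print("Learner", pseudonymize("jdoe42"), "completed 2 of 2 assigned courses")
```

A permissioned reporting layer would then add access control on top of this, determining who, if anyone, can see individual-level rows at all.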

What exactly are Level 4 and 5 measurements? Will we see some sample metrics of Level 4 and 5 types of measurements?

M. Rochelle: Metrics used at this level all have one thing in common: learning itself is not the outcome being measured (e.g., completing the course, taking the test, receiving the grade); rather, individual, team, and organizational performance change is the ultimate outcome measure.

Outcomes such as reduced defects on the plant floor, better risk management, sales performance, client satisfaction, lower costs, faster product launches, geographic expansion, accelerated onboarding, and stronger talent pools are used to measure how efficient and effective learning was in making those things happen.

Companies will also measure innovation rates based on the effectiveness of learning, along with employee engagement, the ability to attract talent, career advancement rates, and talent mobility.

Learning data is also mixed with other business data to create “what if” scenarios and predictive models. In other words, learning data is no longer used retrospectively, but prospectively with other business data to evaluate future learning needs.
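As a toy example of what mixing learning data with business data can look like in practice (the figures, column names, and model choice below are invented for illustration), learning records can be joined to a business outcome and used to ask a prospective "what if" question:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical learning data: hours of sales training completed per rep.
learning = pd.DataFrame({
    "rep_id": [1, 2, 3, 4, 5, 6],
    "training_hours": [2, 5, 8, 3, 10, 6],
})

# Hypothetical business data: quarterly sales per rep, in thousands of dollars.
sales = pd.DataFrame({
    "rep_id": [1, 2, 3, 4, 5, 6],
    "quarterly_sales": [90, 110, 140, 95, 160, 120],
})

# Join the learning data to the business data on a shared key.
combined = learning.merge(sales, on="rep_id")

# Fit a simple model so the data can be used prospectively, not just retrospectively.
model = LinearRegression().fit(combined[["training_hours"]], combined["quarterly_sales"])

# "What if" scenario: projected quarterly sales for a rep given 12 hours of training.
projection = model.predict(pd.DataFrame({"training_hours": [12]}))[0]
print(f"Projected quarterly sales at 12 training hours: ~${projection:.0f}k")
```

A real model would need far more data and careful controls for confounding factors; the point is simply that learning data becomes a forward-looking input alongside other business data rather than a purely retrospective report.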

What are your thoughts on pre-tests and post-tests with actual application of knowledge for measurement?

M. Rochelle: Assessments are great, but I would not limit their predictive ability to application of knowledge (Kirkpatrick/Kaufman Level 2). Instead, assessments should be used to measure behavior changes and new ways of thinking and acting. Change in behavior is the lead indicator of change in performance.

What are your thoughts on connecting L&D's evaluation processes to performance management, so that learning/behavior is tied to individual contributions to the organization?

M. Rustici: I think it’s a great step forward, but I would also caution against prematurely using analytics as a purely deterministic lever in high-stakes decisions (e.g., raises, promotions, etc.).

Remember that you get what you measure: as soon as there is a defined numeric outcome, people will start to optimize for that number, even if it doesn’t perfectly represent real contribution. Also remember that analytics are much more accurate with broad sample sizes than they are with small (individual) sample sizes.

M. Rochelle: I agree; you should reverse engineer the idea. Performance management should be continuous, and it should identify further learning and coaching opportunities based on observed behavior and performance.
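To put a rough number on Rustici's sample-size caution (the figures are invented, and the formula is just the standard error of a mean), the uncertainty around an average shrinks roughly with the square root of the number of observations, so a metric that is trustworthy for a whole cohort can be mostly noise for a single individual:

```python
import math

# Suppose post-training performance gains vary across learners with a standard deviation of 15 points.
std_dev = 15

for n in (1, 5, 30, 500):
    # Standard error of the mean: how far the observed average may sit from the true average.
    standard_error = std_dev / math.sqrt(n)
    # Approximate 95% margin of error (normal approximation).
    margin = 1.96 * standard_error
    print(f"n = {n:>3}: the observed average is only accurate to about +/- {margin:.1f} points")
```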

Is Visa's platform hosted on AWS?

M. Rustici: Visa’s Watershed instance is hosted on AWS. You can see from the diagram (Slide 27) that they employ many other SaaS tools and some of their own internal connectors as well. I don’t know where or how those other tools are hosted…and the beauty of xAPI (and modern cloud frameworks) is that it doesn’t matter.

Any correlations with the Kirkpatrick model that also talks about assessment levels?

M. Rochelle: There is a strong correlation at Level 3. Assessments should be used to understand the relationship between learners and what they are trying to learn. An assessment should first be able to point out an individual’s starting point before the learner embarks on the learning journey.

Second, assessments should be targeted to measure an individual’s predilection for the learning because that will be the lead indicator as to whether or not he or she will try to apply the learning.

In other words, this is when learners move from education and awareness to exploration and trial and error (i.e., trying out a new way of thinking and acting, or behavior modification; Level 2 into the beginning of Level 3) to see if it works for them.

I call this “trying on your learning.” If learners don’t make it over this hump to adoption, commitment, and ownership (full Level 3), then Level 4 (performance change) will not be achieved.

Do you think some organizations are hesitant to adopt learning analytics because they (a) don't know how to do it - don't have the talent to do it (b) don't believe learning data is real, i.e. you can't quantify or qualify what people learn (c) are afraid of what the data might tell (d) all of the above?

M. Rustici:

  1. Yes, absolutely. A big part of what we hope to accomplish at Watershed is simply educating the market and sharing best practices. As an industry, we have so much to learn. A common misconception is that it takes a super smart math genius to do analytics. That may be true for advanced evaluation and predictive/prescriptive analytics, but with the right tools, anybody can get started with basic measurement and evaluation.
  2. Sometimes, but I think that’s hogwash. No, we are never going to know exactly how we have rewired the neurons in the brain, but there are so many other interesting and useful proof points we can look at.
  3. Sometimes, but rarely. I may be an optimist, but I generally believe most people want to do a good job. When they see that something is not working, they want to find a way to fix it.
  4. Yes, and probably a few more, including:
    1. The status quo—we’ve been doing the same thing for so long, it’s hard to change.
    2. Bad tools—when the “analytics” you’re used to getting from your LMS is limited to completion rates and attendance, it’s hard to get very excited.
    3. Lack of organizational buy-in—let’s face it, L&D isn’t always the most highly funded part of an organization, and it can be hard to get investment for new things.

Can we use big data for an LMS?

M. Rustici: This type of data is so much bigger than what you will ever collect in an LMS. Even the largest LMS deployments don’t come close to the volume of data that the Silicon Valley titans use to do true “big data” analytics and machine learning.

I think that learning data as a category is getting bigger and bigger. We have reached a critical mass of accessibility to do some very interesting things and our capabilities will grow tremendously over the next few years. But it will be a long time until we get to true “big data.”

That’s important to remember when setting expectations for a learning analytics project. You can’t do machine learning until the machine has enough good data to learn from!

What instructional design models do you believe can assist learning teams in making the transition, preparing both our own minds and our stakeholders for analytics?

M. Rochelle: Move to the principles of Clark, Mayer, Moreno, and Willis. You need to develop outcome-based learning. Start with the demonstrable, observable change in behavior/task completion that would signify the learner “got it,” and then work backward into building learning that drives to that “got it” endpoint.

The Dick and Carey and ADDIE instructional design models are not optimal as an exclusive approach here. You need to build RED, SAM, and agile learning design approaches that allow you to iterate quickly with learning and allow learners “to live their learning” through:

  • simulations,
  • games,
  • adaptive learning (AI or at least machine learning), and
  • AR/VR.

It is no longer about building a course(s) and having a person start at the beginning, go to the end, and be stamped “trained.” People want to practice with their learning in a safe environment before trying it out. Traditional ID processes don’t/can’t fully take into consideration this highly dynamic learning process.

Furthermore, traditional ID processes don’t/can’t fully support continuous learning through interval learning, burst learning, or other forms of spaced learning on their own.

You can't boil the ocean in a day...so how do we move toward stronger learning measurement? Especially in global organizations, as this could be a huge culture shift?

M. Rustici: I think the answer is that you just start. Start with some measurement and simple evaluation in one program. Find a new program that will have some visibility and bake in a measurement strategy from the beginning.

In my experience, when you start to show off the data and the results, it becomes a lot easier to get more buy-in. I like the concept of “scaling at the edge.” Find an area that isn’t the core.

Find a system that will be easier to change and isn’t a central, monolithic LMS. Do something interesting there and use that as a beachhead to effect change in other parts of the organization. And yes, it could be an enormous cultural shift if we do it right!

My biggest struggle is that many organizations cut off the formal instructional design analysis and design phases. What recommendations do you have on how we can gain buy-in from the individuals controlling the project plan/schedule? (Often, it’s a project manager who does not understand learning and development.)

M. Rochelle: Developing learning is no longer a linear end-to-end process, so it can’t be treated like a project. Developing learning is more of an agile/scrum development process, where versions of learning are launched quickly, feedback is gathered, and then a new version comes out that is more refined or better targeted for the learning audience.

Learning needs to be flexible, agile, and adaptable. Walk down the hall, sit with the IT department, and learn about agile/scrum development processes. This design concept steps away from traditional course development and creates bite-sized, highly consumable learning objects that can be repurposed and reconfigured instantly to address varying learning audiences.

