Learning Analytics: A Recap


Since we started our What's Learning Analytics series, there seems to have been an explosion of dialog about this topic. We’re particularly excited to see so many of the “Predictions for 2017” include data and analytics as a trend that will blossom in the coming year.

In particular, the following two articles help define this term and the best-practice processes for getting started.

  1. What Is Learning Analytics? by John R. Mattox, II, Ph.D. (In addition to this online article, you can take a deeper dive by reading John’s book.)

  2. Learning Analytics: A Practical Pathway to Success by Sharon Vipond and A.D. Detrick (While this is only an online summary, you can read the full article with an eLearning Guild membership.)

In What Is Learning Analytics?, John defines it as:

the science and art of gathering, processing, interpreting and reporting data related to the efficiency, effectiveness and business impact of development programs designed to improve individual and organizational performance and inform stakeholders

And in this blog series, we've defined it as:

the measurement, collection, analysis, and reporting of data about learners, learning experiences, and learning programs, for purposes of understanding and optimizing learning and its impact on an organization’s performance

I’m encouraged to see a lot of consistency between these two definitions. I like John’s explicit inclusion of “efficiency” as a primary goal of learning analytics versus our implied notion of efficiency via our use of the word “optimizing.” That encapsulates our concept of learning operations analytics nicely.

Sharon and A.D. define the levels of complexity of analytics according to the traditional Gartner model.

  • Descriptive
  • Diagnostic
  • Predictive
  • Prescriptive 

Watershed defines our levels of complexity as:

  • Measurement
  • Data Evaluation
  • Advanced Evaluation
  • Predictive and Prescriptive

The Gartner levels are perfectly good and widely used, but, as an industry, we’re not sophisticated users of analytics yet. So, Watershed broke up the traditional lower levels to better reflect the earlier parts of the journey and make learning analytics more accessible. We feel it’s important to make the first levels achievable and worth celebrating.

As expressed, the Gartner model omits any form of evaluation. We felt it important to add some explicit language in our Data Evaluation step around determining not just what happened, but whether that represents a positive or a negative outcome. While it might not yet be possible at this level to demonstrate statistically significant results, it is a good time to start establishing baselines and targets. These conversations are a great forcing function to ensure organizational alignment and to inspire a team to shoot for lofty targets.
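To make that Data Evaluation idea a little more concrete, here's a minimal sketch of what comparing a metric against a baseline and a target might look like. The metric, the numbers, and the thresholds are all invented for illustration; the point is simply making "is this outcome positive or negative?" an explicit question rather than stopping at "here's what happened."

```python
# Hypothetical example: the baseline, target, and current values are invented
# purely to illustrate turning "what happened" into "is that good or bad?"
baseline = 0.72      # completion rate before the program changed
target = 0.85        # the lofty target the team agreed on
current = 0.78       # what the latest data shows

change = current - baseline
progress_to_target = change / (target - baseline)

print(f"Change vs. baseline: {change:+.1%}")
print(f"Progress toward target: {progress_to_target:.0%}")

if current >= target:
    print("Outcome: target met")
elif current > baseline:
    print("Outcome: positive, but short of target")
else:
    print("Outcome: negative relative to baseline")
```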

Sharon and A.D. define a process consisting of six steps:

  1. Hypotheses/Assumptions
  2. Capture and Clean Data
  3. Analyze and Report
  4. Use the Findings
  5. Refine Offerings
  6. Build Supporting Content

Watershed defines our five-step process for getting started as:

  1. Plan and Gather Data
  2. Review and Clean Data
  3. Operationalize
  4. Explore and Analyze
  5. Build and Refine

These processes are relatively similar, but Sharon and A.D. make one assertion that I disagree with. They encourage a very deliberate capture of a predefined set of data intended to validate or refute a hypothesis. This practice is a great way to start an Advanced Evaluation of a learning program. For that type of analysis, it's absolutely essential to have a known set of good, clean data to build an experiment that will drive organizational change.

However, Sharon and A.D. also assert that:

While data mining is effective in specific situations, most L&D departments do not have the volume and depth of data on their own to mine for valuable insight. Instead, learning analytics is a process that benefits from advance planning.

I disagree. While most L&D departments don’t have enough high-quality data to do the sophisticated data mining common in “big data,” they certainly have enough data to find some unexpected insights just by looking around at what they have.

At the “small data” level, some of the most immediately actionable insights come from the simple act of starting to visualize whatever data you have access to. Let’s call this “data digging”: the simple act of looking at the data you have and getting to know it. Data digging doesn’t often lead to robust, statistically significant proofs of causation, and it certainly won’t predict future outcomes. But it is quite useful for spotting outliers, abnormalities, and unusual trends to investigate.

In other words, data digging is a great way to spot things that aren’t working as intended. Those issues are often immediately actionable and lend themselves to quick fixes. Quick wins are incredibly important to organizations getting started with learning analytics and looking for ways to demonstrate its worth.

For example, one Watershed client was data digging and noticed that “no-shows” for a particular instructor-led training (ILT) course seemed to spike every Thursday. A bit more digging easily attributed these absenteeism spikes to a particular group and job function that has an important deadline to meet every Friday. A simple reconfiguration shifted these learners’ schedules away from days when they have tight deadlines.
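Here's a rough sketch of what that kind of data digging could look like in Python with pandas. The file name, column names, and status values are hypothetical stand-ins, not an actual Watershed or xAPI schema; the technique is simply grouping attendance records by weekday and looking at the result.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of ILT attendance records; file and column names are
# illustrative only.
records = pd.read_csv("ilt_sessions.csv", parse_dates=["session_date"])

# Flag no-shows and see how often they occur on each day of the week.
records["weekday"] = records["session_date"].dt.day_name()
no_show_rate = (
    records.assign(no_show=records["status"].eq("no-show"))
           .groupby("weekday")["no_show"]
           .mean()
           .sort_values(ascending=False)
)

print(no_show_rate)  # a Thursday spike would stand out immediately

# A quick bar chart makes any outlier even easier to spot.
no_show_rate.plot(kind="bar", title="No-show rate by weekday")
plt.tight_layout()
plt.show()
```

Nothing here proves why Thursdays are bad; it just surfaces the anomaly quickly so you can go ask the follow-up questions.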

Let's keep it going!

As we wrap up this blog series, I’d like to thank everybody who participated. We’re truly appreciative of all the reviewers and guest contributors who helped make our learning analytics series a success. It's also really exciting for us to see this conversation starting, and we're looking forward to more shared ideas and debate in the coming months. Let’s keep the discussion going!


Getting Started Is Easy

As you can see, performance analytics opens a world of insights and data-driven decision-making. And the possibilities for what you can measure and evaluate about your learning programs are nearly endless. Remember, getting started is easier than you think. Even just a few data points can yield powerful results. Use the following guide to help you get started right now!



About The Author

As an innovative software developer turned entrepreneur, Mike Rustici has been defining the eLearning industry for nearly 20 years. After co-founding Rustici Software in 2002, Mike helped guide the first draft of the Tin Can API (xAPI) and invented the concept of a Learning Record Store (LRS), revolutionizing the Learning and Development world. In 2013, he delivered on the promise of Tin Can with the creation of Watershed, the flagship LRS that bridges the gap between training and performance.

When Rustici Software was acquired by Learning Technologies Group (LTG) in 2016, Mike became the CEO of Watershed, where he continues to be an expert in the area of eLearning conformance as well as Learning and Development analytics. He’s also presented on a variety of topics, ranging from disruptive technology and performance improvement to company culture and business innovation.