Bringing Standards to the L&D Profession: The TDRp Initiative

In 2010, I joined forces with 29 other L&D industry leaders to create Talent Development Reporting Principles (TDRp). Inspired by the measures and standards used in accounting, we set out to establish a standard framework for L&D.

And, after 24 rounds of revisions, we finalized a framework that not only provides standards similar to those in accounting but also gives the world of L&D a common language.

Do L&D’s standards add up?

Accounting has four types of measures—revenue, expense, assets, and liabilities—and three standard statements—income or profit & loss, balance sheet, and cash flow.

The industry also has a set of standards: Generally Accepted Accounting Principles (GAAP) in the United States and International Financial Reporting Standards (IFRS) outside the United States. When TDRp first came together, though, the L&D world didn’t have anything like accounting’s measures or standards.

With TDRp, we wanted to create a simple and easy-to-use framework consisting of three types of standard L&D measures and three L&D standard reports (so, three and three versus accounting’s four and three).

We also wanted to build on the excellent work done in our profession during the last 70 years, especially that done by the Association for Talent Development (ATD, then called ASTD).

As a result, the L&D framework we created has three types or categories of standard measures—effectiveness, efficiency, and outcomes—and three types of reports—Operations, Program, and Summary.

Let’s start with measures.

Effectiveness measures address the quality of the learning program or initiative. In learning, we are fortunate to have the four levels popularized by Donald Kirkpatrick and the fifth level (i.e., ROI) popularized by Jack Phillips. These five levels all speak to the quality of the learning.

  1. Level 1 measures the participant’s or sponsor’s satisfaction with or reaction to the learning—certainly an initial measure of quality.
  2. Level 2 measures the amount learned or transference of skills or knowledge.
  3. Level 3 measures the application of that knowledge or change in behavior. If participants didn’t learn anything or if they don’t apply what they learned, I think we can all agree that we don’t have a quality program. (This may reflect a lack of engagement or reinforcement by the sponsor, but we still have a problem.)
  4. Level 4 measures impact or results. Because this is the initial reason for undertaking the learning, it is hard to argue we had a quality program if there are no results.
  5. Last, ROI (Level 5) provides a final confirmation of quality (or return on our investment), assuming we have properly aligned the learning to our organization’s needs and achieved high-quality outcomes on the previous four levels. Most organizations have measures for Levels 1 and 2, but relatively few measures at the higher levels; a simple sketch of how these measures might be recorded follows this list.

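To make the five levels concrete, here is a minimal sketch, in Python, of how one program’s effectiveness measures might be recorded. The field names and values are hypothetical; TDRp does not prescribe any particular schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class EffectivenessMeasures:
    """One program's effectiveness measures, with one optional field per level."""
    program: str
    level_1_reaction: Optional[float] = None     # e.g., average satisfaction on a 5-point scale
    level_2_learning: Optional[float] = None     # e.g., average post-test score (0-100)
    level_3_application: Optional[float] = None  # e.g., % of participants applying the skills
    level_4_impact: Optional[float] = None       # e.g., estimated contribution to the business result
    level_5_roi: Optional[float] = None          # e.g., return on investment, as a percentage


# Many organizations will only be able to fill in Levels 1 and 2.
consultative_selling = EffectivenessMeasures(
    program="Consultative Selling Skills",
    level_1_reaction=4.3,
    level_2_learning=87.0,
)
```
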
Who Contributed to TDRp?

More than 30 industry leaders contributed to the creation of TDRp, including Kent Barnett then at Knowledge Advisors, Tamar Elkeles then at Qualcomm, Laurie Bassi from McBassi & Company, Jack Phillips from the ROI Institute, Jac Fitz-enz from Human Capital Source, Josh Bersin from Bersin by Deloitte, Cedric Coco from Lowe's, Karen Kocher from Cigna, Rob Brinkerhoff from Western Michigan University, Kevin Oakes from i4cp, Carrie Beckstrom from ADP, Lou Tedrick from Verizon, and David Vance from the Center for Talent Reporting, just to name a few.

Efficiency measures are most commonly about the number of courses, participants, and classes as well as utilization rates, completion rates, costs, and reach, just to name a few.

Typically, these measures by themselves do not tell us whether or not our programs are efficient. Rather, we need to compare them to something else—which may be last year’s numbers, the historical trend, benchmark data, or the plan (target) we put in place at the beginning of the program.

Now, we have a basis to decide if we are efficient and if there is room to improve and become more efficient.
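
As a simple illustration of that comparison, here is a sketch in Python; the measure, numbers, and target are hypothetical.

```python
def efficiency_gap(actual: float, comparison: float) -> dict:
    """Compare an efficiency measure to its reference point (the plan, a benchmark,
    last year's number, or the trend) and report the gap."""
    return {
        "actual": actual,
        "comparison": comparison,
        "difference": actual - comparison,
        "pct_of_comparison": round(100 * actual / comparison, 1) if comparison else None,
    }


# Hypothetical completion rate versus the plan set at the start of the year.
print(efficiency_gap(actual=82.0, comparison=90.0))
# {'actual': 82.0, 'comparison': 90.0, 'difference': -8.0, 'pct_of_comparison': 91.1}
```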

That leaves outcome measures. Unlike effectiveness and efficiency measures, outcomes are not something most organizations measure, and few even talk about them.

That is unfortunate because outcome measures are the most important of the three types, especially to senior leaders who make L&D funding decisions. In accounting, this would be like reporting expenses and liabilities, but never talking about revenue or assets.

No one would have a complete picture of what we do, and it would be hard for anyone to understand why we have those expenses and incur those liabilities.

So, what are these all-important outcome measures? Simply put, outcomes represent the impact or results that learning has on your organization’s goals or needs.

Suppose a needs analysis indicated that your salesforce would benefit from a consultative selling skills program and product features training. And suppose that you and the head of sales agree that training makes sense.

Furthermore, the two of you agree on program specifics—including mutual roles and responsibilities, especially how the sponsor will reinforce the learning and hold his or her own employees accountable.

Now, what impact on sales can we expect from this training? How much of a difference can training make? A lot? A little? Enough to be worthwhile?

This is the realm of outcome measures, which may be subjective (though not always) but are very important nonetheless. Sometimes, the Level 4 impact or results measure from the list of effectiveness measures will do double duty as an outcome measure, and that is okay. (The same thing can happen in accounting, too.)

Or, other measures will be selected. In any case, with outcome measures we are at last focused on how we align with corporate goals or needs and what type of impact we can have—and this is what your senior leaders have been waiting for.
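
As an illustration, here is one way such an outcome measure might be captured for the sales example above, with plan and actual side by side. The numbers and field names are made up.

```python
# Hypothetical outcome measure for the sales example, comparing plan to actual.
outcome_measure = {
    "goal": "Grow sales by 10% this year",
    "measure": "Sales growth, with training as one contributing factor (%)",
    "plan": {
        "sales_growth_target_pct": 10.0,
        "expected_training_contribution_pct": 20.0,  # sponsor and L&D agree training should drive ~2 of the 10 points
    },
    "actual": {
        "sales_growth_pct": 8.5,
        "estimated_training_contribution_pct": 15.0,
    },
}
```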

How does xAPI fit into all of this?

I think xAPI will enable an explosion of rich and much-needed effectiveness and efficiency measures, giving us important insights into learner behavior.

For example, we can discover how long and in what ways learners interact with specific content, which will allow us to provide real-time feedback, fine-tune the learning, and design better, more customized learning in the future. These would be examples of efficiency measures.
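
As a sketch of that kind of insight, summing the time learners report against each piece of content might look like the following. The activity IDs and durations are invented, and a real implementation would query a Learning Record Store rather than use a hard-coded list.

```python
import re
from collections import defaultdict


def duration_seconds(iso: str) -> float:
    """Parse the common "PT#H#M#S" form of ISO 8601 durations used in
    xAPI result.duration (date components such as days are not handled here)."""
    match = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", iso)
    if not match:
        return 0.0
    h, m, s = match.groups()
    return float(h or 0) * 3600 + float(m or 0) * 60 + float(s or 0)


def time_per_activity(statements: list[dict]) -> dict[str, float]:
    """Sum reported duration (in seconds) per activity across xAPI statements."""
    totals: dict[str, float] = defaultdict(float)
    for stmt in statements:
        activity_id = stmt.get("object", {}).get("id", "unknown")
        duration = stmt.get("result", {}).get("duration")
        if duration:
            totals[activity_id] += duration_seconds(duration)
    return dict(totals)


# Hypothetical statements already retrieved from a Learning Record Store.
sample = [
    {"object": {"id": "https://example.org/modules/features"}, "result": {"duration": "PT12M30S"}},
    {"object": {"id": "https://example.org/modules/features"}, "result": {"duration": "PT4M"}},
]
print(time_per_activity(sample))  # {'https://example.org/modules/features': 990.0}
```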

xAPI also allows us to collect effectiveness measures, such as learner feedback at points within the experience (Level 1), their learning throughout the experience (Level 2), their thoughts on application or what they will need for successful application (embedded in the learning as Level 3), and even estimates of impact or insights into optimization (Level 4).
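
For a flavor of what those data points look like on the wire, here is a sketch of an xAPI statement capturing a quiz result (a Level 2 data point) being sent to a Learning Record Store. The learner, activity ID, LRS URL, and credentials are all placeholders.

```python
import uuid
from datetime import datetime, timezone

import requests  # assumes the requests package is installed

# Illustrative xAPI statement recording a quiz score for one learner.
statement = {
    "id": str(uuid.uuid4()),
    "actor": {"mbox": "mailto:learner@example.org", "name": "Pat Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "https://example.org/courses/consultative-selling/quiz-1",
        "definition": {"name": {"en-US": "Consultative Selling Quiz 1"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

response = requests.post(
    "https://lrs.example.org/xapi/statements",   # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),              # placeholder credentials
)
response.raise_for_status()
```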

All of this, in turn, allows learning to make a greater contribution to outcomes.

In conclusion, TDRp provides a much-needed measurement and reporting framework for learning, and xAPI enables collection of microdata on a personal level that has the potential to revolutionize both the learning experience and its measurement. A very exciting future awaits!
