In this installment of our seven steps for learning evaluation, we'll show you how to design the program itself—including how to capture, aggregate, display, and use data related to evaluation metrics.
Now that you've finished the first three steps in our learning evaluation method—align program goals, define success metrics, and identify existing successful training programs—it's time to start on the design phase.
Let's do this.
Start by building the evaluation metrics you want recorded into the design of the program itself. The program design should also include a plan for how these metrics will be captured, aggregated, and then displayed to relevant stakeholders.
During this step, assess the feasibility of capturing each metric. Chances are, you'll need to adjust some of your metrics so you can capture them more easily. Because of financial and labor constraints, some metrics might be de-prioritized or dropped entirely. Using the metrics wish list you created during Step 2, determine the most useful metrics by weighing each one's value against the costs of capturing, aggregating, and presenting it.
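One way to make this cost/benefit comparison concrete is to score each wish-list metric on value and on the effort of capturing, aggregating, and presenting it, then rank by value per unit of cost. The sketch below is a hypothetical illustration, not part of Watershed's method; the metric names, 1–5 scales, and `prioritize` helper are all assumptions.

```python
# Hypothetical sketch: rank wish-list metrics by value relative to the
# effort of capturing, aggregating, and presenting them.
from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    value: int           # 1-5: usefulness to stakeholders
    capture_cost: int    # 1-5: effort to capture
    aggregate_cost: int  # 1-5: effort to aggregate
    present_cost: int    # 1-5: effort to present

    @property
    def total_cost(self) -> int:
        return self.capture_cost + self.aggregate_cost + self.present_cost

    @property
    def score(self) -> float:
        # Value per unit of total effort.
        return self.value / self.total_cost


def prioritize(metrics, keep):
    """Return the top `keep` metrics by value-per-cost, plus the rest
    as candidates to de-prioritize or drop."""
    ranked = sorted(metrics, key=lambda m: m.score, reverse=True)
    return ranked[:keep], ranked[keep:]


wish_list = [
    Metric("Course completion rate", value=4,
           capture_cost=1, aggregate_cost=1, present_cost=1),
    Metric("Assessment scores", value=5,
           capture_cost=2, aggregate_cost=2, present_cost=2),
    Metric("On-the-job behavior change", value=5,
           capture_cost=5, aggregate_cost=4, present_cost=3),
]

keep_list, drop_list = prioritize(wish_list, keep=2)
```

A high-value metric can still lose out here if it is expensive on all three fronts, which matches the point above: constraints, not just usefulness, decide the final list.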
As part of your program design, consider A/B testing, in which you compare two versions of a particular element to see which one performs better. Use your learning evaluation metrics to judge which variant is most effective, then take the winning approach live with the whole program.
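In its simplest form, that comparison is just the same evaluation metric computed per variant. The sketch below is purely illustrative: the variant descriptions, learner counts, and pass-rate metric are assumptions, not data or naming from the source.

```python
# Hypothetical sketch: compare one evaluation metric (here, pass rate)
# across two variants of a program element in an A/B test.
def pass_rate(passed, total):
    """Fraction of learners who passed the assessment."""
    return passed / total


variant_a = {"passed": 42, "total": 60}   # e.g., video-based module
variant_b = {"passed": 51, "total": 60}   # e.g., scenario-based module

rate_a = pass_rate(**variant_a)
rate_b = pass_rate(**variant_b)
winner = "A" if rate_a > rate_b else "B"

print(f"Variant A: {rate_a:.0%}, Variant B: {rate_b:.0%}")
print(f"Roll out variant {winner}")
```

In practice you would also check whether the gap between variants is large enough, given the sample sizes, to justify committing to one approach, rather than acting on a raw difference alone.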
When you've designed how metrics will be captured, aggregated, and displayed, it's time to launch your program. We recommend setting up dashboards to monitor metrics throughout the program.
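Behind any such dashboard sits an aggregation step that rolls raw captured records up into the summary figures stakeholders see. The sketch below assumes a simple per-course completion record; the field names and data are illustrative, not a real xAPI or Watershed schema.

```python
# Hypothetical sketch: aggregate raw completion records into the kind of
# per-course summary a monitoring dashboard might display.
from collections import defaultdict

records = [
    {"course": "Onboarding", "learner": "ana", "completed": True},
    {"course": "Onboarding", "learner": "ben", "completed": False},
    {"course": "Safety", "learner": "ana", "completed": True},
    {"course": "Safety", "learner": "cai", "completed": True},
]


def dashboard_summary(records):
    """Roll captured records up into per-course totals and completion rates."""
    totals = defaultdict(lambda: {"learners": 0, "completed": 0})
    for r in records:
        totals[r["course"]]["learners"] += 1
        totals[r["course"]]["completed"] += r["completed"]
    return {
        course: {**t, "completion_rate": t["completed"] / t["learners"]}
        for course, t in totals.items()
    }


summary = dashboard_summary(records)
```

Refreshing a summary like this on a schedule, and charting it over time, is what turns captured metrics into the ongoing monitoring described above.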
Step 4 Objectives
- Consider the feasibility of evaluation metrics and finalize your list.
- Design monitoring dashboards and analysis reports.
Making It Happen
- Design your evaluation alongside the design of the program.
- Weigh the cost/benefit of each metric to be captured. Separate the essentials from the nice-to-haves.
- Consider both ongoing monitoring and analysis.
Up Next: Learning Evaluation Step 5, Monitor
Our next blog post explores Step 5 of Watershed's Learning Evaluation Method and shows you how to monitor the success and progress toward your program goal. Don't miss out—subscribe to Watershed Insights to have updates sent to your inbox.
[Editor's Note: This blog post was originally posted on February 11, 2016, and has been updated for comprehensiveness.]
About the author
As one of the authors of xAPI, Andrew Downes has years of expertise in data-driven learning design. With a background in instructional design and development, he’s well versed in creating learning experiences and platforms in corporate and academic environments.