Virtual Reality + Watershed


A few months ago, I began exploring how emerging technologies can be used with Watershed. The first exploration with voice technology was quite fun, which led to more than a few ideas about where to go next. One idea definitely stood out: virtual reality (VR).

I tried VR for the first time a few years ago when a coworker brought in an early Oculus Rift development kit and we took turns trying it out. Despite being a new technology, it was quite impressive and immersive. It didn’t take long to suspend disbelief and forget about reality.

These days, all the major tech companies are investing in the VR space. Although the quality of the experience, VR capabilities, and cost vary widely, you can get started for less than $10 using your phone and a simple DIY Google Cardboard viewer. The quality is decent; however, for true immersion, try the Oculus Rift ($399) or HTC Vive ($799). Both require a powerful computer to run them, so the total cost for a higher-end VR kit is between $2,000 and $3,000.

Designing the Experience

My approach typically starts with: 

1) Identifying a technology

In this instance, we focused on VR technology.

2) Creating a use case around learning and training

Our learning and training use case is rooted in the safety space and asks users to demonstrate their ability to identify safety hazards in an industrial environment. They aren't told in advance how many safety hazards they must identify, and they can leave the simulation at any time.

3) Designing a working prototype that sends data to Watershed, where it can be stored and reported on

For the simulation, I decided to go with a Google Cardboard app that uses my iPhone as the platform. The Google Cardboard platform offers three inputs you can work with: look at, look away, and click.

I designed the simulation so users must identify hazards by clicking the VR headset. Because Google Cardboard offers neither motion nor positional tracking, the person's position in the VR space is fixed. (Higher-end VR kits, however, offer more inputs, motion tracking, and positional tracking.)

A key part of tracking the simulation with xAPI is to identify the person using it. Since this is a Google Cardboard app with limited inputs, I created a user experience that has two modes of use: phone only and phone in a VR headset.

Users sign into the simulation experience and view instructions on a phone. Then, they put the phone into the VR headset to use the simulation. When done, they remove the phone from the VR headset to see their scores. By doing this, users start and end the experience in the same mode.

Here's a breakdown of the various screens and the corresponding usage modes.

Watershed VR Screen Modes

Building the Prototype

Most VR simulations are built using game engines. I’ve spent the past year learning how to develop games with Unity. So, I was excited to discover I could use this knowledge for the prototype. I started by building out the separate scenes needed based on the experience I had designed. I wired the scenes up using simple buttons at first so I could switch between them to see how the overall experience felt.
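To give a sense of that wiring, here's a minimal sketch of the kind of script I mean, using Unity's SceneManager; the ScreenSwitcher name and the scene names are illustrative, not the prototype's actual ones:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hooked up to each temporary UI button so clicking it loads another screen.
public class ScreenSwitcher : MonoBehaviour
{
    // Pass the target scene name (e.g. "SignIn", "Instructions",
    // "Simulation", "Results"; illustrative names) from the button's
    // OnClick event in the Inspector.
    public void GoToScreen(string sceneName)
    {
        SceneManager.LoadScene(sceneName);
    }
}
```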

Screenshot of the final prototype in Unity

Unity provides native VR support for Google Cardboard. Once you instruct a scene to use VR, the phone's gyroscope tells Unity where to orient the camera in the virtual world as you look around in the headset. I also used two components of the Google VR SDK for Unity that make interacting with objects in a VR environment really straightforward: GvrReticlePointer and GvrEventSystem. The GvrReticlePointer component (think of it as a visual targeting device) shows users that an object can be clicked. The GvrEventSystem component works alongside Unity's built-in event system; this is how I could find out when a safety hazard was clicked so I could send data to Watershed.
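To make that concrete, here's a minimal sketch of a clickable hazard, assuming the scene contains a GvrEventSystem, the camera has a GvrReticlePointer, and the hazard object has a Collider. The SafetyHazard class and hazardName field are hypothetical names, not taken from the prototype:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Attached to a hazard object that has a Collider. The Google VR SDK routes
// gaze-based pointer events through Unity's standard EventSystems interfaces,
// which maps neatly onto Cardboard's three inputs.
public class SafetyHazard : MonoBehaviour,
    IPointerEnterHandler, IPointerExitHandler, IPointerClickHandler
{
    public string hazardName; // set per hazard in the Inspector (hypothetical)

    // "Look at": the reticle is now pointed at this hazard.
    public void OnPointerEnter(PointerEventData eventData) { }

    // "Look away": the reticle has left this hazard.
    public void OnPointerExit(PointerEventData eventData) { }

    // "Click": the user pressed the headset button while looking at the hazard.
    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log("Hazard identified: " + hazardName);
        // This is the point where an xAPI statement can be sent to Watershed
        // (see the TinCan.NET sketch below).
    }
}
```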

To work with Watershed, I used the open source TinCan.NET library. I ran into some challenges getting it to work on my iPhone due to the JSON serialization library included with TinCan.NET, which Apple restricts from running on its devices for security reasons. The JSON .NET asset for Unity solved this problem, as it uses a different approach. Once I got past that challenge, sending statements to Watershed was a breeze.
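As a rough sketch of what sending a statement can look like with TinCan.NET (with the JSON .NET asset swapped in), here's a hypothetical reporter class. The endpoint, credentials, verb, and activity IDs are placeholders, not the prototype's actual values:

```csharp
using System;
using TinCan;
using TinCan.LRSResponses;

// Hypothetical helper that reports an identified hazard to Watershed.
public class WatershedReporter
{
    private RemoteLRS lrs = new RemoteLRS(
        "https://watershedlrs.com/api/organizations/EXAMPLE/lrs/", // placeholder
        "activity-provider-key",                                   // placeholder
        "activity-provider-secret");                               // placeholder

    public void ReportHazardIdentified(string userName, string userEmail, string hazardName)
    {
        var statement = new Statement();

        // The learner identified on the sign-in screen.
        statement.actor = new Agent() { name = userName, mbox = "mailto:" + userEmail };

        // Illustrative verb; the standard ADL "interacted" verb stands in here.
        statement.verb = new Verb()
        {
            id = new Uri("http://adlnet.gov/expapi/verbs/interacted"),
            display = new LanguageMap()
        };
        statement.verb.display.Add("en-US", "interacted with");

        // The hazard the learner clicked (placeholder activity ID).
        statement.target = new Activity()
        {
            id = "https://example.com/vr-safety-sim/hazards/" + Uri.EscapeDataString(hazardName)
        };

        // Send the statement and check the LRS response.
        StatementLRSResponse response = lrs.SaveStatement(statement);
        if (!response.success)
        {
            Console.WriteLine("Failed to send statement: " + response.errMsg);
        }
    }
}
```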

After I had these core technical pieces working, I turned my attention to creating the simulation environment and UI screens. I originally wanted to create all the 3D assets myself; however, for the sake of time, I leveraged Unity's Asset Store instead. This allowed me to construct the simulation environment in a matter of days instead of months. I used a mixture of free and paid assets, investing $139 in total. I'm pretty happy with the results, though part of me still wants to dust off my 3D modeling skills. Another part of me wants to add roaming zombies.

Final Prototype Screens

The sign-in screen collects a user's name and email address.

The instructions screen is necessary since there are two usage modes.

This is a capture of a user looking at a safety hazard.

The safety hazard name is displayed when a user clicks it.

A user clicks the button on the ground to end the simulation.

This screen tells a user to transition back to the phone usage mode.

The prototype ends by showing users their score.

And here's the data in Watershed:

Watershed VR Data

This doesn’t compare to seeing it live, but here is a 360° video of the final simulation environment:


Testing the Prototype

For testing the prototype, I used a set of Merge VR Goggles. I invited everyone at Watershed via Slack to swing by and give the VR prototype a shot. It's always fun to see how people use things you design. I set up all the equipment, which included my iPhone, the VR headset, and a swivel chair for anyone who wasn't comfortable using the VR headset while standing. I also had Watershed on my laptop so people could see their results in our product.

I discovered quite a bit during testing:

  • No one sat in the provided chair. They all chose to stand.

  • Most people didn’t use the VR headset’s included straps, instead choosing to hold it up to their faces the entire time.

  • Not everyone was sure how to get started with the prototype, despite the instruction screen. This makes me wonder if these types of learning experiences always need to be facilitated.

  • Most people instinctively removed the VR headset after they clicked the end simulation button.

  • People were unsure what to look for (i.e., what counts as an industrial safety hazard).

  • One person felt uneasy; however, she said any VR experience makes her feel that way.

People also had a lot of ideas about what should be tracked in this type of simulation (see the sketch after this list):

  • Simulation start

  • Simulation end

  • Simulation session time

  • Hazards identified

  • False positives
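Most of those map cleanly onto xAPI. As a hedged sketch (placeholder activity ID, standard ADL verbs, hypothetical helper names), simulation start and end can use the launched and terminated verbs, with session time carried as the xAPI result duration; hazard clicks and false positives would use the per-hazard statement shown earlier:

```csharp
using System;
using TinCan;

// Hypothetical builders for the session-level statements suggested above.
public static class SessionStatements
{
    // Placeholder activity ID for the simulation itself.
    static Activity Simulation()
    {
        return new Activity() { id = "https://example.com/vr-safety-sim" };
    }

    static Verb MakeVerb(string id, string display)
    {
        var verb = new Verb() { id = new Uri(id), display = new LanguageMap() };
        verb.display.Add("en-US", display);
        return verb;
    }

    // "Simulation start" as an ADL "launched" statement.
    public static Statement Started(Agent learner)
    {
        return new Statement()
        {
            actor = learner,
            verb = MakeVerb("http://adlnet.gov/expapi/verbs/launched", "launched"),
            target = Simulation()
        };
    }

    // "Simulation end" as an ADL "terminated" statement carrying the
    // session time as the xAPI result duration.
    public static Statement Ended(Agent learner, TimeSpan sessionTime)
    {
        return new Statement()
        {
            actor = learner,
            verb = MakeVerb("http://adlnet.gov/expapi/verbs/terminated", "terminated"),
            target = Simulation(),
            result = new Result() { duration = sessionTime }
        };
    }
}
```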

Last, people had great ideas for the types of Watershed reports we could build:

  • Users who identified all hazards

  • Users who dwelled longer on or were uncertain about the hazards in the simulation

  • Sequence of hazards identified by users

  • Hazards most identified by users

  • Hazard identified first by users

  • Hazard identified last by users


Real Examples of VR in Learning

While we have more than a few Watershed clients who have implemented VR simulations as part of their learning programs, we can't share them publicly. But I did find some great examples during my research:

  • ITI VR Crane & Rigging Simulations

  • Mursion VR & AI

  • Intervoke Medical Use Case

  • STRIVR VR 360 Video

Conclusion

As with our last prototype, sending data to Watershed was the easy part, despite one small technical hurdle. The majority of my time was spent understanding how to work with the Google Cardboard platform, researching safety hazards, designing the learning experience, and building out the simulation's environment.

With higher-end VR capabilities, the prototype could have done more. There are still constraints to consider; however, having access to more inputs would be welcome. Even a simple prototype like this could have a practical application in learning. VR has found real use cases beyond entertainment, and I'm excited to see how it evolves with the modern learning ecosystem.

How have you used VR in your L&D programs? Tell us in the comments below.

[Editor's Note: This blog post was originally posted on July 24, 2017, and has been updated for comprehensiveness.]



About The Author

Geoff Alday

Geoff leads our design efforts for the Watershed product. He loves learning random things. He wants to believe Bigfoot is real.