Last December, we posted a bumper series of blog posts about the new Watershed features we’d introduced during the previous 12 months. Now, six months later—thanks to client feedback and the toil of our excellent developer team—we’ve got some more awesome features we’d like to share with you!
Sentiment Analysis: Use AI to see what people really think about your learning
Analytics tools are great at analyzing quantitative data (data that is either a number or can be aggregated into a number, such as scores or completion counts). You can measure and analyze this data with traditional statistical methods such as counts, percentages, sets, and modes to discover relationships and patterns.
So how do you gauge how your learners are feeling from the feedback they leave throughout your ecosystem? Comments, forum posts, and text-based survey responses are all places where learners express how they feel. This qualitative data is natural language, written by each learner in their own voice and style.
While qualitative data could, in theory, be treated the same way as quantitative data (i.e. by counting or aggregating words, phrases, or patterns), doing so tells you little about the underlying context or the writer's feelings and intent. And this is where sentiment analysis comes into play.
What Is Sentiment Analysis?
Sentiment analysis uses machine learning (a form of artificial intelligence) to identify, extract, understand, and quantify natural language data so it can be analyzed using statistical methods.
In Watershed, we use what is known as a machine learning model to do this. The model has been trained to assess the feelings and emotions expressed in text. This includes dissecting the grammar, word order, and construction of each phrase (a form of lexical and syntactic analysis) to ensure that all of the text is analyzed, not just individual words or phrases.
So How Does Sentiment Analysis Work in Watershed?
Watershed’s sentiment analysis model detects and analyzes text as it is received. The model then assigns it:
- a score from -1 to 1 (with -1 being very negative and 1 being very positive), and
- a category (negative, neutral, or positive)
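To make the score-plus-category output concrete, here is a minimal sketch of how a score in the -1 to 1 range might map onto the three categories. The -0.25 and 0.25 thresholds are purely illustrative assumptions; Watershed's actual cut-offs aren't documented here.

```python
def categorize(score: float) -> str:
    """Map a sentiment score in [-1, 1] to a category.

    The -0.25 / 0.25 thresholds below are illustrative guesses,
    not Watershed's actual boundaries.
    """
    if not -1.0 <= score <= 1.0:
        raise ValueError("sentiment scores range from -1 to 1")
    if score < -0.25:
        return "negative"
    if score > 0.25:
        return "positive"
    return "neutral"
```

So a strongly critical comment might score around -0.8 ("negative"), while a mixed or factual one lands near 0 ("neutral").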
You can then analyze this data with Watershed’s standard reports to gauge how people feel about training. We’ll be expanding on how you can apply this to your learning programs in a follow-up blog post that will be published shortly.
Sentiment scores can then be sliced and diced like any other data and displayed in most Watershed reports. There's also a new measure category in all Watershed instances called “Sentiment Analysis,” which you can use to analyze the data or as a basis for creating your own measures.
A note: Please be patient while our feature adapts to real-world data sets
Sentiment analysis is a beta feature, meaning this is the first phase of the launch: it needs real data to refine its results. While we've used the best available machine learning algorithms to build this feature, the model itself is still quite young. It will need time to adapt to real-life scenarios and to learn from the vocabulary real learners actually use.
So we'll be closely monitoring performance, and we encourage users to report any anomalies they find (see how via the support article below). Anomalous scores can be corrected, and the model will learn and adapt from those corrections. By its nature, machine learning takes a little time, though, so we appreciate your patience.
Find more information about Sentiment Analysis on our support site, or contact our support team or your account manager for more information.
Top Items: Build leaderboards that only return the top learners
Watershed’s most-used report type is the Leaderboard report. People use it to display all kinds of information, such as popular content or top learners. And now, you can configure Watershed’s leaderboards to display only the top 5, 10, 15, 20, 25, 50, or 100 results.
Hide Personally Identifiable Information: Configure, utilize, and share reports without worrying (too much) about data privacy
Previously, when a report's dimension was set to “Person,” anyone viewing it (via a share or directly in the application) would see personally identifiable information, such as the names or IDs of the learners the report is about.
But due to popular demand, we've added a setting that hides this information while still including a row for each person in the report. This comes in handy when reviewing qualitative data, such as commentary.
More Program Report Upgrades: Build program reports about more people, faster
Last time we discussed product releases, many were related to Program reports. This time, we have two small quality-of-life improvements to add to the mix.
Create Program report steps using context activity IDs.
Normally, Watershed requires you to provide one or more xAPI activity IDs in a report. But for Program reports, you often report on data that is linked together by a shared context activity ID (such as a parent or grouping activity).
The Program report now has an option to bulk import program steps that are linked in this way. Once the steps are imported, you can change the order and remove any unwanted ones.
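To illustrate what “linked by a context activity ID” means in xAPI terms, here is a sketch of two statement fragments that both reference the same program under `context.contextActivities.grouping` (a structure defined by the xAPI specification). All IDs below are made up for illustration.

```python
# Hypothetical program ID shared by both statements.
PROGRAM_ID = "https://example.com/programs/onboarding"

# Two xAPI statement fragments: different objects, same grouping activity.
step_1 = {
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "https://example.com/courses/intro"},
    "context": {"contextActivities": {"grouping": [{"id": PROGRAM_ID}]}},
}
step_2 = {
    "verb": {"id": "http://adlnet.gov/expapi/verbs/passed"},
    "object": {"id": "https://example.com/courses/assessment"},
    "context": {"contextActivities": {"grouping": [{"id": PROGRAM_ID}]}},
}

def shares_program(stmt: dict, program_id: str) -> bool:
    """Check whether a statement's grouping context includes the program."""
    grouping = (
        stmt.get("context", {})
        .get("contextActivities", {})
        .get("grouping", [])
    )
    return any(activity.get("id") == program_id for activity in grouping)
```

A bulk import keyed on that shared grouping ID would pick up both steps, which you could then reorder or prune as described above.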
People filters now support up to 15,000 people.
When you build a Program report, you often want to limit the number of people reported on using a people filter. Previously, this would limit your report to 10,000 people—but it’s now been increased to 15,000.