More L&D Reports with Less Page Load Time [Dev Delve]

I try really hard to avoid being the stereotypical angry dev nerd who tells users they’re doing it wrong. In my mind, it’s completely unacceptable to tell users they’re using an application beyond its intended use. Besides, isn’t that the whole point—don’t you want people to use something you created in a way that’s beyond what you could have imagined?

And it’s one of these instances that left me with a challenge I was determined to figure out: How do I maximize the number of L&D reports per user dashboard while limiting load times and reducing latency?

TL;DR

  • Problem: Requests fail when too many dashboard reports try to load at once in our single-page web app.
  • Solution: Throttle heavy loads by queuing requests (or promises) and processing only a few at a time, and cancel outstanding requests when the user bounces from the page.
  • Result: Faster page load times without limiting the number of dashboard reports.

Deep Dive

Like many modern applications, such as Jira or Salesforce, Watershed is a single-page web app that offers a configurable, unique landing page (or dashboard) to each of its users.

Since our application deals primarily in analytics and reports, a typical user’s dashboard is filled with reports (or cards) on learners, training programs, course completion, etc.

What’s challenging, though, is when users within an organization share several of these dashboards with one another because those users can add more reports to each shared dashboard.

Thus, the number of cards on each dashboard can grow very fast.

There Aren’t Problems—Just Opportunities

This challenge first surfaced when an end user reported performance issues, stating the dashboard was taking a long time to load. Then, that same user reported other pages within the application were not only loading slowly, but also missing pieces of the UI.

As in, even browser requests to load minified HTML files and heavily cached data were failing!

Upon initial investigation, the problem was obvious: the user had too many dashboard reports. There was an easy fix, which was imposing an arbitrary limit on the number of reports that could be added to a dashboard.

Several users within this organization had hundreds of cards on their dashboards, which is way more information than anyone can quickly digest and comprehend (at least within the scope of a single page view).

If we limited the number of cards, we’d be removing hundreds of cards from their dashboards. So, by eliminating one issue, we’d only be creating a new one.

My mind was made up. There would be no limit to the number of cards on a dashboard. The challenge became "How are we going to accomplish this?"

But before the how, the real question was the why. Why was a high number of reports causing a performance hit? And why was it causing OTHER pages to slow down as well?

What I found really weird was that this slow load time was happening even if EVERY report on the page was cached. That is, the server would return immediately.

I realized the issue originated on the client's side: the browser limits the number of concurrent connections per host, which meant I could depend on at most six connections to the server.

Making the situation more difficult, once all these requests fired off, they would hang and use all the available browser connections. So, even if a user navigated away from the page, the entire website was slow and unresponsive.

It didn't matter if we were crunching numbers and generating reports at blazing speed; the user was already gone. And while those reports were returning to the browser, the user was actually waiting for another page!

The solution came in two parts.

  1. Cancel requests for reports if a user navigates away from a page. That is, if the user logs in and then immediately leaves the landing page, we stop waiting for those requests to return. (This is the most critical step.)
  2. Throttle the number of report data requests to prevent saturating all the connections.

Canceling data requests meant investigating how we created each request on the front end. A bit of digging led to an $http.get(), and a simple Google search led to a great Stack Overflow answer!

This is a really clever trick that deserves more recognition. You see, each request returns a promise, and each request's timeout can be either a number or a promise that, once resolved, cancels the request.

So, now there’s a central list of cancel promises. And if the user navigates away, all of those promises are resolved, which in turn cancels every pending request.
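Here's a minimal sketch of the trick, assuming AngularJS 1.x with $http and $q already injected; the function and variable names are illustrative, not our production code:

```javascript
// Shared list of cancelers, one per in-flight report request.
var cancelers = [];

function getReport(url) {
  var canceler = $q.defer();
  cancelers.push(canceler);

  // Passing a promise as the `timeout` config aborts the request
  // as soon as that promise resolves.
  return $http.get(url, { timeout: canceler.promise });
}

// Called when the user navigates away: resolving every canceler
// cancels every request that is still pending.
function cancelAll() {
  cancelers.forEach(function (canceler) { canceler.resolve(); });
  cancelers.length = 0;
}
```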

For throttling requests, I took a little journey into the Angular framework. The landing page (i.e., dashboard) is really a collection of HTML elements with Angular controllers attached.

Each controller has an isolated scope. That means when all cards are added to the landing page during the initial load and the digest cycle has run, each controller will individually fire off requests and end up fighting for resources.

The answer, of course, was to queue these requests and only fire a small number at a time. What better way to do this than with promises? (If you’re counting, that’s three layers of promises!)

So that brings us to the final flow. As data requests come into a central service, they’re added to an array, and promises are returned. As each promise is resolved, the actual request is fired.

Each request has a timeout property, which can be a promise; I call this the canceler in the code below. If we resolve this promise, we cancel the request. As each request completes, it resolves the next promise in the array, which fires off the next request. (See the code example below.)

Here's an example of a bad request:

Bad Data Request Example
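Roughly speaking, the bad pattern has every card controller call $http.get() directly the moment it initializes, so nothing is throttled and nothing can be canceled. (The controller and endpoint names here are illustrative.)

```javascript
// Every card fires its own request immediately. With hundreds of cards,
// these requests saturate the browser's ~6 connections per host.
angular.module('app').controller('ReportCardCtrl', function ($scope, $http) {
  $http.get('/api/reports/' + $scope.reportId).then(function (response) {
    $scope.reportData = response.data;
  });
});
```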

And here's an example of a good request:

Good Data Request Example
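In the good pattern, sketched with the same illustrative names, the controller never talks to $http directly. It asks a central queue service for the data, and that service decides when the request actually fires and cancels it if the user leaves:

```javascript
// The controller never touches $http; it asks the central queue service,
// which throttles when the request fires and cancels it on navigation.
angular.module('app').controller('ReportCardCtrl', function ($scope, reportQueue) {
  reportQueue.get('/api/reports/' + $scope.reportId)
    .then(function (response) {
      $scope.reportData = response.data;
    })
    .catch(function () {
      // The request failed or was cancelled because the user navigated away.
    });
});
```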

Or, just follow the pseudo code below:
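What follows is a sketch of that central service in AngularJS 1.x rather than our exact implementation; the module, service, and concurrency-limit names are illustrative:

```javascript
angular.module('app').factory('reportQueue', function ($http, $q) {
  // Keep a couple of the browser's ~6 per-host connections free for
  // everything else the page needs (HTML, assets, other API calls).
  var MAX_CONCURRENT = 4;

  var queue = [];      // deferreds for requests waiting their turn
  var active = 0;      // requests currently in flight
  var cancelers = [];  // timeout promises for outstanding requests

  // Fire the next queued request if a connection slot is free.
  function next() {
    if (active < MAX_CONCURRENT && queue.length) {
      active++;
      queue.shift().resolve();
    }
  }

  return {
    get: function (url) {
      var slot = $q.defer();      // resolved when this request may fire
      var canceler = $q.defer();  // resolved to abort the request
      queue.push(slot);
      cancelers.push(canceler);
      next();

      return slot.promise.then(function () {
        return $http.get(url, { timeout: canceler.promise })
          .finally(function () {
            // Finishing (or being cancelled) frees a slot for the next card.
            active--;
            next();
          });
      });
    },

    cancelAll: function () {
      // Abort every in-flight request; resolving the canceler of a request
      // that already finished is harmless.
      cancelers.forEach(function (canceler) { canceler.resolve(); });
      cancelers.length = 0;

      // Reject anything still waiting in the queue so it never fires.
      queue.forEach(function (slot) { slot.reject('cancelled'); });
      queue.length = 0;
    }
  };
});
```

A route-change listener (for example, on $routeChangeStart) can then call reportQueue.cancelAll() so that navigating away immediately frees every connection for the next page.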

Have a different workaround that's helped? Let me know in the comments—I'd love to add more tools to our arsenal.


Where Can I Get Help?

To learn more about using all of Watershed's features, visit our help section. You're also welcome to contact us if you have any questions or need help.
