Identify Issues and Prevent Slowdowns in Your Vue.js Apps


In this lightning talk, you will see:

1. The lack of visibility in Vue.js applications.

2. How to install the Sentry Vue.js SDK.

3. How you can use Sentry to identify errors and slowdowns within your Vue.js app.


Hi there, my name is Simon. I'm a Solutions Engineer at Sentry. What we focus on is code observability. With Sentry, you can understand the health of your application from the front-end to the back-end. Sentry is designed for all developers, meaning we support all major languages and frameworks, including Vue.js. With the Sentry SDK in your application, you can alert the necessary team members and enable your developers to optimize both the developer and customer experience. The Sentry platform provides multiple perspectives on the health of your application, but what we'll be focusing on today is error monitoring and performance monitoring.

To get started with Sentry, we go to the Sentry documentation site and search for Vue, or click to see all 99 supported platforms. In the JavaScript section, there's a link to Vue. The installation process is very easy: it's an npm install and a few lines of code to configure Sentry in your Vue application. This hooks Sentry into your global error handler, and as users interact with your application, events and transactions are sent to Sentry.

What I've got on the side here is a demo Vue app. It's very basic, but let's take a look together. I just refreshed it because I've set it up so that an HTTP request is sent on page load. We'll review what that means on the performance side of things, but for now, let's click these error buttons. As these errors happen, they're also sent to Sentry. I've configured some alerts, so we're actually being notified of these errors as they occur. On Slack, I just clicked into a notification that takes us straight to this error, and the details of the error are all displayed in front of me: the error itself, the error message, and the magnitude of the error, meaning how many times it happened and against how many users.
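The "few lines of code" mentioned above look roughly like this for a Vue 3 app. This is a minimal sketch based on the Sentry Vue SDK documentation; the DSN shown is a placeholder you would replace with your own project's key.

```javascript
// main.js — minimal Sentry setup for a Vue 3 app (sketch; DSN is a placeholder)
import { createApp } from "vue";
import * as Sentry from "@sentry/vue";
import App from "./App.vue";

const app = createApp(App);

// Passing the app instance hooks Sentry into Vue's global error handler,
// so uncaught errors in components are reported automatically.
Sentry.init({
  app,
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
});

app.mount("#app");
```

Once this runs, events from the demo's error buttons are captured without any further per-component code.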
We also integrate with source control management systems, so we have details on the commit that may have caused this error, links to those commits, and contextual information through these tags, like the user details and the environment they're in. We can add our own tags, but what we probably care about most at this point is the stack trace. That's right below. In a Sentry-less world, we'd be dealing with a minified stack trace, optimal for performance but not great for human readability. Thankfully, Sentry helps out with that: during our build process we uploaded our source maps, Sentry translated the stack trace for us, and we have this beautiful, human-readable stack trace, including a highlight on the line where the error happened. We can see that `response.ok` was false.

We also know which file to take a look at. Let's keep that in mind, very curious stuff. Just below that, we have more context: a timeline of activity in our breadcrumbs. These are all automatically instrumented, and we can add our own breadcrumbs as well. We can switch the timestamps to a T-minus format, where at time zero we hit that internal server error. This is all great for context. And if that weren't enough, this error happened about 50 times, and Sentry has gathered the tags from all of those occurrences into the heat map section on the right. With all this information, the context and content from the front-end perspective, we can consider creating a new ticket or linking to an existing one through these integrations with issue-tracking tools.
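Adding one of those custom breadcrumbs is a single call to `Sentry.addBreadcrumb`. Here's a sketch where the category and message are hypothetical values for the demo app; the SDK call itself is commented out since it needs `@sentry/vue` at runtime.

```javascript
// A hypothetical custom breadcrumb for the demo app's error button.
// Category and message are made-up examples, not values from the talk.
const breadcrumb = {
  category: "ui.click",
  message: "User clicked the 'Internal Server Error' button",
  level: "info",
};

// In the app, this records the breadcrumb on the event timeline:
// Sentry.addBreadcrumb(breadcrumb);  // requires the @sentry/vue SDK
```

Breadcrumbs recorded this way show up alongside the automatically instrumented ones (clicks, navigation, HTTP requests) on the error's timeline.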

I also want to bring our attention to the distributed tracing feature we have in Sentry. Since we have Sentry installed on our back-end as well, we can see a relationship: there is a child event with a related error that we should take a look at. This is referring to that `response.ok` being false, coming in from our back-end. If we click into it, we can see the back-end tags, back-end stack trace, and back-end breadcrumbs: everything we saw here, but from the back-end perspective.

Let's switch gears for a moment. There's a lot on the issue side, and we've got just enough time to check out the performance health that Sentry provides. Right away, we see our Web Vitals: First Contentful Paint (how long it takes for the first thing to load), Largest Contentful Paint (the largest thing), First Input Delay, and Cumulative Layout Shift (the stability of our page), plus latency over time and the distribution over the past 24 hours. That was quick, right? Lightning talk. But what stands out here is the User Misery score. As I hover over it, it gives us a definition of this metric, and the same goes for all the other metrics. Let's just follow our nose and click into the homepage transaction here, where we can see the duration breakdown: how long these transactions took to complete over the past, in this case, 24 hours. We can interact with this graph by highlighting a different section, which updates the time frame and, of course, the events that show up.
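Performance data like this is collected once tracing is enabled in the SDK. At the time of this talk, that meant adding the `BrowserTracing` integration from `@sentry/tracing` and setting a transaction sample rate; a sketch is below, where the DSN is a placeholder and the 20% sample rate is just an example value.

```javascript
import * as Sentry from "@sentry/vue";
import { BrowserTracing } from "@sentry/tracing";

Sentry.init({
  app,
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [new BrowserTracing()],
  // Capture 20% of transactions for performance monitoring (example rate;
  // tune this for your traffic volume).
  tracesSampleRate: 0.2,
});
```

With this in place, page loads and navigations are sent as transactions, which is what feeds the Web Vitals and duration graphs shown here.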

Most of the operation time is in this red HTTP band. Clicking into an event ID, we can take a look at the operation breakdown in this waterfall-style graph.

This transaction took about 15 seconds, and most of the operation time was coming from this HTTP client request. It's going to our back-end, and we can see that because of this plus icon here: we have distributed tracing set up. If we didn't, we just wouldn't see the plus icon, and we couldn't expand it.

Luckily, we can, and we can see that a lot of this time is spent in these sequential database queries. Now we have a way to move forward: work with our back-end team, optimize these database queries, perhaps run them asynchronously instead of one after another, and that will improve the user experience and reduce that user misery as well.

Just to recap for a moment: on our Vue app, we clicked a few of these error buttons. We were notified through our integration with Slack, got to our error page, looked at the full context and content from the front-end perspective, and, with distributed tracing, could do the same from the back-end perspective.

On the performance health side of things, we got a summary of that health through the Web Vitals, followed our nose through the User Misery score on the homepage transaction, took a look at the operation breakdown, and found we can make some improvements to those suboptimal database queries.

In any case, thank you so much for reviewing this Vue app with me. Have a wonderful rest of your conference. Thank you very much.

8 min
20 Oct, 2021
