Learn about tools to trace data from frontend to backend and increase visibility into errors and performance. We'll go through how to know which team is responsible for which error, what its impact is, and all the context needed to solve it.
Monitoring Errors and Slowdowns Across JS Applications

AI Generated Video Summary
Sentry is an error monitoring platform that helps developers optimize the customer experience by alerting them to errors and slowdowns. It supports all major languages and frameworks, with a focus on error monitoring, performance monitoring, and release health. The talk explores how Sentry organizes and represents error data, analyzes error details and tags, and investigates backend issues, performance problems, and release health. Collaboration with backend teams is emphasized as the way to resolve issues and optimize transaction time. The talk also highlights the importance of analyzing graphs, issues, and regressions to identify areas for improvement in release health.
1. Introduction to Sentry and Error Monitoring
Hi, I'm Simon, a solutions engineer at Sentry. We monitor errors and slowdowns in JS applications, connecting developers to the end user experience. With the Sentry SDK, we alert team members and developers of errors and slowdowns, optimizing the developer and customer experience. The Sentry platform focuses on error monitoring, performance monitoring, and release health. We support all major languages and frameworks, with Node.js as a starting point. We'll generate transaction and error data to demonstrate how Sentry organizes and represents them. Let's take a look at how errors are represented in Sentry, including frequency, contextual data, and tag information.
Hi there, my name is Simon. I'm a solutions engineer at Sentry, and we'll be talking about monitoring errors and slowdowns across JS applications. Sentry is designed squarely for developers: we'll tell you when your code is slow, when it's broken, and give you clues as to why. We're connecting the end user experience as closely as possible to the developers who make those experiences happen.
With the Sentry SDK on your apps, we'll alert the necessary team members and developers when those errors and slowdowns happen, letting them make the commits and changes to optimize that developer and customer experience. The Sentry platform is divided into these five pillars here. We'll be focusing on the first three on the left: the error monitoring, performance monitoring, and release health side of Sentry.
Now, to get started, we'd be on the Sentry documentation page. We support all major languages and frameworks, and there's a button here to get into all of that, but Node.js is right on the front page, so let's start there. You install the necessary packages with a yarn add or npm install, then configure with Sentry.init, including that DSN. That's a data source name; it tells your application where to send error and transaction events, and that's your project in Sentry.
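As a rough sketch of that install-and-configure step, assuming a Node.js app and using a placeholder DSN:

```js
// Install the SDK first: `npm install @sentry/node` (or `yarn add @sentry/node`).
const Sentry = require("@sentry/node");

Sentry.init({
  // The DSN (data source name) below is a placeholder; it tells the SDK
  // which Sentry project should receive error and transaction events.
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  // Capture performance data as well; 1.0 sends every transaction,
  // which you would typically lower in production.
  tracesSampleRate: 1.0,
});
```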
Now, I've got an app here for us to take a look at together. We'll generate some transaction and error data, and we'll take a look at how that's represented in Sentry and how it's organized by releases as well. To get started, we'll click into Browse Products to take a look at the available plants to buy. It's taking a few seconds here, and we'll take a look at that slowdown momentarily, but I'm going to finish up this user flow: add a couple of items to our cart and check out to purchase them. And we've encountered an error, surprise, surprise, but let's take a look at how that's represented in Sentry.
Now, I've got a Slack alert set up, so in a few seconds here we'll be notified of the error that we just triggered from that checkout process. We could click into this notification, which carries some context about what just happened behind the scenes, but let's take a deeper look. From that link, I'm taken to the who, what, when, and where of the error we just experienced together. This 500 error has happened 160 times to 60 users, and we get some context about its frequency over the past day and 30 days, plus when it was first and last seen across releases. That's really helpful. There's also some aggregated tag information on the right over here: we've taken all 160 times this error has happened, pulled out some contextual data, and heat-mapped and organized it. As you can see here, the customer type tag spans small, medium, large, and enterprise plans. It's affecting all our users, so we want to take a deeper look into that.
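Tags like that customer type typically come from the application itself. A minimal sketch, using hypothetical values in place of real session data, of how such tags and user context can be attached with the SDK's setTag and setUser calls:

```js
const Sentry = require("@sentry/node");

// Hypothetical values; in a real app these would come from the user's session.
const plan = "large";

// setTag attaches a searchable key/value pair to every event in this scope,
// which is what powers the aggregated tag breakdown on the issue page.
Sentry.setTag("customerType", plan);

// setUser is what lets Sentry count how many distinct users were affected.
Sentry.setUser({ id: "42", email: "plant.fan@example.com" });
```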
2. Error Details and Analysis
Let's focus on the details of one of the 160 times this error has happened. We'll look at the tags, the stack trace, and the timeline of activities that led up to the error.
Now let's focus on the middle pane here. These are the details of one of the 160 times this error has happened, and we can page through to the other ones. In any case, let's take a look at these tags, the key-value pairs for this specific occurrence: macOS, Chrome, it was a large customer, and some other details. But what we care most about at this point is probably the stack trace. Without Sentry, we'd be dealing with this minified, not-human-readable stack trace; but since we've uploaded our source maps at build time, we see this very beautiful, human-readable stack trace and can look at the exact line of code where the error happened. It looks like the response from the back end was not okay, so we can dive a little deeper into that. We've also got a timeline of the activities that led up to our error, which we can filter as well.
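For illustration, a hypothetical version of the kind of frontend check that would raise this error when the checkout call comes back with a 500 (the endpoint path and error message are made up):

```js
async function checkout(cart) {
  const response = await fetch("/api/checkout", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(cart),
  });
  if (!response.ok) {
    // A 500 from the backend lands here; with the Sentry SDK installed,
    // the thrown error is captured, and the uploaded source maps let the
    // stack trace point at this exact line rather than at minified output.
    throw new Error(`Checkout response was not ok: ${response.status}`);
  }
  return response.json();
}
```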
3. Backend, Performance, and Release Health
Now let's dive into the backend and see the stack trace and error details. We can work with our backend team to resolve the inventory issue. Moving on to performance, we analyze the web vitals and transaction details. We identify a problem with the largest contentful paint web vital and investigate further. By examining the transaction breakdown, we find a slow HTTP client operation due to sequential database queries. We can collaborate with the backend team to optimize and improve the transaction time. Lastly, we explore release health, including crash rates and specific details. We can analyze graphs, issues, and regressions to identify areas for improvement.
Now, just going back to the top of this issues page, I see there's a connector to a child event: that's our Node backend. Clicking into that, we can see everything we just reviewed, but now from the backend side, the who, what, when, and where. We've got our Node stack trace as well, and we can see we got an error because there was not enough inventory for our product; the inventory item's count and the requested quantity just weren't happy together. Okay, so we can work with our backend team and figure that out.
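As a hypothetical sketch of the backend logic behind that error, with made-up names for the inventory item and its fields:

```js
function reserveInventory(inventoryItem, quantity) {
  if (inventoryItem.count < quantity) {
    // Thrown on the Node backend, captured by the Sentry SDK there, and
    // linked as the child event of the frontend 500 we saw above.
    throw new Error(`Not enough inventory for ${inventoryItem.id}`);
  }
  inventoryItem.count -= quantity;
  return inventoryItem;
}
```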
Great, so that's the first item here, error monitoring. Let's jump into performance. Going back to our frontend and our Performance tab, we get our web vitals right away, which connects that end user experience to what we as developers care about and what is actionable for us. Our products transaction, which, as we noticed earlier, took a few seconds to load that list of products, is failing on our Largest Contentful Paint web vital. So let's dive a little deeper into that.
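For context, web vitals and page-load transactions come from the browser SDK's tracing setup. A sketch of what that configuration can look like (integration names vary a bit between SDK versions, and the DSN is a placeholder):

```js
import * as Sentry from "@sentry/browser";
import { BrowserTracing } from "@sentry/tracing";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder
  // Records pageload and navigation transactions, which carry the Web
  // Vitals (LCP, FID, CLS) shown on the Performance tab.
  integrations: [new BrowserTracing()],
  tracesSampleRate: 1.0,
});
```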
We've got our transactions table and can look into a transaction summary of how it's been performing over the past day. We can adjust the timeframe, and this is a really fun graph to work with, highlighting over a specific period. But in any case, what we care about is getting into the details of a specific transaction. Clicking into that, we have the operation breakdown of what just happened and how it's performing against our web vitals. It's failing on that LCP: 10.6-odd seconds out of the 11.18-second total transaction time. Super problematic, but looking down the list, most of the operations aren't taking much time. We see this one HTTP client operation that's taking ten and a half-ish seconds, so let's dive into that and look at what operations are happening. Here we can see that there are some database queries, some select statements running sequentially, and we can work with our back-end team to figure out how to optimize that, reduce that LCP problem, and improve the transaction time. Great, so that's two checks off the box here.
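A common fix for that pattern is to stop awaiting the queries one after another. A minimal sketch, assuming a hypothetical db.query(sql) helper standing in for the real data layer:

```js
async function loadProductsPage(db) {
  // Before: sequential awaits, each SELECT waiting on the previous one,
  // which is what stretches the HTTP client span out to ~10 seconds.
  // const products = await db.query("SELECT * FROM products");
  // const reviews  = await db.query("SELECT * FROM reviews");

  // After: independent queries run concurrently, shrinking the span
  // and with it the LCP of the products page.
  const [products, reviews, inventory] = await Promise.all([
    db.query("SELECT * FROM products"),
    db.query("SELECT * FROM reviews"),
    db.query("SELECT * FROM inventory"),
  ]);
  return { products, reviews, inventory };
}
```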
Now let's take a look at release health. We organize the details about our projects into releases, and we can sort by specific properties as well. We support semantic versioning; here we've organized by 22.x releases, and we can adjust to see details about our production environment over the past 14 days in terms of sessions, with users available as well. Once you filter accordingly, we get all the details about these releases.

So our crash-free rate in the latest version is 84%, and it's been adopted 7% of the time. That's okay, crashes aren't too bad, but jumping into a specific release, we get its specific details. We can look at different series of graphs, take a look at the issues as well as the regressed and unhandled ones, figure out which ones to delve into, and go back to where we started on the issues page. We can also take a look at how the release fits into the transactions behind the scenes: in this case, two transactions and two errors. Everything fits together with Sentry, going into a release, taking a look at the errors behind it, and seeing the performance details all connected together. Great.
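As a final sketch of how release health gets its grouping, the SDK's release option stamps events and sessions with a version string; the app name and version here are placeholders following the 22.x semantic versioning mentioned above:

```js
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder
  // Every event and session is tagged with this version, which is what
  // the release health view groups crash-free rates and adoption by.
  release: "plant-store@22.9.0",
});
```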