Hi there, my name is Simon. I'm a Solutions Engineer at Sentry. What we focus on is code observability. With Sentry, you can understand the health of your application from the front-end to the back-end. Sentry is designed for all developers, meaning we support all major languages and frameworks, including, and especially, React. With the Sentry SDK in your application, you can alert the necessary team members and enable developers to optimize both the developer and the customer experience. The Sentry platform provides multiple perspectives on the health of your application, but what we'll be focusing on today is error monitoring and performance monitoring.

With that in mind, I have a React app for us to take a look at: it's an e-commerce site. Let's click in and browse our products. It's taking a long time for this to load. We'll have to look into that a bit later, but for now we'll continue on with the user flow. I'll add a couple of items to our cart here and proceed to checkout. This is a horrible demo site, obviously: we encountered an error, surprise, surprise. But let's take off our end-user hat for a moment, put on our developer hat, and see what's going on. We could open our developer tools, look at the console, reproduce it, look at the network tab, cross-reference our source code; there's plenty we could do, but it gets a bit tedious. Thankfully, we've got Sentry set up, and we can do it all in one place.

I'll show you how we got started. That's in the Sentry documentation at docs.sentry.io; write it down. We've got our supported platforms behind this button here, and pride of place, React is on the front page, so you don't have to go far. Click into that and it's an easy installation: npm install or yarn add, and a few lines of code to configure it. What this will do is get Sentry into your application and hook it onto your global error handler.
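Those few lines of configuration look something like the following. This is a minimal sketch, assuming a recent @sentry/react SDK; the DSN is a placeholder, the sample rate is set for a demo, and the exact integration names vary between SDK versions.

```javascript
// src/index.js — run this before the React app mounts.
import * as Sentry from "@sentry/react";

Sentry.init({
  // Placeholder DSN; use your project's DSN from sentry.io.
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  // Enables the performance monitoring shown later in the talk.
  integrations: [Sentry.browserTracingIntegration()],
  // Capture 100% of transactions for the demo; lower this in production.
  tracesSampleRate: 1.0,
});
```

Once this runs, unhandled errors flow to Sentry through the global error handler, with no per-component changes needed.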
As users are interacting with your application, events and transactions are sent to Sentry, and that's where we can look and understand the health of our application.
During that checkout fiasco, we did get notified through an alert that was set up. We don't have time in this lightning talk to go into alerts, but just know that they're there.
We've got our summary of the error right away, including the magnitude (how many times it's happened, and to how many users) and contextual tags as well. We can add our own tags if we'd like, but let's skip ahead to what we're all thinking about: the stack trace.
In a Sentry-less world, we'd get a minified stack trace, which is not the most fun thing to understand. But during our build process we uploaded our source maps, and Sentry is able to apply them to the stack trace that was provided and unminify it into this beautiful, human-readable stack trace we've got here, highlighting the exact line the error hit. We've even got the details of this if statement, and that the response's ok was false. That's not good, but we'll keep it in mind and continue looking at more context. We've also got a timeline of the events, which we can filter. We can change the time frame to a more T-minus style, and we can add our own breadcrumbs as well. That'll help us understand what led up to this error.
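The checkout path that tripped that if statement might look something like this. This is an illustrative sketch, not the demo's real code: the endpoint, function names, and breadcrumb fields are made up, `fetchImpl` is injected so the snippet stands alone, and the Sentry calls are shown as comments marking where they would go.

```javascript
// Hypothetical checkout handler (endpoint and names are illustrative).
async function submitCheckout(fetchImpl, cart) {
  // A custom breadcrumb like this would show up in the error's timeline:
  // Sentry.addBreadcrumb({ category: "cart", message: "checkout submitted", level: "info" });
  const response = await fetchImpl("/api/checkout", {
    method: "POST",
    body: JSON.stringify(cart),
  });
  if (!response.ok) {
    // This is the branch the stack trace highlighted: response.ok was false.
    const err = new Error(`Checkout failed with status ${response.status}`);
    // Sentry.captureException(err); // also reaches Sentry via the global handler
    throw err;
  }
  return response.json();
}
```

Throwing a descriptive error here, rather than swallowing the bad response, is what gives the global handler something useful to report.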
Since this error happened multiple times, around 60 times, we've collected the tags from all the occurrences in this heat map on the right. We've also got all our issue-tracking integrations handy over here, where we can link to existing tickets or create a new ticket as well. Now that we've got all the context and content from our stack trace from the front-end perspective, I want to call our attention to distributed tracing, a feature we have since we've also got Sentry set up on our back-end service. Clicking here, we can see that there's a related error: an exception that happened on our back-end. We might want to work with our back-end team to figure out what was going on there. I'll just give us all a hint, since we don't have that much time: there wasn't enough inventory, and there wasn't a good way to deal with that, so the error happened on the back-end and surfaced on the front-end. In any case, let's switch gears and go to our performance page. This will give us the performance health of our application.
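The back-end side of that distributed trace only needs its own SDK initialized. A minimal Node sketch, with a placeholder DSN: once tracing is enabled on both sides, the browser SDK attaches a sentry-trace header to outgoing requests and the server SDK continues the same trace, which is what links the front-end transaction to the back-end exception.

```javascript
// server.js — back-end counterpart to the React init (DSN is a placeholder).
const Sentry = require("@sentry/node");

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/1",
  // Sample every transaction for the demo so front-end and back-end
  // spans of the same trace are both captured.
  tracesSampleRate: 1.0,
});
```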
Web vitals are listed up front: how long it took for the first content to load (First Contentful Paint), the largest content to load (Largest Contentful Paint), the input delay (First Input Delay), and the latency distribution, all over time; we can adjust the time frame from the past day to another range. But let's remember that the products page took a long time to load. We can see evidence of that in the high User Misery score as well. You can highlight each of these metrics and it'll give you more details on how it was calculated. Let's follow our nose and go into the products page. We have our duration breakdown: how long these transactions took. This is an interactive graph, so I can adjust the time frame by highlighting a region, or change it up here as well. But what I think might help most is taking a look at the operation duration. It's highlighted in this HTTP red, and it's covering basically all of it. That's very curious. HTTP red is not an official color, but that's what I'm calling it.
Clicking into a specific event, we can see the operation durations in this waterfall-type graph. If we just had Sentry installed on our front-end, this is exactly what we'd see, but we also have this plus. That indicates we've got Sentry set up on the back-end as well, and we can see how the transaction was carried over from a front-end request to the back-end, to get the details on our products and return them to us on the front-end. A lot of the time is spent in these database queries, and we can see they run sequentially, which is not the most optimal. We might want to work with our database team and our back-end team to optimize these queries, maybe with asynchronous processing. In any case, we've got a way to move forward. Just to recap: our React app was slow loading the products, and we also got an error in our checkout process. Since we were notified, we could take a look at all the details on the back-end side and the front-end side with distributed tracing. On the performance end of things, our performance health and web vitals are listed in our performance page; we followed our nose, saw that high User Misery score, and that led us to the page with our operation breakdown, where we learned we can make improvements to our database queries. Thank you so much for walking through these problems in our demo site with me. Have a wonderful rest of your conference. Thank you very much.
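The fix hinted at in the waterfall can be sketched in a few lines. This is an illustrative example, not the demo's actual back-end code: `db.getProduct` is a made-up query function, and the point is just that awaiting independent queries one by one stacks their latencies, while `Promise.all` lets them overlap.

```javascript
// Sequential: each query waits for the previous one, so total time is
// roughly the sum of all query latencies — the pattern in the waterfall.
async function loadProductsSequential(db, ids) {
  const products = [];
  for (const id of ids) {
    products.push(await db.getProduct(id));
  }
  return products;
}

// Concurrent: all queries are issued at once, so total time is roughly
// the slowest single query.
async function loadProductsConcurrent(db, ids) {
  return Promise.all(ids.map((id) => db.getProduct(id)));
}
```

Both return the products in the same order; only the timing differs, which is exactly what the operation-duration breakdown would show improving.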