The benefits of Node.js for developing real-time applications at scale are well known. As Node.js architectures grow more complex, visualizing your microservice-based architecture becomes crucial. However, visualizing microservices is incredibly complex given their scale and the transactions that flow across them. You need to not only visualize your Node.js applications but also analyze the health, flow, and performance of those applications to have a complete observability solution. In this talk, we'll go over the challenges of scaling your Node.js applications and the tools (such as distributed tracing) available to help you scale with confidence.
Comprehensive Observability via Distributed Tracing on Node.js

AI Generated Video Summary
Welcome to the session on comprehensive observability via distributed tracing on Node.js. We'll explore the challenges of microservices and troubleshoot distributed applications using an example. Correlation is the missing piece in troubleshooting distributed applications. Distributed tracing helps pinpoint issues that logging or metrics may miss, reducing mean time to resolution. It provides visualization of microservices architecture, actionable data, and enables code optimization.
1. Introduction to Observability
Welcome to the session on comprehensive observability via distributed tracing on Node.js. In this session, we'll look at the new challenges in microservices, troubleshoot distributed applications using an example, and build a sustainable observability strategy for your company. Microservices have great benefits but also bring new challenges such as observability. Traditional monitoring systems make it hard to know what's happening under the hood.
Hello, everyone. Welcome to the session on comprehensive observability via distributed tracing on Node.js. I'm the host for the session. I'm Chinmay Gaikwad, a technical evangelist at Epsagon.
Let's get started with the session. In this session, we'll look at the new challenges in microservices, specifically focusing on observability. We'll also look at how to troubleshoot distributed applications using an example, and finally we'll look at how to build a sustainable observability strategy for your company.
So let's start with the challenges of microservices. We know microservices have great benefits, including scalability, speed of development, and decreased system administration time, but they have also brought new challenges, such as observability. Using traditional monitoring systems, it can be nearly impossible to know what is going on under the hood. We'll explore this in much more detail in the upcoming slides.
2. Troubleshooting Distributed Applications
Let's start with metrics, which are a great way to identify issues. Logs tell us why something went wrong, but they are not sufficient in a microservices-based environment. The traditional way of debugging involves looking at metrics, then logs, but it lacks context. Correlation is the missing piece in troubleshooting distributed applications.
First, let's see how to troubleshoot distributed applications. We know the three pillars of observability are metrics, logs, and traces. We'll deep dive into tracing a bit later. Let's start with metrics. Metrics are a great way for ops to figure out if something has gone wrong. Some examples of metrics include CPU usage and memory usage. We also have business-level metrics such as bounce rates, revenue, click-through rate, etc.
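Purely as an illustration of what such metrics can look like in Node.js (this is not from the talk), here is a minimal sketch that exposes default process metrics plus a hypothetical business-level counter, assuming the prom-client library; the metric name and endpoint are made up.

```js
// Minimal sketch: exposing process metrics and a hypothetical
// business-level metric with prom-client (names are illustrative only).
const http = require('http');
const client = require('prom-client');

// Collect default process metrics such as CPU and memory usage.
client.collectDefaultMetrics();

// A made-up business metric: completed checkouts.
const checkouts = new client.Counter({
  name: 'shop_checkouts_total',
  help: 'Number of completed checkouts',
});

http.createServer(async (req, res) => {
  if (req.url === '/metrics') {
    // Expose all registered metrics for a scraper to collect.
    res.setHeader('Content-Type', client.register.contentType);
    res.end(await client.register.metrics());
    return;
  }
  checkouts.inc(); // pretend every request completes a checkout
  res.end('ok');
}).listen(3000);
```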
Logs, on the other hand, tell us why something went wrong. For this session, let us consider the example of a virtual shop. As you can see, the SAP server authenticates requests using Auth0 and then pushes them onto the Kafka stream. A Java container pulls from the stream and updates a DynamoDB table. Let's say there was a situation where users complained that requests were sent but never handled. Where would you start?
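Before we dig in, here is a rough, hypothetical sketch of the producer side of that flow in Node.js, assuming kafkajs as the client; the broker address, topic name, and payload shape are assumptions rather than the actual demo code.

```js
// Sketch of the flow described above: after the request is authenticated,
// the order payload is pushed onto a Kafka topic. kafkajs is used purely
// as an illustrative client; broker, topic, and payload are made up.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'shop-api', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function publishOrder(order) {
  // In the real demo the request would already be authenticated via Auth0;
  // here we simply forward an (assumed) user id alongside the order.
  await producer.connect();
  await producer.send({
    topic: 'orders',
    messages: [{ key: order.id, value: JSON.stringify(order) }],
  });
  await producer.disconnect();
}

publishOrder({ id: 'order-1', userId: 'user-42', total: 19.99 }).catch(console.error);
```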
Traditional monitoring solutions come at the expense of higher resource utilization because they rely on multiple heavyweight agents. They also tend to collect only host metrics, or are purely metrics-driven. Metrics, as we have seen, really only let us know that something is broken, but not where or why. Context is absolutely critical in today's environments. Using the traditional way, first you look at the Kafka metrics. You don't see anything abnormal here, so maybe you look at the DynamoDB metrics next. We see some spikes here, so that's pretty interesting. To debug this, you need more data, and more data means logs. But are logs really sufficient in a microservices-based environment? Let's look into it.
We all know what logs look like. Personally, I have a love-hate relationship with logs. I love the fact that they are available, but I hate digging through them. I've found myself digging through hundreds or even thousands of lines of logs hoping to spot that one outlier. What if I knew the exact path a request takes through individual services and components? Logs are good for digging into the details, but they don't really work as a starting point in a highly distributed system. So in our virtual shop example, if you're very lucky, you'll be able to spot the problem, but it might take a very long time. So let's recap what's missing here. It essentially boils down to correlation.
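As a hedged sketch of what correlation-friendly logging can look like (not the talk's code), the snippet below tags every log line with a shared request id using pino; the field names and the propagation detail are assumptions.

```js
// Sketch: structured logging with a shared request id, so that log lines
// from different services can be correlated later. pino is used as an
// example logger; all field names are illustrative.
const pino = require('pino');
const { randomUUID } = require('crypto');

const logger = pino({ name: 'shop-api' });

function handleRequest(order) {
  // The same requestId would be propagated downstream (for example as an
  // HTTP header or a Kafka message header) so every service logs it.
  const requestId = randomUUID();
  const log = logger.child({ requestId, orderId: order.id });

  log.info('received order');
  try {
    // ... authenticate, push to Kafka, etc.
    log.info('order published to Kafka');
  } catch (err) {
    log.error({ err }, 'failed to publish order');
  }
}

handleRequest({ id: 'order-1' });
```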
3. Correlation and Benefits of Distributed Tracing
Correlation between metrics and logs and between different services is crucial for finding the exact problem. Distributed tracing helps shine a light on the needle in the haystack, revealing issues that logging or metrics may miss. By using distributed tracing in the virtual shop example, we can pinpoint the problem, such as a missing key ID. By focusing on specific services like the Kafka stream and the Auth0 microservice, we can identify the root cause, such as an expired token. This approach significantly reduces mean time to resolution and detection compared to traditional monitoring solutions.
Correlation between metrics and logs, and between different services, is what will help us find the exact problem. So how do we correlate these pieces? That is where distributed tracing comes into the picture. I'm sure most of you have heard of tracing lately. Many vendors offer some form of distributed tracing, and even service meshes are now building support for it. Tracing essentially helps shine a light on the needle in the haystack that logging or metrics can miss. Just because your application has 15 or 20 thousand services doesn't mean a request will travel through every single one of them; at most it will travel through a fraction of them. Applying distributed tracing to our virtual shop example, you can now see where the problem is: a missing key ID. With that key ID, once you focus on the Kafka stream, you see that the username is missing. And then, when you focus on the Auth0 microservice, you can see exactly why it is missing: an expired token. So specifically for Auth0, you now know that you should be using refresh tokens instead of access tokens. In short, this has reduced your mean time to resolution as well as your mean time to detection quite a lot compared to traditional monitoring solutions.
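To make that concrete, here is a minimal, illustrative sketch of how a span around the token validation could surface an expired token in a trace, using the OpenTelemetry API for Node.js; the span and attribute names are assumptions, not the vendor's instrumentation.

```js
// Minimal sketch using the OpenTelemetry API (not a specific vendor agent):
// a span around the token validation records the failure, so the trace
// shows exactly which service and call failed and why.
const { trace, SpanStatusCode } = require('@opentelemetry/api');

const tracer = trace.getTracer('auth-service');

async function validateToken(token) {
  return tracer.startActiveSpan('auth0.validate-token', async (span) => {
    try {
      // Hypothetical check standing in for a real Auth0 call.
      if (token.expired) {
        throw new Error('access token expired');
      }
      span.setAttribute('auth.user_id', token.sub);
      return true;
    } catch (err) {
      // The exception and error status become visible on the trace.
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
      throw err;
    } finally {
      span.end();
    }
  });
}

validateToken({ expired: true, sub: 'user-42' }).catch(() => {});
```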
4. Benefits of Distributed Tracing
Distributed tracing provides several benefits, including visualization of your microservices architecture, actionable data, and narrowing the scope of services. It also helps pinpoint where time is being spent in the code, enabling optimization. At Epsagon, we have designed our product with lightweight agents, supporting different environments and providing rich context across metrics, events, logs, and traces. Building an observability strategy requires planning and clarity on business goals and architecture, choosing between DIY and managed approaches, implementing the solution, and ensuring scalability. Choose a proactive strategy. Thank you for attending!
Distributed tracing has a number of benefits; let's look at a few of them. A typical architecture has a number of microservices involved, and one of the most important features of an observability solution is visualization. Users should also expect actionable data within these complex visualizations and service maps. For example, in these visualizations you should be able to see the latency between components as well as areas where thresholds have been crossed.
As you saw in the previous virtual shop example, a distributed tracing-based solution can also help narrow the scope of services, which takes the guesswork out of determining what has gone wrong. Without such smart filtering abilities, the architecture map becomes nothing more than an exercise in chaos theory.
Another great benefit of a distributed tracing solution is being able to pinpoint where time is being spent in the code. Here is an example of the spans which make up a trace. These can essentially tell you whether a significant portion of the time is spent waiting on an external API call, or perhaps on an inefficient database call that needs refactoring.
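As a rough sketch of that idea (assuming the OpenTelemetry API rather than any particular vendor agent), the nested spans below separate the time spent on an external payment call from the time spent on a database update; the names and timings are purely illustrative.

```js
// Sketch (assumed, not the talk's demo): nested spans around an external
// API call and a database query, so the trace shows where time is spent.
const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('checkout-service');
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function checkout(orderId) {
  return tracer.startActiveSpan('checkout', async (parent) => {
    // Child span: time spent waiting on an external payment API.
    await tracer.startActiveSpan('payment-api.charge', async (span) => {
      await sleep(120); // stand-in for the real HTTP call
      span.end();
    });

    // Child span: time spent on a (possibly inefficient) database call.
    await tracer.startActiveSpan('dynamodb.update-order', async (span) => {
      await sleep(300); // stand-in for the real query
      span.setAttribute('db.table', 'orders'); // illustrative attribute
      span.end();
    });

    parent.end();
  });
}

checkout('order-1');
```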
At Epsagon, we have designed our product around best practices for observability by talking to industry experts and our customers. For example, it should be an automated approach consisting of lightweight agents that won't consume a lot of your resources. These agents should also support different environments, such as virtual machines, containers, or serverless. All of this should come with rich context across metrics, events, logs, and traces that allows you to search full payloads or custom tags. Observability should not only tell you that something has gone wrong, but pinpoint where and why, to help reduce your mean time to detection and resolution.
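One common way to get this kind of automated, low-touch instrumentation in Node.js is OpenTelemetry's auto-instrumentations. The sketch below illustrates the general approach only; it is not Epsagon's agent, and the exporter endpoint is an assumption.

```js
// tracing.js — illustrative auto-instrumentation setup with OpenTelemetry.
// This shows the general "automated agent" approach described above; the
// exporter endpoint is an assumption.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
  // Automatically instruments common libraries (http, express, kafkajs,
  // aws-sdk, etc.) without code changes in the application itself.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Load this file before the app, for example: node -r ./tracing.js server.js
```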
And finally, if you have to build an observability strategy, you have to plan well in advance. First of all, have clarity on your business goals and architecture model, and determine your approach: DIY or managed. Both have their pros and cons. For example, with DIY you can use one of the open-source solutions, but it will require a significant development effort to get it right, if you get it right. Then implement the observability solution, and finally ensure scalability, because microservices can scale really fast, so scaling the observability solution is super critical. I would like to end this session by saying that you should choose a strategy which enables you to be proactive and not reactive. Thank you for attending the session. Please visit the URL on the slide for a special offer.