The Clinic.js Workshop


Learn the ways of the clinic suite of tools, which help you detect performance issues in your Node.js applications. This workshop walks you through a number of examples, and the knowledge required to do benchmarking and debug I/O and Event Loop issues.

71 min
04 Jul, 2022



AI Generated Video Summary

This workshop focuses on using Clinic, a suite of tools for diagnosing performance issues in applications. It covers Clinic Doctor, Clinic Flame, and Clinic BubbleProf. The workshop demonstrates how to measure server speed, diagnose performance issues, analyze MongoDB performance, and optimize code for better performance. It also emphasizes the importance of establishing a baseline, understanding user perception of latency, and using tools like Clinic Flame and Clinic BubbleProf for performance analysis.

1. Introduction to Rafael and Clinic

Short description:

I am Rafael Gonzaga, a member of open source organizations, including the Node.js and Clinic.js core teams. Fun fact: I love C++ and even have a C++ tattoo. This workshop has a few requirements, such as Node 16, Docker, and the exercise repository to download. Clinic is a suite of tools that helps diagnose application performance issues, including CPU, memory, bottlenecks, and more. In this workshop, we'll cover three tools: Clinic Doctor, Clinic Flame, and Clinic BubbleProf. Clinic HeapProfiler is not covered.

I am Rafael Gonzaga. I am a member of a few open source organizations: I am a Node.js core member and also a Clinic.js core member. I work at NearForm as well, and I am based in Brazil. A fun fact about me is that I love C++. I have a C++ pin here, and I also have a C++ tattoo on my wrist, and it has a bug, obviously.

OK, so let's move on. A few requirements for this workshop. OK, let me just move it here. One second. OK. You need at least Node 16. Not all the tools from the Clinic.js suite support Node 18, because V8 changed its async operations and Clinic.js was not updated to support that officially, but it might work. You also need Docker, and it doesn't matter which operating system you use.

Firstly, download the exercise I will send in the chat right here. Just sent. Clone the repo locally and we'll move on to the next slide. Okay. So let me introduce Clinic and then we can go ahead. This workshop will work basically like this: I will send you a kind of exercise, you try to do it alone, and then we do it together and I explain a bit of Clinic, the suite that we use.

Okay. So basically, Clinic is a suite of tools that will help you diagnose your application. Basically you'll be able to check the CPU and the memory; you will be able to find bottlenecks, memory leaks, CPU spikes, and also I/O operations that are causing major issues in your application. Okay, so it consists of four tools, but in this workshop we'll see just three. The tools are Clinic Doctor, Clinic Flame, Clinic BubbleProf and Clinic HeapProfiler. Clinic HeapProfiler is not covered in this workshop because it's relatively new and it needs a whole workshop just for that, because it's a long topic.

2. Introduction to Clinic Flow

Short description:

First, install Node Clinic. Then, we will go through a guide to the Clinic flow in performance analysis. Finding performance issues can be complex and requires profiling and benchmarking. Replicating and diagnosing bottlenecks takes time. There are different approaches to solving bottlenecks, such as the scientific method and mapping the application. Establishing a baseline is crucial before optimizing. In this workshop, we will use autocannon, a tool similar to ApacheBench.

Okay, so let's move on. First thing that you need to do: install Node Clinic. You can just run npm install -g clinic and you'll be able to run clinic --help. I will wait for everyone to install it.

Okay. Uh, okay. So you now have Clinic installed. Very good. Now in this workshop we will go through a guide to the Clinic flow, how we normally do it in performance analysis. I'm a performance engineer at NearForm, by the way, so I use this flow on client applications, for instance, a lot of client applications whose names I can't say, unfortunately.

Okay. So usually finding performance issues can be a complex task. Usually it involves luck, profiling, benchmarking, and knowing exactly the right tools. It means that it's very hard to identify a bottleneck by just using console.log.

Okay. Yeah. The vulnerability warning is totally safe, don't worry about that. Okay. So to solve a performance issue, usually we have a long path. You find performance issues, then you need to identify them. The second step is like: oh, okay, this is a bottleneck in my production application. But it is a production application; you can't experiment on it directly. The next step is to replicate it in a prod-like environment, not the live one. Once you have that, you diagnose it to find the bottleneck and then replicate it in your local environment. Then you can use the right tool to pin down the bottleneck. And usually it takes time. You need to stare at the screen for a long time, go walk your dog, and maybe in the middle of the journey you think: oh, this might be the issue. And then you try to solve it. There are several approaches to solving a bottleneck. You have the scientific method, where basically you have a hypothesis and you test the hypothesis. And you also have ways to map your entire application. It will depend on the context you are working in.

Okay. So at the end of the day, I hope that you have a performant application. Okay, actually at the end of the month, or the week, I don't know, it takes time. The first thing that you need to do is establish a baseline. When you are working on an API, for instance, you need to establish a baseline before optimizing. So before optimizing anything, you need to have data, you need to have dashboards, you need to have at least something to compare against. For instance, how can you measure how much improvement you got without a baseline? You can't.
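To make the point concrete, a tiny sketch (the numbers are illustrative, and `improvementPercent` is just a helper invented for this example, not part of any tool):

```javascript
// Why a baseline matters: "improvement" can only be computed against one.
// The numbers below are illustrative, not measurements.
function improvementPercent(baselineReqPerSec, newReqPerSec) {
  return ((newReqPerSec - baselineReqPerSec) / baselineReqPerSec) * 100;
}

// e.g. going from 80 requests to 1000 requests over the same 10-second run:
console.log(`${improvementPercent(80, 1000).toFixed(0)}% more requests`); // prints "1150% more requests"
```

Without the baseline number (80), the second run's 1000 requests is just a number with nothing to compare it to.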

Okay. So here in this workshop we will use autocannon. This is a tool written in Node.js, and it's pretty good. It's pretty similar to ApacheBench or any similar benchmarking tool.

3. Measuring Server Speed and Running Clinic Doctor

Short description:

Basically, you will throw requests at your HTTP servers and get stats like requests per second, latency, p99, and p50. To measure server speed, you could run the application, open localhost:3000 in your browser, and refresh, but for reliable results use a proper tool: autocannon. Install autocannon, run the HTTP server, and run autocannon localhost:3000. Ensure you benchmark on a dedicated machine. Autocannon shows server speed. Run Clinic Doctor and autocannon, then go to Clinic Doctor for the results. No issues detected.

Basically it will throw a lot of requests at your HTTP servers and then you get some stats, like how many requests per second (or over the whole 10 seconds, it doesn't matter), and the latency: the p99, the p50. And that's it.
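For intuition about what a pXX number means, here is a minimal sketch of computing a percentile from raw latency samples; autocannon itself computes these from a histogram internally, so this is only an illustration:

```javascript
// pXX latency: the value below which XX% of the sampled requests completed.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Ten fake latency samples in milliseconds; note the one slow outlier.
const latencies = [12, 15, 14, 18, 250, 16, 13, 17, 15, 14];
console.log(percentile(latencies, 50)); // 15  — the typical request
console.log(percentile(latencies, 99)); // 250 — the tail that p99 exposes
```

This is why the p99 matters: the average hides the slow outlier, while the p99 surfaces it.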

Okay. So, our first server. The first thing that you need to do, once you have cloned the repository, is go to the folder hello-world. Okay. In this folder you'll see that there is a basic HTTP server. And how can you measure how fast it can go? Firstly, you could run the application, open localhost:3000 in your browser, refresh it a bunch of times, and make an assumption: oh, this is fast, or this is not fast. Okay, this is not scientific; it will not work. Absolutely not. You need more reliable results by using a proper tool and, as I mentioned, we use autocannon here.

So the first thing that you need to do is run npm install autocannon. Please do that and answer me in the chat when you are done. Okay. After installing, run the HTTP server and, in another terminal, run autocannon localhost:3000 and send me the result: how many requests in 10 seconds and how much throughput. Okay. It really depends on your machine, by the way.

So one thing to mention here that is very important: when you are performing a benchmark, make sure to use a dedicated machine, which means that if you are in a Zoom call, it will affect your benchmark. If you are just moving your mouse, it will affect your benchmark. Okay? So usually when I perform a benchmark, I run it on a dedicated machine in a cloud provider, with a dedicated CPU. Because if you just get an EC2 instance in AWS, it has a shared CPU, which means that your benchmark will not be totally reliable. Okay? Running locally is totally acceptable, but if you want to measure very close to the production environment, I strongly recommend running a dedicated machine. Okay. Let's move on.

So autocannon is telling us that our server is really fast. It really depends on your machine, as I said. But this is expected, because the server is not doing much. Okay? It's just a simple HTTP server. Now, we'll run Clinic Doctor. So open two terminals. In the first one you run clinic doctor -- node hello-world/index.js. And in the second one, you run autocannon again. Then, when autocannon finishes, you go to the Clinic Doctor terminal and do a Ctrl-C. It will open a tab in your browser. Okay? When you're done, let me know. So basically you'll get a Doctor report pretty similar to this. I mean, it will depend on your machine; you may see the event loop delay higher or lower. But on a simple HTTP server, you'll see that no issue was detected. Okay? It means that there's no recommendation.

4. Clinic Doctor and Database Setup

Short description:

Clinic Doctor helps diagnose performance issues in your application and guides you towards more specialized tools. Symptoms such as low CPU usage, blocking garbage collection, frequent event loop blocking, or a chaotic number of active handles might indicate a potential problem. No issues were found in the simple HTTP server example. It is recommended to use Node 16 and to ensure that autocannon stops before pressing Ctrl-C. Clinic Doctor can be run with the --autocannon flag to automatically run autocannon when the server starts. Clinic Doctor serves as the entry point for performance analysis, redirecting to more specialized tools like Clinic Flame and Clinic HeapProfiler. MongoDB is required to simulate API requests, and Docker is recommended for running it locally.

Okay? It means that there's no recommendation. So no issue was found; everything looks good. So Clinic Doctor basically helps diagnose performance issues in your application and guides you towards more specialized tools to look deeper into your specific issues. Symptoms such as low CPU usage, blocking garbage collection, frequent event loop blocking, or a chaotic number of active handles might indicate a potential problem.

So: simple HTTP server, simple Doctor report, no issues. Let's keep going. You got CPU usage flagged? Yeah, as I said, most benchmarks are affected by the Zoom call and by whatever is happening on your machine. For instance, if you have a game running in the background, it can affect the result, okay? But in theory, if you run it again in a more dedicated environment, if you close other tabs and make sure the Node.js process will not be disturbed, the CPU usage issue should not be detected, okay? Yeah, RSS is basically resident set size; it is covered in the other workshop, the heap profiler one. It is totally memory related, but I think we will see something related to it in this workshop, okay? You are using Node 14? Try to use Node 16, please. And also, make sure that autocannon stops and then you press Ctrl-C just one time, okay? I can run it here, if you want. Let's increase the font size and go to the real hello-world. Here, if I run clinic doctor -- node index.js... no, it's hello-world/index.js, okay? And then autocannon. Okay, autocannon is done, so I go to the other terminal and press Ctrl-C, and then the data will be analyzed and a new tab will open, just like that, and no issue is detected, okay? So in the future, if you are doing this on your production app and you find any bug, please raise an issue in the ClinicJS repository, okay?

So there's one thing that will help you in further analysis: in Clinic Doctor, you can use --on-port and then your command, or also --autocannon directly. It means that, for instance, instead of having two terminals I can just do this. Let me move it here, okay: I pass --autocannon with the root path and then I pass my application. So when autocannon finishes its work it will close the Node.js application automatically. Look at that: running it, and it will close, and when done it will open a new tab right here automatically. So if the two-terminal approach doesn't work for you for some reason, you can do that. Sorry, could you explain that command? Oh, sure. So let me increase the font size. Okay. We have clinic doctor --autocannon. The --autocannon flag basically wraps autocannon: instead of running Clinic in one terminal and autocannon in another, you run a single command. So if you run Clinic Doctor with --autocannon, it will run autocannon when the server starts, okay? When node hello-world/index.js starts a server, a socket, it will run autocannon. But on which path specifically? You pass the autocannon parameters here. So I want autocannon to run with the default parameters on the root route, slash. The root route is the same as localhost:3000/. Or, for instance, if for some reason I want to hit a different route, /test, it will run autocannon there: basically autocannon localhost:3000/test.

OK, so awesome, now you have insight into Clinic Doctor. Usually, when you are doing a performance analysis, Clinic Doctor is the entry point for everything, because it will give you the visibility that you need in order to find any kind of bottleneck. For instance, if you are debugging a CPU bottleneck, Clinic Doctor will say that the CPU is a problem. The entry point is still Clinic Doctor; then you will be redirected to Clinic Flame, which we will see further on. If you have a problem with memory, Clinic Doctor will say: go to Clinic HeapProfiler, and then you can check that. Let me see the other image. Next, you need to set up a database, because we will use MongoDB running locally to simulate a real API request.

Okay? So first, pull the image onto your machine. I know that for the macOS folks Docker can be difficult. If you are having trouble with Docker, you can check the script that we have created to run MongoDB locally, but I don't recommend it; try to use Docker, please. You can skip this step if the Docker container is not working, okay? Do we need to test it to see if it works somehow, or just run the command? Actually, if you run both commands and they don't raise any error, you should be able to go to the browser at localhost:3000 and you will see a result very similar to this. Oh, it's not this. Let me see exactly. OK, no, no, don't worry. If you just run docker ps and the container is up, that's fine.

5. Running the New Server and Testing Performance

Short description:

Now, let's move on to the new server. Run it using node index.js and access localhost:3000 to check the data. If you encounter any issues, they may be related to your MongoDB server. In that case, run docker run again and check with docker ps. Afterward, run autocannon to test the server's performance. npx and a global installation both work; it's a matter of personal preference. Great, let's continue.

That's enough. OK. Let's move on then. So now you have Mongo running with the imported npm metadata, so you can run the new server. Basically the new server is pretty similar to this: if you go to the npm cache server folder, you'll see this code. OK. And if you go to that folder, run node index.js and go to localhost:3000, you should be able to get this kind of data. Can you check if you can get it? If you can't, it means that your MongoDB is not running properly, so basically you need to run the docker run again and check with docker ps. Okay, now try running autocannon against it and see how many requests per second it can deliver. If you are confused, just follow me, okay? I run my server, and in another terminal I run autocannon. Is there a difference if you run it via npx or if you install it? Actually it's just a preference. I usually don't like to install global scripts on my machine. I mean, even running it via npx is a personal preference. Okay. Great. 51 requests in 10 seconds, 100 requests in 10 seconds. On my machine, 81 requests. Oh, that machine is very good, by the way. So, let's move on.

6. Running Clinic Doctor and BubbleProf

Short description:

Try running Clinic Doctor against the slow server to investigate the issue. If Clinic Doctor reports a potential I/O issue, consider using Clinic BubbleProf for further analysis. BubbleProf provides insights into async operations and can help identify bottlenecks. The UI displays bubbles representing different operations, with colors indicating their performance. MongoDB activity can be tracked using BubbleProf, and the UI provides stack traces for each operation. By analyzing the stack traces, you can identify connections to MongoDB and potential performance issues. BubbleProf is a valuable tool in your performance analysis toolkit.

So you have probably noticed from the autocannon stats that the server is quite slow; the request rate is very low. Let's try to investigate it. So try to run Clinic Doctor against this server and see what it reports. You run it like before: instead of node index.js you run clinic doctor -- node index.js, and then you run autocannon against it, okay? Try to do it.

So basically, you get something pretty similar to this: CPU usage and a potential I/O issue. When you go to the recommendations here, you'll see that Clinic BubbleProf is the recommended tool to run to understand it. It's called BubbleProf because there are a lot of bubbles in it, nothing special, really. But if you run Clinic BubbleProf, you might see something interesting. So let's go back here.

As you can see, the recommendation is to use BubbleProf. So basically what you need to do is run the same command, but you remove doctor and put bubbleprof and run it, okay? You'll get a deprecation warning, but I have fixed it in version 18 of Node.js, so don't worry about that; that's totally fine. Do these colors mean something or not? Yes, they do. When you hover over Dependencies, you'll see that some things are highlighted, and Node core as well. The operations are grouped by color: yellow is the operation taking more time, purple is the fastest one, and the middle is in between, as you can see here. Basically, all the async operations are handled by BubbleProf. As you can see here, I ran that script and I got 4,500 calls of AsyncResource over 10 seconds in my application. That's a lot of asynchronous calls, but it doesn't necessarily mean a problem. You can check the historical data of the async operations here, and here you'll see a bunch of bubbles. Actually, our server is very small, so you see few bubbles, but in a big application you'll see a lot of them. That's pretty cool, actually. And the idea of a bubble is basically to track async operations. So when you hover, the first one is our entry point, our Fastify HTTP server, pretty similar to the Express one, but better. When you go to the left side, you'll see the npm cache server, and if you click here, it will be expanded, and as you can see, this is the main connection to MongoDB. We don't really care about it, because the connection is created once, so it will not affect our benchmark at all. But, for instance, if you click here, you will see the stack trace for the operation, that is, where in the code it was created, okay? Going back: so the left side doesn't matter to us right now.
The right side is the npm cache server. And if you click here, you'll see that there's one frame. It was called first in my project's index.js, line 11. If I go to my server at line 11, let me just make sure... yes, my line 11 is this: it's basically my call to the MongoDB database. Okay? So let me see here. Let me just zoom out. As you can see here in the stack trace, it's taking nine seconds just in this I/O operation. Nine seconds during the profiling, during the benchmark. I mean, 98% of our time was spent in this connection. Okay. And the second one is the other cursor request. If you click here and check the stack trace, it will show you index.js, line 33, which means this is another call to the MongoDB database, another query. Okay. So now we have found that most of our time was spent in this connection. What can we do to improve that? Any idea what the issue is, or how we can improve it? So: bigger bubble, more time is spent there.

Basically it is. So, BubbleProf showed that there is a ton of MongoDB activity, as you can see from the cursor operations. This is divided into the MongoDB bubbles and also the create-connection one, and you can check it in the UI. BubbleProf is hard to read, honestly, very hard to read. But once you get more familiar with it, I think it's good to have in your suite of tools.

7. Investigating MongoDB Performance and Adding Index

Short description:

BubbleProf indicates that most of the time, 98%, is spent in MongoDB. We need to investigate how much time is spent on the database. Running the query without .explain doesn't show the delay, but running it with .explain reveals a COLLSCAN issue. COLLSCAN means a full scan of the collection, which is not efficient. To improve this, we can add an index and switch to the indexed modules collection. After making these changes, re-run autocannon to see the performance improvement.

Okay. So, let's move on. BubbleProf is telling us that a lot of time, 98%, is being spent in MongoDB. So we need to investigate how that time is spent. Usually, when you are working on a handler which is doing a database call, it's expected that most of the time is spent in the database. That's totally fine. But we need to check how much is spent there. The connection, the query: how much time is it taking?

So, go into the container, the same as Andrew tried: you go into the container with this command. I will run it here. The container ID is what you get from docker ps. Okay. And then you run use npm to switch to the right database, and then you run this query: db.modules.find(), basically the same query that we are doing in our application. Okay. If you run it several times, you will not see that there are almost 500 milliseconds of delay; you cannot see that from here. But you can run this query with .explain. Yeah, it can take a few seconds; it really depends on your machine. Don't worry too much.

I don't know if I'm saying your name correctly, I apologize. But if you run the same query with .explain, you see a result pretty similar to this. This workshop is not focused on databases, but you need to know how a database works in order to optimize it. As you can see, when you run .explain, we are getting COLLSCAN, as was said in the chat; that's totally right. COLLSCAN means a full scan of your collection. It means that for this query we are reading the entire collection until we get the response, and our database is not even that big; it would be even worse on a big database, okay? So COLLSCAN is not good for us. Any idea how we can improve that? Just throw an idea in the chat: how can we improve that?

Okay, we have found that the COLLSCAN is a problem, because it scans the entire collection until it gets to the document. Great: add an index. Great, great, great. "Switch to Postgres", a good one, very good. We actually have another collection already prepared. You can check it: if you run db.getCollectionNames(), you'll see that there are other collections. The other collection is the indexed modules one. So, if you just change modules here to the indexed collection, it is way faster and doesn't use a COLLSCAN anymore. Look at that. So, what you can do now is go to our exercise and, on the query line, change modules to the indexed collection and run autocannon again. What result are you getting? Just for reference: I ran it without the indexed modules, autocannon with the default values, and I'm getting almost 80 requests in 10 seconds. If I change to the indexed collection, restart my server and run it again, I'm getting 1,000 requests. A big jump, right? But do you think that the current requests in 10 seconds is good? For instance, let's assume that our HTTP server is handling more requests.
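The difference the index makes can be mimicked in plain JavaScript: a COLLSCAN behaves like Array.prototype.find over every document, while an index behaves roughly like a keyed Map lookup (a loose analogy; a real B-tree index is more subtle):

```javascript
// 100,000 fake "documents", standing in for the modules collection.
const docs = Array.from({ length: 100_000 }, (_, i) => ({ id: i, name: `pkg-${i}` }));

// COLLSCAN: examine every document until a match is found — O(n) per query.
function collScan(name) {
  return docs.find((d) => d.name === name);
}

// Index: build a lookup structure once, then each query is a single step.
const byName = new Map(docs.map((d) => [d.name, d]));
function indexedFind(name) {
  return byName.get(name);
}

// Same answer either way; only the amount of work per query differs.
console.log(collScan('pkg-99999').id);    // 99999, after scanning 100k docs
console.log(indexedFind('pkg-99999').id); // 99999, after one Map lookup
```

Building the Map costs time and memory up front, which is exactly the trade-off a database index makes: slower writes and more storage in exchange for fast reads.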

8. Analyzing Request Latency and User Perception

Short description:

Again, I'm getting 1,000 requests, a big jump. But is that good? Adding one flag to autocannon simulates 100 concurrent clients, which yields more requests per second, but the latency tells another story: with 10 clients the max latency is 131 milliseconds, which is acceptable, but with 100 clients the max is 800 milliseconds and the average almost 700 milliseconds, which is not. Users perceive a response under 100 milliseconds as instant, a small delay between 100 and 300 milliseconds, a tolerable wait up to one second, and beyond that they context-switch and abandon the task. This data comes from the Human Benchmark project.

Again, I'm getting 1,000 requests, a big jump, right? But do you think that the current requests in 10 seconds is good? For instance, let's assume that our HTTP server is handling more requests. To do that, you can just add one flag to autocannon: I want to handle 100 concurrent clients at the same time. If I run it, obviously it should get more requests per second, because more clients are requesting, okay? But it doesn't mean that your server is fast at all. For instance, right here, when we run autocannon with the default values, with 10 concurrent clients, we are getting a max latency of 131 milliseconds. Kind of acceptable, okay? But when we increase the concurrent clients to 100, our max is 800 milliseconds and our average is almost 700 milliseconds. And this is not acceptable, okay? Yeah, you probably got some errors because of machine timeouts and so on, okay? I wrote a blog post, I think it's this one, with this amazing image. Say your client enters a website and wants to buy a product. I want to buy a new keyboard, by the way, I love keyboards. So I go to this website and select my product, and when I click the button, the browser performs a request, does something, and I get a response. If I get that feedback between zero and a hundred milliseconds, I will feel: oh, this is instant, this is very fast, that's fine. When it goes from a hundred to 300 milliseconds, I will feel a small perceptible delay. When it goes from 300 milliseconds to one second, I will feel: okay, the machine is working, but I can wait. But when it goes beyond that, more than one second, I will likely have a context switch and the task will be abandoned. For instance, when I have a context switch, what I usually do is go to my Twitter and look at other things. Or, for instance, I find the same keyboard on a competitor's website. And that's not great.
We don't want that for your clients, okay? And this data is not taken from nowhere. There is a project called Human Benchmark, which is pretty good, by the way; you can check it. And these millisecond thresholds are real, okay? They're not arbitrary rules. Okay, let's move on.

9. Analyzing Event Loop Delay with Clinic Flame

Short description:

If you look at the latency, it's higher, indicating a bottleneck. We need to start again from Clinic Doctor to identify any new issues. An event loop delay occurs when a synchronous task blocks the event loop, delaying the processing of other tasks. Clinic Doctor detected an application event loop issue, recommending the use of Clinic Flame. Running Clinic Flame will generate a flame graph for analysis.

So, basically, if you look at your latency, you will see that the average is higher, right, Kiryakos? Yes. Yeah, so that's not acceptable. That's totally not acceptable. So, okay, we have improved our app, absolutely improved it. But we still have a bottleneck; we still need to work on it.

What can we do? Okay, we fixed one bottleneck, we got more requests per second, but there's still a problem, still a bottleneck. What we do is start again from point zero, and point zero is Clinic Doctor. So, try to run Clinic Doctor again and see if you can pick up any new issues, okay? I will let you run it and try to analyze the Clinic Doctor report a bit, okay?

Okay, basically, you got something pretty similar to this, right? You got a potential event loop issue. Do you know what an event loop issue is? What does this mean, after all? Can somebody open your mic and try to explain? Basically, as you know or might know, Node.js is a non-blocking platform. It doesn't mean that there's only a single thread in Node.js; there is a bunch of threads in libuv, but the main loop is a single thread. It ticks every time it runs your tasks. For instance, I want to do an HTTP call. Actually, a MongoDB call. I go to my loop, and the loop says: okay, I want to create this connection to the database. It creates the database connection and keeps going with the loop. And when the connection is established, the event loop receives a notification: okay, this task is done, I can pick it up again. And that's fine, it's an asynchronous operation. So the loop is not blocked, it keeps going. But let's assume that you are doing a synchronous task. A synchronous task means that you are waiting for something, you are doing a synchronous operation before moving to the next one. For instance, if you are iterating over something, that is a synchronous operation. It means that your event loop is blocked in the middle of your task. And the time that you see as event loop delay is the time that your task blocked the event loop. It means that, for the next tick of the event loop to pick up new tasks, it needed to wait at least 40 milliseconds. And usually the event loop delay in milliseconds should be very low, okay? Try to make it very low. So, as you can see, Clinic Doctor showed that it detected an application event loop issue. And what is the recommendation? Use Clinic Flame. Okay, you can read more.
You know, it will have a better explanation of event loop delay. You can check right there what a blocking operation is, a synchronous operation, an asynchronous operation. You can check the next steps and so on. By the way, we have a long tutorial on the Clinic.js website, okay?

So, okay, the recommendation is to use Clinic Flame. Let's use that. You can run it the same way you ran Clinic Doctor. Basically, the thing that you need to do here is remove doctor and add flame. Simple as that, okay? Do it, and it will generate a flame graph. Flame graphs are amazing, by the way. Generate it and let me know in the chat, okay? We are almost done, okay? Don't worry.
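Swapping the subcommand looks like this; the `--on-port` hook (Clinic exposes the app's port as `$PORT` to the command) lets Clinic drive the load with autocannon and stop cleanly when the benchmark ends:

```shell
# Before: clinic doctor --on-port 'autocannon -c 100 localhost:$PORT' -- node index.js
clinic flame --on-port 'autocannon -c 100 localhost:$PORT' -- node index.js
```

When autocannon finishes, Clinic stops the process and opens the flame graph in your browser.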

10. Analyzing Flame Graph and Event Loop Delay

Short description:

Flame graphs show CPU usage and call stacks. computeMagic is a bottleneck and can be removed. High event loop utilization doesn't always indicate a delay; the event loop delay here is acceptable. False positives can occur, so further investigation may be needed.

When you're done, try to read the flame graph. Not read exactly, but just play with a few things and I will explain better. Okay, flame graphs. This is a nice tool to see how much time was spent on the CPU: which function is spending more time in your CPU? As you've seen on the last slide, Clinic Doctor showed that a potentially blocking operation, normally a CPU-heavy task, is blocking the event loop, and the recommendation was to run Clinic Flame. Clinic Flame generates these flame graphs. How to read them? On the x-axis, wider basically means more time spent on the CPU. If you hover over this one, for instance, it shows that 40% of the time you were running this profiling, the CPU was working in this function. That's a lot, really. It's really a lot. And the y-axis is basically the call stack. So for instance, we have the index.js line and we have the cursor and so on. We can remove the dependencies here if we want, but let's keep them there. If we want to see V8 call stacks, we can just click here, okay? But basically, when this call is done, we call computeMagic. As you can see here, computeMagic is the bottleneck; computeMagic is doing a lot of CPU work. What can we do to solve that? Any idea? So computeMagic is used a lot. If you go to the nearform computeMagic code, you'll see that it's creating a hash, iterating several times, and building the hash with JSON.stringify. By the way, JSON.stringify is a synchronous operation as well. So we are iterating with a lot of for loops. I mean, if you look at real code, this happens a lot, okay? I've delivered performance consultancy for several companies, and a lot of companies are using functions they don't really need, okay? For instance, the magic is not a thing that we need in this application. We can probably just remove it.
Because this endpoint, as you can see here, returns the five newest and the five oldest rows in your database. So there is no need for computeMagic. The magic was not expected to be there. I mean, it was an old feature, but nobody used it. And the best optimization is always to remove something. For instance, in HTTP, the best request, the fastest request, is a request not made. So computeMagic is a thing that we don't really care about, we don't need. So what we can do is just remove it. Pretty simple. I would love to remove a lot of things, okay? I normally like to remove things. In Fastify, I love to remove a lot of things in order to make it fast, okay? So try to remove it and run autocannon again, and show me the results, okay?

Okay, yeah, that's totally acceptable, because basically, event loop utilization doesn't mean event loop delay. Let me explain a bit. This is a thing that we are fixing in Clinic, by the way. Event loop utilization just means that you are performing work, you are using the event loop. So for instance, you run your HTTP server, and you also run autocannon. Autocannon is performing a bunch of requests to your server, so it's expected to see event loop utilization around 100%. But it really depends on your machine, okay? If your machine can handle more requests than others, the event loop utilization will not be so big, okay? So in your case, it is showing high event loop utilization. But if you look at the event loop delay, it is not more than four milliseconds, so that's totally acceptable, okay?

Yeah, thanks. So for most of you, you see that everything looks good, right? As in my image, in my dashboard here, everything looks good. But can it show that there is a problem when it's not actually a problem? And how do we recognize those cases, the false positives? Because here you explained that it's okay, but if we are on our own, we just run it and think: okay, still a problem. What should I do? Yeah, totally.

11. Analyzing Performance and Using BubbleProf

Short description:

To identify false positives in performance analysis, it's important to understand the expected behavior of CPU usage, event loop utilization, and event loop delay. Even if Clinic Doctor shows no issues, it doesn't mean there is nothing to optimize. Establishing a baseline is crucial, and in this workshop, we aim to handle 50,000-100,000 requests in 10 seconds. To further analyze performance, try running the application with BubbleProf.

What you can do, for instance, in the case of Kiryakos: instead of running autocannon in the same command as Clinic Doctor, you can run autocannon in a different terminal. Basically, you run your server with Clinic Doctor, and you run autocannon in another terminal. And when autocannon ends, you wait for a couple of seconds, like three seconds, before sending a Ctrl+C to your application. Then the event loop utilization should be low, and you will not see any issue. Identifying false positives is hard; it really depends on how much experience you have with it. When you are performing a benchmark, it's expected to see high CPU usage and high event loop utilization. What can't be high is the event loop delay. Regardless of the load that you are sending to your server, the event loop delay should be as low as possible, okay? So, in the case of Kiryakos, it's not a false positive, it's a positive: the event loop is doing a lot of work, the event loop utilization is high, and that's expected. That's fine, because you are doing a performance analysis, a benchmark analysis. You can see that on my machine the event loop utilization is around 94%, which is pretty similar, okay? But one important thing to mention here: even with Clinic Doctor showing that it detected no issue, it doesn't mean that your server is done, that you have nothing to optimize. Clinic Doctor is not a silver bullet for everything. You need to judge it yourself, like the baseline you established at the beginning of this workshop. At least on my machine, I was handling around 400,000 requests. And now I'm handling, let me see how much here, I don't remember. Let me run again, okay? node index.js, and autocannon. So, I'm handling almost 49,000 requests in 10 seconds. It's far away from our baseline.
I mean, it's very difficult to be close to the baseline, because the baseline is just a simple HTTP server, and here we are using a framework and a database connection. So, at least on my machine, you want to reach something like 50,000 to 100,000 requests in 10 seconds, okay? But we need to be better. For me, this amount of requests in 10 seconds is not good. I can improve, even with Clinic Doctor showing me that there is apparently no potential issue. Try to run the same application with Bubbleprof, okay? I will run it here as well. Not clinic doctor this time, but clinic bubbleprof.
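The two-terminal approach mentioned above looks like this, assuming the app is index.js listening on port 3000:

```shell
# Terminal 1: run the server under Clinic Doctor
clinic doctor -- node index.js

# Terminal 2: drive the load
npx autocannon -c 100 -d 10 http://localhost:3000

# When autocannon finishes, wait ~3 seconds, then Ctrl+C in terminal 1
# so the tail of the recording shows an idle event loop.
```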


Question about Fastify and Clinic Usage

Short description:

I have a question about using Fastify in the baseline. Even though Fastify is fast, it adds overhead compared to the native Node.js HTTP server. However, in this workshop, we're focusing on showing layers of improvement, so we're using Fastify in the second application to simulate a business requirement. Attaching Clinic to a production workload running in the cloud is possible but not recommended as it will handle fewer requests. When running Clinic with webpack or TypeScript, make sure to build everything before running Clinic. Source maps are not directly supported by Clinic, but some tools like Clinic Flame may work well with them. Bubble Prof does not support source maps. If you encounter an error, try switching to the latest version of Node.js. The 'premature close' error in Bubble Prof has been known for some time.

I have a question. Go ahead. In our baseline index.js, we just use the HTTP module coming from Node.js; we don't use Fastify to set the baseline, because, like you said, it's no framework, we don't use anything. Why don't we just start a Fastify server in the baseline? We would know there is some small overhead, but we would count the requests we can serve using the bare minimum. Yeah, that's a totally fair question. Basically, let me just remove it here. Basically, we added a baseline because, even in this workshop, we are not showing a business application; we are showing layers of improvement. Fastify is fast, but it is still a layer on top of HTTP. Once we need to use Fastify, we need to understand how much overhead it adds on top of HTTP. So in our case, we are using Fastify in the second application just because it's a business requirement; we are simulating a business requirement. We could, of course, run Fastify for our baseline instead of the Node.js HTTP server, but the purpose of this workshop is to show layers of improvement, and even Fastify, being very fast, is still a layer. Yeah, okay, because despite Fastify being fast, and I know because I've been using Fastify for years, it adds a bit of overhead over the native Node.js HTTP server. So in our comparison, we cannot reach the requests that we got when we ran the first index.js. In my opinion, it would be a fairer comparison if the baseline also used Fastify. But okay, I understand the point of view. I'm just saying that we cannot improve further than a certain point. Yes, exactly. Because we are using frameworks and libraries that we cannot avoid in our business application. Exactly, that's a good one. I'm just fixing clinic here.
Sometimes it has a bug, but it's a dependency of clinic. Is it possible to attach clinic to a production workload running in the cloud? Yeah, it's possible, but it's not recommended. When you attach clinic to something, it will handle way fewer requests. It was designed to run locally on a developer machine. What about webpack and TypeScript? Okay, to run clinic with a webpack or TypeScript application, you need to build everything first. You really don't want to run clinic before the bundling, so make sure to run clinic on your final, built application. Yeah, you should be able to see recommendations. As for source maps, we don't support them directly. It might work for a few tools of the suite; I guess Clinic Flame works well with them, but I don't think the Heap Profiler will work well, honestly. Let me run it again. Did you guys run the Bubbleprof? And as you can see in the warning here, Bubbleprof does not support source maps. So that answers your question, I guess. Okay, I'm having an error. So what I can do is switch my Node.js version. It doesn't really matter, because I have the results here already, but did you guys run the Bubbleprof with the latest Node.js version? With the latest application? Done, okay. Got an error. Okay, no problem. Which error? The error is like premature close? Kiryakos? Yes, yeah, this error has been in Bubbleprof for some time.

Analyzing Bubbleprof and Optimizing Code

Short description:

Bubbleprof is a tool that visualizes the software's call stacks and performance. By analyzing the bubbles on the right side, we can identify sequential operations that cause delays. To optimize the code, we can make the operations concurrent using promises. By refactoring the find method and using Promise.all, we can improve performance. The newest and oldest values can be obtained by running both queries through Promise.all, sorting the array in each direction. This approach improves performance without affecting the code's functionality.

It's the same that I'm getting. This is a known issue, and we are working on solving it. But basically, let's go to this image that I have here. Okay, Ramesh, go ahead; thank you for your participation, by the way. Basically, if you run Bubbleprof, you'll get something pretty similar to this. And as we have seen, the left side of the Bubbleprof output doesn't really matter, but the right side is an important piece of this application. If you click here, you'll see that Bubbleprof is basically a bunch of bubbles, and you have call stacks here. Here you'll see that the database connection, as we have seen, is taking time. And when it's done, it calls another database cursor. It means that this is sequential: the second operation is waiting for the first one to complete before it runs. If you look at the code, we are not using the newest result in the second query. So what we can do here is be concurrent; we don't need to wait for one to run the second. Promise.all is a good option. Mongojs uses the old approach, the callback hell approach. So what we can do is import promisify from util; it's built into Node.js. Okay, what we can do basically is use Promise.all. We need to recreate the find method, so let's refactor it. The find method basically takes whether it's sorted ascending or descending, and we return a new promise. Actually, we can just remove that, because we are using Promise.all, right? A good approach is to show it without Promise.all first; you can refactor if you want, but basically the performance is the same, okay? So let's create the promise.
Basically, I have the resolve and the reject, and inside the function I call collection.find, sort based on modified, obviously, limit five, and whenever I receive the response, the data: if I have an error, I reject with the error; otherwise, I resolve the data. Simple as that. So what we can do here: we have the newest and we have the oldest. Let's comment it out; actually, let's remove it from here, paste it here, and comment it, because we will do it better. So basically we have newest and oldest here. That equals Promise.all, and the array is basically: we can find with minus one for newest, and we can find with one for oldest. And that's pretty much it. Let me see if this works, node index.js. Should you return on line eight, or at least have an else, because you resolve in any case, right? Sorry? I mean, on line 12: if error, you reject, and then you always resolve. So shouldn't it be else resolve? Yeah, it is the same. When you reject something, basically it will throw, but it's the same as this.
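Put together, the refactor described above looks roughly like this, assuming a mongojs-style collection where `.limit(n, callback)` takes the callback (the names `find` and `newestAndOldest` are illustrative, not the workshop repo's):

```javascript
// Wrap one callback-style query in a promise.
function find(collection, order) {
  return new Promise((resolve, reject) => {
    collection
      .find()
      .sort({ modified: order })
      .limit(5, (err, data) => {
        if (err) reject(err);
        else resolve(data);
      });
  });
}

// Run both sort directions concurrently instead of one after the other.
async function newestAndOldest(collection) {
  const [newest, oldest] = await Promise.all([
    find(collection, -1), // five newest rows
    find(collection, 1),  // five oldest rows
  ]);
  return { newest, oldest };
}
```

Nothing about the results changes; the two queries simply overlap in time instead of queueing behind each other.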

Running New Solution and Performance Comparison

Short description:

Now let's run the new solution and see the performance improvement. There is an error at first, but after fixing it, the server works correctly. The number of requests handled may not show a significant increase, depending on the machine's load. Comparing it to the old version, the old one handles slightly fewer requests, so the Promise.all refactor is a small but real optimization. It's important to note that the performance increase may be bigger in other cases.

Okay, good. Okay, so now we have a simpler solution, and it should be faster, right? So let's run it, running it and performing autocannon again. I was handling almost 50,000 requests in 10 seconds; let's see how much now. Okay, there is an error. Let me see what's happening. Localhost 3000. Okay, I think I have an error here. It's basically: the limit five, the sort, the find. No problem here. Modified. Should it be an object, modified: one? Oh yes, you're right, thank you. Let me just check if this is working. Yes, it's working. And then let's see how many requests. Not too much, not too much. Yeah. Basically, let me see if I'm using everything correctly. Is this working? Yeah, I am. But it really depends on the machine load, because, I mean, I have delivered this workshop several times, and there is no big jump, but there is a small perceptible improvement. So, just for comparison, I can go to the old one, this one, I guess, right? Copy that. And let's see here. Remove the magic, remove this, remove this, and run it again. It handles less, or almost the same, but with a small perceptible difference. As you can see here, 2,000 fewer requests; it really does affect it, okay? And yes, you can use util.promisify to optimize it if you like to refactor. So basically, just one thing here I forgot to mention, let me see. Okay, you should see a small performance increase as well. Yeah, as I thought, it's not too much in this case, but in most cases, it's a lot.
