Out of the Box Node.js Diagnostics


In the early years of Node.js, diagnostics and debugging were considerable pain points. Modern versions of Node have improved significantly in these areas. Features like async stack traces, heap snapshots, and CPU profiling no longer require third-party modules or modifications to application source code. This talk explores the various diagnostic features that have recently been built into Node.

You can check the slides for Colin's talk here. 

34 min
17 Feb, 2022



AI Generated Video Summary

This talk covers various techniques for getting diagnostics information out of Node.js, including debugging with environment variables, handling warnings and deprecations, tracing uncaught exceptions and process exit, using the V8 inspector and dev tools, and generating diagnostic reports. The speaker also mentions areas for improvement in Node.js diagnostics and provides resources for learning and contributing. Additionally, the responsibilities of the Technical Steering Committee in the Node.js community are discussed.

1. Introduction to Node.js Diagnostics

Short description:

This talk is about getting diagnostics information out of Node.js without using a lot of third-party tools. Over the years, a lot of work has been put into diagnostics specifically and how we can improve them in Node. For most use cases, you can do a lot of debugging and getting information out of Node, just using the Node executable and Chrome.

Hi, everyone. Thanks for coming to my talk titled Out of the Box Node.js Diagnostics. So, this talk is about getting diagnostics information out of Node.js without using a lot of third-party tools.

So, just for a little bit of background, getting diagnostics out of Node.js used to be pretty hard. Node.js used to follow a small core philosophy, which meant that a lot of functionality was left up to npm modules. So, we had things like node-inspector for debugging, heapdump for capturing heap snapshots, longjohn for asynchronous stack traces, and things like that. But, over the years, a lot of work has been put into diagnostics specifically and how we can improve them in Node. A lot of people might not be aware that these things exist yet, but for most use cases you can do a lot of debugging and get a lot of information out of Node using just the Node executable and Chrome.

A lot of the content in this talk is actually from the official CLI documentation, if you want to look that up on your own. And this talk also assumes the latest version of Node 16.

2. Debugging with Environment Variables

Short description:

One of the oldest ways to get diagnostics information out of Node was via debug environment variables. There are two flavors: NODE_DEBUG for JavaScript and NODE_DEBUG_NATIVE for C++. When starting the executable, specify the environment variable with the subsystems to listen to. The application will dump information to standard output, including connection events.

So, one of the oldest ways to get diagnostics information out of Node was via debug environment variables. If anyone has ever used the debug module on npm, it's very similar to that. You can use it to log a lot of extra information from Node's core during execution. There are actually two flavors of this: NODE_DEBUG for getting information from JavaScript land, and NODE_DEBUG_NATIVE for getting information out of the C++ layer. Whenever you start your executable, you just specify the environment variable with a comma-separated list of the subsystems that you want to listen to, as shown in the example at the bottom of this slide. Then, whenever you run your application, it will dump a lot of information to standard output. You can see an example here. Each line is prefixed with the subsystem name, NET and HTTP2 in this example, then the process ID, and then various debugging information, such as when a connection is established or closed.

3. Warnings and Deprecations in Node.js

Short description:

Warnings in Node.js are used to notify users of potentially risky actions. There are several flags available, such as --no-warnings to disable all warnings, --redirect-warnings to save warnings to a file, and --trace-warnings to print a stack trace. For example, when requiring the sys module, a deprecation warning is shown, and using --trace-warnings reveals the source location. Deprecations in Node are identified by an ID, which can be looked up in the documentation for more details.

So, the next thing I wanted to talk about was warnings in Node.js. Warnings are used to let the user know that they're doing things that are potentially risky and that we advise against. There are a handful of flags that you can use here. The --no-warnings flag will disable all warnings; you might not want to do that. The --redirect-warnings flag can be used to take all warnings and dump them into a file somewhere so that you can look at them later. And --trace-warnings will actually print a stack trace whenever a warning is encountered. This can be really useful when a warning is emitted but you're not really sure why; it could be coming from deep inside of your node_modules folder, and the stack trace will give you more insight into what's going on. An example of that is shown here. If you were to require the sys module, an ancient module from Node's core that has been long deprecated but is still there, you'll see this deprecation warning printed, because we don't want people to use it. You can then use the --trace-warnings flag to see where in your code that's coming from. If you look at the arrow I have in the stack trace here, you will see it's coming from example.js on line one, so it's very easy to tell what's going on. One thing to note: you'll see DEP0025 in the message. All deprecations in Node are given an ID, and there's a specific page in Node core's documentation just for deprecations, so you can look up that deprecation ID and get more information than what's available in the stack trace here.
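To see what these flags act on, here is a minimal sketch that emits a custom warning and observes it through the 'warning' event; the name "DemoWarning" is made up for this example:

```javascript
// Capture warnings programmatically; these are the same events that
// --trace-warnings and --redirect-warnings act on from the CLI.
const seen = [];
process.on('warning', (warning) => {
  seen.push(`${warning.name}: ${warning.message}`);
});

// Emit a custom warning. "DemoWarning" is a made-up name for this example;
// Node core uses names like DeprecationWarning and ExperimentalWarning.
process.emitWarning('this operation is risky', 'DemoWarning');
```

Running this plainly prints the warning to stderr; running with --trace-warnings also prints the stack trace showing where emitWarning was called.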

4. Deprecations in Node.js

Short description:

Deprecations in Node.js are a specific class of warning and have flags to control their behavior. The --no-deprecation flag hides deprecation warnings, while the --trace-deprecation flag prints a stack trace. The --throw-deprecation flag throws an exception when a deprecated feature is used. This can be useful for identifying and addressing deprecated code during testing or CI.

So, the next thing I want to talk about are deprecations, which are similar to warnings; they're a specific class of warning. Again, we have similar flags for dealing with them. There's the --no-deprecation flag if you don't want to see any deprecation warnings in your code; if you know what you're doing, you might want to use that. We also have --trace-deprecation, which prints a stack trace, similar to the warnings flag that I showed a little bit ago. And then we also have --throw-deprecation, which will actually throw an exception any time you use a deprecated feature. This isn't something you probably want to run in production, but in your CI or when you're doing testing, it's very helpful for finding things that you need to address and move out of your code.

So we have a similar example here. Again, this is require('sys'), because a deprecation warning is just another warning. Here I've used --throw-deprecation with the same code as before. In this case, you'll actually see a thrown exception, which is nice because it does some highlighting of the stack trace. You'll notice that a lot of the stack frames here are grayed out; those come from Node's core, for example. The line that is still in black, with the arrow pointing to it, is the line in your code where the deprecation is coming from.

5. Synchronous I/O and Event Loop Blocking

Short description:

Synchronous I/O blocks the event loop, which can negatively impact performance, especially in server applications handling multiple requests. Node's --trace-sync-io flag can be used to report synchronous I/O after the first turn of the event loop. An example is synchronously reading a file, which results in an individual warning for each underlying operation. The stack trace shows the synchronous I/O operations called after the first turn of the event loop.

So the next thing I wanted to talk about was synchronous I/O and how you can find it in your code. Synchronous I/O blocks the event loop. This used to be a real performance killer; it still is to an extent, but now we have worker threads, which can mitigate things a bit. Either way, you probably don't want to block your event loop if you don't need to, especially if you're running an application like a server where multiple requests are being handled at the same time. If you block the event loop while handling one request, then all the other requests will suffer as well.

When you're setting up your server, it's fine to do synchronous I/O during the startup phase, but once you start serving traffic, you probably don't want to allow it. You can use Node's --trace-sync-io flag to report any synchronous I/O that happens after the first turn of the event loop. In the example I have here, we're using setImmediate so that we know we've moved past the first turn of the event loop, and then we're synchronously reading a file from the file system. readFileSync does a couple of operations under the hood: it opens the file, stats it, reads the data from it, and closes it. Because all of these happen synchronously, they each result in an individual warning. You can actually see in the stack trace provided here that openSync, tryStatSync, readSync, and closeSync are all being called out as synchronous I/O after the first turn of the event loop. Of course, if I were to remove the setImmediate here, this would not report any warnings, because we would still be on the first event loop turn.

6. Tracing Uncaught Exceptions and Process Exit

Short description:

Next, let's discuss tracing uncaught exceptions and process exit in Node.js. The --trace-uncaught flag helps locate exceptions in your application code, showing where they were thrown. Similarly, the --trace-exit flag shows where a process.exit() call originated. Additionally, we'll talk about unhandled promise rejections and how Node.js 15 changed their behavior to throw exceptions. You can configure this behavior using the --unhandled-rejections flag with different modes: throw, strict, warn, warn-with-error-code, and none.

Next, I wanted to talk about tracing uncaught exceptions. Sometimes an exception is thrown in your application code, and it can be challenging to locate where it originated, especially in large applications or when it comes from the node_modules directory. The stack trace on the error itself might not correspond to the actual location where the exception was thrown. To address this, you can use the --trace-uncaught flag when running your application. This flag not only prints the error and its stack trace but also shows the path within your application where the exception was thrown from.

There is a similar flag for tracing process exit. If a module calls process.exit(), it can be challenging to determine what is happening in your application. By using the --trace-exit flag, you can see where in the code the exit call originated, along with the exit code. For example, the output might indicate that the process exited with code zero.

Now, let's move on to unhandled promise rejections. By default in JavaScript, unhandled rejections are ignored, which is generally acceptable in the browser. In a server environment, however, this can lead to serious problems. Starting with Node.js 15, the behavior of unhandled promise rejections was changed to throw an exception. This change makes the failure explicit and allows you to better handle promise rejections. You can configure this behavior using the --unhandled-rejections flag, which supports five different modes: throw, strict, warn, warn-with-error-code, and none.

7. Handling Promise Rejections in Node.js

Short description:

The new behavior in Node.js makes it more explicit and allows better handling of promise rejections. You can configure this behavior from the CLI using the --unhandled-rejections flag. There are five modes available: throw, strict, warn, warn-with-error-code, and none. An example with unhandled rejections set to strict is shown, where a promise rejection becomes an uncaught exception.

So the new behavior makes it a lot more explicit what's going on and allows you to better deal with your promise rejections. You can configure this behavior from the CLI using the --unhandled-rejections flag. Right now we support five different modes. throw and strict will both turn a rejection into an uncaught exception; throw is the default as of Node 15. throw tries to emit an unhandledRejection event before throwing, whereas strict moves straight to throwing, so the former gives you a better opportunity to handle the rejection first. Then there's warn mode, which displays a warning on the console; warn-with-error-code, which is the same as warn but also sets the process exit code to one; and none, which is the JavaScript default of swallowing unhandled rejections. Here is a quick example with unhandled rejections set to strict. If I call Promise.reject and pass in an error object, it turns into an uncaught exception, and you can deal with it that way. You'll see the error message and stack trace just the same as if you had written throw new Error in this case.
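As a sketch, this is what handling the rejection yourself looks like; installing an unhandledRejection listener is what the default throw mode checks for before escalating to an uncaught exception:

```javascript
// Install a last-resort handler. Under the default "throw" mode, Node
// emits this event first and only crashes the process if no handler is
// installed; "strict" mode moves straight to throwing.
let lastReason = null;
process.on('unhandledRejection', (reason) => {
  lastReason = reason.message;
  console.log('unhandled rejection:', reason.message);
});

// No .catch() is ever attached, so this rejection goes unhandled.
Promise.reject(new Error('boom'));
```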

8. Introduction to the Tick Processor in v8

Short description:

The tick processor in v8 is a command line based sampling profiler that provides comprehensive information on where you're spending your time in libraries, JavaScript code, and C++ code. By running your application with the --prof flag, you can generate a v8 profiler output file. After collecting the information, you can process it using the --prof-process flag in Node. The processed output shows the time spent in shared libraries, JavaScript code (with optimized functions marked by an asterisk), and C++ code.

So next I wanted to move on to talk about the tick processor, which is available in V8 and which I don't think a lot of people are aware of. It's a command-line-based sampling profiler, and it's nice because its output shows you where you're spending your time in libraries as well as in JavaScript code and C++ code, so it's pretty comprehensive. The way it works is that you run your application with the --prof command-line flag, and this dumps the V8 profiler output into a file. You can read the file if you want, but it's not really meant for human consumption. So after you collect that information, you run Node again with --prof-process and pass in the name of the log file that was generated; you can see an example of that at the bottom of the slide. And this is what the processed output looks like: at the top it shows you where you're spending your time in shared libraries, and the next section shows you where you're spending your time in JavaScript code. You'll notice here that we have two functions, processImmediate and normalizeString, and both have an asterisk in front of them. The asterisk means that V8 was actually able to optimize the code in that function. The bottom section is a breakdown of where your application is spending its time in C++ code.

9. Introduction to v8 Inspector

Short description:

Node and v8 now have first-class support for the v8 inspector, which is Chrome's dev tools used with Node applications. To set it up, start Node with the --inspect flag and specify the host and port. Be cautious when binding to a public address, as it can be a security vulnerability.

So, starting a few years ago, Node and V8 actually started having first-class support for the V8 inspector, which is essentially just Chrome's DevTools used with Node applications. If you've done any type of browser development, or even Node development, in the past few years, there's a good chance you've seen Chrome's DevTools. It includes things like a step debugger and profilers with GUI interfaces, as opposed to the CLI-based one I showed before. The way you set this up is to start Node with the --inspect flag, and you can specify the host and port that you want it to listen on on your machine. By default, it listens on 127.0.0.1, port 9229. You can also set it up so that a breakpoint is set as soon as the application starts, so you don't have to worry about doing that by hand; that's the --inspect-brk flag, which otherwise works in the same fashion. One security tip: be careful if you're going to bind to a publicly reachable address on your machine. This used to be the default for Node, and it was reported as a security vulnerability, because if you have a server that can be reached publicly and you're bound to a publicly exposed address like that, it's technically possible for an attacker to connect to the debugger and start messing with your code at runtime. So that's just something to be aware of.

10. Using the v8 Inspector and Dev Tools

Short description:

Continuing with the v8 inspector example, running Node with the --inspect flag connects to the debugger via a WebSocket URL. The dev tools provide a high-level view of CPU activity, including function calls and memory usage. Heap snapshots are useful for detecting memory leaks, allowing you to compare objects between snapshots.

Continuing with the V8 inspector example, the text at the top shows how you would run it: node --inspect-brk example.js. It prints out some information telling you where the debugger is listening, via this WebSocket URL, and then points you back to the documentation if you have more questions or need to do additional reading. The picture at the bottom shows what it's like when you're dropped into the DevTools. A lot of people have probably seen this before, but if you haven't, this is what you can expect to see.

11. CPU Profiler and Heap Snapshots in Chrome DevTools

Short description:

Chrome DevTools provides a CPU profiler and heap snapshots for diagnostics. The CPU profiler allows you to collect profiles using command line flags or manually through the DevTools UI. The DevTools view of a CPU profile shows a high-level view of CPU activity over time and the functions being called. Heap snapshots are useful for debugging memory leaks, comparing snapshots to identify objects not being cleaned up. Heap snapshots can be captured through CLI signals or automatically near heap limit. They can also be collected manually via the DevTools UI.

One of the nice tools available in Chrome DevTools is a CPU profiler. It shows you what functions are executing over time. I want to point out that this is not the same as a flame graph. A flame graph takes stack traces over time and consolidates them into one larger stack trace, so you can see where your application is generally spending its time. If you're looking for flame graphs, there's an excellent tool on npm called 0x that you can play around with.

As far as CPU profiling goes, Node actually has a couple of command-line flags that let you control the CPU profiler without needing to do this by hand. If you pass the --cpu-prof flag, it will collect a CPU profile for you: it starts the profiler when the application starts up and writes out the profile when your application shuts down. The --cpu-prof-dir flag allows you to specify where you want your profiles to be written, if you need to change the default location. Similarly, the --cpu-prof-name flag allows you to give your profile a different file name. And the --cpu-prof-interval flag allows you to define the sampling interval of the profiler, so if you need to sample the stack more or less frequently, you can control that there. And of course, you can also go into the DevTools UI and collect profiles by hand if need be.

This is what the DevTools view of a CPU profile looks like. There's a region at the top that shows a high-level view of the CPU activity over time, and the highlighted window is shown at the bottom with the colored stack traces, where you can see what functions were being called. In this example you'll see a lot of Module.runMain as well as require, so from looking at this you can understand that this is the startup phase of the application, where a lot of modules are being required and configured. The behavior changes after that to be more dependent on the traffic that you're serving.

Another nice feature of Chrome DevTools is heap snapshots. These are very useful for seeing what's going on in the JavaScript heap, and they're really helpful if you're trying to debug a memory leak. You can take a snapshot at one point, let your application run for a while, then take another snapshot and compare them. If you notice a ton of new objects in the second snapshot that aren't being cleaned up, that can help you trace down the memory leak. You can manipulate all of these things from the CLI as well. The --heapsnapshot-signal flag allows you to capture a heap snapshot by sending a specific signal to the process. There's also --heapsnapshot-near-heap-limit, which will automatically try to take a heap snapshot for you when you're almost out of memory. This is not guaranteed to always work, because it takes additional memory to collect the heap snapshot, but it's definitely worth investigating. And similarly to CPU profiles, you can collect these manually via the DevTools UI if you have a breakpoint set in your code. This is just a quick look at the heap snapshot view.

12. Diagnostics and Debugging Techniques

Short description:

It shows a list of every type of object broken down by its constructor. You can track down memory leaks. TLS connection tracing provides information directly from Node.js. Post-mortem debugging creates a core file of your application, but has downsides. Diagnostic reports offer a simpler alternative to postmortem debugging.

It shows a list of every type of object broken down by its constructor. And then it'll show things like the object count. In this specific example, there are four event emitters in the code. The shallow size tells you how much memory that actual object is holding onto. And then the retained size follows the pointers from those objects to tell you kind of the cascading effect of holding this object in memory. And so you can do a lot of useful things here while you're trying to track down memory leaks.

The next thing I wanted to talk about was TLS connection tracing. It used to be that if you wanted to diagnose TLS issues, you had to have the OpenSSL client set up and pass some command-line flags and things like that. But now you can get that information directly from Node.js. The --trace-tls CLI flag will dump this information for all TLS connections. Just so you know, this will be very noisy, so you definitely don't want to enable it in production. You can also enable it at the individual socket level with tlsSocket.enableTrace(), and you can set it when you're creating a socket or a server with the enableTrace option passed to tls.createServer() and tls.connect(). Again, this dumps a ton of information, so be prepared to sift through it.

The next thing I wanted to touch on was post-mortem debugging. The way this works is that you use the --abort-on-uncaught-exception flag to create a core file of your application. This is very useful because you get the entire state of your application, but there are a lot of downsides. First, you have to set up your system to collect core dumps, which is not usually the case by default. You also need an external tool like llnode to be able to inspect the core file and see what's going on in there. And while llnode is extremely powerful, it's constantly playing a catch-up game with changes to the heap representation inside of V8, so how well it works can be very hit or miss from one version of Node to the next. It lets you inspect JavaScript objects and see a mixed C++ and JavaScript stack trace, but I would definitely say this is for more advanced use cases, and like I said, your luck may vary from version to version. So in Node, we wanted a lower barrier to entry, something simpler than post-mortem debugging, and we introduced diagnostic reports. A diagnostic report is a human-readable report containing all types of information about the process, presented as a giant blob of JSON. You can generate these under different conditions, such as a crash or a signal sent to the process, or programmatically through an API.

13. Generating Diagnostic Reports in Node.js

Short description:

This section covers generating diagnostic reports in Node.js using CLI flags. These reports contain information about the operating system, process, thread pool, stacks, and more. Care must be taken with sensitive information. The process.report.getReport() API can be used to create reports, which include versions, event triggers, file names, process and thread IDs, and more. It's important to handle these reports with care and redact any sensitive information before sharing. The talk concludes with gratitude for attending and an acknowledgment of the value of learning about diagnostics.

So this includes lots of things: information about the operating system, the process, what's going on in the libuv thread pool as far as handles and whatnot, the C++ stack, the JavaScript stack, and a lot more. One thing to note is that these reports can contain sensitive information, such as environment variables, so handle them with care.

To generate a diagnostic report, there's a collection of CLI flags that you can use. --report-on-fatalerror creates a diagnostic report if there's a C++ crash. --report-uncaught-exception is basically what it says: if there's an uncaught exception in JavaScript, a report is generated for you. --report-on-signal generates a report when you signal the process, and you can configure which signal to listen for via the --report-signal flag. --report-directory and --report-filename are used to configure where the diagnostic report is stored and what it's named. And then there's --report-compact, which condenses the report into a single line of JSON, so it's a little easier for machines to consume.

Here's a real quick look at what a portion of one of these looks like. I created this one via the process.report.getReport() API. You can see a subset of what's available here: it has things like the report version, which allows Node to version the reports for backwards-compatibility reasons; the event that triggered it, in this case a JavaScript API call to getReport; the file name, which is null here because the report was just dumped to memory; when it was collected; the process ID; the thread ID; and a whole bunch more. I definitely encourage you to play around with these, and as I said before, they can contain sensitive information, so handle them with care. If you need to share them with other people, you may need to redact some information by hand, but it's just JSON, so it shouldn't be too tough.
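A minimal sketch of the API route:

```javascript
// Build a diagnostic report in memory -- the programmatic counterpart to
// the --report-* flags, which write the same data to disk on a crash,
// signal, or uncaught exception.
const report = process.report.getReport();

// Top-level sections include header, javascriptStack, nativeStack,
// javascriptHeap, libuv, environmentVariables, and more.
console.log('sections:', Object.keys(report).join(', '));
console.log('node version:', report.header.nodejsVersion);

// Remember: environmentVariables may contain secrets -- redact before sharing.
```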

So that's everything I had for today. Again, thanks for coming to my talk, and feel free to reach out to me on Twitter, GitHub, anywhere. Thanks, everyone. I hope everyone is writing lots of stuff down. So, you asked the question, have you ever had a bug in your application but couldn't obtain the proper metrics to fix the problem? And 82% have answered yes. Yes, so I'm hoping that some of the information from the talk today might help with that in the future. I think if the same poll had been asked five or six years ago, the answer would probably have been a lot closer to 100%. It used to be a really tough thing to do, and the project has just made a ton of progress since the old days of Node. Yes, yes. I mean, even 82% is a lot, but most people don't know about the new things coming in. And I see lots of good comments in the channel: that was lots of knowledge, amazing talk, and I did not know anything that he said. So people learned a lot today. That was a cool session.


Learning and Contributing to Node.js Diagnostics

Short description:

Lots of stuff that I didn't know about. Wow. Yes. Awesome. One follow-up question on your talk would be, where can I learn about the things that you mentioned? Two places: the official Node.js documentation, specifically the CLI documentation, and the Node.js website's Guides section. For those interested in working on Node.js diagnostics, there is the option to contribute to the project on GitHub or join the diagnostics working group. Areas for improvement in Node.js diagnostics include postmortem debugging and integrating flame graphs into Node.

Lots of stuff that I didn't know about. Wow. Yes. Awesome. So, okay. So, one follow-up question on your talk would be, where can I learn about the things that you mentioned in the talk? Yes. So, two places really. First, if you go to the official Node.js documentation, and then on the left-hand side there's all the different, you know, subsystems inside of Node. If you scroll down to the command line, I think it's CLI or Command Line Interface documentation, this talk came almost exclusively from that page and all the information there. And then if you also go onto the Node.js website, there's a section called Guides, and we have guides on different things like, you know, getting diagnostic information, creating flame graphs, running your application in Docker, and just a whole variety of different things like that. Okay. So, yeah, do check it out. For anyone who is listening here, go check out all the documentation and learn more about it.

So, after this talk, lots of people have learned new things, and they'll be more interested in diagnostics. Let's say someone is interested in working on Node.js diagnostics. Is there a way they can get involved? Absolutely. If you're interested in actually working on the project, you can come to github.com/nodejs/node; that's where the project really lives. But we also have a diagnostics working group, which is a team of people who specifically dedicate time to improving diagnostics in Node. I believe the URL for that is github.com/nodejs/diagnostics. You can go there and, if you're really interested, join the working group and contribute back, or just follow along and see what people are talking about. Awesome.

So, there's a question by Azentyl1990. What are some of the areas where you still see improvements could be made in diagnostics with Node.js? Um, that's a good one. So, I touched on postmortem debugging a little bit. I would love to see postmortem debugging get better. Unfortunately, that requires kind of a lot of cooperation with V8. And, you know, not that they don't cooperate, but I don't know that they see as much value in it. And then I would also like to see flame graphs have a kind of more first-class citizenship inside of Node. So, right now, that's one of the few things I briefly touched on in the talk where you would actually still have to go outside of Node and dev tools.

Node.js Diagnostics Q&A

Short description:

Johta developer asks about Node.js applications or libraries for diagnostics. Dev tools and Clinic by NearForm are recommended. In serverless environments, diagnostics may be limited to the provider's offerings. The speaker is excited about a small improvement to log visibility in dev tools. The talk transitions from errors to diagnostics in Node.js.

So, I think that would be a great addition. Yeah, that definitely looks like a great addition. Uh, Johta developer asks, any Node.js application or library that helps to visualize or collect all those diagnostics locally that you could recommend, like flame graphs, aside from Chrome dev tools? Yeah. So, dev tools will be the main one. I'm trying to think. So, I know NodeSource has a distribution of Node that has a lot of diagnostics things built into it. That's going to be more proprietary; I think you may have to pay for it or they might have a free trial. And there's a module out of NearForm called Clinic that has a lot of this stuff integrated with a nice little UI. So, I would look at those two things first.

Okay. Thank you. And a CC Chris asks, which diagnostic techniques do you recommend when running Node.js in a serverless environment? In a serverless environment? So, it's kind of tricky there because you probably don't have access to all of the same things. Like you can't just, you know, create a CPU profile and easily get it out. For serverless, you're kind of more at the whim of the provider, I think. So, for example, I work at AWS, and Lambda has pretty good diagnostics information. You can run Node.js there and have logs go into CloudFormation, I'm sorry, not CloudFormation, I can't remember the name off the top of my head. There's a logging system there you can have the logs dumped to, and just, you know, other integrations with AWS services that you can take advantage of. But yeah, in a serverless environment, some of the things from this talk may not apply.
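One built-in feature that does carry over to serverless is the diagnostic report, since it can be generated in memory and sent to whatever logging system the platform provides rather than written to disk. A minimal sketch; the error-handler framing in the final comment is illustrative:

```javascript
// Generate a diagnostic report in memory with the built-in process.report
// API and emit it through stdout, which most serverless platforms capture
// as logs, instead of writing a file to disk.
const report = process.report.getReport();

// The report covers the Node version, platform, JavaScript and native
// stacks, resource usage, loaded libraries, and environment.
console.log('node version:', report.header.nodejsVersion);
console.log('platform:', report.header.platform);

// In a real handler you might ship the whole thing on error:
// logger.error(JSON.stringify(report));
```

When a filesystem is available, `process.report.writeReport()` and flags like `--report-on-fatalerror` and `--report-uncaught-exception` write the same report to a file automatically.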

Okay, so I have a question for you because you are a part of the Node.js TSC. What are some upcoming features that you are looking forward to or excited about, anything like that to share? So this is a really little thing, but you and I were actually talking about this backstage. Sometimes when you log things from your Node application while you are connected to dev tools, they show up in the console instead of in dev tools. And there was actually a pull request opened recently to try to improve that. So I think that's one of those, you know, small things but a nice UX improvement. Yeah, I'm really excited about that. That would really be a great addition, small, but still it will be impactful, I guess. So, okay. You touched on lots of points on diagnostics, and actually before your talk, we had a talk on errors in Node.js, right? So it was a very nice transition from errors to showing all the diagnostics in Node.js and how you could do stuff like that.

Responsibilities of the TSC

Short description:

The Technical Steering Committee (TSC) is the last resort for resolving conflicts and making technical decisions in the Node.js community. Ideally, all decisions should be made on GitHub, but when strong opinions hinder progress, the TSC steps in to make a decision or vote on the issue. People are encouraged to explore further on Google and GitHub after the talk.

I have a last question for you for sure. What are some of the responsibilities for you as a member of the TSC? I'm really interested in that.

So the Technical Steering Committee is kind of the last resort as far as resolving any types of conflicts and technical decisions. Ideally, nothing should come to the TSC. Everything should be decided on GitHub amongst the collaborators. Sometimes we get to a point where people have strong opinions and an issue just can't progress, and the TSC will often then be pulled in to make a decision and occasionally even have to vote on it. But yeah, I would say that's the biggest thing.

Yeah, that's interesting. So I do not see any more questions but I'm sure people got a lot to explore after this talk. So they are going to do lots of Googling and GitHub and stuff. And so, everyone... Let me just... I'm still seeing the poll, so yes. If you have any other questions for Colin, you can still talk to him in the special chat. So, Colin will be available in his speaker room. And thanks a lot, Colin, for such an in-depth and wonderful talk. It was great, and people have learned a lot. Thank you so much for being with us here today. Thanks for having me. Bye.

Check out more articles and videos

We constantly think of articles and videos that might spark your interest, skill you up or help you build a stellar career

Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk here.

JSNation 2023
22 min
ESM Loaders: Enhancing Module Loading in Node.js
Native ESM support for Node.js was a chance for the Node.js project to release official support for enhancing the module loading experience, to enable use cases such as on the fly transpilation, module stubbing, support for loading modules from HTTP, and monitoring.
While CommonJS has support for all this, it was never officially supported and was done by hacking into the Node.js runtime code. ESM has fixed all this. We will look at the architecture of ESM loading in Node.js, and discuss the loader API that supports enhancing it. We will also look into advanced features such as loader chaining and off thread execution.
JSNation Live 2021
19 min
Multithreaded Logging with Pino
Almost every developer thinks that adding one more log line would not decrease the performance of their server... until logging becomes the biggest bottleneck for their systems! We created one of the fastest JSON loggers for Node.js: pino. One of our key decisions was to remove all "transport" to another process (or infrastructure): it reduced both CPU and memory consumption, removing any bottleneck from logging. However, this created friction and lowered the developer experience of using Pino, and in-process transports are the most requested feature from our users.
In the upcoming version 7, we will solve this problem and increase throughput at the same time: we are introducing pino.transport() to start a worker thread that you can use to transfer your logs safely to other destinations, without sacrificing either performance or the developer experience.
Node Congress 2023
30 min
Building a modular monolith with Fastify
In my journey through Nodeland, I saw most teams struggling with the free-form nature of Node.js development: there are no guardrails for maximum flexibility. Yet, not all paths offer a smooth ride.
How to build applications that are well-organized, testable, and extendable? How could we build a codebase that would stand the test of time?
In this talk, we will explore how to avoid the trap of Singletons to create robust Node.js applications through the use of Fastify plugins: we will build a modular monolith!

Workshops on related topic

Node Congress 2023
109 min
Node.js Masterclass
Have you ever struggled with designing and structuring your Node.js applications? Building applications that are well organised, testable and extendable is not always easy. It can often turn out to be a lot more complicated than you expect it to be. In this live event Matteo will show you how he builds Node.js applications from scratch. You’ll learn how he approaches application design, and the philosophies that he applies to create modular, maintainable and effective applications.
Level: intermediate
Node Congress 2023
63 min
0 to Auth in an Hour Using NodeJS SDK
Passwordless authentication may seem complex, but it is simple to add it to any app using the right tool.
We will enhance a full-stack JS application (Node.js backend + React frontend) to authenticate users with OAuth (social login) and One Time Passwords (email), including:
- User authentication - Managing user interactions, returning session / refresh JWTs
- Session management and validation - Storing the session for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.
Table of contents
- A quick intro to core authentication concepts
- Coding
- Why passwordless matters
- IDE for your choice
- Node 18 or higher
JSNation Live 2021
156 min
Building a Hyper Fast Web Server with Deno
Deno 1.9 introduced a new web server API that takes advantage of Hyper, a fast and correct HTTP implementation for Rust. Using this API instead of the std/http implementation increases performance and provides support for HTTP2. In this workshop, learn how to create a web server utilizing Hyper under the hood and boost the performance for your web apps.

JSNation 2023
104 min
Build and Deploy a Backend With Fastify & Platformatic
Platformatic allows you to rapidly develop GraphQL and REST APIs with minimal effort. The best part is that it also allows you to unleash the full potential of Node.js and Fastify whenever you need to. You can fully customise a Platformatic application by writing your own additional features and plugins. In the workshop, we’ll cover both our Open Source modules and our Cloud offering:
- Platformatic OSS (open-source software) — Tools and libraries for rapidly building robust applications with Node.js (https://oss.platformatic.dev/).
- Platformatic Cloud (currently in beta) — Our hosting platform that includes features such as preview apps, built-in metrics and integration with your Git flow (https://platformatic.dev/). 
In this workshop you'll learn how to develop APIs with Fastify and deploy them to the Platformatic Cloud.
React Summit 2022
164 min
GraphQL - From Zero to Hero in 3 hours
How to build a fullstack GraphQL application (Postgres + NestJs + React) in the shortest time possible.
All beginnings are hard. Even harder than choosing the technology is often developing a suitable architecture. Especially when it comes to GraphQL.
In this workshop, you will get a variety of best practices that you would normally have to work through over a number of projects - all in just three hours.
If you've always wanted to participate in a hackathon to get something up and running in the shortest amount of time - then take an active part in this workshop, and participate in the thought processes of the trainer.