How to use ChatGPT with Node.js


ChatGPT is revolutionizing the internet. In January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date. In this talk, you'll learn how to use ChatGPT in a Node.js application: we'll use the OpenAI API to interact with the ChatGPT model, get an API key from OpenAI, and then use an API client library to make requests to the API from Node.js code. Learn hacks to optimize your productivity with ChatGPT and have fun with artificial intelligence!

32 min
14 Apr, 2023

Video Summary and Transcription

Today's Talk introduces ChatGPT and its integration with Node.js, highlighting its exceptional performance and natural language capabilities. The speaker demonstrates how to interact with ChatGPT using Node.js and showcases examples such as selecting avatars and getting jokes. The Talk also discusses the use of ChatGPT for extracting important information and interacting with databases. Important considerations when using ChatGPT, the potential of GPT-4, and the impact of AI on jobs are also covered. Security concerns and the use of extensions like Runme in VS Code are mentioned as well.


1. Introduction to ChatGPT and Node.js Integration

Short description:

Today, we're going to learn how to use ChatGPT with Node.js. ChatGPT revolutionized the AI world with its exceptional performance and mind-blowing natural language answers. It's a state-of-the-art language model capable of generating human-like answers. ChatGPT has 175 billion parameters, over 100 times larger than the previous version. To integrate Node.js with ChatGPT, you need to go to the OpenAI API, get an API key, and use five lines of code to create a completion based on user prompts.

Thank you. You've probably seen me already. I am Liz. Today, we're going to learn how to use ChatGPT with Node.js. I am from Colombia, and I'm Head Developer Advocate at a very cool company called Stateful. I'm a community leader, speaker, Node.js evangelist, and blogger. This is my Twitter, Liz Parody, in case you want to chat, ask questions, or connect; we can connect there.

So artificial intelligence can be divided into before and after ChatGPT. Before, there were some AI systems, somewhat functional but not as popular or as accurate as ChatGPT, including IBM Watson, Google DeepMind, Microsoft Cortana, Amazon Alexa, and others. But ChatGPT quickly revolutionized the entire AI world because of its exceptional performance, versatility, free availability, and absolutely mind-blowing natural language answers. I'm pretty sure that most of you, probably all of you, have used ChatGPT at this point. And that's no surprise, because ChatGPT reached a hundred million users in just two months after its release, setting a record by becoming the fastest-growing consumer application to date. In case you haven't used ChatGPT, it's a state-of-the-art language model developed by OpenAI that is capable of generating human-like answers to a wide range of natural language inputs. It was trained on a massive dataset of text from the internet, which enabled it to learn broad knowledge and language patterns. One of the most interesting things about ChatGPT is that it's difficult to distinguish what was written by a human from what was generated by the model. For example, you won't be able to tell whether what I just said was the product of my own deep research and analysis or whether ChatGPT told me to say it.

So, how did they do it? Imagine that each of these points is 1 billion. ChatGPT has a whopping 175 billion parameters, which is a very big number, and for humans it's really difficult to process numbers that big. To put it into scale, 1 billion seconds is equivalent to 31.7 years, so 175 billion seconds is equivalent to about 5,545 years. It's a really big number. The previous version, GPT-2, released in 2019, had 1.5 billion parameters, so the latest model is over 100 times larger.

Now, how can you integrate Node.js with ChatGPT? Let's see. First, you go to the OpenAI API, which is the best and easiest way to interact with ChatGPT and provides a simple, straightforward interface, and you get an API key that you can use in your code. So how do you get this API key? You go to the OpenAI platform, sign up or sign in, click on "View API keys", and then click the "Create new secret key" button. You copy this key and use it in your code, and it will allow you to interact with ChatGPT. Now, show me the code. With these five lines of code, you can do wonderful things in Node.js applications. First you create a completion, which sends a request to OpenAI and generates a completion based on a prompt given by the user; it has a model and a messages property. The model property specifies which version of ChatGPT to use.

2. Interacting with ChatGPT and the Runme Extension

Short description:

In this case, it's GPT-3.5 Turbo, and the messages property includes the role (user) and the content, which is the prompt. I'm going to do a demo using Runme, an extension developed by my company. So let's go here. I'm just going to do a basic example of interacting with ChatGPT. I import the dependencies, create a configuration with the API key, create an OpenAI API instance, and set up an interface with an input and output. Then I ask the chatbot a question, wait for the response, and handle errors. That's all you need to integrate a chatbot with Node.js. It would be even cooler if I could select avatars from the CLI, like Leonardo da Vinci or Yoda.

In this case, it's GPT-3.5 Turbo, and the messages property includes the role (user) and the content, which is the prompt. At the end, it just returns the content of the first response, so that's all you need.
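
For reference, here is a minimal sketch of those five-or-so lines using the openai Node.js client in its v3 style (the API shape in use at the time of the talk); the `ask` helper name and the example prompt are assumptions, not the speaker's original file:

```js
// Minimal sketch using the openai v3 Node.js client (npm install openai@3).
// OPENAI_API_KEY is read from the environment; `prompt` is whatever the user typed.
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(configuration);

async function ask(prompt) {
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",                        // which version of ChatGPT to use
    messages: [{ role: "user", content: prompt }], // the user's prompt
  });
  return completion.data.choices[0].message.content; // content of the first response
}

ask("Tell me something every developer should know").then(console.log);
```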

I'm going to do a demo using Runme. This extension was developed by my company, so if you can check it out, that would be great. So let's go here. I'm just going to do a basic example of interacting with ChatGPT, where I can ask ChatGPT questions from my terminal and ChatGPT will respond.

So this is the extension I was talking about, which is this button. You can run readmes inside VS Code. If I click this button, it will say, "What type of chatbot would you like to create?" I'll just say, Steve Jobs chatbot. I say hello, and it replies, "Hi there, I'm Steve Jobs chatbot, how can I assist you today?" Then I'll say, "Tell me something every developer should know." It will take a few seconds to call the API, and if the internet is fast, it shouldn't take very long.

So how did I do this? It only takes 38 or 39 lines of code. First we import all the dependencies, then I create a new configuration with the API key provided by OpenAI. Then I create a new OpenAI API instance with that configuration, and I create a readline interface with an input and an output. The input is my questions, and the output is the chatbot's response. Then I have the first question, "What type of chatbot would you like to create?", and this is where I say, Steve Jobs. Then I have the user input that says hello to the new assistant. And these are some of the lines of code I mentioned before; they're quite important: create chat completion, with a messages and a model property, and as we said before, GPT-3.5 Turbo is the model we're going to use. Here it gets the first response. Then the user is prompted to enter the next input; if there is no response from the API, it will say "there is no response, please try again", and if there is an error, it will be caught here. And that's it. That's all you need to integrate a chatbot with Node.js. But it would be even cooler if I could just select some avatars from the CLI. For example, if I could pick Leonardo da Vinci, William Shakespeare, Yoda, or Steve Jobs, that would be cool. So if I run this, I can choose from here: "What type of chatbot would you like to create?" Let's say I want Yoda. It says, perfect, now tell me your question.
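
As a rough sketch of what that terminal loop looks like end to end, using Node's built-in readline and the v3 openai client; the prompt strings, variable names, and structure here are an approximation of the demo, not the original code:

```js
// Sketch of a terminal chatbot: pick a persona, then loop question -> completion -> answer.
const readline = require("readline");
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

rl.question("What type of chatbot would you like to create? ", (persona) => {
  const messages = [{ role: "system", content: `You are a ${persona} chatbot.` }];

  const loop = () => {
    rl.question("> ", async (question) => {
      messages.push({ role: "user", content: question });
      try {
        const res = await openai.createChatCompletion({
          model: "gpt-3.5-turbo",
          messages,
        });
        const answer = res.data.choices[0]?.message?.content;
        if (!answer) {
          console.log("There is no response, please try again.");
        } else {
          console.log(answer);
          messages.push({ role: "assistant", content: answer });
        }
      } catch (err) {
        console.error("API error:", err.message);
      }
      loop(); // prompt for the next input
    });
  };
  loop();
});
```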

3. Running ChatGPT and Examples

Short description:

If I run this, I can choose a chatbot type like Yoda. I ask it to tell me a joke that makes everyone laugh, and it responds with a programmer joke. ChatGPT is more than just a chatbot: it can boost your projects, career, company, and personal life. It even saved a person's dog by diagnosing its illness. Another example is a customer seeking support for a SmartWatch that won't turn on.

So if I run this, I can choose from here: "What type of chatbot would you like to create?" Let's say I want Yoda. It says, perfect, now tell me your question. So I'll say, okay, I am in front of maybe 100 developers, I don't know how many of you are here, 100 developers. Tell me a joke that makes everyone laugh. Let's see what it says.

Okay, a joke I have for you: why did the programmer quit his job? Because he didn't get a raise. Hahaha, oh my god, that's so bad. That's really bad. I don't have any more questions for you, Yoda. No, thank you.

So, how do I do this? First, I have the utils here, with four avatars; I can include more or delete one. Here I have them, and then in the index is the code. I'm not going to go deep into this, but I'm using Ink to handle the colors and the interaction in the terminal. Then I have three steps: select the chatbot, take the prompt, and display the answer. Here is the initial state of the ChatGPT conversation. That's the rest of the code, which will be published.
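
The demo itself renders the selection with Ink; as a simpler sketch of the same idea, the avatars utility can just be a map from avatar names to system prompts that seeds the conversation. The names and prompt wording below are illustrative assumptions, not the published code:

```js
// avatars.js - a hypothetical utils module mapping avatar names to system prompts.
// The real demo uses Ink for the terminal UI; the data shape is the important part.
const avatars = {
  "Leonardo da Vinci": "You are Leonardo da Vinci. Answer with Renaissance curiosity.",
  "William Shakespeare": "You are William Shakespeare. Answer in Early Modern English.",
  "Yoda": "You are Yoda. Speak as Yoda does, you must.",
  "Steve Jobs": "You are Steve Jobs. Be visionary, direct, and product-obsessed.",
};

// Build the initial conversation state for the selected avatar.
function initialMessages(avatarName) {
  return [{ role: "system", content: avatars[avatarName] }];
}

module.exports = { avatars, initialMessages };
```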

But ChatGPT is more than just a chat that gives answers and really bad jokes. It's also a powerful model that can boost your projects, career, company, and personal life to the next level. One personal-life example is that ChatGPT saved a person's dog: the dog was very sick, he took it to the vet, and the vet ran some blood exams but wasn't sure what the dog had. So this person put all the blood test results into ChatGPT, ChatGPT told him exactly what the dog had, and he was able to save it. I mean, this is pretty incredible technology. Let's look at another example. Say I am a customer and I need support for one of the products I bought. I write: "Hello Superteam, I'm having problems with the SmartWatch 1.1.1 that I bought. I received the package a week ago, I paid in cash, and now it doesn't turn on. I checked the battery, tried to reset it, and pressed all the buttons, but it still doesn't work."

4. Extracting Important Information with ChatGPT

Short description:

To extract important information using ChatGPT, we can use natural language processing to identify the product name, issue description, issue summary, and payment method from a given message. ChatGPT can automate the extraction process and provide accurate results. This capability is beneficial for automating customer support and delivering personalized responses. The impact on customer support staff will be discussed later.

"I think it's a defect in the product and I want a replacement. Let me know what the next step is." So if I have this message and I want to extract the most important information using ChatGPT, what I can do is go here to the readme, and I just want to get the product name, the issue description, the issue summary, and the payment method. I only want these four things, using ChatGPT. ChatGPT is able to read that message, which is natural language, and extract that the product name is the SmartWatch 1.1.1, the issue description is the defect in the product, the issue summary is a refund or replacement request, and the payment method is cash. So with this, let me check the code. I'm importing everything that I need and using the Minds library here. Then I create these four entities: the product name, the issue description, the issue summary, and the payment method, which in this case is cash or credit card, but it could be other things. Then I have the message, and I print the result to the console. And that's it, that's how we can extract information with ChatGPT. This is quite cool. Things like this can help you automate messages, emails, and chats, and provide customers with exactly what they need. So now we can ask, what will happen to the people who work in customer support? Well, that's something we will talk about later.
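
The demo uses a helper library for this; as a minimal sketch of the same extraction done directly against the chat completion API, you can ask the model for JSON with the four fields. The prompt wording and field names below are assumptions, not the demo's exact code:

```js
// Sketch: extract structured fields from a support message with a plain chat completion.
// This stands in for the library used in the demo; field names are illustrative,
// and it assumes the model returns valid JSON.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function extractTicketInfo(message) {
  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "Extract productName, issueDescription, issueSummary and paymentMethod " +
          "(cash or credit card) from the customer message. Reply with JSON only.",
      },
      { role: "user", content: message },
    ],
  });
  return JSON.parse(res.data.choices[0].message.content);
}

extractTicketInfo(
  "Hello Superteam, I'm having problems with the SmartWatch 1.1.1 that I bought. " +
    "I paid in cash and now it doesn't turn on. I want a replacement."
).then(console.log);
// Expected shape: { productName, issueDescription, issueSummary, paymentMethod }
```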

5. Interacting with Databases using ChatGPT

Short description:

Let's consider another example where we have a large database of products. By using ChatGPT, we can check the availability of specific products based on the inventory. ChatGPT interacts with the database and provides the answer to the customer's query. The code for this example involves importing dependencies, creating an OpenAI API instance with the API key, and defining the database and customer query. This demonstrates how ChatGPT can revolutionize interactions in various domains, including customer support.

Another example would be this one. Let's say I have a big database with a lot of products, or a lot of whatever you want, and I just want to know, based on the inventory, whether certain products are available. For example, I ask: do you folks have five MacBook Pros with M2 and 96 gigabytes of RAM, and three iPads in stock? There is a big database, and if I just click this button, it reads the database, takes the question (are there iPads in stock?), and ChatGPT looks it up. The answer for the customer will be: yes, we do have five MacBook Pros with M2 in stock, but we don't have any iPads. So this is how ChatGPT can interact with databases. Let's very quickly look at the code. I import all the dependencies, create a new OpenAI instance with the OpenAI API key, and then there is a very small database. It's just two products, but imagine there are thousands there. Each product has a name, a description, and how many units are in stock. Then we have the list of actions available to the AI, with a name, description, and action; we have the memory, the prompt, and the customer query, which asks if we have five MacBook Pros and three iPads. Then we just print the answer to the console. All of this shows how ChatGPT can really change how we interact with the internet and customer support, and so many things in personal life and at work. All of this code is going to be in the Stateful ChatGPT demo, if you want to check it out. The slides will be there too.
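
A minimal sketch of that inventory lookup, passing a small product list to the model as context; the database shape, prompt, and example rows are assumptions rather than the demo's exact code:

```js
// Sketch: answer an availability question against a tiny in-memory "database"
// by giving the inventory to the model as context. Illustrative only.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Imagine thousands of rows; two are enough for the sketch.
const inventory = [
  { name: "MacBook Pro M2 96GB RAM", description: "Laptop", inStock: 5 },
  { name: "iPhone 14", description: "Phone", inStock: 12 },
];

async function answerCustomer(query) {
  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "You answer availability questions using only this inventory: " +
          JSON.stringify(inventory),
      },
      { role: "user", content: query },
    ],
  });
  return res.data.choices[0].message.content;
}

answerCustomer(
  "Do you have five MacBook Pros with M2 and 96GB of RAM, and three iPads in stock?"
).then(console.log);
```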

6. Important Considerations When Using ChatGPT

Short description:

Important considerations when using ChatGPT: understanding limitations, choosing the right input format, monitoring API usage, providing sufficient context, handling errors and exceptions, ensuring data privacy and security, prompt engineering, adhering to API usage policies, avoiding bias in training data, and using fine-tuning if necessary.

So here is some important stuff to take into account when using ChatGPT, to ensure the best experience for developers and also for users. The first one is understanding its limitations. ChatGPT is a machine learning model that relies on training data and algorithms to provide answers, and sometimes it might not be able to give the most accurate or relevant answers to all your queries, even though sometimes it seems like ChatGPT can do absolutely anything, like giving you the exact code you need or saving your dog's life. The truth is that there are a lot of things ChatGPT can't do.

For example, it can't make your coffee yet, but maybe in the future, who knows? Next, choosing the right input format. ChatGPT allows different input formats, including plain text, HTML, and JSON, so we just need to make sure to choose the input format that matches our needs. Then, monitoring API usage: it's important to keep an eye on your API usage to avoid exceeding rate limits or running into billing issues, because it can get quite expensive if you don't watch it.

Also, ChatGPT works best when you provide sufficient context, because then it can generate more accurate and relevant responses. One person was able to create a fantastic game through ChatGPT by giving it step-by-step instructions and a lot of context about how the game should be, and ChatGPT just came up with the game. Even though context is really important, it's also important to provide clear and concise input; avoid overly complex and convoluted sequences or questions. Just be clear, simple, and straightforward, but also provide context. Next, handling errors and exceptions. The OpenAI API, just like any other API, can sometimes throw errors or exceptions, and when that happens we should give users clear error messages and instructions on how to proceed, and have a contingency plan. In case the API failed today and I wasn't able to give my demo, I had a screenshot of what it's supposed to reply. Also, ensure data privacy and security, including encryption and secure storage of sensitive information. I got one API key blocked because I accidentally put it on GitHub, so don't do that.
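
As a sketch of that kind of error handling with the v3 client (the status codes checked and the fallback messages shown are assumptions, not the talk's exact contingency plan):

```js
// Sketch: wrap the API call so rate limits and outages fail gracefully.
// `openai` is an OpenAIApi instance as in the earlier examples.
async function askSafely(openai, messages) {
  try {
    const res = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages,
    });
    return (
      res.data.choices[0]?.message?.content ??
      "There is no response, please try again."
    );
  } catch (err) {
    const status = err.response?.status;
    if (status === 429) {
      return "We're being rate limited. Please wait a moment and try again.";
    }
    if (status >= 500) {
      return "The AI service is temporarily unavailable. Please try again later.";
    }
    console.error("Unexpected OpenAI error:", err.message);
    return "Something went wrong. Please try again.";
  }
}
```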

Also, prompt engineering: the prompt is the description of the task that ChatGPT, or any AI, is supposed to accomplish, and the better the prompt, the better the result. There's a whole discipline around prompt engineering, and I'll give some resources at the end of this talk. Adhere to API usage policies and guidelines to avoid potential legal issues or other consequences, and also watch out for bias in training data. Be mindful of potential biases in the data used to train the ChatGPT model; the data should be as diverse and representative as possible. For example, ChatGPT is facing a potential lawsuit from an Australian mayor because it falsely linked him to a bribery scandal. So it's important to take biases and legal issues into account so you don't get into problems. Finally, use fine-tuning if necessary. If you're not satisfied with the results from ChatGPT or other AIs, you should consider fine-tuning with additional training data or tweaking the model's hyperparameters. Fine-tuning basically means making the model work best for your use case. You can find fine-tuning in the bottom right corner here. Fine-tuning lets you get more out of the model: higher-quality results than prompt design alone, token savings due to shorter prompts, and lower-latency requests.

7. Fine-Tuning and ChatGPT Plugins

Short description:

Fine-tuning improves learning by training on more examples than fit in a prompt, providing better results. Once a model is fine-tuned for your use case, you no longer need to provide examples in the prompt. ChatGPT's knowledge only goes up to September 2021. OpenAI released ChatGPT plugins for up-to-date information, computations, and third-party services. Plugins like Expedia, Kayak, Shopify, and OpenTable are available, and you can create your own. Plugins are in a limited alpha.

So fine-tuning basically improves learning by training on many more examples than can fit in a prompt, giving you better results across a wide range of tasks. Once you fine-tune a model for your use case, you don't have to provide examples in the prompt anymore. This can help you save costs and get lower-latency requests.
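
As a rough sketch of what kicking off a fine-tune looked like with the v3 Node client around the time of this talk; the file name, base model, and IDs are placeholders, and the fine-tuning API has since changed, so treat this as an illustration rather than current usage:

```js
// Sketch: fine-tuning flow with the openai v3 client. Names and IDs are placeholders.
// Training data is JSONL, one {"prompt": "...", "completion": "..."} pair per line.
const fs = require("fs");
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function startFineTune() {
  // 1. Upload the JSONL training file.
  const file = await openai.createFile(
    fs.createReadStream("training-data.jsonl"),
    "fine-tune"
  );

  // 2. Start a fine-tune job on a base model (placeholder choice).
  const job = await openai.createFineTune({
    training_file: file.data.id,
    model: "davinci",
  });

  console.log("Fine-tune job started:", job.data.id);
}

startFineTune().catch(console.error);
```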

Also, it's important to take into account that ChatGPT's knowledge only goes up to September 2021, so if you ask for the most important news of January 2023, it will say something like, "I don't have access to future events and information; my training data has a cutoff date of September 2021." So this is a big problem with ChatGPT. To partially solve it, OpenAI released ChatGPT plugins, which allow you to access up-to-date information, run computations, or use third-party services. These plugins let you retrieve real-time information like sports scores, stock prices, and the latest news. With these plugins, I would be able to ask about the most important news of January 2023, retrieve knowledge-base information like company docs and personal notes, and perform actions on behalf of the user, like booking a flight or ordering food. We'll basically be able to book our vacations, and our whole lives, with ChatGPT. These are some of the plugins that are available, including Expedia, Kayak, Shopify, and OpenTable. So I would be able to say, "I'm looking for a vegan restaurant in Berlin, can you please book me a great restaurant for Friday night?" and ChatGPT will be able to do this for us. You can also create your own plugins that work for you, for example for your documentation or your personal things. It's still a limited alpha, so most developers won't have access to it yet.

8. GPT-4 and its Capabilities

Short description:

GPT-4 is the improved version of ChatGPT, capable of processing videos, images, speech, and text. It can create fully functional websites and develop video games without coding experience. GPT-4 can also explain jokes and demonstrates superior performance on academic and standardized tests. OpenAI partners with Microsoft, Morgan Stanley, Duolingo, Stripe, and Khan Academy. GPT-4 can detect issues in Ethereum smart contracts, and Microsoft announced that Bing will run on GPT-4. The latest report by Goldman Sachs suggests that 300 million jobs may be affected by AI, with administrative workers, lawyers, architects, and engineers being the most impacted.

Now, let's talk very briefly about GPT-4. GPT-4 is the improved version of ChatGPT. It can process videos, images, speech, and text, and it's just incredible. For example, you can just draw something, like a website, and GPT-4 is able to create a fully functional website based on your drawing. GPT-4 can also develop an entire video game for you. One person was able to create the famous game Pong in under 60 seconds. So you don't even need any coding experience to create games; you just need GPT-4.

Also, GPT-4 is able to explain jokes. This is going to be great for me, because I'm really bad at understanding jokes sometimes. You just paste a meme or a joke and it will explain why it's funny. The GPT-4 technical report also demonstrates superior performance on academic and standardized tests, such as the bar exam. In comparison, ChatGPT scored around the 10th percentile while GPT-4 reached the 90th percentile. That means GPT-4 was able to surpass 90 percent of the people who took the bar exam.

There are OpenAI partners, including Microsoft, Morgan Stanley, Duolingo, Stripe, and Khan Academy. Duolingo is using GPT-4 for more engaging conversations, Stripe to prevent fraud, and Khan Academy to ask students individual questions to improve their knowledge. GPT-4 is capable of detecting issues in Ethereum smart contracts: a director at Coinbase used GPT-4 to find vulnerabilities and areas of exploitation in a contract. This can help you save a lot of money. Also, Microsoft announced that Bing will run on OpenAI's GPT-4. Can you imagine if Bing became more popular than Google? That would only be possible through AI.

And finally, some ethical aspects. The latest Goldman Sachs report says that 300 million jobs could be affected by the latest wave of AI. The jobs that are going to be affected the most are administrative workers, lawyers, architects, and engineers. I hope they're not talking about software engineers, but who knows? The work that is going to be affected the least is manual work.

9. Impact of AI on Jobs

Short description:

In the short and medium term, we will experience a boost in productivity. Most jobs will be complemented by ChatGPT, with only a small percentage being substituted. Job displacement due to automation has historically been offset by new job creation, cost savings, and increased productivity. AI, including ChatGPT and GPT-4, presents exciting opportunities for adaptation. While it may be scary, GPT is good, and we have it under control. Thank you!

Things like installation, maintenance, repair, construction, and cleaning. But the question is, what will happen to our jobs? What will happen when one developer is able to do what ten developers are doing, like the famous 10x developer? Well, the truth is that there is so much software to be built, and so much product and roadmap backlog, that it's very likely that in the short and medium term we will simply experience a boost in productivity.

Also, it's important to differentiate between substituting and complementing. Most jobs, 63%, are going to be complemented; ChatGPT is going to complement these jobs. I'm pretty sure all of us have already been using ChatGPT to complement our jobs and help us with a lot of stuff. Only 7% of jobs are going to be substituted, and 30% of jobs are going to remain unaffected.

Also, job displacement because of automation has already happened many times in the past, like the agricultural revolution, the industrial revolution, and other historical events. When this happened, it was normally offset by new job creation, significant labor cost savings, and higher productivity for non-displaced workers. AI could also raise annual US labor productivity growth by 1.5% over a 10-year period. I'm personally very excited about the future of AI. I think all of us should work very deeply with AI, ChatGPT, and GPT-4 to make them work for us, and it's all about adaptation before we get to this point: "Are you real?" "Well, if you can't tell, does it matter?" We have to admit, it's a little bit scary. But don't worry, GPT is good; we won't end up like in Terminator or anything like that. It's all good, we have this under control. So thank you, and hasta la vista, baby. If you have any questions, this is my Twitter, and these are some resources you can use for GPT and ChatGPT. Thank you very much. That's it.

10. Ensuring Security of Internal Company Data

Short description:

How can we ensure the security of our internal company data shared with OpenAI API? The documentation provides a series of clear step-by-step instructions to secure your application. There have been some reports of information being stolen by ChatGPT, which raises concerns about security.

All right. Thanks a lot, Liz. We have a lot of questions, like I expected, to be honest. Let's start with the top one. How can we ensure the security of our internal company data shared with the OpenAI API? I mean, there is a whole section about security in the OpenAI documentation. I didn't read the whole thing, but there's a series of clear, step-by-step instructions on how you can secure your application. Also, I think there's been some scandal, because there have been reports of information being stolen via ChatGPT. So yeah, there's a whole thing about security.

11. Extension for Executing Scripts in VS Code

Short description:

The extension used to execute scripts from MD files in VS Code is called Runme. It allows you to run commands from readmes, making documentation and demos easier. Install Runme in VS Code to simplify running things like a long series of Docker commands.

Yeah, I see. What is the name of the extension that you use to execute these scripts in VS Code from MD files?

Yeah, so please install it. My company is working on it; I'm working on it right now. It's called Runme. If you go to VS Code, just install Runme, and you can run all your commands from readmes, which can help you with documentation and demos. If you have, say, 10 Docker commands that you want to run, you just click buttons and it runs them for you. So please go ahead and install Runme. All right.

12. Usage of ChatGPT and Database Access

Short description:

You can limit the conversation to a specific topic by using the API. Companies providing database access to ChatGPT may face risks. The 96GB M2 slide shows product availability. AI tools replacing Node.js is unlikely in the near future.

Next one is about the usage of ChatGPT. Can you limit the conversation to a specific topic by using the API? Yeah. Just as you can configure the avatar, where you say you want it to speak like Steve Jobs or Yoda or whoever, you can also limit the conversation. You can say, okay, give me answers that are 160 characters long and only talk about, I don't know, the cosmos or space or whatever. So you can definitely limit the conversation; you just have to do that at the beginning, in the prompt, when you set up the configuration.
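
As a minimal sketch of that kind of constraint, set as a system message at the start of the conversation (the exact wording of the constraint is an assumption):

```js
// Sketch: constrain topic and answer length with a system message set up front.
const messages = [
  {
    role: "system",
    content:
      "You only answer questions about space and the cosmos. " +
      "Keep every answer under 160 characters. " +
      "If asked about anything else, politely decline.",
  },
  { role: "user", content: "Tell me about black holes." },
];

// Passed to openai.createChatCompletion({ model: "gpt-3.5-turbo", messages }) as before.
```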

All right. You had one slide with the dots, if you remember. So what do those parameters mean, the 175 billion? The 175 billion parameters are, roughly, everything that was collected from the internet, all the different parameters the model uses to be able to provide answers to users. Right. And related to what we saw: which companies really want to provide database access to ChatGPT? Isn't it actually risky and non-compliant? Sorry, can you repeat the question? Which companies really want to provide database access to ChatGPT? Isn't it actually risky and non-compliant? Well, to be honest, I don't know. Another one was about one of the slides, the 96GB M2. What happened with this, if you remember? Oh sorry, I did miss that. Oops. Basically, it would show whether that product was available. I totally forgot to show that, but good catch. Nice. Do you think AI tools can replace Node.js in the near future? Who is going to replace it? AI tools, artificial intelligence. I don't think so in the near future. Maybe it could happen, but in a faraway future, long term, not short term. I don't really think that's happening.

13. Using ChatGPT with Node.js and the Runme Extension

Short description:

How about using ChatGPT with Node.js to enable its usage in banned countries? Since OpenAI uses human text to train ChatGPT, ensuring that our Node.js applications exclude bias is crucial. By providing a diverse and representative dataset, biases can be avoided. Additionally, Runme is an extension that simplifies running commands in VS Code, making it easier to onboard new team members and test that blog posts or documentation actually work.

How about using ChatGPT with Node.js to enable its usage in banned countries, countries where it's not available? Hmm. Wow. I don't know, really. Very complex topic. Yeah. It's exploding. We have more and more questions and only a few minutes, but after the talk, after the questions, you can also find Liz in the Q&A room. Of course. So a few more, maybe?

Yeah. How about using ChatGPT with Node.js to... oh, is that what I just asked? Sorry. Since OpenAI uses human text to train ChatGPT, how can we ensure that our Node.js applications exclude bias? Well, that's where the data you give it matters. The data should be as broad and representative as possible, because it's easy to end up with bias if you train the model to think only a certain way. So if you have a diverse, representative dataset, that's how you can avoid biases in your Node.js applications.

Yeah, there was one I think you would like to answer. Can you tell us a bit more about Runme? Runme? Oh, yeah, sure! Basically, I'm working on it with my company. It's an extension that is very cool, and I would really appreciate it if you go to VS Code and install it. You can find it on the Marketplace, or you can go directly to VS Code, look for Runme, and install it. Then, basically, with just one button you can run all the commands that you need. It can make things like onboarding new people better, because when there are new people there are so many things to install and so much documentation; if everything is clear and you can just click buttons to install it all, that's great. Also, for blog posts or documentation you have out there, if you want to check that they work correctly, you can use Runme to click the buttons and see that everything runs. So I'm really excited to be working on this product, so please download and install it. My company will like it; it will allow me to keep going to events like this. Awesome. Thanks a lot, Liz.
