ChatGPT is revolutionizing the internet. In January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date. In this talk, you'll learn how to use ChatGPT in a Node.js application: we'll use the OpenAI API to interact with the ChatGPT model, get an API key from OpenAI, and then use an API client library to make requests to the API from your Node.js code. Learn hacks to optimize your productivity with ChatGPT and have fun with artificial intelligence!
How to use ChatGPT with Node.js
From:

Node Congress 2023
Transcription
Thank you. You probably see me already. I am Liz. Today we're going to learn how to use ChatGPT with Node.js. I am from Colombia. I'm head developer advocate at a very cool company called Stateful. I'm a community leader, speaker, Node.js evangelist, and blogger. This is my Twitter, Liz Parody, and Stateful's Twitter, in case you want to chat, ask questions, or connect. Artificial intelligence can be divided into before and after ChatGPT. Before, there were AIs that were somewhat functional but not as popular or as accurate as ChatGPT, including IBM Watson, Google DeepMind, Microsoft Cortana, Amazon Alexa, and others. But ChatGPT quickly revolutionized the entire AI world because of its exceptional performance, versatility, free availability, and absolutely mind-blowing natural language answers. I'm pretty sure that most of you, or probably all of you, have already used ChatGPT at this point. And that's no surprise, because ChatGPT reached 100 million users just two months after its release, setting a record by becoming the fastest-growing consumer application to date. In case you haven't used it, ChatGPT is a state-of-the-art language model developed by OpenAI, capable of generating human-like answers to a wide range of natural language inputs. It was trained on a massive dataset of text from the internet, which enabled it to learn broad knowledge and language patterns. One of the most interesting things about ChatGPT is that it's difficult to distinguish what was written by a human from what was generated by the model. For example, you won't be able to tell whether what I just said was the product of my own deep research and analysis or whether ChatGPT told me to say it. So, how did they do it? Imagine that each of these dots is one billion. ChatGPT has a whopping 175 billion parameters. That is a very big number, and for humans it's really difficult to process numbers that big. To put it into scale, one billion seconds is equivalent to 31.7 years, so 175 billion seconds is equivalent to about 5,545 years. It's a really big number. The previous version, released in 2019, had 1.5 billion parameters, so the latest model is over a hundred times larger. Now, how can you integrate Node.js with ChatGPT? Let's see. First, you go to the OpenAI API, which is the best and easiest way to interact with ChatGPT and provides a simple and straightforward way to do so. Then you get an API key that you can put in your code. How do you get this API key? You go to OpenAI, sign up or sign in, click on View API Keys, and then click the Create New Secret Key button. You copy and paste this key into your code, and it will allow you to interact with ChatGPT. Now, show me the code. With these five lines of code, you can do wonderful things in your Node.js applications. First, you create a chat completion, which sends a request to OpenAI and generates a completion based on a prompt given by the user. It has model and messages properties. The model property specifies which version of ChatGPT to use, in this case GPT-3.5 Turbo, and messages includes the user role and the content, which is the prompt. At the end, it returns the content of the first response. That's all you need. I'm going to do a demo using RunMe.
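For reference, here is a minimal sketch of what those five lines might look like with the openai Node.js client that was current at the time of the talk (the v3-style SDK); the prompt and the way the key is loaded are illustrative assumptions, not the exact code from the slides.

```js
// Minimal sketch (openai v3-style SDK); prompt and key handling are illustrative.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function ask(prompt) {
  // Send the user's prompt to the gpt-3.5-turbo chat completion endpoint
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
  });
  // Return the content of the first choice in the response
  return completion.data.choices[0].message.content;
}

ask("Say hello to Node Congress!").then(console.log);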
This extension was developed by my company, so if you can check it out, that would be great. So, let's go here. I'm just going to do a basic example of interacting with ChatGPT, where I can ask ChatGPT questions from my terminal and ChatGPT will respond. This is the extension I was talking about: with just this button, you can run READMEs inside VS Code. If I click this button, it says, what type of chatbot would you like to create? I'll say a Steve Jobs chatbot. I say hello, and then: hi there, I'm Steve Jobs chatbot, how can I assist you today? So I'll say, tell me something every developer should know. It will take a few seconds to call the API, and if the internet is fast, it shouldn't take very long. Okay. So: one of the most important things every developer should know is to never stop learning. Technology and programming languages evolve rapidly, so keeping up with the latest trends and techniques is crucial for staying competitive in the industry. Thank you, Steve Jobs. So, how did I do this? With only 39 lines of code, it's able to do this. First, we import all the dependencies, then I create a new Configuration with the API key that was provided by OpenAI. Then I create a new OpenAIApi with that configuration, and I create a readline interface with an input and an output. The input will be my questions and the output will be the chatbot's responses. Then I have the first question, what type of chatbot would you like to create? This is where I said Steve Jobs. Then I have the user input that says, say hello to your new assistant. These are some of the lines of code I mentioned before, and they're quite important: create chat completion with a message and a model. As we said before, we're going to use GPT-3.5 Turbo, that's the model. And here it will have the first response. Then the user is prompted to enter the next input, and if there is no response from the API, it will say there's no response, please try again. If there is an error, it will be caught here, and that's it. That's all you need to do to integrate a chatbot with Node.js. But it would be even cooler if I could just select some avatars from the CLI. For example, if I could pick from Leonardo da Vinci, William Shakespeare, Yoda, or Steve Jobs, that would be cool. So, if I run this, I can choose from here: what type of chatbot would you like to create? Let's say I want Yoda. It says, perfect, now tell me your question. So I'll say, okay, I am in front of maybe 100 developers, I don't know how many of you are here, 100 developers. Tell me a joke that makes everyone laugh. Let's see what it says. Okay. A joke I have for you. Why did the programmer quit his job? Because he didn't get a raise. Ha, ha, ha. Oh my God, that's so bad. That's really bad. I don't have any more questions for you, and no, thank you. So, how did I do this? First, I have the utils here, where I have four avatars. I can include more or delete one. Here I have them, and then the index here is the code. I'm not going to go deep into this, but I'm using ink to select the colors and to interact in the terminal. Then I have three steps: select the chatbot, take the prompt, and display the answer. Here is the initial state of ChatGPT. That's the rest of the code, and it will be published. Okay. But ChatGPT is more than just a chat that gives answers and really bad jokes.
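Before moving on: the exact 39 lines from the demo aren't reproduced in this transcript, but a rough sketch of the same idea, a terminal chatbot that first asks which persona to use and then loops over questions, could look like this. The persona handling, prompts, and readline usage are assumptions based on the description above, not the actual demo code.

```js
// Sketch of a terminal chatbot similar to the demo (not the exact 39 lines from the talk).
const readline = require("readline");
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Terminal input/output: our questions in, the chatbot's answers out
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
const question = (q) => new Promise((resolve) => rl.question(q, resolve));

async function main() {
  const persona = await question("What type of chatbot would you like to create? ");
  // The persona becomes a system message so every answer stays in character
  const messages = [{ role: "system", content: `You are a ${persona} chatbot.` }];

  while (true) {
    const input = await question("You: ");
    messages.push({ role: "user", content: input });
    try {
      const completion = await openai.createChatCompletion({
        model: "gpt-3.5-turbo",
        messages,
      });
      const reply = completion.data.choices[0]?.message?.content;
      if (!reply) {
        console.log("No response, please try again.");
        continue;
      }
      messages.push({ role: "assistant", content: reply });
      console.log(`Bot: ${reply}`);
    } catch (err) {
      console.error("API error:", err.message);
    }
  }
}

main();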
It's also a powerful model that can boost your projects, career, company, and personal life to the next level. One example from personal life: ChatGPT saved a person's dog's life. The dog was very sick, so he took it to the vet and the vet ran some blood tests, but they were not sure what was wrong. So this person put all the blood test results into ChatGPT, and ChatGPT told him exactly what the dog had, and he was able to save it. I mean, this is pretty incredible technology. Let's look at another example. Let's say I am a customer and I need support for one of the products I bought. I send this: hello, support team, I'm having problems with a Smartwatch 111 that I bought. I received the package a week ago, I paid in cash, and now it doesn't turn on. I checked the battery, tried to reset it, and pressed all the buttons, but it still doesn't work. I think it's a defect in the product and I want a replacement. Let me know what the next steps are. So, if I have this message and I want to extract the most important information using ChatGPT, what I can do is go here to the README, and I just want to get the product name, the issue description, the issue summary, and the payment method. I only want to know these four things. ChatGPT is able to read that message in natural language and extract that the product name is the Smartwatch 111, the issue description is a defect in the product, the issue summary is a refund or replacement request, and the payment method is cash. So, with this, let me check the code. Here, I'm importing everything that I need. I'm using the Mines library here. Then I define these four entities: the product name, the issue description, the issue summary, and the payment method, which in this case is cash or credit card, but it could be other things. Then I have the message, I print the result to the console, and that's it. That's how we can extract information with ChatGPT. This is quite cool. Things like this can help you automate messages, emails, and chats, and give customers exactly what they need. So now we can ask, what will happen to the people who work in customer support? Well, that's something we'll talk about later. Another example would be this one. Let's say that I have a big database with a lot of products, or a lot of whatever you want, and I just want to know, based on the inventory, whether certain products are available. For example, if I ask, do you folks have 5 MacBook Pros with M2 and 96 GB of RAM, and 3 iPads in stock? There is a big database, and if I just click this button, it will read the database, take the question, and ChatGPT will look it up. The answer for the customer will be: yes, we do have 5 MacBook Pros with M2 in stock, but we don't have any iPads. So, this is how ChatGPT can interact with databases. Let's very quickly see the code. I just import all the dependencies, I create a new OpenAIApi with the OpenAI API key, and then there is a very small database, just two products, but imagine there are thousands. We have a name, a description, and how many units are in stock. Then we have the list of actions available to the AI, with a name, a description, and an action. We have the memory, the prompt, and the customer query, which asks whether we have 5 MacBook Pros and 3 iPads. And then we just print the answer to the console.
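The demo code for the extraction and inventory examples isn't included in this transcript, but the core idea of the extraction example can be sketched roughly like this, assuming it uses the same chat completion call as before; the prompt wording, field list, and sample message are paraphrased from the talk rather than taken from the actual demo.

```js
// Sketch of the extraction idea: ask the model to pull structured fields out of a
// free-text support message. Prompt wording and field names are illustrative.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

const message = `Hello support team, I'm having problems with a Smartwatch 111 that I bought.
I received the package a week ago, I paid in cash, and now it doesn't turn on.
I think it's a defect in the product and I want a replacement.`;

async function extract() {
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "user",
        content:
          "Extract the product name, issue description, issue summary, and payment method " +
          "(cash or credit card) from this customer message and reply as JSON:\n\n" + message,
      },
    ],
  });
  // Print the structured result the model extracted from the natural-language message
  console.log(completion.data.choices[0].message.content);
}

extract();
```

The inventory demo seems to follow a similar idea: the product data and the customer's stock question are given to the model as context, and the answer is printed to the console.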
So, yeah, ChatGPT can really change how we interact with the internet, customer support, and so many things in personal life and at work. All of this code is going to be in the Stateful ChatGPT demo repo if you want to check it out, and the slides will be there too. Now, some important things to take into account when using ChatGPT, to ensure the best experience for developers and also for users. The first one is understanding its limitations. ChatGPT is a machine learning model that relies on training data and algorithms to provide answers. Sometimes it might not be able to give you the most accurate or relevant answers to your queries, even though it sometimes seems like ChatGPT can do absolutely anything, from giving you the exact code you need to saving your dog's life. The truth is that there are a lot of things ChatGPT can't do. For example, it can't make your coffee yet, but maybe in the future, who knows. Choosing the right input format: ChatGPT allows different input formats, including plain text, HTML, and JSON, so we just need to make sure to choose the input that matches our needs. Monitoring API usage: it's important to keep an eye on API usage to avoid exceeding rate limits or running into billing issues, because it can get quite expensive if you don't watch it. Also, ChatGPT works best when you provide sufficient context; with more context, it can generate more accurate and relevant responses. One person was able to create a fantastic game through ChatGPT by giving it step-by-step instructions and a lot of context about how the game should work, and ChatGPT just came up with the game. Even though context is really important, it's also important to provide clear and concise input. Avoid overly complex and convoluted questions; be clear, simple, and straightforward, but also give context. Handling errors and exceptions: the OpenAI API, just like any other API, can sometimes throw errors or exceptions, and when this happens, we should provide users with clear error messages and instructions on how to proceed, and have a contingency plan. In case the API was failing today and I wasn't able to give my demo, I had screenshots of what it was supposed to reply. Also, ensure data privacy and security, including encryption and secure storage of sensitive information. I got one API key blocked because I accidentally pushed it to GitHub, so don't do those types of things. Also, prompt engineering: the prompt is the description of the task that ChatGPT, or any AI, is supposed to accomplish, and the better the prompt, the better the result. There's a whole discipline around prompt engineering, and I'm going to share some resources at the end of this talk. Adhere to API usage policies and guidelines to avoid potential legal issues or other consequences. And also avoid bias in training data: be mindful of potential biases in the data used to train the ChatGPT model; the data should be as diverse and representative as possible. For example, ChatGPT is facing a potential lawsuit from an Australian mayor over false claims about a bribery scandal. So yes, it's important to take biases and legal aspects into account to stay out of trouble.
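On the error-handling and contingency-plan point above, a small wrapper along the following lines (the function name, messages, and fallback text are made up for illustration) can catch failures from the API, detect rate limiting, and fall back to a clear message for the user instead of crashing.

```js
// Illustrative only: wrap the API call so users get a clear message instead of a stack trace.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function safeAsk(prompt) {
  try {
    const completion = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    });
    return completion.data.choices[0].message.content;
  } catch (err) {
    // With the axios-based v3 client, HTTP errors carry err.response; 429 means rate limiting
    if (err.response && err.response.status === 429) {
      return "We're sending too many requests right now. Please try again in a moment.";
    }
    // Contingency plan: a clear fallback message and instructions for the user
    return "Sorry, the assistant is unavailable at the moment. Please try again later.";
  }
}

safeAsk("Hello!").then(console.log);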
Use fine-tuning if necessary. So, fine-tuning: if you're not satisfied with the results of ChatGPT, you should consider fine-tuning it with additional training data or tweaking the model hyperparameters. Fine-tuning basically means making the model work better for your use case. We can find fine-tuning in the bottom right corner here. Fine-tuning lets you get more out of the model: higher-quality results than prompt design alone, token savings due to shorter prompts, and lower-latency requests. It improves learning by training on many more examples than can fit in a prompt, giving you better results on a wide number of tasks. Once you fine-tune a model for your use case, you don't have to provide examples in the prompt anymore, so this can help you save costs and get lower-latency requests. It's also important to take into account that all of ChatGPT's information only goes up to September 2021. If you ask for the most important news of January 2023, it will say something like: I can't access future events or information, my training data cuts off in September 2021. This is a problem with ChatGPT. To partially solve it, OpenAI released ChatGPT plugins, which allow you to access up-to-date information, run computations, or use third-party services. These plugins let you retrieve real-time information like sports scores, stock prices, and the latest news, so with a plugin I would be able to ask what the most important news was in January 2023. They can also retrieve knowledge-base information like company docs and personal notes, and perform actions on behalf of the user, like booking a flight or ordering food. We'll basically be able to book our vacations, and our whole lives, with ChatGPT. These are some of the plugins that are available, including Expedia, Kayak, Shopify, and OpenTable. So I could say, I'm looking for a vegan restaurant in Berlin, can you please book me a great restaurant for Friday night at 8 p.m.? And ChatGPT would be able to do it for us. You can also create your own plugins that work for you, for your documentation or your personal things. It's a limited alpha for now, so most developers won't have access to this yet. Now, let's talk very briefly about GPT-4. GPT-4 is the improved version of ChatGPT. It can process videos, images, speech, and text, and it's just incredible. For example, you can draw something like a website, and GPT-4 is able to create a fully functional website based on your drawing. GPT-4 can also develop an entire video game for you. One person was able to create the famous game Pong in under 60 seconds, so you don't even need coding experience to create games; you just need GPT-4. GPT-4 is also able to explain jokes. This is going to be great for me because I'm really bad at understanding jokes sometimes. You just paste a meme or a joke and it will explain why it's funny. And the GPT-4 technical report demonstrates superior performance on academic and standardized tests such as the bar exam. In comparison, ChatGPT achieved the 10th percentile while GPT-4 reached the 90th percentile, which means GPT-4 surpassed 90% of the people who took the bar exam. There are OpenAI partners including Microsoft, Morgan Stanley, Duolingo, Stripe, and Khan Academy. Duolingo is using GPT-4 for more engaging conversations, Stripe to prevent fraud, and Khan Academy to ask students individual questions to improve their knowledge.
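Before moving on to more GPT-4 examples, going back to the fine-tuning point at the start of this section: at the time of the talk, fine-tuning went through the legacy fine-tunes endpoint, and a rough, hedged sketch of that flow might look like the following. The file name, the example training pair, and the choice of davinci as the base model are assumptions for illustration, not details from the talk.

```js
// Rough sketch of the legacy fine-tunes flow as it worked around the time of the talk;
// file names and the base model choice are illustrative.
const fs = require("fs");
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function fineTune() {
  // Training data is a JSONL file of prompt/completion pairs, e.g.:
  // {"prompt": "Customer asks about refunds ->", "completion": " Explain the refund policy."}
  const file = await openai.createFile(
    fs.createReadStream("training_data.jsonl"),
    "fine-tune"
  );

  // Kick off a fine-tune job against a base model
  const job = await openai.createFineTune({
    training_file: file.data.id,
    model: "davinci",
  });
  console.log("Fine-tune job started:", job.data.id);
}

fineTune();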
GPT-4 is capable of detecting issues in Ethereum smart contracts. One person, a director at Coinbase, used GPT-4 to find vulnerabilities and areas of exploitation in a contract, so this can help you save a lot of money. Also, Microsoft announced that Bing will run on OpenAI's GPT-4. Can you imagine if Bing became more popular than Google? That would only be possible through AI. And finally, some ethical aspects. The latest Goldman Sachs report says that 300 million jobs could be affected by the latest wave of AI. The jobs that are going to be affected the most are administrative workers, lawyers, architects, and engineers. I hope they're not talking about software engineers, but who knows. The jobs that are going to be affected the least are manual ones, like installation, maintenance, repair, construction, and cleaning. But the question is, what will happen to our jobs? What will happen when one developer is able to do what ten developers do today, like the famous 10x developer? Well, the truth is that there is so much software to be built, and so much product and roadmap backlog, that it's very likely that in the short and medium term we will experience a boost in productivity. It's also important to differentiate between substituting and complementing, because for most jobs, 63%, ChatGPT is going to be complementing. I'm pretty sure all of us have already been using ChatGPT to complement our jobs and help us with a lot of things. Only 7% of those jobs are going to be substituted, and 30% of jobs are going to remain unaffected. There's also job displacement because of automation, which has happened many times in the past, like the Agricultural Revolution, the Industrial Revolution, and other historical events. When this happens, it has normally been offset by new job creation, significant labor cost savings, and higher productivity for non-displaced workers. AI could also raise annual US labor productivity growth by 1.5% over a ten-year period. I'm personally very excited about the future of AI. I think all of us should work very deeply with AI, ChatGPT, and GPT-4, and make them work for us. It's all about adaptation before we get to this point: are you real? Well, if you can't tell, does it matter? I mean, we have to admit it's a little bit scary. But don't worry, GPT is good, we won't end up like in Terminator or anything like that. It's all good, we totally have this under control. So, thank you, and hasta la vista, baby. If you have any questions, this is my Twitter and the Stateful Twitter, and these are some resources that you can use for GPT and ChatGPT. So, thank you very much. That's it. All right. Thanks a lot, Liz. We have a lot of questions, as I expected, to be honest. Yeah. Let's start with the top one. How can we ensure the security of our internal company data shared with the OpenAI API? I mean, there is a whole section in the OpenAI documentation about security. I didn't read the whole thing, but there's a series of clear steps for how to make your application secure. Also, I think there's been some controversy because some information has reportedly been leaked through ChatGPT. So yeah, there's a whole topic around security. I see. Yeah. What is the name of the extension that you use to execute these scripts in Visual Studio Code from MD files? Yes.
So, please install it. My company is working on it, and I'm working on it right now. It's called RunMe. If you go to VS Code and install RunMe, you can run all your commands from the README, which can help you with documentation, demos, and so on. If you have, say, ten Docker commands you need to run, you just click buttons and it runs them for you. So please go ahead and install RunMe. All right. Yeah. The next one is about usage of ChatGPT. Can you limit the conversation to a specific topic by using the API? Yeah. Just as you can configure the avatar, saying I want it to speak like Steve Jobs or Yoda or whatever, you can also limit the conversation in ChatGPT. You can say, for example: only give me answers that are 160 characters long, and only talk about, I don't know, the cosmos or space. You can definitely limit the conversation; you just have to do it at the beginning, in the prompt, when you set up the configuration. All right. You had one slide with the dots, if you remember. What do those parameters mean, the 175 billion? The 175 billion parameters are everything that was collected from the internet, all the different parameters the model uses to be able to provide answers to users. Right. And I think it's relative to what we had before. Which companies really want to provide database access to ChatGPT? Isn't it actually risky and non-compliant? Sorry, can you repeat the question? Which companies really want to provide database access to ChatGPT? Isn't it actually risky and non-compliant, in your opinion? Well, to be honest, I don't know. Yeah. Another one was regarding one of the slides, the 96-gigabyte M2. What happened with that? If you remember, of course. Oh, sorry, I did miss that. Oops. Now that I think about it, yeah. Basically, it would show whether there was availability of that product. I totally forgot to show it, oops. Good catch. Nice. Do you think AI tools can replace Node.js in the near future? Who's going to replace it? AI tools, artificial intelligence. Yeah, I don't think so in the near future. Maybe it could happen, but far in the future, long term, not short term. I don't think that's really happening. How about using ChatGPT with Node.js to enable its usage in banned countries, where it's not available? Hmm. Wow. I don't know, really. It's a very complex topic. Yeah. It's exploding, we have more and more questions. We have a few minutes, but after the questions you can also find Liz in the Q&A room, of course. So, a few more, maybe. How about using ChatGPT with Node.js to enable... that's the one I already asked, sorry. Since OpenAI used human text to train ChatGPT, how can we ensure that our Node.js applications are excluding bias? Well, that's where the data should be as wide and representative as possible, because it's easy to end up with bias if you train the model to think just a certain way. So, if you have a diverse and representative set of data, that's how you can avoid biases in your Node.js applications. Yeah. I think you would like to answer this one: can you tell us a bit more about RunMe? RunMe? Oh, yeah, sure. So, basically, I'm working on that with my company.
It's an extension that is very cool, and I would super appreciate it if you go to VS Code and install it. You can find it on the marketplace, or you can go directly into VS Code, search for RunMe, and install it. Then, basically, with just one button you can run all the commands that you need. It can make onboarding people even better, for example, because when new people join there are so many things to install and so much documentation, and if you have everything laid out clearly, you can just click buttons and install everything. Also, for blog posts or documentation that you have out there, if you want to check that they're working correctly, you can use RunMe to click the buttons and see that everything works. So yeah, I'm really excited to be working on this product. Please download and install it; my company will like it, and it will allow me to keep going to events like this. So, please. Awesome. Thanks a lot, Liz. Thank you.