MERN Stack Application Deployment in Kubernetes


Deploying and managing JavaScript applications in Kubernetes can get tricky, especially when a database also has to be part of the deployment. MongoDB Atlas has made developers' lives much easier, but how do you take a SaaS product and integrate it with your existing Kubernetes cluster? This is where the MongoDB Atlas Operator comes into play. In this workshop, attendees will learn how to create a MERN (MongoDB, Express, React, Node.js) application locally, and how to deploy everything into a Kubernetes cluster with the Atlas Operator.



Transcription


All right. So let's get started with our workshop today. So hi, everyone. Welcome to this workshop about MERN applications and Kubernetes. There's a lot of content that we will cover. As I said, I will be monitoring the chat. There is also a recording that will be made available tomorrow. You should have access to all of it, so don't worry too much about taking notes. If you want to follow along, I'll probably go fast because there's a lot of content, but at least you'll have the recording. You'll be able to look at it in your own time and pause when you need to if you want to try it on your own afterwards. I'll also share a GitHub repo, and you'll have all the resources that you need to reproduce everything that we'll be discussing today. All right. Perfect. So out of curiosity, who here is a JavaScript developer? Who's completely new to MERN? Who's completely new to Kubernetes and containers? If you want to just drop a line in the chat, that will help guide me through the workshop. I'll try to spend more time in the places where people have an actual interest. But the plan for today, basically, is to go and build a full stack JavaScript application using the MERN stack. We'll start from scratch. I've got an empty folder here just waiting for some code. So we'll build an application: frontend, backend, connected to a database, and running on Node.js. And then we'll containerize all of those different components into their own little containers and services, which we will ultimately, at the end, deploy to Kubernetes. So we've got people that have some experience with Kubernetes, a little bit less with MERN, a lot of experience with JavaScript, trying to get into Node.js. That's good. That's good. You'll see how to build a basic application on Node.js here. We'll start from scratch.
I won't go into the advanced concepts of Express or React, but you should have enough to get started at least. I think that's the goal here: at least you'll know what to Google when you hit an error at this point. Intermediate knowledge. Awesome. Awesome. Perfect. Jack of all trades. And I think that's very important, especially for this type of workshop. We will be touching a lot of things: some frontend, some backend, some DevOps. So if you're one of those generalist developers, I think this is a very, very good workshop for you. All right. So let's jump right into our subject. But right before, let me introduce myself. So hi, my name is Joel. I am based in Ottawa, Canada, as I've already mentioned if you joined a little bit early. I work as a developer advocate for MongoDB. And if you ever want to reach me, if you ever have any questions, any comments, feel free to follow me on Twitter. That's usually the easiest way to get in touch with me. I'm more than happy to answer any questions that you might have. DMs are always open. And sometimes I post some smart stuff, mostly useless stuff, but sometimes smart stuff. It's an easy way to get in touch. Okay. So we'll talk a little bit about the MERN stack. One of the very popular ways to deploy applications nowadays is using a three-tier application architecture. Basically, you've got clients that connect to a backend server, which then connects to a database server. So we've got an example here of our application. The frontend is downloaded from the web and runs inside of the browser. We've got large single-page applications, and there are a lot of frameworks nowadays. I'm pretty sure you're all familiar with some of the major players: React, Angular, Vue. Those let you build those frontends, those clients.
And those clients will connect to a backend server. So that's a very popular way to build your application: you've got an API, and then you've got your frontend. And this way you can easily expand your application. If you need a mobile application later on, it's easier if you already have that API built. So that's a big difference compared to, let's say, older PHP applications where everything lived in the backend. So you've got a backend, and your backend will be in charge of connecting to the database server. There are different ways to do that. This is a very popular one, and it's the one we'll see today. You could also go serverless and connect directly to a database if you were using some sort of cloud data platform, but we're not going to go into that today. We'll really look at the case where you have a backend server. And it has a lot of benefits, not ones that we will exploit a lot today, but for one, security is a little bit better: you can really tweak your queries, and you can abstract away the database so users can't see anything of what's going on in the backend. So that's a little bit more secure. There are a few benefits here. But when we talk about the MERN stack, really, we're looking at distinct technologies. For our database, we're looking at MongoDB. That's the M. Express, that's the E: an Express server. That also covers the N, because Express is a framework for building backends, HTTP servers, running on Node.js, a very popular one, probably the most popular nowadays. And finally, our frontend will be running React. So we'll build a React application, a Node.js backend running Express, and we'll use a MongoDB database. And I'll try to keep this in three separate parts, as I said: building the MERN stack, containerizing everything, and then deploying on Kubernetes.
So the first thing for our MERN stack: we'll start by deploying MongoDB. I'll use a container here. I'll cheat a little bit, and I won't go into the details of it; I'll come back to that in the container part. Then we'll build a Node.js application. Finally, we'll build a React frontend, all of that ideally in less than an hour. So gotta stretch those fingers. I was trying to think of the simplest application I could build, to really keep it as simple as possible, and I decided to build a guestbook. Yeah, we've got some people that are clearly familiar with GeoCities already. If you're not familiar with GeoCities, that was the hosting platform in the '90s, and a lot of websites had this feature, which is to have a guestbook. That was one of the top features: people could just put in their name and a comment, and they would all display on the website. So it would persist data. That was such an advanced feature, such a cool feature. But it's simple, because there's a simple form, no authentication whatsoever, just have everybody put in an entry. And Alex, you're right, I was thinking about implementing that calendar as well, but I decided to keep it very, very simple and only have the guestbook. But oh, yes, I so wanted to have that calendar on that page. All right. So what we'll have here is those three tiers. We will have our Mongo database. It will have one simple database with one collection in it. The database will be called mernk8s. K8s, pronounced "kates", is a common abbreviation for Kubernetes, if you're not familiar with it. And it will have an entries collection. Once again, if you're not familiar with MongoDB: a database is a database, but instead of using tables, it uses collections. And instead of records inside of a table, we talk about documents in a collection. Essentially, it works in a very similar way. The way to access the data is a little bit different, but we'll get to that. Next, we'll have an Express server.
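So a single document in that entries collection might look something like this. The name and message fields match the form we'll build later; the _id value shown is just a placeholder, since MongoDB generates a real ObjectId on insert:

```json
{
  "_id": "65a1f0c2e4b0a1b2c3d4e5f6",
  "name": "anonymous",
  "message": "Cool site, dude"
}
```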
Our Express server will have two routes. It will have one route to fetch all of the data, GET /entries, so it'll just send back all of the guestbook entries. And it will have one POST route where people will be able to post a new entry. So we'll create that API. And then we'll have the React application, which will have just a single page that displays both the form and all of the entries. So just keeping it as simple as we can. This one will be running on NGINX eventually, once we deploy it into Kubernetes. All right. So I guess we're ready. Let's get started with some actual coding. As I said, we'll go through the whole process of building a full MERN stack application. So let me store that away for now, open up my terminal, and create a new folder for this event. So devops.js. There it is. Okay, cd into it. All right, nothing so far. That's all good. We'll start by creating our backend, as I said. So we'll just create a folder called back. And from here, let's open up VS Code and start coding. Actually, that's not entirely true. Just before I get started coding, one of the things that you'll want to do is run npm init. That'll take care of creating your package.json file. I'll just accept all of the defaults. So I already have this file created now. It provides some information to Node.js. If you're not familiar with Node.js, look up what package.json files are, but they're used by pretty much any JavaScript project nowadays. So there shouldn't be anything very different here. We'll be using npm to install different packages. The packages that I'll need for this application are express, which is the server framework that we'll be using. I'll also need a couple of dependencies, and I'll just add them right away: dotenv will be one, cors will be one, and I'll need the MongoDB driver eventually. So that's all we'll be needing for now.
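After npm init and the installs, the generated package.json ends up with a dependencies block along these lines. The exact version numbers are whatever npm resolves on the day you run it; the ones below are only illustrative:

```json
{
  "name": "back",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "cors": "^2.8.5",
    "dotenv": "^16.0.3",
    "express": "^4.18.2",
    "mongodb": "^4.12.0"
  }
}
```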
So I'll just go ahead and install that. The other tool I'll be using, which you might want to install globally if you're following this afterwards because it's a very useful tool, is nodemon. It tracks changes in your project and just reloads your server every time, instead of you stopping your server and restarting it every time. A very useful tool. I already have it installed globally, so I'll just use it: nodemon. Of course, there's no file right now, so it crashes, which makes perfect sense. But I guess now that we've got this one, we are ready to get started with our application. All right. So let's make sure that this is actually readable, and we'll go into our back folder. That's good. We'll create our first file. I'll just need to move that video. All right, our first file: index.js. And immediately you should see that nodemon restarted, so you can see that it's really tracking those changes. All right. So if this is not big enough, or if you can't see, or if there's anything, feel free to use the chat again. I've got a question; I'll get back to that question in just a few minutes, if you don't mind. All right. So, open a new terminal. That's all good. And now we're ready to get started. Let me just look at my cheat sheet here. The first thing that we'll need is to declare our Express server. I'll be using the require syntax right now. Some people will say I should be using imports; let's stick to the classics for now and keep it as simple as possible. So I've required express. That's all I need to start my Express server. I'll also require another library right away: dotenv. I'll require it and run the config method right away, and I'll come back to that one. The next thing I want is to declare my port. I'll be using an environment variable for the port on which this application should run.
So using environment variables is a great way to make sure that your application can run in multiple environments. Say on my local machine I want to run it on port 5000, or I want to switch to port 5001 because I already have something on port 5000. Environment variables are an easy way to make sure that our application can run in multiple environments. Now that we've got that, let's instantiate our application: express(). So we've got an application object available for us, and now we can start adding routes to that application. We'll add a GET route for the /health endpoint. That's an endpoint that is often used by services running on Kubernetes, so we'll just use that as a standard. Every route handler has access to the actual request that came in, as well as the response that we will be sending. It's really hard to speak and try to debug all at once. All right. What this route will do is send a simple object, and the object will only say status: Ok. That's all we'll do. We'll just assume that if the route is available, then all statuses are good. Everything is green. So we'll just send status Ok, and we will also send a status of 200, just to acknowledge that everything is running. So if we've got a service that wants to know if this server is running, it can check that route and check that there's a 200 there. Finally, we need to start our server. So we will listen on a specific port, and there is a callback that is available to us. In there, we'll just log a message confirming that the server actually started, "Server started on port" so-and-so. That can be useful for debugging purposes, so we know which port it started on. There you go. All right. So it looks like I have my first Express server. If we're looking at "started on port undefined", that is because, well, I haven't defined a port yet. So: export PORT=5000.
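The /health route described above answers with HTTP 200 and a tiny JSON body. Express isn't needed to see the handler's shape: in this sketch, `res` is a minimal Express-like response stub of my own, just enough to exercise the handler outside a real server.

```javascript
// The /health handler from the walkthrough: if this route is reachable
// at all, report everything as green.
const healthHandler = (req, res) => {
  res.status(200).send({ status: "Ok" });
};

// Minimal Express-like response stub (my stand-in, not part of the talk):
// it records the status code and payload the handler sent.
const res = {
  statusCode: null,
  body: null,
  status(code) { this.statusCode = code; return this; },
  send(payload) { this.body = payload; return this; },
};

healthHandler({}, res);
console.log(res.statusCode, res.body); // 200 { status: 'Ok' }
```

In the real app the same arrow function is registered with `app.get("/health", ...)` and Express supplies the request and response objects.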
And now if I restart, you can see that it's now running on port 5000. Right. So I now have a server. I have a specific route. So let's just open up a new window, there it is, and try to hit localhost:5000/health. And there we go: we've got status Ok. So we've got our first full server running. Well, it's a very simple one. It has a single endpoint, but we've got something coming up. Now that I have my server up and running, I'll just go ahead and start a Docker instance. Not a Docker instance, but a MongoDB running inside of a Docker container. Installing MongoDB can be complex, and maybe you don't want all of it. Using Docker is very convenient for that. It's ephemeral: once I stop that container, it'll just kill everything with it and remove any data that was inserted in my database. So it can be very useful for testing purposes in your local development environment. To start a MongoDB container, I use the docker run command. We'll give it a name. Actually, I should make sure that I don't have one running already. So, okay. docker run, we'll give it a name, we'll call it mongodb. --rm makes sure it cleans everything up once it's done; -d runs it in the background. We will map some ports, which tells Docker that any incoming request to that port on the host should be forwarded inside the container. It also uses environment variables, so we can specify the root username: MONGO_INITDB_ROOT_USERNAME will be user. Very simple. Not very secure. And then we've got MONGO_INITDB_ROOT_PASSWORD, and we'll use pass. Once again, not the most secure username and password, but hey, that'll work. And finally, we tell it what image we want to run: the mongo image, which is an official image on Docker Hub. So there it is. I've got a confirmation. My Mongo instance has started.
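Put together, the command from this walkthrough looks like the following. The container name, username, and password are the throwaway values from the talk, so don't reuse them anywhere real:

```shell
docker run --name mongodb --rm -d \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=user \
  -e MONGO_INITDB_ROOT_PASSWORD=pass \
  mongo
```

The `-p 27017:27017` flag maps MongoDB's default port to the host, which is what lets the shell and the Express server connect via 127.0.0.1:27017.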
So if I do docker ps, I can see that I now have, well, eventually, there it is, a server running on my machine. So it's that easy to install MongoDB when you're using containers. You don't have to worry about installing anything or configuring anything. It just takes care of everything for you. Of course, I had already downloaded the image, I had it in my cache, so it was a little bit faster. The first time you run it, it might take a few seconds. But apart from that, it's very, very easy and very quick to start. If you want to verify that it's installed, I've got the Mongo shell here. I can connect to mongodb://user:pass@127.0.0.1:27017, and that should open up a shell, and I should be able to look at the databases. Of course, there's nothing in here right now, but in theory, I should be able to interact with our database right now. All right. So the next thing that I will do here is go back to my code and add a new file called .env. That file will be used to store our environment variables. So I can store my port number, and I will also store the connection string that I have for my MongoDB instance, the one that I've just used: mongodb://user:pass@127.0.0.1:27017. All right. So all defaults. And now that those two are in my .env, once this command runs, the dotenv library's config method will take those environment variables and inject them into process.env. So now process.env actually has access to those two variables. That's very convenient, because now you can add your .env file to a .gitignore, and you don't have to worry about accidentally sharing your connection strings or your passwords and all that stuff. I don't know how many times I've actually pushed a connection string to GitHub. It's one of the errors that I make all the time. All right.
So now that we've got those, let's actually add some routes. We'll want routes to add and retrieve content from that database to create our full Node.js server. So I'll just come back here. I will eventually need my cors library so that I can handle cross-origin resource sharing. Don't worry too much about it. We'll just do app.use and plug in that middleware. That's kind of boilerplate stuff to make sure that you're able to send requests from another domain. You'll need to add that. Next up, we'll add MongoDB: require('mongodb'), which pulls in the MongoDB native driver. I had a question in the chat about why use the MongoDB driver versus Mongoose. Mongoose is another library that works with MongoDB. I prefer to use the driver just because it's native; it uses all the syntax that I'm used to working with. At this point, I don't need to enforce a schema in my application. Mongoose is more of an ODM, like an ORM, but for MongoDB. However, in this case, that is not exactly what I need. What I need is really access to the data. I want to keep it flexible, and if I needed to enforce schemas, I could do that on the database side as well. So that would be my approach in this specific case. But if you're already familiar with Mongoose, feel free to use that one as well. I'm just more familiar with the syntax of the driver, and I find it a little bit more intuitive to use. All right. So our port is defined; it will now be fetched from the .env file. We can also add our connection string: process.env.CONNECTIONSTRING. So now we have access to that. We will also need another middleware: app.use(express.json()). This will enable us to read request bodies that come in JSON format in our POST, so that we'll be able to add new entries to our database. We'll also define a general variable here, dbConnected, set to false, since we're not connected to the database yet.
We'll just keep that as a status that we can add to our health endpoint eventually. So now I'll go ahead and write the code to actually connect to my database. That'll be getMongoDB, and I'm defining it as its own function so that I can use async/await syntax here. So there it is. Now I can declare my MongoClient constant, which would be require('mongodb').MongoClient. And actually, I'm realizing that I already required mongodb, so I can just use mongodb.MongoClient here. Yep, that's the one. Now I will define the client that I'll be using for my application. So it's await, to make that call synchronous-looking: MongoClient.connect, with our connection string. We'll pass a few options: useNewUrlParser: true. Again, those are pretty much boilerplate, standard options that you would use. And useUnifiedTopology: true as well. So don't worry too much about those options for now. There we go. Okay, so now that I have that, I will get a database: await client.db, and our database, we'll call it mernk8s. There it is. And we will, not const, but we will change the value of dbConnected. If everything went well, this one will be set to true. Now, if you look at the GitHub repo I'll share later on, you'll see that I have a try/catch around that, and only if it is successful do I change the content of the variable. But we'll just keep it very, very simple and only focus on the happy path for now. And now I'll return my database. This client will create a connection pool: if there's a lot of traffic on your website, it will start spinning up new connections, and otherwise it will reuse a single connection to the database. It really tries to optimize. There are a lot of things that the MongoClient does for you. And back to the question about Mongoose: it actually uses the Mongo driver behind the scenes as well. So, all right, now it's time to actually call this function.
So I'll create a global variable called db. We'll call getMongoDB, which returns a db object, and I'll just assign it to db. So I'm just taking that database and assigning it to that global variable here. Okay, so now we can change our health route to also return the database status. Let's just make sure we send dbConnected as well, so we report whether the database is connected or not. That'll be helpful for debugging, for example. And then we'll get started with some actual routes for our application. The first one will once again be a GET route. It will have request and response objects. All routes in Express use that same signature, so it's pretty much always similar to that one. What we'll do in here is get all the entries. Oh, and I'll make sure that this one is actually an async route as well. So I'll await db.collection('entries'), remember, all of my entries should be in the entries collection, and I will find with an empty filter, so it just matches all the documents, and convert that to an array. That gives me all of my entries. So I can just use res.send(entries).status(200), and we're sending back all the entries. So really, you can see that the syntax here is relatively simple. At least I find it very intuitive when you're using the MongoDB driver. Just specify the collection, find all of your documents, convert that into an array, and send it back to the client that made the request. Of course, if we want to test this out, well, we should be able to at the moment. So I can actually go here and say /entries, but that will return me an empty array. Notice that it's actually nice: even if we don't have the database and collection yet, MongoDB will create them for us upon first use, but it will return an empty array while they don't exist. So that's what we got.
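The query shape used by the GET /entries route is worth seeing on its own: find({}) with an empty filter matches every document, and toArray() materializes the results. To keep the snippet runnable without a database, `db.collection` below is a tiny in-memory stand-in of mine, synchronous for simplicity; the real driver returns promises, which is why the talk awaits these calls.

```javascript
// In-memory stand-in for the parts of the MongoDB driver used in the talk
// (my stub, not the real driver): find(filter).toArray() and insertOne(doc).
function makeDb(seedEntries) {
  const store = { entries: [...seedEntries] };
  return {
    collection: (name) => ({
      find: (filter = {}) => ({
        toArray: () =>
          store[name].filter((doc) =>
            Object.entries(filter).every(([k, v]) => doc[k] === v)
          ),
      }),
      insertOne: (doc) => {
        store[name].push(doc);
        return { insertedId: store[name].length };
      },
    }),
  };
}

const db = makeDb([{ name: "anonymous", message: "Cool site, dude" }]);

// GET /entries: an empty filter returns every guestbook entry.
const entries = db.collection("entries").find({}).toArray();
console.log(entries.length); // 1

// POST /entry uses insertOne with the (sanitized) request body.
db.collection("entries").insertOne({ name: "Joel", message: "Thanks for signing!" });
console.log(db.collection("entries").find({}).toArray().length); // 2
```

Against the real driver the calls look identical except that each one is awaited inside an async route handler.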
But now we also want a route so we're able to add new entries to our database. To do that, we'll use a POST request: POST to /entry. It will also be an async function with request and response. And there it is. What we'll do here is create an entry and, once again, happy path only, I'll just use the body of the request that was sent directly. So just take the body, assume that it's already in JSON format, and insert it directly into our database: await db.collection('entries').insertOne, since we only want to insert one entry in our database. request.body. Not request.body, let's use that entry variable, of course. There you go. And then we'll send back that response. Somebody is unmuted right now. I'm fully open to you unmuting yourself if you have any questions, but in the meantime, while you're typing, please keep it muted. What we'll send back is the actual result from MongoDB, and we'll return a status of 201, which means Created. So that should help us there. Right, we've got a question: is it safe to use MongoClient, to avoid SQL injection attacks on inserts? It is safe to use MongoClient. Of course, there is no SQL injection in this case because, well, we're not using SQL. There's that. There are also little things that you might want to be a little bit aware of: you'll want to sanitize this and not use the request body directly. That's just good practice. Make sure that you only insert the fields that you actually need, so that nobody can try to put in some extra stuff there. But apart from that, it should be perfectly fine. All right. So I think we've got our full server running right now. If we go back here, we should still see that the server is started. It crashed at some point, probably while I was writing. So let's try to add a new entry to our database.
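That sanitization advice can be sketched as a small pure helper. The name `sanitizeEntry` and the length caps are mine, not from the talk: the idea is to copy only the fields the guestbook actually uses, so extra fields in the request body never reach the database.

```javascript
// Whitelist-copy the two fields the guestbook uses; everything else in the
// request body is silently dropped. Defaults and length caps are arbitrary
// illustrative choices.
function sanitizeEntry(body) {
  return {
    name: String(body.name ?? "anonymous").slice(0, 100),
    message: String(body.message ?? "").slice(0, 1000),
  };
}

// An attacker-ish payload: the unexpected isAdmin field is simply gone.
const entry = sanitizeEntry({ name: "Eve", message: "hi", isAdmin: true });
console.log(entry); // { name: 'Eve', message: 'hi' }
```

In the POST /entry handler you would then call `insertOne(sanitizeEntry(req.body))` instead of passing `req.body` straight through.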
Because we don't have a UI right now, and I can't use my browser because my browser only does GET requests, I'll actually use curl here. So: curl, and we'll send a payload. It has to be JSON, so I need those quotes around it. That will be name, anonymous, and message, "Cool site, dude". Yeah, back in the '90s. Okay. Then we need to add a header: Content-Type: application/json. I think that's it for my headers. Then I'll make sure this is a POST request, and finally, I will add my route: http://localhost:5000/entry. All right, looks like it worked. I've got an insertedId back, so it sent me the result. And if I open up my browser again, well, you can see that my entry was saved in the database. So that's it. I've got a fully working backend server. That's pretty much the most basic Node.js server you could ever have. Again, be careful: make sure to sanitize your inputs, and make sure to use try/catch where things could potentially go wrong. But apart from that, if you're looking for the most simple web server you can have, that's pretty much it. So now that you've got your server, you could run it with Postman and try to explore and play around with it. So that's it, we've got a full Node.js backend. Let's try to go ahead now and create the frontend to be able to access this database. So let me just change to my folder here. I had my backend. So I'll just use npx here to create a React app; that will run Create React App. I'll give it a folder: front. So this is actually running create-react-app, which is an npm package that can be used to generate your React application. It'll really generate all the boilerplate that you need. And as I said, a lot of modern projects use Create React App nowadays to bootstrap their project. It just makes everything easier.
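Spelled out, the request from the talk is the following command. It assumes the Express server from earlier is listening on port 5000:

```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "anonymous", "message": "Cool site, dude"}' \
  http://localhost:5000/entry
```

A successful insert answers with a 201 status and a body containing the insertedId, which is the driver's result object being sent back verbatim.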
You've got everything ready. You've got your webpack configured. If you had to do that by hand, it's a lot of trouble, a lot of boilerplate stuff that you need to do. So using that really, really helps you. We'll use it here, but then we'll just remove all of the generated code, and we'll start writing our own code for everything that has to do with the actual application. All right. So now we've got our frontend, so let's actually change into that folder. One thing: as part of the boilerplate, it actually comes with a .git folder already. So we'll go ahead and just remove it, so we don't have a git inside of our git. There it goes. That removes the git repository altogether. And then we're almost ready to, well, we're ready. We can just use npm start, and that will actually start our project. It will start those file watchers, it will also start an auto compiler, and it will open up a browser window that we can actually use to see the progress of our application as we're working on it. Let's just close that. Getting started. We'll go back to front. Now, there is one little change that you'll need to make eventually for that NGINX server to work, and I'll just add it right now because I always forget about it, and it always comes back to haunt me at a later point. What you need to do is add "homepage": "." to the package.json. That way, it'll know where it can find all the resources: relative to the parent folder. Especially if you go down with multiple routes, it'll make your life easier. But yeah, don't worry too much about that again. So we've got an application. It's now up and running. If we go to src/App.js, we should see that page. So, src/App.js. We can see that we've got that logo and that text that was right there. So we can see it on the page right now. So let's actually go ahead and just remove everything. We'll create our own page. We want a guestbook, so not exactly the same. And you can see that it reloads automatically.
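For reference, that one-line change sits in front/package.json alongside the keys Create React App already generated; only the homepage key is added:

```json
{
  "homepage": "."
}
```

With a relative homepage, the production build references its JS and CSS assets with relative paths, which is what lets NGINX serve the app from any location later on.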
So there's nothing left. I'll just remove that logo as well, and I'll start writing some code. So if you're not familiar with React, basically it uses the JSX syntax, which basically lets you write something like HTML as part of your JavaScript. It gets transpiled into actual JavaScript code later on, but it makes for a visual way of working. The first time I saw JSX, I was not happy with it. I was like, no, no, that's not going to work for me. But once you get familiar with it, it's actually a very useful, nice way to write code. So we'll have our application. We'll have an h1 and an h2: "Welcome to my guestbook". And as I said, we'll have a single form at the top of the page where we'll be able to add entries, which will be listed a little bit lower, right under it. So I'll create a first div. I'll have one label; the label is for the field with the ID name. Note that in React, you can't use for as an attribute because that's a JavaScript keyword, right? So that's why they use htmlFor in the JSX here. So just a quick side note. But let's add a label for the name field, and we'll have an input, type text. There it is. And it will have the ID name. There we go. And then we'll need another one for the message. And actually, let's not use an input text; we'll use a textarea here, ID equals message. And oops, we need to change this one as well. There we go. I think we've got it. Of course, if you see me make a typo or write a bug as I'm typing, feel free to let me know. Live coding can be hard sometimes. And then we'll need a button to be able to submit that form. So once again, we'll go with just the classic way: type equals button, and it will have an ID, submit-button. And that's it. We'll just call it Submit, because that's what we had in the '90s. And then there's this thing where everybody would always add a clear button. I don't know why. I don't know why we did that.
But we want a 90s experience, so we'll add a clear button just to clear the form. I think that's it. So now we should have a page with a form — nothing fancy, just a page with two fields and two buttons that do absolutely nothing for now because, well, we didn't add any code. One other thing we'll want to do is to start listing all the guestbook entries afterwards, so we'll use a reusable component here. We'll create a components folder and, in there, add a new file, GuestbookEntry.js, which we'll use for our guestbook entries. Somebody asked me what's the reason for using type "button" instead of type "submit". It's just that with submit, when you hit enter, it automatically submits the form — submit will really submit the form and reload the page when you use it the right way — and I want to handle it with an onClick instead. There's no real reason beyond that; I just prefer type "button" to strip out that default submit behavior. Other questions? So it won't refresh the page there. Oh, there we go — Peter, that's the case: with onSubmit you'd need to handle the event and call preventDefault so it doesn't reload. So yes, one way would be to use onSubmit and then stop the default behavior so it doesn't reload; it's just more code, and using type "button" makes it easier. Not fully sure that it's good for accessibility and screen readers, though — something to keep in mind. All right, so we're ready to create that component. We'll do an export default function, and that will be GuestbookEntry, and it will take some properties. What this will do, basically, is that in our JSX we'll be able to write something like GuestbookEntry propOne="hello", and so on, and we can add more.
So really, we'll be able to use that component as an HTML tag, basically. That's where JSX comes in really, really useful: you can basically create your own tags with their own properties and reuse them in code. All right, so we'll start by creating this. We will return — and we will, oops, oops, breaking everything — return a JSX block. We'll use a div with a className; I'll need a few classes just to make sure I can eventually style this a little bit. Note that, once again, `class` is a reserved keyword in JavaScript, so you can't use it — it has to be `className` here. If you copy and paste code blocks from Bootstrap or somewhere, be careful, because that is an error that always comes up. We'll add a horizontal rule and give it the class "divider". You also always need to close your tags for JSX to work — that's why I use those self-closing tags here. Then we'll have another div with className "gb-name" (gb for guestbook), and it will have a span with a className — I want to make sure I have the right one, but I think it's "label" — that says "Name". And then it will take the name property and just render it right here on the screen. Then we'll have another one, which will be very similar, so we can just copy it, but this one will be "gb-message": a "Message" label and props.message. All right, so now we'll be able to use a tag, pass it a name and a message, and it will display them. That's basically how you create a reusable component in React. All right, so let's go back to our application and add those entries in here. To use them, we'll need to add the entries to the state of our application, and to do that I'll need an import: import { useState } from "react". useState is a React hook — a very useful way to work with React applications.
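Just to illustrate the contract we're relying on here, this is a toy sketch of what a state value plus its setter look like — this is NOT React's real implementation (the real useState also schedules a re-render when the setter is called), just the shape of the API:

```javascript
// Toy illustration of the useState contract (not React's implementation):
// a current value plus a setter that replaces it.
function makeState(initialValue) {
  let value = initialValue;
  const get = () => value;           // read the current state
  const set = (next) => { value = next; }; // replace it (React would re-render here)
  return [get, set];
}

// Mirrors `const [entries, setEntries] = useState([])` from the app:
const [getEntries, setEntries] = makeState([]);
setEntries([{ name: "Joel", message: "Hello" }]);
console.log(getEntries().length); // 1
```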
So if you haven't made the switch to hooks yet, you should definitely look that up. What I'll do here is define my two variables, entries and setEntries. entries will be the state of my application and setEntries will be used to update that state. So I'll call useState and give it an empty array for now; eventually, though, it will need to use the data from the mongodb server. useState — thank you, somebody spotted that. Thanks. All right. Now that we've got that, we'll also import our new component — import GuestbookEntry from "./components/GuestbookEntry" — and we will loop through all of the entries with entries.map. I'll just use e, and we'll return a GuestbookEntry. Name equals — actually, we can just spread the whole object. There you go, we'll keep it very, very simple. Expecting — oh yes, I need to return this. There you go. This will be exactly the same as if I had written name={e.name} and so on for each one of the properties inside our object. So now, if I look at it, well, there's nothing, because there's nothing inside entries. We could add a fake one: name "Joel", message "Hello". And if we look at it, we should have one entry in here. All right, so we've got our application and it's actually fully working — we've got our full guestbook, it's just not connected to a backend. Actually, there's one thing missing right now which I think could really benefit it. I'll need one second here, sorry — I'll just need to open up my original project. Oh, it's actually open right here. So let's remove this one — I have it right here, but it's hidden by the chat window. There it is. All right, let's do one little thing: we'll go back in here and add some styling.
I won't go into the details of the styling, but because we wanted a GeoCities guestbook, I tried to give it the same look and feel we had back in the day. So: welcome to my guestbook. I have my form working, and I can submit and clear — I just need to add some code to make everything actually work. But it looks good, doesn't it? All right. I won't go through the CSS of that; it really doesn't bring any value to this workshop, I'm sure. So let's go back to our App.js file and actually connect to our server now. We'll need another hook here, useEffect, which lets us define a callback that will be executed once the component is loaded — and whenever we want it to run, actually. In my app, I'll add a small constant for my base URL; that'll make it easier for us later. You never want to hard-code the base path of your URLs, because that changes from environment to environment, so eventually we'll want to put it inside an environment variable. We'll get back to that later on, but for now let's just isolate it so that we can swap in that environment variable. We'll also remove that fake data — we'll actually connect and start using some real data in just a few minutes. Then we need to define a function to fetch the entries from our server. That will be an async function. Yep, that looks good. let entriesFromDb = — we'll do an await, using the fetch API with the base URL. And remember what the URL was? It was /entries, which returns all the existing entries from our database. Then we take our response and convert it into a JSON object. Again, make sure that you do add a catch block and try to catch those errors before they happen — but we're just looking at happy paths for now. All right. Now that I've got my entries from the database, I can call setEntries with entriesFromDb. So I'll just directly put those in.
That will set them into my state. It will change the value of entries here, and this part will automatically re-render once it fetches those entries. So far, so good. We now have a function to fetch those, and now we'll need to use our useEffect hook. This one will be triggered once the component is mounted, and it will just call fetchEntries. And you need to specify when to run this useEffect hook. You could — come on, I'll get it; there it is — you could specify different parts of your state, as in: whenever the entries change, run this hook again. But in this case, we'll just pass an empty array, which says run it only once: once the component is loaded, it will fetch the entries. All right, so let's test it out. We can see that we now have the entry from Anonymous, the one we created with the cURL request earlier on, so we can see that it is actually connected to the database. Pretty cool, isn't it? All right, let me just — there are a few questions coming in. "I guess we need to have the key value on the .map" — yes, we do. "We need marquee" — yeah, definitely. "Are we going to talk about caching on the response of endpoints?" No, we're not going to talk about caching on the endpoints, not at this point, just because there's a lot of content to cover. But you're right, you could add caching as well on the different endpoints that we have in here. All right. So we'll need different state variables for our form as well, so let's just add a couple of other state variables. We'll use the same type of syntax, right? useState, and this one will be an empty string for now. We can do another one for the message as well: setMessage, useState with an empty string to get started. Those will be attached to the name and message fields here. We will also write some code for our submit button.
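Before we get to the submit handler, the fetch-and-store flow we just wired up can be sketched as plain JavaScript. Note the hedge: in the real app, fetch is the browser global and setEntries is the useState setter; here they're injected as parameters so the sketch can run anywhere, and the fake fetch stands in for the workshop's server:

```javascript
const baseUrl = "http://localhost:5000"; // assumed backend address from the workshop

// Sketch of fetchEntries: GET /entries, parse JSON, push into state.
async function fetchEntries(fetchImpl, setEntries) {
  const response = await fetchImpl(`${baseUrl}/entries`);
  const entriesFromDb = await response.json();
  setEntries(entriesFromDb); // in React, this triggers the re-render
  return entriesFromDb;
}

// Fake fetch standing in for the real Express server:
const fakeFetch = async () => ({
  json: async () => [{ name: "Anonymous", message: "Hi" }],
});

fetchEntries(fakeFetch, (entries) => console.log(entries.length)); // logs 1
```

A catch block around the await would belong here too, as mentioned above — this sketch stays on the happy path.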
So: const handleSubmit, and it'll be an async function — might as well put it right here. This one will actually do the fetch to the POST route that we created earlier. So, fetch from the base URL plus — what was it? — /entry. There it is. And because it's a POST, we need to add a couple more options here. Boy, I can't type anymore; it's going to be nice in an hour or two. This will use the POST method, and we will also specify the headers — basically we're doing exactly the same thing as we did with the cURL request earlier: Content-Type, application/json. There we go. And finally, we need to pass in the body of the request, which will be a JSON.stringify version of an object with the name and the message. Right, there it is. Then we'll parse the response into a JSON object and we'll have access to it afterwards. And yeah, we won't do anything with the result for now — maybe just a console.log so we can see it in our console. You should probably verify whether the operation was successful and so on, but let's just not do it for now. We'll then clear the form and we will fetch the entries again. Again, you should probably just append the new entry to the list instead of fetching everything all over again, especially if you've got a large data set — but you know, keeping to the happy path. So that will go and reuse the fetchEntries function right here. Let's just go ahead and create that clearForm function, which will just set name and message back to empty strings. That should take care of it. Finally, we need to connect our form. So — where is it? — on the input field we'll say value={name}, and on the other one value={message}. A space here, remove that one here. There we go. All right, so now it's connected. Here's an interesting thing: now you're not able to type anymore, because the fields will always reset to the current value of name and message.
So we've got to make sure that when there's a change here, it actually updates those name and message variables. In order to do that, we'll say onChange: for the event, setName — not setMessage — with e.target.value. It'll just take the new value of that field and update our name here. We'll do the same thing for our message: onChange, setMessage, e.target.message. All right, if we try this out now, I can actually type. And now, in theory — let's put in a real message, so "Joel", "Hello" — if I click on Submit, I should be able to send a message to my server. Now, it seems like it didn't like it, so let's check out what happened here. Submit. Oh, I know what happened there — of course, we need to actually say onClick={handleSubmit}, and on this one, onClick={clearForm}. All right, so this time it should work. Let's try this: Submit. And there we go — I can now add data to my application, to my guestbook. Something happened, though; I'm not sure why the message wasn't picked up. So let's just take a quick look. Why — message, message. Let's take a look at the /entry request: what was sent? And it only sent the name. That's odd. Message... setMessage... I'll try to see if I can debug it in a second unless somebody can spot the error here. I don't see it off the top of my head. Let's just try it again — "Hello" — and it still doesn't seem to be sending the message. (In hindsight, the likely culprit is the typo a moment ago: the message's onChange used e.target.message instead of e.target.value, so the message was always undefined.) All right, so that's fine. Let's not worry about that. Yeah, no, it doesn't save the messages. What I'll do is that I'll actually cheat and just take my code from the other application that I have, my backup one. Let's copy and paste. Oh, you know what, it might be in GuestbookEntry — no, no, no, I said it wasn't even sent. All right, let's just overwrite all of this. And now, remove this: const baseUrl = "http://localhost:5000". Okay, there you go. So now the application should be working. Let's remove that config. All right.
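For reference, the POST request that handleSubmit assembles — the one we built before the debugging detour — looks roughly like this. Building the options in a small helper makes them easy to inspect; the route (/entry) and field names match the workshop's backend, and the helper name itself is just for illustration:

```javascript
// Sketch of the options handleSubmit passes to fetch(baseUrl + "/entry", ...).
// This mirrors the cURL request from earlier: POST, JSON content type, JSON body.
function buildEntryRequest(name, message) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, message }),
  };
}

const req = buildEntryRequest("Joel", "Hello");
console.log(JSON.parse(req.body).message); // "Hello"
```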
It really doesn't want to let me do it, right? There it is — that should work. Okay, so now it's working. We've got our application; I can submit new messages and they're accepted. So that's it — that brings me right to the one-hour mark with a full MERN stack application that we've built. We can see that we now have a full backend: a node.js backend that uses an Express server, with two routes defined, /entry and /entries. Actually, it has three: it also has one to tell us the status of the database connection, which will be useful later on. We've also created our full frontend — let's just go back to the frontend. We've used React and the JSX syntax to create that form, and then we connect to localhost:5000 — the backend that we've created — and it posts new entries or fetches entries, based on whatever we want to do with the form. So we've got our full MERN stack application. As I said, I've kept it as simple as I could — really trying to make the simplest application possible — and it seems like we've got it. It seems like it's working, and we can see that when we add an entry, we get the response sent back to us. So yeah, congratulations: you've built your first MERN application. All right, so that was the first part. Let's move back to this presentation again — I'll just take that as my little break — and go on to the second part of this workshop: containerizing all of those things. If you're not familiar with containerization technology, this is the definition from the Docker website: a container is a standard unit of software that packages up code and all of its dependencies, so the application runs quickly and reliably from one computing environment to another.
So really, containers are all about packaging up your application into something like one giant zip file that contains not only the source code of your application but also everything needed to run it. That means, in addition to just your source code, you'd also have the node.js runtime for your backend — basically a whole operating system with the runtimes and your source code, everything needed to run that application, all packaged together. Then you can take that package and run it using Docker, or put it into something like kubernetes to make sure it runs in the cloud. Now, the advantage of using containers is that they run exactly the same everywhere: because a container includes everything it needs to run, it always, always runs the same. They're also meant to be ephemeral: if you shut down a container, everything inside is destroyed, and you can just start a new one in its own fresh environment. It's a different way of thinking about your work — you have to make sure that all of your applications are stateless, so that you can easily tear those containers down and replace them with new ones as needed. So in our case, we have this MERN stack application with three tiers, so we'll have three containers, one per tier. First, the database — that one is already running in a container. As I showed you right at the beginning, I started my mongodb container and it's running on my computer right now: it spins up a full mongodb server and it's ready to go. You saw how easy that was, and I don't even have mongodb installed on my machine — it saves a lot of effort and maintenance; I don't need to keep up with the latest versions, I just run the container. We will also create a container for our backend.
That container will have the Express server and we'll also create another container for the front end. Now, each one of those containers will contain a lot of different things. So for the front end, for example, we'll have a Linux operating system, we'll have an Nginx web server and then we'll have our three files, one html, one css and one javascript. So we'll make sure that we build our package, store only those files onto an Nginx server and it will be blazing fast. It will really serve those files in the most efficient possible way. For the backend, we'll do the same type of thing. We'll build our own container, which will have, again, a Linux operating system. It will have the Node.js runtimes and it will also have our index.js files and the Node modules folder prepared for that specific environment. So that's very important because if you do an npm install on a MacBook, for example, it might actually install dependencies and some runtimes that will be different between a Linux operating system and the macOS operating system. So if you just copy blindly your Node modules folder, you might run into issues. So you want to make sure you run that npm install inside that container. And for mongodb, we've got the same type of thing. We've got the whole mongodb running. It has its own volume right now. So it writes into files into the container. But once I stop that container, all those files are gone. My database is gone forever. So we'll see later on how to persist that data. Right. So what we'll do now is that we'll create a backend container, frontend container, and well, mongodb is already inside of a container. However, data is not persisted for now, but we'll use the cloud version. So I'll start spinning up a cloud instance for mongodb. All right. So I see that we've got questions. So just before I run into that coding part, multiple instances. In an nginx proxy, we won't be using an nginx proxy. We'll kind of use it, but not directly. And multiple instances. 
So if you want to run multiple containers, that's when we'll start using kubernetes — that's really the power of kubernetes. So yes, we will deploy multiple instances once we get there. All right, so let's get coding again. In order to create container images, you will need to — "can I show the previous slide?" Can I find the slides? This one? Or was it this one? I'll show the slides afterwards. Actually, there's nothing wrong with sharing the slides right away. Oops — I've sent them to only one of you. Everyone. There you go. So yeah, feel free to browse the slides as I'm going through them; that'll be easier than me going back and forth between the slides. My pleasure. Okay, so the first thing that we'll want to do now is to get started with our first container. To create a container, we will use what is called a Dockerfile. And literally, a Dockerfile is a file called Dockerfile — so far it's easy. It's just a set of instructions that tells Docker how to create the container that we want to use. They always use roughly the same structure: you start from a base image — you never create an image from scratch. In this case, I'll start from a Node 16 image. This is an existing image, maintained by the community, that has node.js installed with all the runtimes, and I want it to use version 16. You could omit the version, and it would always take the latest, but if you want to enforce something and make sure it runs exactly the same everywhere, be as precise as possible — if you want to run 16.4.1, go into as much detail as you want. I'll just use Node 16 for now, so it'll pull the latest node.js image from version 16, and I'll just hope that doesn't break anything. Then I'll change the working directory — that's the working directory inside of my container.
So Docker will create that container from the Node 16 image, cd into /opt/app, and then it'll be able to perform operations from there. What I want to do next is copy my package.json file — the one from my machine — into that /opt/app folder of my container. And from there, I want it to run the npm install command. Remember when I said you want to make sure that you run npm install inside the container, so that it actually downloads, runs, and compiles everything for that specific environment? That's what we're doing here: we take that package.json, then we run npm install. When you're building your images, each instruction creates a layer, and those layers are then uploaded afterwards. If you want to be optimal, you want the things that change the least often at the top: the layers are cached, and each is based on the previous one, so if a layer doesn't change, the next one can reuse the cached layer. The npm install step will reuse its cached layer too, which makes it a lot faster to build your images and to push them to your registry afterwards. So it's just good practice to copy only your package.json, run npm install, and then start copying your source code into the container. I only have one single file, so I'll just copy that file, and then use the CMD instruction to tell it what command to run once the container is started — node index.js, which starts my server. Right, so that's it. node.js server images are typically very simple; that is a typical Dockerfile for a node.js server. So now that you have a Dockerfile, you can come back to your console — find the right one; which one is it? This one, this one. I'll need a third one. And into the devops folder, there you go.
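Put together, the Dockerfile just described looks roughly like this. This is a sketch reconstructed from the narration — file names and the exact base tag are assumptions, not a verbatim copy of the workshop's file:

```dockerfile
# Sketch of the backend Dockerfile described above
FROM node:16
WORKDIR /opt/app
# Copy only the manifest first, so the npm install layer stays cached
# as long as the dependencies don't change
COPY package.json .
RUN npm install
# Source code changes most often, so it comes last
COPY index.js .
CMD ["node", "index.js"]
```

A common companion (not shown in the workshop, but it reinforces the "run npm install inside the container" point) is a .dockerignore that keeps the host's node_modules out of the build context:

```
node_modules
.git
```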
I'll go into my back folder — I should have a Dockerfile in there — and I'll use the docker build command. I'll give it a name, so I'll call it joellord/devops-back, and I'll use a dot to tell it the Dockerfile is right here. So docker build will go ahead and run all the instructions that we set: it starts by downloading that Node 16 image (that one was already downloaded), changes the working directory, copies the file, runs that npm install inside of our container, copies that index.js file — and that's it, we've got our image. The other thing that you can do here is push the image to a registry. Of course, you'll need to make sure that you're logged in first, but I should be logged in to my Docker Hub account already, and because I am, I can do a docker push with the name of my image — what was it? devops... I forgot what it was called. devops-back, okay. That will take care of sending that image, and you can see that it has all of those layers. You can also see that I did a dry run yesterday, so it reuses the same layers right here: all those are exactly the same, so when we pushed, Docker just reused the cached layers and only pushed the final layers that changed. So you can see that caching mechanism at work — that was an accident, but that's good. Okay. For my database now, I'll move away from that local container, because my container doesn't hold state — there are a lot of issues. You really don't want to create a container that actually holds the data, because if the container crashes and comes back, it will destroy all the new data. So, to make my life easier, I'll just use a cloud instance. You can go to cloud.mongodb.com, and that will take you to the mongodb cloud platform.
What I'll do is create a brand-new project — you can see I already had my trial run here — so I'll just call it DevOps2. And that's good. I could add some team members if I needed to; I don't want to. From here, I'll just go ahead and create my first database. Signing up for an account is free, by the way, so don't worry — no credit card needed or anything — and you can actually deploy free databases. So just go ahead, create a shared cluster, specify the region where you want it to be hosted — I'll just keep all the defaults — and click on create. It is secure by default, so you'll need to enter a username and password. Did I accidentally close that window? I don't know what I did there, so I'll try to find it again — where was DevOps2? There it is. All right, so I was at Database Access. The first thing you need to do is add a new user. We'll say user/pass, and we'll just keep all the defaults again. So I now have a user created. The other thing is that it doesn't accept requests from just anywhere in the world, so you want to make sure that you add at least one allowed IP address — in this case, I'll just use mine. So now you have my username and password, but you still can't log into this cluster, because the request would have to come from my current IP address. All right, so back to our deployment: we can see that it's creating that cluster. It's really very similar to what we did with our container, but it creates three instances, so you'll have that whole replica set; all the data will be persisted, and it takes care of a lot of things for you. And we'll actually want to connect to this one later on. In order to be able to connect to it, though, what I'll need to do is get a connection string. So you can go to Connect — any of the options, it doesn't really matter — and it will provide you this "connect to your application" option.
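The exact string is specific to each cluster, but Atlas connection strings generally follow this shape (every value below is a placeholder, not a real cluster):

```
mongodb+srv://<username>:<password>@<cluster-name>.<id>.mongodb.net/?retryWrites=true&w=majority
```

The `+srv` scheme means the driver discovers the replica-set members via a DNS lookup, which is why one short string is enough to reach all three instances of the cluster.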
And you can actually get the code sample to connect your application. You can see here that this is my connection string — and the sample is pretty much the code that we've already written in our node.js server: all the code needed, with the imports and everything. If you were using a different programming language, you could also grab it from here — just get the connection string for the programming language that you're actually using. All right. But let me just go here, remove this, and actually copy this connection string; this is what I'll be using right now. Because if I try to run that server and try to connect to my local machine — actually, let's just give that a try. So let's run the container that we've just built, the backend container. Run it in detached mode so it runs in the background, with --rm to make sure it gets cleaned up afterwards. I will give it a name: mern-k8s-back. We'll map some ports so that incoming requests to my laptop are mapped all the way to the container. We will pass in the environment variables — remember those environment variables? Now the container won't be using .env; it will actually use the environment variables that we're passing here. We'll use the connection string, and for now the same one as before: mongodb://user:pass@127.0.0.1:27017 — that was my original local connection string. And then I specify the image. So the image was devops-back. All right, so that will start my server. "Ports are not available" — that makes sense. So I'll need to stop my backend. That was my frontend — doesn't matter. And that'll take a second before it actually lets me do something again. It really didn't want to run that. Let's see what we have. Okay, so let it run there in the background — not sure what it's doing — but let's try to run that container again. All right, so the name is already in use. Well, of course. Docker stop.
Docker rm, and let's start this all over again. All right, so I started this new container. docker ps — I can see that my container is running. And if I do docker logs, I see that it started. But if I try curl localhost:5000/health, I see that the DB status is false: it's not connecting to the database anymore. And that's because that container is an entirely isolated process — it doesn't know what exists outside of it. So it's looking for a mongodb instance on 127.0.0.1, which, from inside the container, means its own process — and there is no mongodb database running there. The mongodb database is actually running on the host, beneath it, but the container doesn't have access to the host machine. That's a security feature, and it's very important: you don't want your containers to have access to the host; that would be very dangerous. But it's also a little bit tricky — how do you get those containers to work and to talk to each other? We'd need to do that internally inside a kubernetes network, but we're not there yet. So what we'll do here instead is stop that container, mern-k8s-back — oops, I think it eventually crashed because it couldn't connect to the database, and then it just disappeared afterwards. What we'll do is change that connection string. I'll go here again, copy the Atlas one over, and just use that to connect. Oh, it didn't add a password. Docker stop — I already had it there. There you go. docker ps. And I need to remember what my password is — it's probably that. And there it is. So now it seems to be running. If I try docker logs again — there it is, the server is connected. And if I curl my /health route, it is connected. So that works. You can see now that, because I have that environment variable, I'm easily able to connect to different databases: since the one on my local machine didn't work anymore, I connected to my cloud instance using that connection string.
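This is the pattern that makes the swap possible: the backend reads its connection string from the environment, which is exactly what `docker run -e` sets. A minimal sketch, assuming the variable is named CONNECTION_STRING (the name and the fallback value here are illustrative, not necessarily the workshop's exact ones):

```javascript
// Prefer the real environment (what `docker run -e CONNECTION_STRING=...`
// provides); fall back to a local default for development.
function getConnectionString(env = process.env) {
  return env.CONNECTION_STRING || "mongodb://user:pass@127.0.0.1:27017";
}

// Simulating the container run against Atlas:
const fromDocker = getConnectionString({
  CONNECTION_STRING: "mongodb+srv://user:pass@cluster0.example.mongodb.net",
});

// Simulating a bare local run with nothing set:
const fallback = getConnectionString({});

console.log(fromDocker.startsWith("mongodb+srv://")); // true
console.log(fallback.includes("127.0.0.1"));          // true
```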
So everything worked out of the box. And now I'm connected to this instance right here. So I can actually go in here, browse my collection, take a look at my database, my entries, and you can see that I have one entry already. As I was working on it yesterday, I can run a localhost 5000 slash entries, and I should be able to see this entry right here. So you can see that I'm connected to that new database now. All right. So I've got my back end, and now the back end is running inside of a container. All right. So we're ready for that one. Let's move on to using a front end container. Now, front end containers are a little bit more tricky, but we'll get to it. So once again, we'll create a Docker file here. And actually, just before we jump into this, we'll need to do one slight change to our application here. So in order for our application to be able to use environment variables, .env will be overwritten. Yes, .env will only use the values from the file if there isn't a value in the environment. So I hope that answered that question. In order to use environment variable in the front end, it gets a little bit tricky because, think about it, when you run an application or you run it inside your own browser, so you download the application from the server, so you download the react application into your browser, and it is actually executed on the browser. So the browser doesn't have access to the environment variables from the server. So what we'll need to do is to remove this here and actually move that to a file, and we'll overwrite that file when the server starts. Right? And I'll get into the details of that as we progress here. But the one thing that I'll need to do first is to make sure that I have a file that I can overwrite, and I'll just use a config.json file. I'll move in my base URL variable here. Base URL. And let's just keep localhost 5000 for now so that it works on our local machine. And go back here. And now we won't be using this. 
Instead, what we'll do is import: we'll import config from config.json. There it is. And in here, instead of using the base URL constant, we'll use config.BASE_URL. Same one here. So our application is still running locally. Nothing changed as far as the application is concerned, but we've isolated those things in that file. What we'll want to do now is create a container, build that application, and make sure that we change this value to $BASE_URL instead. So we'll keep the same key name and use $BASE_URL as the value. Because we have that inside of our server, we'll be able to do an environment variable substitution, overwrite $BASE_URL with the actual value from the environment, and then serve that from our NGINX server. So there are a few hoops that we'll need to jump through. There is a blog post available that really describes this whole process; I've linked to it, and I'll share the link at the end so you'll have access to it. It covers Angular, React, as well as Vue. They're all slightly different, but it's kind of a template for building those containers. So now that we've got this one, we can start working on our Dockerfile. I'll need a new Dockerfile. Where is it? Where is it? Front Dockerfile. All right. Just like we did for the other one, we'll need to start from a base image, but this one will be a little bit tricky. Remember, I said I want to serve everything from an NGINX server, but I will need to actually build my React application first. So I'll need to use a two-stage container image. I'll start with a Node.js image. I'll eventually need jq, so I'll put in an environment variable for that container: use jq version 1.6. We will run a wget command to actually download jq. Now, there is no way I can type that full URL without making a mistake.
So I'll just copy and paste it from my notes here, if you don't mind. Please bear with me. So we do a wget to github.com/stedolan/jq/releases/download, for jq version 1.6, and we want jq-linux64, because our container is running Linux. Don't forget about that; I need the Linux version. Copy that over into /tmp/jq-linux64, and I will just run the following command to move it from /tmp/jq-linux64 into /usr/bin/jq. So I'm just moving that file over, and now I'll have access to it. I just need to give it the right permissions, on /usr/bin/jq, and then I'll be able to use jq inside that container. Once again, this is all running inside the container, so you don't get access to it on your machine, but you can use it inside of your container. Now that I have my base image with jq, I can change my working directory to /opt/app. I can copy all of my files. I'll copy everything; I'll be lazy. I'll even copy my node_modules folder, but we will rerun a build and reinstall everything, so don't worry about it. Now what I'll need to do is go ahead and find that config.json file and change any value from the base URL to $BASE_URL. And any field that is in there, any property, will get that same structure. So if you have ENVIRONMENT_TWO, it will change to $ENVIRONMENT_TWO. You could also use sed, but because I didn't want to hard-code anything, I've used jq to make it a little bit more flexible. Really, I'm looking for all the keys, and I'm changing each value to a dollar sign plus that key. Makes sense? So the command I'll run here is jq. I'll convert everything to entries, then pipe that to the map_values function in jq. I'll find the key, and I will give it the value of a dollar sign concatenated with that key. I think so far so good. I close this, and close this one, and then pipe that into a reduce over all of the array's values as $item.
And I will take the accumulator and concatenate it with that $item, and close this. I think that should work. Apply that on src/config.json, write the result to src/config.tmp.json, and then finally move that src/config.tmp.json file back to overwrite config.json. Easy peasy, right? It took me a while to actually get that command right, and I guarantee that I probably made a typo in there somewhere. But let's leave it like that for now. Basically, what it does is look through each one of those keys and change the value to be a dollar sign plus the name of the key. And because that changes the value to $BASE_URL, we'll be able to do an environment variable substitution afterwards to swap in the actual environment variable value. So yes, again, go to the article; it explains exactly what happens there. Finally, we'll do that npm install. Like I said, I copied over node_modules, but let's just overwrite everything. And then we'll run npm run build. That will create our React package: it will actually produce one JS file, one HTML file, and one CSS file. Then we'll be able to take those files and serve them from an NGINX server. So we'll only keep the end result. We don't want to run Node.js like we do on the development server; we want to have those three static files, serve only that, and we'll be blazing fast. So that will run and give me my three files. But now I need to actually create my NGINX server, so I'll add a second stage. Using another FROM tells Docker that there's a second stage here. We'll use a version number again, so nginx 1.17. I'll need to specify the folder for my JavaScript files. That part is kind of explained in the article, but it's just to make sure that we can reuse the same Dockerfile for all of our JavaScript front end projects. You could hard-code that as well.
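Since sed comes up as an alternative to jq, here is a small sed-based sketch of the same key-to-placeholder rewrite. It assumes a flat config.json whose values are all plain strings, which is narrower than what the jq version handles, and the function name is made up for the example.

```shell
# Rewrite every "KEY": "value" pair in a flat JSON file to "KEY": "$KEY",
# mimicking the jq step from the Dockerfile (string values only).
rewrite_config() {
  sed -E 's/"([A-Za-z_][A-Za-z0-9_]*)"[[:space:]]*:[[:space:]]*"[^"]*"/"\1": "$\1"/g' "$1"
}
```

Running it on {"BASE_URL": "http://localhost:5000"} yields {"BASE_URL": "$BASE_URL"}, ready for substitution at container start.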
That folder is /usr/share/nginx/html/static/js/*.js; that's where all of our JavaScript files will be, so this is where the script will look to change those environment variables. Okay. Now I'll need to use a specific startup script. I'll explain it afterwards, but I'll just add the line to copy it: start-nginx.sh. I'll create that file in a moment, but we want to copy it into our container, into /usr/bin/start-nginx.sh. And now we will change the permissions on that file, in /usr/bin. It's not "user"; I don't know if you've noticed, I keep saying slash user just because I'm so used to it. All right. Then we will change our working directory. The working directory will be /usr/share/nginx/html, which is the default folder NGINX serves files from. And finally, we will copy from stage zero, that first stage up here: we copy everything inside the /opt/app/build folder into our current working directory, into that shared folder right here. Lastly, we change the entry point of the image, which tells Docker that once the container is started, it should run start-nginx.sh. So instead of using the default executable that starts the NGINX server, it runs this start-nginx.sh script instead. All right. So far, so good. Now, just to take a little shortcut here, I'll actually copy that start-nginx file. We'll create a new file, start-nginx.sh, and I'll paste it in. Basically, what I do here is extract all the environment variables using printenv, and find the names of the variables: I extract the first part, up until the equals sign, and take just that first piece. I'm using sed here, for you, Gopala. So that gives me all the environment variable names.
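The substitution idea the script implements can be sketched as follows. The real script is in the linked article and loops over the files matched by $JSFOLDER; the function name here is illustrative, and it assumes simple single-line variable values.

```shell
#!/bin/sh
# Sketch: for every environment variable NAME, replace the literal
# placeholder $NAME in a built file with its value. Paths and the
# function name are illustrative.
substitute_env_vars() {
  file="$1"
  tmp="${file}.tmp"
  cp "$file" "$tmp"
  # printenv lists NAME=value pairs; keep only the names
  for name in $(printenv | sed -n 's/^\([A-Za-z_][A-Za-z0-9_]*\)=.*/\1/p'); do
    value=$(printenv "$name")
    # escape characters that are special in a sed replacement
    escaped=$(printf '%s' "$value" | sed 's/[&/\]/\\&/g')
    sed -i "s/\$$name/$escaped/g" "$tmp" 2>/dev/null || true
  done
  mv "$tmp" "$file"
}
```

In the container, the entrypoint would call this for each bundle file and then start the server in the foreground with nginx -g 'daemon off;'.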
Now, for each one of the files specified in the JS folder environment variable, I make sure that I do that environment substitution for each one of the existing variables. So if I have an environment variable called BASE_URL, it will try to overwrite $BASE_URL with the actual value from the environment. Then I create a temp file and rewrite that file back to the original name. And finally, I start my NGINX server. So, a little bit of voodoo magic here; don't worry too much. Like I said, I'll share the article, and if you're really interested, you can go through the details. For now, you'll just have to trust me that it works. I think I've got everything so far, so we're ready to go to our front folder and run that docker build again. We'll use a tag, username/devops-front, and build from the current folder. That will start compiling the whole image, and this one will take a little bit longer, because it has to start from the Node.js image and copy over all of those files. There are a few little things that I will need to do. It seems like it's actually downloading one of the images. Remember when I said I'd use the latest version of Node.js, node 16? It seems like I probably had an older version cached on my machine, so it's fetching the new image right now. And now it's going through all of those stages. It downloaded the jq file, copied that over, changed the working directory, and it's now copying all of my files inside of that container, running that jq command, then npm install and npm run build. So this will take a couple of seconds again; you can see the progress as it goes. And I'm doing pretty good on time, I'm like five minutes late. As I'm waiting for this one to run: are there any questions about running JavaScript applications inside containers?
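As a recap while the build runs, the assembled two-stage Dockerfile looks roughly like this. It follows the workshop narration and the pattern from the referenced blog post; treat the versions, paths, and names as a template rather than the exact file.

```dockerfile
# --- Stage 0: build the React app, with jq available for the config rewrite
FROM node:16
ENV JQ_VERSION=1.6
RUN wget --no-check-certificate https://github.com/stedolan/jq/releases/download/jq-${JQ_VERSION}/jq-linux64 -O /tmp/jq-linux64 \
  && cp /tmp/jq-linux64 /usr/bin/jq \
  && chmod +x /usr/bin/jq
WORKDIR /opt/app
COPY . .
# Turn every value in config.json into a "$KEY" placeholder, e.g.
# {"BASE_URL": "http://localhost:5000"} becomes {"BASE_URL": "$BASE_URL"}
RUN jq 'to_entries | map_values({ (.key) : ("$" + .key) }) | reduce .[] as $item ({}; . + $item)' ./src/config.json > ./src/config.tmp.json \
  && mv ./src/config.tmp.json ./src/config.json
RUN npm install && npm run build

# --- Stage 1: serve the static build from NGINX
FROM nginx:1.17
# where the startup script will look for built JS files to substitute into
ENV JSFOLDER=/usr/share/nginx/html/static/js/*.js
COPY ./start-nginx.sh /usr/bin/start-nginx.sh
RUN chmod +x /usr/bin/start-nginx.sh
WORKDIR /usr/share/nginx/html
COPY --from=0 /opt/app/build .
ENTRYPOINT [ "/usr/bin/start-nginx.sh" ]
```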
I guess I didn't leave a lot of time, and it finished right away; it was a bit faster than I expected. All right, seems like we're good. So what I'll do now is actually run that container. I'll do a docker run -d --rm --name mern-k8s-front, and I'll map some ports. This container is running on port 80, but I can't have anything running on port 80 on my machine, so I'll map any incoming requests to port 8080 on my machine and redirect them to port 80 inside that container. I'll specify that BASE_URL, which will be localhost:5000. And then the image will be the one I've just created. Assuming that everything went well, it should be working, just like that. I missed a quote somewhere. There we go. And really? Again? All right. There it is. Docker ps to see what I have running right now: I've got a container that was started two seconds ago, and it seems like it's still running. So I can try a curl on localhost:8080. It seems like it's working: because I have JavaScript disabled in curl, obviously, it tells me that I need JavaScript. But if I actually go into the browser and connect to localhost:8080, I should now have my application. And it is connected to my backend already, and I have that message, which is in my cloud database, right here. So I can see that this is the value that was added there. If I add a new document to the database, let's just insert a document: name, someone anonymous again; message, hello world. Insert that into my database, and if I refresh the page, you can see that I've got this new message. That's distracting; remove this one. So you can see that new message from the database. It's really connected; everything works. We've got everything running in containers, connected to a cloud database. So again, that can be very useful.
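The front-end run command used above has the same shape as the backend one, with the port mapping and the runtime BASE_URL injection. Again a sketch with placeholder names, requiring a Docker daemon:

```shell
# Map host port 8080 to the container's port 80 and inject the API location
# at startup; the start-nginx.sh entrypoint substitutes it into the bundle.
docker run -d --rm --name mern-k8s-front \
  -p 8080:80 \
  -e BASE_URL=http://localhost:5000 \
  username/devops-front

curl localhost:8080   # serves the built index.html
```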
Now, if you want to share a database across all the different people on your team, using a cloud database here makes a lot of sense, because then you've all got exactly the same one. Everybody can connect to it, and you can have separate production and development environments for your databases as well. So everybody can connect, put some junk in there, and you all have access to that same database. That makes it a lot easier. And again, there's a free tier available, I think it's 500 megabytes of data, so it's a lot of data for a free tier. Before you hit that limit, you'll be good. If you want to run a production service, you'll probably need a paid tier, but hey. So that brings us to the end of our containers section. Everything is now running in containers; you can either run the database locally or in the cloud, as I'm doing right now, and both our front end and back end are running inside containers on my machine. That brings us to the last stretch: we want to put everything inside Kubernetes now. We'll take those containers and deploy them into a Kubernetes instance. There are a couple of ways you could do that. For this specific workshop, I'll be using Minikube. I did run everything on a DigitalOcean server as well; it worked like a charm, and it's very easy to use DigitalOcean to deploy a Kubernetes cluster. I haven't documented it yet, but if you're interested, I'm definitely planning on writing a blog post about the different deployment options. I'll be using one specific operator, and I'll get back to that; there's another one that I want to explore. And then I want to give the instructions for both DigitalOcean and the local environment. I will send that soon, but again, just follow me on Twitter, and as soon as I finish that article, I'll post it there. So if you're interested in a real cloud deployment, that's covered too.
But for this workshop, I'll be using Minikube. It's a little bit easier to get started with: you don't need an external domain pointing at your Kubernetes cluster, and there are a lot of hoops you don't have to jump through. But ultimately, all the YAML files that we'll be building will still work on both. All right, so let's talk about Kubernetes. Actually, let's look at what Kubernetes is, straight from the Kubernetes website: Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. So really, what Kubernetes does is manage all of those containers. We've created them; now we're going to send them to Kubernetes and say: you take care of that. Make sure they can communicate with each other, manage all of that network stuff, make sure they're exposed to the outside world, and manage those containers. And that's the important part: if Kubernetes notices that a container crashed or something didn't work, it will automatically try to restart those containers. And if you need to scale your application, say you put up a Super Bowl ad or Black Friday comes and you want 15 of those NGINX servers running, well, with a simple command you can tell Kubernetes to scale to 15 containers, and it will take care of routing the traffic, doing all the load balancing and all that stuff for you. That's where it really, really shines. And once you're done, just scale down, reduce to a single instance, and you're good to go. So that's what Kubernetes does: it scales those containers up and down, and it takes care of all that networking, which is a lot.
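That scaling command really is a one-liner; the deployment name below is a placeholder for whatever you deployed:

```shell
# Scale out for the traffic spike, then back down when it's over.
kubectl scale deployment mern-k8s-front --replicas=15
kubectl scale deployment mern-k8s-front --replicas=1
```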
That networking part is a lot to make work. So the first thing that we'll do is that we'll create a pod for our backend. And a pod is one or multiple containers working together. Really, what you want inside of a pod is containers that are really directly related to each other. In our case, we'll have a pod for our backend, another pod for our frontend, and they shouldn't be inside the same pod because they're not directly related to each other. Maybe we'll want to scale up our backend for some reason, because maybe it has a lot of business logic and it takes a little bit longer. So we want to make sure they're decoupled and we can independently scale them up or down. So most of the time, a pod will have a single container. It might have different containers to check for metrics and do monitoring and things like that. But ultimately, as far as your application is concerned, you would have one single container per pod as a rule of thumb. So in this case, we'll have a deployment. Deployment is a way to describe what you want. How many pods do you want? And how do you want to scale them? And what are the resource limitations that you want to put on those pods? So that's a deployment. Once you put a deployment into kubernetes, it will actually create a replica set that will be in charge of monitoring those containers. So we'll deploy two containers, our two pods, for our backend here. And we'll need a service as well. A service is that network thing that will tell, well, here are all of my containers. And instead of trying to access those containers directly, you'll access that service. And the service will be in charge of sending those different requests to all the different containers. Each time you start your containers, they will start with a random name. Because if you want to scale it up to 15, you don't want to specify the name for each one of your containers. So that's how you keep track of those containers. 
Even though they have random names, it doesn't matter: you're always interacting with your service rather than with the pods directly. So this is what we'll deploy for our backend. For our front end, we'll have something very similar: again a deployment, with, let's say, two instances of that NGINX server with our files, and then a service to expose them. Now, the way I'll be using it, those two will be able to see each other internally. But because the application runs in the browser, it will still need to be exposed externally, and those API endpoints will need to be exposed externally as well. In order to expose everything externally, we'll use what is called an ingress. The ingress maps traffic: anything that comes to my website at /api/something will be redirected to the backend service, and basically anything else, /anything-else, will be redirected to my front end. So I'll make sure we've got those rules in place so we can expose them. And finally, we'd need to do the same thing for our Mongo database. Now, deploying stateful applications, and I've already hinted at this, can be very, very tricky. We'd need a deployment. We'd also need a service in order to expose all of our Mongo servers. But we'd also need to set up persistent volumes and persistent volume claims. We'd need to add the connection string and make sure it's accessible inside of our Kubernetes deployment so that we can use it directly as an environment variable. And you might want to start setting up things like sharding, as well as replica sets. Adding all of that is where it gets really tricky. So instead of doing that, we'll actually use an operator. An operator is a way to basically install applications inside of your Kubernetes cluster.
It will create a bunch of custom resources, and using those custom resources, you'll be able to connect to your database cluster. There are two, or technically three, different operators that you can use for MongoDB. In this case, I'll be using the Atlas operator specifically. This one actually connects to the cloud instance that I've shown you before. So I'll be able to maintain and manage that cloud instance from inside of my Kubernetes cluster, which is very useful if you have everything running inside Kubernetes and you want to be able to configure an external cloud service. There's also an operator for the community version and the enterprise version of MongoDB, which would actually deploy the database servers directly inside of your Kubernetes cluster. Again, I'll have a blog post about using those two, but I've decided to use the Atlas one because I think it's a lot easier, and it's easier to use a cloud instance, in my opinion. So I'll be using the Atlas operator; it lets you deploy and manage databases from the Atlas cloud. We'll create the backend deployment as well, making sure that it fetches those images that we've pushed to Docker Hub and runs them inside of Kubernetes, and then we'll do the same for our front end. All right, let's get coding again. I'm starting to lose my voice; those are long days with a lot of speaking. I've got about 45 minutes to go, so I think I'm pretty much on time here. We should be able to deploy everything. The first thing I'll do: as I said, I've used Minikube, and I did a dry run earlier, so I'll make sure that I delete everything. Just so you see I am not cheating, I'll restart my Minikube server. No, please don't. Let's just start with minikube delete. This will take care of deleting everything that is currently running, and I'll start from a brand-new Minikube instance.
Can inter-service communication be secured? Yes, it can. One easy way is just using namespaces, and then they won't be able to talk to each other. There are also ways to do much more advanced network management. How exactly you would do it is a little bit outside of my expertise, to be honest, but it's definitely possible to make sure that they don't see each other. As I've mentioned, though, I will be configuring it so that everything is visible from inside the network. All right, so I'll start it. If you're using macOS, I had issues with the default driver, so just make sure that you use --driver=virtualbox and that you have VirtualBox installed on your machine. I can't remember exactly what the issue was, but there was something. While this is running, it doesn't matter; it'll just say that it's up and running at a certain point, so I'll get started with my Kubernetes deployment files. The first thing I'll do is create a new folder here, and then a new file. Everything in Kubernetes uses YAML. Technically, you can also use JSON, and as much as I hate YAML, it's actually easier than JSON, which gets really hard to read. Yeah, no, use YAML. It's easier, and it's kind of the standard in the industry, so if you're looking at any example, they're all using YAML. All the Kubernetes files use the same structure: you start by specifying an API version, you specify the kind of thing that you want to create, you need to add some metadata, and then you have the spec, which is the bulk of what you're trying to create. So in here, we'll start by creating a deployment, and then we'll also create a service. We'll put them in the same file: we'll just separate the different objects using those --- separators, and that way we can have multiple objects inside one file.
For my deployment, I'll be using apps/v1; that's the version of the API. I'm creating an object of kind Deployment. In the metadata, the only requirement is a name, so we'll call it mern-k8s-back. And then you would also add labels. Labels are what you'll use to find the different components inside of your Kubernetes cluster. You'll see, as we start adding things, we'll have more and more and more stuff inside of our cluster, and using labels is an easy way to find the different components that are all related to each other. Now, there's no real standard for how to use them; there are a couple of best practices, which basically say: for a single application, use an app label, and then label the different components. So what I'll do here is use app and component: the component will be back, and the app will be mern-k8s. All right, I see that Minikube is now done; I heard it because my fans stopped running. kubectl is the tool that you use to interact with a Kubernetes cluster, so if I do kubectl get all, I should see that, well, there's not much. There's nothing; right now there's only the kubernetes service. All right, back to our deployment. I've got all of the boilerplate stuff, and now I'm going to write the spec of my deployment, the actual description of the object that I'm trying to create. The first thing: I said I want to run two replicas, two pods with the backend server running. I'll also tell Kubernetes, for this deployment, try to find any pods that match the following label. So find pods that have component: back, and make sure that you have two of those running at any given time. And if you don't see those pods, well, here's a template to create a new pod, with a random name. So you'll notice that we're basically rewriting all of this, but for the pods now.
So, metadata: we won't give it a name, because Kubernetes will automatically create a random name for us, but we will add some labels. It will have app: mern-k8s, and it will have component: back. Remember, we said: match those labels, find pods that have component back, and make sure we have two running. If you don't, create one that has that label, because it needs to be able to find them. And those pods will have the following spec. Here's the list of containers that we will have inside of that pod; we'll have a single one, and we'll give it a name, mern-k8s-back. Here's the image to use for this container: I'll use the one that I am 100% sure works, the one I pushed earlier, but in theory, you would use the one that you've just created on your local machine and recently pushed. And make sure that you open up the right port: containerPort 5000. I want Kubernetes to know that something is running on that specific port. Finally, put in those environment variables: name PORT, value 5000. And, oops, let me make sure the indentation is right; indentation is key to a healthy YAML file. Finally, CONNECTION_STRING; let's leave the value empty for now. I'm using the YAML extension as well as the Kubernetes extension in VS Code, which is why I'm getting a warning about not having resource limits. Let's ignore those squiggly lines for now, but in theory you should add resource limits here. All right, let's go ahead and apply this file. We'll use kubectl again: kubectl apply -f, and, oops, let's move into the right folder. I see I have a question; I'll just deploy this and then get back to it. Apply -f back. Error validating. All right: template, metadata, metadata. Let me see if I can easily find that one, or else I'll just revert to my notes. Line three? Thank you.
What's on line three? Oh, thank you. Sorry, that's all I can see. All right. Metadata. Boy. Error when creating. apiVersion apps/v1; that should be it. I'll actually open up this file here, and I'll cheat and just copy everything over, making sure that I've still got those two replicas; everything else should be good. And we'll keep this one empty. I don't want to waste your time, so I'll just change it directly, and once again, you'll have access to the GitHub repo. Probably just a little typo somewhere. Okay, so we can see that our deployment was created. Now if I do kubectl get all, I see a lot more stuff than I had earlier. Earlier, I just had that kubernetes service, but now I've got a deployment, which tells me that I have zero pods out of two ready. It also created that replica set; remember, I said the deployment will create a replica set. The replica set has its own random name here, its own random ID. And the replica set created those two pods and makes sure that they're running. So we've got two pods, each with their random name, in ContainerCreating status: it's actually downloading the image inside of my Minikube instance right now, so it's not using my local cache. And you should see, there you go, both of them are now up and running. I can take one of those and do kubectl logs with the pod ID, and I should have something. But it gives me an error: I see that I have an error because, well, I didn't specify the connection string. That's fine; we'll get back to that connection string later on as we finish the deployment. So now that I have my deployment, I need my service as well: a way to expose those pods and tell Kubernetes how to handle all of that networking. Let me just remove that. For that service, I'll go ahead and specify an apiVersion again; it uses the same structure.
So I'll have v1 in this case, and I'll create an object of kind Service. Some metadata, and I'll type it correctly this time: a name. We'll use the same name; you can use the same name for different kinds of objects, and I find it a little bit easier to track than having mern-k8s-back-service, mern-k8s-back-deployment, and so on. So yeah, I prefer to reuse it. mern-k8s for the app name; the component is still back, it's the same component, so it's just the service for that component. And then I'll create my spec. In the spec, I tell it to find any pods that have the following label, very similar to what we did in our deployment: find anything that has that specific component label. And then these are the mappings for the ports: port 80 for our service, mapping to targetPort 5000. So incoming requests to the service on port 80 will be forwarded to the containers on port 5000, using protocol TCP. And we can even give the port a name so we can refer to it by name afterwards, although I'm not using it. So if I go ahead and apply this file again, you'll see that the deployment was configured, so apparently I changed something in there; hopefully it didn't break anything. And the service was created. Now if I do kubectl get all, I will see that my two pods are still running, but now I have this service, exposed on an internal IP address, as you can see here, with this specific port right there. All right, what else do we have? Let's try something; I need to open a new terminal window. I can delete a pod, and if I do kubectl get all again, you will see that I now have three pods, because it's gracefully removing the one that I deleted. So this is that one right here, Terminating, but it immediately started a new one, which you can see right here.
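For reference, the complete back.yaml assembled so far, deployment and service separated by ---, following the transcript; the image name is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mern-k8s-back
  labels:
    app: mern-k8s
    component: back
spec:
  replicas: 2
  selector:
    matchLabels:
      component: back
  template:
    metadata:
      labels:
        app: mern-k8s
        component: back
    spec:
      containers:
        - name: mern-k8s-back
          image: username/mern-k8s-back   # placeholder image name
          ports:
            - containerPort: 5000
          env:
            - name: PORT
              value: "5000"
            - name: CONNECTION_STRING
              value: ""   # filled in later, once the Atlas operator is set up
---
apiVersion: v1
kind: Service
metadata:
  name: mern-k8s-back
  labels:
    app: mern-k8s
    component: back
spec:
  selector:
    component: back
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 5000
```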
So if you want to do a deployment, for example pushing version two of your application, what happens is that it will try to start the new version-two containers. If they crash, it just keeps the version-one pods up and running. But once the version-two pods are up and running, it will automatically start terminating the version-one pods. So it takes care of keeping you as close as possible to 100% uptime, by making sure that you always have the appropriate pods running at any given time. All right. We now have that up and running; time to move on to the front end. Let's go ahead and deploy it. The front will be very similar, so: front.yaml. And I'll just need my secret notes here, if you will bear with me. That's not good. There it is. All right. Because I have my auto-completion tools, I'll actually just use them. apps/v1, metadata; the name for this will be mern-k8s-front. We'll add some labels as well. Remember, we're doing exactly the same thing, but this time for the front end, so the component will be different: I'll be using front, and matching labels, so we'll match component: front. That's the label I want to match. There it is; make sure that the template has that label too. We'll also add the app label here. So far so good. Then I come to my containers section. I'll actually reuse the one from the back end, because they're very similar, so: back to front. And I won't be using those resource limitations. Come on. There it is. Okay, and let's remove this line. So instead of back, we'll use front for the name, and the image is also the front one. For the ports, this one is actually running on port 80 already, not port 5000. And then we had one environment variable, BASE_URL, and the value in this case will be /api. Remember, I've mentioned that any incoming route on /api will be redirected to a specific service.
So we'll create that ingress later on. That should take care of my front-end deployment. I also need to create a service, so once again I'll use that auto-completion to make our life easier, and copy that name, because we'll be using the same one. There it is. The selector will be component: front — let's just steal that from up there — and the same labels for my metadata. There it is: selector, component front. So far, so good. Now I have to map my ports. The target is 80 and the port is 80: the service will be listening on port 80, and the target port inside the container is also port 80. Finally, we'll have the protocol, TCP, as well as a name — we'll use the same name. All right, that should be it for my front end, so I can go and apply this file as well. Not F — dash F, front. And cross our fingers... there we go, everything is created. So now if I do kubectl get all, you see that I have a lot of stuff going on. I only have one front-end container here, so I could just go back into my deployment and say, you know what, I forgot to mention that I want two replicas. I apply this file again, and you can see that it was configured — so there's a change that occurred. If I do get all, you can see that I now have two containers: one that started four seconds ago, the other one thirty seconds ago. I have my front service, my two deployments, my two replica sets — I'm getting more and more things. And this is where labels start to be convenient: if I do kubectl get all -l component=front, I get only the things that belong to that component specifically. That's where labels get very, very useful. All right, looking good so far. One more thing we can do now — actually, let me just... we'll do kubectl exec.
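The front-end Service narrated above, as a manifest — again the names are reconstructed from the narration, so treat them as assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mernkates-front
  labels:
    app: mernkates
    component: front
spec:
  selector:
    component: front
  ports:
    - name: http
      port: 80        # port the service listens on
      targetPort: 80  # front-end container also listens on 80
      protocol: TCP
```

With this applied, `kubectl get all -l component=front` shows just the front-end pods, deployment, replica set, and service.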
Just like we can with plain containers, we can run commands inside of a pod. So I'll open up a bash session inside a pod, and you can see that I'm inside my container: I have my static files, I've got my index.html. If I had curl installed — no, it's not installed on this specific container — in theory I'd be able to ping myself. If I do printenv, I can see environment variables for all of the services that are exposed, and this is one way you would find another service. So currently I'm inside my front-end container, but I can see the host of my back service — there we go, I've got the IP address of my back service right here. And thank you, Alex, for the suggestion: there are a lot of tools that make this a little bit easier than raw kubectl. That's one I'm not familiar with, so I'll definitely look it up afterwards. But you can see here how everything is connected, how I can do different things, and you can go into those pods if you need to debug anything. All right. So those two components are running internally, but nothing is exposed to the outside world yet — I still can't reach any of those servers. So let's go ahead and create a new file: an ingress. This one I'll actually just copy and paste for the sake of time. And that is not the one that I want... I'm pretty sure I have another one somewhere... please, please... there it is. Okay. So what I'll do here is create an ingress to expose the different services to the outside world. I'll be using nginx as a proxy — so that's your nginx proxy, I can't remember who had that question earlier. I'll use the following annotation — I'll come back to it in a second — it just rewrites the URL that is sent to the pods. Then I specify my rules. My first rule is anything that starts with /api, with an optional slash, followed by anything else.
It matches on the prefix: anything that starts with /api gets redirected to my back-end service — that makes sense — on port 80, and anything else is redirected to my front-end service. Now, that rewrite annotation tells nginx to take the second capture group and send only that part as the request to our internal services. Remember, on our express server — our back end — the routes were /entries; they were not /api/entries. If I just forwarded the whole request to the back end, the back end wouldn't know what to do with it, because it doesn't have an /api/entries endpoint. So what I'm doing here is rewriting: any incoming request to the ingress at /api/entries is rewritten as /entries and then sent on to my back-end server. That's a little trick you can use when creating these. So now I can apply this new file — there you go, it's created. Next I can get the IP address of my Minikube instance, since it runs in a virtual machine on my local machine, and curl that IP address; it should redirect the traffic to my server. Now I'll have to figure out why it's not working... let's try /api/entries... still not working. Oh, I know why: Minikube does not have the ingress add-on enabled by default, so you'll need minikube addons enable ingress — that's what happens when you delete your whole instance. This will take a second; it just downloads the additional add-on. You'd have to do something similar for a deployment in the cloud: a little bit of configuration to make sure everything is exposed to the outside world. In DigitalOcean, for example, you'd need to create a load balancer and make sure the traffic is redirected to your application.
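The ingress built here looks roughly like the canonical ingress-nginx rewrite example — the resource name is an assumption, and note that when the rewrite target uses capture groups, the paths are treated as regular expressions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mernkates-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Forward only the second capture group to the pods,
    # so /api/entries reaches the express server as /entries
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: mernkates-back
                port:
                  number: 80
          - path: /()(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: mernkates-front
                port:
                  number: 80
```

Everything under /api goes to the back end (with the prefix stripped); everything else falls through to the front end.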
So now I can actually reach that server. It returns nothing, because it's not connected to a database yet, but if I take that IP address again — let me find my window right here — I can go there and have access to my application. All right, so it is actually working, it is exposed to the outside world, and my /api routes are exposed as well. The only remaining part is that I'm still not connected to a database, so I'll need to hook my Atlas cluster into my kubernetes cluster. For that, I'll use the Atlas operator. As I said, an operator is basically a way to create custom resources and manage external applications — it's a way for software vendors to help you, as a software developer or devops engineer, manage those external applications using the expertise of the people who actually run them. So we'll use the Atlas operator to get access to Atlas from within our kubernetes cluster. Let me just find my notes again, because I'll need to find the actual file to install... not this one... almost there... sorry... where is it? Atlas operator, there we go. Okay. So you can search for "Atlas operator" on GitHub — it's an open source project, so you can see all the details, and the page gives you the exact instructions to install the operator itself. There's a YAML file that we can install directly from the GitHub page, so I can just run this. As you can see, it ran a bunch of different things: it created custom resources — you can now see the Atlas cluster and the other atlas.mongodb.com endpoints inside of our internal kubernetes api — and now we have access to Atlas objects directly from within kubernetes.
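The single install command described here is the one documented in the mongodb-atlas-kubernetes GitHub README (it needs a live cluster, so it's shown for reference rather than as something runnable on its own):

```sh
# Install the Atlas operator plus its custom resource definitions, straight
# from the project's GitHub repository (URL as documented in its README):
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/main/deploy/all-in-one.yaml
```

After this, `kubectl api-resources | grep atlas` should list the new atlas.mongodb.com resource kinds.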
Now, another thing I need to do is create a secret. I, as the administrator of our kubernetes cluster — or our mongodb organization — want to make sure the Atlas api keys are stored inside of kubernetes. So I can create that right now: kubectl create secret generic, and we'll call it mongodb-atlas-operator-api-key. I'll be needing those api keys, and I can find them in the access manager: if I go to the organization access here, you'll need to create api keys so that the Atlas operator can actually interact with your organization. So just go ahead, create a new api key — devops, devops.js — give it the owner permissions so that it can create and manage all of our clusters, and you've got your public and private key right here. Once again, you'll want to add some IP addresses to the access list to be able to use it. So even though you can see my api key, you still can't use it, because the requests need to come from my address. Right. So that creates everything you need. I actually have those prepared somewhere... this one uses my pre-configured keys... I'll just need to find it. Where did I leave those? Right here. And with this one, I'll go back to my terminal and just hide this from you for a second while I run it in the other window. Right, okay, there you go: I've created all of my secret keys, and they're now hidden inside secrets in my kubernetes cluster, so I don't have to share them with you. Now I also need to label that secret so that the operator knows where to find the api keys: label the secret called mongodb-atlas-operator-api-key with atlas.mongodb.com/type=credentials, in the mongodb-atlas-system namespace. Right, it's now labeled, so that's good.
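The `kubectl create secret generic` plus `kubectl label` steps narrated here are equivalent to applying a manifest like the following. The secret name, namespace, and key names follow the Atlas operator's documented convention, but treat them as assumptions and check the operator's README; the values are placeholders for your own keys:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-atlas-operator-api-key
  namespace: mongodb-atlas-system
  labels:
    # The operator only picks up secrets carrying this label
    atlas.mongodb.com/type: credentials
stringData:
  orgId: <your-atlas-org-id>
  publicApiKey: <your-public-api-key>
  privateApiKey: <your-private-api-key>
```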
So that tells the operator where to find the credentials. Next up, we need to create a password for our database user: kubectl create secret generic atlas-password --from-literal=password=mernkates. Right — not a very good secret, but it works. Next, we label that new password with the mongodb credentials type again, so it now has those labels too. And finally, I just need to create my Atlas file: atlas.yml. I'll definitely need my cheat sheet here. All right. So I'll now be able to use those new objects, my Atlas resources. The api version is atlas.mongodb.com/v1, I believe. It is v1 — well, it's not officially v1 yet; it'll be released in June at mongodb World. If you're interested, I'll be giving a talk there about it — again, link at the end. I'll create an object of kind AtlasProject, with the metadata name mernkates-project. So that will create a project — remember, when I started the cloud instance earlier, I actually created a new project and then a new cluster. This is basically the same thing, but from inside of kubernetes now. So I'll call the project mernkates, and then I'll add the project IP access list: you want to make sure you add the IP addresses that will have access to that cluster. I'll just be lazy here and accept any incoming traffic from anywhere — comment: never do that. You probably shouldn't do that, but for the sake of this demo it'll be more than enough. So that creates an Atlas project. We also want to create a cluster, so we'll use api version atlas.mongodb.com/v1 again and create an object of kind AtlasCluster — that will take care of creating our cluster for us. We'll add some metadata: the name — trying to type too fast now — not project, but cluster. And finally, we need to add the spec. In the spec, we need a project reference — we'll tell Atlas to use the following project.
This is where the new cluster will go: name, mernkates-project. Then the cluster spec — this is how you define the actual cluster. We'll give it the name Cluster0, which is kind of the default, as well as some provider settings. In this case, I want to create a free shared cluster like I did earlier, so the instance size is M0. Instance sizes range from M0 up to — I don't know, some big number; they basically define the number of CPUs you want and all that, and M0 is the free tier. For the provider name: because this one runs on a shared server, the provider name is TENANT. If you were using a dedicated production server, say an M30, you would put AWS, AZURE, or GCP as the provider name — you could use any of those clouds. Then the region name: you can deploy it pretty much anywhere in the world; there are a couple hundred different regions you can use. And because this is a tenant, I need to specify the backing provider — it will be deployed on an aws server here. All right, almost done. Remember the next step I did when I created that cloud instance? I went to the project — it was devops 2, there it is — I created my Cluster0, and then I went to database access and created a user. So that's the next thing we'll do here: exactly the same process, but again, directly from your kubernetes cluster. The api version will be the same one, and we'll create an AtlasDatabaseUser. We'll add some metadata and just give it a name — let's forget labels for now — atlas-user. And here's the spec for it — I'm trying to type too fast now. All right, spec. We'll specify some roles: the role name will be readWriteAnyDatabase — so we just give it all the accesses — and the database, which is the authentication database, should pretty much always be admin. Next up, we need to specify which project this user is for: not project name — project ref.
For the project ref, we'll use the one we created earlier. The username will be mernkates, and for the password we'll use a secret — the secret we created earlier, which was called atlas-password, and the value was actually the same word. All right, I've got everything I need now to actually deploy to the cloud cluster, so I can just do kubectl apply -f atlas.yml. Oops — what did I call it? Where is it? Oh, there it is; let's just move this. Those were long days trying to do all of that. Okay, we've got a validation error, but let's just try that... there we go. All right. So we've created a project, we've created a cluster, we've created a user. What should be happening right now is that the operator sees that mernkates cluster, and because we've just asked it to apply a new configuration, it looks for changes — and you can see that it's actually deploying all of that inside of our cloud account. In the meantime, the existing cluster is still running. I could have created a completely new cluster, but it takes about five to seven minutes to actually deploy one, so I'm reusing one just to make it a little bit easier here. Now that I have this cluster, and as it gets deployed, I can actually inspect it — I'll just need my cheat sheet again, because I'll be using a lot of jq. So I can see different things now. Let me just clear this. This one is up and running: I can do kubectl get atlasclusters, and I can see that I have a cluster that is connected now — it's been there for 67 seconds. Same thing for atlasprojects, and so on and so on. So I can now access all of my Atlas resources directly from my CLI, directly from kubectl. It all gets mapped in, and I can manage everything directly from here — which makes my life a lot easier.
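Putting the last few minutes together, the atlas.yml file can be sketched like this. The resource names, the region, and the exact field names are reconstructed from the narration and from the operator's early v1 CRDs, so treat them as assumptions and verify against the operator's own samples:

```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
  name: mernkates-project
spec:
  name: mernkates
  projectIpAccessList:
    - cidrBlock: 0.0.0.0/0    # open to the world -- fine for a demo, never do this in production
      comment: "never do that"
---
apiVersion: atlas.mongodb.com/v1
kind: AtlasCluster
metadata:
  name: mernkates-cluster
spec:
  projectRef:
    name: mernkates-project   # put this cluster in the project above
  clusterSpec:
    name: Cluster0
    providerSettings:
      instanceSizeName: M0    # free shared tier
      providerName: TENANT    # shared servers; use AWS/AZURE/GCP for dedicated tiers
      backingProviderName: AWS
      regionName: US_EAST_1   # assumed region
---
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
  name: atlas-user
spec:
  projectRef:
    name: mernkates-project
  username: mernkates
  passwordSecretRef:
    name: atlas-password      # the labeled password secret created earlier
  roles:
    - roleName: readWriteAnyDatabase
      databaseName: admin     # the authentication database
```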
Remember when I said you have to make sure there's persistence in your volumes and all of that? Now everything is managed automatically — it all lives in our cloud instance. You can go there, edit, change, access the UI, do whatever you want, both from the UI and from your kubernetes cluster, and it's all in sync now. The last thing we want to do is kubectl get secret. By installing that operator and creating that project and user, it actually created a secret here. The name of the secret is the name of the project — all in lowercase, no spaces, using dashes — then the cluster name, then the name of the user. Hmm, no, that's dbadmin... yeah: mernkates-user. I'll need kubectl — I changed it at the last minute — there you go, there it is. So: name of the project, cluster name, and the actual user name. Let's just copy that over: kubectl get secret, and I can see that I do have that secret — it is created. I can output its content in JSON format, and you will see — well, look at that — all of my connection strings, and they're all base64 encoded. So what I'll do is pipe that through jq — oh, I don't have the right one here — jq -r, take .data, pipe it through with_entries, take each value, and pass it to the base64 decode filter. And there we have it: that's our connection string. Remember, earlier I got that manually from the mongodb UI, but now it's part of my secrets, directly inside kubernetes. So the last little thing I can do now is go back to my back-end deployment — I just need to find the exact one, right here; I'm not looking at the right screen — the back-end deployment. Remember, I had no connection string in there.
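The jq pipeline narrated here looks like the commented lines below — the secret name is assumed from the project/cluster/user naming convention the operator uses. Under the hood each value in the secret's `.data` is plain base64, so the last line demonstrates the decoding step on a made-up example value:

```shell
# Dump the generated connection-string secret and decode every value
# (needs a live cluster, so shown as a comment):
#   kubectl get secret mernkates-cluster0-mernkates-user -o json \
#     | jq -r '.data | with_entries(.value |= @base64d)'
#
# Each value is just base64; decoding one by hand (example value, not a real secret):
echo -n 'bW9uZ29kYjovL21lcm5rYXRlczpwYXNz' | base64 --decode
# prints: mongodb://mernkates:pass
```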
So rather than hard-coding the value directly into our YAML file — which you would ultimately push to GitHub and expose your connection string all over again — what you can do is take that value from an existing secret: a secret key reference. Take it from the secret — well, I can't remember the name again — mernkates-cluster0-mernkates-user, there we go. Take it from that secret and use the following key — what was it called? — connectionStringStandard. I'll use connectionStringStandard right here. Now we can go ahead and redeploy: kubectl apply -f back.yaml. And there you go, you can see that it was configured. Get all: you can see that it's terminating the old processes and slowly starting the new back ends — there's one that was still running, so it's making sure to do that rolling deployment. And if we go back to — not this one, oops, showing all of my emails here — where is it? — and I go back to the IP address... oh, I forgot it, so let's just get it again: minikube ip. There it is. And we are finally connected to that new database. You see that the data is different, because I'm now connected to mernkates Cluster0; if I browse that collection — I had two different databases — you can see that it's connected to this one and I can use it. I can put a new message in that guest book — "This is so cool" — submit it, it gets saved here, and I can go back and see it inside my database. Everything is connected. All right, sharp on time: 11:30, so we managed to get it done in two and a half hours. Let's just go back to those slides to wrap it up very quickly. All right. So this is our final result: we finally got our GeoCities guest book up and running. It's using an Atlas database, so it has three nodes with replicas in place, running on six virtual CPUs — I think they have two gigabytes of RAM each.
And then we've got two instances of our front end and two instances of our back end, all deployed on a kubernetes cluster, which also needs three nodes running — so another six virtual CPUs. So we've got 12 virtual CPUs and about 32 gigabytes of RAM to run this GeoCities guest book page, which is probably a little bit overkill, but it's never going down, I promise you. There are so many different things, so many replicas — it'll be highly available, that's for sure. So this is what we've done today. There's a lot of content in here — again, a recording will be available, and you should have access to it by tomorrow. You'll be able to watch it, skip the boring parts, skip the parts you were already familiar with, and slow down on the parts you want to reproduce, try, or explore. I'll also share a link to the GitHub repo that has all the code I've just shown — it has all the steps I was kind of cheating and looking at right here. But basically, we've managed to build a full MERN stack application. We've got our mongodb server; we've got an express server running on node.js, on which we created three routes, technically: one to check the health and status of our application, and two routes to interact with the data — those are connected to the database. And then we had a react front end where we created the form and listed the entries from our guest book. So we've built a full MERN stack application, and you saw how easy it is to use javascript all the way — very intuitive, easy to keep everything in the same language — and within an hour we had a full three-tier application up and running. We then packaged everything up inside containers so that those containers can be taken and deployed anywhere, with maximum compatibility across different servers and environments. We've used environment variables as well.
And you saw how to create those inside of a front-end container — that one was a little bit trickier, but we still managed to do it. Think of containers as cattle rather than pets: they should be built so that you can easily take them down — they don't keep state, they don't keep anything — and just spin up a new one when needed. And that's where kubernetes comes into play: kubernetes helps you deploy and scale that application, making sure your pods are running, that you can easily scale them up or down, and that they're always available. Now, when you have a database or any stateful application, that's where it gets really tricky, because those containers lose their state when they get restarted, and you don't want that. To help with that, you can create persistent volumes — there's a lot you can add — but if you have a database, chances are your database vendor has an operator for kubernetes, so look for those. You saw how easy it was to install one: a single kubectl command. It installed a bunch of different resources, and from there I was able to interact with my database — create a new cluster, or access a cluster (actually, I didn't create one; I accessed one that already existed), manage all of those users, manage those permissions — and everything was done directly from within kubernetes. So using operators is a lot, a lot easier than trying to do all of that by yourself. So that's all I had — thank you so much for your time. I'll stick around for questions for a couple more minutes if there are any. If you want more information, take a look at easyurl2.mernkates — you've seen that name pop up a few times; it always sounds weird when I say it, but it should be imprinted in your brain by now.
But in there, you'll have access to the GitHub repo. You'll have access to a bunch of resources, how to build front-end containers, as well as just general information about kubernetes and mongodb. So feel free to take a look at that.
152 min
11 Apr, 2022
