MERN Stack Application Deployment in Kubernetes


Deploying and managing JavaScript applications in Kubernetes can get tricky. Especially when a database also has to be part of the deployment. MongoDB Atlas has made developers' lives much easier, however, how do you take a SaaS product and integrate it with your existing Kubernetes cluster? This is where the MongoDB Atlas Operator comes into play. In this workshop, the attendees will learn about how to create a MERN (MongoDB, Express, React, Node.js) application locally, and how to deploy everything into a Kubernetes cluster with the Atlas Operator.

152 min
11 Apr, 2022



AI Generated Video Summary

The Workshop covered building a full stack JavaScript application using the MERN stack, including the back end with a MongoDB database and Express server, and the front end with React. The application was containerized using Docker and deployed to a Kubernetes cluster. The workshop also covered using operators for managing databases and ensuring secure service communication in Kubernetes. The final application was highly available and the GitHub repo with all the code and steps was shared.

1. Introduction to MERN Stack and Workshop Overview

Short description:

Building a full stack JavaScript application using the MERN stack. Deploying a MongoDB. Building a Node.js application. Building a React front end. Building a guestbook.

All right, let's get started. There is a recording that will be made available tomorrow, and you should have access to all of it, so don't worry too much about taking notes. If you want to follow along, I'll probably go fast because there's a lot of content, but at least you'll have the recording, so you'll be able to look at it in your own time and pause when you need to if you want to try it on your own afterwards. I'll also share a GitHub repo with everything we'll be discussing today.

All right, perfect. So out of curiosity, who here is a JavaScript developer? Who's completely new to MERN? Who's completely new to Kubernetes and containers? If you want to just drop a line in the chat, that will help guide me through the workshop. I'll try to spend more time in the places where people have an actual interest. The plan for today, basically, is to build a full stack JavaScript application using the MERN stack. We'll start from scratch; I've got an empty folder here just waiting for some code. So we'll build an application, front end, back end, connect to a database, all running on Node.js. And then we'll containerize all of those different components, all of those little services, which we will ultimately deploy to Kubernetes at the end. So, you've got people that have some experience with Kubernetes, a little bit less with MERN, a lot of experience with JavaScript. I'm trying to get into Node.js. That's good, that's good. You'll see how to build a basic application on Node.js here. We'll start from scratch. I won't go into the advanced concepts of Express or React, but you should have enough to get started at least. I think that's kind of the goal here; at least you'll know what to google when you hit an error at this point. Intermediate knowledge, awesome, awesome, perfect. Jack of all trades, and I think that's very important, especially for this type of workshop. We will be touching a lot of things: some front-end, some back-end, some DevOps. So if you're one of those generalist developers, I think this is a very, very good workshop for you.

Alright, so let's just jump right into our subject. But right before, let me introduce myself. Hi, my name is Joel. I am based in Ottawa, Canada, as I've already mentioned. If you joined a little bit early, I work as a developer advocate for MongoDB. And if you ever, ever want to reach me, if you ever have any questions, any comments, just feel free to follow me on Twitter. That's usually the easiest way to get in touch with me. I'm more than happy to answer any questions that you might have. The DMs are always open. It's just an easy way to get in touch with me. And sometimes I post some smart stuff, mostly useless stuff, but sometimes smart stuff. It's an easy way to get in touch.

Okay, so we'll talk a little bit about the MERN stack. One of the very popular ways to deploy applications nowadays is the three-tier application architecture. Basically, you've got clients that connect to a back end server, which then connects to a database server. We've got an example here with our application: the front end is downloaded from the web and runs inside of the browser as a large single page application. There are a lot of frameworks for that nowadays; I'm pretty sure you're all familiar with some of the major players, so React, Angular, Vue. You build those front ends, those clients, and those clients connect to a back end server. That's a very popular way to build your application: you've got an API and then you've got your front end. This way you can easily expand your application; if you need a mobile application later on, well, it's easier if you already have that API built. That's a big difference compared to, let's say, older PHP applications where everything lived in the back end. So you've got a back end, and your back end is in charge of connecting to the database server. There are different ways to do that; this is a very popular one, and that's the one we'll see today. You could also go serverless and connect directly to a database if you were using some sort of cloud data platform, but we're not going to go into that today. We'll really look at the case where you have a back end server. It has a lot of benefits, not ones that we will exploit a lot today, but among other things the security is a little bit better: you can really tweak your queries, and you can abstract away the database so users can't see anything that's going on in the back end. There are a few benefits here. But when we talk about the MERN stack, really we're looking at three different technologies.
For our database, we're looking at MongoDB, so that's the M. Express, that's the E, an Express server. It's also the N, Express running on Node.js. Express is a framework to build back ends or HTTP servers running in Node.js; a very popular one, probably the most popular nowadays. And finally, our front end will be running React. So we'll build a React application, a Node.js back end running Express, and we'll use a MongoDB database. I'll try to keep this in three separate parts: building the MERN stack, containerizing everything, and then deploying on Kubernetes. So the first thing for our MERN stack, we'll start by deploying a MongoDB. I'll use a container here; I'll cheat a little bit and won't go into the details of it, I'll come back to that in the container part. Then we'll build a Node.js application, and finally we'll build a React front end. All of that, ideally, in less than an hour, so gotta stretch those fingers. I was trying to think of the simplest application I could build, to really keep it as simple as possible, and I decided to build a guestbook. Yeah, we've got some people that are clearly familiar with GeoCities already. And if you're not familiar with GeoCities, that was like the hosting platform in the 90s.

2. Building the Back End

Short description:

We'll have a guestbook feature with a simple form and no authentication. The application will have three tiers: a Mongo database with one collection, an Express server with two routes, and a React application with a single route. We'll start by creating the back end, installing the necessary packages and tools, and creating the initial files. We'll declare our Express server using the require syntax and set up environment variables for the port. Using environment variables allows the application to run in multiple environments. Now, let's instantiate our application.

And a lot of websites had this feature which was like, they have a guestbook. That was like one of the top features where people could just put in their name and put in a comment and they would all display on their website. So it would persist data, that was such an advanced feature, such a cool feature. But it's simple because there's a simple form, no authentication whatsoever. Just, you know, have everybody put in. And Alex, you're right. I was thinking about implementing that counter as well, but I decided to keep it very, very simple and only have that guestbook. But oh, yes, I so wanted to have that counter on that page.

All right, so what we'll have here is those three tiers. We will have our Mongo database, with one simple database that has one collection in it. The database will be called MernK8S; K8S is a numeronym used for Kubernetes, if you're not familiar with it. And it will have an entries collection. Once again, if you're not familiar with MongoDB: a database is a database, but instead of using tables it uses collections, and instead of records inside of a table, we talk about documents in a collection. Essentially, it works in a very similar way. The way to access the data is a little bit different, but we'll get to that.

Next we'll have an Express server. Our Express server will have two routes. It will have one GET route to fetch all of the data, so GET /entries; it'll just send back all of the guestbook entries. And it will have one POST route where people will be able to post a new entry. So we'll create that API. And then we'll have the React application, with just a single route that will display both the form and all of the entries, keeping it as simple as we can. This one will be running on Nginx eventually, once we deploy it into Kubernetes.

All right, so I guess we're ready. Let's get started with some actual coding. As I said, we'll go through the whole process of building a full MERN stack application. So, let me store that away for now and open up my terminal. I'll create a new folder for this event, so devops.js. There it is, okay. CD into it. All right, nothing so far, that's all good. We'll start by creating our back end, as I said, so we'll just create a folder called back. And from here, let's just open up VS Code and start coding. Actually, that's not entirely true. Just before I get started coding, one of the things that you'll want to do is run npm init. That'll take care of creating your package.json file; I'll just accept all of the defaults. So I now have this file created. It provides some information to Node.js. If you're not familiar with Node.js, look up what package.json files are, but they're used by pretty much any JavaScript project nowadays, so there shouldn't be anything very different here. We'll be using npm to install different packages.

The packages that I'll need for this application are express, which is the server framework that we'll be using. I'll also need a couple of dependencies, and I'll just add them right away: dotenv will be one, and cors will be one. And I'll need the MongoDB driver eventually. So that's all we'll be needing for now, and I'll just go ahead and install that. The other library I'll be using, which, if you're following along afterwards, you might want to install globally because it's a very useful tool, is nodemon, which tracks changes on your project and reloads your server every time. So, instead of stopping your server and restarting it every time, very useful tool. I already have it installed globally, so I'll just use it. So, nodemon. Of course, there's no file right now, so it crashes, which makes perfect sense. But now that we've got this one, we are ready to get started with our application. All right, so let's make sure that this is actually readable, and we'll go into our back folder. That's good. We'll create our first file. I just need to move that video. All right. Our first file, index.js, and immediately you should see that nodemon restarted, so you can see that it's really tracking those changes. All right. If this is not big enough, or you can't see, or if there's anything, feel free to use the chat again; I'll be more than happy to use it. I've got a question, and I'll get back to it in just a few minutes, if you don't mind. All right. So, open a new terminal. That's all good, and now we're ready to get started. Let me just look at my cheat sheet for a second. The first thing that we'll need is to declare our Express server. I'll be using the require syntax right now; some people will say I should be using imports, but let's stick to the classics for now and keep it as simple as possible. So, I've required Express.
That's all I need to start my Express server. I'll also require another library right away, and once again, we'll get back to it: dotenv. I'll require it and run the config method right away, and I'll come back to that one. The next thing I want is to declare my port. I'll be using an environment variable for the port on which I should be running this application. Using environment variables is a great way to make sure that your application can run in multiple environments. Say on my local machine I want to run it on port 5000, or I want to switch it to port 5001 because I already have something running on port 5000; environment variables make that an easy way to ensure that our application can run anywhere. Now that we've got that, let's instantiate our application. So, express.

3. Adding a GET Route and Starting the Server

Short description:

We've added a GET route for the slash health endpoint, which returns a simple object with a status of okay. The server listens on a specific port and logs a message confirming that it has started. We can now access the server and check the status at localhost:5000/health.

So, we've got an application that is available for us and now we can start adding routes to that application. So, we'll add a GET route for the slash health end point. That's an endpoint that is often used by Kubernetes services or by services running on Kubernetes. So, we'll just use that as a standard.

Every route handler has access to the actual request that came in, as well as the response that we will be sending. It's really hard to speak and try to debug, not all at once. All right, so what this route will do is send a simple object, and the object will only say status okay. That's all we'll do. We'll just assume that if the route is available, then we know that all statuses are good, everything is green, so we'll just send status okay. And we will also send a status of 200, just to acknowledge that everything is running. So, if we've got a service that wants to know if the server is running, it can check that route and verify that there's a 200 there.

Finally, we need to start our server. So, we will listen on a specific port, and there is a callback that is available to us. In here, we'll just log a message confirming that the server has actually started: server started on port. That can be useful for debugging purposes, so we know which port it started on. There you go. All right. So, it looks like I have my first Express server. But it says started on port undefined; that is because, well, I haven't defined a port yet. So, export PORT=5000. And now if I restart, you can see that it's running on port 5000. All right. So, I now have a server, and I have a specific route. Let's just open up a new window. There it is. And try to hit localhost:5000/health. And there you go: we've got status okay. So, we've got our first server running. So far, so good. We finally got a full Node.js Express server running. Well, it's a very simple one, it has a single endpoint, but we've got something to build on.
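Put together, the index.js built so far looks roughly like this. This is a sketch based on the steps above, assuming express and dotenv were installed with `npm install express dotenv cors mongodb` as described earlier:

```javascript
// index.js — minimal Express server with a health endpoint
const express = require('express');
require('dotenv').config(); // injects the variables from .env into process.env

const port = process.env.PORT; // e.g. PORT=5000

const app = express();

// Health endpoint: services (and later Kubernetes) can poll this route
// and treat a 200 response as "the server is up".
app.get('/health', (req, res) => {
  res.status(200).send({ status: 'ok' });
});

app.listen(port, () => {
  console.log(`Server started on port ${port}`);
});
```

With PORT set to 5000, hitting `localhost:5000/health` in a browser or with curl should answer with `{"status":"ok"}`.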

4. Setting Up MongoDB and Connecting to the Database

Short description:

Start a MongoDB container using Docker to easily install and configure MongoDB. Verify the installation by connecting to the MongoDB shell. Store environment variables in a .env file to secure sensitive information. Add routes to interact with the MongoDB database. Use the MongoDB native driver for flexibility and access to data. Fetch the port and connection string from the .env file. Use express.json middleware to read JSON messages in POST requests. Connect to the MongoDB database using the async/await syntax and the MongoDB driver.

Now that I have my server up and running, I'll just go ahead and start a Docker instance. Well, not a Docker instance, but a MongoDB instance running inside of a Docker container. Installing MongoDB can be complex, and maybe you don't want all of it. Using Docker is very convenient for that. It's ephemeral, so once I stop that container, it'll just kill everything with it and remove any data that was inserted in my database. It can be very useful for testing purposes in your local development environment.

So to start a MongoDB container, use the docker run command. We'll give it a name; actually, I should make sure that I don't have one running already. Docker run. We'll give it a name, we'll call it mongodb. Dash dash rm makes sure it cleans everything up once it's done. Dash d runs it in the background. We will map some ports; that tells Docker that any incoming request to that port should be mapped inside of that container. It also uses environment variables, so we can specify, with MONGO_INITDB_ROOT_USERNAME, what the root username is; we'll say the username for root will be user. Very simple, not very secure. And then we've got MONGO_INITDB_ROOT_PASSWORD, and we'll use pass. Once again, not the most secure username and password, but hey, that'll work. Finally, we tell it what image we want to run. So we'll be running the mongo image, which is maintained by the Docker community. There it is, I've got a confirmation: my Mongo instance has started. So if I do docker ps, I can see that I now have, well, eventually, there it is. We can see that I've got a server running on my machine. It's that easy to install MongoDB when you're using containers; you don't have to worry about installing anything or configuring anything, it just takes care of everything for you. Of course, I already downloaded the image, I had it in my cache, so it was a little bit faster; the first time you run it, it might take a little longer while it pulls the image. But apart from that, it's very, very easy and very quick to start. If you want to verify that it's installed, I've got the Mongo shell here, so I can connect to mongodb://user:pass@localhost:27017, and that should open up a shell. Look at the databases; of course, there's nothing in here right now. But in theory, I should be able to interact with our database right now.
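For reference, the full command described above looks roughly like this. This is a sketch reconstructed from the transcript; `mongosh` is assumed as the shell binary (older installs use `mongo`):

```shell
# Run an ephemeral MongoDB container in the background:
#   --rm  removes the container (and its data) when it stops
#   -d    detaches it so it runs in the background
#   -p    maps host port 27017 to the same port inside the container
docker run --name mongodb --rm -d -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=user \
  -e MONGO_INITDB_ROOT_PASSWORD=pass \
  mongo

# Confirm it's running, then connect with the MongoDB shell:
docker ps
mongosh "mongodb://user:pass@localhost:27017"
```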

The next thing that I will do here is go back to my code and add a new file called .env. That file will be used to store the different environment variables. So I could store my port number, and I will also store the connection string that I have for my MongoDB instance, the one that I've just used: mongodb://user:pass@localhost:27017, so all defaults. And now that those two are in my .env, once this command runs, it will use the dotenv library, and running the config method on it will take those environment variables and inject them into process.env. So now process.env has access to those two variables. That's very convenient because now you can add this to a .gitignore file, just add your .env file, and you don't have to worry about accidentally sharing your connection strings or your passwords and all that stuff. I don't know how many times I've actually sent a connection string to GitHub; it's like one of the errors that I make all the time. So now that we've got those, let's actually add some routes. We'll want to add some routes to be able to add and retrieve content from that database to create our full Node.js server. So I'll just come back here. I will eventually need the cors library, just so that I can handle cross-origin resource sharing. Don't worry too much about it; we'll just do app.use and use that middleware. That's kind of boilerplate stuff: if you want to make sure that you're able to send requests from another domain, you'll need to add that. Next up, we'll require MongoDB. That pulls in the MongoDB native driver. I had a question in the chat about why use the MongoDB driver versus Mongoose. Mongoose is another library that works with MongoDB. I prefer to use the driver just because it's native; it uses all the syntax that I'm used to working with.
At this point, I don't need to enforce a schema in my application, and Mongoose is more of an ODM, like an ORM for MongoDB. However, in this case, that is not exactly what I need. What I need is really access to the data. I want to keep it flexible, and if I needed to enforce schemas, I could do that on the database as well. So that would be my approach in this specific case. But, you know, if you're already familiar with Mongoose, feel free to use that one as well. I'm just more familiar with the syntax of the driver, and I find it a little bit more intuitive to use. All right, so our port is defined; it will now fetch that from the .env file. We can also add our connection string: process.env.CONNECTION_STRING. So now we have access to that. We will also eventually need another middleware, express.json. This will enable us to read messages that come in JSON format in our POST requests, so that we'll be able to add new entries to our database. We'll also define just a general variable here, dbConnected, set to false since we're not connected to the database yet. We'll keep that as a status that we can add to our health endpoint eventually.
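So the .env file ends up holding two values. As a quick aside, you can sanity-check a connection string with Node's built-in URL class, which splits out the credentials, host, and port without ever touching a database. The variable name CONNECTION_STRING is an assumption here; check the shared repo for the exact name used:

```javascript
// Contents of .env (kept out of git via .gitignore):
//   PORT=5000
//   CONNECTION_STRING=mongodb://user:pass@localhost:27017

// Parsing the connection string with the WHATWG URL class built into Node:
const connectionString = 'mongodb://user:pass@localhost:27017';
const parsed = new URL(connectionString);

console.log(parsed.protocol); // mongodb:
console.log(parsed.username); // user
console.log(parsed.hostname); // localhost
console.log(parsed.port);     // 27017
```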

Now I'll go ahead and create the code to actually connect to my database; that'll be getMongoDB. I'm defining that as its own function so that I can use the async/await syntax here. So, there it is, and now I can declare my MongoClient constant, which will require MongoDB.MongoClient; actually, I'm realizing that I already required MongoDB, so I can just use it here. Yep, that's the one. So, defining a MongoClient, and now I will define the client that I'll be using for my application. It's an await so that we make it synchronous: MongoClient.connect, with our connection string. We will pass a few options, so useNewUrlParser: true. Again, those are pretty much just boilerplate, standard options that you would use. And useUnifiedTopology: true as well. So, don't worry too much about those options for now.
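As a sketch, the connection helper described here would look something like this. Happy path only, as in the workshop; it assumes the mongodb driver from npm, and the CONNECTION_STRING variable name is an assumption:

```javascript
const MongoDB = require('mongodb');
require('dotenv').config();

const connectionString = process.env.CONNECTION_STRING;
let dbConnected = false; // flipped to true once the connection succeeds

async function getMongoDB() {
  // connect() resolves to a client that manages a connection pool for us
  const client = await MongoDB.MongoClient.connect(connectionString, {
    useNewUrlParser: true,     // boilerplate options for this driver version
    useUnifiedTopology: true,
  });
  dbConnected = true;
  return client.db('MernK8S'); // the database used throughout the workshop
}
```

The real repo wraps this in a try/catch, as mentioned later; here only the successful path is shown.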

5. Connecting to the Database and Creating Routes

Short description:

We've successfully connected to the database and created a connection pool. The syntax for interacting with the MongoDB driver is relatively simple. We can retrieve all entries from the database using a GET request and insert new entries using a POST request. It is safe to use MongoClient to avoid SQL attacks on inserts, but it's important to sanitize the requests. With our fully working back-end server, we can now move on to creating the front-end using Create React App.

So, don't worry too much about those options for now. There you go. Okay, so now that I have that, I will get a database: await client.db, and our database, we'll call it MernK8S. There it is, and we will, not const, but we will change the status of dbConnected: if everything went well, this one will be equal to true. Now, in theory, and if you look at the GitHub repo I'll share later on, you'll see that I have a try/catch around that, and only if it is successful do I change the content of the variable, but we'll keep it very, very simple and only focus on the happy path for now. And now I return my database. This one will create a connection pool: if there's a lot of traffic on your website, it will start spinning up new connections, and otherwise it will use a single connection to the database; it really tries to optimize. There are a lot of things that the MongoClient does for you. And back to the question about Mongoose: it actually uses the MongoDB driver behind the scenes as well. All right, now it's time to actually call this function. So I'll create a global variable called db. We'll just call our getMongoDB, which returns a db object, and I'll assign it to db. So I'm just taking that database and assigning it to that global variable here. Okay, so now we can change our health route to make sure that we return dbConnected. Let's just make sure that we set dbStatus: dbConnected. So we'll send the status of whether the database is connected or not; that will be helpful for debugging, for example. And then we'll get started with some actual routes for our application. The first one will once again be a GET route. It will have a request and a response object; all the routes in Express use that same syntax, so it's pretty much always similar to that one.
What we'll do in here is get all the entries from — oh, and I'll make sure that this one is actually an async route as well. So I'll await db.collection('entries') — remember, all of my entries should be in the entries collection — and I will find, passing an empty filter, so it just finds all the records, and convert that to an array. That gives me all of my entries, so I can just use res.status(200).send(entries), and we're sending back all the entries. So really, you can see that the syntax here is relatively simple; at least I find it very intuitive when you're using the MongoDB driver. Just specify the collection, find all of your records, convert that into an array, and send it back to the client that made the request. Of course, if we want to test this out, we should be able to at this point, so I can go here and hit /entries, but that will return an empty array. Notice that it's actually nice: even if we don't have the database and collection yet, MongoDB will create them for us upon the first write, and it will just return an empty array while they don't exist. So that's what we got, but now we also want to make sure we have a route so we are able to add new entries into our database. To do that, we'll use a POST request, POST to /entry. It will also be an async function with request and response, and there it is. What we'll do here is create an entry and, once again, happy path only. I'll just use the body of the request that was sent directly: take the body, assume that it's already in JSON format, and insert it directly into our database. So await db.collection('entries'), and we'll do an insertOne — we only want to insert one entry in our database — insertOne(request.body); not request.body, let's use that entry variable, of course. There you go. And then we'll send back the response.
Somebody is unmuted right now; if you have any questions, feel free to unmute yourself, but in the meantime, while you're typing, please keep it muted. So what we'll send back is the actual result from MongoDB, and we'll return a status of 201, which means object created. That should help us there. Right, we've got a question: is it safe to use MongoClient to avoid SQL attacks on inserts? It is safe to use MongoClient. Of course, there are no SQL injections in this case because, well, we're not using SQL. But there are little things that you might want to be aware of: you might want to sanitize this, and don't use the request body directly; that's just good practice. Make sure that you only add the fields that you actually need, so that someone can't try to sneak extra stuff in there. But apart from that, it should be perfectly fine. All right, so I think we've got our full server running right now. If we go back here, we should still see that the server is started; it crashed at some point, probably while I was writing. So let's try to add a new entry to our database. Because we don't have a UI right now, and my browser only does GET requests, I'll actually use curl here. So, we'll do a curl, we'll send a payload, and it has to be JSON, so I need those quotes around it. That'll be a message from anonymous, and the message will say, cool site, dude. Yeah, back in the 90s. Okay. Then we need to add some headers, so we'll send a Content-Type: application/json header; I think that's it for my headers. Then I'll make sure this is a POST request, and finally, I will add my route, localhost:5000/entry. All right, looks like it worked: I've got an inserted ID back, so it sent me the result.
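The two routes just typed out can be sketched like this. It assumes the `app`, `db`, and `express.json` middleware set up earlier in the section:

```javascript
// GET /entries: return every document in the entries collection.
app.get('/entries', async (req, res) => {
  // An empty filter {} matches all documents; toArray() drains the cursor.
  const entries = await db.collection('entries').find({}).toArray();
  res.status(200).send(entries);
});

// POST /entry: insert the JSON body as a new guestbook entry.
// Happy path only; in real code, sanitize and whitelist the fields.
app.post('/entry', async (req, res) => {
  const entry = req.body;
  const result = await db.collection('entries').insertOne(entry);
  res.status(201).send(result); // 201: object created
});
```

And the curl call used to exercise the POST route: `curl -X POST -H "Content-Type: application/json" -d '{"name": "anonymous", "message": "cool site, dude"}' localhost:5000/entry`.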
So, if I open up my browser again, you can see that my entry was saved in the database. So, that's it: I've got a fully working back-end server. That's pretty much the most basic Node.js server you could ever have. Again, be careful: make sure that you sanitize things, and make sure that you use try/catch where things could potentially go wrong. But apart from that, if you're looking for the most simple web server you can have, that's pretty much it. Now that you've got your server, you could run it with Postman and try to explore and play around with it. So, that's it, we've got a full Node.js back end; let's go ahead now and create the front end to be able to access this database. Let me just change to my folder here. I had my back end, so I'll just use npx here to create a React app; that will run Create React App. I'll give it a folder, so front. This is actually running Create React App, which is an npm module that can be used to create your React applications; it'll generate all the boilerplate that you need. As I said, most modern projects use Create React App nowadays to bootstrap their project. It just makes everything easier: you've got everything ready, you've got your webpack configured. If you had to do that by hand, it would be a lot of trouble, a lot of boilerplate that you'd need to do. So using it really, really helps. We'll use it here.
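The scaffolding commands, for anyone following along later (a sketch; folder names as used in the workshop):

```shell
# From the devops.js folder, next to back/:
npx create-react-app front   # generates the React boilerplate
cd front
rm -rf .git                  # drop the nested git repo it creates (see below)
npm start                    # dev server with file watching and auto reload
```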

6. Building the Front End

Short description:

We've removed the existing code and started writing our own for the application. We've set up the front-end, removed the .git folder, and started the project using npm start. We've created a guestbook page with a form for adding entries and started writing code for it. We've also added a reusable component for guestbook entries. The page currently has two fields and two buttons, but they don't have any functionality yet. We'll also need to implement listing all the guestbook entries and handle the page refresh.

But then we'll just remove all of the code and we'll start writing our own code for everything that has to do with the actual application. Alright, so now we've got our front-end, so let's actually change into that folder. As part of the boilerplate, it actually has a .git folder already. So we'll go ahead and just remove it so we don't have a git inside of our git. There we go — that removes the git repository altogether. And then we're almost ready — well, we're ready. We can just use npm start and that will actually start our project. It will start those file watchers, it will also start an auto compiler, and it will open up a browser window that we can use to see the progress of our application as we're working on it. To get started, we'll go back to the front folder.

Now there is one little change that you'll need to make eventually for that Nginx server to work, and I'll just add it right now because I always forget about it and it always comes back to haunt me at a later point. What you need to do is just add a "homepage" property to the package.json. With it, the app knows it can find all its resources relative to the parent folder, which will make your life easier, especially if you go down with multiple routes. But don't worry too much about that for now. So we've got an application, it's now up and running. If we go to src/App.js, we should see that page. We can see that we've got that logo and that text that was right there, so we can see it on the page right now. So let's actually go ahead and just remove everything. We'll create our own page. We want a guestbook — so, not exactly the same. You can see that it reloads automatically, so there's nothing left. I'll just remove that logo as well. And I'll start writing some code. So if you're not familiar with React, basically it uses the JSX syntax, which lets you kind of write some HTML as part of your JavaScript. It gets transpiled later on into actual JavaScript code. But it makes for a visual way of writing components. The first time I saw JSX, I was not happy with it — I was like, no, that's not going to work for me. But once you get familiar with it, it's actually a very useful, nice way to write code. So we'll have our application. It will have an h1 and an h2: welcome to my guestbook. And then, as I said, we'll have a single form at the top of the page where we'll be able to add entries, which will be listed a little bit lower, right under it. So I'll create a first div. I'll have one label. The label is for the field with the ID name. Note that in React, you can't use for because that's a JavaScript keyword, right? So that's why it's htmlFor in JSX. Just a quick side note — but let's add a label.
So that will be for the name field, and we'll have an input, type text. There it is. And it will have the ID name. There we go. And then we'll need another one for the message. Actually, let's not use an input text — we'll use a textarea here, ID equals message. And whoops, we need to change this one as well. There we go, I think we've got it. Of course, if you see me make a typo or write a bug as I'm typing, feel free to let me know. Live coding can be hard sometimes. Then we'll need a button to be able to submit that form. Once again, we'll go with the classic way: type equals button, and it will have an ID, submit button. We'll just call it Submit, because that's what we had in the 90s. And then there's this thing where everybody would always add a clear button. I don't know why we did that, but we want a 90s experience, so we'll add a clear button just to clear the form. I think that's it. So now we should have a page with a form. Nothing fancy here — as I said, it's just a page with two fields and two buttons that do, well, absolutely nothing for now, because we didn't add any code. One other thing that we'll want to do is to start listing all the guestbook entries afterwards. So we'll use a reusable component here. We'll create a components folder, and in here we will add a new file, GuestbookEntry.js, and we'll use that for guestbook entries. The reason for not using... somebody asked me, what's the reason for using type button instead of type submit? It's just because I didn't want to... when you hit enter, it automatically submits the form, and I wanted to use onClick. There's no real reason — mainly just that I prefer to use a type button. Submit will really submit the form and reload the page when you use it the right way.
So I kind of prefer to use it that way. I prefer to use a type button. So just to remove that default type submit. Other questions. So we want to refresh the page... Oh, there we go. Peter, that's the case. And then we need to handle...
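Put together from the narration, the form built in this section comes out to roughly this JSX (a sketch — IDs and class names are approximations, and it needs the Create React App build step to run):

```jsx
// Sketch of the guestbook form described above (structure approximated).
function App() {
  return (
    <div className="App">
      <h1>Welcome to my guestbook</h1>
      <div>
        <label htmlFor="name">Name</label>
        <input type="text" id="name" />
      </div>
      <div>
        <label htmlFor="message">Message</label>
        <textarea id="message" />
      </div>
      {/* type="button" so hitting Enter doesn't trigger a form submit/reload */}
      <button type="button" id="submit-button">Submit</button>
      <button type="button" id="clear-button">Clear</button>
    </div>
  );
}

export default App;
```

Note the JSX-specific attribute names — `htmlFor` instead of `for`, `className` instead of `class` — since both are reserved words in JavaScript.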

7. Creating a Reusable Component in React

Short description:

To create a reusable component in React, we can use JSX to define our own tags with their own properties. By using div className, we can style the component. We can add a horizontal ruler with the class divider and a div with the className GbGuestbookName. Inside this div, we can use a span with the className label to display the name property. Similarly, we can create another div with the className GbMessage to display the message property. This allows us to use the component as an HTML tag and pass in the name and message properties to display them.

onSubmit, and preventing the default behaviour. So yes, one way would be to use onSubmit and then prevent the default so it doesn't reload the page. It's just more code; using type button makes it easier. I'm not fully sure that it's good for accessibility and screen readers, though — so something to keep in mind.

All right. So we're ready to create that component. We'll do an export default function, and that will be GuestbookEntry, and it will take some properties. What this will do, basically, is that in our JSX we'll be able to write a guestbook entry tag with props — propOne equals "hello", and so on — and we can add more. So really, we'll be able to use that item as an HTML tag, basically. That's where JSX becomes really, really useful: you can basically create your own tags with their own properties, and you can reuse those in code.

All right. So we will start by creating this. We will return — whoops, breaking everything — we'll return a JSX block. All right. And we'll use div className; I'll need a few classes just to make sure I can eventually style this a little bit. Note that, once again, class is a reserved keyword in JavaScript, so you can't use class — it has to be className here. So if you copy and paste code blocks, from Bootstrap or something, be careful, because that is an error that always comes up. We'll add just a horizontal ruler and give it the class divider. You also always need to close your tags for JSX to work, so that's why I used those self-closing tags here. And then we'll have another div, className equals GbGuestbookName. It will have a span with a className — I want to make sure that I have the right one, but I think it's label. It'll say name. And then it will take the name property and just display it on the screen, rendering it right here. And then we will have another one which will be very similar, so we can just copy it, but this one will be GbMessage. We'll also have a label, message, and props.message. Alright. So now we'll be able to use a tag, pass it a name and a message, and it will just display those. So that's how you create a reusable component in React.
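Reconstructed from the narration, the reusable component looks roughly like this (class names are approximations; treat it as a sketch rather than the exact workshop code):

```jsx
// Sketch of the reusable guestbook entry component described above.
export default function GuestbookEntry(props) {
  return (
    <div className="gb-entry">
      <hr className="divider" />
      <div className="GbGuestbookName">
        <span className="label">Name:</span> {props.name}
      </div>
      <div className="GbMessage">
        <span className="label">Message:</span> {props.message}
      </div>
    </div>
  );
}

// Usage elsewhere in the app:
//   <GuestbookEntry name="Joel" message="Hello" />
```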

8. Adding Entries to the State and Fetching Data

Short description:

To use the entries in our application, we import the useState hook from React and define the variables entries and setEntries. We use the useState hook to set the initial state as an empty array. We import the guestbook.entry component and loop through the entries, returning a guestbook.entry component for each entry. We can add a fake entry to test the functionality. The application is fully working, but not connected to a back-end. We add styling to give the guestbook a Geocities look and feel. We define a base URL function and remove the fake data. We define a function to fetch entries from the server and set the state with the fetched entries. We use the useEffect hook to fetch the entries once the component is mounted.

Alright, so let's go back to our application, and let's just add those entries in here. To use those, we will need to add the entries into the state of our application. To do that, I'll need to import a few things — import useState from React. useState is a React hook, a very useful way to work with React applications. So if you haven't made the switch to hooks yet, you should definitely look that one up.

So what I'll do here is that I'll define my two variables, entries and setEntries. So entries will be the state of my application and setEntries will be used to set the state of my application. So I'll useState and we'll give it an empty array for now. Eventually though, it would need to, you know, use the data from the MongoDB server. UseState. Thank you. Somebody just spotted that. Thanks.

All right, and now that we've got that, we will also import our new component. So import GuestbookEntry from components/GuestbookEntry, and we will loop through all of our entries. For each one — I will just use e — we will return a GuestbookEntry, name equals... Actually, we can just spread the whole object. There we go. We'll keep it very, very simple. Expecting... oh, yes, I need to return this. There we go. Alright, so this one will be exactly the same as if I had done name equals e.name, and so on for each one of the properties inside of our object. So now if I look at it, well, there's nothing, because there's nothing inside of my entries — so we could add a fake one. Just the name: name, Joel. Message, hello. And if we look at it, we should have one entry in here.
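The spread shorthand used here can be illustrated with plain objects — a minimal sketch showing that `{...e}` is equivalent to writing each prop out by hand:

```javascript
// In JSX, <GuestbookEntry {...e} /> expands every enumerable property of `e`
// into a prop, exactly as if each one had been written out individually.
const e = { name: 'Joel', message: 'Hello' };

const explicitProps = { name: e.name, message: e.message }; // written by hand
const spreadProps = { ...e };                               // spread shorthand

console.log(JSON.stringify(spreadProps)); // {"name":"Joel","message":"Hello"}
```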

All right, so we've got our application. It's actually fully working. So, we've got our full guest book, but it's just not connected to a back-end. Actually, there's one thing missing right now, which I think could really benefit. I'll just need to... I'll need one second here to... ["Tut, tut, tut." in background.] Sorry. There it is. I'll just need to open up my original project because I should have had it, but... Oh, it's actually open right here. So, let's remove this one. ["Tut, tut, tut." in background.] I have it right here, but it's hidden by the chat window. There it is. All right. Let's just do one little thing that we'll just go back now and add some styling. I won't go into the details of styling, but because we wanted a... a Geocities guestbook, I tried to give it that same look and feel as we had back in the days. So, welcome to my guestbook. And I have my form now working, and I can submit and clear. So, I just need to actually add some code to make sure that everything works. But it looks good, doesn't it? So... All right. So, I won't go through the CSS of that. It really doesn't bring any value into this workshop, I'm sure. So, let's go back to our app.js file and actually connect to our server now. We'll need another hook here which will be UseEffect, which will... which will define a callback that will be executed well, upon once the component is loaded and when we want it to load actually. In my app, I'll just add a small function here just to add my base URL. That'll make it easier for us in the future. So, you know, you never want to hard code the base path of your URL because that'll change from production and from environment. So, eventually we'll want to put that inside of an environment variable. We'll get back to that later on. But for now, let's just isolate that so that we can actually use that environment variable. We'll also remove that fake data, so we'll actually connect and start using some real data in just a few minutes. 
And then we'll need to define a function to fetch the entries from our server. That will be an async function. Yep, that looks good. Let entriesFromDb equal — we'll do an await. We'll use the Fetch API, with baseUrl. And remember, what was the URL for it? It was /entries, which returns all the existing entries from our database. Then we take our response and convert it into a JSON object. Again, make sure that you do add a catch block — try to catch those errors before they happen — but we're just looking at happy paths for now. All right, and now that I've got my entries from the database, I can setEntries to use entriesFromDb. So I'll just directly put those in. That will set those into my state, it will change the value of the entries here, and this part will automatically be re-rendered once it fetches those entries. So far so good. We now have a function to fetch those. And now we'll need to use our useEffect hook. This one will be triggered once the page — the component — is mounted. We'll just fetch entries. And you need to specify when to run this useEffect hook. You could — come on, I'll get it. There it is. You could specify different things from your state: whenever the entries change, please run that hook again. But in this case, we'll just keep it blank, which says run it just once. And once it's fetched the entries, it'll...
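The fetch-and-set-state flow just described can be sketched as a standalone async function. To keep the sketch runnable without the Express server, the fetch implementation is injected as a parameter — an assumption added here, not part of the workshop code:

```javascript
// Sketch of the fetch-entries flow. `fetchImpl` and `setEntries` are injected
// (an assumption so the sketch runs without a live server); in the React app
// they would be the global fetch and the setter returned by useState.
async function fetchEntries(baseUrl, setEntries, fetchImpl) {
  const response = await fetchImpl(`${baseUrl}/entries`);
  const entriesFromDb = await response.json();
  setEntries(entriesFromDb); // triggers a re-render with the new entries
  return entriesFromDb;
}

// Stub standing in for the Express server:
const fakeFetch = async () => ({
  json: async () => [{ name: 'Anonymous', message: 'Cool site, dude' }],
});

fetchEntries('http://localhost:5000', (entries) => {
  console.log(entries.length); // 1
}, fakeFetch);
```

In the component, `useEffect(() => { fetchEntries(); }, [])` with an empty dependency array runs this exactly once, after the first render.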


Connecting to the Database and Q&A

Short description:

We now have the entry from Anonymous, confirming that it is connected to the database. There are a few questions coming in about key values, caching on endpoints, and Marque. While we won't cover caching in this workshop, it can be added to different endpoints.

So we can see that we now have the entry from Anonymous, the one that we did from the curl request earlier on. So we can see that it is actually connected to the database. Pretty cool, isn't it?

All right. So let me just start with — there are a few questions coming in. "I guess we need to have the key value on the .map" — yeah, definitely, each item returned from the .map should get a key prop. "We need marquee" — yeah, definitely. Are we going to talk about caching on the response of endpoints? No, we're not going to talk about caching on the endpoints at this point, just because there's a lot of content to cover. But you're right, you could add caching as well on the different endpoints that you have in here.

Building the Full MERN Stack Application

Short description:

We added state variables for the form, wrote code for the submit button, and connected the form. We can now add data to the guestbook. There was an issue with the message not being sent, but it was resolved by copying code from another application. The application now works, allowing users to submit and save messages. The workshop concludes with the completion of the full MERN stack application, including a Node.js backend with defined routes and an Express server, as well as a React front end using JSX syntax.

All right, so we'll need different variables for our state as well, for our form, so let's just add a couple of other state variables. So we'll use the same type of syntax, right? So use state, and this one will be an empty string for now. We can do another one for the messages as well if you want. So set message... use state empty variable to get started. So those will be attached to the name and message fields here.

We will also write some code for our submit button, so just do cons.handle.submit. It'll be a nasync function, so I might as well put it right here. And this one will actually do that fetch to the post route that we've created earlier. So fetch from base URL. What was it? Slash, entry. Slash entry. There it is. Because it's a post, we need to add a couple of more options here. I can't type anymore. It's going to be nice in an hour or two. So this will use a post method, and we will also specify their headers. So basically we're just doing exactly the same thing as we did with the curl request earlier, so content type, content type, application slash JSON. There it is. Wow! Application JSON, but there you go. And finally we'll need to actually pass in the body of the request, which will be a JSON.stringify version of this object name message. Right, there it is. And then we'll have our STEN. We'll parse the response into a JSON object and we'll have access to that afterwards. And yeah, we won't do anything with the result for now. We'll just do a console log maybe so we can see it in our console. You should probably verify if the operation was successful and so on, but let's just not do it for now. We'll then clear the form and we will fetch the entries again. You should probably just add it to the form instead of fetching the entries all over again, especially if you've got a large dataset, but keeping on a happy path. That will go and use the fetch entries right here. Let's just go ahead and create that clear form clear form function and we'll just set name and set message to empty variables, so that should take care of it. Finally, we need to connect our form, So we'll just say, where is it? Input field, we'll say value equals name and the other one will be value equals message and space here, remove that one here. There we go, all right. So now this one is connected. Here's an interesting thing. 
Now you're not able to type anymore because it will actually always refresh to the value of a name and message. So we've got to make sure that when there's a change here, it actually updates those name and message variables. So, and in order to do that, we'll say on change, E is for the event set name, set name, not message, name E dot target value. So it'll just take the new value of that field and update our name here. We'll do the same thing for our message, so on change, E set message, E dot target dot message. Alright, if we try this out now, now I can actually type, and now in theory, if I, but I, let's put in a real message, so Joel, hello. Now, in theory, if I click on submit, now in theory, if I click on submit, I should be able to send a message to my server. That seems like it, they didn't like it, so let's check out what happened here. Submit, oh, well I know what happened there. Of course, we need to actually say on click, handle, submit, and in this one, we'll say on click. And what was it, clear form. All right, so this time it should work, so let's try this. Submit, and there you go. So I can now add data to my application, to my guestbook. Something happened, I'm not sure why. The message wasn't picked up. So let's just take a quick look. Why message, message? Let's take a look at the slash entry, what was sent, and it only sent a name, that's odd. Message, message. I'll try to see if I can debug it in a second, unless somebody can spot the error here. Value, message, sent message. I don't see it off the top of my head. Let's just try it again. Hello. And it doesn't seem to be sending the entries, all right. So that's fine, let's not worry about that. Yeah, no, it doesn't save the messages. What I'll do is that I'll actually cheat and just take my code from the other application that I have, my backup one. Let's copy and paste, oh, you know what? It might be in the guestbook entry? No, none at all, I said it was in patch. 
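As a hedged aside: the vanishing message here is consistent with the onChange handler reading `e.target.message` instead of `e.target.value` for the textarea — `e.target.message` is undefined, which would match a request body that only contains a name. That's a guess, since the bug wasn't tracked down on air. The request options themselves mirror the earlier curl call, and can be sketched as a small helper (`buildPostOptions` is a hypothetical name):

```javascript
// Hypothetical helper mirroring the options handleSubmit passes to fetch —
// the same method, header, and JSON body as the earlier curl request.
function buildPostOptions(name, message) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, message }),
  };
}

const options = buildPostOptions('Joel', 'Hello');
console.log(options.body); // {"name":"Joel","message":"Hello"}
```

In the component this would be used as `fetch(`${baseUrl}/entry`, buildPostOptions(name, message))`.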
All right, let's just overwrite all of this. And now, move this — const baseUrl equals localhost:5000. We need it to be http://, wow. Okay, there we go. So now we've got the application — it should be working. Let's remove that config. All right. It really doesn't want to let me do it, right? Okay. There it is. That should work. Okay, so now it's working. We've got our application — I can submit new messages and they're accepted. That's it. That brings me right on the one-hour mark with a full MERN stack application that we've built. So we can see that we now have a full backend. We've built our Node.js backend here, which uses an Express server. It has two different routes defined — entries and entry. It actually has three: it also has one to tell us the status of the database connection. That will be useful later on. We've also created our full front end. Let's just go back to the front end. We've used React, we've used the JSX syntax to create that form. And then we connect to localhost:5000.

Containerizing the Application

Short description:

We'll create containers for each tier of our MERN stack application. Each container will contain the necessary dependencies and files to run the respective tier. MongoDB is already running in a container, and we'll spin up a cloud instance for persistent data. We won't use an Nginx proxy directly, and multiple instances will be deployed using Kubernetes. To create a container, we'll use a Docker file, which contains instructions for building the container image. The Docker file starts from a base image, in this case, Node.js 16.

So we connect to the backend that we've created, and it just posts new entries or fetches entries, based on whatever we want to do with the form. So we've got our full MERN stack application. As I said, I've kept it as simple as I could — this is really trying to be as simple an application as possible. And it seems like we've got it; it seems like it's working. We can see that when we add an entry, we've got the response that is sent back to us. So yeah, congratulations — you've done your first MERN application. All right. So that was the first part. Let's move back to this presentation again. I'll just take that for my little break. Let's go to the second part of this workshop and try to containerize all of those things. If you're not familiar with containerization technology — and this is the definition from the Docker website — a container is a standard unit of software that packages up code and all of its dependencies, so the application runs quickly and reliably from one computing environment to another. So really, containers are all about packaging up your application into kind of like one giant zip file that not only contains the source code of your application, but also everything needed to run that application. That means, in addition to just your source code, you would also have the Node.js runtime for your backend — basically a whole operating system with the runtimes and your source code. Everything needed to run that application is all packaged together. And then you can take that package and run it using Docker, and you can put it into something like Kubernetes to make sure it runs in the cloud. Now, the advantage of using containers is that they run exactly the same everywhere, because each one contains everything needed to run. It always, always runs the same.
They're also meant to be ephemeral. So if you shut down a container, it destroys everything, and you can just start a new one and it's its own environment again. It's a different way of thinking about working. You have to make sure that your applications are stateless — that they don't hold any state — so that you can easily tear those containers down and replace them with new ones as needed. So in our case, we have this MERN stack application with those three tiers. What we'll do here is have three containers, one for each of our tiers. We will have our database — that one is already running in a container. As I showed you right at the beginning, I started my container with MongoDB and it's running on my computer right now. It spins up a full MongoDB server and it's ready to go. You saw how easy that was to just start that server — and I don't have MongoDB installed on my machine. It just saves a lot of effort and maintenance: I don't need to keep up with the latest versions and so on, I just always run the container. We will also create a container for our backend — that container will have the Express server — and we'll create another container for the frontend. Now, each one of those containers will contain a lot of different things. For the frontend, for example, we'll have a Linux operating system, we'll have an Nginx web server, and then we'll have our three files: one HTML, one CSS and one JavaScript. So we'll build our package, store only those files onto an Nginx server, and it will be blazing fast — it will serve those files in the most efficient way possible. For the backend, we'll do the same type of thing. We'll build our own container, which will have, again, a Linux operating system, the Node.js runtimes, and also our index.js file and the node_modules folder prepared for that specific environment.
So that's very important because if you do an NPM install on MacBook, for example, it might actually install dependencies and some run times that will be different between a Linux operating system and the... the macOS operating system. So if you just copy blindly your node modules folders, you might run into issues. So you want to make sure that you run that NPM install inside that container. And for MongoDB, we've got the same type of thing. We've got, you know, the whole MongoDB running. It has its own volume right now. So it has that, you know, it writes into files into the container. But once I stop that container, all those files are gone. My database is gone forever. So we'll see later on how to persist that data. Right, so what we'll do now is that we'll create a backend container, front end container, and well MongoDB is already inside of a container. However, data is not persistent for now, but we'll use the cloud version. So I'll start spinning up a cloud instance for MongoDB. All right, so I see we've got questions. So just before I run into that coding part, multiple instances, and an Nginx proxy. We won't be using an Nginx proxy. We'll kind of use it, but not directly. And multiple instances. So if you want to run multiple containers, that's when we'll start using Kubernetes. So that's really the power of Kubernetes. So yes, we will deploy multiple instances once we get there. All right, so let's get coding again. So in order to create container images, you will need to, can I show previous slides? Can I find the slides? This one? Or was it this one? I'll show the slides afterwards. Actually, there's nothing wrong with sharing the slides right away. Oops, oops. I've sent them to only one of you, everyone. There you go. So yeah, feel free to browse the slides as I'm going through them. That'll be easier than me going back and forth between the slides. My pleasure. 
Okay, so the first thing that we'll want to do now is to get started with our first container. To create a container, we will use what is called a Dockerfile. And literally, a Dockerfile is a file called Dockerfile — so far it's easy. It's just a set of instructions that tell Docker how to create the container that we want to use. They always use kind of the same syntax: you start from a base image. You never create an image from scratch — you always start from a base image. In this case, I'll start from a Node.js image, node:16.

Creating Docker File and Cloud Database

Short description:

To create a Dockerfile for a Node.js server, start with a base image that has Node.js installed. Specify the version to ensure consistency. Change the working directory inside the container and copy the package.json file. Run npm install to download and compile dependencies. Docker builds images in layers, so ordering instructions from least- to most-frequently-changed optimizes building and pushing images. Copy your source code into the container and use CMD to specify the command to run. You can push the image to a registry using docker push. For the database, use a cloud instance like the MongoDB cloud platform. Create a new project and database, specifying the region, and add a username and password.

So this is an existing image that is maintained by the community; it has Node.js and all the runtimes installed, and I want it to use version 16. You could leave out the version — it will always take the latest. But if you want to make sure that you enforce something, and you want it to run exactly the same everywhere, be as precise as possible.

So if you want to run 16.4.1, go into as much details as you want. I'll just use Node 16 for now just to make sure it runs the same. I know it'll just pull the latest Node.js image from version 16. I'll just hope that it doesn't break anything.

So I'll change the working directory — that's the working directory inside of my container. Docker will create that container using the node:16 image, it will cd into /opt/app, and then it'll perform operations from there. What I want to do is copy my package.json file — the one from my machine — into that /opt/app folder of my container. So it'll just take that package.json. And from there, I want it to run that npm install command. Remember when I said you want to make sure that you run the npm install inside the container, so that it actually downloads and compiles everything for that specific environment? That's what we're doing here — we take that package.json, and then we run npm install. When you're building your images, Docker creates layers. Those layers are what get uploaded afterwards. If you want to be optimal, that's the way to do it: make sure that the things that change the least often are at the top. Docker caches those layers, and each one is based on the previous layer. So if a layer doesn't change, the next one is based on the same cached layer and gets reused — that npm install step will reuse its cached layer too. It makes it a lot faster to build your images and to push them to your registry afterwards. So, yeah, it's just good practice to copy only your package.json first, run that npm install, and then start copying your source code into the container. I only have one single file, so I'll just copy that file and use CMD to tell it what command to run once the container is started. It'll just use node, which will start my index.js server. So that's it. Node.js images are typically very simple — this is a typical Dockerfile for a Node.js server.
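Pieced together from the steps just described, the backend Dockerfile comes out to roughly this (a sketch — the paths and base image follow the narration):

```dockerfile
# Base image: community-maintained Node.js 16 image
FROM node:16

# Working directory inside the container
WORKDIR /opt/app

# Copy the manifest first so the npm install layer can be cached
COPY package.json .

# Install dependencies inside the container, for this OS/architecture
RUN npm install

# Copy the application source (changes most often, so it goes last)
COPY index.js .

# Command to run when the container starts
CMD ["node", "index.js"]
```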
So now that you have a Dockerfile, you can come back to your console — find the right one. Which one is it? This one? Oh, the third one, devops. There you go. I'll go into my back folder; I should have a Dockerfile in there. And I'll use the docker build command. I'll give it a name — I'll call it joellord/devops-back — and I'll use a dot to tell it the Dockerfile is right here. Docker build will go ahead and run all the instructions that we set. It starts by downloading that node:16 image — that one was already downloaded — it changes the working directory, copies the file, runs that npm install inside of our container, copies that index.js file, and that's it: we've got our image built. The other thing that you can do here is push this to a registry. Of course, you'll need to make sure that you're logged in first, but I should be logged in to my Docker Hub account already. And because I am, I can do a docker push with the name of my image. What was it? devops-back. And that will take care of sending that image. You can see that it has all of those layers — and you can see that I did a dry run earlier today, so it reuses the same layers right here. All those are exactly the same layers, so basically when we pushed, it just told Docker Hub to reuse those cached layers, and only the final layers that changed were actually pushed. So you can see that caching mechanism at work here. That was an accident, but that's good.
For my database now, I'll move away from that container, because a container doesn't hold state. You don't want to create a container that holds your data in it, because if the container crashes and comes back, it destroys all of the new data. So to make my life easier, I'll just use a cloud instance. You can go to the MongoDB Atlas cloud platform, and what I'll do is create a brand new project — you can see I already had my dry run here, so I'll call this one DevOps 2, and that's good. I could add some team members if I needed to; I don't want to. From here, I'll go ahead and create my first database. Signing up for an account is free — no credit card needed or anything — and you can deploy free databases. So just go ahead, create a shared cluster, specify the region where you want it hosted — I'll keep all the defaults — and click create. It is secure by default, so you'll need to enter a username and password. I accidentally closed that window, so let me find the project again... DevOps 2, there it is. I was at Database Access, and the first thing you need to do there is add a new user.

Connecting Backend to Database

Short description:

We create a user with default settings and add an IP address to ensure access. The cluster is created with three instances, forming a replica set. We obtain a connection string to connect to the cluster. We start the backend container and map ports for incoming requests. The container is isolated and cannot access the host machine. We stop the container and change the connection string to connect to a cloud instance. The backend server is successfully connected to the database. We can browse the collection and view the entries. The backend is now running inside a container. Next, we will tackle the front-end container.

We'll say user/pass, and keep all the defaults again, so I now have a user created. The other thing is that the cluster doesn't accept requests from just anywhere in the world, so you want to make sure you add at least one IP address. In this case, I'll just use mine. So even though you now have my username and password, you still can't log into this cluster, because the request has to come from my current IP address. All right.

So back to our deployment — we can see that it's creating the cluster. It's really doing something very similar to what we did with our container, but it creates three instances, so you get a whole replica set, all the data is persisted, and it takes care of a lot of things for you. We'll actually want to connect to this cluster later on, and in order to do that, I'll need a connection string. Go to Connect — any of the options will do — and use "Connect your application", which gives you a code sample. You can see my connection string here, along with pretty much the code we've already written in our Node.js server: all the imports and everything needed. If you were using a different programming language, you could get the connection string for that language from here instead. Let me just copy this connection string — this is what I'll be using. But first, let's see what happens if the server tries to connect to my local machine. Let's run the backend container we just built: run it with -d so it runs in the background, --rm to make sure it's cleaned up afterwards, give it a name (mern-k8s-back), map some ports so that incoming requests to my laptop are mapped all the way into the container, and pass in the environment variables. Remember those environment variables? Now it won't use the .env file — it will use the values we pass here. For the connection string, we'll use the same one as before: mongodb://user:pass@localhost:27017.
So that was my original connection string, and then I specify the image — devops-back. That should start my server... and the ports are not available. That makes sense: I need to stop my earlier container first. Oh, that was my frontend, doesn't matter. It takes a second before it lets me do something again. Let's try to run that container again... container name already in use — well, of course. Docker stop, docker rm, and let's start this all over. All right, I started the new container. docker ps shows it's running, and docker logs mern-k8s-back shows that it started — but if I try curl localhost:5000/health, DB status is false, so it's not connecting to the database anymore. That's because the container is an entirely isolated process; it doesn't know what exists outside of it. It's looking for a MongoDB instance on localhost, which is its own process, and there is no MongoDB database running there. The database is actually running on the host beneath it, but the container doesn't have access to the host machine. That's a very important security feature — you don't want your containers to have that access, it would be very dangerous — but it also makes things a little tricky. So how do you get those containers to talk to each other? We'd do that internally inside a Kubernetes network, but we're not there yet. So what we'll do here instead is stop that container — mern-k8s-back — though I think it eventually crashed because it couldn't connect to the database and then disappeared. And then we'll change that connection string.
So I'll go back to Atlas, copy the connection string over again, and use that one... oh, it didn't include the password. docker stop mern-k8s-back — I already had it stopped. docker ps. And I need to remember what my password is... it's probably that. There it is — now it seems to be running. If I try docker logs again: server is connected. And if I curl my /health route, it is connected. So that works. You can see now that because I have that environment variable, I'm basically able to connect to different databases: the one on my local machine didn't work anymore, so I connected to my cloud instance using its connection string, and everything worked out of the box. Now that I'm connected to this instance, I can go into Atlas, browse my collection, and take a look at my database entries. You can see I have one entry already from when I was working on this yesterday. If I hit localhost:5000/entries, I can see that same entry right here — so I'm connected to that new cloud database.
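Putting that run command together (the connection string here is a placeholder for your own Atlas string, and the environment variable name is an assumption based on the backend code shown earlier):

```shell
# -d: run detached; --rm: clean up the container when it stops
# -p maps the host port to the container port
# -e passes the Atlas connection string in as an environment variable
docker run -d --rm --name mern-k8s-back \
  -p 5000:5000 \
  -e CONNECTIONSTRING="mongodb+srv://user:<password>@<your-cluster>" \
  joellord/devops-back

# Check the logs, then hit the health route
docker logs mern-k8s-back
curl localhost:5000/health
```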

Now I've got my backend, and the backend is running inside a container. So we're done with that one — let's move on to the front-end container. Front-end containers are a little more tricky, but we'll get to it. Once again, we'll create a Dockerfile here.

Using Environment Variables in the Front-End

Short description:

To use environment variables in the front-end, we need to move them to a config.json file and import them into our application. By doing this, we can isolate the variables and overwrite them when the server starts. We'll create a container and build the application, substituting the base URL with the actual value from the environment. This process is described in a blog post that covers building containers for Angular, React, and Vue.

And we'll make one slight change to our application here. First, to answer the question: yes, .env will be overwritten — .env only uses the values from the file if there isn't already a value in the environment. Now, using environment variables in the front end gets a little tricky. Think about it: you run the application inside your browser. You download the React application from the server and it is executed in the browser — so the browser doesn't have access to the environment variables from the server. What we'll need to do is remove the variable from here, move it to a file, and overwrite that file when the server starts. I'll get into the details as we progress, but the first thing I need is a file that I can overwrite, and I'll just use a config.json file. I'll move my base URL variable in there — baseUrl — and keep localhost:5000 for now so it still works on our local machine. Back in the application code, instead of using the environment variable, we'll import config from config.json, and wherever we used baseUrl we'll use config.baseUrl instead. So our application is still running locally — nothing changed as far as the application is concerned — but we've isolated those values in one file. What we'll want to do now is create a container, build the application, and make sure we change this value to a placeholder: the same name, prefixed with a dollar sign, $BASE_URL.
Because that placeholder is inside a file on our server, we'll be able to do an environment variable substitution — overwrite $BASE_URL with the actual value from the environment — and then serve the result from our Nginx server. There are a few hoops we'll need to jump through. There's a blog post that really describes this whole process; I'll share the link at the end, so you'll have access to it. It covers Angular and React as well as Vue — they're all slightly different, but it's essentially a template for building those containers.

Creating Dockerfile and Installing Dependencies

Short description:

We'll start by creating a new Dockerfile and using a two-stage container image. We'll begin with a Node.js image and install jq for later use. We'll download jq using a wget command and move it to the appropriate location. With jq installed, we'll change the working directory and copy all the necessary files, including the node modules. Finally, we'll run npm install to reinstall dependencies.

So now that we've got this, we can start actually working on our Dockerfile. I'll need a new Dockerfile in the front folder. Just like we did for the other one, we'll start from a base image, but this one will be a little tricky: remember I said I want to serve everything from an Nginx server, but I'll need to actually build my React application first. So I'll need a two-stage container image.

What I'll start by doing is starting with a Node.js image. I'll eventually need jq, so I'll put in an environment variable for that container saying to use jq version 1.6. We'll run a wget command to actually download jq. Now, there's no way I can type that full URL without making a mistake, so I'll just copy and paste it from my notes, if you don't mind: a wget to github.com/stedolan/jq, releases, download, jq 1.6, grabbing jq-linux64 — because our container is running Linux, don't forget, I need the Linux version. I copy that to /tmp/jq-linux64, and then run a command to move it from /tmp/jq-linux64 into /usr/bin/jq. So I'm just moving that file over. Now I'll have access to it; I just need to give it the right permissions on /usr/bin/jq, and then I'll be able to use jq inside that container. Once again, this is all running inside the container — you don't get access to it on your machine, but you can use it inside your container. Now that I have my base image with jq, I can change my working directory to /opt/app and copy all of my files. I'll be lazy and copy everything, even my node_modules folder — but we'll rerun the install and the build, so don't worry about it.

Configuring Nginx Server and Building Docker Image

Short description:

To change the values in the config.json file, use jq to make the process more flexible. Use the jq command to convert everything to entries, then pipe it to the map_values function to find the keys and change their values. Write the result to src/config.tmp.json, then move config.tmp.json back to overwrite the original config.json file. After running npm install and npm run build, create an Nginx server using a second-stage Docker image. Specify the folder for the JavaScript files and use a specific starting script, Change the permissions of the file and set the working directory to /usr/share/nginx/html. Copy the files from stage zero into the working directory and change the entry point to run The script extracts the environment variables, overwrites the specified files with their corresponding values, creates a temp file, and rewrites the original file. Finally, the Nginx server is started. Build the Docker image again and wait for the process to complete.

Now, what I need to do is go into that config.json file and change every value from the base URL to $baseUrl — and any other property in there would get the same structure: if you had an environment2 key, its value would change to $environment2. You could also use sed, but because I didn't want to hard-code anything, I've used jq to keep it a little more flexible. Really, I'm looking for all the keys and changing each value to a dollar sign plus that key. Makes sense?

The command I'll run here is jq: convert everything with to_entries, then pipe that to the map_values function, find the key, and give it the value of a dollar sign concatenated with that key. Close that, then pipe it into reduce over all of the array's values as $item, merging each item into the result. I think that should work. I apply that on config.json, write the output to src/config.tmp.json, and then finally move that config.tmp.json file back to overwrite config.json. Easy peasy, right? It took me a while to get that command right, and I guarantee I probably made a typo in there somewhere, but let's leave it like that for now. Basically, what it does is look through each of the keys and change its value to be a dollar sign plus the name of the key. Because the value is now $baseUrl, we'll be able to do an environment variable substitution to swap in the actual environment value. Again, the article explains exactly what happens there. After that we do the npm install — like I said, I copied over node_modules, but let's just overwrite everything — and finally npm run build. That creates our React package: it produces one JS, one HTML, and one CSS file, and we'll take those files and serve them from an Nginx server. We only keep the end result — we don't want to run Node.js like we do on the development server. We really want those static files, serve only that, and we'll be blazing fast. So that run gives me my files, but now I need to actually create my Nginx server. So I'll do a second stage.
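Spelled out, the jq line described above looks roughly like this (this follows the pattern from the blog post; the paths are this project's):

```shell
jq 'to_entries | map_values({ (.key): ("$" + .key) }) | reduce .[] as $item ({}; . + $item)' \
  ./src/config.json > ./src/config.tmp.json \
  && mv ./src/config.tmp.json ./src/config.json
```

Given `{"baseUrl": "http://localhost:5000"}` as input, it produces `{"baseUrl": "$baseUrl"}` — every value replaced by a dollar sign plus its own key.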
Using another FROM tells Docker that there's a second stage here. We'll use a version number again — nginx 1.17. I'll need to specify the folder for my JavaScript files; it's explained in the article, but it's there so we can reuse the same Dockerfile for all of our JavaScript front-end projects (you could hard-code it as well): /static/js/*.js under the serving folder. All of our JavaScript files are there, so that's where it will look to change those environment variables. Next, I'll need a specific starting script, I'll write it afterwards, but I'll add the instruction now to copy that file into the container, at /usr/bin/ Then we change the permissions on that file — and it's /usr/bin, not /user/bin; I keep saying "user" just out of habit. Then we change to our working directory, /usr/share/nginx/html, which is the default folder Nginx serves files from. We copy from stage zero — that first stage — everything inside the /opt/app/build folder into our current working directory, the shared folder. Finally, we change the entry point of the image: that tells it, once the container is started, to run — so instead of the default executable that starts the Nginx server, it runs this start script. All right, so far so good.
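Putting the two stages together, the front-end Dockerfile looks roughly like this (a sketch following the steps above; the JSFOLDER variable name and exact paths follow the blog post pattern, so treat them as assumptions):

```dockerfile
# --- Stage 0: build the React app ---
FROM node:16

# Download jq so the config placeholders can be generated
ENV JQ_VERSION=1.6
RUN wget https://github.com/stedolan/jq/releases/download/jq-${JQ_VERSION}/jq-linux64 -O /tmp/jq-linux64 \
  && cp /tmp/jq-linux64 /usr/bin/jq \
  && chmod +x /usr/bin/jq

WORKDIR /opt/app
COPY . .

# Replace every value in config.json with a $KEY placeholder
RUN jq 'to_entries | map_values({ (.key): ("$" + .key) }) | reduce .[] as $item ({}; . + $item)' ./src/config.json > ./src/config.tmp.json \
  && mv ./src/config.tmp.json ./src/config.json

RUN npm install && npm run build

# --- Stage 1: serve the static files from Nginx ---
FROM nginx:1.17

# Where the start script will look for JS files to substitute variables in
ENV JSFOLDER=/usr/share/nginx/html/static/js/*.js

COPY ./ /usr/bin/
RUN chmod +x /usr/bin/

WORKDIR /usr/share/nginx/html
COPY --from=0 /opt/app/build .

ENTRYPOINT [ "" ]
```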
Now, just to take a little shortcut here, I'll copy that start-nginx script into a new file,, and paste it in. Basically, what I do in it is extract all the environment variables using printenv, and find the name of each variable — just the first part, up to the equals sign. (I'm using sed here — there's your answer, Gopala.) Then, for each of the files matched by the JS folder environment variable, I do an environment substitution for each of the existing variables: if I have an environment variable called BASE_URL, it will overwrite $BASE_URL with the actual value from the environment. I write the result to a temp file and then rename it back to the original file name. And finally, I start my Nginx server. A little bit of voodoo magic here — don't worry too much; like I said, I'll share the article if you're really interested in the details, but for now you'll just have to trust me that it works. I think I've got everything so far, so we're ready to go into our front folder and run docker build again. We'll use the tag joellord/devops-front and build from the current folder, and that will compile the whole image. This one takes a little longer because it has to start from the Node.js image and copy over all of those layers and files. It seems like it's actually downloading one of the images — remember when I said I'd use the latest version of Node 16? I probably had an older version cached on my machine, so it's fetching the new image right now. As it goes through the stages, it downloads the jq file, copies it over, and changes the working directory.
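The script described here looks roughly like this (this is the pattern from the referenced article; the JSFOLDER variable is an assumption carried over from the Dockerfile sketch):

```shell
#!/usr/bin/env bash
# Build a list of variable names from the environment, e.g. '$BASE_URL,$HOME,...'
# printenv lines look like NAME=value; awk keeps the name, sed prefixes a '$'
export EXISTING_VARS=$(printenv | awk -F= '{print $1}' | sed 's/^/\$/g' | paste -sd,)

# For each bundled JS file, substitute the known variables,
# writing to a temp file and then renaming it over the original
for file in $JSFOLDER; do
  envsubst "$EXISTING_VARS" < "$file" > "$file.tmp"
  mv "$file.tmp" "$file"
done

# Finally, start Nginx in the foreground so the container stays alive
nginx -g 'daemon off;'
```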
It's now copying all of my files into the container, running that jq command, then npm install and npm run build. This will take a couple of seconds; you can see the progress as it goes. And I'm doing pretty well on time — I'm only about five minutes late. While we wait for this one to run, are there any questions about running JavaScript applications inside containers? ...Guess I didn't leave a lot of time — it finished right away, a bit faster than I expected. All right, it seems like we're good.

Running Containers and Deploying to Kubernetes

Short description:

We'll run the container and map ports to redirect incoming requests. The application is connected to the backend and the cloud database. Using a cloud database allows easy sharing and access for team members. The free tier provides ample storage. We'll now deploy the containers to a Kubernetes instance. Minikube will be used for this workshop. Kubernetes automates deployments, scaling, and management of containerized applications. It manages containers, ensures communication, handles scaling, and automates network management. Pods are created for backend and frontend, with containers directly related to each other. Pods enable independent scaling of components.

So what I'll do now is actually run that container: docker run -d --rm --name mern-k8s-front. I'll map some ports: this container runs on port 80, but I can't have anything running on port 80 on my machine, so I'll map incoming requests to port 8080 on my machine and redirect them to port 80 inside the container. I'll specify the BASE_URL, so it'll be using localhost:5000, and then the image will be the one I just created. Assuming everything went well, it should work just like that... I missed a quote somewhere. There we go. docker ps shows a container that started two seconds ago and is still running. I can try curl localhost:8080 — seems like it's working, though it tells me I need JavaScript, because obviously curl doesn't run JavaScript. But if I go to the browser and connect to localhost:8080, I now have my application, and it's already connected to my backend. And I have that message, which is in my cloud database right here. If I add a new document to the database — insert a document with name "anonymous" again and message "Hello world" — and refresh the page, you can see that new message from the database. (That one's distracting, so let's remove it.) So it's really connected; everything works. We've got everything running in containers, connected to a cloud database. And that can be very useful if you want to share a database across all the different people on your team.
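For reference, the front-end run command described here (image name as built above):

```shell
# Port 80 is taken on the host, so map host port 8080 to container port 80;
# BASE_URL is substituted into the bundle by the start script at boot
docker run -d --rm --name mern-k8s-front \
  -p 8080:80 \
  -e BASE_URL=http://localhost:5000 \
  joellord/devops-front

curl localhost:8080   # serves the built index.html
```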
Using a cloud database here makes a lot of sense, because then everyone connects to exactly the same one, and you can have separate production and development environments for your databases as well. Everybody on the team has access to the same data, which makes things a lot easier. And there's a free tier available — I think it's 500 megabytes of data, which is a lot for a free tier; before you hit that limit you'll be good. If you want to run a production service, you'll probably need a paid tier, but hey. So that brings us to the end of our container work. Everything is now running in containers — you can run the database locally or in the cloud as I'm doing right now — and both our front end and back end are running inside containers on my machine. That brings us to the last stretch: we want to put everything inside Kubernetes now. We'll take those containers and deploy them into a Kubernetes instance. There are a couple of ways you could do that. For this specific workshop, I'll be using Minikube. I did run everything on a DigitalOcean cluster as well — it worked like a charm, and it's very easy to deploy a Kubernetes cluster on DigitalOcean. I haven't documented that yet, but I'm definitely planning to write a blog post about the different deployments: I'll be using one specific operator, there's another one I want to explore, and I want to give instructions for both DigitalOcean and the local environment. I'll send that soon — just follow me on Twitter, and as soon as I finish that article, I'll post it there. So if you're interested in a real cloud deployment, that's what we'll cover.
But for this workshop, I'll be using Minikube. It's a little easier to get started with: you don't need an external domain pointing at your Kubernetes cluster, so there are a lot of hoops you don't have to jump through. Ultimately, though, all the YAML files we'll be building will work on both. All right, so let's talk about Kubernetes — what is Kubernetes? Quoting from the actual Kubernetes website: Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. Really, what Kubernetes does is manage all of those containers. We've created them; now we send them to Kubernetes and tell it: you take care of this. Make sure they can communicate with each other, manage all of that network stuff, make sure they're exposed to the outside world, and keep managing those containers. And that's the important part: if Kubernetes notices that a container crashed or something didn't work, it automatically takes care of restarting it. If you need to scale your application — you put up an ad for the Super Bowl, or Black Friday comes — and you want 15 of those Nginx servers running, a simple command tells Kubernetes to scale to 15 containers, and it takes care of routing the traffic and doing all the load balancing for you. That's where it really, really shines. And once you're done, just scale down — it's now reduced to a single instance, and you're good to go.
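That scale-up, scale-down is literally a one-liner (the deployment name here is hypothetical):

```shell
# Scale the front-end deployment up for the traffic spike...
kubectl scale deployment front --replicas=15

# ...and back down to a single instance afterwards
kubectl scale deployment front --replicas=1
```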
So really, that's what Kubernetes does: it scales those containers up and down and takes care of all of that networking — and that networking part is a lot of work to make work by hand. The first thing we'll do is create a pod for our backend. A pod is one or more containers working together, and what you want inside a pod is containers that are directly related to each other. In our case, we'll have a pod for our backend and another pod for our front end, and they shouldn't be in the same pod because they're not directly related to each other. Maybe we'll want to scale up only our backend — say it has a lot of business logic and takes a little longer — so we want them decoupled, able to scale up or down independently. Most of the time, a pod will have a single container.

Deploying Containers and Using Operators

Short description:

We'll deploy two containers for our backend and use a service to access them. The same approach will be used for the frontend, with two instances of an NGINX server and a service to expose them. An Ingress will be used to redirect traffic to the backend and frontend services. For the Mongo database, we'll use an operator, specifically the Atlas operator, to connect to a cloud instance. This allows us to manage the cloud instance from within our Kubernetes cluster. There are other operators available for the community and enterprise versions of MongoDB, but we'll focus on the Atlas operator.

A pod might have additional containers to check metrics, do monitoring, and things like that, but ultimately, as far as your application is concerned, one container per pod is the rule of thumb. In this case, we'll have a deployment. A deployment is a way to describe what you want: how many pods, how you want to scale them, and what resource limits to put on those pods. Once you put a deployment into Kubernetes, it creates a replica set that is in charge of monitoring those containers. So we'll deploy two pods for our backend here.

And we'll need a service as well. A service is the network object that says: here are all of my containers — and instead of trying to access those containers directly, you access the service, which is in charge of dispatching the incoming requests across the different containers. Each time your containers start, they start with a random name — if you want to scale to 15, you don't want to specify a name for each one — and the service is how you keep track of them. Even though the pods have random names, it doesn't matter: you always interact with the service rather than with the pods directly. So that's what we'll deploy for our backend. For our front end we'll have something very similar: again a deployment — let's say two instances of that Nginx server with our files — and a service to expose them. The way I'll set things up, those two will be able to see each other internally, but because the application runs in the browser, the front end still needs to be exposed externally, and the API endpoints of our application need to be exposed externally as well. To expose everything externally, we'll use what is called an Ingress. The Ingress maps traffic: anything that comes to my-website/api/something is redirected to the backend service, and basically anything else is redirected to my front end. So I'll make sure those rules are in place so both are exposed. Finally, we'll need to do the same thing for our Mongo database — and deploying stateful applications, as I've already hinted, can be very, very tricky.
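As a sketch, the backend Deployment-and-Service pair described here would look something like this in YAML (names, labels, and ports are assumptions based on what we've built so far, not the workshop's exact files):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back
spec:
  replicas: 2                 # two backend pods
  selector:
    matchLabels:
      app: back               # manage any pod carrying this label
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
        - name: back
          image: joellord/devops-back
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: back
spec:
  selector:
    app: back                 # route to whatever pods match, whatever their names
  ports:
    - port: 5000
      targetPort: 5000
```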
We'd need a deployment, and a service to expose all of our Mongo servers, but we'd also need to set up persistent volumes and persistent volume claims, add the connection string and make sure it's accessible inside our Kubernetes deployment as an environment variable, and probably start setting up things like sharding and replication. Adding all of that is where it gets really tricky. So instead of doing that, we'll use an operator. An operator is a way to install applications inside your Kubernetes cluster: it creates a bunch of custom resources, and using those custom resources, you can connect to your database cluster. There are — well, technically three — different operators you can use for MongoDB. In this case, I'll be using the Atlas operator specifically. This one connects to the cloud instance I showed you before, so I'll be able to maintain and manage that cloud instance from inside my Kubernetes cluster. That's very useful if you have everything running inside Kubernetes and you want to make sure you can still configure an external cloud service. There are also operators for the community version and the enterprise version of MongoDB, which deploy the actual servers directly inside your Kubernetes cluster. Again, I'll have a blog post about using those two, but I've decided to use the Atlas one because I think it's a lot easier, and it's easier to use a cloud instance, in my opinion.

Using the Atlas Operator and Restarting Minikube

Short description:

We'll be using the Atlas operator to deploy and manage databases from the Atlas cloud. We'll fetch the images pushed to Docker Hub and run them inside Kubernetes. Let's continue coding and ensure a clean start by deleting everything and restarting the Minikube server.

All right, so I'll be using the Atlas operator — it lets you deploy and manage databases from the Atlas cloud. We'll then create the backend deployment as well: we'll make sure that it fetches the images we've pushed to Docker Hub and runs them inside of Kubernetes. And then we'll do the same for our front end.

All right, let's get coding again. I'm starting to lose my voice — those are long days with a lot of speaking. I've got about 45 minutes to go, so I think I'm pretty much on time here. We should be able to deploy everything. The first thing I'll do — as I said, I used Minikube and did a dry run — is delete everything, just so you see I'm not cheating, and restart my Minikube server. No, no, please don't. I'll start with minikube delete. This takes care of deleting everything that is currently running, and I'll start from a brand-new Minikube instance.

Service Communication and Kubernetes Deployment

Short description:

To ensure secure service communication, you can use namespaces to prevent communication between services. There are also more advanced networking management options available. Configuring visibility within the network is possible. When creating a Kubernetes deployment, use YAML to specify the API version, kind of object, metadata, and the spec. Labels are used to identify components within the Kubernetes cluster. Best practices suggest using an app label for a single application and different labels for different components.

Question from the chat: can inter-service communication be secured? Yes, it can. One easy way is to use namespaces, so services won't be able to talk to each other, and there are ways to do a lot more advanced networking management. Exactly how you would do it is a little bit outside of my expertise, to be honest, but it's definitely possible to make sure that they don't see each other. As I've mentioned, though, I will be configuring it so that everything is visible from inside the network.
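One of those more advanced options is a NetworkPolicy. Here's a sketch of one that only allows traffic from pods in the same namespace — note this requires a CNI plugin that enforces policies (e.g. Calico), and it is not something set up in this workshop:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only allow traffic from pods in this same namespace
```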

All right, so I'll start. If you're using macOS — I had issues with the default driver — make sure that you use --driver=virtualbox and that you have VirtualBox installed on your machine. I can't remember exactly what the issue was, but there was something. While this is running — it'll just say that it's up and running at a certain point — I'll get started with my Kubernetes deployment stuff.

So the first thing I'll do is create a new folder here and a new file. Everything in Kubernetes uses YAML. Technically you can also use JSON, and as much as I hate YAML, it's actually easier than JSON — JSON gets really hard to read for this. So yeah, use YAML. It's easier, and it's kind of the standard in the industry: if you're looking at any example, they're all using YAML. All the Kubernetes files use the same structure. You start by specifying an API version, you specify the kind of thing that you want to create, you need to add some metadata, and then you have the spec, which is the bulk of what you're trying to create. In here we'll start by creating a deployment, and then we'll also create a service and put that in the same file — we'll separate different objects using those separators, so we can have multiple objects inside a single file.
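As a sketch, every manifest in this workshop follows that same shape — the names here are placeholders:

```yaml
apiVersion: apps/v1   # which API group/version the object belongs to
kind: Deployment      # the kind of object to create
metadata:
  name: my-object     # placeholder name
spec: {}              # the bulk of the object's description goes here
---                   # separator: multiple objects can live in one file
apiVersion: v1
kind: Service
metadata:
  name: my-object
spec: {}
```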

So for my deployment, I'll be using apps/v1 — that's the version of the API. I'm looking for an object of kind Deployment. In the metadata, the one requirement is a name, so we'll call it mern-k8s-back. And then you would also use labels. Labels are what you'll use to find the different components inside of your Kubernetes cluster. As we start adding things, we'll have more and more and more things inside of our cluster, so labels are an easy way to find components that are all related to each other. Now, there's no real standard for how to use them, but a couple of best practices are written down: basically, for a single application, use an app label, and then each component gets a different component label. So what I'll do here is use app and component — the component will be back and the app will be mern-k8s. Great.
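Following that convention, the metadata block looks something like this — the label values are just the ones used in this workshop, not a standard:

```yaml
metadata:
  name: mern-k8s-back
  labels:
    app: mern-k8s    # one label shared by the whole application
    component: back  # one label per component (back, front, ...)
```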

Configuring Deployment and Service

Short description:

I've created the spec for my deployment, which includes running two replicas of the backend server. I've also specified labels for the pods and their corresponding specifications. Additionally, I've created a service to expose the pods and handle the networking aspect. The service is configured to map incoming requests on port 80 to the containers on port 5000 using TCP protocol.

I see that Minikube is now done — I can tell because my fans stopped running. So if I do kubectl get all — kubectl is the tool that you use to interact with a Kubernetes cluster — I should see that, well, there's nothing much, right? There's only the default Kubernetes service. Right, so back to our deployment. I've got all of the boilerplate stuff; now I'm going to write the spec of my deployment — the actual description of the object that I'm trying to create. The first thing is that I said I want to run two replicas — two pods with the backend server running. I'll also tell Kubernetes: for this deployment, try to find any pods that match the following label. So find pods that have component back, and make sure that two of those are running at any given time, right? And if you don't see those pods, well, here's a template to create a new pod. So basically we're rewriting all of this again, but for the pods now. In the metadata we won't give it a name, because Kubernetes will automatically create a random name for us, but we will add some labels: app mern-k8s and component back. Remember we said: match those labels, find pods that have component back, make sure we have two running, and if you don't, create one that has that label — because it needs to find them. And those pods will have the following spec: they will use a container. So here's the list of containers that we'll have inside of that pod — we'll have a single one, and we'll give it a name, mern-k8s-back. Here's the image to use for this container. I'll use the one that I'm 100% sure works, the one I pushed earlier, but in theory you would use the one that you've just created on your local machine and recently pushed. And make sure that you open up the following ports.
So containerPort 5000 — I want to make sure that Kubernetes knows that something is running on that specific port. And finally, setting those environment variables. So name PORT, value 5000, and — oops, indentation is key to a healthy YAML file. Finally, the connection string; let's leave its value empty for now. I'm using the YAML extension as well as the Kubernetes extension in VS Code, which is why I'm getting a warning about not having resource limits. Let's ignore that squiggly line for now, but in theory you should add resource limits here. All right. So let's just go ahead and apply this file for now. We'll use kubectl again: apply -f — and, oops, let's move into the right folder. I see I have a question; I'll just deploy this and then get back to that question. Apply -f back. Error validating. Right? Template, metadata, metadata... let me see if I can easily find that one, or else I'll just revert back to my... Line three? Thank you. What's with line three? Oh, thank you, good catch. Sorry, that's all I can see. All right. Metadata. Boy, okay. Error when creating. apiVersion apps/v1, that should be it. I'll actually open up this file here and cheat and just copy everything over. Make sure that I've got those two replicas; everything else should be good, and we'll just keep this one empty. All right. I don't want to waste your time, so I'll just directly change it — once again you'll have access to the GitHub repo; it was probably just a little typo somewhere. Okay, so we can see that our deployment was created. Now if I do kubectl get all, I see a lot more stuff — a lot more than I had earlier, when I just had that Kubernetes service. Now I've got a deployment that is created. It tells me that zero pods out of two are ready. It also created that replica set — remember I said the deployment will create a replica set.
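Put together, the backend deployment file being typed here looks roughly like this — the resource names follow the workshop's naming, and the image name is a placeholder for whatever you pushed to Docker Hub:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mern-k8s-back
  labels:
    app: mern-k8s
    component: back
spec:
  replicas: 2
  selector:
    matchLabels:
      component: back          # manage any pod carrying this label
  template:                    # template used to create missing pods
    metadata:
      labels:                  # pods get the labels the selector looks for
        app: mern-k8s
        component: back
    spec:
      containers:
        - name: mern-k8s-back
          image: <your-docker-hub-user>/mern-k8s-back  # placeholder image
          ports:
            - containerPort: 5000
          env:
            - name: PORT
              value: "5000"
            - name: CONNECTION_STRING
              value: ""        # left empty for now; filled in later
```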
The replica set has its own random name here — it has a random unique ID — and the replica set created those two pods and makes sure that they're running. So we've got two pods with their random names. ContainerCreating: it's actually downloading the image inside of my Minikube instance right now, so it's not accessing my local cache. And you should see — there you go — both of them are now up and running. I can take one of those and do kubectl logs with the pod ID, and I should have something... but it gives me an error. I have an error because, well, I didn't specify the connection string. But that's fine, we'll get back to that connection string later on as we finish the deployment. So now that I have my deployment, I need my service as well — a way to expose those pods, to tell Kubernetes how to handle all of that networking part. Let me just remove that. For the service, I'll go ahead and create an API version — again, same structure. It's v1 in this case. I'll create an object of kind Service. I'll have some metadata — I'll type it correctly this time — with the name. We'll give it the same name; you can use the same name for different kinds of objects, and I find it a little bit easier to track than having mern-k8s-back-service, mern-k8s-back-deployment. So yeah, I prefer that. mern-k8s for the app label, and the component is still back — same component; this is just the service for that component. And then I'll create my spec. In the spec, I tell it to find any pods that have the following label — very similar to what we did in our deployment. So: find any pods that have that specific component label. And then these are the mappings for the ports. Port 80 of our service will map to target port 5000: incoming requests to the service on port 80 will be mapped to the containers on port 5000 using protocol TCP.
And we can even give the port a name so that we can refer to it by name afterwards, although I'm not using that here. So if I go ahead and apply this file again — I just apply back.yaml — you'll see that the deployment was configured.
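The service half of that same file, as described above, would be along these lines (same placeholder names as before):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mern-k8s-back
  labels:
    app: mern-k8s
    component: back
spec:
  selector:
    component: back    # route traffic to pods carrying this label
  ports:
    - name: http       # optional name for the port mapping
      protocol: TCP
      port: 80         # port the service listens on
      targetPort: 5000 # port the backend containers listen on
```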

Managing Pods and Deploying New Versions

Short description:

I made some changes and created another container. When deleting a pod, Kubernetes gracefully removes it and starts a new one. When deploying a new version, it starts new containers and terminates the old ones.

So apparently I changed something in there — hopefully it didn't break anything — and the other one was also created. Now if I do kubectl get all, I see that my two pods are still running, but I also have this service that is exposed on an internal IP address, as you can see here, with this specific port right there. All right. So what else do we have? Let's just — I'll need to open a new terminal window — I can delete a pod. And if I do kubectl get all again, you'll see that I now have three pods, because it's gracefully removing the one that I deleted. That's this one right here: Terminating. But it immediately started a new one — you can see it right here. So if you want to do a deployment — for example, you want to push a version two of your application — what happens is that it will try to start the new containers with version two. If they crash, it just keeps the version-one pods up and running. But once the version-two pods are up and running, it automatically starts terminating the version-one pods. So it takes care of keeping you as close to 100% uptime as possible by making sure that you always have the appropriate pods running at any given time.
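That rollout behavior is Kubernetes' default RollingUpdate strategy. If you want to be explicit about it, you can tune it in the deployment spec — the values here are illustrative:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra pod during a rollout
      maxUnavailable: 0   # never drop below the desired replica count
```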

Deploying the Front End and Exposing Services

Short description:

Deploy the front end by creating a front.yaml file and configuring the container section. Add labels and specify the image, ports, and environment variables. Create a service to map the ports and configure the protocol. Apply the file to create the front end deployment. Use kubectl exec to run commands inside a pod and debug. Create an ingress to expose services using Nginx as a proxy. Specify rules for redirecting requests to the backend and frontend services. Rewrite the URL to match the backend's routes. Apply the ingress file to expose the services to the outside world.

All right, so we now have that up and running, and we move on to the front end. Let's go ahead and deploy the front end. The front will be very similar. So: front.yaml. And I'll just need my secret notes here, if you'll bear with me... that's not it... there it is. All right, because I have my auto-completion tools, I'll actually use them. apps/v1, metadata, and the name for this will be mern-k8s-front. We'll add some labels as well — remember, we're doing exactly the same thing, but this time for the front end, so the component will be different. I'll be using front, and the matching label will be component front; that's the label I want to match. There it is — make sure that you've got that label. I'll also add the app label here. So far so good. Then I come to my container section. I'll actually reuse the one from the backend, because they're very similar — back becomes front — and I won't be using those resource limitations. Come on... there it is. Okay, and let's remove this line. So instead of back, we'll use front for the name. The image is also the front one. The ports: this one is already running on port 80 — not port 5000, but port 80. And then we had one environment variable for the base URL, and the value in this case will be /api. Remember, I've mentioned any incoming route to /api will be redirected to a specific service — we'll create that ingress later on. And that should take care of my front deployment. I also need to create a service. So once again I'll use that auto-completion to make our life easier, and copy that name, because we'll be using the same one. There it is. The selector will be component front. I should also add the labels to my metadata — there it is — and selector component front, so far so good. Now I have to map my ports: the target is 80 and the port is 80.
So the service will be listening on port 80, and the target port inside the container is also port 80. Finally, we'll have the protocol TCP as well as a name — we'll use the same name. All right, that should be it for my front end, so I can go and apply this file as well: apply -f front... and cross our fingers. There we go, everything is created. So now if I do kubectl get all, you see that I have a lot of stuff going on. I only have one front-end pod here, though — so I can just go back into my deployment and say, you know what, I forgot to mention, I want two replicas here. I apply this file again, and you can see that it was configured — a change occurred. If I do get all, you can see that I now have two pods: one started four seconds ago, the other one 30 seconds ago. I have my front service, my two deployments, my two replica sets. So I'm getting more and more things, and that's where labels start to be a little bit more convenient: if I do kubectl get all -l component=front, I get only the things that are for that component specifically. That's where labels get very, very useful. All right, looking good so far. One thing that we can do now — kubectl exec — is, just like with plain containers, actually run commands inside of a pod. So I'll open up a /bin/bash session inside of a pod, and you can see here that, well, I'm inside my container: I have my static files, I've got my index.html. If I had curl installed — but it's not installed on this specific container — in theory I would be able to ping myself. If I do printenv, I can see all of those services that are exposed. So this is how you would find another service.
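Put together, that front.yaml is essentially a mirror of the backend file. A sketch — the image name is again a placeholder, and the BASE_URL variable name is an assumption about what the front end reads:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mern-k8s-front
  labels:
    app: mern-k8s
    component: front
spec:
  replicas: 2
  selector:
    matchLabels:
      component: front
  template:
    metadata:
      labels:
        app: mern-k8s
        component: front
    spec:
      containers:
        - name: mern-k8s-front
          image: <your-docker-hub-user>/mern-k8s-front  # placeholder image
          ports:
            - containerPort: 80   # NGINX serves on 80, not 5000
          env:
            - name: BASE_URL      # assumed variable name
              value: /api
---
apiVersion: v1
kind: Service
metadata:
  name: mern-k8s-front
  labels:
    app: mern-k8s
    component: front
spec:
  selector:
    component: front
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
```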
So currently I'm inside of my front-end pod, but I can see my back service host — there we go, I've got the IP address of my back service right here. And thank you, Alex, for the suggestion: there are a lot of tools that are a little bit easier than kubectl for browsing things like logs. That's one I'm not familiar with, so I'll definitely look it up afterwards. But you can see here how everything is connected, how I can do different things, and you can actually go into those pods if you need to debug anything. All right — those two services are running internally, but nothing is exposed to the outside world yet, so I still can't reach any of those servers. So let's go ahead and create a new file. I'll create an ingress, and this one I'll actually just copy and paste for the sake of time... and that is not the one that I want. I'm pretty sure I have another one somewhere — please, please — there it is. Okay, so what I'll do here is create an Ingress to expose the different services to the outside world. I'll be using NGINX as a proxy — so that's your NGINX proxy, whoever had that question earlier. I'll use the following annotation — I'll come back to it in a second — it just rewrites the URL that is sent to the pods. Here I specify my rules. The first rule is anything that starts with /api, with an optional slash followed by anything else — match on that prefix — gets redirected to my backend service on port 80, and anything else gets redirected to my front-end service. Now, that rewrite annotation tells NGINX to take the second capture group and send just that second argument as the request to our internal services. Remember, our Express server — our backend — has routes like /entries; they are not /api/entries.
So if I just sent the whole request directly to the backend, the backend wouldn't know what to do with it, because it doesn't have a /api/entries endpoint. What I'm doing here is rewriting: any incoming request to the ingress at /api/entries is rewritten as /entries and then sent on to my backend server. So that's a little trick that you can use there. So now I can apply this new file.
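An ingress doing that /api rewrite typically looks like this with the ingress-nginx controller — the service names assume the ones used earlier, and you should double-check the annotation and path syntax against the ingress-nginx rewrite docs for your controller version:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mern-k8s-ingress
  annotations:
    # Forward only the second capture group ($2) to the pods,
    # so /api/entries reaches the backend as /entries
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /api(/|$)(.*)   # anything under /api -> backend
            pathType: ImplementationSpecific
            backend:
              service:
                name: mern-k8s-back
                port:
                  number: 80
          - path: /()(.*)         # everything else -> front end, unchanged
            pathType: ImplementationSpecific
            backend:
              service:
                name: mern-k8s-front
                port:
                  number: 80
```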

Installing Atlas Operator and Managing Secrets

Short description:

I installed the Atlas operator to manage Atlas instances from within Kubernetes. I created and labeled secret keys for MongoDB Atlas. I also created a password for the database user. Now I can access my Atlas instances and create a project with IP access list.

There you go, it's now created. So now what I can do is get the IP address of my Minikube instance — it runs in a virtual machine on my local machine — and do a curl on that IP address, and it should redirect the traffic to my server. Now I have to figure out why it's not working, though.

Let's try /api/entries — still not working. Oh, I know why: Minikube does not have the Ingress add-on enabled by default, so you'll need minikube addons enable ingress. That's what happens when you delete your whole instance. This will take a second; it just downloads the additional add-on. You have to do something similar when you're dealing with a deployment in the cloud: you'll need to intentionally do a little bit of configuration to make sure that everything is exposed to the outside world. In DigitalOcean, again, you'd need to create a load balancer and make sure that the traffic is redirected to your application. So now I can actually ping that server. It returns nothing because it's not connected to a database, but if I take that IP address again — find my window right here — I can go there and have access to my application. So it is actually working, it is exposed to the outside world, and I've got my /api, which is also exposed. The only part left is that I'm still not accessing a database, so I'll need to connect to my Atlas cluster from inside of my Kubernetes cluster.

So what I'll do here is use the Atlas operator. As I said, an operator is basically a way to create custom resources and manage applications — it's a way for software vendors to help you, as a software developer or as a DevOps engineer, manage those external applications, packaged by the people who are actually experts at running them. So what we'll do here is use the Atlas operator to make sure we have access to Atlas from within our Kubernetes cluster. Let me just find my notes again, because I'll need the actual file to install and not this one... almost there, sorry... Atlas operator, there we go. Okay, so I'll search for "Atlas operator" — I want the GitHub one; notice it's an open-source project, so you can see the details of it. It also gives you the exact instructions to install the operator itself: you've got this file here, which installs directly from the GitHub page — it just fetches that YAML file and runs it. So I can run this, and as you can see, there's a bunch of different things that it creates. It's a YAML file, and it created custom resource definitions: you can see that it created the atlasclusters.atlas.mongodb.com endpoint inside of our Kubernetes API, so we now have access to AtlasCluster objects directly from within Kubernetes. Now, another thing I'll need to do is add a secret inside of my Minikube. I, as the administrator of our Kubernetes cluster, want to make sure that the Atlas API secret is inside of Kubernetes, so I can create it and add it right now. So: create secret generic, and we'll call it mongodb-atlas-operator-api-key. I'll be needing those different API keys, and I can find them in the Access Manager.
If I go to the organization access here — you'll need to create API keys so that the Atlas operator can actually interact with your cluster. So just go ahead, create a new API key, DevOps JS, give it the Owner permissions so that it can create and manage all of our clusters, and you've got your private and public key right here. Once again, you'll want to add some IP addresses to be able to manage it. So even though you can see my API key, you still can't use it, because the request needs to come from my server. Right, so that creates everything you need. I actually have those keys somewhere pre-configured... I'll just need to find them. Where did I leave those? Right here — and I'll go back to my terminal and just hide this from you for a second, if I can manage to get that window onto the other screen. Right, okay, so there you go. I've created all of my secret keys now, and they are hidden inside secrets in my Kubernetes cluster, so I don't have to, well, share those with you. Oh, sorry. And now I also need to label those keys so that the operator knows where to find them: label the secret called mongodb-atlas-operator-api-key with atlas.mongodb.com/type=credentials, in the mongodb-atlas-system namespace. Right, so it's now labeled — that's good; that tells the operator where to find the credentials. Next up, we need to create a password for our database user: create secret generic atlas-password --from-literal=password=merncates. Right — not a very good secret, but that works. Next, we label that new password with the same credentials type label, so it now has those labels. And finally, I just need to create my Atlas file — atlas.yaml — and I'll definitely need my cheat sheet here.
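If you prefer declarative files over `kubectl create secret`, the equivalent Secret manifests would look roughly like this. The key names and the credentials label follow my reading of the Atlas operator's documentation, so verify them against the operator README for your version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-atlas-operator-api-key
  namespace: mongodb-atlas-system
  labels:
    atlas.mongodb.com/type: credentials   # how the operator finds its keys
stringData:                               # stringData avoids manual base64
  orgId: <your-org-id>
  publicApiKey: <your-public-key>
  privateApiKey: <your-private-key>
---
apiVersion: v1
kind: Secret
metadata:
  name: atlas-password
  labels:
    atlas.mongodb.com/type: credentials
stringData:
  password: merncates                     # the (weak!) demo password
```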
All right, so now I'll be able to create my new objects — my Atlas instances. apiVersion atlas.mongodb.com/v1, I believe. It is v1, although it's not officially GA yet; it'll be released in June at MongoDB World — if you're interested, I'll be giving a talk there about it, link at the end. So I'll create an object of kind AtlasProject, and in the metadata we'll use the name mern-k8s-project. That will create a project — remember, when I started the cloud instance, I created a new project and then a new cluster; this is basically what we're doing now, but from inside of Kubernetes. So I'll call the project mern-k8s, and then I'll add the project IP access list. You want to make sure that you add the IP addresses that will have access to that cluster. I'll just be lazy here and accept everything — any incoming traffic from anywhere, with a comment. Never do that.

Deploying Atlas Cluster and Managing Resources

Short description:

To deploy a new cloud cluster, we create an Atlas project and cluster using custom Kubernetes resources. We specify the project reference and cluster spec, including the instance size, region name, and provider settings. We then create an Atlas database user with roles and specify the project reference, username, and password. Finally, we deploy the cloud cluster using apply -f atlas. The cluster is deployed and can be accessed and managed using kubectl commands. Persistent volumes are managed automatically.

All right — probably shouldn't do that, but for the sake of this demo it will be more than enough. So that creates an Atlas project. We also want to create a cluster, so we'll use apiVersion atlas.mongodb.com/v1 and create an object of kind AtlasCluster — that takes care of creating our cluster for us. We'll add some metadata: the name — I'm trying to type too fast now — not project, but cluster. And finally, we need to add the spec. We need a project reference, which tells Atlas which project to put this new cluster in — we'll use mern-k8s-project. Then the cluster spec, which is how you define the cluster itself. We'll give it the name cluster0, which is kind of the default, as well as some provider settings. In this case, I want to create a free shared cluster, as I did earlier, so I'll say instance size M0. Instance sizes range from M0 up to M-something-big; they basically define the number of CPUs that you want, and all that stuff. M0 is the free tier. Provider name: because this one is on a shared server, the provider name is TENANT; if you were using production servers on Atlas, you would use provider name AWS, Azure, or GCP — you could use any of those clouds. Region name: you can deploy it pretty much anywhere in the world — there are a couple hundred different regions that you can use. And because this is a tenant, I need to specify the backing provider: it will be deployed on an AWS server here. All right, almost done. Remember the next step I did when I created that cloud instance? I went and — where is it — I created a project first; the project was DevOps 2. There it is, DevOps 2. I created my cluster0, then I went to database access and I created a user. So this is the next thing that we'll do here.
So, exactly the same process, but again directly from your Kubernetes cluster. The API version will be the same one, and we'll create an AtlasDatabaseUser, with some metadata — we'll just give it a name, let's forget labels for now: atlas-user. And here's the spec for it — I'm trying to type too fast now. All right, spec. We'll specify some roles: the role name will be readWriteAnyDatabase — so we'll just give it all the accesses — and the database. This is the authentication database, so it should pretty much always be admin. Next up, we need to specify which project this is for. So project name — no, project ref. We'll use the one that we created earlier. The username will be merncates, and for the password we'll use a secret: the one we created earlier, called atlas-password, whose value was actually that same word. All right, and I've got everything that I need now to actually deploy a new cloud cluster, so I can just do apply -f atlas. Oops — what did I call it? Where is it? Oh, there it is. Let's just move this. All right — those were long days, trying to do all of that. Okay, we've got a validation error, but let's just try that... there we go. Right. So we've created a project, we've created a cluster, we've created a user. What should be happening right now is that we should see that mern-k8s cluster, and because we've just asked it to apply a new configuration, it will look for changes — you can see that it's actually deploying all of that inside of our cloud cluster. In the meantime, it's still running. I could have created a completely new cluster, but it takes about five to seven minutes to actually deploy those clusters, so I just wanted to reuse one to make it a little bit easier here. Now that I have this cluster, as it gets deployed, I can actually see it.
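The whole atlas.yaml ends up along these lines. The field names follow what was typed in the workshop for that operator version, so verify them against the AtlasProject / AtlasCluster / AtlasDatabaseUser CRD docs before relying on them:

```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
  name: mern-k8s-project
spec:
  name: mern-k8s
  projectIpAccessList:
    - cidrBlock: 0.0.0.0/0        # open to the world -- never do this in production
      comment: "Workshop demo only"
---
apiVersion: atlas.mongodb.com/v1
kind: AtlasCluster
metadata:
  name: mern-k8s-cluster
spec:
  projectRef:
    name: mern-k8s-project        # put this cluster in the project above
  clusterSpec:
    name: cluster0
    providerSettings:
      instanceSizeName: M0        # free shared tier
      providerName: TENANT        # shared server; AWS/AZURE/GCP for dedicated
      backingProviderName: AWS
      regionName: US_EAST_1
---
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
  name: atlas-user
spec:
  projectRef:
    name: mern-k8s-project
  username: merncates
  passwordSecretRef:
    name: atlas-password          # the secret created earlier
  roles:
    - roleName: readWriteAnyDatabase
      databaseName: admin         # the authentication database
```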
I'll just need my cheat sheet again, because I'll be using a lot of jq. So I can see different things now — let me just clear this. This one is up and running. I can do kubectl get atlasclusters, and I can see that I have a cluster that is connected now; it's been running for 67 seconds. I can do the same thing for atlasprojects, and so on, and so on. So I can now access all of my different resources directly from my CLI, directly from kubectl. It all gets mapped in, and I can manage everything directly from here — that makes my life a lot easier. Remember when I said you have to make sure that there's persistence, volumes, and all of that? Everything is managed automatically.

Finalizing the Application and Workshop Recap

Short description:

We managed to build a full MERN stack application, from the MongoDB server to the Express server running on Node.js, and the React front end. We packaged everything inside containers for maximum compatibility. The application is highly available, running on a Kubernetes cluster with multiple instances. The GitHub repo with all the code and steps will be shared. The workshop covered a lot of content, and a recording will be available for reference.

It's all managed directly inside of our cloud instance, so we can go there, edit, change, access the UI, do whatever you want, both from the UI and from your Kubernetes cluster. And it's all in sync now. And the last thing that we want to do is kubectl get secret. So by installing that operator, by running that file, by creating that project, it actually created a secret here. So the name of the secret is the name of the project, but all in lowercase, mern-k8s, no spaces, it uses dashes. So, there we go. And the name of the user and... no, that's db-admin, yeah. And mern-k8s-user. I can see... no, I'll need to do kubectl because I've changed it at the last minute. kubectl get secret, there you go. There it is, all right. So: name of the project, cluster name, and the actual user name. So, let's just copy it over: kubectl get secret. And I can see that I do have that secret, it is created. I can output the content of it in JSON format. And you will see here that I've got, well, look at that, all of my connection strings, and they're all Base64 encoded. So what I'll do then is I'll actually just pipe that through jq. Oh, I know how to write one here. So, pipe it through with_entries, take the value, and pass it to the Base64 decode filter, and there we have it. So that's our connection string, now that we have it. You remember earlier I got that manually from the MongoDB UI, but now it's part of my Secrets, directly inside of my Kubernetes. So the last little thing that I can do now is to actually go back to my backend deployment. I'll just need to find the exact one right here. Not looking at the right screen — backend deployment. Remember I had no connection string here. So rather than keeping a value directly encoded in a YAML file that ultimately you would push to GitHub, and ultimately expose your connection string again, what you can do is take that value from an existing secret.
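The env entry that gets edited next ends up looking roughly like this inside the backend Deployment's container spec. The secret and key names follow the walkthrough's naming convention; the environment variable name is an assumption, not necessarily the repo's exact one:

```yaml
# Inside the backend container spec of the Deployment
env:
  - name: MONGODB_URI                             # assumed variable name for the connection string
    valueFrom:
      secretKeyRef:
        name: mern-k8s-cluster0-mern-k8s-user     # secret created by the Atlas Operator
        key: connectionStringStandard             # one of the Base64-decoded keys seen above
```

This way the connection string never appears in the YAML you commit; it's injected from the operator-managed secret at pod startup.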
So secretKeyRef. Take it from the secret key... well, I can't remember the name again. mern-k8s-cluster0. There we go, so take it from this secret and use the following key. So what was it called? Connection string standard — actually, I'll use connectionStringStandard right here. So use that key. Now we can go ahead and redeploy with kubectl apply. And there you go, you can see that it was configured. Get all, and you can see that it's terminating the processes. It's slowly starting the new ones, the new backend. There's one that was still running, so it's making sure to do that rolling deployment. And if we go back to... not this one, oops, showing all of my emails here. Where is it? And I go back to the IP address — oh, I forgot it. So let's just go and get that IP address: minikube ip. And there it is, and we are finally connected to that new database. You can just see that the data is different because I'm actually connected to mern-k8s, Cluster0, if I go browse that collection. So I had two different databases. So you can see that it's connected to this one, and I can use it. I can put in a new message on that guestbook. This is so cool. Submit that, it gets saved here. I can go back and see this inside of my database. Everything is connected. Alright. Sharp on time, 11:30. So we managed to get it done in two and a half hours. Let's just go back to those slides to wrap it up very quickly. Alright, so this is our final result. So we finally got our GeoCities guestbook up and running. It's using an Atlas database. So it has three nodes with replicas in place. So it's running on six virtual CPUs. I think they have two gigabytes of RAM each. And then we've got two instances of our front end, two instances of our backend. All of that is deployed on a Kubernetes cluster, which also needs three nodes running. So you've got another six virtual CPUs.
So we've got 12 CPUs, and about 32 gigabytes of RAM, to run this GeoCities guestbook page — probably a little bit overkill, but it's never going down, I promise you. There are so many different things, so many replicas... Yeah. So, you know, it'll be highly available. That's for sure. So this is what we've done today. There's a lot of content in here. Again, a recording will be available. You should have access to it by tomorrow. So you'll be able to look at it, skip the boring parts, skip the parts that you were already familiar with, and then slow down on the parts that you want to reproduce, or try, or explore. So you'll have access to all of that. I'll also share a link to the GitHub repo that has all the code that I've just shown, and all the steps that I was kind of cheating and looking at right here. But basically, we've managed to build a full MERN stack application. We've got our MongoDB server, and we've got an Express server running on Node.js, in which we created three routes: one to see the health and the status of our application, as well as two routes to interact with the data, so they're connected to the database. And then we had a React front end where we created a form, and we listed a bunch of entries from our guestbook. So we've built a full MERN stack application. You saw how easy it is to use JavaScript all the way. It's very intuitive, and easy to stay in the same mindset. And within an hour, we had a full three-tier application up and running. We've packaged up everything inside containers afterwards, so that those containers can then be taken and deployed anywhere. That makes sure they're available for maximum compatibility across different servers, different environments. We've used environment variables as well. And you saw how to create those inside of our front end container.
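The three backend routes mentioned in the recap can be sketched with the handler logic pulled out as plain functions. This is a minimal sketch: an in-memory array stands in for the MongoDB collection, and the route paths are assumptions, not necessarily the repo's exact ones.

```javascript
// In-memory stand-in for the guestbook collection, so the handler logic
// can run without a database.
const entries = [];

// Health/status route: reports whether the API is up.
const health = () => ({ status: "ok" });

// Data routes: list all guestbook entries, and add a new one.
const listEntries = () => [...entries];
const addEntry = (message) => {
  const entry = { message, createdAt: new Date().toISOString() };
  entries.push(entry);
  return entry;
};

// With Express, these would be wired up roughly as:
//   app.get("/health", (req, res) => res.json(health()));
//   app.get("/entries", (req, res) => res.json(listEntries()));
//   app.post("/entries", (req, res) => res.status(201).json(addEntry(req.body.message)));

addEntry("Hello from the guestbook!");
console.log(health().status, listEntries().length); // → ok 1
```

In the real application the two data handlers would call the MongoDB driver against the collection instead of pushing to an array; the shape of the routes stays the same.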

Managing Containers and Using Operators

Short description:

Think of containers as cattle, not pets. Kubernetes helps with deployment and scaling. For stateful applications, use persistent volumes or database operators. Operators make managing databases easier. Thank you for your time. Visit for more information and resources on Kubernetes and MongoDB.

So that one was a little bit trickier, but we still managed to do it. Think of containers as cattle rather than pets. That's one way to think about it. Those containers should be built in a way that you can easily take them down. They don't keep state, they don't keep anything. You can easily take them down and just spin up a new one when needed.

And that's where Kubernetes comes into play. Kubernetes will help you to deploy and scale that application. Make sure that you have those pods running, that you can easily scale them down or up again, and make sure that they're always running.
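A minimal Deployment like the ones used in the workshop captures that idea: declare the replica count and Kubernetes keeps that many stateless pods running, replacing any that die. The image name, labels, and port below are placeholders, not the repo's exact values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-backend
spec:
  replicas: 2                    # Kubernetes keeps two pods running at all times
  selector:
    matchLabels:
      app: guestbook-backend
  template:
    metadata:
      labels:
        app: guestbook-backend
    spec:
      containers:
        - name: backend
          image: example/guestbook-backend:1.0   # placeholder image name
          ports:
            - containerPort: 3000
```

Scaling up or down is then just a change to `replicas` (or a `kubectl scale` command), and Kubernetes rolls pods in and out to match.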

Now, when you have a database or a stateful application, that's where it gets really tricky, because those containers will lose their state when they get restarted, and you don't want that. So to help with that, you can create different persistent volumes. There's a lot you can add. But if you have a database, chances are that your database vendor has operators for Kubernetes. So look for those. You saw how easy it was to just install one. It was a single kubectl command, and it installed a bunch of different resources. And from there, I was able to interact with my database. I was able to create a new cluster, or access a cluster — actually, I didn't create one, I accessed one that was already existing — manage all of those users, manage those permissions, and everything was done directly from within Kubernetes. So using operators is a lot, lot easier than trying to do all of that by yourself.

So that's all I have. Thank you so much for your time. I'll stick around for questions for a couple more minutes if there are any. If you want more information, take a look at that site — you've seen the name. It probably always sounds weird when I say it, but you've seen it a couple of times now, so it should be imprinted in your brain. In there, you'll have access to the GitHub repo, and to a bunch of resources: how to build front-end containers, as well as general information about Kubernetes and MongoDB.
