Docker 101 - Intro to Containers


Software containers are quickly becoming an essential tool in every developer's toolbelt. They make it easy to share, run, and scale code. In this workshop, Sr. Developer Advocate at Docker Shy Ruparel will walk you through getting started with Docker. He'll cover setting up Docker, running your first container, creating a basic web application with Python and Docker, and pushing the Docker image to Docker Hub. He'll also share why you'd even want to use containers in the first place and how they enable a developer to write better, more shareable software.

116 min
04 Jul, 2022


AI Generated Video Summary

Software containers provide a consistent running experience, making it easy to package code and dependencies. Docker Hub is a repository of existing images that can be used to build containers. The process of creating containers from images and managing them is explained. The usage of Dockerfiles to build images and the concepts of ENTRYPOINT and CMD are discussed. The process of moving files and running code in Python within a container is explained. Finally, the topics of pushing images to Docker Hub, building for different platforms, and networking and running containers are covered.

1. Introduction to Software Containers and Docker

Short description:

In this part, we will discuss software containers and their relevance in solving common problems faced by developers. The analogy of shipping containers is used to explain the concept of software containers. Containers provide a consistent running experience regardless of the environment, making it easy to package code and dependencies. They can share resources and are more modular compared to virtual machines. Docker Hub is introduced as a repository of existing images that can be used to build containers. The official node image is mentioned as an example. Docker commands and the process of creating containers from images are explained.

So, let's get started. Let's talk about software containers and do the Docker 101, right? So, I'm Shy, I'm a senior developer advocate at Docker. I'm on Twitter, if you all want to follow me; I tweet nothing of value but, you know, it's there if you want it. And I'm really excited to get to share things with you about using Docker. Full disclosure, I'm primarily a Python developer, so I will be doing my best to teach you all in JavaScript today, but I might struggle a little bit and I might need some help. My Python-fu is much better than my JavaScript-fu. So, that's my disclaimer for this workshop. Let's go ahead and get started.

So I have a question. I want you to give me a hand-raised emoji or raise your hand in the chat if you've had any of the problems that I'm about to outline. Or, you know, share your favorite stern emoji, the emoji you send your coworkers when you're mad at them, if you've had any of these problems. So, first off: you've built something, you've made something, it works on your machine, but it doesn't work like that on the cloud, or when you share it with a co-worker. Maybe it's because you've got OS inconsistencies: you're running Mac on your local machine, you've got a Linux instance in the cloud, and that's causing problems. Maybe you've got to upgrade the machine or the server. Maybe you're having issues with incompatible dependencies. I've had this problem a few times myself. And some of these problems don't even take into account what happens if you want to change your cloud. You know, you're on AWS and you decide, alright, I'm fed up with this, the billing isn't working for me, and you want to move clouds. Or maybe you just want to explore and check out Azure or Google Cloud, or you're fed up with Azure and Google Cloud and you want to move to AWS. How do you keep all that secure, how do you keep it all maintained, how do you keep it consistent, how do you make it easy for yourself to move all that stuff around? Because it can be kind of a pain in the butt. Now I want to talk about how we can use containers to solve those problems. But before we get there, I want to take a brief tangent and talk to you about shipping. And also, I have some cool gifs of trains, and this is my excuse to share them with you. But yeah, let's take a brief tangent and talk about how shipping works.

So back in the olden days, before they invented color television, they would do shipping kind of inconsistently, right? Products would be in all sorts of different containers: you'd have your bags of beans and spices, you'd have your boxes full of non-perishables, you'd have your barrels of rum, and it made storing things really inconsistent; you'd have to move individual units around. And there was no standard format of transit. We had different types of boats back in the day, we had horse-drawn wagons, eventually cars got invented, eventually trains got invented. It was a huge pain in the butt to sort and organize and keep things maintainable, to have a consistent idea of how much stuff you could get from point A to point B. And this is actually kind of a solved problem now. These days, modern shipping containers have been standardized. You can load them with a bunch of stuff and efficiently transport these boxes, these containers, between multiple different modes of transit. And because we standardized around this format, we're able to build all sorts of different vehicles, assuming that this is going to be the format we use. So we have trucks that can take them. We have trains that can take them. We have boats that can take them. And when people are done with them, they'll recycle them and put them outside my apartment and turn them into COVID-19 testing locations, which I think is pretty cool. I love the reused shipping container aesthetic, especially when it gets turned into boutique stores. There are some really nice ones of those across the country, which I think are really fun as well. So I think shipping containers are pretty cool. And something like 90 percent of cargo transit today is done in this format. And you might be thinking, Shy, this is an excellent tangent.
But how is this relevant? And I'm going to tell you: I think this is actually a really good analogy for how software containers are set up, and I'm assuming that's why they decided to name them software containers. With a software container, you take that same aesthetic. You have this box, right? And the box is going to be able to run regardless of where you put it. You're going to have this thing that can run on your local machine, on an Arm v7 architecture, on Linux, on cloud, on Windows. You have confidence that this thing is going to have a consistent running experience everywhere. And so when you build a container, basically what you're doing is putting everything you need into that container for it to run. You're throwing all your dependencies in there, you're throwing all your low-level libraries in there, and you're pushing it out with confidence that it'll be consistent regardless of where it runs. It's easy to make for us as developers. There are a lot of reusable components, so there's this standard ecosystem being built on, and you get all that dependency stuff coming with you. So they're great. They let you package up code and your dependencies so things run quickly and consistently. I kind of talked about this already, but this graph is nice; it shows how containers get built. And containers can actually share. If you've got multiple containers running, they'll share resources between each other too. So unlike a VM, which needs a discrete amount of memory and CPU time allocated to it that's stuck while the VM is up, containers are a little more modular, I guess. They're able to talk to each other, able to share resources, and able to take advantage of being able to talk to each other as well. So that's really fun.
There's Docker Hub, which I'll talk about, and I think that's enough PowerPoint for now. So hopefully everyone has downloaded Docker and has had a chance to poke around the tool. I'm going to do a bit of coding myself, and please feel free to follow along and tell me to slow down if I'm going too fast. I'm going to get started by pulling up my terminal. I am a hardcore terminal fan; I am old school, I guess. Let me make this bigger so we can all read it. Give me a shout if this is too small and I will keep making it bigger. So, here we just have a plain command line, right? And I've got Docker installed. Let's go ahead and just do a quick help and see how we can figure out how to use this thing. So, here we go. I've got Docker, I've done my help command, and there are all sorts of commands and options we can use. And I'm going to start with the run command. So, the way Docker works is when you do run (let's do the help so we can see the documentation here), you're going to pass it some options and you're going to give it an image. Now, an image is like a recipe or a template. It's basically saying: this is the thing that I want you to run. Whenever we do run, we take an image and we build a new container out of it. So, the difference between an image and a container is that a container is an actual process that is running, and containers themselves can be changed and have state. They can be on or off, and you can use them over again or not. Whereas an image is like a set of instructions to make a container. Every time you create a container from an image, it's going to be the same starting point, and then containers themselves can change. So, let's go ahead and take a look at Docker Hub. Docker Hub is really nice because it's this kind of repository of existing images.
So, you don't necessarily have to build them from scratch. One of my favorite things about working with containers is that you're able to always build on top of existing images. So, maybe you want to use a Python environment or a Node.js environment, right? Let's just type node, and we can see that we've got this official node image. So, this means it's being curated. It's a curated open source repo that the folks at Docker have looked at, and we trust the people making it. We've done our due diligence on it, and so we can be sure that this is something you can trust. And you can see here we've got a verified publisher one as well. So, again, we know that this is being done by the real people at CircleCI. They're putting it together, and so you can trust that this is going to be an image in good shape and of good quality. And so, when you pull it and run it, you're going to feel confident about it.
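The flow described here — check the help text, then create a new container from an image with run — looks like this at the command line (a sketch; output abbreviated, and these commands need a running Docker daemon):

```shell
# List the available subcommands and global options.
docker --help

# See the options that `docker run` accepts (e.g. -i, -t, -d).
docker run --help

# `docker run IMAGE [COMMAND]` creates a *new* container from IMAGE
# and starts it. The image is the recipe; the container is the
# running (or stopped) instance built from that recipe.
docker run hello-world
```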

2. Pulling and Running Official Node.js Docker Images

Short description:

When you pull the official Node.js Docker image, you can choose from different versions. The Node.js Docker team maintains and updates the image regularly to ensure it's up to date and secure. By default, the latest version is pulled, but you can specify a specific version if needed.

And so, when you pull it and run it, you're going to feel confident about it. So, if we click into this node one, we can see that there's a whole bunch of information here, and we can take a look at it. All we need to do to get this running is pull it locally, kind of similar to how GitHub works. And we can see that it's maintained by the Node.js Docker team. So, this is the official Node.js folks, and they're making commits to this fairly regularly to make sure it's up to date, to deal with any security vulnerabilities that might get discovered, and to make it the best experience for you. And they have a bunch of different versions that they support. So, when you do that docker pull node, unless you say which version you want, you're going to get the one that is tagged as latest. In this instance, that's Node 18.3, but you have the capacity to select whichever version you want. So, if you want to do Node 14, or some of the upcoming Node releases, you can set that level of granularity as well, which is really useful.
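In command form, pinning a version is just a tag after the image name (a sketch; the exact tags available on Docker Hub change over time):

```shell
# No tag defaults to :latest (node 18.3 at the time of the talk).
docker pull node

# Pin a major version with a tag...
docker pull node:14

# ...and list what you already have locally, with sizes.
docker images node
```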

3. Running BusyBox Image

Short description:

Let's take a look at BusyBox, a small image used for running on small embedded systems like routers. We'll use it to run a simple Hello World command. By running 'Docker run BusyBox Echo Hello World', it will pull the BusyBox image from Docker Hub, execute the command, and then exit. If you have any issues, don't hesitate to reach out.

But let's go ahead and take a look at an image that is much smaller. So, I'm going to take a look at BusyBox. So, if we click into this, we can see that BusyBox is maintained by the Docker community, and BusyBox is a really small little image. So BusyBox is one of those images that gets run on small embedded systems, so things like routers. And so what we're going to do is we're going to go ahead and use this image to run Hello World. We're just going to do a simple Hello World. So, I'm going to pull up my terminal here, and let me clear it. And what we're going to do to get started, our very first command is going to be a Docker run, and then we're going to say what image we want. We're not going to use any of the options yet, so we're going to run BusyBox, and then I'm going to pass it a command. So I'm going to just do an Echo hello world. And so what this is going to do is it's going to see, do I have an instance of this BusyBox on my local machine? Which I don't, and this is, I think, one of the great uses for Docker Desktop, is you can kind of have a quick peek at what images and containers you have on your local machine. And I went ahead and I deleted everything before we got started, just so you can see the kind of fresh experience. So I'm going to run this Docker run BusyBox, Echo, Hello World. It says it doesn't have it on my system, so it's going to go out to the internet, to Docker Hub, and it's going to pull it. It's going to give me the digest of it. It's going to download the latest one, because I didn't set the tag, and there we go. It executed that command and it ran that Hello World, and then it exited for us. So we did a single process here and we echo that Hello World. Now, let's do something a little bit more useful and like I said before, feel free to give me a shout if this isn't working for you, or if you're having any issues as well, too.
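The whole exchange above condenses to one command (a sketch; the pull from Docker Hub only happens the first time, since the image is cached locally afterwards):

```shell
# Pulls busybox:latest from Docker Hub if it isn't cached, creates a
# new container, runs `echo hello world` inside it, then exits.
docker run busybox echo "hello world"
```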

4. Running Ubuntu Container and Installing Packages

Short description:

BusyBox is not super useful. Let's try a more robust Linux instance like Ubuntu. We can run a container with Ubuntu using Docker run Ubuntu. To interact with the container, we can use the options -T for a pseudo terminal and -I for interactive. We can install additional packages like figlet and cowsay inside the container. The host machine and the container are independent, so installing something on one doesn't affect the other. You can run any container on any host, except for Windows containers, which require Windows as the host. When a container is turned off, its resources are freed up. You can create new containers by running Docker run again.

So BusyBox is pretty cool, but it's not super useful. So why don't we do something with a Linux instance that's a little more robust? I always like to use Ubuntu or Fedora. And we can see here that 22.04 is the latest version. Now, this isn't going to be the same as the regular Ubuntu install. So let me go ahead and just do a pull of Ubuntu, and we can see how big it is. And we can go look up what the current file size of the Ubuntu OS is right now. How big is Ubuntu? So Ubuntu, it says, is anywhere from 40 megabytes to 4 gigabytes, and I'm guessing the full version of it is going to be probably closer to the two gigabyte range, whereas our Docker image here in Docker Desktop is going to be 77.82 megs. So this is like the smallest version of Ubuntu. It's going to have everything we need for Docker to run Ubuntu and work with it. So let's go ahead and run this container. So we've got a docker run ubuntu. Now, if I don't give it any commands, it's just going to exit, and I don't necessarily want it to do that. So I'm going to take advantage of some of those options. Let's go ahead and look at those again with help. I have two options that I want to use, because I want to take advantage of Ubuntu's built-in shell. I'm going to use -t, which will give me a pseudo-terminal, and I'm also going to use -i, which stands for interactive. That'll keep standard input open, so I'll basically be able to type things into that container and see the responses. So we're going to do this docker run -i, for interactive, and then -t, for the pseudo-terminal, and then ubuntu. So here we go. So now I've got this shell.
Let me clear it so it's a little bigger. So if I do an ls, you can see I'm inside your standard Ubuntu file system. And if we poke around, we can see that we've got all of the standard stuff; it's what I would expect if I installed Ubuntu onto, I don't know, a Raspberry Pi or a laptop or something. And we can do all sorts of interesting things. One of the things I really like to do is use figlet, or cowsay, which are these fun little commands, and let me show you how they work, because I have them installed on my local machine. So this is figlet: what it does is it takes the text I give it and renders it in big ASCII letters. And cowsay here draws this nice little ASCII cow that'll say whatever text I give it. So there are a lot of these fun, silly little commands. Now, I've got this Ubuntu instance, and I'm going to go ahead and try and run figlet. Figlet hello. And you can see that it isn't here. Ubuntu does not ship with figlet, much to my chagrin. If anyone on this call is an Ubuntu maintainer, awesome on you for joining the workshop, but also: why don't you ship figlet inside Ubuntu? That seems like a fun thing you could do to get my seal of approval. So let's go ahead and install it. The first thing we've got to do here is actually an update; we have to reach out to Ubuntu's package repositories and get a list of all the available packages, because that doesn't come in those 77 megabytes, right? So we do an apt update, and this is going to take a sec. So we've got that now, and then we can go ahead and do an apt install figlet, and we can also do cowsay just for fun, because I like them. Cool. And this is going to add 50 megabytes to my container, which is fine. It's going to run, and now when I try it, we'll have it working.
And so there we go, we've got our little banner running. We've got it installed into the container, which I think is really useful, and we can pass it options. Let's see if figlet has a help. So we've got figlet, and then we can do -f for font, and I think slant is a font that they have. So we can do that, and it'll give us a different version. Now this is pretty cool, this is pretty useful: we had Ubuntu up and running on our local machine, we installed stuff pretty quickly, and now if I run it, I get my output, which is really great. So I'm going to exit the shell, and I can do that with an exit or a Ctrl-D. And I'm going to go ahead and run figlet again, and you can see that I don't have it installed on my local MacBook, so we're going to get a command not found. So the host machine, my local MacBook, and this Ubuntu container are independent things. They have independent packages: installing something on my machine isn't going to install it into the container, and if I install something in the container, it isn't going to get installed on the machine. And that's true even if I was running Ubuntu, right? If I was running Ubuntu on my local machine and ran an Ubuntu container, those two things would remain separate. And as a general rule, you can run any container on any host. The only weird exception here is Windows: if you're running Windows containers, you've got to be running on top of Windows, at least right now. So, you might be wondering: okay, we turned that container off, where is it? Let's go take a look at Docker Desktop here, and you can see that we have this Ubuntu container, it's been given a random name (it's been called vigorous_cori), and it's exited. Now this is still here, right?
Like, this still exists on my disk, but all of the resources that were allocated to running it, all of the RAM, all of the CPU, are freed up. So I have all those resources back as soon as I turn it off. And if we run another container, if we do a docker run -it so we get the interactive terminal again, it's going to go ahead and create a new container for us. So you can see here that previously I had a container ID starting with 803fbb8.
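Put together, the whole Ubuntu detour looks like this (a sketch of the session; package sizes and the banner output are illustrative):

```shell
# Start an interactive Ubuntu container: -i keeps stdin open,
# -t allocates a pseudo-terminal so we get a usable shell.
docker run -it ubuntu

# --- inside the container ---
apt update                    # refresh the package lists first
apt install figlet cowsay     # neither ships in the base image
figlet -f slant hello         # big ASCII banner in the slant font
exit                          # or Ctrl-D; the container stops

# --- back on the host ---
figlet hello   # command not found: host and container are separate
```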

5. Working with Containers and Docker Workflow

Short description:

When working with Docker, it's recommended to start with a fresh container for new projects. Custom images can be created to install necessary dependencies. The analogy of pets versus cattle is used to explain the difference in approach. Automation and repeatability are important when running multiple containers. The workflow involves creating a container image, running the container, working on the project, shutting down the container, and starting a new container for future work. A clock container is used as an example of a container that runs indefinitely. The detach mode allows containers to run in the background. Docker PS provides information about running containers.

And here we have a different ID, so this is a brand-new container now, because I've used the run command. So if I type figlet in, you can see that the command is not found. And you might be thinking: oh no, why don't we have the container with figlet anymore? And that's because there's this difference between run and start. Whenever you're doing stuff with Docker, especially a new project, it's always recommended to start with a fresh container. And if you need something to be installed, you should have it built into a custom image, and then run that. We'll talk about creating custom images later in the workshop. But yeah, the intention here is to focus on automation and repeatability. We'll take a quick look at how we can reattach to the container in a sec, but first I want to talk about a good metaphor here: pets versus cattle. So if you have a pet, you've probably given that pet a unique name. That pet is an independent being that you care about; if your cat or your dog gets ill, you're going to do everything you can to fix them and to make sure they have the best care possible. Whereas cattle have generic names, right? You own a hundred cows; you don't really care about them on an individual level. You want them to be standardized, you want them to be generic. And if there's an outage, you just remove it from the herd and maybe replace it with a new cow. And that's the same mentality with servers: you're not necessarily going to do everything you can to get a server back when you can just replace it and run a new one.
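That "bake it into a custom image" idea, which the workshop returns to later, might look something like this minimal Dockerfile sketch (the base image tag is the 22.04 one shown earlier; treat the exact contents as an assumption, not the workshop's actual file):

```dockerfile
# Start from the same Ubuntu base image we used interactively, and
# install the tools at build time, so every container created from
# this image already has figlet and cowsay available.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y figlet cowsay
```

Building it with something like `docker build -t my-figlet .` (where `my-figlet` is just an illustrative tag) would mean every fresh `docker run -it my-figlet` starts with figlet installed, instead of installing it by hand each time.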
And so the automation and the repeatability are important, especially when you're running lots of different things. So yeah, the workflow is: you create a container image, you run a container off of that image, you work on the project, and when you're done, you shut down the container. Next time you need to do some work on the project, you start a new container. And if you need to change the environment, you tweak the image files and how the container itself is built. We're going to talk about how to make a custom image with figlet in a little bit. But first I'm going to show you some background containers, so let me go ahead and exit out of Ubuntu really quick. The containers we've been running so far run, and as soon as we're done, they stop and close, and it's something we've kind of had to pay attention to, right? Even with the run command: I can do this docker run echo hello world, it'll do a thing and then it'll exit, because it's out of stuff to do. So what if we use a container that runs kind of forever? There's one that I like, and let me pull it up on Docker Hub so you can see it. It's this clock one. This was made by a former Docker employee, mostly, I think, for teaching purposes, so it hasn't been updated in a little while, but it's a fun one. What it does is just display the time, and it'll run basically forever. I don't have this on my machine right now, so Docker is going to go look at Docker Hub, find the image, and then go ahead and run it. So here we go: it's printing my logs now, because I am running this in an attached state. I basically don't have access to my terminal anymore, right?
Which is kind of annoying if I want to do other stuff, or if this were, hypothetically, a web server running in the background while I write code in Vim; this is really inconvenient. So what we want to do is run this in detached mode. This is also something we don't interact with, so we can try using Ctrl-C, and Ctrl-C will go ahead and stop the process for us. Generally, when you hit Ctrl-C, it's going to send a signal to stop the container, and it's going to wait 10 seconds; if the container isn't able to act on that stop signal, it'll send a kill signal. And if you're ever in an instance where Ctrl-C or exit doesn't work, this is, I think, an advantage of Docker Desktop: you can see all your running containers there, and you can just hit the stop button and that'll exit it for you. So, we're talking about stopping containers, and yeah. Let's go ahead and run this clock container in the background. We can do a docker run again, and I'm going to take a look at help, as always; I feel like I'm a little obnoxious and I like to be overly explicit when I'm teaching about how to find information. And here we've got this --detach, or -d, and what this will do is run the container in the background and print the container ID. So we can do a clear to clean things up, we're going to do a docker run -d, and then we're going to run this clock image again. When I hit enter, it's not going to show me the output of the container, and that's okay; we're still going to have access to all those logs. What it's going to do is give us the ID of the container.
So this is the ID of the container as it's running, and it's just a randomly generated thing. And you might want to say: Shy, okay, that's cool, but how do I see what's running on my system? Well, just like Unix has the ps command, we have docker ps, and docker ps is going to give us all of that info. It's going to look a little weird because my font is a little big, but there we go. Let's go ahead and do a docker ps again, and this is a little more readable. So it's going to say: okay, these are all the IDs of the containers that I'm running, these are the images we've selected, these are the commands that are running, how long ago they were created, their status, and then their names. And if I don't give a container a name, it's going to get one randomly assigned to it. So we can see I've got a couple of things running on my system. I've got the container I ran yesterday still running, so let me turn that off; we don't necessarily need that. Let me go ahead and just stop yesterday's example. We can do a docker ps, and you can see that it's disappeared.
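The background-container dance, condensed (a sketch; `jpetazzo/clock` is my guess at the teaching image being described — substitute whichever clock image you actually use, and `<container>` stands for any ID or name from docker ps):

```shell
# -d / --detach: run in the background and print the new container's ID.
docker run -d jpetazzo/clock

# List running containers: ID, image, command, created, status, name.
docker ps

# Stop a container by ID or by its randomly assigned name.
docker stop <container>
```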

6. Managing and Viewing Containers in Docker

Short description:

In this part, we learn how to manage and view containers in Docker. The speaker demonstrates how to find and manage container IDs and names. The Docker logs command is used to view container logs, and options like tail and follow are explained. The speaker also explains the difference between Docker kill and Docker stop commands for stopping containers.

So now I just have my extensions running. And let me go ahead and change one of my settings really quickly. I have it showing me those containers, but we don't necessarily want it to do that. Actually, I'll do that later, because it'll restart and that'll take a bit.

So yeah, here are all of the containers that I've got running, and I can find that ID and I can find that name, and I can also just pull it all from Docker Desktop as well. Docker Desktop has my list of containers; this one is going to show me my exited containers too, but we can see that this one is still running right here, and I can click into it and it'll start showing me the server logs as well, which can be useful. And let's go ahead and start a couple more containers, so I'm going to run that command again a few times. You don't see the stop button on Docker Desktop? Okay. Well, the container itself has to be running; if it's exited, it's not going to give you a stop button. So you might want to check, Annabelle, and make sure you started it in detached mode with that -d flag, and let me know if that made a difference for you. Now, if we take a look at docker ps, we can see that I've got three instances of the clock running, and they've all been running for a different amount of time. And if I go back to Docker Desktop and sort this by running and not running, you can see I've got three instances of this clock that are running, and they've all got different names that have been randomly assigned. And if we had ports exposed, it would tell us what those ports are as well. So we've got our containers running, and let's say we only want to see the last thing that was running.

Oh, I found it — I seem to have a different version of the app. Oh no. Hopefully an update fixes that. So we've got docker ps -l — "l" for latest — and that's going to show us the last container that we created, which is really useful. And let's say we only want to see the IDs of things: we can do that with docker ps -q — "q" for quiet — and this will just give us the IDs, without any of the other information, which can be useful. We can combine these too, so docker ps -l -q will give us just the ID of the last container that we started. So, we've had these running for a little bit — why don't we go ahead and take a look at the logs? Now that we have the IDs, we can use those to figure out what's going on in a container. We can do docker logs, and I'm going to do a --help again so we can figure out how this works. The way it works is we give it options and then a container, and it'll look for either a container ID or a container name. I'm going to use a container ID in this instance — I know this is one of my clocks. So let's do docker logs and then the ID, and you can see it's going to print all of the logs here — the entire log history of the container. Sometimes this might be too much information; I would argue this is already at the point where it's too much — really inconvenient and annoying. So let's take a look at the help again and see if there's anything useful there — cool, --tail will limit the number of log lines we get. So let's do docker logs --tail, and let's look at the last five lines of a clock container.
You can't see the bottom of my screen? That's pretty reasonable — okay, there we go, hopefully that's better. So, docker logs --tail 5, and there it is: it's only giving us the last five lines, which makes it a lot more manageable to keep an eye on the logs. And just like standard Unix tools, we can also follow the output: we can do -f or --follow. So: docker logs --tail 5 --follow, and then our container again — and you can see how I'm using these names and IDs interchangeably. This will hold on to the log stream, and unlike before, when I Ctrl-C this, the container is still going to keep running, so we don't have to worry about it stopping. So let's Ctrl-C out and talk about how we can stop some of these containers. I'm going to do a docker ps again to get my IDs up. We've got two ways of stopping things. There's docker kill — let's do a --help and see what that says: it's going to kill one or more running containers. And there's docker stop, which is going to try to stop the container gracefully. Kill is going to shut it down as soon as it can, whereas stop is going to send a terminate signal, and if the container hasn't shut down after 10 seconds, it's going to go ahead and send a kill signal. Stop is generally the better way, especially if you've given your code graceful-shutdown handling. So let's stop some of our containers: I'm going to do docker stop and then the ID of one of my clocks. This is going to send that terminate signal. The container isn't going to react to it, because it's just a shell script — it's not smart, there's no signal handling there, so it doesn't know what to do with the terminate signal.
And then 10 seconds after that, Docker is going to go ahead and kill it.
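To make that concrete: the clock script ignores SIGTERM, but a script with a TERM trap exits as soon as `docker stop` signals it. Here's a minimal, runnable sketch of that idea outside Docker (the log file name and the messages are made up for illustration):

```shell
# Start a clock-like loop with a TERM trap in a background subshell.
sh -c 'trap "echo caught SIGTERM, shutting down; exit 0" TERM
       while true; do date; sleep 1; done' > clock.log &
pid=$!

sleep 2            # let it tick a couple of times
kill -TERM "$pid"  # the signal `docker stop` sends first
wait "$pid"        # the trap fires and the loop exits cleanly
cat clock.log      # a few timestamps, then the shutdown message
```

Without the `trap`, the TERM has no handler in the script — which, inside a container, is why `docker stop` ends up falling back to a kill after the 10-second grace period.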

7. Stopping and Restarting Containers

Short description:

In this part, we learn how to stop and restart containers in Docker. The speaker demonstrates how to stop containers using the Docker kill command and how to view running containers using Docker PS. The process of detaching from a container and reattaching to it is explained, along with different commands for detaching on different operating systems. The speaker also mentions the Docker attach command for reattaching to a container. The process of pruning unused images and containers is introduced as a way to clean up the Docker environment. Finally, the speaker mentions that restarting containers will be discussed in the next part.

And the container stops. So now if I do a docker ps, you can see that I only have two clocks running. We can also do this with multiple containers at the same time: docker kill with that ID plus the name of the other one, just so we're mixing and matching — and there it is, it shut down both of them. It's much faster, because kill is instant, unlike stop. And now if we do a docker ps, all of those containers are gone.

So docker ps only shows us our running containers, and you might be thinking: okay, what do I have on my machine? How do I see everything? For that we can do docker ps -a, and that'll show you all of the containers on the machine and when they exited. This can be useful if you just want a general sense of what's on your machine. And each of these containers accumulates information based on what you're doing with it, which is really useful.
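Pulling together the inspection commands from this section (shown as an illustrative session — they assume a running Docker daemon, and `<container>` stands for an ID or name):

```shell
docker ps                         # running containers only
docker ps -a                      # all containers, including exited ones
docker ps -l                      # the most recently created container
docker ps -q                      # IDs only
docker ps -l -q                   # ID of the most recently created container
docker logs --tail 5 <container>  # last five log lines
docker logs -f <container>        # follow the log stream (Ctrl-C stops following,
                                  # not the container)
```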

Okay, so we've stopped containers. Let's talk about restarting them — we'll put them in the background, talk about how we attach to them, and then talk about restarting them. From Docker's view, the distinction between the foreground containers that we interact with, the ones that turn off when we use Ctrl-C, and the background containers running in detached mode is totally arbitrary. From its point of view, a container is a container, and it's going to run pretty much the same way regardless of whether a client is attached to it or not. It's always possible to detach from a container and reattach to it. Think of it like plugging a keyboard and screen into a physical server: the server keeps running whether or not anything is plugged in — plugging a keyboard in just gives you more control over the server — and you can still SSH into it even with no keyboard attached. So let's talk about detaching from containers. I'm going to go ahead and docker run -it an Ubuntu image again. Here we are, running it interactively, attached to it. The way we detach is Ctrl-P Ctrl-Q. So we've detached from that. If we had left it using exit, that would have killed the container, but by using Ctrl-P Ctrl-Q it's still running. Now if I do a docker ps, you can see that the Ubuntu container is still going — it's not doing anything, but it's still there. It's a little different on Windows, and I'm going to defer to — actually, are there any Windows users in this session, out of curiosity? I should have asked.
Yeah, there's a few. So on Mac the sequence is still Ctrl-P and then Ctrl-Q, and if you're using Bash or WSL or those kinds of tools on Windows, it's going to work the same way it does on Linux. But if you're using the plain Windows terminal: one, you are very brave, and two, I would recommend Googling how to do that specifically. And you can actually change the sequence too, if you want: you can do docker run --detach-keys and set what your key sequence is. So let's try this with the clock image again: docker run clock. This is running, and if I Ctrl-C it, that's going to kill it — but if I Ctrl-P Ctrl-Q, it's going to, theoretically, detach. Oops — there we go. I don't think I can type it fast enough, but yes, we're able to detach from it as well. Cool. So let me do a docker ps again and see what we've got going on. We have this Ubuntu container, and let's say I want to reattach to it. What I can do is docker attach and then the container ID, and that'll pull me back into it. So here we are in the Ubuntu file system — and if I exit and do a docker ps, you can see that it shut down, because I sent that exit command. So let's do a docker run -it ubuntu again and get this going. Now it's in the background, and a cool trick you can do, if you're into bash magic, is docker attach $(docker ps -l -q) — the command substitution hands it the ID of the last container. And if we hit enter — oh, I forgot to detach from that Ubuntu container, that is my fault. So: docker run -it ubuntu again, there we go, and now a Ctrl-P Ctrl-Q to detach.
Now, if I do that docker attach, it'll pull me into that last container again. So that's how to detach from an interactive container. And you can always kill things in Docker Desktop if you lose track of stuff — you can see it's already kind of blown up: every one of these run commands created a new container. It's really easy to delete stuff, too. One of the commands I really like is prune. There's a prune for images — docker image prune, and we can do a --help here — which will remove any images we aren't using, and there's a prune for containers: docker container prune --help. Let me go ahead and run a container prune, and that'll remove all of my stopped containers — it just cleans things up for me. This is a lot more manageable now, which is great. So, we've talked about logs already, and tailing and following them. Let's talk about restarting containers. Let's go ahead and make a new clock: docker run with the clock image.
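To recap the detach/reattach/cleanup flow from this section as one illustrative session (assumes a running Docker daemon):

```shell
docker run -it ubuntu              # interactive container; Ctrl-P Ctrl-Q detaches,
                                   # typing `exit` stops it
docker attach "$(docker ps -l -q)" # reattach to the most recently created container
docker container prune             # remove all stopped containers
docker image prune                 # remove dangling, unused images
```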

8. Working with Containers and Docker

Short description:

When restarting a container, use the 'Docker start' command followed by the container ID. The container will start in a detached state. Docker logs can be used to view the container's logs. Containers can be stopped using 'Docker stop' and the container ID. The 'attach' command can be used with 'Docker start' to show logs and keep the container running. Images can be found and downloaded using Docker. Building images will be discussed later. Containers may need to be killed if they don't have handling for stop signals. Adding code to handle stop signals allows containers to be stopped gracefully. The grace period can be extended by adding handling for the stop signal. If the container doesn't stop naturally, the 'kill' command can be used. It's important to handle the Sigterm signal to ensure containers stop correctly.

Well, this is running, and when I hit Ctrl-C, it kills it. We can do a docker ps --all and see: okay, here's the container ID of our clock, and we exited it about 70 seconds ago. Let's say I want to restart this container. We can do docker start and give it that same ID, and it's going to start the container in a detached state. If we take a look at Docker Desktop, we can see that it should be running — yup, it's running. And now we can do a docker logs with that ID, and this should give us all the logs — and you can see there should be a gap here. Yep, there's a gap between 42:43 and 43:07, and that's the time between me turning it off and back on. That's not really important, other than to show that it stopped and it restarted — you can restart that container. And you can do it with some other options as well, so let's do docker start --help and see what our options are. Here we've got this attach mode, which will make it work kind of like it did when we ran it. So let's do a docker stop and give it the container ID — this is going to take a second, because I should have used docker kill: it's a shell script, so it doesn't have a shutdown handler. Now, if we do docker start with the --attach flag and give it that ID again, it's going to show us those logs, and when I Ctrl-C, it keeps running as well. So that's docker start --attach. Cool. So let's talk about Docker images. We've just been using these images — ones that have been created by other folks. So I'm going to talk a little bit about images: how they're set up, and how to find and download them.
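Before moving on to images, the stop/restart cycle just demonstrated, condensed (illustrative; `<id>` is whatever `docker ps -a` reports for the clock container):

```shell
docker ps -a          # find the exited clock container's ID
docker start <id>     # restart it; by default this is detached
docker logs <id>      # the log shows a gap covering the time it was stopped
docker stop <id>      # or `docker kill <id>` for a script with no TERM handling
docker start -a <id>  # restart attached, streaming output like `docker run` did
```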
A little bit about tags, too — though I think I already talked about tags, so maybe I won't reiterate. And then we can get into making and building images, and this is where the figlet thing is going to come back. Ctrl-P Ctrl-Q doesn't work on Windows? Let's go ahead and Google that. So, Ingo's question is: why won't containers shut down sometimes and need to be killed? In the instance of the clock, the clock is just a shell script, so it doesn't have any handling for when a termination signal gets sent. So when I do a docker stop — let's look this up in the Docker docs, because the Docker docs are smarter than me — versus kill: what stop is doing is sending a stop signal, and the code running in the container needs to actually be able to handle that signal and do something with it. For example, in Node, if you send a stop signal, it's not going to do anything by default — you actually have to write code to handle it — whereas Flask just knows how to deal with a stop signal and can gracefully shut down. Let's see if there's an example of this in here — here we go: SIGTERM, that's where it was. So in this Node code, if it gets SIGTERM — essentially, if it gets that stop command from Docker — it's going to say that it received the SIGTERM, and in this example it's not going to do anything else. So yes, sometimes containers don't stop and need to be killed: the code you're running doesn't have handling for SIGTERM in it. You need to add something that deals with the SIGTERM, and then stop will work — and if it doesn't, then you'll have to use kill.
Stop will just wait for 10 seconds, and if the container doesn't exit naturally, it will follow up and send the kill. So you just need to add some handling for that SIGTERM — that's the difference there. Does that make sense? I'm going to assume that yes, it makes sense, until I hear otherwise. Cool — Dave says it makes sense. If you need more clarity, just let me know; I'm more than happy to answer. Amazing, it makes sense — I love it when I answer questions. "Is it possible to extend the grace period if needed? I.e., what if a process can't shut down in 10 seconds?" That is a great question, and — oh, here it is, it says right here.

9. Understanding Docker Stop and Images

Short description:

In this part, we discuss the usage of the 'docker stop' command with the '-t' flag to control the amount of time before killing a container. We then delve into the concept of images, which are a combination of files and metadata that form the root system of a container. Images are made up of layers that can add, change, or remove files and metadata. Containers, on the other hand, have a thin read-write layer on top of the image's read-only layers. We explore the difference between official images and images from organizations or private repositories. Finally, we touch on the process of creating and managing images, as well as the various namespaces used in Docker. Root namespaces are reserved for official images maintained by third-party software providers. Images can be created locally, pushed to Docker Hub, or pushed to third-party registries. We conclude by examining the images stored on the local machine.

So, in the usage of docker stop, it says that if we give it the -t flag, it'll wait a different amount of time. Let's look at that in the terminal as well — yeah, we can give it an integer for how many seconds to wait before it kills the container, and it defaults to 10, so you can change that to whatever you want. So yes, it's that -t flag.
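In command form (illustrative; `<container>` is an ID or name):

```shell
docker stop <container>        # SIGTERM, then SIGKILL after the default 10 seconds
docker stop -t 30 <container>  # allow up to 30 seconds before the SIGKILL
```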

Alright, I'm going to keep going. We're going to start talking about images a little bit. I know I've gone over this before, but images are the things that describe what you need to make a container. An image is a combination of files and metadata: the files form the root filesystem of the container, and the metadata is a bunch of info that comes along with it — things like who the author of the image is, what command to run when we start the container, and what any required environment variables should be set to. Images are made of layers. Conceptually, they're stacked on top of each other: each layer can add, change, or remove files or metadata, and images can share layers between themselves to optimize disk usage, transfer times, and memory use. So if you add a file, remove it, and then add the same file again, that last layer isn't really contributing anything new, because the content was already added by the first layer. That's what an image is. As an example, for a JavaScript app we might have a Node.js base image, and that Node.js base image might itself be built on top of another Linux image — say a CentOS base layer. So CentOS is the base layer; on top of that, someone has installed Node, so there's a Node layer; then we take that and add our own dependencies as a third layer; then we add our code and assets as a fourth layer; and then actually running the thing is a fifth layer.
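As a sketch, that five-layer stack maps naturally onto a Dockerfile, where each instruction contributes a layer (the file names and base tag here are hypothetical):

```dockerfile
FROM node:18-slim          # base image: Linux plus the Node runtime (itself a stack of layers)
WORKDIR /app
COPY package*.json ./      # layer: dependency manifests
RUN npm install            # layer: installed dependencies
COPY . .                   # layer: our code and assets
CMD ["node", "index.js"]   # metadata: the command the container runs
```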
So all these different layers stack on top of each other to form the recipe that's used to generate the container, and the container is the running process. The container layer at the very top is a read-write layer: the image itself is read-only, and the only way to change an image is to regenerate it — to build it from scratch — whereas a container adds a very thin read-write layer that contains all of the changes you've made. So that Ubuntu image from before stays the same size, but when I installed figlet, it created a tiny little layer, just for that container, holding those changes. Someone asks: can we add the database as another layer? You could totally do that, but my recommendation in that scenario is to run it as a second container. We'll talk about multi-container setups later in the workshop — we're going to cover docker-compose and how you can use it to run multiple containers at the same time, so you'd have your web server container and your database container as two separate things. But that's a good question. So, again, the difference between containers and images: an image is a read-only filesystem, and a container is a process running in a read-write version of that filesystem. To optimize containers, Docker uses copy-on-write instead of a regular copy, and docker run starts a container from an image — it creates a new container every time you use it. And if you have multiple containers running from the same image, they all share those base image layers, and each just has its own individual thin read-write layer.
So when we ran those clock instances, they all shared the same base image, and they each had their own read-write layer, which was mostly just being used to manage the logs. If you're an object-oriented programmer, I would equate it like this: images are similar to classes, layers are similar to inheritance, and containers are like instances, basically. And you might be thinking: alright, how do we change an image, since it's read-only? We don't — we just make a new image, and then generate a new container from that image. So let's talk about some existing namespaces. The way Docker is set up, there are official images. Ubuntu, for example, is an official image, so when I pull ubuntu, I get the official version of Ubuntu — and you can see on Docker Hub that it's reflected by that underscore in the URL. Whereas if we go to an organization — let's search for CircleCI, they're great — CircleCI here is an organization, and they're running their own versions of things, so there's the CircleCI Node image versus regular Node. The difference there is the slash: it's going to be org-name/node. You can also host images on private repositories — we just added support for private repositories to Docker Desktop for enterprise users — and those look like a URL, a port, and then your image info. So the difference is: circleci/node pulls the Node image from the CircleCI organization, versus just node, which pulls the official Node image. And that's the way the namespaces work.
Root namespaces are only for official images, and we generally prefer those to be maintained and authored by the third parties that make the software. So in this instance, Node is managed by the Node team, whereas Python, I believe, is actually something we manage ourselves — it's still an official image, but if you look at it, you can see it's maintained by the Docker community, so it's maintained kind of by us, and I think the contributor list is a mix of Docker people and non-Docker people. Taking a look at that clock image again, we can see it's an image maintained by an individual person, and it's named clock. If we click on this person's profile, we can see he's written a bunch of other stuff as well. So if we wanted his color image — I have no idea what it does — we would use his username, a slash, and then the image name. Cool, so hopefully that makes sense. When you've created an image, you can keep it in a few different places: locally on your machine, pushed to Docker Hub, or pushed to a third-party registry — and we'll go over how to do that in a little bit. So let's take a look and see what images we have on my machine right now. Here we have all of those images — I need to turn this off — yeah, here are all the images on my system.
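Side by side, the namespace forms covered above look like this (the private-registry hostname and the `someuser` name are made up for illustration):

```shell
docker pull ubuntu                              # official image: root namespace
docker pull circleci/node                       # organization image: org/name
docker pull someuser/clock                      # user image: username/name
docker pull registry.example.com:5000/team/app  # self-hosted registry: host[:port]/path
```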

10. Images and Tags in Docker

Short description:

In this section, we discussed different images available in Docker, including clock, busy box, and Ubuntu. We explored how to search for images using Docker search and Docker Hub. Pulling images using Docker pull and Docker run was demonstrated. Tags were explained as a way to specify versions and variants of images. The benefits of using slim images to reduce container size and improve performance were highlighted. The concept of multiple architectures and platforms supported by Docker was introduced. We also briefly touched on the topic of building images dynamically.

And these are hopefully ones that are familiar to you, because we pulled or generated them together: we've got this clock image, the busybox image that we used in the beginning, and that Ubuntu image. Then I have some other stuff for Docker extensions running on the side — we don't need to worry about those yet, but you can see them. I can also check this from the command line with docker images, and it'll give me a list with an ID for each image and tell me how big they are, which is very useful for keeping an eye on things. And I can do searches to see if I can find images, too. Maybe we can do a docker search clock and see what we find — there's a bunch of different clock stuff here — and we can also do that from Docker Hub. And remember, if you see something with just one level, without a slash, you know it's an official image, because it's in the root namespace. There are a bunch of different ways to get images. You can do a docker pull — let's do docker pull node, and that'll reach out and pull Node for me — or if you do a docker run and you don't have the image locally, it'll pull it for you. So we've pulled node; let's talk about tags a little bit. Let's say I've got this Node version, and for whatever reason running on latest — in this instance, Node 18 — is going to break something for me: I want to run on 16, or I want the slim version of 18. We can see we've got two 18s here; let's do slim. So here I've got my node image — this is taking a little while because it's a big image — so we're going to let this finish, and then we're going to go ahead and pull a tag.
The way we do that is docker pull node again, but we give it a colon and then say slim. Now you can see there are all these alternative tags, and several of them map to the same thing right now: if I use slim today, I'm going to get 18.3.0-slim. This is useful for deciding how much attachment you want to a specific version. If I just want the latest version of Node, I use latest, and whenever there's a major update — say we switch from Node 18 to Node 19 — I get pulled along to the 19 version, because the people maintaining the image build latest to track the newest version. And you can see how much granularity they've given. Say I'm using 18.3.0 — or 18.3.4, for example — and I want to stay on 18.3.4: I can pin at that level of granularity, and whenever there's an updated image, I stay on that 18.3.4 version. That's how you can make sure you're only getting minor updates, or major updates, by using those tags. So here I'm going to pull this slim image really quickly, with the colon, and you can see that some of the layers already exist for slim — they're shared with what we pulled previously, so it doesn't re-download them; the only thing it really downloads is the difference between the node image and the node slim image. And if we take a look here, we can see that the normal node image is almost a gigabyte, whereas the slim node image is about a quarter of a gigabyte.
The slim image only contains the minimal set of things needed to run Node, so you're probably going to have to install more dependencies yourself than you would on the regular node image, which comes with a lot more built in. But that can be helpful: if you want to run really slim images, get faster container startup, and get faster builds, using those slim variants is the best way to stay small. And that's tags: images can have tags, tags define those variants, and if you don't give a tag, it defaults to latest — which is usually the one that gets updated most often. If you're doing rapid prototyping, I'd skip tags; during this workshop I won't use them much, because I'm rapidly prototyping. But if I were going to push things up to Docker Hub for other people to use, then I would use tags. It's the same idea as a pip install or an npm install where you want specific versions of your dependencies: an npm install of a library at version X versus an npm install of the library in general. The question from Peter is: can you see what the latest version actually is? The easiest way is to head over to Docker Hub and find latest — you can see here that latest is attached to this 18 image. That's my recommended way of doing it, and you can click on it and actually see the script they use to build the whole thing, which I think is pretty useful too, if you're curious about that stuff.
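The pinning levels described above, as pull commands (version numbers are examples):

```shell
docker pull node              # same as node:latest — tracks whatever is newest
docker pull node:18           # follows the latest 18.x.y release
docker pull node:18.3.0       # pinned exactly; this tag's contents shouldn't move
docker pull node:18.3.0-slim  # pinned, minimal variant
```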
Images can also support multiple architectures. When you do a build, it defaults to your local machine's architecture, but if you want an image to run on a different kind of machine, you can give it a different architecture or platform variant. I have a Raspberry Pi project that I've built — let's go to my profile — and you can see here that I've got my latest, plus a variant for amd64, which is what runs on my MacBook, and a variant for arm/v7, which is what runs on my Raspberry Pi. So you can build for different platforms, and Docker can support and take care of that too. And if we go to Node — they've probably done this more gracefully than I have — you can see they've got a list of all the architectures they support. You can even run it on IBM Z, which I think is a mainframe, for folks who use mainframes — which I think is pretty cool; I'm a little old school like that. So: we've talked about images and layers, we've talked about namespacing, and we've talked about how you can search for and download images. I think that's a good section, and maybe we can talk about building images now. There is a way to do this dynamically: if you go into that Ubuntu container and make changes, you can track those and actually freeze the image. But I don't really like doing it that way — I think it leaves too much room for error — so I'm not going to teach you how to do it, but I do want to make you aware that it exists.
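For reference, cross-platform builds like that Raspberry Pi example are typically done with `docker buildx`; a sketch (the image name is hypothetical):

```shell
# Build for two architectures at once and push the multi-arch image
docker buildx build \
  --platform linux/amd64,linux/arm/v7 \
  -t someuser/pi-app:latest \
  --push .
```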

11. Building Images with Docker Files

Short description:

In this part, we discuss the concept of Dockerfiles and how they are used to build images. A Dockerfile provides a series of instructions for creating an image, which can be executed using the docker build command. We explore the process of choosing a base image, such as Ubuntu, and running non-interactive commands within the Dockerfile. The docker build command allows us to name the image and specify the path to the build context. We can also use caching to speed up the build process; the --no-cache option ensures a fresh build.

If you need it. I think manual processes are bad and automated processes are good, because automated processes are easier to repeat. So I'm going to jump straight to building images automatically, and for that we're going to start talking about Dockerfiles. In this next segment we're going to write a Dockerfile and build an image from it. So what is a Dockerfile? We actually looked at one just a second ago when we clicked into that image on Hub. A Dockerfile is a recipe for how to make an image: it gives Docker a series of instructions for how an image is made, and then we use the docker build command to build that image. Dockerfiles need to live in their own directories.

And so to get started, I'm going to do some terminal stuff, because I like terminal stuff. We're going to go into my Docker folder — you can do this wherever you want on your machine. I will admit right now, for folks watching, that my weak point as a programmer is spelling, so I will probably spell things wrong and that will cause things to not work. Feel free to shout at me if my spelling is bad. I often misspell my own name, so when I want to push things to my account it won't work because I spelled my name wrong. Don't be like me — pay attention in grade school when they teach you that stuff — or just give me a shout if I'm confused about why things aren't working and it's because of a spelling mistake. So here we are in my Docker folder, and I'm just going to create a new directory called react-workshop. We've got our new directory, we can cd into react-workshop — it's empty — and I'm going to open it in Visual Studio Code, just because I like VS Code. You could do this in Sublime, you could do this in Vim, in whatever; I'm not opinionated here, and VS Code is really good now. So here I'm going to create my first file and call it Dockerfile. Save it — Dockerfile. Cool. Let me make the font bigger so it's easier to see. Okay, so we've got this empty file, and the first thing we need to do is decide: where is this coming from? What is our base image? We can start from nothing if we want — from scratch — but in this instance I want to take advantage of systems that are already there. So we're going to go back to that figlet example, and I'm going to pull Ubuntu.
Let's go ahead and take a look at Ubuntu on Hub and see what our options are. I'm going to use Ubuntu, and I'm just going to use the latest version — 22.04. So we're going to start with FROM ubuntu, and since we're using the latest one, I'm not going to set the tag. Next up, we're going to run the commands that we ran manually before. We're going to use RUN, and each RUN line is executed during the build. RUN commands have to be non-interactive — when we did the apt-get install before (or apt install; sorry, I'm old school, I'm still struggling with the fact that they changed apt-get to apt, I don't know if anyone else has that problem) we answered prompts by hand, so here we need to make sure it runs non-interactively. We're going to start with RUN apt update, which reaches out to the Ubuntu servers and gets the package index, and then RUN apt install figlet -y. The -y passes "yes" to apt so it won't need me to confirm manually. Then we're going to save this Dockerfile and build it. So here we go: docker build. Let's look at the help for this command — there are a bunch of different options, and this is all in the Docker docs too, if you don't like looking at things on the command line. The way it works is docker build, then options, and then a path to our build context. The option I always think is important is -t: -t lets us name our image, so Docker won't randomly assign a name.
And it takes a name:tag format. So I do docker build -t figlet . — the dot means the path is just this directory — and hit enter, and this starts running all of the instructions I gave it. It takes a while the first time because I don't have any cache; about nine seconds, give or take. Yeah, 8.2. You can see it went ahead and pulled that Ubuntu image, ran the update, and installed figlet. Now, if I do this again, Docker maintains a cache for me, so it runs way faster — you can see it says CACHED for each step and it finishes in no time at all. If I want to run this without the cache, I can give it the --no-cache option, which makes sure it runs fresh. That's something that's useful every once in a while. (And why did I mess this up just now? Because I put the flag in the wrong place, that's why. It's --no-cache.) So let's say the package index has changed, or you need something to rebuild from scratch: with --no-cache it's going to run the full thing again and take that nine seconds. And there we go.
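Putting those steps together, the Dockerfile so far is just:

```dockerfile
# Latest Ubuntu base image (no tag = latest)
FROM ubuntu

# Refresh the package index, then install figlet non-interactively
RUN apt update
RUN apt install figlet -y
```

Built with `docker build -t figlet .`, or rebuilt from scratch with `docker build --no-cache -t figlet .`.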

12. Building and Inspecting Docker Images

Short description:

We've created a container from a base image and inspected it using the 'dive' tool. Dangling images are temporary files that can be deleted using 'Docker prune'. The image consists of three layers, with the base image being 78MB and figlet adding 1.5MB. We can now run the figlet container and use the figlet command. To set a default command, we can use 'CMD' or 'entrypoint'. The Docker build command can be used to build the image.

It's a nine-second build. On the back end, Docker is using BuildKit to take care of this stuff, which is really great. And we've created an image from this base image. So if we go back to Docker Desktop, we should see that we've got this figlet container here — or sorry, this figlet image — created less than a minute ago, with an ID and all that. We've also created a dangling image here, but that will get removed and cleaned up in a little bit. We can see the file size, and if we click into the image, we can inspect it and see how it was made — all of this stuff here is from the base image. A tool I really like to use here is called dive, and dive lets you inspect each layer in a Docker image. I want to point it out really quickly because I really like it as a way to check what's going on in each of your layers and to get a general sense of file size and the things you're adding. So let's run dive on this really quickly: dive figlet. It takes a little bit to inspect the image, so while it works — what does a dangling image mean? That is a great question, let's pull it up in the docs. A dangling image is one that is not tagged and not referenced. Dangling images get made during the build process; they're kind of like temporary files. I wouldn't worry about them — I would just delete them if you see them.
Docker's prune commands are really good for that; they'll just clean your system up. It's just temporary files that are there and helpful during the build process. So here we go: dive is ready, and we can see that we've got three layers, and each of them runs different things. This is our base image here — the Ubuntu image we started with — as the first layer, and we can see that's 78 megabytes. Next, we ran that apt update, and that attaches 34 megabytes to our figlet image — so now our image is 34 megabytes bigger. Then it actually installs figlet, which adds another 1.5 megabytes. This is a way to get a general sense of what's being added and removed, keep our disk space under control, and go inspect things. We could definitely rewrite this image to make it more efficient — maybe only update the packages we're going to use — but for right now, I don't think that's important. And so, like before, if I run docker run ubuntu figlet Hello in the command line, it fails, because the plain Ubuntu image doesn't have figlet installed. But if I use this new figlet image, it uses that Ubuntu base and has already gone ahead and updated and installed the application for us. So I can do docker run figlet, give it the command figlet Hello, and there we go — it gives me output, which is pretty useful. I'm pretty happy with this. All we've done is make a version of Ubuntu that has this one package pre-installed, which is really great. I now have an automated way to get this base image with stuff installed.
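For the dangling-image cleanup mentioned above, the prune subcommands are the usual route:

```shell
# Remove dangling (untagged, unreferenced) images only
docker image prune

# More aggressive: also removes stopped containers,
# unused networks, and build cache
docker system prune
```

Both prompt for confirmation before deleting anything.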
Now let's say I want to change my syntax, or I don't necessarily want to have to type this command in every time. Right now I can type whatever I want into this image, because it's just an Ubuntu image with one package installed — docker run figlet echo Hello works fine. Let's say I want to only use the figlet command, or I want to bake a command in. We can do that too, so let's talk about CMD and ENTRYPOINT. This is how we set the default command that gets run in the container. Say we want a default of docker run figlet figlet -f script Hello — the -f tells figlet to use the fancy "script" font, and Hello is the message — and we want plain docker run figlet to give us that same output. We can definitely do that. The first version of this uses CMD. CMD tells Docker the default executable to run and the parameters to use. You can give it all of this information in JSON format — but actually, we're not going to use JSON format here; we're just going to type it in shell form: CMD figlet -f script Hello World. (It keeps autocompleting to the "small" font, but we're going to use "script".) So now I've given it a default command to run when we don't give the container any command. If I add more CMD lines, each one overwrites the previous — you can technically put multiple CMDs in here, but only the last one gets picked up, so there's really only a need for one CMD per Dockerfile. Let's go ahead and build this image: docker build, we're going to call it figlet, and we're going to build it right here.
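The Dockerfile at this point looks like:

```dockerfile
FROM ubuntu
RUN apt update
RUN apt install figlet -y

# Default command (shell form); any command passed to
# `docker run figlet ...` replaces this entirely
CMD figlet -f script "Hello World"
```

Rebuilt with `docker build -t figlet .` as before.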

13. Using Entry Point and CMD in Docker

Short description:

We can use ENTRYPOINT to specify a fixed command while maintaining default parameters. CMD allows us to specify a default that gets appended to the entrypoint. Both ENTRYPOINT and CMD can be used together in JSON format. By manually setting hosts or ports, we can override defaults and provide custom information. We can also override the entrypoint with the 'docker run --entrypoint' flag — for example, 'docker run -it --entrypoint bash' to get a shell. To recap, we've built and run an image, discussed using ENTRYPOINT and CMD together, and overridden the entrypoint. As an assignment, we can build our own Dockerfile using cowsay instead of figlet.

And you can see it's using that cache here as well, so it's pretty fast. Now if we do a docker run figlet, we get that Hello World output — assuming I didn't mess this up... docker run figlet — there we go, it worked. We get this Hello World out of it, so now it defaults to a command for us. We can also override this: if we just want a shell, we can do docker run -it figlet bash, and now we're in that shell and I can still do stuff there if I want. bash overrides the value of CMD — any command I give on the command line overrides the default CMD. So let's talk about ENTRYPOINT. ENTRYPOINT lets you specify a fixed executable while still maintaining default parameters. Let's say I want to do a docker run and just give it a message. Our new Dockerfile line looks like this: ENTRYPOINT ["figlet", "-f", "script"] — Docker requires this in JSON format, and the -f script sets the font. Now when I build and run this, docker run figlet "Hello React" goes ahead and appends whatever I pass after the image name onto the entrypoint, so it lands right here at the end, and the whole thing gets run. I would recommend using the JSON syntax whenever you write these, because otherwise Docker wraps the command in sh -c — it runs it at a shell level — and it can get a little funky. So I would recommend just wrapping this all in JSON syntax, especially if you're using command-line options and things like that; otherwise it'll start messing up.
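The ENTRYPOINT variant of the Dockerfile, in the JSON (exec) form being recommended:

```dockerfile
FROM ubuntu
RUN apt update
RUN apt install figlet -y

# Exec (JSON) form: arguments from `docker run figlet ...`
# are appended after "-f script", so they become figlet's message
ENTRYPOINT ["figlet", "-f", "script"]
```

So `docker run figlet "Hello React"` effectively runs `figlet -f script "Hello React"` inside the container.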

So we've built our image, we've run our image, and you might be thinking: all right, this is pretty cool, but can I use ENTRYPOINT and CMD together? The answer is yes, and in this instance both of them have to use that JSON format. What CMD lets me do is specify a default: if I don't give the container any arguments, Docker appends the CMD value to the end of our entrypoint. So we can say CMD ["Hello World"] as our default and build it — docker build -t figlet . — and it goes ahead and runs. Now if we do docker run figlet with nothing, it executes Hello World, and if we give it something custom, it runs our custom thing for us. This way we get the best of both worlds: a version that defaults to something when we run it bare, and a version that takes custom information. When we start doing web servers this gets interesting — maybe you have default hosts or ports and you want to be able to override them manually. So: we can give it arguments, they override CMD, and the entrypoint is still used. And you might think, okay, I want to override my entrypoint — and you can totally do that. If we run docker run figlet bash, it just prints the word bash, because bash becomes the argument to figlet. But if we want to override the entrypoint itself, we can do that too, with the --entrypoint flag: docker run --entrypoint bash... and, nope — it helps if I type the image name in. Or maybe we can just go into the interactive shell instead. Sorry, I had the order of the arguments mixed up: the flag goes before the image name, and the image's arguments go after it. Let's try --entrypoint echo.
Hello world — there we go, that's going to work. Nope, I need to put that in bash syntax. I don't run commands this way normally beyond teaching, so I always forget what the syntax is off the top of my head. That didn't work either — okay, let's just do the terminal version then: docker run -it --entrypoint bash figlet, in interactive mode so we stay attached to it. Yeah, there we go. Now I've overridden that entrypoint to just get into bash, and I'm back in that root Ubuntu shell. So if you need to override the entrypoint, you definitely can — I very rarely do it, so I don't remember how particularly often without looking at my notes. As a recap: we've built an image here, and we've gone ahead and run one. And I think it might be fun to do a little assignment — I'm going to make you do some work. What I want everyone to do is build your own Dockerfile, but instead of using figlet, I want you to use cowsay. Let me go into this really quickly: apt install cowsay, and I'm going to say yes. I can also install fortune — fortune just gives you your daily fortune... actually, let's not use fortune, let's just use cowsay, to keep it simple. Or — let me install fortune-min, which I think will actually give us a list of fortunes. fortune... it's not installed. Okay, that's fine.

14. Moving Files and Running Code in Python

Short description:

In this part, we discuss moving files inside a container and running code in Python. The speaker provides a Python script as an example and explains the process step by step. They also mention using a slim Python base image and demonstrate how to copy files into the container using the COPY instruction in the Dockerfile.

What have I done wrong? cowsay... hmm, it's not finding the command. I guess I can't give you an assignment if I can't do it myself. Oh — it's installed in a weird directory, that's the issue here. Okay, apt install cowsay, yes — it's installed to /usr/games. /usr/games/cowsay — okay, so I'm still standing by this assignment then. Here's my assignment for y'all: I want you to build a Dockerfile. In that Dockerfile I want you to install cowsay, and then I want you to have it run the cowsay command and set a default, so it'll say something by default. I also want you to set the entrypoint so you can override it and say your own thing. And I want you to build it, so you get this cowsay image. I'm going to give y'all 10 minutes. Let's keep going.
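One possible solution sketch for the assignment — not shown on screen in the workshop, but following the same pattern as the figlet image; the `/usr/games` path matches where Ubuntu's `apt install cowsay` puts the binary, as discovered above:

```dockerfile
FROM ubuntu
RUN apt update
RUN apt install cowsay -y

# cowsay lands in /usr/games, which isn't on the default PATH
ENTRYPOINT ["/usr/games/cowsay"]

# Default message; any argument to `docker run` replaces it
CMD ["Moo from Docker"]
```

`docker build -t cowsay .` then `docker run cowsay` prints the default, and `docker run cowsay "your own thing"` overrides it.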

So this has been pretty fun so far, but it's not super useful yet. What we're going to do now is talk about moving files inside of your container: you've got files outside your container and you want to push them into it. This is where we're going to start doing code stuff. The code isn't super important here — it doesn't really matter, so I can do whatever function I want, in whatever language I want, and it will be fine. Because I am a Python developer, I'm just going to do this in Python, and hopefully no one gets too mad at me. I know this is a React conference workshop, but we're not at the web-server point yet, so I think it's okay to do stuff in some other languages. So I'm going to create main.py, and I'm just going to Google "cool and fun Python projects for beginners" — this is always my hardest part, deciding what dumb example to make during the workshop. Okay. Here's a script I found — some random script on the internet that looks interesting and doesn't look too crazy. What we're going to do is import calendar — which I believe is not a default Python package, but we'll check that — and import sys, so we can read the command-line arguments from sys.argv. I'll start with a quick print of the arguments. Let me run python main.py really quickly to double-check it works... yeah, great, we got our arguments and it's printing, so we've got our data. Next up, we're just going to print calendar.calendar() for the year we give it.
And this will print out the calendar. So this is the basic little Python script I've built — it doesn't require any dependencies yet. Let me check the output... yeah, I think this is good. Okay, so now I've written some code — pretty basic, like three lines, which you all saw — and I'm going to use Docker to run this. So we're going to start talking about moving code over. I'm going to pull up my original Dockerfile and just delete the contents and start over. This time I'm going to use a different base image; let's go find it. I'm going to search for python, and there are a bunch of different options here. I always like using slim, so I'm going to use slim Python: the minimal version, currently running 3.10.5. So we're going to start with FROM python:slim — python with slim as our tag. Then we're going to need to bring that folder over, and let me show you the ways we do that. There are a couple of options. First off, we can do a COPY, which asks for the source and then the destination. So COPY . . copies everything in the directory into my container.

15. Copying Files and Running Code in Docker

Short description:

I personally like using WORKDIR to set the directory where instructions run and files get copied. I copy the main.py file to the current directory and create a .dockerignore file to exclude certain files, copying the default Python one from GitHub. I run the main.py file with a default value via ENTRYPOINT and CMD. The docker build command copies files and caches the WORKDIR step. I install a fun Python module, create a requirements file, and run 'pip install -r requirements.txt' to install dependencies. Finally, I run the Docker container and get a calendar and emoji output.

I personally like using WORKDIR. WORKDIR sets the directory that every instruction after it runs in, so any COPY lands in that directory, and I like to always use an /app directory to keep things clean. In this instance I don't necessarily want to bring everything over, so I'm just going to copy the main.py file, and it gets copied to the current directory. And while we're here, we can also create a .dockerignore. It's just like a .gitignore: anything I put in this file gets ignored by the build and won't get brought over — if I do a generic "copy everything", anything in the .dockerignore won't come along. So if we had any __pycache__ or anything... right now the project isn't interesting enough to worry about it too much, but it's something to keep in mind. There are good default .dockerignore files on GitHub — this is the default Python one, so we can use that. So: we've copied our file over. Next we want to run it, so I'm going to add an ENTRYPOINT of python main.py, and set a default argument of 2022. Then docker build -t calprint .. This is going to be a little different here: this time it copies a file over into the working directory. Let me run it one more time, and we can talk about the way it's cached.
So here we've got cache hits for the WORKDIR step and the copy of the main file. This is smart: if Docker notices that anything has changed, it reruns that command — and also everything after it. So let's add a fun dependency. pprint? No, that's part of the standard library — we need something that isn't. Let me search for interesting Python modules... oh, these are cool: pyjokes, emoji, ASCII art. Let's install emoji. For local development, I'm going to just create a quick virtual environment — it's fine. So we're going to create our requirements.txt and add emoji to it. Then in main.py I'll import the library and just print a random emoji. So this is my file: I have a dependency that I'm going to need to install, and then it just prints. Now I go back to my Dockerfile and make a change. First off, I need to copy over my requirements.txt, so: COPY requirements.txt .. Then we do a RUN command: RUN pip install -r requirements.txt. That's the default way Python installs requirements — I wouldn't worry about it too much, it's not super important, but the important part is that we're doing this in multiple steps, and we can take a look at why in a little bit. So let's go ahead and do a docker build again; let me pull up my terminal.
So here, let's do docker build -t calprint . — let's see if I got this right. Cool. So it remembered that the WORKDIR step was cached, kept that, and everything after it is new: the copy of requirements.txt is new, and then it just does everything fresh after that. So it's gone ahead and installed our dependencies and copied our main file. Now, docker run calprint should give us a calendar and an emoji. And I can also give it my own date — 2011 and 08 — and this gives us the calendar for August 2011. So what we've done here is copy code over, and this is pretty useful once you actually need to start using Docker for stuff: this is where we're going to put our web-server code, and all sorts of other stuff as well.
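The Dockerfile at this stage, reconstructed as a sketch — the step order matters for cache reuse, since requirements are copied and installed before the code so that editing main.py doesn't re-trigger pip install:

```dockerfile
FROM python:slim

WORKDIR /app

# Dependencies first: this layer stays cached until requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt

# App code last: editing main.py only invalidates the layers from here down
COPY main.py .

ENTRYPOINT ["python", "main.py"]
CMD ["2022"]
```

`docker run calprint` uses the default year; `docker run calprint 2011` overrides it.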

16. Pushing Python Code to Docker Hub

Short description:

In this part, we continue discussing the Python code and the process of pushing the image to Docker Hub. The speaker explains the steps to rebuild the image and name it with their username. They demonstrate how to check the image on Docker Hub and discuss the concept of image scanning. The speaker answers a question about using the terminal for Docker push and provides an example command. They encourage the audience to run the code and share their results. Finally, they mention running another command to prove that the code works.

So, let's keep going. Ooh — you want to see the Python? Yeah, totally, I'll copy-paste the final version into chat. Sorry, I didn't do the cow version. I'll hang on for just a sec so y'all can do this.

Okay. So in the requirements file, we just list the external dependency that we're using — in this case, it's just emoji; everything else is part of the default Python system. I mostly did this so I would have an excuse to install something and show the command to install the requirements.

Yeah, it's going to be pip install. Next, we're going to push this up to Docker Hub. We've built our first image here — or a second image — and I want to publish it, to add it to my Docker Hub account. What that means is that other people will be able to pull it and run it themselves. To do that, we're going to rebuild the image with docker build, and this time I'm going to name it with my username: shyruparel — and I did spell my name right, great — and I'm going to call it calprint, so shyruparel/calprint, and hit build. This is going to put the image together under that name... and I am not sharing my screen, that is my bad, sorry. Here we go — hopefully that's coming through. Yes? All good. Cool. So we do docker build with -t for the name, using my username — if you're running this, you'll need to use your own account, whatever your username is — then calprint, because that's what I'm naming it, then a period for the path. It does all the loading and such. Now, I'm logged into Docker Hub in Docker Desktop, and if I go to my images section, we can see shyruparel/calprint. It actually has the same SHA as the other image we created: instead of keeping two copies, Docker just notes that calprint and shyruparel/calprint are the same thing. Unless they diverge, they maintain the same SHA, and it won't give you two copies on your local system.
So all I need to do to add this to Docker Hub is hit this Push to Hub button. It's going to take a little bit, since it's about a 130-megabyte image and this is the first push, so there's no diffing or anything fancy. There we go — it's pushed up to Docker Hub. Now if I go to my Docker Hub account at hub.docker.com, you can see calprint there as well. And there's some fun stuff we can do in here, too. One thing I always like to do is turn on image scanning — we offer image scanning through a partnership with an organization called Snyk. They do charge for it; I think your first ten scans are free and then they start charging. We've got a question from Chintan: can we do this in the terminal? The answer is yes — docker push is the command for that. It'd be docker push and then the image name: docker push shyruparel/calprint, and it'll push it. This is one of those times where I actually prefer to do it in Docker Desktop instead of the Docker command line — some things I prefer to do visually, some things I prefer to do in the terminal, and this is one of the visual ones. So now I have this up on Hub, and you all can actually run the code I wrote. Now that it's public, we can switch to the public view, and you can do docker run shyruparel/calprint and it'll just work on your machine. If you type that command in, it'll say the image isn't found locally, reach out to Hub, pull it, and then it'll just work — which is really nice.
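The terminal equivalent of the Push to Hub flow, as a sketch — substitute your own Docker Hub username for `shyruparel`:

```shell
docker login                             # authenticate with Docker Hub once
docker build -t shyruparel/calprint .    # tag as <username>/<image>
docker push shyruparel/calprint          # upload the image to Docker Hub
docker run shyruparel/calprint           # anyone can now pull and run it
```

On the first `docker run` from another machine, Docker notices the image is missing locally, pulls it from Hub, and then runs it.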
So if you want to share your code, you can do it that way. Maybe someone could run this and give me a thumbs up that it worked — and I think the easiest way to prove this to you is to run something else from Hub that isn't on my local machine. Oh, Peter says it works.

17. Building for Different Platforms

Short description:

To build for a different platform, pass the --platform flag to docker build with the desired target, and the image will be built specifically for that platform. Push the tag to Docker Hub and pull the appropriate version for your machine. This approach is simpler than using manifest files and is suitable for an introductory workshop.

Cool. Peter, I trust you enough for the whole class. And if you want to do it from the CLI, just make sure you've logged into Docker on the command line — that's just docker login. Brilliant, I love it. Emil got an error? Oh no, what happened? What's the error — are you on an M1, by any chance? I've only built this for my machine. Yeah, that's the problem. Okay, I can fix that for you. So here's the thing: when I built this, it defaulted to my local platform. What we can do is a multi-arch build to make sure it builds for both M1s and everything else. Let me pull up my notes on how to do that really quickly — there's a thing I usually only show off if I run out of time, but I think this is a good moment for it. The easiest way for me to do this is with a tag: we're going to add one flag to the build, --platform, which takes a string. (What's the one for M1? linux/arm64, right — arm64 for Macs.) Whenever you run a docker build, it defaults to whatever your local machine is; but in this instance, since you're running an M1, let me tag it arm64 and build it specifically for that platform. Then I can push that tag up, and you'll be able to pull the M1 version, and that should work for you. There are more ways to do this where it's kind of built in — Docker Hub will automatically have the image built for different platforms and pull the appropriate one.
But that's kind of complicated, and I don't want to get into explaining docker manifest commands. So this is the easy, cheap, cheating version — if you were actually going to push this to production, you'd use the manifest files. But since this is an intro workshop, I don't want to overcomplicate things; we're just going to use the --platform flag for now. This is going to take a little bit, since my machine isn't an M1 — it's an Intel machine — so it's going to take longer than normal: almost a minute, where the native build took basically no time. Then we can do a docker push shyruparel/calprint:arm64 — take a sec. And now if you do docker run shyruparel/calprint:arm64, with the tag, it should work — for anyone on M1 silicon, or basically anyone not on an Intel CPU.
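The per-platform workaround from the transcript, plus the "built-in" multi-platform alternative he alludes to — `docker buildx` — which publishes a single tag covering both architectures. The image name is assumed from context:

```shell
# cheap version: build and push a separate tag per platform
docker build --platform linux/arm64 -t shyruparel/calprint:arm64 .
docker push shyruparel/calprint:arm64

# production version: one multi-platform tag; Docker Hub serves
# the right architecture automatically on pull
docker buildx build --platform linux/amd64,linux/arm64 \
  -t shyruparel/calprint --push .
```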

18. Networking and Running Containers

Short description:

I mentioned we were going to talk about networking and running containers. We ran a simple NGINX server on port 80 and connected to it via localhost. We inspected the container to find the exposed port and learned that multiple NGINX containers can't all bind the same host port. We can set the host port manually when running the container.

I mentioned we were going to talk about networking, so let's do some networking stuff. So far we've run containers that haven't needed to expose anything — we haven't had to connect to them in any way. But I mostly use containers for web work, so I need to expose ports and work with them.

So I'm going to start by running a really simple service: NGINX. Let's take a look at Docker Hub and see if we've got NGINX there — I mean, I know we have NGINX up there, I'm just being overly thorough about it. It probably also helps if I spell nginx correctly; that's important. So yeah, here we go: the official nginx image. It's going to run a pretty static web server for us, listening on port 80, and it'll just serve the default welcome page. So here's what we're going to do: docker run, with -d to detach again, and -P — a capital P — which publishes all the image's exposed ports to random host ports, and then the image name, nginx. Okay, so that's running in the background, and if we take a look at Docker Desktop, we can see there's an nginx container running. Now we want to connect to our web server, so let's do a quick docker ps. We can see that Docker has mapped a local port on my machine — 55000 — to NGINX's port 80. So if we go to localhost:55000, we see the welcome page for NGINX. We've exposed port 80 in the container and mapped that host port on my local machine to it, which is really great — we can use localhost to connect to it, and you can use 127.0.0.1 as well. It's also possible to find the address of the Docker VM and use that as your IP address, but I think the localhost way is a lot easier.
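The commands from this step, as a sketch — the randomly assigned host port will differ on your machine, so substitute whatever `docker ps` reports:

```shell
# run NGINX detached, publishing all EXPOSEd ports to random host ports
docker run -d -P nginx

# find the mapping Docker picked, e.g.  0.0.0.0:55000->80/tcp
docker ps

# hit the welcome page on that host port
curl http://localhost:55000
```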
So you might be wondering: how did Docker know what port to attach to? We can check that out really quickly with docker inspect. Let's look at the help for docker inspect: it "returns low-level information on Docker objects" — it pulls the metadata of the container for us. (I'm pretty sure dive will show this too, but let's do it the inspect way.) docker inspect takes a --format flag with a Go-template string, and we can look quickly at the docker inspect docs for the structure of what to pass — there are a bunch of things we can ask for, and the docs have some examples. I'm just going to use {{ .Config.ExposedPorts }}. So this inspects my NGINX container, and it tells me that port 80 has been exposed — which means that somewhere in the Dockerfile they've written EXPOSE 80. We can also see this with docker history: that shows the Dockerfile steps, everything that happens when the nginx image is created, and you can see right there they're exposing port 80. So our container here has just that one port open.
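Roughly what that inspect step looks like — use the container name or ID that `docker ps` shows for you:

```shell
# ask just for the ports the image EXPOSEs
docker inspect --format '{{ .Config.ExposedPorts }}' <container>
# prints something like:  map[80/tcp:{}]

# or read the image's build history, including the EXPOSE instruction
docker history nginx
```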
So you might be wondering: why can't we just connect directly to localhost:80? Well, my MacBook only has one port 80. If I'm running multiple NGINX containers, only one of them can bind that host port — if multiple containers want the same port, only one is going to get it. So by default, published containers just get random host ports — not true random, like crypto random, just arbitrary. But we can set the port manually, and name the container while we're at it. Let me do a docker stop on that container ID — and you saw how fast that was; NGINX has a built-in handler for SIGTERM. Let's look up how to name containers: --name string is the flag. So: docker run --name nginx, the nginx image, -d for detached, and -p for publish.

19. Mapping Ports and Docker Compose

Short description:

We can use lowercase 'p' to specify how we want to map ports between our local machine and the container. By using the syntax 'localPort:exposedPort', we can choose any number for the local port. For example, '80:80' would map the local port 80 to the container's port 80. This is useful for running multiple web servers. Docker Compose is a powerful tool for managing multiple containers at once. It allows you to easily start and manage services defined in a compose file. I recommend checking out the Docker documentation and resources like the Docker cheat sheet and Brett Fisher's Docker Mastery course for further learning. Feel free to reach out to me for any questions or assistance.

And instead of a capital P, we're going to use a lowercase -p, and then we pass it the syntax of how we want things to map. It's always your local machine first, then the container — just like the copy command was copy file from local machine to container, this is map local port to exposed port. You can pick any host port you want — I could do port 123, I could do 8080 or 8888, it doesn't really matter — then a colon, and I'm going to say I want it to map to port 80 of the container, on nginx. Whatever's on the left side is the port on my local machine; the right side is where it goes in the container. Then I give it the nginx image. So it's running now — a docker ps shows it's mapped to port 8888, and if we go to localhost:8888, there's the NGINX page again. So, just a reminder, the syntax is port-on-host, colon, port-in-container. This is going to be really useful if you need to run multiple web servers, for example. My coworker Ajeet has a really interesting project I can show really quickly — it's in the awesome-compose repository, which is our collection of example setups. I wouldn't recommend actually running this one if you're typing along; I'm just showing it quickly to show why this is useful. In this file, he's running the same web server two different ways: web1 and web2. You can see he's using host ports 81 and 82, both mapped to the same container port — he's just running the same code twice, and the only difference is the host ports.
But again, I wouldn't recommend running that or digging into that directory — it's just an example of why this port mapping is useful. I use -p pretty much every time I run docker run, to expose ports manually. You don't necessarily need to — you can let Docker decide itself — but I prefer to manage it manually.
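The mapping he's describing, as a sketch:

```shell
# host port 8888 -> container port 80; container named "nginx"
docker run --name nginx -d -p 8888:80 nginx

# the default welcome page is now on a port we chose
curl http://localhost:8888
```

Because each `-p` picks a distinct host port, you can run several copies of the same web server side by side, e.g. `-p 81:80` and `-p 82:80`.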

So, I'm getting near the end of what I wanted to cover in this workshop — we're at the two-and-a-half-hour mark — and I want to check in and see if y'all have any questions you want me to answer, if you're looking for resources for what to do next, or if there's anything useful I could cover in the last half hour. "How does Docker Compose fit into this?" That's a great question, Felix. Docker Compose is really awesome — it's kind of the next level of all this: it's how you manage multiple containers at once. So far we've been running containers individually, and hopefully I've given you the tools you need to write and run code and expose the ports of those servers. But say you want to run multiple containers together — you have a database, a web server, and a load balancer. Docker Compose is really good for managing all of that: it's a file format for describing the whole setup. The example in the Docker docs has a compose file running two services — a web app and Redis — and they start at the same time. Or we can look at Ajeet's example here: he's got this compose file, and instead of me having to manually start all three of these services, I can just do docker compose up and it'll go ahead and turn all of it on for me. (Let me make sure I turned off mine, otherwise there'll be a port conflict.) Maybe if we have time, we can pipe the output of calprint into the emoji part as two different containers — yeah, we could do some container networking stuff. Let this run here. I don't know if I'll be able to cover that in twenty minutes, but let me do my best.
Let's see how far we get. The best way to handle that is probably to set up API calls between the servers: I'd have container one expose a port, and container two would take its output and make a request over to the other container with whatever information it wanted. So you'd be doing it much like having two different machines running that you want to talk to each other.
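A minimal compose file in the shape of the Docker docs example referenced above — the service names and ports here are illustrative, not taken from the workshop:

```yaml
# docker-compose.yml
services:
  web:
    build: .          # your app's Dockerfile
    ports:
      - "8000:5000"   # host:container, same syntax as -p
  redis:
    image: redis:alpine
```

With this file in place, `docker compose up` builds and starts both services together, and `docker compose down` stops and removes them.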

Okay, let's see — oh, I was missing this file yesterday. Yeah, now the multiple servers are running. It's going to be listening on the NGINX port — 5000, I believe, for the load balancer here, though I might be wrong and it's port 80. Anyway — where am I at? Where's my thought process? I feel like I've gotten a little rambly, so I'm just going to go ahead and wrap the workshop here. But before I go, I'm going to give you a bunch of resources to keep your learning going — I don't think there's a ton I can cover in twenty minutes; everything from here takes longer to get going. I'll share resources on where y'all can continue learning, and then I'll stick around to answer questions. I really like some of the getting-started guides we have at Docker, and we have language-specific guides — so if you're looking for the JavaScript version, for example, you can grab it from our documentation as well. We also have a bunch of links to external resources there. I actually just added our little cheat sheet to the Docker docs yesterday, so if you want a cheat sheet of useful Docker commands to have off the top of your head, you can grab it from there. And there are a lot of different courses you can take advantage of. I really like the Brett Fisher course — Brett Fisher is one of our Docker community champions, and he wrote a really excellent Docker Mastery course. He has a Mastery for Node and a generic one as well. It's 19 hours, as opposed to my course, which is two and some change. I'd highly recommend any of this content.
If you're looking to learn more, or you need help from me, I'm going to answer questions — I'm still around even after the workshop. There's a swag promo where you can get a t-shirt, and it should hopefully have enough on it to cover shipping. And if you need anything, shoot me an email or ping me on Twitter or something like that. So I'm going to end a little early, just because it's a good stopping point, but I'll stick around if anyone has questions. Otherwise, this has been really fun.
