In this talk, you will learn how to optimize your Node.js development and release workflow for Kubernetes with Skaffold and Rancher Desktop. Used together, these tools give you a local K8s development experience that mirrors a real cluster, along with the same release workflow you would use for a remote cluster. We will cover the challenges of local Kubernetes development, how Skaffold and Rancher Desktop help, a demonstration of local releases to a cluster, and how to use the same configuration for remote cluster releases.
Optimize Node.js Development Workflows in Kubernetes with Skaffold and Rancher Desktop
DevOps.js Conf 2022
Hi there, I'm Lukonde Mwila, or you can call me Luke. I'm going to be talking to you about optimizing your Node.js development workflows in Kubernetes with Skaffold and Rancher Desktop. I'm a Principal Technical Evangelist at SUSE, so feel free to get in touch with me on various social media platforms, whether it's Twitter or LinkedIn, following me on GitHub, or subscribing to my YouTube channel. Now, this is a lightning talk, but I've still got some golden nuggets that you can walk away with in a short amount of time. For starters, we're going to consider the developer experience when building your Node.js apps for Kubernetes. Then we'll look at how Rancher Desktop simplifies the cluster management lifecycle, followed by optimizing build and release workflows with Skaffold, and then we'll cap it off with a demo, which is usually everyone's favorite part. Now, if you're a developer with experience working on cloud-native apps, you might actually have been reluctant to tune into this session, because you want to focus on your application development. Kubernetes is predominantly seen as a separate world that shouldn't get in the way of what you should be prioritizing. Personally, I agree with that. However, there's a slight conundrum, because Kubernetes does solve real problems, and your team might be dealing with those problems, so Kubernetes might be part of the bigger picture of your architecture. So you have to be a part of the journey either way. And this is where things get even trickier, because there are different ideas around who should own what. Now, my goal is to demonstrate how Rancher Desktop and Skaffold can complement the core developer priorities and still incorporate DevOps practices like release workflows for application deployments to your Kubernetes cluster. An example of this would be going beyond nodemon.
Now, I love using nodemon when building out my Node.js apps, because it quickly rebuilds on changes and I get to see those reflected. But what if you could accomplish that with a full CI/CD pipeline on your local machine, with all the intricacies abstracted away but still remaining configurable for a local context as well as for remote deployments? We'll take a look at that shortly. Now, the first tool in this solution is Rancher Desktop, and RD is a desktop application available on Windows, Linux, and Mac. It's an Electron-based application that wraps a host of components under the hood, with a virtual machine running K3s and either containerd or dockerd, depending on your configuration choice. In the end, you have an intuitive UI that simplifies the cluster management process, and you can easily upgrade or reset your cluster with just a few clicks. The second tool in this solution, or one-two punch in mind, is Skaffold. Skaffold's goal is to simplify the Kubernetes development workflow by automating and abstracting away the process of building and deploying container images. The inner development loop of iteratively coding, building, and testing your apps is something that can be enhanced by Skaffold, because it will take your local changes and trigger a pipeline for deployment whenever it detects them. And this is especially helpful for debugging your apps before they end up in the final target cluster. So using RD and Skaffold enables developers to stay focused on application optimization, because together they give you a cluster that's easy to interact with and manage, and a configurable workflow process that is automated and abstracts the DevOps details. So let's take a look at this in action real quick.
As you can see, I've got Rancher Desktop open and running, and I'm currently in the Kubernetes Settings section. A lot of the main things that happen around your Kubernetes cluster management and optimization are consolidated in this particular section. As you can see, I can easily update my Kubernetes version just by using this dropdown over here, and this will obviously depend on the particular version that you or your team have agreed upon. In addition to that, if you wanted to choose a particular container runtime between containerd and dockerd, that's something you can toggle over here. And then you have different situations when it comes to your applications: you might find that some are memory-intensive, some are compute-intensive, and you want the opportunity to modify the virtual machine that you're working with. You can do that over here by updating the memory and the CPUs, and that will just reset your cluster. Also, if you did need to do a hard reset to delete all the workloads and the configuration that you had set up, for whatever reason, you could simply hit the Reset button over here. And that's cool, especially when it comes to local cluster development, because you have the safety of the blast radius not impacting other people or other teams, but nonetheless you want a nice way of just ejecting and starting all over again. So now I've switched over to the application, and what you're looking at is my Skaffold configuration file, skaffold.yaml. This file is essentially used to configure how your application is going to be built and deployed to the relevant cluster. If you look at the top-level fields, apiVersion, kind, and metadata are three familiar top-level fields when it comes to working with your Kubernetes manifest files.
In addition to that, we have build, test, and deploy, and for each one of them, the name implies what they actually deal with. So build contains the build configurations, in this case the artifacts. I'm building a Docker image, and I specify the particular Dockerfile that's being used, as you can see over here, as well as the name of the image, so that Skaffold knows which repository to push this particular image to. As for the testing phase, because I want to try and replicate what I'd essentially be doing in a real-life or production CI/CD pipeline, I can import those same features over here, and this same configuration file can be used locally and remotely. So I can include my test phases, as you can see over here, and I'm running npm run test with the same image that I'm using for my build phase. And then lastly, under the deploy section, I'm using kubectl to deploy my particular Kubernetes resources, and I specify that these resources are defined in the manifests.yaml file. So if I come to the manifests.yaml file, you'll see over here that I have my Deployment resource and my Service resource. Now, if I head over to the app.js file, you'll see that this is a very basic application, and I just have a single route called /test. The response that I should get when I hit that endpoint will be, "Did you expect anything less?" I also have a single test, and that test will just make sure that whenever I query that particular endpoint, I get the relevant response: it should give me a 200 response status, it should be a string response, and it should have that exact same text. So what I'm going to do now is head over to the terminal and run skaffold dev, and that should build my application and get it up and running. Great, as you can see, my application has been built, it went through the test phase as well, and the test passed.
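To make the walkthrough concrete, here is a minimal sketch of what a skaffold.yaml like the one described could look like. The image name is a placeholder, and the schema version shown is an assumption (v2beta schemas were current around early 2022); check `skaffold schema` docs for the version you have installed:

```yaml
# skaffold.yaml — hedged sketch; image name and schema version are placeholders.
apiVersion: skaffold/v2beta26
kind: Config
metadata:
  name: node-demo-app
build:
  artifacts:
    - image: my-registry/node-demo-app   # repository Skaffold pushes the image to
      docker:
        dockerfile: Dockerfile           # the Dockerfile used for the build
test:
  - image: my-registry/node-demo-app     # same image as the build phase
    custom:
      - command: npm run test            # test phase mirrors the remote CI/CD pipeline
deploy:
  kubectl:
    manifests:
      - manifests.yaml                   # Deployment and Service live here
```

The same file drives both the local `skaffold dev` loop and remote releases, which is the point being made in the talk.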
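The manifests.yaml referenced by the deploy section holds the Deployment and Service resources mentioned in the demo. A plausible sketch, with placeholder names, labels, and image:

```yaml
# manifests.yaml — hedged sketch of the two resources described in the demo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-demo-app
  template:
    metadata:
      labels:
        app: node-demo-app
    spec:
      containers:
        - name: node-demo-app
          image: my-registry/node-demo-app
          ports:
            - containerPort: 8080   # the app listens on 8080 in the demo
---
apiVersion: v1
kind: Service
metadata:
  name: node-demo-app
spec:
  selector:
    app: node-demo-app
  ports:
    - port: 8080
      targetPort: 8080
```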
And as you can see over here, the application is now running and listening for traffic on port 8080. So if I go to the browser, you can see I've already tested this previously, but I'm getting the relevant response. Now, the cool thing about how Skaffold complements the inner development loop is that if I head back to my application and make a change over here that I want to test, I'm simply going to add "Did you expect anything less this year?" and update that, but very importantly, we're also going to need to update our test. Skaffold will detect those changes and proceed to redeploy the application. Great, so we see our deployment has stabilized, and if I come over here and just refresh, we get the right response. So this makes things a lot more seamless when it comes to local Kubernetes development for your developers, who don't need to know too much about Kubernetes. This makes things a lot easier for them. So I hope you found that helpful.