How to Build CI/CD Pipelines for a Microservices Application


Microservices present many advantages for running modern software, but they also bring new challenges for both deployment and operational tasks. This session will discuss the advantages and challenges of microservices and review best practices for developing a microservice-based architecture.

We will discuss how container orchestration using Kubernetes or Red Hat OpenShift can help us and bring it all together with an example of Continuous Integration and Continuous Delivery (CI/CD) pipelines on top of OpenShift.

33 min
01 Jul, 2021

Video Summary and Transcription

This talk discusses the benefits of microservices and containers for building CI/CD pipelines. It explains how container technology enables portability and scalability. The challenges of microservices include network communication and testing in isolation. The talk introduces Tekton, a cloud-native CI/CD pipeline for Kubernetes, and highlights the use of GitOps and Argo CD. It also discusses the importance of maintaining referential integrity between microservices and the evolving role of operators in the DevOps world.


1. Introduction and Background

Short description:

Hi, I'm Sasha Rosenbaum, and today I'm presenting on building CI/CD for microservices applications. I have been in development, operations, DevRel, and customer success. I've been involved in DevOps Days since 2014. You can find me on Twitter as devineops.

Hi, I'm Sasha Rosenbaum, and today I'm presenting to you on building CI/CD for microservices applications. So first, just a little bit of an introduction about me. I am currently a team lead on the managed OpenShift Black Belt team at Red Hat. And in my career, I have been in development, in operations, briefly in DevRel, and I spent quite a bit of time in customer success. I still consider myself a developer. But as you can see, I've done a lot of different things along the 15 years that I've been in this industry.

I've been involved in DevOps Days since 2014. You can see that on the chart as well. And you can find me on Twitter as devineops. I post a lot of cat pictures and some technical hot takes. So please feel free to follow me on Twitter.

2. Microservices and Their Benefits

Short description:

So let's talk about microservices. Once upon a time, there was a monolith, and it was easier to deal with because you had full control. However, there were challenges, such as collaboration between different teams and the need for regression testing. The solution to these challenges became microservices, which involve modularizing different parts of the application and breaking them into separate services. Each microservice is ideally supposed to own its own data. Microservices separate by business logic.

So let's talk about microservices. As I was building this presentation, I was thinking that in addition to how to do things, it's also important to talk about why we're doing things. And especially if we are talking from a developer standpoint, it's important to talk about why we even use microservices, why we use containers, and dive into kind of overview of the entire architecture.

So let's start with this. Once upon a time, there was a monolith, right? And the monolith was pretty, and we actually enjoyed working with the monolith, and everything was relatively okay. And to tell you a little secret that no one likes to talk about anymore: the monolith was actually easier to deal with, because with a monolith, you had full control over all the parts of your application, all of your dependencies, and all of your things. And also because distributed systems are hard, and that's just a universal truth.

But the monolith also had challenges, and these challenges were many. Many different teams needed to collaborate. So you had to collaborate across different parts of a company. And if the company was large, that often meant that something like a large airline.com became a behemoth, a huge beast, and all the different teams had to be involved in building that huge application. Everyone had to be using the same technology. So if one team really liked Java and another team really liked C#, they had to choose between them or somehow support both, and the same went for different OS types and things like that. Every change required regression testing of the entire system, which can be quite time-consuming. Deployment therefore was slow, right? Because teams were often running into merge hell, and scaling was difficult. Scaling a huge beast of an application is usually not a picnic. And because we had challenges, and where there's challenges, there's always solutions, in this particular case the solution became microservices.

So what are microservices, really? We started with an application, and then we started talking about proper patterns of development. So we modularized different parts of the application. And then we started talking about breaking these different modules into different services. And as we went, it became a network of services. In a proper implementation, each microservice is also supposed to own its own data. In practice, this doesn't always happen, but the implementation best practice is for each microservice to also own its database. So I often hear questions about service-oriented architecture versus microservices. If you're on the left of this diagram, and you have a data layer, and you have a business logic layer, and you have a web layer, even if these three layers are separate, you're not in microservices. Separating by vertical concerns is still not microservices. Microservices separate by business logic. So in this case, for instance, authentication would be a separate microservice.

3. Microservices and Containers

Short description:

And it can scale independently of the HTTP listener that serves the web application. The biggest benefit of microservices, and the reason the whole industry went with microservices, is that they enable agility. The benefits of microservices stem directly from the architectural description. So why use containers? The shipping industry used to ship every workload in a separate type of box and handle it separately. And then the shipping industry actually adopted containers.

And it can scale independently of the HTTP listener that serves the web application. So if you are breaking by business purpose, you are in a microservices world. So microservices, and this is from microservices.io, right? They're defined as an architectural pattern: a collection of services that are loosely coupled, independently deployable, organized by business capabilities, and owned by small teams.

So the biggest benefit of microservices, and the reason that all the industry went with microservices, is actually that microservices enable agility. And that's the biggest benefit, right? Because compared to a huge monolith deployment, you can move much faster with microservices. So the old school is loving the monolith and the new school is microservices applications, which are typically supported by Kubernetes, or in my case, I work for Red Hat, so I'm going to be mentioning OpenShift a lot, which is kind of like a version of Kubernetes, if you will.

So the benefits of microservices stem directly from the architectural description. They can scale independently. So if you need only one instance or only three instances of the authentication service, and you need 15 instances of, let's say, the photo upload service, you can scale them independently. You don't have to create a VM for the entire monolith that does all of these things. You can deploy them independently, which is something that enables agility. You can rely on different tech stacks. So if I want to use Python and you want to use Java, then we can definitely do that, and each service will use the technology that's best applied to it. And then they can rely on conflicting dependencies, which can also be a huge benefit, right? Because if I need Python version two, and you need Python version three, if we were in a monolith, we would be conflicting with one another. But now we can have different microservices relying on different versions of the same library. And most importantly, you will be a microservices developer, which will make you a Silicon Valley unicorn in the proper fashion. This image is from GitHub.com's 503 error page; it just happens to look really nice.

So why use containers? I hear a lot of people basically saying that containers are a must for microservices implementation. I like this quote from Jeffrey Richter, who wrote like 18 different books on computing. Basically, microservices are an architectural design pattern, and containers are an implementation detail that often helps. So you technically don't have to have containers to implement microservices, but it often makes your life easier. So why do we use containers? Well, here a metaphor from the shipping industry actually helps. The shipping industry used to ship every workload in a separate type of box and handle it separately. And then the shipping industry actually adopted containers, these huge shipping containers. And you can see that in the 70s, when the shipping industry adopted containers, there was actually exponential growth of global trade, right? Because shipping containers was a lot easier. You could standardize on one particular procedure, and so you didn't have to handle every single workload in a different fashion.

4. Container Technology and CI-CD Implementation

Short description:

That allowed you to build ships that support containers and to build procedures for loading and unloading them, which basically enabled explosive growth around the entire industry. So if we're talking about container technology, it is immutable, it is portable, and it relies on open source and open standards, right? In a previous life, it used to be that I could be a developer and I implemented something and it worked on my machine perfectly. And then by the time it made it even into a test environment, things didn't work anymore. So essentially, we figured out a way to ship a development machine, and that container can be shipped and can work portably across different environments. So let's talk about containers, because it's actually relevant to how to implement CI/CD for microservices, and specifically if you're running on a Kubernetes implementation, which you probably are. A container is the smallest compute unit, and it's actually instantiated from a container image. A container image has different layers: essentially, you start with the base Linux. It's typically Linux; there are Windows containers out there, but there aren't many of them. And then you have the OS update layer with the things that you need. You then have a runtime layer for your language, for your application. And then you have the actual application layer. So in this way, we just packaged the entire version of our application into a container.

And that allowed you to build ships that support containers, and to build procedures for loading and unloading them on ships and stuff like that. It basically enabled explosive growth around the entire industry. And kind of like that, those benefits are also true for computing containers as well.

So if we're talking about container technology, it is immutable, it is portable, and it is relying on open source and open standards, right? And so these are huge benefits, but what does it actually mean? What problem did we actually solve by containerizing everything? Well, in reality, we solved the problem of, it works on my machine, right?

So in a previous life, it used to be that I could be a developer and I implemented something and it worked on my machine perfectly. And then by the time it made it even into test environment, things didn't work anymore. So basically this meme comes from Cam's blog, but it says, it works on my machine. Let's ship your machine, and that's how Docker was born. So essentially, we figured out a way to ship a development machine and that container can be shipped and can work portably across different environments.

So I no longer care if I'm in the cloud, if I'm on premise, if I'm on the laptop, right. I can package with the same container and it can run anywhere. Now, this is not necessarily completely transparent, but it definitely enables us to eliminate a lot of the challenge of differences in environments. So let's talk about containers, because it's actually relevant to how to implement CI-CD for microservices, and specifically if you're running on a Kubernetes implementation, which you probably are. Right, so a container is the smallest compute unit, and it's actually instantiated from a container image.

So it's a little bit similar to a VM image, if you're familiar with those. Basically, you create an image of a container, and then you can instantiate a container at runtime, and you can instantiate many containers from the same container image. So a container image has different layers: essentially, you start with the base Linux. It's typically Linux; there are Windows containers out there, but there aren't many of them. And then you have the OS update layer with the things that you need. You then have a runtime layer for your language, for your application. And then you have the actual application layer. So basically, the anatomy of a Dockerfile for a JavaScript application is similar to this. You're pulling an image from a registry; there's typically a registry where images live. And if you are in a company, in a big enterprise, you're probably pulling from an internal registry. If you are just building for yourself, then maybe you're pulling from Docker Hub or some publicly available registry. And then you're basically defining your environment variables. You can install dependencies. So your dependency tooling basically depends on your base image and on the runtime that you need. So in this case, we are going to copy over package.json, copy over package-lock.json, and run npm install to install our dependencies. Then we're going to copy the actual application. We're going to expose the ports which are needed by the application. And then we essentially run the app, right? So in this way, we just packaged the entire version of our application into a container.
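As a rough sketch of that anatomy, a Dockerfile for a Node.js service might look something like the following. This is an illustration, not the exact file from the talk; the image tag, port, and entry point are assumptions.

```dockerfile
# Base image: the OS and runtime layers (assumed: an official Node.js image)
FROM node:14-alpine

# Environment variables for the application
ENV NODE_ENV=production

WORKDIR /usr/src/app

# Copy the dependency manifests first so the install layer can be cached
COPY package.json package-lock.json ./
RUN npm install --production

# Copy the actual application code on top
COPY . .

# Expose the port the application listens on (assumed)
EXPOSE 8080

# Run the app
CMD ["node", "server.js"]
```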

5. Container Technology and Challenges

Short description:

And so now I can spin up this container in different environments. Container images are stored in an image registry. Kubernetes is everywhere, and it is the standard for orchestration. Challenges with microservices include getting them to play well with each other and handling network failures. A microservice can only do one thing, but that can lead to additional challenges.

And so now I can spin up this container in different environments. And like I said, it's going to be portable, it's going to be working the same way in many different places. So essentially container images are stored in image registry. And then I could have different versions of the same container in the registry, right? So as I create new versions of my application, I can create new versions of my container image, and then I can push them into the registry. And then they become available. So I could essentially also pull the different versions.
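As a quick illustration of that flow, building, tagging, and pushing a new version of an image might look roughly like this; the registry name and tag here are hypothetical, not from the talk.

```sh
# Build the image from the Dockerfile in the current directory and tag it with a version
docker build -t registry.example.com/myteam/photo-upload:1.2.0 .

# Push the versioned image into the registry
docker push registry.example.com/myteam/photo-upload:1.2.0

# Any environment can later pull that exact version
docker pull registry.example.com/myteam/photo-upload:1.2.0
```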

So, de facto, Kubernetes is everywhere, and basically, it is the standard for orchestration of workloads and app delivery. It provides lots of extensibility, lots of customization, and a great community of open source contributors, and it just became this big technology that powers everything. And then, yes, of course, the developer take on it is like, I just want to write code, right? Let me just write code. I don't really necessarily care about the platform. And that's kind of where we're trying to get to, to where the Kubernetes platform can actually be abstracted away, and you, as a developer, don't have to necessarily worry about where your application is running. And, of course, it's important to use Kubernetes because you can then put it on your resume and become a Kubernetes developer in a Silicon Valley unicorn.

So, let's talk briefly about challenges with microservices. Not everything is rosy. We didn't solve all of your problems. We just created different types of problems with microservices. So, basically, creating microservices is easy. Getting microservices to play well with each other can be hard. One of the things that happens is that microservices actually integrate using their published APIs. So, think about it as a contract. You essentially have contracts between different microservices. And so, if someone breaks this contract without updating documentation, without updating your team, they can essentially break every service that's dependent on them. And so, instead of merge hell at integration, where all the teams come together and then have to merge, you essentially just have this situation where someone can ship a change without a dependency on you, but you then can find out that it no longer works with your particular service. The other thing is it's a network. So, when there is a failure in a network, it can create cascading failure across the entire network, which can be a huge problem. So, you need to handle that. You need to be able to isolate a failure, and that can be hard and can require additional implementation. And so, it's a common saying that a microservice can only do one thing, right? You're only doing a microservice if it's responsible for a single business logic thing. But then, there's this joke of, if your microservice only does one thing, it can't have an error because that would be an additional thing.

6. Challenges of Microservices

Short description:

If your microservice only does one thing, it can't have an error because that would be an additional thing. The challenges of microservices include identifying stuck agents or containers, handling network communication, decreasing performance due to network hops and serialization, testing in isolation, debugging and monitoring across services, and supporting multiple API versions.

But then, there's this joke of, if your microservice only does one thing, it can't have an error because that would be an additional thing. Now, it sounds like a joke, but it's actually true in many implementation patterns. If your agent is stuck, if your particular container is stuck, it can be really hard to identify if it's still working on something or if it's actually stuck and needs to be killed. So, this is something that we need to worry about at the containerization level as well.
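One common way to handle this at the container level is with Kubernetes liveness and readiness probes, so the platform can decide when a container is truly stuck and should be restarted. A minimal sketch of a container spec, assuming hypothetical /healthz and /ready endpoints:

```yaml
# Fragment of a pod/deployment container spec (service name, endpoints, and ports are assumptions)
containers:
  - name: auth-service
    image: registry.example.com/myteam/auth-service:1.0.0
    ports:
      - containerPort: 8080
    livenessProbe:            # restart the container if it stops answering
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:           # stop routing traffic until the container is ready
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```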

To summarize the challenges of microservices: more services mean more network communication, which requires more failure and timeout recovery code. So, this is something that you as a developer have to worry about. It decreases overall performance due to network hops and the continuous serialization and deserialization of objects. It's hard to test in isolation without dependent services. So, it can be really hard to stub everything out and be able to test a service in isolation. It's hard to debug and monitor across different services. And services must support multiple API contract versions simultaneously, which is, again, if you were implementing CI/CD before, you might have been worried about this as well, because you want to support multiple versions of the same code. But as you're rolling out new microservices, it becomes even more important to be very, very clear about what your API version is, so that the services that depend on you are able to select the API version that they actually can work with.

7. CI/CD with Tekton for Kubernetes

Short description:

So we're managing deployments, and finally we got to the CI/CD part. You could use an existing pipeline, such as Jenkins or Azure DevOps, to deploy microservices onto Kubernetes or OpenShift platforms. However, it helps to use a cloud-native CI/CD pipeline designed for Kubernetes. Tekton is an open-source project that provides declarative, Kubernetes-native CI/CD. It eliminates the need for maintaining a Jenkins server and offers a task library with integrations for existing tools. A Tekton pipeline consists of tasks and steps, allowing for sequential and concurrent execution with conditional logic and retries. It provides a CI implementation for Kubernetes containers and microservices applications.

So we're managing deployments, and finally we got to the CI/CD part. So basically you could use an existing pipeline. Hopefully you have CI/CD pipelines of some kind already, and you could use Jenkins or Azure DevOps or something like that, for sure, to deploy microservices onto Kubernetes or OpenShift platforms. But it helps to use something that's... and we'll use the buzzword cloud-native: cloud-native CI/CD, which is designed for Kubernetes.

So essentially it's pipelines as a service, so we are putting everything into containers. Pipelines actually run in containers, and it's native to Kubernetes. It's native to Kubernetes authentication and things like that. So one of the potential services, and the one that Red Hat is using, is Tekton. Tekton is an open-source project supported by the Continuous Delivery Foundation. And it is basically declarative, Kubernetes-native CI/CD. The pipelines run on-demand in isolated containers. You don't have to maintain a Jenkins server or something like that. And there is a task library that you can pull from, and again, it integrates with Kubernetes and has a bunch of different integrations with existing tools.

So essentially, a Tekton pipeline is similar to pipelines that you might have seen before. It's comprised of tasks, and every task is comprised of steps. A step is something that runs a command or a script in a container, and a step also defines what type of container you're using, so essentially which container image you need to pull. Then a task is a list of different steps. It can be steps running on the same type of container or a different type of container. And essentially, you define a reusable sequence of steps that you need to accomplish a task. Tasks can also be pre-implemented; those can be found at Tekton Hub, which is a public registry, a public hub for Tekton tasks. And so for things that happen all the time, such as, you know, AWS CLI tasks or something like that, you can actually pull them from this library. And then once you have the tasks defined, you can pull them together into a pipeline. A pipeline is essentially a graph of tasks that can run sequentially or concurrently. And they can run on different nodes. They can have conditional logic and they can have retries. So if a certain step didn't succeed, we can execute it again until it succeeds, right? And it can share data between tasks, which is, again, something that needs to happen because often you're handing off artifacts between different pipeline stages. So essentially, putting it all together, it gives you a CI implementation for your Kubernetes containers, which also means your microservices application. And then, just because we needed more buzzwords in this industry, we created the buzzword GitOps.
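Before we get to that, here is a rough sketch of the Task and Pipeline structure just described. The names, images, and scripts are hypothetical, and a real pipeline would also wire in workspaces and a git-clone task to fetch sources, which is omitted here.

```yaml
# A Task: a reusable sequence of steps, each running in its own container image
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test
spec:
  steps:
    - name: install-deps
      image: node:14-alpine          # the container image this step runs in
      script: npm ci
    - name: run-tests
      image: node:14-alpine
      script: npm test
---
# A Pipeline: a graph of tasks that can run sequentially or concurrently
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: microservice-ci
spec:
  tasks:
    - name: build-and-test
      taskRef:
        name: build-and-test
    - name: build-image
      runAfter: ["build-and-test"]   # sequential ordering between tasks
      taskRef:
        name: buildah                # e.g. an image-build task pulled from Tekton Hub
```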

8. GitOps and Cloud-native Development

Short description:

GitOps is an approach to DevOps and CI/CD where everything is controlled through Git and is declarative. Argo CD is used for the CD part of the application, defining the deployment environments. Tekton pipelines build container images and push them to the image registry. Declarative configurations through Argo CD define the application in each environment. Cloud-native application development enables faster deployment. Thank you for being here. I'm Sasha Rosenbaum, find me on Twitter and GitHub as DevineOps.

GitOps is essentially an approach to DevOps, to CI/CD, where everything is implemented and controlled through Git, and everything is essentially declarative. In particular with OpenShift, and again, you can do the same thing with vanilla Kubernetes, you can use Argo CD for the CD part of the application. So essentially, you would have CI pipelines in Tekton, and you will have Argo CD on the CD side, and the Argo CD side is the declarative GitOps definition that defines which environment your code is actually deployed to. And I like this diagram. So basically, your Tekton pipelines are building your container images and pushing them into the image registry. And then you define manifests that say, okay, this type of image goes into the dev environment, this type of container image goes into the staging environment, and so on and so forth. So essentially, you're implementing, in a declarative fashion, desired state configurations through Argo CD for what your application actually looks like in each different environment. And of course, now everything is automated, and you're a Silicon Valley unicorn, and everybody's really happy and everybody's using microservices successfully. So cloud-native application development is about becoming faster. It's not necessarily gonna make your life easier on every dimension, but it does enable companies, in particular developer teams, to move faster with deploying their applications into production. And thank you so much for being with me today. I am Sasha Rosenbaum, you can find me on Twitter and GitHub as DevineOps, and it's a pleasure to meet you.
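For reference, the desired state she describes is typically captured in an Argo CD Application manifest. A minimal sketch, assuming a hypothetical config repository, path, and namespace:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: photo-upload-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/photo-upload-config.git  # Git repo holding the manifests
    targetRevision: main
    path: overlays/dev                 # environment-specific manifests for the dev environment
  destination:
    server: https://kubernetes.default.svc
    namespace: photo-upload-dev
  syncPolicy:
    automated:                         # Argo CD keeps the cluster in sync with the declared state
      prune: true
      selfHeal: true
```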

9. Introduction and Crowd Response

Short description:

Hey, Sasha. Hello. I'm super happy to be here, and I'm really excited that you are my host because it's always fun to just chat with you too. It's interesting to see that most of the folks are actually new to different kinds of architectures, service-oriented architecture and microservices, containers. The big idea of switching to Kubernetes and container orchestrations was to free up developers from dealing with infrastructure, but it's important to understand CICD and all of these things.

Hey, Sasha. Hello. I'm super happy to be here, and I'm really excited that you are my host because it's always fun to just chat with you too. I actually got to choose who I'm introducing, and I chose you, Sasha. Yeah! Oh, nice. I'm surprised. I feel appreciated. So, let's take a look at the question that you asked the crowd, and you asked if folks are experienced with different kinds of architectures, service-oriented architecture and microservices, containers. And it's interesting to see that most of the folks are actually new to all of the above, which is not surprising to me, seeing as it's more of a front-end conference, but it's interesting to see how that shift is actually happening, right? What do you think? Well, I think, you know, the big idea of switching to Kubernetes and container orchestrations and stuff like that was to free up developers from dealing with infrastructure, and I think to some extent what's happening is the opposite, right, and we're having developers learn more about infrastructure. So in one sense, it's better because developers and operations are coming together around this development pattern that is shared by everyone, but on the other, all these things are hard to learn, right? And so it's hard enough to do front-end; if you also have to worry about deployments, that might not be ideal. It's good to understand, you know, CI/CD and all of these things, but it's also great when we can just have pipelines as a service and not worry about that stuff at all. Yeah, I completely agree. It's like, there's such a shift, like everyone's role is expanding.

QnA

Referential Integrity and API Versions

Short description:

Sergio asks about maintaining referential integrity between microservices with multiple databases. The challenge lies in each microservice owning its own data, as databases are often owned by different teams. The idea is to consume from different APIs and interact with the parts that matter. Supporting multiple API versions requires guidelines and maintaining backwards compatibility. Implementation approaches like writing to both old and new fields can ease the transition. Kubernetes is becoming the de facto standard for implementation, although other technologies like serverless coexist. Microservices are more flexible than functions, which have inherent limitations. The DevOps story is expanding to other technologies, including this conference, DevOps.js.

Sergio asks a couple questions from the crowd, so how can we keep the referential integrity between microservices when we have several databases? Yeah, so it's, unfortunately, like you have to kind of talk to the database that you care about, right? And then like that, that is just, it's not necessarily super easy, right? Like it's not necessarily super easy for each microservice to own its own data. And a lot of times what happens is that like actually the databases are owned by another team and then you have to basically start reaching out to the same database, which kind of breaks part of the purpose of the microservices and they're like separating concerns and stuff like that. But yeah, it's, the idea is like you keep consuming from different APIs and just interacting with the parts that you essentially care about.

Awesome, and then there was another half, which was, what's the best strategy to support several API versions with a microservice architecture? Yeah, so I mean, there's guidelines for having API versions, so commonly, like, you will actually publicize your version, right? So like v1, v2, v3. And when you develop microservices, actually, when you develop anything in the modern world, you have to support at least two versions of your code, right? You will never be able to move from one version to the next without keeping backwards compatibility. So there's some implementation sugar, like, some implementation approaches. For instance, when you write to a database, let's say you had last name and first name as two separate fields, and now you want to combine them, right? So then, like, keep both, right? So have last name, first name as two separate fields, and then also last name, first name as the same field. And for a while, you basically write to both, right? And then you kind of move on to the next API version, and then you can basically retire. So you don't retire the code immediately after you change it, you retire it after another iteration, right? And then again, your API version is your contract, right? So if I'm consuming v1, and you moved on to v2, if you just tell me you moved on to v2, and you don't support v1 anymore, then you just broke the contract with me, and I'm having a hard time dealing with it, obviously. But if you support both, then we can work together for a while. And then, obviously, at some point, you're going to tell me like v1 is retired, it's time for you to move on, right? So... Yeah, makes a lot of sense.

Dennis asks, how prevalent do you see Kubernetes being moving forward? Cloud functions fulfill some of the same needs. So I think Kubernetes is a de facto standard, right? And so like, everyone is rolling out their version of Kubernetes, right? I mean, every cloud has a Kubernetes orchestrator of their own flavor, and there's OpenShift, which is like enterprise-ready Kubernetes, if you will, right? So like, everybody is on this boat, and it's a big boat. And like, that's effectively the standard for implementation. I mean, obviously, you can implement functions as a service, and you can, you know, continue using monoliths, and you can continue using mainframes. Like, all of these things coexist. We invent new technology, we kind of roll to it. And then, you know, we still have to keep some of the backwards compatibility. And then, obviously, you never know when something new is going to be introduced, right? So like, maybe, in addition to containers and serverless, we're gonna have, you know, new patterns that come out in the next couple of years. I think that's kind of highly likely, honestly. And then, in terms of Kubernetes versus serverless, like, I mean, there's implementations for serverless that use Kubernetes. So, like, now it's all blurry lines. I mean, microservices and serverless have a lot of things in common. But again, microservices are more flexible, right? Functions by design have limitations. Like, you wouldn't put a web app on functions, like, that just wouldn't work. So that's a for instance, right? Whereas microservices kind of can support almost any app. Not necessarily that's going to be the easiest path to implementing an application. But definitely, it can support pretty much almost any architecture. Makes sense. Yeah. So, as both of us being, you know, co-organizers of DevOps Days around the globe, I think it's really interesting that, you know, the DevOps story is expanding to other technologies, like this conference with DevOps.js.

Evolution of DevOps and Pipeline Checkpoints

Short description:

JS discusses the evolution of the DevOps world and the changing role of operators. The focus now is on solving more interesting challenges and adjusting technology to meet business expectations. The ideal state for developers is to focus on delivering business value rather than implementation details. Checkpoints in pipelines are crucial to ensure proper functioning, including code checks, building, testing, image registry checks, and security checks. Leveraging free and easy-to-implement tools can help ensure application security and functionality.

JS, what do you think is going to happen next? What are your thoughts on, like, kind of how the DevOps world is changing and evolving? It's interesting. Patrick just shared a tweet of, like, all the DevOps flavors of the industry and it's just, like, it's scary because it almost looks like the CNCF landscape. You know what I mean? There's so many words describing similar things or different things, and we just, like, keep expanding the levels of concern in all these different jobs.

I think, like, so if I look at, like, ten or whatever, 2009, '11, almost twelve years ago when the whole DevOps movement started, like, sometimes I ask myself a question, like, did we make the world better? And I think we did, right? And we continue doing that. But, like, if you think about the job of the operator then versus the job of the operator today, I think operators work on way more interesting challenges. Instead of clicking buttons and doing the same thing over and over again and worrying about, I don't know, backup tapes and stuff like that, we now, like, have operators basically writing code, which is, like, it's terrible for me to say that writing code is the coolest thing, but, again, like, we basically solve more interesting challenges. It's also, like, the world has moved on, right? Like, 10 years ago you could have an outage for the entire Saturday while you're moving your app to a new version, and today, like, there's no business in the world for which it would be acceptable to just, like, be like, oh, we're off for maintenance for an entire day. Like, that just doesn't work anymore, right? So you have to kind of adjust your technology to everybody's expectations, essentially.

I think in terms of coming to the frontend, I don't know. I think the expectation should be that people should be able to work on delivering business value as opposed to worrying about implementation details. And that's where, like, maybe we should come full circle and be in a sense where, like, you can click a button and then you'll get a container or whatever it is or a function or you don't even care about implementation details, right? I get a container of this, you know, type, and I can run my app in production and all I worry about is code, right? And that's just, like, I feel like that's the ideal state for, you know, developers to be able to be effective with their job.

Yeah. In the context of that, right, that's often been the promise of, like, kind of PaaS platforms like OpenShift that talk about, like, kind of from IDE to production. So it's interesting how they kind of, yeah, we are coming full circle around to that kind of a model again. Yeah, there's, you know, like one last question to wrap up your talk: what are the checkpoints you set to make sure your pipelines are working as expected? Are there any, like, quick tips on that, just to make sure that people have themselves covered? Yeah, I mean, there's multiple checks that you can set up, and, like, if you have the technology configured properly, you will have gates. So continuous delivery doesn't mean you always automatically go into production, right? So ideally, you have a checkpoint after you check your code, and you have a checkpoint after you build it, you have a checkpoint after you run tests, and you can have multiple of those because you can have multiple types of tests, essentially. For these types of pipelines, you have a checkpoint when you're checking the image into the container registry. And then basically, the checkpoint where you check it out, and then deployment is a whole other thing, where, like, once it's deployed, you have to also take care of monitoring and stuff like that. And then usually now, a lot of times it's when you're checking into source control, but you definitely need to run security checks, right? And again, security checks can be at multiple stages as well, right? So you can run static code analysis, and then you can run security as part of your tests. And then you obviously monitor security once it's deployed. So there's a whole lot of different things. I think, basically, you kind of go and chip away at it, you know, leverage the things that are free and easy to implement first, and then, like, that already puts you ahead of a lot of companies. And then kind of start looking at different tools that can allow you to make sure your applications are secure and functioning well. I think security is one of the things that is often missed when people are worried about speed of deployment so much.
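As one way of picturing those gates, a Tekton pipeline can chain the checkpoints she describes with runAfter, so each stage only runs if the previous one succeeded. The task names here are hypothetical.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: gated-delivery
spec:
  tasks:
    - name: lint-and-unit-tests
      taskRef: { name: run-tests }
    - name: static-analysis                 # security checks can run alongside the tests
      taskRef: { name: static-code-scan }
    - name: build-image
      runAfter: ["lint-and-unit-tests", "static-analysis"]
      taskRef: { name: buildah }
    - name: push-to-registry
      runAfter: ["build-image"]
      taskRef: { name: push-image }
    - name: deploy-to-staging
      runAfter: ["push-to-registry"]        # deployment only happens if every earlier gate passed
      taskRef: { name: deploy }
```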

I couldn't agree more. Thank you so much, Sasha. It's always a pleasure and honor to have you. Join Sasha in her speaker room immediately after her talk and on the Spatial chat. We'll get to chat with her and network. So thank you so much for joining us, as always, stellar talk. Big thanks. I'll see you soon. Take care.
