Microservices present many advantages for running modern software, but they also bring new challenges for both deployment and operations. This session will discuss the advantages and challenges of microservices and review best practices for developing a microservice-based architecture.
We will discuss how container orchestration using Kubernetes or Red Hat OpenShift can help us and bring it all together with an example of Continuous Integration and Continuous Delivery (CI/CD) pipelines on top of OpenShift.
Transcript
Hi. I'm Sasha Rosenbaum and today I'm presenting to you on building CI/CD for microservices applications.
About Sasha
So first, just a little bit of an introduction about me. I am currently a team lead on a Managed OpenShift Black Belt team at Red Hat. And in my career, I have been in development, in operations, briefly in DevRel, and I spent quite a bit of time in customer success. I still consider myself a developer, but as you can see, I've done a lot of different things along the 15 years that I've been in this industry. I've been involved in DevOpsDays since 2014, as you can see on this chart as well. And you can find me on Twitter as DivineOps. I post a lot of cat pictures and some technical hot takes, so please feel free to follow me on Twitter.
Why use microservices?
[01:14] So, let's talk about microservices. As I was building this presentation, I was thinking that in addition to how to do things, it's also important to talk about why we're doing things. And especially if we are talking from a developer standpoint, it's important to talk about why we even use microservices, why we use containers, and dive into an overview of the entire architecture.
So, let's start with this. Once upon a time, there was a monolith. The monolith was pretty and we actually enjoyed working with the monolith, and everything was relatively okay. And to tell you a little secret that no one likes to talk about anymore: the monolith was actually easier to deal with, because with a monolith you had full control over all the parts of your application and all of your dependencies, and also because distributed systems are hard, and that's just a universal truth. But the monolith also had challenges, and there were many.
[02:17] So many different teams needed to collaborate. You had to collaborate across different parts of a company, and if the company was large, it often meant that something like largeairline.com became a behemoth, a huge beast, and all the different teams had to be involved in building that huge application. Everyone had to be using the same technology, so if one team really liked Java and another team really liked C#, they had to choose between them. They couldn't use both. The same went for different OS types and things like that.
Every change required regression testing of the entire system, which can be quite time-consuming. Deployment, therefore, was slow because teams were often running into merge hell, and scaling was difficult. Scaling a huge beast of an application is usually not a picnic. So because we had challenges, and where there are challenges there are always solutions, in this particular case the solution became microservices.
[03:22] So what are microservices, really? We started with an application and then we started talking about proper patterns of development, so we modularized different parts of the application, and then we started talking about breaking these different modules into different services, and as we went, it became a network of services. And in a proper implementation, each microservice is also supposed to own its own data. In practice, this doesn't always happen, but the implementation best practice is for each microservice to also own its own database.
So, I often hear questions about service-oriented architecture versus microservices. If you're on the left of this diagram, and you have a data layer, a business logic layer, and a web layer, even if these three layers are separate, you're still not in microservices.
[04:09] Separating vertical concerns is not microservices. Microservices separate business logic. So in this case, for instance, authentication would be a separate microservice and it can scale independently of the HTTP listener that serves the web application. If you are breaking things apart by business purpose, you are in a microservices world.
So microservices, and this is from microservices.io, are defined as an architectural pattern: a collection of services that are loosely coupled, independently deployable, organized around business capabilities, and owned by small teams.
Benefits of microservices
[04:29] So, the biggest benefit of microservices, and the reason the whole industry went with microservices, is actually that microservices enable agility. That's the biggest benefit, because compared to a huge monolith deployment, you can move much faster with microservices.
So the old school is loving the monolith, and the new school is microservices applications, which are typically supported by Kubernetes, or in my case, I work for Red Hat, so I'm going to be mentioning OpenShift a lot, which is a version of Kubernetes, if you will.
So the benefits of microservices stem directly from the architectural description. Each service can scale independently. If you need only one instance or only three instances of the authentication service, and you need 15 instances of, let's say, the photo upload service, you can scale them independently. You don't have to create a VM for the entire monolith that does all of these things. You can deploy them independently, which is something that enables agility. You can rely on different tech stacks. So if I want to use Python and you want to use Java, then we can definitely do that, and each service will use the technology that's best suited to it.
[06:06] And then they can rely on conflicting dependencies, which can also be a huge benefit, because if I need Python version two and you need Python version three, in a monolith we would be conflicting with one another, but now we can have different microservices relying on different versions of the same library. And most importantly, you will be a microservices developer, which will make you a Silicon Valley unicorn in a proper fashion. This image is from github.com's 503 error page; it just happens to look really nice.
Why use containers?
[06:40] So, why use containers? I hear a lot of people basically saying that containers are a must for a microservices implementation. I like this quote from Jeffrey Richter, who wrote 18 different books on computing: "Microservices are an architectural design pattern and containers are an implementation detail that often helps." So you technically don't have to have containers to implement microservices, but they often make your life easier.
So why do we use containers? Well, here a metaphor from the shipping industry actually helps. The shipping industry used to ship every workload in a separate type of box and handle it separately, and then the industry adopted containers, these huge shipping containers. And you can see that in the seventies, when the shipping industry adopted containers, there was actually exponential growth in global trade, because shipping with containers was a lot easier. You could standardize on one particular procedure, so you didn't have to handle every single workload in a different fashion, and that allowed you to build ships that support containers and to build procedures for loading and unloading ships. That basically enabled explosive growth across the entire industry, and those benefits are also largely true for computing containers as well.
[08:09] So if we're talking about container technology, it is immutable, it is portable, and it relies on open source and open standards. These are huge benefits, but what does it actually mean? What problem did we actually solve by containerizing everything? Well, in reality, we solved the problem of 'it works on my machine'. In a previous life, it used to be that I could be a developer and I implemented something and it worked on my machine perfectly, and then by the time it made it even into the test environment, things didn't work anymore. This meme comes from Cam's blog: 'It works on my machine.' 'Then let's ship your machine.' And that's how Docker was born. So essentially, we figured out a way to ship a development machine, and that container can be shipped and worked with portably across different environments. I no longer care if I'm in the cloud, if I'm on premise, or if I'm on my laptop; I can package into the same container and it can run anywhere. Now, this is not necessarily completely transparent, but it definitely enables us to eliminate a lot of the challenges of differences between environments.
[09:23] So let's talk about containers, because it's actually relevant to how to implement CI/CD for microservices, and specifically if you're running on a Kubernetes implementation, which you probably are. A container is the smallest compute unit, and it's actually instantiated from a container image. It's a little bit similar to a VM image, if you're familiar with those: basically, you create an image of a container, then you can instantiate a container at runtime, and you can instantiate many containers from the same container image.
So a container image has different layers. Essentially, you start with the base Linux. It's typically Linux; there are Windows containers out there, but there aren't many of them. Then you have the OS update layer with the things that you need. You then have a runtime layer for your application's language, and then you have the actual application layer.
[10:16] So the anatomy of a Dockerfile for a JavaScript application is similar to this. Basically, you're pulling an image from a registry. There's typically a registry where images live, and if you are in a company, in a big enterprise, you're probably pulling from an internal registry. If you are just building for yourself, then maybe you're pulling from Docker Hub or some publicly available registry. Then you're basically defining your environment variables. You can install dependencies; your dependency tooling basically depends on your base image and on the runtime that you need, so in this case, we are going to copy over package.json, copy over package-lock.json, and run npm install to install your dependencies. Then we're going to copy the actual application. We are going to expose the ports which are needed by the application, and then we essentially run.
[11:17] So in this way, we just packaged the entire version of our application into a container, and so now I can spin up this container in different environments and like I said, it's going to be portable. It's going to be working the same way in many different places.
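The Dockerfile anatomy described above might look roughly like this; a minimal sketch, where the Node version, the entry point server.js, and port 3000 are all assumptions, not details from the talk:

```dockerfile
# Base image pulled from a registry (version is an assumption)
FROM node:18-alpine

# Environment variables for the application
ENV NODE_ENV=production

WORKDIR /app

# Copy dependency manifests first so this layer is cached
# unless package.json or package-lock.json change
COPY package.json package-lock.json ./
RUN npm install --omit=dev

# Copy the actual application code
COPY . .

# Port the application listens on (assumed)
EXPOSE 3000

# Run the application
CMD ["node", "server.js"]
```

Copying the dependency manifests before the rest of the source is a common layer-caching trick: npm install only re-runs when the manifests change, not on every code edit.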
So essentially, container images are then stored in an image registry, and I could have different versions of the same container image in the registry. As I create new versions of my application, I can create new versions of my container image, push them into the registry, and then they become available. So I could essentially also pull the different versions. If I wanted to run, let's say, Frontend 2.0 with Mongo 3.6, I could run this combination of services.
[12:05] So the fact of it is, Kubernetes is everywhere. It is basically the standard for orchestration of modern app delivery; it provides lots of extensibility, lots of customization, and a great community of open source contributors, and it just became this big technology that powers everything. And then, yes, of course, the developer take on it was, "I just want to write code. Let me just write code. I don't really necessarily care about the platform." And that's where we're trying to get to, where the Kubernetes platform can actually be abstracted away and you, as a developer, don't have to necessarily worry about where your application is running. And of course, it's important to use Kubernetes because you can then put it on your resume and become a Kubernetes developer and a Silicon Valley unicorn.
Challenges with microservices
[12:57] So let's talk briefly about challenges with microservices. So not everything is rosy. We didn't solve all of your problems. We just created different types of problems with microservices.
So basically, creating microservices is easy; getting microservices to play well with each other can be hard. One of the things that happens is that microservices are actually defined to integrate using published APIs. So think about it as a contract. You essentially have contracts between different microservices, and if someone breaks this contract without updating the documentation, without updating your team, they can essentially break every service that depends on them. So instead of merge hell at integration, where all the teams come together and have to merge, you essentially have a situation where someone can ship a change without coordinating with you, and you can then find out that it no longer works with your particular service.
[13:52] The other thing is that it's a network, and when there's a failure in a network, it can create cascading failure across the entire network, which can be a huge problem, so you need to handle that. You need to be able to isolate a failure, and that can be hard and can require additional implementation.
And so a microservice... It's a common saying that a microservice can only do one thing. You're only doing a microservice if it's responsible for a single piece of business logic, but then there's this joke that if your microservice only does one thing, it can't have an error, because that would be an additional thing. Now, it sounds like a joke, but it's actually true. In many implementation patterns, if your agent is stuck, if your particular container is stuck, it can be really hard to identify whether it's still working on something or whether it's actually stuck and needs to be killed, so this is something that we need to worry about at the containerization level as well.
[14:49] So to summarize the challenges of microservices: more services mean more network communication, which requires more failure and timeout recovery code, and this is something that you, as a developer, have to worry about. It decreases overall performance due to network hops and the continuous serialization and deserialization of objects. It's hard to test in isolation without dependent services; it can be really hard to stub everything out and be able to test your service in isolation. It's hard to debug and monitor across different services. And new services must support old and new API contracts simultaneously. If you were implementing CI/CD before, you might've been worried about this as well, because you want to support multiple versions of the same code, but as you're rolling out new microservices, it becomes even more important to be very, very clear about what your API version is, so that the services that depend on you can select the API version that they actually are able to work with.
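The failure and timeout recovery code mentioned above can be sketched in JavaScript; a minimal illustration, where the helper name withRetry, the flaky callService, and all the retry counts and delays are hypothetical:

```javascript
// Retry a flaky async call, with a timeout per attempt and a short
// backoff between attempts. All names and numbers are illustrative.
async function withRetry(fn, { attempts = 3, timeoutMs = 1000, backoffMs = 100 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      // Fail the attempt if it takes longer than timeoutMs
      return await Promise.race([
        fn(),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error("timeout")), timeoutMs)
        ),
      ]);
    } catch (err) {
      lastError = err;
      // Wait briefly before the next attempt
      await new Promise((resolve) => setTimeout(resolve, backoffMs));
    }
  }
  throw lastError; // all attempts failed
}

// Simulated flaky downstream service: fails twice, then succeeds
let calls = 0;
async function callService() {
  calls++;
  if (calls < 3) throw new Error("connection refused");
  return "ok";
}

withRetry(callService).then((result) => console.log(result)); // prints "ok"
```

Real implementations usually add exponential backoff and circuit breaking on top of this, so a failing dependency doesn't cascade through the whole network of services.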
Automating deployments
[15:51] So, automating deployments. We finally got to the CI/CD part. Basically, you could use your existing pipelines, so hopefully you have CI/CD pipelines of some kind already, and you could certainly use Jenkins or something like that to deploy microservices onto Kubernetes or OpenShift platforms. But it helps to use something that's, and I will use the buzzword, cloud native CI/CD, which is designed for Kubernetes. Essentially, it's pipelines as a service: we are putting everything into containers, so the pipelines actually run in containers, and it's native to Kubernetes, native to Kubernetes authentication and things like that.
So one of the potential services, and the one that Red Hat is using, is Tekton. Tekton is an open source project supported by the Continuous Delivery Foundation, and it is basically declarative, Kubernetes-native CI/CD. The pipelines run on demand in isolated containers, so you don't have to maintain a Jenkins server or something like that, and there is a task library that you can pull from. And again, it integrates with Kubernetes and has a bunch of different integrations with existing tools.
[17:07] So essentially, a Tekton pipeline is similar to pipelines that you might have seen before: it's comprised of tasks, and every task is comprised of steps. A step is something that runs a command or a script in a container, and a step also defines what type of container you're using, so essentially, which container image to pull. Then a task is a list of different steps, which can run on the same type of container or different types of containers, and essentially, you define a reusable sequence of steps that you need to accomplish a task. Pre-implemented tasks can be found at Tekton Hub, which is a public hub for Tekton tasks. So for things that happen all the time, such as an AWS CLI task or something like that, you can actually pull them from this library. And then once you have the tasks defined, you can pull them together into a pipeline, and a pipeline is essentially a graph of tasks that can run sequentially or concurrently. They can run on different nodes, they can have conditional logic, and they can have retries, so if a certain step didn't succeed, we can execute it again until it succeeds. And they can share data between tasks, which is something that needs to happen because often you're handing artifacts between different pipeline stages.
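As a rough illustration of the step/task/pipeline structure described above, a Tekton Task and Pipeline might look like this; a minimal sketch, where the names (build-image, deploy-to-cluster) and the registry URL are invented for the example:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image            # hypothetical task name
spec:
  steps:
    - name: build              # a step runs a command in a container
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --destination=registry.example.com/myapp:latest  # hypothetical registry
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy       # hypothetical pipeline name
spec:
  tasks:
    - name: build
      taskRef:
        name: build-image      # references the Task above
    - name: deploy
      runAfter: [build]        # tasks form a graph; this one runs after build
      taskRef:
        name: deploy-to-cluster  # another hypothetical Task
```

Everything is declarative Kubernetes resources, which is what makes the pipelines themselves run on demand in containers on the cluster.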
[18:37] So essentially, putting it all together, this gives you a CI implementation for your Kubernetes containers, which also means your microservices application. And then, just because we needed more buzzwords in this industry, we created a buzzword for GitOps. GitOps is essentially an approach to DevOps, to CI/CD, where everything is controlled through Git and everything is essentially declarative. In particular with OpenShift, and again, you can do the same thing with vanilla Kubernetes, you can use Argo CD for the CD part of the application. So essentially you would have CI pipelines in Tekton, and you would have Argo CD on the CD side. And then Argo CD holds the declarative GitOps definition that defines which environment your code is actually deployed to.
[19:36] And I like this diagram. So basically, your Tekton pipelines are building your container images and pushing them into the image registry. And then you basically define a manifest that says, okay, this type of image goes into the dev environment, this type of container image goes into the staging environment, and so on and so forth. So you're essentially implementing, in a declarative fashion, desired state configuration through Argo CD for what your application actually looks like in each different environment. And of course now everything is automated and you're a Silicon Valley unicorn, and everybody's really happy and everybody's using microservices successfully.
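The per-environment manifest described above might be expressed as an Argo CD Application like this; a minimal sketch, where the application name, Git repo URL, path, and target namespace are all hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-dev              # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/myorg/myapp-config.git  # hypothetical config repo
    targetRevision: main
    path: environments/dev     # hypothetical path holding the dev manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-dev       # hypothetical target namespace
  syncPolicy:
    automated:                 # keep the cluster in sync with Git automatically
      prune: true
      selfHeal: true
```

With one Application per environment, promoting an image to staging is just a Git commit that updates the image tag under that environment's path, and Argo CD reconciles the cluster to match.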
[20:14] So cloud native application development is about becoming faster. It's not necessarily going to make your life easier in every dimension, but it does enable companies in particular developer teams to move faster with deploying their applications into production.
And thank you so much for being with me today. I am Sasha Rosenbaum. You can find me on Twitter and GitHub as DivineOps, and it's a pleasure to meet you.
Questions
[20:39] Sharone Zitzman: Hey Sasha.
Sasha Rosenbaum: Hello. I'm super happy to be here and I'm really like excited that you are my host because it's always fun to just chat with you too.
Sharone Zitzman: I actually got to choose who I'm introducing and I chose you, Sasha. Hello. Surprise.
Sasha Rosenbaum: A few will appreciate it.
[21:02] Sharone Zitzman: So let's take a look at the question that you asked the crowd. You asked the folks who are experienced with different kinds of architecture, such as service-oriented architecture, microservices, containers. And it's interesting to see that most of the folks were actually new to all of the above. It's not surprising to me, seeing as it's more of a front end conference, but it's interesting to see how that shift is actually happening. What do you think?
[21:31] Sasha Rosenbaum: Well, I think the big idea of switching to Kubernetes and container orchestration and stuff like that was to free up developers from dealing with infrastructure. And I think to some extent what's happening is the opposite: we're having developers learn more about infrastructure. So in one sense it's better, because developers and operations are coming together around this development pattern that is shared by everyone. But on the other hand, all these things are hard to learn. And so it's hard enough to do front end. If you also have to worry about deployments, that might not be ideal. It's good to understand CI/CD and all of these things, but it's also great when we can just have pipelines as a service and not worry about that stuff at all.
[22:25] Sharone Zitzman: Yeah, I completely agree. There's such a shift, like everyone's role is expanding. Sergio asks a couple of questions from the crowd. So how can we keep referential integrity between microservices when we have several databases?
[22:42] Sasha Rosenbaum: Yeah. So unfortunately, you have to kind of talk to the database that you care about. And that is just... It's not necessarily super easy. It's not necessarily super easy for each microservice to own its own data, and a lot of times what happens is that the databases are actually owned by another team, and then you have to basically start reaching out to the same database, which kind of breaks part of the purpose of the microservices, like separating concerns and stuff like that. But yeah, the idea is you keep consuming from different APIs and just interacting with the parts that you essentially care about.
Sharone Zitzman: Awesome. And then there was the other half, which was: what's the best strategy to support several API versions with a microservice architecture?
[23:36] Sasha Rosenbaum: Yeah. So, I mean, there are guidelines for having API versions, so commonly you will actually publicize your version, like V1, V2, V3. And when you develop microservices, actually when you develop anything in the modern world, you have to support at least two versions of your code. You will never be able to move from one version to the next without keeping backwards compatibility.
So there's some implementation sugar, some implementation approaches. For instance, when you write to a database: let's say you had last name and first name as two separate fields, and now you want to combine them. Then keep both. Have last name and first name as two separate fields, and also the combined name as a single field, and for a while you basically write to both. And then you move on to the next API version, and then you can basically retire the old one.
So you don't retire the code immediately after you change it, you retire it after another iteration. And then again, your API version is your contract. So if I'm consuming V1 and you moved on to V2, if you just tell me you moved onto V2 and you don't support V1 anymore, then you just broke contract with me and I'm having a hard time dealing with it, obviously.
[24:56] But if you support both, then we can work together for a while and then obviously at some point you're going to tell me like, V1 is retired it's time for you to move on.
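The dual-write migration pattern Sasha describes could be sketched like this; a minimal example, where the function name saveUser and the field names are hypothetical, not from any real schema:

```javascript
// During the migration window, write both the old shape (separate
// firstName/lastName fields, the V1 contract) and the new shape
// (a combined fullName field, the V2 contract). All names are illustrative.
function saveUser(firstName, lastName) {
  return {
    // Old fields: kept so V1 consumers keep working
    firstName,
    lastName,
    // New field: what V2 consumers will read
    fullName: `${firstName} ${lastName}`,
  };
}

// A V1 consumer reads the separate fields...
const record = saveUser("Ada", "Lovelace");
console.log(record.lastName);  // prints "Lovelace"
// ...while a V2 consumer reads the combined field
console.log(record.fullName);  // prints "Ada Lovelace"
```

Once every consumer has moved to the V2 field, the old fields can be dropped in a later iteration, which is the retirement step described above.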
[25:08] Sharone Zitzman: Yep, that makes a lot of sense. Dennis asks, how prevalent do you see Kubernetes being moving forward? Cloud functions fulfill some of the same needs.
[25:17] Sasha Rosenbaum: So I think Kubernetes is a de facto standard, and everyone is rolling out their version of Kubernetes. I mean, every cloud has a Kubernetes orchestrator of their own flavor, and Red Hat has OpenShift, which is enterprise-ready Kubernetes, if you will. So everybody is on this boat, and it's a big boat, and that's effectively the standard for implementation. I mean, obviously you can implement functions as a service, and you can continue using monoliths, and you can continue using mainframes; all of these things co-exist. We invent new technology, we kind of roll to it, and then we still have to keep some of the backwards compatibility. And then obviously you never know when something new is going to be introduced. So maybe in addition to containers and serverless, we're going to have new patterns that come out in the next couple of years. I think that's highly likely, honestly.
[26:23] And then in terms of Kubernetes versus serverless, I mean, there are implementations of serverless that use Kubernetes, so now it's all blurry lines. Microservices and serverless have a lot of things in common, but again, microservices are more flexible. Functions by design have limitations; you wouldn't put a web app on functions, for instance, that just wouldn't work. Whereas microservices can support almost any app. It's not necessarily going to be the easiest path to implementing an application, but it can support pretty much any architecture.
[26:59] Sharone Zitzman: All makes sense. Yeah. So, as we're both co-organizers of DevOpsDays around the globe, I think it's really interesting that the DevOps story is expanding to other technologies, like this conference, DevOps JS. What do you think is going to happen next? What are your thoughts on kind of how the DevOps world is changing and evolving and like you…
[27:27] Sasha Rosenbaum: So it's interesting. Patrick Debois just shared like a Tweet of like all the DevOps flavors of the industry. And it's just like, it's scary because it almost looks like the CNCF landscape. Like, you know what I mean? There's so many words describing similar things or different things. And we just like keep expanding the levels of concern and all these different jobs.
So if I look at 10, or whatever, 2009, '11, almost 12 years ago, when the whole DevOps movement started, sometimes I ask myself the question: did we make the world better? And I think we did. And we continue doing that. If you think about the job of the operator back then versus the job of an operator today, I think operators work on way more interesting challenges. Instead of clicking buttons and doing the same thing over and over again and worrying about, I don't know, backup tapes and stuff like that, we now have operators basically writing code, which is... It's terrible for me to say that writing code is the coolest thing, but again, we basically solve more interesting challenges.
[28:14] It's also like the world has moved on. Like 10 years ago you could have an outage for the entire Saturday while you're moving your app to a new version. And today, like there's no business in the world for which it would be acceptable to just like, be like, "Oh, we're off for maintenance for an entire day." Like that just doesn't work anymore. So you have to kind of adjust your technology to everybody's expectations, essentially.
I think in terms of it coming to the front end, I don't know. I think the expectation should be that people should be able to work on delivering business value as opposed to worrying about implementation details. And that's where maybe we should come full circle, in the sense that you can click a button and then you get a container or whatever it is, or a function, and you don't even care about implementation details. I get a container of this type and I can run my app in production, and all you worry about is code. I feel like that's the ideal state for developers to be able to be effective at their job.
[29:49] Sharone Zitzman: Yeah. In the context of that, that's often been the promise with the kind of PaaS platforms like OpenShift that talk about going from IDE to production. So it's interesting how, yeah, we are coming full circle around to that kind of a model again. There's one last question to wrap up your talk: what are the checkpoints you set to make sure your pipelines are working as expected? Are there any quick tips on that, just to make sure that people have themselves covered?
[30:22] Sasha Rosenbaum: Yeah. I mean, there are multiple checks that you can put in place, and if you have the technology configured properly, you will have gates. So continuous delivery doesn't mean you always automatically go into production. Ideally you have a checkpoint after you check in your code, you have a checkpoint after you build, you have a checkpoint after you run tests, and you can have multiple of those because you can have multiple types of tests, essentially.
For these types of pipelines you have a checkpoint when you check the image into the container registry, and then basically a checkpoint where you check it out. And then deployment is a whole other thing, where once it's deployed you have to also take care of monitoring and stuff like that. And a lot of times now, when you're checking into source control, you definitely need to run security checks, and security checks can be at multiple stages as well. So you can run static code analysis, and then you can run security as part of your tests, and then you obviously monitor security once it's deployed.
[31:26] So there are a whole lot of different things. I think basically you kind of chip away at it: leverage the things that are free and easy to implement first, and that already puts you ahead of a lot of companies, and then start looking at different tools that can allow you to make sure your applications are secure and functioning well. I think security is one of the things that is often missed when people are worried so much about speed of deployment.
[32:00] Sharone Zitzman: I couldn't agree more. Thank you so much, Sasha, as always a pleasure and an honor to have you. Join Sasha in her speaker room immediately after her talk and on the spatial chat, where we'll get to chit chat with her and network. So thank you so much for joining us; as always, a stellar talk, big thanks.
Sasha Rosenbaum: Bye, I'll see you soon.
Sharone Zitzman: Take care. Thank you.