Decomposing Monolith NestJS API into gRPC Microservices


The workshop focuses on concepts, algorithms, and practices for decomposing a monolithic application into gRPC microservices. It overviews architecture principles, design patterns, and technologies used to build microservices. It covers the theory of the gRPC framework and the Protocol Buffers mechanism, as well as techniques and specifics of building isolated TypeScript services on the Node.js stack. The workshop includes a live use-case demo of decomposing an API application into a set of microservices. It best fits architects, tech leads, and developers who want to learn microservices patterns.


Level: Advanced

Patterns: DDD, Microservices

Technologies: gRPC, Protocol Buffers, Node.js, TypeScript, NestJS, Express.js, PostgreSQL, Turborepo

Example structure: monorepo configuration, packages configuration, common utilities, demo service

Practical exercise: refactor monolith app

119 min
18 Apr, 2023


AI Generated Video Summary

The workshop focused on decomposing a monolithic API into microservices using NestJS and gRPC. The speaker discussed the benefits and drawbacks of monoliths and microservices, as well as principles and architecture patterns for microservices. The process of decomposing the monolith involved refactoring the code, introducing gRPC for communication, and gradually moving parts of the monolith to separate services. The workshop also highlighted the importance of testing, CI/CD pipelines, and database decomposition. Overall, the workshop emphasized the need for careful planning and testing when decomposing a monolith into microservices.

1. Introduction to Workshop and Speaker Introduction

Short description:

Welcome to our workshop, How to Decompose a Monolith API into gRPC Microservices. I will explain the agenda of the workshop, the prerequisites, and general information. If you want to run it locally, you will need to install the necessary tools. Let me introduce myself: I'm Alex, a software engineer with experience in frontend, DevOps, and backend development. I've worked on projects using Node.js, Java, and Ruby. I recommend exploring Golang as well.

Alright, alright, everyone, welcome to our workshop, How to Decompose a Monolith API into gRPC Microservices. I will explain a little bit the agenda of the workshop, because this is, by the way, a page that you can visit. It's available, and it's gonna be available for some time for sure. It has some description, it has some prerequisites, it has some general information.

So, you don't need to run it locally today, because we're gonna mostly demonstrate some code today and show how it's running. But if you want to run it locally and go through the steps that we described in the practice session, then you will need to install these tools on your local machine: Node.js, npm, Docker, Docker Compose, and, more importantly in this particular case, Protocol Buffers and the Protocol Buffers compiler.

Let me introduce myself first. I'm Alex, this guy with a huge photo on the screen. I'm a software engineer, that's my main job. I used to be a frontend engineer some time ago, until 2015, and then I experienced something that is, I think, known as frontend fatigue. So I tried to switch my professional area and moved a little bit to the DevOps and backend stack. I was lucky enough to join a team that was supporting the continuous integration cycle for a big European bank. For me that was a big stroke of luck, because it sat in the middle between frontend — we were supporting frontend engineers at that time — and DevOps, which I really like: just doing some tooling for engineers, you know, helping out there and debugging lots of stuff with JavaScript. That's what DevOps meant in our case. And later I moved more to the backend stack. First, it was quite a big, quite formative, let's say, experience with a Node.js backend. We had an API, a very traditional Node.js API. And later on, I participated in a few backend projects with more traditional stacks like Java. I'm also now programming in Ruby. And yeah, that's pretty much my experience. So yeah, I enjoy programming and different technologies. I would also mention Golang, because Andrew and I bonded over that language, too. It's an awesome language, and I definitely recommend you explore it.

2. Introduction to Andrew and Node.js Experience

Short description:

Hi, I'm Andrew, a software engineer from the UK. I have experience with various technologies, including C#/.NET, Node.js, and TypeScript. Recently, I've been working on microservices using gRPC and exploring DevOps practices. I'm also involved in open-source projects and conference topics. Alex and I have known each other for over 10 years. Speaking of the Go language, there is no 'after JavaScript' for me, but I appreciate its simplicity and restrictions. Now, let's move on to some questions. How would you rate your Node.js experience from 1 to 10? We expect participants in today's workshop to have some experience with Node.js and backend development as we explore decomposing a monolith API into gRPC microservices.

Andrew, can you introduce yourself, please?

Yeah, sure. And then I'm going to have another question for you about Golang, right after. So hi, everyone. My name is Andrew. I'm also a software engineer, from the United Kingdom. I spent a lot of time trying to explore different technologies. Basically, I started from C#/.NET, spent about three years in that area, and then started moving in other directions, trying to find, let's say, better technologies for particular cases. There are a huge number of them, a lot of languages, and after your second or third language you already know all the basics and you just follow, you know, the main patterns and things like that. The most recent part of my experience was related to Node.js and TypeScript. I've been developing some microservices based on gRPC. We've been working on a financial application, trying to create an infrastructure of multiple services interconnected with each other. Also during this time I started to explore DevOps practices for myself. The main reason was that I would come to a company and start doing the same repeatable steps to set everything up, to support the developers, to roll everything out — and at some point I realized that, okay, there is some tooling like Pulumi or Terraform which allows you to do pretty much that and then repeat it. Even if you go to another company, you just do the same stuff, which is really nice. And yeah, Alex and I — I don't know for how long we've known each other, more than 10 years, I guess. We started doing some open-source projects together, and now we are working on different topics for conferences as well. That's pretty much about me. And my question, Alex: after JavaScript, how do you feel about the Go language?

After JavaScript, there is no 'after JavaScript'. Once you're used to the syntax, once it's formatted, you feel it all the time afterwards. So there is no 'after' for me now. But personally, I like the Go language, and I enjoyed its simplicity and its restrictions — some limitations on writing your code, which after JavaScript is a different feeling. Is that related to your experience as well?

Yeah, yeah. I mean, for me there were a couple of things I didn't really get at the beginning, like interfaces, and that you cannot actually throw errors. But overall, once you're used to that, it seems a much better experience. But again, as you said, there is no 'after JavaScript'. Once you're in it, you are in it forever.

Awesome, awesome. Right. Cool stuff. So that was a warm-up. I actually wanna ask a few questions. I tried to find the poll here in Zoom, but for some reason I don't see it, so I will just ask in the chat — please reply there. So, for this question, please rate your Node.js experience from 1 to 10, where 1 is seeing it for the first time and 10 is a super huge expert with 10 years of experience. I really wonder about that and how it relates to today. Again, to remind you, feel free to interrupt us and ask any questions. We have almost finished the warm-up, so let's actually go and see what we're going to do today. Okay, so some average, both the middle and both sides. For the audience of today's workshop, we expect some experience with Node.js, definitely with some backend, because today we are going to explore decomposing a monolith API into gRPC microservices.

3. Projects Ecosystem, Theories, and Chosen Algorithm

Short description:

We will explore the project's ecosystem and dive into theories and patterns of microservices and monoliths. We will introduce the algorithm chosen for this experiment and demonstrate its transformation. Additionally, we will discuss the use of gRPC and summarize our findings. We have an average rating of 6 and will be using NestJS for the Node.js API stack. We found a cool banking application on GitHub that uses NestJS and will show it in action. It is a big banking monolith with various modules, including authentication, bills, currency, and languages.

Yeah. What is it going to look like? Well, we just began; we are now finishing the introduction. We're going to look at the project's ecosystem. We'll have a look at our use case and try to see what the code is and what we are trying to achieve, things like that.

Then we're going to dive into some of the theory, some patterns: what is a microservice, what is a monolith, what are the principles of decomposing. Then we're also going to introduce an algorithm that we have chosen for this experiment ourselves, and we're gonna see it in action. Andrew will demonstrate how we transformed the monolith partially, because it's actually quite a big job. We found multiple, let's say, issues while exploring this path, and we want to share them with you, basically so you can feel it with us. And then we're gonna dive into gRPC. It's not the main topic, and it's actually a very legit question why we have chosen gRPC for this demonstration. I think we can make this remark in the beginning: it's not necessary, but in our use case we wanted to demonstrate more technologies for you, things like that. And then we're going to summarize. That's our plan.

Thanks for the answers. By the way, I see around 6 is our average rating today. Also another question: did you play with NestJS before? Because that's going to be our main technology for the Node.js API stack today. Yes, yes, yes, lots of yes. Okay. And some no's. Okay, thanks a lot, guys. From my side, no as well. Yeah, sure. Okay. Well, that was about the technologies and our plan for today. So let's find out what the project is. Actually, when we chose this topic, the idea was: okay, what are we going to showcase? We've been in contact with one of the mentors at the conference, and they just suggested writing our own monolith in a few hours. That was not the way we chose, by the way. So what would be needed? Essentially, we went to GitHub, we picked Node.js as the environment, we picked JavaScript, and then we tried to find an appropriate application. So we found one, and this is a very, very cool application, in my opinion. It might not have tons of stars, but in our case — well, first of all, I think the actual application is very suitable: a banking application. Everyone interacts with a bank, so we know what to expect from it. And next, it's using NestJS, and that's awesome, because this is a very cool framework to demonstrate all the patterns that we're going to talk about today. In my opinion, NestJS is pretty much closer to enterprise development than maybe other Node.js frameworks for APIs. So cheers to the author — I think this is a clone of another service, if I'm not mistaken. And Andrew will show it in action. But basically, it's one big banking monolith. You can see it has all kinds of modules there; we can already see, plus or minus, what would be expected. We have authentication, bills, currency, languages.

4. Introduction to NestJS Framework

Short description:

We have notifications, transactions, and users in our monolith, which is built using NestJS. NestJS is a decorator-based Node.js framework that allows us to build efficient, reliable, and scalable server-side applications. It hides crosscutting functionality behind decorators and provides around 100 decorators to handle various tasks. The framework simplifies routing and allows us to define restrictions on data types. NestJS also has extensive documentation and guides, although it may be overwhelming at first. It supports different ORMs and has plugins like Swagger for generating API routes and protocol objects. Overall, NestJS is a mature framework with similarities to Angular, including the use of dependency injection.

We have notifications, transactions, users — so, a pretty big monolith. And it's using NestJS, this logo. Many of you have touched NestJS before, but I will give a little bit of an introduction to this framework today. A really quick one.

So, yeah, you've probably seen code like this before. This is very generic NestJS code, from one of the guides. How I present NestJS is: it's basically a decorator-based Node.js framework to build — and then it says — efficient, reliable, and scalable server-side applications. What decorator-based means is that all the crosscutting functionality — if you hear something that you have a question about, please raise your hand, ask it in the chat, or just stop us — all the crosscutting functionality, or other functionality that has no relation to the business, is usually hidden behind the decorators. It has around 100 decorators to hide this crosscutting logic. We have, for example, a module decorator here. We also have a controller, a GET request decorator, and a request decorator to extract all kinds of data from the Express request object. It has a really nice notation, very much like Java annotations if you have experienced those before, or, I assume, the notations of PHP and C# frameworks with their decorators. So it's not that NestJS invented that pattern, but it's widely used in the backend world, I believe. You can define routes with these decorators. You can define restrictions on the data types — we don't see it in this example, but you can actually declare the types of data that you want to receive in a decorator, and the controller will throw an error if it receives something else. You see, for example, here we define the GET URL for our application on the cats URL, so it's going to be GET /cats. And it also wraps up the string that we return from this handler with the correct content type, as we would expect. So this framework does a lot for us, so we don't have to repeat ourselves.
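For reference, here is a minimal controller in the spirit of the canonical NestJS docs example — the class name and route strings are illustrative, not the workshop project's code:

```typescript
import { Controller, Get, Param } from '@nestjs/common';

@Controller('cats') // every route in this class is prefixed with /cats
export class CatsController {
  @Get() // handles GET /cats
  findAll(): string {
    // NestJS wraps this plain string in a response with the
    // correct content type, so the handler stays business-only.
    return 'This action returns all cats';
  }

  @Get(':id') // handles GET /cats/:id
  findOne(@Param('id') id: string): string {
    // @Param pulls the :id segment out of the Express request for us.
    return `This action returns cat #${id}`;
  }
}
```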

What I like about NestJS is, well, its maturity, I would say. First of all, it's really big. It has a lot of documentation, lots of guides on how to do things. To be honest, reading the documentation is a bit difficult with NestJS because there is so much of it, and if you want to really learn it, it's best, of course, to combine learning from different courses and trying some examples. It doesn't have a simple beginning-to-end guide like we see in Remix, for example, but there's still a lot of documentation to fill any missing gaps in your knowledge of the framework. Also, there are recipes for using different ORMs. For example, TypeORM is not built-in functionality, but it's very easy to attach to your application. Or SQLite, even Sequelize — pretty much all of them. We're gonna see, I think, not the gRPC generator, but something like the Swagger generator in a few minutes. The Swagger plugin, for example, allows you to generate all the API routes used in the NestJS application, and also the protocol objects used for interaction there — just Swagger, basically. So that's why I like it. Also, if you have seen Angular in action — I assume NestJS is sort of Angular for the backend. They have many things in common, for example dependency injection. That's a very common pattern used in both of them. In short, to explain it, because we already...

5. Dependency Injection and Code Introduction

Short description:

Dependency injection is an important pattern in frameworks like Angular and NestJS. NestJS provides many features out of the box, including the ability to choose between Express and Fastify as the web framework. We are working with a banking application built with Next.js, and we will focus on the Currency module, which presented some challenges. Overall, NestJS simplifies server-side application development and we will explore it further in the code.

Dependency injection is an important pattern for this workshop. It allows you to avoid manually instantiating services or objects in your application. With dependency injection in frameworks like Angular or NestJS, these services are instantiated for you and injected into the class that uses them. NestJS also utilizes domain-driven design and provides decorators for controlling application flow, such as pipes and guards for route access control. NestJS comes with many features out of the box, including the ability to choose between Express and Fastify as the web framework. We will be using Express for this workshop. Now, let's take a look at some code.
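As a rough illustration of that pattern — placeholder names, not the banking app's code — this is what constructor-based injection looks like in NestJS:

```typescript
import { Controller, Get, Injectable, Module } from '@nestjs/common';

@Injectable()
export class CatsService {
  findAll(): string[] {
    return ['Tom', 'Felix'];
  }
}

@Controller('cats')
export class CatsController {
  // NestJS instantiates CatsService and injects it here;
  // we never call `new CatsService()` ourselves.
  constructor(private readonly catsService: CatsService) {}

  @Get()
  findAll(): string[] {
    return this.catsService.findAll();
  }
}

@Module({
  controllers: [CatsController],
  providers: [CatsService], // registering the provider is what enables injection
})
export class CatsModule {}
```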

The project we are working with is a banking application that consists of a client and a backend server. The server represents the monolithic application and is built with NestJS. We chose this application because it already had Swagger documentation, which helps us understand the modules and routes. The main file of the application is main.ts, which bootstraps the Express application and sets up Swagger UI. The application is structured into modules, such as Auth, Bill, Currency, Language, Message, Notification, Transaction, and User. We will focus on the Currency module, which presented some challenges, such as filling the data and replacing an obsolete API with a new service.

Overall, NestJS is a powerful framework that simplifies server-side application development with its decorator-based approach, extensive features, and support for different web frameworks. The chosen project for this workshop provides a real-world example of a monolithic application that we will be decomposing into microservices. Now, let's dive into the code and see how it all works.

6. Project Overview and Code Demonstration

Short description:

We are working on a banking application that consists of a client and a backend server. The server is a representation of our monolithic application and is written in NestJS. The application lives in a single repository and uses Swagger for module visualization. The main file, main.ts, handles the bootstrapping of the application and sets up Swagger UI. The application has various modules, including currency. One challenge we faced was how to fill the data, which we solved using the framework's cron scheduler. We replaced an obsolete API with a new service, CryptoCompare, to get currency exchange rates. The application also has a swagger.json file for documentation. Currencies are a central aspect of the application and were chosen as a starting point for our coding challenges. Alex will explain more about this in the next part about the algorithm.

How is it with the project, Andrew? Can you show us some code? Yeah, sure, thanks for asking. Just to confirm, guys, can you see it well? I can make the fonts a bit bigger if you want. It looks good on my screen. You can see my screen, right? Yeah. Okay, cool. So yeah, a little bit of introduction again about this project we stuck with. It is a banking application, and originally it consisted of two parts: the client, which, to be honest, we never executed and never tried, and the backend part, which is the server. The server is a representation of our monolithic application, and I think the main reason we chose this particular application was the fact that, first of all, it was written in NestJS, and second, it already had Swagger, so we could easily see the whole picture of the modules and the routes the platform consists of, and then decide what we're gonna take for our coding challenges, basically.

And here is the repository. The bad thing about this application was the fact that it had no tests at all at the beginning, and it had no Docker Compose, so we had to do a couple of things before we even started to do anything with this project. So, basically, how is it structured in relation to NestJS? I think you are already aware of it. The main file here is called main.ts. It consists of some bootstrapping of the application — I think it's an Express application — and the main thing here is that it sets up Swagger UI, which we're gonna take a look at a bit later, just to understand what routes it has. And if we go deeper into the modules, there are a couple of things. There is a kind of main module, the application module, AppModule. Basically, it consists of everything you have in your system: all the modules are connected here, and I think the database is configured in this place as well. Yeah, a couple of modules: Auth, Bill, Currency, Language, Message, Notification, Transaction, User. Let's have a look at Currency, as we picked this as the main one. We experienced a few challenges. The first one: how to fill the data. Here is another tool from NestJS, called cron, where you define how often something will be executed, and it will be executed. Basically, this is not the original cron; it's just a part of the framework, which here executes, out of the box, every 12 hours. And here was the problem: the API used here was obsolete, so I had to replace it with something new. Let's see the service: get currency exchange rates. So here I've just used a very popular thing called CryptoCompare to get all the rates for a couple of currencies and then store them into the database, pretty much that. Let's have a quick look at Swagger. swagger.json — Alex, by the way, have you seen this? I'm not sure, maybe we generated it, but if you just start the application, we should see it on localhost as well. Yeah, I mean, I was trying to express something else, but... oh, here it is. So you can actually browse it here — I've never seen it before; it's like a readme kind of thing. But basically, this is the page I was going to show you when it started. I can re-share my screen, or we can just align on this. There's pretty much a lot of things here: something about authentication and working with users — login, register, password things; the queries about user messages, within the app, I guess; currencies — currencies are one of the central things here, used kind of everywhere. I think that was the first reason why we picked this. The second reason was that it was one of the smallest ones to try. And I think Alex will explain why that was important during the next part about the algorithm.
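A sketch of what such a scheduled job looks like with @nestjs/schedule — the class name, currency list, and logging stand in for the project's actual persistence code:

```typescript
import { Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';

@Injectable()
export class CurrencyRatesJob {
  // The framework's scheduler triggers this method every 12 hours;
  // no system cron is involved.
  @Cron(CronExpression.EVERY_12_HOURS)
  async refreshRates(): Promise<void> {
    // CryptoCompare's public price endpoint; tsyms lists the target currencies.
    const res = await fetch(
      'https://min-api.cryptocompare.com/data/price?fsym=USD&tsyms=EUR,GBP,PLN',
    );
    const rates: Record<string, number> = await res.json();
    // The real module stores these rates in the database via a repository.
    console.log(rates);
  }
}
```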

7. Exploring API, Swagger, and Docker Compose

Short description:

The API is simple, with parameters and pagination, and returns currency rates. It's a bit slow. Notifications and currency are interconnected. Swagger allows for standardization and automatic documentation generation. Docker Compose defines the application, including the Postgres database. Starting the application requires the database. Generating code from Swagger eliminates boilerplate code. Andrew didn't try generating code from Swagger.

The API is pretty simple: some parameters, pagination, and it returns currency rates, the main currency exchange rate, and the name of the currency. It's a bit slow. Some deals, transactions — and by the way, all of these things are kind of interconnected with currency, which is one of the things we haven't solved yet in our experiment. But yeah, this is another thing we are working towards; maybe we'll contribute it later.

And the last one, notifications. By the way, notifications was the second one we considered picking as the starting point for us. Underneath you can just find all the schemas, which are basically generated based on our annotations in the files. Pretty interesting — I'm not sure how many of you have worked with Swagger before, but it gives you some standardization, which is really cool, I guess. And this standardization is based on these definitions: which routes should return what kind of data. I'm just trying to find where the definition of the data is... Yeah, I think this is the currency page: it returns a type and a status, and it should be returning some description. And based on this information, the whole documentation portal is automatically generated out of the box, which is really, really nice.
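Those annotations look roughly like this — a hypothetical DTO using @nestjs/swagger, not the project's actual classes — and this is all the schema generator needs:

```typescript
import { ApiProperty } from '@nestjs/swagger';

export class CurrencyRateDto {
  @ApiProperty({ example: 'EUR', description: 'ISO currency code' })
  name!: string;

  @ApiProperty({ example: 0.92, description: 'Rate against the main currency' })
  rate!: number;
}
```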

Yeah, so let me check what else I want to show you. Docker Compose is probably next. So again, what we had here: we had some kind of database, and we also had migrations, I guess, which were already in the project. So we can execute the migrations and then the project starts working, but you still have to have a database locally. That was the reason we developed the Docker Compose file. Again, I'm not sure how many of you have used it, but this is kind of a definition of our application. Later on, you can also add the definition of the application itself here, but for now it's just the Postgres database, which we used for this challenge. So here we have an image, which is just pulled from a public repository; the ports mapping, where basically, when you start this Docker Compose, it forwards these ports from your local machine to the container; and some of the environment variables, like the database name, the user, and the password — pretty much standard. To start this application, I basically need to have this database. So, for example, if I, let's say, kill this database, that's a pretty simple way to get your application not working. It was not originally in the project, by the way, same as the tests, so we had to start with that. That was our first step of getting acquainted with the project: exploring the Swagger, figuring out how to start it. And we are now showing pretty much our own experience — when you come to a new project, how we started to work on it, what our line of thought was to decompose it, et cetera. Right. So, as a first thing, as you can see in this panel, the database was not here, and then it started: first, it executes some migrations from this folder, and then it starts the application, where it just looks into all the modules it has and starts to register our routes. And then here are the new things executed by the cron job, which, as I previously showed you in the currency module, goes to the CryptoCompare API, takes the rates, and then populates them into the database so we can use them. I think we can just try to call it locally, sure. Nicholas asks if you've used a Swagger codegen — to generate not Swagger, but code from Swagger. Did you ever try something like generating code from Swagger? Yes, I did myself, and I just wanted to ask whether you have done it too, and what your opinion is regarding that. I know it also means you have much boilerplate code you probably won't use in the future, but it means you don't have to start from scratch, because you get all the signatures and everything generated that otherwise needs to be done manually. Understood. Well, for Swagger, I didn't have that experience, but we definitely do something alike with gRPC. Andrew, did you try to generate code from Swagger? What was your experience with that? Yeah, I think, no.
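As an aside on the compose setup described above: the application side of that contract is just reading the same environment variables back. A minimal sketch with @nestjs/typeorm — the exact variable names are assumptions:

```typescript
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      // These must match the values declared in the docker-compose file:
      host: process.env.POSTGRES_HOST ?? 'localhost',
      port: Number(process.env.POSTGRES_PORT ?? 5432), // the forwarded port
      username: process.env.POSTGRES_USER,
      password: process.env.POSTGRES_PASSWORD,
      database: process.env.POSTGRES_DB,
      autoLoadEntities: true, // pick up entities registered via forFeature()
    }),
  ],
})
export class AppModule {}
```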

8. Experience with SOAP, XML, and Swagger

Short description:

I've worked with SOAP and XML, but not Swagger. gRPC was the main concern, but there were issues with code generation. Despite that, it's an easier way to work with protocols and write business logic.

I mean, I've done pretty much the same stuff with SOAP definitions, which is kind of the old way — XML stuff from the banking area, which is still alive and still there. But from Swagger, no. Yeah, as you said, gRPC was the main concern. Unfortunately, with some generators, when you pick one up and try it, you get some issues somewhere in the middle, so you have to adjust your code after generating — that's the main point. But overall, yeah, it's a much, much easier way. When there is some protocol and you just rely on this protocol, and then you generate everything around it and write your business logic — it's much easier for developers, I guess.

9. Introducing the Currency Module and Microservices

Short description:

We called a service that returned data from an external source. We discussed the ease of working with APIs that have Swagger documentation. We explored the current system and the plan to separate the currency module into a dedicated service. The plan involves using the gRPC protocol for communication. We also discussed the concept of microservices and the need to be ready to introduce them when the organization structure allows. Conway's Law states that the product reproduces the organization structure. Startups usually begin with a quick and dirty solution before transitioning to microservices as the monolith grows.

So yeah, I just called this service and it returned us some data from the external source, which we just populated before. Yeah, Alex, you're on.

Yeah, yeah, that's all cool. There was another comment: it's nice when you need to work with some API that already has Swagger. My small comment on that: if you go to main.ts, Andrew, there is a method there, setupSwagger, or something like that. It's very, very easy for NestJS applications — you have a plugin for that, and it will generate the Swagger for you. I don't remember if we had to add it or if it was already in the project. But anyway, it's a very tiny addition to a NestJS application, if you want to explore that one, right?
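A minimal sketch of such a helper using the @nestjs/swagger plugin — the title and paths are assumptions, not necessarily what the project's setupSwagger does:

```typescript
import { INestApplication } from '@nestjs/common';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';

export function setupSwagger(app: INestApplication): void {
  const config = new DocumentBuilder()
    .setTitle('Banking API')
    .setVersion('1.0')
    .addBearerAuth() // the app has authenticated routes
    .build();
  const document = SwaggerModule.createDocument(app, config);
  // Serves the Swagger UI at /api and the raw JSON at /api-json.
  SwaggerModule.setup('api', app, document);
}
```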

Okay. So the current system works now, right? Yeah — Andrew actually had to fix this currency external poll for the project as well. All right. So let's have a look at what we are trying to achieve, right, Alex? Yeah, you had a really nice drawing, I think. Yeah, with a little bit of kittens. Let me just switch my screen. There's a comment: OpenAPI TypeScript codegen, to generate backend code to call another server's API. Just to confirm, can you see my screen now? Yeah, all good. Cool. So here is a little drawing of what this monolith currently is. It consists of a REST API, and here is a list of different modules, all interconnected with each other, and there is one database. And here is the highlighted module which we picked to separate out: the currency module. And yeah, a kitten coming in — and this is another drawing; I don't know if it was intended to be shown now or later, but anyway. The idea was to separate this currency service as a dedicated service, and we picked the gRPC protocol for communication between them. However, it depends on the situation: in some cases, you could just use the same REST API, and instead of doing this, just proxy the whole currency thing here. But in this particular case, we wanted to play with gRPC as well. So the main idea was to move this currency module into a new — not a separate repository; a monorepo, the same repo — but an external module, an external application, which is intended to use a separate database, though not in our example yet. And on this side we just have a currency client, which proxies all the requests coming to the REST API over to the gRPC service and does the underlying work behind it. I think that's pretty much the plan. Yeah, if you want to follow up on the next steps, what we're gonna do, right? Yeah, thanks, Andrew. Nice. So, a little bit of theory now. We've seen the project; now, how are we gonna move from it? Also, a remark: you can go to these links, for example microservices.io — there is very, very cool information there; you will find lots of interesting articles about monolith patterns and microservice patterns. Basically, a lot of thinking has been done before us about decomposing and about how to organize such an application. But we have a shortcut for that today. So, why and when do you want to introduce a microservices ecosystem instead of using a monolith? I think a very good thought on that is that at some point in time you need to be ready to do it, and usually that depends on the organization structure. That is where the so-called Conway's Law, which someone already mentioned, comes in. Basically, it says that the product reproduces the organization structure. Usually startups begin with a very fast way of producing a product, right? Try some hypothesis, see if it's suitable for the market or not. So you don't begin with microservices; usually you begin with something really quick, dirty and — most importantly — working. And then at some point in time you decide you want to move faster in delivering the software, because your monolith has been growing.
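Before the theory continues, a minimal sketch of how the currency client mentioned in the plan could register its gRPC connection with @nestjs/microservices — the package name, proto path, and URL are illustrative assumptions, not the workshop repo's actual values:

```typescript
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';
import { join } from 'path';

@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'CURRENCY_PACKAGE', // injection token used by the client code
        transport: Transport.GRPC,
        options: {
          package: 'currency', // must match the package in currency.proto
          protoPath: join(__dirname, 'proto/currency.proto'),
          url: process.env.CURRENCY_SERVICE_URL ?? 'localhost:5000',
        },
      },
    ]),
  ],
  exports: [ClientsModule],
})
export class CurrencyClientModule {}
```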

10. Monoliths vs Microservices: Benefits and Drawbacks

Short description:

Monoliths have benefits like simplicity in code growth, testing, and consistent state. However, they become harder to change, onboard new developers to, and introduce new features into. Deploying, debugging, and scaling also become challenging. Sticking to a monolith can lead to outdated technology. Microservices offer a different approach, with compact applications responsible for specific business features. They are independently deployable and owned by one team. The single responsibility principle applies, with services having one reason to change.

If it was successful, at some point it is big enough that you will experience the issue of wanting separate teams, with those separate teams responsible for separate parts of the monolith — and then working on it as a single project becomes harder. This, by the way, is just a definition of the applications we are working with, nothing more than that.

Andrew already explained it: some API operating over HTTP — let's say a REST API; it has some business logic, it uses a database, it operates with JSON. Not necessarily, by the way, but in our case it was JSON originally, and we introduced something more than JSON to it. Yeah, so you begin with a monolith. And a monolith actually has some very good benefits. It's very simple to grow this code, right? To develop it, because your code is in one place, and introducing changes to it is relatively easy at this stage. Also, it's easy to test, because again, all the dependent parts are close to each other. You don't have to, I don't know, look for an external Swagger and mock it locally to make things work. So, as it says, all code is local. That also includes the database in most cases: in your monolith you have just a single database, and that has a lot of benefits too. The benefit is that the state across the whole application is very, very consistent. Everywhere you need to do some transaction, where you need to affect multiple tables, let's say, it's much easier than having it spread around different microservices doing asynchronous calls to each other — and it's close to impossible to make a real transaction in that case. And scaling linearly, it says, is also simple. That's a point where you might start to argue with me, but the idea is that it's very easy to start a second monolith next to the first one, right? If you have enough instances or machines in AWS, you can just start a new one, and that will increase the capacity of the service. However, it has limitations too. For example, if you know that there is only one place in your monolith that is getting loaded all the time, and the rest of the monolith is actually not doing much work, then it would be a waste of resources to start and scale it all together.

Yeah, so now a little bit about the drawbacks. We covered some benefits of using a monolith: basically, all the code is in one place, and that helps by itself. But then, as it grows, it gets harder to introduce changes. A change will affect many, many places in your monolith, and sometimes engineers don't even know where exactly it needs to go. To be honest, I'm right now in a situation with a big, big monolith, and I know for sure that it looks like that. It's harder to onboard new developers, too, for the same reasons: because it's hard to make changes, it's usually harder to learn how the system works; it's also harder to start on your local machine — in most cases it becomes very slow — and it's harder to build new features. As it grows, it gets harder to deploy, debug, and scale. Maybe the main pain point is the pipeline: every change will also require running CI/CD, and this CI/CD, as your monolith grows, becomes slower and slower, and that affects developer work a lot. And one other interesting aspect — I think this one is clearly true — is that the longer you stick to some code, to the monolith, without breaking it into smaller services, the longer you stick to some technology, and technology becomes outdated very quickly. In the developer world, especially in JavaScript, I would say that after five years you would already regret your original technology choice and would be happy to introduce something new to your stack. But in the case of a monolith, that's unfortunately not always possible: it's a long-term application, and you have to stick to the technologies that were chosen some time ago.

So people can choose to use microservices, and microservices are another way of running and developing your software. It's an architectural pattern, and I actually like to think about it that way, because an architectural pattern means it affects all cycles of development, and also the organization, and also the product — so it all relates back to Conway's Law, as you can see. With this pattern, your application is a set of services. And again, if you click on the link, you will find a really nice article — I have two heroes for articles today as references: Chris Richardson and Martin Fowler. Of course, if you don't know them, please put a minus in the chat, but I think you have most likely heard of Martin Fowler before.

In terms of services, they are very compact applications, and these applications usually contain, and are responsible for, one particular business feature. They are independently deployable. They are usually owned by one team — not necessarily, but it's usually true. You can compare that with a monolith, where basically one monolith is supported by lots and lots of engineers who are all changing code in the one monolith application. Then single responsibility and granularity — I may explain it later with self-contained services, but you can imagine that every service tries to be responsible for one thing. Again, referencing those articles, they have a really cool thought about that. The single responsibility principle is a very clear idea: you have just one reason to change your class, if it's about classes, or your package, if it's about packages. Something like that can be applied to microservices: you need just one reason to change your service, ideally.

11. Microservices and Principles

Short description:

Microservices can be developed independently, maintained and tested independently, and have a protocol that can be registered. Swagger is a common approach for API definition, but a registry is also important for managing conflicting APIs and controlling changes. Microservices can be used in front-end development as well, known as micro frontends. They offer benefits such as easy deployment, faster delivery of changes, and onboarding new developers. Choosing technology is flexible, and different programming languages and frameworks can be used. Scaling microservices is easier when resource needs are identified. Principles of microservices include working independently, experimenting with business capabilities, growing teams, and following the single responsibility principle.

And that will be the responsibility of the service. Loosely coupled — so yeah, usually you have some intermediate layer connecting these services, so they are clearly independent from each other, and then they can be developed independently of each other. They can be maintained and tested independently. They usually have a protocol that can be registered somewhere — and with that I refer again to Swagger, or to other approaches to dealing with protocols; with Swagger it can be interesting.

So basically, Swagger is a definition of your API in JSON format, or YAML format, whatever. For some organizations it's not enough to just know your API, right? If you deliver lots of microservices, and your product is a set of microservices that all need to operate together, then it would be very smart to actually have a registry of these APIs, so that you know ahead of time whether the APIs conflict — they should not conflict with each other, right? So you want some registry, and ideally you also want to somehow compile the definitions upfront, before introducing a change to your API — you want to control minor versus major changes to your APIs. If you do it with Swagger, with the REST API, then you will need to invent a way, I guess, in this case: a way to actually compare these REST APIs, to declare them in some form, to do all the checks before going to change some of the services in your product. With gRPC — again, I'll talk about it later — that's not necessary, because gRPC already is a format; it's a language itself, the Protocol Buffers language, and then you get a compile step there for free. I will continue that thought later.

So yeah, microservices are even used on the front end now — I don't know if you're familiar with that, but there is a whole term for it: micro frontends. My last project in a bank was about micro frontends. So microservices and micro frontends are definitely a way to decompose your software. Some of the characteristics compared to a monolith: it's easy to deploy, because every change is deployed separately — your service is deployed separately. You can iterate and evolve it separately, and that's crucial, right? The less code there is, the less noise the developer has when developing some part of the system, and the faster the delivery of the change — and that's very important. It's also important for onboarding new developers: they don't have to learn the whole monolith to know how the product works. You just need to know one particular part of the code, and then you can maybe learn the services next to the current one, and then you know how the whole system works — but you begin small, and that's very important for the journey. Choosing technology — again, compared to what we saw with the monolith, you don't need to stick to it. If you prefer, you can rewrite one particular piece of the system; it will be just one service in this case, and it can be any technology you choose. So usually the systems have — yeah, somewhere there's this picture with microservices — in big companies there can be multiple programming languages chosen, or different frameworks; it can be anything. Sorry, Alex, I'm just wondering, since you experienced working in a bank: why is there so little Java in here? Just three of them, right? Yeah, indeed, it's not like that in a bank. In a bank you usually have mostly Java. But the truth is, even in my experience, we had Java and Node.js services, and I think we also had Golang, but I never saw the code of those. So yeah, I'm sure Java and Node.js are being used there. And next is the scaling part. Scaling is also really easy when you know which part of your microservices needs more resources: you can start more instances of that particular microservice. By the way, we have been exploring this topic, Andrew and I — how to find the bad microservice in your system — but that's maybe for next year; right now it's 30% done or something like that. Yeah, let's go on. Some of the other principles that microservices pursue: you can work on them in parallel, independently — we said that already. Experiment with business capabilities: your business features can move forward very quickly. Grow the number of teams: you introduce new services, you introduce new products, and one team is responsible for a new feature, one microservice again. That is all covered by the microservices pattern. The common closure principle helps with self-contained services — that's very much like what I tried to say before. And the single responsibility principle is part of the SOLID principles, right? If you haven't heard of them, please write in the chat. Yeah, indeed, I see the comment that the second one is Chris Richardson and the other one is Martin Fowler, our co-hosts for today.

12. Common Closure Principle and Architecture Patterns

Short description:

The common closure principle or self-contained services is when a microservice contains the whole state for itself. When transitioning from a monolith to microservices, it's important to ensure that nothing is broken. Monitoring, CI/CD, and tests are prerequisites for refactoring. Microservices should be loosely coupled and have no dependencies on the monolith. Architecture patterns like command query, responsibilities creation, event sourcing, and sagas optimize database usage and loosely couple services. The strangler application pattern involves introducing services that gradually replace the monolith. The monolith slowly decays as the number of microservices grows.

Yeah, so the common closure principle, or self-contained services: when you plan to deliver a new feature in this whole ecosystem, the ideal microservice actually contains the whole state for itself. It doesn't need to do asynchronous calls; it doesn't need to check some data externally. Ideally, it would contain all the needed information right inside the service — but that's the ideal microservice.

In most cases, of course, you will have some external dependencies, but it's good to keep that in mind. The next few points are about what you need when you're thinking about the system, especially when going from a monolith to microservices: you need to know that you're not breaking anything. And how do you know that? There's a lot of help from DevOps-related practices: you can have monitoring, you have CI/CD in place, you have tests — you need to have tests in place, and you need to run these tests on CI/CD. That, I would say, is a prerequisite when you need to do the refactoring and go from monolith to microservices.

A few last thoughts here: no dependencies on the monolith. Again, this is a good practice, but it's not strictly required. The idea is that when you go from monolith to microservices, you want to go in one direction. Every call from the monolith to a microservice is legit, but the other way around will slow you down: when you need to make a call back to the monolith, that somewhat negates the work you've done decomposing the monolith into microservice applications. And each service contains the whole state for itself — it is very cohesive inside, but across the ecosystem the services should be loosely coupled. Usually, again, there is a layer linking these microservices together.

Moving forward, some of the architecture patterns. I will start with these few because I actually just wanted to name them, not explain them today — but I really love these: command query responsibility segregation, event sourcing, and sagas. It's a very beautiful way of looking at your microservices. They help you do many things differently: they help optimize your database usage, and they help you architect your services to be really loosely coupled. Unfortunately, we don't have time for them today. For SOLID principles, I already mentioned the single responsibility principle and the other ones. Clean architecture — again referencing Martin Fowler; I will show a picture related to clean architecture today, and the command query responsibility segregation one, if you haven't seen it. But what we're actually going to use today is a very well-known thing called the strangler application. Please put a plus or minus in the chat if you have seen this wonderful tree before. So yeah, it's a very common and well-known pattern when you're talking about refactoring. The idea here is that the big tree is your monolith. And when you want to go from the monolith to services — think of small strangler figs, I think they're called. Basically, this is a parasitic plant that can grow on top of one big tree. It goes from top to bottom, it grows its roots, each of the plants gets bigger and bigger, and you see it hides the whole tree. So the tree decays slowly, and in the end it dies. That's a very dramatic way of describing refactoring, but I like these metaphors — they help describe the flow, I think. And here is a little more technical explanation: strangling the monolith. You had one monolith, and how do you refactor it? Basically, you introduce a service, and you make sure that you can still run and operate the whole system together. The number of microservices ideally grows over time, and the monolith slowly decays. In the ideal situation, you end up with the picture on the right: you have lots of microservices and just them, with nothing left of the monolith itself.
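In code, that strangler step for the currency module could look roughly like this — the monolith keeps its REST route, but the handler now proxies to the extracted gRPC service; the interface and method names are assumptions building on the client module sketched earlier:

```typescript
import { Controller, Get, Inject, OnModuleInit } from '@nestjs/common';
import { ClientGrpc } from '@nestjs/microservices';
import { Observable, firstValueFrom } from 'rxjs';

// Hand-written mirror of the (assumed) CurrencyService proto contract.
interface CurrencyService {
  getRates(request: { page: number }): Observable<{ rates: unknown[] }>;
}

@Controller('currencies')
export class CurrencyProxyController implements OnModuleInit {
  private currencyService!: CurrencyService;

  constructor(@Inject('CURRENCY_PACKAGE') private readonly client: ClientGrpc) {}

  onModuleInit() {
    // Resolve the typed stub for the service defined in currency.proto.
    this.currencyService = this.client.getService<CurrencyService>('CurrencyService');
  }

  @Get() // the REST contract stays the same; only the backing call changed
  getRates() {
    return firstValueFrom(this.currencyService.getRates({ page: 1 }));
  }
}
```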

Yeah, already mentioned. Just a few last words about the other patterns used — DDD — and then the algorithm. We're going to explore the algorithm a little bit and show the code again. So, this is the clean architecture picture. There are actually many other names for it: hexagonal architecture, onion architecture, and so on.

13. Domain-based Applications Decomposition and NestJS

Short description:

Domain-based application decomposition by clean architecture is crucial for understanding the language of the business and the application itself. NestJS, a DDD-minded framework, uses terms like entities and repositories to represent models. Entities contain business logic and have an identity, while values are immutable constants. Sub-domains in NestJS can be represented by modules, and repositories connect models to databases. Microservices can communicate through REST APIs or event buses, enabling asynchronous interactions and the introduction of sagas. It is important to avoid using shallow entities without behavior. Decomposing a monolith is a challenging and costly task that requires organizational readiness for microservices.

Maybe someone else remembers other names like that. I chose this definition and formulation myself — domain-based application decomposition by clean architecture — though it's also not super clear.

So, the domain is the core. When you think of DDD, "the domain is the core" is basically the first law to learn. It means that when you operate on a bigger application, it's very crucial to know the language of the business, the language of the application itself, so that everyone — the developers, the managers, the product side — knows which part of the application they are talking about; they know the business they're talking about. How to achieve that is a different story, but usually it's about communication and iteration between different stakeholders — about knowing the business, again. Ideally, when you implement this part correctly, everyone across the whole ecosystem speaks the same language. For example, if you have users of the banking product, you have, of course, individual users or company users; you will know that upfront in your system and model it accordingly. The reason I put this list here is that these are not only DDD terms — they also apply a lot to the NestJS ecosystem. NestJS is a framework with very much DDD in mind: it has all this OOP in place, and it also uses terms like entities and repositories, which were probably not invented by the authors of NestJS but come from other frameworks like Spring in Java, et cetera. A few points: the most granular models inside your application are either a value or an entity. In most cases in NestJS, you use the term entity for both, but they are both models. The idea in the classical DDD way is that values are like constants: they are really immutable, they are long-living, usually singletons. Entities are models with some business logic encapsulated in the classes — they actually have some logic inside, and they have an identity, which usually means they have an ID. In the case of value objects, it's just a constant: maybe not an ID, just the string itself.

Then you have the idea of sub-domains — and in DDD you have much more than that: you have bounded contexts, you have aggregates, and that's a very, very interesting topic itself, but we don't have too much time today. Suffice it to say that in NestJS you have this idea of modules, and each module can represent a sub-domain. A sub-domain is basically the part of the application that your team is responsible for — so not necessarily one service; it can be multiple services. If you want to read about aggregates and transactions, I'd very much encourage you to; it's an interesting topic, but let's not do it today. Another interesting one: repositories, which you've probably heard of. They are very widely used in JavaScript and also in NestJS. Repositories are the layer where your models are connected to a database: usually there is a class responsible for getting data from and saving data to the database. Those are called repositories, and the entities are usually attached to a repository, or use repositories under the hood, to save or retrieve data. And another one that we unfortunately will not touch today is domain events. That's about how microservices speak to each other, because there are different approaches to that. You can think of a REST API — that's the one we actually use in the application right now. But to make things more loosely coupled, you want a somewhat different pattern, usually some kind of event bus, and then the interaction between your microservices becomes absolutely asynchronous. That's very cool, because it also allows you to introduce sagas and absolutely isolated microservice contexts, which is, again, very, very cool. There are also some antipatterns you can look at separately — basically, not to use fat entities, or shallow entities without any behavior inside. So now let's talk about the splitting algorithm that we have chosen and what we want to demonstrate. There's a nice picture from Martin Fowler's article, I believe. A few last thoughts before we go into the very simple algorithm: first of all, it costs a lot to decompose your monolith. It's very, very hard work, and you need to be ready to use these microservices — you need the whole organization prepared for you to do that.
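To make the entity/repository pairing concrete, here is a small sketch with TypeORM under NestJS — the Currency fields are illustrative assumptions, not the project's schema:

```typescript
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Column, Entity, PrimaryGeneratedColumn, Repository } from 'typeorm';

@Entity()
export class Currency {
  @PrimaryGeneratedColumn()
  id!: number; // entities have an identity, unlike value objects

  @Column()
  name!: string;

  @Column('decimal')
  rate!: number;
}

@Injectable()
export class CurrencyStore {
  // The repository is the layer connecting the model to the database.
  constructor(
    @InjectRepository(Currency)
    private readonly repo: Repository<Currency>,
  ) {}

  findByName(name: string): Promise<Currency | null> {
    return this.repo.findOneBy({ name });
  }
}
```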

14. Refactoring Approach and Algorithm Steps

Short description:

When refactoring a monolith into microservices, it's important to have a clear goal and know where to stop. Choosing services based on return on investment can be a legitimate approach. Starting with the cheapest service allows for testing the infrastructure and ensuring functionality is not broken. The algorithm steps include understanding the domain, introducing DevOps and tests, and reflecting on removing dependencies and database considerations. The next step is to show the second code part. The speaker also asks the audience about their experience with distributed transactions and discusses the use of Kafka. The text ends with a mention of writing tests and understanding the coupling of the currency service to other modules and services.

One way of doing that, when your organization is moving forward and you develop your application, is to start with new services, because it doesn't cost you much, right? You need to develop this code anyway, so you can put it next to your monolith, and the existing application just naturally continues working because you don't break any old functionality. That's one approach, and unfortunately it doesn't work for the refactoring part. For refactoring, you actually need to rewrite some code.

Also, it's good to have an end in mind. You need to know the goal: where you want to stop in this picture, what part of the monolith you want to leave, or at least the steps you want to iterate through. Like, okay, I'm going to stop at my authorization service because it's a pain; auth for the whole application will be in a separate service and I can scale it to 12, 20 instances at any point in time. That can be a really good goal. Knowing where to stop, of course, helps a lot.

Another last thought, just the last one before the algorithm: when you're choosing the service, you can also choose it via return on investment, and that's, I think, a very legit approach. Basically: if you refactor and decouple this particular service, how much will it benefit you in the future? It may be the hardest one, but if it's used a lot, it might be the best one to start from anyway, because then you will have the biggest return on investment. The thought that actually triggered us in our journey, which is not in this list, is that you can also choose the next service by the cost of extracting it from the monolith, and that includes how coupled it is inside the monolith. That was actually our reasoning for choosing currency. The idea of choosing the cheapest service in terms of development first is that you want to test the whole infrastructure around this monolith, and you want to verify that you are not breaking any of its functionality in any way. You need this intermediate layer of service communication to keep working over time, right? You need tests in place, you need a gateway, you need your services communicating fine. And to test it all, the cheapest service is a very good starting point. That's why we chose currency. So, these are the algorithm steps we defined. First, understand the domain. We did it, right? We showed you the nice Figma diagram, we saw the Swagger, we explored the code a little bit, we saw the technical overview of what a NestJS module is. Next, DevOps and tests. That's a very good point: we didn't have any of those. We didn't have Docker Compose, we didn't have a CI/CD; we'll ask Andrew to show it next. We had to introduce this CI/CD to make sure that all of the functionality keeps working while we're doing the decomposition. Then, choose the service to refactor: the cheapest one, to prove the CI/CD, which was our approach, or a high-value service based on return on investment. And then refactor. That's the main pain point, right? Because we need to move all the dependencies out of the monolith into our microservice. And don't forget about the database, because ideally the service will have its own database and will not be using the original monolith database. All right, Andrew, I think we're ready to show the second code part, right? Yeah, I actually had one question in mind; if you're okay, I will just place it in the chat. Most of you answered yes for microservices. How many of you have experienced distributed transactions, when you need to make changes across multiple services? Just wondering how your experience was. Yeah, I see Onion architecture as an answer to my earlier question. Thanks, Simon. Indeed, Onion is the other name for it. I think I have experienced that, if you ask me. If I understand it correctly, for distributed transactions: we used Kafka, and we wanted to receive a message from Kafka after sending our message to Kafka. So we wanted a bi-directional interaction, and we encapsulated it in one transaction.
So, yeah, I experienced that. That was interesting. Was it fun? You can say that, yes. Okay. Can you see my screen? Yeah. Cool. Okay. Getting back to the code base of the monolith: the next thing we introduced in this repository, to keep confirming that everything works well, is a test. I think most of you use tests, so we did it as well. Let me just try to find it: the currency test. Yeah. Actually, while I was writing the test, I was trying to understand the coupling of currency to other modules and services, in order to mock everything I needed to mock. And basically, yeah, it has some connections.

15. Integration Testing and CI Pipeline Setup

Short description:

The test involves setting up the database, pushing data using a repository, and calling the API to check if the result matches the expectation. It's a simple integration test. The currency entity has some unused bidirectional references. We then set up the CI pipeline to run the test across commits and ensure everything works well. The pipeline includes steps for checking out, installing dependencies, setting up the environment, running tests, and downscaling services. We use Docker Compose to quickly set up the database for integration tests. Overall, we are making progress and have introduced a monorepo.

It has bidirectional connections to user, to transaction, and to bill. The currency itself doesn't use these connections internally, but they are still there. So the test is pretty simple: we set up the database, then push something into it using a repository; the repository is internal, a currency repository based on the entity. And then we just call the API and check whether the result is what we expected. Basically, it's a very simple integration test.
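
A minimal sketch of such an integration test, assuming Jest, supertest, and a hypothetical CurrencyRepository provider; module paths, the seeded data, and the `{ data: [...] }` response shape are assumptions:

```typescript
import { Test } from '@nestjs/testing';
import { INestApplication } from '@nestjs/common';
import * as request from 'supertest';
import { AppModule } from '../src/app.module';
import { CurrencyRepository } from '../src/modules/currency/currency.repository';

describe('GET /currencies', () => {
  let app: INestApplication;

  beforeAll(async () => {
    // Boot the whole application against the database started by docker-compose.
    const moduleRef = await Test.createTestingModule({ imports: [AppModule] }).compile();
    app = moduleRef.createNestApplication();
    await app.init();

    // Seed the database through the repository, as described above.
    const repository = app.get(CurrencyRepository);
    await repository.save({ code: 'EUR', rate: 1.0 });
  });

  afterAll(async () => {
    await app.close();
  });

  it('returns the seeded currency', async () => {
    const response = await request(app.getHttpServer()).get('/currencies').expect(200);
    // Assuming the endpoint wraps results in { data: [...] }.
    expect(response.body.data).toEqual(
      expect.arrayContaining([expect.objectContaining({ code: 'EUR' })]),
    );
  });
});
```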

And just to have another look at the currency entity, actually; it's a bit more confusing. There were two things here that I basically commented out: as I said, the bidirectional references to the user, transaction, and bill entities, which we don't use here at all. And yeah, let's try to run this test as it's already written. When we have the test, the next step forward is to set up the CI pipeline, basically to be able to run this test, or multiple tests, across all the commits we make, and make sure that everything works well. It produces a long log, but that was useful for understanding what's going on at the database level: it runs all the migrations it has and then applies the test that was introduced here. Yeah, so the next thing to show is the pipeline we added here. Again, that's another question I had in mind: how many of you have touched GitHub Actions, the GitHub pipelines tool? What was your experience? Yeah, well, we are collecting answers; let's just take a look. We defined that the working directory will be server, because we don't care about the client at this point at all. As Alex opted out of frontend, we don't care about the frontend anymore. Never did. Yeah. Okay. A little bit of environment for our database, and then just the steps of what's going to be executed. Here we do pretty much a checkout and install the dependencies, installing yarn first. I'm not sure why we decided to use yarn here, but anyway, yarn install. And then, from the previous step, we have docker-compose, which is another useful tool to run from here. We run docker-compose up -d, which basically means it will be executed as a daemon, so we can just continue our workflow. What it does is set up the database for us, so we can run our integration tests on the CI quickly. Here is a little bit of testing. Don't print your env, especially if your env is sensitive; it would be possible to read it right here or somewhere else. But yeah. Wow, great practices. Thank you, Andrew. Oh my gosh. I mean, I do this from time to time. In open source projects, mainly. That's why they got ruined after that. Yeah, and then we basically run our yarn test, much the same as I did in the terminal just now. And as the very last step, we do docker-compose down. This is just to tear down the services we scaled up with docker-compose, which is basically our database, and we also have this flag, if: always(), which basically means that if the tests or the pipeline fail on any step, this step will be executed anyway. So that's about tests and the server CI/CD; a rough sketch of such a workflow follows after this paragraph. So what did we start to do next? First, we introduced a monorepo. From our previous experience we used Lerna.
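
Picking up the CI description above, here is a rough sketch of such a GitHub Actions workflow; this is an assumption of a typical setup, not the project's actual file, and names, versions, and the connection string are illustrative:

```yaml
name: server-ci
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: server
    env:
      DATABASE_URL: postgres://postgres:postgres@localhost:5432/app
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm install -g yarn
      - run: yarn install
      # Start the database as a daemon so the workflow can continue.
      - run: docker-compose up -d
      - run: yarn test
      # Tear the services down even if the tests failed.
      - run: docker-compose down
        if: always()
```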

16. Introduction to gRPC and Protocol Definitions

Short description:

Lerna allows you to utilize yarn workspaces more efficiently and run multiple commands across packages in one repository. We have one package, the gRPC currency service. gRPC is a framework for communication that allows you to increase performance and standardize architecture. It generates code for communication between services, with some services acting as clients and others as servers. gRPC uses protocol definitions to define service interactions. It's a lightweight language that includes service definitions and data types.

I think some of you have tried it. Nowadays the more common thing is Turborepo, which we started to use as well in other workshops, but here we just have Lerna. Basically, Lerna allows you to utilize yarn workspaces a bit more efficiently, and it also allows you to run multiple commands across all the packages stored in one repository. For now, we just have one package, the gRPC currency service; a minimal configuration sketch follows below.
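
The configuration sketch mentioned above, assuming a typical Lerna-on-yarn-workspaces setup rather than the project's actual files, a lerna.json needs little more than this:

```json
{
  "packages": ["packages/*"],
  "version": "independent",
  "npmClient": "yarn",
  "useWorkspaces": true
}
```

With the root package.json marked `"private": true` and `"workspaces": ["packages/*"]`, a command like `npx lerna run test` then runs the test script in every package of the monorepo.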

So here we actually went to the getting-started guides of Nest.js about gRPC services and generated everything out of the box, I mean the bootstrap file. It's pretty much the same as we have for the REST API, except it's a microservice with gRPC client options. Let's have a look at what that is. It's something we need to run gRPC correctly: the package name, the proto file location, and basically the host on which the service will run. So yeah, let's have a quick look into that proto: currency.proto. I actually have some material about protocol buffers. Yeah, I wanted to ask, do we want to jump into it now? Yeah, let's go; it should not be a huge one. So let's do a little bit of an intro to that. So indeed, gRPC is one of the possible ways of communicating; in the microservices world it's usually called synchronous communication, although officially it can be synchronous or asynchronous in your code. So, what does it stand for? I heard some joke about gRPC recently, but I forgot it. Well, the original joke is that it's Google Remote Procedure Call. But actually, it's not a joke; it's rather true. It was started at Google, like some of the big technologies we know. It's a way of communicating over the network, and it allows different technologies to talk to each other. Some dates: 2015; it's widely supported by the community right now and fully open source. Google was actually using some of this tooling before; it wasn't called gRPC back then, but they were using it inside their internal infrastructure. Why did they do it? Because it increases the performance of communication between services a lot, but not only that, right? It also standardizes this architecture, and it means you get to know one framework: gRPC is your framework, and it lets you standardize the tooling that supports this communication. This is a famous picture from the official site of gRPC; we should have the link here. It's a very cool website. I like it because it's not that big, so you can actually find all the important information there without being overwhelmed. You have some recipes, for example for how to use Node.js; it's a little bit outdated, but it works fine. So this picture is from there. The idea, very quickly, is that you generate the code that is responsible for communication. The dark green parts of these services are the code generated from the gRPC protocols for you. You can see that some of the services are called clients and some of them are called servers, and this is a little bit ambiguous, because in the microservices world every one of them can be either a client or a server; they need this communication anyway. So in most cases you will have both. So what gRPC is, in a nutshell, is protocol definitions. This is an example of how such a protocol looks. You can see it's a very, very tiny language. You might think, okay, that's just a hello-world service, but if you look at real-world examples, you would probably find something really close to that. It's not much more than these service definitions.
It can have some data types, described with the message keyword; with this notation you have a name for the data type, and then typed fields, each with a numeric tag, inside the message. A sketch of such a definition follows below.
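
A rough sketch of the kind of hello-world definition described here; the service and field names are illustrative, not the exact slide contents:

```protobuf
syntax = "proto3";

package hello;

// The service part: introduced by gRPC on top of protocol buffers.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The data-type part: plain protocol buffers messages.
message HelloRequest {
  string greeting = 1; // a named, typed field with a numeric tag
}

message HelloReply {
  string reply = 1;
}
```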

17. Protocol Buffers and gRPC

Short description:

Protocol buffers are a format for describing services and data types. They are used in gRPC to introduce services and protocols between services. With protocol buffers, you can generate code for communicating between services and generating types. Protocol buffers are widely supported in various languages and have built-in features for authentication. They compile data into a small binary format and automatically generate code for encoding and decoding. Protocol buffers are highly adopted by the community and have multiple implementations, making them a popular choice for data transformation. While JSON can be used for inter-service communication, protocol buffers offer the advantage of smaller binary data.

Like here, we define one field, the greeting, and here we define one field, the reply. You also have low-level data types: strings, numbers, integers, booleans, and so on. And if you need an object, you would do something like a nested structure there. So that's the data types part; it's the original part of protocol buffers.

And then what we see a little above is a service. The service is also part of the file, but it's not part of protocol buffers themselves; it's the part that gRPC introduced on top of protocol buffers. Just to make the confusion a little harder: protocol buffers are the original format for describing data, only data, and gRPC used them to also introduce services and protocols between services. Again, the definition of services is very tiny: you have a service keyword, you name the service, and then your methods start with the rpc keyword; they have some input and they return some output data, easy as that. And yeah, it's yet another format for describing your service, quite close to TypeScript. You might ask: do I need to also write it in TypeScript, why duplicate it? The point is that with generation you don't have to write it twice: you generate it from these protocol buffers. Like what we discussed in the beginning about generating code from Swagger, the same kind of thing happens here. We generate the code that is responsible for communicating between the services using this format, and in the case of TypeScript that also generates the types; we'll show the protocol buffers internals in a bit. Client and server code generation supports lots of languages. Node.js was supported from the start, because 2015 was already a good time for it, but actually you have more than ten languages. The official list doesn't have TypeScript, but you can imagine that the community supports plugins for that as well. On the platform side, there is even web support, so you can make a gRPC call to your local service right from the browser if that's what you need. It's a little bit limited, if I remember correctly, but it's pretty much possible. And it has a lot out of the box for authentication: you can use a certificate, for example, and you can also add something more than just a certificate if you want. And this is a figure I made myself, just to show again that there are code stubs generated on both sides, and these generated stubs are responsible for transferring data over the network.
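
To make the generated-stub idea concrete, here is a minimal sketch of calling such a service from Node.js with @grpc/grpc-js and @grpc/proto-loader, which load the .proto at runtime instead of pre-generating stubs; the proto file, service name, and address are assumptions tied to the hello-world sketch above:

```typescript
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// Load the .proto definition at runtime; protoc-based codegen is the alternative.
const packageDefinition = protoLoader.loadSync('hello.proto');
const helloPackage = grpc.loadPackageDefinition(packageDefinition) as any;

// The client stub exposes the rpc methods declared in the service definition.
const client = new helloPackage.hello.Greeter(
  'localhost:5000',
  grpc.credentials.createInsecure(),
);

client.sayHello({ greeting: 'hi' }, (err: Error | null, response: { reply: string }) => {
  if (err) throw err;
  console.log(response.reply);
});
```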

Now a little more about protocol buffers. This is another example of data types. Protocol buffers are closely related to gRPC: they also started at Google, a little earlier, in 2008, and were also open-sourced, pretty much the same way, and they are a much more broadly used technology nowadays. You have them supported, for example, in message brokers like Kafka and in other pub/sub systems for sure. So it has this typed protocol format, and I already tried to explain what's inside, but in protocol buffers themselves the focus is on data types. Protocol buffers are responsible for compiling your data into a small binary format based on your data type definitions. It's also code generation: all the code responsible for encoding to and decoding from this binary format is auto-generated for you, for the most popular languages; JavaScript is among them. And if you look here, the official list is not that big, but I have a second link, which is a list of third-party add-ons for protocol buffers, and that one is kind of huge. You can find multiple implementations for TypeScript there, for example. I think we use protoc-gen-ts in our example, but feel free to use any other one. So it's very much adopted by the whole community.

Yeah, protoc is the main instrument here: the tool that generates these transformations, the protocol buffer compiler. That's what you need to pre-install to use protocol buffers locally; a sample invocation follows below. That's all I have, maybe just a few final thoughts. Why use gRPC and protocol buffers? We have one guy on the call, I don't want to name him, but we had this conversation last Sunday, I think, and he made us raise this question: okay, but do we really need it? You can just send JSON between the services. Protocol buffers, of course, send binary data, which is smaller than JSON; in JSON you have repeated field names and so on. But then, of course, it's very common that you can archive your JSON with gzip or whatever.
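
The sample invocation mentioned above, assuming the protoc-gen-ts plugin installed from npm; output directories and the proto file name are illustrative, not the project's exact command:

```bash
# Generate JavaScript code plus TypeScript definitions from currency.proto.
protoc \
  --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  --js_out=import_style=commonjs,binary:./src/generated \
  --ts_out=./src/generated \
  currency.proto
```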

18. Binary Format and Protocol Description

Short description:

The binary format of gRPC is optimized for decoding and encoding, making it more efficient. The protocol is descriptive and allows connections from both client and server sides. It provides a nice and descriptive way to connect using any technology.

And it will be tiny, like a binary format. But the point is that zipping is work that needs to be done on both the client and the server side, right? You need to first zip it, and then you need to unzip it, so resources are spent on that step. With the binary format, this step of encoding and decoding also needs to be done, but it's highly optimized for that, and that's why it's treated as more efficient. And, yeah, please go ahead, Andrew. Yeah, I also want to add a bit about the protocol itself: it's quite nice and descriptive, pretty much like what we have with Swagger, but for any technology; you can connect from both sides, which is really useful, in my opinion.

19. Introduction to Currency Service and GRPC

Short description:

We have a package currency with a service and a method called FindAll. The currency message is transferred with a repeated flag representing an array. We moved a module from the monolith to the new repository and switched the controller to gRPC. The currency package is defined with client options and a gRPC method on the CurrencyService. We added logging to determine if the method is called while testing. The currency service gets all the currencies from the database, and there's a cron job for updating the database. We introduced a gRPC service as a transition step from monolith to microservice, maintaining the connection to the monolith. The new microservice still uses the same database. We ran a test that failed due to no connections established. In the monolith, we replaced the business logic with a gRPC client and called the method generated from the NestJS and protobuf definitions. We started the second service written with gRPC, and the test passed. This is the test from the monolith executed using the new gRPC service.

And, yeah, going back to our code. Thank you, Alex, for the useful intro to protobuf; let's have a look at what we have for our currency. Again, we really simplified the calls. In real life there would surely be more complexity behind it, but for now it's just a very simple version: combining multiple parts together, running it, and seeing whether it works or not.

So, basically, we have a package currency, we have a CurrencyService, and we have a method called FindAll. Currency is a message we transfer, and here's an interesting flag called repeated, which basically represents an array of some message. So we have currencies and we have page metadata; a very simple protobuf, sketched below. I think we had a few more complex things, and here is some code we already moved.
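
A rough sketch of the currency proto just described; the field names and the page-metadata shape are assumptions, not the project's exact file:

```protobuf
syntax = "proto3";

package currency;

service CurrencyService {
  rpc FindAll (FindAllRequest) returns (CurrencyList);
}

message FindAllRequest {}

message Currency {
  int32 id = 1;
  string code = 2;
  double rate = 3;
}

message PageMetadata {
  int32 total = 1;
  int32 page = 2;
}

message CurrencyList {
  repeated Currency currencies = 1; // repeated = an array of Currency messages
  PageMetadata meta = 2;
}
```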

So, first of all, this is a package generated from the Nest.js boilerplate for gRPC, I guess. And then we just moved a module. The module was taken from our monolith; let's see, where is it: modules/currency. We basically moved it into the new repository, a new package, and the difference is that there it was a controller registered for REST routes, while here we switched it to the gRPC kind of format. So we can check the currency package; let's go and try to find where these currency packages are defined. Yeah, it's defined in the place I showed before, which contains the client options.

So what do we have? We have the gRPC method for the service called CurrencyService, and it has FindAll. Basically, we have that FindAll method taken from this proto, and with NestJS all we need to do is implement it, as easy as that. Here we added some logging, just to determine whether it's being called while we are testing. Underneath, it just calls the currency service, which does pretty much the same as before: it gets all the currencies from the database, I guess. And here we also have a cron job that fills this database with some real data.

So let's try to run it all together, I guess. Do you have something to add, Aleksey, or can I just run it? No, I was thinking: we unfortunately don't have a picture of how it looks, but we introduced this gRPC service, and this gRPC service is already a new microservice, but we call it from the monolith. That was our first transition step from monolith to microservice: we moved it apart, but we still need this connection to sustain the state of the whole application. Yeah. And the thing is, it still uses the same database. There is plenty of work to be done to have separate databases, but for test purposes we just have one.

So let me run the test. It should fail, I guess, and then we will see why. Yeah, it failed with the message: unavailable, no connections established. So what did we do? As part of moving the module out of the monolith, we also did the following in the monolith's currency controller: we injected the gRPC client. Instead of doing the business logic here, we inject the currency service, which will be our gRPC client for calling our gRPC server, and then, as you can see, we just call the method that was auto-generated from the NestJS and protobuf definitions. Underneath, it just goes over the network, calls the gRPC server, gets the result, unpacks it, and returns pretty much the same thing. And in this place it just says: no connection established. That's right, because I didn't start the second service. Let me do this. So this one is already the one written with gRPC, where we moved our code from one Nest application to another.
So it started; it's not as huge as the monolith before, it has just two routes, and we also still have the REST definition here, but as you can see, it started on a different port, the standard gRPC port. Let's try this test one more time. Okay, it's been executed and it passed. So this is the test from the monolith, but it's been executed using that separate service. A sketch of both sides of this wiring follows below.
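
A minimal sketch of both sides of that wiring, assuming @nestjs/microservices and the hypothetical currency proto sketched earlier; the injection token, interfaces, and method bodies are illustrative:

```typescript
import { Controller, Inject, Injectable, OnModuleInit } from '@nestjs/common';
import { ClientGrpc, GrpcMethod } from '@nestjs/microservices';
import { Observable } from 'rxjs';

interface CurrencyList {
  currencies: { id: number; code: string; rate: number }[];
}

// Stand-in for the existing database-backed service moved from the monolith.
@Injectable()
export class CurrencyService {
  async getAll(): Promise<{ id: number; code: string; rate: number }[]> {
    return []; // the real implementation reads from the database
  }
}

// --- In the new microservice: implement the rpc declared in the proto. ---
@Controller()
export class CurrencyGrpcController {
  constructor(private readonly currencyService: CurrencyService) {}

  // Maps to `rpc FindAll` on `service CurrencyService` in the proto.
  @GrpcMethod('CurrencyService', 'FindAll')
  async findAll(): Promise<CurrencyList> {
    console.log('FindAll called over gRPC'); // the logging mentioned above
    return { currencies: await this.currencyService.getAll() };
  }
}

// --- In the monolith: replace the business logic with a gRPC client call. ---
interface CurrencyGrpcService {
  findAll(request: {}): Observable<CurrencyList>;
}

@Injectable()
export class CurrencyClient implements OnModuleInit {
  private grpcService: CurrencyGrpcService;

  // 'CURRENCY_PACKAGE' is a hypothetical token registered via ClientsModule.
  constructor(@Inject('CURRENCY_PACKAGE') private readonly client: ClientGrpc) {}

  onModuleInit() {
    this.grpcService = this.client.getService<CurrencyGrpcService>('CurrencyService');
  }

  findAll(): Observable<CurrencyList> {
    return this.grpcService.findAll({});
  }
}
```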

20. Decomposing Currency Service from Monolith

Short description:

We need to fully decompose the currency service from the monolith by detaching it from the database. This requires a significant amount of work, as there are cases where the currency service is heavily used in the repository with batching queries. It would have been easier to choose a less coupled service like notifications. The main obstacle we faced is that the database is still present in the monolith, and we need to remove the data and replace it with references to the separate service. There are also questions in the chat about the main obstacles and how to decouple dependencies. We need to introduce tests to ensure that commenting out the dependencies does not break the functionality. Splitting the database is a challenge that requires careful consideration and planning.

And if we go here, we also see this logging appeared here. That's what was returned from the gRPC server, and basically all this job was done on that side; loading the rates was executed on that side too. Okay. Yeah, this is pretty much the end of what we have for now. There are plenty of things we could explore more. For example, moving the database to a separate instance; that's the main thing. And to do so, we also need to consider that in the main monolith we still have other modules which use currency. That's going to be the most painful part, I guess: instead of using the whole connection to the currency entity, the monolith should just keep a reference to it and then pull the data from the separate service at some point in time, right? Alex, what are your thoughts? Yeah, go ahead. I'm finished. Okay. I missed the part where you demonstrated that it all works. It does? Yeah, it's too close here. Okay, sorry. All right. So at this point we have a service running on gRPC next to our monolith, and the endpoint is calling the microservice through the gRPC protocol. So what's the next step? What do we still miss? What are the issues that we experienced, if we summarize it somehow, Andrew? Yeah, I think the next step would be to fully decompose this service from the monolith; by fully, I mean to detach it from the database. In the ideal picture, we don't have this data in the monolith database anymore; we just have the reference to it and we are able to pull the data. But in this particular case, that was a really huge amount of work. For example, there were cases where currency was used in the repository, in SQL queries with a lot of batching in them, so more refactoring is needed to split this into smaller parts and then decouple. Yeah. Can we say it was a mistake to choose the currencies microservice? What do you think? I think in reality it's much harder compared to notifications, for example. If I were choosing from the beginning, knowing everything I know right now, I would choose notifications instead, because it's much less coupled to other parts; the only connection is user, and no other part is coupled in the sense of SQL queries or repositories or something like this. Yeah, so basically with currencies the problem was that it is indeed used a lot inside the monolith. Right. There are also some questions in the chat, like what were the main obstacles we had. Indeed, we discussed it, and Andrew just said that the database is still there. Also, we had a nice comment on the last call regarding these dependencies: we don't show how we decouple them from the code. In the approach Andrew showed, we could comment them out. We don't know if that was a 100% correct decision or not, because we don't have full test coverage on the project; well, some test coverage. So it would be really nice to introduce tests first and see that nothing fails after that. So yeah, dependencies using this service inside the monolith: that's the main pain point, I believe, with this path. Any tips to split the database? That's a good point.
Let me go back to the presentation, to the microservices.io webpage. Sorry, it's already the second hour.
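
To illustrate the decoupling just described, here is a sketch under assumptions (hypothetical TypeORM entities, not the project's actual code): the monolith stops holding a relation to the currency entity and keeps only an identifier, resolving the full data through the gRPC client when needed.

```typescript
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

// Before: the monolith's Bill held a direct relation to Currency,
// which forces the currency table to live in the same database:
//
//   @ManyToOne(() => Currency)
//   currency: Currency;

// After: only a reference remains; the data itself lives in the microservice.
@Entity()
export class Bill {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  amount: number;

  // A plain identifier instead of a foreign-key relation.
  @Column()
  currencyCode: string;
}

// Wherever the full currency is needed, it is pulled over gRPC, e.g. with a
// hypothetical findOne rpc:
//   const currency = await firstValueFrom(
//     currencyClient.findOne({ code: bill.currencyCode }),
//   );
```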

21. Decomposing the Database and Introducing gRPC

Short description:

We discussed the idea of refactoring the database, specifically moving only the part used in the microservice to a new database and removing it from the old database. Introducing gRPC to the monolith was not too difficult, and the NestJS documentation provides a helpful recipe for working with gRPC endpoints. It's important to start by introducing new services and gradually moving parts of the monolith. This approach allows for testing and ensures that everything is working as intended. However, it's important to note that microservices come with their own challenges and complexities. Maintaining the infrastructure and debugging issues can be more difficult in a microservices architecture. Overall, the process of decomposing a monolith into microservices requires careful planning and testing. Lastly, we briefly touched on the benefits of using gRPC over REST and how to convince managers of its advantages.

Yeah, basically... You can share the screen. Yeah, I can, I'm still trying to find the article, but I should switch to my screen for sure. One of those articles actually has good thoughts on decomposing the database. I don't have the tab open, unfortunately, but one of these patterns is exactly about that, and I think it shows a nice way. It's worth reading all of them; I enjoyed them so much. But I think the best idea for the database is that you refactor it in a way that only the part used by the microservice gets moved to a new database, and it gets removed from the old database. That's a very crucial point from these guys: you actually want to decay the monolith, right? Otherwise the work is not done. Yeah, I'll share the link, of course. Again, thanks a lot for the comments and for your attention. It's really nice. I missed this one: Bruno asked for boilerplates to start with. My question back is: to start with what exactly, maybe more concretely, with NestJS or... In the meantime, another question: after creating the currency service, how do you start to decouple your currency entity in the monolith and replace it with the values coming over gRPC? Andrew? Yeah, basically we started this part, but it's still ongoing. The idea was just to have the reference in the monolith database: the reference to the currency could be a currency code or something. But the main hard point about it is that it also has SQL queries connected, and it was not too easy to work out how to decouple an aggregated query into smaller parts. Yeah, exactly. Actually, it was not that hard to introduce the gRPC service next to the monolith. The key is to go through the NestJS documentation: they have a really nice recipe for gRPC endpoints. The first try might not be that easy, but I assure you the second one is already much, much easier. And I just posted a link: on x-technologies.dev, if you type gRPC, you can also find how to begin not with NestJS but with something like Express, or even do the gRPC example yourself with plain Node.js. It's not a long path, but if you want to go from a low level with gRPC, I recommend going through this one; it has some nice examples as well. All right, I have one more link. Mikolay asked me before the beginning of this session: please fill out the feedback survey; we're really curious to see how it was. I think we are almost finished, so maybe one more time, Andrew: what's your key takeaway for this monolith-to-microservices journey? What would be your advice now, words of wisdom? It is for sure a lot of pain, in any application. When I used to do this in our old code base, we started pretty much as Alex showed before: we started by introducing new services, and by the moment we had introduced them, we had some kind of tested structure: how to deliver them, how to test them, how to check that everything is good. And then at some point we also started to move parts of the monolith.
I mean, some parts, for example authorization and user authentication, could just be moved out as a separate module, and we did that, and then we started to move piece by piece. So I think that's the best approach: introduce a new service somewhere when you have a task to implement some logic, and test everything around it to make sure it does what you want. Microservices are cool until you also get the pain that comes with them: you need to maintain all this infrastructure, it's much harder to understand what's going on in that infrastructure, and it's much harder to debug things. But it's possible; I'm not saying all of you should just use monoliths instead. A very optimistic end for a workshop on how to decompose monoliths into microservices. No, but you can feel that we did this path with Andrew, and we spent quite a lot of time trying to summarize it for ourselves as well. Yeah. Another point: gRPC is nice, and we showed how to use it, but I also got this question at the conference on Friday: someone asked me how to convince your managers that you need gRPC instead of REST, for example. Yeah.

22. Conclusion and Future Plans

Short description:

Thanks for your feedback. It's important to have good reasons before starting to decompose or refactor a system. Metrics like time to production or performance are crucial for measuring progress. Our future plans involve exploring other applications and technologies to gain more experience. Thank you for joining the discussion and I encourage you to provide feedback. Looking forward to seeing you again soon.

Thanks, guys. Thanks a lot for your feedback. And basically, you will know when you reach the limitations of the current system. It's not always a good idea to just start decomposing or refactoring everything; usually you should have some good reasons to do so. And also, what was that word of wisdom again: begin with the end in mind, or something like that.

Yeah, begin with the end in mind. If you know what you want to achieve, the best approach is to already know the metrics that will show you that you have achieved something. And the metric is not the number of microservices; the metric is time to production, for example. Time to production is a very good one, because in a monolith you would need lots of steps to get a change to production, while for a microservice it's basically whenever you want, in the ideal situation. It can also be the performance of this particular part. So this is an important step: if you also prepare metrics that will guide you, that's the best way to proceed.

Yeah, and maybe to add to that: for our future plans, we want to continue exploring this process, and my vision is that we should try other applications and other technologies to see what else we can add to this; basically, decompose another application to gain more experience with it. All right, I think that wraps up our discussion today. I can only repeat myself: it was really awesome to have you guys on the call. I hope it was useful; I encourage you to fill out the feedback form. Thanks to the Node Congress organizers; it was an awesome conference in the end. And yeah, I hope to see you even sooner than next year.
