The workshop gives an overview of key architecture principles, design patterns, and technologies used to build microservices on the Node.js stack. It covers the theory of the gRPC framework and the protocol buffers mechanism, as well as techniques and specifics of building isolated services using the monorepo approach with Lerna, Yarn workspaces, and TypeScript. The workshop includes a live practical assignment to create a currency converter application that follows microservices paradigms. The "Microservices in Node.js with gRPC" workshop best fits developers who want to learn and practice the gRPC microservices pattern on the Node.js platform.
How to Convert Crypto Currencies with Microservices in Node.js and GRPC
Transcription
Let's begin. So today, we are very pleased to welcome you to this workshop. It's mainly about gRPC and about a Node.js solution built on gRPC. I hope you have read the description, so I will not spend much time on that particular part. Maybe the general structure would make sense, though. So we are going to start. Let me show you the agenda for today. Well, our workshop actually consists of three parts today. When we prepared this workshop, we thought it would be great to give our participants some theoretical knowledge about the gRPC framework: what this technology is under the hood, what parts it consists of. That, for example, includes protocol buffers. So that's what the first part is going to be about. That's going to be some introduction, some theory, some demos that we prepared, showing really basic stuff about gRPC, about Node.js, and about protobufs. And then the second part is a more practical part. Andrew, can you please say a few words about that? Yeah, so in the practical part, we actually take a deeper dive into a practical example. Today we're going to implement a currency converter. And we will also try to combine different providers into one big thing. So this big thing will be built from different microservices. And all these microservices will be communicating with each other through the gRPC protocol using protobufs. So it's going to be fun. Yeah, of course it will be fun. And the final part of the workshop we call practice. It's quite experimental for us, but I think we're going to do it that way this time. For this particular workshop, we prepared a list of issues in GitHub. We're going to show you what these issues are. The point of this practice session is that you can experiment with a repository that we prepared and touch gRPC itself. And if you really want to touch it technically, you can clone it and do some exercises.
So they are on different levels, let's say a beginner and a more advanced level, and you will be able to practice some interesting parts, to create a service, for example. But let's not discuss it right now; we'll get to that later. I think we will not record that part, by the way. So I hope that will be possible. But that is, in general, our agenda today. And I want to start with an introduction and say a few words about ourselves. Maybe it makes sense to start with Andrew. So Andrew, again, please. Hi, everybody. I'm a software and infrastructure engineer at a startup called Emma. Currently I'm based in the United Kingdom. I really enjoy doing what I do. I jumped into this whole microservices theme, and also into DevOps topics, quite recently and started to explore this area. And that is basically the moment when Alex and I decided to prepare this workshop, to try to combine all this knowledge into some kind of material and present it. So I'm really happy to be here with you guys. So let's move on. Alex, your turn. Yeah. Thanks. Thanks for that. Yeah, that's me. I'm also a software engineer. I'm located close to Andrew, but still in another country, the Netherlands. And I'm working at a European bank called ING Bank. In my past I did different stuff. I actually started from the frontend stack. The frontend stack developed quite quickly over the last decade, and we saw different technologies like React and Angular growing. It was a really interesting journey. However, in the summer of 2016, I decided to switch my focus a little to DevOps and also to the backend stack. So I spent a bit of time in the Node.js world. We did quite an interesting project for the bank to make one pipeline for all the frontend developers. That included Node.js applications, for example, a CLI application that was available to all developers in our bank.
And then in my latest project, which I'm currently participating in, we are doing something interesting inspired by one of the systems in the Kubernetes world. This involves a more traditional stack like Java and the JVM. And we also have a nice, interesting client that we support in the Go language. To be honest, I have enjoyed that technology a bit lately. Still, I think my strength is Node.js. And that's actually how Andrew and I found common ground with Node.js and decided to make a workshop on this particular technology. So before we move on, Andrew, maybe one more question to you. Why did you decide to make this workshop? Was it because of your current work, or was it absolutely unrelated? Yeah, actually all of this stuff is related to my current work. At work we started to develop microservice infrastructure based on pretty much the same technologies. We started building a stock trading API, which allows trading different stocks in the United Kingdom. And this actually involves lots of different services and lots of communication between them. I mean, here we just show very simple communication between services, like direct communication, one service just calling another. But in real-world cases, there could be some queue in the middle of these services, so that they really communicate in a fully asynchronous way. But that will probably be another workshop. All right. Yeah, thanks. Thanks for that. Yeah, my inspiration for this workshop was, first of all, that working with Andrew on it was quite a nice commitment from both of us. And besides that, I am really interested in learning new technologies. Since we decided to explore the Node.js part of the gRPC stack, it was particularly useful for me, because I hadn't touched gRPC before. And getting to know this stack a little bit, and touching it in Node.js, was quite interesting to me. Yeah, so that's our introduction in short.
And maybe before we really start, I would like a little bit of interaction with you guys. Can you put a number from 1 to 10 in the chat: how do you rate your knowledge of today's topic, gRPC plus Node.js plus TypeScript? Because we're going to use these technologies today. Put 0 or 1 if you have never heard of them, and 10 if you are a super extra expert. How would you rate yourself? That would be awesome. Cool. Cool. Thanks, Jakub. That's actually quite nice. So we're going to start from the basics today, which I think is very reasonable. I see some nice feedback on that. Yeah, I think we are in good company today. So if you have any questions during the workshop, feel free to ask them, feel free to put them in the chat, whatever you prefer. I think we traditionally prefer posts in the chat, and we look into the chat from time to time, but feel free to use any channel you prefer. Some prerequisites; I don't know if you have already read them. For the last, practical part, you will probably need to have Node.js and the protocol buffer compiler. You can find links on this page that we prepared; we'll post it now in the chat. Thanks for the feedback again. And yeah, for the IDE, of course, feel free to use any solution. VS Code is, I think, our choice today, but you can use WebStorm or whatever fits you best. We are almost ready to begin; just to align one more time on the technologies we're going to talk about today, we made a list of tags here. We're going to talk about microservices and gRPC based on Node.js and JavaScript, but also the TypeScript environment. And gRPC heavily relies on Protobuf within the framework. For a practical example, today we're also going to use a monorepo solution, and it's going to be based on Lerna.
We actually use it with Yarn, but if you're interested and curious to use it with npm, that's, of course, also possible. Yeah, those are the tags for today. Let's see where we actually start. So we talked about technologies. What we're going to do today, we pretty much said. So what is gRPC? This is where we begin the first part. Well, I hope you have heard this title before: gRPC stands for gRPC Remote Procedure Calls. The fun part is they have a really nice mascot. I think it's called PanCakes, if I'm not mistaken; I might be wrong about that. So a very, very nice branding sign of theirs. Yeah, I like to begin an introduction to theory with the official definition. So let's spend a few seconds on what gRPC is. I hope you were able to read it. For me, this definition is quite expressive by itself, but we need to dive into some parts of it. I have highlighted some of the words here: modern, open source, remote procedure calls. Well, remote procedure calls are something we're going to start with in a few minutes. Besides that, it is a modern solution. That reminds us that this is still quite a new framework: it began, in fact, in March 2015 as an official open source project. And it can run anywhere. As the definition says, it enables client and server applications. I underlined the words client and server here because it's all about communication between applications or services. If we're talking about microservices, then client and server are very important words that we're going to use today. And to go a little further, client and server are sometimes a little bit confusing, because both the client and the server are just microservices. So you can think of it that way. So what's the history behind gRPC? As we said, it's a remote procedure call framework, with G as the first letter in gRPC.
Sometimes they joke that the G is about Google, but officially it's not; it's just gRPC Remote Procedure Calls. But indeed, it all started at Google. And this date, March 2015, is when they officially released it as open source to developers on GitHub. Now it's actually heavily supported by the community, but it all started inside Google, like many other technologies that come from Google. Let's name a few of those. I already mentioned Kubernetes today, and also the Go language, for example. Yeah, there are thousands of them, I'm pretty sure; very nice technologies that are nowadays used a lot by communities and supported by Google and by other big companies. But to name a couple more related to today's workshop: SPDY and, later, QUIC. Those are two transport protocols on top of the transport layer of the TCP stack. They are both kind of revolutionary, because SPDY was introduced quite a while ago, and it was the beginning of the HTTP/2 protocol. Now QUIC is going to be something like that, but for HTTP/3. Why is this important for us today? Well, partly to show that Google works on interesting projects that become standards for the whole industry. But also because these are transfer protocols for messages across the network: the protocol used nowadays in gRPC actually runs on top of HTTP/2, and originally it was SPDY that was included in the gRPC framework. We can expect that when HTTP/3 is stable enough, and the QUIC transport in HTTP/3 is officially supported by many platforms, gRPC will also switch to that modern technology, I'm pretty sure. And then maybe there is one question here that you may ask yourself; I ask it myself. Why would a company like Google move a project that was developed and started inside the company out to the whole world? There are many good reasons to do that.
To name one, it's actually quite simple. If you're working on some internal project, no one knows it. So how are you going to hire new developers for it? They will spend a tremendous amount of time learning about it. Compare that to the alternative: you open up something that your company has successfully used for maybe a decade, and it becomes popular in the outside world. People start learning it, and this knowledge can simply be picked up from others. Then it becomes very easy to find developers for this technology inside your company. The second good reason, I think, is also very important: by providing it to the community, you get much more feedback on the decisions that you made, on the technologies that you used, on any solutions that were started in this technology. They can, of course, be improved, because in our world everything in technology can be improved. It becomes really transparent to all the engineers who are using it, and eventually it also becomes very, very stable. Yeah, I think there are many pros to this concept of bringing something out into the world and making it a standard for the community. And that's what happened with gRPC. Google originally started with a project called Stubby. They started thinking about microservices because, as you can imagine, Google is a huge company with lots of teams; I don't know how many, hundreds or maybe thousands of teams. And you don't know the purpose of each team, right? You cannot make every team in your company use, for example, one programming language, because programming languages are good for the goals of the product they're used for. Therefore, you have some projects working on the Node.js stack, others working on Go, others working on the Java stack; maybe most teams working on the Java stack.
And to combine them all together, this pattern of microservices became popular. So you no longer build one monolith project. You build smaller parts of the application that can communicate with each other more simply. And to make it very stable, you still need to build infrastructure on top of that, because microservices need to solve different problems that didn't exist in a monolith, like communication: where you need to send a call, how to make sure that your receiver is actually the one it was intended for. So yeah, there are many, many challenges. And I think the industry in the past decade made the choice to go further into the microservices world. That's why the gRPC framework was one of the things eagerly embraced by the community, and it is really a nice framework for everyone who's writing their own microservices. It has now actually become a part of the Cloud Native Computing Foundation, and maybe you can even get certified for it; I didn't look into that yet. Yeah, and before we move on with more technical terms, let's have a look at the official site that we're going to use a lot today, grpc.io. This is the official site that describes the whole concept and has some very nice articles from the beginning, actually from 2015: how it all started, what the concepts were. This page, for example, shows the motivation and the design principles in more detail than we are discussing today. So you can dive into that. Besides this page, you can also look into the documentation for how to start on each platform. We can already see that it supports a lot of platforms, and it has really nice guides on how to get started. Let's have a look. Where is it, for example? Yeah, here: platforms, languages. For Node.js, it has really nice tutorials. If you open this one, it will show you how to start quickly with an application. It might be a little bit outdated, but it is still quite relevant.
You will make it work if you go through the whole guide. And the most important thing is that you see there are already many choices for the languages you want to use. Yeah, so that's another important resource that I encourage you to put somewhere in your notes, maybe. We're going to use it today, and it's also the very first step for other information on gRPC. So let's begin with the easy part, with the easy theory. What is RPC? It's remote procedure calls. If you look at Wikipedia, it will say that it's a distributed computing technique. The idea of this distributed computing technique is that one application can make a call to some remote server, and that remote server understands which procedure it needs to execute. It executes this procedure, or function, whatever you call it, and then sends a response back to the client, and the client continues to run. As easy as that, but under the hood there are more problems you need to solve. Actually, one part that is missing in this explanation is that the client program, the first actor here, needs to do this in a very simple programmatic way. Literally, in your code, you just say: hey, remote server, do procedure one, and it should respond back to you. In different use cases, you may want a synchronous or an asynchronous response, but in your code it should be very, very simple. And that's the idea of RPC. So what does it do for you under the hood, besides these simple steps? It needs some logic that allows you to do that. You need to have a client library; it's also sometimes called a client stub, and it will kind of mock this call to the remote server. Inside this mocked call, the client stub will need to marshal, or serialize, the message that you want to send to the remote server. It will also make a connection to the remote server.
And on the remote server side, you will need to do the opposite operation, right? You will need to deserialize it, and you will want to check what the message is, whether it's an allowed message. So it will unmarshal the message. And after that, it will be able to execute the function that needs to be executed. If you go further with this investigation, you will find more and more interesting questions, like: how does a client discover the remote service? Is it done over HTTP? Maybe, maybe yes. How do you expose a remote service: on which IP address is it running, and how can I connect to it? How is it serialized for the network? Is it like JSON.stringify, if we're talking about the JavaScript world, or maybe something else, some other kind of string? How does communication happen; again, about the protocols. And yeah, authentication is the last question here. We don't cover it much today, but it's one of the important ones, because you need to understand: are the caller and the callee the right ones, and can I trust this caller and give them back a response, or should I deny the request? Speaking about the second part, client and server communication: again, remember that the client and server are just microservices in our world of microservices. So they can communicate over different protocols, and for the web, because the whole web is based on open standards, you can use plenty of them. You can use HTTP, HTTP/2, and soon HTTP/3, which is a nice new protocol on top of UDP, actually, not on TCP. But you can also choose something more like WebSockets, which run on top of TCP as well. So there are plenty of choices here. And in fact, to make these calls, no matter which protocol you choose at that level, you will still need to have the client library, or just a library, to not confuse ourselves even more.
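To make the marshal, dispatch, and unmarshal round trip described above concrete, here is a toy sketch in plain TypeScript. It is not gRPC: the names (callRemote, handlers, transport) are made up for illustration, JSON stands in for protocol buffers, and a function call stands in for the network. A real framework generates this stub code for you.

```typescript
// A toy illustration of what an RPC client stub and server skeleton do
// under the hood. All names here are invented for the sketch; a real
// framework like gRPC generates equivalent code from a .proto file.

type Payload = Record<string, unknown>;

// "Server" side: a registry of procedures the remote end can execute.
const handlers: Record<string, (req: Payload) => Payload> = {
  sayHello: (req) => ({ reply: `Hello, ${req.greeting}!` }),
};

// Stands in for the network: in reality bytes travel over TCP/HTTP/2.
function transport(wireData: string): string {
  const { method, body } = JSON.parse(wireData); // server unmarshals
  const result = handlers[method](body);         // executes the procedure
  return JSON.stringify(result);                 // marshals the response
}

// "Client" stub: marshals the request, sends it, unmarshals the reply,
// so application code can treat the remote call like a local function.
function callRemote(method: string, body: Payload): Payload {
  const wireData = JSON.stringify({ method, body }); // client marshals
  return JSON.parse(transport(wireData));            // unmarshals response
}

const response = callRemote("sayHello", { greeting: "workshop" });
console.log(response.reply); // -> "Hello, workshop!"
```

The point of the sketch is only the shape of the call: from the caller's perspective, `callRemote` looks like an ordinary function, and all the serialization and transport details are hidden inside the stub.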
We will need a library both in our client application and on our remote server that is able to understand this protocol. Because in the client application, after you stringify your message, after you serialize it, you will need to send it. In a web browser, for example, it's quite easy: you do fetch, because it's already included in the web browser. Previously we had the XHR object for that, but now we have this fetch API, which is really nice. But on your server, it's a little bit more complicated, because, well, you have it from scratch, but you need to maintain it, right? For example, with Node.js, we still stick to some Node.js version, and it might be that an outdated server is using Node version 4. Does anyone know the most recent version of Node.js? Maybe you can guess in the chat. Thanks, Jakub, indeed. Yeah, that's right. In fact, it's version 17.6, if I'm not mistaken; I was following the two latest minor updates. The 17.x line is an experimental line, as you know. A little side note, but it includes two nice changes: 17.5 includes fetch, so you have browser-like behavior for making requests in Node.js, which is very, very nice. And I think the most recent one has experimental support for importing from remote modules, which is also very cool, right? But yeah, that's really a side topic here. So we said that at the low level you can choose any of those low-level network protocols, and on both sides of the application you will need to support them. You will need to update your environment, you will need to be sure that your application uses this layer, and this layer has to be supported by both applications. So it's some lifecycle work, let's say, and that takes your resources.
And that's why people started thinking of other, different protocols. As alternatives to gRPC we can name SOAP, which was quite popular in the Java world in the 90s and 2000s, and REST, which appeared in the 2000s as well. You can actually argue whether that is a remote procedure call or not; I think most authors will say that it's not really RPC, but more philosophically, even REST, I think, is a remote procedure call in the end. And GraphQL, a recent technology that also appeared, I think, in 2015. The point is, no matter which protocol you use, low-level or high-level, you still need a library that supports it. And that's why gRPC takes this into its feature set and allows users of gRPC not to think about this protocol. So what does gRPC support and what does it enable for a user? This client and server communication. But let me actually start from the first point here: service definition. This is the so-called protocol buffers; we're going to explore them in a few minutes. With these service definitions, you define what your API will look like, and it also enforces your API, and the messages that you send in a request, to be strongly typed. So it already enables the check for the type of message that you send. It does this through protocol buffers. And client and server communication, the part I was talking about in the previous section, is what gRPC enables out of the box. The gRPC framework allows you to generate code in multiple languages: 10-plus officially supported languages, and actually quite a few more. If you look at this picture, you will see that our clients here have a gRPC stub, and there is also a stub on the other microservices, the C++ servers here. This is the stub that handles communication with other remote servers, with just other services. And that's why it's generated.
And this comes out of the box. You don't have to worry about this library part here. So that comes for free if you choose gRPC. Next, I already said that it's supported by multiple languages, and also across different environments and platforms. Again, back on grpc.io, you'll find that it's supported in Flutter, on mobile with Android, or on the web. That's very interesting already. So in general, you can include some library on your page and then just call the API method from a remote server. In the end, it will do the HTTP call and perform a similar action as with your microservices, but now from a browser. Quite cool to me. Yeah. And then, what else did I want to say here? Of course, gRPC is a framework and its source code is available. This is the core source code repository. You can find the main supported language implementations here; you see around 10 implementations available. In fact, I think the most important, most supported ones from the beginning were about seven. And the point I wanted to underline here is that for Node itself, the implementation was actually moved to a separate repository. That's why the versioning differs: the official website says version 1.43 is now available, but in some cases, for example in Node.js, the version is a little bit different. So you have to look for yourself and not trust this version, because it depends on the environment you are using. The next point here: I wanted to talk about extension points. We said at the beginning that your gRPC communication will be on top of HTTP/2. But at some point, when HTTP/3 is stable enough, I'm pretty sure the whole library will move, maybe with version two or something like that, to HTTP/3. And this is possible through the extension points.
The extension points are basically some pieces of code in this implementation source code that allow you to add hooks, to write or generate code (because it's mainly about generating code) that provides your own implementation of gRPC framework features. So the transport protocol that is used, and maybe even protocol buffers, could be replaced. I'm not entirely sure, but I think it would be possible. There are many other points: if we're still talking about the transport protocol and communication, we can enforce, for example, message ordering. It's already enforced by the core version, but we could do something extra, for example switch to WebSockets and do something similar. And then the communication itself: the generated stub library allows you to communicate from a client to a server. As we said, it's done over HTTP/2. It also does serialization for you out of the box, in a similar way to how the service is defined, with the protocol buffers format. Message ordering, I already mentioned. Streams are quite natural to Node.js, and the Node.js implementation of gRPC supports streams out of the box. That's very nice. I don't know how familiar you are with streaming, but in short, it's the transport of small chunks of data between applications. That allows you not to take a huge part of the resources of the operating system where your application is running. It just sends one piece of data and forgets about it, sends the next piece of data and forgets about that one. In this way, the application stays really small, doesn't hog resources, and runs very quickly. You don't have the kind of trouble you would have with one whole HTTP request that, for example, fails somewhere, so that it takes a lot of time until the remaining packets arrive to finish the resource.
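The chunk-by-chunk idea above can be sketched with Node's built-in stream module; no gRPC is involved here, but gRPC streaming in Node.js exposes a similar chunked, event-driven interface on top of HTTP/2. The function and variable names are invented for the sketch.

```typescript
import { Readable } from "stream";

// Consume a stream one small chunk at a time, handling each piece and
// letting it go, instead of buffering the entire payload in memory.
async function collectChunks(source: Readable): Promise<string[]> {
  const received: string[] = [];
  for await (const chunk of source) {
    received.push(`rate:${chunk}`); // process one small piece, then move on
  }
  return received;
}

// Simulate a server that streams three small messages to us.
const stream = Readable.from(["EUR", "USD", "GBP"]);
collectChunks(stream).then((r) => console.log(r.join(",")));
// -> rate:EUR,rate:USD,rate:GBP
```

Here each currency code arrives as a separate chunk and is processed independently, which is exactly the property that keeps a streaming service's memory footprint small.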
Synchronous or asynchronous: that depends, again, on the technology you use, but in short, it is up to the user of gRPC to choose which way to communicate with other microservices. Either you make a synchronous call, wait until the response is done, and then continue running your application; in Node.js, though, that's not the most natural way. I think in most cases you would send a request and, let's say, subscribe to the event when the response is given. And authentication, which I already mentioned: we will not spend too much time on it, but I think we will see some parts of it in the demo. In short, gRPC out of the box allows certificates to be involved in authentication, and token-based authentication is also enabled out of the box. Okay, that was a lot of talking. I think we should finally have a look at an example. What we see here is a service definition in the protocol buffers format. This is a small file, and that's where every development of gRPC microservices begins. You start with these small files; you define a protocol for the service that you are writing, or for a service that is going to be involved in your overall application design. So here we define the package hello, and we define a service HelloService with four methods. There is a plain Hello method, which is a direct message-to-message call, and then the other three show what stream definitions look like. Let's look at the last one: it supports streaming both ways, from server to client and from client to server, all in one. We just say that I'm going to have a stream of HelloRequest messages, and the service will also return a stream of HelloResponse messages.
So this is the service definition, but you see there are some data types here, and in protocol buffers this is, I think, even more important than the services themselves: the type definitions. In protocol buffers they are really, really simple. You just say message, and this is your data type. It's a complex data type, as we can see: it has one property, greeting, inside, and the response has another property, reply. Yeah, that's the gRPC part. Let's summarize it one more time. gRPC is a framework. It's mostly about generated code, and this generated code is called the generated stub, so just stub code. This stub code is used both on your client microservice and your server microservices, and it does quite a few things for you. It serializes the message; it chooses and specifies internally which network protocol to use to communicate with the remote service; it sends the request; it verifies that the request data and the response data are correct. It deserializes, checks, and again validates that the message type is correct. And then your server implementation can do the rest. Okay, hopefully that makes sense. Now let's talk about the second part. I think it's going to be a little bit simpler than the first one. I want to say a few words about protocol buffers. We already talked about gRPC itself, and protocol buffers might be the most important part of it. Protocol buffers were originally an efficient technology to serialize data, and, as in the previous example, here you just specify a message type and then you have a new data type. Here we have a Person data type, and we specify the properties name, id, and has_ponycopter. Does anyone know what the numbers on the right mean here? Maybe a rating for the property? Magic numbers? Yeah, exactly. That's something called a tag, and we can take it as an ID, maybe. There is also some order, so it should be unique.
This is a unique number, and you could also say that it somehow defines the order of this field in a message. But that's not official; it actually depends on the implementation, and I think this is an abstraction that doesn't say anything about ordering. However, if you look at the raw data of a protocol buffer, you will see that the fields are indeed laid out in this order. So we can take it as an ID; this ID should be unique, and that is very important when you evolve your message data type. All right, a little bit of history on protocol buffers now. They were also created at Google. It's a much more mature project than gRPC itself: it started in 2008 as open source. Maybe it was also started inside Google around that time, I'm not sure. But it's heavily used by the community in absolutely different technologies and different stacks. You can name Kafka, for example; it can use protocol buffers as a serialization mechanism. So what does it allow you? As we said, protocol buffers are about a data format and an efficient technology to serialize data. So it's mainly about describing your data formats, or protocols, inside the application. What do you need for that? You need a .proto file, and in this .proto file you specify the data types that you need for your application. Those data types are strongly typed, so protocol buffers can enforce a data type check in your application before dealing with the data. Like gRPC, it also does code generation; this is done via a tool called the protocol buffer compiler. I hope, if you wanted to touch some of the code today, you have already installed it; if not, have a look at the beginning of the page, where you have a link to do that. Like gRPC, it also has really nice tutorials for the different technologies it is used with. It's a similar list of officially supported technologies and environments.
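Coming back to the tag numbers for a moment: on the wire, each field is prefixed with a key derived from its tag, and integers are packed as varints. The sketch below is not a protobuf library, just a hand-rolled illustration of that encoding for a single varint field, matching the classic `int32` example from the protobuf encoding documentation.

```typescript
// Varints store 7 bits of the number per byte; the high bit of each byte
// marks "more bytes follow".
function encodeVarint(n: number): number[] {
  const bytes: number[] = [];
  do {
    let byte = n & 0x7f;
    n >>>= 7;
    if (n > 0) byte |= 0x80; // continuation bit
    bytes.push(byte);
  } while (n > 0);
  return bytes;
}

// Encode a single int field, e.g. `int32 id = 2;` with value 150.
// The key byte is (field_number << 3) | wire_type; wire type 0 = varint.
function encodeIntField(fieldNumber: number, value: number): number[] {
  const key = (fieldNumber << 3) | 0;
  return [key, ...encodeVarint(value)];
}

console.log(
  encodeIntField(2, 150)
    .map((b) => b.toString(16).padStart(2, "0"))
    .join(" ")
);
// -> "10 96 01"
```

This also shows why the tag has to stay stable as a message evolves: the field name never appears on the wire, only the tag number does, so renaming a field is safe but renumbering it breaks every existing reader.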
But actually, because it started a little bit earlier, it has quite a few implementations beyond the official stack. So for example, today we're going to use typescript, and we can see that even for typescript we have two implementations just in this list. And I think today we're going to use protoc-gen-ts. This is quite a good package to use in your application. Yeah, so protocol buffers allow you to define data types. They also allow you to specify service definitions. That's the part we already saw, and that's where gRPC actually comes into play. But originally, I think, that was not the intention of protocol buffers. They added it incrementally, and now it can be nicely used there. Now a little bit more about that. So this is the general definition of a property inside your message type, your data type. You see, you can specify a rule, the type of your field, a name for it, and a tag. The tag we already discussed: kind of a unique ID for this property. The name is just a name, with almost no restrictions, maybe besides some special characters. The type is the scalar type of this property; protocol buffers have quite a few scalar types supported out of the box, around fifteen. You can imagine there is a string, there is an int, a float. Yeah, and some other types: integers of different widths, 32-bit, 64-bit, also signed and unsigned variants. And a rule: I think there is not that much supported there. It has the repeated rule. Repeated is a synonym for array. So when you say repeated int32, it's basically an array of int numbers. That's the synonym here. And yeah, that's quite simple. And if that sounds too simple, let's have a look at a few more things that I put here. So we have a message, and the message supports scalar types. We also have support for enums. I don't know if we have an enum example today; maybe Andrew will show something. Repeated, as we said. Out of the box, we don't have a void type.
So we cannot say undefined, for example, unfortunately. But in the standard library we have this reference to a kind of null type, a void type: google.protobuf.Empty. So we can import it first and then reuse it. But I think in most cases you can just specify a message Void with a capital V, and by not specifying anything inside, that will allow you undefined values in the javascript. It also allows you to specify packages, because the package is quite an important pattern in programming overall. Interestingly, we didn't have it in javascript until 2015. And services: again, they were not there first in protobuf itself, but now, to enable gRPC, you can also specify some pieces of your api by using this keyword service. And then with the rpc keyword, you specify a method. But in the end, if you look at this definition, most of your microservice definitions will not be much more complicated than this piece of code. It's really simple. So you specify data types, you specify them with a message keyword. For a complex type, you put a name on it, you specify fields with unique tags for them, you specify the properties that you need. And if you need to, and you want to define the api in terms of gRPC, you also specify a service, and there are two more keywords used here, service and rpc. And you specify the types that were defined, maybe enums, to make it really nice for the application. And yeah, there's not much more than that. To finish this part about protocol buffers: there is an extreme amount of plugins. This list is provided by the community, and these can be treated as plugins for the protocol buffer compiler. So we can generate code for almost every programming language that you want. And for each of these languages, depending on the implementation, you sometimes have nice flags; you can specify, for example, some specific flag to the compiler.
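Taken together, the keywords mentioned so far might look like this sketch of a .proto file (all names here are invented for illustration):

```proto
syntax = "proto3";

// Packages namespace your types, similar to modules elsewhere.
package demo;

// The standard library's "void" type has to be imported before use.
import "google/protobuf/empty.proto";

// Enums are supported alongside scalar types; the zero value comes first.
enum Direction {
  DIRECTION_UNSPECIFIED = 0;
  UP = 1;
  DOWN = 2;
}

message Numbers {
  // "repeated" is the synonym for an array: an array of int32 here.
  repeated int32 values = 1;
  Direction direction = 2;
}

// A service is defined with the service and rpc keywords.
service NumberService {
  rpc List (google.protobuf.Empty) returns (Numbers);
}
```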
For example, in the Java world, you would specify that you want to generate multiple files, a file per class, because this is a very common way in Java. Yeah. And some extra options that you have in protocol buffers: you can use the required and optional keywords on fields. I think in the latest version they are mostly treated as optional almost all the time. Unfortunately, versioning is not supported. And that means that if you need to change your protocol between versions of your application, you will need to be really, really careful with that. And the main rule is that you don't want to change existing fields' tags, because the tags are the most important part of the message definition. And I saw this approach, I think this is one of the main approaches to versioning in the gRPC world and the protocol buffers world: you basically create a different version of your packages by introducing a new package with something like mypackage.v1. So that's how you switch from one version to another. Yeah. And there is a little bit more than that, but it's really for some specific use cases; I'm talking about maps, oneof and so on, but in most cases you will not need them. And that pretty much covers protocol buffers. So it's actually not a very difficult topic. So yeah, any questions at this point? I was talking a lot, I know, sorry for that. All right, it's good. That means that everything is clear, bright and clear. Now I'm kidding, but seriously, if you have anything you want to ask, feel free to; we are of course very open to that. So at this point, I will show a little bit of a demo. And my demo will finally let us see some code and some gRPC stuff. So let me start with a demo that shows the protobuf itself. Oops, did I share? Yeah, I think I showed this VS Code here.
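The package-based versioning approach could look like this sketch (the file and message names are assumptions):

```proto
// order_v1.proto - the original protocol stays untouched
syntax = "proto3";

package mypackage.v1;

message Order {
  string id = 1;    // existing tags must never be reassigned
  float amount = 2;
}
```

A breaking change would then go into a separate file declaring `package mypackage.v2;`, and clients migrate by switching which package they import, while v1 keeps working for older clients.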
So what I wanted to explain in this, yeah, let me share the whole screen, it will be a little bit easier like that. So in this first demo, I want to show how to use protobuf to serialize and store JSON. I'm going to be using the protocol buffer compiler, and to generate code from my data type, I will use this kind of command. And you see here I already specify some arguments to my command. I specify that I want to have output in javascript format, and I specify CommonJS. By the way, protoc doesn't officially support ECMAScript modules; you can find a flag for them in the original source of the library, but since it's not officially supported, please be careful with it. And then you specify a proto file, and this command will generate us the protocol buffer compiled code. We're also going to be using google-protobuf. This is a library, also supported by Google, that provides the protobuf runtime inside javascript. Yeah, and as an example, I will use this Bitcoin api data. So we should see something like that in our JSON, this part on the right. Yeah, and the last part is a VS Code proto extension. To work with proto files, it's useful to have an extension, and you can find many extensions for VS Code, for example, and I'm pretty sure they are available for other IDEs as well. Yeah, in this package.json, I have already prepared scripts to do so. And the first step is to download prices. This doesn't have any direct relation to gRPC yet, but just to show you how easy it is, I'm using http.get here. So it's still an old way of getting data from an external endpoint, but maybe in further workshops we should change it for Node 17.5+ and just use fetch. That would be nice. And then I just put it into prices.json. So let's see what we're going to generate. This doesn't have anything related to gRPC yet. Let's execute it anyway. So let's call npm run. So it created a file, prices.json.
And it looks like that, almost like we've seen in the browser; I will format it. So yeah, this is our data. And now our next step is going to be to introduce a proto file for this type of data. So let's do that. And I have already created it for us. And the nice thing is, if we compare this to the JSON itself, it's really straightforward how you have to define the data. So I just created a message Price. So every item inside the array is of type Price. It has some date. You see, it's of type string here. It's a formatted date, but it's still a string; I think protocol buffers don't support dates, they just support strings, so it's a string here. The next one is interesting. So I have price, and I made a mistake when I was preparing this: I made it an integer, and in my further code I was going through this array and asking, can I convert it to this type? And it was throwing me an error, but it was throwing the error on the second item. So I looked at the second item and I saw that, no, it's not an integer; it actually should be a float. So I put a float. And yeah: price, open, high, change percent from last month are also floats. And the last one is volume, which is also a string. So that's how we defined it. And we defined one more message type. This is Prices, and this is very easy: we just say that it's an array of Price items. So that's how you define two data types here. And yeah, what's our next step? The next step is basically to generate the protobuf runtime. For that, we're going to need this protoc executable installed; I've already explained it. So let's execute this command and run generate protobuf runtime. What it created, and I'm not sure if you saw it or not, is this prices_pb.js. So this is entirely generated code for us. Let's delete it and generate it again, just to make sure. All right, it looks like it's still relevant. Yeah, so it created it.
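The prices proto being described might look like the following sketch; the exact field names are assumptions inferred from the JSON fields mentioned:

```proto
syntax = "proto3";

// One item of the downloaded Bitcoin price history.
message Price {
  string date = 1;                          // protobuf has no date type, so a string
  float price = 2;                          // float, not int32: some values are fractional
  float open = 3;
  float high = 4;
  float change_percent_from_last_month = 5;
  string volume = 6;
}

// The whole file: just an array of Price items.
message Prices {
  repeated Price prices = 1;
}
```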
And yeah, I don't think we should go through all of this generated code, but in the end, we see that it uses google-protobuf, again, the runtime to operate on the protobuf format. And beyond that, it does some internal magic stuff. But what's important for us is that we have the data types Prices and Price. And every data type has a setter and a getter, and it's also strictly typed. That means that when you need to create a new instance of this data type, you will need to pass correct parameters, and only in this case will you have the correct result. For example, this getPricesList will allow you to get the array of prices. This one will allow you to add an item to the list. We will actually see some usage of this in the next step. But yeah, just to show you: setHigh, getHigh, and then it calls the internal library to put this field into the protobuf data type. So what would be my next step? Next, I want to actually do the transformation. I actually see an error here: I see that I use index.js again, but I think that was not the idea; I think I wanted to use the transformation script. So that script is my next step: I want to do the transformation of the JSON file, and instead I want to get the prices file. So let's first delete the result, because I already have it here. So what I'm doing here: I still use node.js, I use the file system module, and now I require this prices_pb that we just generated. I read the prices data JSON file. And then next, I create an instance of Prices, I go through my JSON array, and I add each price to my array. And what happens actually, it maybe looks not that fancy, but what happens here at this point is that it validates our message against the strict data type that we defined. And as I said, I made the mistake in the second field: it was not an integer, it was a float. So here, on the second item, it threw me an error, and I had to fix it.
After I did that, after I created this instance of Prices, I want to serialize it, and I want to output it to a prices file, just the prices as binary. So let's do that. Let's execute this transformation script. And yeah, it did it. It created this prices file. Let me show how it looks. Yeah, it shows some data inside, but it's of course not very readable. However, the strings you can see in plain text here. Yeah, so let's have a look, because I showed you this file. So we output this prices file, we put the serialized data in there. But the last pieces of my code here do a little bit more: they read the file, then they deserialize it back and go through the array, now of deserialized data, and they output the stringified representation. And it shows me here in the console how it looks. And it looks almost the same. I see now that the keys of this object are lowercase, which I actually didn't notice before, but I guess this is what the object conversion method does. Yeah. So this is the first part, just to show you how these prices can be specified in protocol buffers. Just to show you a little bit more: if I do ls -la now, you will see that my prices.json file, and maybe let's do it a little bit more correctly, like that, my prices.json is 15K here, and my binary data, the prices file that we just generated from the transformation script, is almost three times smaller. Well, I think usually people say twice smaller or something like that, but we see that this is a much, much smaller amount of data that needs to be transferred between your production servers. So this is really nice. Okay. The second demo that I prepared for today is how to use a gRPC node.js server and client. And we're going to continue with this code that I showed you, but we're going to need to do something more. First of all, we're going to need to use the grpc-js library.
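The size difference is easy to reproduce even without protobuf. This is an illustrative sketch, not the protobuf wire format: it just packs the numbers as fixed-width binary instead of JSON text, which shows where most of the savings come from:

```javascript
// Illustrative sketch: compare the size of a price list as JSON text
// versus a naive fixed-width binary packing (one 4-byte float per number).
// This is NOT the protobuf wire format, just a demonstration of why
// binary serialization tends to be several times smaller than JSON.
const prices = Array.from({ length: 100 }, (_, i) => ({
  price: 40000 + i * 0.5,
  open: 39000 + i,
  high: 41000 + i,
}));

const jsonBytes = Buffer.byteLength(JSON.stringify(prices));

// Pack each record as three 4-byte floats = 12 bytes per record,
// with no field names and no punctuation repeated per record.
const binary = Buffer.alloc(prices.length * 12);
prices.forEach((p, i) => {
  binary.writeFloatLE(p.price, i * 12);
  binary.writeFloatLE(p.open, i * 12 + 4);
  binary.writeFloatLE(p.high, i * 12 + 8);
});

console.log(jsonBytes, binary.length); // the binary form is several times smaller
```

Protobuf adds a small key byte per field on top of this, but the principle is the same: no repeated field names, compact number encoding.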
This is the thing that I tried to explain when I showed this repository of environments and languages officially supported by the gRPC technology. Still, grpc-js is developed separately. So this is a client library that allows you to use gRPC, supported by the same company, same community, but located in a different repository. So this is the library that allows us to do gRPC. Then the proto loader. This is a loader for our proto files; it allows us to load them correctly into our runtime. Actually, at this point, we might want to say that there is a way to do it statically, because in most cases it's used like this: we get this proto-loader npm dependency, we load the proto file into our code, and it's parsed on the fly into a service and creates a stub on the fly. But we can do it in a static way if we want, so ahead of time. But in most cases, in most examples, you will see the dynamic way, and we'll also show the dynamic way of doing that today. And I'm going to show the usage of streams as well. It shouldn't be very hard. For this demo, I actually use the example from proto-loader; it has a nice piece of code that shows how to use it. So let's have a look. In my next step of this code, I start the gRPC server. So let's first have a look at the code, at what our gRPC server does. So again, these are the dependencies, so we just take these dependencies. We get the proto loader; I explained that it loads the proto files on the fly for us. grpc-js can help us to run these stubs. So we load the proto file, and here I specify the path to my proto file. I load the definition of that. What happens next is, because we're running a server, we need, or we want, to specify the implementation for the server. So how are we doing that? We write some implementation: list, list stream, get. Then we create a server by doing new grpc.Server, and we add the methods that we want to implement in this service.
So I think at this point it's not clear what we have in the service. So let's have a look at our proto file again. Besides the messages here, I specify the service, and I say my service is called HistoryData. It has three methods. rpc GetByDate will return a price; List, by empty, will return Prices, like a list of prices. And the last one is ListStream. This one will stream price data back to a client. You see, here I specify the message of type Empty, a kind of void type here, and also, just to show another type of data, a message Date here. So a user can pass a date, and I will return a price by this date. So I implemented this get, list, list stream. I created a service, I created a server, I added the implementations of these methods to my service, and then I just run it. I say I will be running on port 8001. I will create an insecure connection, just for test reasons, for our local setup. And this is the part where, on production, you will want to use certificates, and you will pass tokens between the services. And yeah, that's how I start my service here. I think you might want to know what's inside my implementation here. And as you can see, it's very, very simple code. In this get by date, I read the date from the request, and then I check my prices list. I try to find the price there; the prices, by the way, are just my JSON here, but this doesn't matter, because I still need to return the right type, because it will be converted into protobuf here. And that's how we actually operate in our program: we basically operate on normal messages; we don't have to transform them manually, because it's all done in the stub code. So if I found something, I will return it back. If I haven't found it, I will just return an error. For the list, it's even easier: I just return my whole prices list. For list streaming, this is the part where it gets really interesting. And, if you look here, the difference is that I return a stream of prices.
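The service being walked through might be declared like this sketch (Price and Prices are the message types from the earlier demo; all the names here are assumptions):

```proto
syntax = "proto3";

message Empty {}          // a home-made void type

message Date {
  string date = 1;
}

service HistoryData {
  // Unary: one Date in, one Price out.
  rpc GetByDate (Date) returns (Price);
  // Unary: nothing in, the whole list out.
  rpc List (Empty) returns (Prices);
  // Server streaming: prices pushed to the client one by one.
  rpc ListStream (Empty) returns (stream Price);
}
```

The only syntactic difference between the unary and the streaming method is the `stream` keyword on the return type.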
So here I use something interesting. I go through the prices, and for each price I put a timeout, as if it would respond to me every half a second, just to show you this process. This is a kind of asynchronous way of communicating between microservices. And then I just end the call when it's done. This is our server. Let's start it. It seems to be running. So now we might want to show how a client looks. And it's actually even a little bit less code, but still, we use grpc-js, the proto loader, and the proto file name to generate kind of the same stub here. So we generate a stub. And then we don't need to create a server, because we're going to be consuming it. So how are we going to do that? We specify that we want to use an instance of this HistoryData protocol. So we say that the HistoryData service is running on this port, using the insecure connection. And then I show that I can make a client get by some date; I will show that it found its prices from there. And I show the list as well. And let's actually do it one by one. Let me comment out the client list first. So first, let's show how the client get works. For that, I will open another terminal here, and I'm going to start this client. Start gRPC client. Yes. So that worked nicely. I got a price by this date. Let's have a look at what I get by specifying a wrong date, something like that. Yeah. So it will tell me in the details of the error message that it was not found. That's what we specified on the server. Okay. That's the get; it was very simple. Next is the list. So this should be the whole list of JSON prices. And that's what it is. And the last one that I wanted to show is exactly the list stream. And here I use another interesting thing: I use for await. It's a way to iterate through async iterables in javascript, and streams support it in node.js, and gRPC supports it too. So I just go through it as if it were just a synchronous array, but in this case, it's actually asynchronous.
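The for await consumption pattern can be sketched in plain node.js, with an async generator standing in for the gRPC stream (all names here are invented for illustration):

```javascript
// A plain-Node sketch of the "for await" pattern used with gRPC streams:
// an async generator stands in for the server stream, yielding one
// price at a time, and the consumer reads it like a synchronous array.
async function* priceStream() {
  const prices = [{ price: 40000 }, { price: 40500 }, { price: 41000 }];
  for (const price of prices) {
    // In the real demo each item arrives every half a second;
    // here we just yield asynchronously without the delay.
    yield price;
  }
}

async function collect() {
  const received = [];
  // for await works on any async iterable, including gRPC call streams.
  for await (const price of priceStream()) {
    received.push(price.price);
  }
  return received;
}
```

The consuming loop looks synchronous, but each iteration actually awaits the next item, which is exactly what makes it fit streaming RPCs so well.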
And when I call it, every half a second it will get a new price and put it to the console. Something like that. Very, very nice. And I specifically like this example because it shows how elegant some constructions are in modern javascript. Yeah, quite cool, in my opinion. To finalize this demo, I have just one more thing: a Java client. We're not going to talk about Java today, so if you don't like Java, don't worry about that. But just to show you that it's possible, I went through the guide on how to start something similar, but in Java. And well, I was having a little bit of trouble, but in the end, yeah, I just copy pasted the prices proto; my prices proto doesn't change. I told you that in these proto files you can specify some compiler flags; let's not worry about them, I actually didn't dig too much into that. And then in the prices client, I also connect to this port 8001, and then I call the get or list method. Yeah, a blocking stub, list on request. So that should just make a list call to my running server. And yeah, it lists the prices, as it should. And I hope that demonstrates gRPC in a nice way. So, to summarize this demo, we did five steps. In the first step, we downloaded some JSON data. In the second step, we generated the protobuf runtime, so just the protobuf formatted data types from our proto files. Then we transformed our data into the protobuf format, just to verify the format is indeed correct. And in the second part of this demo, we started one microservice that reads the data from pure JSON. And in our client part, we started another microservice that does a call to the server. And it does it in plain javascript, but inside there is the stub that is generated for us. And this stub is actually converting the data, or a message, into a protobuf formatted message and sends it through the client.
And by default in gRPC, the transport is HTTP/2, as we said. The server received it, did the operation in the opposite direction, and sent back the response. And we could read it in different ways; we saw it also with streaming, getting one response, getting a list of responses. And in the last part, we saw that we can also just make another client in an absolutely different technology, generate a stub for it, and we'll still receive the same message from our server. I really think this is nice. And yeah, that finalizes our first part. So, Andrew, are you still here? Yeah, definitely, Alex. Such a lot of material. I'm really, really impressed by you. You okay? I feel my head almost exploding because of all this information, but I feel so smart now. Thank you. Yeah, thanks, Andrew. Thanks for this work, maybe. So, Andrew, why have we chosen typescript to demonstrate today? Well, typescript. I mean, I recently started learning the Go language, and that was a crucially different experience after javascript, because, you know, all of these types, and, you know, strict typing: you understand what you send, what you receive. And basically, right after that, I also started to, you know, explore typescript for myself. I hadn't worked with it too much before, but yeah, this is pretty much it. I mean, a good typing system, catching most of your bugs at an early stage, before you even go to production or any environment, when you run a build. All right, thanks. And actually, for me, you know, I think typescript is really, really logical here, because protocol buffers kind of already allow us to use strict typing, right? And it's logical to have a kind of bridge between typescript and protocol buffers. And that's where it comes in really nicely. So far, guys, we just used pure javascript, but don't worry.
In the next part, we're going to be using typescript, and it will be really nice with this protoc-gen-ts, if I'm not mistaken. Next question to you, Andrew. Why have we chosen to use Lerna today? A good question. I mean, Lerna is one of the tools which allows you to effectively build monorepos. A monorepo is when you decide to put your different services and different libraries in just one repo. It brings some benefits: you don't need to, you know, manage all these dependencies in a separate way. You have all the code base in one place, you can go change it, see the changes, and, you know, iterate faster. And Lerna allows you to do some extra things on top of that: you can build all your packages in one go, you can test all your packages in one go, and things like that. All right. Yeah. And are we going to deploy? So for now, the answer is no, but there is going to be another workshop soon. So join, and we will deploy it. Yeah, thanks. Exactly. So I don't know if you're following, I hope you follow this nice conference; there is going to be another conference soon, the DevOps.js Conf. We're going to be building kind of a second part of this workshop there, and we'll show, based on this code, how to deploy to some interesting environment. But yeah, let's not cover it today, because we have plenty of stuff. So Andrew, please share your screen. Yeah. Yeah. So, cryptocurrency converter. Let's just begin from cloning this repo. I already have it cloned, so there is nothing to do. But yeah, at our link you can find the explicit explanation of how you can install all the components you need. So this is the repo, for sure. And another thing, it's called protoc. Andrew, I think you're showing your IDE, not the browser. Yeah. Okay. Okay. Yeah. Thanks. Yeah. So you have this material, you can just install it this way and try, you know, using these tools.
So protoc is a tool which Alex spoke a lot about. It allows us to compile our proto files into code files, so then we can just start using this code in our project. So yeah, to prepare your environment in this repository, first of all, you need to install dependencies, for sure. So let's start from that. But yeah, again, I already have it. Anyway, it's all done. Then, the next step, we just need to run yarn lerna bootstrap. This is a command which bootstraps all the projects we have here. So in one repo, we have multiple different projects. When we run yarn lerna bootstrap, it just executes yarn install in every single project, and we make sure that everything is prepared for our demo. So it just says okay. Okay, then. Yeah. So next, we're gonna speak about the monorepo structure. I hope you can see this well, but if not, just tell us. So we divided this repo into packages. We have common packages, and common packages are something we'd like to share between our services and between our other packages. So some common code we just like to share between the different services that we have. Here we have go-grpc, tracer, and boilerplate. Boilerplate is just a thing to kick off a new project. And inside grpc, what we have: we have a currency converter service, we have a demo service, and we have an ECB provider service. So let's have a deeper look at what's inside. We also have a proto folder, which contains all our proto files. A few words about Lerna, and how it allows us to work inside one space. So we have an extra configuration for Lerna, where we have defined the packages, and here we defined that all our packages are under this folder called packages. We also define that we use an npm client, which is yarn, and we also define that we want to use yarn workspaces.
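The Lerna configuration being described might look like this lerna.json sketch, limited to the fields mentioned (the glob value is an assumption):

```json
{
  "packages": ["packages/*"],
  "npmClient": "yarn",
  "useWorkspaces": true
}
```

With `useWorkspaces` enabled, the root package.json also needs a matching `workspaces` array, so yarn itself knows where the linked packages live.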
So yarn workspaces is another tool which allows us to actually include a package, like any common package, inside another package without any need to, you know, publish it somewhere. So with yarn workspaces, the platform understands that there is some package, which is just, you know, a number of files located in this folder, and it can be used from another project, from another package, another service. Yeah, so let's have a quick look into Lerna. So what does Lerna give us? Lerna gives us the ability to run commands on this monorepo, and basically do some magical stuff, I would say. For example, we can use this command: yarn lerna run build. And if we run this command just like that, it will build all the packages inside: all the common packages, all the services, and so on. But here we can just say, okay, we don't want all of them, we want just a specific scope, the scope starting with common. And why common? So here, if we go into the package.json, we see the packages are named starting with common, as well as here in the services, where we have all the packages starting with grpc, just, you know, giving a bit more context, in case we want to build only gRPC packages, or REST packages, and so on. So yeah, let's run this, and you see, when we run it, it actually says: executing command in three different packages. The command is yarn run build, and what it does is it actually goes to every single package in common and just executes the command yarn build.
Here's another interesting thing. You can see that somewhere we have a lint script, and, for example, in tracer, probably we don't. So let's imagine we don't have it, like we removed it, yeah, and we just try to run this command, yarn lerna run lint, on the common packages. And yeah, it automatically detects that only two packages have this lint command for now, so it will just run lint on two packages, and the third package, where we just removed this command, won't be affected anyhow, so you don't see any error coming out. Yeah, a few words about common and services, and why we decided on that. So we have a common folder; as I said, it's common code shared across different services, and one of the important things we have here is called go-grpc. You may imagine this is written in the Go language, but it's not; it's just like, hey guys, let's go with gRPC, yeah. And we have services; under the services folder, we have a grpc folder. This is another kind of, you know, gradation of the services. So, for example, we could have REST services here as well; it would be another folder. And we may have CLI services as well, like daemons or something like that; it would be a separate folder. Yeah, another important thing to say about this structure: in the package.json in the root folder, we actually need to define this part. This part is picked up by yarn workspaces. So for yarn, we need to say that these are the packages of common, and here are all the packages we have, and here are the packages of grpc. So when we define one service inside the package.json, and we say, okay, let's just use this service from there, yarn understands from where, because of those lines. Yeah, so let's have a look at a picture of what we're actually trying to achieve with this project. What's the business goal?
So the business goal is the following. We want to have a currency converter which will allow us to convert currencies between different sources. So we also have multiple providers. A provider is code which gives us currencies from some specific source. For example, we have the European Central Bank provider. So basically, these are the rates we utilize from one of the providers; it's the European Central Bank provider, we call it like that, and it just gives us an XML with all the rates it can convert. And now the idea is to take these rates from one place, from another place, and from a third place, and also from crypto providers. And then we have kind of an aggregation here on the converter side. So the converter can, you know, respond to a request like: I want to convert 0.345 ethereum to Australian dollars, for example. And the converter is intended to ask all the providers, understand what currencies can be converted, and if these currencies can be converted, the converter actually responds to you with some result. Let's say that.
Yeah, a bit of information about that; let's go back to the code and look a bit more deeply at what we have in our proto files. Proto is a really good approach: here you can see the picture, and on the picture you can actually design how your application will look, and then, after having the picture with all these flows and connections, step number two is to go to the proto and try to replicate the same picture in code. We don't really implement anything here; we just declare that this is something that should be implemented. Let me close this for a bit. So we have a currency converter proto, and it has just one method, called Convert. Convert accepts a request with a sell currency, a buy currency, and a sell amount, so we explicitly define what we want to convert, and in the response we receive the sell amount, sell currency, buy currency, and buy amount, and we also want to see the conversion rate as a result. What else? As it's shown in this picture, here we just described the converter part, and the rest is the providers. So how do we define providers? We have a currency provider proto, which also describes a very simple function, or method, called getRates, and here you can see that the request is basically empty: we just ask the provider, give us all the current rates, and we're going to use them for conversion.
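Based on this description, the two proto definitions might look roughly like this; the field names and numbers are assumptions for illustration, not a copy of the repository's files:

```proto
syntax = "proto3";
package currencyConverter;

service CurrencyConverter {
  rpc Convert (ConvertRequest) returns (ConvertResponse);
}

message ConvertRequest {
  string sell_currency = 1;
  string buy_currency = 2;
  double sell_amount = 3;
}

message ConvertResponse {
  string sell_currency = 1;
  string buy_currency = 2;
  double sell_amount = 3;
  double buy_amount = 4;
  double conversion_rate = 5;
}

service CurrencyProvider {
  rpc GetRates (GetRatesRequest) returns (GetRatesResponse);
}

message GetRatesRequest {} // empty: "give us all the current rates"

message GetRatesResponse {
  string base_currency = 1;        // the currency the rates are expressed in
  repeated ExchangeRate rates = 2; // repeated = an array on the TypeScript side
}

message ExchangeRate {
  string from = 1;
  double rate = 2;
}
```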
We also have a base currency; the base currency is, you know, the currency in which these rates are represented, that's its description. And we also provide an exchange rate message, which contains a from currency and the actual rate, which is a number. And here's an example of how we can represent repeated structures: this structure will be repeated, so it will be like an array; in TypeScript it will be an array of ExchangeRate objects. Yeah, and then we just go to the implementations of our two providers. First we have the ECB provider, which is the European Central Bank provider, and as you see, there's another interesting thing here: instead of writing the same code as we have here in the currency provider proto, we just say, okay, we're going to import it, and we refer to the currency provider's GetRates request and GetRates response. So we are reusing protos between each other, which is also quite a beneficial thing to do. And here's pretty much the same part for the CryptoCompare provider. If you're not familiar with CryptoCompare, here's an example of how a response can look if you want to convert crypto; let's say we only care about US dollars, so all these currencies are represented in dollars. Yeah, so what's next? We've just had a look into the proto, so how is this proto going to be used, how will we actually receive our code implementations of this proto? It's a good question, and this is where the go-grpc library will help us. If we look inside, the library consists of two things. One is what runs when we run the build, so if we open our terminal, go into this common library, and run yarn build... okay, it says no, it's not going to do it from there, so let's try from another place, from the root: yarn lerna run build, scoped to go-grpc. In reality it just executes the build command from the package.json, and that executes build.mjs, and build.mjs is just a wrapper around zx, another tool from Google, which allows you to write bash code right inside JavaScript. Not that it's very cool by itself, but it allows you to, you know, simply reuse some variables from the JavaScript, which was the idea. And as a final thought, what this command does is just run protoc with a plugin; it's pretty much the same as if we ran the same command by ourselves. So let's actually try to run it and see if it gets executed or not; hopefully it does. So if we go into go-grpc here and try to run this command, I'll just copy it because it's a very long one... and it says no again, interesting. Not sure why that's happening, but anyway, we have a well-working copy from here, as you can see, which just executed pretty much the same stuff. And what do we get as a result? The main thing I didn't describe yet: as a result, we have a dist folder here which contains all the protos converted to TypeScript and then to JavaScript, and basically these classes can be used from, you know, other places, as services, as clients. And there's this library we use for that, how is it called, let me see, it's called protoc-gen-ts. This library only generates clients for our gRPC services, but not servers, so that's why we implemented our common server here ourselves, and it will look pretty much the same for all the other stuff. So what does it have? It has, you know, a kind of wrapper class, which allows us to create a new server, and it also has a method called addService, which is a wrapper, a fast way to add new services inside your server, basically to add an implementation. So
here, as you can see, in the proto we just defined how these services should look, and when we ran the gRPC build, it created the same service, but in JavaScript. What's left is to implement the service, so we need to write our business logic behind this service. Yeah, so let's probably try to do something on top of this; I think it's pretty much clear for now that we have a server. Let's start from simple things, which is basically: let's create a common library here. So we have three common libraries, and we're going to create another one, and that library is going to be a logger. I want to implement a logger, a kind of wrapper over some other implementation, and I want to use this logger in other places, in other services, for example. Yeah, so to start with the logger, I want to introduce another thing which we used for bootstrapping. Here we already have some code created, so one option is to copy a boilerplate, paste it, and replace some parts of it by hand, or we can just run something like the yarn bootstrap command from the root of this folder, and see. This command uses a tool called Hygen; it's kind of a template engine, which provides an ability to simply, you know, create boilerplates quite fast and efficiently. So here, I will show you very briefly, we have an _templates folder, which contains two template sets, called service and common, and we run them pretty much the same way here; the common one, you know, contains the templates which will be used when we create a new common package.
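The server wrapper with addService mentioned a moment ago can be sketched roughly like this; the class and method names are assumptions, since the real wrapper lives in the common go-grpc package and delegates to the actual gRPC transport:

```typescript
// Minimal sketch of a server wrapper: register method implementations by name,
// then dispatch incoming calls to them, as the transport layer would.
type Handler = (request: unknown) => Promise<unknown>;

class GrpcServer {
  private implementations = new Map<string, Handler>();

  // addService: a fast way to attach the implementation of one proto method.
  addService(methodName: string, handler: Handler): void {
    this.implementations.set(methodName, handler);
  }

  // Dispatch a call to the registered handler.
  async call(methodName: string, request: unknown): Promise<unknown> {
    const handler = this.implementations.get(methodName);
    if (!handler) throw new Error(`No implementation for ${methodName}`);
    return handler(request);
  }
}
```

The real wrapper additionally binds the generated proto definitions to a listening port; this sketch only shows the registration idea.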
Yeah, it asks us some questions, which you can see here right now: the name, where the implementation should be placed, and so on. So let's proceed: I'm going to call it logger, the name of the common library is logger, and the author, Andrew, my name is Andrew. Okay, as a result, we see that this Hygen tool created a few files for us, so let's go over there and look at them. Here are the files, pretty simple: we have an empty index file, and in it we'll implement a very simple logger protocol. So let's say we export a debug function, which takes a message string and does a very simple thing: it just prints the same message via the console standard library, but here we could define, actually, whatever we want. We just create a few more, info and error: so, error here, info here, and we also need to export these pieces of code to be accessible from out there. Okay, once we've done that, we can go to our new folder, the logger package under common, and what we can do here, we can try to run the build and see if it works, if it creates anything. Hopefully it does... it says no, it complains about the variable info, oh, okay, we're already exporting it. So here is the mistake, actually: I wanted to write a default export statement here. Yeah, now it doesn't show any error, so it should be fine; let's see how it goes on the second run. Okay, it's done. It also shows some warnings, no-console, which is okay, we know about this. Yeah, so we have this library, we have built it, so it appeared here. So what's next? We're going to use it somehow. So, lerna uses a specific way of adding libraries like that, our, like, private project libraries, and we just use the same way: we go to the
root folder, and we type the yarn lerna add command for our logger package. We just tell lerna we want to add the logger, but we still need to define the scope, i.e. where, so it will be added to the gRPC ECB provider service. Let's try. Lerna does the installation, but in the end we don't really install anything: it's just, you know, our own library, which is located right next to us, but lerna still uses yarn workspaces to wire this up. So if we now go to the ECB provider service folder, we see that the logger was added with the version we defined. Okay, a bit of luck there. And what's next? Here we have our server, so as an example of using it, let's just try to add our logger here, right? So here we do an import from common, okay, we see it here, and we're going to import info... no, we'll just import the logger itself. Okay, and then, to see if it works, we just add a new log line in here: logger.info, saying "ECB provider has started", nothing more. Okay, then we can go to this folder... oops, not that one. I'm having some issues with the terminal here for some reason, let me try to resolve it; I cannot actually navigate there. Okay, for some reason it doesn't work, let me try to restart it quite quickly. Usually IDEs are not that great with terminals, and I sometimes show this with just a native terminal, but I have everything set up here, so it will just be easier to do it here. Okay, we are here. So what we're going to do: we can actually try to build this ECB provider, where we just added our new thing, which is the logger. Let's try to build it and see if it works... okay, it's done. Let's try to run it: yarn start. Basically, when we run it, it executes a standard TypeScript command, which compiles all the TypeScript on the fly and just runs it. Yeah, so here we can see the "ECB provider has started" message, this is the
one we just added here, and we also see another message, "server started", and this one, I believe, comes from our standard common library, go-grpc. Let's try to find it and see... server, yeah, this is it, and here we just use console.log, because we didn't have a logger before, and now we have one. Yeah, that's great: we just created one simple library, which we then reused in another of our services, which is impressive, I'd say. So let's move on and try to implement something more sophisticated, I would say. Let's try to implement another proto. So here you can see that we have the ECB provider, and we also have a CryptoCompare provider, but while for the ECB provider we already have an implementation inside our code, for CryptoCompare we don't. So let's try to do that and see how it goes. Again, we're just using the same templating stuff, which is, you know, Hygen. Let's see the command: the command is yarn bootstrap service... ah, what's the name of the service? Oops, sorry: crypto-compare, just copied. The next question is to define the proto file, which is this one, and then what is the proto service name? Let's go and see: the service name is this, so we define it, just copied. And what is the proto package name? It's the same, but starting with a lowercase character. And yeah, my name is Andrew again. Okay, so now this Hygen tool created a bit more stuff behind this, which is located here, under services/grpc. Here we can see that a new folder was created, which is the crypto-compare provider, and we also see some tests here, as well as, you know, two files, and one of the files is the index, and in the index we just simply run the server, and pretty much that's it. And if we go to the server, it shows us some errors, because here we don't have any implementation yet, and here we just need to, you know, add our implementation. Here you can see that all the values we provided to Hygen have been inserted in here, so we have most of our code already written, but some code still needs to be added. So let's create our implementation of our server. We're going to create a new file here, under services, called getRates, because getRates will be our implementation. So here we just write an import for the currency provider, okay, it's inserted automatically, and we export a default function, which is also async, and which will implement our stuff. Yeah, so we have a request, and our request will be the currency provider GetRates request, easy, and we also have a response; as we have an async function, we are required to return a promise with our response, so the response will be the currency provider GetRates response. And here we'll just, you know, implement this... sorry, we're not going to go deeper into the implementation of this service; we'll simply return some values to see if it works, because the real implementation will be one of the tasks to be done, so it would be nice practice for you to try it a bit deeper. So we return a new response, and here, if we start typing, we can see that we have two things to return inside of it: rates, which we just return empty, and base currency, let's say it will be US dollars. Okay, save that. We don't actually need the request here, so a few more fixes... okay, we just implemented this function. What else? So here in the server there's a special place where we need to add this function to our server, so it understands that here's the implementation. We add it with addService, and addService is a generic function, which we say it's going to be
accepting a request and a response type, the currency provider request and the currency provider response, but, as we said earlier, it should be a promise, so it's a promise of the response instead of just the response. And inside this function we have to define the method name, and the method name should be exactly the same as we have it here, so we just copy it from here. What is the gRPC method? It's getRates, so we define getRates here, and then we just pass the getRates from our implementation; yeah, it should be this one, let's check. Okay, so we added it to our gRPC server, which is now going to understand that the proto is located here, this is the service name, this is the package name, and here's the implementation of our method; if we go there, we just return an empty response. So let's try to see if it really works. Let me go to services/grpc/crypto-compare-provider... okay, I'm in the right place now, so now I'm just going to run yarn build and see if this new service is going to be compiled successfully by TypeScript. And it is, okay. So we just verified that, you know, everything compiles, and here we see the build folder, which is the result of our build. This build folder is something we can actually run in production if we deploy somewhere, so the idea is: don't use TypeScript directly in any environment, but instead compile it to native .js, JavaScript files, and then just execute those on your environment. So yeah, let's try to see. Here we also have a file called env-example, so let's copy it; this file is going to be used by the environment to pick up the port and use it when the server is going to be started. So let's start: yarn start. Okay, it's running now. Now, as an example of calling it, we're going to use a tool called grpcurl; let's have a look into this. So grpcurl is a tool which allows us to run a CLI command against our gRPC server and try to test it, so basically we're just sending a request to this running server. So here we have the server running at this port, and here we just try to call it somehow. Let's try to see what we have here: it's the crypto-compare provider, and we're going to list, you know, everything we have inside, so we have the CryptoCompare provider here. Okay, let's try to see what's inside. This list command actually reads the proto file and, you know, just understands that there is one service, called like that, and there is also a method called like this, as simply as that. But my prepared example was for our other service, the ECB provider, not for this one, so let's try to change it and see if it works successfully or not. Yeah, we just change this line, and here we also need to change the proto to crypto-compare. You can see here at the beginning there's an echo with just an empty JSON object, which means we're just sending an empty request to this server... "server closed the stream without sending trailers", interesting. Yeah, it's interesting; actually, I'm almost sure that I've just made some kind of mistake in the middle. Let's try to run it against the ECB provider instead, because I have a working example with that. So yeah, let's start the ECB provider as a side thing, let's start it here; okay, it started. We have to stop this one, because we're going to, you know, run the currency converter, which is another piece of software we already implemented before, as a second service. So for now we're trying to run two different services, and one service is going to call the other one to get some values. Let's also start this one here. Okay, so two services have started on different ports, as you can see (there's some experimental stuff we use, hence the warnings), okay, and my example is about calling getRates,
which will hopefully be executed successfully. Okay, so here's an example: I called the ECB provider, which is the existing one, on this specific port. Now, if we, for example, decide to go here and try to change it, not here, but here, this is the implementation of the real service, reading the XML file which I showed you before and just converting it to our proto structures. If we change it to just return, you know, empty rates, hopefully it will still work too. Okay, it's restarted; let's try another call. Now it just returns the base currency, because when you return some empty structs in here, protobuf automatically optimizes them away and doesn't even pass them to the other side, so from the other side we will never know whether the field was filled or not, unfortunately. So let's put it back. Yeah, yet another thing to mention: how to actually test this. There is also a folder, if we go to the ECB provider, which is called test, and here we have an integration test for our server. So what exactly do we do here? We mock the implementation of node-fetch, because we use node-fetch to fetch the external rates, and as a mock we just return some specific values in here as an XML document. And here's what we do: we create an ECB provider client, and then we just test this provider: basically, on top of this client, we try to run the getRates function, and we expect that this function will return some specific converted result, the result which is actually, you know, generated here in the business implementation of this service. Yeah, what else did I want to say? So we know how to create a service now, and this service returns us some data. We also have a currency converter here, so let's try to ask this converter to convert something for us, and see if it works. I will also show you the implementation of the converter, just to make it clear. So we have the same kind of file called server, which is pretty much the same as the one we implemented, and we have a convert method; let's go there and see. And here's what we have: we have a provider services environment variable, which for us contains the list of all the providers we have implemented; as for now we just implemented one, the ECB provider, so we have just one here, but the general idea is that we define all the providers, comma-separated, in this variable. And then inside the code we have provider clients, which are clients like that, and then get currency, yeah. And here's what happens: when a new request arrives to the converter, we just ask all our providers to return the current rates to us, and then we have this aggregated rates value. Basically, you know, let me just show it: for now, as we have just one provider, which is the ECB provider, it simply takes the first result from the first provider, and then it takes the rates from it, does a quite simple conversion here, also rounding, and here we return the result. Okay, so we have this currency converter running, and now I actually want to call it, so here I need to change the proto to currency converter, it's on the port 51, and here it's also currency converter. So yeah, what else are we missing here? We're missing the request: as our service expects some request to be sent to it, we need to actually define it. So there's a sell currency, I'd say it will be US dollars, there's another field called buy currency, I'd say it will be GBP, and I want to sell... oh sorry, it's not sell currency, it's sell amount, so I'm going to sell 100 US dollars, and I want to have a conversion to GBP. Let's see... the magic is not happening for some reason. Let's see what we have here: okay, we have some
weird error here; I think it was a rejection inside our service. Let's try to restart it... and let's run yarn install in the currency converter, just to make it clear it's not some dependencies issue... it's not really that, oh my goodness. Yeah, I just use nvm, if you're familiar with that, a really cool tool for switching Node versions: here I'm on Node version 8, and if I do nvm use 8, it will be 8, and stuff like that. It's probably because when I restarted my IDE, everything went wrong; I'm trying one more time to install everything, which is not clear why it's needed. Yeah, indeed, nvm is a very nice script to control which version of Node you use; I would say a must-have if you're working with a Node environment quite often. It's a simple script that you can install on basically any operating system. And I can also imagine that the dependencies you already installed on one version of Node wouldn't work on an older version of Node. Makes sense, yeah, sure; thanks, guys, for the feedback. Okay, now it worked, finally. So what do we finally have? We have two running services: this is the first provider we have, and we have another service running, which is the currency converter. So here in the code, we just, you know, call one service from the other, ask for all the rates, and then just do the conversion on this side. But the idea of the service is that if we decide to add more providers in here, it will be easy to do: we implement new providers, we add them inside this chain, and then the converter runs pretty much the same protocol in here, but instead of having just one provider with just regular currencies, we also have cryptocurrencies as well, and we are able to convert cryptocurrencies too. But for now, it's just a simple implementation where I'm selling $100 and receiving some GBP, and here's the effective rate
of it. So, I think I'm pretty much finished with the material we had. Alex, what do you say? Hey, hey, yeah, great job, Andrew. Let me quickly switch back to my screen again. So, what I like about the structure of this workshop is that we... Can you still hear me? Oh, sorry, that was the screen share; one second, I lost connection or something. So, what I like about this structure is that we kind of start from the real ground, right, and then, in the second part, we show more of a real scenario, because, as I understand it, this is pretty much the setup you're dealing with on a regular basis, right? You have Lerna, you have services defined, and of course many more services, many more definitions, but still kind of that. Yeah, right. Okay, then what we think is the logical next step is to try out some of those techniques yourself, but before we do that, let's make a summary, give some final words, and then I think we're going to explain the technical tasks that you can take for yourself to practice a little bit more. Yeah, I'm going to start the summary, and Andrew, please join in on the points if you think they are relevant or not relevant to you. So, first of all, we discussed today what gRPC is and what protobuf is, and it seems to be yet another standard to learn, right? We already know many, many other approaches and frameworks that can be used for server-client communication. So, yes, this is yet another standard; however, it's very appropriate to learn, so we think. Why? Because it's environment and language agnostic. Well, you can say so, although, as we've seen, it's not exactly environment and language agnostic; it's more that it has support for many environments and for many languages.
You still need to understand what exactly to use, how exactly to use it, and which version to use, but what you get out of the box is one common standard for the whole ecosystem of microservices that are used inside your application. The protobuf itself is a strongly typed messaging format, so together with TypeScript, I think it's a really nice match: you can define your services' protocol, you can convert messages into strict types, and they will be checked before being sent, or after being received, actually, in a service. It's also quite efficient. We didn't show any performance tests today, but you can find some of those on grpc.io. They have really nice metrics, especially against the more traditional backend technology stacks like Java and C++; you can imagine it's super fast. Actually, I think if you look at some presentations from 2017, or even earlier presentations for gRPC, the Googlers working with and promoting gRPC were saying something like, I don't know, 10 billion operations per second across the microservices environment of an application. You'd better check the numbers, but it's incredibly high and incredibly fast. So it's a really efficient format, both as a transport and as the data itself. The compiler and library are supported, well, first of all, by the community, and, more importantly, by Google, which actually gives you a lot of stability in this choice if you choose gRPC. With the repository that we saw today, it basically allows you to take a dependency on this library without doubt, because it will be supported by lots of people and built according to the goals that we also discussed today, like efficiency and performance. Security out of the box, yeah, we discussed a little today. Scalable, as all microservices goals are; well, those are two words that I cannot elaborate on on top of today's workshop.
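To make the "compact binary format" claim a bit more concrete, here is a tiny sketch of varint encoding, which is the wire encoding protobuf actually uses for integers: 7 bits per byte plus a continuation bit, so small numbers take a single byte instead of a fixed 4 or 8:

```typescript
// Encode an unsigned 32-bit integer as a protobuf-style varint:
// 7 payload bits per byte, high bit set on every byte except the last.
function encodeVarint(value: number): number[] {
  const bytes: number[] = [];
  let v = value >>> 0; // treat as unsigned 32-bit
  while (v >= 0x80) {
    bytes.push((v & 0x7f) | 0x80); // lower 7 bits + continuation flag
    v >>>= 7;
  }
  bytes.push(v);
  return bytes;
}

// Decode a varint back into a number (inverse of the above).
function decodeVarint(bytes: number[]): number {
  let result = 0;
  let shift = 0;
  for (const b of bytes) {
    result |= (b & 0x7f) << shift;
    if ((b & 0x80) === 0) break;
    shift += 7;
  }
  return result >>> 0;
}
```

For example, the value 1 encodes to a single byte, and 300 to two bytes, which is part of why protobuf messages are so much smaller than their JSON equivalents.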
We didn't discuss it that much, but yeah, I think being scalable is the idea of all microservices. Free and open, as all open source libraries are. Pluggable: this is a very key feature that basically allows you to add your own extension on top of the gRPC framework. Like, if you want to generate code in, I don't know, Lua, for example (Lua is supported, I'm pretty sure), or, say, the "pizza script" language that you invent during your sleepless nights, then you can make your own extension for gRPC, and I imagine you'd have a microservice that runs gRPC in pizza script; I can just dream about it, to be honest. I can actually add a bit about pluggable. Maybe you've heard about the company HashiCorp; this is one of the companies whose tools I started using for infrastructure creation, basically using the Terraform language, if you've heard of it, and then I just got a deeper dive into these microservices and gRPC topics. And at some point in time, I realized that these guys implemented this pluggability inside their cloud platform by using gRPC pluggable services. So basically, they allow you to create your own service which, you know, just follows some specification written in gRPC, and then you can just plug it into this cloud and it starts executing, giving the cloud a lot of, you know, features: for example, they ensure that if this service goes wrong, you don't, you know, break the whole system, it just breaks itself, and they handle it. So yeah, that's it. Sorry, Alex. No, no worries, very nice indeed. And the idea is kind of the same here. Yeah, I also have this comparison table; let's just quickly go through it. I'm not saying it's finished or 100% precise, but let's have a look. So what I tried to do is summarize the other formats, the architectural patterns that are used for messaging and transport between services. So we have REST, RPC, and GraphQL.
So you see here, I'm comparing RPC, not gRPC, so more the pattern. If you compare them by, for example, focus: REST is focused clearly on resources, like the semantics of a resource, while RPC, because it's calling a procedure, of course, focuses more on the program, let's say on an action in the program. So, I don't know, "I want to pet my dog", for example; that's a method that you call remotely. Then GraphQL, I think, is kind of similar to REST in that sense, also focused on the resource, but it's more mixed, I think, and GraphQL is also very flexible. Next, semantics, or protocol-related semantics, let's call it like that. Here I put HTTP, because REST, I think, has a really big and strong connection with the HTTP verbs themselves, like GET, DELETE, PATCH, POST, and PUT, for example. And RPC and GraphQL have more programming semantics: what you have in gRPC, for example, you just say, API, call my method, again, "pet my dog". So it's a little different approach, as you can see. Coupling: how the resources and services are coupled in the final ecosystem. In REST, they're quite loose, right? They don't have many dependencies; if you don't enforce format checking, like if you don't have, for example, a Swagger definition that is consistently tested against the version of the application that you deploy, then basically you don't know what is running there in terms of API semantics, in terms of the version of the API. And in RPC, it's the complete opposite: if your protocol file is not matching the one that you implement, your application will start to fail. And likewise, if your client is trying to use a service with a mismatched protocol format, then yeah, it's a strict check: it will fail, and it will not allow you to do so. For GraphQL, I also put loose, but we didn't discuss it much today. And yeah, the last point is the format, and for REST it's clearly a text format.
You can argue that REST is sometimes called JSON over HTTP, though I think people are mostly joking about that — it can also be XML or any other format. But in RPC, it's clearly binary. We saw it today: it serializes text into binary data that you cannot even read without stringifying it first. Yeah, that's a kind of summary for today. Now, about the practical tasks that you can do yourself. Here, in the practice session, you will find a link to the issues on GitHub. The issues are defined in the public monorepo Node.js gRPC sample — exactly what Andrew was showing us today; we were going through all the steps that are basically defined in this workshop. And we tried to make different interesting tasks for this monorepo. They are not always about programming, but they always have, of course, a connection to what we discussed today. I will start from the bottom, just because I can. There's actually a Node programming task here, and this one says: create a local stack integration with Docker Compose. So if you are really into making a Docker composition and you have fun playing with Docker, then this one is for you: look at the structure of this monorepo, create some Dockerfiles, maybe challenge yourself to reload these Docker images on build when resources change. It can have a flexible solution, of course, but the idea is that you create just a local Docker setup, and you might want to run all the services at once with the `docker-compose up` command, for example. The next one from the bottom — you can also see here a good first issue. So if you consider yourself less experienced, maybe with this technology or with this particular task, then the good first issue mostly covers the structure of this repository and allows you to experiment a bit with dependencies: install dependencies, update dependencies, something like that.
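The text-versus-binary point above can be demonstrated without any gRPC code at all. The sketch below is NOT real protobuf — it's a hand-rolled, length-delimited encoding invented for illustration, just to show why the bytes on the wire are unreadable until you decode them, unlike JSON text.

```typescript
const payload = { from: "USD", to: "EUR", amount: 42 };

// REST-style: human-readable JSON text on the wire.
const asText = JSON.stringify(payload);

// Protobuf-style idea (simplified): pack values into raw bytes.
// Our toy scheme: [len][utf8 from][len][utf8 to][amount as uint32].
function encode(p: { from: string; to: string; amount: number }): Buffer {
  const from = Buffer.from(p.from, "utf8");
  const to = Buffer.from(p.to, "utf8");
  const buf = Buffer.alloc(1 + from.length + 1 + to.length + 4);
  let o = 0;
  buf.writeUInt8(from.length, o); o += 1;
  from.copy(buf, o); o += from.length;
  buf.writeUInt8(to.length, o); o += 1;
  to.copy(buf, o); o += to.length;
  buf.writeUInt32BE(p.amount, o);
  return buf;
}

function decode(buf: Buffer): { from: string; to: string; amount: number } {
  let o = 0;
  const fromLen = buf.readUInt8(o); o += 1;
  const from = buf.toString("utf8", o, o + fromLen); o += fromLen;
  const toLen = buf.readUInt8(o); o += 1;
  const to = buf.toString("utf8", o, o + toLen); o += toLen;
  const amount = buf.readUInt32BE(o);
  return { from, to, amount };
}

const asBinary = encode(payload);
console.log(asText);           // readable text
console.log(asBinary);         // opaque bytes until decoded
console.log(decode(asBinary)); // round-trips back to the object
```

Even this toy encoding is a third the size of the JSON text, which hints at why gRPC picks binary protocol buffers over text for service-to-service traffic.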
And the second one from the bottom is actually about the dependencies. In our showcase today, we use the protoc code generator of version 0.3.9, and actually it's not the latest — I think the latest one we checked was 0.8.1. So feel free to update it, and please make sure the code still works: regenerate the code, start and test the services. This is indeed a good first issue to try out, I think, even though you don't yet know how the whole infrastructure should look. But yeah, I think it's one of the good starting points. The next one — tracing the relation between services — is quite advanced, let's say. I will just quickly describe it, but I really doubt you should pick this one up. In one pull request to this repo, we created a trace service client, which is an OpenTelemetry client for gRPC. You can think of it like a logger — some common utility that is used across other services. The idea is that we want to bind it to some tracing system; we use Zipkin in this code. We added some functionality to the tracer, and it starts, but it doesn't show the connection. The idea is that you have a call from one service to another, and in the Zipkin interface you want to actually see that the first service called the second service and that the request took, I don't know, half a second to run. That part is not working — we expect there is some small bug there. So if you want to dig into that, are familiar with the OpenTelemetry technology, and are quite good with JavaScript, this is a good issue for you to take. Otherwise, I would not suggest picking this one. The next one — maybe, Andrew, you can explain a little bit — create a common library logger. Yeah, create a common library logger. Basically, this is pretty much the same as what I was showing you, but I was not sharing this code anywhere. So this is a really good first issue to start with, to try this Lerna setup, to try this monorepo.
It also has some step-by-step guidance on how to get there, so if you want to try it from the beginning, it's a good place to start. Yeah, and the next one is similar to what I was showing. It's about implementing the CryptoCompare provider, which is a new provider you need to implement. Basically, the final goal is to implement this API — the API should return some rates — and then, as a final step, you can connect this provider into the currency converter and basically be able to convert cryptocurrencies. That's the goal. I would say it's more advanced: you need to implement it from nothing and then move on, and also get the different services talking to each other. Yeah, nice. There are actually tests showing how it should work in the current repository, so you can also have a look at how the tests are run for these gRPC services. And if you're experimenting with it, first make sure that the tests are passing, then modify the code, make them fail, and use TDD there. That would be great. Yeah, and the last issue in the list: inside the converter's convert method, as I've shown, we're only taking one provider's result and then making all the calculations. The idea here is that we started to think: okay, we're going to have multiple providers, and now we need to amend this conversion somehow so it works correctly based on different providers. There's an example of different responses in the issue. Basically, it could be achieved even with just tests: you can go to the tests of this CryptoCompare currency converter, add more providers in the response, then go to the code and implement your logic. And this is actually very beneficial — I think you can test and cover every single service on its own, just emulating all the environment around it, which gives you more options, so you don't need to run all the infrastructure on your laptop. Yeah. Great. Yeah.
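The multi-provider conversion task above is open-ended, but one minimal strategy can be sketched like this. All names here (`ProviderRates`, `mergeRates`, the averaging strategy itself) are invented for illustration — the actual issue may call for a different merging rule, such as picking the freshest or the median rate.

```typescript
interface ProviderRates {
  provider: string;
  rates: Record<string, number>; // e.g. { "BTC_USD": 43000 }
}

// One simple strategy: average each currency pair's rate across all
// providers that report it, so a single outlier provider has less impact.
function mergeRates(responses: ProviderRates[]): Record<string, number> {
  const sums: Record<string, { total: number; count: number }> = {};
  for (const r of responses) {
    for (const [pair, rate] of Object.entries(r.rates)) {
      const s = (sums[pair] ??= { total: 0, count: 0 });
      s.total += rate;
      s.count += 1;
    }
  }
  const merged: Record<string, number> = {};
  for (const [pair, s] of Object.entries(sums)) merged[pair] = s.total / s.count;
  return merged;
}

const merged = mergeRates([
  { provider: "cryptocompare", rates: { BTC_USD: 43000, ETH_USD: 3000 } },
  { provider: "other", rates: { BTC_USD: 43100 } },
]);
console.log(merged); // { BTC_USD: 43050, ETH_USD: 3000 }
```

A pure function like this is also exactly the kind of logic the speakers suggest driving with tests alone: you can feed it fake provider responses without running any of the gRPC infrastructure.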
I hope that gives you some variety in what you want to play with. Of course, you don't have to do it now, but if you have a little bit of time, why not? Yeah. Speaking about time, I think it's perfect timing — it's almost three hours. To finalize this part, also, of course, feel free to check out the extra resources. There are some YouTube links we really enjoyed — some crash courses, really great for exploring the technology deeper.