How to Convert Cryptocurrencies with Microservices in Node.js and gRPC


The workshop overviews key architecture principles, design patterns, and technologies used to build microservices in the Node.js stack. It covers the theory of the gRPC framework and the protocol buffers mechanism, as well as techniques and specifics of building isolated services using the monorepo approach with Lerna, Yarn workspaces, and TypeScript. The workshop includes a live practical assignment to create a currency converter application that follows microservices paradigms. The "Microservices in Node.js with gRPC" workshop best fits developers who want to learn and practice the gRPC microservices pattern on the Node.js platform.

162 min
19 Feb, 2022

AI-Generated Video Summary

Today's workshop is divided into three parts: an introduction to gRPC and Node.js, practical examples of implementing a currency converter, and a hands-on practice session. The workshop covers the history and key features of gRPC, the concept of Remote Procedure Calls (RPC), gRPC communication and service definitions, gRPC extension points and protocol buffers, and the use of protocol buffers in TypeScript. It also includes demos on protocol buffer transformation, using gRPC.js and Proto Loader, starting the gRPC server, implementing the server and client, and using gRPC in JavaScript. The workshop concludes with discussions on using TypeScript and protoc-gen-ts, preparing the environment and monorepo structure, creating a currency converter with providers, and the benefits of gRPC compared to REST, RPC, and GraphQL.

1. Introduction to gRPC and Node.js

Short description:

Today's workshop is divided into three parts. The first part is an introduction to gRPC, Node.js, and protocol buffers. The second part focuses on a practical example of implementing a currency converter using gRPC and protocol buffers. The final part involves a practice session where participants can experiment with a prepared repository and work through exercises to create a service. The workshop starts with an introduction of the presenters, Andrew and Alex, who share their backgrounds and motivations for conducting the workshop. Andrew's inspiration comes from his work on microservices, while Alex's interest lies in exploring new technologies. The workshop aims to provide participants with knowledge and hands-on experience in gRPC, Node.js, and TypeScript. Participants are asked to rate their knowledge level on a scale of 1 to 10 before the workshop begins.

Let's begin. So today, we are very pleased to welcome you to this workshop. It's mainly about gRPC and about a Node.js solution built on gRPC. I hope you've read the description, so I will not spend more time on that particular part. Yeah, maybe the general structure would make sense.

So let's start; let me show you the agenda for today. Our workshop consists of three parts, and when we prepared it, we thought it would be great to give our participants some theory about the gRPC framework: what this technology is under the hood and what parts it consists of, which includes, for example, protocol buffers. That's what the first part is going to be about.

So the first part is going to be an introduction: some theory and some demos we prepared, showing really basic stuff about gRPC, Node.js, and protocol buffers. The second part is the more practical part. Andrew, can you please say a few words about that?

So in the practical part we take a deeper dive into a practical example. Today we are going to implement a currency converter, and we will try to combine different providers into one big thing. This big thing will be built from different microservices, and all these microservices will communicate with each other over the gRPC protocol using protocol buffers. So it's going to be fun. Will it be? Yes, of course it will be fun. And the final part of the workshop we call practice. It's quite experimental for us, but I think we can do it that way this time. For this particular workshop, we prepared a list of issues in GitHub. We're going to show you what these issues are, and the point of this practice session is that you can experiment with a repository that we prepared. If you really want to touch it technically, you can clone it and work through some exercises. They are on different levels, let's say beginner and more advanced, and you will be able to practice some interesting parts, to create a service, for example. But let's not discuss it right now; we will get to that later. I think we will not record that part, by the way, so I hope that will be possible. But that is, in general, our agenda today.

And I want to start with the introduction and say a few words about ourselves. Maybe it makes sense to start with Andrew. Andrew, please.

Hi everybody, I'm a software and infrastructure engineer at a startup called EMA. Currently I'm based in the United Kingdom, and I'm really enjoying what I'm doing. I jumped into this whole microservices theme and also into DevOps topics and started to explore the area. That's basically the moment when Alex and I decided to prepare this workshop, to try to combine all this knowledge into one piece of material and present it. So I'm really happy to be here with you guys. Let's move on. Alex, your turn.

Thanks for that. That's me; I'm also a software engineer. I'm located close to Andrew, but in another country, the Netherlands. I'm working at a European bank called ING Bank. I did different stuff in my past. I actually started from the front-end stack, and as front-end development moved quite quickly over the last decade, we saw different technologies growing, like React and Angular. It was a really interesting journey. However, in the summer of 2016 I decided to shift my focus a little towards DevOps and also the back-end stack. So I spent some enjoyable time in the Node.js world. We did quite an interesting project for the bank to make one pipeline for all the front-enders. That included Node.js applications, for example a CLI application that was available to all developers in our bank. And in my latest project, the one I'm currently participating in, we are doing an interesting piece inspired by one of the systems in the Kubernetes world. That involves a more traditional stack like Java and the JVM, and we also support a nice client in the Go language. To be honest, I've been enjoying that technology a bit lately. Still, I think my strength is Node.js, and that's actually the common ground Andrew and I found, which is why we decided to make a workshop using this technology in particular. So before we move on, Andrew, one more question for you. Why did you decide to make this workshop? Was it because of your current work, or was it completely unrelated?

Yeah, actually all of this is related to my current work. At work we started to develop a microservice infrastructure based on pretty much the same technologies, and we started building a stock trading API which allows trading different stocks in the United Kingdom. This involves a lot of different services and a lot of communication between them. Here we just show very simple communication between services, direct communication, where one service just calls another. In real-world cases there could be a queue in the middle, so the services communicate in a fully asynchronous way, but that would probably be another workshop. Yeah, thanks for that. My inspiration for this workshop was, first of all, working with Andrew on it; it was quite a nice commitment from both of us. Besides that, I am really interested in learning new technologies, and since we decided to explore the Node.js part of the gRPC stack, it was particularly useful for me, because I hadn't touched gRPC before. Getting to know this stack a little, and touching it in Node.js, was quite interesting to me. So that's our introduction in short, and before we really start, I'd like a little bit of interaction with you guys. Please put a number from 1 to 10 in the chat: how do you rate your knowledge of today's topics, gRPC plus Node.js plus TypeScript, since we're going to use these technologies today? Put 1, or 0, if you've never heard of them, and 10 if you are a super extra expert. How would you rate yourself? That would be awesome. Cool. Thanks, Jakob.

2. Introduction to gRPC

Short description:

Today's workshop begins with an introduction to gRPC and its history. gRPC is a modern open-source remote procedure call framework that enables communication between client and server applications. It started as an official open-source project in March 2015 and can run anywhere. The framework is based on the HTTP/2 protocol and includes the use of protocol buffers. Originally developed by Google, gRPC has become a widely supported technology in the industry. By sharing it with the community, Google gained valuable feedback and made it more stable. The workshop will cover microservices and gRPC in the Node.js and TypeScript environment, as well as the use of a monorepo solution based on Lerna. Let's dive into the theory and explore the definition and key features of gRPC.

That's actually quite nice. We're going to start from the basics today, so I think that's very reasonable. Thanks. I see some nice feedback on that; I think we are in good company today. If you have any questions during the workshop, feel free to ask them, feel free to put them in the chat, whatever you prefer. We'll find them. Traditionally it's more of a chat thing, and we look into the chat from time to time, but feel free to use any channel you prefer.

Some prerequisites; I don't know if you've already read them. For the last, practical part, you will probably need to have Node.js and the protocol buffer compiler. You can find the links on the page that we prepared; we'll post it now in the chat. Thanks for the feedback again. As for an IDE, feel free to use any solution. VS Code is, I think, our choice today, but you can use WebStorm or whatever fits you best. We are almost ready to begin; let's just align once on the technologies we're going to talk about today. We made a list of the tech here. We're going to talk about microservices and gRPC based on Node.js and JavaScript, but also the TypeScript environment. And gRPC includes Protobuf as a core part of the framework. For the practical example today we're also going to use a monorepo solution, based on Lerna. We actually use it with Yarn, but if you're interested and curious to use it with npm, that's of course also possible. Yeah, that's the tech for today. Let's see where we actually start. We talked about technologies, and we've pretty much said what we're going to do today. So what is gRPC? This is where we begin the first part. Well, I hope you've heard this name before. gRPC stands for gRPC Remote Procedure Calls. The fun part is that they have a really nice mascot; I think it's called PanCakes, if I'm not mistaken. I might be mistaken about that. So, a very nice branding side to them. I like to begin the introduction to the theory with the official definition, so let's just spend a few seconds on what gRPC is. I hope you were able to read it. For me this definition is quite expressive by itself, but we need to dive into some parts of it, so let me analyze some of the words here. "A modern open source remote procedure call framework": remote procedure calls are something we're going to start with in a few minutes. Besides that, it's a modern solution, which reminds us that this is still quite a new framework. It began, in fact, in March 2015 as an official open source project, and it can run anywhere, like the definition says. It enables client and server applications. I underlined the words client and server, because it's all about communication between applications, or services if we're talking about microservices, and client and server are very important words that we're going to use today. And to go a little further, client and server are sometimes a little confusing, because both the client and the server are just microservices, so we can think of them that way. So what's the history behind gRPC? Like we said, it's a remote procedure call framework, and about the g, the first letter in gRPC, there's sometimes a joke that it stands for Google, but officially it doesn't; it's just gRPC Remote Procedure Calls. But indeed it all started at Google, and this date, March 2015, is when they officially released it as open source to developers on GitHub. Now it's heavily supported by the community. But it all started inside Google, like many other technologies coming from Google; to name a few, I already mentioned Kubernetes today, but also the Go language, for example. There are many very nice technologies that are nowadays used a lot by communities and supported by Google and by other big companies.
But to name a couple more related to today's workshop: for example, SPDY and later QUIC, two transfer protocols built above the transport layer of the TCP/IP stack. They were both revolutionary in a way: SPDY was introduced quite a while ago, but it was the beginning of the HTTP/2 protocol, and QUIC is now going to be something like that for HTTP/3. Why is this important for us today? Partly to show that Google works on interesting projects that become standards for the whole industry, but also because these are transfer protocols for messages across the network. The protocol used in gRPC nowadays sits on top of HTTP/2, and originally it was SPDY that was included in the gRPC framework. We can expect that when HTTP/3 is stable enough, and QUIC as the transport for HTTP/3 is officially supported by many platforms, gRPC will also switch to that modern technology. And maybe there is one question here that you might ask yourself; I ask it myself. Why would a company like Google move a project that was developed and started inside the company out to the whole world? There are many good reasons for doing that, but to name one, it's actually quite simple. If you're working on some internal project, no one knows it, so how are you going to hire new developers for it? They will spend a tremendous amount of time learning it. Compare that to the alternative: you introduce something that your company has successfully used for, I don't know, maybe a decade; it becomes popular in the outside world, people start learning it, that knowledge can simply be picked up from others, and it becomes very easy to find developers for this technology for your company. Maybe the second good reason, which I think is also very important, is that by giving it to the community you get much more feedback on the decisions you made, on the technologies you used, on any solutions that went into the technology. They can of course be improved, because in our world everything in technology can be improved. So it becomes really transparent to all the engineers using it, and eventually it also becomes very, very stable. I think there are many pros to this concept of bringing something out into the world and then making it a standard for the community, and that's what happened with gRPC. So Google originally started with a project called Stubby. They started thinking about microservices, because you can imagine Google is a huge company with lots of teams, who knows how many hundreds or maybe thousands of teams, and you don't know the purpose of each team, right? You cannot make every team in your company use, for example, one programming language, because programming languages are good for the goals of the products they are used for. Therefore you have some projects working on the Node.js stack, others working on Golang, others, maybe most, working on the Java stack. And to combine them all together, this microservices pattern became popular.

3. Introduction to gRPC and Remote Procedure Calls

Short description:

Microservices have become the preferred approach in the industry, leading to the adoption of frameworks like gRPC. The gRPC framework, now part of the Cloud Native Computing Foundation, provides a reliable solution for building microservices. To begin, it's important to understand the concept of Remote Procedure Calls (RPC). RPC allows one application to call a procedure on a remote server and receive a response. Under the hood, client libraries handle the serialization and deserialization of messages, as well as the connection to the server. Communication between client and server can be done using various protocols, such as HTTP, HTTP/2, HTTP/3, or WebSockets. Implementing these protocols requires the use of client libraries on both the client and server sides. It's essential to ensure compatibility between the chosen protocols and the client libraries used in the application.

So you don't build one monolith project anymore; you build smaller parts of the application that can communicate with each other more simply. To make it really stable you still need to build infrastructure on top of that, because microservices bring different problems that didn't exist in a monolith, like communication: where do you need to send a call, and how do you make sure the receiver actually does what was intended? So yeah, there are many, many challenges. I think the industry made the choice in the past decade to go further into the microservices world, and that's why the gRPC framework was one of the things eagerly adopted by the community; it's a really nice framework for everyone writing their own microservices. It has now become part of the Cloud Native Computing Foundation, and maybe you can even get certified for it; I haven't looked into that yet. Before we move on to more technical terms, let's have a look at the official site that we're going to use a lot today, grpc.io. This official site describes the whole concept and has some very nice articles, actually from the beginning in 2015, about how it all started and what the concepts were. This page, for example, shows the motivation and the design principles in more detail than we're discussing today, so you can dive into that. Besides this page, you can also look into the documentation for how to get started on each platform. We already see that it supports a lot of platforms, and it has really nice guides on how to start. Let's have a look; where are those guides, for example? Yeah, here: platforms, languages. For Node.js it has really nice tutorials; if you open this one, it will show you how to start quickly with an application. It might be a little outdated, but it's still quite relevant; you will make it work if you go through the whole guide. And most importantly, you see there are already many choices of languages you may want to use. So that's another important resource that I encourage you to put somewhere in your notes. We're going to use it today, and it's also the first stop for other information on gRPC. So let's begin with the easy part, the easy theory. What is RPC? It stands for Remote Procedure Call, and if you look at Wikipedia, it will say that it's a distributed computing technique. The idea is that one application can make a call to some remote server, and that remote server understands which procedure it needs to execute. It executes this procedure, or function, whatever you call it, then sends a response back to the client, and the client continues to run. As easy as that, but under the hood there are more problems to solve. One part that is missing in this explanation is that the client program, the first actor here, needs to do this in a very simple programmatic way: literally, in your code, you just say, hey, remote server, do procedure one, and it should respond back to you. In different use cases you may want a synchronous or an asynchronous response, but in your code it should be very, very simple. That's the ideal of RPC. So what does it do under the hood for you? Besides these simple steps, there needs to be some logic that allows you to do that: you need a client library, sometimes called a client stub, that will kind of mock this call to the remote server.
And this mocked call happens inside what's called the client stub. It needs to marshal, or serialize, the message that you want to send to the remote server, and it also makes the connection to the remote server. On the remote server side you need to do the opposite operations, right? You need to deserialize the message, check, if you want, what the message is and whether it's an allowed message, so unmarshal the message, and after that the server can just execute the function that needs to be executed. If you go further with this investigation, you will find more and more interesting questions, like: how does a client call a remote service? Is it done over HTTP? Maybe, maybe yes. How do you expose a remote service, like on which address is the API running, and how can I connect to it? How is it serialized for the network? Is it something like JSON.stringify, if we're talking about the JavaScript world, or something else? How does the communication happen, again regarding protocols? And authentication, the last question here. We won't cover it much today, but it's one of the important ones, because you need to understand: is the caller the correct one, is the callee the right one, and can I trust this caller and give back a response, or should I deny the request? Speaking about the second part, client and server communication: again, remember that the client and server are just microservices in our world of microservices. They can communicate with different protocols, and for the web, because the web is based on open standards, you can use plenty of them. You can use HTTP, HTTP/2, and soon HTTP/3, which is a nice new protocol on top of UDP rather than TCP. But you can also choose something like the WebSockets way, which runs over TCP. So there are plenty of choices here. And in effect, to make these calls, no matter which protocol you choose at that level, you will still need a client library, or just a library, to not confuse ourselves even more. We need a library on both the client application and the remote server that is able to understand this protocol, right? Because in the client application, after you serialize your message, you need to send it. In a web browser that's quite easy: you do fetch, because it's already included in the browser. Previously we had the XHR object for that, but now we have the fetch API, which is really nice. But on your server it's a little more complicated, because you need to maintain it, right? For example, with Node.js we still stick to some Node.js version, and it might be that our outdated server is using, say, Node version 4. Does anyone know the most recent version of Node.js? Maybe you can guess in the chat. Thanks, Jakub, indeed. Yeah, that's right. In fact, it's version 17.6, if I'm not mistaken; I was following the latest two minor updates on the 17 line, the experimental line, as you know. A little side note, but they include two nice changes: 17.5 includes fetch, so you have browser-like behavior for making requests in Node.js, which is very, very nice.
And I think the most recent one has experimental support for importing from remote modules, which is also very cool, right? But that's really a side topic here. So, we said that at the low level you can choose any of those network protocols, and on both sides of the application you need to support them: you need to update your environment and be sure that your application uses this layer, and that the layer is supported by both applications. It's some lifecycle work, let's say, and it takes your resources.

4. gRPC Communication and Service Definitions

Short description:

gRPC supports client and server communication and uses service definitions to define the API and enforce strong typing for messages, checking message types through protocol buffers. It allows users to not worry about the underlying protocol. Service definitions play a crucial role in defining the API structure.

And that's why people started thinking about other protocols. Among the alternatives we can name SOAP, which was quite popular in the Java world in the 1990s and 2000s, and REST, which appeared in the 2000s as well, over HTTP. You can actually argue whether REST is a remote procedure call or not; I think most authors would say it is not really RPC, but speaking more philosophically, even REST is, I think, a remote procedure call in the end. And GraphQL, a more recent technology, appeared, I think, in 2015.

The point is, no matter which protocol you use, low level or high level, you still need a library that supports it. And that's why gRPC takes this into its feature set and allows users of gRPC to not think about the protocol. So what gRPC supports, what it enables for a user, is this client and server communication. But let me actually start from the first point here, service definitions. These are the so-called protocol buffers; we're going to explore them in a few minutes. With these service definitions you define what your API will look like, and they also enforce that your API and the messages you send in requests are strongly typed. So it already enables checking of the message types that you send, and it does that through protocol buffers. The client and server communication is the part I was talking about previously, and this is what gRPC gives you out of the box.

5. gRPC Extension Points and Protocol Buffers

Short description:

gRPC supports extension points for future development, including the potential use of HTTP/3. It allows for message ordering, supports streams, and offers flexibility in communication methods. Protocol buffers play a crucial role in gRPC, providing efficient data serialization. They define data types and enforce strong typing. Protocol buffers have a long history and are widely used in various technologies. They allow for the description of data formats and protocols within an application, and code generation is done using the Protocol Buffer Compiler.

A quick note on versioning: the official website says that version 1.43 is now available, but in some cases, for example in Node.js, the version is a little different. So you have to check for yourself and not trust that number, because it depends on the environment you are using.

The next point I wanted to mention is extension points. We said at the beginning that your gRPC communication runs on top of HTTP/2, but at some point, when HTTP/3 is stable enough, I'm pretty sure the whole library will move to HTTP/3, maybe with a version two or something like that. And this is possible thanks to the extension points. Extension points are basically pieces of code in the implementation source code that allow you to add hooks, or maybe to generate code, because it's mainly about generating code, that provide your own implementation of gRPC framework features.

For example, the transport protocol that is used; maybe even the buffers can be replaced. I'm not entirely sure, but I think it would be possible. There are many other points: if we're still talking about the transport protocol and communication, we could enforce, for example, message ordering. That's already enforced by the core version, but we could do something extra, for example switch to WebSockets and do something similar. Then there's the communication itself: the generated stub library allows you to communicate from client to server, and as we said, it's done over HTTP/2. It also does serialization for you out of the box, in a similar way to how the service is defined with the protocol buffers format. Message ordering I already mentioned. Streams are quite natural to Node.js, and the Node.js implementation of gRPC supports streams out of the box, which is very nice. I don't know how familiar you are with streaming, but in short, it's the transport of small chunks of data between applications. It allows you to not take a huge amount of resources from the operating system where your application is running: it sends one piece of data and forgets about it, sends the next piece and forgets about that one, and in this way your application stays really small, doesn't consume resources, and just keeps running. It's also done very quickly, and you avoid the kind of trouble you'd have with one whole HTTP request that, for example, fails somewhere and takes a long time before the incoming packets finish the resource. Synchronous versus asynchronous depends again on the technology you use, but in short, it is up to the user of gRPC to choose how to communicate with other microservices. Either you make a synchronous call, wait until the response is there, and then continue running the application; or, and in Node.js the first is not the most natural way, in most cases you would send the request and, let's say, subscribe to the event for when the response arrives. And again, authentication, which I already mentioned: we won't spend too much time on it, but I think we will see some parts of it in the demo. In short, gRPC out of the box allows certificates to be involved in authentication, and token-based authentication is also enabled out of the box.

Okay, that was a lot of talking; let's finally have a look at an example. What we see here is a service definition in the protocol buffers format. It's a small file, and that's where the development of every gRPC microservice begins: you define a protocol for the service you are writing, as the service design that is going to be part of your overall application. Here we define the package hello and a service HelloService, and it has four methods. There's the plain hello method, a direct message-to-message call, and the other three show what stream definitions look like. Let's have a look at the last one: it supports streaming both ways, from server to client and from client to server, all in one. We just say that it takes a stream of HelloRequest messages, and the service will also return a stream of HelloResponse messages. So that's a service definition, but you can see there are some data types here too, and in protocol buffers the type definitions are, I think, even more important than the services themselves. In protocol buffers they're really simple: you just say message, and that's your data type. It's a complex type, as we can see; it has one property, greeting, inside, and the response has another property, reply. Yeah, that's the gRPC part. Let's summarize it one more time. gRPC is a framework, and it's mostly about generated code; this generated code is called the generated stub, or just stub code. The stub code is used on the client microservice and the server microservice, and it does quite a few things for you: it serializes the message; it chooses and specifies internally which network protocol to use to communicate with the remote service; it sends the request; it verifies that the request data is correct and the response is correct; it deserializes, checks, and again validates that the message types are correct. And then your server implementation can do the rest.
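A sketch of a definition along those lines; it mirrors the canonical gRPC hello example, so the method names here are illustrative rather than the exact ones on screen:

```proto
syntax = "proto3";

package hello;

message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}

service HelloService {
  // Direct message-to-message (unary) call.
  rpc SayHello (HelloRequest) returns (HelloResponse);
  // Server-side streaming: one request, a stream of responses.
  rpc LotsOfReplies (HelloRequest) returns (stream HelloResponse);
  // Client-side streaming: a stream of requests, one response.
  rpc LotsOfGreetings (stream HelloRequest) returns (HelloResponse);
  // Bidirectional: both sides stream, all in one.
  rpc BidiHello (stream HelloRequest) returns (stream HelloResponse);
}
```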

Okay, hopefully that makes sense. Now let's talk about the second part; I think it's going to be a little simpler than the first one. I wanted to say a few words about protocol buffers. We've already talked about gRPC itself, and protocol buffers might be the most important part of it. Protocol buffers were originally a technology for serializing data, and like in the previous example, you just specify a message type and you have a new data type. Here we have a Person data type, and we specify its properties, such as name and id. Does anyone know what the numbers on the right mean here? A weight for the property? Magic numbers? Yeah, exactly. That's something called a tag, and we can think of it as an ID. There is some ordering to it too. It should be unique, this number, and you could also say that tags somehow define an order of the fields in the message, but that's not official; it actually depends on the implementation, and as an abstraction it doesn't say anything about ordering. However, if you look at the raw data of a protocol buffer, you will see the fields are indeed in this order. So we can take it as an ID, and this ID must be unique. And that is very important when you evolve your message data type.
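As a minimal sketch of such a message (any field beyond name and id is an assumption):

```proto
syntax = "proto3";

// Each field follows the pattern: [rule] type name = tag;
message Person {
  string name = 1;            // tag 1: a scalar string field
  int32 id = 2;               // tag 2: tags must be unique per message
  repeated string emails = 3; // "repeated" rule: an array of strings
}
```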

All right, a little bit of history on protocol buffers. They were also created at Google, and they're a much more mature project than gRPC itself. It started in 2008 as open source, having also begun inside Google this time, I believe, though I'm not sure, and it's heavily used by the community in absolutely different technologies and stacks. You can name Kafka from the Java world, for example: it can use protocol buffers as a serialization mechanism. We said that protocol buffers are about data formats and an efficient technology for serializing data, so it's mainly about describing your data formats, or protocols, inside the application. What do you need for that? You need a proto file, and in this proto file you specify the data types your application needs. Those data types are strongly typed, so protocol buffers can enforce checking of the data types in your application before dealing with the data. Like gRPC, it also does code generation, via the tool called the Protocol Buffer Compiler, and I hope that if you want to touch some of the code today, you have already installed it.

6. Introduction to Protocol Buffers

Short description:

gRPC has multiple implementations and is officially supported for various technologies. TypeScript has two implementations, and we will use protoc-gen-ts. Protocol buffers allow the definition of data types and services. They support scalar types, packages, and services. Defining microservices using protocol buffers is simple and involves specifying data types, fields, and properties. There are plugins available to generate code for different programming languages. Versioning is not supported, but different versions can be created by introducing new packages. Overall, protocol buffers are not difficult to work with.

If not, have a look at the beginning of the page; there's a link for that. Like gRPC, it has really nice tutorials for different technologies and how to use them; the officially supported technologies and environments are pretty similar. But because it started a bit earlier, it has quite a few implementations beyond the official stack. For example, today we're going to use TypeScript, and we can see that for TypeScript there are two implementations just in this list; I think today we're going to use protoc-gen-ts. It's quite a good package to use in your application. So, protocol buffers allow you to define data types, and they also allow you to specify service definitions. That's the part we've already seen, and that's where gRPC actually comes into play; originally, I think, that wasn't the intention of protocol buffers, it was added incrementally, and now it can be used nicely there. Now, a little more about the syntax. This is the general definition of a property inside your message type: you can specify a rule, the type of the field, a name for it, and the tag. The tag we already discussed; it's kind of a unique ID for this property. The name is just a name with almost no restrictions, except maybe some special characters. The type is the scalar type of the property; protocol buffers support quite a few types out of the box, around 20 to 30. As you can imagine, there's a string, there are ints and floats of different ranges, up to 64-bit, in signed and unsigned variants. As for rules, there isn't that much: there's the repeated rule, and repeated is basically a synonym for array, so "repeated int32" is an array of int numbers. That's quite simple. And if you don't agree, let's look at a few more things I've put here. We have messages, and messages support scalar types. We also have support for enums; I don't know if we have an example today, maybe Andrew will show something. Repeated, as we said, comes out of the box. We don't have a void type, so we cannot say undefined, for example, unfortunately; there is a reference to a kind of null or void type in the standard library, which we can import and then reuse. But I think in most cases you can just declare an empty message, say "message Void" with a capital V, and by not specifying anything inside, that will act as an undefined value in JavaScript. It also allows you to specify packages, and packages are quite an important pattern in programming overall; interestingly, we didn't have them in JavaScript until 2015. And services, again, weren't originally in protobuf itself, but to enable gRPC you can specify pieces of your API using the service keyword, and then with the rpc keyword you specify a method. In the end, if you look at this definition, most of your microservices won't be much more complicated than this piece of code. It's really simple: you specify data types with the message keyword; for a complex type you put a name on it; you specify fields, give them unique tags, and specify the properties you need.
And if you want to define an API in terms of gRPC, you specify a service using the two extra keywords I use here, service and rpc, with the types you defined, maybe enums too, to make it really nice for the application. And that's not much more than it. To finish this part about protobufs: there is an extreme number of plugins. The list is made available by the community, and these can be treated as plugins for the protocol buffer compiler, so you can generate code for almost every programming language you want; well, almost, depending on the limitations of those languages. For each of them you sometimes have nice flags you can pass to the compiler. For example, for Java you would specify a flag saying you want to generate multiple files, a file per class, because that's a very common way in Java. Some extra options you have in the protocol buffer system: in message types you can use the required and optional keywords, though in the latest version they are mostly treated as optional all the time. Unfortunately, versioning is not supported, which means that if you need to change a protocol between versions of your application, you will need to be really, really careful. The main rule is that you don't change the tags of existing fields, because the tags are the most important part of a message definition. And I've seen this approach, I think it's one of the main approaches to versioning in the gRPC and protocol buffers world: you basically create a different version of your packages by introducing a new package, something like myPackage.v1. That's how you switch from one version to another. There is a little more than that, but it's really for specific use cases; I'm talking about maps and oneof, among others, but in most cases you will not need them. And that pretty much covers protocol buffers; it's actually not a very difficult part. So, any questions at this point? I was talking a lot; sorry for that. Right, all good. That means that everything is clear, right, in theory? No, I'm kidding. But seriously, if you have anything you want to ask, feel free to.
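A sketch of that versioning-by-package approach; the package and field names here are hypothetical:

```proto
// my_package_v1.proto -- the original protocol. Existing field tags
// must never be changed or reused once clients depend on them.
syntax = "proto3";
package mypackage.v1;

message Price {
  string date = 1;
  float price = 2;
}

// my_package_v2.proto (a separate file) -- breaking changes go into
// a new package instead of mutating v1:
//
//   package mypackage.v2;
//
//   message Price {
//     string date = 1;
//     double price = 2;  // e.g. widened precision in the new version
//   }
```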

7. Demo: Protocol Buffer and Code Generation

Short description:

In this part, the speaker demonstrates a demo showing the use of protocol buffers to serialize and store JSON data. They explain how to use the protocol buffer compiler to generate code from the data type. They also mention google-protobuf, a library that enables the protobuf runtime inside JavaScript. The speaker shows an example using Bitcoin API data and discusses the use of a VS Code protocol buffer extension. They demonstrate the process of downloading prices and creating a proto file for the data type. The speaker explains the definition of the data types and the use of the protoc executable to generate the protobuf runtime. The generated code uses the Google protobuf runtime and provides strongly typed data types with setter and getter methods. An example method is shown for getting the array of prices.

Of course, we're very open to that. So at this point I will show a bit of a demo, and my demo will finally let us see some code and some gRPC stuff. Let me start with a demo that shows the protocol buffers themselves. Oops, did I share? Yeah, I think I showed this VS Code right here; let me share the whole screen, it'll be a little easier that way. In this first demo I want to show how to use protocol buffers to serialize and store JSON. I'm going to use the protocol buffer compiler, and to generate code from my data type I use this kind of command. You can see I already specify some arguments: I specify that I want output in JavaScript format, and I specify CommonJS; by the way, protoc doesn't support ECMAScript modules. You can find a flag for ECMAScript 6 in the original source of the library, but please don't use it; it isn't supported. Then you specify a proto file, and this command will generate the compiled protocol buffer code for us. We're also going to use google-protobuf; this is a library, also maintained by Google, that provides the protobuf runtime inside JavaScript. As example data I will use this Bitcoin API data, so we should see something like the JSON on the right. And the last part is a VS Code protocol buffer extension: to work with proto files it's useful to have an extension, and you can find many extensions for VS Code, and I'm pretty sure they're also available for other IDEs. In this package.json I have already prepared scripts for all this. The first step is to download prices; this has nothing to do with gRPC yet, but just to show you how easy it is, I'm using a plain HTTP get here. It's still the old way of getting data from an external endpoint; maybe in future workshops we should change it to Node version 17.6 and just use fetch, that would be nice. Then I just write it to prices.json. So let's see what we generate. This doesn't have anything related to gRPC yet, but let's execute it anyway: npm run, this guy. So it created the file prices.json, and it looks like this, almost like we've seen in the browser; I'll format it. This is our data, and now our next step is going to be to introduce a proto file for this type of data. So let's do that; I have it already created for us, of course. The nice thing is that if we compare this to the JSON itself, it's really straightforward how you have to define your data. I just created a message Price, so every item inside the array is of type Price. It has a date; you see it's of type string here. It's a formatted date, but still a string; I don't think protobufs support dates, they just support strings, so it stays a string here. The next one is interesting: I have price, and I made a mistake when I was preparing this. I made it an integer, and in my further code I was going through this array and asking: can I convert it to this type? And it threw me an error, but on the second item. So I looked at the second item and saw: oh, it's not an integer, it should actually be a float. So I made it a float. The same goes for open, high, and the change percent from last month: also floats.
And the last one is volume, which is also a string. So that's how we defined it. And we defined one more message type, Prices, which is very easy: we just say it's an array of Price items. That's how you define the two data types here. And what's our next step? Basically, to generate the protobuf runtime code. For that we need the protoc executable installed, which I explained earlier. So let's execute this command: npm run generate protobuf runtime. What it created, and I'm not sure whether you saw it or not, is this prices_pb.js file (pb for protobuf). This is entirely generated code. Let's delete it and generate it again, just to make sure. All right, looks like it's still the same, yeah? So it created it. I don't think we should go through all of this generated code, but in the end you can see that it uses google-protobuf, the runtime, to operate on the protobuf format, and beyond that it does some internal magic. What's important for us is that we have the data types Prices and Price, and every data type has setters and getters and is strictly typed. That means that when you create a new instance of a data type, you need to pass correct parameters; only then will you have a correct result. For example, getPricesList will give you the array of prices.
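A sketch of what that proto file plausibly looks like, with the generation command described earlier; the exact field names are assumptions from the demo:

```proto
// prices.proto -- a sketch; field names are assumptions from the demo.
// Generated for Node.js (CommonJS) with a command like:
//   protoc --js_out=import_style=commonjs,binary:. prices.proto
syntax = "proto3";

message Price {
  string date = 1;   // protobuf has no date type, so dates stay strings
  float price = 2;   // float, not int32: some values are fractional
  float open = 3;
  float high = 4;
  float change_percent = 5;
  string volume = 6;
}

message Prices {
  repeated Price prices = 1;  // "repeated" = an array of Price items
}
```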

8. Protocol Buffer Transformation

Short description:

This part demonstrates the transformation of a JSON file into a prices file using protocol buffers. It shows how the data is validated and serialized, resulting in a binary prices file that is significantly smaller in size compared to the original JSON file.

This will allow you to add an item to the list; we'll actually see some of this usage in the next step. But yeah, just to show you: setHigh, getHigh, and then it actually calls the internal library to put this field into the protocol buffer data type. So what's my next step? Next, I want to actually run the protocol transformation. Actually, I see an error here: it says index.js again, but I think that wasn't the idea; I think I wanted it to use protocol.js. So my protocol.js file is the next step. I want to transform the JSON file, and instead of it I want to get the binary prices file. Let's first delete the result, because we already have it here. What am I doing here? I'm still using Node.js. I use the file system module, and now I require this prices_pb that we just generated. I require the prices.json data file, then I create an instance of Prices, I go through my JSON array, and I add each price to the array. What happens here, and it maybe looks not that fancy, is that at this point it validates our message against the strict data type that we defined. And, as I said, I made a mistake, and because the second item held a float rather than an integer, it threw an error here on the second item, so I had to fix it. After I did that, after I created this instance of Prices, I want to serialize it and output it to a file, just prices, as binary. So let's do that: let's execute run protocol transformation. And yeah, it created this prices file. If we look at it, it shows some data inside, but of course it's not very readable; the strings, though, you can see are pretty plain here. So let's have a look, because I showed you this part: we output this prices file, we put the serialized data in there. But my code, in the last pieces here, does a little bit more: it reads the file, deserializes it back, goes through the array, now of deserialized data, and outputs the stringified representation. It shows me in the console what it looks like, and it looks almost the same. I notice now that the keys of this object are lowercase, which I actually didn't know before I saw it, but I guess that's how the toObject method works. So this is the first part, just to show you how this price data can be specified in protocol buffers. And just to show you a little more: if I do ls now, you will see that my prices.json file, and let me do it a little more correctly, like that, my prices.json is 15K here, and my binary prices file that we just generated from the protocol.js script is almost three times smaller. Well, usually people say twice smaller or something like that, but we see this is a much, much smaller amount of data, which matters when your production service transfers it. This is really nice.
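For reference, a minimal sketch of the protocol.js flow just described, assuming the generated module is prices_pb.js and the message names Price and Prices:

```js
// protocol.js (sketch): JSON -> validated protobuf -> binary file -> back.
const fs = require('fs');
const { Price, Prices } = require('./prices_pb'); // generated by protoc
const data = require('./prices.json');

const prices = new Prices();
for (const item of data) {
  const price = new Price();
  price.setDate(item.date);   // setters enforce the declared field types
  price.setPrice(item.price); // a non-float here is what threw the error
  price.setVolume(item.volume);
  prices.addPrices(price);    // append to the repeated "prices" field
}

// Serialize to a compact binary buffer and write it to disk.
fs.writeFileSync('./prices', prices.serializeBinary());

// Read it back, deserialize, and print a JSON-like representation
// (toObject produces the lower-cased keys seen in the console).
const restored = Prices.deserializeBinary(fs.readFileSync('./prices'));
console.log(JSON.stringify(restored.toObject(), null, 2));
```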

9. Using gRPC.js and Proto Loader

Short description:

In this part, we will learn how to use the gRPC.js library to create a gRPC Node.js server and client. We will also explore the proto loader, which allows us to load proto files into our runtime. We can load the proto file dynamically or statically. Additionally, we will demonstrate the usage of streams using an example from the proto loader. Let's dive into the code and see how the gRPC server and client are implemented.

Okay. The second demo I prepared for today shows how to build this gRPC Node.js server and client. We're going to continue with the code I showed you, but we'll need a bit more. First of all, we need the gRPC.js library. This is what I tried to explain earlier when I showed the repository: officially, gRPC supports different environments and languages, but gRPC.js is developed separately. It's the client library that allows you to use gRPC, supported by the same company and community, but located in a different repository. Then the proto loader: this is a loader for our proto files that lets us load them correctly into our runtime. At this point we should say there is also a way to do this statically, ahead of time, but in most cases it's used like this: we take the proto-loader npm dependency and load the proto file in our code, where it's parsed on the fly into a service and creates a stub on the fly. You can do it the static way if you want, but in most examples you will see the dynamic way, and we'll also show the dynamic way today. I'm also going to show the usage of streams; it shouldn't be very hard. For this demo I use the example from proto-loader, which has a nice piece of code that shows how to use it.
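A sketch of that dynamic loading with @grpc/grpc-js and @grpc/proto-loader; the options are the ones from the library's own examples, and the proto path is an assumption:

```js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Parse the .proto file at runtime: the "dynamic" approach.
const packageDefinition = protoLoader.loadSync('./prices.proto', {
  keepCase: true, // keep field names exactly as written in the proto file
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});

// Turn the parsed definition into stub constructors and service objects.
const proto = grpc.loadPackageDefinition(packageDefinition);
```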

10. Starting the gRPC Server

Short description:

In the next step, we start the gRPC server and specify the implementation for it.

So, let's have a look. In the next step of this code I start the gRPC server, so let's first look at what our gRPC server does. Again, these dependencies: we just take the proto loader, which, as explained, loads the proto files on the fly for us, and gRPC.js, which helps us run these stubs. We load the proto file; here I specify the path to my proto file and load its definition. What happens next is that, because we're running the server, we need, or we want, to specify the implementation for the server.
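A minimal sketch of such a server, anticipating the implementation walked through in the next part; it assumes the proto file also declares the HistoryData service described there, and the port and handler names are assumptions from the demo:

```js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const prices = require('./prices.json');

const definition = protoLoader.loadSync('./prices.proto');
const proto = grpc.loadPackageDefinition(definition);

const implementation = {
  // Unary: look up one price by the date field of the request.
  getByDate(call, callback) {
    const price = prices.find((p) => p.date === call.request.date);
    if (!price) {
      return callback({ code: grpc.status.NOT_FOUND, details: 'Not found' });
    }
    callback(null, price); // plain object; the stub converts it to protobuf
  },
  // Unary: return the whole list at once.
  list(call, callback) {
    callback(null, { prices });
  },
  // Server streaming: write one price every half second, then end.
  listStream(call) {
    prices.forEach((price, i) => {
      setTimeout(() => {
        call.write(price);
        if (i === prices.length - 1) call.end();
      }, (i + 1) * 500);
    });
  },
};

const server = new grpc.Server();
server.addService(proto.HistoryData.service, implementation);
server.bindAsync('0.0.0.0:8000', grpc.ServerCredentials.createInsecure(), () =>
  server.start()
);
```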

11. Implementing the Server and Client

Short description:

We write some implementation: get, list, listStream. Then we create a server by doing new gRPC Server. I specify the service, and I say my service is called HistoryData and has three methods. I implemented get, list, and listStream. I created a service, I created a server, I added the implementation of these methods to my service, and then I just run it: I say I will be running on port 8000 and I create an insecure connection. I read the date from the request, then I check my prices list and try to find the price. For the list approach, I just return my whole prices list. For listStream, I go through the prices, and for each price I set a timeout so it responds every half a second. This is our server; let's start it. Now we want to show what a client looks like. We specify that we want an instance of this HistoryData protocol: we say the HistoryData service is running on this port, using the insecure connection. Then I show that I can make a client get by some date; I show what it found there, and I show the list as well. Let me comment out the client list first, so we first show how the client get works. Then I open another terminal and start the client: npm run start gRPC client. Yes, that worked nicely. I got the result by this date; let's have a look at what I got. If I specify a wrong date, something like that, it tells me in the details of the error message that it was not found; that's what we specified in the server. Okay, get was very simple. Next is list: list should return the whole list of JSON prices, and that's what it does. The last one I want to show is listStream. Here I use another interesting thing: for await, a way to iterate through iterable objects in JavaScript. Streams support it in Node.js, and gRPC supports it too, so I just go through the stream as if it were a normal synchronous array, but in this case it's actually asynchronous. When I run it, it shows a new price every half a second and prints it to the console.

So how are we doing that? We write some implementation — get, list, list stream — then we create a server by doing new gRPC server, and we add the methods that we want to implement in this service. At this point it's not yet clear what we have in the service, so let's have a look at our proto file again.

Besides the messages here, I specify the service. I say my service is called HistoryData, and it has three methods: rpc get, by date, returns a price; list, by empty, returns prices — a list of prices; and the last one is a list stream, which streams the price data back to a client. You see here I specify a message of type empty — a kind of void type — and, just to show another type of message, a date message: a user can put in a date and I will return the price for that date. So I implemented get, list and list stream. I created a service. I created a server. I added the implementation of these methods to my service. And then I just run it: I say I will be running on port 8000, and I will create an insecure connection, just for test reasons, for our local setup. In production you will want to use certificates, and you will pass tokens between the services. And that's how I start my service here. You might want to know what's inside my implementation, and as you can see, it's very simple code. Remember get by date? I read the date from the request, then I check my prices list and try to find the price. By the way, this is my JSON here, but that doesn't matter, because it will still be converted into protobuf. That's how we operate in our program: we basically work with normal messages, and we don't have to transform them manually, because it's all done in the generated stub. So if I find something, I return it back; if I haven't found it, I return not found. The list approach is even easier: I just return my whole prices list. List streaming is the really interesting part — the difference is that I return a stream of prices. So here I used something interesting: I go through the prices, and for each price I set a timeout so that it responds every half a second — this is a kind of asynchronous way of communicating between microservices — and then I just write to the call when needed. This is our server. Let's start it. It seems to be running. So now we want to show what a client looks like. It's actually a little bit less code, but we're still going to use @grpc/grpc-js, the proto loader, and the same proto file to generate the same kind of stubs. So we generate the stubs, and then we create the client, because we're going to use the service. How do we do that? We specify that we want an instance of this HistoryData proto. We say that the HistoryData service is running on this port, using the insecure connection. And then I show that I can make a client get by some date — I will show that it found the currencies from there — and I show the list as well. Let's do it one by one; let me comment out the client list first. So first, let's show how the get client works. I will open another terminal here, and I'm going to start this client with its npm script. Yes, that worked nicely. I got the price by this date — let's have a look at what I got. If I specify a wrong date, something like that... yeah, it tells me in the details of the error message that it was not found. That's what we specified in the server.
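Put together, the server side just described looks roughly like this sketch (method and message names approximate the workshop's proto; the real repo code may differ):

```ts
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

const packageDefinition = protoLoader.loadSync('./proto/prices.proto');
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

// In the demo this comes from a downloaded JSON file.
const prices = [{ date: '2022-01-01', price: 47000 }];

const implementation = {
  // Unary call: look a price up by date, or fail with NOT_FOUND.
  get(call: any, callback: any) {
    const found = prices.find((p) => p.date === call.request.date);
    if (found) callback(null, found);
    else callback({ code: grpc.status.NOT_FOUND, details: 'not found' });
  },
  // Unary call: return the whole list at once.
  list(_call: any, callback: any) {
    callback(null, { prices });
  },
  // Server streaming: push one price every half a second, then end the stream.
  listStream(call: any) {
    prices.forEach((price, i) =>
      setTimeout(() => {
        call.write(price);
        if (i === prices.length - 1) call.end();
      }, 500 * (i + 1)),
    );
  },
};

const server = new grpc.Server();
server.addService(proto.prices.HistoryData.service, implementation);
server.bindAsync('0.0.0.0:8000', grpc.ServerCredentials.createInsecure(), () => {
  server.start();
});
```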
Okay — get was very simple. Next is list. List should return the whole list of JSON prices, and that's what it does. The last one I wanted to show is the list stream. Here I use another interesting thing: for await...of, a way to iterate through async iterable objects in JavaScript. Streams in Node.js support it, and gRPC supports it too. So I just iterate over the call as if it were a plain synchronous array, but in this case it's actually asynchronous: every half a second it gives me a new price, and I print it to the console.
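The client side is a sketch along these lines (again with illustrative names, matching the server sketch above):

```ts
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

const packageDefinition = protoLoader.loadSync('./proto/prices.proto');
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

// Connect to the service over an insecure channel, as in the demo.
const client = new proto.prices.HistoryData(
  'localhost:8000',
  grpc.credentials.createInsecure(),
);

// Unary call with a callback.
client.get({ date: '2022-01-01' }, (err: Error | null, price: unknown) => {
  if (err) console.error(err);
  else console.log(price);
});

// The streaming call returns a readable stream, which is async iterable,
// so for await...of receives one price every half a second.
(async () => {
  for await (const price of client.listStream({})) {
    console.log(price);
  }
})();
```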

12. Using gRPC in JavaScript

Short description:

To finalize this demo, there is just one more thing: a Java client. We're not going to talk about Java today, so if you don't like Java, don't worry. In this demo we demonstrated the use of gRPC in JavaScript and showed how to make calls to a server from different technologies. We also discussed the benefits of using TypeScript and Lerna in the development process. The next part of the workshop focuses on TypeScript and the protoc-gen-ts plugin. We explore how TypeScript and protocol buffers can work together to provide strict typing and a bridge between the two technologies. Additionally, we discuss the advantages of using Lerna for building monorepos and the upcoming DevOps.js Conf where we will cover deployment. Let's begin by cloning the cryptocurrency converter repository.

Something like that. Very, very nice. And I like this example specifically because it shows how elegant some constructions are in modern JavaScript. Quite cool, in my opinion. To finalize this demo, I have just one more thing: a Java client. We're not going to talk about Java today, so if you don't like Java, don't worry. But just to show you that it's possible, I went through the guide on how to build something similar in Java. I had a little bit of trouble, but in the end I just copy-pasted the prices proto — my prices proto doesn't change. I told you that in these proto files you can specify some compiler flags; let's not worry about them, I didn't dig too much into that. And then in the Java prices client, I also connect to this port 8001, and I call the get or list method — a blocking stub, list, on request. That should just make a list call to my running server and return the prices as they are. I hope that demonstrates gRPC nicely. So, to summarize this demo, we had five steps. In the first step, we downloaded some JSON data. In the second step, we generated the protobuf data types from our proto files. Then we transformed our data into protobuf messages, just to show that the format is indeed correct. In the second part of this demo, we started microservice one, which reads the data from the cached JSON. And in our client part, we started another microservice that makes a call to the server. It does it in plain JavaScript, but inside there's a stub that is generated for us. This stub converts the data into a message — a serialized, protobuf-formatted message — and sends it through the client; by default in gRPC the transport is HTTP/2, as we said. The server receives it, performs the operation in the opposite direction, and sends back a response. And we could read it in different ways — we saw it with streaming, getting one response, getting a list of responses. In the last part, we saw that we can make another client in a completely different technology, generate the stubs for it, and it will still receive the same messages from our server. I really think this is nice. And that finalizes our first part. So Andrew, are you still here? Yeah, definitely, Alex. Such a lot of material — I'm really impressed. My head is almost exploding from all this information, but I feel so smart now. Thank you. Yeah, thanks Andrew. So Andrew, why have we chosen TypeScript to demonstrate today? Well, TypeScript. I recently started learning the Go language, and that was a crucially different experience after JavaScript, because of all these types — with strict typing you understand what you send and what you receive. Right after that I also started exploring TypeScript for myself. I hadn't worked with it much before, but that's pretty much it: a good typing system means you catch most of your bugs at the earliest stage, before you even go to production or to any environment — when you run a build. Mm-hmm. Right.
Thanks. Actually, for me, TypeScript is really logical here, because protocol buffers already give us strict typing, right? So it makes sense to have a kind of bridge between TypeScript and the protocol buffers — that's where it gets really nice. So far we've just used pure JavaScript, but don't worry: in the next part we're going to use TypeScript, and it will be really nice with this protoc plugin — protoc-gen-ts, if I'm not mistaken. Next question to you, Andrew: why have we chosen to use Lerna today? A good question. Lerna is one of the tools which allows you to effectively build monorepos. A monorepo is when you decide to put your different services and different libraries into just one repo. It brings some benefits: you don't need to manage all these dependencies separately, you have the whole code base in one place, you can go and see changes, and you start moving faster. Lerna adds some extra features on top of that: you can build all your packages in one go, test all your packages in one go, and things like that. All right. And are we going to deploy? For now the answer is no, but there's going to be another workshop soon — so join, and we will deploy it. Exactly. I hope you're following this nice conference series; there's going to be another conference soon, DevOps.js Conf. We're going to build kind of a second part of this workshop there, and show, based on this code, how to deploy to an interesting environment. But let's not cover it today, because we have plenty of stuff. So Andrew, please take the screen. Yeah — cryptocurrency converter. Let's just begin by cloning this repo.

13. Preparing the Environment and Monorepo Structure

Short description:

To prepare your environment in this repository, install the necessary dependencies and run yarn lerna bootstrap to bootstrap all the projects. The repository is divided into packages, including common packages for sharing code between services. Inside the gRPC folder there are the currency converter, DevOps, and ECB provider services. The proto folder contains all the necessary proto files. The Lerna tool allows us to work within one space and includes the Yarn workspaces tool for including common packages without publishing them. Lerna provides the ability to run commands on the monorepo, such as yarn run build, which builds the specified packages. The common and services folders contain shared code and the different types of services, respectively. The root folder defines the structure of the packages.

I already have this cloned, so there is nothing to do, but by our link you can find an explicit explanation of how to install all the components you need. So this is the repo, for sure. And another thing is called protoc. I think you're showing your IDE, not a browser. Yeah, yeah. Okay, I'll shut up. Yeah, let's go — thanks. So you have this material; you can just install it and try using these tools. Protoc is the tool Alex spoke about a lot: it allows us to compile our proto files into code files, so that we can start using that code in our project. So yeah — let's prepare your environment in this repository.

First of all, you need to install dependencies, for sure. Let's start from that — again, I already have it, so it's all done. The next step is to run yarn lerna bootstrap. This is a command which bootstraps all the projects we have here. In one repo we have multiple different projects, so when we run yarn lerna bootstrap, it just executes yarn install in every single project, and we make sure that everything is prepared for our demos. So let's just say okay. Okay then.

Yeah, so next we're going to talk about the monorepo structure. I hope you can see this well — if not, just tell us. We divided this repo into packages, so we have common packages. Common packages are something we'd like to share between our services and between our other packages — some common code we share between the different services we have. Here we have go-grpc, Tracer and Boilerplate. Boilerplate is just a thing to kick off a new project. And inside gRPC, what do we have? We have a currency converter service, a DevOps service, and an ECB provider service. Let's have a deeper look at what's inside. We also have a proto folder, which contains all the proto files you need. A few words about Lerna and how it allows us to work inside one space. We have the following configuration for Lerna, where we define packages — here we defined that all our packages live under this folder called packages. We also define the npm client, which is yarn, and that we want to use Yarn workspaces. Yarn workspaces is another tool which allows us to include a package — like any common package — inside another package, without any need to publish it anywhere. With Yarn workspaces, the platform understands that there is some package, which is just a number of files located in this folder, and it can be used from another project, another package, another service. Yeah. So let's have a quick look at Lerna. What does Lerna give us? It gives us the ability to run commands on this monorepo and basically do some magical stuff, I would say. For example, we can type yarn lerna run build. If we run this command just like that, it will build all the packages inside — all the common packages, all the services, and so on. But here we say: we don't want all of them, we want just a specific scope — the scope starting with common. So here inside, if we go into packages, we see that the package names start with common, and likewise in services we have all the packages starting with grpc. That gives a bit more context, in case we want to build only the gRPC packages, or our REST packages, and so on. So let's run this, and you see when we run it, it says: executing in three different packages, and the command is yarn run build. What it does is go to every single package in common and just execute yarn build. Here's another interesting thing: you can see that somewhere we have lint, and in Tracer, for example, we don't. Let's imagine we remove it, and we just try to run yarn lint on the common packages. And yeah — it automatically detects that only two packages have this lint command now, so it will run lint on those two packages, and the third package, where we just removed the command, won't be affected at all; you don't see any error coming out. A few words about common and services and why we decided on that. We have a common folder, as I said — common code shared across different services — and one of the important things we have here is called go-grpc. You might imagine it's written in the Go language, but it's not; it's just, "hey guys, let's go with gRPC". And under the services folder we have a grpc folder.
This is another kind of gradation of the services. For example, we could also have REST services here — that would be another folder — and we might have CLI services as well, like daemons or something like that; that would be a separate folder. Yeah. Another thing to mention about this structure, and an important thing in the package definition: in the root folder, we actually need to define this part.

14. Creating a Currency Converter with Providers

Short description:

This part focuses on creating a currency converter using different providers, such as the European Central Bank and crypto providers. The converter receives requests to convert currencies and aggregates the responses from the providers. The proto file defines the Convert method, which accepts sell currency, buy currency, and sell amount, and returns sell amount, sell currency, buy amount, and buy currency, along with the conversion rate. The providers implement the GetRates method, which returns the actual rates for conversion. The proto files and providers are connected using the go-grpc library.

This part is picked up by Yarn workspaces. So for Yarn, we need to say these are the packages: here are all the common packages we have, and here are the gRPC packages. So when we define one service inside a package and say, okay, let's use this service from there, Yarn understands where from, because of those lines. Yeah. So let's look at a picture of what we're actually trying to achieve in this project. What's the business goal? The business goal is the following: we want to have a currency converter which allows us to convert currencies between different sources. We have multiple providers — a provider is a piece of code which gives us currencies from some specific source. For example, we have a European Central Bank provider. Let's see — so this is the rates feed we utilize from one of the providers, the European Central Bank provider, as we call it. It just gives us a big XML with all the rates it can convert. The idea is to take these rates from one place, from another place, from a third place, and also from crypto providers, and then do the aggregation on the converter side. The converter can then respond to a request like "I want to convert 0.345 Ethereum to Australian dollars, for example": the converter asks all the providers to understand which currencies can be converted, and if these currencies can be converted, the converter responds to you with a result. That's pretty much it. Let's go back to the code and look a bit more deeply at what we have in our proto. The proto is a really good place to start. Here you can see the picture, and on the picture you can actually design how your application will look. Then, after having a picture with all these flows and connections, step number two is to go to the proto and try to replicate this picture in code. We don't really implement anything here; we're just saying that this is something that should be implemented. Let me close this for a bit. So we have a currency converter proto, and it has just one method, called Convert. Convert accepts a request with a sell currency, a buy currency, and a sell amount — so we explicitly define what we want to convert. In the response, we receive the sell amount, sell currency, buy amount, and buy currency, and we also want to see the conversion rate in the result. What else? As shown in the picture, we've just described the converter part; the rest is the providers. How are we going to define providers? We have a currency provider proto, which also describes a very simple method, called GetRates. And here you can see the request is empty: we just ask the provider to give us all the actual rates for now, and we're going to use them for conversion. We also have a base currency — the currency in which these rates are represented. And we provide an exchange rate message, which contains a from currency and the actual rate, which is a number. And here's an example of how we can represent repeated structures: this structure will be repeated, so it will be like an array.
In TypeScript it will be an array of ExchangeRate objects. Then we just go to the implementations of our two providers. The first provider we have is the ECB provider, which is the European Central Bank provider. And as you see, there's another interesting thing here: instead of writing the same messages we already have in the currency provider proto, we just say, okay, we're going to import them. So we import it and use CurrencyProvider GetRatesRequest and CurrencyProvider GetRatesResponse. We are reusing protos between each other, which is also a beneficial thing to do. And here's pretty much the same for the CryptoCompare provider. CryptoCompare — if you go here, if you're not familiar with it — here's an example of how it looks if you want to convert crypto. Here's a response for crypto. Let's say we only care about US dollars, so all these rates are represented in dollars. So, what's next? We've just had a look at the proto. How is this proto going to be used — how will we actually get our code implementations of this proto? It's a good question. This is where the go-grpc library will help us. If we go deeper here, we'll see the library consists of two parts. One part is what's invoked when we run build. So let's open our terminal and go into it.
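To make the proto-to-TypeScript mapping concrete, the generated message shapes would look roughly like these interfaces (field names here are assumptions based on the walkthrough, not the exact generated code):

```ts
// Illustrative TypeScript shapes for the proto messages described above.
interface ConvertRequest {
  sellCurrency: string;
  buyCurrency: string;
  sellAmount: number;
}

interface ConvertResponse {
  sellAmount: number;
  sellCurrency: string;
  buyAmount: number;
  buyCurrency: string;
  rate: number; // the effective conversion rate
}

interface ExchangeRate {
  fromCurrency: string;
  rate: number;
}

// `repeated ExchangeRate` in the proto becomes an array in TypeScript.
interface GetRatesResponse {
  baseCurrency: string;
  rates: ExchangeRate[];
}
```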

15. Creating a Logger Common Library

Short description:

We create a common library called Logger, which is a wrapper for other implementations. It allows us to implement logging functionality and use it in other services. The common library is created using the Hygen tool, which provides templates for creating boilerplate code quickly. The Logger common library includes an index file and a simple Logger protocol with an exported debug function that logs a message to the console. We then need to implement the business logic behind the Logger and integrate it with other services.

We need to be in its place, so we go into this folder, into this common library, and we say yarn build. Okay — it says no, I'm not going to do this. Okay, let's try from another place, from here: yarn lerna run build, scoped to common go-grpc. In reality it just executes this command from the package scripts, which is build, and here we just execute build.mjs. build.mjs is just a wrapper around zx — another tool from Google — which allows you to write bash-like code right inside JavaScript. Not hugely elegant, but it lets you simply reuse variables from the JavaScript, which was the idea. And the final thought on what this command does: it just runs the protoc plugin. It's pretty much the same as if we ran the same command ourselves by hand. So let's actually try to run it and see whether it executes or not — hopefully it does. Okay. So if we go here, let me try to run this command — I'll just copy it, because it's a very long one. And it says no, again. Interesting. Okay, anyway — I'm not sure why this is happening, but we have a working copy from here, and as you can see it's being executed and does pretty much the same stuff. And as a result — the main thing I haven't described yet — we have this folder here, which contains all the proto converted to TypeScript, and then to JavaScript. These classes can be used from other places, as services and as clients. And there's the library we use — how is it called, let me see — it's called protoc-gen-ts. This library only generates clients for our gRPC services, not servers. That's why we implemented our own common server here, which will look pretty much the same for all the other services. What does it have? It has a kind of wrapper class that allows us to create a new server, and it has a method called addService, which is a proper, fast way to add new services to your server — basically to add an implementation. So in the proto we defined what these services will look like, and when we run the gRPC build, it creates the same service, but in JavaScript. What else? Then we just need to implement the service — to write our business logic behind it. So let's try to do something on top of this; I think it's pretty clear by now. We have a server. Let's start with simple things: let's create a common library here. We have three common libraries; we're going to create another one, and this library is going to be a logger. I want to implement a logger — a wrapper over some other implementation — and I want to use this logger in other places, in other services, for example. To start with the logger, I want to introduce another thing which we used for bootstrapping. We already have some code here, so we have the option to copy the boilerplate, paste it, and replace some parts — or we can just run something like yarn bootstrap common from the root of this folder and see. This command uses a tool called Hygen.
It's a kind of templating engine which provides the ability to create boilerplates quite quickly and efficiently. I'll show you very briefly: we have an underscore-templates folder, which contains two generators, called service and common — pretty much what we run here: hygen common new, hygen service new. The common one contains the templates which are used when we create a new common package. It asks some questions, which you see here right now — the name, where the implementation should be placed, and so on. So let's proceed. I'm going to call the common library Logger. Okay. Andrew — my name is Andrew. Okay. As a result, this Hygen tool created a few files for us, and we're going to go and see them. Here are the files — pretty simple. We have an empty index file. And here we try to implement a very simple protocol for the Logger: let's say we export a debug constant — a debug function which takes a message string and does a very simple thing, printing the same message to the console via the standard library.

16. Adding Logger to ECB Provider

Short description:

We define and export the logging functions to make them accessible. We add the logger library to the gRPC ECB provider, import the logger, and use the info function to log a message. We build the ECB provider with the logger and verify that it works.

But here we can define whatever we want. We're just going to create a few more: info and error. So, error here, info here — and we also need to export these pieces of code so they're accessible from outside.
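Put together, a minimal sketch of such a logger module might look like this (the function set follows the walkthrough; the real package may differ):

```ts
// index.ts of the logger common library: a thin wrapper over the console,
// so the underlying implementation can be swapped later without touching
// the services that use it.
export const debug = (message: string): void => {
  console.debug(message);
};

export const info = (message: string): void => {
  console.info(message);
};

export const error = (message: string): void => {
  console.error(message);
};
```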

Okay. Once we've done that, we can go to our logger folder — packages, common, logger — and what can we do here? Well, we can try to run build and see if it works, if it creates anything. Hopefully.

Yeah. It says no — cannot redeclare exported variable info? Oh, okay — I already exported it. Okay. Yeah, so there was a mistake here; I'll fix the export statement. Now it doesn't show any error here, so it should be fine. Let's see how it goes on the second run. Okay, it's done. Do we also have lint? Okay, it shows some warnings as well — no-console, which is okay, we know about this. So we have this library, and we have built it — it appeared here. What's next? We're going to use it somehow. So let's try to add it first. Lerna uses a specific way of adding libraries like that — our own private project libraries — and we use the same way. We go to the root folder and type lerna add with our common logger package. We say we want to add the logger, but we still need to define the scope — where. It will be added to the gRPC ECB provider. Let's try. Lerna does the installation, but in fact we don't install anything from a registry: it's just our library, located right next to us, but it still needs Yarn workspaces to process this. So if we now go to the ECB provider service folder, we see that the logger was added with the version we defined. Okay. What's next? Here we have our server, as an example of using the server. Let's try to add our logger here, right? So here we do the import from common — okay, we see it here — and we're going to import info. Okay. Now we add a new log line here: info, "ECB provider has started", nothing more. Okay, then we can go to this folder. Oops, not that one. Okay. All right, I'm having some issues with the terminal here for some reason; let me try to resolve it — I cannot actually navigate there. For some reason it doesn't work; let me try to start this thing quickly. Hello, Alex. Actually, IDEs are not that great with terminals — I sometimes show it with just a native terminal. Okay, but I have everything here, so it will just be easier to do it here. Okay, we are here. So what are we going to do? We can try to build this ECB provider, where we just added our new thing, the logger. Let's build it and see if it works. It should.
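For reference, the wiring just described amounts to something like this (the import path is a placeholder for the monorepo's actual package name):

```ts
// In the ECB provider's server entry point.
import { info } from '@common/logger'; // hypothetical package name

// ...after the gRPC server has been created and bound:
info('ECB provider has started');
```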

17. Implementing CryptoCompare Provider

Short description:

We implemented a simple logger library and used it to log messages in our services. Next, we implement the CryptoCompare provider using the Hygen template. We define the proto file, service name, and proto package name. The implementation files are created automatically, and we add our implementation to the server. We create a new file, get rates, to implement the server functionality: we import the currency provider and implement the getRates function, which for now returns empty rates and USD as the base currency.

Okay, it's done. Let's try to run it: yarn start. Basically, when we run it, it executes a standard TypeScript command which compiles all the TypeScript on the fly and just runs it. Here we can see the "ECB provider has started" message — the one we just added. And we also see another message, like "server started". That message, I believe, comes from our standard common library, go-grpc. Let's try to find it and see. Server — yeah, this is it. Here we just use console.log, because we didn't have the logger before, and now we have it. That's great: we just created a simple library and then reused it in another of our services. Impressive, I'd say. So let's move on and try to implement something more sophisticated. Let's implement another proto. Here you can see we have an ECB provider, and we also have a CryptoCompare provider. For the ECB provider, we already have an implementation inside our code, but for CryptoCompare we don't. So let's do that and see how it goes. Again, we use the same templating tool, Hygen. Let's see the command — it's yarn bootstrap service. What's the name of the service? Oops, sorry, I just copied it. The next question is to define the proto file, which is this proto. And what is the proto service name? Let's go and see — the service name is this, so we need to define it; I'll just copy it. And what is the proto package name? It's the same, but starting with a lowercase character. And yeah, my name is Andrew again. Okay. Now this Hygen tool created a bit more stuff behind the scenes, located here in the gRPC services. Here we can see a new folder was created, CryptoCompareProvider. We also see some tests here, as well as two files. One of the files is index. In the index we simply run the server, and that's pretty much it. If you go to the server, it shows us some errors, because we don't have any implementation yet — we still need to add our implementation here. You can see that all the values we provided to Hygen have been inserted here, so most of our code is already written, but some code still needs to be added. So let's create the implementation of our server. We're going to create a new file here, under services: get rates. getRates will be our implementation. Here we import the currency provider — okay, it's inserted automatically. We export a default function, which is also async, and which will implement our stuff. So we have a request, and our request will be a CurrencyProvider GetRatesRequest. Easy. And we also have a response: since it's an async function, we are required to return a promise with our response, and the response will be a CurrencyProvider GetRatesResponse. And here we'll just implement it. We're not going to get deeper into the implementation of the service; we'll simply return some values to see if it works, because the real implementation will be one of the practice tasks — it will be nice practice for you to try it a bit deeper. So we return a new response.
And here, if we start typing, we can see that we have two things to return inside it: rates, which we just return empty, and base currency — let's say it will be US dollars. Okay. So we don't actually need the request here. And we fix this.
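What we just wrote comes down to roughly this stub (the generated class names and the import path are assumptions based on the walkthrough):

```ts
// services/getRates.ts — a stub implementation returning empty rates.
import { GetRatesRequest, GetRatesResponse } from '@common/go-grpc'; // hypothetical path

export default async function getRates(
  _request: GetRatesRequest,
): Promise<GetRatesResponse> {
  // Real rate fetching is left as one of the practice tasks; for now we
  // return an empty rate list with USD as the base currency.
  return new GetRatesResponse({
    rates: [],
    baseCurrency: 'USD',
  });
}
```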

18. Implementing getRates Function

Short description:

We added a new function to the server to register the getRates implementation. We tested the server and verified that it compiles. We used the grpcurl tool to send a request to the running server and tested the CryptoCompareProvider service. We encountered an issue with an empty response and attempted to debug it. We also started the ECB provider and ran the currency converter service to test the getRates function.

Okay, we just implemented this function. What else? Here in the server there's one more thing: we need to add this function to our server, so it understands that here's an implementation. We add it with addService. addService is a generic function, and we say it's going to accept a CurrencyProvider GetRatesRequest and return a CurrencyProvider GetRatesResponse — but, as earlier, it should be a promise, so Promise of the response instead of just the response. Inside this call we define the method name, and the method name should be exactly the same as we have in the proto — I just copy it from there. What is the gRPC method? It's getRates. So we define getRates here, and then we just use getRates from our implementation. Let's check. Okay — so we added it to our gRPC server, which now understands where the proto is located; this is the service name, this is the package name, and here's the implementation of our method. We go here, and we just return an empty response. Let's see if it really works. Okay: services, grpc, CryptoCompareProvider — I'm in the right place now. Now I'll run yarn build and see whether this new service is compiled successfully by TypeScript. And it is. Okay, so we just verified that everything compiles. Here we see the build folder, which has the result of our build. This build folder is something we can actually run — run it in production if we deploy somewhere. The idea is: don't run TypeScript in any environment; instead, compile it to plain Node.js JavaScript files and just execute those in your environment. We also have a file here called .env.example — let's copy it. This file is used by the environment to pick up the port when the server starts. So let's start: yarn start. Okay, it's running now. As an example, we're going to use a tool called grpcurl. Let's have a look at it. grpcurl is a tool which allows us to run a CLI command against our gRPC server and test it — basically, we just send a request to the running server. So we have a server running on this port, and here we just try to call it. Let's see what we have here: the CryptoCompareProvider, and we're going to list everything we have inside. So we have CryptoCompareProvider here. Okay, let's see what's inside. It actually reads the proto file and understands that there is one service, called like this, and there's also a method like this. As simple as that — but I have an example for our ECB provider, not for this one. Let's change it and see whether it works. We have this line, and here we also need to change the proto to CryptoCompare. You can see the echo at the beginning with just an empty JSON object, which means we send an empty request to the service. "Server closed the stream without sending trailers." Interesting. Do we see anything here? Nothing. Yeah, interesting — I'm almost sure I just made some kind of mistake in the middle. Let's run it against the ECB provider instead, because I have a working example of that. So let's start the ECB provider. As you can see, it started here.
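Sketching the registration step described at the start of this section — the addService helper belongs to the workshop's own go-grpc wrapper, so its exact signature here is an assumption:

```ts
// server.ts of the CryptoCompare provider (names are illustrative).
import { Server, GetRatesRequest, GetRatesResponse } from '@common/go-grpc'; // hypothetical path
import getRates from './services/getRates';

const server = new Server();

// The method name ("getRates") must match the rpc name in the proto file.
server.addService<GetRatesRequest, Promise<GetRatesResponse>>('CryptoCompareProvider', {
  getRates,
});
```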
Okay, it started. We have to stop this one, because we're going to run the currency converter, which is another piece of software we already implemented before, as a second service. So now we're running two different services, and one service is going to call the other to get some values. Let's also start this one here. Okay — two services started on different ports, as you can see; there's some experimental stuff we used. And my example is about calling getRates, which will hopefully execute successfully. Okay, here's the example: I called the ECB provider, the existing one, on this specific port.

19. Implementing Service and Currency Converter

Short description:

We implemented a real service that reads an XML file and converts it to our proto format. We tested the service with integration tests, mocking the node-fetch implementation used to fetch external rates. We created an ECB provider client and tested the provider by running the getRates function. We also implemented a currency converter that asks all our providers for current rates and performs a simple conversion. However, we encountered an error while testing the converter and attempted to restart it. The error may be due to a rejection inside the service.

So if we, for example, decide to go here and change it — not here, but here. Here is the implementation of the real service, reading this XML file which I showed you before — reading the XML and converting it to our proto types. If we change it to just return some hardcoded value, hopefully it will work too. Okay, it's restarting — it restarted. Let's try another one. Okay, now it just returns the base currency: when you return an empty structure here, protobuf automatically optimizes it away and doesn't even pass it to the other side, so from the other side we never know whether it was filled or not. So let's put it back. Yet another thing to mention: how to actually test this. There's a folder, if we go to the ECB provider, called test, and here we have an integration test for our server. What exactly do we do here? We mock the implementation of node-fetch, because we use node-fetch to fetch the external rates, and the mock just returns some specific values as an XML document. Then we create an ECB provider client and test the provider: on top of this client we run the getRates function and expect it to return a specific result — the result generated in the business implementation of this service. What else did I want to say? We now know how to create a service, and this service returns us some data. We also have a currency converter here — let's ask the converter to convert something and see if it works. I'll show you the implementation of the converter, just to make it clear. We have the same kind of file called server, pretty much the same as the one we implemented, and we have a convert method — let's go there and see. What do we have here? We have a provider services environment variable, which contains a list of all the providers we have implemented. As of now we've implemented just the ECB provider, so we have only one here, but the general idea is that we define all the providers, comma-separated, in this variable. Then inside the code we have provider clients — clients like that — and then get currency. When a new request arrives at the converter, we ask all our providers to give us their current rates. Once that happens, we have these aggregated rates, and basically we can — let me show it. As of now, we have just one provider, the ECB provider, so the implementation just takes the first result from the first provider, takes the rates from it, and does quite a simple conversion here, with some rounding, and returns the result. Okay, so we have the currency converter running. Now I want to actually call it. Here I need to change it to the currency converter proto — it's on this other port — and here it's also currency converter. Convert. Yeah. What else are we missing here? The request. Our service expects a request to be sent to it, so we need to define it: a sell currency — I'd say US dollars — and another thing called buy currency.
I'd say it will be GBP, and I want to sell — sorry, it's not sell currency, it's sell amount. So I'm going to sell 100 US dollars, and I want a conversion to GBP. Let's see. The magic is not happening for some reason. Let's see what we have here. Okay, we have some weird error here — I think it was a rejection inside the service. Let's try to restart it. Currency converter, yarn install — just to make it clear, it's just some dependencies; they're not really there. Oh my goodness. Yeah, I just use nvm — if you're familiar with that, it's really cool for switching Node versions. I was on the wrong Node version, and when I used nvm to switch and then restarted my IDE, everything went wrong.
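For reference, the conversion logic just described boils down to something like this sketch (names and shapes are illustrative, not the repo's exact code):

```ts
// Converter aggregation step: ask every configured provider for rates, then
// convert using the first provider's answer, as the demo implementation does.
interface ExchangeRate { fromCurrency: string; rate: number; }
interface ProviderClient {
  getRates(): Promise<{ baseCurrency: string; rates: ExchangeRate[] }>;
}

async function convert(
  providers: ProviderClient[], // built from the comma-separated env variable
  sellCurrency: string,
  buyCurrency: string,
  sellAmount: number,
) {
  // Ask all providers in parallel for their current rates.
  const responses = await Promise.all(providers.map((p) => p.getRates()));

  // The demo implementation just takes the first provider's rates.
  const { rates } = responses[0];
  const sell = rates.find((r) => r.fromCurrency === sellCurrency);
  const buy = rates.find((r) => r.fromCurrency === buyCurrency);
  if (!sell || !buy) throw new Error('unsupported currency');

  // Convert through the provider's base currency and round the result.
  const rate = buy.rate / sell.rate;
  const buyAmount = Math.round(sellAmount * rate * 100) / 100;
  return { sellCurrency, sellAmount, buyCurrency, buyAmount, rate };
}
```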

20. Summary and Benefits of gRPC

Short description:

We have two running services: a provider and a currency converter. The service can easily be expanded by adding new providers and handling different types of currencies, including cryptocurrencies. gRPC is language-agnostic and efficient, with strong typing and support for multiple languages. The compiler and library are supported by the community and by Google, providing stability and performance. gRPC is scalable, secure, free, and open source. It also offers pluggability, allowing for the creation of extensions. The HashiCorp company has implemented gRPC pluggable services in their cloud platform. Overall, gRPC is a powerful framework for microservices communication.

I don't want to lose time trying to install everything — it's not clear. Yeah, indeed, nvm is a very nice script to control which versions of Node and npm you use — I would say a must-have if you work with a Node environment often. It's a simple script you can install on basically any operating system. And I can imagine why it wouldn't work: the dependencies you already installed under one version won't work on another version of Node. Makes sense. Yeah, sure. Thanks, guys, for the feedback. Okay, now it worked — finally. So what we finally have: two running services. This is the first provider, and we have another service running, the currency converter. Here in the code, we just call one service from another, ask for all the rates — which is a pretty simple thing — and then make the conversion on this side. But the idea of the service is that if we decide to add more providers here, it will be easy to do. We implement new providers, add them to this chain, and then run pretty much the same protocol here for the converter — but instead of having just regular currencies, we also have cryptocurrencies, and we're able to convert cryptocurrencies as well. For now it's just a simple implementation where I sell 100 dollars and receive some GBP, and here's the effective rate. So I'm pretty much finished with the material we had. Alex, what do you say? Hey, yeah, great job Andrew. Let me quickly get back to my screen again. What I like about the structure of this workshop — can you still hear me? Sorry, that was a shared screen for me; I thought I'd lost the connection for a second. What I like about this structure is that we really start from the ground, and then in the second part we show more of a real scenario. Because, as I understand it, in your day-to-day work you're dealing with pretty much this kind of setup, right? You have Lerna, you have services defined — of course many more services and many more definitions, but still something like that. Yeah, right. Okay, so the logical next step is to try out some of these techniques yourself. But before we do that, let's make a summary and give some final words, and then we'll explain the technical tasks that you can take on to practice a little more. I'm going to start the summary, so Andrew, please join in on any points you think are relevant. First of all, we discussed today what gRPC is, what Protobuf is, and that it may seem to be yet another standard to learn — we already know many other approaches and frameworks that can be used for client-server communication. So yes, this is yet another standard. However, we think it's a very profitable one to learn. Why? Because it's environment- and language-agnostic. Well, strictly speaking, as we've seen, it's not fully environment- and language-agnostic — it's more that it has support for many environments and many languages; you still need to understand exactly what to use, how to use it, and which version to use. But what you get out of the box is one common standard for the whole ecosystem of microservices used inside your application. Protobuf itself is a strongly typed messaging format.
Together with TypeScript, I think it's a really nice match. You can define your service's protocol, and messages are converted into strict types that are checked before being sent and after being received in a service. It's also quite efficient. We didn't show any performance tests today, but you can find some on grpc.io — they have really nice metrics, especially against the more traditional backend technology stacks like Java and C++; as you can imagine, it's super fast. Actually, if you look at some early presentations on gRPC, from 2017 or so, the Googlers working on and promoting gRPC quoted something like ten billion operations per second across their whole microservices environment — you'd better check the numbers, but it's incredibly high and incredibly fast. So it's a really efficient format, both for transport and for the data itself. The compiler and the library are supported, first of all, by the community and, more importantly, by Google, which gives you a lot of stability in this choice if you go with gRPC. The repository we saw today — you can basically take a dependency on this library without a doubt, because it will be supported by lots of people, in line with the goals we discussed today, like efficiency and performance. Security out of the box — we touched on that a little today. Scalable — we didn't discuss it much, but being scalable is the idea of all microservices. Free and open, as all open source libraries are. Pluggable — this is a key feature that allows you to add your own extension on top of the gRPC framework. If you want to generate code in, I don't know, Lua — Lua is supported, I'm pretty sure. Or, for example, for the PizzaScript language that you invent during your sleepless nights — then you can make your own extension for gRPC, and I can imagine you'd have a microservice setup around gRPC and PizzaScript. For now I can only dream about it, to be honest. Actually, I can add a bit about pluggability. If you've heard of the company HashiCorp — I started using their tools at a company for infrastructure creation, basically using the Terraform language, if you've heard of it. Then I took a deeper dive into these microservices and gRPC topics, and at some point I realized that these guys implemented pluggability inside their cloud platform using gRPC pluggable services. Basically, they allow you to create your own service which follows a specification written in gRPC, and then you can plug it into their cloud and it starts executing. The cloud gives it a lot of features: they ensure that if the service goes wrong, it doesn't break the whole system — it just breaks itself, and they handle it. So yeah, that's it. Sorry, Alex. No worries — very nice.

21. Comparison of REST, RPC, and GraphQL

Short description:

In this part, we discussed the comparison between REST, RPC, and GraphQL in terms of their focus, semantics, coupling, and format. REST focuses on resources, while RPC focuses on programmatic actions. GraphQL is a mix of both. REST has a loose coupling, while RPC has a strict coupling. In terms of format, REST is a text format, while RPC is binary. We also provided practical tasks for participants to try, such as creating a local stub configuration with Docker Compose and updating dependencies in the monorepo. We advised against attempting the advanced task of tracing paths between services unless familiar with OpenTelemetry and JavaScript.

Indeed. And the layout is kind of the same here. I also have this comparison table — let's quickly go through it. I'm not saying it's complete or 100% precise, but let's have a look. What I tried to do is summarize the other architectural formats and patterns that are used for messaging and transport between services. We have REST, RPC and GraphQL. Note that I'm comparing RPC, not gRPC, since RPC is more of a pattern. So if you compare them by, for example, focus: REST is focused clearly on resources — the semantics of a resource — while RPC, being a procedure call, focuses more on programmatic actions. I don't know — "I want to pet my dog", for example; that's the method you call remotely. GraphQL, I think, is similar to REST in that sense, also focused on the resource, but it's more mixed — GraphQL is also very flexible. Next, the protocol-related semantics, let's call it that. I put HTTP here for REST: REST has a really strong connection with the HTTP verbs themselves — GET, DELETE, PATCH, POST and PUT. RPC and GraphQL have more programmatic semantics: in gRPC, for example, you just call your method via the API — again, "pet my dog". So it's a somewhat different approach, as you can see. Coupling — how the services are coupled in the final ecosystem. In REST it's quite loose: services don't have many dependencies on each other. If you don't enforce format checking — if you don't have, for example, a Swagger spec that is consistently tested against the version of the application you deploy — then you basically don't know what is running there in terms of API semantics or API version. RPC is the complete opposite: if your proto file doesn't match the one you implement, your application will not start — it will fail. And likewise, if clients try to use the service with a mismatched protocol format, it's strict: it will fail and won't allow them to do so. GraphQL I also put down as loose, but we didn't discuss it much today. And the last point is the format. REST is clearly a text format — you can argue, REST is sometimes jokingly called "JSON over HTTP", though it can also be XML or any other format. But RPC is clearly binary: we saw today that it serializes messages into binary data that you cannot even read without stringifying it first. So this is a kind of summary for today.

Now, about the practical tasks that you can do yourself. In the practice session you will find a link to the issues on GitHub. The issues are defined in the public monorepo node-js svc sample — exactly what Andrew was showing us today. We went through all the steps that are defined in this workshop, and we tried to make different interesting tasks for this monorepo. They are not always about programming, but they always have a connection to what we discussed today. I will start from the bottom, just because I can. This one is actually a non-programming task, and it says: create a local stub configuration with Docker Compose. So if you're really into making a Docker composition and you have fun playing with Docker, then it would be a good one for you: look at the structure of this monorepo, create some Dockerfiles, maybe challenge yourself to reload the Docker images on build when resources change. You can have a flexible solution, of course, but the idea is to create a local Docker setup, so you can run all the services at once with the docker compose up command, for example. The next one from the bottom — you can see it's marked "good first issue". If you consider yourself less experienced with this technology or with this particular task, this good first issue embraces the structure of the repository and lets you experiment a bit with dependencies: install dependencies, update dependencies, and so on. And the second one from the bottom is actually about the dependencies: in our showcase today we used protoc-gen-ts at an older version, and it's not the latest — I think the latest one we checked was 0.8.1. So feel free to update it, and then make sure the code still works: regenerate the code, test the services. This is indeed a good first issue to try, I think — it also lets you learn how the whole infrastructure looks, which is a good starting point. The next one, tracing the path relations between services, is quite advanced, let's say. I will quickly describe it, but I really doubt you should pick this one up. In one pull request to this repo, we created a tracer service client — an OpenTelemetry client for Node. You can think of it like a logger: a common utility that is used across the other services. The idea is to bind it to some tracing system; we used Zipkin in this case. We added some functionality to the tracer, and it starts, but it doesn't show the connection. And that's the point: you have a call from one service to another, and in the Zipkin interface you want to see that the first service called the second service and that the request took half a second to run. That part is not working — we expect there is some small bug there. If you want to dive deep into that, and you're quite familiar with OpenTelemetry and JavaScript, this is a good issue to take. Otherwise, I would not suggest picking this one.

22. Creating Logger and Crypto API Provider

Short description:

In this part, you will learn how to create a common library logger, implement a crypto API provider, and connect it to the currency converter. These tasks are more advanced and involve touching different services within the repository. You will also explore how to modify the conversion logic to handle multiple providers. The tests for the gRPC services can serve as a reference. Additionally, you can experiment with the code and use TDD to ensure the tests pass. This part concludes with the option to explore extra resources, including YouTube crash courses.

Next one, maybe Andrew, you can explain a little bit the create common library logger. Yeah, create common library logger. Basically, this is pretty much the same as what I was showing you, but I was not sharing this code anywhere. So it's a really good first issue to start with: you try this Lerna setup, you try this monorepo, and the issue also has some step-by-step guidance on how to get there. So if you want to try it from the beginning, it's a good place to start.

The next one is similar to what I was showing. It's about implementing a crypto API provider, which is a new provider you need to implement. The final goal is to implement this API so it returns some rates, and then, as a final step, you connect this provider to the currency converter and are basically able to convert cryptocurrencies. I would say it's more advanced, because you need to implement it from nothing and also touch different services and wire them together. The current repository actually shows nicely how it should work, so you can also look at how the tests run for these gRPC services. If you are experimenting with it, first make sure the tests are passing, then modify the code, make them fail, and use TDD there; that would be great.

And the last one in the list is about the ratio calculation inside the converter, in that method I've shown. Currently we only take one provider's result and then make all the calculations from it. The idea is that, once we have multiple providers, we need to amend this conversion somehow so that it works correctly based on different providers. There is an example of different responses there. Basically it could be achieved even with just tests: you can go to the tests of this CryptoCompare currency converter, add more providers to the response, then go to the code and implement your logic. And this is actually very beneficial, I think: you can test and cover every single service on its own, just emulating the whole environment around it, which gives you more options. You don't need to run all the infrastructure on your laptop.

Great, yeah. I hope that gives you some variety in what you want to play with. Of course you don't have to do it now, but if you have a little bit of time, why not? Speaking about time, I think it's perfect timing; it's almost three hours. To finalize this part, of course, feel free to check the extra resources. There are some YouTube links; we really enjoyed these videos, some crash courses.
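As a sketch of where that multi-provider amendment could go, here is one hypothetical shape for it. The interface and function names are invented, not the monorepo's real ones, and averaging is just one possible strategy; a median or the freshest quote would work as well:

```ts
// Hypothetical contract each rate provider (CryptoCompare, the new crypto
// API provider, etc.) could implement.
interface RateProvider {
  name: string;
  // Returns how many units of `to` one unit of `from` is worth.
  getRate(from: string, to: string): Promise<number>;
}

// Convert an amount by combining the answers of several providers.
async function convert(
  amount: number,
  from: string,
  to: string,
  providers: RateProvider[]
): Promise<number> {
  // Query all providers in parallel; tolerate individual failures.
  const results = await Promise.allSettled(
    providers.map((p) => p.getRate(from, to))
  );
  const rates = results
    .filter((r): r is PromiseFulfilledResult<number> => r.status === 'fulfilled')
    .map((r) => r.value);

  if (rates.length === 0) {
    throw new Error(`No provider returned a ${from} -> ${to} rate`);
  }

  // Simple averaging strategy across all providers that answered.
  const avg = rates.reduce((sum, r) => sum + r, 0) / rates.length;
  return amount * avg;
}
```

Because each provider is just an interface here, every one of them can be stubbed in tests, which matches the TDD approach suggested above: add a second fake provider to the test response, watch the conversion test fail, then implement the combining logic.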
