How to Convert Cryptocurrencies With gRPC Microservices in Node.js


This workshop covers key architecture principles, design patterns, and technologies used to build microservices on the Node.js stack. It covers the theory of the gRPC framework and the Protocol Buffers mechanism, as well as techniques for building isolated services in TypeScript using a monorepo approach with Lerna and Yarn workspaces. The workshop includes a live practical assignment: building a currency converter application that follows microservices paradigms. It best fits developers who want to learn and practice the gRPC microservices pattern on the Node.js platform.


- Good understanding of JavaScript or TypeScript

- Experience with Node.js and writing Backend applications

- Node.js and npm preinstalled

- Protocol Buffer Compiler (protoc) preinstalled

- We recommend VSCode for a better experience with JavaScript and TypeScript (other IDEs are also fine)

117 min
16 Jun, 2023



AI-Generated Video Summary

This workshop explores gRPC and how to build gRPC microservices to convert cryptocurrencies in Node.js. It covers the low-level aspects of gRPC and its integration with Node.js, emphasizing its performance benefits over traditional REST APIs. The workshop includes demonstrations of generating protocol buffer types, protocol buffer transformation and serialization, creating a server and client with gRPC, and building a cryptocurrency converter backed by multiple providers. It also compares REST, GraphQL, and gRPC in terms of coupling, verbosity, and discoverability.

1. Introduction to the gRPC Workshop

Short description:

Welcome to the workshop on gRPC and how to build gRPC microservices to convert cryptocurrencies in Node.js. This workshop is the first part of our investigation into gRPC, where we explore the protocol and its underlying technology, such as protocol buffers. The workshop is expected to be less than two hours long and focuses on the low-level aspects of gRPC and its integration with Node.js. We have provided a repository with all the necessary resources and demos for the workshop. If you want to follow along, make sure to install the required dependencies, including the protocol buffer compiler. Now, let's introduce ourselves. Hi, I'm Andrew, a software engineer from the United Kingdom. I have a passion for open source and enjoy working on projects related to NodeJS and TypeScript. I also explore other technologies like the Go language and DevOps practices to enhance my skills and have more fun with open source.

So, welcome everyone. I'm Alex, and this is Andrew. Hi, Andrew. Hello, guys, nice to see you all. Nice to see you too. Special thanks to the people who already joined our workshop this week. We have one attendee for now, but let's see, maybe we'll get more. So, welcome everyone to the workshop.

Today we talk about gRPC, a slightly different topic than on Tuesday. To be concrete: how to build gRPC microservices to convert cryptocurrencies in Node.js, of course. A little bit of history of this workshop: Andrew and I have been exploring this topic for roughly the last three years, and we built different solutions for ourselves just to try out this technology and see it from different angles. This workshop is actually the first part of our investigation, in which we touch on gRPC itself: what this protocol is about, what lies behind it (protocol buffers, for example), and specifically how to run it in Node.js. That's the main idea for this workshop.

I hope you had a chance to read the description upfront; if you haven't, hopefully you will find it useful anyway. I will post a link directly in the chat where you can find resources related to this workshop, including documentation links and articles that we will actually explore. A little more about the workshop itself and the timing we expect: it is not going to be three or four hours, so I don't want to take a lot of your Friday today. I think it's going to be less than two hours, honestly, especially if we speed up the introduction. Like we said, the topic is gRPC itself, mostly low-level gRPC and how to bind it to Node.js. There is also a repository with all the material for this workshop; we put all the resources there, and you can find some of the demos and the dependencies as well. Because we introduced this workshop a while ago, it still uses Lerna, so please don't be mad at us about that; I think it still works, and that's the most important thing. If you want to go through all the steps we have documented in this workshop, you will need to install some of the dependencies we added there. Node.js and npm you probably already have installed, but the most important one in our case is the protocol buffer compiler: to run the different demos and do the conversion, you will need it, and we can explore how to install it. All right, so that's what this workshop is about. That was the introduction; let's maybe introduce ourselves.

Maybe let's start with Andrew today. Andrew, hello. Wow, such a pleasant present from you to go first. Hi guys, I'm Andrew. I live in the United Kingdom. I'm a software engineer and also an open source guy. I really like building not just one-day solutions but thinking about whether I could use a solution at my next job or in my personal projects. That's the moment when open source comes to help: you create a repo, you wrap the solution up, and then you can apply it anywhere. That's a really great thing about open source, and it's worth speaking about in general. I'm really inspired by NodeJS, do many projects related to it, and by TypeScript overall. And Alex is trying to change my picture right now, no worries. Right after NodeJS and TypeScript, the next thing I started to explore was the Go language; I think Alex joined me on this path at that point. Go in most cases looks similar, but it is also a bit different and a bit faster than NodeJS. This workshop is for NodeJS, though, and NodeJS is the best thing we could use right now. Another bit: I also started to touch DevOps practices and tools, the things which simplify your daily routines so you can have more fun with open source. That's pretty much about me. I don't have the same kind of picture of you, unfortunately, but next time I will be prepared. Yeah, that's me.

2. Introduction to gRPC and Its Benefits

Short description:

In this part we explore what gRPC is, which technologies it uses, why it exists, and what is under the hood. First we look at gRPC's capabilities and how to build a low-level server and client in Node.js; in the second part we build a system of several gRPC services connected to each other, whose goal is to convert currencies. By the end of the session there are a few exercises you can try yourself. gRPC is a modern open source remote procedure call framework that can run anywhere, enabling client and server applications to communicate and making it easier to build connected systems. It also allows services built on different technologies to connect to each other. The main reason for using gRPC over traditional REST APIs is better performance: JSON is quite verbose, which affects the size of messages sent over the network, while gRPC provides a more efficient way to send and receive data by using protocol buffers.


Thanks for the introduction, Andrew. So, about myself: I'm a software engineer, I live in Amsterdam. I was a JavaScript engineer for quite a long time. I started my career as a JavaScript engineer and spent a while in the front-end area, and somewhere around 2015 I got a little tired of how fast front-end development was moving; it's really hard to follow it at that pace. So I decided to move to the back-end side, and the first technology that was logical to choose was Node.js. I really enjoyed it, and it still inspires me a lot, and not just JavaScript on the back end: what also drives me is building tooling for engineers. It's not exactly a DevOps job, but somewhere around that. I really like working in core teams, you know, supporting engineers; one of my favorite projects was supporting and maintaining a pipeline for front-end engineers, which was a very cool experience. Beside that, like Andrew mentioned, I'm also a big fan of Golang. I can't say I have a lot of knowledge of it, but a little, and I enjoy it; it would probably be my preferred language outside the Node.js environment. DevOps is also something Andrew and I have in common: I practice it and have had some professional experience with it as well. So that's me, and let's not spend too much time on the introduction. Actually, it would be interesting and curious to hear a little about your background, and maybe you can combine it with what exactly you want to get out of the workshop. By the way, let's make this workshop more interactive: if you want to learn about something specific, just ask us a question, because we did this workshop some time ago, and some things that already feel obvious to us we may not articulate very well now. So don't hesitate; I'm just saying it one last time.
All right. That was the introduction, quite quick, I think.

So what are we going to do today? Let's have a look. First of all, we will explore a bit of what gRPC is, which technologies are used in gRPC, why gRPC exists, and what is under the hood of gRPC. Spoiler alert: it is protocol buffers, so we're going to talk a little about protocol buffers. The main part of this first module (I don't know how else to name it) is going to be a demo, and this time it will be a real demo, with the demo effect we all love: something will probably break, I would guess. In this first part we will explore the particular capabilities of gRPC, including the conversion capabilities, and how to build a low-level server and a low-level client in Node.js. For the second part, maybe Andrew can tell us what it is going to be about? Yeah, in the second part we will explore the repository you have probably already found in the description. There we are going to build a kind of system which contains several gRPC services connected to each other, and the goal of the system will be to convert currencies. We have multiple providers on one side, and we have one converter which aggregates all these providers and can calculate the real conversion for your request. It's a very small task, but the main idea is to touch all the technologies Alex will be explaining in the first part, and also to see how to test it, how to go deeper, how to build it, and so on. So yeah, it's going to be fun. Yes, let's have some fun indeed. For the end of the session we prepared a few tasks, a few exercises that you can go through and try out yourself. Let's see if we can explore them by the end, and then maybe we can just explain them, so you can do some of the exercises yourself if you're interested. You can find them, of course, in the repository itself, in the samples there.
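The converter idea described above (several rate providers aggregated behind one converter) can be sketched in plain JavaScript. This is a hypothetical illustration, not the workshop's actual implementation: the provider names, rates, and the "pick the best rate" strategy are all assumptions, and in the real system each provider would be a gRPC client rather than a local object.

```javascript
// Toy sketch of the converter system: each "provider" stands in for a gRPC
// service exposing a getRate() method; the converter aggregates them and
// uses the best available rate. All names and numbers are illustrative.
const providers = [
  { name: 'providerA', getRate: (from, to) => 1.5 },
  { name: 'providerB', getRate: (from, to) => 2.0 },
];

// Ask every provider for a rate and convert using the best one.
function convert(amount, from, to) {
  const rates = providers.map((p) => p.getRate(from, to));
  const best = Math.max(...rates);
  return amount * best;
}

console.log(convert(100, 'BTC', 'ETH')); // 200, using providerB's rate
```

In the workshop's system the same shape holds, except `getRate` is a remote procedure call and the providers live in separate services.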

All right, so let's go on with gRPC. This is going to be the main part. What is gRPC? There is a nice logo, a mascot; it's called Pancakes. I think that's the most important thing to know about gRPC: the dog is called Pancakes, everyone knows that. So, the gRPC protocol, Google Remote Procedure Calls? Most likely, but officially the "g" does not stand for Google; it's just gRPC Remote Procedure Calls. Let's have a look at the definition; I like to start with a definition first: a modern open source remote procedure call framework that can run anywhere, enabling client and server applications to communicate and making it easier to build connected systems. I think that's quite a good definition of what it does. By the way, type in the chat what your experience with gRPC is so far, on a scale from one to ten, where one means you don't know anything about it and ten means you are a super expert; that would help us decide how deep to go in this introduction. Nice, Bruno, thanks for that.

So let me explore this definition a bit. "Remote procedure call framework" already tells us how the communication between the services we will be showing is going to be built. The gRPC protocol is about communication, and it also performs a very important task: it allows you to connect services built on different technologies. This picture here, for example, shows different technology stacks communicating: Python to C++ to Java to Ruby and so on. Imagine a big organization like Google or Facebook that has, I don't know, thousands, maybe tens of thousands of microservices running in their projects, and they want to run them efficiently, because at that scale you want to use infrastructure really efficiently: you want to spend as little money on infrastructure as you can, and you also want really good performance out of it.

Of course, we already had something in place for this goal before gRPC. The oldest alternative to name would be the REST API. REST was formally specified around the end of the 90s, when the RESTful application style was defined, although we had been using similar HTTP APIs before that. The REST idea is quite obvious: you have resources, you have endpoints for those resources, you call those endpoints over a protocol (HTTP, to be fair), and you get data out of the endpoints. So what's wrong with that, if anything? Well, the main reason, like I already said, is that you want more performance out of these endpoints. How can you achieve that? Imagine this use case: your endpoint serves JSON, which means you send JSON documents over the network, and as you may know, the JSON format is quite verbose. It has field names, and those field names repeat for each item in an array, for example: you specify the fields in every object in the JSON. That affects the size of the message you send, and it means that if the document contains a really big resource list, you will send it in multiple parts over the network, and your client will be receiving them and assembling them before you can actually get the data out. I mean, you will be using streams in most cases, but you will still be sending this data over the network in a form that is, let's say, not minified.
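The JSON-verbosity point is easy to demonstrate: field names are repeated for every object in an array, so the payload grows with the number of records even though the schema never changes. A quick illustration (not from the workshop; the field names are made up):

```javascript
// 1000 small records with the same three fields. In JSON, the key names
// ("currencyFrom", "currencyTo", "exchangeRate") are serialized again for
// every single record, roughly 40 bytes of pure key overhead per item.
const items = Array.from({ length: 1000 }, (_, i) => ({
  currencyFrom: 'BTC',
  currencyTo: 'ETH',
  exchangeRate: i,
}));

const json = JSON.stringify(items);
console.log(json.length); // tens of kilobytes, mostly repeated key names
```

Protocol buffers avoid this by encoding each field as a small numeric tag (usually a single byte) instead of its name, which is a large part of the size win mentioned above.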

3. Introduction to gRPC Communication

Short description:

gRPC was initially used by Google and later pushed to the community. It provides better performance than traditional REST APIs and has concise documentation. gRPC allows you to treat third-party services as objects in your code, simplifying communication. The steps to use gRPC involve making a request to a server, using the gRPC API instead of low-level HTTP calls. The gRPC framework handles the communication layer, including data serialization and transport protocols. Unlike REST APIs, gRPC does not rely on third-party libraries for communication, making it more self-contained.

So first it was used inside Google only, and then they pushed it to the community. Yeah, Bruno, I think Google is still using gRPC for most of their services. I mean, I cannot imagine that since 2015 they could have migrated to any other technology; in big companies it just takes a lot of time to migrate from one technology to another, and I think that's why most of these big companies prefer to use a standard they control. In Meta's case, for example, they invented React for that purpose, so that they can build applications without relying on a third-party vendor; it originated inside the Facebook company. Anyway, Google started to standardize gRPC and published it to the community, and since then Google supports the community that drives gRPC and uses it, as we said.

Interesting, I haven't fully understood Andrew's comment about gRPC, but you know mine. So, Google did this before with other technologies as well, as you probably know, and the ones related to gRPC I put here: SPDY, QUIC, and Stubby. Those are the transport protocols (and, in Stubby's case, Google's internal RPC system) related to what can be used inside gRPC. I will talk a little about it, but gRPC by default uses the HTTP/2 protocol, and you can actually extend or define what exactly you want to use as the transport protocol for the communication layer. What else? I think another cool thing about gRPC is the documentation: they have this tiny website with very nice blog posts showing all the news, they also show the principles, and it is very concise documentation that answers most questions. I like the guides specifically: you can go there and find Node.js, for example, and then you have a few tutorials. They are a little bit outdated, but they still work, and that's the most important thing; it's a good place to start with a technology like this. gRPC is part of the Cloud Native Computing Foundation by now, which is logical as well.

So what is RPC for us? If you haven't met RPC before, it's a bit different from the world we are used to with REST APIs. With a REST API we always treat other services as third-party components that we need to call using some library. In the case of Node.js that can be, for example, the built-in HTTP module, or a better alternative that now lives in the Node.js project itself: it's called undici. I will type it in the chat if you haven't heard of it, so you can look it up. It's basically a rewritten HTTP client, much more performant and faster than the built-in HTTP module. Anyway, we know how to deal with these third-party services: you fetch the resource that lives somewhere, the service serves the data, and you use an HTTP layer (or a third-party HTTP client) to get this data into your service, and then you deal with it. Bruno mentions Express; of course, Bruno, it's still quite popular, but you could already leave Express behind some time ago. With RPC, your communication with the other service differs slightly. You treat the third-party services as if they live close to your own service: in your code you use the service as just an object, and you say, hey, third-party service, get me data, and then you get data back and continue working inside your code. So that's roughly the developer experience compared to the REST API world.

So let's have a look at the steps needed to do that. Imagine we have a client and a server; that's how most communication works on the web and in modern service architectures. We need to make a request to some server to execute a procedure. The flow stays the same as with REST: you make the request, the service does the job to give you the data back, and you continue working in your own service with this data. The difference is that instead of using a low-level HTTP fetch call, for example, you use the API the gRPC framework produces for you. A couple more steps are needed for that, including some definition of the protocol: how the third-party service is actually going to be called. For that, gRPC provides a specific mechanism, shown in this figure from the gRPC website. Every participant can be a server or a client, and each uses a specific piece of code inside the application called a stub: a server stub or a client stub. This stub takes care of the communication layer: it handles serialization of the data and sends it to the known server. On the other side of the communication, on the client side or server side, the stub unmarshals the message and hands the execution flow to the actual service. That service does its work (a request to a database, for example), converts the result, and sends the data back to its stub, and the stub does the opposite actions: it marshals the response message and sends it back over some transport protocol to the origin of the request.

So how does it do that, and what lies behind it? That's what we can explore next. Again, about the alternatives: with plain client and server communication, like the picture I just tried to explain, the main idea for me is that you always need some library that is able to make the calls over the network. In the case of a REST API, where you have resources somewhere on the network, you would use undici, or an HTTP request library, or, in newer versions of Node.js, the built-in fetch to send data over HTTP. The dependency here is the HTTP library that can convert the data to the HTTP protocol and address where to go. The same goes for a browser: if your client is a browser, it also depends on some implementation of the HTTP communication layer. And the same basically goes for gRPC, but the difference is that gRPC provides this transport layer within the framework itself. So you don't rely on a third-party library for this communication, and you don't have to patch, for example, your Node.js version if you want to change the format; instead you rely on the gRPC framework itself.
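The stub flow described above (marshal, transport, unmarshal, handle, and back) can be modeled in a few lines of plain JavaScript. This is a mental-model sketch only: JSON stands in for protocol buffers and a direct function call stands in for the HTTP/2 transport, purely for readability.

```javascript
// The "real" service implementation, as it would live on the server.
const greeterService = {
  sayHello: ({ name }) => ({ message: `Hello, ${name}!` }),
};

// Server stub: unmarshal the request bytes, invoke the handler, marshal the
// response. Real gRPC does this with protobuf instead of JSON.
function serverStub(requestBytes) {
  const request = JSON.parse(requestBytes.toString('utf8'));
  const response = greeterService.sayHello(request);
  return Buffer.from(JSON.stringify(response));
}

// Client stub: marshal the request, "send" it, unmarshal the response.
// From the caller's point of view it is just a local method call.
function clientStub(request) {
  const requestBytes = Buffer.from(JSON.stringify(request));
  const responseBytes = serverStub(requestBytes); // stands in for the network
  return JSON.parse(responseBytes.toString('utf8'));
}

console.log(clientStub({ name: 'Andrew' }).message); // "Hello, Andrew!"
```

The point of the stub pattern is exactly this shape: the caller never touches serialization or transport, it just calls what looks like a local object.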

4. Introduction to gRPC Features and Extensibility

Short description:

Other alternatives to gRPC include REST, GraphQL, and SOAP. For low-level communication you can choose from various transport protocols, such as HTTP/3, TCP, UDP, WebSockets, and server-sent events. gRPC provides a client library that handles communication within the framework. gRPC uses protocol buffers to define services, which are concise and easy to express. It generates code for client and server communication, supporting multiple programming languages and environments. gRPC can be extended using plugins, enabling customization and additional functionality. It includes features like strong type definitions, code generation, and support for streaming. While gRPC does not have direct integration with Socket.IO, it can be used with WebSockets, making it a possibility. The focus of this workshop is not on extension points, but asynchronous and synchronous communication are touched on.

Okay. So, other alternatives: we already mentioned REST and GraphQL, and for people who saw it before, there is SOAP. I personally haven't used SOAP myself, so I can't talk about it with great confidence. For the transport protocol, for low-level communication, you can actually choose much more than just HTTP or HTTP/2; nowadays there is also HTTP/3. You can also choose other protocols like raw TCP or UDP, or, in the case of server-to-client communication, it's also very nice to use some kind of bi-directional stream, for example WebSockets or server-sent events. The point is that you always need a client library to do this communication for you. gRPC is not an exception: you will need this client, but the framework itself takes care of it for you.

Yeah, indeed, Bruno. SOAP is about XML definitions of services, but honestly I never tried it; that was before I started my career, I think. So let's talk about the features that gRPC provides. First, let's have a look at the code, our first touch of the protocol buffer format; we are going to explore it more in a bit. In this protocol buffer file we define, in a very concise syntax, what our service will do. We define the package (a package is just a namespace), and then, more importantly for us, we define the service, give it a name, and define which methods our service will have. With the keyword rpc we define a method, say hello, then we specify in its argument what type of data it receives as parameters, and we also specify what type of data it outputs. You may think this definition is hello-world only, and it is hello world indeed, but real service definitions are not much harder than that. Most services are very tiny, their definitions can be really small, and you don't need much to express what exactly your service does. Most service definitions will look alike, I would say.

Okay, so now that we have looked at the example definition and you have more of an idea what it looks like, what does gRPC give you? It provides a strongly typed definition of services; it uses protocol buffers for that, but let's talk about that a bit later. For client and server communication, like I tried to explain before, you need to use or generate code up front, so that's a required step, let's say. By the way, it can also be generated for you at runtime, and in most examples we will see in the second part today, it is generated on the fly when your server starts. Either way, gRPC generates this stub for you; that's what I wanted to underline here. As we already saw on the official website, a lot of programming languages are supported, and since 2015 they even included Node.js among the originally supported languages, so the support is quite extensive. It also supports lots of environments. Where did I open my link? Just a second. Okay, here. For the environments, that means we can use it not only for server-to-server communication; you can also use gRPC on the browser side, for example. I haven't tried that myself, but I know it's possible, and they have a tutorial for it. I think it shouldn't differ too much from what we have on the server side; I just haven't used that particular grpc-web dependency myself.

Then there are extension points, the parts with which you can extend your version of gRPC. You can design your communication, for example, to use WebSockets: in that case you adjust your gRPC integration with a plugin, specifying an intermediate layer which converts the data to a WebSocket format and then sends it over WebSockets. And not only that: you can extend it in any way you want, for example adding additional validations and so on. It's a very simple model, and I think the main reason is that the stub is generated anyway, which means you can register plugins on it, and the plugins will do more work if you need it. By default, though, you can just use it as a black box, and it will already be very good. It includes communication over HTTP/2, it gives you serialization, and it guarantees message ordering (that's what HTTP/2 itself provides). It also provides a way of streaming. Are you familiar, by the way, with the stream pattern? Type minus if you haven't heard about streams before. Yeah, Bruno's question: can gRPC be used with Socket.IO? It should definitely be possible, but I haven't seen implementations. You can attach an implementation for WebSockets and therefore Socket.IO as well; it is a possibility, I would say. But extension points are not what we are going to cover today, unfortunately. Yeah, and then synchronous and asynchronous communication.
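For reference, a minimal .proto service definition along the lines walked through above. This mirrors gRPC's canonical hello-world example; the exact names are illustrative rather than the workshop repository's own files.

```protobuf
// Minimal gRPC service definition in the Protocol Buffers language.
syntax = "proto3";

package helloworld;   // the package acts as a namespace

service Greeter {
  // rpc defines a method, with its input and output message types.
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;    // field number 1, used on the wire instead of the name
}

message HelloReply {
  string message = 1;
}
```

As noted in the transcript, real service definitions rarely get much more complicated than this: more methods and more message fields, but the same three building blocks (package, service, message).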

5. Introduction to JavaScript and Protocol Buffers

Short description:

In JavaScript, asynchronous calls are the norm, although synchronous-style usage is possible with more effort. The main concept is the client and server, treated as services, communicating through a layer generated by the gRPC framework. Protocol buffers define types and are used for high-performance data serialization. They were introduced before gRPC and are widely used in various technologies, such as Kafka.

So that's a bit different in JavaScript. For example, if we were discussing Java today, the Java environment, then it would be more straightforward where we use synchronous and where we use asynchronous calls. But in the JavaScript implementation of gRPC, we will be using callbacks anyway, and that means that in most cases it will be asynchronous by default. And I think we can still use it synchronously, but it will need more effort than just using the asynchronous style. We're gonna see it in a bit in a demo, I hope.

So one more time, the main picture for today is you have a client and a server. By the way, both client and server can be treated just as services in your environment — services that communicate with each other. And in order to do the communication, you will be using an additional layer generated by the gRPC framework for you. And the gRPC framework will take care of the network connection, of sending data, of marshalling and unmarshalling messages. And then you will just use this framework in your code directly.

So now let's go back a bit to this example, to the protocol buffer. Let's actually explore a little bit more what protocol buffers are. So like I said, protocol buffers allow us to define services, but actually the main purpose of protocol buffers is not services. The main purpose is to define types, and that's what this message keyword is responsible for. So the message is the type of data that you define in the protocol buffer format. And the main reason for the protocol buffer format is to have very performant serialized data in your application. It uses very similar approaches as gRPC, and the reason for that is that it was actually introduced before gRPC, and nowadays it has spread across different technologies. I can give you an example: in Kafka, you can just use protocol buffers as the message data type for communication.

6. Introduction to Protocol Buffers

Short description:

Protocol buffers are a well-known technology used for efficient data serialization. They convert data into a binary format, making it more efficient to transmit over the network. Protocol buffers require strict data typing, which somewhat contradicts JavaScript's dynamic nature. The protocol buffer compiler serializes and deserializes data and is required for using gRPC or protocol buffers. Fields in protocol buffers have rules, types, and tags that define their order and properties. Void types are supported but require a specific version. Packages and services are additional features in protocol buffers, allowing for extendability and customization. Compiler options can be used to specify output formats and plugins, such as protoc-gen-ts for TypeScript. Fields can be specified as required or optional.

So let's have a look at this message here. So we specify a message with a keyword, and that's our data type. Then we specify the name of it, and we can put in the fields that we're gonna be using in this data type. We can use primitive data types again. So for the fields, we have strings, we have integers, Booleans, and we give each field a name. And on the right side of this definition, we give a number. And this number is very important. It basically defines the position of a field inside your structure. And it is important because it's going to be used by the protocol buffer marshalling system — the serializing and deserializing machinery — to do its job efficiently. It actually relies on which field should appear after which in the binary data format. And even more important is that you can continue developing your message type after you declared it once. I can imagine you have version one of a Person, and then later on in the life cycle of the application you decide to introduce more fields. In this format, you need to be sure that you haven't repeated a number that you already used. So this is basically the ordering index of the field as it will appear in the message type. And one more thing here is repeated. We haven't seen that yet; repeated basically defines array types of data. So that will say: yeah, I will have that values field, it will be an array of integers, and it will be field number four. So that's how you read this definition.
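The message being described would look something like this (a sketch — the field names here are illustrative, not necessarily the exact ones on the slide):

```proto
syntax = "proto3";

message Person {
  string name = 1;           // tag 1: position of the field in the binary layout
  int32 id = 2;              // tags must never be reused once published
  bool verified = 3;
  repeated int32 values = 4; // "repeated" declares an array of integers
}
```

Renaming a field is safe because only the tag travels on the wire; reusing or changing a tag is what breaks compatibility.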

A little bit of history, again: provided by Google to us, it was also moved to open source somewhere around 2008. And yeah, they also have very nice documentation, very much like what we saw. So again, very nice documentation, and as you can see, since it's from 2008, it doesn't include Node.js. So by default, you don't have support for Node.js. But luckily you have support from the community, and if you look at this list of third-party protocol buffer plugins, you will be surprised by how much technology is already using it. Just for JavaScript, you can see around nine implementations of protocol buffer support. And even more, you can search for TypeScript and you will find the protoc-gen-ts library, which is most useful for TypeScript. And those are just the ones published in this repository — you can imagine there are many more than what I just showed here. So it's a very well-known technology, and it is used, again, as a performant data format. We can also say it's a binary format: protocol buffers will convert your data into binary, and therefore it will be much more efficient to transmit over the network. It will also be very easy to serialize and deserialize because of the strict ordering rules and the strict data types that you have to specify. And this, I think, contradicts a little bit with the JavaScript nature, right? Because in JavaScript, we have everything dynamic. So what protocol buffers and gRPC tell us to do is that we need to be strict about the data types we use. And we saw that in both the protocol buffer data type format, like the message here, and in the service definition, where we specified the data types that would be the request type and the response type. So that's the key to the efficiency of these frameworks for us. Yeah, what else can we say about protocol buffers?
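To make the "binary and compact" claim tangible, here is a hand-rolled sketch of protobuf's varint encoding (little-endian base-128, with a continuation bit per byte). This is not the real library — protobufjs or google-protobuf do this for you — it's only to show why small integers take one or two bytes on the wire:

```javascript
// Encode a non-negative integer as a protobuf varint.
function encodeVarint(n) {
  const bytes = [];
  while (n > 0x7f) {
    bytes.push((n & 0x7f) | 0x80); // low 7 bits, continuation bit set
    n >>>= 7;
  }
  bytes.push(n); // final byte: continuation bit clear
  return Buffer.from(bytes);
}

// Field key for e.g. `int64 time = 1` with wire type 0 (varint): (1 << 3) | 0
const key = encodeVarint((1 << 3) | 0);
const value = encodeVarint(300);

console.log(key);   // <Buffer 08>
console.log(value); // <Buffer ac 02> — 300 fits in two bytes
console.log(Buffer.byteLength(JSON.stringify({ time: 300 }))); // 12 bytes as JSON
```

So the pair "field 1 = 300" costs three bytes in protobuf versus twelve as JSON — that's the whole trick behind the size difference we'll measure in the demo.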
Well, yeah, it's the .proto format, if you haven't met it before. We will need the protocol buffer compiler for some of the demos. Again, you don't always need it, but the protocol buffer compiler is responsible for this layer that is gonna be used inside the application to serialize and deserialize data. And in the case of gRPC, which sits on top of protocol buffers, we can think of it as also being needed for the transfer protocol. So the protocol buffer compiler, the protoc tool, is what you will need to install in most cases in order to use gRPC or protocol buffers. Let me check — I think we haven't missed anything so far. Yeah, this is the definition of a field again. So we have a rule, like optional. We have a type, from a preset of types — the most important ones are there, like strings, booleans, integers, floats, and so on. And the tag, the index here on the right, which shouldn't be repeated. So that's the order of the field in your message. Yeah, more on that: we don't have a void type by default. Actually, there is support for a void type, but we need a specific version to enable that. So you may see in some service implementations that they define the void type themselves inside the protocol buffer definition. Packages and services are something additional. So it's not about the data type, but about services. We specify here the syntax version — the version of protocol buffers that is gonna be used — and the package name that contains our service. Again, it's extendable, so you can add your own plugin to it; you can have different compiler options. If we can have a look already at the demo — and by the way, we're almost there — so in here, I specify the --js_out option, and I specify that I actually wanna use CommonJS with the binary format. Those are compiler options, and these compiler options relate to this list here.
So for example, we can install protoc-gen-ts, and we can specify that we wanna use this plugin here. Then we will get TypeScript output, and it will produce typings for us. Yeah, and you can specify fields as required; by default, they will be optional. And yeah, there is more than that, but in general it's just as simple as that. So you have message as the data type definition, and that's sort of what you're gonna need here.
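The compiler invocations being described would look roughly like this (a sketch — output paths are illustrative, and note that older protoc versions bundle the JS generator while newer ones need a separate protoc-gen-js plugin, which is why the workshop pins 3.19.3):

```sh
# Plain JavaScript output (CommonJS modules, with binary serialize/deserialize)
protoc --js_out=import_style=commonjs,binary:./generated prices.proto

# With the protoc-gen-ts plugin installed via npm, TypeScript typings as well
protoc \
  --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  --js_out=import_style=commonjs,binary:./generated \
  --ts_out=./generated \
  prices.proto
```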

7. Demo and Downloading Prices

Short description:

Let's proceed with the demo. We will be working with the same repository, where the demo materials are structured inside the docs, demo, proto folders. I will switch to my VS Code, picking up a chat question about tokenization of embedded web services along the way. Before we begin, let me show you the package.json file, which includes the necessary scripts for downloading prices and generating protobuf. However, the API we initially planned to use is not working, so we'll switch to an alternative. We may need to create the protocol buffer types manually, but that should not be a problem.

All right, let's go for the demo. The demo, by the way, goes to, I think, the same repository, yes. It's structured inside — the docs, demo, proto folders — and there you will find a package, and we should find the same scripts here.

Cool, then let me switch to my VS Code now. Demo, demo, demo, yeah. "Trying tokenization embedded web services" — Bruno, what do you mean? If you can elaborate a little bit.

In the meantime, here is my repository. And this is my package.json. The package.json has these scripts: download prices, generate protobuf, runtime. And then we will explore how to convert from JSON to protocol buffer and how to run a simple server and client on top of that. So first I need to download prices, and that's where I already had a problem before starting this workshop. Actually, this API doesn't work anymore — it was not responding before the demo. So let's switch to another one that I found while preparing. It will give us a slightly different response, so we will need to make the protocol buffer types from scratch, but that should be fine. I hope I will succeed.

8. Generating Protocol Buffer Types

Short description:

The first part is to get the data and save it into prices.json. We adjust the format and define the protocol buffer for the object. The object has a prices field, and each item in the prices array is of type price. We define the type price with its fields. We also adjust the services. Next, we generate the types of data from the downloaded prices using the protocol buffer compiler tool.

A tokenization service for the client — interesting. Okay, interesting, Bruno, interesting use case. I think it sounds related to what we're doing in the workshop, but they're slightly different, so it might be a good introduction in this case.

Anyway, so this first part is just to get the data and save it into prices.json. Let's hope that will work — I just tried it before the demo. Okay, download prices. So it's executed and it's saved as something like prices.json. Okay, so that's good.

I see it's already using a different format than I had — format document, yeah, okay. So I will need to slightly adjust it, because I will actually be using the data array. So now, when we have the JSON, let's create a protocol buffer definition for this object. Let's see, I have prepared this one, prices.proto. That was my previous version. And then we need to, ah, yeah, we need to basically define this message so it will be recognized correctly.

Well, first of all, I think we will need to slightly adjust the data, so I don't wanna actually deal with this mess here. Let's just leave one object. Let's do the prices, like that. So that's gonna be our object. And then I think that's gonna be fine like that. So, a short adjustment of our data here. And now let's define what we see here. So we have an object which has a prices field, and each item of this prices array is of type price. So let's define our type Price here. It has a time. So let's define a time — let's use some of the types that we've already seen. Don't wanna mess up with that. So that's gonna be our time; it's gonna be tag number one. Then we have a high field. This is already a float, so let's use a float here, with tag two. We will need to redefine that. Same goes for float for low, right? It's gonna be low with tag three. Same goes for open with tag four. And then the same goes for volumefrom and volumeto. Okay, let's actually copy them all — a little easier. Okay, so it stopped at open, and I will need to put something like that. Okay, good enough. So volumefrom is gonna be a float. Volumeto is gonna be a float. Close is gonna be a float. And then conversion type and conversion symbol, those are gonna be strings. So let's just change it like that. And we will need to do the numbering of these fields: five, six, seven, eight, nine. I think that's legit like that. So, okay, again: time, high, low, open, volumefrom, volumeto, close, conversion type, and conversion symbol. That sounds good to me. I will also need to adjust my services slightly. Like I said, this is gonna be a really raw demo. But first of all, let's try the first steps. The first step was to download the prices — we did that. So now let's try to actually generate the data types from it. For that, I will need to execute the protocol buffer compiler tool.
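The message built up step by step above would end up looking something like this (a reconstruction — the tag numbers follow the order chosen in the walkthrough, and the exact casing of field names may differ in the workshop repo):

```proto
syntax = "proto3";

message Price {
  int64 time = 1;             // unix timestamp of the candle
  float high = 2;
  float low = 3;
  float open = 4;
  float volumefrom = 5;
  float volumeto = 6;
  float close = 7;
  string conversionType = 8;
  string conversionSymbol = 9;
}

message Prices {
  repeated Price prices = 1;  // the "prices" array from prices.json
}
```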
I actually downloaded it just before this workshop — version 3.19.3, not the latest one, but I needed it in order to make it run with my previous configuration. So be careful: in newer versions, you will need to use a different set of arguments here. So what we do, we say: hey, protocol buffer compiler, I use the prices.proto to generate this stub code that I can use inside my application. Let's do that, I would say: npm run generate protobuf runtime. And of course you need to do npm install first. I will be doing it from vendor, I guess. Yeah, it's generated now, it should be fine. Yeah, now it's executed without errors.

9. Protocol Buffer Transformation and Serialization

Short description:

The protocol buffer compiler produces code for an API that we can use in our application. We define a price type inside our Bitcoin prices namespace and get methods for serializing and deserializing messages. The next step is the protocol buffer transformation, which uses the generated prices module to convert JSON data into protocol buffer data types. We create a new object for each price and use setters to specify the field values. The script then serializes the prices object in the protocol buffer format, writes it to a file, reads it back, deserializes it, and produces the JSON output. However, there is an issue with the setConversionType function that needs to be fixed.

This, by the way, shows the exit code of the latest command. So, there was no error code, just zero — it executed successfully. Yeah, I will show it a bit nicer. So, in here, you can think of it as: if you have installed protoc, for example, with the brew package manager, you would just be using protoc, but in my case I put this version inside the vendor folder, so I use a particular version of the protocol buffer compiler. And it produced me the code here — it produced this prices_pb.js. Let me show you, so you believe me that it actually happened: generate protocol buffer, and indeed, it appeared in this prices_pb.js file. And if we look in here, we see that it's a slightly ugly interface, because it's still using an older style of Node.js libraries, but what it does, in a nutshell, is just produce the code for an API that we can use inside our application. So we define here a Price type inside our Bitcoin prices namespace — this is all generated for us — and then we have some methods that can help us with serializing a message and deserializing it. Yeah, I'm wondering actually if that will all work with my demo, because it did work with the previous JSON, but for this one I will probably need to slightly adjust it. If something goes really wrong, I think we will skip some parts of it. So next part — the next part is the protocol buffer transformation. I have a third step for it: node protoc.js. Let's have a look at what it does. protoc.js — it's a very simple script again. So we just use this generated API, prices_pb. It takes our prices.json, right? And then it will try to go through this JSON data and convert it into the protocol buffer data types. So for that, we're gonna be using the Prices type. It should be prices — is it prices here? Yeah, prices are here, you see? This is our Prices object. So we will create these prices.
Then we will go through the prices data, and we will create a new Price object. And then we will need to fill in the specified fields — for each field, we will need to put it in with a setter. That was, again, generated for us from the definition that we gave it. Like, for example, setTime — it should be there. Let's again copy these details here. Okay. I hope it will all work, but you know, it's a very raw demo, so something might not work correctly. So again, we're gonna be using set. I will comment it out or delete it. And then I will just do something like — let's actually do it like that. And then I will copy setDate, like that. Interesting. I think it will fail somewhere, I would guess. And I will need to put in the data — what exactly I wanna put inside each method, right? Let's double check, by the way. For example, we have these methods, like setVolumefrom — well, looks legit to me, it should be working, I would say. And then price data. The price data here is just the JSON, right? So we go through the prices JSON and we just modify it slightly. Actually, we could skip modifying it, because it was okay already. So we go through the prices and we get — yeah, the price data, right? That's gonna be our price data. And then I will just need to take the field and put it there. Let's run it once, and if it doesn't work, then I'm sorry for that — I will try to fix it, maybe while Andrew is talking. So for that, I will need to do something like price data, like that. Yeah, so we take the field from our JSON and we put it in with setTime. Let's see if that would work. And then we add all this data into my prices object, which actually contains the framework's Prices object inside. Let me double check that — prices. Okay, yeah, looks fine to me, add prices. Then, what my script does: it serializes this prices object for us in the protocol buffer format and writes it to the prices file.
I will rename my existing file. This is cool. Then it will be using the prices to read it back, it will deserialize it and produce the JSON again. Let's see. Let's hope it will work. Any guess? Do you think it will work, yes or no? It should work, Alex. No, it doesn't. setConversionType, it says, is not a function. Let's check quickly — maybe we can fix it. setConversionType is not a function. Interesting. setConversionType should be a function.

10. Demonstration of Data Type Format Difference

Short description:

It's about the generator that modifies the definition of the method setConversionType. The object is serialized to the protocol buffer and then put into the prices file. The binary format is almost unreadable, and the size is significantly smaller compared to the original JSON format.

Where do I use it? It's setConversionType. Maybe... Yeah. So it's about the generator, you see — it actually doesn't use the uppercase format, so it slightly modifies the name of the method. So it should be like that, and symbol probably like that. Yeah, let's try one more time. Wow. That's amazing, guys. That just took me five minutes, and I hadn't tried it before. Nice, nice. So it read this JSON object that we received before, we serialized it to the protocol buffer format, then we put it into the prices file, and then we read this file and output it back, with deserialization from the binary format. So let me show you the prices file that we produced, this prices binary. If we open it with a text editor, you can see that this is indeed binary. Almost — you can see that some strings are still readable directly, because those fields were of type string and it didn't transform them when serializing the data. But other things have definitely been serialized with some encoding, so it's almost unreadable here. And let me do one more thing. If we do ls -la, for example, and compare the sizes: this is our prices file at 1006 bytes, compared to our original prices.json, which is almost six times bigger. Well, not six — five times bigger, I would say, right? So just this simple demonstration already shows a big difference in the data format.
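A rough stdlib-only illustration of the size gap just measured: the same four numbers as a JSON string versus as fixed-width binary. (Real protobuf output is even smarter — varints and field tags — this is only to make the JSON-versus-binary difference tangible; the sample values are made up.)

```javascript
const price = { time: 1686873600, high: 26123.45, low: 25480.12, open: 25990.01 };

// As JSON: every digit, quote, and key name costs a byte on the wire.
const asJson = Buffer.from(JSON.stringify(price));

// As binary: one 32-bit int plus three 64-bit doubles, packed back to back.
const asBinary = Buffer.alloc(4 + 8 * 3);
asBinary.writeUInt32LE(price.time, 0);
asBinary.writeDoubleLE(price.high, 4);
asBinary.writeDoubleLE(price.low, 12);
asBinary.writeDoubleLE(price.open, 20);

console.log(asJson.length, asBinary.length); // 66 28
```

Scale that ratio over a few hundred candles and you get roughly the 5x difference seen with `ls -la`.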

11. Creating Server and Client with gRPC

Short description:

Next step is to create a server and a client using the gRPC proto loader and gRPC-JS. We define the service and its methods: list, listStream, and get. The list method retrieves all the prices from the prices.json file, while the listStream method sends a new price every half a second using the streams notation. The get method returns the first price found by date. We create a server that implements the HistoryData service and specify that it runs on port 8001. We start the server and then create a client that interacts with the service. However, when running the client, we encounter a 'not found' error, which we identify as an issue with how the data is nested in prices.json.

So what's gonna be next? The next step is gonna be to create a server and the client. For that, I actually don't need the protocol buffer compiler, if I'm not mistaken — so I don't need to do those first steps to create a server. That was just for demonstrating protocol buffers. Let's define the server, and the server will serve this data to us. For that, I'm gonna be using the dependencies gRPC proto loader and gRPC-JS — those are sort of official for gRPC. Then I specify that I'm gonna be using the protocol buffer file definition, prices.proto. I'm loading it — so this is a runtime conversion of the protocol buffer service definition into my running server — and then I need to define the service itself. So let's quickly open this guy again. In this proto file, beside the data type, we also define the service. This is the service HistoryData. It has three methods: get, list, and listStream. I probably will modify some things here, because again, it was using other data before, so probably some of those will not work, but let's try to make it work as we go. So now I need to specify what my service will be doing. This is the definition, right, and now in my implementation I will be defining how it's gonna do that. For that, I define three methods — list, listStream, and get — in here, and that contains basically the implementation. For example, list will just be giving back all the prices that we've got from prices.json. So that's straightforward. Actually, let's try it after I explain listStream. This is the part about streams — and no one said, by the way, that they don't know this pattern. So a stream is a type of communication that sends small chunks of data, and the client can receive these chunks one by one. And then you have the Node.js built-in stream module, which can operate on these chunks while receiving them. Protocol buffers and gRPC therefore support this format.
And in the case of listStream, that's what we do. Every half a second, we're gonna be sending a new price, and we're sending it using the streams notation, as you can see — I'm just using call.write. So the call is what my gRPC framework uses for the request/response objects, and in this case it has write and end methods, like the Node.js built-in streams have. So we implement streams like that. And then, yeah, the get — get by date — that method get takes a date, and then it will try to find prices by date. That will not work here, so I will just return the first one. I guess found is gonna be prices first. Let's leave it like that. And yeah, it looks good to me. So we defined three methods, and then we need to say: okay, I wanna declare that my server — I create a server here — will be implementing this service, HistoryData. And we can double check that we have it. HistoryData — it's not defined here, but I think in the case of our dependencies it will be generated for us at runtime. So this service will be using Get, List, and ListStream for its implementation. And yeah, then we just say that it starts on port 8001, and that's it. Here we specify that we don't use any certificate, because we run it locally, but we know that gRPC also supports certificates — it's really nice out of the box, you can add additional security to your services. Cool, we're almost at the end of it. So let's hope everything will work. First, I will start the server: npm run gRPC server. Well, it says it started. I cannot be happier than now. Well, actually I can be, if my client works too. So let's quickly have a look at the client, because it doesn't differ too much from the server side. Again, we're using the proto loader, gRPC-JS, and the same prices.proto definition that we saw before. But in this case, we're not gonna define the service, because it's already running.
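The service block in prices.proto that this walkthrough implements would look roughly like this (a sketch — the request and response message names are illustrative, since the transcript doesn't spell them out):

```proto
service HistoryData {
  // Unary calls: one request, one response
  rpc Get (GetRequest) returns (Price);
  rpc List (ListRequest) returns (Prices);

  // Server streaming: the server pushes Price messages one by one
  rpc ListStream (ListRequest) returns (stream Price);
}
```

The `stream` keyword on the return type is what makes the generated `call` object writable on the server (`call.write`/`call.end`) instead of a plain callback.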
But we say that I'm gonna be using a client for the service, so I create a client. I say that it's gonna be implementing HistoryData. I'm not gonna be using any additional certificates, because I'm running it locally. And yeah, let's see if the first one, for example, would work: client.get, client.list, and then I have a specific version for our runtime of streaming. Let's see — I'm really excited to see if that will work. npm run gRPC client. No, it doesn't work. I got 'not found'. Wow, that's interesting. Well, at least I got an error, right? That's what my server was saying here: if it doesn't have the prices, then it will... okay, I know what went wrong. I need to say here prices — I think it was nested inside my prices.json.

12. Demo and Cryptocurrency Converter

Short description:

In the demo, we used the protobuf compiler to serialize and deserialize data. We also started the server using the gRPC-JS and proto loader libraries. Dynamic generation of the service definition is possible at runtime, but it results in a slower start for the application. Now, it's time to move on to the cryptocurrency converter. To get started, make sure you have installed protoc and prepared your environment by running 'yarn install' and 'yarn lerna bootstrap'. Our repository contains a folder called proto where all the proto definitions are stored.

So again, this was because of, yeah, the conversion. Let's see. Oh, I need to restart my server now. Okay. gRPC server. And the client. Oh my God. Wow, I couldn't be happier now. Definitely. Let's see. So, get works. Let's do a quick check on list. So, list will get the full set of prices, I hope. And it does. Do I have jq? No, I don't, so I cannot format it for you in the console, unfortunately. The last one is gonna be receiving a stream of data — a stream of my prices. In here, I use the CommonJS format, so that's why I need to specify an asynchronous function, and then I will be using for-await to get every price. So this nice construction — I just used an async function because I wanted to use this nice for-await-of construction. Okay. Wow, Andrew, high five! Ha, ha, cool. Nice, nice, guys. That completes my demo, I guess. Let me quickly switch to the slides. Quick question: I'm not sure if you mentioned this — is it also possible to send binary? Like, I want to send a file, for example, through gRPC — is that achievable? Wow, I don't know, actually — do you know, Andrew? I haven't seen that before. I guess it would be possible, but... I mean, I think there is also another type, called bytes or binary, I guess, or something like this. So I'm just reminding myself — we did something like this, like file uploads, using gRPC. I see. I haven't come across it before, so I'm not sure. Cool. Anyways. What we saw so far was basically using the protobuf compiler to serialize and deserialize data. And I said the demo worked almost on the first attempt. Then we started the server, which was using the gRPC-JS and proto loader libraries. And as you could see, we didn't have to generate these protocol buffer compiler assets up front, because that is also possible at runtime.
In this case, you get a slower start for your application, but it works — it's just a matter of a longer startup, which is also important, by the way. So that's dynamic versus static generation of the service definition. The stub — that's what I called it before, right? This generated code is called a stub. Okay, and we also saw streams. So, Andrew, I guess it's your turn. For me, the demo looked like real low-level coding — like, you know, you jump into C++ and start doing some magic in the middle of your protoc output. Nice, nice. So yeah, I think now it's time for the cryptocurrency converter. Finally, let's get started. Hopefully you've got everything installed and you've checked the repo, which is a monorepo — not just a simple server example. For now it's just about a currency converter; to be honest, we added the word crypto there because some years ago crypto was a hype, just to increase our chances of getting to the conference — and, as you can see, that worked. Yeah, so finally crypto will be implemented via the CryptoCompare provider. So yeah, to go through all these examples, you have to have protoc installed and your environment prepared. So basically, yarn install. I did it before, but yeah, hopefully it's not gonna break something. Not sure what's going on. Maybe I just need to return my dollar here. Okay. No. Let me just close this and open a new one. Cool. Yarn install — nothing is gonna happen for me, as I already installed everything, hopefully. Yeah, another command: yarn lerna bootstrap, which basically rolls everything out for Lerna, making sure all the packages get their dependencies installed. Let's actually have a deeper look at what we have in our repository. So we have basically a folder called proto, where all our proto definitions are stored.

13. Monorepo Structure and Proto Files

Short description:

We have a monorepo with multiple projects and services managed by Lerna and yarn workspaces. The monorepo structure includes common packages shared across different packages and services. Lerna allows us to build and manage dependencies efficiently. We can use yarn lerna run to build all packages, or specify a scope to build specific packages. The goal is to build a gRPC service that converts currencies using multiple providers. The providers, such as the European Central Bank, have their own gRPC services. We can add as many providers as needed and switch between them. The proto files define the currency converter service and the ECB provider service, which includes the get rates function.

We're going to look at that a bit later. We have a folder called packages, which means this example is a monorepo. That means we put multiple different projects, services, and utilities into one thing, and that one thing is managed by Lerna and yarn workspaces. I'm going to show a bit later how to configure it. In general, we have common packages, which contain something shared across the different packages in this repository, so we can add any of these packages into another package and start using it straight away. We also have services. This is where we keep all our final applications, which we run to perform the whole system's work. For now we have just gRPC, but here you could also add something like REST or whatever other kind of applications you have. The whole structure is pretty much something we created ourselves, so there is no strict convention or requirement to do it this way. There is a file called lerna.json, which describes where the packages are located. It shows there are two paths: packages and services/grpc. If we were going to have another REST one, we would add it here; we don't have it, so we don't add it. Another important bit to start a monorepo was adding this annotation in package.json, the part about workspaces. As I said, under the hood we use yarn workspaces. These are tools that allow you to not publish any of these packages to npm or any package registry, but just keep them locally, and they let you link all these packages to each other, so you can install them in other packages. This contains pretty much the same list as we have seen with Lerna. Let's have a quick look at what Lerna can give us.
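The two pieces of configuration mentioned here — a lerna.json listing where packages live, and the workspaces entry in the root package.json — could look roughly like this (paths and fields are assumptions based on the description, not the repo's exact files):

```json
{
  "npmClient": "yarn",
  "useWorkspaces": true,
  "packages": ["packages/*", "services/grpc/*"]
}
```

And the matching root package.json:

```json
{
  "private": true,
  "workspaces": ["packages/*", "services/grpc/*"]
}
```

With this in place, yarn install hoists shared dependencies to the repository root and symlinks the local packages into each other's node_modules, which is exactly the behavior described next.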
To call Lerna, we do yarn lerna, because Lerna is installed at the top level. As you can see, we have packages here, and every package has its own package.json. However, there is no yarn.lock per package, because the yarn.lock is located at the top level. The idea of this monorepo is that when you install all the dependencies, they are installed at this higher level. So, for example, if you use some specific package in one package and then in another, it will be installed just once up here, and inside the packages you only see symlinks to it. There is nothing really in the node_modules here; just binaries get copied. That pretty much optimizes your disk space when you install all of these. So, yarn lerna run: the interesting thing is we can do yarn lerna run build, which will build every single package. But we can also specify a scope, so we can define exactly what we want to build; in our case it's going to be common, and then only the packages matching that scope get built, because I specified it in the scope, and the names of these packages can be found here. We have a convention: all the common packages start with common, and all the gRPC ones, I guess, start with grpc, but let's check. Yeah, that's right. Okay, let's move on. Next, let's have a look at the picture of the services we are trying to achieve. Hopefully it's visible on your screen. The main idea, as I said: we are trying to build one service which uses gRPC and allows us to accept a simple command to convert one amount of currency to another and receive the conversion.
On the back side, we also have a provider — multiple providers, in our case. One provider is the European Central Bank provider, a bank which we call over XML: we get the XML response, and then on the backend we convert it to gRPC and send it back to the converter. The idea here is to have multiple providers implementing a certain protocol, so we can add as many providers as we want in this chain, and then our converter can grow with as many providers as we add and switch between them depending on some decision we make. Every provider is a gRPC service as well, and underneath there is some provider API, a third-party API which we won't consider right now, but this is what we actually call inside the gRPC service. So now we know the design of what we are trying to build, and we can go to our proto files and see how we can achieve that. The proto files are already created, like many things in this repository, but we still need to check them. First of all, we have the currency converter; Alex already explained the protocol itself very well, thank you for that, Alex. We have a service called CurrencyConverter with one simple method called convert, which accepts a request and returns a response. This is the definition of the request: we define what the sell currency, the buy currency, and the sell amount will be. It would actually be interesting to later add a property called buy amount and have an option to switch between them, but that could be a task for later. Now let's have a look at the ECB provider. This is a provider we also already implemented, and it has a certain function called get rates.
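As a sketch of what the converter's proto definition could look like — field and message names here are assumptions based on the description above, not the repo's exact file:

```proto
syntax = "proto3";

package currencyconverter;

// One service, one method: convert a sell amount into a buy currency.
service CurrencyConverter {
  rpc Convert (ConvertRequest) returns (ConvertResponse);
}

message ConvertRequest {
  string sell_currency = 1;
  string buy_currency = 2;
  double sell_amount = 3;
}

message ConvertResponse {
  string buy_currency = 1;
  double buy_amount = 2;
}
```

Adding the "buy amount" option mentioned above would mean extending ConvertRequest with an extra field and deciding server-side which direction to convert.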

14. Importing Proto Definitions and Building Libraries

Short description:

In this part, we explore the process of importing proto definitions and building them into TypeScript and JavaScript files. We use a tool called zx, created by Google, to write bash code within a JavaScript application. The protoc plugin generates TypeScript files, and the TypeScript compiler creates the JavaScript output. We also introduce the Hygen tool for templating applications. We demonstrate the creation of a new common library called Logger and implement its methods. Finally, we use the yarn lerna run command to build the Logger package.

And by the way, here is another interesting thing, which I'm not sure if Alex showed or not: proto files allow you to import other proto definitions, so here we imported from currency provider. Here is the currency provider, with the basics of how our abstract currency provider should look: it's going to have get rates, with a request and a response for it as well. The implementation we still have to define within each service: another service with the same method, named the same way, defining the same parameters. Looks nice. So let's try to build. We have proto here, and we also have a package called common grpc. The interesting thing about this package is that it does pretty much the same as Alex was doing before manually. First of all, it has a server, our abstract gRPC server, which we can use for simplicity, so we don't need to write all this code manually on every server we create; we can just use this one. The other thing this package does is build these proto files into TypeScript files, and then JavaScript files. If we go here, we can see all these commands. There is a utility called zx, a tool created by Google which allows you to basically write bash code inside a JS application by using a tagged template symbol. All of this used to be written in bash, but for the workshop I thought it would be interesting to have it written purely in a JavaScript way.
So anyway, we run this protoc command with the protoc-gen-ts plugin, which generates everything into TypeScript, and then on top of this we run the TypeScript compiler, which generates all the types in TypeScript first, and then there is a dist folder containing the same in JavaScript. If we go into any of these: the index one is just manually created to export everything, and this one here is generated by our build, pretty much the same as Alex showed before. Let's close all tabs, save some space, and figure out where we're at. For now, let's just try to build and see what happens: yarn build. Another nice thing is that, together with actually executing the command, zx also prints it here, which is really nice. All the folders appeared again, so we are good to go. Now let's try to create our new common library, and to do so I'm going to explain a bit more about tooling. We use another interesting project called, let me find it, Hygen. I'm not sure how to pronounce it correctly, but it's a really nice tool for templating your applications. For example, to create a new common package we could just copy and paste something from here, but then we would need to do a lot of cleanup and stuff like that. Instead we can create templates of how a new package should look. Here's the template; you can see it has a specific format: which file is going to be generated, then the definition of the file, and then some templating of what should be inserted based on the prompt. So let's try this out. On the top level, we run yarn bootstrap common. What is the library name? Let's create a library called logger, which is a very common thing we need in any project, I guess. My name is Andrew. Yeah.
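Hygen templates are plain files with a frontmatter header that says where the rendered output goes, followed by the file body with EJS-style placeholders filled from the prompt answers. A hypothetical template for the common-package generator might look like this (path and fields are illustrative, not the repo's actual template):

```
---
to: packages/common/<%= name %>/package.json
---
{
  "name": "common-<%= name %>",
  "version": "0.0.1",
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc"
  }
}
```

Answering the "library name" prompt with logger would render this into packages/common/logger/package.json, with one such template per generated file.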
It shows us that three templates were loaded from the templates folder, and they were all created here; we can now see a new folder added called logger. So let's implement this logger and see how it works in general. The logger is going to have some methods: debug — this is Copilot trying to help me, unsuccessfully — taking a string, but let's see. Maybe it will be able to help me with the next one. Okay, nothing is happening. Let's give it another one: info. Yeah, nice, this is what I wanted to see, finally. So we have our logger implemented, I think. What about the return type? It's not going to return anything, so that's fine. Now, where are we? We are at the top level, so we can do yarn lerna run build --scope=common-logger. Not as simple as just typing the build command, but that executes pretty much the same as if you go to the logger folder and run build there. So it's been built. Okay, that's nice. We have a new package.
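A minimal logger like the one sketched in the video could look like this in plain Node.js (class and method names are assumptions based on the narration):

```javascript
// Minimal leveled logger: each method prefixes the level and the
// logger's name, then writes to the console.
class Logger {
  constructor(name) {
    this.name = name;
  }

  // Build the final log line; kept separate so it is easy to test.
  format(level, message) {
    return `[${level}] ${this.name}: ${message}`;
  }

  debug(message) {
    console.debug(this.format('DEBUG', message));
  }

  info(message) {
    console.info(this.format('INFO', message));
  }
}

const logger = new Logger('ecb-provider');
logger.info('ECB provider is running');
```

In the monorepo this would live in packages/common/logger and be re-exported from its index file, so other packages can simply import it.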

15. Injecting Package into Existing Service

Short description:

Let's inject the package into an existing service, like the ECB provider. We can manually add the package or use the 'lerna add common-logger' command with a scope. After adding the package, we can import and use the logger in our server. When running the ECB provider, we encountered an error, but after fixing it, both the common package message and the logger message were shown. Now, let's create another service and explore its implementation, including the tests, server, index, constants, and get rates. The server is created on a specific port and started.

So let's now try to go to an existing service and inject this package into an existing application. We have, for example, the ECB provider, right? How do we do this? Here Lerna can help us again. We are still free to go here and basically manually add this package, like that, but instead there is another Lerna command; I think it's lerna add. So: yarn lerna add common-logger, and then we also need to define the scope, the gRPC ECB provider. It works through the yarn machinery behind it, but basically it does pretty much the same thing; it just adds the dependency here. Right after that, we can use the new package we just created in our project. So let's try to use it in the server. Import logger. Yeah, thank you. And here just log something: logger.info. Okay: ECB provider is running. Almost running, who knows. So let's try to run the ECB provider: yarn start. Hopefully it's going to work like Alex's demo, but it's not. Let's see what it says: cannot read properties of undefined, reading info. Maybe that. Yeah, I think so, because I basically forgot something in the previous example; it should be an export. Let's build it one more time: run build, so we rebuild it. Okay, it's done. Close this, and here we really want to use the logger. Oops, rebuilding one more time from the top. It's done. Trying to start one more time. Okay. So we have two messages shown. One message is coming from the common package I showed you before, the common gRPC one; it has a server, and the message is in there somewhere. Yeah: server started, it's here. Let's close these. And also, the new message from our logger is shown. Yay! So now let's try to create another service, not a common one, but a service from here.
And by the way, why does it show an error here? What does it say? Cannot find module or its corresponding type declarations. That's weird, because this is the same module from our package, right? And it's shown. Maybe I removed it, not sure. No, it's here as well. But anyway, it works. So let's now try to create another provider. First of all, let's explore this provider a bit more just to understand what we have inside. We have tests, which we will take a look at later on. We have server, index, constants, get rates. Get rates is purely the implementation of our logic. Let's have a quick look at the server; we've already been there. We create a server on some specific port, we also define the proto definition, the server name, and the package name, and then we can start our server.

16. Creating Crypto Compare Provider and Testing

Short description:

We created a new provider called crypto compare and made the necessary changes to the server and proto files. We also fixed a test and ran it successfully. Next, we started the ECB provider and the currency converter on separate ports. We used gRPCurl, a tool for calling gRPC services from the terminal, to test our services. We encountered an error message indicating that the currency conversion is not supported and realized that we were calling the wrong method, then corrected it. Finally, we called the convert method, which requests rates from the ECB provider and performs the conversion.

And here is the important thing: together with the server creation, we also need to create the definitions and declare that the method from the proto will be implemented by the method we just created here, get rates. So let's have a look at get rates, get ECB rates. Here we just fetch the third-party API, which is the ECB provider's daily XML; then we parse the received data and convert it to the format we defined in our proto schema. Pretty much like that. Let's also have a look at the test. Here in the test, the interesting thing is that we use node-fetch and we mock it: we mock the implementation of what is going to be returned, so we don't hit the real provider every time. And here's a very simple test: we define our test server with some test specification we created, we start this server at the beginning, stop it at the end, and then we just call this get rates method. Let's quickly try it: yarn test. It started, okay. It's done and executed. So now let's try to create our new provider, the cryptocurrency provider. This time let's just copy for simplicity and call it crypto-compare provider, okay. So we copied it. There are a few touches we need to do manually. First of all, we need to change the name to crypto compare, then close all tabs and go to src. Here we're going to have get rates, but nothing like this; this is just not the right method, so I'm removing it entirely. Instead we just have a base currency, which comes from constants; let's change it to USD, so the base currency is USD for CryptoCompare. Here we're going to have just an example response from the new currency provider: an example exchange rate, the currency is BTC, and the rate is here.
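The ECB daily feed mentioned above is an XML document whose rate entries look like `<Cube currency="USD" rate="1.0923"/>`. A rough, dependency-free sketch of the parsing step could be the following — regex-based for illustration only; real code would use a proper XML parser, and the function name is an assumption:

```javascript
// Parse ECB-style daily-rates XML into the { currency, rate } shape
// that the proto response expects.
function parseEcbRates(xml) {
  const rates = [];
  // Match <Cube currency="XXX" rate="N.NN" .../> entries.
  const re = /<Cube\s+currency=['"]([A-Z]{3})['"]\s+rate=['"]([\d.]+)['"]/g;
  let match;
  while ((match = re.exec(xml)) !== null) {
    rates.push({ currency: match[1], rate: parseFloat(match[2]) });
  }
  return rates;
}

const sample = `
  <Cube>
    <Cube currency="USD" rate="1.0923"/>
    <Cube currency="GBP" rate="0.8591"/>
  </Cube>`;

console.log(parseEcbRates(sample));
// [ { currency: 'USD', rate: 1.0923 }, { currency: 'GBP', rate: 0.8591 } ]
```

Because the fetch is mocked in the tests, a function like this can be exercised against a canned XML string without ever calling the real provider.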
We're not going to implement the whole provider right now; we can just fake it for now, just to see whether something new added to this ecosystem will work at all. So we created this, and we also need to go to the server and change this ecb-provider everywhere, basically in a few places, to crypto-compare. The crypto-compare provider proto was also already created before; it looks pretty much the same as the ECB provider, basically exactly the same, the only changes being the naming of the package and the service. So let's go: crypto-compare. Here we use the package name from the proto, here the service name from the proto. Okay, and here we just write crypto compare. So let's try to run it and see. Packages, services, grpc, crypto-compare; let's go here: yarn start. Yeah, I think we need to check the node version, okay, and we also need to do yarn install here. Okay, so we installed; let's try one more time. Okay, it started to run: we see crypto compare provider is running, server started, all good. So let's also quickly fix the test we have here. It's not going to take a lot of time; I mean, compared to our fake service, at least it is going to be doing something, so we cover it with a test as well. We have the test starting. First of all, we need to change things here: crypto compare provider, crypto compare client, host. Get rates should return a currency rate, which will contain USD as the base, the currency will be BTC, and the rate now 0.002. Let's try. Now let's run the test: yarn test. It doesn't work because it says we are returning the real value, 0.1; let's change it. So let's check that our test is running and then finally test the whole thing. Okay, the test is good. So we need to start our ECB provider here; let's start it. We also need to start the currency converter in another tab.
Okay, this one is running on port 552, this one on 551. There is another tool called gRPCurl, which allows calling gRPC services straight from the terminal, which is really nice. You can just pass the whole thing: here's an example, you pass some JSON, and then you define where the proto file is, where the service is running, and which method should be called. This is supposed to work, but here it says currency is not supported, because this message is thrown from our code: we just sent empty JSON to the currency converter, basically saying nothing. Actually, I wanted to call another method. Fine, I wanted to call the ECB provider rates method, so let's call it. This is the JSON created from the gRPC response, all the rates our ECB provider returns for now. Let's finally call the convert method. According to our picture, as we showed here — let's show it one more time — for now we have just one provider: the request comes here, the converter requests the rates from the provider, the provider gets them from the provider API, and then the converter does the conversion. Here we go.
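The conversion itself is simple once the provider's rates are in hand: with all rates quoted against one base currency (EUR for the ECB feed), converting sell to buy is a cross-rate calculation. A sketch, with function and field names assumed for illustration:

```javascript
// Convert sellAmount of sellCurrency into buyCurrency using rates
// quoted against a common base (e.g. EUR for the ECB provider).
// rates.byCurrency maps a currency code to units of that currency
// per 1 unit of the base; the base itself has an implicit rate of 1.
function convert(rates, sellCurrency, buyCurrency, sellAmount) {
  const sellRate =
    sellCurrency === rates.base ? 1 : rates.byCurrency[sellCurrency];
  const buyRate =
    buyCurrency === rates.base ? 1 : rates.byCurrency[buyCurrency];
  if (!sellRate || !buyRate) {
    throw new Error('currency is not supported');
  }
  // Cross rate: go through the base (sellAmount / sellRate is the
  // amount in base units), then into the buy currency.
  return (sellAmount / sellRate) * buyRate;
}

const ecbRates = { base: 'EUR', byCurrency: { USD: 1.1, GBP: 0.86 } };
console.log(convert(ecbRates, 'USD', 'GBP', 110)); // roughly 86 GBP
```

The "currency is not supported" error seen in the gRPCurl demo corresponds to the guard above firing when the request JSON names no known currency.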

17. Summary and Comparison of REST, GraphQL, and gRPC

Short description:

In this workshop, we covered the basics of gRPC, including its protocol and the use of protocol buffers. We built microservices for currency conversion using the ECB provider. We also discussed a comparison between REST, GraphQL, and gRPC, focusing on coupling, verbosity, and discoverability. The table showed that gRPC has high coupling and medium verbosity. However, my table differed slightly, emphasizing the focus of each protocol. Overall, it was a pleasure to meet everyone, and I look forward to future workshops. Thank you all for attending!

Yeah, currency converter, crypto, all of this in one, converted through gRPC live. That is good. So I think we covered most of it. Wow, congratulations man, that's awesome, almost without any luck needed along the code pathway, you know? Yeah, so Alex, how are you? Yeah, I'm good, I'm good, listening to your part was interesting. Shall we wrap it up with a summary of the things we've covered so far? Yeah, maybe you can show your screen one more time. Absolutely, you can see it, right? Yeah. Nice. So in this workshop we went through the basics of gRPC: how the protocol looks, what it consists of, we discussed what protocol buffers are, and we built one particular set of microservices to convert currency with an ECB provider. European currency... Central Bank provider. Yeah, nice. So actually I found this link which gives a nice overview of REST, GraphQL, and gRPC, and there was a table with a comparison that I wanted to share with you. Let's quickly check whether we agree or disagree with something. We have REST, GraphQL, and gRPC compared on coupling, that is, how deeply it is coupled with the code. For the REST API it says low coupling, for GraphQL medium, and for gRPC high. I would say that's legit; it depends how you interpret it, but sounds fine. What about chattiness, like how verbose it is? With REST it's quite a lot; it says GraphQL is low and gRPC medium. Do you see something strange here, something you specifically agree with, any particular part to focus on? Sure. Well, I see the coupling is indeed high in gRPC, so yeah, that's there. What about discoverability? Hard to say. What is it supposed to mean in the sense of the technology? Indeed, indeed. So my table looks a bit different. I added what the protocol focuses on: in REST, it's resources, and there's an RPC section.
GraphQL is also probably resource-focused. As for semantics, RPC is more like programming, while in REST you just specify which URL should be called. I actually marked REST as loosely coupled; the author of this article agrees with me, RPC is medium, and GraphQL is loose in my version. And they actually marked the coupling for gRPC as high. Do you agree with that? Well, I sort of agree myself, as I can see here. Yeah, that's all, folks. It was nice to meet you, and I hope to see you in our future workshops. Yeah, thank you very much for coming, everyone. See you later.

Watch more workshops on topic

React Advanced Conference 2021
145 min
Web3 Workshop - Building Your First Dapp
Featured WorkshopFree
In this workshop, you'll learn how to build your first full stack dapp on the Ethereum blockchain, reading and writing data to the network, and connecting a front end application to the contract you've deployed. By the end of the workshop, you'll understand how to set up a full stack development environment, run a local node, and interact with any smart contract using React, HardHat, and Ethers.js.

Node Congress 2023
109 min
Node.js Masterclass
Have you ever struggled with designing and structuring your Node.js applications? Building applications that are well organised, testable and extendable is not always easy. It can often turn out to be a lot more complicated than you expect it to be. In this live event Matteo will show you how he builds Node.js applications from scratch. You’ll learn how he approaches application design, and the philosophies that he applies to create modular, maintainable and effective applications.
Level: intermediate
React Summit Remote Edition 2021
87 min
Building a Shopify App with React & Node
Shopify merchants have a diverse set of needs, and developers have a unique opportunity to meet those needs building apps. Building an app can be tough work but Shopify has created a set of tools and resources to help you build out a seamless app experience as quickly as possible. Get hands on experience building an embedded Shopify app using the Shopify App CLI, Polaris and Shopify App Bridge.
We’ll show you how to create an app that accesses information from a development store and can run in your local environment.

Node Congress 2023
63 min
0 to Auth in an Hour Using NodeJS SDK
Passwordless authentication may seem complex, but it is simple to add it to any app using the right tool.
We will enhance a full-stack JS application (Node.JS backend + React frontend) to authenticate users with OAuth (social login) and One Time Passwords (email), including:
- User authentication - Managing user interactions, returning session / refresh JWTs
- Session management and validation - Storing the session for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.
Table of contents
- A quick intro to core authentication concepts
- Coding
- Why passwordless matters
- IDE for your choice
- Node 18 or higher
JSNation Live 2021
156 min
Building a Hyper Fast Web Server with Deno
Deno 1.9 introduced a new web server API that takes advantage of Hyper, a fast and correct HTTP implementation for Rust. Using this API instead of the std/http implementation increases performance and provides support for HTTP2. In this workshop, learn how to create a web server utilizing Hyper under the hood and boost the performance for your web apps.

Node Congress 2023
119 min
Decomposing Monolith NestJS API into GRPC Microservices
The workshop focuses on concepts, algorithms, and practices to decompose a monolithic application into GRPC microservices. It overviews architecture principles, design patterns, and technologies used to build microservices. It covers the theory of the GRPC framework and protocol buffers mechanism, as well as techniques and specifics of building isolated TypeScript services in the Node.js stack. The workshop includes a live use case demo of decomposing an API application into a set of microservices. It fits the best architects, tech leads, and developers who want to learn microservices patterns.
Level: Advanced
Topics: DDD, Microservices
Technologies: GRPC, Protocol Buffers, Node.js, TypeScript, NestJS, Express.js, PostgreSQL, Turborepo
Example structure
: monorepo configuration, packages configuration, common utilities, demo service
Practical exercise
: refactor monolith app

Check out more articles and videos

We constantly think of articles and videos that might spark people's interest, skill us up, or help build a stellar career

TechLead Conference 2023
35 min
A Framework for Managing Technical Debt
Let’s face it: technical debt is inevitable and rewriting your code every 6 months is not an option. Refactoring is a complex topic that doesn't have a one-size-fits-all solution. Frontend applications are particularly sensitive because of frequent requirements and user flows changes. New abstractions, updated patterns and cleaning up those old functions - it all sounds great on paper, but it often fails in practice: todos accumulate, tickets end up rotting in the backlog and legacy code crops up in every corner of your codebase. So a process of continuous refactoring is the only weapon you have against tech debt.
In the past three years, I’ve been exploring different strategies and processes for refactoring code. In this talk I will describe the key components of a framework for tackling refactoring and I will share some of the learnings accumulated along the way. Hopefully, this will help you in your quest of improving the code quality of your codebases.
React Summit 2023
24 min
Debugging JS
As developers, we spend much of our time debugging apps - often code we didn't even write. Sadly, few developers have ever been taught how to approach debugging - it's something most of us learn through painful experience.  The good news is you _can_ learn how to debug effectively, and there's several key techniques and tools you can use for debugging JS and React apps.
Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk

React Advanced Conference 2022
22 min
Monolith to Micro-Frontends
Many companies worldwide are considering adopting Micro-Frontends to improve business agility and scale, however, there are many unknowns when it comes to what the migration path looks like in practice. In this talk, I will discuss the steps required to successfully migrate a monolithic React Application into a more modular decoupled frontend architecture.
React Summit 2023
24 min
Video Editing in the Browser
Video editing is a booming market with influencers being all the rage with Reels, TikTok, Youtube. Did you know that browsers now have all the APIs to do video editing in the browser? In this talk I'm going to give you a primer on how video encoding works and how to make it work within the browser. Spoiler, it's not trivial!