How to develop, build, and deploy Node.js microservices with Pulumi and Azure DevOps


    This workshop gives a practical perspective on the key principles needed to develop, build, and maintain a set of microservices on the Node.js stack. It covers the specifics of creating isolated TypeScript services using the monorepo approach with Lerna and yarn workspaces. The workshop includes an overview and a live exercise on creating a cloud environment with the Pulumi framework and Azure services. The session best fits developers who want to learn and practice build and deploy techniques using the Azure stack and Pulumi for Node.js.


    Hello, I'm Alex, and Andrew and I will conduct this workshop for you today. The workshop is about deploying Node.js microservices with technologies like Pulumi into Azure DevOps, I'm sorry, into the Azure cloud. And yeah, we have a lot of plans for today, to be honest. So I hope you have read the agenda a little bit, and I hope you had a chance to go through the prerequisites and get a general idea of what we're going to do today. If not, I just pasted the link in the chat. This is the page where you can find all of this information, and even if you're not directly doing the exercises in the workshop, I think it would be useful for you to get back to this material later and find something useful, I hope. Yeah, so that's our plan for today. Quite a lot of topics to cover. Let's begin with the simplest one, and I will get to the agenda a little bit later. So let me introduce myself. I'm Alex Korzhikov, a software engineer currently located in the Netherlands. Most of my career has been JavaScript, and before 2015 that was only front-end, and I enjoyed it quite a lot. But around 2015, maybe you remember, there was a big topic in the JavaScript community: JavaScript fatigue. And I think I was in the wave when I actually felt it too. But for me it worked a little differently. I still loved JavaScript, but I wanted to switch my scope to more of a back-end and DevOps tooling focus. And I found a very nice opportunity for that. I joined a European bank, and my team was building one huge pipeline for all the front-end engineers in the bank. That included the DevOps work too: first configuring Jenkins servers, then configuring virtual machines as well, then moving to a GitLab CI/CD system and configuring nodes there specifically for GitLab. And later we also did a migration to Azure DevOps, which we currently use in our company. 
And yeah, that was the DevOps part, but we also had a server-side application, a back-end application, some API. I enjoyed this work quite a lot. A big part of my work that I also find very interesting is building tooling for engineers. At that time we also had an architecture solution, also known as a CLI, for engineers in our company. The idea was that an engineer can run the CLI locally to build the project, lint it, whatever the normal development lifecycle requires nowadays, but the same tooling could be executed and applied on the pipeline. And therefore, at least from a design point of view, that would make your development faster and more predictable. So yeah, that's my experience with JavaScript, specifically on the DevOps and back-end stack. In the last two years I've also worked on a more traditional back-end stack, like Java. There we also have nice tooling for engineers, but nowadays it's written in Golang. And to be honest, I find Golang a very, very lovely programming language to work with. That's, in short, about me. Feel free to contact me if you find me on any of these social platforms. Andrew, how are you doing? I'm good. Thanks, Alex. Yeah, you gave such an extended speech, I'm not sure if I'm ready to repeat that from the very beginning of my career. I just wanted to say: I'm Andrew. Hi, I'm a software and platform engineer here in the United Kingdom. I joined a fintech startup here about two years ago, and that was pretty much the moment when I started exploring DevOps for myself. The biggest part I started with was Terraform, and also building automation tools around DevOps features and so on, to help developers move faster in their daily routines. In the sense of languages, I started exploring PHP years and years ago. I had a job as a PHP developer, and that was actually the moment when I met Alex, more than 10 years ago. 
And we had some common projects together. Since then, we're good friends and also good engineers, I mean, exploring something new together. Then I started picking up bits of Node.js and JavaScript in general. I wrote a crawler called GooseParser, which allows you to scrape any website on the internet quite fast using multiple backends. By backends, I mean multiple web browsers, a real web browser or a headless web browser. Then, quite recently, I also started to learn the Go language. I've done some research and also some microservices development using Go, as well as Node.js. Both are quite cool, I would say. I would prefer Go if you need something very, very thin and, you know, less memory-consuming compared to Node.js. And one of the things that inspired me about Go was testing: a testing framework is included, so you just open it up and start doing the things you want to do. Another tool, benchmarking, is also included in the original Go package, so you can investigate any of your functions and see where the problem is. Yeah, that's it about technologies, maybe. And feel free to contact me if you want to. Right. Thanks, Andrew, that was quite comprehensive. Yeah, I just enabled the transcription of what we're talking about. I don't know how well that works, but I hope it helps someone. So now we come to the point of what we're going to do today. Let's go back to the description. This workshop, as we said, is about deploying Node.js microservices with Pulumi and Azure. We're going to use quite a lot of technologies today. But if we look at the agenda, let me actually start from this point now. It has, I think, two logical parts. I will be starting the workshop, and Andrew will continue in the second part. So first, we want to show you the application itself. 
So let's demonstrate the idea of the application that we're building, the structure of the repository, the tooling that we use, dependencies, some of those. We're also going to touch a little bit on the gRPC world. It's not going to be a very deep dive into this technology, so if it's new to you: we actually did a JavaScript workshop with Git Nation, and that was more about the internals of the gRPC ecosystem with Node.js, so feel free to find it on Git Nation. Also, I just pasted the link to the other, text, description of this workshop, so you can go through the steps and understand how these things work more in depth. So yeah, gRPC, a little bit about protocol buffers. I will also demonstrate how it all should work locally. Then we're going to explore how to prepare the Docker image for these applications, or microservices, that we're going to use. And the reason for that: maybe it was not originally a big part of the workshop, but we understood it makes a lot of sense, so some ideas changed while we were preparing. We realized we had to explain a lot of Kubernetes. Well, again not a deep dive, but we need to touch a lot of Kubernetes concepts. And for Kubernetes, of course, you need Docker. You need to prepare a Docker image to make your application work both locally and then as a remote image that you can pull into your system. So we'll explore a little bit of Docker. And then we get to the Azure part: how to basically move all of this to Azure, or what capabilities you need from Microsoft Azure to be in place, and then start preparing this image as a full CI/CD. That is going to be the end of part one. Part two will be performed by Andrew, and it is basically, let's say, a deep dive: writing code and explaining the different things that Pulumi provides for you. I find this part of the workshop quite hardcore, specifically. 
So if you are really into Pulumi, you are in the right place. Otherwise, please enjoy the material and let us know what we can improve. I think this workshop will take up to three hours; I hope a little less, maybe two and a half to three. So that's a little bit of the agenda for today. We don't have many practical exercises. For the last workshop we prepared GitHub issues that you can pick up. Currently it's not much, but the idea is that if you want to try some of the exercises, feel free to contribute to this repo: just fork it and start working on it. It has both DevOps and code exercises. And let us know if you need more details on these issues, of course. That's it for the intro. Maybe let's just briefly check the technologies again. What we're going to touch today is a microservices architecture. Our main focus will be on Pulumi, but we're also going to show how it all works with Azure, and specifically the Azure DevOps product. Our main application is built on Node.js, JavaScript and TypeScript, with a Lerna configuration on top, and it's gRPC-based microservices. Of course, we're going to use, as all Node.js stack engineers do, npm and yarn. And Docker, Git, and the other stuff you can expect from any software project nowadays. Yeah, that's quite a lot, so let's not waste too much time on the introduction. Let's begin the workshop. All right. And of course, feel free to post a message in the chat or just say something out loud if possible. Feel free to ask any question, is what I'm trying to say. So what we're going to build, or rather showcase, today is a simple microservices application. We call it the cryptocurrency converter. It is a set of microservices built with Node.js. And here is a small diagram that shows how it works. Ideally, we have multiple providers in the system. Each provider you can think of as a separate microservice written in Node.js. 
And we also have a different type of microservice: the converter microservice. When a request from a user comes in, the converter can look at what providers it has, send a request to each provider to get rates from them, accumulate the results, and then send a response to the user. So this flow is quite simple, to be honest. But let's think about it one more time. What does a user want? They want to convert an amount of one currency to another, like Ethereum to Canadian dollars in this case. And the providers can be, well, anything that a user slash developer wants to implement, or has already implemented, as a provider microservice. That can be, for example, your central bank's rates, or Bank of England rates, or any other source of rates. You can imagine that in these providers you normally make requests to the outside world, where the API lives, and maybe you need to register there if you're building a real user-facing application. So you register, get an API key, for example, make a request, and you receive basically an array of rates that you can exchange your currency against. That's one part. The converter, again, makes a request to each provider at some point, and when it receives the responses back, it can decide what is best for the user, or maybe return something to the user so that the user decides what is best for him or her, of course. So this is, in a nutshell, what we're going to be building. You see that one provider microservice calls one API, a second provider can call some other API, and it's up to the implementation of the converter to decide which is the best rate to choose among them. Maybe it has a commission agreement with some of the providers. Of course, everything should be transparent to the user, you know. Now more about the technical prerequisites that you need for this project. 
Well, we put this into a GitHub repo originally. You can see it at this URL, and this is basically all the code you need to start it locally. So let's check it out. Of course, you need Git for that. And then you need something called protoc. By the way, we haven't checked much of your reactions so far, so please let us know which technologies are absolutely new to you, or put a number from one to 10 for how you estimate yourself, let's say, in these technologies today. Yeah, Andrew, thanks for that. Thanks, Mathias. protoc, yeah, that makes sense. Don't worry, we will explore some of this today. protoc is something that comes from the protocol buffers world, and that covers a little bit of gRPC: you need it for gRPC, and gRPC partially uses protobuf. You'd better go to the official protocol buffers page, which describes how to install it for your operating system. Mac users can just run brew install protobuf, and then you will have this binary installed on your local machine. With this binary, you can compile protobuf files into JavaScript code. In our case, we use JavaScript, or TypeScript, microservices, so we're going to use that output target. There are some comments here for Linux, but again, it's better to go to the protocol buffers official web page. We have all the links in the prerequisites section of this document. What else do you need after cloning the repository and installing the protocol buffer compiler? Next, and maybe the last thing we're going to need before starting it up locally, is the yarn environment and the Lerna environment. So what is this project? It's basically a monorepo, and it contains a few packages that are used internally by the microservices and by the ecosystem itself. 
I will show it in a bit in my VS Code, but for now, this covers pretty much all of it. You can see the proto folder, which contains the proto files; I will demonstrate those a bit later. In the services/grpc folder, you see that we have an implementation of a currency converter and a European Central Bank provider. So if you remember this picture: we have implemented the converter microservice and we have implemented one provider microservice. And to play with it, you can of course try to implement another provider and make the application more complete. We also have some internal packages that are needed to start these microservices, but I will show those in a second. All right, so this is the point where I'm going to share my VS Code. I'm using VS Code, by the way, but that's not required, and people use different editors. I think Andrew is using WebStorm. I'm really addicted to VS Code and have been with it for quite a while. What I love about VS Code is how easy it is to debug Node.js applications; basically, that's what keeps me on this editor. So again, let's have a look. I like to begin with the package.json, with the dependencies, and you can see that we don't have many dependencies in the top-level package.json. We have Lerna. That's what kind of does the trick for internal packages. If you are not familiar with Lerna, it is a tool that allows you to configure your monorepo: to define separate packages in the repo and to define the development lifecycle for the repo, like install, lint, build, maybe even start. Ideally, you can define anything for your internal packages, including the release cycle. In Lerna, you can configure publishing versions of internal packages with the same version, or you can specify a separate version per package. 
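For reference, a minimal Lerna plus yarn workspaces setup of the kind described here might look like the following two files. This is a sketch only; the package name and folder globs are illustrative, not copied from the actual repo.

A top-level package.json declaring the workspaces:

```json
{
  "name": "crypto-converter",
  "private": true,
  "workspaces": ["packages/common/*", "services/grpc/*"],
  "devDependencies": {
    "lerna": "^4.0.0"
  }
}
```

And a lerna.json telling Lerna to use yarn workspaces and independent per-package versioning:

```json
{
  "npmClient": "yarn",
  "useWorkspaces": true,
  "version": "independent"
}
```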
So when you install Lerna, you also declare which workspaces you're going to use, and this is just an array of local folders: you see we have packages/common and services/grpc, and that's basically it. We have three common packages and three services, and that's what we define here as workspaces. All right, that's the packages part, and then lerna.json. It repeats some of these package settings, but that's about how yarn and Lerna divide concerns between each other. You specify that you want to use yarn, not npm, as the package manager for your local project; you specify that you use yarn workspaces, logically, since you specified yarn as the npm client; and the version field is what I just described before. With version independent, you say that each package you've specified here has its own release cycle and its own version management. All right, that's about Lerna. Next, let me quickly check. Yeah, maybe now I can quickly show you, and I'm not really going to spend a lot of time here, what packages we have. I think the main common package that is important for us is go-grpc: not Go-language gRPC, but go as in "let's go, gRPC". This is the main web server implementation for our microservices. It's actually self-made, handmade, but you can see it's not that much code. We just define a server class, we specify defaults, where to start, and, importantly for us, and I'm going to explain this in more detail: as part of gRPC and protobuf, it can load the protobuf definition and, based on that, create the server that the protobuf file defines. After that, it just knows how to start a server and output some information. 
So yeah, it's a short, small wrapper, as you can see. It's based on the @grpc/grpc-js package, the official one. I was expecting something calling Express here, but I cannot see it now, so maybe it's down in some internal package. For us it doesn't matter; it's the internal implementation, basically an Express-like wrapper. So this is go-grpc, and now let's have a look at the currency converter and the ECB provider, the European Central Bank provider. If you start with that one: it's the package where the microservice makes a call to an external API, gets the data, like the rates and the currencies, and returns it to a user. And a user here, or in gRPC terms a client, can be either the converter application or the end user, if we allow that with certificates. So if you look at the src, well, actually I wanted to start with the package.json. Am I looking at the correct one? Yes, the correct one. You see that it's using the common go-grpc package, like we just showed, and besides that it uses some tools that should actually be in devDependencies, and, more important to us, node-fetch, which makes the call to the external web service. If you look at this server, it's just a bootstrapping thing: here is a small configuration that says it wants to load the European Central Bank provider protocol, it starts on the port that we defined, and then we say that we're going to implement getRates. Maybe I will spend a little more time on that when I talk about gRPC, but you can see in the implementation that it's really one screen of code: it makes a fetch to this rates URL that we hardcoded, waits for the response, extracts the data out of it, and then sends this object outside. And of course, you need to build the correct wrapper so that the gRPC syntax is correct when you send requests from one application to another. 
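The shape of such a provider handler can be sketched without any dependencies. This is an illustrative sketch, not the repo's actual code: the payload shape and names (`getRates`, the response fields) are assumptions, and `fetchRates` stands in for the real node-fetch call to the external rates API.

```javascript
// Sketch of a provider's getRates handler (names and payload shape are
// illustrative; the real service fetches a hardcoded rates URL with node-fetch).
async function getRates(fetchRates) {
  // fetchRates stands in for the real HTTP call to the external rates API
  const payload = await fetchRates();
  // Reshape the external payload into the response object the proto file defines
  return {
    baseCurrency: payload.base,
    rates: Object.entries(payload.rates).map(([currency, rate]) => ({
      currency,
      rate,
    })),
  };
}

// Usage with a stubbed API response instead of a live request:
const fakeApi = async () => ({ base: 'EUR', rates: { USD: 1.1, CAD: 1.4 } });
getRates(fakeApi).then((res) => console.log(res.baseCurrency, res.rates.length));
```

The point is the same as in the real service: the handler's only job is to translate whatever the external API returns into the strict response type that the proto file promises to callers.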
So in the end, it returns an object called GetRatesResponse, with rates and a base currency, as we define it in the proto file. It's as simple as that. And the last one here: the currency converter. Remember the picture where the currency converter calls all the providers, aggregates the data, and returns it to a user. So again, in server.js, or maybe it's better to again start with the package.json: it uses the common go-grpc dependency, and that's basically all it needs to have here; the rest comes with this dependency. The rest should, I would say, be in devDependencies, but for us it doesn't matter here. So here we define a service, the currency converter, we say this is going to be our method implemented there, and again, the implementation is fairly straightforward. What it does: it gets all the provider services, and for each provider service set in the configuration, it creates a gRPC client, and with this gRPC client you can make requests. How do you make requests with it? You just say client and then getRates, and that getRates is the one that was defined in the getRates code in the ECB provider, like we saw before. After that, it needs to accumulate the results, and this is purely JavaScript slash TypeScript logic, so nothing really big happens here: we just implement this logic and return, in the end, the convert response, which I'm going to show in a second. But you can see it's just some data type that needs to be instantiated and returned to a user. That's it, I think, at that point. Let me get back to the slides. All right, looking good, so far so good. We have explained the configuration of our project so far; I showed you that we use a Lerna configuration, a Lerna monorepo approach, for configuring our microservices in one repo. 
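The converter's fan-out-and-aggregate step can be sketched in plain JavaScript. This is a simplified sketch under stated assumptions: the provider clients are stubbed as async functions (in the real service each is a gRPC client calling getRates), "best rate" is simplified to the highest quote for the target currency, and the source currency is ignored.

```javascript
// Sketch of the converter's aggregation step (provider clients are stubbed
// here; in the real service each one is a gRPC client calling getRates).
async function convert({ from, to, amount }, providers) {
  // Fan out to every configured provider in parallel
  const responses = await Promise.all(providers.map((p) => p.getRates()));
  // Collect every quoted rate for the requested target currency
  const rates = responses
    .flatMap((r) => r.rates)
    .filter((r) => r.currency === to)
    .map((r) => r.rate);
  // Pick the best (highest) rate for the user
  const best = Math.max(...rates);
  return { from, to, amount, converted: amount * best };
}

// Usage with two stubbed providers quoting different CAD rates:
const providers = [
  { getRates: async () => ({ rates: [{ currency: 'CAD', rate: 1.4 }] }) },
  { getRates: async () => ({ rates: [{ currency: 'CAD', rate: 1.5 }] }) },
];
convert({ from: 'EUR', to: 'CAD', amount: 100 }, providers)
  .then((res) => console.log(res.converted)); // 150
```

Promise.all is the natural fit here because the providers are independent: the converter's latency becomes roughly that of the slowest provider rather than the sum of all of them.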
I don't know if you have an opinion on the monorepo versus multi-repo approach. If you ask me which is better, at some point I preferred not to answer that question for myself, but I think for smaller projects a monorepo is at least as valid as a multi-repo approach. So I'm not really strict about what should be used, but for microservices you have a lot of packages to control, and at least for showcasing things it's really easier to keep the code close together, so you can change things quicker, and that helps, I think. All right, now let's talk a little bit about the internal technologies that we're going to use today, and the first one is gRPC. This is the official logo of the gRPC technology. It's a dog whose name, I think, is Pancakes, if I'm not mistaken. That is, of course, the most important question for today: what's his name. But what is gRPC? gRPC is Google Remote Procedure Calls; well, actually, no one ever officially says that the g stands for Google, but that is kind of implicitly what it means, I would say. gRPC is a modern open source remote procedure call framework that can run anywhere. It enables client and server applications to communicate and makes it easier to build connected systems. So what is it all about? It's a framework, but I would rather call it a technology ecosystem. It was born at Google and I think officially started in 2015, which is not that long ago. The idea is that Google had been testing this approach internally for, I think, more than 10 years. Google is a giant company, and it experiments with different stacks and different technologies; it sometimes produces new stacks, new technologies, new languages, like Golang, for example. 
You can also think of projects like Kubernetes; whatever else we name, the list is really huge and growing every day, I would say. But the thing is that at Google you cannot just define one language for all the products; it's not the case. Basically, you push this decision down to the teams: it's up to each team to decide which technology is best for their product, and a team can decide to use Go, Node.js, Python, C++ or Java. It is really great that you can unite these services in one ecosystem, and that is where the problem basically starts, because you want this ecosystem to be stable, you want it to be performant, and you really want to take this pain away from the teams; you want to abstract it out so that teams think only about the business logic and produce the product as fast as they can. So that's why Google first experimented with the gRPC technology internally and then put it into open source. That's what many other projects do: when the technology matures, and to help it mature even more, they open-source it, so that, let's say, people from outside can bring something back and contribute to the project, maybe in terms of ideas, giving feedback, also writing code, why not. But I think what you gain even more from this approach is that engineers from all over the world adopt it, and basically, when Google hires someone, it doesn't need to teach them this technology anymore. So that's what drives these projects: let's say it standardizes the microservices architecture, framework and infrastructure. But what is it internally? 
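For readers following along, the classic hello-world proto definition discussed next looks roughly like this. This is a generic sketch of the canonical example, not the workshop's own proto file; the exact method and field names on the slide may differ.

```proto
// hello.proto - the classic example; compiling it with protoc
// generates client/server stub code for your target language.
syntax = "proto3";

package hello;

// The service: the callable methods between client and server
service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}

// The data types exchanged by those methods
message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}
```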
Well, it's not that complicated to say. Each microservice, and I just showed you a very simple implementation of such microservices, needs at some point to make a request to another microservice, and for that you can think of two roles: a server role and a client role. And note that your microservice can be both a server and a client: it needs to reply to some microservices, but it also needs to make requests to some microservices. What gRPC does for you, out of the box, is put a small amount of code into your microservice, and to be honest, you can generate it. That's what protoc, this protobuf compiler, does: it generates stub code for your application, and then inside your client or your server you call these generated stub methods to make calls to other microservices. Let me show you this protobuf definition, finally. What you see here is a classic protobuf example. It starts with a protobuf version, but that doesn't matter to us; we define a package namespace, doesn't matter. What comes next is a service and some data types, and basically that's it; that's what a gRPC definition is. You define a service, and this is your client or your server side, and the messages are the data types you need to send data or reply with. So if you look at this hello service, you define the methods inside it with the keyword rpc. You can use different formats for sending the data; I think we don't need to go into those types here, but to be honest, that already covers pretty much all the possibilities of gRPC. So again, we define a method, hello; it receives a hello request input and returns a hello response output. And what is a hello request? Well, we defined it here: message HelloRequest is an object with a string key greeting, and the HelloResponse is an object with a string key reply. That's it. That's how easy it is to define your 
protocol format, the protocol between the microservices, and that's what makes the ecosystem of microservices really stable. Imagine you have this protocol buffer defined: what that means is that you can basically test your code against the strict data types before you move it to the next stage, to acceptance and production, and the test is there, so you will already know if your data formats, for example, are not interchangeable between servers. So let's repeat that. We have a client and a server, and to be honest, we can even unite that definition; for us it doesn't matter. What gRPC does for us is generate a code stub, and this code stub, in a nutshell, helps us make calls to other services in a unified syntax. It takes care of the transfer protocol, so you don't have to specify where and how and what to do with this request; it all comes out of the box. These generated calls are supported on a number of platforms already. Node.js has been supported from the beginning, but I think the most-used languages are maybe not even Go, but Java, then Go and the others. You can see the list is quite big and it supports many platforms. Speaking of platforms, that basically means you can execute your stub code not only on the server side, where you can configure everything yourself, but you can also generate this code for the web. That means you can, let's say, make a call from a browser in this format, it will be recognized by the service, and you will get a response back as if it were a normal client application. You can see this generated code is also supported for Android and Flutter, so that's quite an extended list. One more picture: you have a client, it uses the generated code stub to send the data, and this is the protocol buffer I just explained; it is received on the server side by the generated code stub, and the server can apply the business logic to the requested data, and then, yeah, repeat the process and 
send the response back to the client. All right, protocol buffers now. Protocol buffers is, yeah, another format, another generated-code format, let's say, and it might be confusing that we started with gRPC, because what we saw in the definition of this hello service is already protobuf. Yeah, Matias, I'm reading it now, let me check. Yeah, I think it makes sense to use a monorepo for that, basically to update the code and the dependencies faster, but gRPC has no requirement for a monorepo; you can still put your code in separate repos. It would, I think, matter at the stage when you build your code: you take a dependency on the gRPC slash protobuf file, and in that case, if your application uses an old version of the protobuf, for example, or it doesn't support the services as the protobuf file describes them, then your application will fail to build, and that would prevent your microservice from appearing in production and doing something wrong in the production environment. But basically, yeah, it's not required. All right, so protocol buffers, like I said, is what we already saw in the previous example, but just to explore it a little more: it's another technology by Google, but this one is more mature, it started in 2008, and it was also open-sourced for the same reason. Basically, it defines this .proto file format that you can use to specify protocols between services; well, originally for data types only, but as we saw with gRPC, the service part is also supported by protobuf. That part is not data-specific; you can think of the service keyword as existing just for gRPC's sake. So the protobuf format is about data types, and it's also about code generation, and that's where protoc is intended to shine. When you run protoc, it will actually generate some code for you: it will generate the data types, but as we said, it also supports these 
services from gRPC, so it will also generate the stub services for you. Protobuf, I would say, is an even more powerful technology than gRPC, because it's more mature and it supports even more platforms and technologies. This list is similar, and in the other one, driven by the community, you will find a huge list of other technologies that are currently supported by protobuf. And what is it? I didn't say what it is. It's a format for data types, and the reason you need this specific format is that when you compile it, it actually gives you a really performant protocol between services, a data protocol between services. So it brings you two things, in a nutshell: strict typing, and a super performant data communication layer, or data types, as we said. That's a little introduction to that part. So now let's start our services. I don't have to install all the dependencies, as I did it before, but it shouldn't be a problem: if you follow the instructions in the guide, it should work out of the box. Yeah, there is a lot more to list about how Lerna works. We won't cover too many internal details of Lerna, but if you install the dependencies at the Lerna top-level project, it will be much more performant and much more space-saving for the project than if you did it for each dependency separately. What I'm saying is that the node_modules of, let's say, the currency converter is just four folders here, and that's not what you usually see in a project like that. If we look at the packages, we see a lot of devDependencies and a lot of dependencies, so just counting the number of keys here would already give more folders than in a normal listing. But what Lerna does for us, and that's what makes it really cool, is that it basically hoists the shared dependencies from all the packages that are declared in 
This repo: it takes the dependencies from this package and from that one, and if there are no conflicts between versions, they pop up into the top-level node_modules folder instead of being repeated. This connects to how the Node.js ecosystem works: at any level of your application you can require or import something, and Node's module resolution crawls up the file system from your current folder, checking each node_modules folder on the way; if the dependency lives there, no problem — it loads it from the path above your current module. That's how Node.js works out of the box, and that's what Lerna uses in its design, which is very nice, I think.

So let's start one of the services. Let's begin with the ECB provider: open the ecb-provider folder, and it's enough to just run it with node. It should say that it started listening — yes, it does. Now I'll show you how to make requests to this gRPC server, because if you just make a plain call to it you'll get an HTTP error. It's not that easy to call a protobuf service — you have to follow the exact request shape specified in the .proto file. So I'm going to use the Node console. Inside the Node REPL I load our common gRPC module, then I create a client — I say I want a client for the ECB provider on localhost on this port — and then I make the call. That's just three lines of setup, and the call went through with no problem so far.

Now let's look at the response object — it has a toObject method. This toObject method, by the way, comes out of the box when you generate the gRPC files: it's nothing I ever wrote for this project, it's generated when you compile a .proto file. If I do response.toObject(), I get this list of currencies, and you can see it's the shape of data we expect. Actually, maybe this is the point where I should show you these .proto files — we saw them in the presentation, I think, but let me show them again. We have this ecb-provider, and in the console I just did client.getRates — I'm just using Node.js here, calling the programming interface, making a normal call to the running service.

That's the provider; now let's also start the converter service, in a separate terminal. Of course it crashes at first — two services aren't allowed to listen on the same port — so let me fix the port. Now it's interesting — I have it in my shell history, of course — I specify which port to start on, and I also specify the provider services. If you recall the code I showed you: when provider services are given to the currency converter, it iterates over them and makes calls to each, so it's possible to forward requests to them. Okay, the service has started. I go back to the Node REPL, require our gRPC package again, and make a client — this time a currency-converter client on a different port, but the rest looks the same. Let's make a call: client.convert — this is the method that was defined for the currency converter.

In the provider we have getRates: we send a GetRatesRequest, which is an empty object here, and we get back a GetRatesResponse — as you can see, this message contains a base currency, an exchange rate, some more details, and a rates array. Now we send this convert request — in our case it's a free-form object — and let's see what we got back: response.toObject()... okay, this one has a sell amount and a base currency. Maybe there's something I'm missing here — let me quickly check the .proto files... yes, that's the only place where base currency is defined. I was expecting a slightly different shape; maybe it's just a matter of regenerating the stubs. But for us it doesn't matter: we got a response back with the sell currency and sell amount, and which amount and currency we want to buy. To make it even clearer we can look at the currency-converter code: the method implementing it returns exactly what is specified — conversion rate, buy amount, sell amount, buy currency, sell currency. All right, so far so good.

The next step is to dockerize our microservices, so let me go back to the presentation — I want to talk a little about Docker. Docker, as I hope you know, is a tool set that enables you to develop, deliver, and run applications — and, we could say, applications in any technology, it's not technology-specific. For that you need to install Docker as a local tool, and after that you'll be able to run any dockerized application, like `docker run hello-world` in this example.
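To make those message shapes concrete, here is a hypothetical .proto sketch of the two services. The message and field names are reconstructed from what appeared on screen (GetRates, Convert, base currency, rates array, sell/buy amounts), so treat them as illustrative rather than the exact workshop file:

```proto
syntax = "proto3";

package currency;

// Illustrative reconstruction — names inferred from the demo, not the real file.
service EcbProvider {
  rpc GetRates (GetRatesRequest) returns (GetRatesResponse);
}

service CurrencyConverter {
  rpc Convert (ConvertRequest) returns (ConvertResponse);
}

message GetRatesRequest {}          // empty, as seen in the REPL call

message Rate {
  string currency = 1;
  double exchangeRate = 2;
}

message GetRatesResponse {
  string baseCurrency = 1;
  repeated Rate rates = 2;
}

message ConvertRequest {
  string sellCurrency = 1;
  double sellAmount = 2;
  string buyCurrency = 3;
}

message ConvertResponse {
  double conversionRate = 1;
  double buyAmount = 2;
  double sellAmount = 3;
  string buyCurrency = 4;
  string sellCurrency = 5;
}
```

Running protoc over a file like this is what generates the client stubs, the service skeletons, and helpers such as the toObject method used in the REPL.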
What `docker run hello-world` does is download something called an image and then start it up as a running container — and those are, I think, the two key definitions in Docker. An image is a declared application snapshot that contains all its dependencies inside — mostly we're talking about file-system dependencies. A container is that image plus whatever else is needed to actually run it — the running instance of the snapshot — and once it's running locally you get the full power of Docker.

Unlike gRPC and protobuf, Docker is built by Docker, Inc., and it builds on something called Linux containers. This concept relies on features of the Linux kernel that appeared in the 2000s, and it lets you virtualize parts of your operating system using two main mechanisms — cgroups and namespaces — both of which ship with the Linux kernel. Cgroups let you limit resource usage: CPU, memory, block I/O, network — so you define what an application, or the part of the OS backing a container, is allowed to consume. Namespaces do a similar job for other concepts of the operating system, such as processes, users, and the file system.

If we look at this diagram from Docker's official website, there are three main actors. You have a client — mostly your CLI. You have a Docker host, which can be your local machine or a host somewhere in your company. And you have a registry, which contains images from the outside world that can be pulled into your Docker host. When you run Docker locally you first start a daemon — a background process that receives requests from the client and knows what to call: a local registry or an external one — pulls and downloads images, and in the end starts containers for you once the image is available locally. The client is the command-line interface where a user types commands like docker pull, docker build, docker run, docker exec — we'll explore them in a second. The registry is the database of all published images. And there are other tools around it, like Docker Desktop, Kubernetes — not really Docker-specific, and something Andrew will talk about a lot today — and Docker Compose, to bind all your services together.

Before that, a few more words about the Dockerfile. A Dockerfile defines the structure of your application snapshot in a Docker-specific format: you specify the base image you're going to use, RUN commands to install assets and dependencies, CMD for the command that starts your service, and you can also COPY local folders in. These are commands I use a lot — if I look at my bash history, docker is one of my top commands for development. You build your image, tag it with a specific name, and tell Docker where to find the Dockerfile; then you start it, which executes the CMD and triggers a container to be created — a container isolated from the rest of the operating system you're working on. After that you can manage the running container: explore it with docker ps, kill it, clear the cache of images, containers, and any other data that was used, or attach to the log stream of a running container. The command I specifically love is docker exec: you attach to any running container and specify which command to execute inside, and with the -it flags it's interactive, so you can do normal shell work — I say I want to execute /bin/sh, and after that I can run anything inside the container. That's really nice.

Docker Compose is an orchestration tool that is used heavily for defining your local microservices ecosystem. In our use case we defined a few microservices — two, in our case — and we don't want to start them the way I just showed you: go to a folder, install dependencies, start it on a specific port. That's a lot of manual work, and you especially don't want it when deploying to the cloud, because everything can go wrong — someone updates a version, anything — so you want to automate as much as possible. For that we're going to create a Docker image and then a Docker Compose file that defines all the services to start. With that we're almost done with the first part; afterwards I'll explore the Azure ecosystem a little.

All right, let me go back to the editor. I think we've explored this project enough, so let's look at the Docker Compose file and the Dockerfile — the naive approach, let's say: very straightforward and very simple. When I first started I tried to make it really nice — the simplest and tiniest image possible — but I ran into some trouble, because we have a Lerna-based monorepo.
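A Dockerfile along the lines of the "naive approach" just mentioned might look like this — a hedged reconstruction, since the exact file isn't shown verbatim in the transcript:

```dockerfile
# Hypothetical reconstruction of the workshop's "naive" Dockerfile —
# the real file may differ in details.
FROM node:lts-alpine

# alpine images are minimal: add bash (not included by default) and the
# protobuf package, which provides the protoc compiler
RUN apk upgrade && apk add --no-cache bash protobuf

WORKDIR /app

# naive approach: copy the whole monorepo in and install everything once
COPY . .
RUN yarn install

# which service actually runs is decided at container start time
CMD ["npm", "start"]
```

The tradeoff is exactly the one described: one fat image shared by all services is simple to build, but a production setup would trim it down to per-service dependency sets.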
To install all the dependencies there, you would need to do the installation for each microservice separately, because in a Lerna workspace the local dependencies are hoisted, as we explored — but at build time, and more importantly at deploy time, you have to handle the dependencies differently: install them per package, which is a task in itself. That's one of the practical steps you can think about for this ecosystem. If we keep that in mind, then: we use node:lts-alpine as the base image in the Dockerfile defining our microservices; we run an upgrade, like all good citizens do; and we apk add bash and protoc — protoc being the same Protocol Buffers compiler that was a prerequisite for today's workshop, and bash because Alpine doesn't ship it. What I ended up with is the same infrastructure inside my dockerized application snapshot: I decided the simplest thing was to install all the dependencies and move the whole application in there. So that's what I'm doing here — adding the whole local monorepo folder into the image. In our case that's fine, because we're showcasing, but if you really want to optimize your Docker image, you'll need to think about how to get a minimum set of dependencies per microservice.

But even with that in your Dockerfile, you still don't know how to run it — how to start it. That's where Docker Compose helps. Let me close these terminals, which should also stop my applications — yes, looks fine. What I specify here: I build this image and tag it, and beyond that I specify where to start each service. You see my entrypoint is a synonym for the CMD in the Dockerfile — in our case it just runs npm start, like I did before at the prompt. I also specify port 50051 and expose it externally. For the second service, the currency converter, I use the same image but a different path to start from, still via npm start. Yes, Matias, thanks — I also think it's the simplest, especially for our part. Here I just configured what I showed you locally: the port environment variable and the provider-server environment variable. One thing to note is that when Docker Compose starts, it creates a network, and each service gets a DNS name on it — so instead of localhost:50051 you address the ECB provider by the service name specified here; you have to point the converter at ecb-provider:50051, because on that network the host is different.

Let's give it a try. Starting it is very simple — you just say docker-compose up — and there you see it doing what we did manually before, and it reports that both services started. If we open the Node session again and I copy the whole currency-converter command — yes, it still works, still responding with the same kind of data. Okay, that was the Docker part.

Now let's go to the infrastructure — the Azure part — so let me go back to my browser. Where to begin with Azure? To begin, I think you need to create your Azure account, if you haven't done it before; after you create the Microsoft account, the portal is the main entry point. We use one shared account here, because Azure is, let's say, quite expensive.
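Putting the compose details together — the shared image, the entrypoints, port 50051, and the service-name networking — a sketch of the docker-compose.yml could look like this. Service names, variable names, and the second port are reconstructed from the walkthrough, so treat them as illustrative:

```yaml
# Illustrative reconstruction — names and variables are assumptions.
version: "3"
services:
  ecb-provider:
    build: .
    image: tmp-base
    working_dir: /app/packages/ecb-provider
    entrypoint: ["npm", "start"]   # plays the role of the Dockerfile CMD
    environment:
      - PORT=50051
    ports:
      - "50051:50051"              # expose the gRPC port externally

  currency-converter:
    image: tmp-base                # same image, different start path
    working_dir: /app/packages/currency-converter
    entrypoint: ["npm", "start"]
    environment:
      - PORT=50052
      # on the compose network, services reach each other by service name,
      # not localhost — hence ecb-provider:50051 here
      - PROVIDERS=ecb-provider:50051
    ports:
      - "50052:50052"
    depends_on:
      - ecb-provider
```

With a file like this, `docker-compose up` replaces the two manual terminal sessions from earlier in the demo.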
Still, the free tier is enough for all kinds of experiments: you register a new user and you get about 150 euros of credit, I think — more than enough to try a couple of their products. Azure is huge — if you've never explored it, it's tremendously big to start with; you see I've been scrolling and talking for ten minutes and still haven't finished scrolling... no, I'm kidding, but it has a lot of services. We're only going to use a few of them today.

For CI/CD we need Azure DevOps. Azure DevOps contains the repository connection, and also some collaboration features — boards, a backlog, task organization, even sprints, things like that — so it's the product that lets you deal with code and with stories. It also contains the CI/CD pipelines, which we'll explore a little today. We're also going to need Azure Container Registry. The Container Registry is a separate product — basically the Docker registry I just explained. You create a registry manually, and once that's done you can simply docker login like you would with any other Docker registry, tag your local image with the new registry's name, and push to it manually. So: Azure DevOps, Azure Container Registry, and Azure Pipelines — which I think is part of Azure DevOps. I also had to create a service connection: something that lets you keep non-personal account credentials for your registry in a secure way, so you can configure your pipeline to build and publish images. It can hold pretty much anything — it could also be a token used for calling an external API.

If I look at what I did for this repository: I cloned it from GitHub, added a remote pointing at the newly created repo, and pushed, so I have the same repo here as on GitHub. The one small thing I added to this environment is the azure-pipelines.yml file. This file is what defines your pipeline in Azure DevOps, and if you've never explored it before it has a lot of new concepts — but I think most modern CI/CD solutions have a similar structure. Basically we define what happens when you make a change to your code: on this branch, it starts the Build stage, which builds an image — and here it actually builds and publishes, so the step is called "build and push". For that I specify a task — the task is the smallest piece in the Azure DevOps pipeline concept — and I say the task here is Docker@2, which binds the task to a specific technology. If I go to the Azure DevOps editor — there are many, many buttons and many ways of configuring the same thing, you have to get used to that eventually — it has a helper pane on the right for filling in a task. So I specify a Docker task, and it helps me fill in the needed inputs: the container registry (the one I created manually), the repository name microservices-united (the tag attached to my built image), the command to execute, and the Dockerfile — the top-level Dockerfile in this repository.

If you look at the pipeline runs, the last one does everything we said: it builds the Docker image — FROM node:lts-alpine, like we explored in the Dockerfile, then yarn install, copying files, and so on — and at the end it does a docker push, and the image ends up in my microservices-united repository. That's the last thing I wanted to show.

Maybe a few questions before I hand over the screen. Did you know that node:lts-alpine actually ships with Yarn installed? It's amazing, right — the node:lts-alpine image has Yarn available globally. Has Yarn started to be distributed with Node.js itself? I don't think so, no — but it's interesting. All right then — what do you think are the advantages of the infrastructure-as-code approach? About infrastructure as code in general: the biggest advantage is that you can reuse the infrastructure. Once you've created it, you can apply the same scripts to your staging and production environments — repeating your environments while configuring each one differently. Awesome, thanks — please take it from here.

Okay, let me try to share. There's a question: each package has the private property defined in the monorepo config? Yes, I think that's correct — maybe Andrew will correct me, but it's about npm publish, and that's true: we don't publish our packages to npm. If we did, it would actually simplify the build-and-deploy step I described — putting the dependencies of a Lerna monorepo into the Docker image — because we could install them from the npm registry directly, which would make our image very small. We also have a question on Kubernetes that Andrew will cover — and a question on Kubernetes and versioning that I think we'll answer by the end, when we talk about Helm a little, if that's okay.

Can you see my screen, guys? Awesome. On the other question: yes, you can — but for this specific example we took one Docker image and put all the packages inside, so we have one image for every microservice; we deploy the whole image and then run just one service from it, or another. In general you could improve on this and build a very thin Docker image per service — it's up to you; for simplicity we started with the more common approach.

So, first of all I want to start with an introduction to Pulumi. Pulumi is a developer-first — developer-centric, I would say — tool for creating infrastructure as code. I'm not sure whether you're familiar with Terraform or not, but compared to Terraform it's much more flexible, because you can use the regular constructs of a language you already use in your daily routine: loops, conditions, functions, classes — basically anything the language you pick provides. It makes you very productive: you write your application code, then move to the infrastructure in a separate folder and keep coding in the same language, which is beneficial because there isn't much extra to learn on top. Sharing and reusing is a theme that also exists in Terraform, but in Pulumi you can define a module or package in a single file — in our example a JavaScript or TypeScript file — and then use it from another file with a plain import. Which languages are supported? Several: Python, TypeScript, JavaScript, Go, and .NET languages like C#.

Another comparison point: secrets are encrypted, including in transit — a secret is encrypted on your side as well. In Terraform, by contrast, you can open the state file. The state file is the current description of all the infrastructure you've created — for example, if you create a private network, it appears in the state file as a resource — and any private information involved in creating a resource can show up there in plain text, which is not great. Pulumi lets you hide and encrypt this information on either side.

So how does Pulumi compare to Terraform overall? Terraform uses a dedicated language, HCL, but that language is about describing resources — describing things — not about writing real code. When you get to the question "how do I create a resource conditionally, depending on some condition in the environment, or on whether another resource exists?", it becomes complicated in Terraform: you need workarounds, because those abilities aren't provided out of the box. When I started with Terraform two years ago, it looked to me exactly the way Pulumi looks now — an amazing technology to apply everywhere, for everything — but as you dig deeper you find the disadvantages: it's hard to express whatever you want in that kind of language. Another disadvantage, as I said, is that it's often paired with a separate tool, Vault, for secret encryption, and part of that information can still end up exposed in the state file. On the plus side, Terraform gives you modules — a similar idea to packages in Pulumi — which you keep in a separate external folder and can even publish so somebody else can reuse them from a registry.

Let's move on with the Pulumi and Azure setup. If you use macOS, it's easy to install the Azure CLI and the Pulumi CLI and start using them. The first command I want to show, from the very beginning: I create another folder — I already have a folder called infrastructure that's connected to my current Pulumi configuration and so on, but here I want a fresh one, let's call it infra — I move into it, and I run pulumi new to see what happens.
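Collected in one place, the bootstrap flow described here looks roughly like this — a hedged sketch of the demo's commands, assuming the Pulumi and Azure CLIs are installed (for example via Homebrew on macOS):

```shell
# Setup sketch — folder and template names follow the demo; adjust to taste.
brew install pulumi azure-cli    # macOS install of both CLIs

pulumi login                     # authorize the Pulumi CLI against Pulumi Cloud
az login                         # authorize the Azure CLI against your subscription

mkdir infra && cd infra
pulumi new azure-typescript      # scaffold a TypeScript + Azure project;
                                 # prompts for project name, description,
                                 # stack name, and Azure location

yarn install                     # install the scaffolded dependencies
pulumi up                        # plan, then apply whatever index.ts declares
```

Without the two login steps nothing will work, as noted below — Pulumi needs both its own backend and valid Azure credentials before it can create resources.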
first of the first comment you you basically need to do is pulumi view when you define uh which technology and also cloud you are going to use so our our choice is uh typescript and azure for now so i will pick this one it also asked me to you know provide the project name okay it will be infra description the stack is there okay it's created uh i don't remember my current uh configuration let me see it in here so location okay south as i'm in the uk it will be more beneficial to try it here and it started to you know create a new folder i think it should be appearing here yeah so when we choose on everything uh basically it's cloned uh an example repo uh with a all the code we need it also has some examples uh samples of the cotton here uh and let's see what happens on the other side on the other side for us it's pulumi uh pulumi cloud uh it actually should create another stack yeah so the stack is created here if we go inside and see what do we have uh we have a basic description of the stack uh basically with the opener of this repository and things like that we also can see the activity uh which is empty for now uh yeah and there is no resources which also obvious uh let's see if it's got installed or not and let me move this out a bit okay while while it's creating i will go to our main uh theme which is infrastructure and we'll start showing it bit by bit uh so yeah uh so basically when you done this setup uh it creates you a new file which is index ts.ts and basically everything what is located in this file pulumi will try to uh will try to create in the cloud so here i i have already something yeah but here in my project example i have nothing so if i go uh to this folder which is infrastructure folder and try to do pulumi app it will try to apply these changes but basically okay it's saying want to create why oh okay because i already destroyed that before so for now it just will create a stack again i think and basically we'll do nothing on top of it two 
seconds the very best performance which pulumi showed me so far let me see it's installed here i don't know why it takes so much time to install typescript it's kind of a theme alex would love yeah i have it installed globally oh really yeah it's one of the i think three packages i have installed globally nice which basically will solve solve the issue yeah part of uh part of installing this creating this new stack in the pulumi you also need to uh after rise uh your pulumi tools do like that so it basically it's goes okay you're logged in cool and the same stuff for azure select select command okay uh it gives me here i'm not gonna do this again already done this which will cancel it but yeah in general you have to do this because before uh without it it's not gonna work and then as well you you need to install the uh dependencies by yarn okay it's just taking ages to proceed uh yeah anyway we'll continue in another folder which is infrastructure folder and we will start with a description of our infrastructure so let's start from our cluster and for cluster we have already created uh everything which is i show you and have cluster here so basically cluster cluster in this context is a is a module or package package separate thing which i already created somewhere in in a separate file and we can reuse it and another thing is a resource group so basically we want to put all all our resources under this resource group and we also call it group group name and cluster what it says and another thing to say about blumi so we are in index.ts file at the moment and we also so if if i just import it yeah i will try to do the same like blumi blumi blumi up command and it should produce me nothing however my expectation was like why i imported these things so it should be it's already in the index.ts file it should you know giving me the result but it's not uh so blumi really sees on the fact that you started to use any resource in real and if you start it you you can you 
can then deploy it basically in the cloud take so much time to you know so basically yeah when you when you when you run blumi up it happens two things here now so first thing is a planning so basically blumi runs the difference of your code against the real state of infrastructure what already was created by this moment and it shows you a plan of what changes should be applied to this new infrastructure which is basically yeah we're gonna say no here if my computer will allow me to do that okay so yeah next step let's let's already start deploying something let's uh expert expert cluster name equals cluster what's going on cluster dot is cluster and name and as well as we want to expert another theme which is group name uh yeah so let's try to do again blumi up so for now we are going to see uh basically the source group in the plan and the cluster as well by cluster i mean coordinates cluster uh and yeah while we're waiting we're gonna go to these files and see what's inside so here we have a azure resource so basically we we imported uh we installed a new model blumi azure and just blumi blumi i would say uh yeah and here's the name of the of our newly going to creative resource like this name uh workshop group we want to put it in the uk south region and yeah and we also export it so if we export some resource from uh from our file we can then reuse it in here first of all and uh yeah let's see uh okay let's let's press uh yes i'm here so we going to create everything while it's creating we're going to go to the cluster and see indeed what what's inside there so cluster basically it's another resource which we take from azure we also take a configuration for it to speak a bit more about configuration i will go here so go back to this so for configuration purposes we can define we can use blumi clean commands for uh showing it's showing our configuration and we also can create a new one one or read one let's say my name is andrew so we added a new configuration 
value to the environment. It should be appearing here... not here, maybe here. Yeah, here: my name is Andrew. And if we decide to store something sensitive, some private information, we can set a secret, say my-password with value 1234567. Once we've done this, you see it appears as a different kind of line, which basically says that this is an encrypted value, so we cannot read it back from here. But we can still get it through the CLI with the get command, so it's decrypted for you if you read it that way. Let's have a look at what actually changed on the Pulumi side when we did this. If we go to the infrastructure... okay, secrets are not shown here yet; I think I need to deploy for these new secrets to be applied. But there is another secret here, a Cloudflare API token, and you can see that you cannot get it out from here either. Let me move on. We stopped on cluster.ts, so let's go back to that file. Here we create a couple of resources for building our Kubernetes cluster, and this is the main one, our workshop cluster. To create it, we import the existing resource group that we created before and use it here, so our cluster will be created under this resource group. There are a couple of extras we add, such as credentials, and we also extract and export the kubeconfig. The kubeconfig is, you know, the configuration for your cluster: you can apply it in your local environment and start connecting to the remote cluster and interacting with it. Another thing to mention: we also export a Kubernetes provider, which we create first of all, and then we use
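The cluster file described above could look roughly like this. The node pool size, VM size, and DNS prefix are illustrative assumptions, not values from the session:

```typescript
// cluster.ts — hedged sketch: an AKS cluster plus a Kubernetes
// provider bound to it via the exported kubeconfig
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";
import { resourceGroup } from "./group";

export const cluster = new azure.containerservice.KubernetesCluster("workshop-cluster", {
    resourceGroupName: resourceGroup.name,
    location: resourceGroup.location,
    dnsPrefix: "workshop",
    defaultNodePool: { name: "default", nodeCount: 1, vmSize: "Standard_B2s" },
    identity: { type: "SystemAssigned" },
});

// The raw kubeconfig lets you talk to the cluster from your machine...
export const kubeconfig = cluster.kubeConfigRaw;

// ...and the provider tells later Kubernetes resources which cluster to target
export const k8sProvider = new k8s.Provider("workshop-k8s", { kubeconfig });
```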
this kubeconfig to connect to the cluster. We will use this provider for all further resources which we are going to put into our Kubernetes cluster, so for those resources we can tell them that they have to be deployed into this one particular cluster. Okay, let's go back. Here we can see that two resources were created, basically two modules: one of them is our Kubernetes cluster and the other is the resource group. If we go here and look at what happened, we actually see quite a few resources created, not just two. As Alex was describing, there are lots of things which Azure provides to you, so many of these have already been created. Another fancy view shows you a picture of your infrastructure, like a chart, and also usefully you can see every single change that was applied, whether it was successful or not, and go into the details to see what exactly was changed inside the resources. So let's continue. For now we have a Kubernetes cluster, which is a managed cluster, so it takes a lot of pain off our hands; it covers many things we don't really want to cover ourselves. Let's continue in this index.ts file and keep building our infrastructure. The next thing our microservices need is a registry, so we're going to export the registry from another file, registry.ts, and we'll also export the registry name: const registryName = registry.loginServer, I think it's called loginServer. This is the server we need to authenticate against to start pushing and pulling our images. So we're good; let's do pulumi up once again and see what it says. While it's proceeding, we can go and look at the resources
in the portal. In Azure there is a section called All resources, where you can easily find everything you created recently. This one was used by Alex and was created manually, and all the rest were created by our Pulumi code. If we try to go to Kubernetes and see what we have inside... I think we have pretty much nothing; there's an IP address, not sure if it's the right one, and there should be some workloads, but I can't see them for some reason. Oh sorry, it's not the right resource: this is the networking piece for Kubernetes, and it's strange that it's also called kubernetes. The other one is our workshop cluster, the managed Kubernetes service, and here we have workloads: the applications executing in the cluster at the moment. This is pretty much the standard set that gets installed by default with Kubernetes. Next, we started the process of creating a registry; let's see how it goes. Okay, it suggests creating one thing; let's say yes, and while it's creating, let's go to registry.ts and see what it looks like. It's pretty much the same as the cluster: we create a new resource from the Azure library, we provide the location and the name from the resource group, and also a policy which I'm not very familiar with, but it's the basic standard format. Okay, now we should see one more resource here, the registry. Let's go back to All resources. So here are two registries now: one was created by Alex and one was created by me. Okay, so what can we do with this registry next? Our goal is to publish the application which is stored in here, so we
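The registry file being described might look something like this; the SKU and adminEnabled flag are assumptions, since the speaker says he isn't sure about the policy settings:

```typescript
// registry.ts — hedged sketch of the container registry module
import * as azure from "@pulumi/azure";
import { resourceGroup } from "./group";

export const registry = new azure.containerservice.Registry("workshopregistry", {
    resourceGroupName: resourceGroup.name,
    location: resourceGroup.location,
    sku: "Basic",
    adminEnabled: true,
});

// loginServer is the hostname you docker login / push against
export const registryName = registry.loginServer;
```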
have a Dockerfile, and we need to push an image to this registry. First we need to authorize against Azure: we provide credentials from Azure to our Docker daemon. Let's do that. What was the name... okay, I'm running pulumi up again because I need the name of the registry for the next command. When I rerun pulumi up like this, it shows that 10 resources will be unchanged, which just means we haven't changed anything on top, so that's fine; but I need the output printed after the run. So we use this name, the name of the new registry, because that's how Azure works. It says login succeeded; this message actually comes from the docker login command. Now we can build our application and tag it, with the image name grpc and the tag latest, and a dot at the end to build from the current directory. It will take a moment to build. While it's building, let's look at the registry: is there a way to see... activity log, no, I think Repositories, it should be here. For now it shows no results. Okay, let's finish the build and publish it, and after that we should see our repository created in the cloud. Yes, let me close this; this is where our cluster is. Our next goal will be to push this newly built image to the cloud. While it's still building, we can look at the Docker image in a bit more depth; I think Alex already showed this, but there is an interesting thing you may know: if you first copy just the two files with the package information, and then run yarn install after it, then at
that point you get a layer with the installed dependencies. Later on, if you change any code, whatever file it is, even the README, Docker will reuse the cache of this layer, which is significantly faster. If instead, like I had before, you copy everything first, then every single change to any file in this folder invalidates that layer, so the process starts again from that point: everything is copied again and the dependencies are reinstalled on every single change. But with these three lines, the install layer stays cached, because we don't change either of the two manifest files; only if you change one of them will the full reinstall of the dependencies happen again inside Docker. Let's see... I think yarn lerna is almost finished, since it's now running yarn lerna build, which is the last command in here, so hopefully it will finish soon. I will remove this example now. But if you create an example as we did from the beginning, you can then just run pulumi up: you create a repository, run pulumi new, pick the cloud and all that, and then you just run pulumi up and these resources appear in the cloud, so it's a very fast way to try it out. I will delete it for now; it takes pretty much the same time to delete as to create, which is crazy. It's actually blocking me, I think... let me cancel this. Okay, let me go back to where we were. Okay, it's complete; I think it was just analyzing some Docker output at the end. The image is created; the most time-consuming operations are yarn install and docker build, which is basically the same yarn install. So let's
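The layer-caching trick described above can be sketched like this; the base image, paths, and build command are illustrative, not the exact workshop Dockerfile:

```dockerfile
FROM node:16-alpine
WORKDIR /app

# Copy only the dependency manifests first: this layer (and the install
# below) is invalidated only when package.json or yarn.lock change
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Source changes invalidate only the layers from here on,
# so the yarn install above stays cached
COPY . .
RUN yarn lerna run build
```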
go and see here while we're waiting. The next thing after this will probably be a look into the DNS provider. Okay, finally it's finished, so now we can push this image into the repository. We use the docker push command with the name of the image we just built. It has started pushing, and when it finishes we can use this image inside Kubernetes... no, actually, we can't use it yet. To start using this private Docker registry, we need to grant access, how to say it, not to the registry but to Kubernetes, so it can pull images from our registry. I will just copy this part, because it's quite extensive. So what's going on here, let's see. First of all we need to extract the identity ID from the cluster. There are some rather strange lines to get the principal ID out of the cluster, and about this identity: all inputs and outputs in Pulumi are not just simple strings; an input is usually Input<T> and an output is Output<T>, generic wrappers around the underlying types. The problem here is that you couldn't just take the object ID directly, because it's an Output, which is a different type. Pulumi provides a special method called apply, which unwraps the output into a plain value: if we have an Output<string>, then inside the apply callback it is already a string, and we can work with it as usual; by the end of the method it becomes an Output<string> again, but already processed by us, so we can modify it however we want. Okay, while we've been granting this access, we can see that the
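The role-assignment step described above might look roughly like this. The exact identity field to use (the cluster's system identity versus the kubelet identity) varies by setup, so treat this as a sketch of the Output.apply pattern rather than the workshop's exact code:

```typescript
// Hedged sketch of the AcrPull grant described above, using Output.apply
import * as azure from "@pulumi/azure";
import { cluster } from "./cluster";
import { registry } from "./registry";

// cluster.identity is an Output, not a plain object, so we can't read
// principalId directly; apply() unwraps it inside the callback
const principalId = cluster.identity.apply(identity => identity.principalId);

// Allow the cluster's managed identity to pull images from the registry
export const acrPull = new azure.authorization.Assignment("workshop-acr-pull", {
    principalId: principalId,
    roleDefinitionName: "AcrPull",
    scope: registry.id,
});
```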
image has been pushed completely, so now we can go back and run pulumi up again. That applies the access assignment to the cloud, and while it's applying we can go back here... I think we've been here in the registry, right? There should already be something. So for now we have one repository, called grpc like our application, and we have one tag, latest. Okay, now we say yes here, so we've given Kubernetes access to pull images from this registry with the special pull role, and now we can move on. Next, we need to put some system resources into our Kubernetes cluster. Let's see what that means. We have a special folder here called k8s, with two files, one of them called system.ts. Let's go there first. Here we have an ingress. To say a few words about ingress: unfortunately the diagram isn't drawn so well, but basically an ingress is a tool which we install inside our cluster, and it operates in front of all of our services inside the cluster. It allows us to have a single point of incoming traffic, or multiple points, but the main idea is that we can give this ingress special routing rules. So for example, when we call the currency converter, it understands that this currency converter traffic should be routed to that particular service, and traffic for something else gets routed to another service. Okay, so now it's time to actually apply this ingress. To do so, we again just need to import it from k8s/system, and again we need to export something to make it happen; from the ingress we need a thing which is the ingress service's public IP
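The system.ts file being described could be sketched like this. The chart name, repository URL, and the generated Service name used to read back the IP are assumptions, since they aren't spelled out in the session:

```typescript
// k8s/system.ts — hedged sketch of installing the nginx ingress controller
// via a Helm chart, exposed through an Azure load balancer
import * as k8s from "@pulumi/kubernetes";
import { k8sProvider } from "../cluster";

const ingress = new k8s.helm.v3.Chart("nginx-ingress", {
    chart: "ingress-nginx",
    fetchOpts: { repo: "https://kubernetes.github.io/ingress-nginx" },
    values: {
        // Asking for a LoadBalancer makes Azure allocate a public IP
        controller: { service: { type: "LoadBalancer" } },
    },
}, { provider: k8sProvider }); // provider pins the target cluster

// Fish the allocated public IP out of the controller Service's status
export const publicIp = ingress
    .getResourceProperty("v1/Service", "nginx-ingress-ingress-nginx-controller", "status")
    .apply(status => status.loadBalancer.ingress[0].ip);
```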
So let's go back here while it's creating and see what the ingress is. To install the ingress we started to use Helm. I will do a deeper dive into Helm a bit later, when we look at our applications and their particular charts, but for now let's say it's just a package for Kubernetes: we say, here is a package, and we want to install it inside the cluster, and Helm is the easiest way to do that. I'm not familiar with Helm's competitors, but it's one of the most famous tools. Here we also import the cluster, and the reason is that we need to give a provider field to our Helm chart, so the chart knows exactly where it should be installed; from this line it already knows in which particular cluster it will be installed. Inside the Helm values we ask for the service of this ingress to be created as a load balancer. When we ask for that, the cloud provider, Azure in our case, creates a public IP address, assigns it to this service, and gives it to us. Our main goal here is to find that IP address. When we install a cluster, let's say we have five nodes, five different machines operated by one cluster, but for the ingress we need one single point of entrance, and for the DNS we also need one, so basically we need a public IP address which we can assign to the DNS, so our site can appear on the internet. Let's try to use this IP address. It shows us a 404, but it's a 404 from our newly created cluster, which is good. So what's next? We have an IP address, and we are at the moment when we are ready to create our DNS. For DNS, as I said, we're going to use Cloudflare. Let me show it here. In Cloudflare I have one DNS domain; for now it's empty, as you can see, and we are going to assign this IP address to that name. For that we again need to import another module which we already created before, from dns I think it's called, and again export something so Pulumi understands that it should be used: the hostname. Let's run it once again; I will add an extra flag, -y, meaning I agree to apply the changes, so it doesn't ask me every time. What we're expecting is for something to appear here right after that, and from then on we can call our server through the DNS. It should be quite fast; we already have 13 resources by the way, 14 with this one. Let's see... no, it's still not there. Why? Oh, let me look once again, maybe I'm blind... okay, now it's here. So we've assigned it, and when we go to this domain we should see the same 404 page, which is okay for now. Let's keep it open. So what's next? We've finally created some kind of infrastructure for our application, so we are at the moment when we are ready to deploy our apps into it. Give me a second. We have another file here called apps.ts, so let's start by importing these apps from... oh, it's not apps, it's under the folder, I'm sorry. Here we also export, and we have multiple things: a namespace newly created for this, a currency converter service, and an ECB provider. We can export any one of them, whatever, just some part of the resource; it's a bit silly that this is the only way to tell Pulumi that this resource needs to be installed into the system. While it's doing this, we'll go to apps.ts and see what it looks like. Okay, as I promised, I'm going to
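The DNS step described above could look like this with the Pulumi Cloudflare provider. The zone ID and record name are placeholders, since the actual domain isn't fully legible in the session:

```typescript
// dns.ts — hedged sketch of pointing a Cloudflare A record at the ingress IP
import * as cloudflare from "@pulumi/cloudflare";
import { publicIp } from "./k8s/system";

// zoneId and name are placeholders for your own Cloudflare zone
export const record = new cloudflare.Record("workshop-dns", {
    zoneId: "your-cloudflare-zone-id",
    name: "@",            // apex of the domain shown in the session
    type: "A",
    value: publicIp,      // the load balancer IP exported from system.ts
    proxied: false,
});

export const hostname = record.hostname;
```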
explain a bit more about Helm itself. Helm is a kind of package manager: if you imagine yarn or npm, Helm is also a package management tool. It gives you special rules for how to define your application inside Kubernetes, and it also gives you a templating engine, so you can reuse your application definition multiple times. To show Helm, let's go into this folder and see what we have. We already have one chart, called grpc, very straightforward; let's try to create another chart, which will be called rest. Now we have two charts. If we go into this folder, we see the chart files: a service, a service account, an ingress, which we mentioned before, and a deployment. If we go to the ingress, we'll see the templating part, where you can define which host should be listened on in front of Kubernetes, and which service the traffic should be proxied to, the service and the port. As you can see, many values are templated: they're not real values but placeholders that can be overwritten from the values.yaml file. In values.yaml we have a bunch of information describing our chart; these are the default values, and these are the values we can replace at the moment we install the chart. So when we install our rest chart, we can say: install this chart, but let ingress.enabled be true. What else can we do with Helm? We can run helm template and see how this rest chart will look when it's rendered. This is the final version of the chart which we will deploy to our cluster; well, an example of it, not the real one, but an example. You can see
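The value-override idea described above, the same thing as running helm install with --set ingress.enabled=true, could be expressed in Pulumi like this sketch (the chart path is an assumption):

```typescript
// Hedged sketch: overriding chart defaults at install time, equivalent
// to `helm install rest ./charts/rest --set ingress.enabled=true`
import * as k8s from "@pulumi/kubernetes";
import { k8sProvider } from "./cluster";

new k8s.helm.v3.Chart("rest", {
    path: "./charts/rest",          // the locally scaffolded chart
    values: {
        ingress: { enabled: true }, // replaces the default from values.yaml
    },
}, { provider: k8sProvider });
```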
there is a liveness probe and a readiness probe, to tell whether the application is alive and ready. Okay, let's go back to our DevOps part. So here we created... okay, some error happened during the deploy, and the strange thing is that we use TypeScript but the error happened in Go, in the request.go client library. It couldn't read something. Let's try to see what we have inside our Kubernetes cluster. For that we need a couple of things: the az aks get-credentials command, with the cluster name and the resource group we'll be working in. So from this moment I can use kubectl to get any information from my cluster, because I've just fetched all the credentials for it. Let's first see the namespaces inside: kubectl get namespaces. Okay, great, here we find several namespaces with some applications in them. Here's the namespace we used for our ingress, so let's see what's there: kubectl get pods. Here's one running ingress pod, basically a service. Now let's see what we have in our applications namespace; it should be two this time. Good, we have two of our services up and running in this namespace. For the currency converter we also assigned the DNS, so it should be sort of accessible through the DNS, but we will see an error like that, because we need to use the gRPC protocol instead of regular REST; that's something to improve. There should be some web form here where you can use our currency converter, but for now it just looks like that. As a final thing, we can try kubectl port-forward; we need the namespace, the name of the pod, and the ports. What exactly does this command do? It will
bind your local port 50051 to this port of the service in Kubernetes. When you run this command, a special channel is created, and all the traffic from this port on your local machine is forwarded directly to that specific port in the Kubernetes cluster in the cloud. Two forwarding rules are created when you type the command. Next, we try to run something against our service. For that we're going to use grpcurl; it's a command that helps us investigate this. Let me just copy it, because the command is really long. Basically it's the same as the regular curl you're used to, but this one is written specifically for gRPC. The idea is that you can send your command in plain-text format and it will be forwarded directly to the service, and the service will understand the gRPC format, because here you also tell grpcurl which specific proto file should be used. So here's an example: when we call it, it answers from our local port, and if we go here we see that our previous port-forwarding command has started accepting connections; it handled a new connection on this port and forwarded it to the Kubernetes cluster. I believe we can actually play with it a bit: I'm trying to call the other service which we have in the cloud, to receive a provider, and see if it works as well. I believe it does, because as Alex was showing, the picture should look like that. Oh my gosh, yeah. So what have we
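The port-forward plus grpcurl flow described above could be sketched like this. The resource names, proto file path, and the service/method in the grpcurl call are illustrative, not the exact workshop values:

```shell
# Fetch cluster credentials so kubectl can talk to the AKS cluster
az aks get-credentials --name workshop-cluster --resource-group workshop-group

kubectl get pods --namespace applications
# Forward local 50051 to the service's 50051 inside the cluster
kubectl port-forward --namespace applications svc/currency-converter 50051:50051 &

# grpcurl: like curl, but it speaks gRPC; -plaintext skips TLS and
# -proto tells it how to encode the request
grpcurl -plaintext \
  -proto proto/converter.proto \
  -d '{"amount": 100, "from": "EUR", "to": "USD"}' \
  localhost:50051 Converter/Convert
```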
deployed here? We have our cluster, and we deployed two services: the converter and the ECB provider. When we call the converter, it goes to the ECB provider, and the provider goes to a third-party provider and takes the rates from it; then it returns here, and on the converter side the value just gets converted and returned to us. I think that's enough to see results like that; if we amend it, we'd get something different, I believe. So that's pretty much it. Let me have a look at the questions, if there are any, in just a second... okay, no questions so far. Going back to the earlier question about versioning in Kubernetes, if it's still worth answering: when we define a Helm chart, in Chart.yaml we have two things defined. One is the version of the chart itself: when we change something in this chart and publish it to some public repository, so anybody can pull and install it, this version changes. The other is called appVersion; for us, since we just install latest, it stays like that. By the way, I forgot to mention this: if we go back to apps.ts and see how it looks, here I define that I want to create this application with this name, and that I want to use this local chart, which is located here. I also define that this chart should be deployed into the applications namespace, which I installed the step before. And in the next few lines I tell the chart to use the tag latest, and there is the repository: the one we just created in the previous step, so it's registry.loginServer plus /grpc, because we called this repository grpc. Based on these lines, the
chart, and Kubernetes itself, will understand that we need to install this specific application. We also define, you remember you asked whether it's possible to use multiple Docker images: for now, since we use one Docker image, we have to do things like this. We define where this application should execute from, so for this particular service we say it runs from services/grpc/currency-converter, and the other one runs from another folder. Basically we have one Docker image with multiple folders and multiple services inside. Another thing we define: we tell the ingress to use the base domain name, the domain we just created in, I forgot the name, in Cloudflare. We pass it into the chart, so the ingress inside Kubernetes understands that all the traffic from this domain name should be forwarded to this service. And we also need to define some environment variables. For this particular case, the ECB provider doesn't have any environment variables, so we just install it as is; but since this service has a dependency on the ECB provider, we give it a definition of how to find the ECB provider inside Kubernetes. If we go here and get the namespace again, let me show you that this service exists, and the others as well: here we have two services running, one of them the currency converter and the other the ECB provider. This name I took from the real record I see in the services, and I also put the port here. And yeah, that's pretty much it. Next I'm going to show you how to destroy all of this; I will finally stop the port forwarding. So you're going to destroy what has been done, and that's it, right, that's where it ends? Yeah, I
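Pulling the pieces together, the apps.ts values being walked through could be sketched like this. The value keys (workingDir, env, baseDomain) are chart-specific assumptions, and the domain and service address are placeholders:

```typescript
// apps.ts — hedged sketch of deploying one service from the shared image
import * as k8s from "@pulumi/kubernetes";
import { k8sProvider } from "./cluster";
import { registry } from "./registry";

export const namespace = new k8s.core.v1.Namespace("applications",
    {}, { provider: k8sProvider });

export const currencyConverter = new k8s.helm.v3.Chart("currency-converter", {
    path: "./charts/grpc",                 // the local chart from the repo
    namespace: namespace.metadata.name,
    values: {
        image: {
            repository: registry.loginServer.apply(s => `${s}/grpc`),
            tag: "latest",
        },
        // one image, many services: pick the folder this service runs from
        workingDir: "services/grpc/currency-converter",
        // how to reach the ECB provider service inside the cluster
        env: { ECB_PROVIDER_URL: "ecb-provider:50051" },
        baseDomain: "example.app",         // the Cloudflare domain (placeholder)
    },
}, { provider: k8sProvider });
```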
mean, this is the last command we're going to execute, but if we have some particular questions we can discuss them first. After all this execution, what you can see here in Pulumi is all the activity that was produced, all the changes I've been applying during the workshop, and you can also find all the resources here; there are many of them, and you can even see the dependencies. Yes, those are mostly Kubernetes concepts, the ones I generated and deployed in the cluster. Okay, go destroy it, go and destroy everything. Okay, let's do that. To add a little more perspective on Pulumi: it's still, the same way as Terraform, a way of representing your infrastructure. I wouldn't say this way is fully polished and free of problems; there will still be some, because most of the Pulumi examples you can find work perfectly, but when you try to extend them to something more sophisticated, you start spending more and more time, more than you expected when you took the ready-to-go example and started using it. So it's still not the final version, I would say. And finally I press the button to destroy everything: 25 resources to be deleted. Alex, do you have something to add to today's workshop? Yeah, well, not much; I think we can say a few words about our future plans with these technologies and with exploring microservices. We've been working on that since the end of last year, and we're exploring both sides, like programming these microservices, and now you've seen the DevOps part of the whole ecosystem. So we're planning something more: we're planning part three, basically, which is more of an architectural take on the programming, I think, and in that part we
are going to try to explore how to decompose one monolith application into microservices: what practices to follow, how to deal with it, how to separate databases, and basically where to start and how to go on. That's one thing. And regarding this particular workshop that we did today, I think it would be logical to finish it the way you described: to have a form, maybe some user interface where you can actually send requests, to demonstrate it all. That would be a nice step for the future, and maybe also a complete pipeline that uses Pulumi inside, right: you update something and it immediately starts a Pulumi task which deploys it. I think that, and actually the UI as well, is not that much work to follow up on. But again, going back to Pulumi, it's really incredible that you can use just the one language you're used to for building so many different things. For developers it opens new doors: you can go deeper into the application and then just continue doing the infrastructure work in the same language you already work in. That's great; with Terraform it was much more challenging for me, first to study the language and second to understand all the workarounds you need when you want a condition or a loop or whatever. Then I would say thanks, thanks Andrew for presenting; it was a really great second part, I think, a really deep dive into Pulumi and how to deal with it, and the Kubernetes side as well, and that's the kind of thing it's best to practice as much as you can. Thanks everyone for joining the workshop and joining the conference as well; I hope you enjoyed it and learned a lot today, and I hope to see you soon again.
    163 min
    12 Apr, 2022
