How to develop, build, and deploy Node.js microservices with Pulumi and Azure DevOps

Recording available for Multipass and Full ticket holders

    The workshop gives a practical perspective on the key principles needed to develop, build, and maintain a set of microservices on the Node.js stack. It covers the specifics of creating isolated TypeScript services using the monorepo approach with Lerna and yarn workspaces. The workshop includes an overview and a live exercise to create a cloud environment with the Pulumi framework and Azure services. The session best fits developers who want to learn and practice build and deploy techniques using the Azure stack and Pulumi for Node.js.

    163 min
    12 Apr, 2022



    AI Generated Video Summary

    The Workshop is about deploying Node.js microservices with Pulumi and Azure. It covers Docker, Kubernetes, Azure, and Pulumi concepts. The project uses a Monorepo approach and relies on gRPC and protocol buffers for communication between microservices. Pulumi is a flexible tool for creating infrastructure as code and supports multiple languages. The workshop also explores deploying applications with Helm in Kubernetes and discusses future plans for exploring microservices and database separation.

    1. Introduction and Workshop Overview

    Short description:

    Hello, I'm Alex, a Software Engineer located in the Netherlands. I have experience in JavaScript, DevOps, and backend development. I enjoy building tooling for engineers and working with Golang. Andrew Haj is a software and platform engineer in the UK. He has experience with PHP, Node.js, and Go. Today's workshop is about deploying Node.js microservices with Pulumi and Azure. We will show you the application, discuss the repository structure, dependencies, and touch on gRPC and protocol buffers.

    Hello, I'm Alex, and me and Andrew will conduct this workshop for you today. The workshop is about deploying Node.js microservices with technologies like Pulumi into Azure DevOps. Sorry, into the Azure cloud. We have a lot of plans, to be honest, for today. So, I hope you have read this agenda a little bit. I hope you had a chance to go through the prerequisites, through some general idea of what we're going to do today. If not, I just pasted the link in the chat. This is a page where you can find all of this information, maybe not for directly doing the exercises that we're going to do today in the workshop, but for later. It would be, I think, useful to get back to this material and find something useful there, I hope. Yeah, so that's our plan for today. Quite a lot of topics to cover, so let's begin with the simplest one, and I will get to that agenda a little bit later.

    So let me introduce myself. I'm Alex, Alex Porjikov, and I'm a Software Engineer, located currently in the Netherlands. Most of my career I was doing JavaScript, and before 2015 that was only frontend, and I enjoyed it quite a lot. But around 2015, maybe you remember, there was a big topic in the JavaScript community, JavaScript fatigue, and I think I was in the wave when I actually also felt it. But for me, it worked a little bit differently. I still love JavaScript, but I wanted to switch my scope of JavaScript more to backend and more to DevOps tools, and actually I found a very nice opportunity for that. I joined a European bank, and my team was doing one huge pipeline for all the front-end engineers in our bank. That included also the DevOps work, like configuring some Jenkins servers first, then configuring some virtual machines as well, then moving to the GitLab CI/CD system and also configuring nodes there specifically for GitLab. And later we also did a migration to Azure DevOps, which we use currently in our company. Yeah, that was the DevOps part, but we also had a server-side application, a backend application, some API. So I enjoyed this work quite a lot. A big part of my work that I also find very interesting is building tooling for engineers, and at this time we also had an architecture solution, also known as the CLI for engineers in our company. The idea was that an engineer can run the CLI locally, to build the project, for example, or lint it, well, you know what a normal development lifecycle requires nowadays. But the idea is that the same tooling you could execute and apply on the pipeline, and therefore, at least from a design point of view, that would make your development faster and more predictable. So yeah, that's my experience with JavaScript, specifically on the DevOps and backend stack.
Later, in the last two years, I've worked also on a more traditional backend stack, like Java, and there we also have nice tooling for engineers, but nowadays it's written in Golang. And to be honest, I find Golang a very, very lovely programming language to work with. That's, in short, about me. Feel free to contact me if you find me on any of these social platforms.

    Andrew, how are you doing? I'm good, thanks Alex. Yeah, you have such an extended speech, I'm not sure if I'm ready to repeat this from the early beginning of my career. Yeah, I just wanted to say I'm Andrew Haj, I'm a software and platform engineer here in the United Kingdom. I joined a startup about two years ago, a FinTech startup here. And that was pretty much the moment when I started exploring DevOps for myself. And the biggest part I started with was Terraform. And also building some automation tools around DevOps pieces and so on to help developers move faster in their daily routines. Yeah, but in terms of languages, I started exploring PHP years and years ago. I had a job as a PHP developer, and that was actually the moment when I met Alex, more than 10 years ago. And we had some common projects together. Since then we have been good friends and also good engineers, I mean, exploring something new all together. Yeah, then I started to gain some bits of Node.js and JavaScript in general. I wrote some cloud parser, it's called Gus parser, which allows you to scrape any website on the internet quite fast using multiple backends, and by backends I mean multiple web browsers, real web browsers or headless web browsers. Then quite recently I also started to learn the Go language. I've done some research and also done some microservices development using Go, as well as Node.js. Both are quite cool, I would say. Go I would prefer if you need to do something very thin and much less memory-consuming compared to Node.js. Again, one of the things about Go that inspired me was testing. A testing framework is included, so you just open it and start doing the things you want to do. As well as another tool, benchmarking, also included in the original Go package, so you can just investigate any of your functions and see where the problem is. Yeah, that's it about technologies, maybe.
And feel free to contact me if you want to. Right? Thanks Andrew, that was quite comprehensive. I just enabled the transcription of what we're talking about. I don't know how well that's working, but I hope it would help someone. So yeah, then we come to the point of what we're going to do today. So let's go back to this description. Yeah, this workshop, as we said, is about deploying Node.js microservices with Pulumi and Azure. So we're gonna use quite a lot of technologies today. But if we look at the agenda, let me actually start from this point now. It has, I think, two logical parts today. I will be starting the workshop and Andrew will continue in the second part. So first we want to show you the application itself. Let's demonstrate the idea of the application that we have built, the structure of the repository, maybe the tooling that we use, our dependencies, some of those. We are also gonna touch a little bit of the gRPC world. Yeah, it's not gonna be a very, very deep dive into this technology, so if it's new to you: we have actually done, with GitNation, a JavaScript workshop that was more about the protobuf ecosystem, the gRPC ecosystem with Node.js. So feel free to find it on GitNation also. I just pasted the link to the text description of this workshop, so you can go through the steps and understand how these things work in more depth. So yeah, gRPC, a little bit about protocol buffers. I will also demonstrate how it all should work locally.

    2. Docker, Kubernetes, Azure, and Pulumi Workshop

    Short description:

    We will explore how to prepare the Docker image for the microservices and touch on Kubernetes concepts. Then, we will move on to the Azure part, discussing how to migrate the repository to Azure and prepare the image for full CI/CD. The workshop will cover Pulumi and its features, focusing on a microservices application called Cryptocurrency Converter. Technical prerequisites include Git, ProtoC, yarn, and a Lerna environment. The project is a monorepo containing packages for microservices and ecosystem components.

    Then we're gonna explore how to prepare the Docker image for this application, for the microservices that we're gonna use. And the reason for that, maybe that was not originally a big part of the workshop, but we understood it makes a lot of sense. So some ideas changed while we were preparing for this workshop, but we found that we had to explain a lot of Kubernetes. Well, not a deep dive, but we need to touch a lot of Kubernetes concepts, and for Kubernetes, of course, you need to prepare a Docker image to make your application work, first locally and then as a remote image that you can pull into your system. So we're gonna explore a little bit of Docker, and from there we go to the Azure part: how to migrate, basically, your repo to Azure, or what capabilities you need from Microsoft Azure to be in place, and then start preparing this image in a full CI/CD. And this is gonna be the end of part one, and part two will be performed by Andrew.

    Part two is basically, let's say, a deep dive into code: writing code and explaining different things that Pulumi provides for you. I find it quite hardcore, this part of the workshop specifically. So if you are really into Pulumi, you are in the right place. Otherwise, please enjoy the material and let us know what we can improve. Yeah, I think approximately this workshop would take around maybe up to three hours, I hope a little less, maybe two and a half to three. So that's a little bit the agenda for today. We don't have many practical exercises. We prepared, since the last workshop, GitHub issues that you can pick up. Currently it's not much, but the idea is that if you wanna touch some of the exercises, just feel free to contribute to this repo: just fork it and start working on that. It has both DevOps and also code exercises, and let us know if you need more details on these issues, of course. That's it for the intro.

    And now I think maybe just briefly check the technologies again. What we are gonna touch today is a microservice architecture. Our main focus will be on Pulumi, but we're also gonna show how it all works with Azure and specifically the Azure DevOps product. Our main application is built on Node.js, JavaScript and TypeScript, with a Lerna configuration on it, and it's gRPC-based microservices. Of course, we're gonna use the whole Node.js stack: npm and yarn, and Docker, Git and other stuff you can expect from any software project nowadays. Yeah, that's quite a lot. So let's not waste too much time in the introduction. Let's begin the workshop.

    All right, all right. And of course, feel free to just post a message in the chat or just say something out loud, that's of course possible. So feel free to ask any question at any time. All right, so what we're gonna build today, or what we're gonna showcase today, is a simple microservices application. We call it Cryptocurrency Converter. And this is a set of microservices built in Node.js technology. And this is a small diagram that shows how this would work. So ideally we have multiple providers in the system. Each provider you can think of as a separate microservice written in Node.js. And we also have a different type of microservice: the converter microservice. So when the request from a user comes in, the converter can have a look at what providers it has, can send a request to each provider to get rates from these providers, accumulate them, and then send a response to the user. So it's quite simple, to be honest, this flow, but let's think one more time about that. So what does the user want: to convert one amount of currency to another, like Ethereum to Canadian dollars in this case. And the providers can be, well, anything that a user slash developer wants to implement, is willing to implement, or has already implemented as a provider microservice. That can be, for example, your central bank rates or Bank of England rates or any other types of rates. So you can imagine that in these providers you normally do some request to the outside world, to some API, and maybe you need to register an API key, for example, if you're building a real user application; and you do a request and you receive basically the array of rates that you can exchange your currency to.
And that's one part of it; the converter, again, sends requests to each provider at some point, and when it receives responses back, it can decide what is best for the user, or maybe it can send something back to the user so that the user decides what is best for him or her, of course. So this is, in a nutshell, what we are gonna be building. You see that the provider microservice is calling the provider one API, the second provider can call some other API, and it's up to the implementation of the converter to decide what is the best rate to choose among these. Maybe it has some, you know, commission agreement with some of the providers. So yeah, of course everything should be transparent to the user, you know.
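The fan-out flow described above can be sketched in plain TypeScript. Everything here, the ProviderQuote shape, the client type, and the highest-rate rule, is a hypothetical illustration of the idea, not the workshop's actual code:

```typescript
// Hypothetical shape of a provider's answer: its name plus a map
// of target currencies to exchange rates.
interface ProviderQuote {
  provider: string;
  rates: Record<string, number>;
}

// A provider client here is anything that can return a quote; in
// the real application this would be a gRPC client stub.
type ProviderClient = () => Promise<ProviderQuote>;

// Fan out to all providers in parallel, then pick the best
// (highest) rate for the requested target currency.
async function convert(
  amount: number,
  target: string,
  providers: ProviderClient[]
): Promise<{ provider: string; amount: number }> {
  const quotes = await Promise.all(providers.map((p) => p()));
  const candidates = quotes.filter((q) => target in q.rates);
  if (candidates.length === 0) {
    throw new Error(`no provider offers a rate for ${target}`);
  }
  const best = candidates.reduce((a, b) =>
    a.rates[target] >= b.rates[target] ? a : b
  );
  return { provider: best.provider, amount: amount * best.rates[target] };
}
```

In the real application each ProviderClient would be a generated gRPC stub rather than a plain async function, but the aggregation logic stays the same.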

    Now more about the technical prerequisites that you need for this project. Well, we put this into a GitHub repo originally. We have it at this URL you can see, and basically this is all the code you need to start it locally. So let's check it out. Of course you need Git for that, and then you need something that is called ProtoC. So please, by the way, we haven't checked much of your reaction so far. So please let us know which technologies are absolutely new to you, or put some number, one to 10, of how you estimate yourself, let's say, in these technologies today. And, again, which technologies are absolutely new for you to hear about today? Yeah, Andrew, thanks for that. Thanks, Matthijs. ProtoC, yeah, that makes sense. So, yeah, don't worry, we can explore some of those, of course, today. Well, ProtoC is something that comes from the protocol buffers world, and that covers a little bit of gRPC. Basically you need it for gRPC, and gRPC is using protocol buffers, partially. You'd better go to the official protocol buffers page, and it describes how you can install it for your operating system; for Mac users you can just use brew install protobuf, and then you will have this binary installed on your local machine. Again, with this binary you can compile protocol buffer files into JavaScript code; in our case we use JavaScript or TypeScript microservices, and we're gonna use this output target. Yeah, there are some commands here for Linux, but again, it's better to go to the protocol buffers official web page. We have all the links in the prerequisites section of this document. What else would you need: after we clone the repository, after we install the protocol buffer compiler, the next and maybe the last thing we're gonna need before starting it up locally is just a yarn environment and a Lerna environment.
So what this project is, basically, it's a monorepo, and it contains a few packages that are used internally for the microservices or the ecosystem itself. I will show it in a bit in my VS Code, but for now, this covers pretty much all of it. So you can see the proto folder, and that contains the proto files; the proto files I will demonstrate a bit later. In the services and gRPC folder, you see that we have an implementation of some currency converter and a European Central Bank provider. So if you remember this picture, we have implemented a converter microservice and we have implemented one provider microservice. And for you to play with it, you can of course try to implement another provider and make the application more complete.

    3. Internal Packages and Microservices

    Short description:

    We have internal packages for the microservices. I will quickly show the dependencies and the use of Lerna. Lerna allows you to configure a monorepo with separate packages and define the development lifecycle. We have the packages 'common' and 'services gRPC'. The go-grpc package is the main web server implementation. It loads the gRPC protobuf definition and starts a server. The currency converter and ECB provider packages call external APIs and return data to the user. The implementation is straightforward. The Currency Converter calls all the providers, aggregates data, and returns it to the user.

    We also have some internal packages that are needed to start these microservices. But I will show it in a second.

    All right, so this is the point when I'm gonna share my VS Code. I'm using VS Code, by the way, but that's not necessary; other people use something different. Andrew uses WebStorm. So I'm really addicted to VS Code and have been with it since version 0.5 or 0.7, so quite a while. And what I love about VS Code is how easy it is to debug Node.js applications, basically. That's what sold me on this editor.

    So again, let's have a look. I like to begin with the package.json, with the dependencies in here. And you can see that we don't have many dependencies in the top-level package.json. We have Lerna. That's what does the trick for internal packages. If you are not familiar with Lerna, this is a tool that allows you to configure your monorepo: to define separate packages in this repo, to define the development lifecycle for this repo, like install, like lint, like, I don't know, build maybe, even start. So ideally you can define anything for your internal packages, also the release cycle.

    Let's say in Lerna you can define and configure that you wanna publish internal packages with the same version. Or otherwise you can specify that it's gonna be a separate version per package. So when you install Lerna, you also provide what workspaces you're gonna be using, and this is just an array of local folders. And you see we have packages common and we have services gRPC, and that's basically it. So we have three common packages and three services, and that's what we define here as workspaces. All right, and this is packages, and then Lerna again, it repeats some of these packages thingies, but that's about how Yarn and Lerna divide the separate concerns between each other.

    So you specify that you wanna use yarn, not npm, as the package manager for your local project. You specify that you use yarn workspaces, logically, as you specify the npm client, and the version mode I just described before. So with version independent, you say that each package that you specify in here has its own release cycle, and they have separate version management. All right. That's about it for Lerna.
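As a rough sketch, the split of concerns just described tends to look like this in the two config files; the workspace globs and exact field values here are illustrative, not copied from the workshop repo:

```jsonc
// package.json (root): yarn owns installation and workspace linking
{
  "private": true,
  "workspaces": ["packages/*", "services/*"]
}

// lerna.json: Lerna owns the development and release lifecycle
{
  "packages": ["packages/*", "services/*"],
  "npmClient": "yarn",
  "useWorkspaces": true,
  "version": "independent"
}
```

Setting "version" to "independent" is the real Lerna option for giving each package its own version number and release cycle, which is the mode described above.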

    So, next let me quickly check. So maybe now I can show you quickly, and I'm not really gonna spend a lot of time here, what packages we have. And I think the main common package that is important for us is go-grpc. Not like a Go-language gRPC, but like a go, let's-go gRPC. So this is the main web server implementation for our microservices. It's actually self-made, handmade, but you can see it's not that much. So we just define a server class, we specify a default port where to start. What it does, importantly for us, and I'm going to explain it in more detail: as part of gRPC and protobuf, it can load the protobuf definition and then, based on that, create and define a server that the protobuf file is defining. And after that it just knows how to start a server and output some information. So yeah, it's a short, small wrapper. As you can see, it's based on the grpc-js package, the official one. And I was expecting something called express here, but now I cannot see it. So maybe it's down in some internal file or internal package. But for us it doesn't matter, it's the internal implementation; it's basically an express-based wrapper.

    Yeah, so this is go-grpc, and then let's have a look at the currency converter and the ECB provider, the European Central Bank provider. So if we start with that one, it's the package of the microservice, and it does a call to an external API. It gets data out of there, like rates in the currencies, and returns the results to a user, and the user here, or in gRPC terms the client, can be either the converter application or an end user, if we allow that in our certificates. So if we have a look at this src... again, well, actually I wanted to start with the package.json. You see that we're looking at the correct one. Yeah, it's the correct one. You see that it's using the common go-grpc, like we just showed, and besides that, it uses some of the tools that should actually be in dev dependencies, and, more importantly for us, node-fetch, the one that does the call to the external service. So if we look at this server, it's just a bootstrapping thingy. Here is a small configuration that says it wants to load the European Central Bank Provider proto. It starts on a port that we defined, and then we say that we're gonna implement getRates. Maybe I will spend a little more time on that when I talk about gRPC, but you can see in the implementation that it's really one screen of code here. It does a fetch to this rates URL that we hardcoded, it waits for the response, it extracts the data out of there, and then it sends this object outside. And of course, you need to make a correct wrapper so that the gRPC syntax is correct when you're sending a request from one application to another application. So in the end, it returns an object called GetRatesResponse, with the rates and base currency, as we define it in the proto file. But that's as simple as that.
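The shaping step just described, from the external API payload to the proto-defined response message, is essentially one pure function. The field and message names below are assumptions for illustration, not the repo's actual proto definitions:

```typescript
// Hypothetical payload from an external rates API; the real
// European Central Bank response differs in detail.
interface ApiPayload {
  base: string;
  rates: Record<string, number>;
}

// Hypothetical shape of the response message the proto file
// defines: a base currency plus a repeated list of rates.
interface GetRatesResponse {
  baseCurrency: string;
  rates: { currency: string; rate: number }[];
}

// Reshape the external payload into the proto-defined response
// so the gRPC layer can serialize it correctly.
function toGetRatesResponse(payload: ApiPayload): GetRatesResponse {
  return {
    baseCurrency: payload.base,
    rates: Object.entries(payload.rates).map(([currency, rate]) => ({
      currency,
      rate,
    })),
  };
}
```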

    And the last one here: maybe I should talk about the Currency Converter. And now, remember this picture: the Currency Converter basically does calls to all the providers, aggregates data and returns it to a user. So again, in server.js, or maybe it's better to again start with the package.json: it's using the common go-grpc dependency. And that's basically what it needs to have here. The rest comes in with that dependency; the rest should be, I would say, in dev dependencies, but for us it doesn't matter here. So in here, we define a service, currency converter. We say that this is gonna be our method to implement in there. And again, the implementation is sort of straightforward. What it does: it gets all the provider services, and for each provider service that is set in the configuration, it creates a gRPC client. And with this gRPC client, you can basically make requests. And how do you do a request with that? You just say client and then getRates, and that getRates is the one that was just defined in this getRates code in the ECB provider, like we saw before. So after that, it needs to accumulate the results, and this is purely JavaScript slash TypeScript logic. So nothing really big happens here, we just implement some of this logic and result in the converter response that I'm gonna show in a second, but you can see it's just some type of data that needs to be instantiated and returned to a user. That's it, I think, at that point; let me get back to the slides.
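One detail worth noting here: grpc-js client stubs expose callback-style methods, so a converter that wants to fan out with Promise.all typically wraps each call in a Promise first. The client and message shapes below are hypothetical stand-ins for generated stubs, not the workshop's actual code:

```typescript
// Hypothetical callback-style client, mimicking the shape of a
// grpc-js stub method: request object in, (err, response) out.
interface RatesClient {
  getRates(
    req: { baseCurrency: string },
    cb: (err: Error | null, res?: { rates: Record<string, number> }) => void
  ): void;
}

// Promisify a single getRates call so callers can use async/await
// and Promise.all across many provider clients.
function getRatesAsync(
  client: RatesClient,
  baseCurrency: string
): Promise<{ rates: Record<string, number> }> {
  return new Promise((resolve, reject) => {
    client.getRates({ baseCurrency }, (err, res) => {
      if (err || !res) reject(err ?? new Error("empty response"));
      else resolve(res);
    });
  });
}
```

With this wrapper, the converter can do await Promise.all(clients.map((c) => getRatesAsync(c, base))) and then aggregate the results.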

    4. Monorepo Approach and Introduction to gRPC

    Short description:

    We explained the project configuration and the use of a Monorepo approach for microservices. The internal technology we'll use is gRPC, a modern open source remote procedure call framework. gRPC enables communication between client and server applications, making it easier to build connected systems. It was born at Google and was open-sourced to create a stable and performant ecosystem for microservices. Each microservice can act as both a server and a client, and gRPC generates stub code to facilitate communication between microservices. The protocol format is defined using Protobuf, which allows for strict data type testing and ensures compatibility between client and server. gRPC supports multiple platforms and can be used in web and mobile applications.

    All right, looking good, so far so good. So we have explained the configuration of our project so far; we showed you that we use a Lerna configuration, so a Lerna monorepo approach for configuring our microservices in one repo. I don't know if you have an opinion about the monorepo versus multi-repo approach; if you ask me which is better, at some point I prefer not to answer these questions for myself, but I think for smaller projects a monorepo is at least as valid as the multi-repo approach. So yeah, I'm not really strict on what should be used, but I think for microservices you have a lot of packages to control, and for a showcase and some things it's really easier to keep the code closer together, so you can change things quicker, and that helps, I think.

    All right, so now let's talk a little bit about the internal technologies that we're going to use today, and the first one is gRPC. This is the official logo for the gRPC technology; it's a dog whose name is Pancakes, if I'm not mistaken. But this is the most important question for today, what his name was. But what is gRPC? gRPC is Google Remote Procedure Calls. Well, actually no one ever says that the G is Google, but that is kind of implicitly what it means, I would say. gRPC is a modern open source remote procedure call framework that can run anywhere. It enables client and server applications to communicate and makes it easier to build connected systems. So what is it all about: it's a framework, but I would rather call it a technology ecosystem.

    And it was born in Google, and I think officially started in 2015. That's not that long ago. The idea was that Google was testing this approach, I think, for more than 10 years internally. And Google is, as you know, a giant company, and it experiments with different stacks, with different technologies. It sometimes produces new technologies, new languages, like Golang, for example. You can also think of a project like Kubernetes, or whatever else we name; the list is really huge and growing every day, I would say. But the thing is that at Google you cannot just define one language for all the products that are going to be built. Yeah, it's not the case. Basically, you put down these decisions to the teams, and it's up to the teams to decide which technology is the best to use for their product. And then a team can decide to use Golang, Node.js, Python, C++, or Java. And it is really great that you can unite these services in one ecosystem. And that is where the problem basically starts.

    Because you want this ecosystem to be stable, you want it to be performant, and you really want to get rid of this pain at the team level. You want to abstract it out, so that teams think only about their business logic, to produce the product in the fastest way they can. So that's why Google first experimented with the gRPC technology internally, and then released it to open source. That's what, I think, many other projects do: when the technology matures, and to help it mature even more, they put it into open source. So, let's say, people from outside can bring something back, can contribute to this project. Maybe in terms of ideas, giving feedback to the project, also writing code, why not? But I think what you gain even more with this approach is that engineers from all over the world adopt it, and basically, when Google hires someone, it doesn't need to teach them this technology anymore. So that's what drives projects like that: a standardized microservices architecture, framework, and infrastructure. But what is it internally? Well, it's not that complicated. Each microservice, and I just showed you a very simple implementation of such microservices, needs at some point to make a request to another microservice. And for that, you can think of two roles, a server role and a client role. And note, your microservice can be both a server and also a client.

    So what gRPC does for you, out of the box, is put a small amount of code into your microservice, and to be honest, you can generate it. So ProtoC, this protobuf compiler stuff, generates stub code for your application, and then inside your client or your server, you call these generated stub methods to make calls to other microservices. Let me show you this protobuf definition finally. So what you see here is a classic protobuf example. It starts with a protobuf version; the version doesn't matter to us. We define a package namespace; it doesn't matter. What is next here is a service and some data types. And basically, that's it. That's what a gRPC definition is. So you can define a service, and this is your client or your server side, your server node, and the messages are the data types that you need to basically send data or reply with. So if you look at this hello service, you define, with the keyword rpc, the methods inside this hello service. Yeah, you can use different formats for sending this data. I think we don't need to know about these types of data, but to be honest, that covers pretty much all the possibilities of gRPC already. So again, we define the method, just Hello; it receives a HelloRequest input and returns a HelloResponse output. And what is a HelloRequest? Well, we defined it here: message HelloRequest is an object with a string key greeting, and HelloResponse is an object with a string key reply. That's it. That's how easy it is to define your protocol format, the protocol between microservices, and that's what makes the ecosystem of microservices truly stable. So imagine you have this protocol buffer defined: what that means is that you can basically test your code against the strict data types before you move it to the next stage, to acceptance or production, and test it there.
So you will already know if your data formats, for example, are not interchangeable between client and server. Let's repeat that. So we have a client and a server, and to be honest, we can even unite that; the exact definition, for us, doesn't matter. What gRPC does for us: it generates code stubs, and this stub code, in a nutshell, can help us with making calls to other services in a unified syntax. It takes care of the transfer protocol. So you don't have to specify where and how and what to do with this request. It all comes out of the box. These generated calls are supported for a number of platforms already. It has been supported in Node.js from the beginning, but I think the most-used languages are maybe Java, Go and the others; you can see at least the list is quite big and it supports many platforms. Speaking of platforms, that basically means you can execute your stub code not only on the server side, where you can configure everything yourself, but you can also generate this code for the web. And that means that you can, let's say, make a call from a browser in this format, and that would be recognized by the service, and you will get a response back like it was a normal client application. Yeah, there is support for Android and Flutter code generation also; it's quite an extended list. Yeah, one more picture: you have a client that uses a generated code stub to send the data. And these are the protocol buffers that I just explained, and it's received on the server side by the generated code stub, and the server can do the business logic applied to the requested data.
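Reconstructed from the walkthrough above, the hello service proto file would look roughly like this (the package name is an assumption):

```proto
// proto3 syntax version; the package is just a namespace.
syntax = "proto3";
package hello;

// The service defines the RPC methods a server implements
// and a client can call through the generated stubs.
service HelloService {
  rpc Hello (HelloRequest) returns (HelloResponse);
}

// The messages define the data types exchanged.
message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}
```

Running ProtoC over a file like this is what produces the stub code for the client and server sides mentioned above.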

    5. Protocol Buffers and Service Start

    Short description:

    Protocol buffers is another format for generating code and defining protocols between services. It supports data types and code generation, including stub services. Protobuf is more mature and powerful than gRPC, supporting multiple platforms and technologies. It provides strict typing and a performant data communication layer. Now, let's start our services and explore how Lerna handles dependencies.

    And this is the protocol buffers part that I just explained; it's received on the server side by the generated code stub, and the server can apply the business logic to the requested data. And then, yeah, it repeats the process and sends the response back to the client.

    All right, protocol buffers now. So, protocol buffers is... yeah, it's another format, another generated-code format, let's say. And, yeah, it might be confusing that we started with gRPC, because what we saw in the definition of this hello service is already Protobuf.

    Yeah, Mathias, I'm reading it now, let me check. So it's about getting exactly the same benefits as before, but based on the proto part; what you mentioned, for example, is whether gRPC and Protobuf are different definitions at all. Yeah, that's what I'm seeing. Yeah, yeah, yeah.

    I think it makes sense to use a monorepo for that, to configure your code, or... yeah, you could put your proto definitions in a separate repo. It would be important, I think, at the stage when you build your code, because that means you get a dependency on a single gRPC/Protobuf file. And in this case, if your application uses an old version of the Protobuf definitions, for example, or it doesn't support the services as the Protobuf file describes them, then your application will fail to build. And that would prevent your microservice from appearing on production and, yeah, from doing something wrong in the production environment. But basically, yeah, it's not required.

    Yeah, all right. So protocol buffers, like I said, this is what we already saw in the previous example, but let's explore it a little bit more. It's another technology by Google, but that one, I think, is more mature. It started in 2008, and it was also open-sourced, for the same reason. Basically, it defines this .proto format, files you can write to specify protocols between your services; well, originally for data types only. But as we already saw with gRPC, the service part is supported by Protobuf too, even though it's not data-specific; you can think of it as being there just for gRPC's sake. So the Protobuf format is not only about data types, it's also about code generation, and that's where protoc is intended to shine. When you run protoc, it will actually generate some code for you: the data types, but, as we said, it also supports the services from gRPC, so it will also generate the stub services for you.
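As a rough sketch of what running the compiler looks like for Node.js (assuming the grpc-tools npm package, which bundles protoc; the paths and file names here are made up for illustration):

```shell
# Hypothetical invocation: compile proto/hello.proto into JS message code
# plus gRPC stub services, writing the result to ./generated.
npx grpc_tools_node_protoc \
  --js_out=import_style=commonjs,binary:./generated \
  --grpc_out=grpc_js:./generated \
  --proto_path=./proto \
  proto/hello.proto
```

Plain protoc with a language-specific plugin achieves the same for the other platforms mentioned above.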

    Yeah, Protobuf, I would say, is an even more powerful technology than gRPC, because it's more mature and it supports even more platforms and technologies. The official list is similar, but in the community-driven one you will find a huge list of other technologies that are currently supported by Protobuf. And I didn't say what it is: it's a format for data types. And why you need this specific format is because when you compile it, it actually gives you a really performant protocol between services, a data protocol between services. So it brings you two things in a nutshell: first, strict typing, and second, a super performant data communication layer. So yeah, that's a little bit of introduction to that part. So now let's start our services. I think I don't have to install all the dependencies since I did it before. But it shouldn't be a problem; if you follow the instructions in the guide, it should work out of the box.

    So yeah, as seen in the listing, this is how Lerna works; we won't cover too many of Lerna's internal details. But if you install the dependencies at the Lerna top-level project, it will be much more performant and much more space-saving for the project than if you do it for each package separately. What I'm saying is that this node_modules, for, let's say, the currency converter, for example, is just four folders here. And that's not what you'd expect in a project like that: if we look at the packages, we'll see a lot of dependencies. So yeah, just counting the number of entries here, there would already be more folders in node_modules. But what Lerna does for us, and that's what makes it really cool, is that it basically hoists the shared dependencies from all the packages declared in this repo. It takes the dependencies from here, it takes the dependencies from these packages, and if there are no conflicts between the dependencies and their versions, the dependency will just pop up and end up in this top-level node_modules folder. So they will not repeat each other. And actually, this is where it connects to how the Node module ecosystem works. In Node, at each level of your application, you say require something or import something from somewhere, and what the Node resolution system does is crawl up your file system toward the top level. In each folder on the way up this ladder, it checks for a node_modules folder and whether the dependency lives there. If it lives there, then no problem: it will just load the dependency from the path that is higher than your current module. So that's how Node.js works out of the box, and that's what Lerna uses in this design. That's very nice, I think.

    So let's start one of the services. Let's begin with the ECB provider. Let's open this ECB provider folder. And for us it's gonna be sufficient to start it with Node like that. It should say something like it started listening. Yeah, it does. So now I'm gonna show you how you can make requests to this gRPC server. Because if you just do a plain call to the server, yeah, you will get some HTTP error, but we can explore even more.

    6. Protocol Call and Dockerization

    Short description:

    We need to follow the specific request specified in the protocol file. I will create a client to call the ECB provider and make a call. The response should contain the expected data. We can also start the converter service and make a call to it. Next step is to Dockerize our microservices. Docker is a tool set that enables you to develop, declare, deliver, and run applications. Install Docker as your local tool and you will be able to run any Dockerized application.

    But yeah, basically, it will not be that easy to make a call to this service. We need to follow the specific request format that was specified in the proto file. So what I'm gonna use is just the Node console. So now we're inside a Node environment. I will again just load the common gRPC module. Then I will create a client like that: I say that I want a client to call the ECB provider, on localhost on this port, done. And then I make a call. So that's just three lines of setup, you know. So I made a call, no problem so far.
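For readers following along later, that three-line REPL setup looks roughly like this, assuming the @grpc/grpc-js and @grpc/proto-loader packages; the proto file path, service name, and port are assumptions (and depend on the package namespace in the .proto), and a server must already be listening:

```javascript
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Load the .proto definition at runtime and build client constructors from it.
const definition = protoLoader.loadSync('proto/ecb-provider.proto');
const pkg = grpc.loadPackageDefinition(definition);

// Point the client at the locally running provider (insecure creds for local dev).
const client = new pkg.EcbProvider(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

// Call the stub method generated from the service definition.
client.getRates({}, (err, response) => {
  if (err) throw err;
  console.log(response);
});
```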

    So now we can have a look at the toObject method. This toObject method, by the way, is what comes out of the box when you generate your gRPC stub files. So toObject is nothing I have ever written for this project; it just comes from the box when you compile the proto files. So if I do response.toObject(), I will get this currencies list, and you can see this is the type of data that we expect. Yeah, maybe that's actually the point where I should show you the proto files. So what we just saw here, again, we saw in the presentation I think, but let me show it again. So we have this ECB provider, and if you look at this console, I just did a call and got the rates. And you see, I'm just using Node.js here, calling the programming interface: I'm programming my application and doing a normal call to this running service. So this is the provider. Now we can also start the converter service. Let me start it in a separate terminal. And start again. Yeah, of course it will crash, because I think it is not allowed to use the same port. Let me fix the port. Yeah, and now it's interesting, you see, I have it in my history of course, but I specify on which port I want to start it, and I also want to specify the provider services. So if you recall, if provider services are given as parameters to the converter, it will iterate over them and make multiple calls to these services, and then it will be possible to send requests to them. So, okay, the server has started, and I will go to the Node read-eval-print loop again. I will require my gRPC package. I will make a client; now you can see that I'm creating the currency converter client on a different port, but the rest stays the same. And let's do a call to that and have a look at how it looks. So we made a call, client.convert. 
So this is a method that was defined in the currency converter here. So we have getRates: we specify a GetRatesRequest, an empty object here, and it should return a GetRatesResponse, which, as you can see, is a message that should contain a base currency, an exchange rate, some more details, and the rates array. So let's have a look. Now we specify this convert request, so let's relate it to here. So we send a GetRatesRequest; well, in our case it's a free-form object, but let's see what we got back from it. response.toObject(), okay. And this one has, okay, sell amount, base currency. So there might be something I'm missing here. Let's quickly check in the proto files. No, that's the only place where base currency is defined. I would say I was expecting a slightly different format, but maybe I'm missing something, maybe it's just a matter of regenerating these stubs; but for us, I think, it doesn't matter. We got a response back with a currency, a sell amount, so the currency we want to buy, which currency to buy. Maybe to make it even clearer, we can have a look at the currency converter code, at the method that is actually implementing it. And that's indeed what is specified in that conversion: rate, buy amount, sell amount, buy currency, sell currency. All right, so far so good.
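The conversion itself boils down to simple arithmetic over the provider's rates. This is not the project's actual implementation, just a hypothetical sketch of what a convert method has to do, assuming ECB-style rates quoted per unit of a base currency (EUR):

```javascript
// Hypothetical conversion logic: rates maps a currency code to how many
// units of it one unit of the base currency buys (the base itself is 1).
function convert(rates, sellCurrency, sellAmount, buyCurrency) {
  const sellRate = rates[sellCurrency];
  const buyRate = rates[buyCurrency];
  if (!sellRate || !buyRate) {
    throw new Error('unknown currency');
  }
  // Normalise the sell amount into the base currency, then into the
  // buy currency.
  const buyAmount = (sellAmount / sellRate) * buyRate;
  return { sellCurrency, sellAmount, buyCurrency, buyAmount };
}

const rates = { EUR: 1, USD: 1.1, GBP: 0.85 };
console.log(convert(rates, 'EUR', 100, 'USD').buyAmount.toFixed(2)); // "110.00"
```

The returned object mirrors the fields mentioned above: sell currency, sell amount, buy currency, buy amount.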

    So the next step is to Dockerize our microservices. Let me go back to the presentation again. So now I want to speak a little about Docker. Docker, as I hope you know, is a tool set that enables you to develop, declare, deliver, and run applications; and, we could say, applications non-specific to any technology. So, for that, you need to install Docker as a local tool, and basically after that you will be able to run any application that is Dockerized, like docker run hello-world in this example. So what docker run hello-world does is download something that is called an image and then start it up to make it a running container. Those are the two main, key definitions for Docker: an image is a declared application that contains all the dependencies inside, so mostly the file system dependencies, and the container is basically this image running.

    7. Docker, Docker Compose, and Application Structure

    Short description:

    Docker is built using Linux Containers, which enable the virtualization of services and parts of the operating system. Docker Compose is an orchestration tool used to define local microservices ecosystems. Dockerfiles define the structure of application snapshots, including base images, commands to start services, and dependencies. The Docker Compose file specifies the services to start and their configurations. Docker and Docker Compose create a network per service, allowing communication between them.

    So all the file dependencies, and maybe others like network dependencies; and what it brings on top of that is knowing how to start this application snapshot for you, so it runs for you locally and you get the full power of Docker. Unlike gRPC and Protobuf, Docker is built by a company, Docker Inc., and what it uses is something called Linux containers, a concept introduced, if I'm not mistaken, around Red Hat 6 somewhere in 2005. That is a concept that enables you to virtualize your services, parts of your operating system, by using two main tools, cgroups and namespaces. Both of those come with the Linux distribution, and they allow you to limit the usage of CPU, memory, some address blocks, or some network resources. So they enable you to define what an application, or a part of your operating system, can use for this container. And then namespaces do the same but for other concepts of your operating system, such as processes, users, and also the file system. So if we look at this diagram, from the Docker official website, you have three main actors: you have a client, which is mostly your CLI; you have a Docker host, and the Docker host can be your local host or a host somewhere in your company; and you also have a registry, which contains all the images from the outside world that can be pulled from external sources into your Docker host. Again, let's talk a little more about that. When you have Docker locally, you first start with the daemon. This is a background process that gets requests from a client, and then it knows whether to use the local registry or the external registry to pull and download images. And in the end, it starts containers for you once the images are already local and that's not in the way anymore. 
So you have a client, an interface where a user can start typing commands like docker pull, docker build, docker run, docker exec; we'll explore them in a second. A registry, that database of all the published images. Well, and some other tools like Docker Desktop and, well, not really Docker-specific, Kubernetes, which is something Andrew will talk a lot about today. And Docker Compose to bind all your services. But before that, a few more words about the Dockerfile. So a Dockerfile basically defines the structure of your application snapshot in a Docker-specific format. You can specify the base image that you're going to use, and the command to start your service with; you can use RUN to install assets and dependencies and CMD to start, and you can also COPY some local folders. And yeah, these are some really nice commands that I use a lot; if you look at my bash history, I think Docker is one of the top commands I use for development. So first you build your image: you can build it locally like that, tag it with some specific name, and specify where to find the Dockerfile. But then you can start it, tmp-base for example; it will execute this CMD to start, and that will trigger the container to be created, this container that is isolated from the rest of the operating system you're working on. 
After that you can basically manage this created container: you can explore it with docker ps, you can kill the running container, you can clear the cache of all the images and containers, maybe some additional data that was used, and you can attach to the log stream of the running container. And the command that I specifically love is docker exec: with that you can attach to any running container and specify which command you want to execute inside, and with -it it's going to be interactive, so you can do normal bash stuff, for example. So I specify that I will execute /bin/sh, and after that I can make any shell calls, but inside the container; that's really nice. So, Docker Compose: it's an orchestration tool that is used heavily for defining your local microservices ecosystem. So in our use case, we defined a few microservices, two in our case, but we don't want to start them like I just showed you, right? Going to some folder, installing dependencies, and then triggering the service on a specific port is a lot of manual work, and you specifically don't want that when you deploy the application into the cloud, because everything can go wrong; someone can update a version there. You want to automate it as much as possible. So for that, we're gonna create a Docker image, and then we're gonna create a Docker Compose file, and Docker Compose will define all the services to start. So that's the part where I'm gonna show Docker Compose, and with that we are almost done with the first part; we'll just explore the Azure ecosystem a little bit afterwards. All righty. So let me go to the editor again. I think we explored this project enough already. So we did this Docker Compose and Dockerfile with, let's say, a naive approach: it's very straightforward and very simple. 
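The day-to-day Docker commands just walked through look roughly like this (the image and container names are illustrative):

```shell
docker build -t tmp-base -f Dockerfile .  # build an image and tag it
docker run --name tmp-base-1 tmp-base     # create and start a container from it
docker ps                                 # list the running containers
docker logs -f tmp-base-1                 # attach to a container's log stream
docker exec -it tmp-base-1 /bin/sh        # interactive shell inside the container
docker kill tmp-base-1                    # kill the running container
docker system prune                       # clear cached images/containers/data
```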
So when I first started doing that, I tried to make it really nice, to make the simplest and tiniest image possible. But I ran into a little bit of trouble, because we have a Lerna-composed project, a monorepo, and to install all the dependencies you would need to do an installation for each microservice separately, because in the Lerna local dependency structure you have node_modules hoisted up, like we explored. But at build time, and more importantly at deploy time, you basically need to handle the dependencies differently: you need to install them separately for each package. And that's a task in itself, I would say; it's one of the practical steps you can think about for this ecosystem. So if we keep that in mind, then we will see that we use a Node LTS Alpine base in the FROM of the Dockerfile defining our microservices. Then we do just apk update and upgrade, like all good guys do. And we do apk add bash and protoc. protoc is the same protocol buffer compiler that was a prerequisite for today's workshop, and we add bash because Alpine Node doesn't have it. And basically, yeah, what I ended up with is the same structure inside my Dockerized application snapshot. So I decided, yeah, the simplest would be just to install all the dependencies and basically move the whole application in there. So that's what I'm doing here: just adding the whole local monorepo folder into the Docker image. In our case it's fine because we're showcasing it, but if you want to really optimize your Docker image, then you will need to think about how to make sure you have a minimal set of dependencies for each microservice. But yeah, if you just add this to your Dockerfile, you still don't know how to run it, well, how to start it. So that's where Docker Compose can help. Let me close these terminals. It should also stop my applications, I hope. 
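Putting those steps together, the naive Dockerfile might look something like this (a sketch, not the exact file from the repo):

```dockerfile
# Base image: Node LTS on Alpine (already ships with yarn).
FROM node:lts-alpine

# Update the package index and add bash (missing on Alpine) plus the
# protobuf compiler used to generate the gRPC stubs.
RUN apk update && apk upgrade && apk add --no-cache bash protobuf

WORKDIR /app

# Naive approach: copy the whole monorepo in and install every workspace
# dependency, rather than doing per-microservice installs.
COPY . .
RUN yarn install
```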
Yeah. Looks fine. So what I specified here: I build this image, tmp-base, and more than that, I specify where to start it, and you see my entrypoint, which in our case is a synonym for CMD in the Dockerfile. It just runs npm start, like I did before in a command prompt. I also specify that I want to use port 50051 and expose it externally. And for the second service, the currency converter, I use the same image, but I use a different path to start it up, still using npm start. Yeah, Matheus, thanks, I also think it's the simplest, specifically for our part. And in here, I just configure what I showed you locally: I specify the port environment variable and the provider services environment variable. One thing to note here is that Docker and Docker Compose, when starting, create a network for your services. Therefore you don't have localhost:50051; you have an ecb-provider host, like it was specified in here. And therefore you have to specify that the host will be different, ecb-provider, like that.
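A docker-compose.yml matching that description could look like this; the service names, ports, and package paths are assumptions based on the walkthrough:

```yaml
version: "3.8"
services:
  ecb-provider:
    build: .            # builds the tmp-base image from the repo's Dockerfile
    image: tmp-base
    working_dir: /app/packages/ecb-provider
    entrypoint: ["npm", "start"]
    ports:
      - "50051:50051"   # expose the provider's gRPC port externally

  currency-converter:
    image: tmp-base     # same image, different package started inside it
    working_dir: /app/packages/currency-converter
    entrypoint: ["npm", "start"]
    environment:
      PORT: "50052"
      # Inside the Compose network, services are reachable by name,
      # not via localhost.
      PROVIDER_SERVICES: "ecb-provider:50051"
    ports:
      - "50052:50052"
```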

    8. Docker Compose and Azure Part

    Short description:

    To start the application, use the command 'Docker Compose up'. We will now move on to the Azure part, where we will create an Azure account and explore Azure DevOps, Azure Container Registry, and Azure Pipelines. Azure DevOps is a code and story management product that includes a CICD pipeline. Azure Container Registry is a Docker registry where you can push your local image. Azure Pipelines allows you to configure your pipeline to build and publish images. We will clone the repository from GitHub and add an Azure pipelines file to define the pipeline. The pipeline will build and publish the Docker image, which will end up in the Microservices United repository.

    So let's give it a try. The way you start it up is very simple: you just say docker compose up, and you see it does what we did before manually. You can also see that it started both of them. So if we do that again, this is a Node session, and I will just copy the whole command. Let's do the currency converter command. Yeah, it still works, still responds with the same type of data. Okay, that was the Docker thingy.

    So now let's go to the infrastructure, let's go to the Azure part. Let me go back to my browser. Yeah, now about Azure. Where to start, where to begin? Well, to begin, I think you need to start with creating your account in Azure, if you haven't done it before. The portal is just the main entry point after you've created an Azure account, a Microsoft account. So after you create this... we use one account, because Azure is actually quite expensive, let's say, but it's enough to make all kinds of experiments. You just need to register as a new user and you will have 150 dollars of credits, I think. So that's more than enough to play around and use a couple of products they have. So yeah, Azure is huge; if you've never explored it, it's going to be tremendously big to start with. So you see I'm scrolling already and talking for like 10 minutes, and still I haven't finished scrolling. No, I'm kidding, but it has a lot of services inside. So for us, we're just going to use a few of them today. We're going to be using CI/CD; for that, we're gonna need Azure DevOps. So Azure DevOps contains a repository connection. You can also have some user management there; well, not management, but communication, like boards, backlog task organization, sprints even. So Azure DevOps is the product that allows you basically to deal with code and also with different stories. It also contains the CI/CD pipeline; that's what we'll be exploring a little today. We're also gonna need Azure Container Registry. So the container registry is a separate resource, a separate product: this is basically the Docker registry that I just explained. So for that, you'll need to create a Docker registry manually. 
But after that is done, you can just easily do docker login, like you would with any other Docker registry, and then you can tag your local image with the new container registry name and manually push to it. So yeah: Azure DevOps, Azure Container Registry, and Azure Pipelines. Pipelines, I think, is part of Azure DevOps, but I still had to create a service connection, something that enables you to hold non-personal account credentials for your registry in a secure way, and therefore to configure your pipeline to build and publish images, for example. But that can be pretty much anything; it can be a token or key that is going to be used for calling an external API. So yeah, if I look here, what I just did for this repository: I cloned it from GitHub, I added the newly created remote, and then I pushed here, so I have the same repo as I have it in GitHub. One small thing that I added to this environment was this azure-pipelines file. So the pipelines file is what defines your pipeline in the Azure DevOps thingy. And again, if you've never explored it before, it has a lot of new concepts, maybe, but I think most modern CI/CD solutions have a similar structure. So that's what the configuration is called; basically, we define what happens when you make a change to your code. On this branch, it says: I want to start the stage build. It will build an image, and in here it will actually build and publish an image. So first it will build it, and then in the next step... yeah, I thought it would be building and publishing. So yeah, it's called build and push, so I have to specify the task, which is the smallest piece of the pipeline concept in Azure DevOps. So I say here the task is Docker@2, so this is the technology bound to the task. And yeah, if I show you this, maybe. Can I show you? 
Share screen, and I'm not sure it will show a nice editor, but in many cases it can... no, it doesn't show it like that, but yeah, if I go from here, maybe. And yeah, there are many, many patterns and many ways of configuring the same thing in Azure DevOps, so you have to get used to that eventually. But it has, for example, these right-side parts; it has a helper for how to specify a task. So in here I specified the Docker task, and then it helps me fill in the needed input data. So I specify the container registry, the one that I created manually. I specify microservices-united, the tag that I created, attached to my built image; the command to execute on Docker; and a Dockerfile, which is the top-level Dockerfile in this repository. Let me collapse it a bit, okay? Like that. So then, if I look at the pipelines, and at the last one, basically, that's what we said: all it does is build the Docker image. It does the FROM node:lts-alpine like we explored in the Dockerfile, it starts, it does the yarn install, it copies files, it does something else, and then in the end it does a Docker publish, a docker push, I would say. And then it ends up in my microservices-united repository. That's, I think, the last thing I wanted to show. I have maybe a few questions before I give you the word, before I give you the screen. Did you know that node:lts-alpine actually comes with yarn installed? No. It's amazing, right? If you just use the LTS Alpine Node image, it has yarn as a global actor there. Has that started to be provided with Node.js as well? Yarn.
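An azure-pipelines.yml along the lines described might look like this; the branch, service connection, and repository names are placeholders:

```yaml
trigger:
  branches:
    include:
      - main            # run the pipeline on changes to this branch

stages:
  - stage: Build
    jobs:
      - job: BuildAndPush
        pool:
          vmImage: ubuntu-latest
        steps:
          # The smallest pipeline piece is a task. Docker@2 builds the
          # image from the top-level Dockerfile and pushes it to the
          # registry configured in the service connection.
          - task: Docker@2
            inputs:
              containerRegistry: my-acr-connection   # placeholder name
              repository: microservices-united
              command: buildAndPush
              Dockerfile: Dockerfile
              tags: |
                $(Build.BuildId)
```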

    9. Introduction to Pulumi and Azure Setup

    Short description:

    Pulumi is a flexible tool for creating infrastructure as code, supporting multiple languages. It allows you to write infrastructure code using familiar programming language features and provides easy sharing and reuse of modules. Pulumi encrypts secrets and hides sensitive information, unlike Terraform. While Terraform has some limitations in describing conditional resource creation, Pulumi offers more flexibility. It also supports Terraform modules and allows you to use them in your Pulumi projects. Pulumi can be set up with Azure using the Pulumi CLI and Azure CLI commands.

    I don't think so, no, I wouldn't say so, but it's interesting. Then, what do you think are the advantages of the infrastructure-as-code approach? Do we have anything to say about that? About infrastructure as code? I mean, in general, the biggest advantage is that you can just reproduce this infrastructure. For example, once you've created it, you can apply the same scripts to your staging or production environment and just repeat your different environments, configuring them in different ways. Awesome, man. Thanks, and please take it from here.

    Okay. Let me try to share. I noticed there is a question: each package has the property private defined in the monorepo config. Yeah, I think that's correct; maybe Andrew will correct me, but I think it's about npm publish, and that's true, so we don't publish our packages to npm. If we did, it would simplify the build and deploy step for us: the one that I explained, I hope, about putting the dependencies of a Lerna monorepo into the Docker image. That would be much easier if we actually published these packages to the npm registry directly, because after that, we would just install them from the registry, and that would make our image very, very small, I think. Yeah, and we also have a question on Kubernetes that I think Andrew will cover, I hope. Yeah, okay, I'll start to share my screen. Can you see it, guys? Yeah, awesome. Yeah, I mean, we will get to this answer about Kubernetes and versioning, I think, by the end, when we speak about Helm a little. If it's okay, I will answer this question a bit later today. Yes, you can, but for this specific example, we just took one Docker image and put all the packages inside. So basically we have one Docker image shared by every single microservice, and then we deploy the whole image. And then, as a final thing, we use just one service from this image, or another service. But in general, yeah, you could improve this behavior and actually build a very thin Docker image per service. It's up to you, yeah, but for simplicity, we just started with this more common example, I would say.

    So yeah, first of all, I want to start with an introduction to Pulumi. Pulumi is a developer-centric, I would say, tool for creating infrastructure as code. I'm not sure about you guys, are you familiar with Terraform or not? But compared to Terraform, it's much more flexible, because you can use the regular constructs of the language you already use in your daily routine: loops, conditions, functions, classes, and basically anything that is covered by the language you pick. It becomes very productive, because you can write your application code and then just move to the infrastructure in a separate folder and start coding your infrastructure in the same language. Which is beneficial, because you don't need to learn much on top of it, again, compared to Terraform. Sharing and reusing is also a theme that exists in Terraform, but in Pulumi you can define basically a single file, in our example a JavaScript or TypeScript file, describe some module or package inside this file, and then reuse that package via an import from another file. Yeah, so what languages are supported? Various languages, like Python, TypeScript, JavaScript, Go, C#, and F#. Let me move this out. Yeah, it's reusable, as I already said, just for comparison. And another bit to say: secrets are encrypted in transit, so when you encrypt your secret, it's encrypted on your side as well. Compare that to Terraform, where you can open the state file. Basically, the state file is the current description of all your created infrastructure; for example, if you create a private network, it will appear in the state file as a resource. 
And then if there is some private information involved when you create this resource, you can go into that state file in Terraform and see the private information in plain text, which is not really great. Pulumi, by comparison, lets you hide this information and keeps it encrypted on every side. Yeah. So how does Pulumi compare to Terraform? Terraform has a dedicated language, HCL, but that language is about describing resources, describing things — not about writing real code. And when you come to the question of how to create a resource conditionally, depending on some condition in the environment or on whether another resource exists, it becomes very complicated in Terraform, because you need to start using workarounds — it doesn't provide these abilities out of the box, so you have to find a way around it. When I started with Terraform two years ago, it looked exactly like Pulumi looks to me now: an amazing technology I could apply everywhere, for everything. But moment by moment, as you dig deeper into it, you understand that there are disadvantages as well: it's hard to describe whatever you want in that kind of language. Another disadvantage, as I already said, is that it relies on a separate tool, Vault, for encrypting secrets, and part of this information can be lost or shown in the state file. On the advantage side, it gives you Terraform modules — basically a similar thing to packages in Pulumi, except you separate them into an external folder; you can even publish a module, and then somebody can reuse it from the cloud. Let's move on with the Pulumi and Azure setup. We'll stay here for a while.
If you use macOS, it will be easy for you to install the Azure CLI and Pulumi CLI and start using them. So, the first thing I want to show you from the very beginning: I want to create another folder. I already have a folder called infrastructure, which is connected to my current Pulumi configuration and so on. Here I just want to create another folder — let's call it infra. Okay, I created it, moved into it, and now I want to run `pulumi new` and see what happens. So yeah, the very first command you need to run is `pulumi new`, where you define which technology and which cloud you're going to use. Our choice is TypeScript and Azure for now, so I will pick this one. It also asks me to provide the project name — okay, it will be infra — a description, whatever, and the stack. Okay, it's created, and it remembers my current configuration.

    10. Pulumi Cloud and Infrastructure Description

    Short description:

    Let's see what happens on the other side. Pulumi Cloud creates another stack with a basic description and no resources. We authorize Pulumi and Azure tools and install dependencies using Yarn. In the infrastructure folder, we start with a description of the cluster and resource group. Pulumi only deploys resources that are used in the code. When running Pulumi up, it performs planning by comparing the code with the existing infrastructure and shows a plan of changes.

    Let me see here. For location — as I'm in the UK, it will be more beneficial to pick a region here. And it starts creating a new folder; I think it should be appearing here. Yeah. So once we've chosen everything, it basically clones an example repo with all the code we need, including some code samples. And let's see what happens on the other side. The other side for us is Pulumi Cloud: it should actually create another stack. Yeah, the stack is created here. If we go inside and see what we have: we have a basic description of the stack — basically who the owner of this repository is and things like that. We can also see the activity, which is empty for now. Yeah. And there are no resources, which is also expected. Let's see if it got installed or not. And then we move this out. Good. While it's creating, I will go to our main thing, which is the infrastructure folder, and start showing it bit by bit. So yeah, basically when you've done this setup, it creates a new file, which is index.ts, and everything located in this file Pulumi will try to create in the cloud. So here I already have something, but in my project example I have nothing. So if I go to this folder, the infrastructure folder, and try to run `pulumi up`, it will try to apply these changes. But basically — okay, it's saying it wants to create. Why? Oh, okay, because I already destroyed that before. So for now it will just create the stack again, I think, and basically do nothing on top of it. Two seconds — the very best performance Pulumi has shown me so far. Let me see if it's installed here. I don't know why it takes so much time to install TypeScript. It's the kind of thing Alex would love. — Yeah, I have it installed globally. — Wow. — Oh, really? — Yeah, it's one of the, I think, three packages I have installed globally. — Nice. Which basically solves the issue.
Yeah, apart from creating this new stack in Pulumi, you also need to authorize your Pulumi CLI. You do that, and it basically goes: okay, you're logged in, cool. And the same stuff for Azure, with its login command. Okay, it prompts me here. I'm not going to do this again — I've already done it, I'll just cancel — but in general you have to do this, because without it, it's not going to work. And then you also need to install the dependencies with Yarn. Okay, it's just taking ages to proceed. Yeah. Anyway, we'll continue in the other folder, the infrastructure folder, and we will start with a description of all the infrastructure. So let's start with our cluster. For the cluster, we have already created everything, which I'll show you. We have cluster here — the cluster in this context is a module, or package, that I already created in a separate file, and we can reuse it. And another thing is the resource group: we want to put all our resources under this resource group, and also export the group name and the cluster name. Another thing to say about Pulumi: we are in the index.ts file at the moment. So if I just import these things and run `pulumi up`, it produces nothing. However, my expectation was: why? I imported these things, they're already in the index.ts file, so it should give me a result — but it doesn't. Pulumi only looks at the resources you actually started to use; once you use a resource, it can be deployed to the cloud. It takes so much time, you know. So basically, when you run `pulumi up`, two things happen. The first is planning: Pulumi diffs your code against the real state of the infrastructure that has been created up to this moment.
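The planning step prints a preview table before asking for confirmation. A typical preview for a stack like this looks roughly like the following (illustrative, not verbatim output; resource names are assumptions):

```
Previewing update (dev):

     Type                                             Name              Plan
 +   pulumi:pulumi:Stack                              infra-dev         create
 +   ├─ azure-native:resources:ResourceGroup          workshop-group    create
 +   └─ azure-native:containerservice:ManagedCluster  workshop-cluster  create

Resources:
    + 3 to create

Do you want to perform this update?
```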
And it shows you a plan of the changes that would be applied to the infrastructure — and here, basically, yeah, we're going to say no.

    11. Deploying Cluster and Creating Resources

    Short description:

    Next, we start deploying something by creating a cluster and resource group in Azure. We also explore configuration options and secrets. Moving to the cluster TS file, we create a Kubernetes cluster under the resource group. We add credentials, export kube-config, and create a firewall. We use kube-config to connect to the cluster and export a Kubernetes provider. Multiple resources are created, and we can view the infrastructure changes and details. The managed Kubernetes cluster simplifies the process. We continue creating the infrastructure in the index.ts file.

    If my computer will allow me to do that. Okay, so yeah, next step — let's already start deploying something. Let's export the cluster name: `export const clusterName = cluster.name`, and we also want to export the group name. Yeah, so let's run `pulumi up` again. Now we're going to see the resource group in the plan, and the cluster as well — by cluster I mean the Kubernetes cluster. And while we're waiting, let's go to these files and see what's inside.

    So here we have the Azure resource. Basically, we installed a new module and imported it. And here's the name of the resource we're about to create: workshop group. We want to put it in the UK South region. And we also export it — if we export a resource from our file, we can then reuse it here. First of all, yeah, let's see. Okay. Let's press yes. So while it's creating everything, let's go to the cluster and see what's inside. The cluster is basically another resource that we take from Azure, and we also take a configuration for it. To speak a bit more about configuration, I'll go here. Go back to this. For configuration purposes, we can use the `pulumi config` command to show our configuration, and we can also create a new value or read one. Let's say `name` is Andrew. So we added a new configuration value; it should be appearing here — not here, maybe here. No? Why? Oh yeah, yes: name is Andrew. And yeah, if we decide to store something sensitive or private, we can set a secret — let's say `password` is one, two, three, four, five, six, seven. Once we've done this, you see a slightly different line appears, which basically says this is an encrypted value, so we cannot read it back in plain text from here — but we can still get it through the CLI, where it's decrypted for us if we run a get. Yeah. And let's have a look at what actually changed on Pulumi's side when we did this. So if we go to infrastructure,
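The configuration commands from this step, roughly as run above, look like this (the stack, key names and values are illustrative):

```console
$ pulumi config set name Andrew
$ pulumi config set --secret password 1234567
$ pulumi config
KEY       VALUE
name      Andrew
password  [secret]
$ pulumi config get password
1234567
```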

    Okay, secrets — I'm not showing them here. I think I need to deploy for some of these new secrets to be applied, but there is another secret, which is a Cloudflare API token. You can see that you cannot get it out from here now. Yeah, let me move to another room maybe. Give me a second, please. So, we stopped at cluster.ts. Let's go back to this file and see. Here we create a couple of resources for building our Kubernetes cluster. And yeah, here's the main resource, to be honest: our workshop cluster. To create it, we import the existing resource group, which we created before, and we also use this resource group here, so our cluster will be created under it. And there are a couple of things we add as a little extra: credentials, and we also export and extract the kube-config. The kube-config is, you know, the configuration for your cluster — you can apply this config to your local environment and start connecting to this remote cluster and interacting with it. Another thing to say: we also export a Kubernetes provider, we created a firewall, and then we use this kube-config to connect to the cluster. Basically, we will use this provider for all further resources that we are going to put into our Kubernetes cluster — for those resources we can say: you have to be deployed into this particular cluster. Okay, let's go back here. Yeah, here we can see that two resources were created — two modules, let's say: one of them is our Kubernetes cluster and the other is the resource group. If we go here and see what happens: we already see some resources created. It's not just two — as Alex was describing, there are very many things Azure can provide to you, so many of them have already been created.
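A condensed sketch of what a cluster.ts like this does — the exact arguments are assumptions, the real file has more options, and it only runs inside a Pulumi program:

```typescript
// cluster.ts — sketch: an AKS cluster in the shared resource group,
// its kubeconfig, and a Kubernetes provider bound to that cluster.
import * as containerservice from "@pulumi/azure-native/containerservice";
import * as k8s from "@pulumi/kubernetes";
import { resourceGroup } from "./group";

export const cluster = new containerservice.ManagedCluster("workshop-cluster", {
  resourceGroupName: resourceGroup.name,
  agentPoolProfiles: [{ name: "default", count: 2, vmSize: "Standard_B2s", mode: "System" }],
  dnsPrefix: "workshop",
  identity: { type: "SystemAssigned" },
});

// Fetch admin credentials and decode the base64 kubeconfig.
const creds = containerservice.listManagedClusterAdminCredentialsOutput({
  resourceGroupName: resourceGroup.name,
  resourceName: cluster.name,
});
export const kubeConfig = creds.kubeconfigs[0].value.apply((b64) =>
  Buffer.from(b64, "base64").toString("utf8"),
);

// All later Kubernetes resources reference this provider, so they
// land in this specific cluster.
export const provider = new k8s.Provider("workshop-k8s", { kubeconfig: kubeConfig });
```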
And another nice feature: you can see a picture of your infrastructure, like a chart, which is also useful. You can see every single change that was applied — whether it was a successful or an unsuccessful change — and you can go into the details, I think, and see what was changed inside the resources. Yeah, so let's continue. For now, we have a Kubernetes cluster, and it's a managed cluster, so it takes a lot of pain off our hands. So, yeah, let's continue in this index.ts file and keep creating our infrastructure.

    12. Registry Creation and Image Publishing

    Short description:

    Let's continue creating our infrastructure by exporting the registry from registry.ts and authenticating to start pushing and pulling images. We can see the resources we created in Azure, including the workshop cluster and workloads in Kubernetes. We start the process with the registry, creating a new resource from the Azure library. Now we have two registries, one created manually and one created by our Pulumi code. To publish our application, we need to push an image to the registry. After providing credentials from Azure, we can build our application and use the registry name to complete the process.

    Sorry. It covers many things which we don't really want to cover. So, yeah, let's continue in this index.ts file and keep creating our infrastructure. The next thing our microservices need is a registry, so we're going to export the registry from another file, registry.ts, and we're also going to export the registry name — here it's called the login server, I think. Yeah, this is the server where we need to authenticate to start pushing and pulling our images. So let's run `pulumi up` once again and see what it says. While it's proceeding, we can go and see the resources here. In Azure there is a section called All Resources, so you can easily find all the resources you created recently. Yeah. This resource was used by Alex — basically the one that was created manually — and all the rest, as you can see, was created by our Pulumi code. If we try to go to Kubernetes and see what we have inside — I think we have pretty much nothing. We have some IP address, not sure if it's the right one, and there should also be some workloads. You cannot see it for some reason — oh, sorry, that's not the right Kubernetes, it's a networking item for Kubernetes; why it's called Kubernetes is strange. But we also have the workshop cluster, which is our managed Kubernetes Service. And here we have workloads: our applications executing in the cluster at the moment. Basically this is the pretty much standard set installed by default with Kubernetes. And next, we started the process with the registry. Let's see how it goes. Okay, it suggested creating one thing. Let's say yes. And while it's creating, let's go to the registry file, registry.ts, and see what it looks like. It's pretty much the same as the cluster: we create a new resource from the Azure library.
We provide the location from the group, the name of the group, and also some policy, which I'm not very familiar with, but this is the basic shape. So, okay, now we should see one more resource here. It should be a registry. Let's see. Okay, let's go back. And back again. Oh, All Resources will be faster, sorry. So here are our two registries now: one created by Alex and one created by me. Okay, so what can we do with this registry next? Our goal is to publish our application, which is stored here. We have a Dockerfile, but we need to push an image to this registry. So let's go here. First of all, we need to authorize with Azure: we provide our Docker daemon with credentials from Azure. So let's do that. What was the name? Okay — if I run `pulumi up` again, sorry, because I need the name of this registry to use it in the next command. Let's type it first: login name. So yeah, when I just re-run `pulumi up`, it shows something like 10 resources will be unchanged, which is expected — we haven't changed anything, so that's fine. But the thing is, I need the output after this run. When it prints everything — yeah, I need this one. We basically need to use this new registry name, because this is the way Azure registries work. And it says login succeeded — this "login succeeded" comes from the Docker command. So now we can actually build our application and use this tag: it will be gRPC, the tag will be latest, and off it goes to build everything. It will take a moment to build, I think. While it's building, let's go and look at the registry. Is there a way to see the activity log? Repositories should be here. For now it shows no results. Okay, let's try to finish the build and publish it.
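The Docker side of this step, as a rough command sequence — the registry name, credentials and image name here are illustrative placeholders, not the workshop's actual values:

```console
$ docker login workshopregistry.azurecr.io -u <username> -p <password>
Login Succeeded
$ docker build -t workshopregistry.azurecr.io/grpc:latest .
$ docker push workshopregistry.azurecr.io/grpc:latest
```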

    13. Pushing Image to Cloud and DNS Provider

    Short description:

    We will push the newly created image to the cloud and explore the caching of dependencies in Docker. After that, we can run the Culumi app to see the resources appear in the cloud. Next, we will look into the DNS provider. We need to give access to Kubernetes to use the private Docker registry. We extract the identity ID from the cluster using the apply method in Pulumi. Once we have access, we can push the image into the repository and use it in Kubernetes.

    Some time after that we should see our repository created in the cloud. Yes, let's close this. This is where our cluster is too. So yeah, our next goal will be to push this newly created image to the cloud. While it's still building, we can look at the Docker image in a bit more depth.

    Once again — I think Alex already showed this. Yeah, the interesting thing, and you may know it already: if you first copy just the two files with the information about the packages (the manifest and the lockfile), and then run yarn install after that, at this point you get a layer with the installed dependencies. Later on, if you change any code — whatever file, it could be the readme — Docker will reuse the cache of this layer, which is significantly faster. For example, with the version I had before, every single change to any file in this folder invalidated the copy-everything layer, so the process started again from that moment: copy everything again, install the dependencies again — and that happened on every single change. With these three lines of code, the install layer stays cached, because we don't change either of these two files. But if you do change one of them, for sure it will proceed with the full reinstall of the dependencies inside Docker again.
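A minimal sketch of this layering pattern — the paths, base image and build command are assumptions, not the workshop's exact Dockerfile:

```dockerfile
FROM node:16-alpine
WORKDIR /app

# Copy only the dependency manifests first: this layer — and the
# expensive install below — stay cached until one of them changes.
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Copy the rest of the sources; changes here invalidate only the
# layers from this point on, so dependencies are not reinstalled.
COPY . .
RUN yarn build
```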

    Let's see — yeah, I think the lerna build is almost finished; for now it's just running `yarn lerna build`, which is the last command here, so hopefully it will finish soon. And I will remove this example for now. But if you create an example as we did at the beginning — which is basically what I did several months ago: created a repository, ran `pulumi new`, picked the cloud and all that — you can then just run `pulumi up`, and some of these resources will appear in the cloud. So it's a very fast way to try it out. Yeah, I will delete it for now. I think it takes pretty much the same time to delete as to install, crazy. It actually blocks me, I think. Cancel this. Okay, let me go back. I've been here. So, okay, yeah — it's complete. I think it's just analyzing some Docker things before the image is created. The two most time-consuming operations are the NPM/Yarn install and the Docker build, which is basically the same Yarn install. Let's go and look here while we're waiting. Yeah, the next thing after this, we will probably have a look at the DNS provider.

    Okay, it's finished. Now we can push this image to the repository: we need to use the push command, and this is basically the name of the image we've just built. It started pushing, and when it's finished, we can move forward and use this image inside our Kubernetes. Actually — yeah, we can't use it just yet. To start using this private Docker registry, we need to grant Kubernetes access to pull images from the registry. I will just copy this part because it's quite extensive. There it is. Pulumi. Yeah. So what's going on here? Let's see. First of all, we need to extract the identity ID from the cluster. These slightly strange lines get the principal ID out of the cluster. And here I have to say: in Pulumi, all the inputs and outputs are not just simple strings — an input is typed as Input<T> and an output as Output<T>, generic wrappers over the underlying types. And basically, the problem here is that you couldn't just use it directly and read, say, objectId off it — because it's not there; this is an Output, which is a different type. There's a special method provided in Pulumi called apply, which extracts the value from the Output into a simple variable. So if we have, for example, an Output of string, when we run apply we get a ready, plain string, and we can operate on this string as usual. By the end of this method, it becomes an Output of string again, but already processed by us — so we can modify it however we want.
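To make the Output/apply idea concrete, here is a toy model of the semantics in plain TypeScript. This is not Pulumi's real implementation — just an illustration of why you can't read the value directly and must go through apply (the IDs and paths are hypothetical):

```typescript
// Toy model of Pulumi's Output<T>: the value is not available
// synchronously, so transformations are queued and re-wrapped
// in a new Output.
class Output<T> {
  constructor(private readonly promise: Promise<T>) {}

  // apply() lifts a plain function over the eventual value and
  // returns a new Output wrapping the transformed result.
  apply<U>(fn: (value: T) => U | Promise<U>): Output<U> {
    return new Output(this.promise.then(fn));
  }

  // For the sketch only: unwrap the value (Pulumi does this internally).
  value(): Promise<T> {
    return this.promise;
  }
}

// e.g. a cluster's principal ID arrives as Output<string>, not string:
const principalId = new Output(Promise.resolve("abc-123"));

// You cannot concatenate principalId directly; apply() gives you the
// plain string inside the callback and re-wraps the result.
const scope = principalId.apply((id) => `/subscriptions/xxx/principals/${id}`);
```

The real Pulumi SDK adds dependency tracking and secretness propagation on top of this, but the "unwrap inside a callback, re-wrap the result" shape is the same.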

    Okay — while we've been granting this access, we can see that the image has been pushed completely. So now we can go back here and run `pulumi up` again, so this access assignment is applied to the cloud. And while it's applying, we'll go here. I think we were here in the registry, right? So yeah, there should already be something. Yeah.

    14. Deploying Apps with Helm

    Short description:

    We have created the infrastructure for our application, including an ingress for traffic routing and a DNS name assignment. We are now ready to deploy our apps into the infrastructure. Helm is a package management tool that defines application deployment in Kubernetes and provides a templating engine for reusability. We have two charts: GRPC and REST.

    Now we have one repository, which is called gRPC, same as our application, and we have one tag, which is latest. Okay. Now we say yes here, so we grant Kubernetes access to pull images from this registry with the special pull role. And now we can move on.

    So yeah, next we need to put some system-level resources into our Kubernetes cluster. Let's see what that is. We have a special folder here called k8s, and here we have two files; one of them is called system.ts. Let's go there first. And here we have an ingress. To say a few words about ingress — unfortunately the image isn't very well drawn. Basically, an ingress is a tool that we install inside our cluster, and it operates in front of all of our services inside the cluster. It allows us to have a single point of incoming traffic, or multiple points. But the main idea is that we can give this ingress special routing rules, so it can understand that, for example, when we call the currency converter, that traffic should be routed to this particular service, and traffic for something else should be routed to another one. Okay, so now it's time to actually apply this ingress. To do so, we again just need to import it from the k8s system file — this term is so slow, sorry — and again we need to export something to make it happen. And from the ingress we need what is called the service IP — the public IP. So let's go back here while it's being created and see what the ingress involves. Basically, to install the ingress we use Helm — I'll give a deeper dive into Helm a bit later, when we look at the applications and a particular chart, but for now let's just say it's a package format for Kubernetes.
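As an illustration of those routing rules — the hostnames, service names and ports below are assumptions, not the workshop's exact manifest — a Kubernetes Ingress resource maps hosts and paths to services like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: workshop-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: converter.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # Traffic for this host is proxied to the
                # currency-converter service on this port.
                name: currency-converter
                port:
                  number: 50051
```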
    So we said it's a package for Kubernetes, and we want to install it inside Kubernetes; Helm is the easiest way. Maybe you're not familiar with it — there are competitors to Helm — but yeah, it's one of the most famous ones. Here we also import the cluster, and the reason for that is that we need to pass a provider field to our Helm chart, so the chart knows where exactly it should be installed — basically, thanks to this line it already knows in which particular cluster it will be installed. During this installation, we ask Helm to install the service for this ingress as a load balancer. When we ask for that, the cloud provider — which is Azure for us — creates a public IP address and assigns it to this service. So basically our main goal here is to find out this IP address. When we install a cluster — let's say we have five nodes, five different machines operated under one cluster — for the ingress and for DNS we need one single point of entrance. So basically we need a public IP address that we can assign to the DNS, so our site can appear on the internet. So let's try to use this IP address. Yeah, it shows us a 404 — but it shows us a 404 from our newly created cluster, which is good. So what's next? We have an IP address now.
We are at the moment when we are ready to create our DNS. For DNS, as I said, we're going to use Cloudflare. Let me show it here. In Cloudflare I have one DNS zone; it's empty, as you can see, and we are going to assign this IP address to this name. For that we again need to import another module, which we already created before — I think it's called, yeah, dns — and again export it so Pulumi understands it should be used. So let's run it once again; I will add an extra flag, -y, meaning I agree to apply the change, so it doesn't ask me every time. What we're expecting is for something to appear in here right after that, and basically from then on we can call our server through the DNS. It should be quite fast — we already have 13 resources, by the way, or 14 with this one. Let's see. No, it's still not there. Why? Oh, it's there. Let me look once again — maybe I'm blind. Okay, yeah. Now it's here. Yeah, so now we've assigned it. So basically, when we go to this domain, we should see the same 404 page, which is okay for now. Let's keep it open. And so, what's next? We've finally created some kind of infrastructure for our application, so we're at the moment when we're ready to deploy our apps into it. Give me a second. Okay. Yeah, we have another file here, which is called apps.ts. So let's start by importing this — apps from, oops. Oh, it's not apps, it's under the folder. I'm sorry. Apps. Yeah. Here we also export the converter, I think. And here we have multiple things: a newly created namespace for this particular deployment, a currency converter service, and an ECB provider service. So let's say we can export this one — it's just part of the resources.
    It's a bit silly that this is the way to tell Pulumi that this resource needs to be installed into the system. And while it's doing that, we're going to go to apps and see what it looks like. Okay, yeah. As I promised, I'm going to explain a bit more about Helm itself. Okay, so Helm. Helm is a kind of package manager — if you think of Yarn or NPM, they are package management tools, and so is Helm. It gives you special rules for how to define your application inside Kubernetes. It also gives you a templating engine, so using this templating engine you can reuse your application definition multiple times. Let me bring this picture back up. Yeah, to show Helm, let's go into this folder and see what we have here. We already have one chart, which is called grpc — very straightforward. Let's try to create another chart, which will be called rest. So now we have two charts.

    15. Folder Structure and Ingress Configuration

    Short description:

    We have a folder with charts containing files like service, service account, ingress, and deployment. The ingress file contains a template where you can define the host for Kubernetes and the service to proxy the traffic to. The values in the file can be overwritten in the values.yaml file.

    We go to this folder, we see charts. And here you can see a bunch of files: service, service account, ingress — which we mentioned before — and deployment. If we go to the ingress, you will see a kind of template, with parts where you can define which host should be listened on in front of Kubernetes, and which service (and service port) this traffic should be proxied to. And as you can see, many of the values are templated — they're not real values but placeholders that can be overwritten from the values.yaml file.
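A trimmed sketch of what such a templated ingress looks like — close to the scaffold `helm create` generates, with the chart name `rest` and field paths assumed:

```yaml
# templates/ingress.yaml — values come from values.yaml or --set overrides
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "rest.fullname" . }}
spec:
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ include "rest.fullname" $ }}
                port:
                  number: {{ $.Values.service.port }}
    {{- end }}
```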

    16. Chart Values and Template

    Short description:

    In the values.yaml file, we can replace default values when installing our chart. Helm template shows the final version of the chart, including probes for application readiness and liveness.

    So here in values.yaml, we have a bunch of information describing our chart. These are the default values, and these are the values we can replace at the moment we install our chart. So when we install our chart, we're going to say: okay, install this rest chart, but let's say that ingress.enabled should be true. Okay, what else can we do with Helm? We can run `helm template` and see what this chart will look like once it's rendered. This is an example of the final version of the chart that we would deploy to our cluster — not the real one, but an example. You can see there is a liveness probe and a readiness probe, to determine that the application is live and ready.
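Those two operations, as commands — the release name `my-rest` and the chart path are illustrative:

```console
$ helm install my-rest ./rest --set ingress.enabled=true
$ helm template my-rest ./rest --set ingress.enabled=true > rendered.yaml
```

`--set` overrides the defaults from values.yaml at install time; `helm template` renders the same manifests locally without touching the cluster.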

    17. Exploring Kubernetes Cluster and Services

    Short description:

    Let's go back to our DevOps part. We encountered an error in the deploy process. We need to retrieve credentials to access the Kubernetes cluster and explore the namespaces and running services. We can use the gRPC URL command to investigate and interact with the services. By setting up port forwarding, we can forward traffic from our local machine to the Kubernetes cluster. We can then use the gRPC URL command to send requests to the services. We have deployed two services, the converter and the UCB provider, which communicate with each other to retrieve and convert currency rates. Let's now address any questions you may have.

    Okay. Let's go back to our DevOps part. So here we created... Some error happened, I think, here in the deploy. Okay. And the strange thing is that we use TypeScript, but the error happened in the request.go client library; it couldn't read something. Let's try to see what we have inside the Kubernetes cluster.

    Okay. For that we need a couple of things. We need to run az aks get-credentials with the cluster name and the resource group we are referencing. Okay, so from this moment on I can use kubectl against the cluster and get any information from it, because I've just got all the credentials for it. And let's also look at the namespaces inside.
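A sketch of those commands (the cluster and resource group names below are placeholders for whatever the Pulumi program actually created):

```shell
# Merge the AKS cluster credentials into the local kubeconfig
# so that kubectl can talk to the cluster.
az aks get-credentials --name workshop-cluster --resource-group workshop-rg

# List the namespaces in the cluster.
kubectl get namespaces
```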

    What? Oh, sorry. I should say kubectl get namespaces. Great, here we can find several namespaces where we have some applications. Here's the namespace we used for our ingress, so let's see what we have there: kubectl get pods. Here's one running ingress pod, backing a service. Let's see what we have in our applications namespace. kubectl get pods again... okay. We have two of our services up and running inside this namespace. And as for the currency converter, we also assigned the DNS, so it should be accessible through the DNS name, but we will see an error like that, because we basically need to use the gRPC protocol instead of regular REST, which is something to improve. Here, there should be some web form where you can use our currency converter, but for now it's just like that. But as a final thing, what we could try to do: we could try kubectl port-forward. We also need the namespace, the name of this pod, and the ports as well.

    So what exactly this command does: it binds your local port 50051 to this Kubernetes port of this service. Basically, when you run this command, a special channel is created and all the traffic from this port on your local machine is forwarded directly to this specific port in the Kubernetes cluster in the cloud. So the forwarding rules are created when you type this command, and next we need to try to run something against our service. For that, we are going to use grpcurl. It's a command which helps us to investigate this; let me just copy it, I think it's somewhere here. Basically, this is the same as the regular curl which you are used to, but this one is written specifically for gRPC. The idea is that you can just send your specific command in plain text format, and it will be forwarded directly to the service. And the service will understand the gRPC format because here you also bind the specific proto file to be used. So, yeah, here's an example. When we called it, it answered from our local port. And if we go here, we will see that our previous port-forwarding command started accepting connections: we handled a new connection on this port and we forwarded it to the Kubernetes cluster. And here, I believe, we can actually try to play with it a bit. I'm just trying to find this one. And if I try the UCB provider... I'm just trying to call another service which we should have in the cloud and see if it works as well. But I believe it works because, basically, as Alex was showing, it should be a picture like that. Oh my gosh. Yeah. So what we have deployed here: we have our cluster and we deployed two services, the converter and the UCB provider.
So basically, when we call the converter, it goes to the UCB provider, and then this provider goes to another, third-party provider and takes rates out of it. Then the rates are returned, and on the converter side the value just gets converted and returned to us. So I think that's fair enough to see the results like that. If we try a different amount, the result should be something different, I believe. Yeah. So that's pretty much it. Let me have a look into the questions, if you have one.
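The port-forward and grpcurl steps above can be sketched roughly like this (the namespace, service name, proto file, and gRPC method names are placeholders, not taken from the workshop repo):

```shell
# Forward local port 50051 to the converter service inside the cluster;
# this opens a tunnel so local traffic reaches the pod in the cloud.
kubectl -n applications port-forward svc/currency-converter 50051:50051 &

# Call the service through the tunnel; the proto file tells grpcurl
# how to encode the plain-text JSON request into the gRPC wire format.
grpcurl -plaintext -proto converter.proto \
  -d '{"from": "EUR", "to": "USD", "amount": 100}' \
  localhost:50051 converter.Converter/Convert
```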

    18. Deploying Applications and Destroying Resources

    Short description:

    In Pulumi, we define a Helm chart with a version and an appVersion. The chart is deployed in the applications namespace and uses a local chart and a repository. We can use multiple Docker images by defining where each service should be executed. Ingress is used to forward traffic from a domain name created in Cloudflare to the service. Environment variables are defined for services with dependencies. After executing all the steps, the changes are visible in Pulumi, and the resources can be found there. Pulumi is similar to Terraform but may have some limitations and require more time for complex scenarios. Finally, everything can be destroyed.

    Just a second. I have one screen. Okay. No questions so far. Going back to the question, if there is still time to answer it, about versioning in Kubernetes. So, about versions: when we define the Helm chart here in Chart.yaml, we have two things defined. We have a version of this chart: when we change something in the chart and then publish it to some public repository so anybody can pull and install it, this version will be changed. And here's another version, called appVersion. For us, as we just install latest, it will stay like that. Let's have a look, by the way, I forgot to mention this: let's go back to the Pulumi program and see how it looks. Basically, here I define that I want to create this application with this name, and that I want to use this chart, this local chart, which is located here. Another thing I also define is that this chart should be deployed in the applications namespace, which I created the step before. Next, I tell this chart to use the tag latest, and there is the repository: the one which we just created on the previous step. So this is the registry login server plus /grpc, because we called this repository grpc. Based on these lines, Helm and Kubernetes will understand that we need to install this specific application. I also define, you remember, I think you've asked if it's possible to use multiple Docker images. For now, as we use one Docker image, we have to do it like that: we define where each application should be executed from. For this particular service, we define that it should be executed from the gRPC currency converter folder under services, and the other one from another folder.
So basically, we have one Docker image with multiple folders and multiple services inside. Another thing we define: we tell Ingress to use the base domain name, which is the domain name we just created in... how to say it, I forgot the name... in Cloudflare. Yeah, so we created the domain name in Cloudflare and we also pass it into the chart, so Kubernetes and the Ingress inside it will understand that all the traffic from this domain name should be forwarded to the service. We also need to define some environment variables. For this particular case, you can see that for the UCB provider we don't have any environment variables, so we just install it as is. And for this service, as it has a dependency on the UCB provider, we provide it a definition of how to find the UCB provider inside Kubernetes. So, if we go here and try to get the namespace again... and let me show you that this service exists, and all the others as well. So here we have two services running: one of them is the currency converter and the other is the provider. This name I took from the actual record I see in the services, and I should put the port here as well. And that's pretty much it. Next, I'm going to show you how to destroy all of this. I will stop port forwarding, finally. So, you're going to destroy what has been done, and that's it, right? That's where it ends? Yeah, I mean, this is the last command we're going to execute. But if we have some particular question, we can discuss it first. After all this execution, what you can see here in Pulumi is all the activity that was produced: all the changes that have been applied here in the workshop, and you can find all the resources here as well. There are many of them, actually. You can see there are dependencies already. Those are Kubernetes concepts mostly, right?
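What the Pulumi chart resource described above sets up is roughly equivalent to a helm install with overridden values; a hedged sketch (all names, paths, and value keys here are placeholders, not taken from the workshop repo):

```shell
# Install the local chart into the applications namespace, pointing the
# image at the container registry created in the previous step and
# passing the Cloudflare base domain down to the Ingress template.
helm install currency-converter ./charts/grpc \
  --namespace applications \
  --set image.repository="$REGISTRY_LOGIN_SERVER/grpc" \
  --set image.tag=latest \
  --set ingress.baseDomain="$BASE_DOMAIN"
```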
The ones that we generated from the Pulumi program, yeah. Okay, go and destroy everything. Okay. Let's go there. Yeah, to add a little bit more understanding of Pulumi: the same way as Terraform, it's a way of representing your infrastructure. I wouldn't say this way is without any problems. There will still be some, because the examples you can find for Pulumi work perfectly, but when you try to extend them to something more sophisticated, you start spending more and more time, not the time you expected when you just took this ready-to-go tool and started using it. So it's still not the final version, I would say. And finally, I pressed the button to destroy everything.
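The teardown mentioned above comes down to a couple of CLI commands; a sketch (the stack name is a placeholder):

```shell
# Show what Pulumi would change before touching anything.
pulumi preview

# Tear down every resource in the current stack without prompting.
pulumi destroy --yes

# Optionally remove the now-empty stack itself.
pulumi stack rm workshop-stack
```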

    19. Future Plans and Conclusion

    Short description:

    We have future plans for exploring technologies like gRPC and microservices. In part three, we will focus on decomposing monolith applications into microservices, covering best practices and database separation. It would be beneficial to have a user interface for sending requests and a complete pipeline using Pulumi. Pulumi allows developers to use a single language for building both applications and infrastructure. Thank you, Andrew, for the informative second part. We appreciate everyone's participation and hope you learned a lot.

    25 resources to be destroyed, starting from the beginning and so on. Alex, do you have something to add to today's workshop? Yeah, well, not much. I think we can say a few words about our future plans with these technologies, and future plans with gRPC and microservices. We've already been working on that since the end of last year, exploring both technologies, gRPC and microservices. Now, you see, this is the DevOps part of the whole ecosystem. We're planning something more: part three, basically, which is more like an architectural vision of gRPC microservices, I think. In that part, we're going to explore how to decompose one monolith application into microservices: what practices to follow, how to deal with it, how to separate databases, and basically where to start and how to go on. And regarding this particular workshop that we did today, I think logically it would be nice to finish it, as you explained, with a form, maybe some user interface where you can actually send requests and demonstrate it all. That would be a nice step for the future. And maybe also to have a complete pipeline that also uses Pulumi inside, right: you update something and it immediately starts a Pulumi task which deploys it. But I think for this one, and actually also for the UI, it's not that much work. Yeah. But, going back to Pulumi, it's really incredible that you can use just the one language you're used to for building multiple things. For developers it will open new doors: you can go deeper into the application and then continue doing infrastructure stuff in the same language you already work in. That's great. With Terraform, it was much more challenging for me, first to study the language, and second to understand all the workarounds you need when you want some condition or a loop or whatever.
Yeah, then I would say thanks, Andrew, for presenting. It was a really great second part, I think it went really deep into Pulumi and how to deal with it. Also, it's a concept that is best learned by practicing as much as you can. Thanks everyone for joining the workshop and the conference as well. I hope you enjoyed it and learned a lot of stuff today.
