Powering your CI/CD with GitHub Actions


You will learn about GitHub Actions concepts, like:

- The concept of repository secrets.

- How to group steps in jobs with a given purpose.

- Job dependencies and order of execution: running jobs in sequence and in parallel, and the concept of a matrix.

- How to split the logic for different Git events into separate workflow files (on branch push, on master/main push, on tag, on deploy).

- To respect the concept of DRY (Don't Repeat Yourself), we will also explore the use of common actions, both within the same repo and from an external repo.

155 min
12 Apr, 2022

AI Generated Video Summary

Welcome to this workshop on CI/CD with GitHub Actions and Terraform. The workshop covers building workflows from scratch, creating reusable actions and workflows, and destroying infrastructure. It also explores common actions and code reuse, remote composite actions, and centralized repositories. The workshop demonstrates the power of reusable workflows and how they can be called from another workflow. It emphasizes the simplicity and flexibility of using reusable workflows to deploy to different environments.

1. Introduction to CI/CD with GitHub Actions

Short description:

Welcome to this workshop where we'll discuss CI/CD with GitHub Actions and how to automate deployments and builds using GitHub Actions. We'll cover an introduction to GitHub Actions and touch on Terraform basics. You'll also see a visual representation of the results in AWS and learn about the structure of this workshop. Get ready to watch me build workflows from scratch, and at the end, you can use these workflows yourself.

So, welcome to this presentation, this workshop. We're going to be talking about CI/CD with GitHub Actions. I don't know how many of you have used, I was going to say CI/CD, I think every one of you has used some sort of CI/CD. Then we are going to talk about GitHub Actions and see how we can have all our pipelines and workflows in GitHub Actions to do all the deployments and builds for our software.

Let's jump into the presentation, so the agenda is this one. I'm gonna be talking a little about myself, and hopefully we'll talk about you a little bit as well. We're gonna do an introduction to GitHub Actions. Terraform basics is not mandatory; it's just because all the infrastructure we are going to be building is automated, as code, using Terraform. So I'm gonna explain a little bit of the basics of Terraform so you don't get lost when we are talking about it. Don't really worry, it's not a Terraform workshop, it's a GHA workshop, so you don't have to worry about Terraform.

Also, we are gonna see a visual representation of the results in AWS and how we are gonna build what we are going to build with the workshop. We are gonna talk about the structure of this workshop, and then we are getting our hands dirty. Really, you're gonna be watching me building workflows from scratch. At the end of the workshop, I will be sharing all the workflows, of course, without the keys and the accounts that we are going to be using in AWS. But then you can use your own accounts, your own keys to connect, and you will be able to use these workflows for yourself. Excuse my voice, because I'm still coughing with a little bit of the flu, so I'm gonna be drinking water throughout the presentation, the workshop.

2. Introduction to GitHub Actions and Terraform

Short description:

In this part, David introduces himself and provides some background information. He explains that GitHub Actions is another CICD system and discusses its components. He also covers events and building blocks in GitHub Actions. Additionally, he briefly explains Terraform and its workflow. Finally, he describes the result in AWS, which includes a backend, frontend, and API communication.

So who am I? I'm David, David Rubio, currently living in Spain. I'm a platform team lead in DX. There are another two developer experience people on this call, Sam (Samantha) and Daniele, and I work with them. We work at DAZN; I don't know if you've heard of DAZN, the global streaming company for sports. If you don't know it, just check it out, pretty cool stuff. Some certifications that I have: AWS Certified Solutions Architect. I'm also an AWS Authorized Instructor, recently certified and instructing for AWS, and a certified Professional Scrum Master. If you want my contact details, you can follow me on LinkedIn. My handle for GitHub is the same, it's DavidRV87. And my blog, if you want to have a look, is in both Spanish and English, so feel free to read some of the stuff. Some of the stuff that comes to my mind, I just put it there.

So we are going to give some introduction to GitHub Actions. I'm not going to read the definition, you have the definition there, but basically GitHub Actions is yet another CI/CD system that you can use. You can guess that the owner is GitHub by the name. It is event-driven; we're going to see what those events are. So whenever something happens in GitHub, say you submit a pull request, you close a pull request, or you push something, then something's going to happen, or hopefully something's going to happen, depending on the conditions that you place in the workflow. So let's see the components of GHA. We're going to be using some keywords throughout the workshop, so I just want you to get familiar with some of those keywords. Let me get my pen up, I think it works. Yeah, so we have an event which is going to trigger the workflow, right? That workflow is going to be made of jobs. I'm going to stress what a job is throughout the workshop. Each job is independent from the others, and jobs have steps, and each step has an action. The action is the smallest piece that you can have in GitHub Actions. Then you use those actions to create those steps, the steps make jobs, and those jobs make the whole workflow. And those jobs run in Runners. There are two different types of Runners: GitHub Runners, which are hosted by GitHub and which you can use, or you can have your own self-hosted Runners. Those are more advanced topics; we are not going to cover them. But the thing that I want you to remember from this slide is that a job runs in a Runner, and is isolated. You can see there are two Runners here, one here, one here, and those two Runners are isolated from each other.
You can share some stuff, some artifacts produced in one Runner, with another Runner, or with another job, but you have to do something extra. We're going to see what that something is throughout the workshop.

So, what else? Is this working? No, it's not working. Yeah, now it's working. So, events. What are those events? It can be a push that you do to your branch, a pull request. Maybe you create a tag, maybe you make a comment in an issue or in a pull request, and you can also do manual triggers. If you want to control where you are deploying, or manually deploy to production, you can create workflows that are going to be manually triggered. You can also schedule them. So, different types of events can trigger the workflows that we're going to be building. What are the building blocks? I said that the building blocks are the actions. Those actions are like Lego blocks. You have actions for AWS, for Java, for Kubernetes, for Sonar. So you have different building blocks, then you assemble them together to build your own pipeline. That'll be one job, that's another job, then that's another job, and with all those jobs, you create your pipeline, the workflow. So where are these actions coming from? You have the Actions Marketplace; I'm going to share the link with you for that, along with a number of other links, at the end. You can also build your own custom actions. And this is an example of a workflow: you give it a name, then you specify the condition for when the workflow is going to run, then you have the jobs. In this case, we have two jobs: one which is linting and testing our code, and one which is building, packing, and uploading our code. Then, as you can see, you have the steps. A number of steps make a job and a number of jobs make a workflow. I just want you to get familiar with those terms (Workflows, Events, Jobs, Steps, Actions) so we are all on the same page.
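The terms above fit together in a single workflow file. As a minimal sketch (the workflow name, branch, and commands here are made up for illustration):

```yaml
# .github/workflows/example.yml
name: example-workflow           # the workflow

on:                              # the event that triggers it
  pull_request:
    branches: [main]

jobs:
  lint-and-test:                 # a job, running in its own isolated runner
    runs-on: ubuntu-latest
    steps:                       # steps; each uses an action or runs a command
      - uses: actions/checkout@v2
      - run: npm test
  build-and-upload:              # a second job, isolated from the first
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm run build
```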

Some basics on Terraform. I know Daniela and Sam, you are familiar with Terraform, but how many of you have created infrastructure as code? Maybe Terraform, maybe CloudFormation, or something else that you've used for infrastructure? We're gonna see some Terraform and how it works. Terraform is a HashiCorp tool which allows you to build infrastructure as code. You have a number of config files, and with those config files you create resources. In our case, we're gonna be deploying the stuff in AWS. I'm sorry, I'm really an AWS guy, so we are building stuff over there. So this is how everything works in Terraform, the flow. You initialize Terraform: you download all the dependencies and all the providers and all those things that are needed to actually connect to AWS, for example. Then you plan what you're doing. This is basically saying: hey, I want to do this. I want to create my buckets, I want to create my lambdas, I want to create my APIs. So you're planning, and then you are applying those changes. You write your code, you plan what you're actually going to do, and then you apply the changes. We are going to see that in a job, but I just want you to get familiar with init, plan, apply, because we are going to see those terms in a job, and I don't want you to get lost wondering what this guy is doing. And finally, what we have is an output. We are going to have some outputs, which are going to be URLs: a URL for the API, a URL for the website that we're building. And then we are going to actually have some resources deployed in AWS. So what is the result in AWS? Simple, right? This is not an AWS course. This is not a Terraform course. This is a GHA course. Not course, workshop. So what we are going to do is deploy a backend over here, a front end over here, and how do they talk to each other these days? Through APIs. So this is what we are building. This is a Lambda function, which is serverless compute in AWS, for those of you who don't know it. And an S3 bucket is where we are going to place our website, our static files. And then we are gonna talk.
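The init, plan, apply flow just described can be sketched as a job in a workflow (the working directory, action versions, and step names here are assumptions, not the workshop's actual file):

```yaml
deploy-infra:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - uses: hashicorp/setup-terraform@v1       # installs the terraform CLI
    - run: terraform init                      # download providers and dependencies
      working-directory: ./infra
    - run: terraform plan -out=tfplan          # preview the resources to create
      working-directory: ./infra
    - run: terraform apply -auto-approve tfplan  # create them
      working-directory: ./infra
```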

3. Building a Workflow for a Feature Branch

Short description:

We're gonna talk to the backend from the front end using an endpoint called a greeter endpoint. Lambda will return a simple message saying hello, which we will display on the website. We'll be using some secret keys and access keys for AWS and roles to authorize the connection between AWS and GitHub. The structure of the workshop includes building workflows from scratch, creating reusable actions and workflows, and a workflow to destroy all the infrastructure. We'll start by building a workflow for a feature branch and explain the structure and conditions. Then we'll move on to jobs, starting with initializing the deployment using an action from the marketplace.

We're gonna talk to the backend from the front end using this endpoint, a greeter endpoint. Lambda is gonna then return a simple message. It's gonna say hello, and then we are going to bring that message into the website. Nothing fancy. Just to demonstrate how it works.

AWS auth. I just wanted to place this here because we're gonna be playing with some secret keys and access keys for AWS, and roles. So I just want you to keep this in mind. We are using some keys in the pipeline, or in the workflow. We are going to assume a role which is going to have permissions to do things in AWS, and we are going to build those resources with that role. This is everything that I want you to know. Don't worry, all the keys are provided; it's just for you to have in mind how we are authorizing the connection to AWS from GitHub. We're just using some keys.

Okay. So, what is the structure of the workshop? The times in here are negotiable; we're gonna see how we feel throughout the workshop. We're gonna have some breaks in between, and we're gonna try to stop at key moments, for example when we finish building the stage workflow. Then we can have a break and see how everyone is feeling. Then we are gonna build a workflow for production. Yes, we are going into production today. Then we're gonna see how we can apply DRY, Don't Repeat Yourself, because in this section there's gonna be a lot of copy-paste, and we don't want that. Every good developer or engineer shouldn't be copy-pasting; they should be reusing. So we are gonna create some actions to be reused: actions local to the repo, remote actions from other repos, and we're gonna create reusable workflows. We are gonna create one workflow which is gonna be called by another one; these are reusable workflows. And then there is a workflow to destroy all the infrastructure, which I will show you just for fun. So who is ready to do this?
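As a preview of the reusable-workflow idea mentioned above, one workflow declares a `workflow_call` trigger and another calls it with `uses`. A minimal sketch (the file names and input are hypothetical):

```yaml
# .github/workflows/deploy.yml -- the reusable workflow
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string

# .github/workflows/on-push-main.yml -- the caller
jobs:
  deploy-stage:
    uses: ./.github/workflows/deploy.yml   # call the reusable workflow
    with:
      environment: stage                   # pass the target environment
```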

Okay. So we are gonna start by building a workflow from scratch. I'm gonna be typing everything, and everything that I type, I'm gonna explain as well. Let's see if I can get this out of the way because... Oh man. How can I do this? All right, there you go. So is the font size or the zoom big enough? Do you see that it says "let's begin" here? Do you want me to make it bigger? Looks okay. Looks okay. Peter, a bit bigger. Maybe like that. Perfect. Okay, so let's do the first one, the first workflow. Before I type anything, I'm just gonna explain the structure. This is a repo. As I said, I'm gonna be sharing, if not this repo, a public repo, which doesn't have all the personal accounts and things like that. So don't worry, I will be sharing this with you. The first thing that I want you to notice is that all the workflows are going to have to be placed under the .github/workflows folder. This is where you place your workflows; the workflows are the .yml files. The second thing that I want you to notice is the name of my workflow. This is a free name that I gave, but it means a lot to me. Not emotionally, but it means a lot because I know that this workflow is going to run when I do what? A push to a feature branch. Just by looking at the name, I can see that this is going to run when I push some code to a feature branch. It's not the main branch, it's not a tag; I know that everything here is going to run for that event. Remember those events that I mentioned? So this is an event, but it is not the name of the workflow which dictates what happens. We need to actually place a condition inside the actual file. So let's just do that. What is the first thing that we do when we have a new baby? What do we give the new baby? A name, right? There you go. So let's give this workflow a name. It's gonna be feature-branch, and I'm also gonna specify that this is for the stage environment.
So our workflow has a name. What is next? Next, we have to run this workflow on a condition. We use the keyword on. And when is this going to run? On a push. We know that from the name of the file. So: on push. And then here, we can specify a few things. We can say branches (you see the autocomplete is actually helping me), so branches or branches-ignore. We're gonna say branches, and we're gonna specify an expression, which is going to be feature/ghaWorkshop*. What does that mean? That for every branch I have which is named like this, my workflow is going to run. So this is my condition, this is my event, and this event is gonna trigger this workflow. Just for the sake of completeness, because on your projects your branches may not all be named like this, you could do something a little bit different, which I'm gonna leave here for reference but then comment out. We can do on push, but then use the other one, branches-ignore, with your main branch, which could be main or master. This is basically saying: for any branch except main or master, trigger this workflow. So this is actually more generic than the first one. I'm just going to leave it for reference, but we are going to comment it out. And now we're moving into what? Into jobs. Nice. So we are building jobs. We are going to give a name to this job; the very first thing that we do is name the job, or identify the job with a name. And the first thing that we are going to do here is init the deployment. This might not make sense just for now. I will show you what this means, but basically this is saying to GitHub: hey, I'm starting a deployment. For this we are gonna use an action from the marketplace.
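The two trigger variants just described look like this (the exact branch pattern is a guess at what's typed on screen; branches and branches-ignore are mutually exclusive, which is why the second variant is commented out):

```yaml
name: feature-branch (stage)

on:
  push:
    branches:
      - 'feature/ghaWorkshop*'   # run for every branch matching this pattern
    # More generic alternative, kept for reference:
    # branches-ignore:
    #   - main
    #   - master
```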

4. Specifying Runner, Timeout, and Steps

Short description:

In this section, we specify the runner, timeout, and steps for the GitHub Actions workflow. We use an action from the GitHub Actions marketplace called deployment-action to set the initial status of the deployment. The marketplace provides good documentation and links to the GitHub repo for each action. We pass inputs to the action, such as the token for authentication and the target URL. We also use environment variables, which can be defined at the workflow, job, or step level, and access them using the ${{ env.* }} syntax. This sets the stage for the workflow.

So what do we need to do now? Now we need to specify where it runs. Remember those runners? We're gonna say runs-on, and for this workshop, we're gonna use Ubuntu. There are different versions; let's stick to the latest, why not? So this is the runner. We are bringing up an Ubuntu runner.

Next thing that we have to specify is timeout-minutes. This is for how long this job can run. If it goes beyond the timeout, the job will stop, because if you're on a paid plan for GitHub, your bill is gonna depend on how many minutes your workflows run. The longer they run, the more you pay. I'm using a free tier from GitHub, so I'm not paying anything. You have, I think, if I'm not mistaken, 2,000 or 3,000 minutes per month. For my own projects, this is enough. But at DAZN, for example, we have a paid plan and we have to be careful with this timing. And basically, if the job goes beyond those two minutes, it's gonna stop. Make sure that your timeout is longer than the duration of your job; otherwise, it will stop in the middle of the process. This is a fairly small job, so that's fine.
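The runner and timeout settings just described sit at the top of the job definition; a sketch (the job name is illustrative):

```yaml
jobs:
  init:
    runs-on: ubuntu-latest   # GitHub-hosted Ubuntu runner, latest version
    timeout-minutes: 2       # the job is cancelled if it runs longer than this
```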

So, now steps. These are the actual actions; this is what's gonna happen. For this one, we're gonna use an action, and the keyword that we use is uses. Let me come quickly over here. This is the GitHub Actions Marketplace. There you go. And we're gonna be using an action which is called deployment-action, this one here. This action is gonna let us specify the initial status of the deployment. We're gonna send some information to GitHub, so you're gonna be able to see in your pull request what happened with that deployment: if it failed, if it was successful, into which environment it was deployed.

The thing that I really like about the marketplace is that the documentation is quite good, and you have the link to the actual GitHub repo. So we're gonna be using just this action. That would be the marketplace; I'm just gonna copy that link over here into the chat. Nice. Right, so going back here. We are using that action, which we can actually copy from here. There you go. As you can see, it is pinned at release version one. Then we're gonna give it a name; this is a name that we are giving to this step. We are gonna call it create GitHub deployment. We're gonna give it an ID as well, so we can reference it in different steps; we're gonna give it the ID of deployment. And with: these are some inputs that we can pass to the action. You can see here the example of usage; these are the inputs, and we're gonna use some of them. I'm gonna use the token. Let me type it and then I'll explain what it is. So this syntax here, like that: this is how you can access the GitHub context. This GitHub context is a very important variable. It has a lot of properties: you can find which event triggered this very workflow, you can see the job, you can see the ref, whether it was a branch or a tag, you can see the repo. So you have a lot of information about GitHub, and we're using its token. This is a token to authenticate on behalf of the GitHub action; it gives you a token to perform some operations. It doesn't have unlimited scope, but for our use, it is enough. So you can fetch the token. I'm just going to place some quotes around it, because the other day it didn't work without quotes. Then we are also using this target_url input.
Allow me to just copy this whole thing. And here, hey, you notice something, right? There is this environment variable. Where is this coming from, David? This is new. This comes from another section. We can define environment variables using the env keyword at different levels. You can define env at the workflow level, at the job level, or at the step level. What is the difference? Of course, this one will be accessible to the step only; with this one, any step under this job will have access to that variable; and this one will be accessible to all the steps in all the jobs. That environment variable is pretty generic, it's gonna be used in all the jobs, so we are gonna place it here. We just give it a name, and as you may have guessed, the environment is stage, so you just type in stage. This is then gonna be translated into http://my-app-url.stage.com. It's a made-up URL, but just for you to see that you can use those environment variables throughout steps with the dollar syntax. And also an environment: there is another input, which is environment, and we're gonna assign the environment to it.
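Putting the step together with a workflow-level env block might look like the sketch below. I'm assuming the marketplace action is chrnorm/deployment-action (a common choice matching the inputs described); the URL is the made-up one from the transcript:

```yaml
env:
  ENVIRONMENT: stage             # workflow-level: visible to every job and step

jobs:
  init:
    runs-on: ubuntu-latest
    steps:
      - name: Create GitHub deployment
        id: deployment           # ID so other steps can reference this one
        uses: chrnorm/deployment-action@releases/v1
        with:
          token: '${{ github.token }}'   # token from the GitHub context
          target_url: 'http://my-app-url.${{ env.ENVIRONMENT }}.com'
          environment: ${{ env.ENVIRONMENT }}
```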

5. Building the Backend and Installing Dependencies

Short description:

We've created the first job and now we're going to build the backend, which is a lambda. This job will output the deployment ID that we'll use later. The backend job is named 'install test build and upload lambda' and runs on Ubuntu. It sets up node and defines environment variables. The first step is to check out the code, followed by setting up node with the specified version. We also discuss registry URL and package scope. We then use the cache action to speed up the process by caching dependencies. Finally, we install the node.js dependencies by running npm install.

So now we've created the first job. I'm gonna show you what it's actually doing; it might look weird, maybe you don't understand it yet, but I will show you. So enough of this job.

So if you remember, we're gonna build this infrastructure. Now we are gonna build the backend, which is this lambda here. We're gonna have another job, and we're gonna call it backend, because this job is gonna be responsible for doing something for our backend. Oh, sorry, before I move on: the first job is gonna output something for us. I totally forgot that one. If we actually go here, we can see there are some outputs, and it's gonna output the deployment ID, which we then need in order to say whether this deployment was successful or a failure, right? So we're gonna output that using what? We're gonna use outputs, and we're doing deployment ID. Allow me to copy, because it's quite long. So where is the output coming from? It's coming from a step: steps.deployment. That deployment is this ID here, and then outputs, so this is gonna output something, and then it's the deployment ID. We'll see how we can use this in the last job of this workflow.
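The outputs wiring just described can be sketched like this (assuming the chrnorm/deployment-action output name `deployment_id`; later jobs would read it as `needs.init.outputs.deployment_id`):

```yaml
jobs:
  init:
    runs-on: ubuntu-latest
    outputs:
      # re-export the step output so other jobs in the workflow can read it
      deployment_id: ${{ steps.deployment.outputs.deployment_id }}
    steps:
      - name: Create GitHub deployment
        id: deployment           # the step whose output we re-export above
        uses: chrnorm/deployment-action@releases/v1
        with:
          token: '${{ github.token }}'
```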

So the backend: we're gonna give it a name, and it's gonna be called install test build and upload lambda. You can just give it a name. runs-on Ubuntu, this is the runner, and the timeout is a bit longer, because it has to build the code. Well, it doesn't really build the code, because I want to make the workflow fast; it pretends that it builds the lambda. It's gonna go into the lambda and create a dist folder, pretending that we are actually building. So this build goes into the package, lambda greeter; it's gonna make that directory, copy the JavaScript file into dist, and that's it. So we are cheating. Cheating is bad, but not in this case. Right, so timeout, five minutes, and we're gonna use some environment variables in here. We are gonna define a couple of them; if you let me, I'm just gonna copy them. One environment variable is the lambda name, which is LambdaGreeter, so that's this folder over here, and then the bucket prefix. This is where we are actually uploading the lambda code. If you're familiar with AWS, you can upload the code, the zip file, into a bucket, and then Lambda can retrieve that code from the bucket and you can have your lambda running. Just for you to see, that bucket is already created. Let me go to stage. There you go. That's the bucket over here: the DevOps GHA Workshop stage lambdas bucket. We are uploading our code into that bucket, so we are using the prefix of the bucket. So what do we do now? We do the steps. What is the first thing that we have to do? We have to what? Check out our code. This is basically doing a git clone of our repo, this very same repo; it's checking out the DevOps GHA Workshop repo. Now that we have our code checked out, we need to set up Node, because our Lambda is Node. So we need Node, right? This is another action.
All these actions are in the marketplace, so you can go and have a look. You can search for checkout, you can search for setup-node, and you can see how they work. For this one, we are passing some inputs: we are specifying the node version, and because we are building the front end with the same version, we are gonna use an environment variable, node version, which we have to define. We are gonna use version 14; this is the latest supported version in Lambda, so let's use version 14.

Other things that might be of interest to you, though they are not required for this workshop: the registry URL. You may have a private NPM registry in your company, so you may want to point to your registry using this input. And maybe your packages or your dependencies have a scope, so you can define the scope of your packages; for those of us at DAZN it would be our own scope, and for your company it would be yours. You define your scope here. And then something which is really helpful that we are going to talk about is the cache. We can name steps, so we're gonna name this step cache node modules. If you're not familiar with Node, node_modules is where all the dependencies are installed; you have to download them, and that takes time. The good thing is that you can cache those. If your dependencies don't change between one build and the next, instead of downloading all of them, which takes more time, you can cache them and copy them from disk, which is faster. For that, if you allow me to copy the whole thing, you're gonna use what? An action. Yes, we're using an action: the cache action. You can just search for that in the marketplace. Basically, all the dependencies that you install when you do npm install for Node go under this folder, node_modules. That can be quite big; specifically in Node, there can be a lot of dependencies, so it takes some time. You can cache all of those, and you specify the key. The package-lock.json is the file that's created with your whole dependency tree, so if that file changes, it means your dependencies changed, and it means that instead of using the cache, you have to download your dependencies again. This is a strategy to speed up your process. Totally recommended.
It works for more than Node, so you can use it for Go, for Terraform even, so check that out. Then it's time to install those dependencies: install node.js dependencies. How do we install them? We just run npm install.
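The caching strategy just described, sketched with actions/cache keyed on the lock file (step names and the exact key format are illustrative):

```yaml
- name: Cache node modules
  uses: actions/cache@v2
  with:
    path: node_modules
    # a changed lock file produces a new key, so the cache misses and
    # dependencies are downloaded fresh; otherwise they're restored from disk
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}

- name: Install Node.js dependencies
  run: npm install
```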

6. Installing Dependencies and AWS Authorization

Short description:

If you are familiar with Node, you know that this is the command to install dependencies. You can consider using a built-in Node environment variable, NODE_AUTH_TOKEN, to authorize against your registry. Secrets are defined in the repo settings and can be accessed using the secrets. syntax. The next steps involve running scripts for linting, testing, and building the project. AWS authorization is required to upload the code to an S3 bucket. The AWS credentials are configured using an official AWS action. The process involves defining the access key, secret access key, and region, and assuming a role. The account IDs for stage and production are also specified. GITHUB_ENV and GITHUB_SHA are demonstrated by saving the first 10 characters of the commit SHA into a variable. The lambda code is then zipped and its size checked. Finally, the zip file is uploaded to the artifacts bucket using the AWS CLI.

If you are familiar with Node, you know that this is the command to install dependencies; if not, this is just a command which goes into your package.json, gets all those dependencies for you, and installs them into this node_modules folder. Something really interesting that you can also consider is a built-in Node environment variable, NODE_AUTH_TOKEN. If you need to authorize against your registry for your private packages, you can use this variable here, and then you can specify that token, the npm registry token.

Wait a minute, what are the secrets? Where is this coming from? Is it something that we define here like the env? What is it? Can we use Yarn instead of npm? Yes, of course, it really depends on your pipeline. If your project uses Yarn, of course you can just go and use Yarn. That was a question in the chat, sorry. There's another question from John: should we use npm ci at this step? Yes, if it meets the requirements of your project, by all means use npm clean install instead of npm install. It really depends on your project; whatever commands you use to build your code, you use here. So yeah, good questions. If you have questions, please put them in the chat; I have them here, and if I don't see them, please stop me, but they're very visible, so I should be able to see them. So I was saying: the secrets, where are they coming from? Okay, so let me bring it up. This is the repo over here, okay? Make it bigger. And the secrets: if you go to Settings, then to Secrets, then Actions, this is where you actually define your secrets. Basically, they are environment variables that are encrypted, and anyone with collaborator access is gonna be able to use them. The way you access them is with the secrets. syntax that we use here. Then you just add your secrets. In this case, we are using this NPM registry token, which is required to authenticate against our registry. We have different secrets: the AWS access keys, a GitHub token, and some other secrets required for production and for stage. So you define your secrets here and then you can use them. That's how you access secrets in your workflow.
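Secrets defined under Settings are read with the same expression syntax as the GitHub context. A sketch of the install step with the registry token (the secret name NPM_REGISTRY_TOKEN is the one mentioned above; NODE_AUTH_TOKEN is the variable setup-node's generated .npmrc reads):

```yaml
- name: Install Node.js dependencies
  run: npm install
  env:
    # encrypted repo secret, exposed to this step only
    NODE_AUTH_TOKEN: ${{ secrets.NPM_REGISTRY_TOKEN }}
```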

So, these next steps, I'm gonna just copy them because they're boring and we don't want to do boring, right? So, we're doing linting. These are just scripts that run in our pipeline. So npm run lint, npm test, npm run build, whatever you need to run to build your project, you just put it here. So these are my scripts. As you can see, we are cheating again. We are saying that our code is awesome and that all the tests pass, nice. It's gonna be fast and simple, but whatever you need to do, just do it like that, and then build. It's just faking the build. So let's move into AWS authorization. Why? Because we need to upload the code into S3, into a bucket. So we need some credentials to do that. So let's get a step, which is gonna be configure AWS credentials. And if you let me, I'm just gonna copy everything, because it's too long and we've already seen what it is. So it's gonna use an action. So this is an official AWS action to configure AWS credentials. And for those of you familiar with AWS, you can use programmatic access. You need the access key and the secret access key. Those are secrets. Again, they are already in my repo. I provisioned that beforehand. A region, so this is where we are going to be deploying our services. We're gonna be using a region in Paris. So we actually need to define that. So this is the AWS region, and the region is eu-west-3, which is Paris by the way, the closest to me. And then if you remember from the diagram or from the presentation, I said to you that we are assuming a role using those credentials. We are wearing that hat, right? And we are just assuming the role, which is called gha-ci-automation. Everything I've done, I've done it myself in the AWS console. So it just works. And then we have this env AWS_ACCOUNT_ID_STAGE. So this is the account ID for stage. Let me just copy it. Right. Piece of advice: you see I didn't quote this one. I did quote this one. I don't actually need it.
But if your account ID starts with a zero, you're gonna have a problem. You need to quote them, right? Only if it starts with a zero. There's a tip for you. We're also gonna need, in this case, the account ID for production. So I'd better copy it here. So we are using two accounts, one for stage, one for production. I'll just put it there and we can use it later. Then I'm just gonna copy, not type, the next step. And the next step is something very weird, right? And this is just for demonstration. I just want to demonstrate this GITHUB_ENV and this GITHUB_SHA. So what we are doing here is: when we create a commit, we are gonna have a SHA, and that SHA is pretty long. So we are actually taking only the first 10 characters of the commit SHA, and then we are saving that into a variable called SHORT_SHA, and storing that in the GITHUB_ENV. So this GITHUB_ENV is only available for this job. Okay. Each job is isolated from another job, and each of the jobs has this GITHUB_ENV where we can store some variables and then use them. So we are just cutting the SHA down to the first 10 characters and saving that into this variable SHORT_SHA, and then we're gonna use it in a moment. Now what we are doing is zipping the lambda. We're not doing anything really fancy. So let me just copy that. So zip lambda, honestly, this is just running zip on the lambda. We're creating this env LAMBDA_NAME, which is basically saying lambda-greeter.zip. And then we are saving those packages, that folder that I showed you before. And then we are just checking the size of the zip file with the du command. And now we are ready to upload the lambda to the bucket. So let me close that one, because it's a rather long command.
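The two steps just described might be sketched like this (the file and variable names are my assumptions based on the walkthrough):

```yaml
- name: Shorten commit SHA
  # Keep only the first 10 characters of the commit SHA and store them in
  # GITHUB_ENV, so later steps of *this job* can read env.SHORT_SHA
  run: echo "SHORT_SHA=$(echo $GITHUB_SHA | cut -c1-10)" >> $GITHUB_ENV

- name: Zip lambda
  run: |
    echo "LAMBDA_NAME=lambda-greeter.zip" >> $GITHUB_ENV
    zip -r lambda-greeter.zip index.js node_modules
    du -h lambda-greeter.zip   # print the size of the zip
```

Remember that `GITHUB_ENV` is per-job: another job will not see `SHORT_SHA` and has to compute it again.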
So another step, which is gonna upload the zip file to the artifacts bucket, the bucket that I showed you in AWS. And when you configure the AWS credentials, you get the AWS CLI, right? So you can actually run some AWS CLI commands from here.

7. Building Infrastructure with Terraform

Short description:

We are providing a URL to S3 and using the Lambda bucket prefix. The GitHub repository returns a value. We cannot use the AWS CLI before setting up the AWS credentials. After setting up the credentials, we can build the infrastructure using Terraform. The job sequence is controlled using 'needs'. After the job finishes, everything is destroyed except the saved artifact. Environment variables and Terraform version are defined. The Matrix is used to test the infrastructure in both stage and production environments. A plan is used to detect errors early. The steps include checking out the code and configuring credentials.

And this is nothing but saying, I want to copy this file into S3. So we are just providing a URL to S3, and we are using this Lambda bucket prefix. Okay. Then this github.repository comes from the context. So that is basically gonna give me DavidRV87, which is my handle, slash devops-gha-workshop. So that GitHub repo is gonna return that value. I'll be back to your question, Peter, one second. So that's the GitHub repo, then the lambda name, and then, as you can see, I'm using the short SHA, which is another environment variable, because we placed it in the env. And then we just upload that. So, question in the chat: can we use the AWS CLI before we declare it in with? You cannot use the CLI before this setup. That configure AWS credentials step is gonna set up the CLI for you with these credentials. So you cannot use it before; it has to be a step after that. Did I answer your question? Perfect. So that's the backend. So now we have the backend code in AWS. Now we can build our infrastructure. We can build that lambda, we can build a bucket for the front end files. Don't get confused: the bucket for the front end files is not the bucket for the lambda, which is the backend code. We can build our API gateway. So let's do that. How do we build that? This is where we are using Terraform. Let's name the job infra. And then let me copy a few things in here. We already know all of them, but this one: needs. So this is the way that you control the sequence of your jobs, or in which order your jobs run. Because we need the lambda code to actually deploy our lambda, we need to upload that first, and then we need to wait for that job to finish. So that's why we have specified this needs. We need the backend, and that backend is the name of the job. So I hope that's clear. So this is how you control the sequence of your jobs.
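Putting the upload step and the `needs` dependency together, a sketch could look like this (the bucket prefix variable and job names are assumptions):

```yaml
backend:
  runs-on: ubuntu-latest
  steps:
    # ...checkout, lint, test, build, zip...
    - name: Upload lambda to artifacts bucket
      # aws s3 cp works here because configure-aws-credentials ran earlier
      run: |
        aws s3 cp ${{ env.LAMBDA_NAME }} \
          s3://${{ env.LAMBDA_BUCKET_PREFIX }}-stage/${{ github.repository }}/${{ env.SHORT_SHA }}/${{ env.LAMBDA_NAME }}

infra:
  runs-on: ubuntu-latest
  needs: backend   # don't start until the zip is in S3
```

Without `needs`, the two jobs would run in parallel and Terraform could try to deploy a lambda zip that isn't uploaded yet.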
And something else that I want to mention is that as soon as this job finishes, everything is destroyed, okay? So we are safe because we've already uploaded the code into S3, we have saved that artifact, but anything else is gonna be destroyed. I don't have access to anything produced by this job unless I save it. We're gonna see that this infra job produces a file which is used by the front end job, and we're gonna see how we can save those artifacts to be used by a different job. Apart from that, the rest we've seen. We are also gonna define environment variables. The Terraform version that we're using is 1.1.5. The working directory is this terraform folder, because all the Terraform files live here. Don't worry, you don't have to know anything about Terraform. My infra lives in here, that's all you need to know, okay? And now we're gonna do something really interesting. We are gonna check that all our infra works for both the stage and production environments. And if you remember, in the Terraform basics, there is a plan and an apply. A plan is nothing but saying: hey, I want to do this, but don't do it, just tell me if everything is right. So for this workflow, we're gonna apply only for stage, but we're also gonna plan for production. We're gonna say, hey, see if this also works in production. We don't want to then go into production and say, oh man, this doesn't work. We want to detect those errors early, right? So for that, we're gonna use a matrix. The matrix goes under strategy, together with fail-fast, which I'm gonna set to false; I won't explain what it is yet. And then we're gonna use a matrix. What is this matrix? So if we actually search for GHA matrix, we go into this documentation page and... sorry, it just gives me the Spanish one by default, I wonder why. There you go. Matrix. So you have all the explanations here. I'm gonna drop it into the chat.
But basically what you can do with this is you can define a Matrix to run different jobs with different configuration. So you define the structure of the job and then you give it some different inputs for different environments, for example. So you can say, I want to use different nodes version. So I can build my code for node 10, 12 and 14. So different, it really depends on your needs. So maybe here you want to build for ubuntu 18 in these three versions and ubuntu 20 for these three versions. So it's gonna combine all of them. So we're using that Matrix and we are using different syntax which is also in the documentation. Then we copy it. So we're gonna run it for stage and for production. So environment stage, environment production, different accounts. The config lives in different places for stage and for production and different tokens for both environments. And now if you let me, I'm gonna copy 10, 12 lines of code because we've already seen them. So at the same level of strategy, I'm just copying the steps. So let's start with the steps. Checking out the code. If you remember, we've already checked out the code here but this is a different job. So as I said before, everything is destroyed when a job finishes. So I need to check out my code. I'm gonna configure the credentials but now it's a little bit different, especially right here. So I'm using a different syntax which I would like to explain. So if you know objects from JavaScript, so I can define a person. Just bear with me. And a person can have a name.
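The stage/prod matrix described above could be sketched like this (the key names and var-file paths are assumptions, not the exact workshop values):

```yaml
strategy:
  fail-fast: false   # don't cancel the prod plan if the stage job fails
  matrix:
    include:
      - environment: stage
        account_id: AWS_ACCOUNT_ID_STAGE   # *name* of the env var to look up
        var_file: config/stage.tfvars
      - environment: prod
        account_id: AWS_ACCOUNT_ID_PROD
        var_file: config/prod.tfvars
```

Each `include` entry spawns one copy of the job, and inside each copy `matrix.environment`, `matrix.account_id` and `matrix.var_file` resolve to that entry's values.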

8. Syntax for Accessing Values

Short description:

We're accessing values using different syntaxes, such as person.name versus person['name'], and env[matrix.account_id]. These syntaxes evaluate to specific values, like the stage account ID held in env.AWS_ACCOUNT_ID_STAGE. It's important to understand these differences.

Which is me. And I have an age. I'm 18. No, I wish. I'm not 18, I'm 34. So how do I access those? person.name, right? That's gonna give me David. But this is also equivalent to person['name']. So that bracket syntax is equivalent. And we're doing something really similar here. So what is matrix.account_id? For stage, for example, it is this value: the name of an environment variable. So what is env[matrix.account_id]? That's gonna evaluate the inner expression first, because matrix.account_id evaluates to that name, which makes the whole thing equivalent to env.AWS_ACCOUNT_ID_STAGE. We've already used this syntax up here. So env.AWS_ACCOUNT_ID_STAGE is that value. Hopefully you understood that part. This is somewhat different syntax, but that's the explanation.
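A sketch of how that bracket syntax is typically used (the role name and parameter values are my assumptions):

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: eu-west-3
    # matrix.account_id holds the *name* of an env var, so the bracket
    # syntax env[matrix.account_id] looks it up dynamically —
    # just like obj[key] vs obj.key in JavaScript
    role-to-assume: arn:aws:iam::${{ env[matrix.account_id] }}:role/gha-ci-automation
```

The dot form `env.matrix.account_id` would not work here, because it would look for a literal key called `matrix` inside `env`.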

9. Setting up HashiCorp Terraform and Workspaces

Short description:

We're setting up HashiCorp Terraform, which provides access to the Terraform CLI. We'll cover commands like Init, Plan, and Apply. Workspaces will be created for each participant, allowing them to have their own resources and workspace.

So now we are setting up HashiCorp Terraform. By doing this, we have access to the Terraform CLI, the same way that we got access to the AWS CLI with that other action. We're specifying the version, which comes from here: 1.1.5. And then, if you remember from the presentation, there is the init, there is the plan, there is the apply. I'm gonna copy all those commands or steps, and we are gonna break them down. There are more than the ones that I explained, just for academic purposes. So we have the terraform init, and terraform init is just this command, terraform init, and then we have to specify some configuration. You don't really need to know all this. If you want to know more about this, we can take it offline; you can contact me and we can discuss this further if you want more information about Terraform. We are going to create some workspaces. I used to run this workshop really interactively, but I didn't quite have the time to run it that way for DevOps.js. It was really cool, because when I ran this workshop in the zone, every one of us had a workspace and their own resources, and it looked pretty cool.
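A sketch of the setup and workspace steps (the env variable names are assumptions; the `select || new` trick is a common pattern, not necessarily the exact workshop commands):

```yaml
- name: Set up Terraform
  uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: ${{ env.TF_VERSION }}   # e.g. 1.1.5

- name: Terraform init
  run: terraform init

- name: Select workspace
  # select the attendee's workspace; `workspace new` both creates
  # and selects it on the first run
  run: terraform workspace select ${{ env.ATTENDEE_NAME }} || terraform workspace new ${{ env.ATTENDEE_NAME }}
```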

10. Working Directory and Terraform Commands

Short description:

The working directory keyword can be used for the steps, and instead of repeating it for each step, we can define defaults. The defaults include the run command and the working directory, which is defaulted to the Terraform directory. This eliminates the need for environment variables and reduces the number of lines. Other Terraform commands include format, validate, and the shorten commit sha. The commit sha is used to specify the lambda to deploy, and it is obtained from the githubenv belonging to the job. The plan command is used with a VAR file from the matrix.

And then we have this working directory. This working directory is another keyword that we can use for the steps; as you can see, it's everywhere here. But we don't like to repeat ourselves, right? So instead of defining a working directory for all the steps, we can define some defaults up here, another keyword, which is nice.

So we have defaults, we have run. So it's a default for the run command for this run over here. So every time there is a run, there is gonna be a default, and for the default, we want to default the working directory, and we are defaulting to the Terraform directory. So now, instead of using this environment variable, we can get rid of that, and we can get rid of all these guys, because we've already defaulted the working directory for all the run commands. There you go. Saving some lines is always nice.
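The defaults block being described would look something like this (the folder name is taken from the walkthrough):

```yaml
jobs:
  infra:
    defaults:
      run:
        # every run: step in this job starts inside the terraform folder,
        # so working-directory no longer needs repeating on each step
        working-directory: ./terraform
```

Note this only applies to `run:` steps; steps that call an action with `uses:` still resolve paths from the repository root.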

And so, some other commands: Terraform format and validate are kind of the equivalent of linting the code. Then the shorten commit SHA again, because, if you see here, the lambda that we are uploading has the short commit SHA in its name. We need to actually say to Terraform, hey, this is the lambda that I want to deploy, so we need to use the same SHA. And we need to do it again because the GITHUB_ENV belongs to the job, okay? So we are doing the same thing, using the same SHA. And then the plan, this is something that we saw in the presentation. It's using a var file, which is coming from the matrix, matrix.var_file. It's coming from here. Pretty cool.
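These steps might be sketched like this (the variable name `short_sha` passed to Terraform is my assumption):

```yaml
- name: Terraform format
  run: terraform fmt -check   # the "linting" equivalent

- name: Terraform validate
  run: terraform validate

- name: Terraform plan
  # the var file comes from the matrix, the short SHA is passed on the fly
  run: terraform plan -var-file=${{ matrix.var_file }} -var="short_sha=${{ env.SHORT_SHA }}"
```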

11. Building Infrastructure and Frontend Job

Short description:

We have Dataform plan, which specifies the var file and environment variables. The if condition is used to apply changes only in the stage environment. The front end needs to communicate with the backend, so we share the API URL using Terraform output and save it in a JSON file. This file is then uploaded to GitHub as an artifact. We then have a job for the front end, where we wait for the infrastructure to be ready and upload files to the bucket. This job also includes changing the API URL in a script file.

Oh, there you go. We have a Chirpo here, Alessandro, the last message in the chat. I don't know if you're here. I cannot see who is in now. Oh yeah, there he is. So yeah, Chirpo is the head of the x in the zone. So it's the big boss. Is just spying a little bit. You want to say hello, Chirpo? No. No, he's not spying. He's just giving some motivational messages, I guess. You can speak, Chirpo. I don't have anything to say. Go, David, go, go. Thanks, thanks for dropping by.

Right, so we have Terraform. And where are we now? Oh, yeah, terraform plan. So terraform plan is another command that we also saw in the slides. We're specifying the var file, which has all the variables that are required to build the infrastructure. But we can also specify some of the environment variables on the fly, because they actually depend on where you're running, and this short SHA is different each time. So we need to pass them, let's say, live for each of the runs, and also the secret token, same syntax with the brackets. So nothing really different here. The only thing left to explain is the if. These are conditions that we can specify in the workflow. So if you remember, this is the workflow for stage. We only want to apply if the environment is stage. So we are gonna check whether matrix.environment equals either stage or prod. And the environment for this workflow is stage. So if the matrix environment matches stage, please apply my changes. If it's prod, then this is not gonna apply. So what we are doing here is just saying: hey, plan my thing in both accounts, but only apply the changes in stage, all right? Because we all know what happens in production, right? Everything breaks when you go to production.
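The conditional apply step could be sketched like this (the `-auto-approve` flag is my assumption — some non-interactive approval is needed in CI):

```yaml
- name: Terraform apply
  # plan runs for both matrix environments, apply only for stage
  if: matrix.environment == 'stage'
  run: terraform apply -auto-approve -var-file=${{ matrix.var_file }}
```

In the production workflow the same step would run with the prod var file and no `if` guard.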

So yeah, we've already done our infrastructure. So we go back to the diagram: we need the front end to talk to the backend, okay? When we apply the changes through Terraform, that API gateway that we are creating is gonna have an invoke URL that we need to call. So we need to share that with the front end. We need to tell the front end, hey, this is the URL that you need to invoke. So we need to share that between two jobs. This is the fancy part. This is where two jobs can share some artifacts. So the first thing that we are gonna do is output. There is another command in Terraform, which is terraform output. We're gonna output some JSON into a file whose name is coming from an environment variable. So this is the last environment variable that we need to define. Actually, I'm lying, because we didn't define this attendee name. So let me actually... This is the most important one. I used to say, when I ran this in the zone, this is the most important variable that you need to define. And I forgot today, sorry about that. And this is the other environment variable that we need to define, the last one. I'm defining this one up here because we need to share it between the infra and the frontend jobs. And it is called tf-output.json. So this output, and this is the only time I'm gonna show you some Terraform code, is gonna output the invoke URL of the API. And it's also gonna output the endpoint for the website that we are building, because we are building a website, okay? Yes, we are. So these are the two outputs and we need to share them with the frontend. So we're outputting them into a JSON file, okay? Just pay attention to this dot dot, because this is a run command. We are using the default terraform directory, if you remember. So we are actually outputting that outside the terraform directory, with these two dots. And now we need to save that file, because we then need to use it in the frontend job.
So this is the upload terraform output to GitHub step. We are using this action to upload an artifact. So this is saving the artifact somewhere in GitHub's storage. This is not uploading it to AWS, this is something in GitHub. I'm going to show you, when we run it, where this is actually stored. And we are doing this only for stage, right? So these last three are conditional steps which we are running only for stage. We are naming the artifact tf-output. And what are we saving? Dot slash. Why dot slash? Because this is not a run step, this is an action step, so the default directory doesn't apply here. We are running that action or this step from the root directory of the project. This is why we use dot instead of dot dot. Okay, so we're just saying, hey, save this file for me, I will use it later. So that's the infrastructure. If everything is okay, we're gonna have something like this: a lambda, an API gateway, and a bucket, but the bucket is empty. The static files, we still need to build them and upload them. So why don't we do another job for that? This is the second to last job that we're doing. I'm gonna move faster for this one because this is pretty much the same as what we've been doing. So this is the front end job. We'll give it a name. We need to wait for the infra to be built, because we need to upload those files into the bucket, and then the rest is pretty much the same. So we are just using this frontend builder. So if I can show you this. So this is not a React application, I'm sorry. I'm not here to teach you React. I'm here to show you GHA. So again, this is cheating, because we are pretending that we are building something. If you are familiar with bash, with commands in the console like sed, we are changing something in this script.js file, and that is the API URL.
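The output-and-save steps just walked through could be sketched like this (the env variable name is my assumption):

```yaml
- name: Terraform output
  if: matrix.environment == 'stage'
  # ../ because run: steps default to the terraform folder
  run: terraform output -json > ../${{ env.TF_OUTPUT_FILE }}

- name: Upload terraform output
  if: matrix.environment == 'stage'
  uses: actions/upload-artifact@v2
  with:
    name: tf-output
    # ./ not ../ — action steps run from the repo root,
    # the run: working-directory default doesn't apply here
    path: ./${{ env.TF_OUTPUT_FILE }}
```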

12. Uploading Files and Checking Deployment Status

Short description:

We configure the environment variable API_URL to change the invoke URL in the script. We download the tf.output artifact, which contains the API domain and S3 bucket values. We use the GitHub script action to parse the output and set it as an output. We substitute the API_URL in the build command with the value from the environment variable. We upload the static files to the S3 bucket created by the infrastructure. Finally, we check the deployment status and update it based on the success of the backend, infrastructure, and frontend jobs.

So this is the invoke URL that I showed you before, the one which is built in the infrastructure job. And we are then changing that using an environment variable. So what we need to keep from this is that we need an environment variable called API underscore URL, all uppercase, to change it in the script. Okay, so we're gonna do something like that in this job. Again, we need to check out the code. We've seen this before, configuring AWS, configuring Node using the same Node version. And this is where it gets interesting. So let me stop right here and focus on these three steps. The first one is easy. The equivalent of upload artifact is called download artifact. Clever! And we are downloading the artifact which we named tf-output. So we're saying, hey, whatever you saved under that name, give it to me. So this is going to place a tf-output.json file here for me, so I can read it. I'm gonna show you; it's probably not the most elegant or the most beautiful solution, but it's something that works, okay? So we need to read this file. This file is gonna contain a JSON object. Let me quickly show you how it looks... oh, I do have it. So it looks like this. The output JSON contains the API domain and the S3 bucket. So we need to get this value out down here. So API domain and then a value, right? So how can we do it? What we can do is cat. So cat is just printing in the console the content of this file, right? So this is the tf-output ID. Let me actually name it 444, whatever, just to identify it. So the id of this step is tf-output-444. And then we are using this action which is called actions/github-script. This is going to let us run some Node or JavaScript commands or scripts in an action.
So we are calling this one so we can use something like this. We are creating an object which is just parsing that process.env.OUTPUT. So what is that process.env.OUTPUT? This is an environment variable in Node.js, and this is saying: give me the content of this environment variable. And where is this coming from? Let me just put this 444 here. So this is coming from the previous step: steps.tf-output-444.outputs.stdout, the standard output. So basically, if I do cat sample, that's the standard output, right? It's just taking that content and placing it into the standard output variable, so you have access to it. So then I can access that, and then something really weird, which I found really weird the very first time I saw it: you can set outputs that you can then use later, the very same way that you are using outputs from a previous step. You can set your custom outputs with the syntax colon, colon, set-output, then name, then this api-url. So that's the name of the output. And then the content of the output is this object, the JSON already parsed, api_domain dot value. So all of this, I told you, is probably not the most elegant and not the most straightforward way to do it, but it works. And let me actually remove this 444; it was just to show you what it meant. So all of this is just so we have this step with the api-url ID. So we can actually do steps, the api-url ID, the outputs, which we set with this syntax, and then api_url is the output that we set, right? And if you remember, I told you that we needed an API underscore URL, all uppercase, to substitute that, which is gonna be used by the build command. So we are actually saying: substitute the API URL here with whichever value I'm passing you through this environment variable. So that's building our code, our static files for the frontend, with the appropriate invoke URL.
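Chaining the steps described above together, a sketch might look like this (step IDs and output key names are my assumptions; the `jq -c` compaction is my addition, because workflow commands like `::set-output` are line-based and `terraform output -json` is multi-line):

```yaml
- name: Download terraform output
  uses: actions/download-artifact@v2
  with:
    name: tf-output

- name: Read terraform output
  id: tf-output
  # expose the file content as a step output, compacted to one line
  run: echo "::set-output name=stdout::$(jq -c . tf-output.json)"

- name: Extract API URL
  id: api-url
  uses: actions/github-script@v5
  env:
    OUTPUT: ${{ steps.tf-output.outputs.stdout }}
  with:
    script: |
      // parse the JSON produced by `terraform output -json`
      const output = JSON.parse(process.env.OUTPUT);
      // equivalent to echo "::set-output name=api_url::<value>"
      core.setOutput('api_url', output.api_domain.value);

- name: Build frontend
  run: npm run build
  env:
    # the build script substitutes this into script.js
    API_URL: ${{ steps.api-url.outputs.api_url }}
```

(`::set-output` was how this worked at the time of the workshop; newer runners use `echo "key=value" >> $GITHUB_OUTPUT` instead.)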
And now the last thing that we have to do is upload those files into the S3 bucket. The S3 bucket is created by the infrastructure. It has a name: devops-gha-workshop, then the attendee name, David Rubio, and then the name of the region and the environment. So it's just uploading the static files. I know this can be a little bit confusing. I can go over it again if you need, and if not, just have a look at this action, which allows you to run some commands, like node commands, to do whatever you need. And then finally, now we basically have everything, right? We have the lambda with the code that we deployed using the zip file that we uploaded, the API gateway deployed by the infrastructure, and then the bucket with the files uploaded by the frontend job. And if you remember, the very first thing that we did was initiate the deployment, okay? So now we need to say whether this deployment was successful or not. And for that we are creating another job, and this is the last job. After this we'll do a break. So this is the deployment status job. The needs here waits, in this way, for all the jobs to finish. You can place as many jobs as you need here. And there is this if: always. This means that regardless of what happens... so if you don't specify this part, what's gonna happen is that as soon as a job fails, the whole workflow is gonna stop. But you can actually say: I don't care, just run this job, I'm gonna control what happens. So this is gonna run regardless of the state of the previous jobs. The rest is the same. And then we are using the same action from the first job. And we are saying: update the deployment status to success if the backend result is a success, if the infra result is a success and if the frontend result is a success. And just keep in mind that it says needs: you need to specify the jobs that you are actually waiting for here. So if I don't say that, even though the backend job exists, it won't work, because you need to access it through the needs. So you need to specify the job in the needs. So then with this syntax we are saying: hey, for that deployment ID, which is coming from this needs.init-deployment outputs deployment ID — that's the very first job here, this job in here, and remember that it outputs a deployment ID — if everything happens to be a success, set success. Otherwise, if any of them, backend or infra or frontend, is a failure, just set the state for that ID to failure. So I did say that there was gonna be a lot of typing here. We ran over a little bit, but we can probably move a little bit faster in the second part. Let's continue with this one. So what we're going to do now is to test it. The first thing that we're going to do is create a branch, with a name like that. So let's create that branch: git checkout minus b, gha-workshop-webapps. And we git add, we commit. So this is the workflow for stage, and we are pushing.
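The shape of that final job might be sketched like this. The action reference is a placeholder (the workshop reuses whichever deployment action created the deployment in the init job), and the job and output names are my assumptions:

```yaml
deployment-status:
  runs-on: ubuntu-latest
  needs: [init-deployment, backend, infra, frontend]
  # always() runs this job even when an upstream job failed,
  # so we can report failure instead of the workflow just stopping
  if: always()
  steps:
    - name: Update deployment status
      uses: your-org/deployment-status@v1   # placeholder: same action as the init job
      with:
        deployment-id: ${{ needs.init-deployment.outputs.deployment_id }}
        # success only when all three upstream jobs succeeded
        state: ${{ (needs.backend.result == 'success' && needs.infra.result == 'success' && needs.frontend.result == 'success') && 'success' || 'failure' }}
```

Note the point made in the transcript: `needs.backend.result` is only accessible because `backend` is listed in `needs`.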
So if I don't say that, even though the job, the needs, the backend job exists, it won't work because you need to access through the needs. So you need to specify the job in the needs. So then with this syntax saying, Hey, what I want you to do is for that deployment ID, which is coming from this needs init deployment, output deployment ID. So that's the very first job here. So, that's this job in here. And remember that it outputs a deployment ID, by saying for that deployment ID, if everything happens, like to be success, set the success. Otherwise, if any of any of them is a failure, backend or infra or frontend is a failure, just set the state for that ID to be failure. So this is the, so I say that there was gonna be a lot of typing here. So we run over a little bit, but we can probably move a little bit faster in the second part. Let's continue with this one. So what we're going to do now is to test it. To test it. So the first thing that we're going to do is we're gonna create a branch, which has a name like that. So let's create that branch, minus b, pha, workshop, webapps, workshop webapps, and we are going to add that. We're gonna commit. So this is the workflow for stage, and we are pushing.

13. Building the Production Workflow

Short description:

The workflow for the stage environment is successfully executed, and all the jobs, including the terraform.apply, are completed. The lambda is uploaded to the bucket, and the deployment status confirms the successful deployment. The website, powered by GitHub Actions, is built using Terraform and AWS. Now, let's create a production workflow that runs on tag pushes, and modify the necessary conditions and names.

Webapps. So if everything went fine, we should be able to go here and something should be triggered. There you go. So this is our workflow for stage. And if we go and create a pull request, I'm gonna be able to show you this init deploy. So you can see here that this branch is pending to be deployed into the stage environment, pending since 13 seconds ago. So basically that deployment ID has been created. This is running and we can actually go into the action. So this is where our workflow actually runs and this is how the pipeline actually looks. You can see the dependencies: the deployment status depends on the init deployment, and on all of the other jobs as well. You can ignore this error, that's fine. That's because it's trying to create a Terraform workspace which doesn't exist and it's just erroring, but that's fine. So you can see this one has the install, test, build and upload lambda. So this is the backend one. It did everything, like the unit tests and the linting. So it ran all the commands. It built, it configured the credentials using the actual secrets, and you can see that all the secrets are masked. It zipped the lambda and it uploaded the lambda into the bucket. So we can see this three-C-C-eight-four, blah, blah, blah. Let's go this way. If I refresh here, now we can see there is another bucket. So this is the bucket created by Terraform. And in the lambdas folder, there is this lambda. And the last modified one is the three-C-C-eight-four, the lambda uploaded just now. So it seems that everything finished. A question: is merge disabled while jobs are running? Not really. It looked disabled, but I could still merge. I'll show you; I think we'll create another pull request later. But no, it's not disabled. You wait.
If you need to wait, because you know, or if you know that everything's gonna be fine and you need to go ahead and merge, you can actually merge. So the plan and apply: you can see there are two jobs using the matrix. This one, which took 47 seconds, I don't know if you can see it, is the stage one. And you can see all of the steps ran, including the terraform apply, and they are all done. But if I go back and see this other one, which is shorter, these three steps were skipped because we were using this if condition. So it's actually planning and saying, oh, that's fine, it's all good for production in case you want to run it next. Then this is the front end and this is the deployment status. The deployment status, we can see, says the branch was successfully deployed to stage. So that's how we can connect the init job and the last job that we created. If we go to AWS, we can see, for example, this lambda. Is it gonna ask me to log in? No. Interesting. Let me log in again. Oh yeah, the role has a one-hour duration, so I need to log in again. All right. Production, stage, weird. Okay. So let me actually log in and go to the console. So this is the stage account. So that's Lambda. And as you can see here, there is the lambda greeter, David Rubio, created four minutes ago with my code. The API gateway is also created. And then in S3, there is this bucket, which is the actual website. In the properties we can see it has static website hosting enabled. And if I click here, there you go. So this is the lambda responding. We can actually see in the network tab that this is not static. If we refresh, there is a call to this endpoint. So that API URL over there is calling the endpoint, and it's responding with a message: hello, David Rubio. So that's the message, and the front-end is just hard-coding that exclamation mark. So this is the website that we actually built using Terraform and AWS, but we powered it with GitHub Actions. So now let's create another workflow. But in this case, we're gonna do the production one, right? We've done the stage, so let's do the production one. For production, we create another workflow. And again, I'm gonna give it a very good name: I'm gonna say on-tag. So I know that this workflow is gonna run on tag. And what I normally do is I just copy everything from the stage one and start modifying some stuff. So the very first thing that I change is the name. So this is on-tag, prod. The condition is different. So instead of being on push to branches, it's gonna be on push to a tag. So if I create a tag... you can see there's also tags-ignore. So you can give some regular expression here for all the tags which start with v.
So this is the website we actually built with Terraform and AWS, but we powered it with GitHub Actions. Now let's create another workflow — this time for production. We've done stage, so let's do the production one. I'll create another workflow and, again, give it a very good name: onTag, so I know this workflow runs on tag. What I normally do is copy everything from the stage workflow and start modifying it. The very first thing I change is the name — this is onTag PROD. The trigger condition is different too: instead of on push to branches, it's on push to a tag. You can see there's also a tags-ignore option, and you can give a pattern here to match all the tags which start with v.
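The trigger being described might look like this minimal sketch (the tag pattern is the usual semver convention, not necessarily the exact one from the workshop repo):

```yaml
name: onTag PROD
on:
  push:
    tags:
      - 'v*'   # matches v0.0.1, v1.0.0, ... — only tags starting with "v" trigger this
```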

14. Deploying to Production and Testing

Short description:

To deploy to production, change the environment to 'prod' and update the prefix for the backend. Planning for the stage is not required, so remove it from the workflow. Hardcode the production account ID and update the bucket prefix. The frontend only requires changing the stage to 'prod'. The entire process can be completed in a shorter amount of time compared to the initial workflow. Testing production early is a best practice to avoid issues before the release. The planning step allows for checking the changes before applying them to production. The artifact is shared between the infra and frontend jobs. Merge the pull request and push a tag to trigger the workflow.

And then you can do something like v1.0.0 — all the tags that start with v. Or instead of on tag, you could trigger on push to main or master; it really depends on when you want to deploy to production. So now we start removing — sorry, changing — some stuff. The environment in this case is going to be prod. We are not doing any planning for stage whatsoever; everything in this workflow is for production. So let's work our way through the workflow together and make sure we always point to production. For the init deployment we don't need to change anything, because the environment is controlled by an environment variable up here, which is already prod. For the backend, we have to change the prefix here; we could use the environment variable, but let's just hardcode prod — it really depends on what you want to do. We build the code the same way, nothing to change there — nothing here is related to the stage environment. What does change is the account ID: it's now the production account ID. I'm using the same role in both accounts, so these keys have access to that role in different accounts — this is an AWS setup I did before the workshop. The rest is fine, because we are uploading into a bucket prefix we've already changed to prod. The infra part is simpler now because we don't want to plan for stage, we just want production, so we don't need the matrix strategy. I'll leave it for a second because I need these values. So what do we need to change? The account ID doesn't come from the matrix anymore — I'm going to remove the matrix. Everywhere we see the matrix, we can remove it and hardcode the value: in the vars file, and this matrix reference here for the secret too. There you go. What else?
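The top-level environment variable being referred to might be as simple as this (variable names are assumed for illustration):

```yaml
env:
  ENVIRONMENT: prod    # the stage workflow sets this to "stage"; here it is fixed
  NODE_VERSION: '14'   # shared by the backend and frontend jobs
```

Because every job reads `env.ENVIRONMENT`, flipping this one value is most of what distinguishes the prod workflow from the stage one.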
Is there anything else for the matrix? I don't think so. This is normally how I create my workflows: I spend a lot of time on the first one, and because my code for the production deployment is really similar, I just copy, paste, and start changing and refactoring. For the frontend we only need to change stage to prod — the bucket name has the environment in it — and for the deployment status there is nothing related to stage. So how long did this one take? The first one was over an hour; this one was maybe five minutes. As I said, you'll spend most of your time on the first workflow — or, if you have development, test, stage, and production environments, you'll do the dev environment first and the rest will be faster, unless there are specific jobs you run per environment. And that's it — this is basically all you have to do. So now I'm going to go into GitHub and merge my pull request. Jan, there's a question in the chat — I'll get back to you in a moment; I just want to make sure you understand what we're doing now. We're going to run this workflow when we push a tag. But if, instead, the trigger were on push to main (my main branch is called main), then squash-merging would effectively be a push into main and that workflow would trigger. We're not doing that here; we're demonstrating tags. So I'm just going to push this: git status, git commit -am "add tag workflow", and push. And while that's building, I'll answer your question.
So it says: can you elaborate a bit on why the planning for prod was included in the first workflow? It wasn't required — it's a best practice, I would say. If I didn't plan for prod in the feature-branch stage workflow, the very first time I'd be testing production would be in this on-tag workflow. So if something goes wrong — say, the setup in the production account is different and you didn't take that into account — the very first time you notice is at the very end of the process, right when you or your bosses have a release to do. Instead, we move that testing earlier, and we're not making any changes in the production account: we only apply for stage, while for production we only plan, saying: hey, these are the changes I am planning to make in production. Exactly — because here I can go to the actions, see the production terraform plan, and it looks okay: no errors, and at the very bottom it says it's planning to add 21 resources, which looks right to me. If something were wrong it would error, saying you're missing this and that. So it's a pre-flight check, yeah. Good question, thank you. Now, my action — we're fine. This other one worked, of course, because we didn't change anything, and as you can see here, this is the artifact we are sharing between the infra job and the frontend job. If I actually download it — where did it go? I think it was here; let me open it as JSON and place it here.
It looks really ugly, but this is the sample output I showed you — it looks a bit different for GHA; we just strip that part out afterwards. That's the output and that's the file that gets saved. So now we're ready to go into the pull request and merge it. I'm just going to squash and merge — this is useful stuff here. Then I check out my main branch, pull, and say: okay, I'm going to tag. Do I have tags? No, I don't have any tags, so I'm going to tag version 0.0.1. Important — this leading v is important, because otherwise it won't match. And then: git push origin v0.0.1. So I'm pushing a tag.
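The tagging steps just described, in order (the tag name is the one used in the demo; your first tag may differ):

```bash
git checkout main
git pull
git tag v0.0.1            # the leading "v" must match the workflow's tags pattern
git push origin v0.0.1    # pushing the tag — not the commit — triggers the onTag workflow
```

Note that `git push` alone does not push tags; the tag (or `--tags`) has to be pushed explicitly.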

15. Debugging and Deployment Status

Short description:

So this is actually matching the condition. We have a production workflow. There is a question in the chat about running jobs locally. Unfortunately, you cannot run a workflow locally, but you can run the commands and create your own scripts. We encountered an error, but we were able to debug it and fix it. The deployment status shows a failure in production, but we were able to resolve it. You can see the deployment history and the green status. The workflow provides a good overview of deployments.

So this is actually matching the condition. There you go — pushing a tag; our new tag is here. And what do we have? We have a production workflow, so that's going to run. As you can see here, this feature-branch stage one was the first workflow, and the name we've given to the on-tag one is onTag PROD, so that'll be that one. And that's the workflow. It looks a bit different because there is no matrix running — the production workflow is only doing the Terraform for production.

And while we wait, there is a question in the chat. Peter, can we like run the jobs in our local just to see if things are working before pushing? And no, I mean, you cannot run the jobs like say, Hey, run this workflow. You can run the, of course, you can run the commands because at the end of the day, these are just your commands. So if your environment is set up with node then you'll be able to run these commands and test it. You can create your own scripts, maybe a make file that is gonna run these commands for you. But as far as I can tell, there is no way to run a workflow locally. Maybe there is a Docker image to do that but I'm not aware of that, sorry. But that's a very good question. That's interesting. So I'm gonna add it here, run GHA-workflow-locally. Good question, thank you, Peter.

Okay, so we have an error. That's very nice, very very nice — let's debug it. I promise I didn't plan this error. It's nice because we can see the deployment status: under environments, there's a failure in production, so yay. Our deployment-status job, even though it's green, is actually running the failure step because one of the jobs failed. Let's debug this together. Starting the… okay, very nice — this is regarding the download artifact. Oh, okay. This is the beauty of copy-paste: we need to remove these ifs, because we don't need them anymore — there is no matrix. Perfect, let's remove that. Basically what's happening is it's trying to download an artifact which was never uploaded, because the if condition wasn't met, so the upload step didn't run. Let's fix that. Just for the sake of speed, I'll do it directly on main rather than creating another branch. Add that, fix my typo, and push. No problem — we can create another tag, v0.0.2, push the tag, and hopefully that works. A comment in the chat: you see the positive in things. Yes, yes — I'm very positive all the time. I was actually planning to fail a job on purpose — put an exit 1 somewhere to show you how this works — but we got this for free, so, looking at the bright side. Yeah, exactly: if you don't get an error, you're doing it wrong. But that's the beauty, and you can see what the errors look like. That was my error: unable to find any artifacts for the associated workflow. You can go in and see it was trying to download this tf-output artifact, and because of the condition that wasn't met, that output wasn't there. So we've created another tag and this time it's going to work — because we did it together.
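The failure mode just debugged is a mismatch between a conditional upload and an unconditional download. A minimal sketch of the correct pairing (artifact and file names are illustrative):

```yaml
# In the job that produces the output — no leftover matrix `if` guard:
- name: Upload Terraform output
  uses: actions/upload-artifact@v3
  with:
    name: tf-output
    path: tf_output.json

# In a later job that consumes it — the artifact `name` must match exactly,
# and the upload step must actually have run:
- name: Download Terraform output
  uses: actions/download-artifact@v3
  with:
    name: tf-output
```

If the upload step is skipped by an `if` condition, the download fails with exactly the "Unable to find any artifacts for the associated workflow" error seen here.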
And so if I actually, in the meantime, I go into the production account, one second, and so, that's the build frontend. Let's watch it. So you can see here, it's waiting for a hosted runner. And so this is, if you remember from the presentation, I told you that there are hosted runners and self hosted runners. So this is, you get into the queue and whenever there is a runner available for you, you get assigned to that, and it goes to your job. Actually, it did run as you could see here. So now, it did download this one. So, just saying, hey, I found it. Very nice, David. And yeah, well, now everything's green, so this would be green. It's green. There you go. We have, yay. Nice. You can see the history, the deployment history. Really nice thing. Actually, I never used this before this DevOps JS workshop. I found it really, really nice because it can give you a good overview of your deployments. And actually, if we go into, instead of going into S3, we can see in the workflow itself.

16. Manual Deployment and Triggering

Short description:

If we go to the planning and apply terraform hopefully this output is here. We need to find the website URL from AWS. This is the bucket for production. We change the stage to prod. We can do a manual deployment using the on workflow dispatch event trigger. The workflow only appears if it's in the main branch. We can manually trigger it for production.

If we go to the plan-and-apply terraform job, hopefully this output is here. I don't know why it doesn't output the S3 domain here, so we need to go and find the website URL in AWS — not a big deal. This is the new bucket for production; we scroll down, and this is prod, right here. And if we copy this one — stage is still working — and we change that stage for prod: nice, perfect. That's automated, that's cool. But maybe you're thinking: hey David, I don't like going to production on a tag — can I do it manually? And the answer is yes, you can do it manually. Conscious of time, I'm not going to demonstrate the whole thing — there's nothing new in it that you don't know already — I'm just going to show you the difference, and it's just in the trigger condition. So I'm going to create a new file here: on-deploy. This is a manual deployment. We can copy the existing workflow, and the only thing that changes is the condition under which it runs. I had this prepared a bit nicer, but let me paste it and explain quickly. There is the on workflow_dispatch — this is a manual dispatch. For the dispatch you can specify inputs: for example the name, instead of hardcoding it, or the environment — you can have a choice of environments. I'm just going to remove stage because I want to keep it simpler and move to a different topic now. And if you want to read those inputs instead of hardcoding them, you read from github.event.inputs and then the name of the input — that would be the environment. The rest is the same, okay? Nothing to change, because we're going to deploy to prod. So I'm just going to quickly push that — git add this…
No, wait — I don't want to commit the tf-output file, so let me git reset that and remove it. git status — yep. Then commit with "add manual deploy workflow" and push directly to main. Maybe you have protection on your main branch, but for the sake of speed I didn't set that up. So what do we have now? I've already pushed, and here in Actions we have another workflow: on-deploy. This workflow has a workflow_dispatch event trigger, and because of that it gets this "Run workflow" button: I can actually type my name here and select the environment — in this case we only have production — and run it. So we can trigger it manually, in case your team or your company isn't ready for this sort of automation to production. Just keep in mind that this button only appears if there is a workflow with workflow_dispatch in the main branch. If the workflow is not in main, the button will not appear — remember that, in case you add it and wonder why it isn't there. It has to be in the main branch. The rest is the same deploy-to-production: we copy the whole workflow and just change the condition.
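The manual trigger being described might look like this sketch (input names and the env wiring are illustrative):

```yaml
name: onDeploy
on:
  workflow_dispatch:
    inputs:
      name:
        description: Who is deploying
        required: true
      environment:
        description: Target environment
        required: true
        type: choice
        options:
          - prod
env:
  # read the dispatch input instead of hardcoding the value
  ENVIRONMENT: ${{ github.event.inputs.environment }}
```

With `type: choice`, the "Run workflow" button in the Actions tab renders a dropdown instead of a free-text field.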

17. Exploring Common Actions and Code Reuse

Short description:

In this part, we explore common actions and how to reuse code for both production and stage environments. We create an action called 'install test and build backend' and specify its inputs and steps. The action uses composite actions and the bash shell. We then integrate the action into the workflow by checking out the code and calling the action using the 'uses' keyword. The action requires inputs, including the node version and a secret value passed from the workflow. By defaulting the inputs, we allow for flexibility in customization. This action helps reduce the need for copy-pasting and promotes code reuse.

So, nice — we've done the feature branch, the tag, and the manual on-deploy. But there's been a lot of copy-paste — we've copy-pasted everything, and we don't like that, right? So let's explore common actions now. Conscious of time, I'm going to build just one or two, and the rest will all be here — you can see the whole workshop laid out in parts, and as soon as I share this repo with you, you'll have access to the solution to everything. We've completed part one, so let's do part two. For part two I'm going to grab this on-feature-branch workflow. (You don't see anything on top here, right? No menu? Good — that's just a menu from Zoom on top of my tabs.) Right, so let's make this an action. What do we want to do? We want to reuse code, because, if you remember, all of this code is the same for both production and stage. So we create actions and then reuse them. Let's do that. As a best practice — it's not mandatory to place it here — I'm going to create a folder called actions. Your workflows have to be under the workflows folder, but your actions don't; they can be anywhere. Then I create another folder called install-test-and-build-backend. What is mandatory is the name of the file: it has to be called action.yaml, and it has to live there. And I want to move these steps here into it. So this is my action — are we done? No, we're not done: we need to give it the right format so GitHub understands it's an action. The very first thing we do is name it.
So the name is going to be: install, test and build backend. We give it a description: installs, tests and builds the lambda backend. We specify inputs — remember, when we use actions we pass values with the with keyword, so here we're defining the inputs this action accepts; we'll see what they are in a moment. And then — I always forget this — at the top level it's not steps, it's runs. We say using: composite (that's why these are called composite actions), and under that come the steps: yes, all of them. You can see there are some errors; if I hover, it says missing property shell. Inside a composite action you have to specify the shell you want to use for every run command — in my case bash — so I copy that onto all of them. Now we're ready, we just need to define our inputs. We're specifying two: one is mandatory, one is not. I'll copy them and explain. We give the first one a name, node-version, plus a description and whether it's required; if it's not required, you can also specify a default. For the node version, we can say: take it from the inputs with this syntax, or, if no input was passed, take it from the environment. Which environment? The environment of the caller workflow — the workflow that calls this action has the variable defined. This is a nice way to default your inputs: if you pass it, you override; if not, it just takes whatever is in the environment.
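Put together, the composite action being built might look roughly like this — file path, input names, and the npm commands are illustrative reconstructions, not the exact workshop code:

```yaml
# .github/actions/install-test-and-build-backend/action.yaml
name: Install, test and build backend
description: Installs, tests and builds the lambda backend
inputs:
  node-version:
    description: Node version to use; falls back to the caller's environment
    required: false
  npm-registry-token:
    description: npm registry token (composite actions cannot read secrets directly)
    required: true
runs:
  using: composite
  steps:
    - uses: actions/setup-node@v3
      with:
        # prefer the input; fall back to the caller workflow's NODE_VERSION env var
        node-version: ${{ inputs.node-version || env.NODE_VERSION }}
    - name: Install, test and build
      shell: bash   # every `run` step in a composite action must declare a shell
      env:
        NPM_TOKEN: ${{ inputs.npm-registry-token }}
      run: |
        npm ci
        npm test
        npm run build
```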
And I said there is one required input, and that's because you don't have access to secrets inside composite actions. You have to pass the secret from the caller workflow: inside the action you say, okay, take the value from the input, and you just specify the input's name. Okay, so now we have the action and we want to use it in the workflow — I'll put them side by side. We got up to the build command, so we can replace from here; this is already in the action. How do I call it? The very first thing I do is check out my code, because the action lives in the repo: I need to check out the repo to be able to call the action. So do not move the checkout step inside the action — you need the action to be checked out first. And now, to use an action, I use uses. Where is my action? It's local: ./.github/actions/install-test-and-build-backend. I do not need to specify action.yaml — that's the default file name, which is why it resolves automatically. So it's saying: hey, my action is here, but it requires, at the very least, all the required inputs.
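The caller side, up to this point, might be sketched like this (the local path matches the folder layout assumed above):

```yaml
steps:
  - uses: actions/checkout@v3   # the action lives in this repo, so check it out first
  - uses: ./.github/actions/install-test-and-build-backend
    # validation fails here until the required inputs are passed via `with`
```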

18. Reusing Actions and Testing

Short description:

To reuse actions and pass inputs, you can define actions with required inputs and call them in your workflows. This allows for easy customization and reduces the need for repetitive code. You can also create an action for zipping and uploading files. By defining inputs and passing them in the workflow, you can easily reuse the action for different purposes. Common actions and remote actions can be used to simplify and streamline your workflows. By migrating actions to common actions, you can easily share and reuse them across different workflows. Testing the actions ensures that they work as expected. By changing the command in one place, you can update it for all the workflows that use the action. This makes it easier to maintain and manage your workflows.

So how do I do it? The very same way as with any other action: with with. npm-registry-token — the value comes from the secrets, because if you need to pass a secret to the action, you need to do it from the caller workflow. What else could I pass? The node version. It's not required, but let's say I'm doing an experiment and want to test Node 16 — I can override it. If I don't specify anything, it takes it from the environment, and my environment is already defined. Or, if I don't want to define it there, I can default it in the action — default: 14 — and get rid of that part. If I specify a value, it takes mine; if not, the default. Different options — it really depends on where you want to place the logic. For me it makes sense to have it in the workflow environment, because I'm sharing the node version between the frontend and the backend. So what can I do now? I can get rid of all this, because it's in my action already. Amazing. And what else? I can take this action, go into my on-tag workflow, and reuse it there: boom, three lines instead of — I don't know — twenty. What can I do here? Same. Boom, done, three lines. And I can make as many actions as I want. I'll just show you one more — I'll copy it because I have it right here: the zip-and-upload-backend one. Same process: I move it into the actions folder, paste it there, open it side by side, and we use it. It's a bit more verbose because I need to pass more inputs, but that's fine — I'll grab the end result from my solution here. I promised I'd share this with you, and if I make a promise I need to fulfill it. All right, so this is my action, and this is how I call it.
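Passing the secret and optionally overriding the node version from the caller might look like this (the secret name is illustrative):

```yaml
- uses: ./.github/actions/install-test-and-build-backend
  with:
    npm-registry-token: ${{ secrets.NPM_REGISTRY_TOKEN }}  # secrets must come from the caller
    node-version: '16'                                     # optional override; omit to use the env fallback
```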
And I paste it here. Same deal — it's from my local repo — and then I pass all the required inputs. I need to pass this one because it's a secret (remember, I cannot use secrets inside the action). And this is how it looks: the four configure-credentials steps become these four steps here. The inputs are: my secret, the region (which you could default to whatever region you're using), the account ID, the name of the file, and the bucket prefix — oh, that one's required as well. So I've defined all my inputs and I'm passing them: the access-key-id is the secret, the account ID is what we defined up top, the bucket prefix is my prefix as before plus the environment, lambdas is the path to my source, and dist is the output file. Then I can get rid of the inlined steps. And how nice does it look? Much more straightforward, easier on the eye, because you can just read: oh, this is doing zip-and-upload. You can name the steps too — this is zip and upload lambda, and this other one is install, test and build. So what do we do? We test it — let's test it. For the rest of them I've already transformed everything into common actions too, but we're going to use remote actions to exemplify those, so I'm not going to migrate them all. As I said, I'll be sharing this; as you can see in part two here, all the actions — Terraform and the frontend too — are migrated. Let's make sure it works. Create a branch and check it out: git checkout -b local-actions. What do we have to add? All of them — all the actions. We add them, commit with "migrate some actions to common actions", and push local-actions.
If I didn't make a mistake, this is going to work as well. So this is the new push, and we can see that it's running my action, with my values. It's doing the same thing — npm install, npm test, everything — but it's coming from a file which I control. And if the way I test things changes — say, instead of npm test my command becomes npm run unit-test — then I change it in one place and it changes for all the workflows. This is the beauty of common actions. So that works — we have it here. Let's keep it running.

19. Remote Composite Actions and Centralized Repo

Short description:

You can centralize actions in a different repo, creating a centralized repo where common actions live. This allows developers to use actions without worrying about developing them. This is what my team does, providing actions for over 500 engineers across different teams. We have a centralized repo and a readme that explains how to use the actions. You can mix and match local and remote actions, combining team-specific actions with common ones. You can check out the common actions repo and call the actions in your workflow. The inputs and steps are defined in the readme, making it easy to use the actions.

But you may be wondering: hey David, can I centralize all these actions in a different repo, so I don't have to keep copies of them in every repo? Well, you guessed it — the answer is yes, because otherwise I wouldn't be bringing it up and there'd be no demonstration. So, part three: remote composite actions. Let's do this quickly, and then we'll have a short break if you need one, because I also want to show you reusable workflows. I know we're close to 7 PM my time — wherever you are. 1 AM in the Philippines? Oh, wow. So we have some people from the far East and some people in Canada — we're going global. All right, perfect, let me move on.

So, I'm going to show you this devops-workshop-actions repo. What if I told you that this looks familiar already? Install-test-and-build-backend, zip-and-upload-backend, terraform — this looks really familiar. And if I go into, for example, this one, what do I have? An action.yaml. Didn't we just create an action.yaml file? So this is an action living in a different repo, with all the instructions to install, lint, test, and build my code. This is cool, right? Because I can have a centralized repo where all my common actions live, and share it with all my developers. Imagine you have a developer-experience team that provides these actions to developers, so they don't need to worry about developing the actions, only using them — that would be a really nice developer experience. And guess what: we do this in my team. We've created actions that are used by the whole engineering organization — more than 500 engineers across different backend and frontend teams. All the actions are really similar, because we have really similar patterns to build the code. So this is what we do: we have a centralized repo providing all these actions, and we also write the readme in a really nice format so you don't actually have to go into the action — you can just read the readme: oh, what do I need to provide? Does it look familiar? node-version is not required, it's defaulted; then there's the registry token; and there's a usage section. So let's have a look — but instead of this one, let's do Terraform, since we have Terraform here. And we're going to mix and match local actions with remote actions, because you can combine them: you can have your own team's actions that are only specific to your team.
But maybe Terraform is common across the whole department, so you can just use this shared one. Let's pay attention: oh, there's the usage — amazing, perfect. Let me copy it. This is what I need to do: check out the common actions repo, and then call the action. Nice, so let's do that — I'll do the infra; we can even use it with the matrix. Copy and paste whatever is in the usage: okay, we're checking out our code — this very same repo — and then we're doing another checkout, but for a different repository: the actions repo. If we don't provide a ref, it checks out the latest main; but since we're controlling versions like we do here, we pin to a tag — we have one tag, v1.0.0. I'll stick to that for security; I don't want surprises. Then the token. This is a personal access token that you create from the Developer Settings — this is one for the DevOps.js GHA workshop that I created yesterday. That gives you a token, and then you go into secrets and actions in the repo that needs the access and create — I called it — the GLOBAL_GITHUB_TOKEN. This token has repo permissions to read from other private repos. So I'm providing that token for access, and the path input is where I want this code checked out: I'm just going to check it out into a folder called common-actions. So now I'm checking the whole actions repo out into common-actions; as you can see, there is a terraform folder in it, nice, and inside that terraform folder there is the action. So I'm all set.
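The two-checkout pattern just walked through might be sketched like this — the repository name is hypothetical, the secret name is the one created in the demo:

```yaml
steps:
  - uses: actions/checkout@v3        # this repo's own code
  - name: Check out common actions
    uses: actions/checkout@v3
    with:
      repository: my-org/devops-workshop-actions  # hypothetical shared actions repo
      ref: v1.0.0                                 # pin to a tag — no surprises from main
      token: ${{ secrets.GLOBAL_GITHUB_TOKEN }}   # PAT with read access to private repos
      path: common-actions                        # where the repo lands in the workspace
  - name: Terraform
    uses: ./common-actions/terraform              # action folder inside the checked-out repo
```

Pinning `ref` to a tag is the key design choice: the shared repo can evolve on main without silently changing every consumer.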
And I'm passing the inputs, not doing anything different. The account ID, in my case, is gonna be my account ID from the matrix. My secret token: how was I passing that? Using the secrets context, with this syntax, like that. The backend config: how was I doing that? With the matrix. Beautiful. This var file was coming also from the matrix, var file. And then the TF apply: how am I doing that? Same, so I'm just doing my if. So this is if I need to apply or not. And what can I do now? So let's see what this action is doing. So it does everything, everything, everything, everything. It even uploads my Terraform output; you can see it's using this inputs.tf-apply equals true. So that's coming from the inputs. All my inputs are defined here. So instead of having all these lines, I have that. How beautiful is that? I think it's beautiful, to be honest. So we can do the same for the front end. For the sake of timing, I'm not going to do that.
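The steps described above can be sketched like this. The repo name, tag, secret names, and input names are assumptions for illustration, not the workshop's exact values:

```yaml
# Hypothetical job steps: check out a shared actions repo and
# call its Terraform composite action with values from the matrix
steps:
  - uses: actions/checkout@v3        # this repo's own code
  - uses: actions/checkout@v3        # the shared common-actions repo
    with:
      repository: my-org/common-actions
      ref: v1.0.0                    # pin a tag, no surprises
      token: ${{ secrets.GLOBAL_GITHUB_TOKEN }}  # PAT with repo read access
      path: common-actions           # where to check it out
  - name: Terraform
    uses: ./common-actions/terraform # the action inside that checkout
    with:
      account-id: ${{ matrix.account_id }}
      secret-token: ${{ secrets.MY_SECRET_TOKEN }}
      backend-config: ${{ matrix.backend_config }}
      var-file: ${{ matrix.var_file }}
      tf-apply: ${{ github.ref == 'refs/heads/main' }}  # the "if I need to apply" condition
```

The second checkout is the key trick: a composite action in a private repo cannot be referenced directly without access, so it is checked out to a local path first and then used like a local action.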

20. Testing and Reusable Workflows

Short description:

We're going to test this one. We can see the job is running. It's checking out common actions. The power of common actions is amazing. We are going to see now the power of reusable workflows. Reusable workflows are more confined to the team. This actually works. We are using a common action from the local and remote actions and it still works. It's up to you, to your imagination, how you can combine all these things. Let's switch to a different topic. Now we are going to see reusable workflows.

We're going to test this one. So I'm going to git commit, "use Terraform remote action". Push that. Boom. So if I go to my repo, my action, you see, use Terraform remote action. Mm, that's interesting. We're going to have to wait for this to finish and then we check the matrix. Any questions or anything that you want me to repeat? Or maybe you can tell me if you need a five minutes break, or we can move to reusable workflows. Let me know. Jan, fine without a break. Fine to keep on. Yeah, that's perfect. So let's then continue. Let's try to stick to the timing. Finish in 28 to 27 minutes. So we can see the job is running. We can check, look what's happening. It's checking out common actions. So it's actually checking out from this repo, it's doing this repo. So I think this is beautiful. The power of common actions is amazing. We are going to see now the power of reusable workflows. Reusable workflows are, from my point of view, less... how can I say it? Because with an action, you can do Terraform, which is really common for all the teams. But maybe the workflow, because it's bigger, might not apply to all the teams. So you have the granularity of the actions. You can just pick: I need to use this action, boom. But maybe a reusable workflow is not applicable to your team, because maybe you do something different. So you have the best of both worlds. I would say that common actions are more applicable to the whole engineering organization or the whole company, but reusable workflows are more confined to the team. And so this actually works. We can actually check that. This stage, sorry, let me, what is the name? DevOps. So, still works. We are actually using a common action from the local, we're using remote actions, and it still works. So it's really up to you, to your imagination, how you can combine all these things. But then, let's switch to a different topic. Now we are going to see reusable workflows.
And we will finish.

21. Reusable Workflows and Workflow Calling

Short description:

In this part, we learn about reusable workflows and how to call them from another workflow. We create a workflow called 'callme.iamreusable' with specified inputs and outputs. The workflow formats a message using the inputs and outputs it. We define a job called 'formatted job' that runs on Ubuntu and has outputs. The steps include formatting the message and printing it to the console. We also explore using secrets in a dummy job and how to call a reusable workflow from another workflow.

So, I'm just gonna use a very, very simple workflow. Okay? I think, Sam, you've seen everything that I've explained so far, but you haven't seen reusable workflows, so hopefully you're excited about this one, because it's something new for you as well.

And so, a new file. A new file. So I'm gonna create a workflow called callme.iamreusable. So, this is nothing but another workflow, okay? It has a name, I AM REUSABLE. And the thing that changes is the condition: on workflow call. That's what makes it reusable, right? Or callable. And then we specify some inputs. Gonna specify the name. So let me just copy it. There's nothing really new. You just specify if it's required or not, and the type, if it's a string. I think it supports strings, numbers, and booleans. I will need to double check that, but there's documentation, actually. It's... usable, reusable.

What? No, no, no. Why doesn't it let me type? Reusable workflows GitHub. Thank you. Spanish, again: reusing workflows. I'm just gonna paste it here for you, so you have all the docs in there. So, inputs. And also, because you may be passing some secret and you don't want to expose it, you want it masked in your output, besides these inputs there is also a secrets section. And if there are inputs, there are outputs. So these are the three sections. You specify inputs in the same way, as plain text or secrets, and then you can output something from a workflow.

So this is very simple. What this workflow is going to do: we are gonna pass it some inputs, and it's gonna return a message to us in a nice format. Okay? And the value is coming from text. And if the value is coming from a job, we will see how we can access that. So we are gonna define a job, which is gonna be called formatted job, and it's gonna output a message. Okay? It's gonna make sense in a moment. So we have jobs, nothing new. And it's a formatted job, we need that job. Nothing really new: it's gonna run on Ubuntu latest, it's gonna have a timeout, and it's gonna have outputs. We're gonna define them in a moment. And it's gonna have steps, nothing new. You know all this already. You've been with me two and a half hours, you know this. I know you do, because you're a very good audience. And you're listening, I think. Sorry, we're just kidding all the time. Right, so what do we do with steps? We read the name. So this step is a step to format a message. We're gonna give it an ID because we need to access that for the output. So this is the formatter step. And we are going to run a command. So we've seen this as well. We're gonna echo, you remember this colon colon set-output, and we give a name to the output. In this case, it's gonna be message, then colon colon, and the value is gonna be "Hello" plus inputs.name, "you are in" plus the other input, inputs.environment. Okay, so we are reading these two inputs and we are just doing something with those inputs, something really easy. And we are actually going to print that into the console as well. So allow me to copy: we are echoing steps, then the formatter step, which is the one above. We are reading the outputs from this, which is being set with set-output, and then message is this output name, message. And we need to output it because we are gonna use this output from this workflow in the caller workflow.
So this is the workflow which we are going to call from another workflow. So we are gonna output a message from it; it's basically the same thing for me. Okay, so we are accessing this output from the steps, and then the workflow is outputting something out to the world, okay. What it is outputting is jobs, formatted job, outputs, message, okay? So you understand the chain, the tree: how we access it in the step or from the job. So we're gonna have outputs in the step, or we can have an output from the job. The one we are interested in, to output to the caller workflow, is the one from the job. Nice. And then, remember we have this secret, okay? We haven't used that yet. Let's just use that. Let's create just a dummy job. This is a magic job. And this is using the secret token to do something, to do some awesome magic. Whatever: you can access the secret, the super secret token, from this section over here, and you can do whatever. So this is the reusable workflow, right? We need to call this workflow from where? From another workflow. Workflows are all about jobs, all about steps, and hopefully you are familiar with all the keywords from GHA by now.
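Put together, the reusable workflow described above might look roughly like this. It is a sketch: the file name, job names, and secret name are assumptions, and the `::set-output` command shown (as used in the workshop) has since been deprecated by GitHub in favor of writing to `$GITHUB_OUTPUT`.

```yaml
# callme.iamreusable.yml (hypothetical reconstruction)
name: I AM REUSABLE
on:
  workflow_call:
    inputs:
      name:
        required: true
        type: string
      environment:
        required: true
        type: string
    secrets:
      super_secret_token:
        required: true
    outputs:
      formatted_message:
        # the output the caller workflow will read
        value: ${{ jobs.format-job.outputs.message }}
jobs:
  format-job:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    outputs:
      message: ${{ steps.formatter.outputs.message }}  # step output -> job output
    steps:
      - id: formatter
        run: echo "::set-output name=message::Hello ${{ inputs.name }}, you are in ${{ inputs.environment }}"
      - run: echo "${{ steps.formatter.outputs.message }}"  # also print it
  magic-job:
    runs-on: ubuntu-latest
    steps:
      - run: echo "doing some awesome magic with the secret"
        env:
          TOKEN: ${{ secrets.super_secret_token }}
```

Note the chain the transcript walks through: step output, to job output, to workflow output; only the job-level output is visible to the caller.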

So this is the caller workflow. So, do you want me to type everything or do you want me to copy? Probably for the sake of time I will copy and explain line by line. A caller workflow is nothing but another workflow. It's just another workflow. It follows the same rules. So let me just copy it. And let's just do it together. It has a name. So this is just my workflow which calls something. It is gonna be triggered manually. So we've already seen this workflow dispatch. We are specifying the name and environment. And then, actually this is...

22. Using Switch Case Action and Calling Workflows

Short description:

This part explains how to use a switch case action to get the right token for the environment selected. It shows how to output the global secret as a job output and call a workflow. The steps for calling the workflow are explained, including specifying inputs and secrets. The part also mentions the use of a reducible workflow that outputs a message, which is then printed using the needs called local workflow job outputs. Finally, the part discusses the process of trying the workflow and merging it into the main branch.

Okay. Let me make it simpler for you. Pretend you didn't see that, okay? I know this is recorded. I'm gonna share the workflow. Sorry, the repo. So you can actually have a look at that. I feel bad now. Anyway. So, this is gonna call... Okay, let me explain it. I'm gonna try to do it in five minutes. So there is this job, which is gonna... Because we have two options. We have stage and production, and you have a token which is different for stage, and a token for production, okay? And depending on the choice that you make when you are actually triggering this workflow, you need one token or the other. And there is an action which allows you to get a token depending on your choice. And that action is this switch case. So this is a switch-case action. So this is basically defining the global token secret based on the environment, the environment that we actually select here. So the default that we are gonna grab is stage. But if we actually select production, we are using the production token. That was an action; that wasn't actually five minutes, it was 35 or 30 seconds. And I hope you understood what I just said. It's a switch case to get the right token for the right environment depending on your choice. Makes sense, I think it makes sense. So, nice. We are gonna output that global secret as a job output. And then we are gonna call a workflow. It needs, we've already seen this, it needs this job first because we need that token. And then what are we doing? We are calling; this is really similar to the way we call local actions. We were doing .github slash actions and then the name of the action. But in this case, we need to specify the name of the workflow. If you remember, with the local actions you didn't need to specify the action.yaml file. But in this case, because you need to say "I want to call this workflow", you have to specify the name. Then the with is just inputs. So actually, let me put it side by side.
So this is the reusable workflow on the right-hand side, and the caller workflow on the left-hand side. So the with is just the inputs. And we are passing the name, which is coming from where? It's coming from here, from my workflow dispatch. The environment is also coming from my selection. So with "with" we specify the inputs, and with "secrets" we specify the secrets. That's very obvious. And the secret comes from a job which is a dependency for this job, this needs. And then getGlobalSecretToken, which is this job in here, which has an output, and then GlobalSecretToken. So really, really nice, this switch-case action, totally recommend it. And I'm really glad that I actually explained that to you because it's really helpful. And then, that's gonna call the workflow, okay? And if you remember, this reusable workflow outputs the message. So this actually outputs something, and we're gonna print that something. So it needs to invoke the workflow first, then runs-on, timeout, I'm not gonna explain that. And then the steps. It's gonna print the message received from the called workflow. So it's just an echo. And what is it? It's needs, call local workflow job, outputs, and then this formatted message, which is nowhere here. But we know that our reusable workflow outputs formatted message. So I put it side by side. That matches. Make sense? I assume that it does. Yeah. Okay. So, now is the moment of what? Of trying. So let me do that. I think it has to be in main; I don't know if it works if it's not in main. Let's find out. So, let's commit this: add reusable workflow. And let's go. We can just close AWS, we don't need AWS. Does it work? No, it has to be, yeah, this is what I assumed: it has to be in main. So we can actually create a pull request and merge it. So, I don't remember who asked, it was Peter I think, about the disabled button.
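The caller workflow described here might be sketched as follows. The file names, job names, and secret names are assumptions, and since the transcript does not identify the exact marketplace switch-case action used, the token selection is shown as a plain shell step that does the same thing:

```yaml
# Hypothetical caller workflow sketch
name: Call reusable workflow
on:
  workflow_dispatch:
    inputs:
      name:
        required: true
      environment:
        type: choice
        options: [stage, production]
jobs:
  get-global-secret-token:
    runs-on: ubuntu-latest
    outputs:
      token: ${{ steps.pick.outputs.token }}
    steps:
      - id: pick
        # switch-case logic: pick the right token for the chosen environment
        run: |
          if [ "${{ inputs.environment }}" = "production" ]; then
            echo "::set-output name=token::${{ secrets.PROD_TOKEN }}"
          else
            echo "::set-output name=token::${{ secrets.STAGE_TOKEN }}"
          fi
  call-local-workflow:
    needs: get-global-secret-token
    # a reusable workflow is called by file path, not by directory
    uses: ./.github/workflows/callme.iamreusable.yml
    with:
      name: ${{ inputs.name }}
      environment: ${{ inputs.environment }}
    secrets:
      super_secret_token: ${{ needs.get-global-secret-token.outputs.token }}
  print-message:
    needs: call-local-workflow
    runs-on: ubuntu-latest
    steps:
      - run: echo "${{ needs.call-local-workflow.outputs.formatted_message }}"
```

The point the transcript makes lives in the last job: the caller reads `needs.<job>.outputs.<name>` from the job that invoked the reusable workflow, even though that output is defined nowhere in this file.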
So this squash and merge: it's not green, so it looks disabled, but you can still squash and merge. So this is still running, but I can confirm squash and merge. The action is running because we did a push into a branch, but now, as you can see, as soon as I merge into main, I have this IAmReusable and I have CallReusableWorkflow. If I go to this CallReusableWorkflow, I can see here there is a Run Workflow button. I can specify, let's just use Sam. Sorry, Sam. Sam, and for stage. And let's use, I don't know, Peter, Anthony... Anthony, for production. So we've triggered two. The first one was for Sam. So this get global secret token is actually getting the token for stage because, remember, I selected stage for this first one. And so this is just running for that. Then, there are two jobs running in parallel. One is the formatter, one is the magic. So the formatter is actually taking the inputs and it's saying, Hello, Sam. You are in stage.

23. Efficient Workflow Management with Reusability

Short description:

This part demonstrates the power of reusable workflows and how they can be called from another workflow. By creating a single workflow and parametrizing it, you can deploy to different environments while maintaining only one workflow. This example shows how the output from a previous workflow can be used in a subsequent print message. It highlights the simplicity and power of this approach. Another example is given for the production environment, where a different message is printed.

And now, this print message, if you remember, is using the message received from the called workflow. And it's saying Hello, Sam, you're in stage. So, you can see, call local workflow is the one actually making the call to the reusable workflow, and the print message is just using the output from the previous one. So this is really powerful. I see some faces like, wow, yeah. This is very powerful because it means you can create one single workflow and then you can parametrize the callers: say, oh, I want to use this workflow to deploy to the dev environment, so I pass dev, the account ID for dev, the secret keys for dev from the caller, and I only maintain one workflow. Really powerful, really, really powerful. And this is a very simple example that I've used. But if we go back, we can see there was another one. And in this case, that'd be the Peter Anthony one for production, and the print message is just printing that one.

Scaling large codebases, especially monorepos, can be a nightmare on Continuous Integration (CI) systems. The current landscape of CI tools leans towards being machine-oriented, low-level, and demanding in terms of maintenance. What's worse, they're often disassociated from the developer's actual needs and workflow.Why is CI a stumbling block? Because current CI systems are jacks-of-all-trades, with no specific understanding of your codebase. They can't take advantage of the context they operate in to offer optimizations.In this talk, we'll explore the future of CI, designed specifically for large codebases and monorepos. Imagine a CI system that understands the structure of your workspace, dynamically parallelizes tasks across machines using historical data, and does all of this with a minimal, high-level configuration. Let's rethink CI, making it smarter, more efficient, and aligned with developer needs.