Automate React Site Deployments from GitHub to S3 & CloudFront


In this talk, I will demonstrate how to create a CI/CD pipeline for a React application in AWS. We'll pull the source code from GitHub and run tests against the application before deploying it to an S3 bucket for static site hosting. The site will then be distributed using CloudFront which will point to the S3 bucket. All of the infrastructure will be built using Terraform. In addition, I'll make use of Terragrunt to show how to create this setup for multiple environments.

33 min
01 Jul, 2021

Video Summary and Transcription

This talk focuses on automating React deployments to S3 and CloudFront using a CI/CD pipeline in AWS. It covers setting up the pipeline, sourcing code from GitHub, and configuring infrastructure with Terraform and Terragrunt. The talk also demonstrates the process of building and deploying a React application using AWS CodeBuild and CodePipeline. Overall, it provides a comprehensive overview of the tools and techniques involved in automating React deployments in AWS.

1. Introduction

Short description:

I'm going to be speaking to you about automating React deployments to S3 and CloudFront. The main motivation for doing this talk is that there is an increasing shift for development teams to optimize their release cycles for quality software in production.

Hi, everyone. Thanks a lot for tuning in to my talk at the DevOps.js Conference 2021. I'm going to be speaking to you about automating React deployments to S3 and CloudFront. The main motivation for doing this talk is that there is an increasing shift for development teams to optimize their release cycles for quality software in production, and CI/CD is the automated practice that helps software teams accomplish this. However, building pipelines manually for multiple environments can be tedious and time-consuming, and it doesn't scale very well, especially when it has to be done over and over again.

2. Setting up a CI/CD Pipeline in AWS

Short description:

In this talk, I will walk you through setting up a CI/CD pipeline for React applications in AWS using CodeBuild and CodePipeline. We will use Terraform and Terragrunt for the infrastructure setup. I will demonstrate multiple environment deployment streams and show the final output of three different sites from the same code base. My name is Lukonde Mwila, a senior software engineer at Entelect and an AWS Container Hero.

So, in this talk, I want to go through a detailed code walkthrough of how to set up a CI/CD pipeline for React applications in AWS, and the CI/CD tooling that I'm going to be using is AWS CodeBuild and CodePipeline. Now, the pipeline will pull the source code from a GitHub repository and run tests against that application in the CI stage before deploying it to the S3 bucket, which will be configured for static site hosting. In addition, the site will be distributed using CloudFront, pointing to the S3 bucket as its origin.

All of the infrastructure will be built using Terraform. In addition to that, I'm going to be making use of a tool called Terragrunt, which is a Terraform wrapper, and it really helps in terms of keeping your infrastructure code DRY (Don't Repeat Yourself). As you'll see, we won't have as much code in our code base, even for our infrastructure, which will be very helpful, and this is one of the very good practices when it comes to building out resources like this. In the end, I will demonstrate multiple environment deployment streams for you, as well as the final output of three different sites from the same code base. That is something that will be pretty cool to see once everything comes full circle.

Before we go any further, I'm going to start by introducing myself. I am Lukonde Mwila. A lot of people call me Luke. Some people do like to have a little bit of fun and go as far as calling me Skywalker. I'm a senior software engineer at Entelect. I'm also an AWS Container Hero and am five times AWS certified. I currently consult in the financial services sector, primarily working as a cloud and DevOps engineer. Previously, I was involved a lot in application development, both web and mobile, on SaaS products for startups, but I have since transitioned into the cloud and DevOps space.

3. Overview of Pipeline and Source Code

Short description:

I will show you the pipeline we're going to have inside AWS, using GitHub as the source stage and CodeBuild for running tests and deploying the React application to an S3 bucket configured for static site hosting. We'll also have CloudFront acting as our CDN. You can clone the automate React deployments to S3 and CloudFront repository from my GitHub profile to explore the source code. In this presentation, I'll focus on the main components and architecture of our infrastructure. Now, let's switch to VS Code and look at the folder structure and code.

Right, and so something that I find is very useful, especially when going on these types of journeys, because there's a lot of code that we'll be looking at, is having a good bird's-eye view of the project in a picture. And so I want to try and encapsulate that with this simple diagram over here and show you what we're actually working towards. I already elaborated on it earlier in the introduction, but this diagram can help summarize all of that, essentially.

We're going to have a pipeline inside AWS, inside the AWS cloud environment. We're going to be using GitHub as our source stage for the pipeline, and for the build and deploy stage we're going to use CodeBuild alone. CodeBuild will run all the necessary commands to run tests against our React application. Should the tests pass and we're happy with the quality of that software, then we deploy it, and by deploy I simply mean copy over the static assets built from the React application into an S3 bucket, which is configured for static site hosting. Then we're also going to have CloudFront, our CDN (Content Delivery Network), acting as our distribution point, with the S3 bucket as its origin. So this will give you a nice idea of the entire thing that we're going to be going through.

In addition to that, something that I think is very useful, especially when you have to look at a lot of code, is having the repository on your machine. So you can head over to my GitHub profile. If you head over to the DevOps.js conference website, you'll see that there is a link provided there to my GitHub profile. The repository is one of the pinned repositories right on the landing page over there, the one called automate React deployments to S3 and CloudFront. Go ahead and clone that repo. It'll be really useful if you can go through the source code by yourself, because in this presentation I will try to focus mostly on the main components of our entire infrastructure and the architecture of everything that we're going to be building out, so that I don't waste your time going through every little nitty-gritty detail, such as every variable; we won't have the time for that.

So I'm going to switch over now to VS Code, and let's get our hands dirty by looking at some actual code. All right, so you'll see over here in the left panel is the folder structure. I'm just going to zoom in one more time just to make sure that's clear. Inside the root, we've got the client application, and this is a basic React application; I will show you the tweak that I made to it. In addition to that, we've got infra live, and this folder is going to contain our Terragrunt configuration files. We'll get to the details of that a little bit later. Over here, we've got infra modules, and this is going to contain the actual modules for our Terraform source code. Modules are simply logical groupings, or containers (and I use the word container very loosely there, not to be confused with processes or Docker containers), for resources that are related in some way for a particular set of infrastructure that you want to deploy. And over here is the parent Terragrunt configuration file. So now you have a good idea of the landscape of the source code and what we're actually going to be working with.

4. Client Application and Environment Setup

Short description:

I'm going to open the client application, which is boilerplate code created using the create react app CLI tool. I'll draw your attention to the app.js and app.test.js files. In app.js, I've made changes to specify the environment, such as production, UAT, and development. This allows us to source different environments from different branches in our repository. In app.test.js, I've tweaked the test to ensure the app component renders correctly. If you're not cloning the repository, you can create a new repo in GitHub and generate an access token in the developer settings to give CodePipeline permissions.

And so I'm going to now open this client application, and this is boilerplate code; I used the Create React App CLI tool to create it. You will notice an additional file here called buildspec.yaml. Again, don't worry about that; we'll take a closer look at it later.

What I do want to draw your attention to at this point is the App.js file, as well as the App.test.js file. Inside our App.js file (let me just collapse that panel over there), the only change that I've really made is this paragraph over here, where I specify the environment and put the name of the environment, production. Now, obviously a better practice here would be to use environment variables, but the main thing I'm going for is to show you how each environment will be sourced from a different branch inside our repository. Right now I'm on the master branch, and the master branch has production in this code snippet over here. I have two other branches. One is uat, which is for User Acceptance Testing, and so there I replaced that value with UAT. I also have a development environment which is sourced from the develop branch, and as you can probably guess, I'm making use of develop over there. So those are the only changes. That will help you get a better understanding when you clone the repository and see the three different branches. When we eventually deploy our infrastructure and have our sites up and running, it'll give you a nice view of having multiple environments, and you can see the difference between all three of them.
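The tweak described above amounts to rendering an environment label. Here is a minimal sketch; the helper name is made up for illustration, and the repo hard-codes the string per branch rather than using a function:

```javascript
// Sketch (not verbatim from the repo): the only real change to App.js is a
// paragraph showing which environment the build came from. The talk
// hard-codes the value per branch; reading it from an environment variable
// would be the better practice.
function environmentLabel(env) {
  return `Environment: ${env}`;
}

console.log(environmentLabel("production")); // master branch
console.log(environmentLabel("uat"));        // uat branch
console.log(environmentLabel("develop"));    // develop branch
```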

Inside App.test.js, I really just tweaked the test. All that I'm doing is essentially testing that the component called App renders correctly. That way, we actually get to have the test running inside our CI stage. I'm going to close those two files.

So if I can take a step back, I'm assuming you've cloned the repository, but if you want to run through the entire cycle yourself, you can go ahead and create a new repo in GitHub. An important step is that you will need to head over to GitHub's developer settings and then to personal access tokens. Over here, you're going to generate an access token, and the reason for that is that you want to give CodePipeline the relevant permissions to access your GitHub repository. You want to make sure you save the token that you generate in a secure place.

5. Saving Personal Access Token in Secrets Manager

Short description:

To save the personal access token in secrets manager, go to the secrets manager service in AWS. Create a secret with the name 'GitHub personal access token' or any other key. Specify the value for the personal access token. This is an important preliminary step.

So I'm saving it in Secrets Manager, so you will have to head over to the Secrets Manager service inside AWS. At this point, I am assuming that you know you're working with an AWS account for any of this to work. When it comes to creating the secret, you'll go with the "Other type of secrets" option over here. I'm using a fake value over here, so don't get too excited; that is not my personal access token. You can go ahead and give it the name GitHub personal access token, or any kind of key, just as long as it's something that you can identify, because this is something we will need to specify in our Terraform source code a bit later. Then the value for your personal access token is going to be placed over here. So this is one of the preliminary steps that's very important.

6. Sourcing Code and Terraform Setup

Short description:

Because remember it would be different if our pipeline was making use of a repository inside of CodeCommit, which is an AWS service, but because we're going to an external or third party in this particular case, we wanna make sure we have a secure approach to actually sourcing our code from that repository. Right, so get your personal access token sorted and head over to Secrets Manager and store it inside of there. And the two main things that I wanna show you will be the CloudFront distribution and the S3 bucket itself. Like I mentioned earlier on, it would require a lot more time to take you through every single line of code in terms of variables, et cetera. So I'm gonna focus on the main resources that we wanna create over here.

Because remember, it would be different if our pipeline were making use of a repository inside CodeCommit, which is an AWS service; but because we're going to an external or third party in this particular case, we want to make sure we have a secure approach to sourcing our code from that repository. Right, so get your personal access token sorted, head over to Secrets Manager, and store it inside of there.

Right, so now I'm going to turn our attention to some Terraform source code. As you'll see over here, I've got two modules. One is the CI/CD pipeline and the other is the webapp, and we're going to start off with the webapp one. The webapp module actually contains all the source code for the S3 bucket, as well as the CloudFront distribution networking that we want to set up. I'm going to open my main.tf file over here, and this will give you a nice overview of what creating a module actually looks like. You set up the particular source, in terms of where all the code lives for this particular module, and I've got mine inside a folder called frontend. Over here, I'm just setting some variables for the environment, the bucket ACL, and the bucket name. We're going to dive deeper now and go inside this frontend folder.
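A module call like the one just described might look roughly like this; a hedged sketch with assumed paths and variable names, not the repo's exact code:

```hcl
# main.tf — wiring up the webapp module (paths and variable names assumed)
module "webapp" {
  # Where the module's code lives; a local folder in this case
  source = "./frontend"

  # Variables passed down into the module
  environment = var.environment
  bucket_acl  = var.bucket_acl
  bucket_name = var.bucket_name
}
```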

The two main things that I want to show you are the CloudFront distribution and the S3 bucket itself. Like I mentioned earlier on, it would require a lot more time to take you through every single line of code in terms of variables, et cetera, so I'm going to focus on the main resources that we want to create over here. As you can see, I've got a resource that I'm creating, which is the S3 bucket; you can go ahead and give it whatever name, and this specifies the type of resource, of course. Inside of here, I give my bucket a name and define the access control list, which is for the permissions of the bucket. In addition to that, I also have a bucket policy, and in here I get to define the rules for accessing the objects inside that bucket. Because I'm setting it up for static site hosting, I provide it with the HTML file that will be used as the index document. For the error document, I go ahead and use index.html again because, as you know, all the components are injected inside your root index.html file when it comes to a React application, so even your 404 page will essentially be a component that gets rendered over there. Versioning your bucket is optional, and it's just a good practice to have a tag for each one of your resources, so I set that up over there. And now I'm going to show you CloudFront.
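The bucket setup just described can be sketched as follows, assuming the v3-era AWS provider syntax (inline website and versioning blocks) that was current at the time of the talk, with made-up names and values:

```hcl
# Sketch of the static-site bucket (names and values assumed, not copied
# from the repo). A bucket policy resource would accompany this to define
# object-access rules.
resource "aws_s3_bucket" "website" {
  bucket = "my-react-site-${var.environment}"
  acl    = var.bucket_acl # the access control list for the bucket's permissions

  website {
    index_document = "index.html"
    # React injects everything into index.html, so even the 404 page is
    # just another component rendered from the same file.
    error_document = "index.html"
  }

  versioning {
    enabled = true # optional, but a useful safety net
  }

  tags = {
    Environment = var.environment
  }
}
```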

7. CloudFront Distribution and Front End Pipeline

Short description:

We set up the CloudFront distribution with the S3 bucket as the origin, caching, HTTPS redirection, and CloudFront's default certificates. In the front end pipeline, we create CodeBuild and CodePipeline. CodeBuild provides a container with a Node.js environment and allows for the configuration of environment variables. The code pipeline has two stages: source, which uses GitHub, and CI, which uses the CodeBuild project. The pipeline is relatively simple with only two stages.

For the CloudFront distribution, the first thing that is important over here is the origin property. We want to set up our origin for the CDN distribution, and I'm pointing it to the S3 bucket that I created, so this is a reference directly to that. You also provide it with an origin ID, which in my case is website. Then I go ahead and set up some caching. Now, obviously, this is something that's going to look very different depending on the kind of application that you're setting up, so you don't have to worry too much about this; it's primarily to give you an idea of how you would set it up, but the actual values are going to look different depending on your particular use case. I make sure that I set it up to redirect to HTTPS. As mentioned before, it's good practice to have some tags in there. I also make use of CloudFront's default certificate. Right, so those are the main resources for our actual web app.
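A hedged sketch of that distribution, with assumed identifiers; the cache values are placeholders, as noted above:

```hcl
# Sketch of the CloudFront distribution (v3-era provider syntax, names assumed).
resource "aws_cloudfront_distribution" "website" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    # The S3 website endpoint is the origin point for the CDN
    domain_name = aws_s3_bucket.website.website_endpoint
    origin_id   = "website"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only" # S3 website endpoints speak HTTP only
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    target_origin_id       = "website"
    viewer_protocol_policy = "redirect-to-https" # force HTTPS for viewers
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true # CloudFront's default certificate
  }

  tags = {
    Environment = var.environment
  }
}
```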

Let's take a look now at the front end pipeline. So, two main resources we want to create over here (just collapse this) would be CodeBuild and CodePipeline. CodePipeline is the service provided by AWS for actual CI/CD tooling, so think of it as the main component over here; inside of it is where you've got your stages, and you can make use of different services inside of there. I'm making use of CodeBuild as my CI stage, my continuous integration. What CodeBuild essentially does is create a container with the particular configuration that you provide it, in terms of the environment and the tools that you want to have installed inside of it. Mine is going to be making use of a Node.js environment, as you can imagine, because I'm using a JavaScript application. A lot of the information you provide is intuitive, such as the name of your CodeBuild project; you can specify the type of container, and you can provide a custom container if you like for the running of your commands. In addition to that, I provide it with some environment variables. Obviously, I do want to specify the environment variable for the particular application environment, so I provide that. I also specify an environment variable for the S3 bucket destination because, as you'll recall, when the React application is built (I'm going to use the npm build command), that will produce our static content, and I want to copy all of that content over to the S3 bucket. So this is a nice way of telling your CodeBuild project the destination where all that content should be pushed to.
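A hedged sketch of such a CodeBuild project in Terraform; the names, image, and environment-variable keys (ENVIRONMENT, S3_BUCKET_DESTINATION) are assumptions for illustration:

```hcl
# Sketch of the CodeBuild project (names assumed): a Node.js build container
# with environment variables for the app environment and the S3 destination.
resource "aws_codebuild_project" "frontend" {
  name         = "react-frontend-build-${var.environment}"
  service_role = var.codebuild_role_arn

  artifacts {
    type = "CODEPIPELINE" # artifacts are handed back to the pipeline
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/standard:4.0" # a managed image; a custom one works too
    type         = "LINUX_CONTAINER"

    environment_variable {
      name  = "ENVIRONMENT"
      value = var.environment
    }
    environment_variable {
      name  = "S3_BUCKET_DESTINATION"
      value = var.s3_bucket_destination # where the built static assets get copied
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = "client-application/buildspec.yaml" # path assumed
  }
}
```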

Then over here I have my CodePipeline, and inside of it you'll see that I have the two stages which I mentioned; I also get to specify the names for these particular stages. I've got my source, which, as you can see over here, is a third party; I'm using GitHub for that, and you provide some very important configuration values over here in terms of the repository name and the particular branch name. The branch name is obviously going to vary depending on the environment, and I'll show you how you can set that up in a cool way using Terragrunt. Also very important, as you can see over here, is the GitHub token; that value will be sourced from Secrets Manager, and we can use Terraform to get it in a secure way. Second, for the CI stage, or the build stage, I'm using the CodeBuild project that I just showed you; I reference it over here using the particular name of that CodeBuild project, and I just specify the order in which this comes in my particular pipeline. So this is not a very complex one; it is only two stages, so it should be relatively simple for you to set up.
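The two-stage pipeline just described might be declared roughly like this; role ARNs, bucket, and variable names are assumed, and the version-1 GitHub source action shown here is what was current at the time of the talk:

```hcl
# Sketch of the two-stage pipeline (values assumed). The GitHub token is
# passed in from Secrets Manager rather than hard-coded.
resource "aws_codepipeline" "frontend" {
  name     = "react-frontend-${var.environment}"
  role_arn = var.pipeline_role_arn

  artifact_store {
    location = var.pipeline_bucket_name # S3 bucket for pipeline artifacts
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty" # GitHub is a third-party provider
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source_output"]

      configuration = {
        Owner      = var.github_owner
        Repo       = var.repository_name
        Branch     = var.branch_name # varies per environment
        OAuthToken = var.github_token # sourced from Secrets Manager
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]

      configuration = {
        ProjectName = var.codebuild_project_name
      }
    }
  }
}
```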

8. Terraform Configuration and Deployment

Short description:

In the main.tf file, I use the data API for Terraform to pull in existing sources securely. I populate variables for additional resources, such as application name, S3 bucket destination, pipeline bucket name, and code build bucket name. The infra live folder contains the CICD pipeline and web app modules. The parent Terragon config file specifies the cloud provider, region, profile, and versions. Remote states and state locking are used to keep track of resources, with the Terraform state file stored in S3 and state locking done with DynamoDB. The child Terragon config file for the CICD pipeline specifies the branch name and environment. TerraGrant simplifies Terraform CLI steps and points to the module for resource deployment.

You definitely get more complex ones. And so now, if I show you the main.tf file which contains the module for my front end pipeline, you'll see over here that I'm making use of Terraform's data sources, and this is a really neat way of pulling in some existing resources. In here, I'm pulling the secret that I created for GitHub, and this is a really neat way of actually pulling that in and injecting it into some of the resources that you're creating; it keeps things secure, and none of that sensitive information gets leaked in any way.
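Pulling the token in with Terraform data sources, as described, can be sketched like this; the secret name is assumed and must match whatever key you chose in Secrets Manager:

```hcl
# Sketch: reading the GitHub token from Secrets Manager so it never
# appears in the Terraform source or in committed variable files.
data "aws_secretsmanager_secret" "github_token" {
  name = "github-personal-access-token" # assumed secret name
}

data "aws_secretsmanager_secret_version" "github_token" {
  secret_id = data.aws_secretsmanager_secret.github_token.id
}

# Later, in the pipeline's GitHub source configuration, something like:
#   OAuthToken = data.aws_secretsmanager_secret_version.github_token.secret_string
```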

Over here, similar to the module that I showed you previously, because there are some additional resources that I'm setting up, there are obviously more properties to be populated, and all of these are just variables. You'll see over here my application name, which I'm using across the project, the S3 bucket destination, the pipeline bucket name, and the CodeBuild bucket name. In case you're wondering what those last two are for, they are not related to the S3 bucket for static site hosting; they primarily serve caching purposes, so that when dependencies are installed while our pipeline is running, that cache will make things a lot faster on every successive run of the pipeline.

Okay, and lastly, we come to infra live. Over here, you'll see that I've split the folders into the respective environments, and inside each one we have a CI/CD pipeline folder and a web app folder, just reflecting the modules that we actually have. I'm also going to open my Terragrunt config files: the parent file and one of the child files. The parent file is used to generate a provider, and the provider is what we'll make use of to specify the cloud provider we want to use to build out our infrastructure. I specify the region that I want my resources to be built in and the profile that I'm going to be using for that. In addition, you get to specify some version information: in this particular case, what version of the AWS provider I want to be making use of, and what version of Terraform I'm also going to be making use of. And over here, this last block is very important: remote state. State is what is used to keep track of all the resources that have been created, that are alive, or that have been deployed, and so there has to be an optimal way of keeping track of those resources. You can have that set up locally or remotely. A good practice is to have it set up remotely, because if it's local and something happens to that local file, then you don't have a way of keeping track of what's been built out. So I'm making use of S3 to store the Terraform state file, as you can see over here, and that Terraform state file is essentially a JSON file that contains all the resources that have been built out. In addition, I'm making use of state locking, and my state locking is being done using a DynamoDB table. This is very useful when you're working in a team, because state locking provides a protective approach to having several people working on the same state.
When one person is deploying something, or is in the middle of some kind of execution, the state will be locked from anyone else acting on it, which is a great protective measure when it comes to working in a team. So, sorry about that background noise. Now I'm going to show you a child Terragrunt config file over here, and this one is for the CI/CD pipeline; the one for the web app, or the website, will look very, very similar. I'm just going to collapse this over here. Inside of it, for the pipeline, I specify the branch name and the environment. This is a nice way of keeping things DRY, like I mentioned: what Terragrunt essentially offers you is all the CLI steps that you would have in Terraform, but it makes things more streamlined for you with these configuration files; it runs all of those commands under the hood, and it's essentially a wrapper. Something I do want to draw your attention to over here is this terraform block, which points to the particular module that you're going to be deploying resources from. In addition to that, you can have some common variables, and it will be able to source them from parent folders.
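The parent and child Terragrunt files described in this section can be sketched roughly as follows; the bucket names, region, paths, and inputs are assumed for illustration, not taken from the repo:

```hcl
# Parent terragrunt.hcl (repo root) — remote state in S3 with DynamoDB locking
remote_state {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state-bucket"  # assumed name
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"                  # assumed region
    encrypt        = true
    dynamodb_table = "terraform-state-lock"       # assumed table name
  }
}

# Child terragrunt.hcl (e.g. for the dev CI/CD pipeline folder)
include {
  path = find_in_parent_folders() # pulls in the parent config above
}

terraform {
  # Points at the module the resources are deployed from (path assumed)
  source = "../../../infra-modules//cicd-pipeline"
}

inputs = {
  environment = "dev"
  branch_name = "develop"
}
```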

9. Overview of Pipeline and AWS

Short description:

I have a sensitive.tfvars file that contains variables with sensitive values. The configuration file for the website is similar, with varying inputs. I'll show you the pipeline in AWS, focusing on the dev environment. The resources were created without issues. The source is GitHub, with the develop branch for the development environment. The build stage uses CodeBuild, which logs the commands and steps. After running the tests and building, the static content is uploaded to an S3 bucket. The CloudFront distributions are created using Terraform and sourced to the relevant origins. The final output includes multiple environments for the React application. The buildspec.yaml file is crucial for the React application's deployment steps.

So I do have a file called sensitive.tfvars, and that's right, I did not commit it to my repository, for good reason. The sensitive.tfvars file is something that you don't want to have in your repo; it essentially contains variables with very sensitive values that have been set. So that's something that you wouldn't commit, and you just need to be mindful of that when you share a project with other teammates.

The configuration file for the website itself is not going to look too different. The inputs will vary because, as you'll recall, those modules have different properties that need to get set. Apart from the variables that you can store inside the respective module folders, you can set them in this inputs block over here, which is what I'm doing. And that's just a quick overview of our modules as well as our Terragrunt config files.

I'm now going to turn your attention over to AWS and just give you a quick view of our pipeline. I'm just going to show you the dev one, though I have built out all three. As you can see, the resources were created without running into any particular issues. You can see the source over there is GitHub, which is what we expect, and if you were to click on that, you'd see that the branch is develop, which is what we want for the development environment. The build stage over here is making use of CodeBuild, and you can click on that to get into the details; it'll actually show you all the logs as it runs through the different commands and steps. And this ran great; I'm glad it didn't take too much time. I'm just going to scroll down. You can see the installation of dependencies happening over there, and then this is where it ran the tests for the React application. Then I go ahead and build it, and I upload my static content to an S3 bucket, as you can see over there. When this runs successfully, along with using Terraform to build out my CloudFront distributions, they're sourced to the relevant origins, which you can see over here are the S3 bucket URLs for my static site hosting. You can click on any one of these distributions and click on the distribution settings to get the relevant domain name, or you can get it right over here. And so, excuse me, I'm going to show you the final output: for the development environment, as you can see, environment dev; and if I were to open this one, environment uat; and environment production. So we get our desired output in the end, in terms of having those multiple environments for our React application, and we've got a streamlined process for deploying these resources.

The last thing that I want to show you, which is the most important step when it comes to the React application side, is the buildspec.yaml file. This configuration file is what is used to actually tell CodeBuild to run through the relevant steps.

10. Building and Deploying React Application

Short description:

And it's a nice way of actually customizing the container that is going to be running. In my case, I use Node.js version 12. You'll see there are not a lot of steps involved over here; I'm just echoing a couple of things out to say what I'm doing. I go ahead and install all the relevant dependencies for my React application, run the React application tests using npm test, and then run npm run build. Lastly, I use the AWS CLI to copy all the contents of the build to the destination S3 bucket. This is a cool setup using Terraform and Terragrunt to streamline deployments for different environments.

And it's a nice way of actually customizing the container that is going to be running; you get to specify the runtime version for that container, and in my case I use Node.js version 12. You'll see there are not a lot of steps involved over here, and I'm just echoing a couple of things out to say what I'm doing. Because this repository itself is the one being used for the pipeline, I cd into the client application directory, and then I go ahead and install all the relevant dependencies for my React application. Then I echo out the fact that I am executing the build stage for that particular environment. I go ahead and run the React application tests using this command over here, npm test, and I set CI to true. You can optionally add the date for that particular run. Then you can just run npm run build; if you're familiar with React applications, then you'll know some of these commands, and the nice thing is you're essentially taking what you would have locally into the CI stage over here in the cloud. Lastly, as I mentioned, I copy all the contents of the build to the destination S3 bucket. You'll recall that in CodeBuild I supplied an environment variable for the S3 bucket destination, and so that's what I use: the AWS CLI recursively copies all of the contents inside the build folder of my React application to the S3 bucket destination. So this is a really cool setup, using Terraform to build out all of your infrastructure in the AWS cloud landscape with the relevant CI/CD tooling to streamline deployments for the respective environments, making use of both Terraform and Terragrunt for a DRY approach to your development. Otherwise, you would have to run the same CLI commands yourself over and over; with Terragrunt, they're essentially shifted over to being run under the hood.
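Putting those steps together, a buildspec along these lines would do it. The phase contents are reconstructed from the narration rather than copied from the repo, and the folder and environment-variable names are assumptions:

```yaml
# Hedged reconstruction of the buildspec.yaml described above (not verbatim).
# In version 0.2, commands share one shell instance, so the cd persists
# across phases.
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12 # runtime for the build container
    commands:
      - echo Installing dependencies...
      - cd client-application   # folder name assumed
      - npm install
  build:
    commands:
      - echo Executing build stage for $ENVIRONMENT...
      - CI=true npm test        # run the React tests in CI mode
      - npm run build           # produce the static assets in build/
  post_build:
    commands:
      - echo Copying build output to S3...
      # S3_BUCKET_DESTINATION is the environment variable supplied by CodeBuild
      - aws s3 cp build "s3://$S3_BUCKET_DESTINATION" --recursive
```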

QnA

Conclusion and Q&A

Short description:

And that is my talk in a nutshell. It's a lot, but I hope it was very helpful to you and gives you a really good perspective on how you would go about doing something like this. Please go ahead and clone the repository and have a look at it. If you have any issues, feel free to log them on that particular repo and I'll be more than happy to help you. Let's actually have a look at the poll question. So you asked the audience, what is your go-to continuous integration tool? And you asked CircleCI, CodeBuild, Jenkins, TravisCI or Other? Can't say that I'm too surprised about the results for CircleCI and Jenkins. I know that those are definitely popular ones and they are excellent tools. I am shocked that TravisCI got a zero. I've worked with TravisCI and I think it's really good as well. Yeah, there's a lot of CI tools out there. But please do. Everybody who has answered this, go to the DevOps talk Q&A and feel free to tell us what this Other thing is because we would really like to see.

And that is my talk in a nutshell. It's a lot, but I hope it was very helpful to you and gives you a really good perspective on how you would go about doing something like this. Please go ahead and clone the repository and have a look at it. If you have any issues, feel free to log them on that particular repo and I'll be more than happy to help you.

Let's actually have a look at the poll question. So you asked the audience, what is your go-to continuous integration tool? And you asked CircleCI, CodeBuild, Jenkins, TravisCI or Other? What are your thoughts on the selection here? What is that Other? Yeah, it'd be really interesting to know what the Other is here. Can't say that I'm too surprised about the results for CircleCI and Jenkins. I know that those are definitely popular ones and they are excellent tools. I am shocked that TravisCI got a zero. I've worked with TravisCI and I think it's really good as well. Very intuitive. Yeah, I know some folks from TravisCI, so yeah. Yeah, I'd be very keen to know what takes up that Other as well. Okay, okay. To be honest, there's a lot of CI tools out there. And I'm sure people build their own as well. That is also a thing. Yeah, yeah. But please do. Everybody who has answered this, go to the DevOps talk Q&A and feel free to tell us what this Other thing is because we would really like to see. I see some people mentioning GitHub Actions. GitHub Actions comes up a lot. GitLab pipelines are being mentioned as well. So yeah, thank you. Definitely GitLab is a very popular one. I see somebody saying even manual. That's also a way to do continuous integration, if you don't respect your time.

Terragrunt and AWS Code Deploy

Short description:

Terragrunt is a wrapper for Terraform that adds value when working with multi-environment setups and versioning Terraform modules. It amalgamates CLI steps into a streamlined functionality, reducing complexity. Terragrunt can handle Terraform state files and generate backend and provider files, simplifying the process. If using Terraform Enterprise, Terragrunt may not be necessary, but it is a great add-on for Terraform Community Edition. A question from Dennis asks if it's possible to set up AWS Code Deploy to wait for infrastructure setup before deploying the React application.

But actually let me jump into the questions we got from the audience because there were a few questions coming in from people. Question from Will. What does Terragrunt really add on top of Terraform? And just to make sense here, Terragrunt is a wrapper for Terraform. It does add some additional functionality, but what exactly? Sure. So the real value in Terragrunt is when you're trying to work towards having a multi-environment sort of setup, or you want to do things like versioning your Terraform modules. With smaller projects, it's very hard to see the value of Terragrunt. You might feel like, okay, this actually just adds a layer of complexity that I can do without. But what Terragrunt actually does is it tries to essentially amalgamate all the CLI steps that you would otherwise be running with Terraform. And so it packages all of those into a functionality whereby you can specify for those functions to be run on your behalf. So it streamlines things a lot. And I mean, you deal with very few Terragrunt configuration files, so it doesn't beef up your code. If anything, that's what it's solving. It makes things more DRY. And does Terragrunt also handle your Terraform state files and the location of those things? Yes, you can use Terragrunt to generate your backend files, your provider files. So it's doing a lot of heavy lifting for you as well. Yeah, okay. That's a fair point. A lot of these tools we use, we use them to free up our own time to do something better. Excellent. Okay, so. Sorry, Darko. If I could just quickly add to that, if you were to use Terraform Enterprise, then you probably wouldn't need something like Terragrunt, but I know a lot of people use Terraform Community Edition. So if you're using Terraform Community Edition, then Terragrunt is a great add-on to that. Fair point, fair point. Excellent, excellent. Yeah, so if you're using Terraform Enterprise, it's a different story. Okay, excellent.
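As a rough illustration of the "heavy lifting" mentioned above, a `terragrunt.hcl` can generate the backend and provider files for each environment. This is a generic sketch, not the configuration from the talk's repository: the bucket name, region, and file paths are placeholders.

```hcl
# terragrunt.hcl — Terragrunt generates backend.tf and provider.tf per environment,
# so you never hand-write them for dev/staging/prod.
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket = "my-terraform-state"   # placeholder state bucket
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"            # placeholder region
  }
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region = "us-east-1"
}
EOF
}
```

Running `terragrunt apply` in an environment directory that includes this configuration writes the generated files and then shifts the usual `terraform init`/`apply` steps under the hood, which is the DRY workflow described in the talk.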
So there's a couple more questions here. A question from Dennis says, you are setting up both the infrastructure and the code at the same time. Is it possible to set up AWS Code Deploy to wait for the infrastructure to finish setting up, then automatically deploy the React application? Alright, I see your point.

Pipeline Wait for Code Push

Short description:

If you want your pipeline to wait for the code to be pushed for your specific application before deploying, you can push your code first and ensure that your build spec file is already inside your React application. This way, when the pipeline pulls the source, your CI stage will have the configuration file to run the relevant commands and prevent pipeline failures. Thank you very much for joining us. We appreciate the talk and look forward to seeing you again soon.

I haven't tried that. Okay. So if I understand your question over here, it's that you want your pipeline to essentially wait for the code to be pushed for your specific application and then only run after that? Yep, it seems like that, yep. They want to deploy the infrastructure first, then deploy the application. Right. I think you can get around that if you use some kind of automated step, or, thinking of it logically, push your code first essentially, because that's going to be the source for the pipeline. So it just means that you'll need to make sure that your build spec file is already inside of your React application, in the case that you're using a React application like this. Make sure the build spec file already exists over there so that when the pipeline does pull that particular source, your CI stage will have that configuration file to run through the relevant commands as well. And that way your pipeline won't fail.

Okay, okay. Well, Lukandre, thank you very much. I appreciate your time and for you joining us here at the end. Thank you very much. I appreciate the talk. I'm a fan of infrastructure as code, I'm a fan of all things ZBS, so thank you very much. And we will be seeing you sometime soon. Cool. Okay, thanks. Bye. Bye-bye.

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career

React Advanced Conference 2021
19 min
Automating All the Code & Testing Things with GitHub Actions
Top Content
Code tasks like linting and testing are critical pieces of a developer’s workflow that help keep us sane, like preventing syntax or style issues and hardening our core business logic. We’ll talk about how we can use GitHub Actions to automate these tasks and help keep our projects running smoothly.
DevOps.js Conf 2022
33 min
Fine-tuning DevOps for People over Perfection
Demand for DevOps has increased in recent years as more organizations adopt cloud native technologies. Complexity has also increased, and a "zero to hero" mentality leaves many people chasing perfection and FOMO. This session focuses instead on why maybe we shouldn't adopt a technology practice, and how sometimes teams can achieve the same results by prioritizing people over ops automation & controls. Let's look at fine-tuning the right amount of everything as code, pull requests, DevSecOps, monitoring and more to prioritize developer well-being over optimization perfection. It can be a valid decision to deploy less and sleep better. And finally we'll examine how manual practice and discipline can be the key to superb products and experiences.
DevOps.js Conf 2022
27 min
Why is CI so Damn Slow?
We've all asked ourselves this while waiting an eternity for our CI job to finish. Slow CI not only wrecks developer productivity by breaking our focus, it costs money in cloud computing fees and wastes enormous amounts of electricity. Let’s take a dive into why this is the case and how we can solve it with better, faster tools.
DevOps.js Conf 2022
31 min
The Zen of Yarn
In the past years Yarn took a spot as one of the most common tools used to develop JavaScript projects, in no small part thanks to an opinionated set of guiding principles. But what are they? How do they apply to Yarn in practice? And just as important: how do they benefit you and your projects?
In this talk we won't dive into benchmarks or feature sets: instead, you'll learn how we approach Yarn’s development, how we explore new paths, how we keep our codebase healthy, and generally why we think Yarn will remain firmly set in our ecosystem for the years to come.
DevOps.js Conf 2024
25 min
Atomic Deployment for JS Hipsters
Deploying an app is anything but an easy process. You will encounter a lot of glitches and pain points to solve to have it working properly. The worst part is: now that you can deploy your app to production, how can you also deploy all branches in the project to get access to live previews? And be able to do a fast revert on demand? Fortunately, the classic DevOps toolkit has all you need to achieve it without compromising your mental health. By expertly mixing Git, Unix tools, and API calls, and orchestrating all of them with JavaScript, you'll master the secret of safe atomic deployments. No more need to rely on commercial services: become the perfect tool master and netlifize your app right at home!

Workshops on related topic

DevOps.js Conf 2022
152 min
MERN Stack Application Deployment in Kubernetes
Workshop
Deploying and managing JavaScript applications in Kubernetes can get tricky. Especially when a database also has to be part of the deployment. MongoDB Atlas has made developers' lives much easier, however, how do you take a SaaS product and integrate it with your existing Kubernetes cluster? This is where the MongoDB Atlas Operator comes into play. In this workshop, the attendees will learn about how to create a MERN (MongoDB, Express, React, Node.js) application locally, and how to deploy everything into a Kubernetes cluster with the Atlas Operator.
React Summit 2023
88 min
Deploying React Native Apps in the Cloud
WorkshopFree
Deploying React Native apps manually on a local machine can be complex. The differences between Android and iOS require developers to use specific tools and processes for each platform, including hardware requirements for iOS. Manual deployments also make it difficult to manage signing credentials, environment configurations, track releases, and to collaborate as a team.
Appflow is the cloud mobile DevOps platform built by Ionic. Using a service like Appflow to build React Native apps not only provides access to powerful computing resources, it can simplify the deployment process by providing a centralized environment for managing and distributing your app to multiple platforms. This can save time and resources, enable collaboration, as well as improve the overall reliability and scalability of an app.
In this workshop, you’ll deploy a React Native application for delivery to Android and iOS test devices using Appflow. You’ll also learn the steps for publishing to Google Play and Apple App Stores. No previous experience with deploying native applications is required, and you’ll come away with a deeper understanding of the mobile deployment process and best practices for how to use a cloud mobile DevOps platform to ship quickly at scale.
DevOps.js Conf 2022
13 min
Azure Static Web Apps (SWA) with Azure DevOps
WorkshopFree
Azure Static Web Apps were launched earlier in 2021, and out of the box, they could integrate your existing repository and deploy your Static Web App from Azure DevOps. This workshop demonstrates how to publish an Azure Static Web App with Azure DevOps.
Node Congress 2021
245 min
Building Serverless Applications on AWS with TypeScript
Workshop
This workshop teaches you the basics of serverless application development with TypeScript. We'll start with a simple Lambda function, set up the project and the infrastructure-as-a-code (AWS CDK), and learn how to organize, test, and debug a more complex serverless application.
Table of contents:
- How to set up a serverless project with TypeScript and CDK
- How to write a testable Lambda function with hexagonal architecture
- How to connect a function to a DynamoDB table
- How to create a serverless API
- How to debug and test a serverless function
- How to organize and grow a serverless application


Materials referred to in the workshop:
https://excalidraw.com/#room=57b84e0df9bdb7ea5675,HYgVepLIpfxrK4EQNclQ9w
DynamoDB guide by Alex DeBrie: https://www.dynamodbguide.com/
Excellent book on DynamoDB: https://www.dynamodbbook.com/
https://slobodan.me/workshops/nodecongress/prerequisites.html