Infrastructure as Code with a Node Focus


This talk explores the power of tech infrastructure as code, allowing organizations to quickly, reliably and reproducibly build up, scale, and tear down real-world infrastructure as needed — with a focus on NodeJS stacks.

36 min
24 Jun, 2021

Video Summary and Transcription

Today's Talk discussed infrastructure as code using serverless Node.js with a focus on AWS Lambda and Terraform. The speaker emphasized the benefits of infrastructure as code, such as collaboration, versioning, and reproducibility. The Talk provided a step-by-step demonstration of deploying a Node.js app to AWS Lambda using Terraform. Key takeaways include the advantages of mechanized and automated processes, ephemeral state, repeatable processes, and transparency. The speaker also mentioned the importance of having DevOps experts on the team and highlighted the cost-effectiveness of serverless functions.


1. Introduction to Infrastructure as Code

Short description:

Today we're discussing infrastructure as code with a focus on serverless Node.js. Infrastructure encompasses all tech components, from servers and databases to gateways and CDN nodes. We'll use a lambda function as an example, exploring the need for a proxy or API gateway to ensure secure access. Disk read and write operations are also important considerations.

Hey, everyone. It's great to see you here at Node Congress. My name is Tejas. That's me. I work for G2i, which stands for Good News to the Internet. What we really do is help JavaScript and TypeScript engineers find good work, specializing in React, React Native, and Node. But we also help companies find JavaScript or TypeScript engineers specializing in React, React Native, and Node. So it's kind of like Tinder, but for JavaScript.

Anyway, that's not why we're here. We are here today to talk about infrastructure as code with a focus on serverless Node.js. And you know, this is a hot topic. There's a bunch of talks coming up about this: tomorrow Slobodan is talking about Lambdas in the serverless architecture, and Colin is talking about the AWS CDK. Two talks I highly recommend. I think they're going to be amazing. I'm going to be looking out for them. And I'd encourage you, if you like this talk, stick around for those.

But let's get started on this talk by defining the term infrastructure. What do we mean by infrastructure? Because it means different things to different people. What I mean for this talk is all of your tech stuff: your servers, your disks, your databases, your API routers, your gateways, your load balancers, through to your CDN nodes, which probably serve your static front ends. So, everything. And let's maybe use an example. Say you have a lambda function. It's just a function that does something: takes an input, gives you an output. And at some point, somebody is going to want to interact with this function, so they'll probably want to connect in some way. But allowing access directly to a function probably isn't the safest idea. So you might need some type of proxy or API gateway in front of it. The user talks to that thing, and that thing then talks to a load balancer or to your function. And it's a bit safer. And then maybe your function needs to read or write to a disk.

2. Introduction to Infrastructure as Code (continued)

Short description:

But the API gateway shouldn't talk to the disk, because the user shouldn't talk directly to the disk. Traditionally, managing infrastructure was done through a graphical user interface or command line interface, which can be daunting and lacks collaboration and versioning capabilities. Infrastructure as code offers a better way, allowing you to describe your infrastructure in a text file that can be versioned and deployed as snapshots.

But the API gateway shouldn't talk to the disk, because the user shouldn't talk directly to the disk. So your architecture, your infrastructure can get a little bit complicated. That's just kind of normal. Things that are alive usually grow.

And traditionally, this stuff was managed by a cloud person or a DevOps person, or maybe many people. Maybe a team, even. But they would largely use a graphical user interface on the web for AWS or Azure or Google Cloud or Heroku or whatever. Or maybe a command line interface. Either way, it's usually a largely manual process. And if we're being honest, this graphical user interface can be a bit scary sometimes.

And so, more than that, I believe the traditional way just has a few weak points. For example, it's not collaborative. If someone on the DevOps team changes the disk size from 8 gigs to 16 gigs on an EC2 instance, I don't know that it happened unless I ask them. Sure, with enough hacking around and creating audit logs, maybe you could track it, but by nature, kind of out of the box, it isn't geared towards collaboration.

It can't be versioned. How cool would it be if I could have an infrastructure with a bunch of components and load balancers and things, check that into Git, and then time travel through it. I could even revert or commit a new version. You can't do that traditionally. It's largely manual. As we just saw, people usually go in and change components, add new components, or remove components by hand.

It's ultimately opaque. I've been a part of teams where I had no idea what the infrastructure looked like. I just didn't know. I pushed something and then it's deployed, but how? No idea. I feel by knowing this, we might facilitate higher-velocity software and so I believe there is a better way. I believe there's not just a better way, there is a sexy way. And that is infrastructure as code.

So you might be asking, how is this any sexier than traditional? Well, first, it's versioned. Imagine a text file that describes everything you have, from your load balancers to your databases, to everything. It's a text file, so it can be checked into Git and versioned. Secondly, when you deploy things, it creates a snapshot of the state of your infrastructure, so if you want to change your disk from 8 gigs to 16, it can intelligently diff, find what changed, and update only that component.

3. Introduction to Infrastructure as Code (Demo)

Short description:

Infrastructure as code is like React virtual DOM for your cloud infrastructure. It's reproducible, transparent, and acts as a blueprint. There are various tools available, including Chef, Puppet, SaltStack, Ansible, and Terraform. Personally, I prefer Terraform from HashiCorp due to its versatility and idiomatic nature. Let's switch to a demo with a browser, code editor, and documentation.

It's like React's virtual DOM, but for your cloud infrastructure. It's incredible. It's reproducible, meaning you can tear it down, create a new AWS account, and deploy all your infrastructure to that. And lastly, it's transparent. Everybody can see it. It's literally like a giant blueprint of all your infrastructure.

So, if there's an early-in-career developer on your team that wants to know where everything is, you can show them. This is the blueprint. And so, it's really cool. And it's really interesting. I can already see some of you on this Zoom call going, okay, but how? I want this. How do I get this? The answer is, we have tools. There's tools. There's Chef. There's Puppet. There's SaltStack. There's Ansible. There's Terraform. There's a bunch of them. And they're all good in their own way.

For me personally, I love Terraform from HashiCorp. I find it the most idiomatic. And it also works. It's very versatile. It can work pretty much everywhere. But instead of talking more about it, let me just show you. Let's switch to a demo. So, we have on the left-hand side, we have a browser. On the right-hand side, we have a code editor. And we have some docs here that we'll talk about in a second.

4. Deploying the NodeJS App to AWS Lambda

Short description:

We have a NodeJS app written in TypeScript that creates a puppeteer instance, a headless Chrome. It visits a page, takes a screenshot, and returns it. We need to deploy it to AWS Lambda by building a zip file. We'll write infrastructure code using HCL and Terraform, specifying AWS as the provider.

But what are we deploying? Well, we have a NodeJS app written in TypeScript that's quite basic. It creates a puppeteer instance. Puppeteer, for those who don't know, is just headless Chrome: Chrome without a UI. It visits a page (goes to a URL given to the function), takes a screenshot, and then returns the screenshot. Pretty much. This is wrapped in an AWS Lambda handler that literally just calls the function with a URL from the query params and returns the result. Just a screenshot. Nothing special.

But, of course, it's not deployed. It's just a thing. So, how do we deploy this? Well, for starters, let's go into the project and build it. This will give us a zip file that we can then deploy to AWS Lambda. It's going to create a nice little zip file for us. And in a couple seconds, there we go. We have a bundle.

Let's create a file called main.tf. That's where we're going to camp out for most of this. Let's just get started writing it. We're writing a language called HCL. That's how you describe all your infrastructure. You start off with a block called terraform, and you mention what providers you want. I just want AWS, and its source is hashicorp/aws. You might be thinking, what in the world is that? We'll get to that in a second. Also, I'll just write one setting for this provider: I'm going to be in the eu-central-1 region, because I'm in Berlin.
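Here's a minimal sketch of what those opening blocks look like; the attribute names follow the AWS provider's documentation, and leaving out a version constraint is my simplification, not something from the talk:

```hcl
# main.tf — declare the providers this configuration needs,
# then configure the AWS provider itself.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-central-1" # the speaker is in Berlin
}
```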

5. Working with Providers and Lambda Function

Short description:

HashiCorp, the company behind Terraform, has a registry similar to NPM. This registry provides modules that allow you to interact with various cloud providers, including AWS. You can declaratively describe your infrastructure using these modules. For our Lambda function, we'll search the docs and use a basic example. We'll make some changes to the values, such as the file name, function name, and IAM role. The handler will be set to index.handler.

I'm going to save this. Now, this provider, what is that? Well, HashiCorp, the company that makes Terraform, has a registry. This is literally like an NPM registry. And so I'm saying, hey, go to that registry and get me this module. It's like a node module. And this module allows you to talk to AWS. Now, these providers literally can talk to any back end, Azure, Google Cloud, or whatever. They connect Terraform to some cloud provider's API. And allow you to declaratively describe your infrastructure.

So we're working with Lambda. So let's search the docs for Lambda function. That's what I want. And this is really just copy, paste-driven development. So I'm just going to copy this basic example here. Whoa. I struggle with trackpads. Okay. So I'm just going to copy this whole thing and paste it here. This is a rule that allows us to work in AWS with Lambda. So I'm just going to leave that as it is. And I'm going to change some of these values here. So my file name from my deployment bundle is bundle.zip as you can see. Function name I don't care about. And this role is accessing it from here. So you can see that the resource name is AWS IAM role. And the resource name that I give it, or the ID, is IAM4Lambda. So it's awsiamrole.iam4lambda.arn, the roles are in identifier. The handler is index.handler, because handler is the exported const. So index.handler. And, yeah, we don't need this.

6. Configuring Lambda and API Gateway with Terraform

Short description:

We need to add memory and a longer timeout to our Lambda function. Additionally, we require a layer for the version of Chrome we're using. By copying the ARN for this layer, we can import it without shipping all the code and node modules. With Terraform, we can make our Lambda function and API gateway exist. Terraform will pull down the necessary dependencies and create the infrastructure. The API gateway is essential for secure communication with the Lambda function. Terraform apply will create a plan and generate the required resources.

But what we do need... we don't even need the environment. What we do need is a bunch of memory. Let's add that. Let's give it a longer timeout. And lastly, we need a layer. In Lambda, layers are like NPM modules, and there's one for the version of Chrome we're using. So I'm just going to copy the ARN for this layer, and that's going to allow us to import it without actually shipping all that code, all those node modules, with us.
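Sketching those additions onto the function resource (the exact memory and timeout values are my assumptions; the layer ARN is whatever the headless-Chrome layer's docs give you):

```hcl
resource "aws_lambda_function" "test_lambda" {
  # ...the attributes from before, plus:
  memory_size = 1024                   # a bunch of memory; exact figure assumed
  timeout     = 30                     # a longer timeout, in seconds; assumed
  layers      = ["arn:aws:lambda:..."] # ARN copied from the Chrome layer's docs
}
```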

Okay, so we have a Lambda. Let's just try to make this exist with Terraform. So I'll do terraform init, and this is like npm install, basically. It's going to go to the registry and pull down that provider, and once it does, we can make real AWS infrastructure happen. While it's downloading, remember from the presentation that it's not enough to just have someone talk directly to a Lambda. We need an API gateway. So let's quickly map that out. If you know AWS API Gateway, there are some concepts here. I don't have time to go into them, but they're all in the docs, I promise.

That's great. Let's make it exist. We'll do terraform apply, and that's going to actually take our declarative file here and map it to real AWS stuff. It's going to create one role and one Lambda. So what it's done here is it's created a plan. This plan is going to create stuff. It's going to make a role. You can tell by the plus signs, and some values will only be known after the resources exist.

7. Creating API and Required Components

Short description:

We're going to make the infrastructure exist and create the necessary components for our API. We'll start with the API resource and give it a name. Then we'll move on to the other required components, such as the resource, method, integration, and deployments. Finally, we'll fix the issue of not being able to call the API by setting up the necessary configuration.

And it's going to make a function. You know, this looks good to me, so I'm going to type yes; anything other than yes will be rejected. And it's going to go off and make the infrastructure exist. Very, very exciting. And you can see it's creating here.

So anyway, back to planning. We need an API. We need a resource. We need a method. We need an integration. And then we need root methods and root integrations. Yes, it's a bit complicated to play with APIs. And then we need deployments. We need an invoke permission so the API can invoke the Lambda. These are the components we need. I may speed through them, but they're in the docs. It's just copy-paste.

So we can tell it created our stuff. We just can't call it yet because we don't have an API. We're going to fix that. So let's start with the API. What I'm going to do is create a resource, aws_api_gateway_rest_api. That's the type of the resource, and I can give it whatever name I want, so I'll just call it api, and the name is whatever I want. It also takes a description, which I don't really care about. And this next bit is important, because we're going to be returning an image.
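A hedged sketch of that REST API resource; the name and description are placeholders of mine:

```hcl
resource "aws_api_gateway_rest_api" "api" {
  name               = "screenshotter-api" # any name will do
  description        = "Screenshots URLs"  # placeholder description
  binary_media_types = ["*/*"]             # wildcard, because we return an image
}
```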

8. Creating Resource in AWS API Gateway

Short description:

We need to create a resource in the AWS API gateway for our image return. The resource will have a REST API ID, a parent ID, and a path part. The path part proxies everything under slash to the integration, using special path syntax that AWS understands.

Binary media types is a wildcard for everything. We need a resource as well. And the resource is just a thing on a path. So, aws_api_gateway_resource. Boom. And we'll call it res, I guess. It needs a REST API ID, which is the ID of our newly created REST API. It needs a parent ID (sorry, not a resource ID), and for that parent ID we'll use the API's root resource ID. And it needs a path part. What we're doing here just says: proxy everything from slash to whatever integration you have. It's special path syntax for AWS, but that's all we need.
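Roughly, under those assumptions; the {proxy+} greedy path variable is AWS's standard way to express "everything under /", which I'm assuming is the syntax meant here:

```hcl
resource "aws_api_gateway_resource" "res" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "{proxy+}" # greedy path variable: proxy everything under /
}
```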

9. Creating Method and Integration for API Gateway

Short description:

We need a method for the AWS API gateway. It requires a REST API ID, a resource ID, an HTTP method, and authorization. Next, we set up the integration, which involves specifying the REST API ID, resource ID, HTTP method, and internal details for communication with the Lambda function. Finally, we copy and paste the integration commands, replacing the resource IDs with the root resource ID. We then create a deployment for our API and give it permission.

And then we need a component. So API gateway, sorry, not component, method. AWS API gateway. We need a method. We can call it met. And you know, again, it needs a REST API ID, it needs a resource ID, which is our new resources ID. So dot res dot ID. It needs an HTTP method to work on, and it needs to know if it has any authorization, which in our case, it does not.
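A hedged sketch of that method; the talk doesn't pin down the HTTP method, so ANY (the usual catch-all for a proxy setup) is an assumption of mine:

```hcl
resource "aws_api_gateway_method" "met" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.res.id
  http_method   = "ANY"  # assumed; a catch-all method for the proxy
  authorization = "NONE" # no authorization in our case
}
```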

Okay, time for the integration. So you can see, it's really just a bunch of commands. We'll call this one int. And same thing: it needs a REST API ID, it needs a resource ID, and it needs an HTTP method, but we'll use the HTTP method from our method. Okay, and then it needs some internal stuff for when it talks to the Lambda internally. So it needs an integration HTTP method, and it's going to talk to the Lambda on POST, even though it receives a GET. It'll be an internal AWS proxy to another AWS component, which is Lambda. And the URI is our Lambda function's invoke ARN; the function is called test_lambda, so: test_lambda.invoke_arn.
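Sketched out, under the same assumptions as above:

```hcl
resource "aws_api_gateway_integration" "int" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.res.id
  http_method             = aws_api_gateway_method.met.http_method
  integration_http_method = "POST"      # API Gateway always invokes Lambda with POST
  type                    = "AWS_PROXY" # an internal AWS proxy to the Lambda
  uri                     = aws_lambda_function.test_lambda.invoke_arn
}
```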

Okay. And so we're just going to basically copy-paste these two things down here and swap out the resource IDs for the root resource ID; that's sketched just below. So the resource ID here is going to be the root resource ID. That's going to stay the same. And this resource ID is also going to be that. Good. That's pretty much it. Now we just have to create a deployment of our API, give it permission, and then we're golden. So we'll do resource aws_api_gateway_deployment (sorry, not API deploy, API gateway deployment), and we'll call it dep.
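The copied-down root method and integration just mentioned, under the same assumptions:

```hcl
# Same method and integration, but attached to the API's root resource.
resource "aws_api_gateway_method" "met_root" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_rest_api.api.root_resource_id
  http_method   = "ANY"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "int_root" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_rest_api.api.root_resource_id
  http_method             = aws_api_gateway_method.met_root.http_method
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.test_lambda.invoke_arn
}
```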

10. Configuring Integration and Permissions

Short description:

This section focuses on configuring the deployment and giving permissions. Unique names are assigned to the integrations, and a stage name is set as 'dev'. The REST API's ID is obtained from the previously created REST API. Permissions are granted using the aws_lambda_permission resource; hard-coded strings from the Amazon docs can be copy-pasted. The function name, principal, and source ARN are specified: the execution ARN of the REST API is obtained, with sub-path wildcards added. Finally, an output called 'endpoints' is created with the value of the deployment's invoke URL.

And this is going to be really interesting, because it depends on both our integrations. Speaking of which, let's give them unique names here: we have the integration int and the integration int_root. So, int and then int_root. Okay. It needs a stage name, which we'll call dev; there will be a /dev segment on the path when we access it. And lastly, it needs the REST API's ID, which, if we go up to where we created the REST API, is just api.id.

Okay. So lastly, we need to give it permission. So we'll do resource aws_lambda_permission. Exactly. We'll call it perm. Look, we have autocomplete. This just writes itself. There are a bunch of hard-coded strings here, though, that come from the Amazon docs, so you can just copy-paste those. The function name is our function name from above, which is test_lambda.function_name. The principal is apigateway.amazonaws.com. And lastly, our source ARN. So, who's going to call this? We'll go to the deployment here and get... sorry, not the deployment, we'll go to the REST API, my bad. The REST API. Exactly. And we will get api.execution_arn, but I want to add some sub-path wildcards to it, so I'm going to interpolate this string with that. And lastly, let's just do an output, call it endpoints, and its value is going to be our deployment's invoke URL. The deployment's name is dep, so: dep.invoke_url.
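Putting those last three pieces together, hedged the same way (the statement_id is one of those hard-coded strings from the AWS docs):

```hcl
resource "aws_api_gateway_deployment" "dep" {
  depends_on = [
    aws_api_gateway_integration.int,
    aws_api_gateway_integration.int_root,
  ]
  rest_api_id = aws_api_gateway_rest_api.api.id
  stage_name  = "dev" # we'll access the API under /dev
}

resource "aws_lambda_permission" "perm" {
  statement_id  = "AllowAPIGatewayInvoke" # hard-coded string per the AWS docs
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.api.execution_arn}/*/*" # sub-path wildcards
}

output "endpoints" {
  value = aws_api_gateway_deployment.dep.invoke_url
}
```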

11. Deploying Infrastructure and Verifying Deployment

Short description:

Now, we will apply this infrastructure and watch what happens. We already created the lambda, so it's just going to create all the extra stuff. The plan is to deploy and hopefully get a URL with a screenshot of the default route. We can try different URLs and it works. That's it. We just deployed a bunch of AWS Lambda infrastructure with a text file. And if we want to, we can bring it down just as easily.

Alright. So that's not that much. Now, we will apply this infrastructure. And watch what happens. We already created the lambda. So it's not going to create that. It's just going to create all the extra stuff. So it's going to diff in real-time, what changed. And as you can see, it has 8 things to add. And it's just a bunch of this.

So this is the plan. If I look at it and it looks good, I'm going to say yes, and it's literally just going to deploy that for us. And at the end, hopefully, we get a URL up here. If I open this URL, what do we have? Hopefully, a screenshot of the default route of my screenshotter, which is Google. Awesome. And then, if we pass a URL, let's try nodecongress.com. We should have... okay, so we have that. We can go nuts, we can do this all day. We have g2i.co. We can do my personal website. So, it works. That's it. We just deployed a bunch of AWS Lambda infrastructure with a text file. And if we want to, we can bring it down just as easily. I just do this, and I say yes. This yes is very important, because it tears everything down. But I do that, and boom.

12. Takeaways and Sandwich Debate

Short description:

We just deployed Node.js infrastructure without leaving our editor, enabling collaboration, scalability, and easy destruction and recreation. Four key takeaways: mechanized and automated processes are better than manual ones, ephemeral state is preferable to long-lived state, repeatable processes are superior to rare ones, and transparency is often better than a black box. Thank you for your time. Let's discuss the sandwich debate in the community Discord channel.

It no longer exists, but I could just as well apply it again, and then it exists again. So, let's come back to the presentation. We just deployed Node.js infrastructure without leaving our editor, in a way where our team can collaborate, scale, destroy, and recreate as needed. This is the power of infrastructure as code.

Let's talk takeaways. Really, I have four takeaways for you. As you leave this talk, I want you to remember MERT. Specifically: Mechanized processes, automated processes, tend to be better than manual processes, because they're things you can repeat, iterate, and improve, without having to guess. Ephemeral state tends to be better than eternal, long-lived state, because long-lived state can get corrupt and accumulate staleness, whereas something that is wiped and refreshed, and still the same, tends to be more pure. Repeatable processes tend to be better than rare processes, because, again, they're processes that can be automated, iterated upon, and, most importantly, tested. Lastly, Transparency is often better than turbidity, meaning open software, open source software, open deployments, open infrastructure tend to do better than some black box that you have no idea how it works, for scale and for teams.

So, with that, I just want to say thank you so much for your time. Yeah, hey, how's it going? I'm disappointed. So, what's your view on this? Why is it a sandwich? You know, it's bread. It's a thing in bread. Like, you know, if I put tofu in bread, it's a tofu sandwich. I don't get it. Yeah, I don't get this. It's an important thing to discuss, but it's really weird that people say it's not. So, if you have another opinion, explain it to us in the community Discord channel. But, yeah, sorry, agree to disagree.

QnA

Developers version 2.0 and the need for DevOps

Short description:

Developers version 2.0 may have some cloud experience, but they may not have the depth of a full-time DevOps expert. It's always helpful to have experts in your team to handle the specifics.

But now, let's go to the questions from our audience. The first question I would like you to answer is from Java Task: when we have developers version 2.0, do we still need DevOps? Yeah, can you say that again? Developers 2.0? We have developers version 2.0; do we need DevOps? Assuming developers version 2.0 means, like, an upgraded version of the developer who does cloud stuff, I think yes, definitely. Because, I mean, it doesn't hurt to have someone... I feel, you know, Spotify as a company does a really great job of this kind of T-shaped diagram of experience. So, you can have broad experience or you can have deep experience, but you can't often have all the experience. And so, if I consider myself maybe a developer 2.0, I do do some cloud stuff. I also do some frontend and some backend, but I don't think I would have the depth of a full-time hardcore DevOps expert. I wouldn't even claim that. And so, I could easily mess things up if I'm not careful. So it would definitely help to have somebody who's got the depth that I don't have. Yeah, it's always helpful to have experts in your team who know the specifics. And it's good if you can help out a little bit and maybe give, like a... I can't find the English word... a head start. Yeah. That's basically it. Yeah, I agree. So we agree again: two points.

Terraform and Incremental Changes

Short description:

Terraform is to your infrastructure, literally like React is to your user interface. It's just this declarative thing that is able to detect changes and then only change what you described should change. It's like hot reloading your infrastructure. That's pretty hardcore.

Next question is from cyberFox 909... buy you a drink. Sorry, I think we're just making a joke. Next question is from cyberFox 909: if I make some change to the Terraform file, which will generally change the infrastructure, does the whole build process happen again, or does it only change the component that's being changed? Yeah, that's a good question. I talked about this in my talk briefly, but I tend to mumble, so maybe it was missed. I know there are some frontend developers in the room, so it's pretty cool: Terraform is to your infrastructure literally what React is to your user interface. What that means is it's this declarative thing that is able to detect changes and then change only what you described should change. So, for example, if you change a disk from two gigs to eight gigs on AWS, nothing else about my infrastructure will change. It won't rebuild the whole thing. It'll just do an incremental change. So it's like hot reloading. Yeah, exactly. It's like hot reloading your infrastructure. That's pretty hardcore. For a front-end developer, that sounds pretty cool.

Cost of Serverless Functions

Short description:

The cost of deploying a serverless function like this depends on demand. With a Lambda, you only pay when it's invoked and for the time and memory it uses. The prices are tiny, and if nobody uses it, you pay nothing. So, the cost would vary based on usage. While it's not expensive, a sudden surge in usage can lead to a higher bill.

Another question from Bruce. How much would it cost per month for this particular demo you've given? That's an excellent question, and you know, I'm glad Bruce asked. This, I think, will be covered in the talk tomorrow about serverless functions. In this case, we deployed a serverless function, and I believe they're the best way and the cheapest, and just the most economically smart way to deploy stuff. Because you pay nothing if it's not used. Like, normally, if you work with a VM or EC2 instance in the cloud, or even if you do a Kubernetes cluster, you're going to be paying for those machines to just be on. Whether people use them, or they don't. But with a Lambda, with a function like this, if nobody uses it, you pay nothing. You pay for every time it's invoked, and you pay for how much time it takes to do its thing, and you pay for how much memory it uses to do its thing. And those prices are tiny. So, it really depends on — to answer the question, how much would this cost? It depends on demand. Like, if now, after this talk, the whole world goes and uses this Lambda, I might have a reasonable bill, but generally, it's not that expensive. Well, we have 12,000 people watching, so we could bankrupt you. Oh, no. Everyone now.

Multi-tenant Infrastructure using Terraform

Short description:

The speaker is unable to answer the question about multi-tenant infrastructure using Terraform and suggests clarifying the question or continuing the discussion in the spatial chat.

Next question is from Ghost1m. What do you recommend us to have multi-tenant infrastructure using Terraform? Would I recommend multi-tenant infrastructure? What do you recommend us? I don't know how to answer that question. What do I recommend you to have multi-tenant? I'm sorry, I don't understand the question. Okay, so, Ghost1m, if you can clarify your question a little bit better, then we might have time to get back to it. Or I will be in the spatial chat. There's a whole room where we can hang out and talk all day.

Terraform Registry and Reusable Modules

Short description:

There is an equivalent to NPM packages of Terraform modules called the Terraform Registry. It provides connections to external services and has adapters for almost every cloud provider. However, I haven't personally explored reusable modules as I haven't had the need for them.

The next question is from bkankal. Is there an open source library of reusable Terraform modules that you would recommend? That's a good question. So, in my talk, I showed off the Terraform Registry, which is basically the NPM registry, but for cloud stuff. So, there's an equivalent to NPM packages of Terraform modules. They're not really modules. They're more like connections to external services. So, in terms of like a module registry, I'm not sure. I don't really use anything out of the box because I can, as we just saw in the talk, I'm pretty comfortable copy pasting from the registry. But, the registry itself has like adapters for literally almost every cloud provider I can find. Like, they even have stuff for Heroku. And I think digital... Like, it's just, it's rich. But, in terms of reusable modules, I've not looked into that because I haven't had the need personally. That's also an answer I don't know is an answer.

Thoughts on CloudFormation, CDK, and Terraform

Short description:

I've used CloudFormation but didn't like it. Terraform's upcoming CDK is TypeScript-first, which is great for me as a TypeScript engineer. And Terraform is not vendor-locked to AWS, so I can use it for Azure, Google Cloud, etc. The best way to get started with Terraform is to watch my talk and deploy a node thing as a Lambda.

Next question is from Tom: what are your thoughts on AWS CloudFormation versus AWS CDK versus Terraform? That's a great question. I've used CloudFormation, and I want to be respectful and kind, but I did not like it at all. I found it tedious. I found it overly verbose. There are many different ways to write a CloudFormation file, in my experience; I would write it and then somebody would tell me to do something different. It didn't seem very solid. I haven't worked with the AWS CDK specifically, but Terraform is creating something, I think it's called CDKTS, something like this. I'm super excited for this, because it follows in principle the same principles as the AWS CDK, but it's Terraform, TypeScript first. And for me as a TypeScript engineer, this is gold, because I don't have to learn this other language, HCL, to write my Terraform files. I can literally just write TypeScript, I get perfect autocompletion on everything, and it just outputs a Terraform configuration. So if the AWS CDK has TypeScript out of the box, I would get on that as well. But for me the value in Terraform is that it's not vendor-locked to AWS. I could write a Terraform file for Azure, for Google Cloud, for whatever I want. And that's, for me, the value there.

Okay, thank you. Next question is from Dan G: what is the best way to get started with Terraform? Watch my talk. Honestly, I think you could write some node thing and deploy it as a Lambda. That's a great way to get started. And then really go from there.

Getting Started and Converting Infrastructure

Short description:

To get started, it's best to read the documentation thoroughly. Converting existing infrastructure to code can be tricky, and I haven't found a way to do it yet. We may clone everything and switch over to Terraform. As for writing lambdas without Babel and with tree shaking, it depends on the runtime. AWS Lambda handles up to Node 14 and requires bundling with tools like Webpack or Rollup.

I honestly believe that's the best way to get started. If you don't have access to my talk, for whatever reason, the way I learned most things, Terraform, Kubernetes, Docker, whatever it is, React, I just open the docs and read every word from start to finish. And then I'm like, okay cool I kind of get the gist of this. So that's an option as well. Which is really time intensive. Yeah, but it's worth it. It's an investment, right? Yeah, yeah, yeah sure.

Okay, next question is from Tazi: can we convert an existing infrastructure to code, or do we need to set up a brand-new infra and migrate? Yeah, that's a tricky one. So, you know, I hope I'm not violating any NDAs or whatever, but where I work, we don't have a large infrastructure-as-code style of doing things. It's largely been manual, but it's something we're moving towards, something we'd like to move towards. And I've been looking at: can we take our existing manual flow and use the existing services but map them to Terraform resources? I haven't found a way to do that yet. I'd like to, but I think what we're going to go ahead and do is kind of clone everything we have in a Terraform file and then just switch over, if it comes to that. So yeah, the answer is I haven't found a way to do that yet, although I'd like to have it. So if you know something, let me know in the spatial chat. Cool.

We have another question from Java Task: can we use Node 14 ES6 modules to write lambdas, without Babel and with tree shaking? This is a great question, because it comes down to how you define a lambda, right. If a lambda is just a function, then yes, you can write it with whatever you want. You can write it with Python. Sure. But functions run in a runtime, and so I feel the answer to this question is: it depends on the runtime. Can your runtime handle native ES2018 or whatever ES version you want to write? And AWS Lambda, as far as I can tell, cannot. It handles Node 14, I think, as a runtime maximum, which has some modern features. But the real challenge is that the burden of bundling your lambda is on you. So you can't just npm-import whatever you want and ship a plain JavaScript file to Lambda. You'd have to use Webpack or Rollup or something. Yeah.

Maintaining CICD Pipelines and Version Control

Short description:

Regarding version control of infrastructure source files, such as Terraform files, they can be checked into version control for collaboration. However, the state file, which contains sensitive values, should not be checked in. Instead, using a tool like Terraform Cloud is recommended. It is free for up to five users and can be connected to a GitHub repo for easy management.

Okay. You gotta have that. Oh, I forgot the word. Bundling, yeah.

We have time for one more question. It's a question from Novorf. Speaking of infrastructure as code, what are your experiences so far with maintaining CICD pipelines as source files under version control? Do you have any favorite solutions? Can you ask it again? Speaking of infrastructure as code, what are your experiences so far with maintaining CICD pipelines as source files under version control? Do you have any favorite solutions? I'm having a hard time understanding the question, but I'll answer it as best I can. If it's regarding version control of source files of infrastructure, so like these Terraform files. Like you can, I mean, obviously the point of a Terraform file is that you can check it into version control and collaborate on it. But in terms of checking in and versioning specifically the state file, because what Terraform will do is it will manage the state of your infrastructure and keep track of that. That is best not checked into version control because it contains sensitive values, passwords and things. So my solution for checking in and versioning those is to not, but instead to use something like Terraform Cloud, which is generously free up to five users. And so for us, we don't really have more than five people who touch Terraform. So we just connect that to a GitHub repo and everything's taken care of.
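For what that setup can look like, here is a minimal sketch of pointing Terraform's state at Terraform Cloud via the remote backend, so the state file never touches Git; the organization and workspace names are made-up placeholders:

```hcl
terraform {
  backend "remote" {
    organization = "my-org" # hypothetical Terraform Cloud organization

    workspaces {
      name = "screenshot-lambda" # hypothetical workspace, linked to the GitHub repo
    }
  }
}
```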

Okay, cool. So we have more questions that have been coming in, in the Q&A channel, but we have no more time. So for the people that have been asking questions, you can join Tejas in his spatial chat speaker room, where he's going to go now, right? Yeah, that's right. All right, Tejas. I want to thank you a lot for this amazing Q&A, and I hope to see you again somewhere in real life, maybe, someday. Awesome.
