Building Serverless Applications on AWS with TypeScript

This workshop teaches you the basics of serverless application development with TypeScript. We'll start with a simple Lambda function, set up the project and the infrastructure as code (AWS CDK), and learn how to organize, test, and debug a more complex serverless application.


245 min
02 Aug, 2021


AI Generated Video Summary

This workshop focuses on learning serverless with TypeScript and building a serverless app for a coffee shop. It covers various topics such as serverless services and pricing, serverless workflow with AWS, setting up IAM and credentials, CDK project setup, organizing functions, and creating DynamoDB tables. The workshop also explores the benefits of a single-table layout in DynamoDB, creating APIs with AWS Lambda and API Gateway, and troubleshooting deployment issues. Additionally, it discusses environment variables, testing and debugging, and handling environments and rollbacks.

1. Introduction to Serverless with TypeScript

Short description:

In this workshop, we'll learn Serverless with TypeScript and build a serverless app for a coffee shop. We'll create an API to sell coffee beans from different countries, protecting it with authorization. Serverless eliminates the need to manage servers and allows us to focus on writing code. AWS will handle scaling and security, ensuring each Lambda function is isolated. We'll explore the traditional architecture of a web app with an API and database, and understand the benefits of serverless.

I guess we can slowly start, and then if someone else joins, they're welcome, of course. So welcome, everyone. Thank you for joining me on this workshop. We'll try to learn serverless with TypeScript today. We'll use a few different services, such as API Gateway to create an API, a DynamoDB database, and a few other things.

And running workshops virtually with people that don't know each other is a bit tricky, because I love splitting people into different groups so they can work together. So this is a big experiment for me, but let's hope that it will work. Feel free to suggest anything, and I'll keep updating the link that you'll get. In the next few days, you'll have everything that we mention here today written up at that online link, so you'll be able to follow this workshop again at some point if you want, of course.

So I'll share my screen and we'll slowly start. Okay, so you should be able to see my screen. Can anyone confirm? Yes, perfect. So we'll use a few things today. The first one is Excalidraw. I really like Excalidraw because we can even collaborate here. I'll actually start the session now and you'll be able to see everything that I see. I'll paste this link to both Discord and Zoom. It's basically a tool that allows us to draw anything. And the cool thing is that I managed to add some AWS icons a few days ago, so we can draw diagrams here at some point and things like this. So this is the first thing that we'll use. I, oh, can you leave my link alone? I'm just kidding.

So this is the link where you'll be able to see the workshop data. It's on my website. It's slobodan.me slash workshops slash Node Congress. And it will take you to something that looks like this. It's not perfect at the moment. I tried not to add too many things, and as I said, I'll update this after this workshop during the weekend and early next week, so you'll be able to see all the additional things here. We have some small presentations, there are some assignments, there are some code examples and things like this. So we'll see if that will work or not.

So I guess we can start. Let's start with a quick poll, if I know how to run the poll here. But let me try. I just want to know if you have an AWS account. This is something that we really need for this workshop. Of course, if you don't have an AWS account, you'll be able to follow this workshop and then try everything later at some point, maybe next week or whatever. If there are just a few people without an AWS account, I can try to help, but I'm not sure that I'll be able to create accounts for everyone. It'll take too much time. Okay, it seems that most of the people have their own accounts, which is great. Serverless rules. Okay, I'll zoom this out a bit. Cool, so there are two people without an AWS account. I'm not sure what's the best way; let's just start, and when we need to deploy something, while we are waiting for everyone to finish the assignments, maybe I can create a temporary account for you or something like that, but we'll see. Perfect, thanks. I'll end this poll now and we'll run a few more polls a bit later.

One more thing that I want to know before we start: are you using TypeScript or not? We'll use TypeScript today, but you just need basic knowledge of TypeScript. You don't need to be a TypeScript master. Perfect. So everyone at least tried TypeScript, that's awesome. Cool. And just one more, and that's about, oops. Do you use serverless? Cool. Okay, this is great. You don't need to have any knowledge about serverless before we start this workshop, but let's see. This is good enough. I'll stop this poll. Perfect. Thanks.

So let's start with a small presentation. What we'll do today: we'll try to build a serverless app, actually an API, for a coffee shop. Let's say that your friend has a coffee shop, and now, because of lockdowns and everything, he wants to sell the coffee beans that he's roasting. So he just wants to create a simple application where people will be able to buy coffee beans. And he asked you to help him set up that website, basically an API, where he'll be able to sell different coffee beans from different countries. I think I saw that we have someone from Colombia. So that's great.
This is one of the countries with really good coffee, and I love drinking coffee, and that's why we'll talk about this, but you can apply the same principle to anything. We'll just build some kind of REST API and protect it with some authorization. With a traditional non-serverless application, you can do this, and most of you probably built similar applications, but why do we need to create a serverless application here? We don't, of course, but you want to start playing with serverless because serverless is platform as a service. So instead of building an application where you need to own and prepare the server and everything, where you need to update the operating system and things like that, you just want to write your code, deploy it to some serverless services on AWS, and AWS will manage that in the background for you. For example, they will scale your API. If you suddenly have many requests, they'll just scale everything for you. If no one is using your application, it will cost you $0. It's really secure, because each Lambda function, which is basically a piece of code that you deploy to serverless, is completely isolated from everything else.

And I think the best way to understand serverless is to understand how we would build this application without serverless. If we built some kind of traditional web application, we'd have some kind of web app, let's say React or something like that. Then we have some API. And finally, we have some database. For example, if the admin wants to add something to the database, he goes to the website, he logs in, and he's able to add a new coffee to the database. And then if a customer wants to buy a new coffee, it's again easy, because you just send some requests. You go to the website, we list all coffees. You buy one coffee, and we get that coffee from the database and assign that coffee to you. And then someone in the background sends or ships that coffee to you, and you can have a really good espresso at home.

Inside that API, most of the time we have something like this. I guess most of you know about Express.js or other similar Node.js libraries. All of them have some kind of router. Then you can put different middlewares in between, but one of the common things is putting authentication between the router and the handlers.

2. Serverless Services and Pricing

Short description:

With serverless, we use different services to handle different parts of the application. API Gateway serves as the router, and for authorization, we can use Cognito. Each Lambda function is isolated, ensuring security. The cost of these services is affordable, with Lambda pricing starting at just 20 cents for 1 million requests and Cognito offering free usage for up to 50,000 monthly active users. Running a big serverless application in production costs around $250 per month, a small portion of the earnings.

And finally you have some handlers that will basically do some business logic. With serverless you can do the same, but basically instead of bundling everything in one API that is deployed on one machine, we use different services to do different parts of this diagram.
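
Before we map that onto serverless services, here is a minimal sketch of that traditional router / middleware / handler layout, assuming a plain Express and TypeScript setup (the routes and data are just illustrative, not the workshop code):

```typescript
import express, { Request, Response, NextFunction } from 'express';

const app = express();
app.use(express.json());

// Authentication middleware sits between the router and the handlers.
function authenticate(req: Request, res: Response, next: NextFunction) {
  if (!req.headers.authorization) {
    res.status(401).json({ message: 'Unauthorized' });
    return;
  }
  next();
}

// Handlers contain the business logic for each route.
app.get('/coffees', (_req: Request, res: Response) => {
  res.json([{ country: 'Colombia', name: 'Medium roast' }]);
});

app.post('/coffees', authenticate, (req: Request, res: Response) => {
  res.status(201).json(req.body);
});

app.listen(3000);
```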

For example, there's a thing called API Gateway, which serves as our router. It's basically a separate service that you pay for per million requests, and it costs... I think for the HTTP API, actually, let's see. So, API Gateway pricing.

There are a few different things. We will use this HTTP API, but as you can see, the first 300 million requests will cost $1 per million requests. It's not that expensive. And then after that, it's a bit cheaper. There's another type of API that we'll not use that is a bit more expensive, but they recently added this one that is like three and a half times cheaper, which is something that we see often with serverless. And the cool thing is that the first million requests received are basically free, so this is really cool. And we'll use that to build our router today.

And as I said, if you have an account, this will be free for you today, unless you have some other applications that are using a lot of requests on your account. But anyway, it will not add more than a few cents to your bill. So we have API Gateway that serves as our router. We don't need to do anything special there. We just need to configure it, and we'll see how we do that. Then for authorization, again, we have a separate service that we can use. Oh, actually, before that, for handlers: whenever you write a handler in Express, that's basically the logic that will happen when someone triggers, for example, your add-coffee route or something like that. Here, handlers are Lambda functions. Instead of bundling everything in one big application, the best practice is basically to have one Lambda function for each piece of your business logic. And each Lambda function is completely isolated from everything else. So one user adding a coffee will not be able to affect any other Lambdas that are running at the same time.
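
As a rough idea of what one of those handlers looks like as a Lambda function, here is a minimal TypeScript sketch using the aws-lambda typings (the "add coffee" route and the file contents are hypothetical, not the workshop's final code):

```typescript
// Hypothetical add-coffee handler: one Lambda function for one piece of business logic.
import { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

export async function handler(event: APIGatewayProxyEventV2): Promise<APIGatewayProxyResultV2> {
  // API Gateway passes the HTTP request in as an event.
  const coffee = JSON.parse(event.body ?? '{}');

  // The business logic for this one route lives here (e.g. store the coffee in a database).
  return {
    statusCode: 201,
    body: JSON.stringify({ message: 'Coffee added', coffee }),
  };
}
```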

So for example, even if someone manages to get access to this Lambda function, they will only have access to that one Lambda function in that one session. So they're not able to inject any malicious code or anything like that. Basically, this Lambda function is read-only; there's just a small portion of the Lambda environment where you can write something temporarily. And finally, you want to add some authentication or authorization, and you can use another service called Cognito for that. We'll deep dive into each of these services a bit later.

Of course, each of these services costs something, but they're not that expensive. For example, for AWS Lambda. So, Lambda pricing, as you'll see in a second, Lambda is really cheap. Let's wait for it to load. So for 1 million requests, you pay just 20 cents. And then you also pay some cents for each gigabyte that you process each second. You can assign more memory to your Lambda functions and things like these, but that's not really important now. And of course, you have 1 million requests per month free, which again makes the application that we'll build today free for you.

Also Cognito pricing, let's see that one too. We'll use Cognito to store our user sessions. And again, we can have up to 50,000 monthly active users and it will be free. And then after that, you pay a really small amount of money per monthly active user. So yeah, that's it. Everything we'll do today is quite cheap. And I know that because we are running a big serverless application in production. We have more than a hundred Lambda functions, we have API Gateway, we have Cognito and many other things, including some GraphQL and things like these. And our monthly bill is about $250 per month, which is a really, really small portion of what we are earning every month. So yeah, that's really cool.

3. Serverless Workflow and Introduction

Short description:

We'll learn how serverless works with AWS. API Gateway and Cognito handle requests and authorization. DynamoDB is a scalable and cost-effective serverless database. We can make our website serverless using an S3 bucket. TypeScript is used for Lambda functions and infrastructure as code with the AWS Cloud Development Kit. I'm Slobodan Stojanović, CTO of CloudHorizon. We build serverless web applications, including Vacation Tracker. You'll need an AWS account, Node.js, the AWS CLI, and configured credentials to follow the workshop.

So here's how it works. You still go to your website. For example, the administrator can still add a coffee through the website. API Gateway will receive this API request. There's some token here, so it will send this token to Cognito and check if you have access to do that, to add the coffee. Cognito will say that you're okay, you can add the coffee, and only when Cognito confirms this will API Gateway run some function and you'll be able to add the coffee. And of course, we'll need to store coffees somewhere.

We can store coffees in any kind of database, but the thing is that all these other things scale automatically. So if our database is not scaling automatically, that can be a problem, because if you have too many requests, your database can crash. Of course, there are different things that you can do to prevent your database from crashing. We used Mongo at some point in production and that worked really well. But now we switched to something called DynamoDB, and DynamoDB is a completely serverless database. I'll explain that one a bit later. Again, it's really cheap and it scales really well. It's fast. I know that it scales well because I once had a bug in production where we managed to write like 250 million things to our database because of that bug. And the cost of that bug was like $300. So it scales and it's not that expensive at all. So this flow works like this, but of course we can even make our website serverless by using something like an S3 bucket. Today we'll use the Amplify Console to host our front-end application, if we manage to get to that part of the workshop. And as I said, this works fine if you have permission to do something, but if you don't have permission, for example, if a customer tries to add a coffee, the request will basically go to Cognito, Cognito will tell our API Gateway that the customer is not authorized, and that request will never reach our business logic. So that's why serverless is really, really cool.

But how do you deploy and develop a serverless application? There are many ways to do that, but most of these applications have a lot of small components, and we want to be able to use some kind of infrastructure as code. Just imagine deploying each of these functions separately; imagine that they share some common library and you want to update all of these functions at the same time. That would be a nightmare. Not that much if you have just two functions, but, as I said, we have more than 100 functions in production now. It would be almost impossible to know which version of the code each Lambda function is running. So instead of doing that, we just want to write the code once and run one deploy command that will deploy all of our Lambda functions, or at least the part of our code that we want to change. There are many ways to do that, and one of the ways includes TypeScript, of course. With TypeScript, you can write your Lambda functions. Lambda does not understand TypeScript at the moment, but it understands Node.js, it understands Java, it understands Python and many other languages, and it also has container support now and custom runtimes, so you can bring your own runtime if you want. But as we know, TypeScript compiles easily to JavaScript that runs on Node.js, and we'll do that today. We'll write our code in TypeScript, compile it for Node.js, and then deploy that to our serverless application. But we can go even a step further and use TypeScript for infrastructure as code using something called the AWS Cloud Development Kit. You'll see soon, but the Cloud Development Kit allows you to define all these things that we saw here using TypeScript, and you can add that as part of your project. One library, basically one folder and a few files in your Git repository, can be this infrastructure as code, and the rest of it can be just the functions and things like these. On our project, called Vacation Tracker, we use TypeScript for everything, for front end and back end. And we also use monorepos, so we can share types and many other things between front end and back end and a few other separate small applications such as analytics. Today we will build just one thing, we'll not build many different things, but yeah, we'll see that soon.
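
As a rough preview of what "infrastructure as code in TypeScript" looks like, here is a sketch of a CDK stack that defines a single Lambda function. It assumes the CDK v1 packages and an illustrative file layout; it is not the exact code we'll write later:

```typescript
// A sketch of infrastructure as code with CDK (CDK v1 packages; paths are illustrative).
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import { NodejsFunction } from '@aws-cdk/aws-lambda-nodejs';

export class CoffeeShopApiStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // One Lambda function per piece of business logic; the TypeScript entry file
    // is compiled and bundled for Node.js automatically at deploy time.
    new NodejsFunction(this, 'AddCoffeeFunction', {
      entry: 'functions/add-coffee/lambda.ts', // hypothetical path
      handler: 'handler',
      runtime: lambda.Runtime.NODEJS_14_X,
    });
  }
}
```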

So let me introduce myself. This photo is too big, sorry for that. But basically, I'm Slobodan Stojanović, I'm CTO of CloudHorizon, a company based in Montreal, Canada. We build web applications, some of them are serverless, some of them are not, but we also build a product called Vacation Tracker that allows you to track your leaves. For example, you can apply for PTO or a sick day or anything like that through Slack or through Microsoft Teams, and your approver will be able to approve that in seconds. We also have a web dashboard and things like these, and that application is 100% serverless and it grows really fast. So far we have a really good experience with serverless in production. I'm based in Belgrade, Serbia, and I'm also an organizer of the JS Belgrade meetup. I wrote a book about serverless called Serverless Applications with Node.js, published by Manning Publications. And I'm also an AWS Serverless Hero. And I'm writing a lot about serverless. Some things that we'll cover today, I have different articles about them, and I'm publishing many more articles. And yeah, you can find that on my website, which is basically this same website without the workshop part. But I'm not that important. So I guess we can start with the workshop, because it's Friday. We don't want to waste too much of our time on less important things.

So, as I said, to be able to follow everything in this workshop, you will need an active AWS account, and you need Node.js with npm. The version is not that important, but I would suggest version 12 or 14, because these versions are supported in Lambda, and we'll use that for our Lambda function, so it's good to have a similar version. If you're installing Node.js right now, feel free to install the LTS version, which is basically 14-point-something. You need the AWS CLI, and you also need to configure your credentials. I'll try to do everything today. I don't do that often in workshops, but because we have people from all around the world, and it's really hard to work in groups, I'll basically go through this workshop and try to build these things. And you can stop me at any moment and ask questions, feel free to do that. You can write your questions in chat. I'm monitoring both chats all the time. And yeah, let's try to build something cool today.

So for Node.js, if you don't have Node.js, there's a link here. You can go to the official site and install, as I said, the LTS version; it will come with npm. For the AWS CLI, there's official documentation here. You can go and install it. Installing the AWS CLI is not always fun, but I think they improved it a lot, because it used to depend on Python and then you needed a specific version of Python, but I really hope that everyone installed it before. And as I said, you need an AWS account. If you're using your company's account, I hope that you have enough permissions for what we'll need today. I have the permissions listed here. Basically, whenever you deploy a new stack using CloudFormation, you will need permission to create and deploy stacks. You will need permission to create IAM roles. And you'll need permission to create Lambda functions, an API Gateway API, a DynamoDB table, and to write and read logs in CloudWatch. We'll see what CloudWatch is. And of course, to use Cognito. If you have these permissions, then you'll be able to follow this workshop without any problems. And finally, to start with everything today, we'll need to configure our AWS credentials.
I linked the guide here, but I'll try to walk you through everything. So we'll use AWS today. And I'm not sure if I have a poll for that, but let me see. No, I don't. I hope everyone knows at least the basics of AWS. But if not, I'll try to walk you through the most important parts. So I'm logged into the console, and here you can change the region. I'm in Ireland right now, but it doesn't really matter. Anyway, you can deploy this application in most of the regions. Maybe not some really new regions that they just opened, but most of the main regions, such as Europe (Frankfurt), Europe (Ireland), and I think all the others in Europe, or all the regions in the US, can easily run everything that we'll do today. I just need to tell you that the pricing for services can be slightly different in different regions. I would recommend using either us-east-1 or us-east-2, or eu-west-1 or eu-central-1, for this workshop, but it's up to you. So when you log into this console, there are many things here. And if you try to open Services, as you can see, there are a lot of services on AWS, from computing, where we see Lambda, to satellites, to quantum technologies and many other things.

4. Setting up IAM and Credentials

Short description:

To set up your credentials, go to IAM in the AWS console. Create a new user with programmatic access and attach the administrator access policy. This will simplify the process, but for production, consider selecting specific permissions.

There are a lot of machine learning services and things like these. Unfortunately, we'll not be able to touch some of these cool services today. And the first thing that you need to do here, if you don't have your credentials set, is to go to a tool called IAM. It's basically identity and access management in AWS. Here you can see users, groups, roles and things like these. As you can see, I have a lot of roles, because most of these roles were created by different applications. If I go here, you'll see that most of them were auto-generated a long time ago. And for users, I already have my user. But if you don't have a user here, you can go and add a new user. You can name this user however you want, and you should give yourself programmatic access. You go to the next step, and then you need to attach some policies. To simplify everything, you can just use administrator access. If you're doing this for production, you should pick the permissions that someone really needs instead of administrator access, but let's just make everything really simple now, because this is something that we could talk about for hours.

5. Setting up AWS CLI and Account

Short description:

Next, you can add tags to track billing. Confirm and create a user to get an access key. Open your terminal and configure the AWS CLI with the access key. Be careful not to publish the credentials anywhere. Use AWS SSO for better account management. Wait for others to set up. Consider using separate IAM users instead of the root user. Keep the default profile blank or use environment variables. AWS SSO provides temporary access to production. Setting up AWS takes time, but it gets faster after the initial setup.

Next, you can add tags. Tags are really cool in AWS because if you put different tags here, you'll be able to track your billing by different tags, but right now, we don't really care about them.

Finally, we need to confirm everything and create the user, and when we create the user, we'll get this. It's an access key, and I can see my secret here. You need to copy both of these things, and then you can open your terminal. Can you read this? If anyone is not able to read this, just let me know, and I'll try to zoom a bit more. Yes, perfect. So I have the AWS CLI installed here. Let's see the version. It's version 2, and I already set my credentials, but if I do... I think it's credentials, I never know these commands... oh, aws configure. If I run this, it will ask me to configure my access, and I just need to paste the access key and the secret there, and maybe select a different region that I want to use. I'll not go through this, because if I did, I would need to show you my secret key, which I don't want to do today, because I don't want to see a Bitcoin miner, or something more creative, in my account. I'm just kidding, but make sure that you don't publish these credentials anywhere, because if you publish them on GitHub, probably in the next few minutes you'll have different things set up in your account, and yeah, it can cost you a lot. Fortunately, if you have any issues with that, you can just write to AWS support and they'll basically help you solve the problem, and most of the time they just don't charge you for that. But let's just be safe this time and not do that. So, this is the thing that you need to do: you need to create a user, then run aws configure, paste these things, and then you should be logged into your account.

To be able to see if you're logged in or not, there's a command that is a bit longer. Let's just check this. STS. So, just a second. Export AWS profile; I have multiple profiles here, so I want to use a workshop profile for myself. And I basically have a shortcut here, so I can say, who am I? And it will tell me, let's see, that this is my account ID. If I go here, I should be able to see the same account ID, so I'm connected to the same account. You probably don't have this who-am-I command. It's... who am I... it's STS... I don't remember this, I'll just go to help and it will tell us what we need. Oh, perfect, someone pasted this for me: aws sts get-caller-identity. It will do the same thing that I just showed you. The cool thing that I learned recently is that you have an .aws folder in your home folder, and inside this .aws folder you have your credentials file. That's basically what you get when you run aws configure. But there are a few other things here. With the AWS CLI, there's also this file called cli/alias, where you can basically define top-level commands. And I have a who-am-I command that is basically a shortcut for sts get-caller-identity. That's why I'm able to just run that who-am-I alias and get the account. So that's basically it. Once you have that, you have everything that you need to continue with this workshop.

Before we continue, I'll run another poll that I'll try to run from time to time. This is not a real assignment, but let's relaunch this. Can you tell me if you managed to configure your AWS CLI or not? There are probably some people that are not able to do that. We have one person so far, three, okay, four. So for the people who didn't manage to do this, what's the problem?
Feel free to unmute yourself or just write in the Discord chat or in the Zoom chat. Do you need any help, or do you not have an account? We just need a few minutes to... Oh, sure, sure, sure. We'll wait, don't worry. So one of the good questions was: is it okay to use the root user instead of creating an IAM one? It's basically okay, but I would recommend you to have separate IAM users, because you can control them a bit better. And we'll wait while everyone installs and sets up everything.

Oh yeah. So another great question by Alex is: would you recommend keeping the default profile blank? I see you are using an environment variable. So as I said, I have multiple accounts. I'm not able to show you my configuration file, but inside that configuration file I have many different accounts. So to be able to select the account, I need to do something like this: I need to export an environment variable and select one profile. I just need to put the profile name. By default, there is a profile called default, and you can keep it blank so you don't deploy things to a wrong account. People do that often. I did a workshop in my company at some point, a long time ago, and someone used that workshop account for a long time as their account, because they had no idea which account they were using. So maybe it's a good option not to use the default one, unless you have one account that you use every day, your personal account or something like that; then it's probably okay to use the default account.

But there's another cool trick that I learned recently: you can use AWS SSO for this. I don't have SSO set up in this account. As you can see here, this is not my root account. This is Slobodan's workshops account. I basically have a lot of sub-accounts. You can go to Organizations and create sub-accounts. For this one, I don't have a lot of them; actually, I'm only able to see the organization because I'm not the owner of this organization, but there are a few of them for my personal account. For my company account, each developer has their own environment and their own sub-account, so they can do whatever they want in their sub-account, and they're not able to do anything in production directly. Of course, if they want to access production, there are some different ways to do that. One of the ways is single sign-on. With single sign-on, you can create accounts for people. So instead of doing this, you can basically run aws sso login, and then it will open the browser for you. You'll be able to log in and get temporary access to, for example, production. And then after a few hours or something like that, you can log out, or you can just wait for your session to expire. The good thing is that there are no credentials on your computer. Even if someone can access my computer, they will not be able to get the credentials for our production. That's why, for production and things like this, I would definitely recommend using AWS single sign-on, but for your personal account, it's up to you. I prefer not to use this default. I'm not using default at the moment. I had a default in the past, but I'm not using it anymore. So, I prefer selecting the account before I do something. So let's wait a bit longer and see if people are able to install and set everything up. Are there any other questions about this?
I know that we're still not doing any interesting part of the workshop unfortunately, but setting up AWS always takes a lot of time. And then when you set up everything once, then after that, everything is much faster and easier. So, I really hope it will become much more fun in a few minutes. It's also easy to forget how you set it up as well. Oh yeah. Yeah. It's really easy to forget that, but, yeah.

6. Setting up CDK Project and Coffee Shop API

Short description:

In this part, we will set up the CDK project, starting with designing the database and creating APIs. We'll also discuss testing serverless applications. We'll use AWS CDK, a tool for writing infrastructure as code in multiple languages. CDK has AWS constructs published by AWS and other constructs from third-party sources. CDK supports everything supported by CloudFormation. We'll create a Coffee Shop API using CDK and npx. CDK provides a higher level of abstraction and makes it easier to understand and write infrastructure code.

If you need me to repeat anything or help you set something up, just let me know and I can go through these steps again. Any concerns when using the Docker version of the AWS CLI? I have no idea, I never tried the Docker version. It should just work fine. So, let's try, and let me know if there are any problems. Everything should work fine. This may actually be the best way to use the AWS CLI, because then you don't have anything on your own computer, so you don't need to worry about the credentials and everything. You have everything inside the Docker container, and you can just delete it when you don't need it anymore. I'm not a big Docker user; it can help you with testing serverless applications, but I'm not using it often. We'll see how to test serverless applications a bit later in this workshop.

Basically, what we'll do today: we'll set up the CDK project, we'll start with the database and try to design it a bit, create a simple API with one route, create a bit more complex API with a few different routes, see how to debug this API, add authentication, and try to build a really simple front end and connect it with this application. This is the goal of our workshop today. Inside step number four, we'll talk about testing serverless applications a bit, because I think this is a really important topic. Do we still need a few minutes to finish setting up credentials? Is there anyone that needs more time? I'll just go to the next step, and feel free to finalize that. We'll not start with coding immediately. I'll try to explain the next step first.

As I said, we'll use something called AWS CDK to build our project today. CDK is basically a tool that you can use to write your infrastructure as code in a few different languages. You can use TypeScript, Python, Java, and you don't need to build a serverless application only. They published CDK for Kubernetes, I think they have CDK for Terraform, and a few other things, but we'll use just the raw CDK, the CDK that works with AWS. And there's a really nice guide here on how to start with CDK, where they try to explain the key concepts. You have different examples in different languages and things like these. And one of the best things about CDK is that there's something called the AWS CDK Construct Library. And yeah, it looks like this. You have all the different things that you can use from CDK. We are looking at TypeScript right now, but as I said, you can use different languages here if you prefer. In this workshop we'll use TypeScript, because yeah, it's a Node congress, so it's more fun than writing Java or something like that. And yeah, let's try to understand what's happening here. Basically, with CDK there are a few different types of constructs. Some of them start with AWS; these are AWS constructs, published by AWS. And there are some other constructs that you can install, published by other people, and things like these. There are also multiple different levels of CDK constructs. Some of the CDK modules are basically higher-level modules. For example, we'll use AWS ApiGatewayV2. As you can see here, you can create a new API, you can add routes, and do things like this. But by default, CDK supports basically everything that is supported by CloudFormation. I'm not sure if you're aware of what CloudFormation is. CloudFormation is basically the infrastructure as code built by AWS, and it's something that AWS uses in most of their examples.

7. Creating CDK Project and Initializing Stack

Short description:

In this part, we create a new folder and use npx to download CDK temporarily. We initialize our stack and generate important files like cdk.json. The lib folder contains the new stack, which is the CloudFormation application. We also have tests and can use npm run cdk synth to see the CloudFormation file output. We want to keep the readme file, bin file, cdk.json, and Jest config for the desired folder layout.

So actually, let's just try... Someone is writing a comment. I'll wait for that, and then we can slowly start to build our application so I can show you how that works. Actually, I'll start while I'm waiting for this. So I'll create some directory called, for example, node-congress, and I'll make sure to enter this directory. As you can see here, it's an empty directory.

Inside this directory... Oh! There's a question: can AWS CloudFormation provide you with a summary of buckets, etc.? So, am I able to use it as documentation? Not really, because it's really hard to read. There are some tools that try to visualize CloudFormation templates for you, and my friend Nemanja Costic wrote an excellent article about that. Let me just... no, it's not this one. Using a different profile. Give me a second just to access the right one. There are different tools that you can use to visualize your CloudFormation or CDK template, and yeah, they're not perfect at the moment; at least the ones built by AWS are not perfect. So it's not easy to use that as documentation. No, no, no, I don't know where his blog is, but I'll paste the link in the chat so I don't take too much of your time right now. There are some tools that can basically visualize everything for you, but they're far from perfect right now, and I'll send that link a bit later. But it's useful to look at the CloudFormation template to make sure that you're not leaking some permissions and things like these, because you want your application to be secure, and CloudFormation can help you with that.

So I created a new folder. Inside this folder, we'll have two different things. Basically, as I said, here we want to create the Coffee Shop API, because we'll have another folder a bit later which will contain the frontend. So if I do this and enter it, I again have a simple empty folder, and I can run this command. Basically, I don't want to install CDK on my machine. Instead, I can run npx, which downloads CDK temporarily to my machine and just runs this command, and then after that it will delete it somewhere in the background. If I run npx cdk init app and then... actually, if I just run this, let me show you, CDK will tell me that I can create an app, a library, or a sample app. I'll create the application here, as I said. To do that, I'll run this command. Cool. CDK is installing everything for me. It will create the folder structure that we'll see in a second. And it prepared a few different things for me. I have commands such as npm run build, which builds my application. I can use npm run watch and have a watcher that will just rebuild everything once I change something. I can run tests, and I can use CDK to deploy my stack, to use synth to create a CloudFormation template from my stack, or to see the diff from my current stack to something that is deployed to production. As you can see, it created the git repository for me. If I do ls right now, that's ls, actually, you'll see that I have git, npm ignore, I have some readme file, and I have some bin folder. Actually, let me show you here. These are the important files here. I have bin. Inside bin, I have this coffee-shop-api.ts, or... the name actually depends on the folder, so it will be called slightly differently for you. Let's see. If I go to bin/coffee-shop-api.ts, inside that file, as you can see, we are using node and it will register source maps. It will import some things from the CDK core, it will import our main stack from lib/coffee-shop-api-stack, it will create a new application by running new cdk.App(), and it will basically initialize our stack. That's it. Nothing special. You don't need to change this initial file; it's just auto-generated for you.

They also create this cdk.json file for us. If I go and see what's in it... actually, let's do this, this will be a bit easier. cdk.json, as I mentioned, has this command, which basically tells us that it runs our application by doing npx ts-node blah, blah, blah, and it points to that file in the bin folder that we just saw. And there are different things here that they just set up for us. Probably the most important part of this is the lib folder. Inside this lib folder, they created the new stack for us. So what's a stack? A stack is basically one CloudFormation application. Whenever we deploy, it will deploy everything defined in this stack. Instead of writing it the way they put it here, I prefer importing Stack, Construct, and StackProps directly, so the code looks a bit nicer. And basically, when we define our Lambda functions and everything else, they'll go here. As you can see, right now this is really, really simple; there's nothing special here. They also generate a test for us, and the tests are really simple. I think this is just for testing, and it will just create a new application.
They'll create a new stack and they'll use expect from the CDK assert library, which is basically an extension of Jest, to be sure that the created template looks exactly like this. Actually, I can always run npm run cdk synth to see which CloudFormation file it will output for me. It takes some time to run it the first time, but here it is. So this is what it would deploy when I run deploy. It created resources for me, and it's just some metadata for us, nothing special. And some conditions. The metadata does not deploy any resource on AWS; it just tells AWS that there is some metadata related to this stack. And they also have some condition that defines different regions and things like these. So right now we kind of have something, but nothing really special. And we can go back to our browser and see. So this is what we have right now. But we actually want a slightly different architecture here, in the layout of our folder. We want to keep this readme file. It's okay to keep this bin file. It's okay to keep cdk.json and the Jest config.
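
For reference, the generated entry file in the bin folder looks roughly like this (a sketch; the exact file and stack names depend on the folder name you used):

```typescript
#!/usr/bin/env node
// bin/coffee-shop-api.ts — roughly what cdk init generates
import 'source-map-support/register';
import * as cdk from '@aws-cdk/core';
import { CoffeeShopApiStack } from '../lib/coffee-shop-api-stack';

// Create the CDK app and initialize our (still empty) stack.
const app = new cdk.App();
new CoffeeShopApiStack(app, 'CoffeeShopApiStack');
```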

8. Organizing Functions and Modifying Config.js

Short description:

We'll organize our functions in a single folder and move tests to the same folder. Each function will have its own files, where we'll wire the dependencies, add business logic, and parse events. We'll have tests for each function and common files for shared functionality like a coffee database repository. To achieve this structure, we'll modify the jest.config.js file to remove the unnecessary tests folder and add the functions and lib folders.

It's okay to keep lib. And I don't like having tests in a separate folder like this. Instead, we'll move tests to the same folder, because we also want to have our functions here. I can name this folder however I want, for example source or something like that. I like naming this folder functions, because then we have all our functions in one folder. For example, we'll have one function to add a new coffee, as you probably remember from the diagram that we had. This function will have some initial file; we'll go through all of these files when we start writing our functions. The initial file will not have any tests, it will just wire the dependencies; we'll have some business logic, something that will parse our events, and then we'll have tests for this function only. And then we'll have more different functions here. And finally, these functions will share some common things. For example, maybe we'll have a coffee database repository or something like that, maybe we'll have some types, and we can put them in common. So we'll try to build this kind of structure, and to be able to do that, we just need to change jest.config.js a bit, because by default, as I said, they want to have tests for CDK here. I'll delete these tests because we don't need them right now. Instead, I'll just go to the Jest config file, and as you can see here, they're checking for tests in the root directory slash test. Instead of that, I want to have functions here, and I also want to have lib here, because maybe we want to test our infrastructure and make sure that everything works fine there. So, yeah, that's it for this part.
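
For reference, the tweak to the generated Jest configuration could look roughly like this (a sketch based on the default jest.config.js that cdk init creates; your generated file may differ slightly):

```js
// jest.config.js — point Jest at the functions and lib folders instead of the default test folder
module.exports = {
  testEnvironment: 'node',
  roots: ['<rootDir>/functions', '<rootDir>/lib'], // was: ['<rootDir>/test']
  testMatch: ['**/*.test.ts'],
  transform: {
    '^.+\\.tsx?$': 'ts-jest',
  },
};
```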

9. Updating Website and Future Assignments

Short description:

I'll update this website and add more steps in the next few days. I'll also modify the workshop and send additional assignments to my mailing list. If you need help, reach out in the Discord or Zoom chat.

I'll again re-run my poll that tells me if you finish the assignment. You don't need to answer with no. Come on, here it is. When you finish the assignment, if you can just click on yes, so we know when everyone, or most of you, finish this assignment. It's not a real assignment, of course, it's just like setting up the CDK project. And then we'll slowly start adding things to this.

What I'll do after this workshop, in the next few days, is update this website and add multiple different steps. For example, I'll go through your questions. What was the npm run cdk synth command? So it's npm run cdk, because you can avoid that by installing CDK globally. I don't have CDK installed globally, or maybe I do, who knows. As you can see here, it basically installed aws-cdk as a dev dependency, and when I run npm run cdk, it runs this local version of CDK instead of a global version. So, as I said, whenever you ask some questions, I'll try to update this website in the next few days, and you'll be able to access this and try it again later, and we'll have more material here in the next few days. I also plan to modify this workshop a bit and create a few more additional steps, because we won't be able to finish everything, and send that to people on my mailing list so they can do it asynchronously. Maybe one assignment every day or something like that will be sent by email, but we'll see if that works or not. Let's wait a bit more to see if other people will finish this. If you need any help, again, feel free to write in the Discord or Zoom chat and I'll do my best to answer.

10. Construct Library Levels and Usage

Short description:

The Construct Library for CDK contains different levels of constructs. Level one constructs are the same as CloudFormation constructs. Level two constructs, like the HTTP API version two, require less code and have better defaults. Level three constructs are small applications that set up multiple services. The stability of constructs can be determined by their labels. Green means stable, yellow means experimental, and red means not implemented. Some constructs, like IOT Things Graph, may not have documentation. Nicer constructs, like AWS Lambda and AppSync, provide easier usage and generate many things in the background for you.

One good question is: is the Construct Library the equivalent of npm? Actually, it's not. Everything that you see in this Construct Library is published on npm. This is basically just the catalog of all the different constructs available for CDK. Every time they publish a new version of CDK, they take all the things that exist in CloudFormation and compile them to CDK; these are level one constructs. Level one constructs are exactly the same as CloudFormation constructs. And then they also add a few specialized constructs; these are level two constructs. For example, the HTTP API version two that we just saw requires less code, it has better defaults, and things like this. So that's a level two construct. And finally, they have level three constructs, which are basically small applications containing a few different services that they just set up for you. Not all of the level two or level three constructs are stable. You can follow the stability by looking at these labels. Green means stable, this one means experimental, and this one means not implemented. And there are a lot of things here. You can see these small labels here; for example, this is IoT Things Graph, whatever that means, and as you can see, it is basically compiled from CloudFormation because it has the CFN prefix. Actually, this one does not have any documentation, so let's try some other one. Kinesis Analytics, let's hope that this one has better documentation. As you can see here, they have CfnApplication. That means that this is a level one construct; they just generated it from CloudFormation, and it's not really easy to use. As you can see, there's nothing special in the documentation. But then if you see some nicer one, for example, AWS Lambda... let's see, where's the Lambda one? Oh, AppSync, I know that this one is nicer. This is a level two construct. When you import this, you can just create an AppSync GraphQL API, provide a small number of options, and it will generate many things in the background for you.
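
To make the difference between construct levels concrete, here is a rough sketch using the API Gateway v2 module mentioned above (CDK v1-era packages; both snippets would live inside a stack's constructor, and the names are illustrative):

```typescript
import * as apigatewayv2 from '@aws-cdk/aws-apigatewayv2';

// Level 1 (CfnXxx): auto-generated from CloudFormation, so you set raw CloudFormation properties.
new apigatewayv2.CfnApi(this, 'RawHttpApi', {
  name: 'coffee-shop-api',
  protocolType: 'HTTP',
});

// Level 2: a hand-written construct with sensible defaults; much less to configure for the same result.
new apigatewayv2.HttpApi(this, 'HttpApi', {
  apiName: 'coffee-shop-api',
});
```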

11. Introduction to DynamoDB

Short description:

DynamoDB is a serverless database provided by AWS. It is a document or key-value based NoSQL database that can scale automatically based on load. It is fast, capable of running 4 million transactions per second, and offers fine-grained access control. DynamoDB is used by AWS, including for their high-traffic Amazon e-commerce shopping cart. AWS manages the database, and users only pay for what they use. Data storage has a free tier of 25 GB per month, with additional storage and read/write units charged at affordable rates. DynamoDB uses partitions and partition keys to store and retrieve data efficiently. It supports various data types and offers local and global secondary indexes for different access patterns. The choice between a single table or multi-table approach is a topic of discussion in DynamoDB usage.

Is there anyone with any problems with CDK? Do you need any help? Do I need to repeat anything? If I don't hear anything, I'll just assume that the answer is no, so we can slowly go to our next assignment. And here the fun part starts. The next assignment is basically designing a database. As I said, we'll use Amazon DynamoDB, and I'll borrow an excellent presentation from my friend Aleksandar Simović; he actually wrote that book with me. He has an excellent presentation about DynamoDB, so let's see here.

So what's DynamoDB? DynamoDB is basically a serverless database. You don't need to run any patches or updates on the database itself; AWS will do that for you. It's a document and key-value based NoSQL database, so it's not a relational database. It has an event-driven model: you can basically connect it to AWS Lambda, and it can run a Lambda function when something happens inside your table, for example, when you have some new item in the table, and things like this. It scales for any load; when you have more traffic, it can scale automatically. It's really fast; you can run 4 million transactions per second with DynamoDB. And you can do access control with it; you can have fine-grained access by table, by table items, and things like these. For example, you can say that a specific user has rights to write to a specific table with a specific key only, so that user will not be able to write anything else or read anything else from the database. This is really cool when you're building something that needs to be secure. And the good thing is that, of course, it's built by AWS, but it's also used by AWS. There are some really nice papers, and actually Werner Vogels, who is the CTO of Amazon, has a PhD, and his thesis was on Dynamo, as far as I remember. AWS has been using it from day one, and they use DynamoDB to run the Amazon e-commerce shopping cart. So they use it under a really high load. If it works for Amazon's shopping cart during Black Friday, it will a hundred percent work for you with any load that you have.

The cool thing is that everything is managed for you. You don't need to provision any hardware or anything like that; availability zones and everything else are handled by AWS. There are really cool articles, papers, and also some conference talks about how they do that. And of course, you can easily monitor and see what's happening in the database, but they're doing the patches, updates, fixes, and things like this. So that's really cool: you don't need to care about the database itself, you just need to think about how to use it, and that's it. As with everything else in the serverless world, you pay only for what you use, except for the data storage of course, because they need to store some data for you. The first 25 gigabytes of stored data each month are free, and after that you pay 0.25 cents, sorry, dollars, per gigabyte each month. You also pay per write and read units. So if you write a million things at the same time, it will cost you $1.25, and if you read 1 million things at the same time, it will cost you $0.25. These are basically parallel requests. This is not that expensive. I tried to use it under a heavy load; as I said, we had like 250 million write units to our database because of some bug, and it added an additional $300 to our invoice, and everything worked perfectly fine for our users.

So the cool thing with DynamoDB is that it scales really, really well. How does that work? They have partitions, and they have a partition key that is basically the primary key inside DynamoDB. The partition key means that everything with the same partition key will be stored on the same physical machine, on the same hard disk, and each of these partitions is completely independent. So if you have two different partition keys, they'll be in different places, somewhere in an AWS data center. And it's really cool and fast when you're searching things by one partition key.
But if you want to get all the different things with different partition keys, DynamoDB is not really good for that. So it's not for analytical data, for sure. There are no special limits. It has auto scaling: right now you can turn on auto scaling, which we'll use today, and it will automatically scale the number of read and write units up for you, and it will scale all these units down for you automatically. You basically pay for this red part, but you don't need to manage it manually; it will automatically adjust to your traffic. As you can see, there are always enough read and write units available, and as soon as you don't have enough traffic, it will scale everything down so you don't pay for it for too long. A cool thing with DynamoDB is that it has single-digit-millisecond put and get requests. If you do a put or a get, you'll get a response in less than 10 milliseconds. They have some custom SSD-based storage platform and things like these, but these are not important for today. There are also global tables; you can have the same things in multiple regions and things like that, but this is not really important for today either.

What's important is data types. Inside DynamoDB, you can store strings, numbers, and binary data, and you can also store multivalued data such as maps or arrays. And as I mentioned, there's a primary key called the partition key, but the primary key can be that one partition key, or it can be a combination of partition and range (sort) keys, so two different fields. Either the partition key needs to be unique, or the combination of these two. And this is something that will help us build really cool things later. And as explained here, it's unstructured: you need to have these keys everywhere, but the rest of the data is up to you. You're not able to efficiently query by that other data; you need to use keys to query your data. As I said, uniquely identified by single keys, blah, blah, blah, that's fine. It uses the partition key to know where to store the data and to be able to return the data really fast. There are some local secondary indexes that can help you add some different access patterns; for example, here, as a local secondary index, country and year can be a key. And there are also global secondary indexes, which basically take this table and create a copy of that table somewhere else. We'll not go through the whole presentation; there are really cool things here, so I would suggest you go through it yourself. I'm not sure if there's a recording of this talk available anywhere, but anyway, the presentation will be useful.

And a big discussion in DynamoDB is: should I use a single-table or a multi-table approach? So what does that mean? If I go to the AWS console and type DynamoDB, it will take me to the service called DynamoDB. Just a second; there's a preview of the new console, and I'll switch to that one, because sooner or later it will switch to it anyway. As you can see here, there's no databases concept, there are only tables. In DynamoDB there's always a table; there's no database with multiple tables, a table is basically a database. There are two different approaches you can take with this. You can have multiple small tables for everything that you need.
For example, you can have one table for users, another table for, let's say, coffees, another table for orders, and things like these. But the recommended way is to have everything in a single table. And this sounds really weird when you start using DynamoDB for the first time.


Storing Data in DynamoDB

Short description:

We can store different types of data in separate tables or use a single table. With a single table, we can efficiently retrieve related data using primary and sort keys. Adding global secondary indexes allows for additional search patterns. However, there are limitations, such as not being able to search for specific values within a field. Designing the database to accommodate potential future patterns is important. Adding additional tables or global secondary indexes can be done without rebuilding the entire database. Performance considerations depend on the frequency of certain routes and the use of global secondary indexes.

But let's see how does that work with DynamoDB. So we want to store coffees and we want to also store some things about our customers such as profiles and maybe a cart for our customer that is shopping right now. And we want to store also orders for our customers. We can have different tables for that. For example, one table for coffees, one for customer profiles, one for customer cart, one for orders and that will work fine. But in some cases that will not be efficient enough.

If we put everything in a single table, it can look something like this. This PK is basically our partition key, and this is our sort key. And then we have some data, which is not important right now, and we can have some global and local secondary indexes. As I said, everything with the same partition key will be stored on the same partition, on the same physical disk. So coffee and coffee will be on the same machine, and customer 1-2-3, customer 1-2-3 and customer 1-2-3 will be on the same machine. If I use a query or something like that to get all the coffees, I can just tell DynamoDB: return everything with the partition key coffee, and it will return these two items. But I can also really efficiently say, for example, return only the coffees from Brazil, by saying the partition key is coffee (I always need to provide the full partition key), but for the sort key I can tell DynamoDB: give me everything that starts with country, hashtag, Brazil. So it will give me just this one, but if I had multiple coffees from Brazil, it would give me all the coffees from Brazil. Again, if I want to get, for example, the customer profile page, I can easily say: give me customer 1-2-3, by just providing the partition key, and it will return the profile information, the cart, and the orders. But if I want only the profile for one customer, I can say: give me customer 1-2-3 with the sort key profile, so it will get me just this one line. Or if I want the orders, I can say: give me customer 1-2-3 with orders 1-2-3 from date blah, blah, blah. So as you can see, with different keys we can get really different access patterns. And this is cool because if I use one query to get a few different things, it will return everything related to this one customer in milliseconds, which is really, really fast. But of course there are some limitations: I'm not able to tell DynamoDB, give me just the coffees named Colombia or something like that. Is this something that sounds clear? Does it sound weird? Actually, I have an idea. Let's see who has experience with DynamoDB. I think I have this one, yeah. Let's see. How do I zoom out this? Yeah, like this. So many people, actually most of the people, never used DynamoDB, and that's what I expected. Is there anyone that used the single-table layout for DynamoDB? You can type in the chat or feel free to unmute and tell me. Oh, can we take a quick break? Of course we can. Let's take a five-minute break and continue at 17:30 my time, in seven minutes. Is that okay for everyone? That's good. Okay, I'll stay here, so feel free to ask any questions in the meantime, but we'll not continue from here. Can I ask you a quick question just about the schema above? The slide above the... Just scroll up a little bit. Where are you deciding on your partition key and sort key? Because you mentioned that... This one? No, go up, just go up a bit. Oh, this one. That one, yeah. So let's say, for example, on my front end I want to actually search by topic. Would that mean that I have to rebuild my database just to search by that different field? I mean, would that mean that I have to rebuild my database because I haven't thought ahead and set topic as an index key? So, long story. Actually, the short answer is yeah. There are some... actually, you don't need to rebuild everything.
Fortunately, you can add, you can't add local secondary indexes without rebuilding the database, but you can add global secondary keys without rebuilding the database. So, you can easily add like a global secondary key that is, for example, album or whatever here. So, this will become your index. But again, if you want to have a full text search, you can do even that with DynamoDB. It's a bit more complex, but we do that for users in vacation tracker. What we do right now for vacation tracker, this primary key is a company ID. We have a special table built for user index because we have a CQRS. We have an event table that will just build some read-only tables that are temporarily built for us. So, primary key is company ID. I'm not able to see any other company data except my company. Secondary key is basically something, for example, active users or something like that. And then, I can say, give me a company, one, two, three, for example, let's say Netflix, give me active users from Netflix, and then use a filter on one other field that we have, which is basically username, email, and few other things, and filter by do a full-text search on that field and say, give me just users with the name, let's say Slobodan or something like that, or users that contain letter S or something like that. It's a bit more complex, but it works really fast. Cause, you know, coming from a relational database, you don't really expect to have to guess ahead of time which columns you can search, cause you should be able to- Of course, of course. So what you're saying is, but this pattern though, if you set the data, you don't have to worry about, you know, if you have a requirement that comes down the line, you can just use this pattern to kind of handle that. Is that what you're saying? So basically, yeah, it's a really good, practice to, before you start using, if you decide to use single table layout, which many people don't use, of course, to try to understand your patterns a bit in advance and try to even build some patterns that you're not using right now, but you might use at some point. But if you really need to add something down the line, you can easily create another table or you can add a global secondary index, you can have more global secondary indexes that are able to cover some of these situations. I see. So you're able to patch everything down the line. Of course, there's one more thing that you can do. You can rebuild the database and just fill it with the data, which you don't want to do that often. So yeah, the easiest way to do is to spend some time. Actually, it's not easy, but the best way to handle this is to spend some time try to understand all the patterns here and try to design your database to be able to support even patterns that you don't have at the moment. And then you'll be able to easily adjust to some new situations. If there are some situations that you're not able to adjust to, you can add global secondary index without rebuilding this database. It will just like take the data from these database, create the copy somewhere in the background as a basically that global secondary index, and then you'll be able to use a different primary key and different sort key for that index. Right. So if it would depend then on the performance characteristics then, I suppose. Yeah, yeah. Of how many times that route is used, that that's how you probably would decide whether you have to rebuild the database or just add a global secondary index, I think. 
Yeah, but global secondary index is still really fast. It'll just give you the different primary keys, sort key somewhere. Just imagine the copy of these database in like on some other location. So it will allow you to do that. Sorry, there's one more question from Vitaly. I hope Vitaly is still here. Actually I'll wait to be sure that Vitaly is here for two more minutes and then I'll answer the question. Yeah. I appreciate like a mask required. I'm here. Oh, you're here. Perfect. So what is the benefit having one table for all data rather than a different tables for each kind of data? Could be a performant issue if one single table grows too large. So basically if we go back here, AWS guarantees us that as long as we have partition keys that are evenly ordered, for example, we have UDIDs for customers or something like that, this will be really performant because it will store each partition. It will just make sure that each partition is on a separate machine.
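To make the filtering pattern from a couple of answers above a bit more concrete, here is a minimal sketch of what such a query could look like with the DocumentClient from the AWS SDK v2 (the SDK that ships with the Node.js Lambda runtime). The table name, key format and attribute names below are assumptions for illustration only, not the real Vacation Tracker schema.

```typescript
import { DynamoDB } from 'aws-sdk'

const documentClient = new DynamoDB.DocumentClient()

// Hypothetical example: active users of one company whose name contains a search term.
// Table name, key format and attribute names are assumptions, not the real schema.
export async function searchActiveUsers(companyId: string, term: string) {
  const result = await documentClient
    .query({
      TableName: 'user-index-table',
      // The key condition narrows the query to a single partition (one company).
      KeyConditionExpression: 'PK = :pk AND begins_with(SK, :skPrefix)',
      // The filter runs on top of that result, so it doesn't replace good key design,
      // but it allows a "contains" match on a non-key attribute.
      FilterExpression: 'contains(#name, :term)',
      ExpressionAttributeNames: { '#name': 'name' },
      ExpressionAttributeValues: {
        ':pk': `COMPANY#${companyId}`,
        ':skPrefix': 'USER#ACTIVE#',
        ':term': term,
      },
    })
    .promise()

  return result.Items || []
}
```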

Single-Table vs Multi-Table Layout

Short description:

By using a single-table layout in DynamoDB, we can retrieve data for the same customer much faster than querying multiple tables. It is also possible to use a multi-table approach, especially for beginners. However, the single-table layout provides significant advantages and can be considered a superpower for efficient data retrieval.

As long as we do that, they'll be able to provide us a really fast database that can return the data in single-digit milliseconds. So the big advantage of this is that if you need to get many different things for the same customer, for example, or something like that often, it will be much faster than to query three different databases. If you're querying three different, actually not databases but tables, if you're querying three different tables for each table, you have some time that you connect to the table, then you have like a few milliseconds to get that data and then you need to join that data somewhere in your code. Here you can run one query and get everything that you need in the same query. This is not the only way to use. It's completely okay to use a multi-table layout. And I actually recommend you to use a multi-table approach if you're new to DynamoDB. But this is something once you start using DynamoDB a bit more, this will give you basically superpowers because you'll be able to get all the data for a specific customer or something like that really fast. I hope I answered that question good enough.
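As a rough sketch of that "one query instead of three tables" idea: with the example layout above, all items that belong to one customer share the same partition key, so a single query can bring back the profile, the cart and the orders together. The table name and the exact sort-key values below are assumptions based on the example, not code from the workshop project.

```typescript
import { DynamoDB } from 'aws-sdk'

const documentClient = new DynamoDB.DocumentClient()

// Hypothetical single-table read: fetch the profile, cart and orders of one customer
// with a single request, because they all share the same partition key.
export async function getCustomerPage(customerId: string) {
  const result = await documentClient
    .query({
      TableName: 'coffee-table', // assumed table name
      KeyConditionExpression: 'PK = :pk',
      ExpressionAttributeValues: { ':pk': `CUSTOMER#${customerId}` },
    })
    .promise()

  const items = result.Items || []
  return {
    profile: items.find((item) => item.SK === 'PROFILE'),
    cart: items.find((item) => item.SK === 'CART'),
    orders: items.filter((item) => typeof item.SK === 'string' && item.SK.startsWith('ORDER#')),
  }
}
```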

Exploring DynamoDB and the Single Table Layout

Short description:

If you want to learn more about DynamoDB, I recommend checking out the DynamoDB Guide website and Alex DeBrie's book, The DynamoDB Book. There is also a helpful YouTube video by Rick Houlihan that explains DynamoDB and the benefits of the single-table layout. Adding a global secondary index can help retrieve data efficiently. DynamoDB is a good fit for serverless applications, but it may not be suitable for all use cases. Feel free to reach out with any questions on Twitter.

Yeah, I probably need some time to wrap my head around this concept. Maybe you can recommend some article or something to read about one table versus a lot of tables. Just a second. There's actually an excellent website by Alex DeBrie. I'll paste this website. Just a second. DynamoDB, DynamoDB. Oh yeah, DynamoDB Guide, that one. Alex has a really great book called The DynamoDB Book that I really recommend, but there's also this DynamoDB Guide, which is an excellent resource for learning more about DynamoDB. I'll post this in both Discord and our Zoom chat. Also, as I said, I recommend his book. It's a really good book. And if you prefer a video, there's actually an excellent explanation on YouTube by Rick Houlihan. Rick? Here it is. The re:Invent session. What was the year? Yeah, I think it's this one. Oops, sorry. So Rick actually works at AWS, and he talks about how they use DynamoDB. He's probably one of the people that knows the most about DynamoDB. So he talks a lot about different access patterns and he's promoting the single-table layout a lot. So if you want to start from something, I definitely recommend this video for you. It's not that long. I think it's one hour long or something like that. So yeah, it will definitely answer a lot of questions. Yeah. It's one hour. It will answer and give you a lot of different insights into how to build efficient databases with that single-table layout, and what they do at AWS. And there's also something that they built called, I think, AWS NoSQL Workbench. Yeah, this one, which helps you build these structures using a visual builder or something like that. So I'll try to put that, so I don't bombard you with links, I'll try to put all these things on this page here at some point this weekend or early next week. So I hope we can continue. You didn't miss anything special if you took a short break. We just discussed this single-table layout. This is something really complex, so I don't expect you to understand it right now. It's a really cool concept, on the other hand. Take a look at the links that I posted in both chats. You should start with the YouTube video if you want to learn more about this, and then if you want to dive even deeper into DynamoDB, there are a lot of excellent blog posts by Alex DeBrie, and he also wrote an excellent book about DynamoDB. So, if we go back to our table here, as you can see, I kind of built some database here, but there's one big problem with this database. And the problem is basically: how do I get all the orders by different customers? As I said, there's no efficient way for me to get all the data from these columns here without providing a partition key, and of course each customer will have a different partition key if I want to get different orders. So this is not covered by this exercise here, but if anyone has any idea how to do that, fortunately we have Excalidraw, so we can do a quick sketch here. Any idea? Use Postgres? That's a good idea, but no, we'll not do that. Why not Postgres? So, by default, Postgres is not serverless, so I need to think about scaling and everything. I can use AWS Aurora, which is part of RDS. There's a serverless Aurora, which is a database that looks like Postgres. They call it serverless, but it's much more expensive than DynamoDB, and it takes more time to scale and everything.
We can definitely use RDS, but the problem, again, with RDS is that it's more expensive and you need to think about your RDS to scale your RDS and do things like this. But what we can do, as I said, we have a primary key, and then we have some sort key. And this is great for our, if the data is relational, just a second. Oh yeah. If you have relational data, of course, if you want to do some analytics or something like that, DynamoDB is definitely not the best solution for you. So feel free to use a relational database. But as I said, I think DynamoDB is really good fit for serverless applications. And of course, it's not fit for everything. But for us, for example, it works really, really fine. And we have some relations in our data. It's up to you, it's like your decision in the end, which database will you use and how will you structure your database? I'm just showing you how to use this non-SQL database that will cover most of your use cases, but this is definitely not the only solution. So as I said, if we have like primary and secondary key, and we're not able to get something by these primary and secondary keys, what we can always do is basically adding another set of global secondary index, for example. It's called... Come on. So inside the same database, we can add global secondary index primary key. This is the first global secondary index, for example. And a global secondary index one sort key. And basically, we add two additional tables to our database. But what is happening in the background is that DynamoDB copies, everything somewhere and treats these as a primary key and this as a secondary key. So as a primary key, we can have orders. And then as a secondary key, we can have like dates or something like that. Just imagine that if we go back here, that here, we have like some two other keys, one key will be order, which is primary key and the secondary key will be this date. So we can easily get all the orders for the specific date and things like these. And yeah. Is this like materialized view in relational database? Yeah, it's something similar to that. But it's basically... Somewhere in the background, they create a copy of your table. You can decide what they want to copy. But yeah, it's kind of that. Any other questions related to this? We'll try to use it now. So even if you don't have questions right now, it's fine. I know you'll have a lot of questions about DynamoDB. Feel free to reach me even after this workshop. You can do that by using Twitter, for example.
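Here is a minimal sketch of how the "all orders for a date" pattern described above could be queried once such a global secondary index exists. The index name GSI1 and the attribute names GSI1PK and GSI1SK are assumptions for illustration; this index is not actually created in the workshop stack.

```typescript
import { DynamoDB } from 'aws-sdk'

const documentClient = new DynamoDB.DocumentClient()

// Hypothetical access pattern: all orders created on a given day, across all customers.
// Assumes a global secondary index "GSI1" where GSI1PK = "ORDER" and GSI1SK starts with an ISO date.
export async function getOrdersForDate(isoDate: string) {
  const result = await documentClient
    .query({
      TableName: 'coffee-table', // assumed table name
      IndexName: 'GSI1',         // query the secondary index instead of the base table
      KeyConditionExpression: 'GSI1PK = :pk AND begins_with(GSI1SK, :date)',
      ExpressionAttributeValues: { ':pk': 'ORDER', ':date': isoDate },
    })
    .promise()

  return result.Items || []
}
```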

Creating DynamoDB Table with CDK

Short description:

We will now create a DynamoDB table named coffee table with a partition key called PK. The type of the partition key can be any suitable type such as string, number, or binary. This table will be used to store data related to coffee, and we will explore the use of composite keys and local secondary indexes. We will use the AWS CDK DynamoDB construct to create this table. Let's proceed with the implementation.

Here's my Twitter handle. Or you can write me an email and I'll be happy to answer anything else that you want to know about DynamoDB. I'm using it a lot. I'm definitely not the expert; if you want an expert for DynamoDB, you probably want to talk to Alex DeBrie or someone else. But I can answer a lot of other questions, and I've been using it in production for a few years now, and so far it works really nicely for me. So I'll go back here.

Now that we have some database structure, I have it here too. As you can see, I'm using this a lot. These are called composite keys. So I have some string, country for example, then I have a hashtag, then I replace this with the real country, for example Brazil, Colombia, or something like that. Then I have again a hashtag, then name, and then I put the real name here. I can put availability and many other things. As you can see here, we have some local secondary index just to show how that works. But basically what's happening here is that I have these composite keys, so I can search different patterns, the same way I mentioned here. So I can say the partition key is this, and for the sort key I can just say: give me everything that starts with country Brazil. And then we'll see all the coffees from Brazil, but not coffees from Colombia or Kenya or some other countries. So let's try to build that with CDK. Basically our assignment will be... here I explained the constructs. Again, I'll move that to the previous step, to project setup. There are some explanations for L1 and L2 constructs, the thing that I mentioned, and here's the assignment: create the DynamoDB table with the name coffee table, with the following partition and sort key, and let's add just one local secondary index. As you can see, there are some solutions here, so you can follow this later, but I'll try to type now and you can follow along now if you want. I'll go back to my project. It's the same thing that you already saw. Everything that I'll do right now, I'll do in this lib/coffee-shop-api-stack file. So, to start with everything, I'll go back here just for a second and copy this. So actually let's go here. If I go here and try to find DynamoDB, I'll see that there's a construct for DynamoDB. Here it is. There's a separate construct for global tables; we'll not use global tables right now, but global tables are cool if you want to have everything in multiple regions in your AWS account. That makes everything slightly more complex, so we'll not use that right now. You can always deploy the same application in multiple regions and have some global API that will tell your customers that a customer is from, let's say, the US, and that it should query a different API or something like that. There are a lot of different things that you can do here. But for AWS DynamoDB, I can just copy this thing here for TypeScript and install it. So I'll go here inside my CDK project and simply run npm install and then, oops, just copy this, come on. Okay, I want to install the @aws-cdk/aws-dynamodb construct. If I install it, it will simply add that to my package.json, package-lock and everything else, and I can use it in my project. Let's wait a bit. And then let's wait a bit longer. Hopefully it will finish quickly. As you can see now, the version of CDK is 1.91. Oh, should this file already exist? So if you called your API slightly differently, you should have something else there. It's lib slash something.ts, the only file in lib that you have right now. Oh, so if you have coffee-shop-workshop-stack, that's because you named your folder coffee-shop-workshop instead of coffee-shop-api, and that's fine. Just use that file instead of this file. It doesn't matter, as long as you have just one file inside this lib folder and you work in that file. Yep, I used the wrong name here, but it's a really good question, thank you very much.
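Since those composite keys are just strings glued together with a separator, a couple of tiny helpers can keep them consistent across the code base. This is only a sketch that follows the country#...#name#... pattern shown above; the helper names and the exact format are up to you.

```typescript
// Small helpers for the composite keys from the example layout.
// The "country#...#name#..." format follows the pattern shown above; adjust it to your own needs.
export function coffeeSortKey(country: string, name: string): string {
  return `country#${country}#name#${name}`
}

export function coffeeCountryPrefix(country: string): string {
  // Used together with begins_with() to fetch every coffee from one country.
  return `country#${country}`
}

// Example:
//   coffeeSortKey('Brazil', 'Santos')   -> "country#Brazil#name#Santos"
//   coffeeCountryPrefix('Brazil')       -> "country#Brazil"
```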
Okay, so I installed this finally. If I go back to here, I can do import something from AWS CDK DynamoDB. I don't have this in a poll, but do you use semi-columns or not? I'm not using them, but whatever, I have it here now so I'll try to use it in this project. I should create a poll next time for this. So there are different things that I can import from this, of course I'll need table. And as you can see here, there are some cheats that we can just copy and paste, but we can also try to build this by ourselves. So just a second, so I'll put my chat. Okay, if I go back here, I can say something like const coffee table. Coffee table. I'm not using exactly the same naming as I have in this. So sorry if something is not exactly the same. It doesn't matter. Names are just names. You can name your table however you want and we'll see soon what will be created for us in the AWS. So if I do new table inside this, we first need to add the scopes. So scope is always this. So this is basically our stack. Then we need to add some name. It's basically just a string. We can call it coffee table. And then finally, we'll need to add some object that will define some keys and things like this. As you can see here, we need to provide partition keys. So let's try it partition key. We can name our partition key however we want. It doesn't really matter. But I prefer if we are using single table, yeah, pretty can help you with semicolumns. That's gold trick, of course. But yeah. So you can name your partition key however you want. As you can see, in here we need to define our name and type, type can be, as we saw somewhere here. There are different types called, including like strings, hold on just a second, is it here? Yeah. Strings, numbers, binaries, and blah, blah, blah. Doesn't really matter. But as we have this approach, instead of calling this coffees or something like that, I love calling it primary key or sort key. So it's clear that this is primary key. For everything else, I named these additional data columns by the data that is inside that column. But for these two, I prefer having like this kind of name, like partition key, sort key, so we can easily know that this is related to that kind of data. So, as I said, we need to add the name. I'll call it PK, as partition key. And we also need to add type. Type is what? Just a second. If I go, I can always go here to the documentation.

Defining DynamoDB Table

Short description:

We define the type of attributes for our DynamoDB table, including the partition key and sort key. We recommend using the billing mode 'pay per request' for automatic scaling. Adding a local secondary index allows for additional search patterns. We can run 'npm run CDK synth' to output the CloudFormation template and verify the table creation. The CloudFormation template creates a table named 'coffee table' with the specified attributes. It also explains the key schema and the multiple names used for the same keys.

As I can see here, type is attribute string. Attribute string is, again, something that we can get from here. Attribute type, not string. Sorry. And I can say here, it's string. As you can see, these are the things that we already saw there. Besides these, we can use maps or, or sorry, or arrays, but we don't use maps or arrays for primary key because that doesn't make any sense. So I'll put that this one is string because in our structure here is basically a string, nothing else.

Let's also define the sort key. Sort key should look similar to this. Name is SK, like sort key, and type is also attribute type dot string. And that's it. We now have everything we need. One more thing that I love adding here is billing mode. There are two different billing modes. One is basically, just a second, if I go here, let me show you, ah, come on. Okay. Billing mode dot paper request or provisioned. For provisioned, you need to define how many write and read units do you want to have at any point? I always recommend paper use because with paper request, you don't need to care about that. Instead, they'll just scale everything for you as we saw here, in this image. So this will happen automatically for you. Everything will be scaled, you'll pay just for these red things available for that moment and they'll scale this down really fast. If you need to do that by yourself, you need to make sure that you can scale fast enough and then that you downsize your database after your peak is done. Cool. So yeah, that's it. Now we have some kind of a dynamoDB table, the same way that we define it here. And if you want to add a local secondary index, as I said, local secondary, global secondary index can be added afterwards. If we add local secondary index afterwards, then it will break basically this, so clear this database. We don't want to do that. So let's just add this one local secondary index right now, even if we don't really need it right now, maybe we'll use it a bit later. So in our local secondary index, we need to name our index and we need to provide sort key. Local indexes are using the same partition key. We just need to use the different sort key, but if you provide global secondary indexes, as you can see here, hopefully they have, no, they don't have an example, but for this, you need to provide the new partition key, new sort key if you want. And of course, if you want to create just the projection, you can do that here. And tell DynamoDB that you don't want everything to be copied to that secondary index. So the name is, again, not that important. We'll use it later, but let's just call it LSI1. And then we want to add the sort key. We'll add a new column called LSI1, and then we also want to add the type, attribute type string. That's it. We now added our database. To be sure that we got something, let's just run NPM run CDK synth. That will output our CloudFormation template here. It's not deploying anything right now, it's just outputting this. Here's what we have right now. Just a second. No, I'm on screen, sorry. Can you show the file for that minute so we can copy? Yes, of course. But if you want to copy this, the better way to copy this is, just a second. So the better way to copy this is to go here and basically copy this part. Oh, maybe not, because I added one blank line problem. Anyway. Where is that? That's in the... So, yeah, this is the, just a second, I'll send this. Yeah, Aaron, send it in the chat, I'll send it in the discord. Aaron is faster. Thank you, Aaron. So I just added, it takes part of my source code side, one blank line. That's why you don't see this here. But yeah, I'll edit. I'll show my window again. Sorry for that. So for the structure, again, it's the same right now that we had before. We have bin, we have lib, and node modules, and that's it. We'll create functions here a bit later, but yeah, we don't need to create that right now. So I didn't change the structure of the project. It's the same structure that CDK created for me. 
And I'm just editing this lib coffee shop API stack, or if you named your folder differently, it will be coffeeshop, workshop stack or something else. So let me try just to deploy this so you can see what's deployed first. Here's the CloudFormation template. It created the table called coffee table. It's this name. It just added some something after this. So even if I named two different things the same, it will create two different things here. And the other thing why it's doing this, is if I want to deploy multiple stacks in the same account, this coffee table will not conflict with the other coffee table from the other stack. This is type DynamoDB basically. Is there anything that... Yeah, there was a message, but I probably missed it. So I'm not answering you, you just pinged me again. There's a key schema here, we have a primary key, which is basically our primary or hash key, sort key, bigger thing with AWS is that, they have multiple names for the same things. As you can see, they named this partition key here and hash key here. It's the same thing, but they have two different names. The other key is sort key or range key. This is basically the same thing. It's just like, these are the older names that they don't use that much anymore but in CloudFormation, you need to use them. You need to define attributes like, which is string and which is number here by putting this thing. As you can see, this is much more readable than this. But they defined the same thing. Billing method. Here's our billing method here.
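Putting the pieces from this step together, the table definition in the stack file ends up looking roughly like this. It is reconstructed from the steps above rather than copied from the workshop repository, so treat it as a sketch; the construct IDs and key names (CoffeeTable, PK, SK, LSI1) match what was typed in this step.

```typescript
import * as cdk from '@aws-cdk/core'
import { Table, AttributeType, BillingMode } from '@aws-cdk/aws-dynamodb'

export class CoffeeShopApiStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props)

    // Single-table DynamoDB database with generic key names, billed per request
    // so we never manage read and write capacity ourselves.
    const coffeeTable = new Table(this, 'CoffeeTable', {
      partitionKey: { name: 'PK', type: AttributeType.STRING },
      sortKey: { name: 'SK', type: AttributeType.STRING },
      billingMode: BillingMode.PAY_PER_REQUEST,
    })

    // Local secondary indexes can only be added when the table is created,
    // so we add one now even though we don't need it yet.
    coffeeTable.addLocalSecondaryIndex({
      indexName: 'LSI1',
      sortKey: { name: 'LSI1', type: AttributeType.STRING },
    })
  }
}
```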

Deploying Stack and DynamoDB Table

Short description:

We deploy our stack and DynamoDB table using the CDK. The CloudFormation template is created and deployed. We can view the stack and table in the AWS console. The CloudFormation stack provides information about the deployment status and resources created. The table can be accessed and items can be added directly from the console.

And then the local secondary index. Again, we have the same thing here; it just added projection type ALL. And then we have some other things: when we update something, it will retain the database, it will not delete it. When we delete the stack, it will still keep our database. And it added some metadata. So let's try to deploy this. I like to do an AWS "who am I" first, just to be sure that I'm using the right account. Oops. Yeah, maybe I'm using the wrong command, so sorry about this. Yeah, I'm using the correct account. This is the one that I want to use. I can set the right region by exporting another environment variable called AWS_REGION. Let's use, I think it was EU West 1, let's use that one. And now I can do npm run cdk deploy and see what will happen. This is the first time I'm really deploying this to my AWS account. So far we just played with this in our local environment without deploying anything. I would recommend finishing this, and maybe finishing the first Lambda function, and then again taking a short break and continuing. Feel free to tell me whenever you need a short break and we'll make it. So, it's deploying our stack: it's creating our stack and our DynamoDB table. It will take some time the first time you're deploying this. Every next time, it will not create a new table; it will just update the existing table if we change anything, or it will just skip this part if we only add a new Lambda function. So every next deployment should be a bit faster. Okay, that's it, it's deployed. It's telling me that it deployed the CoffeeShop API stack. This is an ARN, which is basically a unique ID. As you can see here, it says CloudFormation, it says my region, my account, stack, the stack name, and then some kind of ID. So everything works. Let's see what we have in our AWS account. So if I go to console.aws.amazon.com, the first thing I want to do is go to CloudFormation and see what happened there. In CloudFormation, I have my new stack. If I click on this stack (someone asked for a visualization), I can go to the designer and see how this looks visually. And so far, so good: I have some metadata, and I have one table. So this designer is useful for now. You'll see soon that it will not be useful anymore when we add more things. But here we can see this template. It deployed the CloudFormation template, not the CDK template, because the CDK outputted this in the background and this was deployed. It's the same thing that we get when we run cdk synth. Oh, sorry, I'll be a bit slower. Sorry. If we used tags, we would see those tags too. These tags are important for billing only; you can see how much each tag is costing you. Here in CloudFormation, we have some stack information, such as the stack ID. This is the same thing that we had here. So the command that we used is npm run cdk deploy. If I go back here, sorry, I see the status: it's created. I see some different things here that I don't really care about. I can see events; these are the things that I saw in my terminal at some point: it's reviewing my stack, it's creating my stack, it's creating my table, it's adding some metadata, metadata is created, table is created, CoffeeShop API stack is completed. I can see the resources that were created: it's basically that metadata that I don't care about, and my table. I can click here and go there. I don't have any outputs, I don't have any parameters, and that's it.
So these are the things that they see here in the CloudFormations stack. The fastest way for me to go to this table is to go to resources, see coffee table and click on this. If I click on this, it will just take me to the other part to the part that we already saw, which is DynamoDB tables. This is my table. Inside this table, I can go to see explore items. As I can see, there are no items here. I can even add some items directly from here. If I go to create items, I can say, so I can type anything here. It doesn't really matter, but let's try to follow this, the same thing that we defined here. So, first I want to have coffee as primary key. If I want to add that, I would add like coffee, just coffee, nothing else, then my sort key, I want to have something like this. Just a second, sorry. Here it is. So I want country, country can be, for example, Brazil. What is in Brazil? No, I have no idea. But for name, I can put the name of the coffee. Let's just call it Brazil. Brazil, it doesn't make any difference. I can add some other additional types and things like these. For example, quantity and things like these.

Creating Items in DynamoDB

Short description:

If I create an item in my DynamoDB table, it will only have primary and secondary keys. The values are stored as strings, even if the data type is specified as a number. This may be due to the way DynamoDB handles queries and makes it easier to search for specific values. At the end of the workshop, we can easily delete everything by running 'NPM run destroy'.

If I create this, it will just create one item in my table. If I scan everything, you'll see that one item. It doesn't have anything else, it just has the primary and secondary key. So that's it. I can see this as JSON. It's a bit weird, as you can see. It's not primary key: coffee and secondary key: this. It's like, the primary key is a string and that string value is coffee; the secondary key is a string and that value is blah, blah, blah. And see one more thing. If I go here and add one more field, just a second, add new attribute, which is a number, and let's say quantity here, and let's put quantity, for example, 10 kilograms, and if I save this, what do you think is the type? So if I go to JSON, will I see number, colon, a real number, or a string? Actually, in the background they say that quantity is a number, but they store this number as a string for some reason. I have no idea why, but yeah, that's how DynamoDB works. Alexander is asking: will we delete all of this at the end of the workshop? Yeah, precision, probably precision. That's a good comment, Robin. But they also might be doing this because of the queries and everything they do in the background to basically get these fields. For example, if you try to find all numbers that start with two zeros, it's probably easier to do that with strings than with numbers. Who knows? So to answer Alexander's question: yeah, we can easily destroy everything in the end by running npm run cdk help... I think it's destroy, but let's see, oops. Just a second. Yeah, we can run npm run cdk destroy. We'll do that at the end and make sure that we delete everything that we created in this workshop.
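To make that JSON view a bit less surprising, here is roughly what such an item looks like in DynamoDB's low-level "typed" format, written out as a TypeScript object. The exact values are just an illustration; the point is the S/N wrappers, and the fact that a number attribute really is carried as a string in this format.

```typescript
// Roughly what the console's JSON view (DynamoDB's low-level format) shows for the item:
// every value is wrapped in a type tag, S for string and N for number (carried as a string).
const exampleItemInDynamoDbJson = {
  PK: { S: 'coffee' },
  SK: { S: 'country#Brazil#name#Brazil' },
  quantity: { N: '10' }, // declared as a number, but serialized as a string
}

// The DocumentClient hides this format and lets you work with plain objects instead:
// { PK: 'coffee', SK: 'country#Brazil#name#Brazil', quantity: 10 }
```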

Adding Items to DynamoDB

Short description:

To add items in DynamoDB, go to the item explorer in the console. Click on the orange 'explore items' button or go to the sidebar and select 'item explorer'. In the old console, go to the 'items' section within the table. To add a new item, fill in the required fields and save. In the new console, go to the 'tables' section, select the table, explore the items, and click on 'create item'. You can add additional fields by clicking the small plus icon. Make sure you are using the same region as your deployment. You can pass a profile flag to NPM run CDK deploy using '--profile' or use 'export AWS profile' in the terminal. Remember, we are creating a table using CDK and will add a simple API in the next step.

So we added our DynamoDB database, I'll re-run my poll just to see if you manage to finish this or not. Yeah, I did that right. How did you get to the item explorer in DynamoDB, I couldn't see. Sure, sure, sure, I'll show you that in a second. So, if I go back to tables, it's a bit weird. So, you're in this view, you have this orange button here, explore items or you can go to the sidebar and go to item explorer and then again, select your table. It's a bit weird, but yeah it's like AWS is definitely not the best with user experience.

So, the easiest way to get to the item explorer is to click on the orange button explore items. This might look slightly different for you if you don't use the current console. If you use the old console, skip and revert. This can look like this for you. So you have tables and then when you go to the table, you have items instead. I see. Okay, now I see what you mean. So I just tried the preview of the new console because this is something that they'll force us to update at some point anyway. But I'll keep this old console for now and I'll try switching between these two. Thank you. Sorry for that. I completely forgot that to tell you that. But I'm there. We need to know how to add items within old view because it's impossible. It's requesting a third key. Can you repeat I? Can you please show how to add items within this view because it's requesting third key while you edit just two. Yeah, yeah, yeah. So you can go here, type coffee again. Come on. Then go to a sort key, country. Name, let's add some other name. And then I'm not sure if we need to add local secondary key in this console. Let's try it. Yeah, it require 4 this local secondary key. For the local secondary key, you can add basically anything, but let's just add, for example, availability here. Availability available. And then if you try to save, we should be able to add this. It's a bit weird because it lets you to add everything without local secondary index from the new console, but not from the old console, but yeah, that's AWS. Feel free to add anything to local secondary key. We just added, so you see how everything works, but it's okay if you just fill anything there. It will not break anything, basically. So let me go back to my poll. Where is it? Yeah. Okay, I think what happened is, most of us had the old version. So you were working on the new. So can you repeat one more time on the new version? I know you added the primary key and secondary. How to add another field? Oh, how to add another field? Yeah, that's a good question. Sorry for that. If you click this small plus here, it will allow you to append another field, basically, which is called, for example, quantity or whatever. So click on small plus here and then you can do remove, you can do insert or you can do append. Append is basically editing after, insert is adding before this field. But where is that on the navigation? Where is that? Yeah, it's, you're in the overview, go to items and click on create item. No, but that's the old console, I mean the new one. Oh, yeah, in the new one, in the new one. Sorry, sorry, sorry, sorry. Yeah, so go to the tables. Yes. Select the table, explore the items, and then if you want to add a new one, here it is, create item. Can you see it? Yes. Great. Okay. And then this is slightly different because you have like, this form and then you have add new attribute instead. I see, I see. Thank you. Mikkel is asking if I'm logged in as a root user. So yeah, I'm logged in into this console as a root user on my subaccount. This is not my root account. Yeah, I'm asking because... So in the terminal I'm like the user I created, I am. Yeah. But on the console on Amazon, I was like the root, so I had to... No, that's fine, that's fine. Is that fine? Yeah, yeah, it's completely fine because both of your users have all the permissions. As you can see here, I created this workshop for me, but they own... 
console access, so I have access key and some secret that is not available here, but I'm not able to log in into this web view with that user anyway. Okay, then I... So this is completely fine. So maybe I think I miss a step then because when we deployed it, you went to CloudFormation and you had... Oh. That you didn't have. Yeah, yeah, yeah, yeah, yeah. So make sure that you're using the same region. So when you deployed everything, what's your region? Do you know what's your region? Yeah, I think I selected while setting up the user a different one than when we deployed. Yeah, just if I go, for example, to, let's say, Yeah. If I go to Canada Central, I know that they don't have anything in Canada Central, you'll see that my CloudFormation is empty and I don't have anything in Canada Central. And then if I go back to the region that they used, which is basically Ireland, I'll see that they have something. Is there a way to pass a profile flag to NPM run CDK deploy? I don't think they accept, actually let's see, I have no idea. If I go back to help profile, profile, profile, yeah. Yeah, they have profile, minus, minus profile networks. You can also use minus, minus region probably, I have no idea, no? No, but you can use minus minus profile or you can use export AWS profile equals something as Aaron posted in the Discord. Slovan, what are we doing here? We have a table and we're creating an item, are we creating a table? What's a... No, no, no, we created a table using this CDK, I just wanted to show you the table, you don't need to fill this table, I just show you how to put something in the table, but you don't need to do that. The next step we'll do, we'll add a simple API here.

Creating API with AWS Lambda and API Gateway

Short description:

We will now create a simple API using AWS Lambda and API Gateway. The AWS Lambda Node.js construct is recommended for TypeScript support. It uses esbuild for faster bundling. We will install the necessary dependencies and build our first function. Then, we will add API Gateway, specifically the HTTP API, which is easier to use and more cost-effective. REST APIs offer different integrations, but we will not be using them in this workshop.

And then the next step is to connect one function with this table and write the, save the new coffee using the API. So this was just the example to see that we have some database somewhere.

I see, okay. So, Ivan, or Ivan, to answer your question from Discord: you tried running npm run deploy minus minus profile ivan? Yeah, it's both, yeah. As Aaron said, you need to have that profile in your .aws credentials file, and in addition to that, just try running this: export AWS_PROFILE equals your name, or the name of your profile, and then you can run this command to make sure that you're using the same... actually, we have it here in the chat, just a second. Someone pasted this to me. Okay, so it's aws sts get-caller... oh, come on, get-caller-identity, and that will print out the account that is currently selected in your... oh, it works with export AWS_PROFILE. Perfect. Okay, maybe minus minus profile doesn't work as it should, but I have an idea why. So Daniel is asking: I don't have the right permissions set up for my IAM user, which permissions do we need and how do we add them? So let me try to answer that. If you go back to AWS, you can go to IAM and select your user. Are you using your company account or your personal account? I'm using my personal account. Perfect, so for your personal account, the fastest way to do that: go to IAM, select your user, go to add permissions here, attach existing policies directly, and try to give it, I think it's admin access or something like that. Yeah, okay. Just a second, I'll show you in a second. This one, AdministratorAccess. Right, okay. With this one, we'll make sure that you have everything. Instead of that, we could use more fine-grained permissions, but it would take a lot of time to add all the things that you really need for this workshop. Okay, I think that was some... Alex, it's working now, thank you. Perfect. So, let's continue. Is there anyone else that needs some help to deploy the table? Yeah, I think I will need help, but I think we can do it on the break. Sure, sure, that works too. Thank you very much. So, the next thing we'll do is creating a simple API. We just want to create a really simple Lambda function and the API, and we want to be able to have a POST to slash coffee that will just return status 200, or 204, it doesn't really matter. So we don't want to store anything in the database yet; we just want to set up our API Gateway and create some small Lambda function that will just return that everything is okay. Let's try to do that. So, how do we do that? There are a few different things. As I said, there are some solutions down there, and you can follow them a bit later if you want; I'll try to type now. I'll go back to this CDK constructs list, and the first thing that we want to do is to add AWS Lambda. There are two different things here for AWS Lambda, as you'll see in a second when I find it. Actually, there are a lot of other things for AWS Lambda. By default, you'll probably see aws-lambda, which creates a new Lambda function, and everything works fine. But we'll not use that one. There's another cool thing called aws-lambda-nodejs, which is really similar, but as we'll use Node.js, this might help us a little bit. As you can see, this construct is still experimental. You can use it, but they might change some things in the future. The cool thing about this one is that we don't need to think about bundling our TypeScript to JavaScript, because the Lambda function will not understand TypeScript. This one comes with something called esbuild. There are multiple ways to bundle everything to JavaScript, actually to Node.js. You can use Webpack.
You can use many other things, but esbuild is really cool because... Yeah, I'll show you just this one. And I tried it. Oh, come on. Is it this one? Yeah. So for the speed: this was actually similar in our production. Our Webpack version 4 took 45 seconds to one minute to build 100 Lambda functions. We switched to esbuild and it took less than five seconds to build the same number of functions. So yeah, it's really, really fast. There's just one problem with this for TypeScript: it has TypeScript support, but it will not run type checks for you. So you need to run those type checks by yourself, basically, you need to check your types before you deploy everything. So let's try to build our first function. Yeah, tsc with noEmit, or something like that, works. Again, if you want to do this locally, I'll go back to this CDK construct for AWS Lambda Node.js. I copied this and will try to install it. I'll go back here. I'm still in the same folder, I didn't change anything. So I just run npm install for this. It takes some time to install it. I can also install esbuild, as they recommend here. So basically npm install --save-dev esbuild@0. I don't know why the @0, probably because they want to be sure that we don't install some new version that is not supported or whatever. I'll do this. So npm install --save-dev esbuild@0, as they recommended. This is installing. Let's see the next step. Basically we want this, but we also want to add API Gateway. Maybe you remember from the intro part, I'll just quickly jump here. As I said here, we want to use API Gateway as a router, and then we want to have some handlers as Lambda functions. Yeah, it's actually tsc --noEmit, thanks Aaron, if you want to run type checks before you deploy everything. I don't think we need that right now, so we might add that a bit later. So basically what we want to build right now is, just a second, we have this part here. We want to build this Lambda function and we want to build this API Gateway. To add API Gateway: API Gateway actually supports two different types of APIs. They have HTTP APIs and REST APIs. And besides that, they support WebSockets, which we'll not use today. What's the difference between these two? The main difference is that an HTTP API is easier to use and 3.5 times cheaper, so we'll use that one. REST APIs have different integrations: you can skip the Lambda function and write to this database directly from there by writing something called a VTL template, but we'll not do that today.

Setting up Lambda Function with AWS CDK

Short description:

In this part, we explain the process of finding the AWS CDK and AWS API Gateway version 2. We install the necessary dependencies and update the versions to ensure compatibility. We create a new folder structure and add a Lambda function named 'add coffee'. We also discuss the error related to different versions and explain the steps to resolve it. The entry file and handler are defined, and we export the handler function. Overall, we cover the setup required for the Lambda function using AWS CDK and Node.js.

This is something that we'll explain in some of the next blog posts, and we're actually working on a new book about AppSync and GraphQL with TypeScript, so we'll spend a lot of time there explaining these VTL templates and things like this. They're not that fun, so we'll not do that today.

So let's try to find API gateway here. There's probably a better way to search through this. I'm not doing that the best way I can. I'm just scrolling up and down but what we need is this AWS CDK, AWS API gateway version 2. Yeah, it's not that easy but whatever.

So if I go back here and do npm install for that, I'll have enough things to start with my function and everything. Wait a bit, wait a bit longer. And hopefully soon it will install everything that we need. But actually, I don't need to wait for this. If I go back to my project, I want to go to that CDK stack. So I'm going to lib/coffee-shop-api-stack, or whatever the name of that stack is. And I have my table there. The next thing I want to do is to add, for example, one Lambda function. To add a Lambda function, I want to import from this aws-lambda-nodejs module. And from here I want to import probably a function or some... no, oh, it's NodejsFunction. Sorry. Our first function will be, for example, create coffee. Let's just say that these are functions. Okay. We can split this into multiple files; we'll do that a bit later, but for now I'll do this: add coffee function, and I'll say new NodejsFunction. Again, I need to provide a scope. The scope is this, then some name, add coffee. That should be enough. And then I have some props. For props... oh, just a second. This, this, this. Oh, here's an interesting error with CDK. As you can see, it tells me that this is not compatible with this stack. Why? Well, because we use different versions of something. As you can see here, DynamoDB and Lambda are version... in my package.json they're version 1.91.0, and CDK Core is version 1.88.0. So I want to update everything that is on this old version. I'll update these; these are the things that were installed automatically for me when I ran cdk init. They probably didn't update their template. So I just want to update all of these dependencies: it's CDK Core, CDK, and CDK Assert. And once I do that, I just want to go back here and run npm install. You just need to make sure that you're using the same versions. I recommend the latest version, but it would work even if I put 1.88.0 here for all of these other things. Let's wait for a second until it updates everything. Give me a second, I'll just turn on my light. Did you do an npm install for the gateway integrations as well in your...? Not yet. Not yet? We'll add that soon. Okay. David said that his versions are fine. Yeah, I have no idea why I have... oh, actually I know why. I probably have CDK installed on my machine. Let's see that: cdk --version. Yeah, I have this CDK installed globally. So that's why my versions are not up to date. Anyway, if you have that problem, now you know what happened. If I go back here, as you can see, there are no problems anymore. But I just need to add a few more things here. Not handler... I'll just go back to the documentation and not try to be smart. Go back to lambda, lambda, lambda, yeah. This one. I need an entry file and I need my handler. So entry file: we'll put something here soon. And handler: we'll again put something here. So what's the entry file? We can go here to our folder structure and finally create these. Do you remember that we mentioned at the beginning that we want our structure to be slightly different? We want to have these functions, and then we want to have an add coffee function, and then we want to have this lambda.ts file. So let's try to create that now. So I want to go to my root folder and create a new folder called functions. Inside this functions folder, I'll create a new folder called addCoffee. And finally, inside this addCoffee folder, I'll create a file called lambda.ts. It will be empty at the moment.
So I'll copy the relative path for this and just paste it here in entry. So it's functions/addCoffee/lambda.ts. That's fine. And then I need to export something from this file. So I need to name this function somehow and export it. I can use ES6, actually ES whatever; I don't need to do module.exports equals, because esbuild will translate this for me. We'll name this function somehow and we'll export it, basically. For the name, unfortunately we're not able to do a default export and just put "default" here. We need to name our handler somehow, and people often call it handler. So basically what we do is go to our lambda.ts file and export our handler. The handler is just some function that does something. It doesn't matter right now, we just need to have some empty file here that will do something soon. So again, I'm creating my function: by creating the function, I run new NodejsFunction, I pass a reference to this stack, I add the name for that function, I add an entry file for this, which is basically this new file that I just created, and I need to provide the handler; the handler is the function that I'm exporting from this file. So basically, AWS will do this somewhere in the background from lambda.ts, actually lambda.js, because esbuild will translate this to JS. Is that clear enough? I hope it is.
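For now the handler only needs to exist and return a successful response, so a minimal functions/addCoffee/lambda.ts can look something like this; with an HTTP API Lambda proxy integration, returning a plain object with a statusCode is enough.

```typescript
// functions/addCoffee/lambda.ts
// Minimal placeholder handler: it doesn't touch DynamoDB yet,
// it only confirms that the API-to-Lambda wiring works.
export async function handler() {
  return {
    statusCode: 204,
    body: '',
  }
}
```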

Adding API with AWS CDK

Short description:

To add the API, I import the API Gateway v2 construct from the AWS CDK and create an HTTP API, which is cheaper and faster than a REST API. Running 'npm run cdk synth' regenerates the CloudFormation template: the code is uploaded to an S3 bucket, and the template contains environment variables, a handler, and the API details. Forcing a different Node.js version on the function requires the Runtime enum from the separate AWS Lambda package. There is no separate command to deploy to S3. The Serverless Framework is not built by AWS; AWS SAM and the CDK are the AWS-maintained alternatives.

Okay, I'll just hope that everything is clear now. The second thing I want to add is our API. To add the API, I'll scroll back to the top of this file and import from the AWS CDK API Gateway v2 module. I want to create an HTTP API; there are other things here, but no, I'm not going to create a REST API, fine. So I want an HTTP API. This is the cheaper API, it's really good, and it's actually faster than the other one. I can do const api equals new HttpApi. Again, we pass the reference to our stack and the name of this resource, we can call it CoffeeApi, and then we want to pass some props. We can pass different things here, but let's see what they put here. We probably don't need to do anything special... yeah, we don't need to do anything special, we can just keep it like this. It's really simple, that's it. If I save it, go back to my terminal, and run npm run cdk synth, this will not deploy anything; I just want to see how it will update my CloudFormation template. It takes some time, of course; as you can see, it was bundling my asset, so it was basically bundling my function. We still have our DynamoDB table, but we now have an IAM role that adds a policy for the Lambda function. This policy is not doing anything special; it just adds the AWS Lambda basic execution role. So it created something for us. It adds a Lambda function with some properties and points our code to some S3 bucket. If I go back to my... so, a relative path. Someone is asking about the relative path inside the function. So this one is basically from the root; you just put functions, you don't need to do dot dot slash. Yeah, I understand, but this does not compile for me, it throws an error that it cannot find the file at this path, and I just copied it relatively. Okay, I'll try to understand my mistake now. No, it's probably not your mistake, who knows? Can you put dot dot slash? Actually, are you sure that you don't have a typo here? Yeah, checking, checking. Someone said that you need to delete the folder name. So, yeah, if you have coffee-shop-api here, then delete that part. I probably opened just the coffee-shop-api folder and you opened the folder outside of this one; that's why my relative path is slightly different than yours. Thanks, Alex. Where is HttpApi being imported from at the top? It's imported from @aws-cdk/aws-apigatewayv2. Oh, I see, okay. You can find it here, down here... yeah, it's this one. We'll import the other one soon, and after this workshop I'll fix these; I added an environment and broke this code, but I'll update it after the workshop so you'll be able to copy and paste everything. Thank you. Sorry for this, I didn't show this; I'm trying to require everything from the separate folder and that's it. So again, I'm here and I just want to see what's happening inside my S3, because I saw that it's deploying something to S3. As you can see here, there's an S3 bucket, there are some things in that bucket, and the code is there. Then we have some environment variables, here's the handler, it's index.handler, that's how we named our handler. And it's using runtime Node.js version 12; there's no version 14 here because, for some reason, they didn't add that yet.
And we also have our API, which is ApiGatewayV2::Api, and the protocol is HTTP. So nothing special, they just created a lot of different things for us here. Let's just quickly go here to... oh, I have a lot of things here. Oops. There's nothing, it's just the reference to this. So Eva is asking if there's a way for us to force a Node version. Yeah, there's probably a way to change the Node version of our function; we can try that right now. Let's go to the type definition: entry, entry, runtime. Yeah, runtime. So basically we can use runtime here. If I do this, going back to AWS Lambda Node.js, I can probably put Runtime here. No. Interesting. So Ivan, I think to do this you'll need to do the following: use the other AWS Lambda package, get the Runtime from that one, and then pass that Runtime here by doing something like Runtime.something. Lambda Runtime is not something that exists in this AWS Lambda Node.js module; it exists in AWS Lambda. So it's actually a different package, which is a bit weird, but yeah, there are some weird things with CloudFormation, unfortunately. What was the command to deploy to S3? No, there's no command to deploy to S3. I just ran npm run cdk synth and it output the CloudFormation template for me. You don't need to do that right now, I just wanted to see how this works. Oh yeah, there's another way. For example, there's an example in this card: you can use the Serverless Framework to deploy all these things. I'm not using the Serverless Framework. It's really great, but it's not built by AWS itself; it's a separate team, it's an open-source framework and it's really good, but I prefer to use either AWS SAM, the Serverless Application Model built by AWS, which is an abstraction of CloudFormation, or the CDK, which is also maintained by AWS itself.
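Putting the pieces from this section together, here is a minimal sketch of the API construct and the runtime override, assuming CDK v1 module names and the construct IDs used in this walkthrough (whether Node.js 14 is available depends on your CDK and Lambda runtime versions):

```typescript
import * as lambda from '@aws-cdk/aws-lambda';
import { NodejsFunction } from '@aws-cdk/aws-lambda-nodejs';
import { HttpApi } from '@aws-cdk/aws-apigatewayv2';

// Forcing a runtime: the Runtime enum lives in @aws-cdk/aws-lambda,
// not in @aws-cdk/aws-lambda-nodejs.
const addCoffeeFunction = new NodejsFunction(this, 'AddCoffee', {
  entry: 'functions/addCoffee/lambda.ts',
  handler: 'handler',
  runtime: lambda.Runtime.NODEJS_14_X,
});

// The HTTP API itself needs no special props for now;
// routes are attached in the next step.
const api = new HttpApi(this, 'CoffeeApi');
```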

Connecting Function with API

Short description:

To connect the function with the API, we add a route and options. The path is set to 'coffee' and for the integration, we install the 'AWS API Gateway version two integrations' package. We then create a Lambda proxy integration, passing the function as the handler. Next, we add the 'post' method to the route. We import the necessary constructs and integrations from AWS CDK and AWS Lambda. Finally, we make the function async and install the 'aws-lambda' package for adding types. We deploy the stack and test the API.

So, yeah, it's up to you. You can use the Serverless Framework or not; both will work perfectly fine for you. The CDK is quite new, so some things are not perfect with it yet, but a lot of people from AWS are pushing it hard recently.

Okay, so I'll go back and try to finish this. I somehow need to connect my function with this API. First I need to add a route; to add the route I want to call api.addRoutes. And inside this route I want to add some options. Let's see what I need here: I need a path and an integration. The path is easy; for the path I want /coffee, as we defined somewhere, and for the integration I need to put something. Of course it will complain, because it requires an HTTP route integration. To get that, we'll need to install one more package, and that package is called, just a second, AWS API Gateway version two integrations. This allows us to integrate a Lambda function, or something else, with this HTTP API. So let's go back here and run npm install for it. Wait a few seconds. There are easier ways to deploy an API, but it depends; sometimes you get deployment speed, like writing fewer lines of code, but in the end, if you need to add some specific things, it's a bit harder. One way to deploy everything is Claudia.js, a library that we built a long time ago. You can easily deploy an API, but on the other side you're not using CloudFormation, and it's really hard to build a more complex application. With the CDK, you can always extend whatever application you're building in the future, because it supports everything CloudFormation supports, and that's why the CDK is a good way to do this. I installed the new package and I'll try to import something from it. Let's see what I want... Lambda proxy integration, yeah. So we want this Lambda proxy integration. A Lambda proxy integration means that API Gateway will not do anything specific; it will just pass everything that comes to that API Gateway straight to our Lambda function. To do so, we need to create this integration. We'll say, for example, add coffee function integration equals new LambdaProxyIntegration, and we need to pass something here. Let's see what... we need to pass a handler and, I don't know what else; for the handler, I guess we can just pass this function, let's see in a second. Actually I have it somewhere in the sheet here, yeah, it's just this function, nothing else. And then I can use this integration that we just built and pass it here. As you can see, it's not complaining anymore. As I said, we want to add methods, and methods is probably, let's see... these are the add-route-options methods. Let's see how we add these methods, just a second. Yeah, so I can go back here: HttpMethod. We want to import this from API Gateway version two, not integrations, but API Gateway version two itself. And for methods, we want to add just POST for now. Is this clear enough?
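A sketch of the integration and route wiring just described, again assuming the CDK v1 API Gateway v2 modules (in newer CDK versions the integration class has a different name), and the addCoffeeFunction and api variables from the earlier sketches:

```typescript
import { HttpApi, HttpMethod } from '@aws-cdk/aws-apigatewayv2';
import { LambdaProxyIntegration } from '@aws-cdk/aws-apigatewayv2-integrations';

// The proxy integration simply forwards the whole request to the function.
const addCoffeeFunctionIntegration = new LambdaProxyIntegration({
  handler: addCoffeeFunction,
});

api.addRoutes({
  path: '/coffee',
  methods: [HttpMethod.POST],
  integration: addCoffeeFunctionIntegration,
});
```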

We'll go quickly through everything that we did. We imported node.js function from AWS CDK, AWS Lambda Node.JS. I have no idea why do they need to name everything AWS. So I need to read AWS million times, but whatever. Then we use AWS API Gateway version two construct and we import HTTP API and HTTP method from it. And finally, we have API Gateway version two integrations and we import Lambda proxy integration from it. And now that we have all these things, we created our first function, we have entry file and handler. Then we created integration for our HTTP API by passing this function as a handler again. And finally we created our API, we added one route and we said that we want post method and that we'll use this as an integration. And that's it. Before we deploy this, let's go quickly back to our functions, so functions at coffee lambda ts, and let's try to do something here. First, we'll make this function async not because we really need to make this function async, but because we want to... API gateway will require this from us. It will probably work without this, but later when we add more things we'll need to convert it to the async function, and we need to return something. It's enough to return just the empty array. Or for example, we can... Actually, let me show you one more thing. If you want to add types here, right now I don't know, what do I need to return? To add types I could do npm install aws-lambda and at types slash aws-lambda. This package is not doing anything special, but this is something that we really need, and you'll see in a second why. I'll install this. I think I explained that in the next example, in the next assignment basically here, create lists, get and delete coffee. But anyway, we will do that right now just so you see how does that work. It's okay if you don't do this right now, you can do this later, that's fine too. So I just want to show you. I install that. I can go back to my Lambda handler. I can import something from this, from AWS Lambda, and here we have a lot of different events and things like this. So if I say API gateway, here are a lot of events related to API gateway. There is something called API gateway proxy callback, version two. That's something that we can use. Sorry, not proxy, API gateway proxy result version two. That's actually one that I wanted to use. Now if I put this, I actually need to put promise this. And then I can do like, now I have a code complete. I can see what can I return from here? For example, I can put like 204, like the status code. Are there no types exported from CDK that we can use to return the type value? As far as I know, there are no things from CDK that we can use here, but I might be wrong. I know that this AWS lambda works from long time ago. Maybe there's a better way to do this, but I'm not aware of that better way. So now I built some really simple lambda function and I updated my stack. So let's try to deploy it and see what we'll get. If I go and do npm run cdk-deploy.
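Before the deploy, this is roughly what the handler file looks like at this point, using the types from the @types/aws-lambda package mentioned above. It's a sketch; returning just an empty object would also work, the explicit status code is only there for the auto-complete.

```typescript
// functions/addCoffee/lambda.ts -- sketch
import { APIGatewayProxyResultV2 } from 'aws-lambda';

export async function handler(): Promise<APIGatewayProxyResultV2> {
  // No business logic yet; just confirm the route is wired up.
  return { statusCode: 204 };
}
```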

Deploying Stack and Troubleshooting

Short description:

I encountered an error while deploying the stack. It seems to be related to the region. I will try to resolve it by exporting the correct region and running the deployment command again. Some participants are also experiencing similar issues. We are discussing possible solutions and trying different approaches. Once the stack is deployed successfully, I will explain the changes made and the purpose of the roles and policies. The deployment process involves creating new resources and asking for our approval. It may take some time as roles and permissions are set up. These roles are necessary to allow access to the Lambda function and trigger it using the API Gateway. I will provide guidance on resolving the deployment issues and ensure that everyone can proceed with the workshop.

If I go and do npm run cdk-deploy. Can you show it again, please? Sure, sure. Let me just put deploy here and I'll go back here at the lambda function. So this lambda function is not doing anything special. You don't need to provide this. It's enough to just return an empty object, but I just added status code 204. And just to be sure to have auto-complete here, I added this API gateway proxy result version 2. As you can see, they're not the best with naming things. So yeah, I took this from this aws-lambda package that they just installed. I'll open my terminal and make sure to make it a bit smaller, oh. So here's what's happening here now. You can still see the function up here. So this deployment will potentially make sensitive changes according to your current security approval level, whatever that means. Please confirm you tend it to make the following changes. So what do they want to do? They want to create something new for us. Lambda involve function and to add a role. And they will create some IAM policy for our lambda function. Because it's doing something related to policies and roles, it will ask us to approve these changes. So I'll just type Y, like yes, and type enter. Oops. Well, yeah. Region. Export AWS region. Equals EU West one. And let's do export AWS default. Region equals EU West one, just to be 100% sure that we're using the right region. And let's try again. Something related to my region is not correct, but we'll see in a second. After we deploy this function and make sure that, oh, come on. This stack uses assets, so the toolkit stack must be deployed to the environment run cdk bootstrapped. What? AWS. I need to do bootstrap AWS slash, bootstrap AWS slash bootstrapping. Oh, someone has some issues with cdk-sync. You must use out there when there are multiple input files. Do you have any input files? So what happened for you is that there are multiple input files for you somewhere here, maybe you have multiple, actually check your entry here. It should be like this, it should point to a single file, not multiple files. Maybe it points to the folder or something like that. So for me, let me just like, solve this problem, tells me to run cdk-bootstrap-aws and then my account and region. I just need to check my account, so I'll use whoami again, and copy this and then run npm run cdk-bootstrap-aws and then slash region that they want, west1. Let's hope that this will work. I have no idea why do I need to bootstrap, I didn't use bootstrap at the beginning, so it's probably because of that. I don't use the default account, maybe that's why I'm not able just to run this command. So Alexander you have the same error as I or the same error as Alex with cdk-synth. It's the same error as you and I'm afraid to export new variables because I have like huge amount of AWS accounts on my machine. No, no, no, you don't need to do that, give me just a second let's see what it's doing. I have a lot of different accounts too, but this will not change your accounts, cdk-bootstrap, it will just do something in the environment. As you can see it created some environment and I want to deploy again, let's be just... Oh, go on, sorry. Sorry, I interrupted you. Yes, the same, right? It was about exporting, you have exported before that... No, no, no, no, you don't need to do that. I'll show you in a second. Let me just check if this works for me. Yeah, it works. Yeah, let's wait for my stack to be deployed and then after that, I'll show you what they did. 
I don't want to show you this before I'm sure that I solve the problem but Mik is showing us that the same command worked for him. For some of you, it will not ask you to bootstrap anything because you're probably using default account or something like that, I'm not using the default account. That's why it's probably asking me to do this or I have an idea that there's something that I forgot to do, I guess. So it will take some time now to deploy everything because it needs to create these roles and everything for us roles are required because by default, Lambda function is a black box and you're not able to access that Lambda function without setting up some trigger. But even if you set AWS API gateway as a trigger, that's not enough, you need to give permission to that API to trigger your Lambda function. So everything went through. Alexander, the thing that you actually need to do is just this, npm run cdk bootstrap, AWS this and this is, you need to use the account ID of your account here. I understand what it is, thank you. I will run it now. By the way. You don't need. Your region or your provided. No, no, no, use the region that you used to deploy this. So if you used, for example, US East one, you should put US East one here. I used US West one, that's why I put the same here. Yes, thank you. Alex said that it probably depends on the region. Maybe, who knows? I'll definitely ask someone. This is interesting problem that they never saw before. I run AWS bootstrap few times before. CDK bootstrap a few times before, but I don't know why do I need to run it and some of you don't need to run it. So I'll explore that. And... I think they might have an idea. My default is US West two and I run it in US West one. Yeah. So when you, that there was a discrepancy there, then I had to write the bootstrap. Yeah, probably it's the same for me because I have some, I think, default values. I'm not sure how do it. Yeah it's config I think. Yeah, I have US central one as a default value, as a default region. So yeah, it's probably that who knows. Okay. So I deployed something as you saw. Let's see.

Exploring CloudFormation and CoffeeShop API Stack

Short description:

In this section, we explore the resources created in CloudFormation and the CoffeeShop API stack. We have a Lambda function, an API with a POST route for coffee, and a table. The visualization of AWS resources can be challenging, and the designer is not very useful. We export the API URL as a CFN output and test it using curl. It works, and we take a five-minute break before continuing to create a real route.

What do I have in my CloudFormation, of course? There are two things, as you can see, now I got this CDK Toolkit. This is probably the stack. Oh, it was created by CDK Bootstrap that manages resources, blah, blah, blah. So this is what we got when we run CDK Bootstrap. And it just created a bucket, S3 bucket, and gave a policy to that bucket. That's because they are now able to deploy our functions to this bucket. So that happened when we run AWS Bootstrap. And let's see our CoffeeShop API stack. First resources, what do we have here? We have a Lambda function. This is the function that we just created. We have some role for this function, metadata, I don't care about that. We have some stage for our API gateway. By default, stage is $default. So I don't care about that. We have our API. We have some route for our API. It's a POST coffee, as you can see here. We have permission for our Lambda function to be triggered by this API gateway. We have integration and we have table. A lot of things. If I go to this designer, you'll see why visualization of AWS resources is hard. This is not making anything simpler. As you can see, it's even more weird now what's happening inside. How did you find the designer? Oh yeah. So I'll go back. I'll leave. So we are here in stack info and resources, but you have template here. And then you can view this in the designer, but yeah, it's not really useful because, let me see. I think I can. Show you something. I tried running the designer in our production environment at some point, and we got this. So it's definitely not useful. So, okay. That's it we have some API, but I have no idea how to trigger this API. I don't have an I, sorry, I don't have an URL to trigger. So I'll go back to my code one more time here, and I want to add one more thing. So from this stack, I also want to export one thing, and that's a CFN output. CFN output is basically something that will be shown here in outputs in coffee shop API stack. So let's create an output at the bottom of our file, new CFN output, this as always, name, API URL, for example, and then we have some props. In these props, what do we need to add is just value. So what's the value of this API? For some services, we can easily say just like something like API URL or, oh, can we do this? Probably not. Oh, okay. It's interesting. Let's do this and try. No. Yes. So API URL exists and it can be undefined, so that's why we want to add empty string. I'm not sure if this will work, so I'll just try it by deploying it again. I'll run NPM run CDK deploy.
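A sketch of the output added above: CfnOutput comes from @aws-cdk/core, and api.url is typed as possibly undefined, hence the empty-string fallback. The output name is the one used in this walkthrough.

```typescript
import { CfnOutput } from '@aws-cdk/core';

// Shows up under "Outputs" for the CoffeeShop API stack after deployment.
new CfnOutput(this, 'ApiUrl', {
  value: api.url ?? '',
});
```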

Can you show the code again, please? Sure. And the import, please. Sure, sure. I'm taking this from AWS CDK Core, and then I'm just exporting this. But before you do that, wait for me to deploy everything and make sure that this works, because I'm not 100% sure. This URL is not returned by CloudFormation, so I really hope they added this to the CDK, but we'll see in a second. I'll just move this to my other screen. So it works. Nice. Aaron just said in the chat that it works. I'm still waiting for my update. I'll put it here. Oh, here it is. As you can see, now, in the outputs, I see ApiUrl and then I have some URL. Nice. If I go back to my CloudFormation and reload everything... I've reloaded here. Anyway, if I go to the outputs here, as you can see, I can see this URL. So let's try to test it. There are many ways to test this URL; you can use Postman or something like that, I'll just use curl. I want to send a POST request to this API URL. I'll add /coffee to the URL, because I added /coffee here, as you can see, and I'll add the additional parameter -I to get the headers and everything back. So I'm running curl -X POST, which tells curl that I'm sending a POST request to this URL with /coffee at the end, and I add the capital -I to get the headers back. That's it, I got 204 as the status and I have the date and request ID. So it works. Let's take a five-minute break, if that's okay for everyone. There's one thing that I want to finish before we end this workshop, and that's showing you how to create at least one real route. There's one hour left, but I really want to show you how to write a real function that you can test. Yeah, unfortunately we'd need more time for this workshop for sure, but as I said, I'll update the materials so you'll be able to follow all the remaining things by yourself, and I'll make sure to write better descriptions for everything at some point next week. So, a five-minute break, and then we'll continue in five minutes, at 7:08 my time. And yeah, I have no idea which time zones you're all in. That's it, I'll stay here, so feel free to ask me questions that we didn't cover.

Lambda Function and Environment Variables

Short description:

When deploying a Lambda function on AWS, environment variables are used to provide metadata and credentials. The template file downloaded from the AWS console contains descriptions and properties, but the actual data is obtained from AWS, not the template file. You can inject environment variables, including secrets from the secrets manager, to the Lambda function. However, there may be latency issues when retrieving data from the parameter store or secrets manager. It is recommended to retrieve such data during stack deployment. In production, it is better to store secrets in the secrets manager and pull them from there. The architecture and Cognito parts will not be covered in this workshop, but the post route will be explained.

In the meantime, everyone else can take a break. Unless you don't want to break, that's fine too.

I have a small question about Lambdas. Sure. So the situation is this: I'm using a Lambda function and running it locally via AWS SAM, which allows you to pull the Docker image. Sure. Locally, with local keys, it's the same API. And the question is: I have the Lambda description in my AWS account, I've downloaded it and added it to my local Lambda repository, into the template file.

Okay. The question is: on AWS, during a real run, does it read this template file, or is it taken from the description of this Lambda in the AWS console itself? So, here's how everything works. When you deploy this Lambda function and it runs on AWS, you get a lot of environment variables that tell your Lambda function a lot of different things, including some metadata, some credentials and things like that. So, unless you're uploading that template together with your Lambda function code, it's probably not getting the data from that template file; it's getting it from something else. What's in that template file? Can you explain again? Yeah, sure. You know, when you open a Lambda in the AWS console, you can go to actions, I think, and download the description or something like this. Oh yeah, just a second, let's do that here in Lambda. Okay. Double-check now for a minute. So this is, I don't know what to call it exactly, but it's the description, like the meta description of the Lambda itself. It's like- Oh yeah, yeah. It's getting it from- Properties, environment variables, tags, all of this. Yeah, it will get these things basically from AWS, not from your template or anything like that. Yes. So if you want to add some environment variables, we'll do that in a few minutes, but basically here you can do something like environment and then SOME_VAR or something like that, and then you'll be able to access process.env.blah-blah-blah. Can you inject parameter store values or secrets there as well? Yes. The Secrets Manager from Amazon, I mean. Yes, yes, you can do that. And that's actually- We can take them from the function, but just to clean things up, because you don't always want to import loads of Amazon SDKs in your Lambda function. Can you repeat the question? I missed the beginning. Yes, I mean, obviously you can inject any environment variable that you want, but what if you want to actually get a value out of the Parameter Store on Amazon, or the Secrets Manager? Yes, you can connect your Lambda directly to Parameter Store or Secrets Manager at runtime, but the problem with that is latency: it can take, like, 10 to 30 seconds to get something from the Parameter Store the first time. Yeah, but if it's- That's right. You can get it while you're deploying the stack; you can probably get something into the environment, like getting it from the SSM Parameter Store or something like that. Yeah, of course you can do that. Of course, this is just a demo function here, but you can do that; we do that in production. Because that would be better than reading SSM in the Lambda itself. Yes, yes, for sure. The values aren't changing every time, right? You shouldn't have to re-read them every time. Exactly. But it depends on what you need. For example, if you need the name of your DynamoDB table or something like that, which we'll need soon, we'll do something like this: table equals, what was it, coffee table dot something... yeah, table name. So sometimes, of course, for secrets, you definitely want to store them in SSM and then just pull them from the SSM store. So, seeing as we won't have time to cover the architecture part- Yeah, actually we'll do that right now. That's the most important part.
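Before moving on, here is a quick sketch of the deploy-time SSM lookup discussed a moment ago. The parameter name and environment variable are made up for illustration, and this assumes the @aws-cdk/aws-ssm module is installed; for real secrets, Secrets Manager is the better home, as discussed.

```typescript
import * as ssm from '@aws-cdk/aws-ssm';
import { NodejsFunction } from '@aws-cdk/aws-lambda-nodejs';

// Resolve the value while the stack is being deployed, so the function
// does not pay the latency of calling Parameter Store at runtime.
// '/coffee-shop/some-config' is a hypothetical parameter name.
const someConfig = ssm.StringParameter.valueForStringParameter(
  this,
  '/coffee-shop/some-config',
);

const addCoffeeFunction = new NodejsFunction(this, 'AddCoffee', {
  entry: 'functions/addCoffee/lambda.ts',
  handler: 'handler',
  environment: {
    SOME_CONFIG: someConfig, // read in the function via process.env.SOME_CONFIG
  },
});
```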
And we'll not cover the Cognito part because we don't have time for that. But yeah. I think, you know, once we've got the POST route working, it's pretty easy to get the rest of them working, to be honest. Sure, for sure. We'll just cover the POST one, because we won't have enough time to cover everything. Yeah, I just wanted to see the repository pattern that you use and stuff. Yes, that's probably the most important part that we want to cover today. So I'll relaunch this one. Did you finish this assignment or not? I have a problem with Lambda: the Lambda bundling says that it is missing the package.json file for some reason. The package.json file? Oh, that's really weird. Can you paste that part, your function definition, in Discord or in the chat? That sounds like you're in the wrong directory. Yeah, that's why I'm asking which directory you're running the npm command from. Just feel free to paste that part anywhere. Perfect. Functions.

Building a Real Function with Business Logic

Short description:

We want to have the business logic isolated from different things, such as events and databases. We can store the business logic in a separate file and have its own unit tests and integration tests. We can share repositories between multiple functions and test them against different databases. We'll create a coffee in the database, but won't have time to cover everything. Inside the Lambda TS function, we'll wire the dependencies and pass them to the business logic. We'll import the necessary components like the parser and CoffeeDB repository.

This sounds good. Indexed, yes. That's fine. From which directory are you running this command? As you can see here, I'm in the coffee-shop-api directory, not the directory outside of it. So I'm not here, I'm inside coffee-shop-api. Yeah, I'm running it from there. I mean, I deployed that previous step correctly. Yeah, and you have package.json, right? Yeah, but it's requiring a package.json in the Lambda function itself, I don't know why. No, it shouldn't. What's in the lambda.ts file for you? The same thing. Do you have esbuild installed on your local machine? Oh, maybe I'm missing that, okay, let me check. Let me try that, it's explained here. Let me just send you that: Lambda, Node.js, bundling options, whatever, come on. I didn't have to do that, it worked for me. I don't know why, but it might be that, I'm not 100% sure. Paste that error, please, David. The error is that the package is missing. So, if you don't manage to fix this, let's just solve that error after the workshop, because we don't want everyone to wait. So, I want to show you one more important thing, and that's basically building a real function. Unfortunately, we don't have enough time to cover everything, but yeah. Let me tell you a bit about testing. How do we test Lambda functions? Basically, I prefer using hexagonal architecture: ports and adapters. It works like real ports and adapters. For example, when you travel somewhere... of course not in 2020 and 2021, unfortunately, but at some happier time, whenever you travel somewhere, there are different power supply sockets. In hotels, for example, you don't want to buy a new power adapter for your Mac or other laptop, because you don't want to pay more than 100 euros or dollars or whatever just to be able to charge your laptop. Instead, you can have a small adapter that makes sure you can use your existing charger with the power socket in your hotel. You want to do the same with your code. You want business logic that is isolated from different things, for example from events, so you can trigger it with a local event or maybe a Lambda event. You can store data in a DynamoDB database or some kind of in-memory database, and maybe at some point in Postgres or something else, and you want to be able to do similar things. So I have some code here, but this is from a presentation I did. Let's look at the Lambda function. It's similar to the function that we are trying to build, but it's connecting to something called EventBridge instead of DynamoDB; let's just assume that this is DynamoDB. We have a lambda.ts file. That's the file that we've used so far, and it doesn't have anything in it. We don't want to store business logic there. Instead, we want to store our business logic in some main.ts file. I love to call it main, but you can call it whatever you want. This file should have its own unit tests and integration tests. And then we want to have different repositories that we can share between multiple functions; for us, this will be the coffee database repository. We don't want to test main.ts in each function against a real database. We want to test it against some local in-memory database, just to be sure that this connection really works and that the business logic works with this integration, and we want these tests to be fast. But then we want to have this repository that talks to the real DynamoDB.
And we want that repository to have unit tests and also integration tests that run against the real DynamoDB. Finally, we'll have some event parser that just knows how to parse a Lambda event, and that parser will have unit tests. Then, if you want, we can just switch things out; we did this in production, we replaced MongoDB with DynamoDB by creating the same repository with the same interface and then just switching everything in the lambda.ts file. I want to show you how to do this, and we'll cover just this one case: creating a coffee in the database. Unfortunately, we won't have enough time to cover everything, but you'll be able to do the rest at home if you want. So let's try to do this. What I want to do now is go back to my functions. I already have a function; I don't want to change anything in my template file, we'll just work in this function here. Give me a second. Okay, so inside this function, we want to do a few different things. I'm not able to see your questions right now because, to minimize my errors, I'm looking at a copy of the code on my machine. So if you have any questions or anything like that, feel free to unmute yourself and just let me know. Thanks. So in the lambda.ts function, first, we have some event here. We can also get the type for this event; it's called API Gateway event... oh, sorry, it's APIGatewayProxyEvent, proxy event version two. Now we know that there are some things in there, but we don't care about them right now. What do we want to do inside this Lambda function? Basically, we want to check if we have some DynamoDB table name, but we don't care that much about that yet. As we saw, this file should just wire the dependencies: the parser, the DynamoDB coffee repository, and the business logic. So we want to do something like this: const... actually no, first we want to import some business logic from main.ts in the source folder. We don't have this file at the moment, but whatever, let's call the function addCoffee; we'll create it in a second. The handler will just await this main function. It will pass the event to this function, it will pass some parser, and it will also pass some coffee DB repository. So, three simple things, and it will wait for the result. It's not doing anything special; it just passes some things to our business logic. And in the end, let's say that we want to console.log the result and maybe still return status code 204 if everything is fine. To be able to do this, we also want to import a few more things. We want to import our parser; let's actually call it parseApiEvent, it's a bit nicer. We want to import that from the source parser file; we'll create that file soon. And we also want to import some CoffeeDbRepository. I like to create these repositories as classes, so I'll name it with a capital C and I'll pull it from something called common.
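Putting the wiring described above into one sketch. The file names, import paths (including whether the source folder is spelled src or source), and the COFFEE_TABLE variable are assumptions that match where the files and environment variable end up later in the workshop:

```typescript
// functions/addCoffee/lambda.ts -- sketch of the final wiring
import { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';
import { addCoffee } from './src/main';
import { parseApiEvent } from './src/parser';
import { CoffeeDbRepository } from '../common/coffee-db-repository';

export async function handler(
  event: APIGatewayProxyEventV2,
): Promise<APIGatewayProxyResultV2> {
  if (!process.env.COFFEE_TABLE) {
    throw new Error('Table name is required');
  }

  const coffeeDbRepository = new CoffeeDbRepository(process.env.COFFEE_TABLE);

  // This file only wires the dependencies; the business logic lives in main.ts.
  const result = await addCoffee(event, parseApiEvent, coffeeDbRepository);
  console.log(result);

  return { statusCode: 204 };
}
```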

Building the 'add coffee' Function

Short description:

We create a Lambda TS file and build the business logic step by step. The function 'add coffee' parses the event, extracts country, name, quantity, and price, and saves the coffee to a database using the coffee repository's 'save coffee' method. We discuss the option of parsing the event in the handler or passing it directly to the function. The goal is to test the function separately and have minimal failure points. We also define a parser type and a parser response type to ensure type safety. The event format remains the same, and the parser function returns the parsed values. The business logic can be reused for different events by passing the event and the corresponding parser. The parser and parser response types are defined locally to the function.

This doesn't exist at the moment, we'll create this soon. And let's name it CoffeeDB repository. And then we need to create an instance of this. So let's say const coffee DB repository equals new CoffeeDB repository. That's it, this is our Lambda TS file that we want to build. So let's try to slowly build one by one these functions. Let's go to our business logic, this is the most important part. I'll create the new folder here called source. Inside this source folder, I'll create a new file called main TS. And inside that main file, I just want to do export async function. It needs to be async because I want this function to connect to the database. So I want it to be called at coffee because I named it add coffee here. I want it to receive some kind of event, parser, and also coffee DB repository. I hope that's clear so far. So it's nothing special here. What I want this function to do is the following. I want to parse the event. So parser event should return some values. Let's say that they want to have country name, and let's say that they want to have quantity, and also for example, price for this coffee. It's nothing special, I just want to have a few different things for this coffee. And then I want to take these things and say for example, await coffee repository.savecoffee, or something like that. And I want to pass the same things to this save coffee. Is this clear enough? So our business logic is basically parse the event, get country, name, quantity, and price, and save that coffee to a database. It's really simple. It doesn't matter if we store this in DynamoDB, Postgres, or anything like that. We just know that we have some coffee repository that has its methods save coffee, or let's call it add coffee, just to have the same name, and that's it. It saves the coffee with some country, name, quantity, and price. What we want to do here is, of course, to be able to have some types and things like this. I'll again take AWS Lambda and take API gateway. Come on, I'll just copy it from here. I don't want result, I just want the event. I'll copy this. So the event will still be in this format. Sure. What do you think of being able to avoid spreading the dependency of Lambda by just passing the parsed output to the function? Sorry, I'm not sure. So if you parse the event in the handler and pass that to the business- Oh yeah, yeah, yeah. That's fine, that's fine. I just wanted to check whether there was a reason why you need the- No, no, no, no, no. You can do this. I just spread everything because I just want like to know what I'm passing to this, but it doesn't really matter. That's not what I meant. I meant, if you look at the dependencies you're passing to add coffee. Oh yeah, yeah. You're only passing that event just to parse it, like you can parse that in the handler because you're not- Here? You're not using events? Like you can parse that. No, no. Like this? No, no. I mean when you call add coffee in the handler. Sure. In handler. Oh yeah, here. Yes. You can parse it there. Yeah, but I don't want to do that. I'll show you soon why. Cool. Yeah. Sorry about that. So I want to be able to test this thing separately from everything else. So just this thing. And I want to be able actually to have like any here. And this parser will be able to get this event and return this. So that's why I don't want to parse anything outside because then I would still need like to, I can do that, of course. It works. But then I have more things here that can fail. I want to keep this to a minimum. 
That the only thing that can really fail here is this part here. Nothing else. And I don't have tests for this file. That's why I'm trying to pass everything that I really need to this function. It's really not that important for a really simple function. But sometimes I really have different events that they want to pass to the same, let's say business logic. Maybe I can share this business logic between different things. So maybe the other event will come through SNS or something like that. So I can easily reuse this business logic if I pass the event and also the parser for that event. Hope that makes sense. But it doesn't matter. If you prefer you can parse that outside. Anyway, let's try to build the rest of the function. I need some type for this parser function. I'll simply because I'm not using this function outside of this function here. So I can create some local type TS or something like that. Let's call it type ts. And in these types ts I can easily let, let's say I can easily do export type, parser should be equal to event, I'm using any here because I don't really care about this. And this should be some kind of parser response. And this parser response, again, I can write this parser response by doing something like export type parser response equals, we said country, which is a string then name string, string price number. And then also do I need commas here? I probably need commas. And then finally I have quantity. So I'll save these things. And now here I can easily like do import from the dot types, and then I can save for example, parser, expect parser functions here. So I can say any here.
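A sketch of the business logic and the local types it uses. The file locations mirror the folder structure described above and are assumptions; per the tip that comes up later in the session, `unknown` would be a stricter choice than `any` for the event.

```typescript
// functions/addCoffee/src/types.ts -- sketch
export type ParserResponse = {
  country: string;
  name: string;
  price: number;
  quantity: number;
};

// A parser adapts whatever event arrives into the values the logic needs.
export type Parser = (event: any) => ParserResponse;
```

```typescript
// functions/addCoffee/src/main.ts -- sketch
import { Parser } from './types';
import { CoffeeDbRepository } from '../../common/coffee-db-repository';
import { ICoffee } from '../../common/types/coffee';

export async function addCoffee(
  event: any,
  parser: Parser,
  coffeeRepository: CoffeeDbRepository,
): Promise<ICoffee> {
  // Parse the event, then save the coffee via the repository.
  const { country, name, quantity, price } = parser(event);

  return coffeeRepository.addCoffee(country, name, quantity, price);
}
```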

Creating Parser Function and CoffeeDB Repository

Short description:

We create a parser function called parse-APIEvent that takes an event and returns a ParserResponse. The event's body is parsed to extract name, country, price, and quantity. If the body is undefined, an error is thrown. Next, we create the types folder with a CoffeeTS interface. Then, we create the CoffeeDB repository, which uses the AWS SDK DynamoDB client and has a constructor that takes a table name.

It doesn't really matter now because I know that the parser will return some parser event and that this will exist in that parser event. And then also I need some coffee DB repository for this coffee repository. I still need to create this repository. So I can still, for example, I can have types for that somewhere and maybe I can do import from some common file that we don't have at the moment, common, yeah. Let's say types coffee, some file that we'll create with some types. And this will define some ICoffee interface that we'll create a bit later. This coffee interface, we'll use to return from here. And we'll create that coffee interface in a second. That's basically what we want to return from this function and to log. And we also need to define this coffee DB repository which we'll do in a second. I guess I can always do like, import from, again, that common that we don't have at the moment, slash, I think we named it somehow here, yeah, coffee DB repository. Because this is a plus I can say, coffee DB repository, and I can use that as a type here. So I have my types here. I'll delete this one because I don't really need it. And that's it. I want to also create my parser function. Is it called parser or? Yeah, it's called parser, sorry. I'll create my parser function. Sorry for interruption, I have this code somewhere in the repo because- Yeah, yeah, it will be in the repo. So I didn't push it yet. You don't need to type this. I'll type it. And I'll update this. I didn't put the repo right now, but I have everything ready. So after the workshop I'll push the code to the repo and just update this. So you have a link to the code in each solution back here. So it's a bit more complex, but don't worry. I'll try to explain everything soon. So here we want to create some a parser function. We call this function parse-APIEvent. And it takes some event and it returns some, yeah, it returns... We created that local type here as you remember, ParserResponse here. Event can be basically, can be anything. But in this case we know that this function will parse the API event. So I want to do import from types and I'll use ParserResponse here. This is not an async function because it received some static values and returns static values. So I can simply return the value here, not the promise or anything like that. I'll also make sure that this time my event is really API gateway proxy event. And now I know how can I get this. For example, I want to parse, actually, event.body and that event body will give me something such as name, country, price, and quantity. And I can simply return this. So I don't do this. I'll simply do, actually, let me do this return. So we have everything explicitly here, but body can be undefined. In my cases, body will not be undefined, but let's be sure that we can protect from that. So if, for example, I can say, if event body doesn't exist, throw a new error, which will be called, let's say, body is required. So I have a parser function that can parse my API event and return name, country, price and quantity from the body. Maybe I can have more things in the body, but I don't really care about them. And now I really need to have my types in CoffeeDB repository. So before I start with the tests, let's try to create that repository thing. To do that I'll create common here in functions. Just a second, wrong folder. Yeah, move. So inside functions now I'll have, add coffee, and I'll also have common. Inside this common, I want to have two things for now. 
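Before moving on to the common folder, here is a sketch of the parser just described; the file path is an assumption matching the folder structure above:

```typescript
// functions/addCoffee/src/parser.ts -- sketch
import { APIGatewayProxyEventV2 } from 'aws-lambda';
import { ParserResponse } from './types';

export function parseApiEvent(event: APIGatewayProxyEventV2): ParserResponse {
  if (!event.body) {
    throw new Error('Body is required');
  }

  // body is a string, so it has to be parsed; extra properties are ignored.
  const { name, country, price, quantity } = JSON.parse(event.body);

  return { name, country, price, quantity };
}
```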
Inside the common, I want to have my types. And let's start with the types because that's the easier part. I want to create a folder called types. And inside these types, I just want to have coffee type right now. It's CoffeeTS. And this is something like that. I'll create an interface and export it, but I'll just, it's the same thing that I say, I'll just add some ID in case I need to add an ID. Maybe I don't need to add an ID. I don't think we store the ID. So it can be the same like this. So that's it. And the other the most important actually piece of this is Coffeedb repository. So let's create this Coffeedb repository. It's Coffee.db repository. Maybe we should call this repository Coffee DynamoDB repository or something like that to know explicitly that it talks to DynamoDB but it doesn't really matter right now. So what I want to do here is I want to use AWS SDK and I want to get something from AWS SDK but they don't have it right now. So I'll do npm install AWS SDK minus, I don't need to save it actually. While that's installing, I'll also import... What do I need to import? Maybe I want to import iCoff. No, let's actually create everything. I want to create a class. It doesn't need to be a class but it's clean for me to have classes for this and it also provides the interface for us. It's called CoffeeDB repository and then inside that class, we need to have some constructor. In this constructor, I want to be able to get the table name because I need to pass this table name to AWS SDK to know in which table should we write something. This table name is simply a string, nothing special and that's it for now. I don't need anything special here. I'll just do this table equals table just to store this table name to something called table. Because of the TypeScript, I need to do this. This works. So let's hope that this is installed now. Yeah, it is. I want to use the clients and I want to use DynamoDB client. I don't need to do this specifically. I can just import from AWS SDK but I know that this will create a smaller bundle for me. And I want to use something called document client.
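The coffee type itself is tiny; a sketch with the fields this walkthrough uses:

```typescript
// functions/common/types/coffee.ts -- sketch
export interface ICoffee {
  country: string;
  name: string;
  quantity: number;
  price: number;
}
```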

Storing Coffee Data in DynamoDB

Short description:

We use the document client to simplify writing to DynamoDB. We can create a new document client by default or pass a mock for unit testing. The 'add coffee' method stores coffee data to the database using the put function of the document client. The data includes the primary key, secondary key, and other columns. The repository class handles the storage process and returns the stored data. The table name is obtained from the process environment. The lambda.ts file checks for the presence of the table name and throws an error if it's missing. The business logic calls the parser to extract the required data and then calls the coffee repository to add the coffee to the database.

Document client is a part of DynamoDB that simplifies a bit writing to DynamoDB. As I want to be able to test, to do a local unit test for this repository, I want also to do this. I want to pass another argument here that will simply create a new document client by default. And in all other cases, I can just pass a mock or something like that instead. And I'll store this here. This is document client, nothing special. I'm just lazy to write document client one more time. So I want to be able to create, oh, sorry, DC document client. Come on. I want to be able to pass any string and I want to be able to easily pass a mock instead of this just to be able to do unit tests for this. And then I want to store, to add coffee. Sorry, to add coffee to the database. This is a simple method. This method requires a few different things. Let me just see here. This is the order that we used. So I'll use the same order here. Maybe it's better to use the object instead, but whatever. We already put this string number, number and this will return some promise. It should return probably a coffee or something like that to me. It doesn't really matter, but let's say that it returns a coffee. I can use that interface that we built, like from types coffee, I coffee and then I know that this returns I coffee. So let's try to save this database. How do we save this database? Well, I can use this.dc. So this is document client and document client has different functions. As you can see, a lot of them, I'll use this put function. Put function will store this database. Inside this, I need to provide two different values, table name. As you know, I already store this table name in this class. So I can do just this. Besides table name, it needs to store some item to the database. Item is basically what we want to add to the database, something that we try to add manually at the beginning of this course. So as we remember, we want to store primary key. Primary key will be coffee, the same that we did at the beginning. Secondary key can be, again, the same thing that we added at the beginning, it's country. Then this country. I'll use template strings here just to create one bigger string, and then name, and then this name here. Then I also want to store everything else as normal columns. I can basically do this country, name, quantity, and price. In production, you probably want to check these values and make sure that they are the real values, not something else. But right now, we'll not do that because we don't have enough time. As you can see here, it has no effect on this type because this is not returning a promise. To be able to get the promise, we need to add .promise at the end. It's a bit weird, but this is something that AWS does with a lot of things. Now, this is stored to a database. This will return something, but I don't really care about this. I want to return iCoffee and to be able to return iCoffee, I simply want to do return country name, price quantity, and that's it. Now I have my repository. It's nothing special. It's just something that stores the thing to DynamoDB and returns this. If I want to replace this with some other repository, I can create the same thing where this will store something to Postgre or some other database and that's it, but I'll still have the constructor that will get the table name and coffee that will just add these things. Now if I go back to this function, I'll see that I created a mistake here and I don't have a table name. 
To pass the table name, I'll take something from the process environment and I'll show you soon how to add this to a process environment. We'll just call it coffee table. It complains because this can be undefined. We need to protect from that, so I can say if there's no this, throw new error and that error can be something, let me see here, table name is required or something like that. Let's see here. That happens when I. Let's recap quickly before we add that. We have lambda.ts, lambda.ts just checks if we have this. When we don't have this, it will throw some error. Then it will create some DynamoDB repository. That DynamoDB repository instance, we just need to create to pass this coffee table name. Finally, it will add a coffee to the database by calling our business logic and will pass event, some parser and coffee DB repository. Then our business logic will just call this parser. Pass the event to that parser. Get the things that we really need. When we get the things that we really need, we'll call coffee repository, add coffee and pass the things that we want to pass. And that's it. Then we'll have also this repository that is probably the most important part right now, which will actually store this to our database. Any questions related to this?
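As a recap, here is a sketch of the whole repository described across the last two sections, using the AWS SDK v2 DocumentClient. The PK/SK format is an assumption that mirrors the single-table layout from earlier in the workshop; adjust it to whatever keys your table actually uses.

```typescript
// functions/common/coffee-db-repository.ts -- sketch
import { DocumentClient } from 'aws-sdk/clients/dynamodb';
import { ICoffee } from './types/coffee';

export class CoffeeDbRepository {
  constructor(
    private table: string,
    // Accepting the client as an argument lets unit tests pass a mock.
    private dc: DocumentClient = new DocumentClient(),
  ) {}

  async addCoffee(
    country: string,
    name: string,
    quantity: number,
    price: number,
  ): Promise<ICoffee> {
    await this.dc
      .put({
        TableName: this.table,
        Item: {
          PK: 'coffee',
          SK: `country#${country}#name#${name}`, // assumed sort-key format
          country,
          name,
          quantity,
          price,
        },
      })
      .promise(); // .promise() turns the SDK v2 call into a Promise

    return { country, name, quantity, price };
  }
}
```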

Passing Parser as Dependency Injection

Short description:

We pass the parser function as dependency injection to the Lambda function. This allows us to keep the Lambda TS file small and focused on the business logic, while the parser acts as an adapter for the event. The parser checks if the event body is empty, parses the body if it's not empty, and throws an error if there's an issue. We need to deploy the function and ensure it has the necessary permissions to write to the coffee table. The parser is particularly useful in more complex functions, where it helps separate concerns and allows for easier testing. We can use CloudWatch to debug the function and enable source maps for better error tracing.

Sure. Yeah, this one? Yes, it's really simple. No, the other one, the types.ts. Oh yeah, sorry, sorry. This one. Yes. And then the parser.ts. Yeah, the parser is basically just returning this, which is basically this; maybe I should use ICoffee here instead, but it doesn't really matter. It takes some event, and it knows that this parser is for API Gateway, so it knows that this is an API Gateway proxy event. It checks if the body is empty. If it's not empty, then it parses the body, because the body is a string; as you can see here, it's string or undefined. This can throw an error and that's fine. If this throws an error, the Lambda function will fail, and that's okay for me right now; we'll get error 500. If you don't want error 500, I can go here and just add a try/catch and then pass some error that I want to handle outside, or something like that. So let's try to deploy this quickly before adding tests. If I just deploy, it will not work, because I need this... just a second, I need this coffee table. To add this coffee table, I'll go back to my lib coffee-shop-api stack, this one. Inside the function, I want to add one more thing called environment, and inside the environment, I want to add COFFEE_TABLE, and that should be coffee table dot... table name. Yeah. Let's try to test this. So if I deploy this, do you think it will work or not? Any ideas? I'll go back to my chat so you can write there if you don't want to speak. So Robin gave us a good tip: we should use unknown instead of any for types. Thanks, Robin, that's a really good tip. There's an alternative. So our function is deploying right now. Any ideas, will this work or not? I'll tell you it should not work, but do you know why? You've got to give it access. Yes, exactly. This function does not have permission to write to this table, but let's just wait for everything to be deployed, and then we'll make sure that we have enough permissions for that. Can I ask a small question? Sure. Could you please explain again why you pass the parser as dependency injection? Because right now it seems a bit useless. Yeah, this is a really simple function, so in this case it's really not that useful; it's more useful in more complex functions. The business logic of this function is really, really simple. But what I'm trying to do here is basically this: I have my business logic, and everything that comes into my function has a small adapter. So my parser is basically an adapter that can adapt a Lambda event, or some local trigger, to what my business logic really expects. I don't really need to pass this, but otherwise I'd have more and more logic inside this lambda.ts file that I don't have tests for. So I want to keep this file as small as possible, and I don't want to do many things here, because I don't have any tests for this; my tests are over here. So I want to pass this function in and make sure that I'm handling all the errors there. That's the only reason. But for these small functions, it's up to you; there's no big benefit.
So it's not that much extra code, it's just having a stricter boundary so that the HTTP layer doesn't spread into the business layer. But I guess your parser sort of is your adapter layer there, right? Yeah, exactly. But like... That's it. Can you show me an example of a test where you're faking out your DocumentClient, for example, because those APIs are awful. Yes, I'll show you that in a second. Let's just try to invoke this first. Right, do we have enough time? Yeah, I'll show you the tests in a moment. Let's say that this coffee is $10... Oops, just a second. I missed something, but I have no idea what. curl -X POST -d... Oh, I got an internal server error. This is probably a lowercase i or something. Yeah, I have a 500 error. So let me quickly show you how to debug this function, because I think that's useful. To debug this function I can go to something called CloudWatch, and inside CloudWatch we have logs for all the functions and everything. Right now, just a second, I can go to log groups here. In log groups there are a lot of different things for me, but for you there will be one for AWS Lambda CoffeeShopApi, or whatever the name of this function is. I called this function multiple times, but I want to take the latest log stream. If I go here, I'll see the error, and the error is something like "unexpected token in JSON", blah, blah, blah. And then I see index.js line 71, which is useless. There's an easy way to fix this: we need to enable source maps. To enable source maps, there's a small trick in Node versions 12 and 14 — a small explanation for that here. I want to have a NODE_OPTIONS environment variable and pass the enable-source-maps flag to it. And now if I deploy again... come on... hopefully we'll be able to see it here.
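Before moving on, here is a hedged sketch of the two stack changes mentioned above — passing the table name to the function and turning on source maps through NODE_OPTIONS. It uses CDK v2 import paths, and the construct names, entry path, and PK/SK layout are illustrative assumptions, not the workshop's exact code.

```typescript
// lib/coffee-shop-api-stack.ts — illustrative sketch, not the workshop's exact code
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

export class CoffeeShopApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const coffeeTable = new dynamodb.Table(this, 'CoffeeTable', {
      partitionKey: { name: 'PK', type: dynamodb.AttributeType.STRING },
      sortKey: { name: 'SK', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    const addCoffeeFunction = new NodejsFunction(this, 'AddCoffeeFunction', {
      entry: 'functions/add-coffee/lambda.ts', // assumed path
      handler: 'handler',
      environment: {
        // Tell the function which table to write to
        COFFEE_TABLE: coffeeTable.tableName,
        // Node 12/14 trick: enable source maps so stack traces point at TypeScript lines
        NODE_OPTIONS: '--enable-source-maps',
      },
      bundling: { sourceMap: true }, // esbuild must emit the maps for the flag to be useful
    });

    // Note: no DynamoDB permissions are granted here yet, which is why the first deploy
    // fails with an access-denied error; the grant comes later in the workshop.
  }
}
```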

Testing and Troubleshooting

Short description:

While deploying, we ran into an error with source maps and an access-denied error from DynamoDB. Tests were written for the parseApiEvent function, the business logic, and integration. Mocking the repository and testing the local coffee DB adapter were explained. The integration tests create a new DynamoDB table and write real data to the database.

So while this is deploying, let's do the tests. Instead of writing everything, because we don't really have enough time for that, I'll make sure that I have the other part of my code here. It's the same thing, it's just my backup code, so I know that I have a solution that works. Let's go back here, just a second. Yeah. One more second until we deploy this thing. For some reason it takes a bit longer than usual; I don't know why. It's actually updating the stack, but most of the time it's a bit faster than this. It's much longer than on my local machine, like 10 times longer. Yeah, I'm sharing my screen, so my computer is much slower now for some reason. And if I run uptime here... yeah, it's not perfect; I should probably restart my computer before doing a workshop. Okay, let me run this again. And I'll get the same response again, no surprise. But if I go back here and wait a bit, because it takes some time to get the logs... here are the new logs. Ah, this is not working. Nice. So I have some error here, and the source maps are not working. Cool. I see that it's parseApiEvent, and JSON.parse is basically not working for some reason. Maybe I'm not sending the right body: country Colombia, name Colombia, quantity 10, price whatever. Oh, I know, I know. I need to add headers and tell it that the content type is application/json. I hope it's like this. If this doesn't work, I'll stop here and make sure to update this in the documentation after the workshop. Oh yeah, we have something much better: access denied inside DynamoDB, and unfortunately, for some reason, our source maps are not working right now. Oh no, they are working, but they're pointing to different files that are not ours, like state machine, blah, blah, blah. So that's it: now we know we don't have access to this DynamoDB table, we are not able to do a PutItem from this Lambda function. I'll show you that fix if we have time. Before that, I think a much cooler thing is related to tests. So let's see the tests here. If we want to test this, I can easily test parseApiEvent by creating some sample event. I'm using Jest for tests. I imported this parseApiEvent from the parser, and I'm calling this function slightly differently here. Here's a sample event; these are not really important things, just the fields that are required by an API Gateway version 2 event. There's no body, but there are some other fields that are really not important. Here are my tests. These are unit tests, and I want to see that it throws an error saying that the body is required. After that, I have another test that parses the event when I pass some things to it, and I expect that the result is really returning what I want. And I also want to check that I'm ignoring the additional attributes, and that's it. These tests are simple. For my business logic, again, I have simple unit tests. I'm requiring that function here, and here's how you mock. So, Aaron, I think you asked how to mock things like the DocumentClient and similar things. I'll show you that in a few seconds, but here I'm just mocking my repository by using Jest's requireMock, so I don't need to rebuild all the different things. And it's easy: I just create a mock here from this mocked repository, and I pass this mock in. That's it. There's no complex mocking or anything like that. So I can easily put in some sample event.
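A minimal Jest sketch of those unit tests — the parser tests plus a business-logic test with a mocked repository and parser. The function names and signatures are assumptions based on how the setup is described, not the workshop's exact code.

```typescript
// add-coffee.test.ts — illustrative unit tests; function names and signatures are assumptions
import { parseApiEvent } from './parser';
import { addCoffee } from './main';

const coffee = { country: 'Colombia', name: 'Colombia', quantity: 10, price: 10 };

describe('parseApiEvent', () => {
  test('throws when the body is missing', () => {
    // Only the fields we care about; the rest of the API Gateway event is irrelevant here
    expect(() => parseApiEvent({ body: undefined } as any)).toThrow('Body is required');
  });

  test('parses the body and ignores extra attributes', () => {
    const event = { body: JSON.stringify({ ...coffee, extra: 'ignored' }) } as any;
    expect(parseApiEvent(event)).toEqual(coffee);
  });
});

describe('addCoffee business logic', () => {
  test('passes the parsed coffee to the repository', async () => {
    // No DocumentClient anywhere: the repository is just a mock with the same interface
    const mockRepository = { addCoffee: jest.fn().mockResolvedValue({ id: '123', ...coffee }) };
    const mockParser = jest.fn().mockReturnValue(coffee);

    const result = await addCoffee({} as any, { parser: mockParser, repository: mockRepository } as any);

    expect(mockParser).toHaveBeenCalledTimes(1);
    expect(mockRepository.addCoffee).toHaveBeenCalledWith(expect.objectContaining({ country: 'Colombia' }));
    expect(result).toEqual({ id: '123', ...coffee });
  });
});
```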
MockParser is just a really simple mock function that returns something, and then I'm just checking things. Also here, I'm checking that this mock is really invoked, so that's fine. And then the integration tests are quite similar. I have something called local DB with the same interface, so it has an addCoffee function, an async function with the same interface that returns the same values. It just stores everything in a local table, which is basically whatever; I don't really care about this, it's just returning some ID or something like that. And if I want to test this, I just pass this local coffee DB adapter and make sure that everything works fine with it. And I don't think I have... oh, I have this inside the coffee repository. Oh no, I didn't add the tests for the coffee repository, but I'll add the tests for the DynamoDB repository in the GitHub repo that I'll send to you. Everything works like this. Actually, I have it here. No, no... here. Yeah. For the integration tests, I do this: before all, I create a new DynamoDB table. These params just create a partition key and a sort key in my DynamoDB table. At the end, I delete the DynamoDB table. I always need to wait for this table to exist or not exist, because otherwise the promise resolves before the table is really created or deleted. And then inside these tests, I just want to write real data to the database. That's how I test it; I'll send you the code for that, with some explanation here in this part. But yeah, that's how I test whether this really works against the real DynamoDB database. And unfortunately we don't have enough time to finish everything today.
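For reference, a hedged sketch of that integration-test setup — create a real table in beforeAll, wait for it, delete it in afterAll. It uses the AWS SDK v2 waiters; the table layout follows the PK/SK convention, while the repository factory and its return value are assumptions.

```typescript
// coffee-repository.integration.test.ts — illustrative; talks to real DynamoDB
import { DynamoDB } from 'aws-sdk';
import { coffeeRepository } from './coffee-repository'; // assumed factory taking a table name

jest.setTimeout(60000); // table creation/deletion can take a while

const dynamodb = new DynamoDB();
const tableName = `coffee-integration-test-${Date.now()}`;

beforeAll(async () => {
  // Throwaway table with the same PK/SK layout as the real one
  await dynamodb.createTable({
    TableName: tableName,
    AttributeDefinitions: [
      { AttributeName: 'PK', AttributeType: 'S' },
      { AttributeName: 'SK', AttributeType: 'S' },
    ],
    KeySchema: [
      { AttributeName: 'PK', KeyType: 'HASH' },
      { AttributeName: 'SK', KeyType: 'RANGE' },
    ],
    BillingMode: 'PAY_PER_REQUEST',
  }).promise();
  // createTable resolves before the table is usable, so wait until it really exists
  await dynamodb.waitFor('tableExists', { TableName: tableName }).promise();
});

afterAll(async () => {
  await dynamodb.deleteTable({ TableName: tableName }).promise();
  // Same on the way out: wait until the table is really gone
  await dynamodb.waitFor('tableNotExists', { TableName: tableName }).promise();
});

test('writes a coffee to the real table', async () => {
  const repository = coffeeRepository(tableName);
  const result = await repository.addCoffee({ country: 'Colombia', name: 'Colombia', quantity: 10, price: 10 });
  expect(result).toHaveProperty('id');
});
```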

Adding Access to DynamoDB

Short description:

We add access to a Lambda function to write to DynamoDB by granting the 'write data' permission. This allows the function to read and write to the table. We don't need to use the ARN, and it's possible to create a special role to grant more specific permissions. The naming convention for sort keys in DynamoDB is not standardized, but using a separator character that stands out and is easy to split the data by is recommended. If you have any more questions, feel free to ask.
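In CDK that grant is essentially a one-liner. A hedged sketch, reusing the coffeeTable and addCoffeeFunction names from the earlier stack sketch:

```typescript
// lib/coffee-shop-api-stack.ts (excerpt) — variable names follow the earlier sketch
// Grant only what the function needs: writing items to the coffee table.
coffeeTable.grantWriteData(addCoffeeFunction);

// Or, if the function should also read from the table:
// coffeeTable.grantReadWriteData(addCoffeeFunction);
```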

Any questions before we finish? As I said, I'll update this documentation early next week, and in each of these parts you'll have a link to the commit on GitHub so you can see what changed.

One last question: how do you give access to DynamoDB, to solve the problem from your deploy? Oh yeah, so how do we add access? So basically we want to give access to a Lambda function. In CDK, we want to give a Lambda function permission to write to DynamoDB. Just a second, let me look up how to create these permissions.

So basically, when you're creating a table, yeah, you can do this. This is the fastest and easiest way to do that. Just a second, let's try this: coffeeTable.grantWriteData. We want to write, and we want to grant that to our add coffee function, I think. So let's just see if that works now. There are multiple things that you can do: here you can define custom permissions, which we don't want, or you can use one of the grants that are predefined for us — read data, write data, stream read, read-write data. We just want to write data from this function, but I granted read-write data here. So, yeah, it's quite simple, but as you can see, our function will be able to do all of these things, just on this one table: this Lambda function will be able to read and write to this table. If you don't want that, you can define stricter permissions: you can create a special role and grant that role to this function, and that role can allow the Lambda function to do specifically one thing only, or something like that. Right now we can do this easily because we don't need to be able to read from this table; we never read from this table. But anyway, let's... Do you need to use the ARN to do that, then? Not really, I don't think so. Let's see. What's the grantee interface? Yeah, I don't know, let's see. This handler? No, they're using the function here, so the function should work. Let's see. Because one of the pains of the Serverless Framework is that you've got to use the ARNs and... Yes, IAM roles. It's just a real pain. Yes, I agree. So let's see... it works, as you can see. And it works because CDK will take the ARN from this function somewhere under the hood, so we don't need to manage that. Now if I go to my DynamoDB, I can see that I have my new coffee. I have no idea what the name of my coffee is, but let's see. I think it was Colombia. Country... country Colombia. Yeah, this one. So let's try to add one more just to be sure that everything works. So, country... give me some other country. Cuba? Ireland's known for its coffee. Yeah, yeah, definitely. Cuba. Let's call this coffee three, whatever. We have a quantity of 30, and let's say that this coffee is a bit cheaper. If I go here, and... how do I reload? I have no idea. How? Scan. Cuba, yeah, it works. So here's the thing that I mentioned: if I query this table for coffee, it will return all the coffees. But then I can say, for example, the sort key begins with country#Brazil, and I get just one. And then Cuba: if I add another coffee for Cuba, for example, with a different quantity of 40 and a price of four, and go back here again, I can see two of them. As you can see, it's a really, really fast way of searching. I can also say, find all countries that start with the letter C, and that will work. Or I can say, for example, find me all countries that start with Cuba. And then I can add filters and say, for example, where the name contains... and when I run this, it will find this one. Sure. How do you name your sort keys — is that your own convention, or is that what the DynamoDB community uses? I don't know. I think I saw this multiple times, like PK and SK, because this is the partition key and the sort key, but... I know, that's not what I meant. I mean the country, hash, Cuba part. Oh, yeah, with the hash. Yeah, yeah.
So the hash part is explained in many different places — you'll see it in the links that I sent at the beginning of this workshop. You'll see that they use these composite keys, and the hash is a character that is often used for splitting the composite key. But it's up to you; you can use basically any character you want. Is it good to use this by convention? There's no best practice. You want something that doesn't appear often in your data. For example, if you're building something similar to Twitter and you have hashtags inside your data, then it's probably better to pick some other character, like the pipe or something like that. You want something that is easy to type but also easy to split the data by. And maybe that's it — obviously not something that appears in user data. Oh, definitely. Something that stands out and has only a few characters, I suppose, yeah. Yeah, definitely. So, do you have any other questions? Because I don't want to take too much of your time. It's Friday, and for me this was a long week; I'm pretty sure many of you had a long week too.
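To tie that together, here is a hedged sketch of a query that leans on the composite sort key, using the AWS SDK v2 DocumentClient. The PK value, key names, and the country# prefix follow the convention shown above, but the exact table layout is an assumption.

```typescript
// query-coffees.ts — illustrative query against the single-table layout
import { DynamoDB } from 'aws-sdk';

const documentClient = new DynamoDB.DocumentClient();

// Find all coffees whose sort key starts with a given prefix, e.g. "C" for Colombia and Cuba
export async function findCoffeesByCountryPrefix(prefix: string) {
  const result = await documentClient
    .query({
      TableName: process.env.COFFEE_TABLE ?? 'coffee-table',
      KeyConditionExpression: 'PK = :pk AND begins_with(SK, :sk)',
      ExpressionAttributeValues: {
        ':pk': 'coffee',
        ':sk': `country#${prefix}`,
      },
    })
    .promise();

  return result.Items ?? [];
}
```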

Handling Environments and Rollbacks

Short description:

We use Git as a rollback mechanism for deployments. AWS CDK provides a convenient way to set up continuous deployment. If something fails during deployment, it can be rolled back to the previous version. Different environments can be managed by deploying multiple AWS CloudFormation stacks. In our production setup, we have separate sub-accounts for production, staging, and development environments. Each developer has their own environment, and the same stack can be deployed to multiple environments for testing before going to production.

I'm really sorry that we were not able to cover everything. As I said, I'll update this, and if you want more, you can see it here. You can follow me on Twitter, and feel free to ask me any questions related to this. So, any questions now? I have one question regarding environments. For example, how do you handle different environments? Let's say we have something in production, but we're doing some changes and we want to test them in a different environment. And the other question would be, if something goes wrong, is there an easy way to roll back the change? So for us, we basically use Git as the rollback mechanism. There is a cool thing — just a second — AWS CDK pipelines. Yeah, this one. I'll paste this link to both. So the CDK team built a nice way to set up continuous deployment with CDK. It will do something like this: source, build, update pipeline, publish assets, and so on. And as you can imagine, it's deployed as basically a CDK construct, so you can use this to set up your continuous deployment. What we do for Vacation Tracker: whenever we have a new deployment, we push to some branch and add a tag, and that's automatically deployed to production. If we want to roll that back, we simply do it in Git and the code is deployed again. There are ways to do rollbacks with CloudFormation, but I think most of those ways just overcomplicate everything. The good thing is that if something fails during the deployment here — if I manage to, I don't know, break something — we'll see the error here in the events, and it will simply roll back to the version before that. So you won't be able to break the infrastructure in production that easily. You can break your code, but for your code I think it's better to use Git or some other source code management tool to do rollbacks. And for different environments, this is just an AWS CloudFormation stack, so you can easily deploy multiple stacks. What we do in production: we have one sub-account for production, one sub-account for our staging environment, one for the develop environment, and then each developer has their own environment, and we can deploy the same stack to multiple environments and make sure that someone checks everything before it goes to production. So for us, you test in your own environment as a developer. Then, when you finish some feature, you create a pull request against develop; someone checks that, we run tests, and everything goes to the develop environment. And then, when we want to create a new release, we create a new release environment — basically we have staging environments for releases — and our QA tester checks the things they want, and after that, when everything is approved, it's merged to production. That makes sense.
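A hedged sketch of the "same stack, multiple environments" idea in the CDK app entry point; the stack names, account IDs, and region below are placeholders, and in practice each environment could point at its own sub-account.

```typescript
// bin/coffee-shop-api.ts — illustrative; accounts, regions, and names are placeholders
import { App } from 'aws-cdk-lib';
import { CoffeeShopApiStack } from '../lib/coffee-shop-api-stack';

const app = new App();

// The same stack definition deployed as independent CloudFormation stacks per environment
new CoffeeShopApiStack(app, 'CoffeeShopApi-dev', {
  env: { account: '111111111111', region: 'eu-central-1' },
});

new CoffeeShopApiStack(app, 'CoffeeShopApi-staging', {
  env: { account: '222222222222', region: 'eu-central-1' },
});

new CoffeeShopApiStack(app, 'CoffeeShopApi-production', {
  env: { account: '333333333333', region: 'eu-central-1' },
});
```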

AV Testing and Local Debugging

Short description:

A/B testing can be done with API Gateway's built-in canary (blue-green style) deployments. They allow rolling out changes to a percentage of users and rolling back if errors occur. However, this may not be suitable for APIs connected to specific platforms like Slack and Microsoft Teams, which need fixed URLs. For local debugging, you can use AWS SAM's local Lambda container or run the TypeScript function locally: you can pass attributes to the function and invoke it, or run the tests. Remember to delete the stack and its associated resources manually if the deletion fails; check the DynamoDB table and delete it if necessary. Thank you for joining the workshop, and stay up to date on future workshops and books at slobodan.me/subscribe.
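As a hedged sketch of the "run the TypeScript function locally" approach, here is a tiny script you could run with ts-node, feeding a saved CloudWatch event into the handler; the handler path, export name, and event file are assumptions.

```typescript
// scripts/invoke-local.ts — run with: npx ts-node scripts/invoke-local.ts
import { handler } from '../functions/add-coffee/lambda'; // assumed handler location

// An event previously logged with console.log(JSON.stringify(event)) and copied from CloudWatch.
// Importing JSON like this needs "resolveJsonModule": true in tsconfig.json.
import sampleEvent from './sample-event.json';

async function main() {
  const response = await handler(sampleEvent as any);
  console.log('Handler response:', JSON.stringify(response, null, 2));
}

main().catch((error) => {
  console.error('Local invocation failed:', error);
  process.exit(1);
});
```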

And have you done any A/B testing, and for example, how would that work with this? Oh yeah, you can easily do that. But what kind of A/B testing? Most of the A/B testing we're doing is more or less connected to the front end: you create a few different routes, and then you somehow decide from the front end which routes you want to run. But there's another cool thing in API Gateway that you can use during deployments, and also for A/B testing at some point: blue-green, or canary, deployments are supported by API Gateway. You can tell API Gateway to, for example, roll out new changes to 10% of your users every 10 minutes or something like that, and then, if there are enough errors in CloudWatch, it can roll everything back to the previous version. So you can automate these things. We are not using that in production because our API is connected to Slack and Microsoft Teams and we need to provide specific URLs for them, so we're not able to do canary deployments for that, and we also use a lot of AppSync. But yeah, it's useful for some kinds of applications. So, Alex, thanks for a great reminder. Let's do this to end the workshop: npm run cdk destroy, to actually destroy everything that we built today. I'll try to run it first. And then there's another great question about local debugging. There are a few things that you can do for local debugging. For example, AWS SAM has a Lambda container... so I'll just delete my stack now; you should do this too. This will delete the stack, but for whoever had the bootstrap issue, it will not delete that bootstrap stack: you need to delete the S3 bucket first, and only then is it possible to delete the stack itself. Perfect. Let's see. I had the bootstrap issue. Oh, for me it's destroyed. Yeah, yeah, just... interesting. So let's see. If I go here to my CloudFormation, I'll see that I don't have this stack anymore. I have this other stack that I got with bootstrap; I can delete it by clicking the delete button here, but it probably won't let me without manually deleting the S3 bucket. But let's see. So, for local debugging, you can use SAM local to run a local Docker container and debug the function locally. Or — the cool thing with this approach, remember the hexagonal architecture that we mentioned at some point — the good thing is that your function is simply a TypeScript function. So if you go to functions, then cd into the add coffee function and its source folder, this is our business logic. You can easily run this business logic. I don't have ts-node here, but you can run it by starting a Node console and passing some attributes to this function — you need ts-node because this is TypeScript. The good thing with this setup is that it's just a simple function: if you want to test one function locally, basically addCoffee equals require... something like this should work. Of course it's TypeScript, so this will not work for me right now, but if you have ts-node, it will work. And then you can easily invoke this function with different attributes and everything, or you can simply run your tests and see. The cool thing is that you can go, for example, to your lambda.ts file and log the event by doing a console.log — actually, sorry, a console.log of JSON.stringify of the event.
And then from CloudWatch you can take this event and just pass it to your local function, and make sure that everything works (or doesn't work) with that event. I hope that answers the question. Unfortunately that's something that is really important, but we didn't have enough time to cover it properly. So, as you can see, the delete failed here, and it failed because of my bucket. I can delete this stack while keeping the bucket — I'll do that right now, and then I'll delete the bucket manually. I also want to check one more thing in DynamoDB. It's possible that it didn't delete our database. Let's just check... yeah, our database remained because of this line. This is the last thing that I want to show you, sorry. cd, cd, npm run cdk synth. If I run this, I'll see that it added something called a deletion policy of retain. So it says that it will retain this database even if you delete this stack. So I need to manually delete this table by selecting it and clicking the delete button. I can create a backup before deleting it; I don't want to do that, I just want to delete it. So, thank you very much. I hope you learned something new today and had some fun. I'll update the documentation, as I said — definitely not today, but probably tomorrow or on Sunday — so you can check early next week and you'll have more material at this link. If you want to learn more about serverless applications, we'll soon publish our next book on testing — sorry, not testing, but on building GraphQL APIs with AppSync and TypeScript. So feel free to go to my website, slobodan.me slash subscribe, subscribe, and you'll get more workshops and more updates once we finish a few chapters of the book. Yeah, thank you very much. If you have any additional questions, feel free to reach me on Twitter — this is the URL. Thanks very much. Cheers. Thank you. Have a nice weekend. Thank you. Yeah, you too. Thanks.
