In this talk I explain how the AWS Lambda service works: the architecture, how it scales, and how developers should think when they design their software using Lambda functions.
AWS Lambda under the hood
From: Node Congress 2023
Transcription
Hi everyone, and welcome to this session on AWS Lambda under the hood. I know several of you are not only looking at how to build some code, but also at why you should build the code in a certain way. Whether you are an expert at writing Lambda functions or you are thinking of using serverless and AWS Lambda in your next workload, I think at the end of this talk you will be able to make conscious decisions on why to write your code in a certain way. So without further ado, let's go ahead. My name is Luca. I'm a principal serverless specialist at AWS. I'm based in London, and I'm an international speaker and a book author.

In this talk, we're going to cover quite a lot of ground. First of all, what is a Lambda function? Because maybe you have an idea what it is, but let's try to nail down the key characteristics of Lambda functions. Then we move on to understanding how the service architecture works under the hood. We then discuss how we compose a Lambda function in AWS. Then we move into the function lifecycle and how to leverage it to maximize the benefit of your code. And last but not least, we talk about how to optimize your Node.js code for Lambda. There is a lot of ground to cover, so let's start.

What is a Lambda function? A Lambda function, in a nutshell, is as simple as this: you provide some code, and we take care of provisioning and managing the servers for you. So you don't have to think about the networking side, or how to orchestrate the scalability, and so on and so forth. You just focus on what really matters for you: creating value for your customers and, moreover, writing the Node.js code that maps your business capabilities into production. You pay by the millisecond. So every time you invoke a Lambda function, you pay only for the execution time and nothing more. And that is a great way to think not only about production, but also about, for instance, testing and staging environments that are not used 24/7 like your production environment. There, you just pay for what you use. You don't have to provision containers or virtual machines that sit there 24/7.

You can provide your code in two ways. You can give us your code through a zip file, when the file is up to 250 megabytes. Or, if it's bigger than that, you can use container images up to 10 gigabytes. In both cases, you can leverage the benefits of AWS Lambda without any problem. We also offer built-in languages. We have runtimes that are available and managed by us for you, like Java, Go, obviously Node.js, .NET, Python, and many others. But if you have a specific use case, you can also bring your own, and we operationalize it in the exact same way. A classic example: we had a customer working in finance that needed to use COBOL, but they really fell in love with Lambda, so they created their own custom runtime for COBOL. Last but not least, we scale up in milliseconds. So based on your traffic pattern, we take care of scaling your Lambda function and answering all the requests that are coming in.

Usually, from a developer perspective, what you see is that you write some code, as you can see on the left of this slide, you upload it to your AWS account, some magic happens, and you start to have an API that is functioning. The reality is there is way more. So the question now is: have you ever thought about how Lambda works under the hood? Let's take a look into that.
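Before we go under the hood, here is a minimal sketch of the code side of that contract: a Node.js handler that Lambda invokes for you. The file name and response shape are just illustrative.

```js
// handler.mjs — a minimal Node.js Lambda handler (names are illustrative).
// Lambda calls this function with the event and a context object;
// whatever you return becomes the result of the invocation.
export const handler = async (event, context) => {
  console.log('Received event:', JSON.stringify(event));
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from Lambda!' }),
  };
};
```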
First of all, there are two operational models for Lambda functions. The first is the synchronous invocation, where, for instance, you have an API Gateway that exposes your API, a request comes in from a client, a Lambda function is triggered to compute the response, and you serve the response synchronously to your client. The other option is the asynchronous invocation, where in this case you have a service that pushes an event into the Lambda service, the Lambda service stores the event inside an internal event queue, and then Lambda functions start to retrieve these events, slowly but steadily, and do some work on them. The requester, in this case Amazon EventBridge, for instance, directly receives an acknowledgement and nothing more. Those are the two ways that Lambda invocation works.

So, looking at the grand scheme of things: on the left of this slide there are multiple services, synchronous or not, sending requests to the AWS Lambda service, which is the big rectangle in this slide. How we get from there to your code on the far right of this slide, where there is a micro-VM sandbox, is an interesting journey. First, I want to highlight what's happening around your sandbox. The sandbox is where your code is running. Your micro-VM, which holds the code that you have written and that we operationalize, runs inside a worker. And obviously, there isn't just one worker; there are way more. In AWS we have multiple availability zones and, as you can see here, you have multiple workers running inside one availability zone. Think of an availability zone as a data center. Every time we create a region, it's composed of multiple availability zones. Therefore, every time you push code into Lambda, your code automatically becomes available in multiple data centers. You don't have to do anything. You just focus on which region you want to deploy to and what the business logic is, and we take care not only of operationalizing the code, but also of making it highly available across our infrastructure.

Now, let's take a deeper look into the invocation modes and how they work inside the architecture. In the synchronous mode, what happens is that, for instance, API Gateway synchronously calls a front-end service inside the AWS Lambda service, which returns an immediate response: it invokes a specific worker, spins up a micro-VM, and your code starts to run and immediately returns either the response or an error to the client. When you look at the invocation mode for asynchronous Lambda functions instead, it's slightly different. In this case you have, for instance, SNS pushing a message to the front-end. The front-end stores that specific message inside an internal queue. The caller receives an acknowledgement just saying: yes, we took your request into account. And then the message waits inside the internal queue.
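As a sketch of what those two modes look like from the caller's side, this is how you could invoke the same function synchronously and asynchronously with the AWS SDK for JavaScript v3; the function name is a placeholder.

```js
// invoke.mjs — synchronous vs. asynchronous invocation with the AWS SDK v3.
// 'my-function' is a placeholder function name.
import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

const client = new LambdaClient({});

// Synchronous: the call waits for the function to finish and carries back
// the response (or the error) in the payload.
const sync = await client.send(new InvokeCommand({
  FunctionName: 'my-function',
  InvocationType: 'RequestResponse',
  Payload: JSON.stringify({ hello: 'world' }),
}));
console.log('Response:', Buffer.from(sync.Payload).toString());

// Asynchronous: Lambda stores the event in its internal queue and
// acknowledges immediately; the function runs later.
const queued = await client.send(new InvokeCommand({
  FunctionName: 'my-function',
  InvocationType: 'Event',
  Payload: JSON.stringify({ hello: 'world' }),
}));
console.log('Acknowledged with status:', queued.StatusCode); // 202
```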
And the beauty of this asynchronous approach is that we are not only running your code; we also support a bunch of features. You can set up the retry mechanism, up to three invokes in total. When something fails, you can define a destination, and you can also define a dead-letter queue, which is a useful pattern when you want to automatically retry, build your own workflow around the errors, or even debug your system.

There is a third method that we haven't talked about yet: event source mapping. Certain services, like MSK, Kinesis, SQS, or DynamoDB Streams, use another component available inside the AWS Lambda service, called event source mapping. Event source mapping takes care of polling messages from the source and then synchronously invokes your Lambda function. Thanks to event source mapping, you get batching, error handling, and way more. The beauty of this is that with other services you would need to write your own polling logic; in this case, Lambda completely abstracts it for you. You just set up what your batch of messages should look like, and it will be sent to your Lambda function code so you can run your business logic on it.
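To show what that batch looks like from the function's point of view, here is a sketch of a handler behind an SQS event source mapping. It assumes the mapping has the ReportBatchItemFailures setting enabled, so only failed messages are retried.

```js
// sqs-handler.mjs — a sketch of a handler behind an SQS event source mapping.
// The mapping polls the queue and invokes the function with a batch of
// records; returning the failed message IDs (with ReportBatchItemFailures
// enabled on the mapping) makes Lambda retry only those messages.
export const handler = async (event) => {
  const batchItemFailures = [];
  for (const record of event.Records) {
    try {
      const message = JSON.parse(record.body);
      // ...your business logic for each message goes here...
      console.log('Processed message', record.messageId, message);
    } catch (err) {
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
};
```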
Now we have seen how the service works, but let's try to understand, when you upload some code, what happens when your micro-VM is spun up to run it. Every micro-VM uses Firecracker. Firecracker is an open source micro-VM technology that we created for Lambda, and for serverless computing in general, and it is now also used by other AWS services. It's completely open source, as I mentioned, so you can look into it and see how it works. It's a fantastic piece of software that we use to operationalize your code for Lambda functions.

So how does it work? When there is an input, so someone triggers and invokes a Lambda function, we select a worker host. The input, the event, arrives at the worker host. The first thing it does is load the code that you have uploaded. This is usually called a cold start, mainly because it takes slightly longer than usual: we retrieve the code at runtime and then create your micro-VM with Firecracker, together with the runtime that you selected, or the container. There are different levels at which these things are cached inside the system, but in general, the idea is that you have a cold start when you build the micro-VM with the code and everything for the first time. After that, your Lambda function is warm. You start to hit the Lambda function, the micro-VM, multiple times with different inputs, and suddenly you don't have cold starts anymore. You get responses that don't incur any latency for generating the micro-VM. And that is great.

So now I think it's a good time to talk about the lifecycle, or how the Lambda function works once it has generated a new micro-VM. This is probably the best diagram that I can show you, and it's available on our website. A Lambda function goes through three stages. You have the initialization; then the invocation, which happens multiple times for as long as your execution environment is up and running and available; and then the shutdown, which is when, let's say, your Lambda function is not called anymore. As we said, you pay for your execution time, so after a while we claim the infrastructure back, and you just carry on until the next invocation, obviously. In every single one of those stages, different things happen. So let's start by looking into the initialization phase.

In the init phase we first have the extensions, because you can use extensions with your Lambda function; think of sidecars that are available in your execution environment. That's usually the first thing that is loaded. Then there is the runtime loading: if you selected Node.js, for instance, the Node.js runtime is initialized. And then you have the function initialization. Here is one of the key parts of the system. In the function initialization, what you typically do is, for instance, retrieve secrets that are used during the invocation of your Lambda function, or parameters that are stored outside your Lambda function. The beauty of this is that it is done only in the initialization phase, so you don't need to retrieve that information every single time. Even establishing a connection with a database can be done in the initialization phase, and then you can forget about it: it will be available for all the invocations that that execution environment handles, as you can see in the sketch a bit further down. Then we move into the invocation phase, where the execution environment is already warm, and you start to respond to every request. Bear in mind that as long as the execution environment is up and running, you can definitely skip the whole initialization phase; your Lambda function will be quite fast, because it's just invoked and is there, available for you. At the end, when we claim the Lambda function back, you have the runtime shutdown and the extension shutdown. Pretty easy. If you want to know more, you can look into the documentation at this link, and you will find more information about this diagram and how the lifecycle works.

Now let's talk about how you can optimize your code in a Lambda function, and especially your Node.js code. As we said, we have an input that arrives from outside the service, and we generate a micro-VM with your code. In order to reduce the cold start time, one of the key techniques is reducing the size of your bundle. You can use webpack, esbuild, or whatever other tool you need to reduce the size of your bundle. This will obviously make for a faster cold start, because suddenly your code is just your piece of logic and the libraries it actually uses. All the dev dependencies, and all the functions that your code does not use, can be eliminated through tree shaking, which is totally helpful for reducing the cold start when you're working with Lambda functions.

Another feature that is available in Lambda, if you have a specific workload: imagine, for instance, that your workload is predictable and you know up front that on Sunday evening, from 7 p.m. to 9 p.m., you're going to have a surge of traffic. You can use provisioned concurrency. Provisioned concurrency basically allows you to set a time frame and select a specific Lambda function to keep warm. What happens is that we take your Lambda function and start to initialize it up front, so for that specific period of time you're going to have some Lambda functions that are available and already warm to fulfill the surge of traffic. That is a fantastic way to scale your workloads without having cold starts.
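Tying the lifecycle back to code: everything outside the handler runs once, during the init phase, and is then reused by every warm invocation of that execution environment. A minimal sketch, assuming a hypothetical createConnection helper in ./db.mjs:

```js
// db-handler.mjs — a sketch of init-phase reuse. Everything outside the
// handler runs once per execution environment (during the cold start),
// so the connection below is shared by all subsequent warm invocations.
// createConnection and ./db.mjs are hypothetical, for illustration only.
import { createConnection } from './db.mjs';

// Init phase: runs once, thanks to top-level await.
const connection = await createConnection(process.env.DB_HOST);

export const handler = async (event) => {
  // Invoke phase: the connection is already established, so each
  // invocation only pays for the query itself.
  return connection.query('SELECT * FROM orders WHERE id = ?', [event.orderId]);
};
```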
And moreover, if you have top-level await, the initialization phase takes care of that part. So things like retrieving parameters from external sources, such as Parameter Store from Systems Manager, will be resolved in the initialization phase, and you won't need to do them again afterwards. You reduce the execution time of your Lambda function.

Another suggestion is to use the AWS SDK for JavaScript version 3, which is a fairly recent release. Obviously the package is smaller compared to v2, and that is a great thing, but there are also a bunch of other features that are quite handy. For instance, if you were using version 2 of the SDK with DynamoDB, one suggestion we usually gave is that you need to keep the TCP connection alive; otherwise, every time you call and execute a Lambda function, it establishes a new TCP connection. That pain goes away with version 3, where keep-alive is embedded inside the SDK, so you don't have to handle it yourself anymore. Moreover, something people often don't know is that the Node.js runtimes embed a specific version of the AWS SDK. Therefore, if you don't depend on a specific version of the SDK, you don't need to bundle the SDK inside your dependencies; you can leverage the one that is shipped alongside the runtime, and you have one dependency less to handle.
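Putting a few of these tips together, here is a sketch that fetches configuration once at init time, using the SDK v3 and top-level await; the parameter name /my-app/config is a placeholder.

```js
// config-handler.mjs — fetching configuration once, in the init phase,
// with the AWS SDK for JavaScript v3. '/my-app/config' is a placeholder.
import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';

const ssm = new SSMClient({}); // v3: TCP keep-alive is on by default

// Top-level await: resolved during initialization, not on every invocation.
const { Parameter } = await ssm.send(new GetParameterCommand({
  Name: '/my-app/config',
  WithDecryption: true,
}));
const config = JSON.parse(Parameter.Value);

export const handler = async () => {
  // Warm invocations reuse the cached config with no extra SSM call.
  return { statusCode: 200, body: JSON.stringify(config) };
};
```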
As for caching in Lambda, you can cache within the execution environment, and there are two ways to do so. The first is the in-memory cache inside your micro-VM, which, as we have said, is populated in the initialization phase that we discussed extensively. You also have a /tmp folder that can store up to 10 gigabytes of data; by default it's 512 megabytes, but you can obviously increase this limit. And you can also store and cache data across multiple Lambda functions if needed: for instance, you can use services like Elastic File System, or ElastiCache using Redis or Memcached, to store your data across multiple Lambda functions. In that case, they will retrieve the data from EFS or ElastiCache and you will be good to go.

Another tool that is worth mentioning, and an optimization that I highly encourage, is an open source tool called Lambda Power Tuning. Power Tuning runs your Lambda function code in a way that allows you to understand what the best setup is for your Lambda function's invocation time and cost per invocation. There are different parameters you can set, and very often people think that the lowest amount of memory will always be the cheapest, but in reality it's not. That's why we have this tool: you can run it and understand which is the right architecture to run your Lambda on, ARM or x86, and also which is the right memory size, based on the optimization that you want. As you can see in this slide, you can optimize for cost or you can optimize for time, and both are valid dimensions to think about.

Last but not least, there is a library called Lambda Powertools that streamlines the integration of observability inside your Lambda functions. Logging, tracing, and metrics become a breeze thanks to Lambda Powertools, and it gives you best practices embedded inside your code out of the box. If you want to know more, you can find the link in this slide.

To recap, what we have seen so far is that Lambda is a fantastic service that allows you to use just what you really need. You focus your day-to-day on creating business logic rather than operationalizing your infrastructure, and you can definitely handle spikes of traffic, to a certain extent, as well as other types of workloads. If you want to know more about how Lambda works under the hood, there is another talk that I really recommend, done by a colleague of mine, Julian Wood, together with a principal engineer from the Lambda team, that goes into even more depth on what we have seen so far. And you can also take a look at the security whitepaper that is available on the AWS website, which provides a lot of insight into the security model and how Lambda works under the hood. I hope you enjoyed this session. Enjoy the rest of the conference.