AWS Lambda under the hood


In this talk I explain how the AWS Lambda service works: its architecture, how it scales, and how developers should think when designing software built on Lambda functions.

FAQ

AWS Lambda is a serverless compute service: you provide the code, and AWS handles provisioning, managing, and scaling the infrastructure that runs it. This lets you focus on writing code without worrying about the underlying systems.
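For a concrete picture, this is roughly what the code you provide looks like on the Node.js runtime; a minimal sketch in TypeScript, assuming an API Gateway HTTP API trigger (the event shape depends on whatever trigger you actually configure):

```typescript
// Minimal Lambda handler sketch for the Node.js runtime (TypeScript).
// APIGatewayProxyEventV2 is an illustrative assumption: the event shape
// depends on the trigger you configure.
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  // Your business logic goes here; AWS provisions, runs, and scales the
  // environment that executes this function.
  const name = event.queryStringParameters?.name ?? 'world';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```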

Lambda functions can be invoked in two ways: synchronously and asynchronously. With synchronous invocation, for example via API Gateway, the caller waits and receives the response directly. With asynchronous invocation, events are placed in an internal queue, the caller immediately receives an acknowledgement, and Lambda processes the queued events separately.
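From the caller's side, the difference can be sketched with the AWS SDK for JavaScript v3: InvocationType 'RequestResponse' waits for the function's result, while 'Event' only queues the request and returns an acknowledgement. The function name and payload below are placeholders.

```typescript
import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

const lambda = new LambdaClient({});
const payload = Buffer.from(JSON.stringify({ orderId: '123' })); // example payload

// Synchronous: the call waits until the function returns its response.
const sync = await lambda.send(new InvokeCommand({
  FunctionName: 'my-function',           // placeholder name
  InvocationType: 'RequestResponse',
  Payload: payload,
}));
console.log('response:', Buffer.from(sync.Payload ?? []).toString());

// Asynchronous: Lambda queues the event internally and acknowledges with 202.
const queued = await lambda.send(new InvokeCommand({
  FunctionName: 'my-function',
  InvocationType: 'Event',
  Payload: payload,
}));
console.log('status code:', queued.StatusCode); // 202 means the event was queued
```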

AWS Lambda offers several benefits including automatic scaling, no need for provisioning or managing servers, and you only pay for the compute time you consume. This makes it cost-effective for varying loads and for environments like staging and testing that aren't in constant use.

AWS Lambda supports multiple built-in runtimes like Java, Go, Node.js, .NET, Python, and others. Additionally, you can create custom runtimes for languages not natively supported by AWS Lambda, such as COBOL.
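A custom runtime is essentially a program named bootstrap that loops over the Lambda Runtime API: fetch the next event, run your code, post back the result. The sketch below shows that loop in TypeScript for illustration only (a COBOL runtime would speak the same HTTP API); handleEvent is a hypothetical stand-in for your own logic.

```typescript
// Rough sketch of a custom runtime loop against the Lambda Runtime API.
// AWS_LAMBDA_RUNTIME_API is provided by the execution environment.
const api = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime`;

// Hypothetical stand-in for the code your runtime would actually run.
async function handleEvent(event: unknown): Promise<unknown> {
  return { echoed: event };
}

while (true) {
  // 1. Ask the Runtime API for the next event (long-polls until one arrives).
  const next = await fetch(`${api}/invocation/next`);
  const requestId = next.headers.get('lambda-runtime-aws-request-id');
  const event = await next.json();

  // 2. Run the handler and post the result back for this request id.
  const result = await handleEvent(event);
  await fetch(`${api}/invocation/${requestId}/response`, {
    method: 'POST',
    body: JSON.stringify(result),
  });
}
```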

A cold start occurs when a Lambda function is invoked and no warm execution environment is available, for example after the function has been idle, causing a delay while the environment is initialized. You can reduce the impact of cold starts by shrinking your deployment package, using provisioned concurrency, and using the latest AWS SDKs, which are optimized for performance.
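One common technique is to move expensive initialization outside the handler so it runs once, during the init phase, and is reused while the execution environment stays warm. A minimal sketch, assuming a DynamoDB table whose name arrives through a TABLE_NAME environment variable and a partition key named pk (both assumptions for illustration):

```typescript
import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

// Created once per execution environment, during the init phase,
// and reused across warm invocations instead of on every request.
const dynamo = new DynamoDBClient({});

export const handler = async (event: { id: string }) => {
  const item = await dynamo.send(new GetItemCommand({
    TableName: process.env.TABLE_NAME!,   // assumed environment variable
    Key: { pk: { S: event.id } },         // assumed key schema
  }));
  return item.Item ?? null;
};
```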

Yes, AWS Lambda automatically scales by running multiple instances of your function in response to incoming events, handling each invocation independently and concurrently.

You can deploy Lambda functions using a ZIP file for code sizes up to 250 MB, or as a container image of up to 10 GB. This flexibility allows for integration into various workflows and CI/CD pipelines.
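For illustration, both packaging options can be expressed with the AWS CDK in TypeScript; the asset paths and construct names below are placeholders.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class PackagingStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Option 1: ZIP deployment (code and dependencies up to 250 MB).
    new lambda.Function(this, 'ZipFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'), // placeholder path to your built code
    });

    // Option 2: container image deployment (image up to 10 GB).
    new lambda.DockerImageFunction(this, 'ImageFunction', {
      code: lambda.DockerImageCode.fromImageAsset('app'), // placeholder path with a Dockerfile
    });
  }
}
```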

Provisioned concurrency is a feature in AWS Lambda that allows you to prepare and keep a specified number of Lambda instances ready to respond instantly to events, eliminating cold starts during periods of expected high traffic.
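As a sketch of how this might be configured with the AWS CDK (the function definition, asset path, and the count of 25 are placeholders): provisioned concurrency is attached to a published version or an alias, not to the unpublished $LATEST.

```typescript
import { Stack } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ProvisionedStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const fn = new lambda.Function(this, 'Fn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'), // placeholder path
    });

    // Provisioned concurrency is configured on a version or alias: here,
    // 25 execution environments are kept initialized and ready to respond.
    new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: fn.currentVersion,
      provisionedConcurrentExecutions: 25,
    });
  }
}
```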

The AWS Lambda lifecycle includes three key phases: initialization, invocation, and shutdown. During initialization, the execution environment is prepared. Invocation handles the actual processing of events, and shutdown cleans up resources when the function is no longer needed.
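A hedged sketch of how those phases map onto a Node.js function file: top-level code runs once during initialization, the exported handler runs once per invocation, and shutdown can sometimes be observed as a SIGTERM, though that signal is not guaranteed in every configuration, so treat it as best-effort cleanup only.

```typescript
// Init phase: top-level code runs once, when the execution environment is created.
import { S3Client } from '@aws-sdk/client-s3';
const s3 = new S3Client({});
console.log('init: execution environment created');

// Invocation phase: the handler runs once per event, possibly many times
// in the same environment while it stays warm.
export const handler = async (event: unknown) => {
  console.log('invoke: processing event');
  // ... business logic using the already-initialized `s3` client ...
  return { ok: true };
};

// Shutdown phase: best-effort cleanup. A SIGTERM is not guaranteed to be
// delivered in every configuration, so do not rely on it for critical work.
process.on('SIGTERM', () => {
  console.log('shutdown: environment is being reclaimed');
  s3.destroy();
  process.exit(0);
});
```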

Luca Mezzalira
22 min
17 Apr, 2023


Video Summary and Transcription

In this talk, key characteristics of AWS Lambda functions are covered, including the service architecture, how a function is composed, and how to optimize Node.js code. The two operational models of Lambda, synchronous and asynchronous invocation, are explained, highlighting the scalability and availability of the service. Features of Lambda functions, such as retries and event source mapping, are discussed, along with the MicroVM lifecycle and the three stages of a Lambda function. Code optimization techniques, including reducing bundle size and using caching options, are explained, and tools like webpack and Lambda Power Tuning are recommended for optimization. Overall, Lambda is a powerful service for handling scalability and traffic spikes while enabling developers to focus on business logic.


1. Introduction to AWS Lambda

Short description:

In this session, we will cover the key characteristics of Lambda functions, how the service architecture works, how to compose a Lambda function in AWS, and how to optimize your Node.js code for Lambda. A Lambda function allows you to focus on creating value for your customers by writing code that maps your business capabilities into production. You only pay for the execution time, making it cost-effective for both production and testing environments. You can provide your code through a ZIP file or a container image, and choose from a range of built-in runtimes or bring your own. A customer even created a custom runtime in COBOL.

Hi, everyone, and welcome to this session on AWS Lambda Under the Hood. I know several of you are interested not only in how to build some code, but also in why you should build the code in a certain way. Whether you are an expert in writing Lambda functions or you are thinking of using serverless and AWS Lambda in your next workload, I think at the end of this talk you will be able to make a conscious decision about why to write your code in a certain way.

So without further ado, let's go ahead. My name is Luca. I'm a principal serverless specialist at AWS. I'm based in London. I'm an international speaker and a book author. In this talk we're going to cover quite a lot of ground. First of all, what is a Lambda function? Because maybe you have an idea of what it is, but let's try to nail down the key characteristics of Lambda functions. Then we move on to understanding how the service architecture works under the hood. We then discuss how we compose a Lambda function in AWS. Then we move into the function lifecycle and how to leverage it to maximize the benefit of your code. And last, but not least, we talk about how to optimize your Node.js code for Lambda.

There is a lot of ground to cover, so let's start. What is a Lambda function? A Lambda function, in a nutshell, is as simple as this: you provide some code, and we take care of provisioning and managing the service for you. So you don't have to think about the networking side or how to orchestrate the scalability and so on and so forth. You just focus on what really matters to you: creating value for your customers and, moreover, writing the Node.js code that maps your business capabilities into production. You pay by the millisecond. So every time you invoke a Lambda function, you pay only for the execution time and nothing more. And that is a great way to think not only about production, but also about, for instance, testing and staging environments that are not used 24/7 like your production environment. There, you just pay for what you use. You don't have to provision containers or virtual machines that sit there 24/7. You can provide your code in two ways. You can give us your code through a ZIP file when the file is up to 250 megabytes, or if it's bigger than that, you can use container images up to 10 gigabytes. In both cases, you can leverage the benefits of AWS Lambda without any problem. We also offer built-in runtimes that are available and managed by us for you, like Java, Go, Node.js, .NET, Python, and many others. But also, if you have a specific use case, you can bring your own runtime, and we operationalize it in the exact same way. A classic example: we had a customer working in finance who needed to use COBOL, but they really fell in love with Lambda, so they created their own custom runtime in COBOL.

2. Lambda Invocation Modes and Architecture

Short description:

Last but not least, Lambda scales up in milliseconds. Lambda has two operational models: synchronous invocation and asynchronous invocation. The code you write runs inside a MicroVM sandbox, which is part of a Worker that operates in an availability zone. AWS takes care of making your code highly available across multiple data centers. In synchronous mode, the API Gateway calls a frontend service in AWS Lambda, which immediately invokes a worker to run your code and return the response. In the case of asynchronous invocation, events are pushed into a message queue and the caller receives an acknowledgement.

Last but not least, Lambda scales up in milliseconds. So based on your traffic pattern, we take care of scaling your Lambda function and answering all the requests that are coming in. Usually, from a developer perspective, what you see is that you write some code, as you can see on the left of this slide, then you upload it to your AWS account, some magic happens, and you start to have an API that is functioning.

The reality is there is way more. So the question now is, have you ever thought about how Lambda works under the hood? Let's take a look into that. First of all, there are two operational models for Lambda functions. The first is synchronous invocation, where, for instance, you have an API Gateway that exposes your API, a request comes in from a client, a Lambda function is triggered to produce the response, and then you serve the response synchronously to your client. The other option is asynchronous invocation, where you have a service that pushes an event into the Lambda service, the Lambda service stores the event inside an internal events queue, and then the Lambda function starts to retrieve these events slowly but steadily and do some work on them. The requester, in this case Amazon EventBridge, for instance, receives an acknowledgement directly and nothing more. And those are the two ways that Lambda invocation works.
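To make the asynchronous path concrete, here is a hedged CDK sketch in TypeScript that wires an EventBridge rule to a Lambda function; the event pattern, names, and asset path are placeholders. EventBridge only receives the acknowledgement, and the event is then delivered from Lambda's internal queue.

```typescript
import { Stack } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class AsyncInvocationStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const fn = new lambda.Function(this, 'OrderHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'), // placeholder path
    });

    // Events matching this pattern are pushed to the Lambda service, which
    // queues them internally and invokes the function asynchronously.
    new events.Rule(this, 'OrderCreatedRule', {
      eventPattern: { source: ['my.shop'], detailType: ['OrderCreated'] }, // placeholder pattern
      targets: [new targets.LambdaFunction(fn)],
    });
  }
}
```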

So in the grand scheme of things, what you see on the left of the slide is multiple services, synchronous or not, sending requests to the AWS Lambda service, which is the big rectangle in this slide. How those requests reach your code on the far right of the slide, where there is a MicroVM sandbox, is an interesting journey. First, I want to highlight what's happening inside your sandbox. The sandbox is where your code is running. Now, if you think about that, your MicroVM, where the code you have written is operationalized by us, is running inside a Worker. And obviously, there isn't just one Worker. There are way more. In AWS, we have multiple availability zones, and as you can see here, you have multiple Workers running inside one availability zone. An availability zone is a data center. So think about what a data center looks like, and that's your availability zone. Every time we create a region in AWS, it is composed of multiple availability zones. Therefore, every time you push code into Lambda, your code is automatically available in multiple data centers. You don't have to do anything. You just focus on which region you want to deploy to and what the business logic is. And we take care not only of operationalizing the code, but also of making it highly available across our infrastructure. So now let's take a deeper look into the invocation modes and how they work inside the architecture. In the synchronous mode, what happens is that, for instance, the API Gateway synchronously calls a frontend service inside the AWS Lambda service, which returns an immediate response: it invokes a specific worker, spins up a MicroVM, and your code starts to run and immediately returns either the response or the error to the client. Instead, the invocation mode for asynchronous Lambda functions is slightly different. In this case, for instance, we have SNS pushing an event as a message to the frontend service; the frontend stores that specific message inside an internal queue, the caller receives an acknowledgement just saying, yes, we took your request into account, and then the message enters the internal queue.
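One simple way to observe the sandbox behaviour described above is a module-scope counter: it survives across warm invocations inside the same MicroVM and resets whenever a new execution environment is created. This is purely an illustration, not a pattern to rely on for state.

```typescript
import { randomUUID } from 'node:crypto';

// Both values live at module scope, so they are created once per MicroVM
// sandbox and reused while that execution environment stays warm.
const sandboxId = randomUUID();
let invocations = 0;

export const handler = async () => {
  invocations += 1;
  // Under concurrent load you will see several sandboxIds at once (one per
  // MicroVM); a counter that resets to 1 means a fresh environment was created.
  return { sandboxId, invocations };
};
```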

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career

You Don’t Know How to SSR
DevOps.js Conf 2024
23 min
A walk-through of the evolution of SSR in the last twelve years. We will cover how techniques changed, typical problems, tools you can use and various solutions, all from the point of view of my personal experience as a consumer and maintainer.
AWS Lambda Performance Tuning
Node Congress 2024
25 min
Have you ever wondered how to get the best out of your Lambda functions? If so, this talk will reveal the behind-the-scenes of one of the most popular serverless services, and you will get step-by-step guidance for optimizing your functions. During this session, I will walk you through the mindset needed to reduce your Lambda functions' execution time. In the example presented, I was able to reduce the execution time by 95% for warm starts and over 50% for cold starts, also improving the transactions per second served by this API.
Advanced GraphQL Architectures: Serverless Event Sourcing and CQRS
React Summit 2023
28 min
GraphQL is a powerful and useful tool, especially popular among frontend developers. It can significantly speed up app development and improve application speed, API discoverability, and documentation. GraphQL is not an excellent fit for simple APIs only - it can power more advanced architectures. The separation between queries and mutations makes GraphQL perfect for event sourcing and Command Query Responsibility Segregation (CQRS). By making your advanced GraphQL app serverless, you get a fully managed, cheap, and extremely powerful architecture.
Demystify the DX for Lambda functions
DevOps.js Conf 2024
30 min
In this session, I share with you how AWS CDK and AWS Toolkit can simplify the developer experience of running serverless workloads in the cloud. A session with no slides, just an IDE and a CLI for deploying an API in the cloud, updating it quickly, and retrieving logs without leaving your favourite IDE!
Building Dapps with React
React Advanced Conference 2021
30 min
Decentralized apps (dApps) are continuing to gain momentum in the industry. These developers are also now some of the highest paid in the entire trade. Building decentralized apps is a paradigm shift that requires a different way of thinking than apps built with traditional centralized infrastructure, tooling, and services – taking into consideration things like game theory, decentralized serverless infrastructure, and cryptoeconomics. As a React developer, I initially had a hard time understanding this entirely new (to me) ecosystem, how everything fit together, and the mental model needed to understand and be a productive full stack developer in this space (and why I would consider it in the first place). In this talk, I'll give a comprehensive overview of the space, how you can get started building these types of applications, and the entire tech stack broken apart then put back together to show how everything works.
Building Real-time Serverless GraphQL APIs on AWS with TypeScript and CDK
React Summit 2020
25 min
CDK (Cloud development kit) enables developers to build cloud infrastructure using popular programming languages like Python, Typescript, or JavaScript. CDK is a next-level abstraction in infrastructure as code, allowing developers who were traditionally unfamiliar with cloud computing to build scalable APIs and web services using their existing skillset, and do so in only a few lines of code.
In this talk, you’ll learn how to use the TypeScript flavor of CDK to build a hyper-scalable real-time API with GraphQL, Lambda, DynamoDB, and AWS AppSync. At the end of the talk, I’ll live code an API from scratch in just a couple of minutes and then test out queries, mutations, and subscriptions.
By the end of the talk, you should have a good understanding of GraphQL, AppSync, and CDK and be ready to build an API in your next project using TypeScript and CDK.

Workshops on related topic

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop (Free)
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
Building Serverless Applications on AWS with TypeScript
Node Congress 2021
245 min
Workshop
Slobodan Stojanović
This workshop teaches you the basics of serverless application development with TypeScript. We'll start with a simple Lambda function, set up the project and the infrastructure as code (AWS CDK), and learn how to organize, test, and debug a more complex serverless application.
Table of contents:
- How to set up a serverless project with TypeScript and CDK
- How to write a testable Lambda function with hexagonal architecture
- How to connect a function to a DynamoDB table
- How to create a serverless API
- How to debug and test a serverless function
- How to organize and grow a serverless application


Materials referred to in the workshop:
https://excalidraw.com/#room=57b84e0df9bdb7ea5675,HYgVepLIpfxrK4EQNclQ9w
Alex DeBrie's DynamoDB guide: https://www.dynamodbguide.com/
Excellent book on DynamoDB: https://www.dynamodbbook.com/
https://slobodan.me/workshops/nodecongress/prerequisites.html
Serverless for React Developers
React Summit 2022
107 min
Workshop (Free)
Tejas Kumar
- Intro to serverless
- Prior Art: Docker, Containers, and Kubernetes
- Activity: Build a Dockerized application and deploy it to a cloud provider
- Analysis: What is good/bad about this approach?
- Why Serverless is Needed/Better
- Activity: Build the same application with serverless
- Analysis: What is good/bad about this approach?
Frontend to the Cloud Made Easy - A ReactJS + AWS Workshop
DevOps.js Conf 2024
59 min
Workshop
Eyal Keren
This workshop enables you to learn how to develop React applications and then deploy them to the cloud (or build them to the console), coupled with a fully abstracted backend and no complex backend configuration, simplifying the building and deployment of frontend and web apps to the cloud.
Building a GraphQL-native serverless backend with Fauna
GraphQL Galaxy 2021
143 min
Workshop (Free)
Rob Sutter
Shadid Haque
Welcome to Fauna! This workshop helps GraphQL developers build performant applications with Fauna that scale to any size userbase. You start with the basics, using only the GraphQL playground in the Fauna dashboard, then build a complete full-stack application with Next.js, adding functionality as you go along.

In the first section, Getting started with Fauna, you learn how Fauna automatically creates queries, mutations, and other resources based on your GraphQL schema. You learn how to accomplish common tasks with GraphQL and how to use the Fauna Query Language (FQL) to perform more advanced tasks.

In the second section, Building with Fauna, you learn how Fauna automatically creates queries, mutations, and other resources based on your GraphQL schema. You learn how to accomplish common tasks with GraphQL and how to use the Fauna Query Language (FQL) to perform more advanced tasks.