The Edge & Databases: Everything Everywhere All at Once


Cloudflare Workers and Edge Functions take the serverless model to the next level by letting developers instantly deploy code across the globe, giving applications exceptional performance, reliability, and scale.

Having server-side applications execute close to where their users are located brings greater performance and drastically improves the user experience of an app. However, due to the limited runtime environment, working with your favorite traditional database is challenging, since it can't be accessed directly from Cloudflare Workers. Prisma solves this problem in multiple ways.

The goal of the talk is to help developers understand what the Edge really means, how it works, and how to work with their favorite traditional database on the Edge.

26 min
14 Apr, 2023

Video Summary and Transcription

This talk discusses what it means to work with databases on the Edge, the challenges serverless introduced for databases, and the additional constraints the Edge adds. It explores solutions such as proxied connections and globally replicated data stores. The talk also highlights the use of Prisma for caching data and the considerations for edge migration. Additionally, it covers the stale-while-revalidate (SWR) caching strategy and the availability of this caching approach beyond the edge.


1. Introduction to Databases on the Edge

Short description:

Hello, everyone. It's good to see you here. In this talk, we'll cover what it means to work with databases on the Edge and how exactly you can use them if you choose to. My name is Alex and I'm a Developer Advocate at Prisma. We have doubled down on our efforts to understand the serverless ecosystem, especially when it comes to working with databases. This talk is split into four sections. We'll walk down memory lane and understand how we got to the edge. I'll talk about the edge, what it is, its limitations, and how you can get around those limitations in relation to databases. I'll show you a demo of a tool that I built. We'll leave with a few final thoughts, and if you have questions, feel free to ask them after the conclusion. How did we get here, really? Application deployment has come a long way. Back in the day, most teams and companies used to deploy their applications on a computer that lived in, say, the basement. Then, in 2006, Amazon EC2, or infrastructure as a service, came to be.

Hello, everyone. It's good to see you here, and also to put faces to the names of the people I usually interact with on Twitter, and hopefully also after the conference.

So, as Ryan Dahl mentioned in the previous talk, when you're submitting a talk, you usually just submit something. And then I came up with a better title, which is: The Edge & Databases: Everything Everywhere All at Once. In this talk, we'll cover what it means to work with databases on the Edge and how exactly you can use them if you choose to.

Yeah. So my name is Alex and I'm a JavaScript... Oh, addict. I'm sorry. Wrong session. I am a Developer Advocate at Prisma, where, as Kevin gracefully mentioned, I work on making working with databases easy for developers. As a company, we have doubled down on our efforts to understand the serverless ecosystem, especially when it comes to working with databases, so that we can provide tools that offer the best experience for you as developers. One of them is Accelerate, which we launched a couple of months ago, and which allows you to cache your data... yeah, your database query responses at the edge, globally.

So this talk will be split into four sections. In the first section, we'll walk down memory lane and understand how we got to the edge. In the second segment, I'll talk about the edge, what it is, its limitations, and how you can get around those limitations in relation to databases. In the third bit, I'll show you a demo of a tool that I built, to show you what working with the solution we have built would be like. And then we'll leave with a few final thoughts, and if you have questions, feel free to ask them after the conclusion. So let's get started. How did we get here, really? Well, application deployment has come a really long way. And sometimes when I talk about these technologies, I may look young, but I have the mind of a 56-year-old. So I hope you also don't feel old. Back in the day, eons ago, even before I started learning how to use computers, most teams and companies used to deploy their applications on a computer that lived in, say, the basement. The basement is just an example. And this worked fine. Developers, teams, and DevOps would be responsible for managing the entire thing. And the application server lived together with your database, which was ideal. And then, in 2006, Amazon EC2, or infrastructure as a service, came to be.

2. Challenges with Serverless and Databases

Short description:

And this shifted responsibility from having to think about the computers and just leave it all to Amazon. We worried less about our servers and focused on building the code that runs our applications. However, working with relational databases posed challenges, such as connection management. When a surge of traffic occurs, multiple connections can overwhelm the database, leading to errors. Function timeouts and cold starts also affect serverless performance. Solutions include setting the connection pool size to one, defining a concurrency limit, or using an external connection pooler like PgBouncer.

And this shifted responsibility: instead of having to think about the computers, you could leave it all to Amazon. A few years later, we started seeing the rise of platform as a service, which shifted more responsibility, where you had to worry less about the computer your application ran on and just leave that to your cloud provider. And then, a few years later again, we saw the rise of functions as a service, with AWS Lambda. Um, I can't think of any other examples at the moment, but it was great because we worried less about our servers and only focused on building the code that runs our applications. And this was great because you could scale your app almost infinitely: in the event there was a surge of traffic in your application, you could scale up to, say, 1,000 instances of functions running at the same time.

And then, in the event there was no traffic, it would scale all the way down to zero, and you would only pay for exactly what you use, instead of for a long-running server. But this came with a few challenges, especially when it came to working with relational databases. In this talk, I'll cover three concrete challenges that developers and teams experienced. And the biggest one of them all was connection management: handling how you manage the connections to your database. So let's take the example of a function, which is denoted by this lovely UFO here. If your function had to interact with the database, and you're using a query builder or a database connector with the default configuration, it would likely open multiple connections. And this is okay if you have a single Lambda that runs once in a long while, because your database would be pretty much relaxed. But the main challenge comes in when you have a surge of traffic and you have an entire swarm of Lambdas.

And not one or two or three or four, but an entire swarm. And in this case, your database is in a state of panic, because it usually allows a limited number of connections at any given point in time. Before you know it, you'll be out of connections, and your functions will start running into errors and failing. And this is not ideal because, sadly, your database is unable to scale up with your functions in serverless. Pause. A few other problems that we still experience with serverless include function timeouts, which don't make it ideal for, say, long-running processes. Most cloud providers set a limit on how long your function can run, which doesn't work if you have, for example, a batch operation that takes an hour to perform. Another challenge that we still run into is cold starts, which affect the latency of your function and don't give an optimal experience for your users. But it's not all that bad, because we found solutions, and that's great, because it just pushes innovation forward. One of them was setting the connection pool size to one when connecting to your database. So instead of having multiple connections, you can, in this case, limit it to only one. However, this is not ideal, because if you have, for example, a batch operation that's inserting a thousand records, those inserts would run sequentially instead of in parallel, which makes it a little slow. That's OK; we have another possibility, which is to define a concurrency limit. As I mentioned, if you have a surge of traffic, your cloud provider usually sets how many Lambdas you can run at any given point in time. So in this case, you can go to your AWS console, for example, and instead of having 100 Lambdas running concurrently, you can limit it to, say, 10 or 20 at any point in time. But the most robust solution of them all is using an external connection pooler like PgBouncer, which is responsible for managing all the connections that go to your database.
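As a minimal sketch of the first and third mitigations combined, here is what capping Prisma's pool size and routing through PgBouncer can look like. The host, credentials, and PgBouncer port are placeholders; `connection_limit` and `pgbouncer` are Prisma connection-string parameters:

```ts
import { PrismaClient } from "@prisma/client";

// Cap each function instance at a single connection and route through
// PgBouncer (conventionally on port 6432; the host here is a placeholder).
// pgbouncer=true tells Prisma the URL points at PgBouncer in transaction mode.
// `db` must match the datasource name in schema.prisma.
const prisma = new PrismaClient({
  datasources: {
    db: {
      url: "postgresql://user:password@db.example.com:6432/mydb?connection_limit=1&pgbouncer=true",
    },
  },
});

export default prisma;
```

For the second mitigation, reserved concurrency can also be set from the CLI rather than the console, e.g. `aws lambda put-function-concurrency --function-name my-fn --reserved-concurrent-executions 20` (the function name is made up).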

3. Challenges of Databases on the Edge

Short description:

So in this case, edge computing allows you to run applications closer to users, providing benefits such as scalability, lower latency, and reduced costs. However, working with databases on the edge presents challenges due to constrained runtime, limited compute resources, and the absence of TCP connections. To overcome these challenges, you can use a proxy connection using HTTP or WebSockets to communicate with the database.

So in this case, you wouldn't run out of connections, because the pooler keeps a ledger of how many connections it can actually hand out at any given point in time. With all this innovation, it brought us to the edge, which is kind of like a new era in a way, because the edge, in this case, is a form of serverless compute that allows you to run your applications as close as possible to your users.

So in this case, once you deploy your application, it would be instantly deployed across the globe. And if your user is somewhere on the East Coast of the U.S., then the data center that is closest to them would be responsible for returning their data. And if you have another user somewhere, say, in East Asia, the data center that's closest to them would respond to them.

Now, this comes with a couple of benefits when deploying to the edge. By the way, when I'm referring to the edge, I'm referring to Cloudflare Workers and Deno edge functions, but you can also accomplish a similar setup using virtual machines; it would just require a little more fiddling on your own. Well, you can scale infinitely, because your application is all over the globe, and this means that you have lower latency. And on Cloudflare Workers, by the way, there are no cold starts, because it runs on a different runtime, which I'll talk about in just a bit. And it reduces the cost of running your application.

However, when it comes to working with databases, there are a few paper cuts that you have to be aware of. One: you have a constrained runtime. When you're running your application on Lambdas, you usually have full access to the entire Node.js API. However, when you're using Cloudflare Workers in particular, you are limited to a V8 isolate, which is a significantly smaller runtime with a limited API surface available to your deployed application. Another challenge that Cloudflare Workers, or the edge, has is limited compute resources. For example, you have less CPU, you have less memory, and you also have a significantly smaller allowance for how big your application can be; I think for Cloudflare Workers it's somewhere around five megabytes. And this gets even harder when it comes to working with databases. Since we have a limited API surface, there are no TCP connections, which makes it difficult to talk to databases. And since most database deployments usually sit in a single region, having them serve your edge functions, which are global, becomes a little difficult because of the latency and the round trips connecting to your database. Another pause. Come on, cheer up, liven up.

However, it's not all that bad. There are solutions, and you can work around these problems. For the problem of working without TCP connections, one way to get around it is by using a proxy connection over either HTTP or WebSockets. This means you have a proxy that sits in between your edge function and your database. Your edge function communicates with the proxy using either HTTP or WebSockets, and the proxy uses TCP to communicate with the database. It then responds to your application with the data you requested.

4. Challenges of Working with Databases on the Edge

Short description:

A couple of examples of these tools include the Neon serverless driver, AWS RDS Proxy, and the Prisma Data Proxy. The second challenge is the latency and the round trips when connecting to your database. One way to get around this is by using a globally replicated data store. However, replicating data introduces challenges in terms of consistency and synchronization. Another solution is to use a specialized data store or cache, storing only the data needed per region. This approach requires tolerance for stale data. Instead of a replicated data store, a specialized data API may be more suitable for working with applications on the edge. The application would be deployed globally, with the database in a central location. Requests from users in a specific region would be forwarded to the database, and the data could be cached at the data center for faster response times. The duration of data serving would depend on the application's tolerance for stale data.

A couple of examples of these tools include the Neon serverless driver, AWS RDS Proxy, and the Prisma Data Proxy.
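As a rough sketch of the proxy pattern, assuming the Neon serverless driver (`@neondatabase/serverless`, which speaks WebSockets to a proxy in front of Postgres), a `DATABASE_URL` binding, and a made-up `posts` table, a Cloudflare Worker could query the database like this:

```ts
import { Pool } from "@neondatabase/serverless";

export default {
  async fetch(request: Request, env: { DATABASE_URL: string }): Promise<Response> {
    // The driver speaks WebSockets to the proxy; the proxy speaks TCP to Postgres.
    const pool = new Pool({ connectionString: env.DATABASE_URL });
    // `posts` is a hypothetical table, purely for illustration.
    const { rows } = await pool.query("SELECT id, title FROM posts LIMIT 10");
    await pool.end();
    return new Response(JSON.stringify(rows), {
      headers: { "content-type": "application/json" },
    });
  },
};
```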

The second challenge that I mentioned is the latency and the round trips when connecting to your database. One way to get around this is by using a globally replicated data store. By this I mean your database provider, or you as a developer, could be responsible for creating replicas of your data and distributing them across the globe. And I know this sounds easy, kind of ideal. But the moment you start talking about replicating your data, we find ourselves in distributed-systems territory. And it's not that easy, because you have to start thinking about consistency. For example, how do you synchronize your data across your replicas, and which replica or node is responsible for handling requests? This is not an easy problem to fix. And as much as companies like Fauna, Cloudflare, Cockroach, and AWS are doing a great job with these tools, we also have to come to terms with the fact that our database is perhaps not likely to move to the edge, and that's OK.

Now, one final solution, or suggestion, because this is just a rough draft and I would like your opinion on it, is using something called a specialized data store, or cache, meaning you only store the data that your users need per region. And this comes with a disclaimer: your application, or at least some parts of it, has to have some tolerance for stale data. And that's OK. As Lee Robinson wrote in his 2023 article on the state of databases for the edge, perhaps what we are looking for when it comes to working with applications on the edge is not a replicated data store, but a specialized data API.

So here's what it would look like. We would have your application deployed all over the globe and your database sitting somewhere on the East Coast, because that's where all the AWS us-east-1 jokes come from. Then, if a user somewhere in US East makes a request, the first request, of course, would be forwarded to your database, and your database would respond. The first request would take a while. However, the data that the user requested could be cached at that data center, and that data would then be served to the user. In the event the user makes a second request, the response time for that particular function would be significantly faster, and the same data would be served. Now, how long you serve this data is completely up to you. It could be 10 seconds, 60 seconds, two months. But again, as I mentioned before, it's completely based on how much tolerance your application has for stale data. And stale data is not wrong. It's also good, because with stale data you're able to reduce the load on your database. So this is an example of what it would look like in a Cloudflare Worker.

5. Using Prisma and Caching Data

Short description:

In this case, I'm using Prisma, a tool for querying and interacting with databases. I'm requesting posts from my database and setting a cache with a time to live of 60 seconds. The SWR (stale-while-revalidate) strategy allows serving stale data for 60 seconds before refetching and revalidating. I built a demo app, a beautiful blog, where data from Cloudflare is fetched. The data center that responded is based in Berlin; its code is FRA. The cache state affects response time, and subsequent requests are significantly faster. The code snippet shows a query for published posts, with a time to live of 10 minutes and a revalidation window of 60 seconds. The demo is a simple application using Remix, and if you're interested, feel free to reach out.

Sorry for the misquoting. So in this case, I'm using Prisma, which is a tool for querying and interacting with databases. I am requesting posts from my database and I'm setting a cache. Sorry, the red can't be seen, but I'm highlighting the cache strategy, and the feed that I'm requesting has a time to live of 60 seconds. This means the data will be stored at the edge for 60 seconds. And SWR stands for stale-while-revalidate, meaning you'll be able to serve the stale data for 60 seconds before your worker is responsible for refetching the data and revalidating it for you. So yeah, that's what it would look like.
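A minimal sketch of such a query, assuming Prisma's Accelerate client extension and a hypothetical Post model like the one in the demo:

```ts
import { PrismaClient } from "@prisma/client";
import { withAccelerate } from "@prisma/extension-accelerate";

const prisma = new PrismaClient().$extends(withAccelerate());

// Cache the response at the edge for 60 seconds (ttl), then keep serving
// the stale response for up to another 60 seconds (swr) while Accelerate
// revalidates it against the database in the background.
const feed = await prisma.post.findMany({
  cacheStrategy: { ttl: 60, swr: 60 },
});
```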

So, a quick demo of an app that I built. This is a beautiful blog. And if I refresh this page, hoping that the Wi-Fi is working... Yes. So we have some data from Cloudflare, and we can see that the data center that responded to our request is based in Berlin, and the code for the data center is FRA. FRA is an IATA code; it's like the airport codes, but for data centers. And for our particular database request, you can see that the data center that responded is the same as our data center. In this case, it was a cache miss, and that's perhaps why the refresh took a while. The data was last updated, in GMT, about two hours ago. And if I refresh, we can see the response was significantly faster, and the cache status is now TTL, meaning it has already been cached, and any other subsequent request will be significantly faster. The same goes for the individual posts. You can see that the transition to the different pages is significantly slower; those are cache misses, and the same data center, with the code FRA, responded to our request.

So what does this look like in code? I'll only show you one query. In this case, I have a query for all posts, and I'm filtering for posts that have been published. I set a time to live of about 10 minutes, and the stale-while-revalidate is 60 seconds. In this case, I'm also getting the Cloudflare information from the request at request.cf; sadly, that's not typed in Remix, by the way. This is a fairly simple application using Remix. And that's all for the demo. In case this is the kind of thing you'd be interested in trying out, just feel free to reach out and talk to me or Flo over there.
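A rough sketch of what that loader might look like, assuming the same Accelerate-extended client as above (exported from a hypothetical `~/db.server` module) and Remix's Cloudflare adapter; since `request.cf` is untyped in Remix, it's read through a cast:

```ts
import { json, type LoaderArgs } from "@remix-run/cloudflare";
import { prisma } from "~/db.server"; // hypothetical module exporting the extended client

export async function loader({ request }: LoaderArgs) {
  // request.cf is Cloudflare-specific and not typed in Remix, hence the cast.
  // cf.colo is the IATA-style code of the data center that handled the request.
  const cf = (request as any).cf;

  const posts = await prisma.post.findMany({
    where: { published: true },
    // 10-minute TTL with a 60-second stale-while-revalidate window, as in the demo.
    cacheStrategy: { ttl: 600, swr: 60 },
  });

  return json({ posts, datacenter: cf?.colo });
}
```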

6. Considerations for Edge Migration

Short description:

Should you migrate to the edge? It depends. Consider the data needs of your application. Some parts can tolerate stale data. Remember, these are just tools, and you don't have to migrate if your current setup is working perfectly. Thank you for attending the talk.

Say hi, Flo. So, in conclusion, the big question is: should you migrate to the edge? Well, it depends, and I don't have the answer for you. It comes with a few considerations. If your application is dependent on a central data store, the edge is probably not that great, because of latency: multiple requests, one after another, in a single function will take a while. Perhaps what we should do is understand the data needs of our application, because some parts of an application, as I mentioned, can tolerate stale data, and that's OK. And finally, let's just remember that all these are tools, and you don't have to migrate from whatever you're using if it's running perfectly. So thank you, by the way. I hope you enjoyed the talk, and I'd like to give a shout-out to a former colleague, Matina Velander, who let me use the beautiful illustrations on my slides. Thank you. Thank you ever so much.

QnA

Caching and SWR

Short description:

At the moment, cache strategy for Prisma raw queries is not yet available, but it's in our plan. When users work from different locations, the closest data center will respond to their requests. If the data is not cached in that data center, it will be forwarded to the database and cached there. SWR specifies how long stale data can be served before re-fetching from the database.

So your first question is: can I use cacheStrategy with Prisma raw queries? At the moment, not yet, but it's in our plan to include caching for your raw queries.

Yes. Great. Next question. If the data is stored or cached next to the users, what happens when users work within one account or domain from different locations? What happens when people move between locations? Will there end up being an inconsistency?

So if you work from different locations... well, one, we're relying on Cloudflare for Accelerate in this case, and the data center closest to the user will be the one that responds to the user's request. If the data is not cached in that particular data center, the request will be forwarded to the database, and the data will then be cached in the data center that received the request. So even if users move around, what matters is always whether the data is cached in the data center that's closest to them.

Yes. Cool. Next question. How is SWR different from... there's a TTL. And I don't know if this says TTL. Yes. Thank you so much. How is SWR different from TTL on the edge server? It's a tough one. So TTL just specifies how long you should cache the data, and then stale-while-revalidate specifies how long you can serve the stale data before it goes and re-fetches the data again from your database. I hope that's clear. I'm not sure I fully understand. I mean, that's totally cool. Perhaps this is something we can go and have a chat about in the break. Can you maybe describe it in another way, if there's another way you have up your sleeve now, or would you rather we do this in the break and you have a moment to think about this one? So I can give it one more try. If you have decided to cache your data for 60 seconds, the TTL applies first: the first request will be forwarded to the database, and the response will then be cached. Now, once the 60 seconds are up, the SWR comes into play, and it specifies how long you can continue serving the stale data before it's refetched again from the database. So, yeah. So they play together, but at different points in that journey of storing and serving up potentially stale data. Yes. Awesome, thank you.
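To make the interplay concrete, here is a hypothetical timeline for a query cached with `ttl: 60` and `swr: 30` (the numbers are made up for illustration, and `prisma` is the Accelerate-extended client from the earlier sketch):

```ts
const posts = await prisma.post.findMany({
  cacheStrategy: { ttl: 60, swr: 30 },
});

// t = 0s       first request: cache miss, forwarded to the database, response cached
// t < 60s      TTL window: served fresh from the cache
// 60s to 90s   SWR window: the stale response is served immediately while the
//              cache revalidates against the database in the background
// t > 90s      the entry has expired: the next request goes to the database again
```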

Edge Solutions and Caching

Short description:

Cloudflare Workers is not the only edge solution that supports this caching strategy. The edge is just one of the platforms you can use Accelerate with, along with serverless and traditional servers. You can also bring your own cloud provider and use this strategy wherever you deploy applications.

We seem to have quite a few quick-fire kind of questions. Cool. Let's go with this question next. Is Cloudflare Workers the only edge solution that would support this caching strategy? No. The edge is just one of the platforms you can use Accelerate with. You can still use it with serverless, or when working with a traditional server as well. So it's not the only platform you can use cache strategies with. Cool, and that means you can bring your own cloud provider to this. You can use this strategy with your own cloud provider too, wherever you deploy applications. Yes.
