Database Access on the Edge with Cloudflare Workers & Prisma


Edge functions are pushing the limits of serverless computing – but with new tools come new challenges. Due to their runtime limitations, edge functions can't talk directly to popular databases like PostgreSQL and MySQL. In this talk, you will learn how you can connect to and interact with your database from Cloudflare Workers using the Prisma Data Proxy.

You can check the slides for Alex's talk here

31 min
17 Feb, 2022

Video Summary and Transcription

This Talk discusses database access on the edge with Cloudflare Workers and the challenges of serverless platforms. It explores solutions for database access, including Cloudflare-specific solutions and the Prisma Data Proxy. The Prisma toolkit and a demo are showcased, demonstrating how to convert an application to use a database. The process of setting up the Prisma Data Platform and deploying the application to Cloudflare Workers is explained. The Talk concludes with insights on database usage and the differences between serverless, CDNs, and the Edge.


1. Introduction to Cloudflare Workers

Short description:

Hello, friends. Today I'll be talking about database access on the edge with Cloudflare Workers. We'll discuss what it is, how it differs from traditional serverless platforms, and the growing pains of serverless. Cloudflare Workers run your application code much closer to your user, which could even be in the same city. Cloudflare Workers are significantly lighter because the platform is built on top of the V8 engine. Cloudflare Workers also provide high performance, with shorter start-up times and lower latency. However, connecting to databases is still a challenge from Cloudflare Workers due to the limitations of the V8-based runtime.

Hello, friends. Today I'll be talking about database access on the edge with Cloudflare Workers. My name is Alex, and I'm a developer advocate at Prisma. Feel free to reach out to me after the talk if you have any questions, or just to connect.

This talk will be divided into four sections. The first part will be about Cloudflare Workers, or the edge. We'll discuss what it is, how it differs from traditional serverless platforms, and the growing pains of serverless. The second section will take a look at Prisma and the Prisma Data Proxy, exploring how the Data Proxy helps you interact with your database on the edge. The third section will be the exciting bit, where we will have a demo. We'll take an application that uses a JSON file for storing and retrieving quotes, move the data over to a database, and deploy the app to the edge. Finally, we'll have a recap of the talk at the end. For now, let's jump into it.

So what is the edge? It seems to be a buzzword that's used a lot, but what does it mean? The edge is a form of serverless compute that's distributed. In the context of Cloudflare Workers, your code is deployed to Cloudflare's network of data centers.

Now that you understand what the edge is and how Cloudflare works, let's take a look at how Cloudflare Workers differ from traditional serverless. As I mentioned before, it's distributed. Traditional serverless functions are deployed to specific data centers or regions. In contrast, Cloudflare Workers run your application code much closer to your user, which could even be in the same city. Cloudflare Workers are also significantly lighter because the platform is built on top of the V8 engine. It's the same engine that powers Chromium-based browsers such as Chrome and Edge, and also the Node.js runtime. So whenever your Cloudflare Worker is invoked, instead of provisioning an entire container for your serverless function, the platform spins up a V8 isolate, which is significantly smaller. And if you've heard about the cold-start problem on serverless, then you understand why this is an exciting technology.

Cloudflare Workers also provide high performance, building on the first two points. Because the environment is significantly lighter, start-up time is shorter, or possibly nonexistent. You also experience lower latency whenever you invoke your function, because it is served from the data center closest to you. Now, I know this all sounds delightful, but Cloudflare Workers still suffer from the same problems that traditional serverless functions do when it comes to databases. Connecting to databases is still a challenge from Cloudflare Workers. This is because the Workers runtime supports only HTTP-based connections, while traditional databases depend on long-lived TCP connections.

2. Challenges and Solutions for Database Access

Short description:

Different databases have different implementations for connecting to them, which can be chaotic. Database connections in serverless environments can easily run out, causing requests to fail. Possible solutions include Cloudflare-specific solutions, using PostgREST to expose an existing Postgres database as a RESTful API, modifying the Postgres and MySQL drivers to use HTTP as Deno has done, and using the Prisma Data Proxy.

This is a bummer. Different databases also implement different protocols and interfaces for connecting to them, which adds to the chaos. Database connections are also stateful, while serverless environments are stateless, meaning that you can easily run out of database connections because every worker creates a new connection pool. And when you run out of connections, requests in your application start failing.

So how can you solve this problem? Well, you have a number of options available to you, just like in the rest of the JavaScript ecosystem. One option would be to go with Cloudflare-specific solutions such as Workers KV and Durable Objects. The second option, if you have an existing database, is PostgREST, which exposes your existing Postgres database as a RESTful API. Another option would be to take the Postgres and MySQL drivers and modify them to use HTTP, as Deno has done, I believe. And finally, the gist of this talk: the Prisma Data Proxy. This talk will focus on that option, but feel free to try out the others for yourself as well.

So Prisma is a next-generation type-safe ORM for Node.js that works with both JavaScript and TypeScript. It supports a number of databases such as SQLite, MySQL, PostgreSQL, and SQL Server, as well as CockroachDB and MongoDB, which are still in preview. The core benefits of Prisma are that it boosts productivity by letting developers query data in natural and familiar ways, and by providing a human-readable data model. Prisma also increases developer confidence with type safety, auto-completion, and a robust API for interacting with your database. So even if you don't use TypeScript in your project, most editors will provide auto-completion for your database queries when you're using Prisma.
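As a hedged illustration of what those queries look like, here is an in-memory stand-in with the same call shape as a generated Prisma Client. The real client is produced by `npx prisma generate` from your schema; the `user` model and its data below are hypothetical:

```typescript
type User = { id: number; email: string; name: string };

const users: User[] = [
  { id: 1, email: "alex@prisma.io", name: "Alex" },
  { id: 2, email: "ada@example.com", name: "Ada" },
];

// Hand-written stand-in mimicking the generated client's API shape.
const prisma = {
  user: {
    findMany: async (args?: {
      where?: { email?: { endsWith: string } };
    }): Promise<User[]> => {
      const suffix = args?.where?.email?.endsWith;
      return suffix ? users.filter((u) => u.email.endsWith(suffix)) : users;
    },
  },
};

// With the real client, editors auto-complete `where`, `email`, `endsWith`,
// and the fields of the returned rows.
prisma.user
  .findMany({ where: { email: { endsWith: "@prisma.io" } } })
  .then((rows) => console.log(rows.map((r) => r.name)));
```

The same filter shape works across all supported databases, which is part of what "querying in natural and familiar ways" means in practice.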

3. Prisma Toolkit and Demo

Short description:

The Prisma toolkit comprises three tools: Prisma Client, Prisma Migrate, and Prisma Studio. The Prisma Data Proxy acts as an intermediary between your application and your database, managing connection pools and enabling you to bring the benefits of your relational database and Prisma tooling to the edge. The demo involves converting an application from using a JSON file to a database, migrating the data, and using Prisma to query it. To set up the project, install the Prisma CLI and client, and initialize the project with the 'npx prisma init' command.

The Prisma toolkit comprises three tools. Prisma Client: the type-safe, auto-generated client. Prisma Migrate: the declarative data modeling and migration tool. It generates fully customizable SQL database schema migrations based on your Prisma schema. And the third component is Prisma Studio: a modern and intuitive graphical user interface for your database.

And now this brings us to the Prisma Data Proxy. It's an intermediary between your application or serverless function and your database. It helps you manage connection pools by ensuring that you don't exhaust your connection limits, which would lead to failed requests. So when you query your database, Prisma Client connects to your database via the Data Proxy, over HTTP.

Now, this is exciting because it enables you to bring all the good parts of your trusted relational database and the Prisma developer tooling to the edge. You can still adopt Cloudflare-specific data stores, like Workers KV and Durable Objects, for specific cases where it makes sense, while benefiting from querying your central database for most of your app. Now this brings us to the most exciting bit of the talk, which is the demo. We'll take an application that uses a JSON file to retrieve quotes, create quotes, and get quotes by ID, migrate the data to a database, and convert the existing routes to use Prisma to query the data.

Now let's jump into the demo. I've already set up the base template for the project. It has three routes in index.ts: the router.get handler here for fetching all quotes, one for getting quotes by ID, and a POST handler which doesn't really create a new quote, but just echoes back what you sent. The project is also using Webpack, with a minimal Webpack configuration to get this running. I'll go ahead and uncomment the Prisma alias, which we'll use in a later section of the demo. The Prisma alias helps resolve the Prisma Client dependency correctly when you're querying a database. So let's get into it. The first step is to set up our project by adding Prisma. I'll go ahead and install the Prisma CLI as a dev dependency and then install Prisma Client as a regular dependency. I'll wait for that to load for a moment. Now that we've installed the Prisma Client dependency, I'll go ahead and initialize our project to start using Prisma by running the npx prisma init command. What this command does is create a new directory in our project called prisma, with a schema.prisma file.
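The setup steps described above, collected as a sketch (npm shown; use your package manager of choice):

```sh
npm install --save-dev prisma   # Prisma CLI as a dev dependency
npm install @prisma/client      # Prisma Client as a regular dependency
npx prisma init                 # creates prisma/schema.prisma and a .env file
```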

4. Using schema.prisma and Seeding the Database

Short description:

This section covers the usage of the schema.prisma file to define data models, enabling the data proxy feature, modeling the quote data, applying the schema to the database, and seeding the database with data from the data.json file.

This schema.prisma file will be used to define our data models, which will be applied to our database. The command also creates a .env file, where we'll store our environment variables for the project.

So for now, I'll go ahead and paste in the connection string for my database. Now that's done, I'll go over to the schema.prisma file to start using the Data Proxy in our project. You can enable it by adding the dataProxy preview feature flag in the generator block of your schema.prisma file. That's done.
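The generator block with the preview flag might look like this. The flag name is as it was during the Data Proxy's early access; preview flags change over time, so check the current Prisma docs:

```prisma
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["dataProxy"]
}
```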

Now let's go ahead and model our data. As we can see in data.json, a quote contains three properties: the ID, the content, and the author. I got this data from an API on the internet. It will be in the repository that I'll share before the end of this talk.

So we'll create a new model, Quote, which will have an id that's a string and the ID value, generated by default. We also have the content field, which will be of type String. We also want the author, a String, and if you'd like, you can add a createdAt field to your table, a timestamp of when the record was added to the database. We want this to be auto-generated using the now function.
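Putting the fields together, the model described above might look like this in schema.prisma. The ID generator is an assumption: the talk only says the ID is generated by default, which could be `uuid()` or `cuid()`:

```prisma
model Quote {
  id        String   @id @default(uuid())
  content   String
  author    String
  createdAt DateTime @default(now())
}
```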

Now that we've modeled our data, let's apply the schema we just created to our database. I'll run the npx prisma db push command to apply what we have to our database. Since the database had existing data, I'll go ahead and drop it.

The next step is to seed the database with the quotes that we have in the data.json file. Inside the prisma folder, next to schema.prisma, I'll add a seed.ts file. In here, I'll paste in some code, which I'll explain in a moment. What it does is import Prisma Client from our node modules, and also import the quotes from the JSON file. Since it's an array of roughly 500 quotes, we loop through each and every quote and insert a record into our database inside this block.
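A minimal sketch of the seeding loop. In the real seed.ts the quotes come from data.json and the sink is `prisma.quote.create`; both are stand-ins here so the loop is self-contained and runnable without a database:

```typescript
// Shape of one record in data.json, minus the generated id.
type QuoteData = { content: string; author: string };

// Stand-in for the ~500 quotes imported from data.json in the talk.
const quotes: QuoteData[] = [
  { content: "First quote", author: "A" },
  { content: "Second quote", author: "B" },
];

// Loop through every quote and insert a record, one at a time.
async function seed(create: (data: QuoteData) => Promise<void>): Promise<number> {
  let inserted = 0;
  for (const quote of quotes) {
    await create(quote); // in the real script: prisma.quote.create({ data: quote })
    inserted += 1;
  }
  return inserted;
}

// Usage with a stand-in sink that logs instead of writing to a database:
seed(async (q) => console.log(`inserting "${q.content}"`)).then((n) =>
  console.log(`seeded ${n} quotes`)
);
```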

The next step is to execute this file so that we can actually seed the data. I'll use ts-node to execute our TypeScript file: npx ts-node prisma/seed.ts.

5. Setting up Prisma Data Platform

Short description:

We've initialized our project with Prisma, migrated data from the JSON file to the database, and set up the project on the Prisma Data Platform. We've imported the repository, added the connection string, and obtained the data proxy API key. Now, we need to update the .env file with the new connection string.

Give it a moment. Let's see. And it's done. So so far we've initialized our project with Prisma and we've migrated our existing data from the JSON file onto the database. So the next step is to set up our projects on the Prisma Data Platform so that we can get an API key or a connection string that will allow us to communicate with our database using HTTP.

So the first step: I'll go over to GitHub and create a new repository. I'll call it Prisma CFW, short for Cloudflare Workers. I'll make it private and create it. Locally, I'll commit my files to GitHub first. I'll start by initializing a git repository, then git add, and then we'll commit all our files. Okay, now that's done, we'll go ahead and import our project on the Prisma Data Platform, but first let's check whether the changes were synced.

Oh, I missed a step: we need to add an origin first before we can actually push to GitHub. So I'll paste in the command from GitHub. The changes have been pushed to GitHub, so if I refresh this, we should be able to see the changes we've just made. Cool, that's good progress. The next step is to import the project on the Prisma Data Platform. I'll go ahead and click New Project. I'll give it a name, Prisma CFW. We'll import the repository, Prisma CFW. That's all good. Next, I'll continue; I'll be using my own database, and I'll paste in my connection string as well. The Prisma Data Proxy will be generated once we complete setting up the project on the Data Platform. Okay, that's done. You're given two connection strings once the project finishes setting up. The first one is the Data Proxy API key, which I'll copy. The second one is the connection string to your database. I'm done, so I'll click Done here, and we're all good. Let's go back to our project. In the .env file, I'll create a duplicate of the DATABASE_URL environment variable and paste in the one we just created, commenting out the first one.
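The resulting .env might look something like this. The host and key are placeholders, and the `prisma://` scheme is the one the Data Proxy used at the time; use whatever values the Data Platform gives you:

```sh
# Direct connection string, kept for local schema/seed commands:
# DATABASE_URL="postgresql://user:password@host:5432/mydb"

# Data Proxy connection string used by the deployed Worker:
DATABASE_URL="prisma://<data-proxy-host>/?api_key=<your-api-key>"
```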

6. Refactoring and Deployment

Short description:

Now it's time to refactor our application. We replace the data.json import with Prisma Client and refactor the routes to query, get, and create quotes from the database. After refactoring, we test the application using the wrangler dev command and Postman. Finally, we deploy the application to Cloudflare Workers with the wrangler publish command.

Cool, now that's done. It's time to refactor our application. Back in our index.ts file, we'll replace our data.json import with an import of Prisma Client, and then create a new instance of the Prisma Client. Let's first refactor our first route. We'll make a query to get all the quotes from our database: const quotes = await prisma.quote.findMany(). That's done. Let's refactor getting a quote by ID: const result = await prisma.quote.findUnique(), and then we need to provide a where clause with the ID, which comes from the params value over here. Lastly, we need to be able to create a new record. So over here we'll await prisma.quote.create(), and we need to pass in the data property, which requires the content and author values; these can come from the request body. Now that's done, the next step is to regenerate our Prisma Client so that it will be able to use the Prisma Data Proxy.
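A sketch of the three refactored handlers. `QuoteDb` stands in for the `prisma.quote` delegate so the example runs without a database; the method and argument shapes (`findMany`, `findUnique({ where })`, `create({ data })`) follow what the talk describes:

```typescript
type Quote = { id: string; content: string; author: string };

// The subset of the prisma.quote delegate that the routes use.
interface QuoteDb {
  findMany(): Promise<Quote[]>;
  findUnique(args: { where: { id: string } }): Promise<Quote | null>;
  create(args: { data: { content: string; author: string } }): Promise<Quote>;
}

// GET /quotes
const getQuotes = (db: QuoteDb) => db.findMany();
// GET /quotes/:id -- the id comes from the route params
const getQuoteById = (db: QuoteDb, id: string) => db.findUnique({ where: { id } });
// POST /quotes -- content and author come from the request body
const createQuote = (db: QuoteDb, body: { content: string; author: string }) =>
  db.create({ data: body });

// In-memory stand-in so the sketch is runnable:
const rows: Quote[] = [];
const db: QuoteDb = {
  findMany: async () => rows,
  findUnique: async ({ where }) => rows.find((q) => q.id === where.id) ?? null,
  create: async ({ data }) => {
    const quote = { id: String(rows.length + 1), ...data };
    rows.push(quote);
    return quote;
  },
};

createQuote(db, { content: "The edge rhymes with the ledge", author: "Alex" })
  .then((q) => getQuoteById(db, q.id))
  .then((q) => console.log(q?.content));
```

With the real client, `db` is simply `prisma.quote`, and the handlers plug into the router's callbacks.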

Now that we're done refactoring the application, let's go ahead and test it by running the wrangler dev command. The application is running, and it's ready. I'll switch over to Postman and increase my font. I've already pasted in the request to fetch all the quotes. This works. The first time you make a request, it's slow, but over time the request time drops. Let's copy an ID, and over here in the get-by-ID request, replace this with our new ID and request it. And yes, it works. Let's create a new quote: I said that the edge rhymes with the ledge, because I think that's a cool quote. And boom, that works; we got our response. Congratulations, we've finally migrated the application. Now, the last step is to deploy our application to Cloudflare Workers by running the wrangler publish command.
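The commands from this step, collected in one place (the `npx` prefix is an assumption; the talk invokes the tools directly):

```sh
npx prisma generate    # regenerate Prisma Client so it targets the Data Proxy
npx wrangler dev       # run the Worker locally for testing
npx wrangler publish   # deploy the Worker to Cloudflare's network
```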

7. Final Thoughts and Recap

Short description:

The project is only 97 kilobytes and deploys in a relatively short time. We've successfully deployed it. The Edge is still in its relatively early stages, and this workflow provides one of the simplest setups to get up and running. The advantage of using Prisma is that the Data Proxy is deployed close to your centralized database. We covered the challenges and advantages of working with Cloudflare Workers, what Prisma is and what the Prisma Data Proxy is, and how to add database access to an application and deploy it to Cloudflare Workers. Thank you for listening.

As I mentioned before, you can see here that the project is only 97 kilobytes and deploys in a relatively short time. So I'll go ahead and copy this URL, go back to Postman, and replace the URL with the one that's in production. And boom, the application works. We've successfully deployed it.

Now, let's go back to the slides. If you'd like to build the same application or refactor it yourself, it's available on GitHub; you can clone it. And if you'd like to play around with the application, the live demo is deployed and available on Cloudflare Workers. As you can see, the whole experience of building and migrating felt relatively fast: in about 10 minutes, we were able to complete the whole migration process.

Now, a few final thoughts. The Edge is still relatively in its early stages, and this workflow provides one of the simplest setups to get up and running. Ideally, your Data Proxy, given that it's deployed to a specific region, should be as close as possible to your database, with your application deployed as close as possible to your users. The Edge is a new paradigm that makes you reason differently about data, because data is distributed. And when it comes to relational databases, distributing that data is a hard computer-science problem, because maintaining consistency across the globe, if you deploy to multiple regions, is a challenge. The advantage of using Prisma is that the Data Proxy is deployed close to your centralized database, so you can use your normal development workflows to build your application. As a final recap, this talk covered the challenges that exist when working with Cloudflare Workers and serverless platforms. We also covered the advantages that Cloudflare Workers have over other platforms. We discussed what Prisma is, what the Prisma Data Proxy is, and the problem it solves. And finally, we added database access to an application and deployed it to Cloudflare Workers. Thank you for listening. I hope you found the talk useful, and feel free to reach out to me if you have any questions. This is an exciting time to be working on the edge.

8. Database Usage and Expectations

Short description:

Alex asks about the database used with serverless apps. 44% answered PostgreSQL, though the pronunciation varies. The speaker expected a mix of Postgres and MySQL databases due to the popularity of PlanetScale.

So Alex asks, what database are you using with your serverless apps? And we see 44% answered PostgreSQL. I don't even know if that's how you pronounce it; I never said it out loud. Do I say it correctly, Alex? I'm not sure; it's PostgreSQL or Postgres, depending on how you prefer to say it. But mostly probably Postgres, yeah. Oh, just Postgres, okay. Yeah. What do you think about this result? Is this what you were expecting? Well, it's changing, I'm not completely sure. Given that PlanetScale is new, a lot of folks might be moving over to it, and that's more of a MySQL database. But yeah, it's close to what I was expecting. Yeah, nice.


Audience Questions on Prisma Proxy and Edge

Short description:

CCCrish asked about the proximity of the Prisma proxy to the edge and the real database. The speaker explained that while the Data Proxy is deployed to a specific region, the ideal scenario is to deploy it as close as possible to the database. The speaker also mentioned the possibility of multiple data centers in countries like Germany. Another question was about migrating a larger scale application to use Prisma Proxy, which is currently in early access. The speaker advised against using it in production but encouraged experimentation and feedback. Lastly, the speaker explained the differences between serverless, CDN, and the Edge, highlighting that CDNs are a network of data centers deployed globally, while the Edge allows for computation logic deployment further from users.

Okay, so let's jump into the audience questions. Let me hop into the Discord server. CCCrish is asking: from what I understood, the Prisma proxy is near the edge, right? But what about the real database? It's not exactly near the edge. Right now the Data Proxy is deployed to a specific region, and you'd rather deploy it as close as you can to your database, so it's not completely edge-ready. The perfect use case would be if your users are, for example, in a specific city. But hopefully, as the edge matures, we can improve the Data Proxy as well as database access on the edge in general.

So, a little follow-up question from me. I noticed you said, when the edge improves, and you mentioned cities. Do you know if there are countries that actually have multiple edge locations? I can imagine a country like Germany, where you'd have an edge server from the same company in, say, Hamburg, one in Berlin, and one in Frankfurt, something like that. Is that a thing? Or is there just one German server? Yes, I'd say yes. I think companies like Cloudflare have their data centers all over the globe, and I'm sure there are probably multiple data centers in Germany, for example. So, yes, it's possible. That's super cool. Yeah, things are speeding up. Nice, thanks.

The next question is from Hale to the Wood: how would you suggest going about migrating a larger-scale application towards something like the Prisma Data Proxy? For example, one that is currently using serverless functions talking directly to DynamoDB. Prisma doesn't currently support DynamoDB, so this would be a bigger challenge. I'm not sure I have any best practices, but right now the Prisma Data Proxy is in early access, so we wouldn't recommend using it in production. However, you can still use it to experiment and provide feedback, so that we can prepare it to be generally available to everyone. Okay. Yes.

Then, another question: what's the difference between serverless, a CDN, and the Edge? Good question. These are similar but still different concepts. Serverless is a deployment architecture where you don't really have to worry about your infrastructure at all, or about managing any servers involved when you deploy applications. CDNs, or content delivery networks, are networks of data centers deployed all over the globe, for example Netlify and Cloudflare, where you can serve your application as close as possible to your user. And the Edge takes this a step further: now you can run computation logic, not just static assets, as close as possible to your users. I hope that's a good enough explanation for our viewers. Yeah, so to paraphrase, a CDN is similar to an edge network, but CDNs are mostly used to serve assets or deploy applications.

Database Usage and Conclusion

Short description:

We're already using it with the application when you deploy it to Netlify, and they have data centers all over. It's mostly been used to serve static assets. For now, we don't have any questions. If you want to continue the discussion about Prisma or database access on the Edge with Cloudflare Workers and Prisma, you can do so on spatial chat.

We're already using it with the application when you deploy it to Netlify, and they have data centers all over. It's mostly been used to serve static assets, for example. I could be wrong; feel free to correct me on that.

Yes. No, that's also what I understand. But there's always smarter people. If you want to correct us, do so on the Discord server.

For now, we don't have any questions. If you have any, be sure to type them, and to give you some time, I will ask Alex about the Witcher. Alex, we were just discussing: can you give me your best impression of Henry Cavill's growl? My throat is a little sore right now, so I can just say... I think that's pretty good. I hope so. You sound properly Witcher-level annoyed. Of course. Yes. Yeah.

All right. There are no questions coming in, and I see no one's typing at the moment, so we are going to cut to the break a little early. I will release you. If you want to talk to Alex, you can do so on spatial chat; Alex is going to jump there now. So, if you want to continue the discussion about Prisma or database access on the Edge with Cloudflare Workers and Prisma, you can do so on spatial chat.

So, Alex, thanks a lot for being with us here today and enjoy the rest of your day. Thank you for having me too. Bye bye. Yes.
