Learn how to generate instant GraphQL APIs using data source connectors (GraphQL and non-GraphQL sources), extend and join them with custom resolvers, and deploy to the edge without leaving the code editor.
Build and Deploy Instant GraphQL APIs to the Edge
AI Generated Video Summary
GraphBase is a platform that provides a GraphQL API for different data sources. The Workshop covers connecting to APIs, creating a new project, and building a GraphQL backend. It also explores extending APIs, transforming with Open API spec, using database adapters, and securing GraphQL queries. Caching and query scope are discussed, as well as data availability, cost, and authentication configuration.
1. Introduction to GraphBase
Thank you very much. Great to be here. I think I'm all set up; I was connecting a few displays to help us get going today.
So this first slide I opened up talks about today's world of different APIs. We have so many different SDKs and APIs, and honestly, the documentation for all of these varies so much — the developer experience is very, very different. These are just a few of the logos for a few of the products that we use today, from databases, to CMSs and content management systems, to commerce APIs; there is so much. We spend so long connecting all of these things and learning the different SDKs and APIs that it's not the best experience, I don't think. The other thing, closely attached to this, is that those data sources are often in one region. And if — a very big if — the provider that you're using gives you the flexibility to push whatever service you've got to different regions around the world, you often have to pay for that flexibility. The more services you've got and the more regions you want to be in, the more that's going to cost you, and that is not a great place to be in. So that's what we're trying to resolve at Grafbase. We are a GraphQL company, and we want to solve this issue. At a very high level, when you're using something like Grafbase, you get really, really fast response times instead of going back to the different origin regions. By default, we put the GraphQL API for all of your different data sources at the edge, at over 300 locations around the world. So when your users make a request, they're most likely going to get a response back in sub 50 milliseconds.
2. Connecting to APIs and Creating a New Project
In this workshop, we will cover connecting to existing APIs, including GraphQL, OpenAPI, MongoDB, and potentially Postgres. We will also explore deployment options and making queries to different APIs. To begin, we will create a new project using npx grafbase init, providing a project name and selecting TypeScript as the configuration. Once the project is set up, we can focus on the server element and start building our GraphQL backend. The project will include a grafbase folder and a package.json file.
So the first thing that I think that I want to cover in today's workshop is connecting to an existing API. If there is any API that you use now, feel free to leave a comment in the chat about whatever APIs that you're using and we can look to connect some of those. Feel free to follow along as I do and use your own APIs. If you want to use the APIs that I use, obviously, they'll be on screen and I can share those in the chat as well.
The first thing that we'll do in the workshop is we'll connect to our data sources and connecting to the different data sources looks a little something like this. This is not in the code just yet, but this is what we'll be doing in a few minutes. We will be using the GraphQL connector. We'll also be using the OpenAPI connector and we'll also be looking very briefly at the MongoDB connector. If we get time, I'll show you the Postgres connector as well that we've got coming out in a few weeks.
Once we've done the connection side of things, we can go on to deploy, either using the command line or by connecting a GitHub repository. I think for today's workshop, we can focus mostly on the local environment. I want to show how easy it is to build GraphQL APIs locally without having to deploy. But once you do want to deploy and make this available for your applications, all you need to do is run npx grafbase deploy. It will ask you to log in, obviously, if you haven't done that before, and then you can deploy. Under the hood, everything is deployed to the edge — the whole Grafbase engine is written in Rust. We package that up using WebAssembly and deploy that artifact around the world; we use Cloudflare, so we deploy it to the Cloudflare network, which is pretty cool. And then lastly, we'll explore making different queries to all of these APIs. Some are GraphQL APIs, some are MongoDB, and some are REST APIs, believe it or not. There is a tool, pathfinder.dev, that you can use to follow along to make queries. We'll be using the version that's hosted with the CLI, but if you want to use any GraphQL API and perform operations, we have this cool tool. And this is an overview of everything that we'll look at today. So I think we are ready to dive into everything. If there are any questions, please let me know — I'm just opening Discord as well, so if anyone does have any questions, please ping me in the chat.
Okay, so let's begin by creating a new project. We'll bump the font up here so we can see what's going on. Here we can use npx grafbase init — obviously, if you have npm installed, you'll be able to run this — and then all you need to do is give a name for your project. Now, if you have an existing project — say it's an Astro app, a Vue app, a React app, an Angular app, or maybe a mobile application built with one of the mobile frameworks — you can use Grafbase inside of that as well. So wherever your project is, if you have something already, change directory into it now inside of the terminal, and then run npx grafbase init. Because I'm not inside of an existing project today, I'm creating a GraphQL backend from scratch, without a frontend; we're just going to be focused on the server element here. I'm going to give my project a name — I'll call this TS Workshop. It will then ask what type of configuration I'd like to use to scaffold, build, and configure my Grafbase backend. I like TypeScript, so I'm going to choose that, and it feels very fitting given this is TS Congress. Once that's done, we get a folder with a file in it for our configuration. So if we change directory into TS Workshop and open it up inside of my code editor, we can have a look at what it's given us: we have a folder called grafbase, and we have a package.json with some data in it.
3. Creating a Basic GraphQL Query and Resolver
We will remove the configuration file for this workshop and focus on the essentials. Install the latest version of the Graphbase SDK to access all the latest connectors. Create a basic GraphQL query and resolver using the g helper and the builder pattern. Define the query, hello, with an optional argument, name. Specify the resolver file and the return type. Running the local dev server will generate a GraphQL schema. The schema can be viewed on port 4000. This configuration provides a standard and common way to generate GraphQL schemas using the GraphBase SDK.
And then we have a configuration file. Now this configuration file, we can remove for this workshop because we're going to be doing everything in stages here. So let's just remove some of the boilerplate. And this is what we are left with. So this is all you need to kind of follow along at this point.
There is one thing that we'll need to do, and that's to install the latest version of the Grafbase SDK. We did just push a new version, so you'll need to run that install for the SDK. What this does is bump the version here, and that means we've got access to all of the latest connectors. So if you don't see a connector — if you're watching this later on demand — you'll need to install the latest version of the SDK. And this package.json would be the package.json inside of your front-end application; all grafbase init would do is add this dependency to your existing dependencies. We just called ours TS Workshop when we initialized the project.
So we're going to get started by doing something very, very basic. If anyone has built a GraphQL API before, please let me know in the chat. If you haven't, we are going to explore creating a GraphQL query and resolver. The way we can do that is by using this g helper, which uses the builder pattern to build on top of g — our schema — and then we can give a name for the query. Why don't we just do the traditional hello world? We will get on to more exciting things, of course, as we proceed. Let's create a query, hello, and give it an argument. We'll define the argument name as name, and here we can say this is a string; we can make it optional if we want to as well. Once we've got that, we need to define how this query works. To do that, we call resolver here and specify the name of the resolver file that runs when we execute a GraphQL query — let's keep it very simple and call this hello. Very lastly, we need to specify the return type for this query, and here I'm just going to say that this is a string response. You could make this anything else — it could be a BigInt, a Date, a DateTime, an Email, or an enumeration — but for the purposes of getting started, we'll call it a string. Now, this hello query isn't going to do anything just yet; all it's going to do is generate a GraphQL schema. If we run the local dev server using npx grafbase dev, this will start up a local server that we can then go to on port 4000. So if we bring this over to the screen and look inside here, we've got a query, hello, with an argument of name.
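Put together in the config file, the steps above look something like the following sketch. It assumes the @grafbase/sdk package described in the workshop; the exact builder surface may differ slightly between SDK versions.

```typescript
// grafbase/grafbase.config.ts — a minimal sketch of the config built above.
import { config, g } from '@grafbase/sdk'

// A `hello` query with an optional `name` argument, resolved by the file
// grafbase/resolvers/hello.ts, returning a non-nullable String.
g.query('hello', {
  args: { name: g.string().optional() },
  resolver: 'hello',
  returns: g.string(),
})

export default config({ schema: g })
```

Running `npx grafbase dev` against a config like this is what generates the SDL shown on screen.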
If we go over to the tab here, we can see the schema and we can see that we have a field hello and we have a name string that returns the type string that is non-nullable. So this is all that that configuration did. Here's the SDL if folks are familiar with that. But this is a pretty standard common way. You could write this by hand, but with GraphBase we're able to use the SDK here, we've got full type safety, we don't make mistakes when defining the SDL, we have this pattern here. So this here generates this in GraphQL. But if we attempt to run this query, let's have a look and see what happens. So we're going to run a GraphQL query, we can omit query because that's by default a query. We can pass it a name.
4. Creating Resolver File and Using TypeScript
5. GraphQL Resolvers and Arguments
The resolver expects an exported function that returns a string. The arguments, context, and parent can be accessed within the resolver. In this part, we focus on the arguments, specifically the 'name' argument. We return the value of 'args.name' if it exists, otherwise we return 'world'. This is how GraphQL queries and resolvers work, allowing us to fetch data or perform mutations.
All this file then expects, if you're following along, is for you to export a function. Here we'll export a default function, and we'll return a string: "Hello, world". Now that that's saved, we open up the terminal and can see that the resolver initially tried to compile, but there was no file; now it's detected a change, and the server is reloading. So if we go back and run this once more, we should see "Hello, world" in our response. But what we don't have is "TS Congress" — the argument that we specified in the configuration that generated the SDL.
Now, if anyone has worked with GraphQL resolvers before, you'll know that there is a certain format, a signature, for resolvers. First, we have the parent; then we have the arguments; then we have the context; and then we have info. If we work in reverse order: info contains information about the current query. That can include an AST of what the query looks like, any arguments that were passed, and any directives that were used — so if we do things inside of the code to figure out the best way to plan or project queries onto databases, we've got that. For this, we don't actually need it. Then with context, in Grafbase we can access things about the current request: things like authorization headers or any other context variables that come from configuring the server. Then we have arguments, which are passed to the query — and this is what we'll need for this part. And very lastly, parent, which is what we use to access the parent, or root, query. We have a root query, hello, here, so there isn't a root or a parent of that; it is itself the root query. But if this was a resolver inside of a nested type, we could access that parent data, and we'll have a look at that a little bit later. So let's ignore that field, because we only care about arguments here. What we're going to do is return the name — args.name — and if that doesn't exist, we'll just return "world". So hopefully all that should work, if I can TypeScript correctly. We'll run that, and there we go: we get "Hello, TS Congress" in our response, because it got the value from the argument. And because that argument is optional, of course, we can get rid of it and just get the default value back — here, "world".
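The resolver file described above can be sketched like this — a plain exported function following the (parent, args, context, info) signature, using only the arguments:

```typescript
// grafbase/resolvers/hello.ts — a sketch of the hello resolver.
// Resolver signature is (parent, args, context, info); only `args` is used here.
type HelloArgs = { name?: string }

export default function Resolver(_parent: unknown, args: HelloArgs): string {
  // Fall back to "world" when the optional `name` argument is omitted.
  return `Hello, ${args.name ?? 'world'}!`
}
```

Calling the query with `name: "TS Congress"` yields "Hello, TS Congress!", and omitting the argument falls back to "Hello, world!".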
So this is a GraphQL query. Of course, we could do the same with mutations as well — instead of just fetching data all of the time, we could run a mutation here and do whatever we wanted. GraphQL queries typically fetch data; they're similar to GET requests, whereas a mutation is typically like a POST/PUT/PATCH/DELETE operation. Although with GraphQL you usually only interface over POST — querying is when you fetch data, mutating is when you want to change something.
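In the config, a mutation looks almost identical to the query we just defined. The sketch below uses a hypothetical addItem mutation (the name, arguments, and resolver file are illustrative, not from the workshop):

```typescript
import { g } from '@grafbase/sdk'

// Sketch: defining a mutation mirrors defining a query.
// `addItem` and its resolver file name are hypothetical examples.
g.mutation('addItem', {
  args: { name: g.string(), price: g.int() },
  resolver: 'add-item',
  returns: g.string(),
})
```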
6. Creating a Basic GraphQL Server
This is a very basic GraphQL server that you can deploy to production and it returns the value that you pass to the hello query.
So we could type in whatever we want here and create a mutation; it looks very, very similar to what we did before and can do whatever we need. We'll touch on mutations a little later. But for now, this is a very basic GraphQL server, and this is all it takes: counting the hello resolver, we've got two files that create a GraphQL server. There isn't anything else you need to do. You don't need to import a server, instantiate it, and then configure deployment of that server; you don't have to configure TypeScript. At this point, you have a GraphQL server that you can deploy to production, and it returns the value that you pass to the hello query. Now, it's not very production-like — it's probably something you'd never use; it's not even as advanced as a to-do list — but this is what we've got.
7. Adding Connectors and Using GraphQL API
Let's talk about adding connectors to Grafbase and stitching in your own data. We can use a Shopping Cart API as an example. Create a new const called cartql and choose the GraphQL connector, passing the URL to the API. If you encounter a type issue, check the editor, confirm the necessary imports, and ensure you have the latest SDK version installed. If the error persists, try commenting out the code and check the project structure.
Now then, I think we should get onto some more interesting stuff because that's quite boring. Here we have this query. Let's just leave that in there. And next, I want to talk about adding connectors to GraphBase and being able to kind of stitch in your own data as well. What API should we use? I have my own kind of API that I use, which is just a Shopping Cart API. I think we could use that as an example. And then I'll show you some other things like a headless CMS.
So I want to show you very quickly how we can use a connector to connect a GraphQL API — and you might be wondering why we would do that. Let's create a new const called cartql, because CartQL is the GraphQL API that we're using. Then we call connector, and here we'll choose the GraphQL connector. We give this a name — we'll just call it CartQL for now — and then we can give it some more values and pass the URL to that API. It isn't important that you know how this API works; that's just what we're using.
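The connector setup described here can be sketched as follows. It assumes the @grafbase/sdk package and CartQL's public endpoint; option names may differ slightly between SDK versions.

```typescript
import { config, connector, g } from '@grafbase/sdk'

// Sketch of the GraphQL connector pointed at the CartQL API.
const cartql = connector.GraphQL('CartQL', {
  url: 'https://api.cartql.com',
})

// Attach the connector as a data source on the schema (namespaced by name).
g.datasource(cartql)

export default config({ schema: g })
```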
So we do have a question about "Type 'GraphbaseSchema' is not assignable to input type" — it's happening on the g.query, and the name is the one being highlighted. This thing here. That to me sounds like maybe it's a mutation that's been called — or are you invoking an enum? This is what I have. That looks similar. Looks identical, yeah. And you are definitely exporting the default config, is that right? Okay, so it could be something with the editor. Make sure that this is imported, and also make sure that you have the latest version of the SDK installed — you can run npm install @grafbase/sdk@latest and it will bring in the latest version. Just check that you've got all of that there and start the server again. That's the only reason I would expect there to be a type issue; I don't even know if there's a way for us to replicate that. Yeah, I'm not entirely sure what that could be. Are you using Windows or Mac? Not that that should matter, really, but I'd be curious to log that one. Mac, okay — same as me. Yeah, it's strange. The error that you're receiving, why don't I paste it on screen just for others to see? So this is the error: "Type 'GraphbaseSchema' is not assignable to input type." It certainly sounds like a mutation issue, but obviously that's not correct, because you shared the schema. Sorry about that — I'm not entirely sure what's going on there; the latest version should be working with that. Maybe we can move past it. What happens if you comment this out — does the server run? And I just want to double-check that you have grafbase.config.ts inside of the grafbase folder, and package.json at the root.
8. Connecting APIs and Adding Data Sources
Once we've connected to an API, we tell GraphBase to attach it as a data source to the schema. We can now see the query for cart and fetch various details from the API. We can also add items to the cart and retrieve the updated cart contents. This provides a typical shopping cart experience. Additionally, we can add more APIs, such as Contentful, by configuring the environment variables and connecting the data source.
No worries. So yeah, if you do figure it out, let me know. Let's keep going anyways.
But once we've connected to an API, we need to tell Grafbase to attach it as a data source to the schema. So all we need to do at this point is say that cartql is a data source of the Grafbase schema and pass that in. Because the server is still running, it's detected that change, and when we go back over here and refresh, we can now see that we have this query for cart — under a namespace as well, by the way. We can then give an argument to this; let's call the cart "tsc" for TS Congress. From this, we can fetch the id, the subTotal formatted amount, and maybe the totalItems. We execute that, and we can see we don't have anything yet. If we update this query and fetch the name of each item and the quantity, and maybe the lineTotal amount and formatted amount, then when we execute it the request goes directly to the API that we connected — yes, it goes through the Grafbase gateway, but the query is projected onto the other data source. Now, that's cool, but why is it interesting? Well, let's run a mutation this time and add an item to the cart. We'll say this is for cart "tsc", specify an item ID, give it a name of "Test" and a price of 1,000 cents, and return the id. We execute that mutation, and that's added one item to the cart; if I run it a couple more times, there should be three items in the cart. So if I go back and run the query that we had before, we now get a response with three items. We see all the items, with a line total for each, and we could go as far as getting the amount and the formatted amount for those lines so we get the individual prices for the items. A very typical shopping cart experience, right?
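The operations walked through above look roughly like the following GraphQL documents (kept here as TypeScript template strings). The field names follow CartQL's public schema as used on screen, and the cartQL namespace field reflects the connector name; treat both as approximate.

```typescript
// The cart query from the walkthrough: fetch totals and line items.
export const cartQuery = /* GraphQL */ `
  query {
    cartQL {
      cart(id: "tsc") {
        id
        subTotal { formatted }
        totalItems
        items {
          name
          quantity
          lineTotal { amount formatted }
        }
      }
    }
  }
`

// The mutation: add a 1,000-cent "Test" item to the same cart.
export const addItemMutation = /* GraphQL */ `
  mutation {
    cartQL {
      addItem(
        input: { cartId: "tsc", id: "item-1", name: "Test", price: 1000 }
      ) {
        id
      }
    }
  }
`
```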
This is what we see whenever we use state for a shopping cart, whether it's an API or a dependency. Now then, this is one API, but let's spice things up and begin to add a few more. I'm just going to add an API key to my .env file — I'll do this off screen very quickly. Inside of my .env file, I've just added two environment variables, their names and values, and I'll show you what those names are in just a second. So I'll grab another connector now, and below here I'm going to connect to Contentful. I've given the variable the name contentful; I've used the GraphQL connector, because Contentful has a GraphQL API; I then give it the name Contentful, which is different to the one up here; and then I give it the URL. This is how I invoke that environment variable: I have an environment variable called CONTENTFUL_API_URL, and in the environment file that's equal to the URL. Then we have the headers, and this is the format that Contentful expects — a bearer Authorization header — and we paste in the value there. Lastly, all we do is connect our data source. So now if we go back and refresh, we'll see we have one API, two APIs, and then the API of our own — that hello query that Grafbase resolves itself. If we open up Contentful here and go to property, we can get the items, and maybe the name and the location — the latitude and longitude. Then we fetch that.
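The Contentful connector described above can be sketched like this. The two environment variable names are assumptions standing in for the ones added off screen, and the headers callback shape follows the SDK as shown in the workshop:

```typescript
import { config, connector, g } from '@grafbase/sdk'

// Sketch: a second GraphQL connector, this time for Contentful.
// CONTENTFUL_API_URL and CONTENTFUL_API_TOKEN are assumed to live in .env.
const contentful = connector.GraphQL('Contentful', {
  url: g.env('CONTENTFUL_API_URL'),
  headers: (headers) => {
    // Contentful expects a bearer token in the Authorization header.
    headers.set('Authorization', `Bearer ${g.env('CONTENTFUL_API_TOKEN')}`)
  },
})

g.datasource(contentful)

export default config({ schema: g })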
9. Expanding Types and Fetching Data from Remote APIs
With GraphQL, we can query and mutate against multiple databases or data sources using a single endpoint. We can expand types to return additional fields, such as weather. By configuring the resolver and using returns, we can fetch the current temperature from an API. This allows us to consolidate data from different sources and provide a comprehensive response. Imagine having a remote API that can be accessed through GraphQL.
This is a location that I have stored inside of Contentful. So now at this point with GraphQL, we're able to do many things now. We're able to query and mutate against multiple databases or data sources or APIs using a single GraphQL endpoint. So if we wanted to create a page, let's say, for example, we wanted to add items to a cart that were maybe properties that we wanted to purchase or rent, we could use a single API to do that, and with Graphbase we can join those, and we'll get into that a little bit later.
But with this here, let's have a look: we have propertyCollection, with items of the ContentfulProperty type, and we've got the name and the location. Imagine a beautiful UI that shows these locations — what if we wanted to see the current weather? Well, we'd have to call a weather API, but you might be thinking, OK, how would that work? We have the ability with GraphQL to execute multiple queries here, so we could have a query for weather and pass the location in — maybe the latitude and longitude. But imagine doing that for every single location: that could take quite a bit of time, and the DX might not be the best there. Here we can see we've got a response from both Contentful and our local resolver.
But what if, instead of creating this resolver by itself, we could expand the property collection item — the type — to return a field called weather? We don't have this weather field at this point, but let's go ahead and add it. Back inside of here, below our data source, we'll go about extending that property. We do g.extend, and here we need to pass in the type name. If we go back and look inside the Contentful query, propertyCollection returns a property collection whose item is ContentfulProperty — and this is the type that we want to extend. So let's create a new field called weather. We can give this some details; we could give it arguments similar to what we had before with hello; we can also configure the resolver here; and then we use returns. For this case, let's return a decimal. And we'll call the resolver weather. With that saved, we'll go to resolvers and create a new file that we'll call weather.ts. Similar to the resolver that we had before — let's just copy and paste that — we'll call this the weather resolver. Now we could actually do something inside of here to return the current temperature, the weather, for this location. I don't want to spend too much time on this because we've got other things to cover, but at this point you could go and implement a call to an API to fetch the current weather — this example is inside of our documentation if you'd like to follow along with that specific one. Here we'll just return, maybe, 22.5 degrees. We don't need the arguments here, so we save that and go back — whoops, we'll need to query for that once more — and now we've got this new field called weather. We'll execute that. And there we go.
We get that value, 22.5, that comes back. So great. We've got some data now. We've got a response coming back. So, imagine you have an API that is remote.
10. Extending APIs and Transforming with Open API Spec
We can extend the Contentful API with custom logic using a few lines of code. By using g.extend, we can create a new field, specify its type and resolver, and create a corresponding file. We can also fetch location latitude and longitude by making a fetch request and accessing the values from the query. If there are any questions or if we want to add more data sources, we can proceed. Additionally, we can transform non-GraphQL APIs into GraphQL APIs using the GraphQL open API spec. By configuring the open API connector, we can exclude certain fields and generate queries based on the spec. With the necessary environment variables set, we can use the transformed API. We can query data from Stripe using the generated queries and even extend Stripe with custom fields, such as gravatar, by creating a resolver and specifying arguments and return types.
We have this Contentful API, but you want to extend it with your own custom logic. Well, we've just seen how easy that is with a few lines of code. Here we have g.extend: we give the name of our new field, the return type of the field, and the name of the resolver, which is weather; then we just create a single file that we call weather. And that's all we need to do.
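Sketched in the config, that extension looks something like the fragment below. The generated type name ContentfulProperty and the float return type are taken from what was on screen; both may differ depending on your Contentful model and SDK version.

```typescript
import { g } from '@grafbase/sdk'

// Extend the type the Contentful connector generated with a new field.
// `weather` is resolved by grafbase/resolvers/weather.ts and returns a
// decimal (float) temperature.
g.extend('ContentfulProperty', {
  weather: {
    returns: g.float(),
    resolver: 'weather',
  },
})
```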
Now, if you wanted to fetch the location latitude and longitude, we can do that as well. For the sake of this, we could make a fetch request, and inside of here we could say the lat is equal to the location's lat and the longitude is equal to the long value. That location is what comes from the property in the query, so if we add it here, we can access those values inside the resolver — and that's what the root, or parent, argument is for.
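A runnable sketch of what weather.ts could look like is below. The weather API URL is hypothetical, the real fetch is left commented out, and the hard-coded 22.5 matches the workshop; the parent argument carries the queried ContentfulProperty fields, including location.

```typescript
// grafbase/resolvers/weather.ts — a sketch of the weather resolver.
// The parent (root) argument carries the ContentfulProperty that was queried.
type Parent = { location?: { lat: number; lon: number } }

// Hypothetical weather endpoint; swap in a real provider's URL.
export function buildWeatherUrl(lat: number, lon: number): string {
  return `https://api.example-weather.dev/current?lat=${lat}&lon=${lon}`
}

export default async function Resolver(parent: Parent): Promise<number> {
  const { lat, lon } = parent.location ?? { lat: 0, lon: 0 }
  // In a real resolver you would fetch and parse the response, e.g.:
  //   const res = await fetch(buildWeatherUrl(lat, lon))
  //   return (await res.json()).temperature
  // Hard-coded here, as in the workshop:
  return 22.5
}
```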
So, yeah, please do ask any questions, if none of this makes sense, or if it all makes sense, then great. Don't ask anything, or whatever, just let me know if there's anything I can help with. Now then, at this point, we can go on to add more data sources. Let's just add a few more.
At this point, we've stitched in remote APIs and we've designed our own GraphQL query with our own data, but what if we want to extend an API that is not a GraphQL API? Maybe we just want to embed a REST API, transform it into a GraphQL API, output GraphQL — do all of this REST-to-GraphQL stuff. Well, because the OpenAPI specification exists, Grafbase can read it, and our system — our engine — can take all of that data and transform it into GraphQL.
So if I bring this example across and walk through it very briefly: here we have Stripe — very famous API, very cool. But they don't have GraphQL. What they do have is an OpenAPI spec. We're able to take that specification and pipe it into the OpenAPI connector, configure all the headers that we need, and then any transforms. What's cool about transforms is we can say: I want you to exclude certain fields. So maybe we only want the customers field or the orders query, and we exclude the rest.
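The OpenAPI connector setup can be sketched like this. The spec URL points at Stripe's published OpenAPI repository, STRIPE_API_KEY is assumed to be set in .env, and the transforms mentioned above are noted but not shown, since their exact SDK shape varies by version:

```typescript
import { config, connector, g } from '@grafbase/sdk'

// Sketch of the OpenAPI connector pointed at Stripe's published spec.
const stripe = connector.OpenAPI('Stripe', {
  schema:
    'https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json',
  headers: (headers) => {
    // Stripe authenticates with a bearer secret key.
    headers.set('Authorization', `Bearer ${g.env('STRIPE_API_KEY')}`)
  },
  // transforms could be configured here to exclude unwanted fields.
})

g.datasource(stripe)

export default config({ schema: g })
```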
So with that saved, I'll need to go off screen again to add my environment variable to that all-important .env file. I think we're good — yeah, I've added that. This Stripe API key is my secret Stripe API key. With the server running, the changes have been detected, and you can see here that we detected the Stripe API key set inside of .env; we're now able to use it. So if we go back to Google Chrome, we should now see that we have a query for Stripe, and all of these queries were automatically generated for us from the OpenAPI spec. That's why I love specifications: because we can read one and know what each field is called, the type of the field, the URL, and all the properties of each individual field, we're able to transform all of it — all predictable.
So here, I'm going to make a query to fetch my customers from Stripe, so we'll grab the name and maybe grab the email. We perform that request and we get the data that comes back. Just need to add a few more fields. There we go. So yeah, we have a query that's going to Stripe and fetching back all of my customers.
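The query he runs looks something like this. The field names are generated from Stripe's OpenAPI spec, so treat the exact shape as an assumption:

```graphql
query {
  stripe {
    customers(first: 10) {
      nodes {
        name
        email
      }
    }
  }
}
```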
Now, remember the concept we had before of extending Contentful with a custom field. Let's say we wanted to do the same for Stripe. Say we want to get the avatar, the Gravatar, for each of our Stripe customers. Now, Stripe doesn't have a field called gravatar, but because we're using Grafbase, we're able to create a resolver for that. So just to save us a bit of time, I'm going to copy and paste the code to make that happen and we'll walk through it. Below that Stripe data source, we'll do exactly the same as what we did before. We'll call g.extend, and this time we'll pass the type name StripeCustomer, then we'll call the field gravatar, and then for that we'll specify that there are some arguments to this field. We can specify the size, the default image if an image doesn't exist, a rating, and a Boolean to only fetch secure images. Then we return a URL that is optional, or nullable, and we point it at the file gravatar. And up here, with enumRef and the rating, we've just created a GraphQL enum. So I'm going to save this.
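The extension he's pasting in looks roughly like this. Argument names are as he describes them, but the exact SDK shape and the enum values are assumptions:

```typescript
import { g } from '@grafbase/sdk'

// A GraphQL enum for the allowed Gravatar ratings (values assumed)
const rating = g.enum('GravatarRating', ['g', 'pg', 'r', 'x'])

// Extend the generated StripeCustomer type with a computed field
g.extend('StripeCustomer', {
  gravatar: {
    args: {
      size: g.int().optional(),            // image size in pixels
      defaultImage: g.string().optional(), // fallback if no avatar exists
      rating: g.enumRef(rating).optional(),
      secure: g.boolean().optional(),      // only fetch https images
    },
    // the field returns a nullable URL
    returns: g.url().optional(),
    // resolved by grafbase/resolvers/gravatar.ts
    resolver: 'gravatar',
  },
})
```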
11. Extending APIs and Using a Database Adapter
We've connected a REST API, transformed it to GraphQL, and extended it with our own custom code. We can join another API, such as getting Stripe customer tickets inside our support platform or retrieving shipping information from a shipping or returns API. We can do anything inside the resolvers and extend the nodes to perform various tasks. Next, I'll show you how to use a database adapter and discuss the benefits of using it.
Then I'm going to go back over to here, and if we refresh and we go to the schema, we'll be able to see that we have the field inside of our customer. So if we look for that customer query... where has it gone? Let's look inside of the whole schema: customer. So we have the StripeCustomer now, and all of these fields are obviously generated from the OpenAPI spec, but if we scroll down we'll see that we have gravatar, with those different arguments, returning a URL. So now we can go back and run that query, but we'll obviously need to create that gravatar file first, which we'll do in just a second. So now we're able to make this request. We'll run this; it won't work, it's going to be null, because we haven't got a file there. But then we go back to our code and we create the file here called gravatar.ts. And because we run at the edge, you can install npm packages, but they need to be edge compatible. Inside of here, I'm not using any npm libraries; we're just using the built-in crypto stuff, the edge stuff from Cloudflare. So the question is, how does the new gravatar field know that it's attached to that object? Because we said here extend StripeCustomer, and the response from this is a StripeCustomer. The actual node here is a StripeCustomer, so we're extending that node; that's all that is. The nodes object, you can ignore that, that's just what the API returns. The node actually returns a StripeCustomer. So the customers query, we can see here that its type is StripeGetCustomers, and that has the node inside of it, but we're only extending the customer itself. So wherever the StripeCustomer type is used in the API, that extension will appear there as well.
So maybe you could get an order and then get the customer from the order, and that field would appear there as well; it's reusable. Do the nodes mean that's a collection of the type? Yes. So, nodes here... I go through phases of whether I like the nodes approach or not. What you'll find with other APIs is that it looks a little bit like that, where you have nodes and edges, and this follows a specification in the GraphQL world typically known as the Relay cursor specification. I'm not sure if Stripe just calls those nodes as well, I'm not that familiar with it, but in the GraphQL cursor spec we have an edge, and the node is the type, and inside of there we can have whatever fields you need. We do prefer to try and unify all of these APIs so they look the same, so wherever we can we try to follow that cursor spec, but for Stripe specifically, I can't remember. So now we've got this file, and we're just doing what we need to do to encode the email. The first argument here of that gravatar resolver is the parent. So inside of that StripeCustomer, from the parent, the root, which is the customer, we get the email. Now, one gotcha here is that if we don't include that email in the query, then we can't access it here. We do want to make that nicer by allowing people to extend the query with the fields they need, but if you're integrating this API, you're going to know how it works and you'll know that you need to pass email. So with that done, at the very bottom here we just return a string, a URL composed of all of the different values above, and then we can run that. So now we get this URL here. If we click on that, it launches Gravatar. We can then pass different arguments; let's say we want to increase the size to 300. We run that, it updates the URL, we follow that, and it gets a larger image. And that's an image of me.
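To make the hashing concrete, here's a small standalone sketch of building a Gravatar URL from an email. This is illustrative, not the exact resolver from the workshop: the edge runtime exposes Web Crypto rather than Node's crypto module, and Web Crypto's digest doesn't support MD5, so this sketch uses Node's synchronous crypto with a SHA-256 hash (which Gravatar also accepts) of the trimmed, lowercased email.

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: compose a Gravatar URL from an email address
// plus the size / defaultImage / rating / secure arguments from the field.
export function gravatarUrl(
  email: string,
  { size = 80, defaultImage = "mp", rating = "g", secure = true } = {}
): string {
  // Gravatar hashes the trimmed, lowercased email address
  const hash = createHash("sha256")
    .update(email.trim().toLowerCase())
    .digest("hex");
  const base = secure
    ? "https://secure.gravatar.com/avatar"
    : "http://www.gravatar.com/avatar";
  // s = size, d = default image, r = rating
  return `${base}/${hash}?s=${size}&d=${defaultImage}&r=${rating}`;
}
```

In the real resolver, the email would come off the parent StripeCustomer argument, exactly as described above.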
Maybe it's from about 20 years ago at this point. But yeah, that's extending. I think that's quite powerful. So we've connected a REST API, transformed it to GraphQL, and then extended it with our own custom code. And we can do anything inside of those resolvers, whatever we need to do. We can even join another API: we could say, for a Stripe customer, I want to also get all of that customer's tickets from my support platform. Or maybe it's all of their returns or shipping from my shipping or returns API. You could obviously extend those nodes to do exactly the same thing. So, yeah. Now, there are a few more things that I want to go through. You don't have to follow along if we don't have time, but there are two other things I want to show. I'm going to update my environment variable file so I can show you these, and I'll talk about what they are. So my environment variable file is updated. The next thing I want to show you is how we can use a database adapter. So far, we've plugged in remote APIs and we've created this manual GraphQL resolver. But imagine having to create all of the different types for these by hand; our files would get quite big.
12. Using Database Adapters and Generating CRUD API
But with Graphbase, we can automatically generate queries and mutations for data that lives in a database. Currently, we have MongoDB as a database adapter, and we are working on adding Postgres and Neon adapters. To connect to MongoDB, we need to provide the API key, URL, data source, and database. These values can be obtained from MongoDB Atlas, where we have already created a database and enabled the data API. By specifying models and their fields, we can generate a CRUD API for the collection. The generated queries and mutations are automatically added to the schema, and we can customize the collection name if needed. The generated API includes input arguments for all fields, which are non-nullable by default. We can then make GraphQL queries and retrieve data from the database, following the relay spec with edges and nodes. The generated API provides the necessary flexibility and convenience for interacting with the database.
But I don't want to hand-write queries and mutations for data that lives in a database. If it's something like SQL or MongoDB, maybe we can generate some of that automatically. Well, with Grafbase we can. We've got a small set of database adapters. We have MongoDB currently, and we're also working on Postgres and Neon adapters. For this, we'll call connector.MongoDB and give it a name. Let's just call it Mongo. Very creative. Then we need to add in a few things: the API key, URL, data source, and database. Now, for anyone following along now or later, this URL, data source, database, and API key for Mongo come from MongoDB Atlas, the Atlas offering that they have. I've already created a database and I've enabled the Data API. Our docs will walk you through this if you're unfamiliar with it; if I just open up the guides and go to MongoDB (I published this a few days ago), you need a Mongo database with the Data API enabled, and you can get these values from there. That's MongoDB Atlas: you go to their website, sign in, and create a database. That's what we're using here. You can use a local version of MongoDB as well, and you can check out our docs for that too, but for the purposes of this we'll just use the remote one. What this means we can then do on the MongoDB connector is specify different models. So here, let's create a model for user and give it some fields: name, which I'm going to say is a string; age, of the type int; and maybe email, also a string. Now, we could use the email type here as well, and that would give you some validation; let's just keep it as a string so we don't go off script. Then all that's left to do is to attach that Mongo data source to the schema. So here we've specified a model, and this is a new keyword.
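In config form, the connector and model being described look roughly like this. Option names follow the Grafbase MongoDB connector as I understand it, and the environment variable names are assumptions (the optional fields reflect the change he makes a moment later):

```typescript
import { config, connector, g } from '@grafbase/sdk'

// MongoDB Atlas Data API connector; all four values come from Atlas
const mongo = connector.MongoDB('Mongo', {
  apiKey: g.env('MONGODB_API_KEY'),
  url: g.env('MONGODB_URL'),
  dataSource: g.env('MONGODB_DATA_SOURCE'),
  database: g.env('MONGODB_DATABASE'),
})

// A model generates a full CRUD API for the backing collection
mongo
  .model('User', {
    name: g.string(),
    age: g.int().optional(),
    email: g.string().optional(),
  })
  // store documents in the plural "users" collection
  .collection('users')

g.datasource(mongo)

export default config({ schema: g })
```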
We've not used this anywhere else. This name, user, works with model, and model will generate a CRUD API for the collection of users. Now, if we want to customize the name of the collection, we can give it a name of users. So instead of using the singular model name, user, for the collection, we can specify that we want the plural, users. Now, if everything's still running, which it is, that will reload. And if we explore the Mongo namespace, we now get two queries: user and userCollection. Inside of mutation, inside of Mongo, we also get the user create and delete mutations, plus the many variants: create many, update, and update many. These are all automatically generated for us. If we go inside of here, look at the arguments, and open the docs for that, we can see in the input arguments that we've got all of those different fields as well. And because we didn't specify that these were nullable, all of these are non-nullable, required, not optional. All of this was generated automatically. So if we go back to query, we'll just make a GraphQL query and say: get me the first 100 users. Going back to what we were talking about before with edges and nodes, we do follow that Relay spec here, so we generate the edges. Then we can do ID, name, age, and email. Did I add all of those? Yep. We run that, and that makes a request to the API. It looks like we don't have an age value, so we could do .optional here. Let's do optional there to be a bit safe, and maybe add optional there as well. I haven't checked on the data in MongoDB in a few days, so it could be that it's been deleted, but there we go.
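The generated query follows that Relay-style connection shape; roughly (exact generated field names may differ):

```graphql
query {
  mongo {
    userCollection(first: 100) {
      edges {
        node {
          id
          name
          age
          email
        }
      }
    }
  }
}
```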
13. Working with MongoDB and Securing GraphQL Queries
The age is null; there was no value, and because the type specified it as non-nullable, the API wouldn't return with a null value. Caching is a cool topic that wraps all of this together and goes back to bringing data sources to the edge. To create a new user, we can pass in a new name, age, and email. With just 15 lines of code, we have a fully working CRUD API with batch mutations. To secure GraphQL queries from DoS attacks, you can add rate limiting inside the resolver or use tools provided by Grafbase; the details are available in the documentation.
The age is null; there was no value, and because the type specified it as non-nullable, the API wouldn't return a null value there. So there we go. We get the data, and that comes back from MongoDB.
So, yeah, there is a comment here. This seems incredibly powerful. Do we also have the other benefits from GraphQL translating the REST API? Caching. Where would that unique ID from GraphQL specification go in the implementations? I see your ID now. Okay, cool. Yes, caching is a cool topic that I'm going to come on to because that's the thing that wraps all of this together and goes back to our first comment in the very opening slides, which was about bringing data sources to the edge. We do that by using caching. So I think that's a good segue into caching, that question. So thank you very much.
But yeah, just to wrap this one up: mutation. If we wanted to create a new user, inside of here, whoops, we can pass in a new name, pass that value, and then from that we can get the inserted ID. We run this mutation and it inserts a new record. Let's just do that, and we'll add you in here. Age, I'm just going to make this up as well; by the way, don't be offended. I don't know, 30? Is that a good median age? Let's go with 25, so we don't insult anyone, and then email. Let's just say at graphql.com; I don't know if that's a website, or .org. Then we run that and boom, straight away, we have that done. Let me go back to the query that we had before. There we go. We've got myself and we have that second entry that we just created, coming from MongoDB. This is in MongoDB Atlas; it's remote. But we're able to do all of this without needing to do much at all. All we did to make all of that possible is this: 15 lines of code. Pretty cool. Fifteen lines of code and we've got a fully working CRUD API with batch mutations as well.
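The create mutation he runs looks something like this. The input values are the made-up ones from the session, and the exact generated field names are an assumption:

```graphql
mutation {
  mongo {
    userCreate(input: { name: "Jamie", age: 25, email: "jamie@graphql.com" }) {
      insertedId
    }
  }
}
```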
So how would you suggest making these services and GraphQL queries secure from DoS attacks? That is possible. Inside of the resolver, you can add some rate limiting; there could be a library for that. We actually have a KV solution inside of resolvers that you could use to store state, and we know the IP address. But with Grafbase itself and our cloud offering, we have tools coming that will let you configure all that stuff. Right now, we show you what people are requesting, how long requests take to data sources, et cetera, but we want to give users the ability to act on that. For now, though, you could home-brew something.
Yeah, so the question about whether this all works with ORMs like Prisma. If you watch this back later, all of this is available in our documentation as well. But in this query here, hello, inside of here, we could say, okay, users is equal to await, let's make this an async function, prisma.user.findAll, I don't know what the exact API is.
14. Importing and Using Raw SQL with Neon and Mongo
We can import ORMs or use raw SQL inside a resolver to return users, though for now it is recommended to use the Neon or Mongo approach shown here. In a few weeks, stable Neon and Postgres connectors will be available: a Neon database can be specified with a URL, and any Postgres database can be used via a connection string. These connectors will allow the same operations as before, including models.
But we could import things like that. We could do raw SQL inside of here and then return those users. Now, I strongly encourage anyone using Neon or Mongo to go down the approach I went through here. In a few weeks, we'll have a stable Neon connector and a Postgres connector. So here you'll be able to specify your Neon DB and give it the URL for that. We'll also have Postgres support: you can use any Postgres database and provide the connection string there as well. And with that, you'll be able to do everything you did before, with model, et cetera. But that is coming; it's not out yet.
15. Creating Custom Connectors
You can create custom connectors by re-implementing the signature of one, specifying the name and field inputs. The connectors can be polymorphic, allowing for flexibility in the types. The resolver can fetch the type name from the query arguments. The resolvers need to be edge compatible, and edge databases like Neon, Turso, and PlanetScale work well in this context.
Yeah. Can we create custom connectors? Custom connectors would work through this query mechanism. You could re-implement the signature of one, I think, by creating something like userCreate, and then you could do g.input and specify the name and the field inputs. You could even make a union type so it could be polymorphic. So there is some flexibility: you could make this a union, and then instead of it being userCreate it could be, you know, objectCreate, and then that union would have the different types. It could be user; it could be, I don't know, on the spot, product, whatever. You know what I mean? It's polymorphic, in a way. So you could re-implement the connector logic that way, I guess. And then in the resolver, you could fetch the type name from the query arguments. The one caveat with these resolvers is that they have to be edge compatible. I think Prisma has a hard time at the edge, so that may not be a perfect example, but certainly other edge databases like Neon and Turso and PlanetScale and all the other big ones would work in this context, no problem whatsoever. We have guides on all of those I've just mentioned as well.
16. Caching and Forwarding Authorization Headers
With GraphBase, you can configure caching rules to improve performance. By setting a maxAge of 60 seconds, you can cache data and provide a faster user experience. The cached data is stored at the edge or on the server, ensuring consistent performance for all users. Caching also reduces the cost of API calls and allows for static data fetching. Different configurations can be specified for caching different data sources. Additionally, you can forward authorization headers to protect sensitive data when deploying the application.
All right, cool. So the very last thing that I want to cover: when we go inside of here and make a request to our database, we can see that some of these took a while. Let's go and make a request to Stripe as well. We'll get our customers once more, with the email and Gravatar, and maybe the name. Now, that gravatar function has to run for each request, then it's got to go to Stripe and get the data, and then do everything it needs to do to return that data; we can see here that took over two seconds. That's not amazing. But one thing you can do with Grafbase is, when you export the default config, configure rules for caching. We can pass an array of different rules, and these take properties like the types, maxAge, staleWhileRevalidate, mutation invalidation, and scopes. I'm not going to go into all of these; our documentation covers them in more detail. But with maxAge, we're able to say: for 60 seconds, cache everything. And how do we cache everything? Well, the root type is Query, so we're able to cache everything under Query for 60 seconds. Once that 60 seconds has gone, the next request is going to have to go back to the server and bring data in. That's not great, and if stale-while-revalidate has taught us anything, it's this: if the data is older than that maxAge, just serve the stale version, but re-fetch in the background. Give the user the old data. So we can pass that 60-second staleWhileRevalidate value there as well. Then, if we go back here, the next request after expiry will go off to the service and refresh the data for next time. And when we run that a second time, we see that in six milliseconds we got the response from Stripe and Mongo that was inside of the cache.
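The caching rule being described looks roughly like this in the exported config. A sketch: property names follow my reading of the Grafbase cache docs and may have changed.

```typescript
import { config, g } from '@grafbase/sdk'

export default config({
  schema: g,
  cache: {
    rules: [
      {
        // the root Query type, i.e. cache everything
        types: 'Query',
        // serve from cache for 60 seconds...
        maxAge: 60,
        // ...then serve stale for up to another 60 seconds
        // while re-fetching in the background
        staleWhileRevalidate: 60,
      },
    ],
  },
})
```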
And imagine this was multiple data sources and you're joining them and you have your own APIs: users no longer need to hop around the world; they get what they want within six milliseconds. Granted, this is local, but if you were to deploy this to production, this is going to be 20, 30 milliseconds, a lot faster than going directly to each individual data source. And if you do things like I've done with the resolver, it's got to run that resolver, et cetera; we can get around all of that. So yeah, the cache is at the edge, correct. Client-side caching using Apollo Client is not affected and can stay oblivious to all these implementation details. Yeah, if you're using something like Apollo Client that has caching in the browser, that's awesome. But if you have 100 users on the page, they're still going to be making requests to all the underlying data sources, right? Just because one user has a cache in their client doesn't mean the next user does. But with this, because it's cached at the edge, or on the server if we want to think about it in old terms, every user is going to have a great experience, which is cool. And what I really love about caching in general is the cost. It's so much cheaper: with the connected APIs, you don't have to pay for tons and tons of API calls anymore, because you can cache them. Think of a website you buy things from. A typical product page probably doesn't change all that often. Things like the inventory, whether it's in stock or on sale, might change, but things like the description and the images? You don't have to fetch those every single time, so you can cache them. And if your site has been statically generated, it can statically fetch data from the cache. So yeah, you're able to cache all that stuff, which is cool. And yes, you can specify different configurations as well.
So I set up caching for absolutely everything, but I could just say that I want to cache StripeCustomer for 60 seconds. I could also add another rule and say I'm going to cache the Contentful query for however long, because I don't really care; that data doesn't change much. Now, what's even cooler with these rules: StripeCustomer is a perfect example here, because at this point the Grafbase gateway is configured with my own Stripe API key. If you deploy this, you're obviously going to get access to my stuff, which isn't ideal. So what you could do here is say headers, forward, set authorization, and then forward (I can't TypeScript today) the authorization header from the request. Let's call this something different so it's easier: Stripe secret key. And this value down here... that might work, it might not work, I've not tried this. Yeah, that worked. So we're going to Stripe again, we're getting the customers. But hold on, this shouldn't have worked.
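The header-forwarding idea looks roughly like this on the connector. A sketch: the `introspection` helper and the `forward` option are my reading of the SDK and should be checked against the docs, and the schema URL is an assumption.

```typescript
import { connector, g } from '@grafbase/sdk'

const stripe = connector.OpenAPI('Stripe', {
  schema:
    'https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json',
  headers: (headers) => {
    // use the gateway's own key only when introspecting the spec
    headers.introspection('Authorization', `Bearer ${g.env('STRIPE_API_KEY')}`)
    // at request time, forward the caller's own Authorization header,
    // so each user hits Stripe with their own credentials
    headers.set('Authorization', { forward: 'authorization' })
  },
})
```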
17. Caching and Query Scope
Queries can be cached at a global level or specific to an API key. This allows for efficient retrieval of data and ensures that each user receives the appropriate response. The caching process eliminates the need for repeated queries to the database and improves overall performance.
It might still work because it's cached. Perks! So yeah, wrong key, but it still works because it's cached. Once that time has lapsed, though, that query will not work anymore, because that value is not valid. Now, at this point, that data is cached at a global level. But with the configuration for queries, you can also specify the scope. We've got a limited set of scopes right now. You can specify it's public, so it's for everyone; everyone reads from the same cache. Or you can do it on an API-key basis: if you use a token or a header to resolve that data, we can figure that out and we will cache the response specific to an API key. So if you make a query to get all customers but pass your own API key, that result will be cached for you only. And if I do it, it's cached for me only.
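Scoping the cache per API key would look roughly like this. The `scopes` values here are an assumption based on the limited set he mentions:

```typescript
import { config, g } from '@grafbase/sdk'

export default config({
  schema: g,
  cache: {
    rules: [
      {
        types: 'StripeCustomer',
        maxAge: 60,
        // 'public' would share one cache entry globally;
        // 'apikey' caches a separate entry per API key
        scopes: ['apikey'],
      },
    ],
  },
})
```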
18. Data Availability, Cost, and Auth Configuration
I think the data availability and cost is mind-blowing. We're solving the problem of paying multiple vendors to duplicate data across regions. We want to ingest data in real time and keep it in sync. We can configure Auth and use existing JWTs for authentication. We offer a managed database option, but our focus is on creating an API to extend databases without migrating data. That's all for today. Reach out to me on Twitter for more information.
Thanks very much for the comments; I love all of the questions. Honestly, the data availability and the cost stuff just blows my mind, how cool that is. I don't have the presentation open anymore, but let's see if we can pull it up. I showed this in the very beginning, which is this problem here, right? We've got so many different tools and APIs that live all over the world, in different regions. So you end up having to pay all of these different vendors to put your API and duplicate your data across different regions. If you have six vendor APIs and you want to spread those across, let's say, ten regions around the world, that cost is just going to spiral out of control. It's too expensive, even for applications which are making millions in revenue and could afford it. It's not a cost I think we should have to pay; there should just be a better way. And I think this is what we're trying to solve. It's not the perfect solution, because there's still going to be that latency in the very beginning until it's cached, but over time what we maybe want to look at doing is ingesting that data, so we're reading your data in real time. You open a connection to us and we're able to just bring in that data. So instead of you making requests to your database, you're making them to us, and in the background we're going to the database and making sure that things are in sync. There's a really cool technique there that I think we could adopt. That is all that I wanted to cover today, just a very quick tour. There are things I haven't covered around auth as well. That's the other cool thing; I'm going to talk about it really quickly.
You can configure auth in here. You could specify different rules: group-based, owner, private, public rules. And for groups you could say, okay, the group admin, whoops, can get data, can delete data, et cetera. So you can configure those rules for your connectors as well, which is cool. And then how the auth stuff works is: you log in to Clerk or Auth0 or whatever auth provider you're using, get a JWT, and use that same JWT to authenticate against this API. So you don't have to migrate your users to us or change how you deal with your user login stuff; you can just use whatever you've got with us, and that works fine out of the box. Migrating your data to us isn't what we're trying to do either. When Grafbase first started, we gave people the option of a database that we manage and deploy for you. It's still there now if you want to use it, but we're going to deprecate it soon. Honestly, I don't know about you, but I don't want to migrate my data. The thing that's probably going to take me and my team the most time is migrating data. So we talked to our users and said, hey, what if we can create an API for your data, and allow you to extend your databases with data that doesn't belong in your database? And that seems like a pretty good product fit. So yeah, that's all I had time for today. Let me know if there are any other questions; you can hit me up on Twitter. I'd really appreciate you joining the Discord, or at least give me a shout on Twitter. If we go to Twitter here, I'm notrab, which is just Barton backwards. If you want to see more cool stuff like this (and I'm going to be talking about this next month as well), let me know. Let me know on Twitter if you've got any questions. I mostly talk about GraphQL and databases and APIs, so if you're into that kind of thing and you want to chat more, let's hang out.
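To close the loop on the auth configuration mentioned above, here's a rough sketch of group-based rules with an existing JWT provider. The provider constructor, rule names, and the issuer environment variable are assumptions from my reading of the Grafbase auth docs:

```typescript
import { auth, config, g } from '@grafbase/sdk'

// Reuse JWTs issued by an existing provider (Clerk, Auth0, etc.)
const provider = auth.OpenIDConnect({
  issuer: g.env('ISSUER_URL'), // e.g. your Clerk or Auth0 issuer URL
})

export default config({
  schema: g,
  auth: {
    providers: [provider],
    rules: (rules) => {
      // any signed-in user can read data
      rules.private().read()
      // only members of the "admin" group can delete
      rules.groups(['admin']).delete()
    },
  },
})
```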
But for today, I think, if there's no more questions, I think we'll call it. Cool. Thank you, too. Thanks very much for showing up. Thank you all. And yeah.