Building GraphQL backends with SDL


In this workshop we'll cover the basics of GraphQL, and then use that knowledge to build a backend using SDL. Once we've created our schema, we'll run it locally, deploy to production, and create deployment branches for any changes to our schema. We'll finish the workshop by fetching data from our backend on the frontend, as well as adding authentication to lock down data access!

A Mac setup with Node and npm (if you wish to follow along locally). Otherwise, an account with CodeSandbox will do.

91 min
31 Oct, 2022

AI Generated Video Summary

The workshop covers the process of creating a backend with GraphQL SDL, deploying it to Grafbase, and connecting it with React. It explores the basics of a GraphQL schema and local server; GraphQL edges, nodes, and mutations; and GraphQL API operations and unique fields. The workshop also covers deploying the project to Grafbase and GitHub, adding fields and importing into Grafbase, and deploying to the edge as a GraphQL API backend. It discusses database history and request information, authorization and schema changes, and setting up authentication with Clerk as a provider. The workshop concludes with topics like fetching posts with Apollo Client, setting up Apollo Client and auth middleware, and GraphQL code generation and backend authentication.

1. Introduction and Project Initialization

Short description:

As you're joining, I suggest heading to Grafbase and creating an account. The purpose of today is to create a backend with GraphQL SDL, deploy it to Grafbase, and connect it with React. If you want to get ahead, create an account with Grafbase. I am Jamie, and I create weekly screencasts on GraphQL, covering various topics and providing useful tips and tricks. Feel free to ask any questions or suggest video topics. Let's kick things off by opening a terminal and running the command npx grafbase@latest init to initialize a project named react-advanced-workshop.

As you're joining, if anyone is interested and wants to get ahead, then I would suggest heading to Grafbase and creating an account. The purpose of the time today is to create a backend with GraphQL SDL and run that locally. We'll then deploy it to our Grafbase account, and then we will make a pull request to change some of that schema. And then we'll dive into connecting it with React, and maybe even adding some authentication with an authentication provider, Clerk in this instance. So yeah, if you want to get ahead, feel free to create an account with Grafbase, and I'll give some time for others to join in.

So for those who don't know me, I am Jamie. I create weekly screencasts on GraphQL, and you can find them all online. Every week, normally on a Monday, Tuesday, or Wednesday, I will post a video on a specific topic around GraphQL, everything from the frontend to the backend. Whatever it is, we cover useful little tips and tricks and videos on how to do certain things. So if you're looking for a reference, like how do I import my schema into my Node file, or how do I create a code-first schema, there is most likely going to be a video there. So yeah, check those out. And if anyone has any questions or any suggestions for videos, let me know. Feel free to ping me on Twitter or here in the chat, and I'm happy to cover them. I've just posted the chat link there so people can see it. If you don't know where that is, you should see a little badge pop up. I'll do my best to keep an eye on the chat as we go through. But today's session will be mostly in the code editor, and we'll just go through getting things connected. So apologies if I'm in the middle of something and I don't see a message, that's just how it goes sometimes. So yeah, let's kick things off and get started.

So one thing that I'd like to do is work inside of a terminal. If you have a terminal and you want to follow along, open it up and get into a directory that you're familiar with. I'll post this in my GitHub later. I'll increase the font here so we can see. So this is the directory where I store all my code, and I'm just going to run the command npx grafbase@latest init and give a name for my project. Here we'll call it react-advanced-workshop, but you can give this whatever name you like; it doesn't matter too much. We'll create a few models as we go, but if you're following along at a later date, this is init followed by the project name. This is for when I don't have a project already. So you would run this, it would create a directory called react-advanced-workshop, and it would scaffold the Grafbase schema where we can define and deploy our backend.

2. Using Next.js and Initializing Grafbase

Short description:

I'm going to be using Next.js because of the new release. We'll create a Next.js project with TypeScript and initialize Grafbase inside it. We'll open the project in Visual Studio Code and delete the auto-generated schema file. Then, we'll define our own schema and talk about the basics of GraphQL.

What I'm going to do, however, is use Next.js, because I heard they just released an amazing new version, 13. So why don't we use that throughout this? If you're here, I imagine you've got some familiarity with Next.js. If you don't, it doesn't matter too much; we won't be diving too deep. So feel free to follow along.

And I believe the command is npx create-next-app@latest, and then we'll give the project a name. So here we'll call it react-advanced-workshop, and then we'll use TypeScript as well. So the command that we see here will generate a new Next.js app with that name, and the --ts flag creates a TypeScript project. So this is initializing a new TypeScript Next.js project in my current directory.

And then what I'm going to show you is how we can initialize Grafbase inside of that, because I didn't want to just say, this is how we create a Grafbase project, go figure it out. In a real-world scenario, not everyone has the luxury of working on a greenfield project. So I want to show how you can add Grafbase, or a GraphQL backend, into an existing project. So here I have my typical Next.js project. Now, what I'd like to do is go ahead and run that command. Let's just clear the screen, and we'll run npx grafbase@latest init. I'm not going to provide a directory or a project name for my Grafbase project here, because it's going to create it inside this project. I already have a folder, so yes, we want to go ahead and install, and we'll be able to see here that this has successfully created a project and a new schema inside of my folder.

So, if I list out the contents, we can now see we have this grafbase folder, which is for my backend. And we do have a Discord as well, so if there's any questions you have, feel free to join the Discord and ask anything in there, or share any code or anything like that. We're all in there. Just checking the chat, no questions yet, which is good. Awesome.

So, let's… I'm going to open this inside of my code editor. I'm using Visual Studio Code, so we will open up there. And hopefully my microphone volume is good enough for you all. So, here we have the project inside of Visual Studio Code. And you can see here that we have this schema file, which is what's generated and scaffolded by Grafbase. Let's delete that. We don't need it; we're going to create our own API.

So, the first thing that I would like to do, actually, is write our first schema. And I'm going to talk about the basics of GraphQL here, and apologies if you're familiar with it, but I think it's important to just understand what's going on. Let's first define a very simple schema, and then I will talk through what's going on and how GraphQL works with this. So, if you've created any type of GraphQL API before, you've most likely seen the SDL-first approach, where you write a GraphQL schema, then you write your resolvers, then you run your server, and you're good to go. Now, with Grafbase, what we say is: you define your schema, we'll define the resolvers. People do like to use GraphQL for many different things, and I think using SDL, we can create this backend. And while there's a conversation happening about whether SDL should be used for a backend, or whether GraphQL should expose your database, I would like, just for this workshop today, to remove the thought of this SDL being your database. Think of this as your backend. Grafbase will soon give you many different features that allow you to hook in other systems.
So, it's not necessarily SDL to a database; it's SDL to your data and your backend. You'll be able to plug in various things, and we will give you access to them through this unified GraphQL API. If you've used something like Mongoose, where you write models for your documents, or something like Prisma, where you create a Prisma schema that gives you an ORM to talk to your database, you can think of this as similar to that. We're just using GraphQL as our schema language to generate this ORM or API, and we're treating the API as an ORM. So let's go ahead and define our first model.

3. GraphQL Schema and Local Server

Short description:

So with GraphQL, we can define the Post type and tag it with @model to give it full CRUD rights. This automatically generates an API for the Post type. We can add an id field and a title field to our posts. Running the Grafbase server locally gives us a GraphQL playground where we can interact with the automatically generated queries and mutations. We can use the post collection query to fetch a collection of posts and specify the number of posts to fetch, such as 10.

So with GraphQL we can say the type is Post, but instead of just leaving it like this, because that isn't going to do too much in the Grafbase land of things, what we will do is tag this with @model. This model directive tells the Grafbase command line, and when you push and deploy this, it tells it that this is a model: give it full CRUD rights. So you'll get an automatically generated API for your Post type, where you can create, read, update, and delete.

So let's create an id field for this, and this will be automatically managed. There are some other hidden fields, such as createdAt and updatedAt, so you don't need to add those, but you will need to add the id. And then let's just give our posts a title. Now, you'll notice here that this field is nullable. I can make it non-nullable, or required, by adding a bang at the end, but let's just leave that off for now. And then let's have a look at what this gives us going forward. So now we've got that inside of our project.
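At this point the schema file might look something like the following sketch. The @model directive comes from Grafbase, and createdAt and updatedAt are managed for us, so they don't appear here:

```graphql
# grafbase/schema.graphql — a minimal sketch of the first model
type Post @model {
  id: ID!       # managed automatically by Grafbase
  title: String # nullable for now; add ! to make it required
}
```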

I have this schema inside of my project. I'm inside of my Next.js application, and here I can run npm run dev, and that will start my Next.js server. What we want to do is actually run the Grafbase server, and we can use npx to do that. So again, we use grafbase, and I'm using the latest version. If you're following along, this will most likely be 0.10.0, so let's tag that, just in case anybody is watching on demand later. And then we can run dev, and now this will run a development server locally.

So if I open this now, we can see here… I'll close out some of my previous tabs, and we'll see here that we get a GraphQL playground. Now, this may look a little bit different if you're watching this later. We will soon update this to a new version of GraphiQL, so the layout and the colors might change, but most of it stays the same. If you're familiar with GraphQL, you're going to feel right at home. You have documentation, you have the schema, and we can already see that we've got some queries and mutations, and these are all automatically generated and managed by Grafbase. And this is all local as well: we've got a locally running server, so we can interact with all of these locally. And just checking the chat as well, feel free to ping any questions. Yep, nothing just yet that I see, neither in Discord nor Zoom, but feel free to ask if you have any.

So yeah, hopefully this is clear: we've got a locally running GraphQL server. Wow. All of this is automatically generated for us. So let's inspect these a bit further. Here we have a post query, and we can see that we have an argument, by, of the PostByInput type. This input type is something which we can use to fetch our posts. So let's just go back to using that inside of our window here. What we see on the left is where our GraphQL operation goes.
I haven't named this operation, but let's just give it a name of GetAllPosts. And then inside of here, I get a number of different queries based on my schema. Right now we have a Post type, so we get a query to fetch an individual post, and we get a query to fetch a collection. So let's use the collection query to fetch all of our posts. Now, we will need to specify how many posts we want to fetch first, and let's just use 10 here.
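Put together, the first query looks something like this sketch. The postCollection field and the Relay-style edges/node shape follow Grafbase's generated API, so treat the exact field names as assumptions:

```graphql
query GetAllPosts {
  postCollection(first: 10) {
    edges {
      cursor
      node {
        id
        title
        createdAt  # hidden field, managed automatically
        updatedAt  # hidden field, managed automatically
      }
    }
  }
}
```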

4. GraphQL Edges, Nodes, and Mutations

Short description:

Inside the edges and page info, we can get the current cursor and the node of type Post. We fetch the id, title, createdAt, and updatedAt fields. We can pass a variable for the first argument in the query, and fetch the first ten by providing the value. This is the basics of GraphQL queries. We can also perform mutations like postCreate, postDelete, and postUpdate. The postCreate mutation takes the title field as input.

And then inside of here, we've got some edges and page info. These, if you're not familiar, are more in line with the Relay specification of GraphQL, and we'll dive into that a little bit later, I think. But this just declares certain fields and information to help paginate your data.

So let's dive into each node of our edge. And here we can see we can get the current cursor, and then we can get the actual node, which is of the type Post. So we'll fetch the id, and then we can see that we have that title field. And then, like I mentioned before, we've got the createdAt and the updatedAt fields automatically added to the schema. So let's just fetch those for good measure.

I'm going to run this, and nothing is going to show, because we haven't mutated anything; we haven't added anything to our database. And the database backing this schema is all running locally in development. In production, we will manage that for you as well, and there are a few nice things we're working towards. So yeah, that's querying with GraphQL.

And here we can see that we've passed this number to the input argument that is first. Now, this argument, 10, we can actually swap out to be a variable in GraphQL. So let's just say this is first. And then back where we have our query name, inside of here we can declare a variable, and we'll call it first. Now we need to specify that the type of this variable matches where I'm using it. So here we can see, if we hover over first, that the first argument is an Int. So we can use the scalar Int, and that then follows through to the query. Now if you run this, we'll get an error: you didn't provide any parameter. So inside of the playground here, we can pass some query variables, and with auto-completion we can see we can fetch the first ten by providing that value. Now when I run it, that variable and query go along to the endpoint, to my locally running server, and it runs. Really cool.

So this is just GraphQL; we're just laying the groundwork of GraphQL here. This is a query. This is the name of the operation, and this is the operation type, which could be query, mutation, subscription, or whatever else your API has. And then these are the arguments for the operation, which we can then use inside our queries, which is really cool. Next, why don't we actually mutate something? Let's create something. So let's write mutation, and we'll make this an unnamed operation; it doesn't really matter for now.
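With the variable in place, the operation might look like this (same assumed postCollection field as before):

```graphql
query GetAllPosts($first: Int) {
  postCollection(first: $first) {
    edges {
      node {
        id
        title
      }
    }
  }
}
```

The playground's query variables panel would then carry something like { "first": 10 }, which travels to the endpoint alongside the query.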
But here we can see that we have postCreate, postDelete, and postUpdate as our mutations, and these are all automatically generated for us. So here we provide the input argument, and now we can see that we have this title field. Now, if we hover over the input, there we go, we see that we have the PostCreateInput type. And if we open up the documentation and search for PostCreateInput, we can see here that the title is a String. Now, we didn't make this required. So if we go back to the schema, update it by putting a bang on the end, and save, the schema change is detected.

5. GraphQL API Operations and Unique Fields

Short description:

When we make changes to the GraphQL API, the server automatically reloads. We can create, update, and delete posts using mutations. The API generates the required fields and manages them automatically. We can also define unique fields using directives in the schema. This ensures that specific fields have unique values.

That change was detected, and then the server was reloaded. If we refresh this and get a clean version of the documentation, and we search for that same PostCreateInput, we can now see that title is required. And that was all automatically generated for us through the CLI.

Now, if we don't provide that field, say we just leave it empty, and then inside of here we fetch the post from the response and grab the id, and why don't we add the createdAt and the title while we're at it, and we run that, well, we're going to get an error. Why? Because that field is now required. So let's add a title, and we'll call this hello react advanced. We now execute that, and there we go: we have one post created for us. It automatically has an id assigned, and it has that createdAt timestamp, which is cool. And then we've obviously got the title in here.
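The create mutation might look roughly like this. The postCreate field and its payload shape follow Grafbase's generated API, so the exact names are assumptions:

```graphql
mutation {
  postCreate(input: { title: "hello react advanced" }) {
    post {
      id        # assigned automatically
      title
      createdAt # timestamp managed for us
    }
  }
}
```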

Now, if we go back to our query and run it, you guessed it, we've got everything here. We can see that we've got the updatedAt, createdAt, title, and id, all automatically managed for us. So this is a GraphQL API running locally. Now let's take this a step further. Let's grab this id, and, you guessed it, we'll create a mutation to update that post. So here we'll specify the id, because we want to update a post by that id, and then we want to provide the input title. I'm going to say Hello Grafbase. Then I grab the title and the id from the response and run that. You should see here that we get that title, and then we get that id. Oops, there we go. So that's updates, and we have that data there, which is cool.

And then just to round things off and finish it all up, why don't we go ahead and, you guessed it, delete by that id. And you can see here my query looks a bit of a mess; I can use this button here to make it look pretty. Run this, and that will delete that post. Now, if we go back to get our posts, we've got nothing left, because we've successfully created something, updated it, and then deleted it. And these are the core foundations of pretty much any API today. This is just a very simple CRUD API, and you can add all sorts to this to make it your own for whatever application you need.
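The update and delete steps might look like the following sketch. The by argument, the payload shapes, and the deletedId field follow Grafbase's generated API, so treat them as assumptions; the id value is a placeholder for the one returned by the create mutation:

```graphql
mutation UpdatePost {
  postUpdate(by: { id: "<post-id>" }, input: { title: "Hello Grafbase" }) {
    post {
      id
      title
    }
  }
}

mutation DeletePost {
  postDelete(by: { id: "<post-id>" }) {
    deletedId
  }
}
```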

So, let's have a look at a few more things. I'm going to open the Grafbase documentation, and inside of here, if we scroll down, we'll head to the schema reference. And here we can see there are a few more things. So here we have a typical Post, the same as what we've got going on in our project, but you'll notice that here we have a slug for that post, which is a String, and it's using the special @unique directive, similar to the @model directive we used before. So if we go back to our code, inside of here we specify that the slug is not nullable, but this time we also give it the @unique directive. Now the schema change has been detected, and we have a post with a slug that is unique. So what this allows me to do, if we go back here to postCreate, you'll see we've got an error going on, and if I try to post this, you'll see that the slug is required.

6. Slug Uniqueness and GraphQL Directives

Short description:

We can specify a unique slug for the post and the GraphQL directive will handle the logic of ensuring its uniqueness. No additional code is required. GraphBase takes care of it all.

Now, I must provide that slug. So we'll call this Hello React Advanced, and then we'll submit that mutation, and we get back a successful id. Now if I come back here and update my query to include that slug, and I run it, we can now see that we have that Hello React Advanced slug. Now, because it's unique, if we run that mutation again, we will get an error that this cannot be created, because the slug is unique and it's already been taken. So that is cool. We didn't have to do any kind of complex logic there; we just specified a GraphQL directive, and it's unique. There's no code that you need to write in order to implement that logic; we take care of it all.
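After adding the slug, the model might look like this sketch (same assumed Grafbase directives as before):

```graphql
type Post @model {
  id: ID!
  title: String!
  slug: String! @unique  # Grafbase enforces uniqueness; no custom logic needed
}
```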

7. Deploying to Grafbase and GitHub

Short description:

We can deploy what we've already created by signing in to Grafbase and creating an organization. This is important for collaboration. We can invite other users to our organization and assign them as owners. If you're not following along, you can start with one of the provided templates. We'll deploy our project to GitHub and import it.

So already, we've not had to create a GraphQL server, create a serverless endpoint, write resolvers for all this logic, or connect a database at the edge. All of this is happening locally. And then we can take this a step further and deploy it: we can take that schema and deploy it to the edge, and all of that will be automatically managed for you. So enough of the sales pitch; I think we'll go through deploying what we've got here already. And we can do that by signing in to Grafbase. So I'm going to go ahead and sign into my account, and here I have a list of projects. We also have the ability here to create an organization, and this is what I want to do, because I think this is really important for anyone following along who wants to work on a project with others. So let's create an organization, and we'll just call this React Advanced. We'll go ahead and create this, and hopefully my plan allows me to do that. And here we've got this new organization. So at this point, if you're following along and you want to invite someone else to see what you're doing, you can head into the settings and invite other users. And you can assign those users to be owners as well, so they've got full privileges to make changes to your project, such as updating environment variables, which is something we'll come on to shortly. Obviously, if you're not following along with the workshop, you may want to just get started with any one of these templates here, and we'll provide a schema for you. So let's click on this one, and you can see that we have a to-do list model, we've got our GitHub account connected, and we can specify the repository where all of that schema will be stored. Now, we've got a project locally, so we want to push this to GitHub in our own repository and then import it, so we'll do that next. So let's go back to our project.

8. Adding Fields and Importing into Grafbase

Short description:

There are many fields we can add, such as scalars for URL and email. We'll make a pull request to update the schema. We'll commit everything to the main branch, including the initial Grafbase schema. We'll create a repository on GitHub called react-advanced-workshop, make it open source, and push our entire directory. The schema is now in version control, allowing us to update the frontend and backend simultaneously. We can work with the schema locally, submit a pull request, and preview changes with a unique database. Let's import our project into Grafbase.

There are so many other fields that we could add here, and so many different things like scalars that we could use. If you wanted to store a URL, you could use the URL scalar. If you wanted to capture somebody's email, you could use the Email scalar, and there's validation applied to that, so you don't have to write any custom logic yourself. But we'll come to that when we make a pull request to update the schema.
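As a quick illustration, fields using those scalars might look like this. The website and authorEmail fields here are hypothetical, added purely for illustration; Email and URL are Grafbase's custom scalars:

```graphql
type Post @model {
  id: ID!
  title: String!
  website: URL        # hypothetical field using the URL scalar
  authorEmail: Email  # hypothetical field; format validation comes for free
}
```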

So we've got this here. I'm going to stop the server, and let's just commit absolutely everything to the main branch, because that's how we roll. Initial Grafbase schema, that will be our commit message, and we will… whoops… we will commit that.

So here, everything is clean. We've got our schema, we've got an initial Next.js commit, we've got a Next.js project. And also, actually, because I committed everything, I may have accidentally committed the .grafbase folder. Maybe we don't need that, but we can remove it later if it's not ignored globally. Small thing; not too important, I guess.

So yeah, let's open up Chrome and head to GitHub, github.com/new I think it is. Yep. And then we'll create a repository called react-advanced-workshop. Feel free to call this whatever you like, and we'll go from there. And just checking Discord, there is a message, I believe. All right. So yeah, create this project here in GitHub. It can be public or private, whatever your preference is. Let's make it open source, then create that repository. This is a screen that we're probably all familiar with from creating projects and side projects and whatever else. So this is what we need. If we head back to our code editor, commit that code, and push, this should push our entire directory to GitHub. Now, I hope I removed that folder. Yes, so that .grafbase folder was actually ignored, and I think that's because there's a .gitignore inside the directory that ignores everything, which is cool. And then here we have our schema, and yeah, we have this Post type with a unique slug and a title field. Nothing too amazing just yet, right? This is GraphQL SDL, and locally we had a database running, which was just fantastic, by the way. But now we've got this on GitHub; the schema is in version control. So if I'm updating my frontend and it relies on a new field in the backend, I can add something to my frontend and something to my schema at the same time, work with it locally in its own environment, and then submit a pull request. I can preview that change with its own unique database, which is really, really cool. So let's have a look at importing our project into Grafbase.

9. Importing and Deploying to Grafbase

Short description:

Let's import this project into Grafbase and deploy it to the edge as a GraphQL API backend. We can access the production GraphQL API using the deployed URL. We can view the branches and endpoints in the production branch. The playground is similar to what we had locally. We can create a post and fetch all posts from the edge. Everything is deployed and persisted. We can now create, read, update, and delete data.

And here we go. So let's import this project into Grafbase. We call this react-advanced-workshop, and it's inside of my GitHub. So that's just refetching all my projects; my net is a bit slow, so bear with me. We'll import that project, and then here we can specify the project name, and the production branch is main. And, oh, Renovate has already kicked in; it's already telling me to pin my dependencies.

And then here we've got that repository, and on the right you can see Grafbase has successfully detected the schema, and that's what it looks like. Now we click deploy, and this project has been created. This is now deploying a GraphQL API backend to the edge, based on that SDL, and by the time I finish speaking this should be deployed and everyone should be able to access it. And there we go: that deployed in probably 7 to 8 seconds. We have this endpoint, so if we take this endpoint, we can copy the URL. That is now your production GraphQL API at the edge, using that same schema that we have been using locally, which is cool.

So if we head on over to branches, we can see here we have a main branch with our initial schema. If we click on that, we can get a preview of exactly what we did and how long it took. And if you invite others to your project, you can see who did what and when, which is cool. Main is the production branch, and you can configure the production branch as well, which is really cool. So let's dive into the production branch. We can see here we've got some endpoints for our project: we've got the main branch endpoint, and then we've got the actual deployment branch endpoint. The playground here is similar to what we had locally. So, everything from before… let's go ahead and copy this query into here, we'll copy the post mutation into here, and delete that one. Let's just give this a name so it's easy: PostCreate. And now we can run that query to fetch everything. I didn't provide the variables, so let's do that here, and let's run that same query. We get all of the posts from the edge, and we have nothing, so let's create something. Boom! We have that post created in our GraphQL API, in our backend, at the edge on Grafbase. And all of this is deployed and persisted for you. Now, if we go back and get all posts, we can get everything here. So we've got all of our posts coming from our database. And again, just checking for any messages; it seems everybody is following along. If you do have any questions, or you want to learn anything more specific, or dive into anything, just let me know, and we'll take it from there. Okay, so this is now deployed. We're able to create, read, update, and delete data, and we've successfully deployed to the edge.

10. Database History and Request Information

Short description:

We have a history of operations and deployments, and we can track changes to the schema. We also have information about the number of requests made and their latency. Ignore the upcoming hobby tier and pro plan. We can adjust the request settings on a daily or weekly basis.

So if we exit the full screen here, again, similar to what we had locally, everything here is created for you automatically. We've got some history of the previous operations we ran, and if we click Schema, we can see the schema that we had locally. And then in the list of deployments, we can see everything that we ever deployed. So if you make any changes to your schema and it's redeployed, you get a full history of everything that happened, which is really cool. And then back on our database, we've got some information about the number of requests that we made. Ignore this here; we will be introducing a hobby tier and a pro plan soon, and we're working through what those will look like. But here we can see we've got some requests and the latency for those requests, and we can view that on a daily or weekly basis, and per branch if I had any. So yeah.

11. Authorization and Schema Changes

Short description:

Let's head to API keys to interface with your API using the endpoint, passing along the API key as a header. You can add environment variables in the dashboard for values you don't want to commit. You can also connect using JavaScript or curl to see a traditional request. Let's create a branch in Visual Studio Code to add a user model with ID, username, email, and avatar URL fields. We can update the post model to include a content field and create a relationship between user and post. Running npx graphbase dev will run the schema locally. The Playground now includes user and user collection queries, and we can create a new user with a username, email, and avatar URL.

Authorization, let's head on over to API keys. And these are the API keys you use to interface with your API using that endpoint and you pass along that API key as a header. And we'll touch on that very shortly.

Environment variables — again, this is something you can reference from your schema. So if you've got any specific environment variables you don't want to commit, you can add those here and they will be injected when you deploy, and you can change some other information about your project in here as well.

So enough of the tour of the dashboard. One last thing is if you kind of want to just look, see what this looks like to connect using JavaScript, curl, we'll add some more examples to share. But this kind of gives you a quick preview of what a traditional request looks like. So if I change this to curl and we copy this and we open the command line, when we paste that in, we'll see here that this will make a request and we get a response back to our curl request here. So let's close it out. And now let's actually have a look at making a pull request to change some of that schema, and then we can see what that looks like. So yeah, let's open Visual Studio Code. Let's create a branch, maybe. So we'll git switch and we'll say this is to add another model: add user model. So I'll add that user model and here we'll say type user is a model, it has an ID, and it has a username of the type String which is unique, and an email which is of the Email scalar. That is... let's not make that required. Let's just leave that as nullable. And then if we open the documentation we can have a look at all of the other types of directives that there are. So here we can see there is unique and then we have default, and we'll very soon introduce the length directive so you can specify that this can only have so many in an array, or the integer must be a minimum of five, no more than ten, that kind of thing. Yeah, and then the schema, here are the different types that we have. So you can see here we've got Int, Float, Boolean, which are kind of the standards built into GraphQL. You'll most likely find ID, String, Int, Float, and Boolean in any GraphQL representation. And then we've kind of added some custom scalars on top of that. So things like IP address, email, datetime, timestamp, URL, et cetera. All of that is in there. So an avatar URL for a user is maybe worth adding, and we'll add that here.
And then we've got JSON as well if we wanted to store JSON, although this is kind of a last resort, in my opinion. If you're using GraphQL to be strongly typed, don't go with JSON — you don't get any type safety, and it's a bit of a mess if you have to kind of manually update types and stuff yourself. So we've got this new user model. It has a few fields. And now, just for the sake of demonstration purposes, we'll update post here to include a field for content. And we could go as far as to say that the author is of the type user and a user has many posts. So here we have created a relationship. And now if I run, I'll clear this out, if I run npx graphbase. Again, we'll use this version. So if you're following along later, it's 0.10.0, then we'll run dev. And this will run this schema locally, once again. And now if we go back to the Playground and we head to all posts, we run this, we should get the same data that we had before. But now if we open the documentation, you'll see that we have our post and post collection queries, but we've now got a user and a user collection query. If I open up user, and we inspect the type here, we can see that we have a field, posts. And this is the post connection from our post model, so we're able to link the two. So if I go into here and we create a new user, user create, and here you'll want to specify a username. That username could be mine, notrab — that's Barton backwards. And then maybe this will specify an email. Here I will make this incorrect, okay? We'll just say we've misspelled the email domain. That's not gonna be valid. I could have an avatar URL as well. And here again, I will specify my avatar as slash notrab.png, but that will be valid.
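Putting that together, the schema described above might look something like this sketch — the directive and scalar names (`@model`, `@unique`, `Email`, `URL`) follow Grafbase's SDL at the time of the workshop and may have changed since:

```graphql
type User @model {
  username: String! @unique
  email: Email
  avatarUrl: URL
  posts: [Post]
}

type Post @model {
  title: String!
  content: String
  author: User
}
```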

12. Creating Nested Posts and Deploying to Edge

Short description:

We can create a nested post or link a post by passing the ID. After creating a user, we can fetch the user by their unique username and retrieve their email and related posts. We can deploy the code to the edge using a unique branch, which provides an isolated environment with its own data source. This allows us to test and make changes without affecting the production environment. GraphBase automatically detects new branches and deploys them to the edge, providing all the features of the production environment. The browser also provides a visual diff to see the changes made to the schema.

And then here we can see that we have posts. Now with this, what we can do is we can provide an array. And with one object in here we can create a nested post. Or we can link a post. And by linking, we can pass that ID over here, to this post. So let's create this. Now if we create that user, and we try to get the ID and username back, because that email is incorrect here — because we've used the email scalar — this will throw an error that the email is invalid. And obviously you can show and customize that message to your users.
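The create-or-link choice described above might look like this in a `userCreate` mutation — treat the exact input shape as an assumption based on the transcript:

```graphql
mutation UserCreate {
  userCreate(
    input: {
      username: "notrab"
      posts: [
        { create: { title: "My first post" } } # create a nested post
        { link: "existing_post_id" }           # or link an existing post by ID
      ]
    }
  ) {
    user {
      id
      username
    }
  }
}
```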

Then here, if we update this to be correct, and we create that, you can now see that this user has been successfully created. Now what I would like to do is take the username, and because in the schema we defined this username to be unique, we can actually fetch that user by that unique field. So let's go into here and we'll make a new query. And because it's a query we can omit the keyword, the operation type query, and we can run a singular query here, user. And instead of passing just the ID, we can now pass an object to the argument by, and we can either provide the ID or the username. So you can provide one of these. So I will pass in notrab as my username, and then we'll get back my email. Let's also fetch the posts, if I have any, and we'll grab the edges of those, and we'll grab the node, and we'll grab the title of that post. And because it's fetching a related array of data, we'll need to pass how many we want to fetch in the pagination. So let's make this five or something. Now if we run this query, this will fetch that user, Jamie, by that unique username that I passed. And also the related posts. So this is a relation between my user and the post model that we created previously. All automatically working to link those nodes together.
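The query built up above — fetching a user by the unique `username` along with the first five related posts — would look roughly like this sketch:

```graphql
query {
  user(by: { username: "notrab" }) {
    email
    posts(first: 5) {
      edges {
        node {
          title
        }
      }
    }
  }
}
```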

Now, we'll take this code, and we'll deploy it to the edge, but we don't want to deploy it to production because we could break everything. We don't want to do that. Instead, we've created the branch already, add user model, and again, we'll just add that single file — I think we've only changed that schema, that GraphQL file. We'll commit this file, and we'll say: add user model and post relation. We'll save this and push it to our git branch on the origin. And then, once that's pushed there, we can then head over to create a pull request.

So if we go back, and we open our GitHub repo here, we can see that less than a minute ago we pushed a new branch. Now, I haven't created a pull request just yet, but imagine I've got lots of code. If we go back to GraphBase and we go to our project, and we go to our branches, we can see that GraphBase has automatically detected that new branch, and it's already deployed it to the edge. So we have a unique branch. If we click on it, we can see we have a new endpoint for that branch. So this is really cool. This is now completely isolated. It has its own data behind it, backed by its own data source. You can save, update, delete, whatever. Everything happens in that isolated environment. And it's all at the edge, and you get all of the same features that you do with the production one. And then, as something that leans into the developer experience, you get the diff here in the browser. So maybe this wasn't you — maybe somebody in your team sent you a link to this schema page — and you can see: ah, they've added content. They've added author, which is of this type user. Oh, OK. There's a type user. So now we can kind of get a clear idea of what's been added. And if we were to remove, say, the title of a post, then, you know, you guessed it, you're going to get a big red line through that field. It's missing.

13. Branching, Merging, and Deploying

Short description:

We can branch and work on new features completely isolated and locally without needing to deploy anything. When ready, we can create a pull request on GitHub and merge it. The new schema will be deployed and we can fetch data with the updated schema.

It's gone. It's deleted. There are ways to kind of deal with migrations and things, but let's just keep this very high level for this talk today.

So yeah, we've got that. We've kind of got this diff. Now, just like we could before, we can connect to this with its own API key and we can get the data. And this is, like I say, all fully isolated. So if we go to the playground now and we go back to this query here, we paste this query in, we don't forget this time to add the variables and we say we'll get the first ten. I run this. I get nothing. I don't get anything here because why? Because I'm on a different branch. The data is completely isolated. I go back to main. I run that same thing and I get some posts. That's untouched. So that's kind of branching. You know, imagine you're working on a new feature of your API. You're able to create branches for new features and work with them completely isolated and locally, without needing to deploy anything — it all runs locally. And when you're ready to share with your team, you can share those environment variables with your Netlify, Vercel, whatever it may be, and each of those can be backed with its own database, which is really cool. So let's go back to here and we'll create that pull request in GitHub. And now we can see that Graphbase has actually commented on here that it's been successfully deployed, and this is where you can find more info, which is what we were looking at here. And again, we can see all the deployments and we can see this new schema. Awesome. Now we've got that pull request, we've got some new schema. You can already imagine what we're gonna do, and that is to merge that pull request. So here I'm on the main branch. It's still using the initial Graphbase schema, but if we go back to the pull request, our team gives the green light, and we do that merge. Then if we come back here, we should see that this is deploying and it's deployed. That took no time at all.
We can see here that the commit here is that pull request, and then if I go to branches, we have that old branch there. Inside of here, we've got those two, we've got the latest commit message. But if we go to the schema now, we get this here, which is cool, right? We've got our post model, we've got a relationship and our type here. Now inside of the Playground, we come here, we run that query. We still get that data from before, but now we can actually fetch the author ID. It's not gonna return anything because we only linked that user in development, but you can do the same in production as well.

14. Deploying Schema and Setting Up Authentication

Short description:

We've covered a lot, from creating a schema locally to deploying it to the Edge with GitHub. Next, we'll explore authentication and connecting an authentication provider. Take a short break and create an account with Clerk. Then, create a new application called React Advanced Workshop. We'll use Clerk with GraphBase to protect access to our API. Install Clerk in our Next.js project and set the environment variables. Follow the remaining steps in the documentation.

So yeah, I'm gonna pause for a few minutes there because I think that we've covered a lot. We've created a schema locally and run it locally, and we've deployed that to the Edge with GitHub. We've not had to do anything custom, right? We just kind of create a repository, we've linked it into Graphbase with a one-click deploy, and we've got this amazing GraphQL API deployed at the Edge that we're able to hook into our existing React applications and whatever else.

So yeah, pretty, pretty cool. So next, I think, what we'll look at — and there are many other things; I would encourage you to look at the documentation here to see what's available — there is authentication as well. We'll probably look at that today if we've got time for it: connecting an authentication provider such as an OpenID Connect provider, where you can sign in with something and then get access to that API only if you're signed in. So you can specify these rules here.

So yeah, I'm going to take a short break and I would encourage you to go to Clerk and create an account. And then once you've signed in, you'll want to create a new application and we'll call this React Advanced Workshop. So feel free to ask any questions if you have any in the chat and we can pick it up from there. With that, I look forward to seeing you soon. And I am on Discord as well. I hope everyone has signed up in time. I think you can log in with GitHub, so that might make it quicker. We've already gone through that step to deploy our schema. And Clerk is user management software. It's an API, and they give you components for signing in, logging out, all that fun stuff. We want to actually use Clerk with GraphBase to protect access to our API. That's what we'll do. We will use email and I'm going to enable Google. And we'll stick with GitHub, right? Everyone uses GitHub. Or GitLab. Or Bitbucket. Yeah, we all use everything these days. So we've got access to something to log in. Then I'm going to create this project. And then now, confetti. I love confetti. It's great success here. We've got a Next.js guide that we can follow. The application that we created is using Next 13, but I don't think we will use the app directory. I don't want to run into any errors on this workshop, but Next 13 is very new and support is a bit wobbly for many libraries. But yeah, we'll use it as-is, I guess. Yeah, cool. Let's install Clerk in our project. We'll copy this command here. We'll go to our Next project. We'll need to switch back to main and we'll need to pull those changes. And here is my Next.js project. I'm now going to install Clerk. This is installing everything we need to get going with Clerk in our Next app. So, once that's installed... Set environment variables. Install the Clerk Provider. I kind of want to try this, but I just know we're going to run into trouble. Yeah, let's not try it because I think we'll run into trouble.
Let's go down here and follow everything else. So we have this .env.local file. Inside the root of my project, we'll create that file, and then we'll paste this in here.

15. Setting Up Click Provider and Creating Sign-Up UI

Short description:

We need the Clerk API key and the frontend API key. Configure the Clerk provider and start the app. Create a sign-up UI using Clerk's pre-built component. Add the sign-up page to our project. Customize the appearance using the appearance object. Use the sign-in component in the same way. No errors in the terminal.

Now, I don't need the JWT key right now. We will need the Clerk API key and the frontend API key from my project. So inside of here, we'll go to API keys. We'll grab that frontend key. And we need to prefix this, I think, with HTTPS there because that's my API endpoint. And then the backend key, we will copy that as well. Now, I don't want to show it on stream. You obviously don't want to commit these to GitHub or whatever, so make sure that inside of your .gitignore, you are ignoring those env files because we don't want to deploy those. It's not good. Awesome. I believe that's all we need to get Clerk going. I think that's all we need. I think it is. Maybe this — we don't actually need that prefix there.
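The resulting env file might look like the sketch below — the variable names here are assumptions based on Clerk's Next.js quickstart at the time, so copy the exact names and values from your own Clerk dashboard:

```shell
# .env.local — keep this file out of git via .gitignore
NEXT_PUBLIC_CLERK_FRONTEND_API=https://clerk.your-app.example
CLERK_API_KEY=sk_test_...
```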

Now, we could configure the middleware, but we don't need to do that; we do need to configure the Clerk provider. So, I'm going to copy this and then back inside of my code, I will go to my app.tsx because we're using the pages setup, not the app directory layout stuff — let's not tempt the demo gods. We could run into some issues. So, now we can start the app, I think. If we're all good, we can start the app. Everything should just work. Jeff says: awesome, no errors. Cool, open the Next app, yes. So, this is running my Next app locally and we've got Clerk instantiated.
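Wrapping the app with the provider might look like this minimal sketch of `_app.tsx` — assuming the pages router and the `ClerkProvider` export from `@clerk/nextjs`, as in Clerk's quickstart at the time:

```tsx
import type { AppProps } from "next/app";
import { ClerkProvider } from "@clerk/nextjs";

// Wrap every page with Clerk so components further down can access the session
export default function App({ Component, pageProps }: AppProps) {
  return (
    <ClerkProvider {...pageProps}>
      <Component {...pageProps} />
    </ClerkProvider>
  );
}
```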

Now, what we'll do is we actually want to create a sign-up UI. Now, one of the great things that Clerk provides is this pre-built component for your users. So, there's no need to re-implement all of the same logic that we've written for years again. We don't have to do that anymore. We can use Clerk, we can use Auth0, we can use whatever we want, and we can have these pre-hosted components, in this case, or pages on other providers, that we can use to sign in and sign up our users. And we haven't got to deal with that logic, which saves us time. We can build our side projects much faster. So, let's actually go ahead now and add this to our project. So, if we go back, and we go to pages, I believe this is the sign-up page. And then we want to use double brackets, dot dot dot index. And this is kind of a catch-all. Like, get everything, because the sign-up page can have callbacks and everything else. So, that file needs to be in that specific location to handle that. Awesome. And there's many other things we can do. Like, you're probably thinking at this point, well, this UI looks different to my design library and whatever. Well, there's an appearance object, so you can control border radius, border color, background color, padding, text, font, whatever. All of the control is in there, which is cool. And then there's some other stuff that we really don't need to define or touch just yet. So, let's use the sign-in component and we'll do that exact same flow. Inside of my root pages folder, I will create the folder sign-in and inside of there, we'll create that [[...index]].tsx, or .jsx if you're not using TypeScript. But I'd recommend using TypeScript. We have sign-in here. This is the sign-in page. If we open the terminal, we have no errors. We should be good. And I go back to my application.
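The catch-all page might look like this sketch — assuming Clerk's `<SignUp />` component and the prop names from its docs at the time:

```tsx
// pages/sign-up/[[...index]].tsx — catch-all route so Clerk can handle callbacks
import { SignUp } from "@clerk/nextjs";

const SignUpPage = () => (
  <SignUp path="/sign-up" routing="path" signInUrl="/sign-in" />
);

export default SignUpPage;
```

The sign-in page follows the same pattern at `pages/sign-in/[[...index]].tsx` with `<SignIn />`.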

16. Authentication and Account Management

Short description:

In development with Clerk, you don't need to provide your GitHub client ID or secret. Clerk takes care of authentication. When signed in, the header component displays the user's avatar, name, and username. The user can sign out from the dropdown menu. Clerk imports the profile picture from the authentication provider, in this case, GitHub. Clicking 'Manage account' opens a modal to manage account details.

Now, in here, I think I should be able to go to slash sign-up. And that will show that component. So that component is there. It's detected the name of my application because I provided my API keys. And it's also detected the enabled third-party single sign-on providers. So I said yes to Google, I said yes to email and password. I also said yes to GitHub. So I can now click GitHub. That will sign me up and redirect me back signed in as a Clerk user. And you're probably thinking, if you've done anything like this before, why didn't I give my GitHub client ID or secret? Well, in development with Clerk, you don't need to do that. They will take care of that. And you authenticate with the Clerk app. Now in production, you can override that and add your own. You won't be able to use theirs; you'll need to provide your own details. But I just think that's a really, really nice development experience. I think a lot more people will be interested in that, and it's a lot less for you to set up, which is really cool. So let's continue with GitHub. Fingers crossed there are no errors. Doesn't appear to be. And now we're back at the index page, right? But there's nothing here to tell me who I am. You know, that I'm signed in or anything. So, yeah, let's go back to the documentation. So we can see here that we've got some other things, but we don't need to do this. How many times have we had to create our avatar with our name, with our email in a dropdown, a link to sign out and manage everything? I am so happy we don't have to do this anymore. We can do this by, you know, just including this here. So I'm going to go ahead and copy this code here for my header. Right here. Then back inside of my app.tsx, we will just create that header here for now. You know, did I put return? No, I didn't. We'll just create this function inside of here. We won't go to the extent of, you know, creating a component and a file and, you know, all of that.
We really don't need to for the purposes of this. We'll then invoke that header. And then notice here that, well, we've got SignedIn. We've got UserButton, SignedOut, SignInButton. Well, these come from Clerk. So let's import SignedIn, UserButton — the error's gone — SignedOut, and lastly, SignInButton. So we've got those four components imported. And here we can see, when we're signed in, we mount the UserButton component, and when we're signed out, we mount the SignInButton. I'm going to save that, and because this is in the app.tsx file, if I go back to the application that I have open, this should show my avatar here. And when I click it, I get a dropdown to sign me out. It includes my name and username and my profile picture, which Clerk detected and imported from my authentication provider, which was GitHub. This is cool, this is really cool. I've signed in, and I don't have to kind of create those components anymore. It's all just managed for us, which is cool. Right. So let's click this, and let's click Manage account, and then we get this modal that pops up where I can manage my account details.
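A minimal sketch of that header, using the four Clerk components named above:

```tsx
import { SignedIn, SignedOut, SignInButton, UserButton } from "@clerk/nextjs";

// When signed in, show the avatar dropdown; otherwise show a sign-in button
const Header = () => (
  <header>
    <SignedIn>
      <UserButton />
    </SignedIn>
    <SignedOut>
      <SignInButton />
    </SignedOut>
  </header>
);
```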

17. Configuring Clerk and Schema Authentication

Short description:

You can configure more things in Clerk to show more options here. Let's move on to getting this to work with GraphBase. We'll create a new JWT template in Clerk for the Graphbase backend. Every token created when you authenticate will include these claims in the JWT. Let's list out all of our posts from our backend, but only if we're authenticated. We'll configure the schema to require authentication using an OpenID Connect provider and specify the rules to allow only private (signed-in) users.

You can configure more things in Clerk to show more options here, and then I can manage my email address, I can add new emails. We've not had to manage any of this. The logic alone for all of this would take us days to do. All of this is just done with a single component. Which is cool. And yeah. There we go. Right then.

So let's move on to getting this to work with GraphBase. Now at this point... At this point we can make API requests to our backend, from our frontend, without being authenticated. It's completely open as long as you provide an API key. If you're in development you don't need that API key. But what we would like to do is lock that schema down so it only works when you are logged in and you pass an authorization header. So let's do that. Back inside of Clerk we will go to JWT templates and we'll create a new JWT template. Inside of here we want to select Graphbase because that's the backend we are using. We'll click that. You can give this token any name you like. Any name at all, it doesn't matter. You'll need to know it later, but you can name this whatever you want. I'm going to leave it as Graphbase. Then I can specify inside of here the claims for my Graphbase backend to acknowledge when authenticating requests. Now, let's just leave this empty for now. And we'll come back and we'll update this. Once that's added, every token that is created when you authenticate will include these claims in the JWT, which is really cool and powerful. And inside of here, you can specify fields based on the current user, and we'll come to that a bit later on, I think. Right now, we'll just use this for pure authentication purposes.

Now, we've kind of signed in and we've got everything. There's a few more helpers we've got that we can use on our front end. But I think if we go back to our application, on this page, let's list out all of our posts from our back-end, but only list them if we're authenticated. So, if we go back to our code and we go to our schema, inside of here, we will need to configure a few things to get going. So, we'll need to provide the schema here, and we'll say that inside of here, the query is of type Query. And then we'll provide a special directive called auth. Now, if we go to the documentation for Graphbase, we can see here that we can provide the providers and the rules. So, let's copy this and go back. And inside of here, we'll specify that the type of one provider is OIDC, OpenID Connect, and the issuer is this variable here. Now, in this case, we're using it as an environment variable, the same as what we explored inside of the dashboard earlier. But now, because we're running locally, we don't have access to that account online. We can't get those variables, because, why would you, we could be working on a new branch. We want a way to override this. We don't want to commit that URL to production. So instead, what we'll do, inside of our grafbase folder, we will create our environment variable file. And here we'll provide the full absolute URL, and we'll grab the front-end API endpoint from Clerk, because they are our OpenID Connect provider. And we'll pass that in there, and we'll save. Now, this schema is able to detect the provider issuer value, and here, we've specified that one rule is to allow only private users. So only users who are logged in can access this schema. Now, this is a global setting, but at some point very, very soon, you will be able to configure this at a model level. So you can specify you have to be logged in to view this model, et cetera, and customize what groups of users can log in.
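The configuration described above might look like this sketch — the directive shape follows Grafbase's docs at the time, and `ISSUER_URL` is an assumed variable name you'd define in an env file inside the grafbase folder, pointing at your Clerk frontend API URL:

```graphql
schema
  @auth(
    providers: [{ type: oidc, issuer: "{{ env.ISSUER_URL }}" }]
    rules: [{ allow: private }]
  ) {
  query: Query
}
```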
So let's go back to the documentation, and let's just explore something.

18. Authentication and Authorization

Short description:

You must send OIDC JWT tokens using the authorization header. Your provider should adhere to the OpenID Connect discovery specification. Rules are global and apply to everything. You can specify that only administrators can delete stuff. Configuration details are in the documentation. You can allow anonymous users and specify rules for signed-in users. User group-based authentication is also possible. This is the documentation on authentication.

So you must send OIDC JWT tokens using the authorization header in our request, which we'll come to a bit later. And your provider must adhere to the OpenID Connect discovery specification, blah, blah, blah. So if you're used to using OpenID Connect and you're using a provider, more than likely they're following that OpenID discovery specification. So not too much to worry about.

Clerk, Auth0 — they get this. We used an environment variable for the issuer. Then we've got some rules, and like I mentioned, rules are global, and they apply to everything that you do. You can specify that only administrators can delete stuff, right? That's cool. You can control all of that. All of the configuration you need is in this documentation. We don't have time today to explore all of that, but that's what we have. And feel free to ask any questions if you've got any about these. And here we can allow anonymous users. Like I said before, provide an API key, you're allowed in. Then the signed-in users, this is how you specify that. And then user group-based, here we can provide the rule, allow groups, and then we can provide an array of groups. And then we can see with that that the token would look a little bit like this, where the group is admin. Cool. And, yep, you can also configure what operations are allowed. So, in that example, signed-in users can only get things; they can't create, update, delete, or do many. But, yeah, we're not going to dive into the group-based authentication. It's not really necessary for this, I think. But I will show you very quickly in Clerk how you can tie those two things up. So you can specify in Clerk what users are admins or not, and then that can affect how you query stuff. Yeah.
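A hypothetical sketch of the group-based variant mentioned above — only members of an `admin` group would be allowed, and the issued JWT would then need to carry a claim along the lines of `"groups": ["admin"]`:

```graphql
schema
  @auth(
    providers: [{ type: oidc, issuer: "{{ env.ISSUER_URL }}" }]
    rules: [{ allow: groups, groups: ["admin"] }]
  ) {
  query: Query
}
```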

19. Fetching Posts with Apollo Client

Short description:

We're going to use Apollo Client to fetch our posts. You can find the code in the guide. This is typical application stuff when dealing with a backend.

So we've got this application, we have our user details here, and now we want to fetch our posts. And now I'm going to be using a library that we're probably all familiar with, if you're familiar with GraphQL, and that is Apollo Client. So let's go ahead and install Apollo Client, and we'll install GraphQL. While those are installing, if you want the code for what I'm about to do, you can find this on our guides. I'll post that in the chat as well, for anyone interested, and I'll post that in Discord too. Let's see if that works. Awesome. This is great. All of the code from what I'm about to do next can actually be found in this guide. I think this is really useful if we go through this. Just to show you the possibilities, this is typical application stuff you have to deal with when dealing with a backend. There'll be other content and whatever, and we can get to that. So yeah, we've got that there.

20. Setting Up Apollo Client and Auth Middleware

Short description:

In our app.tsx, we import ApolloClient and set it up. We import HttpLink to specify the backend location. We create a custom provider to get our token from Clerk before making a request. We import ApolloClient, ApolloProvider, and InMemoryCache. We update our application with these imports and create a client with ApolloClient, the link, and the cache. We import setContext to set the context of our Apollo link. We add auth middleware to grab the headers, merge them, and pass the token to our Apollo link. We provide the auth middleware in our link chain. We're almost there, on the home stretch.

So now let's head on over to Visual Studio Code and we'll clear that and we'll run everything here. Now inside of our app.tsx, we're going to import ApolloClient and we're going to set that up. So now let's go ahead and we'll import a few things. The first thing that we'll import from Apollo will be HttpLink. And this is what we need in order to tell Apollo where our backend is. Now we could use an environment variable and I would highly recommend that you do so. But for the purposes of this, we'll just specify that this is localhost:4000/graphql. You might wanna put that inside of here and make it something like NEXT_PUBLIC_APOLLO_URI, then make that the localhost URL, and then you can change that in production. Let's just use it if we're gonna show it, I guess. Where are we, in here? process.env and then we'll pass that there. So we've got this HttpLink. And not too important right now, it's not gonna do anything. We actually need to create a custom provider, and I'll explain why we need to do that — it's mainly for the rules of React and Hooks — but we need to call a function from Clerk to get our token. And what Clerk do is they have these short-lived tokens. So right before we make a request to our GraphQL endpoint, we need to call Clerk to get our token, then we can pass that along. So I'm gonna import a few more things here. So we'll import PropsWithChildren from React. Then we'll import useMemo from React, and then we'll import ApolloClient, ApolloProvider, and InMemoryCache from Apollo Client. So now we've got ApolloClient, ApolloProvider, and InMemoryCache. We can then use those imports to update the file and update our application down here. So if we go back to the guide, and I'd highly recommend that you follow along using the guide, let's copy and paste what we have here. And we'll paste this here. And now we can see we'll also need another import as well. My apologies.
And then we have the ApolloProviderWrapper, and we need to use this to wrap our application. There are a few more things that we need to do. So inside of here, we create a client with useMemo. We then call ApolloClient, we give it the link, and we create a new InMemoryCache. If you're familiar with Apollo Client, this will all look familiar. If you're not, this is a way for us to create a cache on the front end, so that when we make a request, the client updates its cache and is able to make sense of everything. So if you do make a request and you've got cached data, you can say, rely on the cache only, or cache first and then go to a network request, et cetera. I recommend having a look at the Apollo documentation if you want to learn the specifics on that, but we'll just go with the flow for now. Then I am going to import setContext, and this is so we can set the context of our Apollo link. Now you'll notice here that we have this link, and we construct a single Apollo link from multiple links, where one flows into the other. So we'll set some context, then we'll go to the HttpLink to make the request. Inside of here, you can see that we've added some additional code. So where we have our client, we now call some auth middleware. I'm going to copy that and we'll go back into our client here. This auth middleware gets the operation, which we don't need, but we want to grab the headers, merge in the current headers, and then pass our token to our Apollo link. So here, we'll provide that auth middleware in our link chain, and here we can see the link built from these two links. This is just setting the context of our HttpLink, which is cool. And then we'll just move these around to follow the guide word for word, in case we get anything wrong. We're almost there. We are on the home stretch here.
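The header-merging step that the auth middleware performs can be sketched as a pure function. This is a sketch under assumptions: the helper name is hypothetical, and in the real app this logic lives inside Apollo's setContext link rather than a standalone function.

```typescript
// Sketch of the merge the setContext auth middleware performs.
// withAuthHeader is a hypothetical helper name for illustration.
type RequestHeaders = Record<string, string>;

function withAuthHeader(headers: RequestHeaders, token: string | null): RequestHeaders {
  // No token (signed out): send the existing headers unchanged
  if (!token) return headers;
  // Merge the current headers with the bearer token for our backend
  return { ...headers, authorization: `Bearer ${token}` };
}
```

Inside the app, roughly the same merge happens in something like `setContext(async (_operation, { headers }) => ({ headers: withAuthHeader(headers ?? {}, await getToken({ template: "grafbase" })) }))`, which is then concatenated with the HttpLink; the template name here follows the one named later in the session.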

21. Importing useAuth and Apollo Client

Short description:

We import useAuth from Clerk and call getToken to get the token. We update the dependencies and use the Apollo Provider wrapper. The wrapper allows public access to certain routes and redirects to the sign-in page if the user is not signed in. We use the Apollo Client Wrapper in our application and remove the boilerplate code. We create an index page and import useQuery and gql from Apollo Client. We make a query using the useQuery hook and pass the query and variables as arguments. The variables object is currently untyped.

We're going to then import useAuth, and we'll import that from Clerk. Then further on down, we're going to use it. We need to do this here because of the rules of React: we're going to call an async function inside of this, and if we tried to call the hook in there, React would complain that you can't call a hook inside of a function like that. So this is an async function already, and it's inside of here that we're actually going to call out to Clerk for the token. If we go back to the guide, if you're following along with the guide, then we want to call that here. So inside of here, we'll call getToken, and we'll need to destructure getToken from the hook, which we can do here: getToken, and we'll invoke useAuth. Then we pass the name of the template that we had inside of Clerk. So if we go to Clerk here, in our dashboard, this is what I said earlier on: you can create whatever template you want and name it whatever you like, and this is the name of the template that you now need to know inside of your application. You could put this inside of an environment variable if it's likely to change, or you could just name it something that is not going to change, and you know what it is. And here it is, grafbase. So here we have that token created for us. And now we can use that token: using template literals, we can pass it here. This is the request header that goes to our backend, and it will contain that token, which is cool. So now we'll need to update the dependencies here. Let's just go back to the guide to make sure that we're following everything, and I believe it's only getToken. We'll pass that in there, so we'll memoize based on getToken, and now we are ready to use this Apollo Provider wrapper. So further on down, you can see that we use this Apollo Provider wrapper inside of my application.
Now there's a bit more going on here, maybe more than we need to cover, but essentially what is happening is we are saying that any user can access About Us, and they can access the Privacy Policy and anything else you want to add into your pages. They can access all of these publicly; they don't need to be authenticated. And this is what this means here: essentially, if the current route is included in this array, we render the component as normal; otherwise we render the sign-in component, or we redirect to the sign-in page if they're signed out. If they're signed in, they can continue as normal. Otherwise, they have to sign in, which is nice. All right, so inside of there, we can use this Apollo Client Wrapper and slot that in there. And now our application has Apollo and is aware of everything else. If we go back to our browser and load the network tab, we can see we're making a request to everything here. However, we're not doing any requests in GraphQL land just yet. We now want to update our index page to include a query to our backend. So, let's remove a lot of the boilerplate that Next has here. Let's just get rid of everything, and we'll create a new function, and we'll call this IndexPage. Then we'll export that as a default: IndexPage. Then we'll return Hello World. Now, if we go back to our browser, we get MyApp, because that's what I have. So, we have the MyApp component, then we have Hello World for our page. Now, what we want to do is import useQuery and gql from Apollo Client. We can then use this to make a query inside of here, and we can use the gql tag to specify our query. So, let's go ahead and we'll just call this GET_ALL_POSTS, and we'll call gql, and then inside of these back-ticks, we can use that same query that we ran locally before, and we can add it here.
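The public-route check described above boils down to a membership test on the current pathname. A minimal sketch, assuming a hypothetical page list (the exact routes in the workshop's array may differ):

```typescript
// Sketch of the public-route check. The page list is an assumption
// for illustration, not the exact list from the workshop code.
const publicPages: string[] = ["/", "/about-us", "/privacy-policy", "/sign-in"];

function isPublicPage(pathname: string): boolean {
  // Public pages render without authentication; anything else
  // requires a signed-in user (otherwise redirect to sign-in)
  return publicPages.includes(pathname);
}
```

In _app.tsx you would render the page directly when `isPublicPage(router.pathname)` is true, and otherwise wrap it so signed-out users get redirected to the sign-in page.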
Now, inside of our page, we can call that same hook, useQuery, and we can pass that query, GET_ALL_POSTS; then in the second argument, we can pass those variables. And I'm just checking on Discord to make sure that we've got no comments; there don't appear to be any questions, so I'll keep going on as normal. And for the variables here, well, we know that the variable we needed before was first: 10. Now, at this point, it's worth calling out that this variables object is completely untyped right now.
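For reference, the query being wired up here looks roughly like the following. The collection field and selection set are assumptions modeled on the shape of a Grafbase-generated API, not copied verbatim from the workshop schema.

```typescript
// Hypothetical shape of the query used on the index page. Field names
// (postCollection, edges, node, title) are assumptions based on the
// collection-style API Grafbase generates, not the exact workshop schema.
const GET_ALL_POSTS = /* GraphQL */ `
  query GetAllPosts($first: Int!) {
    postCollection(first: $first) {
      edges {
        node {
          id
          title
        }
      }
    }
  }
`;

// In the page this document would be wrapped in the gql tag and executed with:
// const { data, loading } = useQuery(GET_ALL_POSTS, { variables: { first: 10 } });
```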

22. GraphQL Code Generation and Backend Authentication

Short description:

You can use GraphQL Code Generator to generate typed document nodes, making your variable objects type safe. The Grafbase documentation provides more information on this. You can also follow the guide to generate TypeScript types for your GraphQL schema. Now, we have an application that loads data from the backend. The backend is authenticated using a token generated from Clerk. We can deploy the backend and access the data from the API. If there is an error, we display a message. In this workshop, we've created a front-end that communicates with a locally running backend. The backend is built automatically based on the GraphQL schema. In the future, you'll be able to add models or fields that resolve from external sources, allowing you to use existing backends with Grafbase.

What you can do is look to use something like GraphQL Code Generator, and it will generate the typed document node. So when you go to type your variables, it's able to detect from the query what variables you need, so the variables object is type safe. And that's something which you can learn about over in the Grafbase documentation. So if you want to do that, you're able to follow along.

So here we have it: generate TypeScript types. You can follow along with this guide, and it will show you how you can take your GraphQL schema and create types, and it shows you what those types look like when they generate. So if you want to follow along and integrate that with your application, you can do that as well.
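A minimal codegen setup along the lines of that guide might look like this. Treat it as a sketch: the output path, document glob, and use of the client preset are assumptions, so check the Grafbase and GraphQL Code Generator docs for the exact configuration.

```typescript
// Hypothetical codegen.ts for GraphQL Code Generator, pointed at the
// locally running backend. Paths and preset choice are assumptions.
import type { CodegenConfig } from "@graphql-codegen/cli";

const config: CodegenConfig = {
  // Introspect the schema from the local Grafbase dev server
  schema: "http://localhost:4000/graphql",
  // Scan the app's pages for gql documents
  documents: ["pages/**/*.tsx"],
  generates: {
    // Emit typed documents so useQuery variables and results are type safe
    "./gql/": {
      preset: "client",
    },
  },
};

export default config;
```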

Cool. So now we've got an application. And from here, we'll destructure data and loading. If we're loading, we will just return a simple message, loading data. Otherwise, we will return a pre tag, and inside of here, we'll call JSON.stringify on that data. Now, inside of my application, it's loading data.

If we open the network tab, we should get some errors telling us that our backend has failed, and that is because we haven't started that backend. But we can see from the operation that it's passing along that query and the variables, and in the authorization header we have a token that is coming from Clerk. If we open some of these requests here, we can see that Clerk gives us details and information about who we are and all of our public info, and then we make a request right before the GraphQL one to get our token using the template called grafbase. We then use the response of that, the JWT, to pass along in our headers to our backend.

Now, if we open up our terminal, let's open the pane here, and we'll run npx grafbase@0.10.0 dev to spin up that backend. What this is going to do is use the schema, and it's going to use the value for the issuer to look up the template on the backend and make sure that everything matches up when it's authenticated. So, you can't just pass any JWT; it needs to be created and signed by Clerk. And now when we refresh and we make a request, it makes a request to our backend, and it gets the data from our API because it's authenticated, which is really cool.

So, that is the data here coming from our backend which is running locally. I can deploy this to the web and use that same API that I have deployed for us. So, that's really cool. If I log out, sign out, you'll see here we've got no data. And what we could do is, if we go back to index, we can check to see if there is an error. If there is an error, then we will return, Something went wrong.
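The three render branches just described, loading, error, and the raw data dump, can be sketched as a pure helper. The function and type names are hypothetical; in the real page this branching happens directly in the component's JSX.

```typescript
// Sketch of the page's render branching: loading message, error message,
// or a pretty-printed JSON dump of the data. Names are hypothetical.
type QueryResult<T> = { loading: boolean; error?: { message: string }; data?: T };

function renderState<T>(result: QueryResult<T>): string {
  if (result.loading) return "Loading data...";
  if (result.error) return "Something went wrong";
  // Mirrors rendering <pre>{JSON.stringify(data, null, 2)}</pre>
  return JSON.stringify(result.data, null, 2);
}
```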

Okay. So something went wrong. Why? Well, let's load up the network request. Here we just get the message unauthorized. We're not logged in, so we don't have access to that data. But if I go to the sign-in page and sign in with GitHub, then because I'm authenticated, I can successfully request that data from my application. So if we just think about what we've done in this workshop: we've created a front-end that's able to talk to a back-end API that we are running locally, and all that we've had to do with GraphQL is write the schema. This schema automatically builds that back-end locally, and when you deploy it, it's available at the edge, and we get all of the CRUD models and validation for all of these fields.

Now, some things that will be coming soon will allow you to add models or fields that resolve from external sources. So, for example, maybe you already have a back-end and you can't migrate it. This isn't a Greenfield project but you want to use it inside of your project. Well, you could use a type and you could say, okay, my type is article. Then you'll be able to use a directive such as HTTP or REST, whatever you want to call it, and you can specify the endpoint. And you can say my endpoint is whatever, my resource is whatever, and then you can just not use that model directive and instead use HTTP. Then all of that data can be available and you get the added benefit that those are resolved at the edge and you can opt into caching.

23. Caching, Updates, and Wrapping Up

Short description:

We will soon be able to cache requests from other APIs, even those without caching. You can build locally, completely isolated from production data, and work on new features. We have exciting updates coming, including web hooks and live queries. Live queries allow automatic updates on the front end when anything changes on the back end. We have accomplished a lot today, and I will commit all the code to GitHub. Thank you for attending, and feel free to reach out on Twitter with any questions.

So, we can cache those requests for you from other APIs that may not even have caching themselves. That kind of thing is coming soon, so keep an eye on our changelog for that. We're going to try and bring more of this to our edge deployments and locally, so you're able to build locally, completely isolated from your production data, work on new features, add new things, et cetera. And, yeah, that is what's coming there.

There's a bunch of other stuff as well, you know, webhooks and live queries. Live queries are coming soon, so you're able to tag this query here as live. When anything changes on the back end, the front end will update automatically because it's live. That query opens an event source, so it's listening; it streams anything new from the server and updates the response. It's so cool. That's coming very soon. We're excited to show that.

But yeah, this is what we accomplished today. I will commit all of this code to GitHub, to the repository that we were working on before. So we're good, we can commit this. Let's stop the server. I haven't changed the schema. Let's just be naughty here and commit everything: added Clerk, added auth, that's what we did. I actually added everything; I will not deploy those, we'll tidy them up and get rid of them. And all of the code will be available on GitHub. Cool, so that has been the session. Are there any questions before we wrap up? No? Awesome. Great, well thank you for attending. And like I said, any questions, feel free to reach out on Twitter. I am available there, and you can ask me any questions you like.
