Recreating urql's GraphQL cache on the edge


Table of contents:
- Deploying a premade GraphQL API
- Creating an edge worker + cache on Cloudflare
- Configuring caching based on typenames
- Adding invalidation based on mutation return types

114 min
13 Dec, 2021


Video Summary and Transcription

The Workshop covered various topics related to software development and engineering. It included discussions on coding, communication, and troubleshooting Wrangler configuration. The participants learned about proxying requests in Cloudflare Workers, identifying mutations in GraphQL queries, and caching responses in a KV store. They also explored the AST with astexplorer.net and manipulated the AST to add type names to queries. The Workshop concluded with a discussion on caching subscriptions in GraphQL and handling mutations and query invalidation.


1. Introduction to Coding and Communication

Short description:

Next we're gonna wrap up the session with Timo. Can he get started? Can you show us how to code? Timo, can you explain the concept you're talking about? A lot of my content focuses on communication. I feel overwhelmed but passionate. We've had some requests to get data, and we also want to do some invalidations. If you have a basic GraphQL endpoint, fantastic. If not, use the provided link to create one. We can use that for the workshop. Let me send you a repo. Now let's get started. Install dependencies with npm ci, including Wrangler. Log in with your Cloudflare account. Run npm start and handle the type error. Start making GraphQL requests; we should see them console logged. Let's proxy the request and handle the error.

Next we're gonna wrap up the session with Timo. Can he get started? He's here with you, isn't he? Can you show them how to code? He must be. So let's go ahead and get him started. Timo, can you explain the concept you're talking about? And can you talk about communication and design?

Yeah, definitely. So a lot of my content right now focuses on communication. I feel a little bit overwhelmed. Not that I want to admit it, but I'm a little overwhelmed because I am very, very passionate about communication. And I've had some requests to get data, but we also want to do some invalidations. So if you have a basic GraphQL endpoint, fantastic. If you don't, just head over to this link in the chat, as you can create a super, super basic GraphQL endpoint. So in this example, it gives us like a little JSON schema. Just click extend JSON, deploy API. And then we get a little link here, and now we can actually do some GraphQL requests. So if I go into Insomnia here and say, get users, for example, we should see... there we go. We got a bunch of users. So just make your own; if everyone has their own, you'll see you'll get a little unique URL. Keep that safe. And we can use that for the rest of the workshop. Although, like I said, if you have a GraphQL API that you want to use as well, then you can use that instead. And I've set one up in advance.

All right. And let me send you a repo as well. Cool. We've got our own repo, and it's basically just some boilerplate to get started. So I will share my screen and let's get started. So, change my endpoint here. Let me make sure I'm on the right repo. Cool. So what we want to do is, if you just run npm ci, we'll install some dependencies. And one of those things that will be installed, oops, let's try that again, one of those things that will be installed will be Wrangler, which is Cloudflare's CLI for deploying services, things like that. And we're actually going to need Wrangler for this project, even though we're running it locally for now. So what you wanna do is just run, I'll put this in the chat, $(npm bin)/wrangler login, something like this. If you're on Windows, it might be slightly different. It will give you a prompt to open a browser, and just sign in with your Cloudflare account.

Cool, alright, I'm gonna assume that you're logged in. So everyone, if you just run now, npm run, I think it was dev, it's called... oh yeah, npm start, there we go. If you run npm start, what we're actually going to do now is get a type error, totally intentional, knew that was gonna happen. So let's just quickly do a return null as any, and then let's just console log the request and see what we get. And npm start, there we go. Cool, so in a second you'll get a little prompt, here we go, that we've got a server listening on this endpoint. And now we can basically start making GraphQL requests to this endpoint, and we should see them console logged. So I've got Insomnia here, but you can use Postman or cURL or whatever you want. I'm just gonna point it at 8787. Cool, so I'm gonna do a quick request. Let's proxy our request to that. And we get an error, "worker threw exception". That's totally fine, that's because it didn't return. But what we should see is that parameter one is not typed.

2. Proxying Requests in Cloudflare Workers

Short description:

Let's proxy our request in Cloudflare Workers using fetch requests. We'll make a new request to our GraphQL endpoint, taking the necessary details from the incoming request. If you're experiencing a 403 error when signing in to Wrangler, try running wrangler init to set up your project. If that doesn't work, try removing the wrangler.toml file and running wrangler init again. As long as the init works, we can proceed. If you're encountering errors when running yarn start or npm start, it's likely due to the init itself. wrangler init should create your Wrangler config and an instance of the worker in Cloudflare.

Okay, let's ignore that. Let's proxy our request. So in Cloudflare Workers, you can do fetch requests. So, for example, if we want to do a request to Google, we can do something like that. And it's just your standard in-browser fetch request. And the environment we're in in a Cloudflare Worker, they call it a Worker because it's kind of like a service worker. There are some constraints as to what you can do in the environment. But in general, you can do things like fetch requests and intercept requests and things like that.

So what we're gonna do is, to start off with, we're just gonna do a fetch and we're gonna make a new request as part of this fetch. The destination is gonna be our GraphQL endpoint. So you'll see that we've got new request destination, and then the rest of it, the body, the headers, things like that, we're just gonna take from the request that's being passed to us from the fetch function.
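As a rough sketch, that proxying step might look like the following (the endpoint URL is a placeholder; substitute the one you got from the demo API):

```typescript
// Hypothetical endpoint; replace with your own GraphQL API URL.
const GRAPHQL_ENDPOINT = "https://example.com/graphql";

const worker = {
  async fetch(request: Request): Promise<Response> {
    // Re-issue the incoming request against the upstream endpoint,
    // reusing its method, headers, and body.
    return fetch(new Request(GRAPHQL_ENDPOINT, request));
  },
};

export default worker;
```

Passing the original request as the second argument to `new Request` is what carries the method, headers, and body across to the new destination.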

So if I quickly do another request now... "worker threw exception". What is going on? Hmm, anyone else getting this error? I was thinking I might've broken something beforehand. I guess not. Ah, there we go. Now it's working. All right, so just give me a thumbs up in chat if you aren't getting that weird error that I totally wasn't expecting. Let's have a look. Okay, so Marken's getting a 403, Rodrigo's getting a 403 as well. Okay, you're all getting a 403. Okay, so that probably means... okay, Cloudflare client v4 accounts... okay, so you're getting a 403 when you're signing in to Wrangler. Before you start, just try and run wrangler init, you can do that same thing again with npm bin if you want, something like that, and you should, in theory, then be set up to go. Just a quick confirmation in the chat if that's working now. Oh, I see. Hm, interesting. Okay, let's have a quick look. Okay, so it looks like what you might need to do is just get rid of the... I'm just gonna try this very quickly. Okay, so what we can do is, if you just get rid of the wrangler.toml for now, in your projects, and then run wrangler init, then in theory that should create a project. How's it going for you folks? Is that working now? Okay. Don't worry about that for now. As long as the init works... What I'm actually going to do is, we're going to take the... installed something, but now it's failing, same as Marcus, okay. And am I right in thinking that that's failing when you run yarn start now, right, or npm start? The init itself, I'm guessing, passed. Oh, I'm sorry.
Installed something and now it's failing, same as Marcus. Okay, cool. So what you would do is, we do want to use that previous Wrangler config that we have. But what happens on init, I'm pretty sure, I need to double-check this, it's been a while since I've done this, is that wrangler init will probably create your Wrangler config, but also create an instance of that worker in Cloudflare, which is the thing that we were missing, which is why it wasn't working.

3. Troubleshooting Wrangler Configuration

Short description:

Once you've run wrangler init, replace the contents of the wrangler file. If wrangler init doesn't give the right config, create a new project and make changes as you go. Copy the source directory, package.json, and wrangler.toml into the new project. Try running it in a dev instance. If you still get the 403 error, it might be something else.

So, if you check: success false, error is missing headers. Okay. What I'm gonna suggest we do is, delete the wrangler.toml and then run wrangler init, the command from previously, and then once that's done, we're then gonna go ahead and replace the contents of the wrangler file with this, and that should get us back to where we were at the beginning, but now with an instance on Cloudflare.

I thought there would be an easier way of using an existing config. Thanks Fernando, I can use your screen now. I can indeed, yep, let's have a look. So, GraphQL cache in... perfect. And if you run this, sorry, if I run this, command start... 403, interesting. Trying to see if there's anything here that's specific to my account, but, when you did wrangler init, what name did it give the project? I think it, oh okay, I can actually go back, it just used the name of the repo. Interesting. What happens if you try and run that, if you run it without something? Mm-hmm, so I saved, I ran that, then it complains that it doesn't have an entry point. Gotcha, yeah, which makes sense. All right, while we're getting started, let's go back to 101, and listening what... I have a question. Yeah, go for it. In the message, it's written dist/index.json, and if you open the folder, it's index.mjs. So if you open the distribution folder, it generates a completely different file. So this I even don't understand. Oh, okay, so when you run wrangler init, it will create a new Wrangler config, but it won't be the right one, which is why we're running wrangler init: to create the Cloudflare worker on the server side, and then changing the config back to ours, so we've got the build steps that we need. The thing is, it doesn't seem to be... I'm just gonna try and quickly make a new worker from scratch. Let's have a look. I think there might actually be another way we can do this. It's tricky to test. It looks like something in this is specific to my account, and I'm not sure what it is. If no one's able to find a solution yet, I think to move forward, what we can do is we'll have to make a new worker, unfortunately, and then we'll just have to copy things over as we go. What I can tell you is that I just ran the following.
The following, so from like a different directory, whatever your parent directory is, I run this and it creates a new project. And if you go into that project, it seems to work fine. So maybe what we can do is start with that, and then instead of using the template we've got created, we can just make changes as we go. So just a recap: run that command, go into the directory, do an npm install, and then try... I don't know about the start command, let's have a quick look... and then you should be able to run that, and hopefully you won't see 403 errors. Let's make sure you change directory after running wrangler generate so that you go into the newly created project, which, I think, the default name looks to be my-worker. Well, let's start moving the source folder. Just move that in. Also, 'cause I'm trying to think of how we can do this without potentially breaking this template. I mean, the fact that that's working... what I would say is, if you look at the wrangler.toml now, take anything that doesn't look like it's specific to that project you just created. So, for example, not the UUID or the name, which I think is everything but the name, and try copying that across. In fact, I'll try this now before we start, just so I'm not wasting your time. Yeah, funny thing is that I just tried changing the name of the worker in my repo to see if that breaks it, but I'm really not sure where that wrangler config is being stored, it's somewhere in the config store, I don't know where it is. But anyhow, let's go ahead and copy this in. Okay, so what I've just tried now is creating a new wrangler project, and then once that project is created, I'm basically just replacing everything there with the original repo. So the source directory, the wrangler.toml, package.json, all that good stuff. So I would say, give that a shot. I think there's something here that is causing wrangler not to work, but yeah, I'd say give that a shot.
So try copying the source directory, the package.json, and the wrangler.toml into the new project, replacing those files, and then try running it in a dev instance. If you've seen the same thing as me, it should work. It doesn't work for me, but I never released a worker before with this account. So maybe I should be creating a domain from the UI or something? So, I didn't actually do anything from the UI. Yeah, so once I did wrangler login, I then did wrangler generate to create a wrangler project, and from there I made some changes to do bundling and so on, to get it set up for this project. So, when you run wrangler generate and then you go into it, run npm install and then wrangler dev, is that working for you, or are you still getting the 403? When I do the npm start that does the wrangler dev, I'm still getting the same message, the 403. But I can see my account ID in that URL, so it must be something else.

4. CloudFlare Workers Deployment Issues

Short description:

If you've tried wrangler generate and different versions of the wrangler CLI without success, the issue may be that you need to create a separate account for Workers on Cloudflare. Once you have a Workers account, the initial repo should start working. Make sure to enable Workers and log in again. When running npm start, the worker is deployed to Cloudflare and proxied locally. You can test it by sending a POST request to localhost:8787. We're currently just forwarding the request without caching.

And if you do, did you try the wrangler generate to make a completely new wrangler project to then try and run? Nope, but I can try that. I put some in the chat. Just see if that works, I think. Oh yeah, yeah, I did, I did. That's what created the workers folder, yeah, I did that.

Okay, and is that one working, or do you get a 403 for doing that? I'm still getting the 403. Man, okay, now that's really boring. It's all good, it's all good. I'm still wondering if we have different versions of the wrangler CLI and maybe there's a... Let me just have a quick look. So I've got 1.19.5 of wrangler. Man, this is hard to solve. Let's try that. Let me have a look, see if there's a way of... We're definitely snagging. What if you try it with a super basic project, like the generated one for example, without copying any of the files over, and then just try and publish that and see if that gets us going. Ha ha, no way! All right, folks, I found the issue, and it's in no way something that you folks did wrong. Turns out you have to make a separate account for Workers on Cloudflare. I'm so sorry for wasting your time on this. If you sign up at the Workers URL... if you've already made an account, it sounds like you still have to make a new one to access Workers. I can't believe I didn't realize this, but give that a shot and then we should be able to go forward. So you might need to log in again. I'm actually quite surprised that we don't get errors when we try and log in with the non-Workers account, but give that a shot. And it's funny, that's why I asked if we had to do something in the UI. Yeah, I think you're right, actually. It looks like you've got to opt in. You said you tried this already? I did try to do it manually, and I got one step further. Now I have the entry point problem, but I think I can fix that by editing the config. So I think, in theory, once you've got Workers enabled, I'm just gonna quickly double-check, but I think once you've got Workers enabled, that initial repo we've cloned should just start working now.
But, yeah, I'd say just try logging in again once you've got Workers enabled, and then go from there. I'm just gonna see if there's a link I can find to enable Workers. Cool, all right. So, shout if you aren't ready to go, but I'm gonna assume, it sounds like everyone is up to this step. So, long story short, what we've just done is: when we run npm start, it actually deploys to Cloudflare and then proxies it locally so we can easily access it. So, the reason this wasn't working, basically, is because it wasn't deployed to Cloudflare. But once you register for that Cloudflare Workers account, as opposed to a standard account, we're good to go. So, right now, very, very basic: we're literally just proxying that request. So, what you'll see is, if you spin up something like Postman or whatever and do a POST request to the endpoint localhost:8787, here's an example query. This query should work for you; I'll put it in the chat in a second. We're getting responses, which is great. And that's basically our starting point. So, if you for some reason don't have Postman or anything like that, you can also just try this cURL request in your terminal and you should get a list of users back. Okay. So, we've got a request coming in, and right now we're just forwarding it along. Obviously that doesn't really benefit us, we wanna start caching.
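For reference, the cURL request mentioned above might look roughly like this (the query assumes the demo API's users schema; adjust the fields to match your own endpoint):

```sh
curl -X POST http://localhost:8787 \
  -H 'Content-Type: application/json' \
  -d '{"query": "query { users { id name } }"}'
```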

5. Identifying Mutations in GraphQL Queries

Short description:

There are two types of operations in GraphQL: mutations and queries. Mutations are used for pushing data, similar to POST requests in REST, while queries are used for data retrieval and caching. To identify if an incoming query is a mutation, we could search the query string. However, a better approach is to use an abstract syntax tree (AST) to get a normalized representation of the query. By parsing the query into an AST, we can analyze the hierarchy of the query and determine the type of operation. To do this, we can use the 'parse' function from the graphql-js library, which will convert the query into an AST object.

So, there are two types of operations for a GraphQL request. Does anyone know these? Feel free to just put it in the chat or speak up. Mutations and queries. Yeah, perfect. Perfect. So we've got mutations and queries coming in, and the ones that we want to cache are obviously the queries, because that's data retrieval. But the things that we don't wanna cache are the mutations, because obviously those push data, like a POST request in REST, for example.

So, the first thing we need to do is we need to work out how to identify the incoming query as a mutation and then act differently based on that. So, what we could do is we could search the query string, for which I've got some example code here somewhere, just to save time. Here we go. So, we've got a body here, which is request.clone().text(), which gives us the request as a text string, then we parse it as JSON. And when we've got this body, let's do an example request. Cool. We should see here somewhere... am I in the right folder? Yep. We get an object, let's just stringify it so we can actually see what is in there. Ha ha, we've got an empty object. Request clone text, then JSON parse. Cool. Are we missing some headers? Content-Type, application/json. Interesting. To be honest, it was probably the JSON parsing or something like that. Oh, there we go, okay. So we're getting the query through. You see that we've got the string here, and probably most of you know that if we wanna do a query, we prefix it with query or just omit the operation name altogether. And if we wanna do a mutation, we add mutation to the beginning of the query string. So a mutation would look something like this, and then we add addUser, for example. So what we could do is, now we've got this query in our body, we could just parse the string and look for the word mutation in there. The problem with that is it's kinda janky, because just like any kind of language traversal, ideally we want to get a normalized representation of the language that doesn't include spaces, doesn't include any kind of variability, and instead just focuses on what it is that GraphQL cares about.

So is anyone familiar with what an AST is? Can you check the chat? Yes, we got one from Rodrigo, awesome. Anyone else heard of an AST? No, okay, cool. So AST stands for abstract syntax tree. And you can think of it as a step after lexical analysis. So we kind of pass the string through a lexer. So this query might be like an operation type. And then we create like a tree of the hierarchy, so you could iterate through this and see that we've got query, and then inside the query we've got fields, and inside that field there's another field, and so on. So for this, the first thing we're gonna do is we're just gonna try and parse the query into an AST. And then the next thing we're gonna do is try and work out what kind of operation it is based on that. So the first thing we're gonna do, and feel free to follow along, is, I'll put this in the chat, the first thing we're gonna do with our query is we're actually gonna create a constant called document. And we're gonna call parse, which is from the graphql-js library, which should be installed in your project, so don't worry about that. And then we're gonna pass it the body.query. And then if you add a console.log and have a look at document, you should see something that is basically impossible to grok, because it's just a big nasty object. Let's have a look. There we go. So you'll have something like this: an object with kind Document, some definitions, and then some location information. So we've parsed the query, and basically the string is no longer a string. It's now an AST, which is our representation of the document, the GraphQL query.

6. Exploring AST with astexplorer.net

Short description:

To learn the AST, we don't want to constantly console log. Instead, we can use the astexplorer.net tool to explore the AST of GraphQL queries. It allows us to see the structure of the query and find specific nodes. We can even determine if a query is a mutation by examining the AST. Take a few minutes to experiment with the tool and feel free to ask any questions in the chat.

And the thing we don't want to do is constantly be console logging to learn the AST, because as you can imagine, there are lots of different types of AST nodes. There's this top-level node, which is called the document, but then we've got things like selection sets, fields, name nodes, all these kinds of things that we don't wanna necessarily read massive documentation to work out.

So what we're gonna do is we're gonna go to a site called astexplorer.net, which I'll put in the chat here. And this is an awesome tool, which, just gonna close some of these... This is an awesome tool which allows you to pick a language. So it could be CSS, but in our case, we're gonna go here and search for GraphQL, which is here. And then you can see that it actually creates some example queries. And then if you click around, you'll see on the right here, it actually shows you what the AST looks like. So for example, if I want to find out about this name here, I wanna know if this name field exists, but I don't know how to find it in my AST. If I click it, it'll actually take me to the node and show me how it exists in the tree, which is pretty useful.

So, who has any ideas? So let's try doing a mutation. Mutation. And we can just do, let's pretend it's a thing here. Okay, cool. So we've added a query and a mutation. And then the challenge for you now, first one to do it gets brownie points, because that's all I've got to offer right now, but I'll think of something better: mess around with these queries, click around, and see if you can work out how to identify whether it's a query or a mutation based on this node here. Take a couple of minutes to do that. And in the meantime, if you've got any questions, just drop them in the chat. Yeah. And we'll get ready to go.

7. Determining Query or Mutation Operation

Short description:

I'll send you a link. If you're behind, check out the progress branch. Let us know if you have any questions or issues. Rodrigo has identified the query operation. We can determine the operation type without string parsing. We check if the kind is Operation Definition and if the operation is a query. We test it with a query and a mutation. True for query and false for mutation.

Okay. All right. Just got your message Muhammad. I will I will send you a link. Now. I will send you a link. Hey Muhammad, I'll make a separate branch with the current state of what I'm doing just so that you folks are, anyone who's behind can just check out that branch. Cool. So anyone who wants to catch up if they feel like they're a bit behind, maybe if you had some issues setting up Cloudflare, once you're good to go, just check out, there's a progress branch, just check that out, and you should be able to move forward with that. But also just drop a message in the chat or talk if you feel like you've got any questions or having any issues. We've all got different environments, as demonstrated earlier. So what works for me might not work for you, so definitely let us know.

You're probably not going to have that issue if you are having problems. Anyone getting close to finding anything in the AST tree that might hint towards what kind of operation we're working with? Perfect, from Rodrigo, legend. Yeah, so Rodrigo has nailed it. Basically, this whole thing here is what we consider a document. And inside the document, the very top-level thing, so the next thing down, is a definition. And we can have many of those. As you can see here, we've got two: an operation definition and a second operation definition. And if we go into this first one here, you'll see that there's a key, operation, which is query. So this tells us, without even having to worry about what text is here, whether the word query is here or whether this is a valid query that omits the text query, it still says operation: query. It does the work for us of working out what operation it is, which saves us having to do string parsing and things like that.

So what we can do is, in fact, I'm gonna try and copy and paste Rodrigo's example there. I'm going to say isQuery is equal to... let's take this. What we're going to do is document.definitions.some. I'm just gonna say node, which gives us each definition. I'm gonna say if the node.kind is equal to... I'm gonna import Kind from here. Kind is just an enum, Kind.OPERATION_DEFINITION. Cool, so we're gonna say: in our definitions, we have a node which is of Kind.OPERATION_DEFINITION, and node.operation is equal to query. I appreciate that this is not wrapping. Let me see if I can turn the wrapping on for you folks. There we go. Cool, so, like Rodrigo shared, we're going to look in that document that we've got, the AST document. We look for definitions, check if the kind is OperationDefinition. So just see, do we have one that is OperationDefinition? And if we do, is the operation definition of type query? I'll put this in the chat for anyone who needs it. And then from there, do a quick console log and see what we get. So I do my first, my query here. Haha, of course it couldn't connect. So let's try that. Ah, let's try that again. Man, Cloudflare's not on my side today. All right, it's just flaky, all right. So first time I did a query and we're getting true, and then let's change this. I'm literally just gonna do, let's see, I've got a mutation here somewhere. Let's do this one. And we should see false, awesome. So we're seeing true for when it's a query and false for when it's a mutation. And I will put a little cURL request here for anyone who isn't using something like Postman.
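The check described above can be sketched as follows. The string literals are the runtime values of graphql-js's Kind enum; in the workshop project you'd import Kind and compare against Kind.OPERATION_DEFINITION instead, and the hand-built documents stand in for what parse() would return:

```typescript
// Rough shape of the AST nodes we care about.
interface DefinitionLike {
  kind: string;
  operation?: string;
}

// Does the document contain at least one OperationDefinition whose
// operation is "query"?
function isQueryOperation(document: { definitions: DefinitionLike[] }): boolean {
  return document.definitions.some(
    (node) => node.kind === "OperationDefinition" && node.operation === "query"
  );
}

// Hand-built stand-ins for parsed documents:
const queryDoc = {
  definitions: [{ kind: "OperationDefinition", operation: "query" }],
};
const mutationDoc = {
  definitions: [{ kind: "OperationDefinition", operation: "mutation" }],
};

console.log(isQueryOperation(queryDoc)); // true
console.log(isQueryOperation(mutationDoc)); // false
```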

8. Handling Queries and Mutations with Proxy

Short description:

We have the ability to work out now whether it's a query or a mutation, which is great. We're gonna start having to branch in this fetch function here, so let's pull it out and create handleMutation and handleQuery functions. We should have two almost identical handlers, both of which just proxy at the moment. Based on whether it's a query or not, we'll call the appropriate handler. We want to cache queries, but not mutations. To cache a query, we'll use a KV store offered by Cloudflare. Run the following commands: [commands].

Or a GraphQL playground or anything like that. So there's an example query and, oops, that's a mutation. Query and query, there we go. Cool, is that making sense to folks so far? Just a quick thumbs up in chat, maybe. We have the ability to work out now whether it's a query or a mutation, which is great.

Now what we wanna do is, we're gonna start having to branch in this fetch function here, it's gonna get pretty big, so let's pull it out and let's do a handleMutation function, and let's also do a handleQuery as well. And just to save us parsing the query twice, we're just gonna pass in the parsed query like so, and then let's return a handler from that, given the request. And I'll put a quick template here. So you should have something like this, one for handleMutation, one for handleQuery. And just so that Cloudflare is happy to start with, let's just make sure that we've always got that proxying that we did earlier, the return fetch for the request and all that jazz. And that'll just make sure that we always fall back to forwarding that request if we don't handle it in any particular way. Cool, so, I'll paste this in the chat, we should have basically two almost identical handlers, both of which just proxy at the moment.
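A sketch of the two handlers and the branch that follows, assuming a hypothetical GRAPHQL_ENDPOINT and a parsed document (typed loosely here so the snippet stays self-contained):

```typescript
// Hypothetical endpoint placeholder; use your own API URL.
const GRAPHQL_ENDPOINT = "https://example.com/graphql";

type Handler = (request: Request) => Promise<Response>;

function handleQuery(document: unknown): Handler {
  // Caching will be added here later; for now, just proxy upstream.
  return (request) => fetch(new Request(GRAPHQL_ENDPOINT, request));
}

function handleMutation(document: unknown): Handler {
  // Invalidation will be added here later; for now, also just proxy.
  return (request) => fetch(new Request(GRAPHQL_ENDPOINT, request));
}

// Pick the appropriate handler based on the operation type worked out earlier.
function pickHandler(isQuery: boolean, document: unknown): Handler {
  return isQuery ? handleQuery(document) : handleMutation(document);
}
```

Both handlers are identical for now; they only diverge once caching and invalidation are added.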

What we're gonna do now is, based on whether isQuery is true or not, we're gonna take the appropriate handler. So let's do return isQuery: if it is a query, then we're gonna call handleQuery, pass it our document that we've already parsed, and then also pass the request and ctx. What's with this document? Sorry, folks, document should be DocumentNode, and we need to just make sure we import that from GraphQL. There we go. So if it's a query, we're gonna call handleQuery, otherwise we're gonna call handleMutation, and that's gonna look almost identical, just instead of query, it's gonna be mutation. Cool, and then we can get rid of this fetch at the bottom, and we're there. I will just paste that back in the chat again now that the type errors are fixed. So, something like that. Just a quick thumbs up in chat if that's making sense for people. So, the tricky work of working out if it's a query or not, we've done. Now we're just handling the query or the mutation based on the operation type. So, like we spoke about before, we wanted to cache a... just make sure I'm not skipping any steps here. Yeah, cool. So we wanted to cache queries, but we don't want to cache mutations. So, in order to cache a query, because we're working on Cloudflare Workers, we can't afford to keep queries in memory, because those workers have a short lifetime. So they could be completely torn down at any time. And they can also run concurrently, as with most serverless offerings. So imagine you've got three workers and they all run at the same time: they've all got their own version of a cache, which can be a bit hectic. We want to work around that.
So what we're going to do is we're going to use what is called a KV store, and Cloudflare offers that to us. KV just meaning key-value. I'm going to send you the commands you'll want to run. You might need to prefix them with npx before wrangler if you don't have it available in your bin directory.

9. Creating KV Namespaces and Configuring Wrangler

Short description:

We're going to create two namespaces: type names too and responses. Make sure to add the ID and preview ID to your Wrangler.toml file. The preview ID ensures that the KV store is used when running Wrangler locally. If you have the responses namespace with the correct IDs, you're good to go.

But what we're going to do here is just create the two namespaces. And when you run that, you should get some kind of notification that the KV namespace has been created. In fact, let's do a quick example. So I'm going to make a new one called type names too. Can't do type names too. Let's try that again. Type names too. Cool. And you'll see that you should get something like this, which tells you to add the following to your wrangler.toml file. So we're going to do exactly that. If you can, copy this across to your wrangler.toml. Let's just put it up here somewhere. And there is one change that we're going to make, which is for this ID here, which declares the ID of the KV namespace; this one we're going to call responses, I think it was. So I've already created these earlier, so I'm just trying to make sure I don't make any new ones. When we paste this in, we get an ID. But what you'll see is we actually need a preview ID as well. So what you can do is just copy the ID equals here, add a comma, and then paste it in again, but this time put preview ID. And all this will do is tell it that when we're running Wrangler locally, running wrangler dev, we wanna use the same KV store for that as well. So for now we can just do one. As long as you've got one which is called RESPONSES, all capitals, and you've got the ID and the preview ID the same, then we should be good to move forward. Just give us a thumbs up in chat if you're good to go. Looks like Rodrigo is on top of it. So just make sure that when you paste that in from the CLI, where it says kv_namespaces, binding, ID, you've got that preview ID equal to the same value as the ID.
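Putting that together, your wrangler.toml entry should end up shaped roughly like this. The RESPONSES binding name comes from the workshop; the IDs are placeholders for whatever the CLI printed for you:

```toml
kv_namespaces = [
  { binding = "RESPONSES", id = "<id printed by the CLI>", preview_id = "<same id>" }
]
```

The preview_id is what makes wrangler dev talk to the same namespace when running locally.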

10. Working with KV Namespace in Wrangler

Short description:

We can now use Wrangler dev to point to the KV namespace. The KV namespace is exposed as end.responses, which allows us to perform operations like putting, getting, and deleting key-value pairs. Let's start by trying a put operation with key 'one' and value 'hello'. We can also make this async and try to get the value and log it to the console. We have a functioning key-value store and can write data to it. The key we're using is 'responses'.

And once you have that, when we run wrangler dev, which we're pretty sure we're probably gonna have to run again, I'm gonna start it again, just so... what is it? yarn dev, start, npm start, there we go. Cool. So now it should in theory be pointed at the KV namespace. The way the KV namespace is exposed to us using Wrangler is via this env. I've added some types for you; you'll see if you do env dot, you've got responses and type names. For now we're just gonna worry about responses. We've got env.responses, and from there we've got a basic key-value store. So we can start putting things, getting, deleting and so on. So to start with, in a query, give it a shot: just try doing env.responses.put, let's put key one, hello. So try giving that a shot, try just doing some kind of a put, and maybe, let's make this async, try and get that as well, console log it, and see if you're able to get that to work. All right, so we've got a key-value store and we're able to write things to it. Anyone want to take a stab in the dark as to what we're gonna write to our KV store? Okay, if anyone thinks it's responses, you're on the ball.
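If you want to play with the same surface outside a worker, here's a sketch of the put/get/delete shape the env.RESPONSES binding exposes, backed by an in-memory Map. FakeKV is an invented stand-in for illustration; real Workers KV persists across and is shared between worker invocations, this Map is not.

```typescript
// Sketch of the async key-value surface exposed by a Workers KV binding.
class FakeKV {
  private store = new Map<string, string>();

  async put(key: string, value: string): Promise<void> {
    this.store.set(key, value);
  }

  async get(key: string): Promise<string | null> {
    // Real KV returns null for missing keys rather than undefined.
    return this.store.get(key) ?? null;
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```

Usage mirrors the worker: `await kv.put("one", "hello")` then `await kv.get("one")`.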

11. Caching Responses in KV Store

Short description:

We're going to fetch the response and store it in our cache. The cache will be a key-value store where the key is the query and the value is the response. We'll stringify the response before storing it. The idea is that when a query is made, we'll check if it's in the cache. If it is, we'll return the cached response. If it's a new query, we'll fetch the response, add it to the cache, and return the response.

So what we're gonna do on our handleQuery here, rather than just returning the response, is we're actually gonna await it first. Let's do newResponse equals await fetch. Cool. Once we've got that response, we then want to start putting it into our cache. Before we go ahead, I'm gonna give you a quick visualization of what we want our cache to look like. It's a KV store, which means we've got K, which is the key, and then V, which is the value. So it's gonna look something like this. For now, we're gonna keep it pretty basic. We're just gonna have our key be the query. So for example, query users, and we'll have the value be the response. And we're actually gonna stringify that. I don't quite know if it's necessary or not, but I think it is. Feel free to give it a shot without the stringify and let me know if it works. But we're gonna have our response, which is gonna be our data, which might look something like this and so on. So this is what our KV store's gonna look like inside. And in theory, what's gonna happen is when we do this first query, we're gonna make that request and check the response, and if it's a success, we'll write it to the cache, and then next time that query comes through, we're gonna return a cached response. And if it's a new query, then we rinse and repeat: make the request, then add it to the cache.

12. Parsing and Caching Response

Short description:

I'm gonna quickly show you how to parse the response by cloning it and getting the JSON. We'll check if the response is okay and has no errors. Then we'll write it to the cache using the query as the key. We'll convert the query into a string using the print function. Finally, we'll return the new response to the requesting client.

I'm gonna quickly show you how to parse the response, because it's not super obvious, and I've even made a note of it somewhere... okay, there we go, okay, cool. So what we can do is: newJson is equal to newResponse, which I'm gonna clone, dot json. And all clone does is create a new instance of that response. It doesn't refetch or anything like that. It just means that if we try and read from the same response twice, we'll error, which can be a problem if we read from a response and then return that response later on. So for our case, we're just gonna clone it and get the JSON. In fact, before we do that, let's do a quick check. So if newResponse is okay, so we get a success response, and there's no error. We wanna make sure we don't have any GraphQL errors, so let's say there's no error in the response. Well, we've got an issue here. Let's do... cool. So we're gonna say the response is okay, so it's a 200 or a 201 or whatever, and we don't have an error. Then what we're gonna do is write this to the cache, which you've all done now, so you're pretty familiar with that, I'm guessing. So we're just gonna do env.responses.put, with a key, which is going to be our query. And we're actually gonna have to turn this into a string. To do that, instead of using parse, which takes the query string and turns it into an AST, we're gonna do print, which is basically the inverse: it takes the AST and turns it into a string. So let's do print, and then we're gonna do our query. And then the value; like I said, feel free to try this without the stringify, it might not be necessary. The thing that we're gonna cache is the response, obviously. So we're gonna cache newJson. Then let's just do a little console log here, saying caching query. Cool, so we've got our response, it's okay.
There's no errors, we're caching it, but that's not good enough, because obviously we still need to let the requesting client get the data. So what we can do, really easily, is just return that new response as well: the full response, all the headers, all that good stuff. So have a mess around and see if you can get that. I'll put the example here; mess around with it, maybe try something a little different. See if you can think of any other edge cases that we might be missing as well, and then just see if you can cache the query for us. Just quickly sharing my screen for a moment. If we don't cast this, we start to get issues with the JSON being unknown. So just casting that to any like this, by passing the generic, should help us out. And also, I'm not too sure, but I'm suspecting... yeah, just make sure for now, if anyone has tried without the stringify, let me know. Objects might work instead of a string. But for now, I'll try wrapping it in a stringify, and you should get a string for the return type of print, and then a string for the return type of the stringify as well. And again, let's do a quick... cool, that progress branch is updated. So if you want to pull that in, you're welcome to. How's everyone else doing, everyone else managing?
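The write path we just built boils down to this sketch. The Map stands in for env.RESPONSES, the key is the raw query string rather than print(queryAst), and maybeCache is a hypothetical helper name, not something from the workshop repo.

```typescript
// Write-path sketch: cache only when the HTTP status is ok and the
// GraphQL body carries no errors.
interface GraphQLResult {
  data?: unknown;
  errors?: unknown[];
}

const responseCache = new Map<string, string>();

function maybeCache(queryKey: string, status: number, body: GraphQLResult): boolean {
  const ok = status >= 200 && status < 300; // mirrors Response.ok
  const hasErrors = Array.isArray(body.errors) && body.errors.length > 0;
  if (ok && !hasErrors) {
    // Stringify before storing, as the worker does with JSON.stringify(newJson).
    responseCache.set(queryKey, JSON.stringify(body));
    return true;
  }
  return false; // pass through to the client without caching
}
```

Either way the original response still goes back to the client; caching is a side effect.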

Okay, cool. Anyone having issues? Anyone still blocked, or are we getting type errors? Just put it in chat or feel free to talk in voice.

All right, cool. It doesn't have to be Rodrigo; anyone got ideas of how we can now return this cached value? Feel free to just turn your mic on if you want, at this point. Hey, sure thing Muhammad.

13. Caching Responses and Retrieving from Cache

Short description:

We have a worker that handles GraphQL requests, determining if it's a query or mutation. If it's a query, we handle it accordingly. If it's a mutation, we forward the response. We've created a KV store to cache successful responses. We now try to retrieve a response from the cache. If found, we return it. Otherwise, we create a new response and store it in the cache. We also add a header to indicate a cache hit.

One thing I would say is just, once you've checked the branch out, feel free to run this and pull the latest changes; that might help you catch up if you feel like you're a little bit behind. But by now what we've done is we have a worker which takes an incoming GraphQL request. The first thing we do is work out: is it a query or is it a mutation? If it's a query, we go to handleQuery, which we'll talk about in a second. And if it's a mutation, then we pass it to handleMutation, which just forwards the request to the origin.

This is what the origin is here, by the way. In this case, it's just a fake GraphQL endpoint. And what we've just done now, in the last 10, 15 minutes, is we've created a KV store in Cloudflare, which is just like a dictionary or object storage. What we're doing now is, when we get a query, we make the request, but rather than the mutation case, where we just return the response immediately, this time we're going to check if the response is a success, and if it is, we're going to cache it. So we store it in our responses here, where the key is the query string and the value is the response. And then we return the original response from the origin. The next thing we're going to do now is actually return that cached response. And we get our results; actually, we'll talk about that a little bit later. But for now, let's do a few simple tasks here.

So we have a cached response, and this is now in storage. And this will basically be there forever. So even if we don't make a request for a year from now, we're still confident that the KV store still has that state and will still be up to date. So what we need to do now is, before we do this bit here, which is newResponse, so this is basically the fresh response before we forward to the origin, we're going to try and see if we can get something from the cache. So we'll first try to get something from the cache. Let's do a constant cacheResponse equals await, and then env.responses, which gets us our KV, dot get. And then we're going to print our query. And just to clean things up a bit, because printing is not too cheap, let's just move it all up there, so we just call it once. So we're going to try and get the cached response, like so. We just say get, and we provide the key, which, as you can see here when we wrote it, is the query string. And then what we're going to do is, if there is a cached response, we are going to return it. In fact, we're going to have to do a bit more than that. We're going to have to create a new Response. The body is going to be... let's just double check this. There we go. Cool. So we're going to return our new Response, with the cached response, which is that stringified object, as the body. And then we need to make sure that we let the client know it's an object and not just a random string. So we've got content type application/json.
And then just for an added bonus, we're going to add a header here. This is going to be called X-Cache-Hit, and we'll put that as true. And what that means is, if I save this and wait for it to rebuild... let's just do a quick console log. Big old cache message.
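Putting the read and write paths together, here's a sketch of the full handleQuery flow with the X-Cache-Hit marker. Responses are plain objects here instead of real Response instances, and fetchFromOrigin stands in for the proxied fetch to the origin.

```typescript
// Read-through sketch of handleQuery: check the KV-style cache first; on a
// hit, serve the stored body with an X-Cache-Hit marker; on a miss, call
// the origin, cache the body, and serve it fresh.
interface FakeResponse {
  body: string;
  headers: Record<string, string>;
}

const cache = new Map<string, string>();

function handleQuery(queryKey: string, fetchFromOrigin: () => string): FakeResponse {
  const hit = cache.get(queryKey);
  if (hit !== undefined) {
    // Cache hit: serve the stringified body and flag it for debugging.
    return {
      body: hit,
      headers: { "content-type": "application/json", "x-cache-hit": "true" },
    };
  }
  // Cache miss: hit the origin, store the body, return it fresh.
  const fresh = fetchFromOrigin();
  cache.set(queryKey, fresh);
  return { body: fresh, headers: { "content-type": "application/json" } };
}
```

The second identical query never reaches the origin, which is exactly what the X-Cache-Hit header makes visible in the demo.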

14. Caching and Invalidating Queries

Short description:

If I do my query, and then let's just check here. X-Cache-Hit true. So what that means is that query went through and it was cached. If I change my query, there's no cache hit because it's a new query. If I run it again, boom, X cache hit is true. And you see that we get the cached query.

Build successful. Cool. So if I do my query, and then let's just check here. X-Cache-Hit true. So what that means is that query went through and it was cached. And if I change this, let's say I changed my query, let's just give it a name for now. My unique query. Send. In theory, you'll see that there's no cache hit there because this is a new query. So, it's not in our KV store. If I run it again, boom, X cache hit is true. And you see that we get the cached query. So what I'm gonna do, is I'm just gonna paste that in the chat and I'll update the branch as well. Cool. Give it a couple of minutes for folks to try that out.

15. Caching and Invalidating Queries

Short description:

Imagine you've made a query and we cache it. Whenever you request the user with this ID, you'll always get the same response. If a mutation changes the user, and the cached query is made again, the result is no longer valid and needs to be invalidated. GraphQL's introspection and schemas provide schema information for every piece of data, including type names. By using type names in combination with some kind of ID, we can uniquely identify nodes and update specific queries without refetching all others.

So this is a very quick example. We've got a query and we say, get user with ID one, and we get back the ID, the name and the age. Let's just do ID, name, okay. So imagine you've made this query and we cache it. This query is now in the cache. Whenever you request this user with this ID, you'll always get the same response back. Then this mutation comes along. Someone triggers a mutation which changes the user. And then this query comes along later on and we return the cached result again. The problem with the cached result is it's no longer valid and we need to invalidate it, as Rodrigo said. So any ideas, any thoughts as to what we might be able to do to address this? Let's have a look at the chat. So how could we look at a mutation and then work out what to do? We could invalidate all queries. So we could say any time a mutation comes along, we'll wipe the whole key-value store, but obviously that's not ideal; maybe there's other things we can do as well. Invalidate only mutated fields, without wiping the whole KV. Yeah, I think you're closer than you think you are, Rodrigo. There are some pretty advanced things we could do here. For our purposes, we're gonna look at what we did in urql with the naive cache. So one thing that not a lot of people know about, it's one of those things that unless you've needed to use it, you probably haven't. With GraphQL, the reason that we can make GraphQL clients which have their own cache and are smart and know when to invalidate things is because of introspection. So if you think about your REST API: you do a request to a route, and you get data back, and we don't really know what that data is. We can look at the URL and try and make assumptions, but there's not really a lot of confidence as to, is this a user? Is this a user with, I don't know, a field that says friends with more users, or a field that says organizations with organizations?
And it's really hard to know what type of data we're working with. So the really cool thing about GraphQL is we have introspection and schemas, and every piece of data that you get back actually has schema information. So if you've ever heard of type name, which looks a bit like this, double underscore and then typename, this is actually what that is for. So if I do a quick request now for a query and I say, get me all users and get me the type name, you'll see that it now returns __typename, and that field has the value user. So the really cool thing about this is we now know that this object here, at least, is a user. And if this object has some relationships, we can see the types of those as well. So that's pretty useful. And that means we now know this is a user with ID one, or this is a user with ID three, and we can uniquely identify nodes. So imagine that we do this request twice; imagine that you got this response twice. Just gonna get rid of that. Cool. So we get ID three, age 24, first name Estelle. And then we've got ID three, first name Estelle. Let's make this a bit more ambiguous. Okay. So we've got ID three, name Estelle; ID three, name Jess. So this alone, if we just assume that ID three corresponds to the same entity, we end up with problems, because what if ID three, name Estelle is, I don't know, what if this was the name of a company and this is the name of a user? What if this is like Estelle Limited or something like that, right? And that's totally valid. You can have IDs that are identical but for different things, and it doesn't mean they're the same. But by introducing type names into the mix... okay, let's just add it in manually. So if we add __typename user here, and then we have the same here. Well, now we know that this is a user with ID three, and here we know that this is a user with ID three.
So the cool thing about this is, by using the type name in combination with some kind of ID, we now know that, for example, if we do this query and then later on we do a different query that returns ID three on type user and it changes the name, we don't need to re-fetch all our other queries. We can just update this. And that's what a lot of clients, urql, Apollo, Relay, will do: they'll actually create a store that looks something like this, with user one, two, three, four, and they'll store a normalized list of all these objects for a given type name.
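A sketch of the idea: keying a store by `${__typename}:${id}` keeps "user 3" and "company 3" apart, and lets a later write for the same entity update it in place. The EntityNode shape and writeNode helper are simplifications of what a normalized cache actually stores, invented here for illustration.

```typescript
// Sketch of normalized-cache keys: typename + id uniquely identifies an
// entity, so two different things sharing id "3" never collide.
interface EntityNode {
  __typename: string;
  id: string;
  [field: string]: unknown;
}

const store = new Map<string, EntityNode>();

function entityKey(node: EntityNode): string {
  return `${node.__typename}:${node.id}`;
}

function writeNode(node: EntityNode): void {
  // A later write for the same typename+id merges into the same entry,
  // which is how queries see updated data without a refetch.
  const key = entityKey(node);
  store.set(key, { ...store.get(key), ...node });
}
```

So writing a User with id 3 and a Company with id 3 produces two entries, while a second User-3 write just updates the first.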

16. Caching and Modifying Queries with AST

Short description:

We can uniquely identify entities using a combination of type name and ID. Aliasing in GraphQL allows us to query data with different names. The GraphQL client can handle aliases by reverting them to the normalized representation. We will simplify the caching process by disregarding the ID and focusing on the type name. Mutations should return the changed entity, including the type name. We will add the type name to the mutation request and invalidate queries based on it. Use AST Explorer to explore the AST and make the necessary changes. Understanding the AST structure is key to modifying the query.

And then when it updates in one place, rather than forcing the user to re-fetch all the queries, it'll just update the state to have the latest user three, because we know it's the same entity. That might sound a bit complicated; feel free to ask some questions in the chat if you want. It's definitely not an easy idea to consume, but the general way you can look at it is: with a combination of type name and ID, we can uniquely identify entities and know that we're referring to the same thing, because an ID is never gonna change, and the type name tells us what type of thing we're working with.

Chat, how do you deal with aliases? If you've got an example of an alias, Rodrigo, go ahead. So when you mutate something, well not mutate, but maybe you query something with a different name. Like for example, instead of name, I don't know, you alias it to full name? And then... When you say alias, do you mean like in the query? Yep, yeah, exactly. Ah, okay, I'll be honest, I'm not super familiar with GraphQL aliasing, but I'm guessing what you're saying is something like this, and then I need timing and things like that. Yeah, yeah, exactly. Oh, perfect. So in the case of any GraphQL client, because it's the one that's making that outgoing request, it's able to know that there's an alias present. It can basically look at the outgoing request and go, oh, there's an alias here, so when I get the response, I need to make sure that in the normalized data I revert the alias to get the normalized representation. And also, when we update other queries, we've got to make sure that we know that the alias exists. So typically we're trying to keep track of which queries have aliases, which ones don't, and what aliases there are, so we can move them around. But yeah, so that's actually normalized caching, and we're going to do something kind of similar. We're going to do a simpler approach to this. So rather than worrying about user ID three, user ID four, and so on, we're going to forget the ID. So when someone does a mutation like update user, ID one two three four, name, new name... mutations typically, the idiom is to return the thing that has been changed. So in theory, what we should get back from update user is the new user, so the new user with ID and name, and in this case, type name. So what we're gonna do is a couple of things. First, we're gonna make sure that type name is on the request of the mutation. So we're gonna add that into the mutation so that we include it.
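The simplified invalidation we're building toward can be sketched like this: remember which typenames each cached query touched, and when a mutation's response contains one of those typenames, drop the matching cached entries. The helper names here are invented for illustration, not taken from the workshop repo.

```typescript
// Sketch of typename-based invalidation: each cached query remembers the
// typenames in its response; a mutation touching any of those typenames
// drops every matching cached query.
const cachedQueries = new Map<string, { body: string; typenames: Set<string> }>();

function cacheQuery(key: string, body: string, typenames: string[]): void {
  cachedQueries.set(key, { body, typenames: new Set(typenames) });
}

function invalidate(mutatedTypenames: string[]): string[] {
  const dropped: string[] = [];
  for (const [key, entry] of cachedQueries) {
    if (mutatedTypenames.some((t) => entry.typenames.has(t))) {
      cachedQueries.delete(key); // deleting during Map iteration is safe in JS
      dropped.push(key);
    }
  }
  return dropped;
}
```

This is why the mutation needs __typename in its selection set: those typenames are what we match against the cached queries.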
And then we're also going to... later on, we don't need to do that yet, we're also going to try and invalidate queries based on this. So, first task. I'm going to give you folks a few minutes to try this out. Have a look on AST Explorer. So for example, this user one here with an ID and maybe a name. And let's say, for example, this user also has an employer, which also has, oops, an ID and, I don't know, just an ID for now. All right, so here's what we have before: a query that looks like this. And here's what we want after. Oops. All right. So I'm going to put this in the chat and I'll give you five minutes to just stick it in AST Explorer and, oh, sorry, the format is not quite there, and see if you can work out how you might do that. So imagine you've got the AST. What would you do? Where would you make changes? I'll give you folks about five minutes to try that out and then I'll come back at about 16:18. Feel free to discuss together; put your ideas in the chat, things like that. All right, this is pretty tricky stuff now, and it's not super obvious. I think one of the hard parts about this is that an AST is like a new language in itself, so it can definitely be hard to take in. The way I'd go about this, if I was completely new to it, would be clicking on the type name and just seeing what that shows me. And you'll see here that we've got something which is called the name. So we've got kind Name, and then look above that: we've got kind Field, and that's surrounded by another field. So let's go there, and this is kind Field and it has a name ID. So you see how this is actually what we call a field node with a name ID. And this is also a field node with the name type name.

17. Adding Type Name to Selection Sets

Short description:

We need to check if a type name exists in the field nodes of our queries. This ensures that the type name is visible when the mutation or query is returned. In GraphQL, fields exist within a selection set, which is a group of fields. We need to find these selection sets and add the type name fields to them. To accomplish this, we can use a visitor function to iterate through the AST and identify selection sets. By appending the type name fields to the selections array inside the selection set, we can achieve our goal.

So the trick is, we wanna go through our queries, and anywhere we see a group of field nodes, we wanna check whether a type name exists, and if it doesn't, we wanna add it. That'll just make sure that when the mutation or query comes back, we can see the type name.

Now, in order to do that, we need to know like where the fields exist. And the fortunate thing about GraphQL is it's only like one place where a field can exist and it's inside this thing called the selection set. So anytime you see something like this in GraphQL where it's a... Some kind of like object like syntax with fields. This whole thing here is actually called a selection set. And inside of it are selections which almost always are fields. If you have a fragment, for example, like my fragment, then it's not a field. It's instead called a fragment spread. But for the most part, you can just assume that like, you know, in a selection set, we've got a group of fields.

So, that's super easy in the sense that, okay, we just look at a selection set and add the fields. But the trickier thing is, we need to find those selection sets, right? So, imagine that we've got the selection set. Oh, yeah, we just go into selections, look for a field with type name and then append it if we need it. So, that seems pretty straightforward. But then we've got to be like, oh, well, where are the selection sets? How do we find them? And as you can see, we've got, for example, here, we've got a selection set with a selection set inside of it, which gets really tricky. So, fortunately there's a fix for this, which is a thing called a visitor function.

So, feel free to follow along. We've got a little utils folder here and a GraphQL.ts file in it. And what we're gonna do is just make a new function here called append type name, or let's call it addTypenameToSelectionSets. Pretty long name, sorry, but hopefully it describes what's going on. It'll take a DocumentNode and return one. Okay, cool.

18. Using Visitor Function to Simplify AST Iteration

Short description:

We can simplify the process by using a visitor function called 'visit' that iterates through the AST for us. It allows us to focus on specific nodes, such as selection sets and fields, without having to manually iterate through all the definitions. This makes our code more concise and efficient.

So, what we're gonna do is, we've got this query, and we could 100% go query.definitions.forEach and start iterating through all the definitions and all the things and all that jazz. But that's a lot of work. So instead, we're gonna go up here to our import and add visit, okay? It's called a visitor function. You might hear someone refer to a visitor when they're talking about ASTs; it's not specific to GraphQL, it applies to most ASTs. What a visitor function does is it goes through the AST for us. And then all we do is say, hey, go through the whole AST, you do all the work, and just give me a shout when there's a selection set or a field or something like that.
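What a visitor buys you can be shown on a hand-rolled mini-AST. This is a deliberately tiny stand-in: graphql's real visit() works the same way but is richer, and can also rewrite nodes by returning new ones.

```typescript
// A visitor in miniature: walk a tree of nodes and call back whenever a
// node of the requested kind appears, so callers never hand-roll the
// traversal themselves.
interface MiniNode {
  kind: string;
  children?: MiniNode[];
}

function visitKind(root: MiniNode, kind: string, callback: (node: MiniNode) => void): void {
  if (root.kind === kind) callback(root);
  for (const child of root.children ?? []) {
    visitKind(child, kind, callback);
  }
}
```

The caller just says "give me a shout for every SelectionSet", which is exactly the shape of the object we pass to visit().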

19. Adding Type Name to Selection Sets

Short description:

We want to add a field, like the example type name field, to the selections array inside a selection set. We can use the visit function to iterate through the document and call a function whenever a selection set node is found. In this case, the function logs the node to the console. After running the test, we can see that the console logs display the selection sets. By checking if a field with the name type name exists in the selection set, we can return the same node and add the type name field. This ensures that the selections are not changed and we get the same query back.

So, if we go back here, the thing that we wanna change is: we wanna add a field, like this example type name field, to the selections array inside of a selection set, okay? The node here is a selection set, and inside it, it has an attribute selections, which contains fields. So what we can do is call visit, and we pass it our document. Now, when this runs, it's gonna go through the whole thing and iterate through it all. Then we give it an object, and we're gonna say selection set, and we're just gonna give it a function. And for our sake, we can just get rid of these brackets. So we've got selection set, and let's just do a console log of node. So what do you think is gonna happen? We pass this our query, and then we say, here's an object with selection set mapped to a function that takes node and prints a console log. So what do you think is gonna happen? Any ideas in the chat? Okay, no worries. What it's gonna do is call that function whenever it finds a selection set node. I've written some tests for you folks to use. So see the first describe on line five; just get rid of the skip. You can stop yours for now if you want, and let's just do an npm run test. In fact, in watch mode as well, just so we... oops. Cool, so I'm expecting the test to fail, but I'm also expecting... hold on, my bad. We've got a normalizeQuery function; don't need to worry about that for now. Let's just make sure that it basically just turns into this addTypenameToSelectionSets for now. So we've got normalizeQuery, which takes a query and then returns a call to the addTypenameToSelectionSets function here. And if we run this now, here we go, we have some console logs. You'll see that the console logs are selection sets.
So if we were to copy this into AST Explorer, we should have a selection set with the field ID and the field user, then we'll get called again with a selection set with ID, name and address, and it should get called a third time with a selection set with ID in it. And that's basically what's happening: we should see three calls, one, two, three. So that's awesome. That's the really quick and dirty way of just getting all those selection sets. But obviously this isn't good enough. We don't just want to console log, we want to add a type name. So the first thing we can do, and TypeScript makes this a lot easier, is say if node.selections.find. I guess we've got a find, that's cool. So we're going to go through the selections. We're going to say if n.kind is equal to a field, and we're going to do Kind.FIELD. So we check; we're going to go through all the entities in the selections. For example here, the first one would be a field, the second one would be a field, the third one would be a field. So first we're going to check if it's a field, and then we're going to check if it has the name type name. So then we do n.name.value is equal to type name. And we're just going to see if we've got any of those. So I don't know why we're... oh, we've got some. Here we go. That's better. Cool. So we've got a little conditional, and we just say: if in the selection set, in the selections, we have a field whose name is type name, then we're just going to return node. And what this will do is, when that function is called and gives us a node, we return the same node, and then it won't be changed. We get the same query back. Is that making sense, folks? Will it print twice? Good question. Yeah, you're on the spot, Rodrigo.

20. Manipulating AST and Intercepting Requests

Short description:

We parsed the query into an AST syntax and manipulated the AST by adding a type name field to all the selections. This gives us a new query with type names. We're intercepting the request and adding the type name fields to the query. We run the utility function on our query to change it and pass it through in the body. Pass the body as a second argument to your handler.

Any thoughts? Just a moment. Okay, well, you all said this is easy, so I'm going to keep going. But definitely stop me if you've got any questions. This is definitely not simple stuff, so don't worry about that.

So we don't want to add a second __typename, which is why we return early on a selection set that already has one. If there isn't already a selection with __typename, here's what we do. So we return the node, and selections is going to be the selections still. We still want to keep all the selections that were in it already — we don't want to remove id or name or anything like that. We want to keep those selections in, but we're going to add a new one. And the new one we're going to add, if I just click __typename here, is this: a field of kind Field, with a name node whose value is __typename. We do kind, which is Kind.FIELD. Then name, whose kind is Kind.NAME. And if you're wondering why name needs to have a kind as well, it's just because name is a type of node too — just like selection set and field, things like that — and the value is __typename. Cool.

And now you'll actually see that we've got this test working. So, if we have a quick look at this test — I'll put this in the chat just so everyone's up to date, and also push it to the repo — what this test does is it passes a query with no type names in it, and then it makes sure that every single selection set has a __typename. And the cool thing is, we've got parse, which takes a string and turns it into an AST, so if we print it out — let's have a look at what the new GraphQL query looks like after we call that function, console log — you should see that now, boom, we've got id, user, id, name, address, but now all the selection sets have a new field, __typename. So, what we've just done is, rather than doing some crazy regex that goes through and does all this complicated stuff, we've parsed the query into an AST, manipulated the AST by adding a __typename field to all the selections, and then returned it, which gives us this new query with all these type names. So, let's have a look. Okay. Cool. Alright. That's on the progress branch. Okay, on a scale of one to ten, how much sense is that making to people so far? There we go.
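The whole transform that test exercises can be sketched dependency-free as a hand-written recursion over the same trimmed node shapes. The workshop itself uses `visit` from the `graphql` package instead; `addTypenameToSelectionSet` below is my own sketch of the same idea:

```typescript
type NameNode = { kind: "Name"; value: string };
type FieldNode = { kind: "Field"; name: NameNode; selectionSet?: SelectionSetNode };
type SelectionSetNode = { kind: "SelectionSet"; selections: FieldNode[] };

const TYPENAME_FIELD: FieldNode = {
  kind: "Field",
  name: { kind: "Name", value: "__typename" },
};

// Return a copy of the selection set with __typename appended (once),
// recursing into any nested selection sets first.
function addTypenameToSelectionSet(node: SelectionSetNode): SelectionSetNode {
  const selections = node.selections.map((f) =>
    f.selectionSet
      ? { ...f, selectionSet: addTypenameToSelectionSet(f.selectionSet) }
      : f
  );
  const already = selections.some(
    (f) => f.kind === "Field" && f.name.value === "__typename"
  );
  return {
    kind: "SelectionSet",
    selections: already ? selections : [...selections, TYPENAME_FIELD],
  };
}
```

Applying it twice is a no-op, which is exactly the early-return behaviour discussed above.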
So we've got type names on all our queries. And the reason we kept this addTypenameToSelectionSets separate is because we do wanna use it during query normalization. The idea of query normalization is that we convert a query from something that's dynamic — where it maybe has different query names, or fields ordered in different ways — and we normalize it so that every query that is fetching the same data has the same representation. That makes it a lot easier for us to make a key for the cache, because imagine you've got a query that looks like this, and then another query fetching the same fields in a different order: we'd want both of those to share a cache entry. We'd want the first one to write to the cache, and then the second one to be a cache hit, even though the ordering is different — it's the same query basically. So that's why we've got a normalization function. I'll maybe put together a readme for anyone who wants to try that after the workshop. But anywho, so you've got addTypenameToSelectionSets. So what we're gonna do is, in our request here, let's have a look. Let's take the body in our mutation handler, just so we don't need to try and pull that out again, and pass it through with the document. And then what we're gonna do is, when we forward the GraphQL mutation, we're gonna include the request, and then we're gonna do body is — let me check this, let's triple check actually — something like that. Okay. There we go. Cool. So we're just gonna update the — oopsie. Cool. So we're gonna do the request, and we're also gonna include the body — the original body, so imagine the JSON that was originally passed — but instead what we're gonna do is add those __typename fields to the query. So in order to do that, we're gonna take that utility function that we made earlier, addTypenameToSelectionSets, and just wrap our query in that.
So what's gonna happen now is — once we have the import for it, cool — we're gonna run that function on our query to change the query to include type names, and then we're gonna pass that through in the body. So now, rather than just forwarding the request, we're actually intercepting it a little bit. And I'll put this in the chat. Just make sure you pass that body in as a second argument to your handler.
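As a rough sketch, the body rewrite can be isolated into a pure helper before the request is forwarded. Everything here is illustrative: `buildForwardBody` is a name I'm inventing, and the AST transform is stubbed out as an identity function standing in for the addTypenameToSelectionSets call described above:

```typescript
type GraphQLBody = { query: string; variables?: Record<string, unknown> };

// Stand-in for the real transform: parse the query, add __typename to
// every selection set, and print it back out. Identity here for the sketch.
function addTypenames(query: string): string {
  return query;
}

// Rebuild the JSON body so the upstream API receives the rewritten query,
// while variables (and any other fields) pass through untouched.
function buildForwardBody(body: GraphQLBody): string {
  return JSON.stringify({ ...body, query: addTypenames(body.query) });
}
```

In the worker, the result would then be forwarded with something like `fetch(new Request(request, { body: buildForwardBody(body) }))`.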

21. Manipulating Response and Caching

Short description:

Once you've done that, you should be able to do a mutation and check that your mutation response contains a type name field. We're going to do the same thing for the query, but with some differences. The tricky part is making sure we don't get a cache hit. Disabling the cache is a good idea. We're going to traverse the response, get all the type names, write them to the cache, and create a key that indicates which queries depend on these type names. When we get a mutation, we'll find all the queries that depend on the type user and invalidate them.

And once you've done that, you should be able to do a mutation and just check that your mutation response contains a __typename field now. So for me, I'm just going to do this quick mutation here. And boom, now I get createUser with __typename included. And then I can change that particular mutation to a different variable. All right, so I'll just run this through, and if I just quickly share my screen, you can see here that now, when I do my mutation, you get a type name back, which is awesome. So for us, what that means is that now we know the type of data that's been manipulated by a mutation. So in theory, we could start invalidating the cache, and we're almost there. But one last thing we need to do is basically the same thing again, but for the query. So here, when we forward our query, we're going to do basically the same kind of thing. So we're going to return a request, our body is going to be — in fact, let's just copy paste. And instead of addTypenameToSelectionSets, we're going to call it normalizeQuery. The main reason being that on a mutation, the only thing you actually want to manipulate is adding type names, whereas on a query there's a lot of stuff that you might want to do later on, like sorting keys and things like that. So we're going to have a separate function call for that, just to make sure that as we make changes further on, we don't end up changing mutations in ways we don't want to. And yeah. Put that in the chat. But you get the general idea. So the tricky bit now is making sure that we don't get a cache hit. But if I do this — oh, I added a type name there. I shouldn't have had that. Let's get rid of that.
Okay, so I'm not seeing the type name, which means something is missing. And import normalizeQuery. And that wasn't a cache hit, was it? No. Oh, you know what? I didn't even save — that's probably the issue. Wait for that to build and give that another shot. Build completed, sweet. All right, so I'm going to try this with — let's add more spaces. That was a cache hit because we're getting rid of spaces. So we just change the order instead. Cool, all right. Disabling the cache is a very good idea, Rodrigo. Basically, when I was doing this last time, I just wanted to quickly clear the cache and wasn't able to find a really clean way of doing it. But yeah, you're right. I should probably just comment this out for now. That's a much smarter idea. Yeah. So we do that now, and you should see that you get a type name back with User. So, one last thing we're gonna do — or two last things. So the next two steps, and we should finish about on time. Step one is, we're going to traverse through this whole response, get all the type names, and then write them to the cache. So, for example, if this returns a user and an organization, we want to store that information and create a key in the cache that says, hey, this query depends on these type names. And then what we've gotta do is, when we get a mutation, we're going to find all the queries that depend on the type User — if its response includes a User — and invalidate them.
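A naive version of that cache-key normalization can be sketched at the string level. `cacheKey` is a hypothetical name; the real version would sort fields in the AST, as discussed, rather than just collapsing whitespace:

```typescript
// Naive cache key: collapse insignificant whitespace in the query and
// serialize variables with their keys sorted, so cosmetically different
// requests asking for the same data map to the same key. (Note the array
// replacer only keeps top-level keys, so this assumes flat variables.)
function cacheKey(query: string, variables: Record<string, unknown> = {}): string {
  const q = query.replace(/\s+/g, " ").trim();
  const v = JSON.stringify(variables, Object.keys(variables).sort());
  return `${q}|${v}`;
}
```

With this, two queries that differ only in spacing or variable ordering hit the same cache entry, which is exactly the behaviour being demonstrated above.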

22. Speeding up and Extracting Type Names

Short description:

Let's speed things up. You should have a KV store config with responses, id, and preview id, plus a new namespace for type names. Restart wrangler to use the type names namespace. Write new JSON to the cache and extract the __typename attributes from the response. Try it out and experiment with it.

So I'm gonna try and speed things up a little bit just so we get things moving forward. So you should have a KV store config that looks like this: we have responses, id, preview id. And I'll just make a new one called type names. Yeah, cool. I'm just gonna paste that in here, add a comma, and like we did before, copy this, add a comma and change it to preview id. And now we've got a namespace called type names, which we can use to store type names separately. In theory you could store them in the same place as responses if you want, as long as you've got a way of preventing overlap between queries and type names — this is just a nice way of keeping a clear distinction. Okay, so if I restart wrangler now, when I do env.typeNames.get or whatever, it's gonna go to this KV namespace. So, what I'm going to do is... when we get some new JSON, we're gonna write it to the cache. But also, we're gonna get all the __typename attributes from the response, which was gonna be one of the challenges, but I don't think we're gonna have time. So what I'll do is I'll put the answer in the chat, but I'd definitely recommend having a look at this, playing around with it, and then trying it out yourself. And seeing if you can...
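For reference, the `wrangler.toml` additions being described might look roughly like this. The binding names follow the transcript, and the ids are placeholders standing in for your own namespace ids:

```toml
kv_namespaces = [
  { binding = "RESPONSES", id = "<responses-namespace-id>", preview_id = "<responses-preview-id>" },
  { binding = "TYPE_NAMES", id = "<type-names-namespace-id>", preview_id = "<type-names-preview-id>" }
]
```

After restarting wrangler, the worker can reach each namespace through its binding, e.g. `env.TYPE_NAMES.get(...)`.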

23. Caching Subscriptions in GraphQL

Short description:

Can you still cache the subscription as well? Subscriptions are not usually cached since they involve live data and events. Caching events can be done on the GraphQL endpoint using fail-safe or dead-letter queues. For our case, we only cache queries and mutations.

Okay, here we go. Here we go. A small question, can you still cache the subscription as well? Subscriptions, I'm assuming you mean like a GraphQL subscription. Wouldn't do that for this. Basically, cause like yeah, we'd be expecting subscriptions to be like live data, which is event driven. So we probably wouldn't cache events, but yeah, if you have a situation where for example, you need to send a subscription, but then also need to send it later on in case it's lost or something like that, you'd need to do that on your GraphQL endpoint, some kind of fail-safe or dead-letter queue or something like that. Yeah, for our case, it's just queries and mutations. But I like to mention the subscriptions. People definitely don't use them enough. So kudos Mark if you're using subscriptions in your projects, that's awesome. So I'm gonna put this in here.

24. Extracting Type Names and Storing in Cache

Short description:

We create a function called extract type names that visits each object within an object and checks if it has a __typename field. If it does, we store it. We paste this function into our utils folder and add tests to ensure it works correctly. We make some adjustments to the TS config to support sets and newer versions of JavaScript. Then, on our response, we call the extract type names function and store the type names and query strings. We use context.waitUntil to ensure that certain promises are completed before the worker shuts down.

You can think of this like a visit function. We just go through the object and we visit each object within the object and ask: does it have a __typename field? If it does, then we're gonna return that back and store it. If it doesn't, then we keep going through the objects until we eventually get all the __typename attributes. So, let's go with that.

Let's paste that into our utils folder. And for now, like I said, I've added some tests. If you wanna wait around and try to make this yourself, just get rid of this and try it on your own. There are some tests that will fail initially, and you can try and satisfy all the criteria. Let's look at sets. Yeah, okay, you might need to add this to your TS config, just so that we have support for sets. There we go. And how did I miss this? Yeah, no idea why this wasn't flagged before, but ES2017 — all right, so we're gonna add that as well. Cool. You know what? All right, let's make it a lot easier. Let's just change this to ESNext. I think that should cover us for everything. Yeah, okay. Not sure build is gonna work, but we'll get there when we get there. Okay, so you just wanna add — oh yeah, that works. Just add this. By default, the TypeScript compiler will assume that you aren't polyfilling or transpiling language features from newer versions of JavaScript — things like Object.values, sets, things of that nature that aren't in all versions of Node. So we're just gonna provide a lib entry to tell TypeScript: hey, we're using some newer features of JavaScript that may not usually be available. Cool. We've got our extractTypeNames.
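A dependency-free version of that traversal might look like this. `extractTypeNames` matches the name used in the workshop, but the body here is my own sketch of the recursive walk being described:

```typescript
// Walk an arbitrary GraphQL response payload and collect every distinct
// __typename value, however deeply nested (objects and arrays alike).
function extractTypeNames(data: unknown, found: Set<string> = new Set()): string[] {
  if (Array.isArray(data)) {
    for (const item of data) extractTypeNames(item, found);
  } else if (data !== null && typeof data === "object") {
    for (const [key, value] of Object.entries(data as Record<string, unknown>)) {
      if (key === "__typename" && typeof value === "string") found.add(value);
      else extractTypeNames(value, found);
    }
  }
  return [...found];
}
```

Because the collector is a Set, a response containing twenty `User` objects still yields `User` once.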

And we've got just a few minutes left. What we're gonna do is, on our response — if it's an error, we ignore it; if it's not an error — we're gonna do extractTypeNames. Make sure you import that as well, from your utils. And we're gonna give it our new JSON, and that's gonna give us an array. And then for each one of those, we're going to go through the type names and do env.typeNames.put with the type name, and then we're gonna put the query string. And there's a cool little thing that you don't get in Lambda, unfortunately: context.waitUntil allows you to say, hey, here's a promise — I'm not gonna await it now, but don't turn off the worker until this promise is satisfied. Which is pretty sweet, because it means we can do stuff like this: give it a bunch of promises to work on. And, cool. So we just give it two things to do, and we say, hey, make sure this promise is complete before you pull down the environment: write the response to the cache, and also, for every type name, store in our type names KV the type name and then — ooh, hold on. I missed something there. Hold on.

25. Caching and Storing Queries

Short description:

Let's try this again. We're gonna go to our cache and do typeNames.get for our type name. We'll get the type names, fall back to an empty array if there aren't any, and then take the previous array from storage and add in our query string. We're wrapping promise.all around a bunch of promises, writing things to the cache and getting the existing collection of queries for each type name. We'll put this in the chat and return the promise from the put. Every time we get a response, we write it to the cache and to a list for that type name. I'll add a readme entry to the repo for the last step. Feel free to ask any questions on the repo or Discord.

Let's try this again. And, as I mentioned earlier, there are much more advanced ways of doing this, and it's a lot to go through, so keep that in mind. But what we're gonna do is go to our cache and do typeNames.get for our type name. So, our type names store, just to help you out, is gonna look something like this: we're gonna have a key, which might be like User, and the value will be a list with maybe one query fetching id, and then maybe another query fetching, I don't know, users or whatever. And we're gonna have that for every type name.

So, let's just check chat, see if everyone's good. Yeah. So, we're gonna get the type names, parse that, and fall back to an empty array if there aren't any type names. And then we're gonna take that previous array that was in storage and also add in our query string. And then this is gonna stop complaining. And that's why we're gonna stringify this — yeah, I'm pretty sure Cloudflare insists on this being stringified. This could be wrong, but cool.

All right, so it's a bit of a mouthful, but hopefully — let's keep that there for now — hopefully this makes sense. We're saying, hey, wait until this promise is satisfied. And the promise that we're doing is wrapping promise.all around a bunch of promises. So the first one is writing the response to the cache. And the second one is — oh, let's make sure that's async — for each type name, we get the existing collection of queries for that type name, and then we append our new one on here. And then let's see what we're missing. Oh. I think we've got one too many. There we go. Cool. All right. Let's see if it's gonna build for me. Nope. Comma expected on line 66. I think we might be missing one here. We can just...

Cool. I'm going to put this in the chat, and I think... Shouldn't we return the promise from the put? I've just added in an await. Let's just make it simpler, like you said. Yeah. Return the promise. Awesome. So what will happen is that every time we get a response, we write it to the cache and also write the query to a list for that type name. What I'll do is I'll add a little readme entry or something like that to the repo with the last step. And then if anyone has any issues, or if you want to have a mess around with the last step, I'll leave issues open in the repo. If you've got any questions, you can stick them on there or on the Discord.
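Putting the last few steps together, a sketch of the background write might look like this. `ctx.waitUntil` and KV's `get`/`put` are real Workers APIs, but the `recordResponse` helper, the binding shapes, and the JSON-array index format are my assumptions for illustration:

```typescript
type KV = {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
};
type Ctx = { waitUntil(p: Promise<unknown>): void };

// Write the cached response and update the per-typename index of query
// keys in the background: waitUntil keeps the worker alive until the
// promises settle, without delaying the response returned to the client.
function recordResponse(
  ctx: Ctx,
  responses: KV,
  typeNames: KV,
  queryKey: string,
  json: string,
  names: string[]
): void {
  ctx.waitUntil(
    Promise.all([
      responses.put(queryKey, json),
      ...names.map(async (name) => {
        // Fall back to an empty array the first time we see a typename.
        const existing: string[] = JSON.parse((await typeNames.get(name)) ?? "[]");
        await typeNames.put(name, JSON.stringify([...existing, queryKey]));
      }),
    ])
  );
}
```

So a response for key `q1` containing `User` objects ends up both cached under `q1` and listed under the `User` index entry.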

26. Handling Mutations and Query Invalidation

Short description:

The final step is to handle mutations in the same way as queries. Instead of storing the type names, we retrieve the queries associated with the type names returned by the mutation. We delete all these queries, ensuring that when we make a subsequent request, the invalidated queries are not used. This approach is similar to how urql handles queries and mutations by default. Unlike Apollo, there is no need to manually refetch queries. As long as the mutations return the updated data, urql takes care of invalidating the relevant queries. Let's continue async.

You got any questions, you can stick them on there or on the Discord. But the final step is just going to be to do the exact same with mutations, where we get the type names from the mutation. The only difference now is, rather than storing them, we're going to go to our type name store and get all the queries for the type names that have been returned. So, for example, if the mutation returns the type name User, we're going to get all the queries that returned User as one of the type names in their response, and then we're going to delete them all. And we do that for every single type name that matches in the mutation. In theory, what that means is when we do a getUsers call, then do something to delete or change a user, and then do the getUsers call again, we'll know for a fact that that query — and all queries that use users — will have been invalidated, and we can get a fresh copy. And that's actually how urql works by default. It's pretty naive, but it's really cool, because compared to Apollo, where you might have to call refetchQueries or something like that — I haven't used it in a while — with urql you can basically just make a bunch of queries and mutations, and as long as your mutations return the data that's changed, you don't need to worry about refetching queries. It works for you. But I have my biases. That's basically a wrap. Like I said, we can continue async.
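The mutation-side invalidation being described could be sketched like so. `invalidate` and the binding shapes are my own naming; KV's `delete` is a real method on Workers KV namespaces:

```typescript
type KV = {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
  delete(key: string): Promise<void>;
};

// For every typename a mutation touched, look up the queries whose cached
// responses contained that typename, delete those cached responses, and
// clear the index entry so the next matching query is a cache miss.
async function invalidate(responses: KV, typeNames: KV, mutated: string[]): Promise<void> {
  for (const name of mutated) {
    const keys: string[] = JSON.parse((await typeNames.get(name)) ?? "[]");
    await Promise.all(keys.map((k) => responses.delete(k)));
    await typeNames.delete(name);
  }
}
```

In the worker, `mutated` would come from running the typename extraction over the mutation's response, mirroring urql's naive document-cache invalidation described above.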
