For years, not being able to cache GraphQL was considered one of its main downsides compared to RESTful APIs. Not anymore. GraphCDN makes it possible to cache almost any GraphQL API at the edge, and not only that but our cache is even smarter than any RESTful cache could ever be. Let's dive deep into the inner workings of GraphCDN to figure out how exactly we make this happen.
How to Edge Cache GraphQL APIs
AI Generated Video Summary
The speaker discusses their experience with edge caching GraphQL APIs, starting with the struggles they faced with a poor database choice. They found success with GraphQL, which helped their servers scale and led to an acquisition by GitHub. The speaker delves into the key piece that enables GraphQL caching, the __typename metafield. They also address challenges with caching at the edge, including authorization and global cache purging. The speaker shares a success story with GraphCDN and mentions the ongoing challenge of handling cache for lists with pagination.
1. Introduction to Edge Caching GraphQL APIs
Thank you for that lovely introduction. Just like every speaker here, and I assume every one of you, this feels absolutely surreal. Standing in front of actual people, looking at your faces. This is insane. I love it. I'm so glad to be back and I'm really excited to be here. I love London. I actually got my very first job here a long time ago. I lived in London for a couple of months and did an internship here. So this city has a special place in my heart, and I'm really excited to be here today to talk to you about edge caching GraphQL APIs.
2. The Story of Spectrum and the Realtime Database
I already had a super lovely introduction, so I can skip a little bit of that, but if you want to follow me anywhere on the internet, my handle is @mxsdbr, which looks super complicated but is actually just my name without the vowels. So just take my name, take out the vowels, and you'll arrive at my handle. I know, I wish I'd chosen something different back when I was way younger and didn't know what I was doing, but it is what it is now. I can't change it anymore.
So, this story starts in 2018. I was the CTO of a young and growing startup called Spectrum, and we were building sort of a modern take on community forums. We built this platform where mostly open source projects had all of their users talk with each other. They reported bugs, they talked with each other about how to use styled-components in my case, but also many, many other open source projects and many developers used it. And we actually grew pretty quickly. Now, I was the CTO of that startup and I had no idea what I was doing. Literally zero. Particularly not how to build a realtime public chat app, which is quite an interesting set of problems to solve, because we were taking what forums were, where you can post threads and comment under them, and we made everything realtime. We tried to combine the best of what 2000s forums give you with what Slack and Discord give you nowadays, the best of both of those worlds. And that came with some really interesting technical problems, because we had all of the read-heavy traffic of a forum. Lots and lots of people came to our platform and read all of the threads and posts and the content that people shared, but all of it was realtime. So all of them subscribed to a WebSocket connection and streamed updates in realtime as people were conversing with each other.
And for some godforsaken reason that I cannot remember, I chose a really terrible database that I will not name, because I don't want to shame anybody, but it's a really small database that probably none of you have ever used, and they advertised themselves as the Realtime Database. And their whole thing was, you can take any database query and you can put a thing at the end and then you get a realtime subscription to any changes on that database query. Damn, that sounds amazing. That sounds exactly like what we need. We're building this realtime chat platform. Surely, that's exactly what we need. So we used it, but the problem was it was really badly built. And I found out in hindsight that this company that built the database had raised some money from investors and they weren't growing very quickly. They didn't manage to get any market share compared to a MongoDB, right? Or even a Postgres or MySQL. And so their idea was, okay, we have this really cool database. We're going to just make it realtime. And then we're going to sell that. And they spent half a year building that, but they never really ran it in production.
3. Struggles with Database and Success with GraphQL
And so it just did not scale, right? We ran into so many limitations with this thing while hundreds of thousands of users were using Spectrum every single month. This database couldn't even handle a hundred realtime update connections, literally. So we were struggling.
Now, if you've ever built a production back end, you know that changing your database is really, really difficult. I know there's ORMs and stuff, and we should have used one of those, but we didn't because this database brought its own ORM. So we thought, that's great. We get everything in one, right? All of this is amazing. So we couldn't easily change the database. And I distinctly remember a very specific timeframe at the beginning of 2018, like December, January, February. Our servers were crashing every single day. Literally. Two or three times a day, our servers would just crash and we would have to manually reboot them, believe it or not, which is absolutely ridiculous.
In order to create these slides, I googled "server downtime", because I wanted to find a photo of what that looks like. And for some reason, Google associates this picture with server downtime. If this is what your server downtime looks like, you have way bigger problems, because your entire data center is on fire. That is not what our server downtime looked like, because ours just crashed. At least they weren't on fire. But anyways, literally two to three times a day our servers were crashing, and I felt incredibly stressed. Because I was the main technical person on that team, I was responsible for making all of this work, and we could not figure out how to get this database to work for us the way we needed it to. And I spent months trying out many different things. We tried many different database settings. We tried to work with their core developers. Nothing worked.
Now, we were using GraphQL, and GraphQL actually is the one thing I would say that worked out extremely well for us. And GraphQL allowed us to build an API super quickly and make it work really, really, really well. Surprisingly well, because we had never used GraphQL before. In fact, one of my very proudest moments is, and side note, but one of my proudest moments is when we eventually got acquired by GitHub. GitHub put the NCC group, which is one of the biggest pen testing security firms on the planet, they paid 11 of their best people to try and hack our systems for eight days straight. Literally eight days full time, 11 of the best security researchers tried to hack Spectrum, because GitHub wanted to know, if we're going to buy this company, are we going to have to deal with any database leaks? And I still remember, I was super scared of that and the proudest moment of my life was when I received that report and the first item said, no major security leaks found. And I was like, fuck yeah, we did that really well.
4. Caching and GraphQL Clients
I have no idea how we did that, but we did not have any security leaks, right? We didn't have a security policy or anything, but I think it's actually because of the way GraphQL works, where each individual resolver tells you which data to get from where. We just put a little access control snippet in every single resolver, and that meant that nobody could access data that they shouldn't.
Anyways, so we were using GraphQL and we were very happy with it. And in my head, after like a month of trying to fix our server database problems, I was like, okay, we have to find a way to reduce our traffic. We have to find a way, without having less users, right, we still want more users but we want less traffic. How can we make that happen? And of course the answer to that is caching. And in my head, I was like, okay, wait, hold on, we've got this public forum thing, right, like we've got a public forum, lots of people access it without being authenticated and just read the stuff, that's like the ideal use case for caching, right. I just want to put a cache thing in front of my GraphQL API, just cache all the queries and hopefully that would reduce our traffic to a level where we wouldn't crash.
As I kept thinking about this, I was like, why can't I just run a GraphQL client at the edge? That's really the question I was asking myself, because if you've ever used a GraphQL client, or heard of one like Apollo or Urql, that's exactly what they do. GraphQL clients make for such a great user experience because, in the browser, they run a GraphQL cache, they cache all the queries that they see and update them very smartly on mutations, and they do a bunch of really smart stuff, and that makes for the really great user experience. So in my head I was like, OK, GraphQL clients do this in the browser, why can't I just do this on the server? I just want the exact same thing, I just want to take Apollo Client and deploy it on the server, right? Why can't that work? And then, even a step further, if I'm already caching, why can't I cache stuff close to my users at the edge? Sunil just talked about Cloudflare and their 250 cities worldwide. Why can't I just put a cache into every single one of those servers, and then not only will it save us a bunch of traffic, but it'll also make the experience so much faster for all of our users worldwide? Why can't I just do that?
And so I started looking into this, and I started reading about how GraphQL clients actually cache, and that's what I want to talk about a little bit today. If you look at a standard GraphQL query like this one, I made this example up, right, but this GraphQL query fetches a blog post based on a certain slug, and it fetches its ID and title, and of the author it also fetches the ID, name, and avatar. And really the one key piece that makes GraphQL caching possible is the __typename metafield, because you can add it automatically to any GraphQL query. And when you add that, the GraphQL query ends up looking like this, right? You have this magic __typename field in your GraphQL query. Now, you don't have to define this. Every single GraphQL API supports it, because it is in the specification. Every single object type in your GraphQL API has to have a __typename field. What does that field do? Well, if we ran this query and got back the data, it would look a little bit like this. It's the post data, and then there are also these two __typename fields that tell us that the post is of the type Post, duh, and the author is of the type User. OK, makes sense, right? We have a post and a user. Why is that __typename field so interesting? What does it even tell us? Well, we can now look at this and go, oh, OK, I can take this entire response, put it into the cache, and I know that this response contains the post with the ID 5 and the user with the ID 1. So what? Why do we need to tag our cached responses with anything? Well, here's the other thing that's cool about GraphQL: it differentiates between queries and mutations.
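As a sketch of what a cache can do with those __typename fields, here is a small, hypothetical helper that walks a response and collects the entity tags. The function name and the response shape are made up for illustration; this is not any client's actual implementation:

```javascript
// Walk a GraphQL response recursively and collect "Typename:id" tags,
// which is how a cache can label what a cached result contains.
function collectEntityTags(data, tags = new Set()) {
  if (Array.isArray(data)) {
    for (const item of data) collectEntityTags(item, tags);
  } else if (data && typeof data === "object") {
    if (data.__typename && data.id !== undefined) {
      tags.add(`${data.__typename}:${data.id}`);
    }
    for (const value of Object.values(data)) collectEntityTags(value, tags);
  }
  return tags;
}

// The response from the example query above:
const response = {
  post: {
    __typename: "Post",
    id: 5,
    title: "GraphQL is awesome",
    author: { __typename: "User", id: 1, name: "Max" },
  },
};

console.log([...collectEntityTags(response)]); // → [ 'Post:5', 'User:1' ]
```

Those two tags are exactly the "this response contains the post with the ID 5 and the user with the ID 1" bookkeeping described above.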
And so if we have a mutation like an editPost mutation, where we say, OK, we want to change the title of this blog post, again, we can add the magic __typename field to this mutation. And we can say, OK, from the editPost mutation, just let me know which type I get back. Now, we as humans, of course, know that editPost probably, hopefully, returns a post. It would be kind of weird if it returned a user. That wouldn't make any sense. But the API tells us that as well.
6. GraphQL Client Mutations and Cached Query Linking
And it does tell the GraphQL client as well. And so what happens is, when the response comes back from the editPost mutation, when I run this against my origin, what I get back looks a little bit like this: the post data with the updated title. OK. But now here's the key. We now know that a mutation just ran against our API, and that mutation changed the post with the ID 5. Because we have this __typename metafield, we can link mutation updates to cached query results. And we can say, oh, shit, we just had a mutation, and the post with the ID 5 changed. That means I need to invalidate any cached query results that contain the post with the ID 5. How do I find those? Well, thankfully, I tagged them when I put them in the cache, right? We took that response, we put it in the cache, and we said this response contains the post with the ID 5. So now, when a mutation comes through our cache, we can go, oh, look, we know that the post with the ID 5 has changed, and we can invalidate any cached query results that contain that post. And that is the magic that makes GraphQL caching work for GraphQL clients. That's really the key piece, this introspectability, this __typename metafield that lets you get the information of what even is in this JSON blob, right? Like, I have this massive amount of data, what even is in there? So that's awesome. And that works beautifully.
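That invalidation loop can be sketched in a few lines. This is a simplified model with made-up names, not how any particular client or GraphCDN actually stores things:

```javascript
// Cached query results are stored alongside the entity tags they contain.
const cache = new Map(); // queryKey -> { data, tags }

function storeResult(queryKey, data, tags) {
  cache.set(queryKey, { data, tags: new Set(tags) });
}

// When a mutation response passes through, evict every cached result
// whose tags overlap with the entities the mutation touched.
function invalidate(mutationTags) {
  for (const [queryKey, entry] of cache) {
    if (mutationTags.some((tag) => entry.tags.has(tag))) {
      cache.delete(queryKey);
    }
  }
}

storeResult('getPost(slug:"intro")', { /* post data */ }, ["Post:5", "User:1"]);
storeResult("getUser(id:2)", { /* user data */ }, ["User:2"]);

// The editPost mutation returned { __typename: "Post", id: 5 }:
invalidate(["Post:5"]);

console.log(cache.has('getPost(slug:"intro")')); // → false (evicted)
console.log(cache.has("getUser(id:2)"));         // → true (untouched)
```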
7. GraphQL Client Caching and List Invalidations
Unfortunately, the magic ends at some point. And the place where it ends is list invalidations. Because if you think about it, we have this cached query response. It contains the post with the ID 5 and the user with the ID 1. Now, let's say we have a posts query that returns a list of posts, right? And this is where list invalidation becomes a little bit tricky. Because this list is going to return the one blog post we have right now. Again, the post with the ID 5 with a certain title.
Now let's say we have a mutation called createPost, and we create a new post. And then from that mutation the origin sends back the new post. When we look at the response data, it has a __typename of Post and an ID of 6. So now we know, OK, a mutation just passed through our cache, and it returned the post with the ID 6. Let's just invalidate any cached query result that contains the post with the ID 6. The problem is, our list does not contain the post with the ID 6 yet. Because how could it? The post didn't exist when we queried that list. So we cannot automatically invalidate that query. And this is the main edge case that GraphQL clients make you work around manually. If you've ever used GraphQL on the client, you very likely had to do manual cache invalidation for these cases. For example, with Urql, this looks a little bit like this. Essentially, you can say, hey, if the createPost mutation passes through the Urql cache, then please invalidate any cached query result that contains the list of posts. And so that way, I can say, OK, when the createPost mutation passes through us, let's just invalidate the posts list, because we know it has changed. If a post was just created, it's probably in that list, so we probably need to invalidate any queries that contain that list. And so there's a little bit of manual work to do to tell the cache how to combine lists with mutations and how those work together.
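The shape of that manual rule can be sketched independently of any particular client. Urql's graphcache expresses it through an updates config; the self-contained toy below models the same idea, with all names made up for illustration:

```javascript
// A tiny cache that remembers which top-level field each result came from.
const queryCache = new Map(); // queryKey -> { fieldName, data }

function cacheList(queryKey, fieldName, data) {
  queryCache.set(queryKey, { fieldName, data });
}

function invalidateField(fieldName) {
  for (const [key, entry] of queryCache) {
    if (entry.fieldName === fieldName) queryCache.delete(key);
  }
}

// Manual rules, in the spirit of a client's `updates` config: when
// createPost passes through, drop every cached `posts` list, because
// tag matching alone cannot know the new post belongs in those lists.
const updates = {
  createPost: () => invalidateField("posts"),
};

cacheList("posts(first:10)", "posts", [{ __typename: "Post", id: 5 }]);
updates.createPost();

console.log(queryCache.size); // → 0
```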
OK, now GraphQL clients do something even more advanced. What I've been telling you about so far is document-level caching. Essentially, this is how people did GraphQL caching at the beginning: they took the entire query result and put it into the cache. However, nowadays, GraphQL clients are even smarter than that.
8. Normalized Caching and Edge Constraints
They do something called normalized caching. So if we go back to our old example of fetching the blog post with the ID, the title, and the author, rather than taking this entire response that we got and putting it in the cache, GraphQL clients nowadays usually split it into individual objects. So they'll take the post data and cache it, but then they'll take the user data and cache it separately. This ends up looking something like this: you have in your cache the data for the post with the ID 5 and then the data for the user with the ID 1.
Why would you do that? What is the point? The point is, imagine I now have a getUser query, right? And I say, get the user with the ID 1. Now the cache can go, oh, hold on a second, you haven't fetched this specific query before, but you have fetched the user with the ID 1 in this other query, and I can just give you that data immediately without having to go to the origin. And so it makes for an even nicer user experience, because you avoid a lot of network round trips; you don't have to go and fetch data that you've already loaded anyway in some other query, somewhere else, deeply nested. We already know the data for the user with the ID 1, so we don't have to go to the server to get it. We can just get it from the cache.
Now, the one thing that's missing here, of course, is post.author, because as we know, post.author corresponds to the user with the ID 1. And so GraphQL clients have a separate data structure that tells us how to link these things together. It looks something like this: it tells us that for the post with the ID 5, the author is the user with the ID 1. It basically stores the relations and links between these objects, so that if you query post.author, it knows how to resolve that, and it knows that that means the user with the ID 1. So that's a little bit about normalized caching and how GraphQL clients really work. GraphQL is awesome for caching. That's the thing I want you to take away. For the last five years, people have been saying you can't cache GraphQL, right? When you read articles about REST versus GraphQL, one of the main points people make is, yeah, GraphQL is awesome, but you can't cache it. And in my head I'm like, that doesn't make any sense. GraphQL actually works really well for caching. It works almost better than if you just build a random REST API. Sure, REST has Swagger and JSON:API and other standards that also add this kind of introspectability, but GraphQL has it in the specification itself. GraphQL is actually awesome for caching.
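A normalized cache plus that link table can be sketched like this. Arrays and other edge cases are left out, and every name here is illustrative rather than any client's real API:

```javascript
// Flatten a nested response into per-entity records plus a link table.
function keyOf(entity) {
  return `${entity.__typename}:${entity.id}`;
}

function normalize(entity, records = new Map(), links = new Map()) {
  const key = keyOf(entity);
  const record = {};
  for (const [field, value] of Object.entries(entity)) {
    if (value && typeof value === "object" && value.__typename) {
      // Store a link instead of embedding the nested object...
      links.set(`${key}.${field}`, keyOf(value));
      // ...and normalize the nested object into its own record.
      normalize(value, records, links);
    } else {
      record[field] = value;
    }
  }
  records.set(key, record);
  return { records, links };
}

const { records, links } = normalize({
  __typename: "Post",
  id: 5,
  title: "GraphQL is awesome",
  author: { __typename: "User", id: 1, name: "Max" },
});

console.log(records.get("User:1").name); // → 'Max'
console.log(links.get("Post:5.author")); // → 'User:1'
```

With the records and the link table in place, a later getUser query for the ID 1 can be answered straight from the "User:1" record, and post.author resolves by following the "Post:5.author" link.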
So, going back to my original question, why can't I just run a GraphQL client at the edge, right? Why can't I just take this cache that I already have in my GraphQL client and just run it on a server? They're already doing everything I need them to. I just want to take this, put it on a server so that our servers don't crash anymore, right? I just wanted to solve our own problem. Now, the tricky bit here is the edge piece. Because as it turns out, caching things at the edge has slightly different constraints than caching things in the browser.
9. Edge Caching and Authorization
We use Fastly Compute@Edge. Sorry, Sunil, we don't use Cloudflare. We use Fastly. And Fastly's Compute@Edge has these 60 worldwide data centers, which correspond to wherever they've placed their machines around the world. And it allows you to run code, Rust code, in those 60 data centers. And that's really cool. And that's really powerful. I can deploy something to 60 worldwide data centers. Cool. All right.
But actually, edge caching isn't as simple as I thought, because of authorization. This is the main difference between GraphQL clients and caching things on the edge. Because if you think about it, a GraphQL client that runs in your browser works on the assumption that everything it has in its cache can be accessed by the current user. It doesn't need to worry about authorization, because it is in your browser anyway, right? It knows that if something is in the cache, then you can access it. So if you request the user with the ID 1, and I have that in the cache, I can just give it to you, because I know that you've already fetched that in your browser. So you obviously have access to it, and I don't need to worry about it. That's not the case on the edge, right? If you're running something on a server, you're getting requests from lots of different users to the same cache, and suddenly you have to worry about, crap, actually, that person isn't allowed to access the data for the user with the ID 1. Oops. Did not think about this. This is slightly different from what GraphQL clients do.
So if we go back to our cached query result here, rather than just tagging it with the post with the ID 5 and the user with the ID 1, we also have to tag it with the specific authentication token that the person who sent the request used. We use something like this. Now, this token, random base64 garbage, isn't actually anywhere in the query. It's metadata that gets sent with the request, either in a cookie or in the authorization header or somewhere like that. We tag every single cached query result with that token, so that when we check, hey, do we have the user with the ID 1 cached anywhere, we can go, hold on, do we have the user with the ID 1 cached for this specific authentication token? Is this user actually allowed to access this data? That was a critical piece that we kind of missed at first, when we thought we could just take GraphQL client caching and put it at the edge.
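The fix can be sketched by making the token part of the cache lookup. This toy model uses made-up names and a plain key prefix; a real system would treat the token more carefully, but the idea is the same:

```javascript
// On the edge, the cache key must encode who asked, not just what was asked.
function cacheKey(query, authToken) {
  // Unauthenticated requests can share one public cache entry.
  return `${authToken ?? "public"}|${query}`;
}

const edgeCache = new Map();
edgeCache.set(cacheKey("getUser(id:1)", "token-abc"), { id: 1, name: "Max" });

// Same query, different token: a cache miss, so the request goes to the
// origin, which decides whether this user may see the data at all.
console.log(edgeCache.has(cacheKey("getUser(id:1)", "token-xyz"))); // → false
console.log(edgeCache.has(cacheKey("getUser(id:1)", "token-abc"))); // → true
```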
10. Global Cache Purging and GraphCDN
Maybe be a little bit more careful. The other thing that's a little bit more tricky is global cache purging. Because if you're in a browser and a mutation passes through and says editPost, well, you just kick out that JavaScript object. It's all in your browser, in memory; it doesn't matter. You can just take the post with the ID 5, kick it out of your cache object, and you're done. Cache invalidation solved.
We have 60 worldwide data centers. When we see a mutation, we can't just go, oh, we're going to kick this out of memory. It's not quite as simple as that, because we have to invalidate it everywhere, not just in the one data center that the mutation passed through. Thankfully, Fastly does a lot of this for us, and that's also one of the main reasons we use Fastly: Fastly has incredibly fast global cache purging. They have this API where you can post to Fastly and say, look, I want to invalidate this specific tag, say the post with the ID 5, and Fastly will invalidate the cached query result everywhere in the entire world within 150 milliseconds.
150 milliseconds. When I learned that, I was like, hold on, how fast is the speed of light, right? I looked this up: light takes about 130 milliseconds to go once around the globe. Fastly can invalidate cached query results globally in 150. How does that make any sense? That's incredible. Of course, the trick is, the purge doesn't have to go around the entire world. It goes in both directions at the same time, so each side only has to travel halfway around the world, and that's how they can do it so fast.
Anyways, so that's awesome. And that is really where GraphCDN came from. We had this problem at Spectrum, and I could not solve it. And nowadays we figured, well, if we had this problem, maybe other people also have this problem. And so we tried for months to make this work, and eventually got it to work, and that is what GraphCDN is. And that's what I've been doing for the past few months, working on GraphQL caching. All right, thank you, everybody, for having me. I've got a whole bunch of stickers, so if you find me afterwards, you can have a sticker. Come and talk to me. Happy to give you one. I've got enough for everybody. And thank you for having me. This has been awesome. [Applause] I would like a sticker, please.
11. GraphCDN Clients and Success Story
Of course. Here you go. First things first. Two stickers each. So, that was actually my first comment I wanted to make. I think the GraphCDN logo is really awesome. Thank you, I appreciate that. I don't know if you did anything for the design, but I loved it. I love it. Appreciate that.
So, this is my second or third time I see you do a talk. I always feel like I'm watching a magician giving a talk. It's really great, the stuff you're always talking about.
My first question for myself is, so GraphCDN has been live for now just under half a year, right? Something like that, yeah. Is there any client, without discussing names if you're not allowed to, that you're like, oh my god, they're using my product? That's amazing.
There's a few clients where I'm very blown away. A recent one that we can make public is italic.com. They're an e-commerce store trying to give you high-quality stuff without a brand. So they find the manufacturers in China that are responsible for making luxury bags, for example, right? They find the factory that makes Dior bags, and then they go to the factory and say, hey look, give us the same bags, just don't put the logo on them. And then they sell them almost at cost. And it's way, way, way cheaper and super high quality. And they've been growing like crazy. They're unfortunately only available in the US, but they've been growing like crazy. They just raised a Series B or C and an insane amount of money. And they were struggling with scaling, and they were using GraphQL for the new website. They put GraphCDN in front of it, and we've taken, I think, 90 or 95% of the traffic away from their service.
QnA: Introduction to Q&A
And that to me is just like, I'm happy. You know what I mean? It really solved the problem for somebody. He's a good man. I can imagine. Congratulations. I wanted to remind everyone, if you're asking questions on Slido, you can win t-shirts for the best question. I didn't know this. You can't win one. You're going to ask questions for yourself? I'm going to ask a question right now and see if I can't win. But to identify, we need to know who you are. So if you ask a question on Slido, be sure to use your real name. And if you're watching remotely, because remote people will get the t-shirts shipped if they win the t-shirt, be sure to use your Discord handle as your name on Slido, too, so we can identify you and find you later on.
QnA: Handling Cache of List with Pagination
First question is from Anna. How to handle properly the cache of a list with pagination? It seems quite a complicated operation.
That's a great question. Yes. Pagination makes caching quite a bit more tricky, particularly invalidation. Essentially, with GraphQL clients you have a lot of manual control, and we try to give you that same manual control. We can go, okay, I know, for example, in our use case at Spectrum, if somebody posted a new post, then that post was posted in a specific community. So we knew, okay, we don't want to invalidate every list of posts. That doesn't make any sense, right? That would kill our cache. Instead, what we could do is invalidate just the lists of posts from that one community, if that makes sense. That way you can work around this list issue a little bit, but it is honestly the hardest problem about GraphQL caching, and I think we're going to have to work on making it a lot easier, because at the moment it's still a bit of a manual pain to implement all of that. So I hope we can make that easier in the future.
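The per-community invalidation described above can be sketched as a tag-based cache. This is a hypothetical illustration of the idea, not GraphCDN's actual implementation or API: each cached query result carries tags for the lists it contains, and a new post purges only the tag for its own community's list.

```typescript
// Hypothetical tag-based cache sketch. A cached query result is stored under a
// query key and tagged with the lists it contains (e.g. "Community:42/posts").
// Purging a tag removes only the cached queries carrying that tag, so other
// communities' lists stay cached.

type CacheEntry = { result: unknown; tags: Set<string> };

class TaggedCache {
  private entries = new Map<string, CacheEntry>();

  set(queryKey: string, result: unknown, tags: string[]): void {
    this.entries.set(queryKey, { result, tags: new Set(tags) });
  }

  get(queryKey: string): unknown | undefined {
    return this.entries.get(queryKey)?.result;
  }

  // Delete every cached query whose tag set contains the given tag;
  // returns how many entries were purged.
  purgeTag(tag: string): number {
    let purged = 0;
    for (const [key, entry] of this.entries) {
      if (entry.tags.has(tag)) {
        this.entries.delete(key);
        purged++;
      }
    }
    return purged;
  }
}

const cache = new TaggedCache();
cache.set("postsInCommunity(42)", ["post-1"], ["Community:42/posts"]);
cache.set("postsInCommunity(7)", ["post-9"], ["Community:7/posts"]);

// A new post in community 42 invalidates only that community's list;
// community 7's list survives in the cache.
cache.purgeTag("Community:42/posts");
```

The key design choice is that invalidation is scoped by tag rather than by type: purging every `Post` list on any new post would empty the cache, while purging only `Community:42/posts` keeps the hit rate high.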
Thankfully, a lot of the people on our team have worked on GraphQL caching for quite a while, we hired a lot of the urql team, and so if anybody can figure it out, they can. I'm not smart enough, but if anybody can, they can, and I hope we get there eventually.
You're just a visionary leader. No, I'm just an idiot in the room that doesn't know what they're doing. Cool.
We have time for another question, and that's from, well, Anonymous. They say, congratulations for hiring most of the urql core team. Are you completely betting on urql, and why, instead of Apollo or Relay? That's a great question. No, we actually have nothing to do with urql. I mean, we use it, and we're happy with it, but we don't have anything against Apollo or Relay. The fact that so many of the urql core team ended up working at GraphCDN was almost by chance, because they just happened to be free and looking for jobs, right? Individually. Not as a group. All of these people left their previous jobs and were looking for new opportunities. And we were like, hold on. You've worked with GraphQL caching, right? You want to work some more with GraphQL caching? And thankfully, all of them went, hell yes, I want to do that. So that's how that happened. But we have nothing to do with urql.
QnA: GraphQL Caching and Time-Sensitive Data
We work with Apollo and Relay as well. It doesn't matter which GraphQL client you use. Just pass your requests through our gateway and we will cache it at the edge. Handling GraphQL caching with time-sensitive data is an unsolved problem, but GraphQL is evolving with solutions like defer and stream. By splitting queries and caching specific data, we can provide a performant experience for live and static data. We're investigating this common use case and hope to find a solution eventually.
They'll obviously have time to maintain it, but it's still a Formidable Labs project, and it will be for the foreseeable future. And we work with Apollo and Relay as well. To GraphCDN, it really doesn't matter which GraphQL client you use. You just have to pass all of your requests through our gateway, and we will cache them at the edge just fine.
Alright, thanks. We'll do one more and go a little bit over time. The question is from Anonymous, so they won't get a t-shirt. How would you handle GraphQL caching where part of the data is time-sensitive, like live data? That's another great question. This is currently also a bit of an unsolved problem. However, GraphQL is evolving, and if you've heard about defer and stream, those might potentially be solutions to that. The way that works is, when you send a query, we could look at that query and go, okay, this stock ticker information is really real-time, and we probably shouldn't cache it because it changes every second anyway. It doesn't make any sense. But the blog post that you're fetching at the same time, that's actually super cacheable. So, with defer and stream and query splitting, we could split those into two separate queries at the edge, send them to the origin separately, cache only the blog post, and then put them back together and send them to the client. That way, when you send the same query again, we could load the blog post from the cache and only send the live data request to the origin. And then with stream and defer, we could even return that blog post to the client before the rest of the data is there, which means it would be super performant, and you, as a GraphQL user, wouldn't have to do any work to get the live data to be live but the static data to be cached. That's something we're investigating, and hopefully we'll get there eventually, because this is quite a common use case.
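The query-splitting idea sketched in that answer can be illustrated in miniature. This is a hypothetical sketch, not GraphCDN's implementation: a real edge would operate on the parsed GraphQL AST, whereas here top-level field names are modeled as plain strings, and which fields count as "live" is an assumed input.

```typescript
// Hypothetical sketch of edge-side query splitting: fields marked as live
// (e.g. a stock ticker) are routed to the origin on every request, while the
// remaining fields (e.g. a blog post) can be served from the cache.

type SplitResult = { cacheable: string[]; live: string[] };

function splitSelection(fields: string[], liveFields: Set<string>): SplitResult {
  const cacheable: string[] = [];
  const live: string[] = [];
  for (const field of fields) {
    // Route each top-level field to the live or cacheable sub-query.
    (liveFields.has(field) ? live : cacheable).push(field);
  }
  return { cacheable, live };
}

// A query asking for both a blog post and a stock ticker gets split:
// "blogPost" can be cached, "stockTicker" always goes to the origin.
const { cacheable, live } = splitSelection(
  ["blogPost", "stockTicker"],
  new Set(["stockTicker"]),
);
```

After the split, the edge would answer the `cacheable` sub-query from its cache when possible, forward the `live` sub-query to the origin, and merge the two results before responding, which is exactly the behavior defer and stream would let the client observe incrementally.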
All right. Well, that's all the time we have, unfortunately. But if you have any questions, well, Max is here all day, right? Stickers. Come get stickers. Everyone, remember the CSS clapping system.