1. Introduction to GraphQL Performance and Monitoring
In this talk, I'll cover GraphQL performance and monitoring, using a simple hotel property management system as a running example. We'll start by looking at the HTTP calls a front-end client would have to make to render the user interface and show the list of bookings in a hotel.
Hi, everyone. My name is Ankita Masand. I'm an Associate Architect at Treebo. I work on Treebo's SaaS product called Hotel Superhero, a cloud-based hotel property management system used by various hotel chains across the world. It provides features for creating and managing bookings, generating invoices and various reports, and configuring the prices of rooms in a hotel. We have been using GraphQL extensively for this application.
Today, I'll talk about GraphQL performance and monitoring. Here's the list of topics we'll look at in this talk: performance implications of using GraphQL in the stack, designing GraphQL schemas, batching and caching at a request level using data loaders, lazily loading some fields and streaming responses in a GraphQL query using the new defer and stream directives, caching in GraphQL, and finally, monitoring GraphQL queries using New Relic. Let's get going.
We'll understand the implications of using GraphQL through a simple use case of a hotel property management system. What you see on the screen is a representation of the Big Bank Hotel. It has three room types: room type 1, 2, and 3. 1A, 1B, and 1C are the rooms for room type 1, and the blank boxes indicate bookings for particular rooms. Booking 1 and booking 2 are for room number 1A. A bigger box indicates that the booking spans more days.
What would it take to render this UI using a front-end client? This is the view a hotelier sees on screen to understand and manage bookings for their hotel. Let's look at the HTTP calls we would have to make from the front-end client to render this user interface and show the list of bookings in this hotel. We'll first make parallel calls to fetch hotel data like name and location. We'll then make calls to fetch room types, rooms, and bookings. Once we get the list of bookings from the downstream services, we'll make calls to fetch bills. Every booking is associated with a bill, and there are also some attachments in a booking that we have to show on the user interface, so for every booking we'll fetch its corresponding bill and attachments. Every booking is made by a particular user, so we'll then call the users API to fetch more details about that user, along with their preferences. This looks okay, and there are not too many calls, so this is something we are used to. But these are the API calls only when there are three bookings in a hotel. If you looked carefully, we called the bills API three times, which means there are three bookings in the hotel.
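As a sketch, the front-end fetch sequence might look like this. The endpoint paths are illustrative, and `getJSON` is a stand-in for `fetch(url).then(r => r.json())` that records calls instead of hitting the network:

```javascript
// Sketch of the REST waterfall described above (hypothetical endpoints).
// Note the dependent calls: bills and attachments can only start once the
// bookings response has arrived.
const requests = [];
const getJSON = async (url) => {
  requests.push(url); // record the call instead of really hitting the network
  // pretend the bookings endpoint returns three bookings
  return url.endsWith('/bookings') ? [{ id: 1 }, { id: 2 }, { id: 3 }] : {};
};

async function renderBookingsView(hotelId) {
  // round trip 1: independent resources fetched in parallel
  const [hotel, roomTypes, rooms, bookings] = await Promise.all([
    getJSON(`/hotels/${hotelId}`),
    getJSON(`/hotels/${hotelId}/room-types`),
    getJSON(`/hotels/${hotelId}/rooms`),
    getJSON(`/hotels/${hotelId}/bookings`),
  ]);
  // round trip 2: one bill call and one attachments call PER booking
  await Promise.all(bookings.map((b) => getJSON(`/bills/${b.id}`)));
  await Promise.all(
    bookings.map((b) => getJSON(`/bookings/${b.id}/attachments`)),
  );
  return requests.length; // 4 + 3 + 3 = 10 calls for just three bookings
}
```

Even in this tiny three-booking example, every extra booking adds two more dependent calls.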
2. Performance Implications of Using GraphQL
A hotel cannot afford a property management system that handles only three bookings; with hundreds of bookings, the number of round trips explodes. GraphQL looked worth experimenting with: it gives the response in one API call, with no multiple round trips to the server, and it is declarative in nature — the client specifies the data it needs, hence a considerable reduction in response size. We'll look at the GraphQL query we use for collectively fetching the response of all these API calls. It looks cool because we don't have to write front-end code to manage when to fire particular APIs. So what are the performance implications of using GraphQL? After all the hard work, we found that the performance of the application had not improved much for bigger bookings.
A hotel cannot afford a property management system for managing just three bookings. What would happen if there were hundreds of bookings in a hotel? We would have to make 100 API calls to fetch bills and another 100 for attachments. There would be too many round trips from the front-end client to the server. How do we solve these issues?
We were at an early stage of building this application, and it seemed okay to experiment with GraphQL and see how it turned out. GraphQL would give the response in one API call — no multiple round trips to the server. It is declarative in nature: the client specifies the data that it needs, and hence there's a considerable reduction in the response size. There is no repeated boilerplate code for different front-end applications. And strongest of all, it has a strong type system. The pitch went really well, and the front-end team got a chance to experiment with GraphQL and add it to the stack.
Let's look at the GraphQL query that we use for collectively fetching the response of all the API calls. There's a query hotelById. Inside hotelById, we are fetching room types, and for every room type we are fetching rooms. We are also fetching bookings in the same query, and inside bookings we are fetching a bill for every booking, attachments, and customers. For every customer, we are fetching their preferences. Let's see how each of these queries is resolved on the GraphQL server. For resolving the hotelById query, we make a call to the downstream service to fetch the details of a hotel. For resolving room types, we make a call to fetch room types from a downstream service, and for every room type, we fetch its rooms. Then there's the call we make to fetch bookings, and for every booking, we make calls to fetch its corresponding bill, attachments, and customers — and for every customer, a call to fetch their preferences.
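Reconstructed from the talk, the query might look roughly like this — the field names are illustrative, not taken from the real schema:

```graphql
query HotelBookings {
  hotelById(id: "big-bank-hotel") {
    name
    location
    roomTypes {
      name
      rooms {              # fetched per room type — we'll revisit this choice
        number
      }
    }
    bookings {
      checkIn
      checkOut
      bill { amount }              # resolved once per booking
      attachments { url }          # resolved once per booking
      customers {
        name
        preferences { key value }  # resolved once per customer
      }
    }
  }
}
```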
This looks cool because we don't have to write code on the front-end side to manage when to fire particular APIs. For example, bills depend on bookings: only once we get the bookings response can we fire the bills APIs. We also don't have to run maps on the front end to map room types to rooms, rooms to bookings, and bookings to bills — GraphQL does all of that. So what are the performance implications of using GraphQL? After all the hard work, we found that the performance of the application had not improved much for bigger bookings — bookings that span more days or have more customers.
3. Optimizing GraphQL API Calls
In a GraphQL query, we make multiple API calls to fetch the required data for the user interface. This can result in a large number of calls to downstream services, especially when dealing with multiple room types, rooms, bookings, and users. To optimize performance, we need to reconsider how we structure our GraphQL schema and API calls. Instead of fetching data that is not immediately necessary, we should only retrieve the essential information. By aligning the GraphQL schema with the domain model and user interface requirements, we can reduce the number of API calls and improve the overall performance.
Or for hotels that have more room types and more rooms. So let's look at the list of API calls that we make to the downstream services, considering an example of 25 bookings. We'll make one call to fetch the details of a hotel and one call to fetch room types — let's say it returns six room types. For every room type, we'll fetch its corresponding rooms, so we'll make six calls to the /rooms API, then one call to fetch bookings. This API returns 25 bookings, so we'll make 25 calls to fetch bills and 25 calls to fetch attachments — one of each per booking. Collectively, there are 100 users across all of these bookings; one booking can have multiple users. So we'll make 100 calls to fetch user details and 100 calls to fetch their preferences. In total, there are 259 calls that we have to make to fetch 25 bookings and show them on the user interface.
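The arithmetic above can be tallied directly:

```javascript
// Tallying the downstream calls for the 25-booking example from the talk.
const calls = {
  hotel: 1,
  roomTypes: 1,
  rooms: 6,          // one per room type
  bookings: 1,
  bills: 25,         // one per booking
  attachments: 25,   // one per booking
  users: 100,        // one per user across all bookings
  preferences: 100,  // one per user
};
const total = Object.values(calls).reduce((a, b) => a + b, 0); // 259
```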
Yes, we have to make 259 calls to the downstream services to resolve just one GraphQL query. What would happen if there were more room types and rooms in a hotel? The number of API calls would keep increasing based on the number of room types, rooms, bookings, and the number of users in these bookings. How do we make this right? Let's analyze what we are actually doing in this query. We are fetching the entire response in one GraphQL query, including the users and their preferences. We are fetching room types inside the hotel object, which is all okay. But for every room type, we are calling the rooms API to fetch rooms, because this is the structure the user interface expects — we do this so we don't have to run loops on the front end to produce the mapped object the user interface requires. We are fetching bills inside the booking object, which means we call the bills API for every booking. And we don't need to display the user preferences on load — why fetch them in the initial call and show a loading state to the user? This is unfair to the user, who isn't even interested in seeing user preferences in the first go. How do we solve these issues? Should a GraphQL schema map one-to-one to your user interface, or to your domain models? If you look carefully, we are fetching room types inside the hotel object because room types are a property of a hotel, which is all okay. But we are fetching rooms inside each of these room types. Rooms are not a property of a room type; rooms are a property of a hotel. The domain model considers rooms a property of the hotel, not of a room type. We are doing this only because that's how the user interface we're trying to build expects it.
4. Optimizing GraphQL Schema with Data Loader
We should design our GraphQL schema more like our domain model to satisfy different front-end clients. Batching and caching using Data Loader can significantly reduce API calls to downstream services. Data Loader collects API requests within a single frame of execution and fires batch API calls. This reduces the number of calls from 259 to 8 for fetching 25 bookings. Implementing Data Loader in the GraphQL server is not difficult.
So at which level in the query should we fetch rooms? We should try to design our GraphQL schema more like our domain model, because if you design it based on your user interface, you will be able to satisfy just one use case of your front-end clients. And GraphQL schema should be designed in a robust way so that it can be used by different front-end clients.
The rooms field should be a property of the hotel, not of room types. Let's move ahead. Here we are fetching preferences inside the user object, and preferences are a property of a user — exactly as the domain model expects.
How do we solve this problem of multiple API calls for the user preferences field, even though it is mapped the right way? Here comes the interesting part of the talk: batching and caching using Data Loader. You can only truly appreciate data loaders once you have seen multiple calls going to your downstream services to resolve one single GraphQL query.
Data Loader basically collects the API requests within a single frame of execution. What you see on the screen: before using data loaders, we made 259 calls to the downstream services; now that is reduced to just a handful of calls. How did this happen? Data Loader doesn't fire API calls to your downstream services immediately. It collects all the API requests within a single frame of execution and then fires one batched API call. For example, to resolve every bill, we were calling the bills API 25 times. Here, this is a batched call: Data Loader collects all the bill IDs and calls the downstream service only once. The same goes for attachments, user IDs, and preferences. If you count, there are just eight calls that we have to make to the downstream services to fetch 25 bookings. Data Loader is a great addition to the stack, and it is not very difficult to implement in the GraphQL server.
So this works magic by reducing 259 calls to downstream services to eight, and that is a great performance boost for your applications. Let's look at the implementation of the bills loader and users loader. The bills loader takes in some keys — the bill IDs — and calls the batched bills function, whose implementation we'll look at in a moment. You have to return these data loaders from the context object so that they are available in all the resolvers. The batched bills function collects all the bill IDs within a single frame of execution, calls the downstream service only once, and returns an array of bills. So from your resolver, you just write billsLoader.load with a bill ID. All these bill IDs are different, but the GraphQL server won't call the bills API immediately — it collects all the bill IDs and then fires the batched call at once.
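A minimal sketch of the batching idea — this is a toy stand-in for the real `dataloader` package, not its actual implementation: `load()` calls made within one frame of execution are collected and resolved with a single batched call.

```javascript
// Toy DataLoader-style batcher (illustrative, NOT the dataloader package).
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.queue = [];        // pending { key, resolve } entries
    this.cache = new Map(); // request-level cache: key -> promise
  }
  load(key) {
    if (this.cache.has(key)) return this.cache.get(key); // cached hit
    const p = new Promise((resolve) => {
      this.queue.push({ key, resolve });
      // schedule one flush after the current frame of execution
      if (this.queue.length === 1) queueMicrotask(() => this.flush());
    });
    this.cache.set(key, p);
    return p;
  }
  async flush() {
    const batch = this.queue.splice(0);
    // ONE downstream call for all collected keys
    const results = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(results[i]));
  }
}

// Pretend downstream service: one call per BATCH, not per bill.
let downstreamCalls = 0;
const billsLoader = new TinyLoader(async (billIds) => {
  downstreamCalls += 1;
  return billIds.map((id) => ({ id, amount: 100 * id }));
});

async function demo() {
  // Three resolvers asking for three different bills in the same tick...
  const bills = await Promise.all([1, 2, 3].map((id) => billsLoader.load(id)));
  return { bills, downstreamCalls }; // ...served by a single batched call
}
```

In a real GraphQL server, you would construct the loaders per request and return them from the context object, as described above.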
5. Optimizing API Calls and Deferring Execution
Same goes for users. Batching improves performance by reducing API calls, and data loaders also cache responses for repeated keys within the same request. Using a batched user profile function, we can fetch user profiles with far fewer API calls. And by deferring the execution of non-critical fields, such as user preferences, we can improve performance and load data asynchronously without blocking the main view.
Same goes for users. This is the batched function that we can use for users: these are all different users, but we call the downstream API only once. Batching really helps a lot in boosting the performance of the application.
Let's see how data loaders help with caching at a request level. If you call the same key twice in a data loader within the same request, the subsequent responses are served from the cache. Let's understand this with the help of an example.
Let's look at the implementation. This is the batched user profile function: it collects the keys and then hits the downstream service. If you look at userProfileLoader.load, this is the same user ID every time — Bob's user ID. On the first call, we hit the downstream service; the rest are identical requests, so the downstream service is called only once. Perfect. It looks like we've solved the multiple-API-calls disaster using batching and caching — 259 calls reduced to eight. Can we do more? Can we improve further?
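The request-level caching idea can be sketched without any library at all — a `Map` that lives for the duration of one request dedupes identical keys (the names here are illustrative):

```javascript
// Hypothetical sketch of request-level caching: identical keys within one
// request share a single downstream call.
let apiCalls = 0;
const fetchUserProfile = async (userId) => {
  apiCalls += 1; // pretend HTTP call to the users service
  return { id: userId, name: `user-${userId}` };
};

const cache = new Map(); // lives for ONE request only
function loadUserProfile(userId) {
  // repeated keys get the SAME in-flight promise back
  if (!cache.has(userId)) cache.set(userId, fetchUserProfile(userId));
  return cache.get(userId);
}

async function demo() {
  // "bob" is requested three times, e.g. he appears in three bookings...
  await Promise.all([
    loadUserProfile('bob'),
    loadUserProfile('bob'),
    loadUserProfile('bob'),
  ]);
  return apiCalls; // ...but only one call goes out
}
```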
If you noticed carefully, the preferences field in the user object is not on the critical path of user interface interactivity. Can we defer the execution of the user preferences resolver? Yes, we can — using the defer directive. Let's see how. All we have to do is add the defer directive to the preferences field in the query. In the response, preferences would be null initially, until GraphQL resolves the field on the backend; then, over the same HTTP connection, we get the response of preferences as patches. What you see here is the response for bookings object one, customer one, and these are the preferences of that particular customer. Preferences are loaded asynchronously on the user interface, and they don't block the main view. Until now, we've been thinking of it as: a request gets resolved to a response.
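A sketch of what this might look like. Note that in the GraphQL incremental-delivery proposal, @defer is attached to a fragment rather than to a plain field, so the query would be written roughly like this (field names are illustrative):

```graphql
query HotelBookings {
  hotelById(id: "big-bank-hotel") {
    bookings {
      customers {
        name
        ... @defer {
          preferences {   # arrives later as a patch over the same connection
            key
            value
          }
        }
      }
    }
  }
}
```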
6. Optimizing GraphQL Requests and Caching
With defer and stream, we send a request to the GraphQL server, subscribe to an observable, and the same HTTP connection stays open while the server keeps sending responses and patches as the fields execute. If a hotel has a hundred bookings, you don't want to fetch all hundred in the same query — you won't even be able to fit them in the user's viewport. And since hotel room types and room numbers don't change often, we can cache static data that doesn't depend on the identity of the user in Redis or Memcached for some fixed TTL. But even then, we are still hitting the GraphQL server. Could we get this response from the nearest CDN server without hitting the GraphQL server at all? Yes — we're talking about GET requests here. The first milestone is converting POST to GET requests, which we can do using automatic persisted queries; it is very simple to implement.
So we send a request over an HTTP connection to the GraphQL server. The GraphQL server resolves all the fields in a query and sends back the response. Defer and stream are easy to understand if you imagine them this way.
We send a request to a GraphQL server, we're subscribed to an observable, the same HTTP connection is still open, and the GraphQL server keeps sending responses and patches as the fields execute. Let's also see how we would use the stream directive.
Let's say you have a hundred bookings for a particular hotel. You don't want to fetch all hundred bookings in the same query — you won't even be able to fit them in the user's viewport. So in this query, we are telling GraphQL: please send me 10 bookings and stream the rest as patches over the same HTTP connection. We get 10 bookings in the initial response, and the subsequent bookings are served as patches.
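A sketch of such a query, assuming a server that implements the @stream proposal (field names are illustrative). @stream applies to list fields and takes an initialCount argument:

```graphql
query HotelBookings {
  hotelById(id: "big-bank-hotel") {
    bookings @stream(initialCount: 10) {  # first 10 now, rest as patches
      checkIn
      checkOut
      bill { amount }
    }
  }
}
```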
This all works well for a single request, but there's one important thing that we have missed. The hotel room types and room numbers don't change quite often. Can we do something about this? Let's look at what resolver level caching is.
So we can cache static data that does not depend on the identity of the user in Redis or Memcached for some fixed TTL. The room types and rooms fields of a hotel are static and the same for all users — a good example of resolver-level caching. Let's see how we would do that.
When a front-end client requests the hotel bookings query from the GraphQL server, while resolving room types we first check whether the room types are present in the Redis cache. If they are, we serve them directly from the cache. If not, we hit the downstream service, store the response in Redis, and send it back. The same goes for the rooms resolver: we check whether the rooms are present in the Redis cache; if so, we serve them directly from the cache, otherwise we hit the downstream service, store the response in Redis, and return it to the resolver.
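A sketch of this resolver-level caching flow — a `Map` stands in for Redis here; with a real client you would swap in Redis GET/SET calls with a TTL:

```javascript
// Resolver-level caching sketch. The Map is a stand-in for Redis; a real
// setup would use a Redis client and set an expiry (TTL) on each key.
const cache = new Map(); // pretend Redis
let downstreamHits = 0;
const fetchRoomTypesFromService = async (hotelId) => {
  downstreamHits += 1; // pretend downstream HTTP call
  return ['Deluxe', 'Suite'];
};

async function roomTypesResolver(hotelId) {
  const key = `roomTypes:${hotelId}`;
  if (cache.has(key)) return cache.get(key); // cache hit: skip downstream
  const roomTypes = await fetchRoomTypesFromService(hotelId);
  cache.set(key, roomTypes); // populate for later requests
  return roomTypes;
}

async function demo() {
  await roomTypesResolver('big-bank');
  await roomTypesResolver('big-bank'); // second request: served from cache
  return downstreamHits;
}
```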
In this case, even though the room types and rooms fields don't change often, we are still hitting the GraphQL server. Is it possible to get this response from the nearest CDN server without even hitting the GraphQL server? Yes — we are talking about GET requests here. In GraphQL, as you may have seen, we always make POST requests. This is because we send the query string to GraphQL as a request payload in a POST request, since it doesn't fit well in a cacheable GET request. So the first milestone is to convert POST to GET requests. How do we do that? We can use automatic persisted queries, and they are very simple to implement.
7. Automatic Persisted Queries and Caching
With automatic persisted queries, we can send the hash of the query string in a GET HTTP request to the Apollo server or GraphQL server. This allows us to take advantage of cache-control headers and improve performance. We can combine caching per request, resolver-level caching, and cache-control headers on different fields in the query. One question to consider: does GraphQL support the 304 Not Modified HTTP response status code?
With automatic persisted queries, we send the hash of the query string in a GET HTTP request to the Apollo server or GraphQL server. The server then figures out that this particular hash corresponds to this particular query string. It resolves that query and sends the response back in the same format the client expects. So here we have accomplished the goal of sending GET HTTP requests from the front-end client using GraphQL.
Now what's next? Once we are able to send GET requests, we can take advantage of all the cache-control headers that are available. We can specify the max-age of a field in a query — the duration for which that field is valid and for which it is okay to serve it from the nearest CDN server instead of hitting the GraphQL servers. So yes, we can take advantage of all the cache-control headers, and this improves performance even further.
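As one concrete option, Apollo Server lets you declare these hints in the schema with its @cacheControl directive; the weakest maxAge across the fields in a query determines the Cache-Control header on the response. A sketch with illustrative type and field names:

```graphql
type Hotel {
  name: String @cacheControl(maxAge: 3600)          # static, CDN-cacheable
  roomTypes: [RoomType] @cacheControl(maxAge: 3600) # rarely changes
  bookings: [Booking] @cacheControl(maxAge: 0)      # changes constantly
}
```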
Let's recap what we learned about caching. We can do caching per request using data loaders. We can do resolver-level caching to store the responses of downstream services in Redis or Memcached. We can use automatic persisted queries to send the hash of the query string to GraphQL. And we can use cache-control headers on different fields in the query. Here's a question for thought: now that we are using GET requests, can GraphQL support the 304 Not Modified HTTP response status code?
8. GraphQL Query Trace Details
New Relic is a powerful tool that provides trace details of GraphQL queries, allowing us to identify time-consuming fields and their impact on performance.
This is really interesting, and it is a great addition to the stack. What you see in the screenshot is the top 20 most time-consuming GraphQL queries. You can also see the top 20 GraphQL queries by slowest average response time, and you can look at the P95 and P99 times of a particular GraphQL query using this powerful tool. This screenshot shows the calls you're making to external services — the percentage of calls going to each of your different microservices.
This shows the trace details of a GraphQL query. Whenever there's a new requirement or a new feature on the front-end client, we tend to think we just have to add this particular field to a query and that's it — we get the response and render it on the UI. We don't think about whether that field is going to make an API call to a downstream service or hit the database, and how that will impact the performance of the query. By looking at these trace details, we learn how much time a particular field takes to resolve, which field hits a backend server, and which field hits the database. So by looking at the trace details of a particular query, it is very easy to understand what is consuming time in a GraphQL query.
9. Analyzing GraphQL Performance and Poll Results
By analyzing the trace details of a GraphQL query, we can easily identify the time-consuming parts. In summary, larger and more nested queries can impact performance. We should be mindful of the number of API requests and database queries made in a GraphQL query. Designing the GraphQL schema based on the domain model, using data loaders for batching and caching, deferring non-critical fields, and leveraging automatic persisted queries can optimize performance. Additionally, New Relic provides valuable trace details for GraphQL queries.
Let's recap all that we learned in this talk. The clients might face a performance hit for bigger and more nested queries in GraphQL. We should keep an eye on how many API requests or database queries we are making in a GraphQL query.
We should design the GraphQL schema more like our domain, and not map it to the front-end user interface. Data loaders are used for batching multiple API calls to the same endpoint into one request, which reduces the load on your downstream services, the way we saw for the bills and users APIs. Data loaders also help in caching the same API request within a request and call the downstream service only once for the same cache key. We saw that Bob was speaking at four conferences, but we loaded his profile only once because his user profile ID is the same.
We can use the defer directive to defer the response of fields in a query that are not on the critical path of user interactivity, take more time to resolve, or have a bigger response size. We saw that we can defer the execution of the preferences resolver in the user object. We can use the stream directive to stream the response of a list field in a query. We can use Redis or Memcached to cache the responses of downstream services at a resolver level — we don't have to hit backend services to get static data like the room types and rooms of a hotel; we can get them directly from Redis or Memcached. Finally, we can use automatic persisted queries and make GET requests from the front-end client to the GraphQL servers: we send the hash of the query in a GET request to the GraphQL server. With this, we can take advantage of all the cache-control headers by specifying the TTL for some of the fields in the query, and then fetch these from the nearest CDN server. Finally, New Relic makes our life easy by showing the trace details of a particular GraphQL query — it's really a good addition to the stack. Thank you so much for listening. Have a great day.
So, looking at the poll results — as a reminder, the question was whether GraphQL supports the 304 response status code. 50% say no, and the rest is split 25 to 25 between yes and not sure. Let me explain what the 304 response status code is: it's the Not Modified response status code.
10. GraphQL and 304 Response Status Code
GraphQL does not support the 304 response status code. Unlike REST, GraphQL fetches multiple entities in one query, so sending a 304 response for the entire query doesn't make sense. However, cache control headers can be used to optimize caching. Currently, GraphQL always sends a 200 response, even if none of the fields in the query have changed.
It means that a particular entity you're requesting from the server has not been modified since your last fetch. Let's consider the example of a REST API: you're fetching some entity, you already have, say, version one of that entity on your client side, and you request it again. The entity has not been modified, so the server returns a 304 response status code, which means: please use the entity that you already have. That saves some time — the time to download that particular object and parse it. We can easily use this in REST and take advantage of it. My question was: can we support the 304 response status code in GraphQL? Currently the answer is no, because in GraphQL we don't fetch just one particular entity — it's a set of many different entities that we fetch in one query. So sending a 304 response code doesn't make sense, because we would be claiming that the query as a whole has not changed, which may not be true. We can take advantage of cache-control headers, but currently GraphQL sends 200 even if none of the fields in the query have changed. Cool. Okay.
11. Impact of Response Size and Pagination in GraphQL
Does the response size of a GraphQL query affect performance? Yes, it does. For example, if you're fetching hundreds of nested objects, the response size can be large, resulting in longer downloading times. To overcome this, filters can be applied on the server side to send a filtered response to the client, reducing the amount of data downloaded. Pagination is also a useful technique for managing large objects. Instead of fetching all objects at once, fetch them in smaller batches.
Then a follow-up question: does it support 418 as a status code? 418 is the "I'm a teapot" status code, which means the server refuses the attempt to brew coffee with a teapot. I'd guess it doesn't support it, but I'm not quite sure — I haven't tried it. I think we should implement that. So if anyone has some time to implement this, it's an important status code that we need in GraphQL. Yes. Yes.
All right. Let's jump into the questions. If you still have any questions, there's still some time to type them while we're talking. So be sure to jump on to the Discord channel, GraphQL Milky Way Q&A.
First question: can the response size of a GraphQL query affect the performance of the application, and if so, how? Yes, the response size does affect the performance of the application. Let's consider an example: you're fetching hundreds of objects in a list, and these objects are nested. We have seen in our own application that you can end up transporting around two to five MBs of data over the network, because the array is quite big — there are too many objects and they're quite nested. The downloading time for that response is also longer because the object is so big. So yes, it affects the performance of the application. To overcome this problem, instead of getting the whole response from GraphQL and applying lots of filters on the client side, we try to apply filters on the server side so that we send fewer fields and a filtered response to the client — that way, you're downloading less content. I'm just wrapping my head around it — yeah, that makes a lot of sense. But what about things like pagination? That can of course also help, but then again, if your object is just that big, you can apply pagination. Yeah, pagination is a good idea. Like I said in the talk as well: if you have hundreds of bookings, don't fetch all hundred of them.
12. Optimizing GraphQL Queries and UI Performance
In GraphQL, nested queries can affect performance. Deferring fields not required in the viewport can boost performance. Fetching more fields may lead to additional database and downstream service calls, increasing query time. It's important to monitor and test queries to avoid slowing down the application. Using skeleton screens with Suspense in React can improve perceived performance.
You won't even be able to fit them in the user's viewport. Say you're fetching 25 bookings; in some cases, these 25 bookings are nested objects, because in GraphQL you have the flexibility to query field inside field inside field, right? So there can be many nested fields, and that can affect the performance of your application. And like I said, if you look at it in New Relic, you'd be able to understand that a particular nested query is affecting performance.
I muted myself and now I un-muted myself — welcome to the digital world. This is a new thing, meeting remotely. So performance is, of course, one of the most important topics in making a good web app — though there are a lot of things that make a good web app. Pagination is one thing you can do, but is there any other thing you can think of that can help, with GraphQL, to improve performance?
Yeah, actually, deferring particular fields that you don't need in the user's viewport helps. I saw a talk by Uri showing that the defer and stream directives are quite powerful and are supported by GraphQL Yoga. With these, we don't fetch what isn't required on the user interface on the first load. So that's quite a good performance booster you can give your application. But more than that, in our application, what we have seen is: just fetch fewer fields. On the front-end side, we think, let's fetch five or six more fields and we'll be done with the task at hand, right? But we don't realize that these five or six fields are going to all your downstream services. That's when you understand that the cost of this query is two or three database queries and two or three downstream service calls, and that it is going to take more time. Yeah — sometimes you add two or three more fields to your query and you think it's just two or three more fields, but it can actually be a really heavy query. So basically, the advice would be to always keep monitoring: when you're editing a query, keep an eye on what's happening, test before, test after, and make sure you're not slowing down your application significantly. Yes, absolutely. All right, awesome. That's good stuff. And going back a little bit to the UI end: we're saying just fetch what's in the viewport, but are you also using something like skeleton screens in the UI? Yes — while you're fetching your data, you can take advantage of Suspense in React, where you can show the skeleton of your UI until you get the data, so you get improved perceived performance on the UI side.
13. Improving Perceived Performance
We can improve perceived performance by providing visual feedback to users while data is loading. Perceived performance can matter even more than actual performance: speed alone doesn't matter if the app doesn't feel fast. For example, a skeleton screen gives users the impression that something is happening. Another example is the airport scenario, where a longer walking route made people feel like they weren't waiting as long — that's perceived performance in the real world.
Yeah, we can do a lot of stuff on the client side as well to improve the perceived performance for the user. The user feels that something is loading — there's no blank screen, and you don't just show a loader; there's something on the screen, and they're patient enough to think, okay, the data will come now.
Yeah, I like that term, perceived performance. In my opinion it's actually more important than actual performance, because if it's fast but it doesn't feel fast, it doesn't matter, right? So something like a skeleton screen can really help the user feel like something is happening. Yes.
I remember when I was still in school, learning about performance and perceived performance, the example they gave was about an airport that had a really short walk from the gate to the conveyor belt where you get your luggage. People were complaining that they had to wait 20 minutes for their luggage. So what the airport did was make the route longer: what could have been a two-minute walk became a 20-minute one. People were no longer standing and waiting 20 minutes at the belt — they were just walking through the airport for 20 minutes instead, and the complaints stopped. I love that example, but yeah, that's perceived performance in the real world for you. Yeah, it's a great example.
So that's all the questions we have at the moment. So Ankita, thanks a lot for joining us here today. We're going to be taking a short break. And if you have any more questions, Ankita will be on her special chat, so click the link in the timeline below. Ankita, thanks a lot for joining us and sharing your knowledge. Thank you. Thank you so much. Bye bye.