Today it is fairly easy to integrate GraphQL on the client and server side and get it all up and running quickly on a cloud service of your choice, such as Netlify or Vercel. With this setup, how can we monitor performance, and how can we observe all parts together to find the root cause when problems occur?
Performance Monitoring of a Heterogeneous GraphQL Mesh App

AI Generated Video Summary
Performance monitoring is crucial for businesses, as users don't like to wait. The Apollo Engine tool helps track and analyze metrics, revealing response-time variance and other information. Instana combines traces of service communication with infrastructure metrics and end-user monitoring, and implements OpenTelemetry. Apollo Studio is great for managing the GraphQL schema, and together with full observability it enables efficient root cause analysis.
1. Performance Monitoring and Issue Investigation
I'm Robert Horslowski, a software engineer at Instana, an IBM company. I have experience with GraphQL and have encountered performance issues in live demo applications. Performance monitoring is necessary because users don't like to wait, and APIs are crucial for businesses. Investigating a real performance issue, I found that the communication with the database was sometimes very slow. The Apollo Engine tool helped track and analyze metrics, revealing response-time variance and other information.
Hi everybody! I'm very happy to be here and to have the opportunity to share my thoughts and learnings about performance with GraphQL, specifically in a service mesh. Let me quickly introduce myself. I'm Robert Horslowski, working at Instana, an IBM company. In 2016 I gave a talk about GraphQL and Relay. Later, in 2018, I published a video course about a full-stack Trello clone built on top of GraphQL. Then in 2019 I found a subtle performance issue in this live demo application, which got all of this rolling.
But let's first dive in and see what we mean by a distributed mesh. Actually, we don't have only one service; typically our landscape, from an infrastructure point of view, looks like this. Of course, there can be one or two machines going down and so on, but that is typically handled. What is then happening at the service level? This is typically how a service mesh looks when you look into it and have a representation of the traffic, of the communication. And here, too, there are of course many communications running, and this is typically not easily visible if you don't have such a tool.
But first, let's ask the question: why is performance monitoring necessary? It's quite simple: users don't like to wait. And typically, today, we have a service mesh, or at least some service is used, maybe a payment service or something like that, and other services depend on it. This needs to be tracked somehow, and in case of a failure it should, of course, be easy to find and fix. Why is this important? Today, when APIs are at the center of a business, it's very important that timings are as expected. Nobody wants to wait for something and later find out it was not their fault but somebody else's, even when there is a contract, a so-called SLA, defining that a specific service needs to respond within a certain time. If it does not, somebody has a problem, and in the end the business has a problem.
But let's come to investigating a real performance issue. As I mentioned, I had a problem with my live demo at the time. It's a simple Kanban board with a backend where some data is stored, and at that time the communication with the database also went through GraphQL. For some reason it was sometimes very slow and at other times very fast; I couldn't say where the problem was, but sometimes it was really, really slow. There was a tool out there at the time called Apollo Engine. It was quite simple: just add an API key to the Apollo Server when using the Apollo Server library, and it automatically tracks these metrics and shows them in a dashboard. So you can see here the variance, let's say the spectrum, of the response times, up to 13 seconds for a call, which of course is not acceptable, and there is some more information, like on the right the number of queries and so on.
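The talk doesn't show code, but roughly, that 2019-era setup looked like the sketch below: an Apollo Server 2 instance with an Apollo Engine API key, after which response times and query counts show up in the Engine dashboard. The schema and resolvers are placeholders, not the Kanban board's real ones.

```ts
// Rough sketch (not the talk's actual code) of the 2019-era setup: an Apollo
// Server 2 instance that reports metrics to Apollo Engine via an API key.
// The schema and resolvers below are placeholders.
import { ApolloServer, gql } from 'apollo-server';

const typeDefs = gql`
  type Query {
    lists: [String!]!
  }
`;

const resolvers = {
  Query: {
    lists: () => ['Todo', 'Doing', 'Done'],
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Adding the Engine API key was all that was needed to get response times,
  // query counts, and error rates into the Apollo Engine dashboard.
  engine: { apiKey: process.env.ENGINE_API_KEY },
});

server.listen().then(({ url }) => console.log(`GraphQL server ready at ${url}`));
```

In current Apollo Server versions the engine option is gone; setting the APOLLO_KEY and APOLLO_GRAPH_REF environment variables enables the equivalent usage reporting to Apollo Studio.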
2. Instana and Apollo Studio
A year ago, I had the chance to use Instana, which combines traces of service communication with infrastructure metrics and end-user monitoring. It implements OpenTelemetry. To collect user data, inject the EUM snippet into the website. Tracking down backend traces and analyzing query counts is easy. I also monitor my application running on Netlify Functions using the Instana wrapper. The real problem turned out to be the freemium plan of the GraphQL service backend. Apollo Studio is great for managing the GraphQL schema, and together with full observability it enables efficient root cause analysis.
This was a year ago. In the meantime, they have improved their service and also have tracing built in, which can be enabled very easily; for the freemium tier this is also quite easy and doesn't cost anything.
But this, at that time, also gave me a little bit of information, and I also had the chance to use Instana. Instana combines these traces of the communication between services with infrastructure metrics and also with EUM, that is, end-user monitoring. And by the way, it implements OpenTelemetry, the latest standard in this area.
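The talk doesn't go into the wiring, but as a hedged illustration of what an OpenTelemetry-based setup means for an app like this: a Node.js GraphQL service can export its traces over OTLP to any backend that ingests it. The endpoint URL below is an assumption; point it at your own collector or agent.

```ts
// Minimal OpenTelemetry sketch (not from the talk): auto-instrument a Node.js
// service and export traces via OTLP/HTTP. The endpoint URL is an assumption;
// use whatever OTLP-capable collector or agent you run.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces', // assumed local OTLP endpoint
  }),
  // Instruments HTTP servers/clients, GraphQL, database drivers, etc.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```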
So how do we get there? It's quite simple in the end. To get all the information about the user and what the user is doing, you just inject the EUM snippet into the website, and then it can collect all the data, even JavaScript errors and so on. You can also find specific requests here, and tracking one down, we get to the backend trace, which at the end also shows the GraphQL query. On the right side you see some meta information about the operation and so on. We can also do some more analytics on the counts of queries and so on. But nowadays my application also runs on Netlify Functions, which in the end run on AWS Lambda. So how can we track that? It's quite easy, just using the Instana wrapper here. With this, I was able to monitor the Apollo application server, as we saw in the slide before.
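The exact wrapper code used in the demo isn't shown; based on Instana's @instana/aws-lambda package, this is roughly what wrapping a Netlify function looks like. The handler body is a placeholder, and the agent key and endpoint come from environment variables.

```ts
// Hedged sketch of instrumenting a Netlify function (running on AWS Lambda)
// with Instana's Lambda wrapper. Expects INSTANA_ENDPOINT_URL and
// INSTANA_AGENT_KEY to be set in the function's environment.
import instana from '@instana/aws-lambda';
import type { Handler } from 'aws-lambda';

// Placeholder handler: in the demo this is where the Apollo/GraphQL handler
// would process the incoming request.
const graphqlHandler: Handler = async (event, context) => {
  return { statusCode: 200, body: JSON.stringify({ data: null }) };
};

// instana.wrap() instruments the handler so every invocation shows up as a
// backend trace that can be correlated with the frontend request.
export const handler = instana.wrap(graphqlHandler);
```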
So finally, what was the real problem? In the end, I figured out that it was a really small thing: at that time I used a GraphQL service backend that was on a freemium plan. That was the only problem. To summarize: Apollo Studio is great for managing the GraphQL schema, and Instana adds full-blown observability with all these extra features; together they enable shifting left, giving developers the full context of their running application in production. This also makes it very efficient to find any root cause. With that, let me say thank you very much for listening. For any questions, please reach me on Twitter or by email, and of course I hope to see you and meet you in the conference chat.