I'll introduce the concept of a data mesh and why people are increasingly gravitating toward it as a solution to their data (both online and analytical) and application modernization problems. I'll talk about the key requirements around 1) domain-oriented decentralized data ownership and architecture, 2) data as a product, 3) self-serve data infrastructure as a platform, and 4) federated computational governance (ref). And then I'll talk about how GraphQL opens up an opportunity for laying the foundation for a data mesh, and best practices in building, operating and maintaining an enterprise-grade data mesh.
Evaluating GraphQL for Setting Up an Enterprise-Grade Data Mesh
AI Generated Video Summary
This Talk explores the benefits of using GraphQL APIs for data, including representing domain models, addressing API tooling and infrastructure, and supporting interconnected data. It discusses model mapping, read APIs, filtering, sorting, and aggregating data, as well as mutations for invoking logic. The Talk also highlights the importance of standardizing infrastructure and leveraging GraphQL's automatic model mapping and authorization. Additionally, it touches on the use of specialized databases, the comparison between GraphQL and gRPC for data APIs, and the speaker's experience working on various parts of the stack.
1. Introduction to GraphQL APIs for Data
Hi folks, in this talk, I will discuss GraphQL APIs for data. We'll explore the challenges of fragmented data and the need for a comprehensive API. GraphQL offers benefits in representing domain models, providing various APIs, and addressing API tooling and infrastructure. It enables centralized concerns and supports interconnected data. By adopting CQRS-style thinking, we can design APIs with read models and commands. Bringing these ideas to GraphQL allows us to prioritize read models and handle data within the same or across different domains.
Hi folks, I'm going to talk to you a little bit about GraphQL APIs for data. My name is Tanmai. I'm the CEO and co-founder at Hasura. Hasura is an open source infrastructure project that allows you to create a GraphQL API easily on top of heterogeneous data sources. A lot of the content of this talk is inspired by the work we do at Hasura and learnings from our users, especially in the enterprise, as they mobilize their data and start to use GraphQL increasingly to make the experience around their data APIs much, much better.
So the rough context for this talk is that data is kind of increasingly getting fragmented and spread across multiple types of sources. And we need an API to start to access that. You might have data that is transactional data or data inside the SaaS service, or data that is more analytical. And within the same application context, or within the same user context, all of the data that is kind of split across these different types of sources is increasingly connected and relevant to the end user at the same time. And if you think about a simple ecommerce application, not only are you making transactions and interacting with data, but you're also searching through catalogs. You're also looking at aggregate analyzed information, and these are kinds of things that are all interconnected. And from an API experience point of view, you might need that to all come together into one API experience. When we think about this API, that's kind of the bottleneck because somebody needs to build this API. Somebody needs to talk to multiple types of sources, and you need to make sure that the API is continuously keeping up with the underlying evolution that's happening. And of course, you want kind of an API experience that guarantees certain amount of performance and security, right?
So in this context of having a data API, where data is getting fragmented, the properties that we want from a data API are kind of the properties that we want from any API engine. But specifically, we want to make sure that we're able to represent our domain models well: their evolution, their interconnectedness, right? We want to make sure that we have certain types of APIs that are usual for data APIs, whether it's queries or streaming kinds of information, certain ways of interacting with that data, methods on that data, and certain NFRs that are important to ensure that this data API is ready and usable for production. Most importantly, we want to make sure that the API tooling, the infrastructure for our API consumers, is top-notch and keeps evolving. If we're able to solve these types of problems well, we essentially have a nice data API that is going to be easy to maintain and easy to evolve, right?
There are lots of topics here, so for the purpose of this talk we're going to focus mostly on the API and data modeling aspects. Let's take a look at how GraphQL can add value when we think about the data API. Again, most of you would know this, but just to reframe some of GraphQL's properties in the data API context. Our data is interconnected, so GraphQL represents that reality of a semantic graph of interconnected models much better than thinking about siloed resources that are evolving independently, right? GraphQL is JSON native, which is nice from an API consumption point of view, and offers us an opportunity to centralize some of the infrastructure around JSON conversion when we're interacting with data. GraphQL also has modern primitives around streaming in real time, which is useful because data is starting to be less at rest, and a fair amount of our data is also in flight in real time. Then of course, a GraphQL API always has a schema as a built-in artifact, which means that a lot of our challenges around API infrastructure, tooling, documentation, and onboarding are kind of automatically addressed. It's not 100% done, but a huge part of what we would otherwise have to do manually is addressed. So, GraphQL offers certain benefits around what we'd want from our data API, and certain opportunities to centralize those concerns and the way that we think about data that is split across a mesh of sources.
Typically, when we're designing an API, the way that we would think about it is that we'd have models, and we'd have getters or accessors of those models. So, if I had artists and albums in a music database, I would have a get method on an artist, where I might specify an ID as a parameter, and I would have a list of artists that I fetch with a pagination parameter. And then I would have certain commands or methods that I would invoke that affect those models. If we were thinking of it more as a CRUD-style thing, then we would have a model where the same model is both a read model and a write model. So, we might post, put, patch, delete a model, or we might get that model. But if we're thinking about it more from a CQRS point of view, then we'd think of it as commands that we invoke: create user, create artist, process artist, add album to artist, add label to artist, or whatever. Those would be the commands that we invoke, and our models would be read models that we expect our commands to affect, and then we would read data from those models. So, that's a slightly different idea. Now, when we bring these ideas to GraphQL, the way that I like to think about it is bringing that CQRS-style thinking into our API design. So, we think of read models first. Read models can be within the same source or the same data domain, or split across data domains.
2. Model Mapping and Read API
And these would have connections, right? These would have semantic relationships within the domain or across domains. And these would essentially become the types in our GraphQL schema, right? We still haven't defined the methods and that's what we'll come to next.
The first category of methods that we can have are queries, right? And because it's a data API, there'll be some common expectations and conventions that we can set up around, are we fetching a single element? A single model? A list of models, paginating through them, ordering them, filtering through them. Aggregations especially are important for data APIs, right? So, we can start to create kind of a convention that says, once we have a read model, these are the common set of methods that we can have for reading those models, right? And that kind of addresses the query side of our API.
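As a rough sketch, such a read convention might look like the following (Hasura-style root field names; the exact conventions vary by tool and configuration):

```graphql
query {
  # Fetch a single model by primary key
  artists_by_pk(id: 1) {
    id
    name
  }

  # Fetch a list with pagination and ordering
  artists(limit: 10, offset: 20, order_by: { name: asc }) {
    id
    name
  }

  # Aggregate over the same model
  artists_aggregate {
    aggregate {
      count
    }
  }
}
```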
And now, when we think about the write side of our API, especially GraphQL mutations (maybe certain queries as well), the way that we think about it is that, hey, we invoke a command. The command will go do something, but eventually it'll return a reference to a read model or to a list of read models, right? And that gives us a way to think about designing the write side of our API, the GraphQL mutation side especially. Although, like I said, some commands might also be query commands. But it's the same idea, just re-visualizing that in the context of multiple domains, right?
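A sketch of what that command convention might look like as a mutation (the command and field names here are hypothetical, following the "add album to artist" example from earlier):

```graphql
# Invoke a command; the response is a reference to a read model,
# traversable like any other query.
mutation {
  add_album_to_artist(artist_id: 1, title: "New Album") {
    album {
      id
      title
      artist {
        name
      }
    }
  }
}
```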
We might have multiple domains, each domain has models, commands, right? And we use that to create our GraphQL API, right? So at this layer, at the GraphQL API layer, we can start to centralize some infrastructure in the way that we map these models, bring them into types, in the way that we create common read APIs across these domains, and the way that we invoke certain commands. So if you're able to centralize the infrastructure here, right, then we can kind of extract all the benefits that people get out of a GraphQL API, we can centralize a lot of the effort that we're putting into building the API. And we can focus more within each of those domains, we can focus on the modeling, we can focus on the actual logic that might run and affect those models, right?
So what we're going to dive into next is going a little bit deeper into that model mapping and into the APIs, the read API and the write API, just to see what that might look like. So let's start with the models. When we think about models, the most important thing is to be able to map the models we want to expose from the models that we have within our domain, right? If you have a database-type domain, these data models might be physical models, they might be tables, or they might be logical models, where the exact data in the table doesn't directly correspond to the model that you want to have in the API. So you might have a physical model or a logical model, but these are basically getting represented as the GraphQL model that you'd want at the end of the day. If it's not a database but some other kind of source, you might have a model that's coming in from an API service. So the mapping layer that we have in our GraphQL infrastructure, whether it's code, framework, or configuration, should help solve this mapping problem, to bring in certain models from a data domain, from a data source, as easily as we can.
The second thing that we want to represent, that we want to add configuration or code around, is modeling the relationships between these models, right? And because it's finally going to be JSON, the way a relationship will show up in the API is as a nested object or a nested array: a many-to-one relationship would be a nested object, and a one-to-many relationship would be a nested array. Or you can even have fields that are actually a relationship coming in from a different model. And each of those models should also have authorization rules that determine what fields are accessible, and what entities of that model are actually readable by a particular API consumer. So, these three pieces of work, mapping, relationships, and authorization policies, are kind of the bare minimum information that we need to bring a model from our data domain, from one of our data sources, into the API. And at this point we now have a nice way to think about the types of our GraphQL schema and the way those types will show up. Not all of the types, some of the types. So, that's the first piece of the data API that we've gotten sorted.
And if you look at this example that I have here, where I have a type artists: I'm saying that my GraphQL model is id and name. In the physical model, name might actually be first name plus last name, but in my API model, my GraphQL model, I would like it to just be name. That's my read model, right? And then the artist has albums, and the artist is signed to a label; the label might have its own information. The label would be a nested object, and since the artist has multiple albums, albums would be a nested array. So, albums and label are different models, and the artist has relationships to the albums model and to the label model. And here is where I've done the mapping of the artist model itself. So, we're seeing the mapping portion, we're seeing relationships, and then of course, we have authorization rules that determine what fields we can actually access. So, this is how it will show up in the API schema, which is nice because now we can see what our different interconnected models are, right?
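As a sketch, the mapped artist model and its relationships might surface in the schema roughly like this (illustrative SDL, not Hasura's exact generated output):

```graphql
type Artist {
  id: Int!
  # Mapped field: backed by first_name + last_name in the physical model
  name: String!
  # One-to-many relationship: surfaces as a nested array
  albums: [Album!]!
  # Many-to-one relationship: surfaces as a nested object
  label: Label
}

type Album {
  id: Int!
  title: String!
}

type Label {
  id: Int!
  name: String!
}
```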
The second piece is read. What we want to do is create conventions around how we're going to read these models, right? So, we can create conventions around pagination, around ordering and filtering. We want to create a convention on aggregations, and then we'd want to compose them with relationships; we'd want to represent those relationships in our queries. So, we'd want to be able to read models across relationships. We'd also want to make sure that our queries can represent aggregations, ordering, or filtering that can reference parent and child objects. So, as an example, let's take a quick look at the way we think about it at Hasura, and what that read API, that read convention, looks like.
So, I have here a demo app that's running on my machine, and I've connected in a Postgres source and a SQL server source. These are kind of my two data domains. And I have a model called artists, which has ID and name, right? So, we're seeing kind of that information coming here. Let's take a look at what the API looks like. So, I have pagination, right? So, offset-based pagination or cursor-based pagination. We have a way of filtering stuff.
3. Filtering, Sorting, Aggregating, and Mutations
We can filter and sort the fetched lists using the filtering and ordering APIs. Additionally, we can apply aggregations to any list of models, allowing us to count or run other aggregate functions. Relationships between models from different data domains can also be represented, enabling us to fetch related data and perform aggregations across domains. This follows a GraphQL API convention and allows for neat representation of aggregations within relationships. On the other side of the API, we have mutations for invoking logic, such as creating a user.
So, we can say limit 10. That's what our ordering and sorting API looks like. And then, of course, we have a filtering API as well. We can say where, name, like, let's say anything that starts with a Y. So, we have these sorted and filtered. That's kind of the convention around a list of models: we're fetching lists, and this is the way that we want to have certain arguments in fetching a list.
Similarly, there's a convention around aggregations. The way that we think about aggregations is that any list of models that is fetched can have an aggregate applied to it, with a certain set of aggregate functions that, again, can be added on for different types of models and different types of sources. For example, I can do a count to see how many artists I have in the system here. All right, let's take a look at relationships. So, we have artist ID and name, and let's limit this to 10. And now I can fetch, let's say, the albums that are related, and we can see the titles of those albums coming in here. And artists and albums are actually in two different data domains, just to show you what that looks like: my artists are coming in from a data domain in Postgres, and albums are actually in SQL Server. So even though they're across two domains, I'm seeing a similar kind of API that's allowing me to do things like limit, offset, and paginate on the underlying field as well. In fact, I can start to bring in aggregations here as well; I can count and run aggregations on a list. And the nice thing is that this follows a GraphQL API convention, so I can aggregate any list, whether at the top level or as a property of a particular parent object. So I can see the number of albums that I have per artist. This allows us to represent aggregations neatly in the context of the relationships that we also have. So our concepts are composing, and our conventions are also composing as we work with them. So this is one way to think about the read API design, right?
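Putting the demo together, the queries above look roughly like this in Hasura's conventions (the exact root field names depend on how the sources are configured):

```graphql
query {
  artists(
    limit: 10
    order_by: { name: asc }
    where: { name: { _like: "Y%" } }
  ) {
    name
    # Relationship across data domains: artists in Postgres, albums in SQL Server
    albums(limit: 5) {
      title
    }
    # Aggregations compose with relationships: album count per artist
    albums_aggregate {
      aggregate {
        count
      }
    }
  }
}
```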
Now, the other side of the API would be the write side, so mutations. For example, we'd want to have logic that we invoke. Let's say, for example, something complex like creating a user that has to go through a bunch of things. We can create a user.
4. Creating Users and Standardizing Infrastructure
We can create a user and expect the logic to return a reference to a user ID. GraphQL can pick that logic and return a user object in the API. We can create conventions around CRUD actions and different logic implementations. GraphQL also offers subscriptions for live queries and streams. By setting up the underlying GraphQL infrastructure, we can benefit from automatic model mapping and authorization. This creates a GraphQL platform that exposes data domains over a nice API, allowing owners to focus on domain models and logic. Thinking about GraphQL as a way to standardize infrastructure allows us to focus on the domain rather than the API itself.
We can create a user. The convention that we can have is that we expect our logic to return a reference to a user ID, right? And what our GraphQL system can do is pick up the output of that logic, which is a user ID, a reference to the user model, the logical model that we have for the user object. And so in the final GraphQL API, we'd expect to see a mutation that returns a user object and allows us to traverse the user model. And if it's a list, then that list of models. And again, those same concepts that we had from the read API can be brought into the response of a particular command. So that slightly CQRS-style flow is the way that we can think about bringing commands into our GraphQL API as well.
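A sketch of that flow (the mutation and field names are hypothetical): the command runs its logic, returns a user ID, and the GraphQL layer joins that reference back to the user read model so the response is traversable:

```graphql
mutation {
  create_user(name: "Ada", email: "ada@example.com") {
    # The command's output (a user id) is resolved into the user read model
    user {
      id
      name
      email
    }
  }
}
```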
We can create our own conventions around CRUD, right? Because CRUD actions are essentially also just commands that return, again, a single reference or a list of references. And we can also come up with certain conventions for logic that is implemented in different ways: maybe it's written in different languages, maybe it's embedded inside the database, maybe it's just written as a serverless function invoked over HTTP or RPC. That doesn't matter, because the convention can be the same. The GraphQL layer just has to make sure that it knows how to reference the read models from our previous discussion and how to invoke the commands and logic that we have when we're bringing commands and mutations into our system, right?
GraphQL also offers subscriptions. In the interest of time, I won't go into too much detail, but the way that we think about subscriptions is that you can have live queries, which are very useful to consumers when they're looking at the latest snapshot of data, or you can have streams, which are very useful for larger lists of data or for events. Especially when you have a really large list, say a billion elements, and you want to extract that information over an API: an API is a very nice way to do it, but a billion elements would be really large, right? So you can stream that data as it comes, or you can even stream real-time information the way that you would stream events. Subscriptions offer a nice way to do both. The new @stream directive will also offer a way to approach streaming larger lists. The spec is still evolving, and I think our understanding around streaming especially is still evolving, because there are things like back pressure to think through which don't have a firm answer yet, but there's an opportunity to provide this kind of value for a modern data API as well. So, to take a step back and recap what we went through: we thought about using GraphQL as a neat API convention for a data API. We saw some common usage patterns and how we can model those usage patterns for a data API. We saw how those concepts compose automatically. We couldn't go into authorization because that's too much detail for this conversation, but authorization at the model level can also be made to compose automatically as we think about our read APIs and our command APIs. So when you think about this design, what about operationalizing it?
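As a quick sketch of the two subscription styles mentioned above (Hasura-style syntax; the field names are illustrative):

```graphql
# Live query: push the latest snapshot whenever the result changes
subscription LatestArtists {
  artists(order_by: { id: desc }, limit: 5) {
    id
    name
  }
}

# Stream: page through a very large (or growing) list from a cursor
subscription ArtistStream {
  artists_stream(batch_size: 100, cursor: { initial_value: { id: 0 } }) {
    id
    name
  }
}
```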
The way I think about it is: what is the platform work that we need to do to set up the underlying GraphQL infrastructure, so that we're able to benefit from these three or four ideas we discussed? The first piece of infrastructure is a good way for the maintainers or owners of a GraphQL API to come in and do model mapping and authorization. That becomes really nice, because now we can start bringing in models, and we can bring them in securely. We can also then bring in the infrastructure for specific data source integrations, so that our query execution can be optimized for the different types of data sources that we're speaking to. And that piece can be a little bit decoupled from the model mapping and authorization piece, so that we're using the same semantic concepts, but we're compiling queries if that is the more effective way of doing it, or caching queries and requests if that's more efficient. And then of course, we also need to decide how we want to communicate with the upstream services, especially commands, so RPC or HTTP is necessary. Read-after-write semantics are also a key part of this: you want to make sure that when you run a command and it returns a reference to a read model, you can actually read that reference. You won't get a stale or outdated read. So that's an important piece that needs to exist. But if we're able to set this up and empower our GraphQL API with this infrastructure, then essentially what we get is a GraphQL platform that allows us to get some things for free, right?
The first thing that we get for free is that all of the data domains that we have now get automatically exposed over a nice API that people really like. All of the metadata, documentation, and GraphQL API tooling is built in; GraphQL is an API that people really like to use. But from the point of view of the builders of this API, the owners of those domains now have more free time and flexibility to focus only on their domain, the models and the logic, and not the plumbing of how they're going to expose it over the API, because that part we've been able to turn into common conventions. At a very high level, the amount of data that we have is just going to keep increasing and fragmenting, so thinking about GraphQL as a way to attack this problem offers a nice way of getting value faster and also setting ourselves up for the future. The next five, six years are going to be crazy in just the amount of data that we have and the ways that we start to use it. And what I really like about this is that this opportunity to standardize the infrastructure allows us to focus on the domain rather than on the API itself. It's sort of the same thing; we're just putting our effort into a slightly different part. Instead of building, deploying, scaling and documenting APIs, because we use GraphQL as a way to standardize these things, can we now just focus on modeling the domain and creating the unique business logic? That's all the work that we do; we get the API for free because it's a nice GraphQL API. So that's the thought that I wanted to leave you folks with.
Data Sources and Aggregations
Please feel free to reach out to me if you have any questions. It's interesting to see the increasing number of data sources in applications. We're moving away from one database that does everything to using specialized databases for different tasks. With GraphQL, aggregations are handled through predefined types and relationships. The return type of an aggregation depends on the function being used.
Please do feel free to reach out to me. That is the end of my time, but please do reach out if you have any questions. I would love to chat about how you're thinking through these challenges, and to share insights on using this kind of approach, its drawbacks, and the kinds of benefits it can result in. But that's it, folks, and do feel free to reach out to me on Twitter as well. Bye.
So you asked the question about how many different data sources of truth the app or API has, and we have people voting on it right now. Most of the people, 60%, have voted one to five. And 20% say that they are using just one data source of truth. Okay, now it just changed to 17%, and 17% of the people are using 10-plus sources, oh my God. What do you have to say about it? That's awesome. It was really interesting to run that poll and sample what folks are doing, because I think what we're seeing is that the number of sources of truth that you have in your application is going to keep increasing, right? Whether that's SaaS services or other databases, I think that number is just going to keep going up. I don't think it's going to reduce or slow down. Yeah, I always like to say that we used to think that there might be one database that does everything. I think now we're just realizing that it's fine: we're not going to have one database that does everything. We're just going to use five or six different databases to do specialized things, right? Like we'll have a time series thing and a search thing. And of course other services as well. So it's getting more and more fragmented, which is awesome. Yeah, that's great. That's great, really.
So I would like to remind the audience: if you have any questions, do let us know in the Milky Way Q&A channel. But meanwhile, I do have a question for you, Tanmai. So, data APIs typically need the ability to run aggregations as well. What does a GraphQL-based data API that supports generic aggregation look like? So that's been one of the interesting things when you think about data API design, right? The challenge is that when we think about aggregations, and especially if you look at any aggregation system, whether Mongo aggregations or SQL aggregations, what ends up happening is this: when you make a query to aggregate some data, say you have users that have an ID and a name, and you want to run an aggregation to see how many people have the same name, you do an aggregate on this users model with a group by name, and a count. So the return type of what you get from an aggregation is not the same as the user type. The return type depends on what you're aggregating. If you're aggregating the number of items, you'll get an integer. If you're aggregating, say, the average length of somebody's name, then instead of getting a string back, you're getting a numerical or float return. So the type of what you're trying to get depends on the type of the aggregation function that you're running, which makes it interesting with GraphQL, because the idea with GraphQL is that you have a GraphQL schema with predefined types. So this is the central challenge.
So one of the approaches that we've taken when thinking about aggregation with GraphQL is that for any list type, anything that returns a list of elements, we define a bunch of predefined aggregations on the list, right? And of course, you can add more aggregation types to it. And then when you want to do a grouping, which would affect the return type, that happens through a relationship. So what you do is you say, you know, you have users, and then you say users.transactionsAggregate, right? Or ordersAggregate. So you're seeing how many orders, the max number of orders, the average amount spent in the orders per user, right? So the group-by key that you would think of when you're doing aggregations becomes the parent, and the aggregation result becomes the child.
Comparing GraphQL and gRPC for Data APIs
GraphQL might be a nicer API than an RPC style API for fetching data, even for service-to-service communication. Some projects are mixing GraphQL and gRPC, delivering GraphQL over a gRPC protocol. It's still dependent on the use case. Regarding the question about favorite areas to explore in Computer Science, I started late in software development but covered various areas like operating systems, networks, databases, and programming language theory. I enjoy building different types of applications and have experience with Kubernetes and Docker.
So that becomes a really easy way to extend it in GraphQL and still maintain a typed schema, but also get fairly flexible group-by keys with aggregation. So that's an approach that we see works for bringing aggregation to GraphQL.
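To sketch the pattern described above, here's roughly what such a query might look like. The field and type names are illustrative, loosely modeled on Hasura-style conventions rather than an exact API:

```graphql
# The group-by key (each user) is the parent; the aggregate is a child
# field reached through the users -> orders relationship.
query UserOrderStats {
  users {
    name                 # plays the role of the "group by" key
    ordersAggregate {
      aggregate {
        count            # Int: how many orders this user has
        max { amount }   # largest order amount for this user
        avg { amount }   # Float: average amount spent per order
      }
    }
  }
}
```

Because `count` always returns an `Int` and `avg` always returns a `Float` regardless of the column's own type, the schema can predefine these aggregate result types once, while the grouping stays flexible through relationships.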
That's really nice; it's really insightful to know that. I have another question for you regarding data APIs. A data API might be consumed by other services as well, not just front-end applications, right? So how does GraphQL compare to, let's say, gRPC in this context?
Yeah, that's something that I've talked about with people before. Because when you're thinking about a data API, like I said, it becomes a thing between services. It's often an API that you're using between services, where one service is hitting another service to get data, right? Team one is talking to team two to get team two's data. And that's often an internal service-to-service API where people often think about gRPC. I think gRPC addresses some of those problems as well, in terms of having a typed schema or protocol between these services. Also, it's HTTP/2, so it's bidirectional, and the connection is persistent, which is also very nice between these two services. But I think for a data API, GraphQL still might win out in terms of ergonomics, because with a data API you're looking at making an API call that fetches a bunch of information in the same shot, in a single API call, right?
From a client point of view, from one of the services that's trying to fetch that data, it's easier to make a single query to fetch that data instead of requesting the data multiple times, which is what you might end up doing if you were thinking of it in an RPC style, right? So I think that for a data-specific API, GraphQL would still be nicer, even for service-to-service communication. And I think it's going to be interesting, because I've already seen a few projects where people are mixing the two, kind of thinking about GraphQL but delivering GraphQL over a gRPC transport, right? So there are some interesting ideas happening there as well, and that's also going to be interesting to see. But my gut feeling is that when it comes to fetching data, a GraphQL API might be a nicer API than an RPC-style API. But it's still dependent on the use case.
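As an illustration of that ergonomic difference, consider a hypothetical schema with users, orders, and order items (all names are made up for this sketch). In an RPC style a client might call something like GetUser, then ListOrders, then fetch items per order; with GraphQL the whole shape comes back in one round trip:

```graphql
# One request fetches the user plus nested data that would otherwise
# take several sequential RPC calls between services.
query {
  user(id: "42") {
    name
    orders {
      id
      total
      items { sku quantity }
    }
  }
}
```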
Yes, definitely. And that's really interesting. I would also really like to see how mixing GraphQL and gRPC plays out.
So there's one more question from a learner. It's not related to GraphQL really, but they want to know: how long have you been developing software, and what are your favorite areas to explore in Computer Science?
Yeah, I started writing code and building stuff very late in my life. I didn't start very early; I started only in university. I know a lot of folks who started much earlier, even when they were children. So I started really late, but then I really liked it, and I made sure that I touched as many different areas of the stack as possible. When I was studying Computer Science, I was trying to get into all layers of the stack, whether it's operating systems, networks, databases, or programming language theory, right? So trying to cover as many areas, as many levels of abstraction, as possible, which was a lot of fun. And then I just started building lots of different types of applications. I also used to work as a consultant, running my own consulting firm, where I would build lots of different types of applications and help people modernize existing applications into a more modern style. That was where I started to get very hands-on with Kubernetes and Docker, which were really new at the time as well.
Experience and Focus on the Stack
I have experience working on various parts of the stack, including databases, application servers, and front-end applications. I find this area very interesting and believe there is a lot of potential for increasing productivity. While I used to focus more on infrastructure and DevOps, I have shifted my focus to the front half of the stack in recent years.
So that's kind of allowed me and given me the opportunity to work on a lot of different things. And, you know, my areas of interest end up being the upper half of the stack, in the sense of databases, application servers, and even front-end applications. That piece of the stack remains very interesting to me; I feel like there's a lot of stuff to do there to increase productivity. So that's the area that I'm closest to. I used to be fairly deep in the infra and Kubernetes and DevOps space as well, and I've moved more into this front half of the stack over the last few years.