Building GraphQL APIs on top of Ethereum with The Graph


The Graph is an indexing protocol for querying networks like Ethereum and IPFS. Anyone can build and publish open APIs, called subgraphs, making data easily accessible.

In this workshop you’ll learn how to build a subgraph that indexes NFT blockchain data from the Foundation smart contract. We’ll deploy the API, and learn how to perform queries to retrieve data using various types of data access patterns, implementing filters and sorting.

By the end of the workshop, you should understand how to build and deploy performant APIs to The Graph to index data from any smart contract deployed to Ethereum.

48 min
13 Dec, 2021


Video Summary and Transcription

This workshop focuses on building a GraphQL API on top of the Ethereum blockchain, using the Graph Protocol. The Graph provides a decentralized approach to data querying and storage, allowing developers to build decentralized APIs that read and index blockchain data. The workshop covers topics such as creating subgraphs, defining data models, handling events, and deploying and querying the subgraph. Participants learn how to interact with smart contracts and the Graph Node, and how to customize queries to retrieve specific information.

1. Introduction to Building a GraphQL API on Ethereum

Short description:

This workshop focuses on building a GraphQL API on top of the Ethereum blockchain. It explores the relevance of blockchain technology in various industries and offers opportunities for learning and growth. The workshop is divided into two parts: a presentation and a hands-on workshop using a GitHub repository. The speaker, Nader Dabit, introduces Edge and Node, a blockchain startup behind the Graph Protocol. The Graph is an indexing protocol for querying blockchain networks. It provides a decentralized approach to data querying and storage, unlike traditional databases.

Cool. So, yeah, this is going to be a workshop where we are building out a GraphQL API on top of a blockchain network, and the network that we're going to be building on top of is Ethereum. And this is probably going to be outside of the ballpark of most of the talks or the workshops given here. It's definitely a lot different than anything I had ever done when I started learning this stuff. But I think that it's becoming more and more relevant, I guess, with the emergence of a lot of the different jobs and opportunities that are out there.

And it's maybe a new area to explore that might accompany all of the existing knowledge that you have around GraphQL and how it might be used. So if you're interested in exploring opportunities in the blockchain space, or the Web3 space, or even with a lot of the traditional companies like Stripe, PayPal, and Meta adding blockchain stuff, this might become relevant to our day-to-day jobs, who knows. But with that being said, this is going to be broken up into two parts. So I'm going to be giving a presentation, and I'm also going to be doing a workshop, and the workshop I shared in the chat earlier. So you should see a link to that. It's a GitHub repo that we're going to be following. And this is it right here. I'll zoom in a little bit here.

Cool. So my name is Nader Dabit. I'm a developer relations engineer at Edge and Node. Edge and Node is a blockchain startup that was created by the team behind the Graph Protocol, which is what we're going to be talking about today. And we do a lot of different things. We support this protocol from the perspective of being the software engineers that build out and maintain the infrastructure for it. We also do Web3 tooling, Web3 open source, and we're creating a couple of Web3 applications that are going to be open source. We also do Web3 awareness and venture capital. So one of the things that I do a lot is just teaching people how to build stuff and sharing reference architectures and things like that.

So the Graph is what we're going to be using. And it's one of the many Web3 infrastructure protocols that are out there. And it is an indexing protocol for querying networks like Ethereum and IPFS. So what does that actually mean? Well, when we think of traditional databases, they are built in an efficient manner for us to query and to save data to. So with a traditional database like a SQL or NoSQL database, there are no constraints around decentralization.

2. Introduction to Reading Blockchain Data

Short description:

In a blockchain, data is written over time in blocks, and innovation focuses on write transactions. However, reading this data is often overlooked. Developers used to build centralized indexing servers to gather and store blockchain data, but this approach was resource-intensive and centralized. The Graph indexing protocol solves this problem by allowing developers to build decentralized APIs that read and index blockchain data. Examples of indexers in real life include search engines like Google and libraries that use indexing systems to organize and retrieve information. Subgraphs, which sit between smart contracts and user interfaces, enable developers to query and fetch different types of data for their applications.

So we know this is going to be our data, so we can store it and we can optimize for reads and writes. But in a blockchain, the nature of it is that we have this data that's being written over the course of time, you know, days, weeks, months and years, and it's being written in these blocks. So all of the innovation is typically happening around write transactions. So when you hear about a lot of the innovation that is happening, you often hear discussions around how many transactions per second does this protocol handle? How much does a transaction cost? How long does a transaction take to be processed? What is the block time, and things like that? But we don't often hear talk about how to read all of this data.

So what developers did in the past was that they would take these blockchain protocols, they would decide the data that they needed, and they would build out their own proprietary centralized indexing servers, where you're going and gathering all this data, saving it in a database, and opening up your own API endpoint on top of that. The problem with this was that it was very resource and engineering intensive, and it also kind of broke the entire idea and the security principles around decentralization, because the blockchain data is supposed to be the source of truth. So how do you actually make this decentralized if you're centralizing it in a database to read it? So the Graph is an indexing protocol that allows developers to build these APIs that read all this data and index it in a decentralized manner on a peer-to-peer network of nodes. And then developers, once they've deployed those, can open up their API endpoints for any other developer to build out front ends or other applications on top of them.

So to explain that a little more: a traditional setup might look something like this, where we have a database, we have built an API endpoint on top of that, maybe a serverless function or some type of server. We're going to be sending a request to this API. It's going to read the data, process it, and then bring that response back to the client. But again, we can't really do this when reading data directly from the chain, because the chain is kind of thought of as the database. So we don't really have this compute layer in the blockchain space. So what are some other examples of indexers in real life? Well, Google or any search engine is an indexer. When we want to find information on the web, we can't just go and view all the websites, because there are millions of websites out there. So how do we find the information that we need? Well, Google and other search engines have this idea of an indexing system where they crawl the entire Internet. They find the relevant data. They store it in their own centralized databases. And then they open up an API endpoint on top of that, either via some actual API that we can interact with from an application, or, in the case of Google, we just have a website that we can visit. And when we enter a term into the website, it goes to their databases, searches for what we're looking for, and then returns a link to the website that we would like to view. So Google is indexing and making it available for querying by us on the front end. Another example is a library. When we go to a library, we don't walk around the entire library for four hours looking for the book that we want. There's an indexing system, where the Dewey Decimal System or something like that might tell us exactly where to look for a book. And then we can go directly and just find that book. So in a similar sense, these APIs called subgraphs sit in between the smart contract and the user interface and allow developers to query and have the different types of data fetching that you might need for your typical application. So you have these different types of, I guess you could say, queries that you might hit your app with.

3. Decentralized Network and Query Market

Short description:

Subgraphs enable full-text search, relational data, filtering, sorting, and more. Once deployed, the API is accessible through a decentralized network of nodes. Applications query a decentralized network of indexers, which compete to provide the best service at the best price. Similar to paying for an AWS serverless function, API requests are paid per query. The network operates in a decentralized manner, with people cooperating to provide the utility.

So you might need full-text search or you might need relational data. You might need filtering, sorting, all that type of stuff. So subgraphs kind of enable that. And once you've deployed your API, then it's deployed to a decentralized network of nodes, and applications can start querying for this data without relying on a centralized service. Instead they're going to be hitting this query market that's comprised of a decentralized network of indexers. And all of these indexers are kind of competing against each other to provide the best service at the best price. So when we think of how we pay for an API request from something like a serverless function from AWS, it's similar to that. When you call one of these endpoints, you're paying per query, some fraction of a fraction of a cent or something like that, very similar to how you might pay for a Lambda API call. And the idea here is that it is kind of serverless. So how does the network operate? Well, instead of having a centralized service that's providing this utility for a profit, you have a decentralized network of people that are all cooperating to make this happen.

4. Graph Ecosystem and Building a Subgraph

Short description:

We have different participants in the graph ecosystem, including indexers, curators, delegators, and sub graph developers. The graph is used by various projects in the web3 DeFi ecosystem, serving over a billion queries per day. We'll be building a subgraph for an NFT marketplace and walking through the steps to get started.

And we have a couple of different participants. We have indexers, we have curators, we have delegators, and we have subgraph developers. And we are the subgraph developer today. So the indexers are kind of the people running the open-source Graph Node software. These are typically site reliability engineers, or, I guess, more likely DevOps people, who can deploy one of these indexers. And they are essentially going to be allowing developers to deploy their APIs to the network, and then they are going to be processing those APIs.

Subgraph developers are the developers that are building out the APIs, describing the data that they want to index. Curators are people that know which APIs might be useful, and they can signal using some of their GRT, which is the graph token, kind of like the utility token of the network. They can say, oh, this API looks like it's good, so I'm going to curate on this API. And then as a curator, I can share in some of the query fees that that subgraph gets. And then delegators act in a similar way, except with indexers. They can find an indexer that they think is trustworthy, that they want to support, and they can delegate their tokens to that indexer. So therefore, instead of having a big, centralized corporation maintaining this infrastructure and profiting off of it, you have people in the ecosystem and the community that are doing the same thing, and they are the ones earning the rewards for running this decentralized network. And this is how a lot of the different protocols work. You have these different network participants that are running the software infrastructure that we might typically have used in the past from a centralized place.

So who is using the Graph? The Graph is serving over a billion queries per day at the moment, and a lot of the Web3 DeFi ecosystem is using it. So projects like Uniswap, Foundation, and PoolTogether. All of these different projects that I have listed here are using the Graph, and it's kind of powering their UI. So we're going to be building this subgraph for an NFT marketplace, because I think visualizing these tokens with pictures and images and art is a pretty nice way to get started with it. But we're going to be walking through all of that step by step in our codebase in just a moment, in our tutorial and our workshop. But for now I'll walk through the steps of how that might look. So to get started, you would go to thegraph.com. And then we have the user interface where you can go ahead and define the name of the API or the subgraph that you would like to build. And then once you've defined this, you can go down to your computer, onto your CLI, and use the open-source Graph CLI to initialize a new boilerplate subgraph locally. And this is almost like if you wanted to create a new React application, you might use npx create-react-app or create-next-app.

5. Building a GraphQL API on Ethereum

Short description:

To start building a GraphQL API on the Ethereum blockchain, you would scaffold out boilerplate code and define your data model using a GraphQL schema. Additionally, you would specify the smart contracts to query and implement business logic to determine how data is stored. The GraphQL schema includes directives like @entity to make data queryable and @derivedFrom to create relationships. Deployment is done using the CLI, and testing can be done in the GraphQL playground. You can deploy to the decentralized network and publish for others to use. Once you have the GraphQL endpoint, you can define queries, create a client, and interact with the API.

You would do that. This just scaffolds out a boilerplate for you to start writing some code. Once you have that locally, you would then define your data model, which is your GraphQL schema. You would define the smart contracts that you want to query from. And then you would also have some business logic, which we're also going to write, to define how that data is stored.

So we have our data model, which is the GraphQL schema. We have the smart contracts we want to index. And then we have the other configurations that we're going to look at.

So what does a GraphQL schema look like in the Graph? Well, it looks just like any other GraphQL schema, with maybe one small difference: we have a couple of different directives that are native to the Graph. One is the @entity directive. And this just means this is a data type that we want to be made queryable and that we want to save into the network. So a user is going to be something that we can query for by just adding this @entity directive. And then we also have the @derivedFrom directive, which allows us to create relationships. So we want to be able to make a one-to-many relationship for users and tokens, and then we want to have a one-to-one relationship between the token and the user. And we have both of those declared here in this GraphQL schema.

And then when we're ready to deploy, we just use the CLI to run graph deploy. And then we can test it out in the user interface. We have an actual GraphQL playground, and we can just start hitting the endpoint there. And then we're also given the ability, if we want to at that point, to deploy this to the decentralized network. Because we kind of have a staging environment, you can think of it that way, where we can just test it out and even use it in our front end. And then we have the final step, if you wanted to publish to the network and allow other people to use it as well.

So you're just given a GraphQL endpoint, just like anything that you've probably used in the past, where you have the ability to choose which GraphQL client you want to work with. So we will give you this, and then you define your query, you create your client, and you start interacting with it. So nothing different than anything that we've probably used in the past with GraphQL. Once you have that endpoint, you're ready to start going.
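To make that concrete, here is a minimal sketch of what querying a subgraph endpoint from a TypeScript client could look like. The endpoint URL and the fields in the query are placeholders for illustration; you would substitute the endpoint shown in your own subgraph's dashboard and the fields from your own schema.

```typescript
// Minimal sketch: querying a subgraph's GraphQL endpoint with fetch.
// The URL below is a placeholder; use the endpoint from your subgraph dashboard.
const SUBGRAPH_URL =
  "https://api.thegraph.com/subgraphs/name/<github-user>/<subgraph-name>";

const query = /* GraphQL */ `
  {
    tokens(first: 5) {
      id
      contentURI
    }
  }
`;

async function fetchTokens(): Promise<void> {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  console.log(data.tokens);
}

fetchTokens();
```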

So with that being said, we are going to go ahead and start working on the workshop. And the workshop is at github.com/dabit3/building-a-subgraph-workshop. And I've also linked to it.

6. Building the Foundation NFT API

Short description:

We're going to build an NFT API for Foundation, a marketplace for digital art. Foundation has a developer portal with links to their smart contracts and documentation. Unlike traditional APIs, smart contracts are open and public, allowing anyone to build on top of them. By using their backend, we can create our own front end. Foundation also provides subgraphs for querying their data. To get started, we'll sign in to the Graph hosted service.

And it looks like Lara has also shared a link to it again there as well. This is where we're going to start from. And the NFT API we're going to build is for Foundation, and Foundation is a pretty cool marketplace for digital art. And you'll see that there's always some cool and sometimes weird stuff here. So this is what we're going to be querying for.

And another cool thing about Foundation is that they have a developer portal. So if you go to fnd.dev, and I'll share both of these links here, fnd.dev is the Foundation developer portal. And here you kind of see, if you're a developer and you wanted to build something on top of their smart contracts, they have a link to their contracts and their docs here. So these are their smart contracts.

And this is one interesting thing about working with blockchains and Web3: if an application is deployed, well, we're so used to thinking that a backend is proprietary and that we can't really use someone else's backend unless they explicitly create a really nice API for us to use. Maybe we might want to use a Twitter API, and it's very limited in the sense that we can't really do everything that Twitter can do, and then one day they might decide to just shut that API down as well, right? So it's not really something we can depend on to build a business on. But smart contracts are completely open and public. So if someone builds an application and deploys it, then anyone in the world can build on top of that and know that it's going to be there a year from now and five years from now. So we're essentially saying, oh, these people have built this smart contract, we want to use it, and we want to use it in this manner. You can also build out your own front end on top of this. So we're kind of taking their backend and we're using it. And you can do that with any blockchain application if you have their contract address, which is completely public and open in almost all circumstances, I would say. They also have their own subgraphs. So you can kind of look here: they have the link to their main subgraph and then some query examples and stuff like that. So with that being said, I just wanted to share that, but we're going to build out our own API on top of that. So to get started, I'm just going to go here to getting started, and it's going to say open the Graph hosted service, and I'm going to go ahead and sign in. Actually, I'm already signed in. So I might sign out and then sign back in.

7. Creating a New Subgraph

Short description:

To create a new subgraph, click the 'Add a Subgraph' button in the top right corner. Provide a name and subtitle for the subgraph, then click 'Create Subgraph'. Install the graph CLI and run 'graph init' to initialize a new subgraph. You can run 'graph help' to see the available commands. The 'graph init' command will guide you through the process of initializing a new subgraph, including selecting the Ethereum protocol and deployment options.

All right. And what we want to do after we've signed in and created an account is go ahead and create a new subgraph. So here in the top right corner, we have this button for Add a Subgraph. So I'm going to go ahead and click that. Actually, I might see if I can't make this a little smaller so people can see it better. Okay, cool. All right. So I'm going to click Add a Subgraph, and here we can give the subgraph a name and a subtitle. Okay. So I'm going to call this subgraph foundation subgraph or something like that. You can call it whatever you'd like. Looks like I already have one with that name on my own, so I'll call it foundation subgraph two. That's because I've done this workshop before. And then I'll just make the subtitle something like foundation subgraph. Okay. So we've put in just the name and the subtitle, and we can go ahead and click Create Subgraph. I'm going to click hide just so it doesn't add it to the subgraph explorer, because I've created so many, but it's up to you if you'd like to have that or not. All right, so once we've done that, we now have our boilerplate UI that we can now push to, essentially, so we can now go down to our CLI and start building out the subgraph locally. So to do that, I'm going to open up my command line and I am going to npm install the Graph CLI. So I'm just going to copy and paste that. I already have this installed; I'm just going to go ahead and install it again there, just to show everyone that might be following along. Okay, cool. So once the Graph CLI is installed, you should be able to just run graph, and here you should have some output. And if I do graph help, you kind of see some of the different commands you can run. And we're going to be starting off with init, and init is going to initialize a new subgraph. Not only that, but there are a couple of ways that you can run graph init. So for instance, I can just run graph init by itself. To follow along here, we're going to be doing a more in-depth command. But if we just do graph init, it will walk us through all these different questions: which protocol (Ethereum is the Ethereum Virtual Machine), whether we might deploy to the hosted service or the Subgraph Studio.

8. Deploying and Indexing Events in Subgraph Studio

Short description:

The Subgraph Studio targets the decentralized network, while the hosted service acts as a sandbox for testing. We can provide a name and answer questions to create a subgraph. The 'graph init' command is used to specify the from-contract, network, contract-name, and index-events flags. Indexed events let us subscribe to events in smart contracts, similar to GraphQL subscriptions. By indexing events, we can automatically generate boilerplate code and gain insights into the contract's events.

So the Subgraph Studio is for the decentralized network, whereas this is going to be deployed to the hosted service, because it's kind of like the sandbox that we can test things out with. So we can give the subgraph a name and it'll ask us all these questions. We can also just go ahead and provide answers to a lot of those questions up front. So that's what we're going to be doing with the command that we're going to be working with right here, where we say graph init, and then we pass in a couple of different flags.

So the first thing we want to pass in is the from-contract. This is going to be the Foundation smart contract address, so we say --from-contract, passing in the address. We want to pass in the network, because there are dozens of networks that we support on the Ethereum Virtual Machine. We support Polygon, Avalanche, Celo, you know, Arbitrum, Optimism, all these different networks. So we want to specify Ethereum mainnet. We also can pass in the contract name, because a single deployment might hold multiple contracts that are part of that single codebase. I believe this one has not only the token, but other different contracts within it. The one that we want to be working with is the Token contract. And then we have this --index-events flag. And this is going to be brand new for sure to anyone that's not really familiar with how blockchain works or how Ethereum works. But let me describe this so it makes a little more sense. We're going to be diving into this a little bit more as well, and hopefully by the end of this, you'll have a good idea around what events are.

So when we think of a GraphQL API, for example, a good example of what this might relate to from what we're used to is that in a GraphQL API we have this idea of a subscription, right? So we can go ahead and set up a GraphQL subscription, so when a mutation is made, we can subscribe to those events, and whenever someone creates a new item in our database, that event is fired. In Ethereum, there's something very similar to that: in your smart contract, you can create events from within the contract whenever any action is taken. So if someone wants to say, when a new token is created, we want to emit an event, and in that event we want to pass in arguments, similar to a GraphQL subscription where you kind of have the mutation arguments, we can do that. So we essentially are going to say, okay, in this contract, we know there are events, and we want to go ahead and automatically say that we care about those events and we want to index them. And in doing this, we are going to be given some nice boilerplate code that will help us out down the road. And that's kind of all I will say about it right now. But let's take a quick look at the Foundation smart contracts real quick, and maybe look at what these events look like.

9. Analyzing Contract Code and Events

Short description:

When using Etherscan, you can find the code for a smart contract by providing its address. The main contract, also known as the proxy, communicates with other contracts. The ERC-721 contract contains the Transfer event, which is defined and emitted in the code. Understanding these events is crucial for working with the contract.

So if I go to Etherscan, which is how you can look at different contract code, and I go to the contract here, when I go to the proxy, which is essentially, this contract is speaking to another contract, but this is kind of the main contract. And I'm just going to look for emit. We have an event that's happening right here where we have this ERC-721 contract, and we have this emit Transfer. And if we have an emit Transfer, we probably have an event Transfer. And the event Transfer is kind of describing this event: you define the event and then you emit the event. So these are the events that we're working with. Another cool note is that with Etherscan, if you have the address, you can typically find the code for the smart contract, which is essentially the codebase that you're working with.

10. Creating and Initializing the Subgraph

Short description:

We copy and paste the required command into the graph CLI. We select the Ethereum protocol and the hosted service, and provide the subgraph name. The directory and network are set automatically based on the subgraph name. After a short wait, a boilerplate is generated. We navigate to the generated directory and open it in a text editor.

Okay, cool. So with that being said, let's go ahead and just copy all this right here, and we're going to paste it here. So we have our Graph CLI installed. We're going to say graph init --from-contract with the contract address, --network mainnet, --contract-name Token, and --index-events. And here we'll choose the Ethereum protocol and the hosted service. And I believe I have all this stuff here. So, hosted service, then the subgraph name. This is going to be a combination of your username slash the subgraph name that you defined earlier. So if I go to the dashboard here and I just copy the slug here, which is dabit3 slash foundation subgraph two, then that should work. And then the directory should be the name of the subgraph, and then the network should automatically be set to mainnet, the contract address should also automatically be set, and then the contract name should also be set. So we should just be able to set the subgraph name, and then everything else should automatically be defined for us. And then from here, we just wait a couple of seconds. And what we should have now is a boilerplate that's been generated for us. So I'm going to look in my directory, and I see I have this foundation subgraph two. I'll go ahead and change into that directory and open it up in my text editor.

11. Exploring Subgraph Parts and Defining Data Model

Short description:

We have our subgraph created and a lot of generated code. Let's explore the three main parts: the GraphQL schema, the subgraph.yaml file, and the mappings. The GraphQL schema defines the data model, while the subgraph.yaml file contains the configuration for deployment. In the GraphQL schema, we define entities such as tokens and users, with relationships between them. With this data model, we can describe our API in about 16 lines of code, including unique identifiers for tokens.

Okay, so we have our subgraph created, and we have a lot of code that's been already generated for us. But let's go look at the three main parts that make up this subgraph. One is the GraphQL schema, and this is going to be our data model. We can go ahead and just delete all of that for now; we don't need any of that code. We also have our subgraph.yaml. So if you've ever written any infrastructure as code on something like AWS, or really any configuration in YAML, you probably understand what YAML typically does. It's typically some type of configuration, so it could be configuration as code. In our case, it's almost like infrastructure as code in the sense that when we deploy this to the network, the indexer reads this file to know what to do with this codebase. So the subgraph.yaml holds all the configuration that you're going to need. We're given some boilerplate, so I might go ahead and just delete those entities, because we're not going to be using those, and we're going to be deleting a lot of these event handlers as well. But let's not worry about that right now. What we want to do next is go back to our workshop, and I'm going to jump now to the GraphQL schema. This is going to be where we define the entities that we want to store and make queryable. So when we look at this website for Foundation, what might the data model that we want look like? Well, we automatically see these entities; this, you could think of as an entity. This is a token that has a picture, a description, a price, and all this other metadata around it. So we might say, okay, for each of these images or these pieces of art, this is going to be a token, and then the token has different properties. So for us, we can just go ahead and say, okay, we want a type of Token, and then we want the different fields on that token to be the different metadata associated with it that we want to save. And then we also might want to create a relationship between tokens and users, because we might want to be able to click on a user like this and see all the different tokens that they have created, or we might want to click on a user and see the tokens that they own. So that's what we're also going to do: we're going to have a User type, and a user has a relationship between tokens owned and tokens created. So using this data model, which is not all that big, we can go ahead and have a data model for our API that's about 16 lines of code. The token is going to have an ID; that's going to be like a unique identifier.

12. Token and User Entities

Short description:

The token will have a unique ID, a token ID generated by the smart contract, a content URI for the image, an IPFS address, a name, and a created-at timestamp. It will also have relationships to its creator and its owner. The user will be identified by their wallet address, and a relationship can be created between the tokens a user owns and the tokens they've created.

It's also going to have a token ID; the token ID is going to be the ID that is generated by the smart contract. So if someone mints a new token, the contract automatically assigns that token its ID. The content URI is going to be the URI for the actual image. And then we also have a token IPFS path, which is going to be the IPFS address for that token as well. We have a token name, and then we have the created-at timestamp, which is going to be the block time at which this token was created. And then we have the relationships to the creator and the owner. And finally, we have the user, which is pretty basic. All we care about, really, is the ID. Well, it's not that it's the only thing we care about; it's the only thing we have to know about a user, because all the user is essentially going to be giving us when they authenticate is their wallet address, and that's going to be their ID. And then we also can create a relationship between the tokens that a user has purchased and the tokens that they've created. So we can go ahead and save this and go back to our subgraph.yaml. And now we can fill out our entities. So we have a Token entity and a User entity, so I can just say Token and User.
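For reference, here is a sketch of roughly what that data model could look like. In a real subgraph it lives in a schema.graphql file rather than a TypeScript module; it is embedded in a string here only for readability, and the field names follow the description above, so they may differ slightly from the workshop repository.

```typescript
// Sketch of the ~16-line data model described above, embedded as a string.
// In an actual subgraph this belongs in schema.graphql, not a .ts file.
export const schema = /* GraphQL */ `
  type Token @entity {
    id: ID!
    tokenID: BigInt!
    contentURI: String!
    tokenIPFSPath: String
    name: String!
    createdAtTimestamp: BigInt!
    creator: User!
    owner: User!
  }

  type User @entity {
    id: ID!
    tokens: [Token!]! @derivedFrom(field: "owner")
    created: [Token!]! @derivedFrom(field: "creator")
  }
`;
```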

13. Modifying Configuration and Event Handlers

Short description:

We need to update our event handlers, address, and start block for the subgraph deployment. The start block is set to the block number at which the contract was deployed, allowing us to skip all earlier blocks. The contract address is updated to point to a different contract, because the deployment uses an upgradable proxy pattern. The event handlers we care about are TokenIPFSPathUpdated and Transfer.

The next thing that we want to do is we might want to go ahead and continue modifying our configuration. So we've done the entities of token and the user. We now need to update our event handlers as well as our address and our start block. So let's go ahead and jump down here and then we'll do the event handlers in just a moment.

So when we deploy this subgraph, it's going to go to this smart contract here and find all the transactions that have happened. Actually, it's going to go to the original contract address, which I believe is here, and it's going to start looking at all the transactions that have happened for this smart contract. But it doesn't know where to start on the blockchain: it might not realize that it doesn't need to start from the very first transaction that ever happened and read everything that's happened ever since then. If you wanted to do that, you would just deploy it as is. But you instead can say, okay, when this contract was deployed, it was at this block number, and we want to just start from that block number, because we don't really care about any data that was stored before that, since our contract was not there before that. So we can define the start block by just setting the block number here. And then, instead of going all the way back, it's just going to start at the block from which this contract was deployed, or from wherever you set the start block to be.

The other thing we want to do is update the contract address to this address here. And that is because when we went to this contract a moment ago and we looked at the actual contract address, we saw that this was a proxy contract. And this is, I would say, a little more advanced than most of the rest of the workshop, in the sense that it goes into how you might make an upgradable smart contract. Instead of having the contract that you deployed be the source of truth, you can essentially have it pointing to another contract and then update the location of that contract, so that when you make updates, you can be pointing to another contract. That's kind of what this contract is doing. So when we initialized it, we wanted to index the events off of one contract, but when we deploy it, we want to actually be indexing a different contract. That's why we're updating this address. And then finally, the only two event handlers that we care about are these two: TokenIPFSPathUpdated and Transfer. So I'm going to go back to this, and I'm going to find those two events, which are going to be these two at the very bottom. So TokenIPFSPathUpdated and Transfer, and I'm going to delete all the rest of these. So now we have about 27 lines of code. So let's talk about these event handlers.

14. Handling Events and Saving Data

Short description:

In the smart contract, events are emitted, much like the events a GraphQL API with subscriptions emits. We handle these events by calling functions that live in our subgraph, similar to how a client application would handle a subscription event. The Transfer event is fired when a new token is created or transferred between parties; the handleTransfer function is called when this event is triggered. Another function we care about is handleTokenIPFSPathUpdated, which handles updates to the IPFS path of a token. These event handlers are written in the src/mapping.ts file, where we write the business logic. The Graph CLI can generate helpful code for us by introspecting our GraphQL schema. The data we describe is saved in the Graph Node itself, which acts as a server with a database. We save information to this node.

So I mentioned that in the smart contract we have these events that are emitted. In a GraphQL API that has subscriptions, you have these subscriptions that are listening to events that are emitted. And then on your client application, you're going to handle that event: you might listen to that event, and when that event happens, you might do something locally. We're kind of doing something like that here.

We're basically saying, when this event is triggered, we want to listen for that and handle it by calling another function. But the function is actually living in our API, in our subgraph. So we have two events that we're going to care about for our API. One is the Transfer event, and this event gets emitted when someone creates a new token. It also gets emitted when someone transfers a token from one party to another. So when you create a new token, this will be fired, and then when that person moves it to another party, it will be fired. And when that event gets triggered, we then essentially are just calling this handleTransfer function that we'll take a look at in just a moment.

And then the only other function that we care about is the handleTokenIPFSPathUpdated function. That means if someone has a token and they change the IPFS path, we want to be able to handle that update and make that change locally. So where are these two event handlers living? They're going to be living in this src/mapping.ts file, and there's a lot of boilerplate code generated for us here that we can go ahead and delete. And this is where we're going to be essentially writing the business logic to handle these two events. But before we do that, we can actually use the Graph CLI to generate some code that will help us out there. So if I go back to the terminal and I just run graph codegen, this will introspect our GraphQL schema and it will generate some code that will be helpful for us. What is that code going to do? Well, there are two things that you're going to be doing in these mappings. When you think about what we're trying to accomplish here, we're saying, okay, we have this smart contract, we're describing the data that we want indexed, and we're deploying this to a network of nodes, and that network is going to read this subgraph. It's going to go through and save the data that we want saved, and then we're going to be able to query that from our front end. Well, where is this data being saved? It's being saved in the actual node itself. So a node, you can think of it as kind of like someone running a server, but it's being run in multiple places. In this server, we might have a database, and in this database we want to save information. So one of the things that you're going to be doing here is just saving information to that node.

15. Interacting with Smart Contract and Graph Node

Short description:

We call the token contract API to ask for information using the token ID. The generated Token API allows us to interact with the smart contract, while the generated schema APIs enable us to communicate with the Graph Node. In the handleTokenIPFSPathUpdated function, we load the token from the Graph Node. If the token exists, we update its IPFS path and save the data back to the database.

The other thing that you might be doing from these functions is calling back to the smart contract, because you might need additional data from the smart contract that isn't coming in from the arguments. So when we emit an event, we might have some information about what's going on there, but we might not have all the information. So we might need to talk to the Graph Node to read and write data, but also to talk to the smart contract itself to read data. With that being said, that's just giving an idea of what we're about to do.

Let's go ahead and write some of that code, and that's going to be here in the AssemblyScript mappings. So I'm going to jump here and import the things that we need. These two events at the top, the TokenIPFSPathUpdated event and the Transfer event, are really just types that are used for type safety; the CLI generates types for us to use. And the only actual interface that we're going to be interacting with here to make any type of API call is going to be this token contract, and this basically gives us a binding that allows us to speak to the smart contract to read information. So we can say we want to ask the smart contract for information using some token ID or something like that. So this generated/Token import is an API for us to talk to the smart contract, and then the generated/schema imports are APIs that allow us to talk to the Graph Node. From the Graph Node, we might read and write data, and that's what these two APIs are going to allow us to do. So we might say token.save or user.save, and so on, or Token.load, and so on, which we're about to do.

The first function we might want to work with is going to be the smaller of the two. So handleTokenIPFSPathUpdated, this is going to take in an event. What we're going to do first is say we want to load the token from the Graph Node. This checks whether this token already exists: if it does exist, then we want to continue, and if it does not exist, we just return from this function. So we say Token.load, and on this event we have this params object that has a token ID, so we're essentially getting the token ID from the TokenIPFSPathUpdated event. And if we go here to subgraph.yaml, we see that we have the first argument, which is an integer, and this is going to be the token ID. So that's really the only thing we care about, the token ID. If the token does not exist, we return. If it does exist, we then just update the token IPFS path with the event.params.tokenIPFSPath that is coming in through this event. So we get the token ID, we query for that token, we then update the token IPFS path, and then we call token.save. And this is saving that data back to the database.
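As a reference point, here is a sketch of what that handler might look like in src/mapping.ts. It is written from the description above rather than copied from the repository, so the exact generated import paths, type names, and event parameter names depend on what graph codegen produced from your ABI and schema and may differ slightly.

```typescript
// Sketch only: names of generated types and params come from graph codegen output.
import {
  TokenIPFSPathUpdated as TokenIPFSPathUpdatedEvent,
} from "../generated/Token/Token";
import { Token } from "../generated/schema";

export function handleTokenIPFSPathUpdated(event: TokenIPFSPathUpdatedEvent): void {
  // Load the token by the contract-assigned token ID; bail out if it was never indexed.
  let token = Token.load(event.params.tokenId.toString());
  if (!token) {
    return;
  }
  // Update the IPFS path that came in on the event and write it back to the store.
  token.tokenIPFSPath = event.params.tokenIPFSPath;
  token.save();
}
```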

16. Handling Token Transfer Events

Short description:

We have now updated our token. The handle transfer function is called when a new token is minted or transferred. We load the token using the token ID from the event. If the token does not exist, we create it by calling a new token with the ID. We add the creator and token ID properties. We also need more metadata about the token, such as content URI, token IPFS path, name, and created at timestamp. This information comes from the smart contract. We bind to the smart contract and retrieve the required information. If there is no token, we create one and set the owner. We save the token to the graph node. If there is no user, we create one. These events are defined in the subgraph YAML and handle events fired from the smart contract.

And then we have now updated our token. The only other function that we have is the handleTransfer function, so go ahead and copy that. This is the function that I mentioned gets called when a new token is minted, and then also when someone transfers a token to someone else. So let's say they sell the token, or they just move it somewhere else, or they give it to someone else. We do something very similar here. We first just load the token using the token ID coming in from the event, and we check if the token does not exist. If the token does not exist, this means that this is a newly minted token, so we need to go ahead and create it. So we call new Token, passing in the ID. This creates a new token. We then add the creator and the token ID properties, which are also coming in from the event, and that's really the only information that we can use up to this point. We also, though, need more metadata about this token that isn't available in the event, because if we go to this event, we see that we only have the from address, the to address, and the token ID, but we also want to have other information about the token. We might want the content URI, the token IPFS path, the name, and the created-at timestamp. So where does that information come from? Well, that comes from the smart contract itself. So we're now going to bind to the smart contract here, and then we're going to start getting information. We want the content URI, the token IPFS path, and the name of the token. And then the created-at timestamp actually comes off of event.block.timestamp, which is available in every single transaction that happens. So that's a lot, but we basically just said, okay, if there isn't a token, we want to go ahead and create it, we're going to set the owner, and we're going to save that to the Graph Node. And then we'll continue on, and if there is a user, we do nothing; if there isn't a user yet, we go ahead and create the user. So that is the handleTransfer event handler. We have these two handlers, they're both defined in our subgraph.yaml, and they are both handling events that are being fired from the smart contract. So I think that's it.
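And here is a rough sketch of the handleTransfer logic just described, again written from the walkthrough above rather than copied from the repository. The generated type names and the contract getter names (for example getTokenIPFSPath) are assumptions to verify against your own generated bindings and the contract's ABI.

```typescript
// Sketch only: generated names and contract getters below are assumptions.
import { Transfer as TransferEvent, Token as TokenContract } from "../generated/Token/Token";
import { Token, User } from "../generated/schema";

export function handleTransfer(event: TransferEvent): void {
  let token = Token.load(event.params.tokenId.toString());
  if (!token) {
    // No existing entity means this transfer is the mint, so create the token.
    token = new Token(event.params.tokenId.toString());
    token.creator = event.params.to.toHexString();
    token.tokenID = event.params.tokenId;

    // Metadata not present on the event is read back from the contract itself.
    let tokenContract = TokenContract.bind(event.address);
    token.contentURI = tokenContract.tokenURI(event.params.tokenId);
    token.tokenIPFSPath = tokenContract.getTokenIPFSPath(event.params.tokenId);
    token.name = tokenContract.name();
    token.createdAtTimestamp = event.block.timestamp;
  }

  // On every transfer (including the mint) the recipient becomes the owner.
  token.owner = event.params.to.toHexString();
  token.save();

  // Create the user entity the first time we see this wallet address.
  let user = User.load(event.params.to.toHexString());
  if (!user) {
    user = new User(event.params.to.toHexString());
    user.save();
  }
}
```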

17. Deploying and Querying the Subgraph

Short description:

After saving and testing the code, we can run 'graph build' to ensure the build is successful. Then, with the access token, we authenticate and deploy the subgraph using 'yarn deploy'. The syncing process starts automatically, and once completed, we can run queries on the data. We can customize the query order and direction to retrieve specific information.

So if we go ahead and save that, we should be able to test this out. From there, we've written all the code that we need to make this work. So we can run a build to test that everything compiles by running graph build. And if this works, we should be able to go ahead and deploy.

So the build works successfully. I'm going to go back now to the dashboard here; I'm going to need this access token in just a moment. So I'm going to run graph auth and then this endpoint, and then I need to paste in my access token. So I'm just going to say graph auth, the endpoint, and the token here, and this should authenticate me to be able to deploy there. So now I should be able to run just yarn deploy, and this should run graph build and then deploy our subgraph. And then what is going to happen is, once this is deployed, the indexing should start happening immediately.

So you now automatically see that we have the syncing, and it looks like it's already synced, which is pretty quick. So, a hundred percent synced, 13 million blocks, 159,000 entities, and that was very quick, actually. Okay. So all of this data should now be synced. What we can now do is start running queries. So I might say I want to get the tokens, and there we go, we see that the data is coming back. We might want to do this a little differently, though, because we don't want to get it in that order. We might say order by ID and then the order direction is going to be descending. And there we get the IDs coming back in descending order.
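For reference, here is a sketch of what that sorted query might look like. The field names assume the schema sketched earlier, and you could paste the string into the playground or send it from the client shown above.

```typescript
// Sketch of the sorted query described above (field names assume the schema sketched earlier).
const sortedTokensQuery = /* GraphQL */ `
  {
    tokens(orderBy: id, orderDirection: desc, first: 10) {
      id
      name
      contentURI
    }
  }
`;
```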
