Go Find What We May Have Missed!


Coming into software with an exploratory testing mindset is like coming to a multi-layer canvas with lots of information and an open-ended task: find what we may have missed! This is the assignment for all of us on software teams in our quest for quality.


Framing the search for how our system falls short of expectations is easier when we are able to see software from its users' perspective. However, useful tests aren't a collection of end-to-end tests we automate; the great tests we leave behind decompose the testing problem differently. In this talk, we learn about using architecture as a filter for decomposing tests, and look at an example of taking control over API responses to test a React frontend.


Users don't know or care whether the problem is in the frontend or in the services your team provided; if the software fails to meet their expectations, it fails. But you care. Granularity of feedback matters. Recognizing the same problems in incomplete scope - half-done features, or only in the frontend or the APIs - is a skill set the software industry needs to be building.

27 min
18 Nov, 2021

Video Summary and Transcription

Maaret Pyhäjärvi, a principal test engineer at Vaisala, emphasizes the importance of balancing different types of testing to build better teams. Testing the application with different locations reveals potential issues with its behavior. The speaker highlights the significance of testing integrations and dependencies, including libraries and operating systems. They prefer code-oriented tools like Requests and Python for API testing. Exploratory testing is the only type of testing they perform, and they encourage others to participate in it as well.


1. Introduction to Testing at Vaisala

Short description:

Hi, my name is Maaret Pyhäjärvi, and for the last year and a half out of my 25-year career, I've worked at Vaisala as a principal test engineer. I go and assess the results that we're able to provide by staying within the team as one of the team members for a while. I repeat this from team to team, usually spending six to even twelve months within one team, with the idea of leaving things better after I'm gone and helping the teams grow in the way that they do testing.

Hi, my name is Maaret Pyhäjärvi, and for the last year and a half out of my 25-year career, I've worked at Vaisala as a principal test engineer. A nice way for me to frame what I do for my work is this idea that I'm quality control for the testing that is happening in our teams. I go and assess the results that we're able to provide by staying within the team as one of the team members for a while.

A lot of times I frame my assignment as: go find at least some of the things others have missed. I repeat this from team to team, usually spending six to even twelve months within one team, with the idea of leaving things better after I'm gone and helping the teams grow in the way that they do testing. I have done this many times over my career, for various products and teams, one of them being this particular example here on the slide, where I asked the developer of an API-based testing tool for permission to test their application and use it as training material in some of the conference talks that I've done. They later gave a podcast interview saying I basically destroyed it in about an hour and a half. This is a common experience that developers share about me, usually with a smile on their face, hopefully at least.

And it is usually also because by that time we have had the conversation about how, you know, I didn't destroy the application. The only thing I might have destroyed is an illusion that was never the reality. So you might be very proud of your application. You might already be doing a good job. And there still might be things that you're missing. And your customers might not be telling you.

2. Testing: Artifact Creation and Performance

Short description:

In testing, there are two kinds: artifact creation and performance. Artifact creation provides specification, feedback, regression, and granularity. Performance testing offers guidance, serendipity, and helps discover unexpected problems. To build better teams, a balance of different test types is needed, including faking components and testing with real integrations. The system should be responsive, easy to use, and secure. An example application demonstrates the use of front-end and back-end testing with mock responses.

So in all of this work that I've done, I've summed it up as a recipe for better teams. How do we go about finding the things we're missing? We start with two kinds of testing. There's the testing that is framed as artifact creation, whether it creates automation or checklists for repeatable tests for later. And then we have the other kind, testing as a performance, kind of like improvisational theater, where you look at the application. The application sort of speaks to you. It's like your external imagination, and it makes you more creative. And whatever you learn, you can then turn into the artifact-creation part of the testing. You need both of these sides. They give you very different things. The artifact-creation style gives you specification, feedback, regression, and my absolute favorite, granularity: knowing, based on the results that you're getting, which change broke things and, from the logs, what is now broken, without having to spend multiple hours or even days analyzing your results before getting to the actual fixes. These are things that you can get from the artifact style of testing.

The performance style gives you somewhat vaguer things in many ways, but it also gives you guidance. You know the direction: are we heading in a better direction? Is this good? Is there still more feedback, more conversations to be had? Is there something where we need to build our understanding and improve the models? And my, again, absolute favorite: serendipity, the lucky accident, meaning that some of the problems we need to find are interesting combinations of all kinds of things we didn't think of, and we just need to give it time. There's a saying, a quote by Arnold Palmer, the famous golfer, that it's not that he's lucky, it's that he has been practicing. That's the general idea with this style of testing. With those two framing the sides, we need something in the middle for the better teams.

And the thing we need in the middle is, of course, different kinds of tests. Whether it comes from the point of view of creating artifacts, or from the point of view of performing testing and thinking about what we might still be missing, we will probably test against the different levels of interfaces available in the system, and try making a balanced set of all the different shapes of tests, be they small, medium, large; unit, service, UI; or unit, integration, system, end-to-end, whichever words you end up wanting to use. You probably also will not have just these different kinds of tests, where you're basically just growing the scope. In those better teams, you would probably also like to have some ways of faking: mocks, stubs, spies, fakes, whatever you want to call them. Ways of faking either the service responses, the data, or any of the components that you want to leave out of the testing scenario, so that you can have focused feedback. But you also want to test with the real integrations, again because of serendipity: you are most likely going to see something different there, and that is what your customer will end up using anyway, not the mocks that you have created. You'll probably have a foundation of functionality, but also the three other key things. The system needs to respond to the customer's requests fast enough. It needs to be easy enough to figure out, so that you know what to do with the application. And the disfavored users should have mechanisms keeping them away from your system, so that whatever business purpose the system serves, the information is also safe from other people causing you harm. So this is the frame that I think we need for the better teams.
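To make the mock-versus-real distinction concrete, here is a minimal sketch in Python, assuming the Requests and `responses` libraries and the public weather endpoint the demo app integrates with; the API key is a placeholder, and this is one way to stub a service response in-process, not the setup used in the talk's demo app.

```python
import requests
import responses  # pip install responses; stubs HTTP calls made through requests

WEATHER_URL = "https://api.openweathermap.org/data/2.5/weather"
API_KEY = "YOUR_API_KEY"  # placeholder; the real service needs a registered key

@responses.activate
def test_client_handles_thunderstorm_payload():
    # Focused feedback: we control the response, so a failure points at our own code,
    # not at the network or the third-party service.
    responses.add(responses.GET, WEATHER_URL,
                  json={"weather": [{"main": "Thunderstorm", "icon": "11d"}]}, status=200)
    body = requests.get(WEATHER_URL, params={"q": "Helsinki"}).json()
    assert body["weather"][0]["main"] == "Thunderstorm"

def test_real_integration_still_answers():
    # Serendipity: the same call against the real integration can surprise you,
    # and the real service is what customers use anyway, not our mocks.
    r = requests.get(WEATHER_URL, params={"q": "Helsinki", "appid": API_KEY})
    assert r.status_code == 200
```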

And I wanted to give you a small example of what applying something like this typically looks like on an application. I took a small example application which was created basically for showing the idea that you can have a front-end, you can have a back-end, and you can mock the back-end responses. So there's a very simple React app, and the possibility of switching between the actual and the mock server is already in the user interface.

3. Testing the Application with Different Locations

Short description:

The app is clearly created for testing purposes, relying on attended, performance-style testing. It uses the OpenWeatherMap API and provides various responses based on city names, IDs, coordinates, and zip codes. Testing with different locations reveals some unexpected results and potential issues with the application's behavior.

The app is clearly created for testing purposes in the sense that whenever I take it out from my notes and start running it, the first few minutes go into updating whatever dependencies there are. And after the dependencies have been updated, I usually have to think for a moment about whether I have any test cases that I could run. I actually do not have any, so I'm very much relying on the manual, attended-performance type of testing with this application.

So I might start by reading about the application, figuring out that it's actually using the OpenWeatherMap API for the current weather, and then summarizing things for that particular API. Even in this outline, I can notice it does things around city names, city IDs, coordinates, and zip codes. It supposedly returns me different kinds of responses. There's something about formats and units of measurement. The outline alone already creates a vast number of ideas of what I could do. With a new application, I would probably want to start by testing a basic positive scenario, something that is fairly easy for me to verify. So just checking the location where I am, and seeing the results of that right about now, would give me results that I can verify just by opening my window. And yes, sure enough, it shows exactly what I was expecting the case to be for Helsinki, Finland.
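To make that outline concrete, here is a sketch of the query shapes the OpenWeatherMap current-weather API accepts, assuming the public endpoint; the API key and the city ID value are placeholders:

```python
import requests

API = "https://api.openweathermap.org/data/2.5/weather"
KEY = "YOUR_API_KEY"  # placeholder; OpenWeatherMap requires a registered key

# The same location can be queried in several shapes:
requests.get(API, params={"q": "Helsinki,fi", "appid": KEY})          # city name (+ country code)
requests.get(API, params={"id": 658225, "appid": KEY})                # city ID from OWM's city list (illustrative)
requests.get(API, params={"lat": 60.17, "lon": 24.94, "appid": KEY})  # coordinates
requests.get(API, params={"zip": "00100,fi", "appid": KEY})           # zip/postal code + country

# Each shape, multiplied by formats (mode) and units of measurement (units),
# already generates a vast number of test ideas.
```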

Similarly, I can imagine other locations, Sydney being one of the locations that is about the furthest I have ever traveled from my own home. So it feels like a perfect place to go and test. But compared to my own location, I'm not actually getting much of a difference here: broken clouds, few clouds, not relevantly different. I was hoping to see sunshine and a whole different icon here in the user interface, but no luck with that one. But again, generating more ideas, I realize that I could, and have before, written Sydney with a typo. And I'm surprised that even when I typo things, trying out Sydney with many similar small but usual typos, the API actually gives me the right answer even though I am not very precise in my request. This makes me, of course, a little curious, but I just make a mental note of it.

Remembering the API description, I recall that I can also query, at least from the API, based on a postal code, the location code. And I first want to just try out my own location, this very neighborhood that I am in in Finland. And to my surprise, it locates me not in Helsinki, Finland, but in Vantaa, Finland. That's considered a different city, so not quite okay. And also, I was supposed to be entering cities; this doesn't seem like a city to me. So I'm getting some ideas on what I can do with the application. Similarly, thinking of postal codes that we all know, 90210 from the fairly famous series back in the day, I'm expecting to see Beverly Hills, and I'm again disappointed, noticing that maybe there is a reason why this should only allow me to enter a city, so that I don't always get disappointed when I enter the postal code of an area. I definitely wasn't expecting to end up in Mexico. Having looked at the application interface, I go through a little bit of the specification, and I find nice ideas about different kinds of weather.
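One possible explanation worth exploring here: per OpenWeatherMap's documentation, a zip query takes an optional country code and defaults to the US when it is omitted, while the same digits typed into a city-name query are matched against place names instead. A hedged sketch of the two query shapes:

```python
import requests

API = "https://api.openweathermap.org/data/2.5/weather"
KEY = "YOUR_API_KEY"  # placeholder

# Explicit zip query: the digits are interpreted as a postal code within a country.
requests.get(API, params={"zip": "90210,us", "appid": KEY})  # Beverly Hills, as hoped

# The same digits sent as a city-name query are matched against place names,
# which is one plausible way to end up somewhere unexpected, like Mexico.
requests.get(API, params={"q": "90210", "appid": KEY})
```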

4. Testing with Mock Data and Integration

Short description:

The application relies on current weather data, making it difficult to test certain features. To overcome this, the speaker suggests taking control over the data and using mocking techniques. They provide an example of setting up a mock server and modifying the weather data to test specific scenarios. Through this process, they discover issues with icons, sunrise and sunset times, and temperature conversions. The speaker emphasizes the importance of testing all integrations and dependencies, including libraries, operating systems, and potential coexisting components.

Like, this could be a thunderstorm, there could be a drizzle, there could even be a hurricane going on. And since the application is very much about current weather, me trying to guess right now where in the world there might be a hurricane is not going to work very well. And similarly, with me being here in Finland in the middle of the night with clouds, I can't really verify the daytime and nighttime icons that I could find in the specifications, and whether those work correctly, with the data that I have at my hands.

So what I usually do then, trying to do proper testing, is take control over the data. There are many ways of doing it, and mocking is of course one of the ways that we could consider. Changing into the mock server, well, starting the mock server first, I can run some of the test cases that I have prepared that set up stubs for different kinds of settings. I have here a stub with weather at Utsjoki, Utsjoki being a location in Finland from which I saved a response in summertime, at a time when there is no sunset and no sunrise, because that's how certain locations in northern Finland are in the middle of summer. So I set up that Utsjoki data so that I only go and change the specifics of what the weather is like. The times and all that stuff I leave as they are, but I can edit a thunderstorm in. And yes, definitely: now I can verify that if this were the response we were getting, I would see a new kind of icon, I would see the texts, and only the parts that I was expecting to change are changing. Obviously, the first time I did this, it wasn't quite so straightforward figuring out what data belonged together; it took me a little bit of investigation. I also know by now that while the thunderstorm icon works quite nicely, there are other icons available that the application hasn't yet implemented, so they won't work quite as perfectly. Similarly, I noticed with the application that when the sun doesn't go down at all in the summer, I would definitely expect the sunrise and sunset times to be the same. But it also drills me into realizing that there's a typo: it's not sunshine, it should be sunrise, and it's no longer easy to pass it by. And I am pretty sure that this application that I've picked up online is by someone in the IST time zone, meaning somewhere in the Asian time zones, rather than a neighbor of mine here in Finland. So it gives me ideas of things that have been relevant for the creator, but will also limit what the application is capable of doing right now.
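A minimal sketch of that data-control move, assuming the saved response lives in a local JSON file; the file names and the tiny Flask server here are illustrative stand-ins, not the demo app's actual mock server:

```python
import json
from flask import Flask, jsonify  # pip install flask

# Load the response captured earlier from the real API for Utsjoki in midsummer,
# when sunrise and sunset never happen.
with open("stubs/utsjoki_midnight_sun.json") as f:
    stub = json.load(f)

# Edit only the weather condition in; times, coordinates and the rest stay as saved,
# so only the parts we expect to change should change in the UI.
stub["weather"] = [{
    "id": 211,                # OpenWeatherMap condition code for thunderstorm
    "main": "Thunderstorm",
    "description": "thunderstorm",
    "icon": "11d",            # daytime thunderstorm icon
}]

app = Flask(__name__)

@app.route("/data/2.5/weather")
def weather():
    return jsonify(stub)  # a frontend pointed at this server now sees our thunderstorm

if __name__ == "__main__":
    app.run(port=5001)
```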

Investigating a little bit more, being curious about the temperatures and very quickly going into the code, I figure out that there's a very interesting thing going on here with the application turning temperatures into Celsius. They come in as Kelvin, and I very quickly realize that this is because, when setting up this stub, there is a tiny typo in that URL which ends up not giving me Celsius right from the API, ending up with this extra need of implementing the transformation separately (a sketch of the difference follows below). If I saw this with time handling, it would be a much bigger concern, because time handling and all the exceptions related to it are notoriously awful to implement on your own; so please use whatever you have, and, well, temperature is likely the same. With this example, what I want to show you is this: for the better teams, and for pretty much any team aspiring to be better, we start with our thing. In this case my thing, our thing, my team's thing, is the little application created in React, and it benefits from integrating with somebody else's publicly available API. It runs on top of some libraries, it runs on top of an operating system, and all of those are part of the things and integrations we need to be testing. It is also somehow running alongside other things that shouldn't have an impact on it but still might, like browser extensions, for example; there are many different things that might need to coexist. And there are some kinds of expectations from the users and the stakeholders.
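Circling back to the temperature detail above: OpenWeatherMap returns Kelvin by default, and a correct `units` parameter makes the hand-written conversion unnecessary. A hedged sketch of the difference, with placeholder key:

```python
import requests

API = "https://api.openweathermap.org/data/2.5/weather"
KEY = "YOUR_API_KEY"  # placeholder

# Without units, temperatures arrive in Kelvin and the app must convert itself:
kelvin = requests.get(API, params={"q": "Utsjoki,fi", "appid": KEY}).json()["main"]["temp"]
celsius = kelvin - 273.15  # the hand-rolled transformation the URL typo forced into the app

# With units=metric, the API hands back Celsius directly; no local conversion needed:
requests.get(API, params={"q": "Utsjoki,fi", "units": "metric", "appid": KEY})
```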

Q&A

Approach to Testing and Exploratory Testing

Short description:

In testing, we approach applications, systems, and integrated systems with curiosity, wanting to understand how things work. Automation is driven by someone attending to it, and when it fails, it invites us to explore further. Understanding our own architecture and responsibilities, and caring for our users, is crucial. Collaboration is key, and blaming individuals is not productive. Questions about the best tools for API testing depend on the team and context. Personally, I prefer code-oriented tools like Requests and Python. Exploratory testing is dear to me, and it's the only type of testing I perform.

Usually they're somehow reasonable, and they're also part of the things that we need to consider. Looking at all these things together, that's how we approach testing. If you learn from the testing that you thought it should work differently, you had an expectation that it should work differently, but the empirical evidence, trying it out, shows that the program disagrees with you, then the program is going to win, and there will have to be a different program before you can get the program to agree with you. So we go about making those changes based on the empirical evidence. We try to avoid letting speculation build up, we need to address how reliable and trustworthy our testing is, and all in all we approach the applications, the systems, the integrated systems, the users' needs and the users' expectations with curiosity and wanting to understand how things work.

Automation will play a part, but automation is driven by someone attending to that automation. So whenever automation fails, it's kind of like an invitation for us to go and explore further, and we are exploring to create the automation in the first place. When we have good questions about things that need to be addressed with the application, like, for example, does it run over the long term, automation might be very useful for us in understanding that. But it all starts from understanding our own architecture, our own responsibilities, and caring for our users, not just for the things we personally wrote but for everything we intentionally and unintentionally integrated with.

Have a great time testing, I hope you enjoy it. Welcome, and hi, thank you so much for your talk, it was a really, really great one. What do you think about the results we gathered here? I was seeing these numbers and laughing here all by myself, because I'm a strong believer that there's no individual who's ever responsible for these kinds of things. It's a collaboration, and there are these layers where we're trying to catch things, so the idea that almost 20 percent, 19 percent, would name one individual, and it's not always the most usual suspect, which is the testers, is an interesting thing to see in the polls. If someone does this in a year, it will be all in the majority category. It was a bit surprising to see how the numbers changed, on the side of who's going to be blamed, as the answers came in, and I was like, okay, okay. Of course, as you mentioned, most of the time I think people will turn back to the tester. But from 80 I think we have a good start, and we can aim for 100 percent. Maybe someone was just playing with the buttons. Looking at you, don't do that.

Great. I think we should get to some questions then. Henry is asking: what are, in your opinion, the best tools for API testing and API test automation? I work with a lot of different teams with a lot of different technologies. Right now, since I'm in a team with lots of Python, I'm of course personally convinced that Requests and Python is the best tool. Half a year ago I was working with a JavaScript and Java team, and I was absolutely convinced it was SuperTest, because that's what we were using back then. It's really about what the team is, but I'm much more inclined toward code-oriented tools, something that goes nicely with the code, instead of the kind of GUI-oriented tools. So I believe that's the general favorite that I go for. That's great. I really believe that, again, there is no single solution to rule everything else, and it changes with the context that you are working in at the time. When and how often do you perform exploratory testing? Post-release? I know this is a topic very dear to you. Yeah, it is very dear to me. So I don't perform anything other than exploratory testing, I answer.
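On the Requests-and-Python preference just mentioned, here is a minimal sketch of what such a code-oriented API test can look like as a pytest test; the key is a placeholder and the assertions are illustrative:

```python
import requests

API = "https://api.openweathermap.org/data/2.5/weather"
KEY = "YOUR_API_KEY"  # placeholder

def test_current_weather_for_helsinki():
    r = requests.get(API, params={"q": "Helsinki,fi", "units": "metric", "appid": KEY})
    assert r.status_code == 200
    body = r.json()
    assert body["name"] == "Helsinki"         # resolved to the right city
    assert "temp" in body["main"]             # temperature present, in Celsius here
    assert -60 <= body["main"]["temp"] <= 60  # loose sanity bounds rather than exact values
```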

Exploratory Testing and Identifying Dependencies

Short description:

Automation is my way of exploring things that a regular human couldn't. We split between feature testing and release testing. We invite everyone in the company to do exploratory testing. Ensemble testing in smaller groups helps people learn how to do proper exploratory testing. Identifying relevant things to co-exist with involves looking at dependencies and asking for architecture pictures or an end-to-end test environment.

But automation for me is part of how I document my exploratory testing. And automation is my way of being able to explore some of the things that a regular human, without the use of automation, couldn't. So for me, all of that is exploratory testing.

Maybe the more interesting thing is the split between feature testing and release testing, which are the two concepts that we use on the system level. About 99% of our work happens on the side of the features; we only do very basic checks anymore at release time. But this is something where we are still trying to move towards no manual work at release time, whereas the thinking work needs to happen while we build the features.

Indeed, indeed. Mark was adding that they do exploratory testing mostly close to a major release. They invite all people in the company, no matter if developers or salespeople, to join it. But the challenge is always how to teach them to be successful exploratory testers, of course, in almost no time. What's your approach, test tours, or what they've tried till now? I haven't brought people together to do exploratory sessions like this for a few years now. When I bring people together, I bring a smaller group, usually about ten at a time, and we do ensemble testing, meaning we share a single computer. I get to facilitate so that everyone gets to learn how to do that exploratory testing, and we use everyone's skills. Having done ensemble testing before being let to run and roam free, they're likely to know a little bit more about how to do proper exploratory testing. You could give people ideas for tours, or ask them to go and find whatever bugs they're worried about, or to investigate areas of their expertise; any of those are actually great starting points. But it depends so much on what information you consider valuable, and on giving that guidance to your team of explorers.

Oh, that's a good one. Indeed, we do something similar, where we create pros and cons of "this is / is not what we asked for" when we do some test parties too. And Martin said: you mentioned things your team co-exists with are also very important. How would you go about identifying the relevant things to co-exist with? By looking around; I have a mental checklist of looking at things that we depend on. Whatever is in the operating system, whatever is in the network, whatever is in the platform, whatever is in the neighboring teams. Being able to label all of those and figure out what they are, asking for architecture pictures and just pointing at a box on that architecture picture, probably helps us a lot. Another, maybe simpler, way is to ask for an end-to-end test environment and just make sure that everything goes through it; then you don't have to understand all the dependencies, but you get to see that they run together in an integrated environment.

Yeah, I think that's important. Thanks so much, Maaret, again, both for the talk and your answers to these questions. We are very thankful to have you around. It was great.
