GraphQL Security Testing Technical Workshop


We’ve all heard the buzz around pushing application security into the hands of developers, but if you’re like most companies, it has been hard to actually make this a reality. You aren’t alone. Putting the culture, processes, and tooling in place to make this happen is tough, especially for sophisticated applications like those backed by GraphQL.

In this hands-on technical session, StackHawk Senior DevOps Engineer Zachary Conger will walk through how to protect your GraphQL APIs from vulnerabilities using automated security testing. Get ready to roll up your sleeves for automated AppSec testing.

119 min
11 Oct, 2021



AI Generated Video Summary

GraphQL is a query language and API style characterized by a single endpoint URL, typically /graphql. DAST tools, such as OWASP ZAP, are considered the gold standard because they run against your running application, resulting in fewer false positives. To get started with the workshop, make sure you are authenticated and have cloned the repository. The Docker Compose step is important for running the test app, and participants are encouraged to ask for help if they encounter any errors. The StackHawk platform offers various integrations, allows for triaging issues, and provides detailed information about vulnerabilities found in the scan.

1. GraphQL API and DAST Scanning

Short description:

GraphQL is a query language and API characterized by a single endpoint URL, typically /graphql. It supports queries and mutations, both of which can modify data in the data store. GraphQL also has an introspection feature that allows for schema information retrieval. Our scanner utilizes this feature to probe the data store and endpoint.

The GraphQL API. Now, let's talk a little bit about GraphQL DAST scanning and what it is that we're doing here. GraphQL, as you well know I'm sure, is a query language and a type of API that's characterized by a single endpoint URL, typically /graphql by convention, although it doesn't have to be. A lot of things in GraphQL are by convention but don't have to be. It's actually a good idea not to use /graphql as your endpoint path, just because it's so discoverable and makes it a target, but lots of people do. There are two types of operations you typically run against a GraphQL data store: queries and mutations. Queries typically just return data and are often considered safe operations against a GraphQL data store. But that's only by convention, so you can have queries that actually modify data in your data store. Then there are mutations, which typically update data and return queried data, so they can do both. That too is by convention; a mutation can just return data and not update anything. Pick your poison. And finally, GraphQL is characterized by introspection: you can send introspection queries that let GraphQL return schema information about the data it gives you access to. There are automated tools that can generate a schema for you automatically, though people sometimes recommend against that because it can divulge too much information; it's often better to curate the schema information you want to give out. But anyway, it's a very powerful feature of GraphQL, and it's one our scanner actually uses to learn what's in the data store and how best to probe the data and that endpoint.
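As a concrete illustration of the introspection feature just described, here's a minimal introspection query you could send with curl. This is a sketch under assumptions: it presumes a GraphQL server listening on localhost:3000 at the conventional /graphql path (like the workshop's test app) with introspection enabled.

```shell
# Minimal introspection query: list the names of all types in the schema.
# Assumes a GraphQL server on localhost:3000 at the conventional /graphql path.
curl -s -X POST http://localhost:3000/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ __schema { types { name } } }"}'
```

Note that many production deployments disable introspection for exactly the discoverability reasons mentioned above.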

2. DAST and StackHawk

Short description:

DAST stands for Dynamic Application Security Testing, a solid testing method that involves running your application and probing it with requests to identify vulnerabilities. DAST tools, such as OWASP ZAP, are considered the gold standard because they run against your running application, resulting in fewer false positives. StackHawk's HawkScan, based on OWASP ZAP, ships as a Docker container with a simple YAML configuration file for easy automation. The platform allows you to collect and track scan data, triage issues, and integrate with tools like Slack, MS Teams, and JIRA. It also offers GraphQL enhancements. You'll fork and clone the vuln-graphql-api test application to get started.

So, what is DAST? DAST is the type of scanning technology we're going to be using today. It stands for Dynamic Application Security Testing. In this form of testing, you run your application and then run a scanner against it, probing it with requests designed to uncover vulnerabilities. The scanner analyzes the responses to those requests, and based on the evidence it gets back, determines whether something looks like a vulnerability. At the end of that, it provides a report of the security bugs it thinks it has found in your application, and this is a really solid method of testing.

There are other methods you can look at, like software composition analysis, which looks at your dependencies, or static analysis tools that go through your code and try to figure out if you've put vulnerabilities in there, if you've got patterns that indicate vulnerabilities. But DAST scans are kind of the gold standard because they run against your running application. They tend to have fewer false positives. The vulnerabilities they find are typically real issues in your application that will be exposed when you run it in production. There are many DAST tools out there. OWASP ZAP is one of them. It's an open source project, and it's the most popular open source DAST scanner out there. Our product is based on it. We've taken that application and built in a lot of features to make it faster and easier to work with, and we've also put a platform behind it so you can do more powerful things with it in terms of data analysis and working with a team on these bugs.

So, that brings us to StackHawk. Our company makes this scanner, which we call HawkScan. It's based on OWASP ZAP, as I mentioned. We package it as a Docker container, and it's got a simple YAML configuration file that makes it really approachable, really easy to configure, and not only to configure but actually to automate. You can keep that YAML configuration in your source code repository along with your code, so it's very easy to automate in CI/CD. All you need is the ability to kick off a docker run command in your CI/CD platform, and every one I've come across allows you to do that. But we've got deeper integrations as well, for instance GitHub Actions and CircleCI; there are a couple of platforms where we've done a deeper integration so it's even easier to run there. We also have an online platform. You run the scan locally, close to your applications, so it's nice and fast with low-latency network access, and when it's done, it sends data back to our platform. That platform allows you to collect all that information, track scans over time, see how your security profile is looking, and alert teams through Slack or MS Teams when scans start or complete or new vulnerabilities have been found. It also allows you to triage the issues it finds, and we'll see some of this as we go through. So if the scanner has found a bug, you can actually work with it. You can say, hey, that's a false positive, I reject that, or, yeah, I know that's an issue, but it's not that important, so I'll accept that risk. Or you can assign it to another developer, or file it as a bug in JIRA with all of the information the scanner has gathered about that issue.
So you can track it to completion, and the scanner won't bother you about that particular alert again because it knows you're aware of it and dealing with it. We've got a Slack integration and an MS Teams integration, and that's how we notify users. There's the JIRA integration I just mentioned. And if you have other tools you want to integrate our scanner with, we can send webhooks out to arbitrary webhook services so you can take that data and do what you want with it. We've got a couple of other integrations too. Finally, there are a lot of GraphQL enhancements. GraphQL scanning can be difficult even though the introspection endpoint makes it easy for a scanner to learn more about the schema. Scanners often struggle with how deep or how shallow to go, so you either don't get a deep enough scan, or you get too much and the scan takes forever because it's looping on endpoints that are all virtually the same.
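Since all you need in CI is the ability to run a Docker command, a HawkScan invocation looks roughly like the sketch below. Treat this as an assumption-laden example rather than exact syntax: check StackHawk's current documentation for the precise flags. Here `stackhawk.yml` is the YAML configuration kept in the repo, and `HAWK_API_KEY` is a CI secret name you would define yourself.

```shell
# Hedged sketch: run HawkScan from a checkout containing stackhawk.yml.
# HAWK_API_KEY is an assumed secret name; verify flags against the docs.
docker run --rm \
  -v "$(pwd)":/hawk \
  -e API_KEY="$HAWK_API_KEY" \
  stackhawk/hawkscan stackhawk.yml
```

Because this is a single command, it drops into any CI system that can run Docker, which is the point being made above.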

All right. So let's get started with this. The first thing we're going to do is fork this test application. It's in the kaakaww organization, and it's called vuln-graphql-api. I'd like you to fork this application; this is in that guidebook we have, and we're also going to send a link out on Discord. Thanks, Rebecca, for the link to this repository. So we're just going to fork that, clone it down to your workstation, open it up in your IDE, and then there's a script in there to remove all the existing automation so you can work through that yourself. So I'm going to follow along with you. I'm going to go grab a fork of this. Oh, jeez. Sorry. All good. That gives me time to mention: we like to make sure everybody's able to follow along in these workshops, so anytime I post a link, I will add a thumbs-up emoji to it. When you've been able to complete the step that link relates to, just click that thumbs-up emoji so we know when we're ready to move on as a group and nobody gets left behind. Cool. Thank you. Exactly. All right. So fork this. I'll fork it myself, into my personal organization. We're going to be modifying this repository and pushing changes, so your personal organization is probably best. All right. I've got this forked, and now I'm just going to clone it down. I use SSH; you can use SSH or HTTPS, obviously. Pick your poison.

3. Getting Started and Repository Setup

Short description:

To get started, make sure you are authenticated and have cloned the repository. If you're unsure how to fork a repo, Zach will show you the process. Remember, forking gives you your own copy to work on, from which you can submit pull requests back to the original source. If you encounter any issues, reach out for assistance. Next, run the workshop prep script found in the 'scripts' directory. This will remove the existing automation and ensure a clean start. Refer to the Workshop GraphQL Testing guidebook for detailed instructions. Once you have completed these steps, we can proceed with the workshop.

Just know that you're going to be pushing to it as well, so you should be authenticated. Let me change to sharing my whole screen here. Is that working? Can you see my screen? Yep. Thank you. You sure can. Cool.

Okay. So, you can use your favorite IDE. I'm going to use VS Code. Actually, I've got this already. Of course. Okay. I've cloned that down. Now we'll open it up. Cool. Pop open a terminal window and run the script you should find in the scripts directory, the workshop prep script. I think people are still getting the repo cloned down to their workstations, so give it just a second before we get going. We'll wait for those thumbs. I know in previous workshops people haven't been sure how to fork repos. So Zach, will you just show how to fork that once more in case someone hasn't used GitHub? Yeah. So I'll go back out to the source repository, kaakaww/vuln-graphql-api, and just come over to this fork button, hit fork, and it asks which organization you want to bring it into. You may only have one, your personal organization. That's what I did; I forked it over to zconger. You click on that, and after a short time it shows up in your repositories. So now I'm in my zconger organization, and I've got the same application, forked. I've got a copy of it, but it remembers that I forked it from kaakaww. This is a common way people work on open source projects, by the way: they fork a copy of the repo, work on their stuff, and then, using GitHub, submit PRs from their fork back to the source they forked from. OK, and if you're having any problems with that or with getting it running on your local workstation, feel free to post that question in the Discord for everyone, or DM Nick, and we'll be happy to get you going. You have to do these two steps to do pretty much the rest of the workshop, so I want to make sure everybody has time and is squared away before we move on. But, yes, Zach, I know you had one more step to get this repo ready. It was that script to remove the automation. That's right. And I'll show you that again in just a sec.
I wanted to pull up this Workshop GraphQL Testing guidebook. So if you haven't gotten this one, go ahead and grab it; this is what it's made for. If you're having any trouble with forking right now, the steps to go through are in here. Step one, fork the test application. This is basically walking through the same stuff I'm walking through right now. It's got a link out to the repo you can fork from. You can git clone that down, enter your project directory, and open it up in your favorite IDE. Could be Atom or VS Code or IntelliJ IDEA. And then we're going to run this script, scripts/workshop-prep.sh. And if anybody's having trouble with this and would like a hand, can we get some thumbs down out there? I'll add that thumbs-down emoji now. All right, I'm gonna post that next step, Zach. I'm gonna cd into that vuln-graphql-api directory, and we're gonna run this script to take away that automation, just so we can start clean. Somebody beat me to the thumbs up, very impressive. Nice. Yeah, wonderful. Okay, awesome. Oh, okay, sorry, I was missing the chats in the Zoom also. Thank you everyone for jumping in there. I'm gonna post this workshop guide in the Zoom chat too, just so people can follow along. Oh, hang on. That was not to everybody. Wonderful, okay. There we go. So feel free to follow along there as well. Cool. Okay. So somebody said they're having trouble finding the scripts directory. When you download the repo, you of course need to cd into that directory, which should be called vuln-graphql-api, and there's a scripts directory under that. All right. All right, we'll give everybody just another moment or two.
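The fork, clone, and prep sequence just described boils down to a few commands. A sketch, assuming an SSH clone of your fork (substitute your own GitHub user; the prep script's exact filename is my rendering of "workshop prep dot sh" as spoken, so verify it in your checkout):

```shell
# Clone your fork (SSH form; HTTPS works too), then strip the existing automation.
git clone git@github.com:<your-user>/vuln-graphql-api.git
cd vuln-graphql-api
./scripts/workshop-prep.sh   # filename as spoken in the workshop; check your checkout
```

Running the prep script from inside the repo root matters here, which is why the "can't find the scripts directory" question above came down to a missing cd.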

4. Docker Compose and Test App Setup

Short description:

We received questions about using a different email for the swag draw and an issue with the Docker Compose step. The swag draw is based on completed scans, and winners will be contacted through Discord or email. For the Docker Compose step, if you're getting an unknown flag error, try using 'docker-compose' instead of 'docker compose', or upgrade your Docker Desktop. Let's now focus on the Docker Compose step to run the test app. The Dockerfile builds the Docker container with the application and sets the server port to 3000. It installs the necessary packages and sets up the database schema. We'll run the Docker entrypoint script to start the Node application with the /graphql route.

I want to make sure everybody's all caught up, and then we'll keep going. Like I said, these are the foundational pieces, so I want to make sure everybody has a chance to get these right before we move on. We're doing great on time. Yes, having the extra hour is a game changer. So, it's nice. I wish I had prepared some jokes for the wait time, but maybe next workshop we'll be able to do that. We are technical people; we struggle with humor.

We did get a question, which is: if you're using a different email than the one you used when signing up for the workshop, do you still get entered in the swag draw? Yes, the swag draw happens based on completed scans. We will reach out to the winners in Discord, in the GitNation Discord; we'll match your name. And if we can't find you in Discord, we'll email the account that completed the scans. So no stress, we will find you if you complete the scan and win the swag.

We did get a question on the Docker Compose step. People are moving onto that Docker Compose step, and maybe, Zach, you can weigh in on that one. They're getting an unknown flag. Let's take a look here. docker compose up --build --detach, and they're getting an unknown flag: --build. It may be that you don't have the very latest version of Docker, or you're on Linux and you've got an older version there. So in the newer Docker Desktop... you're on Windows. Okay. So try this: if you have docker-compose, try the command docker-compose, so just a dash instead of a space between docker and compose. And if that doesn't work, try upgrading your Docker Desktop. Okay. All right. Oh, sorry, where's the main channel on Discord? I will post that here; someone was just asking. Okay. Nice, docker-compose works. All right. So I just posted the link to join the Discord. Just a reminder, it's in that RADV3 workshops section, three or four major sections down, and we're in the October 11th GQL security testing channel. Okay, Zach, I think we should start playing with Docker Compose. All right. Next step: the docker compose step to run the test app. Going to my slides that I spent so much time putting together. They look great. Thanks. Okay. So in the IDE or on the command line, however you do it, let's take a look at this repository. In here, you're going to find a number of things. The first thing to notice is this Dockerfile. The Dockerfile builds the Docker container that contains this application. What it's going to do is ask which server port to use; we're going to use port 3000. It's going to use the Node.js image from Docker Hub, and it's going to install a couple of packages it needs. So let's see, apk update and apk upgrade, so we're just getting the latest updates. Actually, that's not even really necessary.
Anyway, we're going to install SQLite3, which is the database behind this application, and tsc, and then we're going to... Once we have... Could you make your font a little bigger? Sorry to interrupt, my old eyes. Thank you. Yeah, much better, thanks. Okay, so then we're going to run the database migration. That sets up the schema of the database behind this application, and we're going to seed it with a little bit of data. We prep the GraphQL directory, which is where the app has been copied into, and change its owner to the app user, because that's secure; it's basically best practice to run as a non-root user in a container. And then we're going to run this Docker entrypoint script. It's just a bash script that fires up this Node application, and the Node application has this single route in it, /graphql. So that's the Dockerfile. Normally you'd do a docker build on this thing, but we've got a Docker Compose file that will take care of everything for us. In this Docker Compose file, we're going to name that image, point to the Dockerfile in this directory, and pass in the server port argument. So we have to set the server port to 3000 locally to make this work.

5. Building Image and Starting Application

Short description:

To build the image, run the entrypoint, and start the application, use the export SERVER_PORT command followed by docker-compose up --build --detach. If you encounter errors about no port being specified or an unknown flag, make sure you include the export line first, and use the dash between docker and compose. For Windows users, try using set instead of export to set the server port to 3000. If you encounter other errors, please reach out for assistance. The image build can take a while, especially if you have limited memory allocated to Docker. If you have questions or encounter errors, don't hesitate to ask for help.

We're going to build that image and then fire it up and run the entrypoint, which starts the application within it. It's pretty simple, really. So I'll do that now; let me make sure I'm not missing anything. Yeah, I'll do that. So I'm going to export SERVER_PORT=3000 and then docker compose (for you, if you've got an older Docker, it might be docker-compose) up --build --detach. The --build just says, whether or not you've built this before, build it again. --detach means once the composition is up and running, detach and give me my console back so I can do other stuff.

So we're building; it ran the migrations and seeded the database. I had a bunch of these Docker image layers already built, so it went really fast for me. It's gonna take a while for you. In fact, I might actually bring mine down just to show you what that's gonna look like, so you've got a comparison. So docker-compose down. Let's see. Gonna remove my image for this, which was stackhawk-2015-office. and then, boom, GraphQL. And again, I exported SERVER_PORT=3000 because we want this thing to listen on port 3000. So export SERVER_PORT=3000 and then docker compose up --build --detach to make it run in the background. Yes. And if you're getting errors that there's no port specified, just make sure you include that export line first. And if you're getting an error about an unknown flag or something like that, make sure you include the dash between docker and compose. It just means you may be running a slightly older version of Docker where the command was docker-compose. So if you're running into either of those two errors, give those a try. If you're running into other errors, let us know.
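Putting the steps above together, the whole bring-up is two commands. A sketch assuming a POSIX shell and that the Compose file reads SERVER_PORT as a build argument, as described in the walkthrough:

```shell
# Compose reads SERVER_PORT as a build argument, so export it first.
export SERVER_PORT=3000

# Newer Docker (Compose v2):
docker compose up --build --detach
# Older installs use the standalone binary instead:
#   docker-compose up --build --detach

# Tear down again when you're done:
#   docker compose down
```

Forgetting the export is what produces the "no port specified" error discussed above, and the dash variant covers the "unknown flag" error on older Docker versions.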

It looks like we have a question. Someone's running into the no-port-specified error, but export wasn't working because they're on Windows, so they used set instead of export, and it seems like that fixed it. So if you're getting that error and you're on Windows, try using set SERVER_PORT=3000 instead of export. Thank you for that tip, Bilal. I wanna copy that into the Discord channel too, to make sure everybody sees it. If you're on Windows, use the set command. And Bilal, do you have an exact command line for that? I've just been testing this on Mac. Oh, okay. Or you may need to use a different way to expose the server port on Windows. I think set was creating an error on Windows at first, so, oh. Okay. Okay, set SERVER_PORT=3000. Okay. If you're getting other errors, feel free to drop those in the Discord. You're probably not the only person seeing that error, and we want to help get this app up and running so we can run that scanner locally, have super fast scan times, and really see the magic of what happens when you can run security testing right next to the app you're building. So feel free to drop any problems into the Discord or the Zoom chat, or reach out to Nick directly, and we will get everybody squared away. We'll give everyone just a couple more minutes before we move on to the next step; we just want to see a few more likes. A few more quick questions while we wait. This image can take a little while to build because, I can't remember which component, I think it's the SQLite component, it requires some C compilation to happen. And that just takes a while, especially if you're on a laptop, or maybe you haven't allocated much memory to Docker. Yeah. I'm getting questions in the Zoom chat, but... Okay, still getting the error. Step 4/18, expose server port. Service base built, no port specified. I wonder... I just noticed this. I wonder if we can actually set it right there.
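To summarize the platform differences being worked out here, setting the build argument looks different per shell. Only the POSIX and cmd.exe forms came up in the workshop; the PowerShell form is my assumption and was not verified in the session:

```shell
# POSIX shells (macOS, Linux, Git Bash):
export SERVER_PORT=3000

# Windows cmd.exe (as suggested in the workshop chat):
#   set SERVER_PORT=3000
# PowerShell (an assumption, not verified in the session):
#   $env:SERVER_PORT = "3000"

echo "$SERVER_PORT"
```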
Some of you have hit this args issue for the server port. If you take a look at your docker-compose.yml file, there's an args statement in there, args: SERVER_PORT. I think you might have some luck setting it right there. And Zach, someone else... oh, you ran that. Oh yes, okay. Just getting some errors on step 11: cast between incompatible types. Oh yes, there are a lot of warnings.

6. Building and Running the App

Short description:

If the image builds correctly and returns you to the command line without errors, you should be fine. We would love to see more people get the app up and running before we move on. Some participants are experiencing long build times, but we're working on improving that. The app has a graphical query interface and can run simple queries. It's designed to build and scan quickly. If you encounter errors, we're here to help troubleshoot. Some participants are experiencing Docker Compose issues on Windows. We suggest trying the SERVER_PORT=3000 build argument in the Docker Compose file. We're actively investigating and providing support for any errors encountered.

So these should just be warnings. Keep an eye on that, but if the image builds correctly and returns you to the command line without saying, hey, this didn't work, you should be fine.

Okay, what else are people running into? We would love to see a few more people get this stood up before we move on. If you have gotten this app stood up locally, feel free to react to that step in Discord with the thumbs up so we know people were able to follow along. We can keep going through the workshop once you have this app up and running. I do feel like these first two steps are some of the hardest ones we cover in the workshop, and once everything's stood up, it's really simple to get going from there. So there's some light at the end of the tunnel once we get through this.

Oh, okay. For some people it's just taking forever to build. It really does. I really should have prepared some jokes or some interesting stories to tell. But awesome. We'll give it just a few more moments to build then. Yeah. This is something I think we can improve the next time we do the workshop: see if we can get some of that stuff cached. Great. Kanke is up and running. Awesome. Definitely takes a long time to build. Maybe Zach, you can show us what the app we just built locally looks like when it's running. Well, sure. I can do that. So mine is up and running. It should be on port 3000, so you should just be able to browse to localhost port 3000. And we get a little graphical query interface. Excellent. Got some more successes. Let's see something here. I've got a query stashed over here. When I go to it, it takes me right to the graphical interface, which, by the way, is not a great idea security-wise either, to have a nice graphical interface to your GraphQL endpoint. You probably shouldn't bundle that with your code and your app, but we have, because this is a vulnerable app by design. Hang on a sec, let me find that query. Okay. I can run this query, hit run query, and I get a result, which is some UUIDs. And this app really doesn't do much besides this. Super simple. Part of the reason for that is that one of the intentions was for it to build quickly, and that's not happening, but we're doing our best; the other is so it'll scan quickly, and that it really does do. It scans pretty quickly. Okay. And Jason? Yes. I'm wondering if that's a docker-compose problem given that it's running on Linux, but... Oh, let me paste in the mutation. Okay. So there's the mutation command and the query, and you should just get some data back. Zach, maybe you can look at Andreas's error. I'm not sure if it's a cd problem? Yeah, that's interesting. No such file or directory? I don't know. GraphQL exited.
Andreas, can you paste some of the commands you ran before you got there? Is that during the Docker Compose build? Let me see. Oh, you took --detach away. Andreas, are you able to reach the server? Is it running on localhost port 3000? Okay. Let's see. Are you on Windows or Mac or Linux? I have not tested this one on Windows. Hey, Bilal, did you get this working on Windows? I wonder, Andreas, why don't you try adding this SERVER_PORT=3000 to the Docker Compose file? No, Bilal has not been able to get past step five. Step five, copy. Ouch. Well, I wish I had a Windows machine to test this out with, okay. Let's see. Mm-hmm, so. Let me see what your error is, see if I can Google around for this. Yeah, exec user, oh. Oh, I think, so I have a feeling this is a Windows permissions issue, exec user process, because I think what it's trying to do there is create the app user, and it's failing to do so. Mm. Okay. It looks like Nick is on it, found some Stack Overflow.

7. Running a Hawk Scan

Short description:

We've run Docker Compose, checked the system on localhost port 3000, and run a sample query. Now, let's move on to running a HawkScan. We'll go to the StackHawk platform and sign up for a new account. After setting up the initial API key, we'll create the first application, called vuln-graphql-api, in the development environment. We'll provide the host as HTTP, not HTTPS, with localhost port 3000. Sign up using your Google ID or GitHub, and choose the free developer account. We'll scan our own application, not Google Firing Range, which just sets up some sample data.

Thank you, Nick. Yep, I found the same thing. Oh, that's right, Nick, now this is all coming back to me; we've seen this one before. Yeah, I can't remember how that person fixed it. I know, Nick, you worked with the last person who had this, Raymond, as well. So there's an answer in that Stack Overflow thread; I'll put it up for other folks to look at. Oh. So I think this is in your Docker Compose file and your Dockerfile, probably. Okay. Thanks. Believe it or not, we're still doing great on time, everybody.

Okay. Well, I think if you're still running into problems, ping Nick. He's now been promoted to a panelist, so you can chat with him in the Zoom chat, or he's right here answering questions in the Discord. Nick is a whiz; he's helped us troubleshoot many issues on this before. So if you're running into problems, ping him. But I think, Zach, we'll give it another 30 seconds and then we'll keep going through the workshop. It looks like, for the people whose app was taking a long time to build, it has built and they're ready to rock. Yep. And if there are people who are still working on stuff as we move ahead, refer back to the guidebook. We should still have time at the end too, and we can hang out until the end of the two hours and help people through. Okay. Awesome.

Okay, so we've run Docker Compose, we've brought the system up, we've checked it out on localhost port 3000 and run a sample query. Next, we want to run a HawkScan. So, this is the part where we're gonna go to the StackHawk platform. I'm presuming we're all gonna be new users, and as a new user, you'll get walked through the getting-started flow. It's gonna ask you questions; it's gonna say, hey, here's your... Oh, nice, got another one. Andreas, congratulations. Great job. So we'll log in to the StackHawk platform. It's gonna set up our initial API key. The API key is so that when you run the scanner, it can send results back to the platform, authenticated as you, so it knows to put the scan results in your buckets. We'll set up the first application, which we're gonna call vuln-graphql-api. We're gonna set it up in its first environment, and we'll call that development; this is an arbitrary value. We're all gonna get a free developer account at StackHawk. It's free for life, free for developers, and we're also free for open source projects. So anyway, we're gonna walk through setting up the first API key and your first application, which will be given a big, long UUID that's mapped to your application and environment. And you can have as many environments as you like, because for each application we expect you to scan it in different environments, maybe locally on your workstation, in a CI/CD pipeline, or in a QA environment. And then finally we'll tell it what host we're gonna be scanning. In our case, it's HTTP, it should not be HTTPS, localhost port 3000. Okay, so I'm gonna pop out to the StackHawk app right now, and we're gonna sign up for a new account. I'm gonna use my Google ID because it's really fast to sign up that way. GitHub is also very fast. You can do email if you want; with that, you'll set up your email account and set your own password.
It's gonna want you to verify your email address, so it just takes a little bit longer, but you're welcome to do that too if you want. So sign up for a new account. I'm gonna use Google, and I'm gonna use one of my many shady personas, and fire up StackHawk. So this is the StackHawk welcome screen. And on the StackHawk welcome screen, it's gonna offer to let you rename your organization. I'm gonna say it's my workshop org so I don't get confused later. And it gives you a little information: this is authorized through Google, this is the account we're using, and so forth. And then I'm gonna continue, and let's pick the developer free account. This is free for life. And if you did wanna go pro later, you can upgrade it to a pro account. That's no problem. All right, hit continue. And we wanna scan my application, not Google Firing Range. That just sets up some sample data.

8. Setting Up API Key and App Details

Short description:

Start by getting an API key and saving it in a .hawk directory. If you encounter any issues, reach out for assistance. Next, set up the app details, including the application name and environment. Set the host to localhost port 3000 using the correct formatting. Feel free to ask for help if you encounter any errors.

So you can start looking at the platform without having to scan your own app. Continue. And welcome to StackHawk it says, and it's gonna walk us through the getting started flow. So we'll walk through these four steps again. We're gonna get an API key, gonna set up our app name, environment and stuff. And then it's gonna give us a YAML file that we can use as our initial StackHawk configuration.

So for this API key, I recommend that you save a copy of it somewhere. Maybe you can save it in LastPass or 1Password, or really just copy it to a text file for now. Or, better yet, please do this step: you can just copy and paste this into a terminal window. There's also a PowerShell version if you're running on Windows. And what it'll do is it's just gonna create a .hawk directory in your home directory and a file under there called hawk.rc. And it's gonna save your key to that file so you've got a local copy that you can always come back to. And we will need to come back to this later. So I'm gonna copy that. I'm going to paste it in. And I already have a .hawk directory, so I get this error, but even so it still stashed my key in that file.
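For reference, the copy/paste snippet does roughly the following. This is a sketch, not StackHawk's exact script, and the key below is obviously a placeholder for the real value the wizard gives you:

```shell
# Roughly what the copy/paste snippet does: stash the key in ~/.hawk/hawk.rc.
# (The real snippet may guard or append differently; this sketch overwrites.)
mkdir -p ~/.hawk
echo 'export HAWK_API_KEY=hawk.xxxxxxxx.xxxxxxxx' > ~/.hawk/hawk.rc

# Later, before any scan, pull the key into your current shell:
. ~/.hawk/hawk.rc
echo "$HAWK_API_KEY"
```

Sourcing the rc file is the step you repeat in every new terminal, which is why the guidebook keeps coming back to it.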

And Zach, if someone, I'm guessing, had already signed up for a StackHawk account so they're not getting that initial welcome wizard, could you navigate to show them where they might be able to get an API key? Yes. Or I guess that's gonna take you out of the welcome wizard, huh? Well, I think I can do this in a private browser. There we go. Yes. Let me try this. Or actually, let me just go to my existing account. That's okay. It looks like maybe they figured it out based on the directions I put in there, but it's just over on that left-hand side. If you see where Zach's name is kind of hidden behind that opacity stuff right now, you'll click there and go to settings, and then you can generate more API keys. Yeah. Okay. Perfect. Thank you, Zach. Yeah, and if you already have an existing account, there's this add-an-app button over here too. Yeah.

So we've got our key, we stashed it somewhere, and we stashed it the recommended way under the .hawk directory. And next we're gonna set up our app details. So we're gonna... Zach, I'm gonna just do a quick thumbs up to make sure people have stashed their API key. I know when I was first getting going with StackHawk, I had some trouble with that hawk.rc file. So give us a thumbs up if you were able to stash that API key. Oh, if you're running into problems with that, I had lots of problems with that step too. So feel free to let us know if you're running into any errors. Oh, can you post that shell script, Zach? Oh, yes. So here is, I'm going to post the Bash version first and then the PowerShell version second. And of course, just make sure you update the API key to be your API key instead of Zach's API key. Oh yeah. You all have my API key now. That's not good security. That's not good security, but I have a feeling this app is going to be taken down shortly after this. That's right. Okay. Great, it looks like people have had pretty good success with that. So perfect. Now we can go on to application details. Cool, okay, so we'll call the application vuln-graphql-api, which is the same as the repo name. That's kind of just the convention I use; I think most people do that. Next, we'll select the environment. And again, this is a concept: you can have as many environments as you want, and the platform is going to sort of bucket data by environment for an app. And so you'll get to see over time what your scan results look like in development, pre-prod, and production, if you have those environments. And of course, you can call them whatever you want. You just change the name under the env setting. I'll show you later; it's very easy. Then we're going to set our host to localhost port 3000. So the formatting here is kind of important. You want to make sure it's HTTP, not HTTPS: http://localhost:3000.
No trailing slash, it'll gray out and stop you from doing that. Just this.

9. Setting Up StackHawk YAML and Running the Scan

Short description:

To set up the StackHawk YAML file, download it and open it in a text editor. Copy the necessary information, including the app, application ID, env, and host, from the get-started wizard on the platform. Paste this information into a new file named stackhawk.yaml in your repository. This file name is important, as it is the default file that the scanner will look for. The stackhawk.yaml file is a simple configuration file that allows for an effective scan of your application. It contains the application ID, environment, and host. You can also use environment variables for automation purposes. Once the file is set up, use the provided Docker command to run the HawkScan container and perform the scan based on the configuration file.

Just this. And next. Okay, so we've got our application ID, and this is gonna be present in your StackHawk YAML file. Otherwise I'd say you save it, but you can always find this application ID later. It's available on the platform, it's not a secret.

So we're gonna download this file. And, oh dear, this is gonna open with Xcode. I hate it. That always happens. I'm sorry. Actually, I'm gonna circumvent that and open it in Atom.

Okay. So download the stackhawk.yaml file and open it in your favorite text editor, which I'm pretty certain is not Xcode. And all you really need are these first four lines: the app, application ID, env, and host. Those are the pieces of information that we just set up in the platform in the get-started wizard. So I'm actually just going to copy this and put it into a fresh new file at the base of my repository called stackhawk.yaml. And the name is important, because this is the file that the scanner is gonna look for by default. So, this is one of the magical things about StackHawk: the stackhawk.yaml is a pretty easy DAST configuration file.

Now this isn't gonna give you a very deep scan, but it does give you an effective scan, and it is gonna find some information that's usable. So there's one of these per application: each configuration file, or set of files, strictly pertains to a single application. We've got the application ID, the environment we're running it in, and the host that we're gonna scan. And you can use environment variables in here too, so that you can use automation to send in different information depending on the environment that it's going into. But for now we'll just keep it simple. So there we go. And then finally we can just copy and paste this.
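As a concrete sketch, those four lines look like this. The applicationId is a placeholder; yours is the UUID the wizard generated:

```yaml
# stackhawk.yaml: the minimal configuration from the get-started wizard
app:
  applicationId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # placeholder UUID
  env: Development
  host: http://localhost:3000   # HTTP, not HTTPS; no trailing slash
```

This file lives at the base of the repository, since the scanner looks for stackhawk.yaml in its working directory by default.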

What this is gonna do (and there's a PowerShell version as well) is source that file that we created, so it pulls in that environment variable that contains your API key for StackHawk. Then we're gonna do a Docker run to fire up the HawkScan container, which will run the scan based on the configuration file that we just made.

So, just to walk through this Docker command, because I know it's a little complicated: we're gonna say docker run, run a container. We're gonna send in the environment variable API_KEY. Within the container, HawkScan is expecting this API_KEY environment variable, so we're gonna set it to whatever we have in HAWK_API_KEY, which is that variable you exported that contains your StackHawk API key. When the scan is done, we will remove the container, so your docker ps output doesn't get littered with old HawkScan containers. We are gonna volume mount the current working directory to /hawk within the Docker container that's running HawkScan, and that is where HawkScan is gonna expect to find your stackhawk.yaml. So you should be in your repo directory when you do this. Basically, it's gonna mount your repo directory to /hawk so it can find your configuration file. -it is for interactive and TTY; that's just so it can print results out to the console as it runs. stackhawk/hawkscan:latest grabs the latest image. Okay, let's do it. So I'll hit go. Sourced my file, docker run with the API key, mounted the current directory. It's printing out a little bit of summary information for us. It's got the latest version of HawkScan. You notice that it doesn't think this is a GraphQL application, and that's because we haven't told it yet that it's a GraphQL application. MSNPS, yes, that's right. It should be HTTP, not HTTPS. Thank you for that. I'm going to fix that in the guide. Okay, that is fixed in the message. Okay, so we have a lot going on, so if you were able to get that YAML in place, give us a thumbs up, and make sure you have that host set to HTTP, not HTTPS. Thank you MSNPS for catching that. Hopefully that will fix your 28-minute scan time, which is obviously not what we're hoping for. So we appreciate that. Okay, so now I'm going to pop back to my slides.
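Pieced together, the command Zach walks through looks roughly like this. It is printed rather than executed here, since it needs Docker and a real key; the placeholder key and flag ordering are illustrative:

```shell
# Reconstruction of the scan command, flag by flag:
#   -e API_KEY=...     the env var HawkScan expects inside the container
#   --rm               remove the container when the scan finishes
#   -v "$(pwd)":/hawk  mount the repo dir so HawkScan finds stackhawk.yaml
#   -it                interactive TTY, so results stream to the console
# HAWK_API_KEY would normally come from `source ~/.hawk/hawk.rc`;
# a placeholder is used so the command can be printed without a real key.
HAWK_API_KEY="${HAWK_API_KEY:-hawk.xxxxxxxx.xxxxxxxx}"
SCAN_CMD="docker run -e API_KEY=$HAWK_API_KEY --rm -v $(pwd):/hawk -it stackhawk/hawkscan:latest"
echo "$SCAN_CMD"
```

Run it from your repo directory so the volume mount picks up the configuration file.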
It looks like, Zach, somebody doesn't have the API key stashed. Yes. Source the .hawk/hawk.rc file. Yes. And Alexi, are you on? May I have the PowerShell call? Andreas, here you are. And Alexi, I'll be right back to you in just a sec. Alexi, you can see how we did that: if you scroll up to where Zach pasted his API keys, those are the commands to create that hawk.rc file. Right up here.

10. Generating API Key and Setting Up Application

Short description:

To generate an API key, go to your name, Settings, and then API key. Save the key to your hawk.rc file. Set up the application with an arbitrary name and choose development for the environment. Use http://localhost:3000 as the host. Download the stackhawk.yaml file and put it in your repository directory. Source the environment variable and start the scan using the provided commands.

Now you can also, if you still have this window open, jump back to your API key and get these commands back. Mhm. I think in Settings you can also see your API key once more after it's created. That's right. So if you don't have that API key handy, go back to your name, Settings, and then API key, and you'll see it there. Okay. So if you're just trying to concentrate on getting these steps correct, you can walk through this create-new-app process. I'm going to walk through it one more time. Thank you. Angelos, you can't see the API key later. Yeah, sometimes it works, sometimes it doesn't. But you can create a brand new API key if you want, and then you can just go back and edit that hawk.rc or hawk.ps1 file. Okay, I'm going to walk through this, so turn the volume down if I'm just distracting you for a moment. But I'm just going to walk through this one more time. So when we first jump into this getting-started flow, the first thing it asks is to generate an API key. And this is the API key that we're going to send in to the HawkScan container so that it can use it to authenticate to the platform. I got that. So we're going to take that key, and this set of commands here saves that key to your hawk.rc file. So it's actually in .hawk, the directory, and under that there's a file, hawk.rc. It's just saving this line: export HAWK_API_KEY=<your API key>. And we're going to use that API key environment variable later just to pass it into HawkScan, so HawkScan has it. Next, we set up the application. You can give it an arbitrary name; it doesn't matter. What HawkScan is really interested in is the application ID, which is a big UUID that's generated and associated with this name. And then the environment: this is arbitrary, but just for consistency among us, choose development. In your own environment, of course, you can use whatever name you like. You just edit it in the YAML configuration file.
For the host, we're going to be scanning http://localhost:3000. In my documentation I've got HTTPS; that is a bug, a typo. We really want to be scanning HTTP. So I'll fix that in the preso for next time. And then finally, it gives us that application ID, which is associated with the name vuln-graphql-api. You can download the stackhawk.yaml file and put it in the base of your repository directory. And then you can use these commands to run the scans: source the hawk.rc file to export that HAWK_API_KEY environment variable into your shell, and then start the scan. Use docker run to send the API key into the HawkScan container, have it kick off a scan, and be able to communicate with the platform.

11. Scan Errors and Results

Short description:

Some participants encountered errors, such as "failed to connect to scanner after 300 seconds" and "can't find localhost". These errors can often be resolved by adding the `--network host` flag to the default command. Additionally, participants running on a MacBook Pro with an M1 processor may encounter compatibility issues with the Intel-based HawkScan container; it is recommended to use an Intel-based machine or a Linux box for the workshop. After completing a scan, participants can view the summary information and details of the scan in the scans section. The summary includes the number of paths, issues, and their severity levels. Participants can also triage issues and set up failure thresholds to block builds based on severity levels. Detailed information about each issue, including the request, response, and highlighted problems, is available for validation.

Hey, Zach. Someone has gotten a "failed to connect to scanner after 300 seconds" error. I think we've run into that before. Am I recalling that? Yeah, so let's try this. But you've got it. Go ahead. So the default command that we use may not work in some environments. If you add this --network host flag right before the image name, it tells the container to just use your local host's network stack instead of trying to run an isolated Docker network on top of it. And a lot of times that will make those can't-find-localhost issues go away.

All right. What other errors are people running into? Looks like we've got the 28-minute scan resolved, so that's good. Little hard to automate scanning when it takes that long. All right, why don't we give everybody a few more minutes? Yeah, oh, interesting. Okay. So, the requested image's platform. Fascinating. Oh, are you running on a brand new MacBook Pro with an M1 processor? You lucky, lucky person. That's really awesome. Wish you could've brought one for all of us, though. Those are great laptops, but they use an ARM processor, and the HawkScan container is based on an Intel image. So you'd have to run on an Intel-based machine, unfortunately. But if you have a Linux box handy, you could walk through all of this stuff on a Linux machine and it should work, unless it's Linux running on a Raspberry Pi or something, which is also an ARM processor. Okay. All right. I'm going to drop in just a quick... Is it Command C that kills the scan, Zach, or Command K? Oh, Control C? Control C, yes. If you're on a Mac, Matt Simpson, I think you can do Control C. If you're on a Windows machine, try that too. Or just reboot. Just kidding. Just turn it off and back on again. Oh, and Matt Simpson, if you have one that says it's still in progress in the web console, it should eventually clear out. All right. If you were able to complete a scan, give us a thumbs up in Discord, because we're going to add some more exciting stuff: bells and whistles and GraphQL-specific scanning capabilities. And maybe we'll even have time to automate it in a pipeline. So let us know. But of course, if you're still figuring out the scan, we don't want to move on without you.

Okay, let me just show a little bit about this scan that we just ran. So if you click into the scans section, this is going to be a running list, with the latest one at the top, of all the scans that you've run. And the summary information that you've got here is: okay, the last thing I scanned was this application, vuln-graphql-api, in the development environment; it found six paths, and it found some issues. None were high severity, eight were medium, and 12 were low. And then in parentheses, there's this triage number. In the future, when you triage issues, like assign them out to developers or say "risk accepted," this number will start to grow and it will take away from the number of vulnerabilities. If you triage all eight, this is gonna be zero, parentheses eight. And that just tracks: what have you acknowledged, what are you working on, and what does HawkScan not need to alert you about in the future and count against you? Because you can set this up to actually block your build if you want. By default it doesn't do this, but you can set a failure threshold of high, so that if it finds any new high severity alerts, it will actually return an error, and that can be set up to block your build. So your build will no longer complete. And if you have this set to run on PR, which we recommend, your PR tests will fail, and so you can't merge until you either fix that high severity issue or triage it and assign it to somebody. We just want to make sure that you have dealt with it. But again, that's not the default behavior; it's just an option. If I click into this, I get to see more details about the issues that were found. So there's a cross-domain misconfiguration: we're setting our CORS header to allow cross-origin requests from anywhere. And one of the things that you see is that we found that issue in all of these paths.
This is the request we sent, including the evidence header that we sent in, and the response that we got back, with the problem it found highlighted in red. You can also validate these issues.
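The build-blocking threshold mentioned above is a small addition to the scan configuration. A sketch, with the key name as I understand StackHawk's schema; confirm against the current docs:

```yaml
# Return a non-zero exit code on new high-severity findings,
# so a CI build or PR check can block on it. Not the default behavior.
hawk:
  failureThreshold: high
```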

12. Configuring Scanner for GraphQL

Short description:

To optimize the scanner for GraphQL, we need to configure the stackhawk.yaml file. Enabling GraphQL in the configuration, turning on autoPolicy, and enabling autoInputVectors will improve the accuracy and speed of the scan. Additionally, we'll turn off the default spider and instead focus on the GraphQL endpoint. By doing so, we can run introspection to determine what can be scanned in the GraphQL API. Once the configuration is set, we can rerun the scan and monitor the results on the StackHawk platform.

So I can construct a curl command right from this website that allows me to run exactly the same probe that got sent in and I can verify that I get exactly the same result. So I'll just copy that to my clipboard. I will paste the same thing in and I get the same response including that access control allow origin response header. So that's kind of cool, but none of this is GraphQL of course. So next we need to tell this scanner that this is a GraphQL app so we can actually get some good GraphQL results.

How are we doing in the chat, Rebecca? Someone's scan ran, but it returned zero findings. And I'm wondering, it looks like it may have been configured to run on HTTPS localhost. So I'm wondering if we rerun it on HTTP localhost, if that will change things. I'm trying to think of other reasons we've run into that type of behavior before. Yeah. So Alexi, just go over to your stackhawk.yaml file, and I think you've got an S there. Just make that HTTP and save it. And run the scan again. And in the meantime, I'm gonna come back here to the workshop slide.

So we've run our first HawkScan, and now we're gonna tune this up for GraphQL. So we're gonna add a couple things to the stackhawk.yaml configuration file. We're gonna tell it that it's a GraphQL configuration: we're gonna enable GraphQL in the configuration. And there's a lot more options under here, and I'll show off a couple of those in a little bit. We're gonna turn on autoPolicy, that's so that you're fully insured when you drive down the road. Or, I mean, not really, that's not what that is. AutoPolicy is actually a thing that automatically configures the ZAP scan policy depending on the features you've turned on. So autoPolicy is gonna recognize that you've turned on graphqlConf, and so it is a GraphQL application. This is an OWASP ZAP concept: they've got scan policies that turn on and off various features to tune it to your scan. So this is gonna turn off things that are really not at all related to GraphQL and make sure that all of the GraphQL-related plugins and probes are turned on. And this has the effect of making it more accurate and faster. Now we're also gonna turn on... sorry, go ahead. Zach, I'm just gonna add on to that. I think Lassa posted in Discord that he had slightly different results than you. And that can happen when you're using a dynamic application security testing tool to scan GraphQL apps, because a lot of DAST tools were created long before GraphQL existed, so they don't have a way of making sure those scans are repeatable and accurate and find all of the right things specific to your GraphQL app. So by implementing these pieces that Zach is talking about here, you'll find that we get better quality results that are very focused on that GraphQL app, since we're pulling the right policy, since we're invoking that API correctly, and since we're able to better spider that app.

That's right. Well said. So we'll also turn on this autoInputVectors feature; we'll set that to true. What that does: normally, HawkScan is going to try every different kind of request that it thinks could work. So that means in GraphQL it might try GETs and POSTs, but with autoInputVectors, it's just going to get smarter about that, and for GraphQL, for instance, it's just going to do POSTs. It cuts down on the scan time, and I'm not sure if it improves accuracy, but it definitely cuts down on the scan time. And usually when you're using GraphQL, POST requests are what you want anyway. Then finally, we're going to turn off this default spider. So the scan that we just ran used the default classic web spider. It operates like Google: it finds the root, finds any links off of that, follows those, finds any links off of those, and continues to spider through. We're going to turn that off, because instead, with graphqlConf enabled, it's going to go find that /graphql endpoint by default and run introspection to figure out what it can scan in your GraphQL API. So let's do that. I'm just going to copy my own work here, and you can copy and paste this from the guidebook as well. graphqlConf enabled is true. I'm going to set autoPolicy to true and autoInputVectors to true. And soon we will have auto-completion for most IDEs. I don't know if I should be saying that, but we're working on that so you can auto-complete some of this stuff. Then we're also going to add a major section here called hawk, which has some of our special HawkScan-only features, and I'm going to set the spider base to false. So this turns off the Google-style web spider. And then we run our scan again, just as we did before, using the same command. And again, if you're having any network trouble, you might try --network host. And we'll run that scan.
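Putting those options together, the tuned file looks roughly like this. Option names and placement follow the workshop guidebook as spoken here; verify key names against StackHawk's current configuration docs, and the applicationId is again a placeholder:

```yaml
# stackhawk.yaml tuned for GraphQL scanning
app:
  applicationId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # placeholder UUID
  env: Development
  host: http://localhost:3000
  graphqlConf:
    enabled: true        # introspect /graphql instead of link-spidering
  autoPolicy: true       # auto-tune the ZAP scan policy to enabled features
  autoInputVectors: true # e.g. send only POST requests for GraphQL
hawk:
  spider:
    base: false          # turn off the classic web spider
```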
So now it finds that we do have a GraphQL configuration: scanning localhost, got a scan ID to refer back to, found the stackhawk.yaml configuration file. And now when it spiders, it's going to show us a different kind of spider result. Before, it said, hey, I found these paths; but this time around it says, hey, I found these schema objects that I can go and scan. And if I pop over to the StackHawk platform, which I'm going to do now, I should see that the scan has started, and I can click in and watch results come in. So it found three paths, has a few findings, and it's running through the plugins. This is kind of cool: you can see which ZAP plugins are running. And this can be a useful place to come when you're troubleshooting.

13. Scan Progress and Issue Management

Short description:

The scanner is aggressive and manipulates data, so it's recommended to run it on a fresh database. Some participants encountered issues with the configuration file, but the scan is progressing well overall. High severity issues were found, including possible SQL injection and remote OS command execution. The latter is a real vulnerability with significant consequences. Assigning issues to other users and integrating with JIRA are possible actions. The tool provides detailed information about the vulnerabilities and allows for validation of the scan. A question was asked about how the tool finds issues, and an overview of how DAST works was requested.

If a scan starts to take too long, you can take a look at how many tests are running. One thing with GraphQL applications, or any application that's got data behind it, is that the scanner is very aggressive and it will try to manipulate data. It will actually try to go in and delete users, for instance. This is one reason you don't wanna run this in production: it'll try to add data or remove data. And so your scan results may change from run to run unless you condition your database. This is why we like to set up a fresh database, seed it with known data, and run the scan. And if you do that every time, and this is part of the reason why it's so nice to be able to automate all of this, you're always starting from a known data condition.

All right. So are we done? Yes, completed on. Mine took about 10 minutes.

Okay. Mutation. Oh, Alexi, you have this trailing slash here. Try removing that trailing slash in the configuration file. Nope, we're a little bit picky about that. I'm sorry. Right. If you're able to run the scan, give us a thumbs up. Sometimes when you're updating that YAML, if you add an extra space somewhere, it can be a little bit finicky. Let us know and we can help you troubleshoot. I'm just trying to make sure everybody gets a successful scan with the GraphQL option configured. Let me see. Right, we've got seven thumbs up on running that scan. If you're ready to move on, just give us a thumbs up so we know if your scan is still running. That's cool, we'll keep hanging out. Oh, the successes are rolling in. Let's walk through some of these results a little bit more. You can see now we've got a couple of high severity issues that have been found, including SQL injection, where it found that it could send in a request... and this one I'm not actually really super sure about. We get a SQLite error back, "SQLite error: unrecognized token," and that looks like it might not actually be a SQL injection. I'm going to need to check back with the big brains on this one. But this other one, I'm quite certain, is a real vulnerability: remote OS command execution. What this finds is that if you send in a correctly formed mutation and you sneak in this extra command, you can cat the /etc/passwd file, which by itself isn't really that bad, but it's not good to be able to run extra commands on your application server in your data center. The response that we get back is actually your entire /etc/passwd file. And if this container was actually running as a root user, you could probably get your shadow file too, which really is bad. You could probably also manage to do things like install arbitrary software into that container, maybe run a Bitcoin miner, that sort of thing.
I might be talking a little bit beyond myself, but these are plausible things, so this is a real issue and it really is a high severity issue. So we can click on this, and if we want we can say, hey, that's fine. Or, since I do believe this one really is an issue, the right thing to do here is to assign it to another user. Now, I haven't set up my JIRA integration here, but I can show you what it looks like to set up a JIRA integration. I'm going to sign in to Swamp Theory for a moment, and this will allow some folks to continue working on the scan. I'm going to go to an incognito window and log in. I'm going to log in here, select this thing, and assign this to myself, and hopefully this integration is still set up. I might have torn it down, but scroll over here and say send to JIRA, and it finds my project. I've got an engineering project in JIRA. It sets up a summary title for the issue, which is a brief description of the application that it refers to and the vulnerability that was found. And then it's got all this great detail, where it says, hey, I found this when I ran in GitHub Actions. Here's the finding, the criticality, the path, and the injection into the query. And then here's a link back to the scan result, so I can look at this entire set of data and validate the scan and everything. Hit create issue. Yeah, it actually works. And then open that link. I was only nervous because I couldn't remember if I still had this set up. Hey, welcome. Well, I'm sorry, I'm not going to show you the ticket in the system, because I don't want to walk through all of this stuff, but it is there. It is a super handy feature. All right, let's check and see where we're at. Okay. We did get a question, which was: how does the tool find this type of issue? Does it run multiple queries for the specified query? Maybe you could just give a quick overview of how DAST works.

14. Remote Execution and Vulnerability Detection

Short description:

There are plug-ins used by the scanners, including many from the OWASP ZAP project. These plug-ins have been vetted and contributed by the community. The scanner tries to inject an OS command by sending a request to the GraphQL API. It found a vulnerability by successfully running the 'cat /etc/passwd' command and obtaining the expected response. The scanner can also try injecting malicious content into different String fields.

For the remote execution one in particular: in general, what happens is there's a ton of plug-ins that these scanners have. We're mostly using plug-ins from the OWASP ZAP project; we've got some of our own as well. Those are open-source plug-ins that have been vetted by the community and contributed by the community. We've written a couple of them too. Basically what they do, in this case remote OS command injection, is try a couple of things to try and inject an OS command. So this is the one that worked; there are probably a bunch of other ones that didn't. But it just tried it and said, hey, when I did introspection, I found in the schema that there's a mutation called super-secret private mutation, and it's got these characteristics. I'm going to try sticking some malicious things into this query, and into this variable that I send in through the query, to see if I can trick it into running an arbitrary command. And the example that they gave me for the command that I can run is caca. But I'm going to try slipping in an extra command, cat /etc/passwd. So it sent in that request, and the response that it got was what it would expect to see when you go into a container and run cat /etc/passwd. /etc/passwd is a file on all Linux systems that contains all the users of the system. It actually used to contain their encrypted passwords as well; it doesn't anymore, but it's still an important file, and an important file to protect. So anyway, it finds that the results of the command that it sent in were what it would expect to see from that command, from the cat /etc/passwd command. So that's how the scanner knows that it found a vulnerability: it found the ability to run arbitrary commands on the host. And that's our vuln-graphql-api application container that it ran that command on. So if I log into that container... and why don't I do that?
Let me see if I can connect to this container. So I've got the vuln-graphql-api container running. This is our test application that's running on port 3000, the one we can jump into and run GraphiQL on. If I run docker exec -it against the vuln-graphql-api container with a shell... yeah, so I've got a shell here. Now, the command that it tried to send in was cat /etc/passwd, and this is exactly the result that it got. It just shows it to you in a slightly different format, because it's returning JSON. You see there's the root user with the /bin/bash shell, and in here we've got the root user with the /bin/bash shell. And that's what all these probes look like: they're just trying stuff, messing around and finding out whether their vulnerability attacks actually worked. And the BST time zone is England, British Summer Time. Traws asks: if I change the query from command, and the fields from standard out and standard error, to something else, will it still run the plugin for such a vulnerability? If you're asking whether it would try injecting a different kind of command, the answer is yes. Would it run the plugin for all String fields? It's trying some specific String fields — so if you had different String fields that you could send into the query, yes, it would try sending that malicious content into all of those String fields.
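As a sketch of what that probe might have looked like on the wire — the mutation and field names here are illustrative guesses pieced together from the description above (the "super-secret private mutation" with standard-out and standard-error fields), not the scanner's actual payload:

```graphql
# Hypothetical shape of the injected operation. The scanner takes the
# mutation it discovered via introspection and appends an extra shell
# command to the string variable that the resolver passes to the OS.
mutation Probe($cmd: String!) {
  superSecretPrivateMutation(command: $cmd) {
    standardOut
    standardError
  }
}

# Variables sent with the request -- "; cat /etc/passwd" is the injection:
# { "cmd": "caca; cat /etc/passwd" }
```

If the response body contains the familiar root:x:0:0 line from /etc/passwd, the scanner knows the injected command executed.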

15. Automating Application Build and HawkScan

Short description:

We're going to automate the process of running the application and HawkScan on every push to the repository. GitHub Actions is a built-in CI/CD system that lets you define workflows easily. We'll create a workflow that runs on every push event and consists of a series of jobs. The first step is to clone the repository and check out the branch and revision that was pushed. Then we'll build and run the vuln-graphql-api using Docker Compose. Finally, we'll run HawkScan using the HawkScan action and provide the API key stored in GitHub Secrets. Make sure to protect your secrets and store them securely. By enabling workflows and setting up the API key as a secret, we can automate the process at no additional cost. If you have any questions or need assistance, feel free to ask.

Hey Zach, I'm thinking we only have 20 minutes, and we still haven't gotten to the coolest part, which is getting this to run in GitHub Actions. So what do you say we make this automated, so it could run on every pull request? You got it, let's do that.

Okay, back to the slides. So we just tuned it for GraphQL. We ran that scan, it worked beautifully, and we got cool results. So now what we want to do is take our API key for StackHawk, add it as a secret to GitHub Actions, and enable GitHub Actions workflows. And we're going to create a workflow that looks like this, so that when we push changes to this repository, such as all the updates we've just made, it will go and build the application, start it up as a service, and then run HawkScan against it automatically, every time.
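For reference, the GraphQL tuning just mentioned lives in the stackhawk.yml file; a minimal sketch might look roughly like this (the applicationId is a placeholder, and the key names follow my recollection of the StackHawk configuration docs — check the current docs for the exact schema):

```yaml
# stackhawk.yml -- minimal GraphQL scan configuration (sketch)
app:
  applicationId: xxXXXXXX-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # placeholder; yours comes from the platform
  env: Development
  host: http://localhost:3000
  graphqlConf:
    enabled: true            # tell HawkScan this is a GraphQL API
    schemaPath: /graphql     # introspection endpoint, /graphql by convention
    operation: ALL           # probe both queries and mutations
```

With graphqlConf enabled, the scanner runs an introspection query against schemaPath and uses the returned schema to decide how to probe the endpoint.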

Let me back up just a minute. GitHub Actions — I don't know how many of you have worked with it or heard of it — is a CI/CD system that's built right into GitHub. It's like Jenkins, it's like Spinnaker in a way, it's like CircleCI or Travis CI, but it's really nice because, since it's built into GitHub, all you really have to do is add a file that defines the workflow you want to run every time you push (or whatever), and it will run it. So it's a super easy thing to set up, and you get 2,000 free minutes even with just a free developer account with GitHub. So what we're going to do today is not going to cost you anything.

Let me just walk through the taxonomy of a workflow really quickly, and this is a simple one. Give it a name, build and scan, and tell it when to run; we're going to tell it to run every time we do a push to GitHub. You can also do it for PRs, and for lots of different GitHub events — a PR being accepted, somebody making an API call to GitHub, or another workflow finishing might kick this workflow off. It's very flexible, so you can create complicated pipelines this way. So every time we push, we're going to run a series of jobs. Really just one, called hawkscan, that we'll name Build and Scan. It's going to run on an Ubuntu virtual machine — a full virtual machine with a bunch of tools on it, so the build tools that you need are usually there. The first step we're going to take is to clone the repo. So we've pushed, we've fired up this Ubuntu image, and now we're going to check out exactly the branch and revision that we just pushed. Then we're going to build and run vuln-graphql-api with the same simple command we ran before: export the server port as 3000, then docker-compose up with build and detach. And you'll notice I use docker-compose (with the dash) here; that's because this Ubuntu image doesn't have the newest version of Docker, but this works. Then we're going to run HawkScan. This uses the HawkScan action, a tight integration that we've developed for GitHub Actions. It just makes it a little bit easier to run StackHawk, so you don't have to run the big Docker command; you just run this action. And we're going to feed in the API key that we stashed in GitHub Secrets, called HAWK_API_KEY. GitHub Secrets is just a secret store. This is important for any CI system that you use: you want to make sure that you protect your secrets. Don't put secret information into your repo directly; always use a secret store. So let's walk through that. First off, I need my secret. Where did I put my secret?
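Putting the pieces just described together, a sketch of that workflow file might look like this. The action version tags and the SERVER_PORT variable name are assumptions on my part — pin whatever versions are current and match the environment variable your compose file actually reads:

```yaml
# .github/workflows/build-and-scan.yaml (sketch)
name: Build and Scan
on: push                      # run on every push; pull_request and other events also work
jobs:
  hawkscan:
    name: Build and Scan
    runs-on: ubuntu-latest    # full VM with the usual build tools preinstalled
    steps:
      - name: Clone repo
        uses: actions/checkout@v2   # checks out the exact branch/revision that was pushed
      - name: Build and run vuln-graphql-api
        run: |
          export SERVER_PORT=3000
          docker-compose up --build --detach
      - name: Run HawkScan
        uses: stackhawk/hawkscan-action@v1.3.2   # version tag is an assumption
        with:
          apiKey: ${{ secrets.HAWK_API_KEY }}    # pulled from GitHub Secrets
```

The only repo-specific pieces are the secret and whatever the scanner reads from stackhawk.yml; the workflow itself is generic.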
It should be back here in hawk.rc. There it is. So I'm going to copy that so that I can paste it into GitHub Secrets. Rebecca, will you paste that cat command in the chat, just so people can grab it out of their IDEs? Yes — there's one in the Zoom chat and here's one in the Discord. Okay. So copy your key. Then we'll jump out to GitHub, to your copy of the vuln-graphql-api app. Under the Settings tab in your repo, off to the left, way down here, there should be a Secrets section. So I'm in Actions secrets, and this is specific to this repository. I'm going to create a new repository secret, and I'm going to call it HAWK_API_KEY. This is case sensitive. Paste your key in right there, add the secret, and it shows up down here. Then this is available in an action workflow using that special notation, secrets.HAWK_API_KEY. There's one more thing I need to do from the GitHub repository: head over to Actions. Because we forked this from a different organization, a third party, and the repo already had some workflows in it that we deleted, GitHub turns workflows off for you as a safety measure when you fork a repo with existing workflows — because somebody could sneak in a workflow that just runs a bunch of Bitcoin miners or whatever all the time, and you don't want that. Anyway, we know that this one is safe, so let's go ahead and enable workflows for this repo. And if you've been able to store that as a secret, just give us a thumbs up. I know we're coming up on time here, so I'm not sure how many folks are following along, but we posted how to do that in the Discord once more. Give us a thumbs up just before Zach jumps into how to build that workflow; we want to make sure everybody's able to successfully execute this. Yeah.

16. Enabling Workflows and Running HawkScan

Short description:

To enable workflows, go to the Settings section in the vuln-graphql-api repo and add a new repository secret called HAWK_API_KEY with your key as the value. Then, enable workflows in the Actions section. Copy the provided workflow code and create a new file named build-and-scan.yaml in the .github/workflows directory. Save the file and push the changes. You can monitor the workflow's progress in the Actions section. The build-and-scan job will clone the repository, build and run the vuln-graphql-api using Docker Compose, and then run HawkScan. The scan progress can be viewed on the StackHawk platform.

So again, just to show that off: you head over to the Settings section in the vuln-graphql-api repo, and down on the left is the Secrets section. You add a new repository secret here, call it HAWK_API_KEY, put your key in as the value, and add the secret. I assumed this would error out when I tried it, because I'd already done it — oh, it just updated it for me. Okay, awesome. Then go over to Actions and enable workflows. Now, in the interest of time, because I know we're running short, I'm just going to copy and paste this workflow in, but we walked through what it is. It just defines a job called hawkscan, named Build and Scan, that runs on Ubuntu, clones the repo, builds the app, runs the app, and scans the app. And it pulls in that secret we just created, HAWK_API_KEY. I'm going to copy this whole thing and create the file under .github/workflows, and we're going to call it build-and-scan.yaml. .github/workflows is just where GitHub looks for these workflow files; every time you do a push, or some other event happens, it checks them. I pasted that in just as-is from the guidebook, and it looks good, so I'm going to save it. And you should have a bunch of changes stacked up: I changed my Docker Compose file, I've got stackhawk.yml, and we deleted some stuff when we ran that workshop prep. So I'm going to git add all of that, git commit with the message "added scan config and workflow", and we're going to push it. Those commands are right here; I'm going to paste them into the Zoom chat and the Discord chat, and let me copy and paste the config file as well. This config file again lives in .github/workflows — that's a dot-github. Your file can look exactly like this; there's nothing special about it that's unique to my repo. The only unique bits of information are what the scanner is going to find in the stackhawk.yml configuration file and this secret key that we're feeding in.
Now, if I go back to my repo — I've pushed that — and drop into the Actions section, we should see, under all workflows, build and scan, one workflow run. As you do commits, you'll see the commit message up here, and you'll see whether it's running: yellow if it's running, green if it succeeded, red if it failed. You can click into it, and here's the build-and-scan job that is in the workflow, so that maps to this. And I can click into that and see how it's doing on these steps. So I'll click in. It has set up the job; it found Ubuntu 20.04; it's got a virtual environment provisioner, a bunch of default environment variables, and so forth. Then we cloned the repository and checked out exactly the same commit — that's my commit hash. Then we're building and running vuln-graphql-api. This is the same Docker Compose run that we did locally, and here are some of those warnings that one of our participants mentioned, but they're just warnings; they should be okay. Huge amounts of output. And here's where it's seeding the database with some fake users. Goodness, that's thorough. Then finally it has completely built the application and started it up, and now HawkScan is running. The HawkScan action kicks in and, from the data that we fed it, constructs this command: docker run with a TTY, remove-on-exit, and a volume mount. It has mounted the repository to the /hawk directory within the container, sent in the API key, and it's using host networking because that's always more reliable. It pulls down the image and runs HawkScan. It found our stackhawk.yml configuration file, it's running against localhost in the build environment — in CI/CD, in GitHub Actions — and it has noticed that it's a GraphQL application, because we configured it as such. It has found some GraphQL routes and it's in progress. So if I pop over to the StackHawk platform, I should see that the scan has started and is running.
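The command the action constructs is roughly equivalent to running HawkScan by hand with Docker. This is a sketch reconstructed from the description above, not the verbatim command the action emits:

```shell
# Roughly what the HawkScan action runs under the hood (sketch).
# The repo is mounted at /hawk so HawkScan can find stackhawk.yml;
# host networking is used because it's more reliable in CI; the API
# key comes in from the HAWK_API_KEY secret.
docker run --tty --rm \
  --volume "$(pwd)":/hawk \
  --env API_KEY="${HAWK_API_KEY}" \
  --network host \
  stackhawk/hawkscan
```

Running the action instead of this raw command just saves you from maintaining the flags yourself.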
Click in and see how it's doing, running through these plugins.

17. Running in GitHub Actions and Platform Features

Short description:

If everything went right, it should be finding pretty much the same thing that it found on my laptop. Some folks have been able to get this up and running on GitHub Actions. If you have run into any problems or errors in GitHub Actions, please let us know. We covered scanning locally, GraphQL custom configs, and running in GitHub Actions. If anyone has any questions, feel free to ask. Make sure to enable Actions in your GitHub repository. Under the Applications tab, you can view scan results and triage issues. The StackHawk platform offers various integrations and documentation for further details. You can configure failure thresholds to break builds on different severity levels of findings.

And if everything went right, it should be finding pretty much the same thing that it found on my laptop. What say you, StackHawk? It looks like some folks have been able to get this up and running in GitHub Actions — we got a screenshot in the Discord, and we'd love to see more. If you have run into any problems or errors in GitHub Actions, please let us know. I know we're coming up to the end of our time here, so I want to get everybody squared away with another successful scan before we wrap up. Just drop us a line if you're running into anything funny. Yeah, and if you're on that M1 Mac and you followed along to this point: if you push this stuff in and you've got the configuration set up right, your scan will run in GitHub Actions through the pipeline. It's a little bit of a cheat code. All right. Well, geez, we covered so much today: scanning locally, GraphQL custom configs, and running in GitHub Actions. We do have a couple more minutes, so if anyone has any questions, feel free to drop those in; we'd love to help get you answers. Or if you want to figure out how to set this up with your own GraphQL app as well, definitely let us know. Matt Simpson, yes, excellent call. That's right, you need to make sure that Actions is enabled, because on these forked repos GitHub will turn it off by default — it doesn't want you pulling in actions that might be overly aggressive and do stuff you weren't expecting. Just to show you where that is: click over to the Actions tab, and if, instead of this stuff, you see a big button that says enable Actions — or enable workflows — click that button. I wonder if I can turn it off to reset it... Actions, disable Actions. It just makes it go away. That's not fun. Settings, then Actions, is another place where you can go to find it.

All right, and while you're looking at that, if you're done and you want to see a little bit more of the StackHawk platform, let me show you a thing or two. Under the Applications tab, there's this card view. Within each application, you'll see a card for every environment that you run your scans in, and you'll see this little bar graph. The bar on the right-hand side is the very latest scan, and the oldest scan is the farthest to the left. It shows you a profile of what was found in each of these runs, so you can watch over time: if you've got this in your CI/CD system and you're running scans every day, you can see how your profile has changed. If you triage issues — assign them, or accept some of them — you'll see a gray bar start to descend below the axis, so you also get a view of what you've triaged and how you've handled things over time. So that's the Applications view. From here you can also get to details about your scan settings. You can set technology flags, for instance; I think tech flags are disabled here — you get those on the Pro account. With tech flags, you can define things like: this is running on Linux, I'm using Java and JavaScript. You check off which technologies you're using, and that helps the scanner tune itself; it basically speeds up scans and can increase accuracy. Tons of integrations: we've got integrations with all the various CI/CD platforms, we can notify you through Slack and Microsoft Teams, we can submit your data to Datadog so you can track it there, we can integrate with project management through Jira, and we've got generic webhooks that you can send out to other platforms. What did I miss, Rebecca? Documentation, yeah. Head to the docs, and there are tons of details on everything that we've talked about today and more.
One of the big things that you'll probably want to get into if you try to scan your own GraphQL application is that most of your interesting routes are authenticated. So you'll want to head over to the authenticated scanning section of the docs and look into that; we've got scenarios, and it's all, again, configured through the same YAML configuration. Yes, Zach, we did just get a question: can you configure it to break builds when you find different severity levels of findings? I know it's one line of YAML, and I was just about to pull it up, but you probably know it offhand better than I do. Yes — great question. That is under the hawk section of the configuration, and it's called failureThreshold. I will copy this link. If you want to break the build when high-severity findings are found, you just configure that failureThreshold variable with the value high; the same goes for medium or low. The teams we have that have this configured in CI typically set it to high, but depending on the type of app you're working on, the regulations of your industry, and customer requirements, you could make it as rigorous as you'd like. So I'll just show you where that goes. I think it's under hawk, right? It is.
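As described, the failure threshold is a single line of YAML under the hawk section of stackhawk.yml; a sketch, following my reading of the StackHawk docs:

```yaml
# stackhawk.yml -- fail the scan (and thereby break the CI build)
# when findings at or above the chosen severity are present
hawk:
  failureThreshold: high   # also accepts medium or low
```

With this set to high, the HawkScan step exits non-zero on high-severity findings, which fails the GitHub Actions job.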

18. Closing Remarks

Short description:

If anybody wants to wait for that to finish, this should fail because I've got some high-severity issues. We've still got about a week to the conference, but we'll be there, and you can reach out to us in the Discord if you have any questions. In the meantime, if you need help getting this going with your own app, feel free to tag Zack or Nick in that Discord as well. We have a great support team here at StackHawk too. We will reach out to the people that have won swag next week. We try to make this stuff as easy as possible, but it's security, and we're continuing to work on that. If you ever have any problems, reach out to us through this channel. Thank you all so much for coming. Have a great morning, day, night, or evening, whatever time zone you're in, wherever that may be. Have a good one. We'll see you at React Advanced.

So you'd set failureThreshold to high. And if anybody wants to wait for that to finish, this should fail, because I've got some high-severity issues. That's a great question. Awesome. All right.

Coming up on time, but any other questions we can answer before we wrap up? No? We've still got about a week to the conference, but we'll be there, and you can reach out to us in the Discord if you have any questions. In the meantime, if you need help getting this going with your own app, feel free to tag Zack or Nick in that Discord as well. We have a great support team here at StackHawk too. So, anything and everything you could possibly need, we're here. We will reach out to the people that have won swag next week.

Yeah, we really pride ourselves on support. We try to make this stuff as easy as possible, but it's security; it's still not as easy as we want it to be, and we're continuing to work on that. If you ever have any problems, reach out to us through this channel, or reach out to us at our support email, and we will help you with your configuration. And if your company wants to trial this and give it a shot for Pro usage, we can set you up with a Slack channel so that we can interact in real time. Thank you all so much for coming; you've been great. Thank you, Zach — thanks for leading us through it, we appreciate it. All right. Have a great morning, day, night, or evening, whatever time zone you're in, wherever that may be. Have a good one. We'll see you at React Advanced.

Watch more workshops on topic

GraphQL Galaxy 2021
140 min
Build with SvelteKit and GraphQL
Workshop Free
Have you ever thought about building something that doesn't require a lot of boilerplate with a tiny bundle size? In this workshop, Scott Spence will go from hello world to covering routing and using endpoints in SvelteKit. You'll set up a backend GraphQL API then use GraphQL queries with SvelteKit to display the GraphQL API data. You'll build a fast secure project that uses SvelteKit's features, then deploy it as a fully static site. This course is for the Svelte curious who haven't had extensive experience with SvelteKit and want a deeper understanding of how to use it in practical applications.
Table of contents:
- Kick-off and Svelte introduction
- Initialise frontend project
- Tour of the SvelteKit skeleton project
- Configure backend project
- Query Data with GraphQL
- Fetching data to the frontend with GraphQL
- Styling
- Svelte directives
- Routing in SvelteKit
- Endpoints in SvelteKit
- Deploying to Netlify
- Navigation
- Mutations in GraphCMS
- Sending GraphQL Mutations via SvelteKit
- Q&A

React Advanced Conference 2022
96 min
End-To-End Type Safety with React, GraphQL & Prisma
Workshop Free
In this workshop, you will get a first-hand look at what end-to-end type safety is and why it is important. To accomplish this, you’ll be building a GraphQL API using modern, relevant tools which will be consumed by a React client.
Prerequisites:
- Node.js installed on your machine (12.2.X / 14.X)
- It is recommended (but not required) to use VS Code for the practical tasks
- An IDE installed (VS Code recommended)
- (Good to have) A basic understanding of Node.js, React, and TypeScript
GraphQL Galaxy 2022
112 min
GraphQL for React Developers
There are many advantages to using GraphQL as a data source for frontend development, compared to REST APIs. As developers, we need to write a lot of imperative code to retrieve data to display in our applications and to handle state. With GraphQL you can not only decrease the amount of code needed around data fetching and state management, you'll also get increased flexibility, better performance and, most of all, an improved developer experience. In this workshop you'll learn how GraphQL can improve your work as a frontend developer and how to handle GraphQL in your frontend React application.
React Summit 2022
173 min
Build a Headless WordPress App with Next.js and WPGraphQL
Workshop Free
In this workshop, you’ll learn how to build a Next.js app that uses Apollo Client to fetch data from a headless WordPress backend and use it to render the pages of your app. You’ll learn when you should consider a headless WordPress architecture, how to turn a WordPress backend into a GraphQL server, how to compose queries using the GraphiQL IDE, how to colocate GraphQL fragments with your components, and more.
GraphQL Galaxy 2020
106 min
Relational Database Modeling for GraphQL
Workshop Free
In this workshop we'll dig deeper into data modeling. We'll start with a discussion about various database types and how they map to GraphQL. Once that groundwork is laid out, the focus will shift to specific types of databases and how to build data models that work best for GraphQL within various scenarios.
Table of contents
Part 1 - Hour 1
      a. Relational Database Data Modeling
      b. Comparing Relational and NoSQL Databases
      c. GraphQL with the Database in mind
Part 2 - Hour 2
      a. Designing Relational Data Models
      b. Relationships, Building Multi-join Tables
      c. GraphQL Relational Data Modeling Query Complexities
Prerequisites
      a. Data modeling tool. The trainer will be using
      b. Postgres, albeit no need to install this locally, as I'll be using a Postgres Docker image, from Docker Hub, for all examples
GraphQL Galaxy 2021
48 min
Building GraphQL APIs on top of Ethereum with The Graph
Workshop Free
The Graph is an indexing protocol for querying networks like Ethereum, IPFS, and other blockchains. Anyone can build and publish open APIs, called subgraphs, making data easily accessible.
In this workshop you’ll learn how to build a subgraph that indexes NFT blockchain data from the Foundation smart contract. We’ll deploy the API, and learn how to perform queries to retrieve data using various types of data access patterns, implementing filters and sorting.
By the end of the workshop, you should understand how to build and deploy performant APIs to The Graph to index data from any smart contract deployed to Ethereum.

Check out more articles and videos

We constantly think of articles and videos that might spark your interest, skill you up, or help you build a stellar career

GraphQL Galaxy 2021
32 min
From GraphQL Zero to GraphQL Hero with RedwoodJS
We all love GraphQL, but it can be daunting to get a server up and running and keep your code organized, maintainable, and testable over the long term. No more! Come watch as I go from an empty directory to a fully fledged GraphQL API in minutes flat. Plus, see how easy it is to use and create directives to clean up your code even more. You're gonna love GraphQL even more once you make things Redwood Easy!

Vue.js London Live 2021
24 min
Local State and Server Cache: Finding a Balance
How many times have you implemented the same flow in your application: check if data is already fetched from the server; if yes, render the data; if not, fetch the data and then render it? I think I've done it more than ten times myself, and I've seen questions about this flow more than fifty times. Unfortunately, our go-to state management library, Vuex, doesn't provide any solution for this.
For GraphQL-based applications, there was an alternative: the Apollo client, which provided tools for working with the cache. But what if you use REST? Luckily, we now have a Vue alternative to the react-query library that provides a nice solution for working with a server cache. In this talk, I will explain the distinction between local application state and a local server cache, and do some live coding to show how to work with the latter.

GraphQL Galaxy 2022
30 min
Rock Solid React and GraphQL Apps for People in a Hurry
In this talk, we'll look at some of the modern options for building a full-stack React and GraphQL app with strong conventions and how this can be of enormous benefit to you and your team. We'll focus specifically on RedwoodJS, a full stack React framework that is often called 'Ruby on Rails for React'.
GraphQL Galaxy 2022
16 min
Step aside resolvers: a new approach to GraphQL execution
Though GraphQL is declarative, resolvers operate field-by-field, layer-by-layer, often resulting in unnecessary work for your business logic even when using techniques such as DataLoader. In this talk, Benjie will introduce his vision for a new general-purpose GraphQL execution strategy whose holistic approach could lead to significant efficiency and scalability gains for all GraphQL APIs.