GraphQL Security Testing Technical Workshop


We’ve all heard the buzz around pushing application security into the hands of developers, but if you’re like most companies, it has been hard to actually make this a reality. You aren’t alone – putting the culture, processes, and tooling in place to make this happen is tough – especially for sophisticated applications like those backed by GraphQL. In this hands-on technical session, StackHawk Senior DevOps Engineer Zachary Conger will walk through how to protect your GraphQL APIs from vulnerabilities using automated security testing. Get ready to roll up your sleeves for automated AppSec testing.

104 min
06 Dec, 2021


Video Summary and Transcription

Today's workshop covers Automated GraphQL Security Testing using tools like Dependabot, CodeQL, and StackHawk. It explores security tests such as software composition analysis and static application security testing. The workshop demonstrates setting up GitHub Actions workflows, enabling Dependabot for dependency security, and configuring CodeQL analysis. It also highlights the use of StackHawk for automated penetration testing and optimizing the scanning process. The workshop emphasizes the importance of continuous testing and security measures in software development.


1. Introduction to Automated GraphQL Security Testing

Short description:

Today's workshop is about Automated GraphQL Security Testing. We will cover various security tests, including software composition analysis (SCA) and static application security testing (SAST). We'll use tools like Dependabot and CodeQL to scan for vulnerabilities in your dependencies and code base. These tests will be automated in the CI/CD pipeline. Let's get started by forking a GraphQL application repository and adding security tests to it.

The following videos are extracts of this series. Today, the title is Automated GraphQL Security Testing. And my name is Zachary Conger. I'm a senior DevOps engineer at StackHawk, where we make a DAST product, a dynamic application security test scanner. I'm a DevOps early adopter. I've been doing software development, automation, testing, observability, that kind of stuff for many years now, far too many years. And in my spare time, I like to play music, ride my bike and take amateur photographs.

I'm happy to help with any technical issues that come up. But hopefully there won't be too many, because again, we're gonna be operating completely out of a web browser, which is pretty much all you need today. One last time, you should go get the workshop guidebook, there's a link in Discord and in the Zoom chat. We're gonna be using Discord, so get on Discord if you're not there already. We'll be using it for polls and checkpoints, so as we get to certain points in the workshop, we'll stop and say, hey, has everybody caught up? And you give us a thumbs up if you are and a thumbs down if you are not. It really helps with the flow and making sure that everybody can get the most out of the workshop. And finally, if you don't have a GitHub account, go to github.com and sign up.

Here's the agenda for today. We are going to take a GraphQL application. It's a simple test GraphQL application that really doesn't do very much other than provide a GraphQL interface. It's like a little blog engine, a simple blog engine. It's got some vulnerabilities in it. What we're gonna do is all of us are gonna fork that repository to our own GitHub account, then we're gonna add a GitHub Actions workflow to automate the build of that application in GitHub Actions. Then we're gonna start adding security tests to that workflow, so that every time you push code to the GitHub repo, you're gonna run some more tests against it. Some of these are workflow-based and others are more automatic than that; they just happen in the background. So Dependabot is the first test that we'll add, which will check your application for known vulnerabilities in any dependencies that you pull in. The next one will be CodeQL, which will scan the app's code base: it actually parses through all of the code and looks for patterns that indicate vulnerabilities. And then, finally, we're gonna add StackHawk, which will dynamically scan the running application for vulnerabilities. And this all happens in the CI/CD pipeline, so it's automatic on every push.

Okay, so I mentioned these forms of testing, and let me go through them in a little bit more detail. This workshop is supposed to be kind of general, but we are using some specific tools. Of course, StackHawk is one of them and I'm from StackHawk. We think it's the best DAST tool out there, but there are other options, and the point of this workshop is to make you aware of many of them. So the first kind of test that we're gonna do is called SCA, or software composition analysis. This form of testing operates on static code and goes through your dependencies. In this case, it's gonna look at the package.json file and the package-lock.json file, build out the dependency chain, and check it all against a catalog of open source libraries and dependencies, reporting on any known vulnerabilities in any of those library versions. And if it finds any, it's gonna give you a plan of action to fix that. So SCA has become a sort of table-stakes, standard thing that you should do in all of your repositories. There are no false positives in a sense, because all of the vulnerabilities that it finds are known vulnerabilities, and it's really fast, it's really easy, and it's generally free. Most of the outfits out there that offer SCA offer a free plan or a pretty cheap plan. So we're gonna use Dependabot today, but there are other really great options out there. Snyk is one of the best, actually. We may actually pull Snyk in as the SCA option that we use in the future. Dependabot's really nice too. And then there's another option out there, an open source one called FOSSA; they also have a commercially backed option.

The next kind of testing that we're gonna add to our application is called SAST, or Static Application Security Testing. Examples of this are CodeQL, which we're gonna be using today. There's also SonarQube and Checkmarx and a number of other good options. If you just search Google for SAST, you'll find lots of options out there. This kind of testing also operates on static code, but it's not looking at your dependencies at all. It's actually looking through your code base for patterns that would indicate that you might have a vulnerability. Perhaps you're not doing any sanitization of inputs, for instance; it would generally find that kind of problem and report it back to you.

2. DAST and GitHub Actions

Short description:

DAST is a dynamic application security testing method that scans running code for vulnerabilities. It sends requests to the application, analyzes the responses, and reports suspected vulnerabilities. Unlike SAST, DAST has lower false positives and finds more useful issues. However, it can be slow, depending on the size and responsiveness of the application. We'll also explore GitHub Actions, a CI/CD pipeline integrated into GitHub, with a marketplace of actions that simplify complex steps. We'll use GitHub Actions to fork a test application and create a workflow with multiple steps. Each step can run shell commands or use actions from the marketplace, and secrets management is built-in.

It's really neat because as it finds your bugs, it can actually pinpoint them by file and by line. So it can let you know exactly where the problem is. It does tend to have kind of high false positives. And I found in my experience that it doesn't find much that's useful in comparison to other tools. But that's improving every day as these tools improve.

It's also kind of slow, cause it needs to compile your code and search through it and query against it. The larger your code base, the slower it gets.

Finally, we're going to be looking at DAST, dynamic application security testing. And there are several examples of this kind of scanner. Ours is called StackHawk. That's what we'll be using today. Others include OWASP ZAP, which is an open-source project that StackHawk is actually based on; we've built on top of OWASP ZAP. There's also Burp Suite, which is one that probably a lot of people know as a very common pen testing tool. And they've been working on making it easier to automate as well.

So dynamic application security testing is a form of scanning that operates on your running code. Typically this is a web-based application, maybe a REST interface; in our case today, it's a GraphQL interface. And it probes that service for vulnerabilities. It sends in requests, it looks at the responses, and based on the responses that it gets, it tries to determine if there's some form of vulnerability. It reports on those suspected vulnerabilities. And instead of giving you line-by-line details, it gives you the input and output that caused the scanner to think that there is a problem with your code. Like SAST, it finds your bugs, but unlike SAST it tends to have lower false positives and it finds more really useful stuff. And in fact, because it's scanning a running application, you can have some confidence that the things it finds are really exposed in your application and exploitable in the wild, as long as what you're testing really represents what you're going to be running out in production. It can be a bit slow. That's one of the downsides of DAST. But that's kind of a function of how big your application is, how responsive it is, and how close the scanner is to the running application. And even if it's fairly slow, there are usually ways that you can break it down and parallelize the scan to make it faster.

Okay, so that's what we're gonna be looking at today. And I think from there, we're gonna jump into starting out by, yeah, we're gonna look at GitHub Actions. So the first step in our plan is we're gonna fork a test application called vuln-graphql-api. And we're gonna create a GitHub Actions workflow for it to build it, basically, in a CI/CD pipeline built into GitHub.

So GitHub Actions is a CI/CD pipeline built into GitHub. It has a simple YAML configuration language and a huge marketplace of actions, which are like plugins. And this is kind of the genius of GitHub Actions in my opinion, that they created a really easy way for authors to create actions that make otherwise complex steps really easy through these little actions that you can add. So we'll use a couple of those actions. We're gonna use a StackHawk action, a CodeQL action, and so forth. But it's a really neat system. It's event driven, generally based on pushes and PRs, but you can have arbitrary webhooks kick off events as well. And for each workflow that you create (they call them workflows), a single workflow will fire up a single runner. By default, that happens in the GitHub cloud. They actually have VMs that will pop up for you in their own cloud. And then within a workflow, which instantiates a runner, you can have multiple jobs, which can run in parallel or can trigger one another; they can be ordered. And within each job, you've got a number of steps that run in sequence. So we're just gonna do a single workflow, single job, and a couple of steps in our workflow. A simple example. Each step can either run a shell command (you can have Windows-based runners and Linux-based runners) or run an action from the GitHub Actions marketplace. And we'll do a little bit of both. There's a built-in secrets management solution in GitHub Actions, which is gonna come in handy because we're gonna use that to stash a secret API key.
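
As a rough sketch of how those pieces fit together, here is an illustrative workflow skeleton. This is not the workshop's file; the names, the script step, and the secret are placeholders.

```yaml
# Illustrative skeleton only -- not the workshop's file; names are placeholders
name: Example
on: [push]                      # event-driven: pushes, PRs, webhooks, schedules...

jobs:
  example-job:                  # a workflow holds one or more jobs
    runs-on: ubuntu-latest      # each job gets a hosted runner VM (Linux or Windows)
    steps:
      - uses: actions/checkout@v2            # a marketplace action
      - name: A plain shell step
        run: echo "hello from the runner"
      - name: A step that reads a secret
        run: echo "the secret is ${#MY_TOKEN} characters long"
        env:
          MY_TOKEN: ${{ secrets.MY_TOKEN }}  # built-in secrets management
```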

3. Setting Up GitHub Actions Workflow

Short description:

For your personal accounts, you get 2000 free minutes per month of build. Let's head over to the vuln-graphql-api repository and fork it. Create a new file under the .github/workflows directory called build-and-test.yml. This workflow will build the application in a Docker container. It will run on an Ubuntu image, version 20.04, with decent machine specifications. We'll clone the repository using a GitHub action.

And for your personal accounts, this is all free. You get 2000 free minutes per month of build and that's pretty cool. So like, if you don't use it for work, I recommend using it for your personal projects just because it's so cheap and effective.

All right, so let's jump into it. First step, let's head over to this repository. So this is a simple test application, again called vuln-graphql-api. We'll post a link to this. Cool, there we go. All right, so head on over to the link in Discord. Let's go ahead and hit Fork and then give us a thumbs up when you've forked it. So I'm just gonna head over to vuln-graphql-api here. Hit the Fork button up at the top right corner here. I'm gonna fork mine to my own personal organization. We got a bunch of pros in here, you guys are on top of this, this is awesome. All right, thumbs up for that myself. All right, so we've forked it over here. I'm just gonna follow my own agenda here. All right, looking good. We've got eight people caught up.

And now what I wanna do is add a new file. So up along the top here, there's this row with Go to file, Add file, and Code; hit Add file, then Create new file. Now, I know some people are gonna be horrified by the way we do this today. We're all used to working in IDEs and I am too, but this is just to make it so that everybody's working from the same page and to minimize workstation differences and variances that can cause people to get hung up sometimes. So we're gonna create a new file, and this file should be under the .github directory. The .github directory tends to have special GitHub files like templates for PRs and, in our case, workflows; there's a CODEOWNERS file; tons of GitHub proprietary stuff tends to go in there. So under .github, create a workflows directory, and this is where all of your GitHub Actions workflows are gonna go. We'll create our first GitHub Actions workflow called build-and-test.yml, on main. I always use .yml; I think you can use .yaml too. But anyway, I'm gonna refer back to my reference, it's handy here. So this is our first workflow, build and test. We are going to really just build the application in a Docker container. So we'll call this build and test. We will run this anytime we push, and under here if you wanted, you could add branches and stuff, something like: branches, main, et cetera. But just for simplicity, I'm going to say on push. So anytime we push code to this repository, we are going to have a single job called hawkscan, and we are going to give it the name Build and Test. And it's going to run on an Ubuntu image, version 20.04. It's actually going to fire up a virtual machine with seven gigs of RAM and I think like 16 gigs of disk space and two CPUs. So it's a decent machine for most of these kinds of jobs. There's also an option to run your own runners on your own premises if you like, if your workloads get bigger than that. Okay, so we're going to run a number of steps. The first one is going to be to clone our repository, and for this one, we're going to use a GitHub action, and this is one of GitHub's standard actions. They have a number of built-in actions, essentially, that they maintain.

4. Building and Testing the Application

Short description:

In this step, we clone the repository and build the application using a shell command. It's recommended to explore the test application on your own and run it locally. The build process may take some time, but it should speed up with subsequent builds. Running the application locally allows you to see the graphical interface and query it. Later on, we'll discuss an example of a vulnerability in the application.

They're high quality and they do a lot of the low-level stuff. This one just clones a repo, and by default, it takes care of figuring out which commit we're working on in this workflow. You don't have to specify the branch or the commit hash. It takes care of that for you by default. So: uses, actions/checkout.

Next step we are going to build the application and this one is actually gonna just run a shell command. That shell command is docker compose build. And later on after the workshop, if you want to continue with this and sort of explore this on your own, definitely worth taking a look at that test application, run it on your own workstation, see if you can get it running and scan it locally. But here we're just gonna run and scan in GitHub Actions.
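
Put together, the workflow just described looks roughly like this. This is reconstructed from the walkthrough; the workshop guidebook has the canonical version, so defer to that if they differ.

```yaml
# .github/workflows/build-and-test.yml
name: Build and Test
on: push                          # run on every push
jobs:
  hawkscan:
    name: Build and Test
    runs-on: ubuntu-20.04
    steps:
      - name: Clone repo
        uses: actions/checkout@v2   # checks out the right commit automatically
      - name: Build the app
        run: docker-compose build   # build the app image with Docker Compose
```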

So we've got our workflow file. The workflow is called build and test. It's got a single job, also called build and test. It clones the repo and builds the app. Let's commit this file. And once you do this, if you click on over to Actions, you should see it building. So Nick has posted the file configuration to the Discord chat. If you get this working and you click over to Actions and you see your workflow running, go ahead and let us know. Unable to... oh, I've got a typo in mine. So I can't give you a thumbs up quite yet for myself. I'm gonna pop back in here. Actions. That should do the trick. Swing back over to Actions here, the Actions tab at the top of your repository console. And mine's running. Let's see if it's working this time. Looks good. Clone the repo. Building the app.

So the build-the-app section here is gonna take a little while. It's using Docker to build the application and it has some stuff to do. As we go through, hopefully as we build more and more, this'll actually get a little faster, because hopefully you'll pick up a GitHub runner that's on a machine that has your cache. And so, if you do this frequently it tends to run a little bit faster the next time around. Let me see, I think I've got this running locally, yeah. So, I've got a copy of this thing running locally. If you are getting bored you can try to download this and run it locally, but it's definitely not necessary for what we're doing today. But I'll just show you what you get if you log into the thing. It's running at localhost:3000 and you just pull up a GraphiQL interface, and I've dropped a little query in there, not much to it. The only reason I have it running locally myself is because I want to show you a little bit later on an example of a vulnerability in the application.

QnA

Addressing Questions and Issues

Short description:

We're still building, but let's address some questions and issues. There's a typo in Antoine Sharife's workflow file. Benjamin points out an interesting observation. Gabrielle Cristea is experiencing issues and Claudia shares her repository URL for Nick to review. The build completes successfully, and there's a suggestion to clone the test application rather than the guidebook. Benjamin's help and the collaborative chat are appreciated. Shashank's workflow file has a typo that needs fixing.

Alright, so we're still building, this'll take a while. Got a question? I think Antoine Sharife, I think there's just a simple typo. Yep, a misspelling of Ubuntu. I don't know. Thanks, Benjamin. Interesting. Claudia Gabrielle Cristea has an issue where it has not worked. Since these are public repos, Claudia, can you share the URL to your repository? Maybe Nick can take a look at it. Thank you. Awesome, we're getting build completes. Okay, great. Oh, workshop GraphQL testing: I think you have cloned the workshop guidebook rather than the test application. Let's see, I think what you want to clone... I just posted the URL there for you. This is great. Thanks again, Benjamin. This is always great when people are helping each other out in the chat. And thanks for everybody's patience. We should have plenty of time to get through all of this, but I'll make sure we stay on track. Yeah, so Shashank, there's a typo in your workflow file: it should be uses instead of users. Sweet. All right. We all got to make a typo today. I've made my own. I'm supposed to be running this thing.

Using Dependabot for Dependency Security

Short description:

We'll use Dependabot, a free SCA for GitHub repositories, to check for vulnerabilities in our dependencies. Dependabot is enabled by default on public repositories but needs to be enabled manually for forks. It identifies libraries with vulnerabilities and provides guidance on how to fix them. It can even create a PR to fix the issues. Although it may have false positives, it's important to keep up with updates and have a reliable tool to remind us of vulnerabilities. Let's enable Dependabot alerts and explore the security tab to see the identified alerts and recommended fixes.

All right, so I want to so while folks catch up there and feel free to just go to the guidebook and copy paste the workflow configuration out of there if you're having trouble. I'm going to move on to the next step, just start describing what we're going to do.

So we're going to use Dependabot. Dependabot is a free SCA for GitHub repositories. There's a lot of other options out there for really great SCAs. You'll even find it embedded in a lot of the artifact repositories these days. I know JFrog and Nexus each have something like this.

It'll just check your dependencies right there in the artifact repository. Dependabot in GitHub is actually enabled by default on public repositories. But if you fork a repo, it's not enabled by default, even if it's public. It's also easy to add to private repositories, and it's free for both. It finds libraries with vulnerabilities: it'll take your dependencies file, whether that's a Gradle file, a Maven POM file, or a package file, search through it, and even try to blow out all of the sub-dependencies that are implied by the dependencies you've pulled in. And then it will go through and check each and every one of those to see if there are known vulnerabilities out there, and it'll let you know.

And Dependabot's super cool. There are other tools that do this too. When it finds an issue, it'll tell you exactly what to do to fix it. And if it can, it will actually create a PR for the fix. And in our case, that PR will kick off our workflow again, so it'll go through any tests that you have, and it will make sure that it builds okay, and then all you have to do to fix it is accept the PR. It does have some false positives in practice, even though everything that it finds really is a vulnerability. It can be a false positive because you might pull in a giant library that has a little bug off in one corner, and none of the code paths you actually use touch that bug. So it's possible, theoretically, that it would report on something that is a vulnerability but doesn't really affect you. Still, we all know it's best to keep up with these things on a cadence, and it's nice to have something just reminding you: hey, there are updates you should be looking at, and here's an easy way to apply them.

All right, I'm going to jump into fixing it. Hopefully, everybody's kind of caught up with us, and let's do it. So I'm going to go back to my code base. I'm in my fork of Vuln GraphQL API, and I'm going to just go to the security tab over here. And under the security tab, there's a number of cool things, and we're going to use two of them, but first, we're just going to do dependabot alerts.

So we want to enable Dependabot alerts so that we get notified when one of our dependencies has a vulnerability. Click on the Security tab, hit Enable Dependabot alerts, and now it's got all these options. We're going to do the first three. We're not going to do the last one yet; we'll do that one later. So first, Dependency Graph enables GitHub to check out your dependency file, whatever it is, a POM file or in our case a package.json file, process it, and go find all the sub-dependencies. We want that. It'll find more stuff. We want Dependabot alerts so that we'll get alerts anytime new vulnerabilities are found. That's an easy one. Then Dependabot security updates: easily upgrade to non-vulnerable dependencies. This is the feature where it will actually issue a PR to your repo, and that's pretty awesome. So we'll enable that. All right, and if you've enabled these three things (but don't worry about that last one yet), very shortly you should see a new badge next to your Security tab. It says, in my case, 11; I suspect everybody else has 11 as well. Click into there, and you'll see a badge that says 11 over by Dependabot alerts. Click into there, and you can see 11 different alerts. So this has figured out that we've got a bunch of NPM dependencies that we have specified, and they're problematic. So let's just click on one as an example, this tar dependency. It found that we've got version 4.4.13, and it recommends that we bump to 4.4.19 because there are five different vulnerabilities in it, and it shows us how we can make this change just by editing our yarn.lock file, and it gives us a bunch of details on what the problem is: arbitrary file creation, overwrite, and code execution. Pretty bad, you don't want that in your tar utility.
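
Everything above is switched on through the repository UI, which is all the workshop needs. As an aside, if you also want Dependabot to open routine version-update PRs on a schedule (not just security fixes), that behavior can be driven by a checked-in file. A minimal sketch, assuming an npm project with its manifest at the repo root:

```yaml
# .github/dependabot.yml -- optional; the alert/security-update toggles above don't require it
version: 2
updates:
  - package-ecosystem: "npm"   # dependencies live in package.json / yarn.lock
    directory: "/"             # location of the manifest
    schedule:
      interval: "weekly"       # how often to check for new versions
```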

Dependabot, Testing, and CodeQL

Short description:

Dependabot checks for vulnerabilities in the CVE database and suggests pull requests to update dependencies. A good CI/CD workflow with sufficient testing ensures the application will still work after updating library versions. CodeQL is a SAST scanning utility that compiles code into a database format and runs open source queries against it to identify vulnerabilities. It's free for research and open source projects. Private repositories require a security license.

So where is it looking to check for these vulnerabilities? I think it's the CVE, let me see. Yeah, it's checking the CVE database. It is CVE-2021-something, yeah, there you go. Let's go over here. So, all terrible stuff, and you've got plenty of details here. Anyway, it's a simple thing to fix, just update the thing and... oh, hey, look, there are a number of pull requests available right now. So if you head over to your Pull requests tab, hopefully everybody else is seeing this as well. If you look at pull requests, you've got a number of pull requests lined up. And one of them is for the tar utility that we were just looking at. If I click into this, it shows me the PR. Dependabot has created a nice little description of what it's trying to do, and it shows you the files that it changed. So it actually made a couple of changes to our yarn.lock file, hopefully one of them being the tar package. It changed the yarn.lock file to go from 4.4.13 to 4.4.19. And if we look back at the conversation, it's running some tests. So it's running our build and test workflow that we already created to make sure that making this change won't break the application. We'll take a look at the details of that, and that should take us back to GitHub Actions under the Actions tab, where it's showing us that it's building the application again. And as long as this succeeds, it should be safe to merge this PR into your code base. So that's Dependabot. How's everybody doing on this? Is everybody caught up and seeing the same thing that I'm seeing? Okay, keep going. So somebody asked, how do I know that my application will still work after I update the library version? That's a great question. The answer is, as long as you've created a good CI/CD workflow that goes through sufficient testing to catch any problems, you can have good confidence that it will work when those PRs come through. So let's go back and check these PRs again. Let's take a look at the tar PR: it has run through its tests, and it was able to successfully build the application again. So as long as we've done our tests well, the application should work just fine. Now, we haven't set up a whole lot of extensive testing, so we don't know that for sure, but presumably in your code base, you've got unit tests and some integration tests. You can merge PRs if you like, but it's not necessary. I'm not gonna merge mine just because we want to keep moving on, but it's a great thing to try maybe later on after the workshop. But if you can't resist and you've got to merge one of those suckers right now, go for it. Let's see: add some manual validations in case of a major bump. Yeah, and there are some vulnerabilities Dependabot finds for which it won't suggest a PR, and I think a major version upgrade is one of those cases where it probably would not try to create one. So, all right, unless anybody wants me to slow down (and definitely let us know), I'm gonna move on to the next thing. So the next thing we're gonna look at is CodeQL, which is a SAST scanning utility. So CodeQL, again, is a Static Application Security Testing scanner.
And what it does is it will actually take your code, and it only works on specific languages. Fortunately in our case, JavaScript is one of them, and the JavaScript engine also works on TypeScript, which part of this application is written in. So it takes your code, scans through it, and kind of compiles it into a database format so that it can run queries against it. And then, in the case of CodeQL, there's a bunch of open source queries that researchers have created. It'll run all of those queries against the database it has indexed from your application, and any code patterns that it finds that indicate a vulnerability, it will report on. And the set of queries for common vulnerabilities is growing every day. This product is free for research and open source. So as we do this on our public repository, it's gonna be free; it's free in any public repository. If you wanna use it for a private repository, there is a fee, and I think it's something on the order of 21 bucks per user for the security license. That's for the pro license that includes security. So with that, we'll jump right into it, but I wanna pause. I see there's a lot of chatter on Discord.

Commit Messages and Automatic Change Logs

Short description:

Benjamin asks if commit messages can be configured for automatic change logs using conventional commits. The speaker is unsure but suggests editing the PR message. The conversation continues with positive feedback for Nick.

So I wanna catch up with some of that. So you've got a bunch of PRs. Quick question about Dependabot: can commit messages be configured to make them fit our automatic change logs, if we use conventional commits for example? That's a great question, Benjamin, and I apologize, but I don't know the answer to that. You can probably edit the PR message, but I don't think that's what you're asking. I think you're asking about commit messages that kick off GitOps-style processes that will automatically do stuff. Which is a great question. Cool. Ah, yeah. Way to go, Nick. Nice.

Enabling CodeQL Analysis

Short description:

We're going to enable CodeQL on the Vuln GraphQL API repository. Set up the Code Scanning Alerts and choose CodeQL Analysis as the SAST scanning option. This will create a new workflow file in your repository. The workflow will run when you push or create a pull request to the main branch. It will also run once a week on a schedule. The workflow includes a job called analyze, which runs on the latest version of Ubuntu and performs a matrix of builds for each language found. The steps include checking out the repository, initializing CodeQL, building the application, and running the analysis. The workflow will run in parallel with the original build and test workflow. Check the progress in the Actions tab while waiting for the CodeQL analysis to complete.

All right, so CodeQL. Nick and Benjamin are having a great conversation here. Nice. And Dependabot tries to match the format of prior commit messages. That's pretty slick.

Okay. So the next thing I want to look at is CodeQL. We're gonna enable it on this repository. Again, I'm back in the vuln-graphql-api repository, my fork of it. So again, we're going to go to the Security tab, and this time we're going to hit Code Scanning Alerts. We're going to hit that button that says Set up Code Scanning. So just to make sure everybody's with me: you're in your code repo, head to the Security tab, Set up Code Scanning, and from here, there's actually a bunch of options for different types of SAST scanning, and it's worth taking a look at them because there are a lot of good ones out there. Snyk is actually a really good one. StackHawk, we've got our own application in there as well; it's actually for DAST, but it uses some of the static analysis tools to give you more information. There's Sysdig and Veracode (they're not that great), Semgrep, et cetera. We're gonna use the built-in one, CodeQL Analysis, the very top one. So just hit this button, Set up this workflow, and what this will do is set up a new workflow file in your repository. It's in the same directory as the one that we set up ourselves, the build and test workflow: .github/workflows/codeql-analysis.yml. And this guy works. You don't have to change a thing. But it's nice to know what it's doing, because for your code base, it may not work as automatically as this one does. Generally I've found it's been really good, but let me walk through what it's doing. So it sets up a workflow called CodeQL, and it's gonna run anytime you push to the main branch or open a pull request against the main branch. In addition, it's gonna run once a week automatically, whether you push to it or not; this is another one of those triggers, since GitHub Actions workflows can also run off a schedule. In it, there's a single job called analyze. I think it's just a single job, yep. That runs on the latest version of Ubuntu available, and it runs a matrix. So if you had multiple languages, it would hopefully detect that, and it would run a matrix of builds, basically running the same thing in parallel, one for each language that it found. In our case, it found JavaScript, so it's just gonna do a matrix of one. In this job, there is a set of steps. We're gonna check out the repository as before, initialize CodeQL with your language, try to automatically build your application, and finally run the analysis on it. Pretty simple. Commit that, and it should kick off a new workflow. It's gonna run this workflow as well as our original build and test workflow in parallel. So if you head over to Actions now, you should see under Workflows that in addition to Build and Test, you've now also got CodeQL, and both workflows are running currently in parallel. Hopefully the build and test one is gonna work just as it did before. Let's check in on the CodeQL one and see where we're at. Click into the matrix, and for mine, I've got Initialize CodeQL running. It'll take a little while for this to run, so while we wait, check over on Discord and let us know if you've gotten to the part where you have committed the CodeQL Analysis file to the repo. Awesome. You guys are going faster than I am. Nice.
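
The generated file looks roughly like the sketch below. This is an abridged reconstruction from the walkthrough; the exact template GitHub drops in may differ slightly and pins its own action versions and cron schedule.

```yaml
# .github/workflows/codeql-analysis.yml (abridged sketch of the generated template)
name: CodeQL
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '30 5 * * 0'          # also run once a week, on a schedule

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    strategy:
      matrix:
        language: [ 'javascript' ]               # one matrix entry per detected language
    steps:
      - uses: actions/checkout@v2
      - uses: github/codeql-action/init@v1       # set up CodeQL for the language
        with:
          languages: ${{ matrix.language }}
      - uses: github/codeql-action/autobuild@v1  # try to build the app automatically
      - uses: github/codeql-action/analyze@v1    # run the query packs and upload alerts
```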

CodeQL Alerts and Recommendations

Short description:

Folks are getting CodeQL alerts already. You can see that we've got a couple of high severity issues: insecure randomness and clear-text logging of sensitive information. It points to the exact file and line of the concerning code, and gives language-specific recommendations. Although DAST is generally better, having a cross-check from other scanners is always good. If anyone has questions about CodeQL or SAST, feel free to ask. We'll take a 5-minute break and come back to use StackHawk for Dynamic Application Security Testing (DAST).

Folks are getting CodeQL alerts already. I haven't gotten mine yet. But I'm coming to the end of my queries, I think. Let's see if I'm getting alerts already, even though it's not complete. Not quite yet. You guys got the fast runners? Okay. Now we're done. And yeah, so my Security tab had 11 alerts before, and now it's at 13, and I've got two code scanning alerts, just like it sounds like most of the other folks are getting. Click into there. You can see that we've got a couple of high severity issues: insecure randomness and clear-text logging of sensitive information. So click into these and it'll tell you, hey, you used a random value in a security context that is cryptographically insecure. You shouldn't do that. And they're probably right, but they found this in our database seeder that just seeds a bunch of random users. So this is part of our database seeding, just for testing basically. This is something that you could mark as a false positive, but it is a good point, a good pattern that it found. I don't think I would fix this for a test seeder, but if it was in your main code base, it's probably something worth looking at. Let's take a look at the other one. Clear-text logging of sensitive information. Again, this is in the seeder. As we are creating users, we wanna spit them out to a log so that you can see what users we've created. But the cool thing that you see here is that it points to the exact file and the exact line, and in fact the exact characters of the file that it's concerned with. It tells you a little bit more information about it and gives you some recommendations. Because it's language specific, it's giving you language-specific recommendations for your code. That's a pretty handy utility. On your open source or public repositories it's definitely worth it because it's free. And if you've got private repos, you have to consider whether 21 bucks a month per user is worth it, but this is not a bad value and it's a pretty nice utility. Although I think DAST is generally better, because it finds more and has fewer false positives, the value here is in having a cross-check for your other scanners; it's always good to have a variety of scans going on. So that's CodeQL. Does anybody have any questions about CodeQL or SAST in general at this point?

If not, I'm gonna say we should take a five to ten minute break and come back. If people need a bio break, I know I do. But I'm gonna give you a minute or two if you've got any questions. Okay. So we've got a question: what would be the recommended process to fix these issues, and can they be integrated into something like Jira? So for this one where we're logging, let's see, it should give you a specific recommendation. I haven't read through these; I just made these mods, I just actually added this tool to this workshop and haven't gone in depth on it, but basically it's probably gonna recommend that you mask the value or encrypt it somehow. Credentials should be encrypted, for instance, using the crypto module. So, yeah, they give you a pretty detailed explanation of how to address the issue here. All right, why don't we take a five minute break and come back here at 12:02 our time. So at the top of the hour, plus a couple minutes, let's take a quick break and come right back. Sound good everybody? Ten minute break, it's a deal, Nick. I'm with you. When we come back, we'll use StackHawk. So our next big foray, this is the final test that we're going to add: Dynamic Application Security Testing, DAST, and we're going to use StackHawk. So at StackHawk, we make a DAST scanner, HawkScan.

Automated Penetration Testing with StackHawk

Short description:

StackHawk is an automated penetration testing tool. It packages the OWASP ZAP utility as a Docker container and simplifies it with a YAML configuration file. It integrates with various CI/CD platforms and has an online platform for tracking scans. StackHawk also offers GraphQL scanning enhancements, supports sending notifications via Slack and MS Teams, and integrates with Jira. While it doesn't have direct integration with GitHub issues, it provides webhook support. To get started, create a StackHawk account, generate an API key, create an application, and configure HawkScan for your application.

And that's based on the open source OWASP ZAP. In fact, the founder of OWASP ZAP, Simon Bennetts, works at StackHawk, and he pretty much spends his entire day working on improvements to OWASP ZAP itself. He's been working on the automation framework, telemetry, and some really good improvements to OWASP ZAP. So it's definitely a tool worth looking into.

It's great for penetration testing. It's getting better for automation as well. But at StackHawk, what we've done is take the OWASP ZAP utility, package it as a Docker container, and add a simple YAML configuration so that it's really easy to automate. One of the issues with ZAP, and this is improving, is that it's got configuration files scattered around in various places. It's really designed to run as a desktop application first. We've tried to take away the desktop application aspects of it and just give it a simple YAML configuration file so that it's much easier to drop into the CI/CD pipeline. We've got integrations documented for lots of different CI/CD platforms. And we've got an online platform in the background for tracking your scans. So as you run scans with StackHawk, the results are getting piped back to the StackHawk platform so that you can collect the data there and analyze it in the future, but also so we can do other interesting things like send notifications to you via Slack or MS Teams, for instance. We can integrate with Jira for triage and collaboration on issues that the scanner finds. And we've got generic webhook support so that you can also send scan data to other platforms that you might be interested in, which we don't directly support but which can take webhooks. And we've got a lot of GraphQL scanning enhancements that we've made. I dare say that we've got the best GraphQL scanner out there among the DAST tools, because of the modifications we've made in spidering GraphQL: it finds enough to surface the issues that are there, but doesn't go so deep that you end up in an endless loop of GraphQL queries. And that can be a hard balance to strike.

Okay, just like Jira, does it have integration with GitHub issues? That's a great question. No, we don't have an integration with GitHub issues, but you may be able to pull that off with the webhook support. All right, so here's what we're going to do. Let me check the guidebook real quick here. We're on step four, Dynamic App Scanning. So we're going to create a StackHawk account, and as we set it up, it's going to go through a little Getting Started wizard. The first step of that is going to be to create an API key. I'm going to take that API key and stash it in GitHub Secrets so that we can use it in our pipeline later. In the StackHawk platform Getting Started workflow, we will also create our first application. So we're just going to create a name for the application, and it's going to return a big long UUID to uniquely identify the application we're attempting to scan today. And then finally, we're going to create a starter configuration file for HawkScan to scan your application. This startup routine is going to walk us through all of that. So let us begin: just head over to app.stackhawk.com. If you don't have an account already, that's perfect; we're going to sign up for a new account right here. You can use your GitHub credentials or your Google credentials, or if you like, you can just use an email address and set up your own password. That'll take a little bit longer because it has to verify your email address, so you'll have a little bit of email back and forth, but that's fine if you want to go that route. I'm just going to use Google and use my alter ego, my alias z at econger.com. Don't spam me, please. Now I'm logged in, and this is the welcome screen. I'm gonna change this so I don't lose track of which one I'm working on here.

Setting Up StackHawk for App Scanning

Short description:

Name the organization: ZConger's workshop org. Choose the free plan for the developer account. Scan your own app by selecting the stack and application type. Name your application and choose the development environment. Scan the localhost address on port 3000. Select the API type as GraphQL. Use the default path /graphql.

ZConger's workshop org. And then the rest is just some boilerplate information: my profile picture, if it's found from Google. There's no third-party account access; it's basically just saying, hey, here are credentials from Google. Hit continue. And on the Select Your Plan screen, I recommend that you choose the free plan, unless you really wanna trial this and potentially go for the Pro. But with the Developer free account, you can scan a single application all you want, in any number of environments. So that's what I'm gonna use to demo today for the workshop. Hit continue.

And then we wanna scan your stack. We're gonna scan our own application, so choose that option. If you wanted, you could come back another time and set up the Google Firing Range option instead; that just pulls in a bunch of sample data so you can look at the platform without having to scan your own app. But Scan Your Own App is indeed what we're gonna do. So pick your stack, scan by application, and hit continue. Give me a thumbs up on this URL here if you are at about where I am in this create-new-app workflow. Thank you.

All right. So I typically just name the application after the repo. I'm a repo-per-app kind of guy, not a monorepo guy: a repo for every app. So vuln-graphql-api is what I'd call it. You can call it whatever you'd like. The API key prompt comes first; I wonder why I didn't get an API key prompt first. So, if you've got the API key prompt, go ahead and snag a copy of your API key. Yeah, I wonder why it skipped it for me. I deleted this account before, so this might actually be a bug where it's remembering my old key for some reason. But you are seeing the normal flow out there if you're getting an API key prompt first. Go ahead and collect that API key and save it someplace, and we'll go stash it in GitHub Actions shortly. So, cool. And give us a thumbs up on Nick's comment there: save your API key. And Nick says, and this is true, if you don't save your API key here, it's okay, we can create another API key, which is what I'm gonna do. I'll show you where we can set up a brand new API key in a moment.

All right, so you've got your API key saved, and now you should be on this page, App Details. Give your application a name; vuln-graphql-api is what I call mine, same as the repo name. And then for the environment name, the idea here is you may want to scan this in different environments, depending on the stages, environments, and flows that your CI/CD process goes through. I'm just gonna use Development. We do recommend that you use StackHawk in a non-production environment. So development, pre-production, staging, QA, whatever you want to call it, but generally not production, because it does attempt to change data, which can be problematic for your application if it changes the wrong thing, obviously. But also, it can lead to inconsistent scan results if it's adding data and then has more data to scan each and every time. Anyway, we'll select the Development environment this time, and then we're gonna be scanning the localhost address on port 3000. So, http://localhost:3000, and that's HTTP, not HTTPS. Hit next, and then it asks what app type you have. In our case, we've got a GraphQL application, so for the application type we're gonna pick API, and then the API type is GraphQL. And then this default is correct: it's looking for a GraphQL introspection endpoint, so it can ask questions about what queries it can make. And in our case, we've got the default path, /graphql. And I'll hit Next here.

Setting Up StackHawk Configuration

Short description:

Set up a new stackhawk.yaml configuration file. Download and open the file. Copy and paste the content into a new file in your repository. Customize the configuration for different environments. Set the application ID and environment. Configure the GraphQL-specific details. Change the request method to post for better performance. Commit the changes.

Okay. From here, it has set up a brand new stackhawk.yaml configuration file, and we're gonna download that. I'm gonna break out to my entire screen so that you can see what I'm doing. So I'm gonna download this stackhawk.yaml and open mine in Atom. And since we're working out of the browser only, I'm gonna recommend that you copy and paste this into a file in your repository.

Does the scan change anything depending on the environment that the app is in? Say, is it more careful in production? It's a really good question. By default it doesn't really change the nature of the scan depending on the environment. But if you look deeper into StackHawk, you can make alterations to this file. You can inject environment variables into this file. So if you want it to behave differently in a production environment, you can do that. For instance, if you really wanted to run this in a production environment, one thing that you could do is say, hey, I only want to run queries here, I don't want to run any mutations that could change my data. And you could feed that in at build time.

So when you download your stackhawk.yaml file, you should see something like this. The only thing that should be different between yours and mine is this application ID, because that maps directly to your application. So I'm gonna take this file and copy and paste it into a new file in my repo. Head back to your repository. Let's see, where did mine go? Go to the base of your repository, vuln-graphql-api, in your organization, and create a new file at the base of the repository. Just call it stackhawk.yml, and we'll paste in all of that detail we just copied from the stackhawk.yml configuration file that the Getting Started workflow created for you. So in here we've got an application ID that maps to your application in the StackHawk platform. The environment, and you can change the name of this to anything that you want; every time you run a new scan in a different environment, it'll just create that automatically, lazily, in the background. We're gonna scan our application at http://localhost:3000. And then there's this section here that goes through the details of the GraphQL-specific configuration. By default, there's just a normal web spider: if all you did was add those first four lines to your configuration file, you'd have a working scan that would use a normal, Google-like web spider. It would look at the robots.txt at the base of your web app, find links, follow those links, and spider your application that way, and then it would check each and every one of those paths for vulnerabilities. But we're telling it, hey, I have more information for you than that. I've got a GraphQL configuration, and my schema path, or my introspection endpoint, is at /graphql in the application itself. I've enabled GraphQL. And I want you to run queries and mutations; in other words, I wanna run all operations against my database. It also says request method GET, and I recommend that you change this to POST (I'm gonna recommend that to our developers as well), because most GraphQL applications typically use POST; it's better for longer queries, and in the case of this application, it actually finds some slightly different things. We've also got auto policy set to true. What this does is there's lots of ZAP scan policy stuff in the background that you can tweak and tune, and we've basically done a bunch of automatic tweaking and tuning because this is a GraphQL application. And auto input vectors: if you left this blank, it would probably pick POST for most of its input vectors as it tries to make queries to GraphQL. So what you should end up with is exactly what you pasted in, except you change the request method from GET to POST. And let us know if you're with us this far. I'm gonna commit this change. Okay.
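
For reference, the committed file should look roughly like the sketch below. This is reconstructed from the walkthrough, so keep the exact key names the wizard generated for you; the applicationId here is a placeholder for your own UUID.

```yaml
# stackhawk.yml -- sketch reconstructed from the walkthrough; prefer the file the wizard generated
app:
  applicationId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   # placeholder; yours is the UUID from the platform
  env: Development                  # environment name, created lazily on first scan
  host: http://localhost:3000       # where the running app will be listening
  graphqlConf:
    enabled: true
    schemaPath: /graphql            # introspection endpoint
    operation: ALL                  # run queries and mutations
    requestMethod: POST             # changed from GET, as discussed above
  autoPolicy: true                  # GraphQL-tuned scan policy
  autoInputVectors: true
```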

Enabling StackHawk Scanning

Short description:

To enable StackHawk scanning, we need to make some changes in the build-and-test.yml workflow. We'll add a step to run the application using docker-compose and leave it running in the background. Then, we'll use the stackhawk/hawkscan-action to scan the app. However, we need to supply the API key, which is not yet available. Once the API key is added as a secret, the scanning process should work. For detailed instructions, refer to the workshop guidebook or the additional steps provided in Discord.

And again, that file is stackhawk.yml. And you should see that it's going to run CodeQL and build and test again, but we haven't told it yet that it needs to run StackHawk, or rather the scanner, which we call HawkScan, by the way. So I'll say HawkScan and StackHawk; they're kind of interchangeable. Anyway, we haven't told it to run HawkScan just yet, so it's actually not gonna produce results yet, but that's the next thing that we'll do.

So we're gonna go back into GitHub workflows. We're gonna find the build-and-test.yml file and edit it to add a step to run the application, so that it's available for scanning, and then a step to scan the application with HawkScan. All right, so I'm in the build-and-test.yml file. I'm gonna hit this pencil so I can edit it, and I'm gonna add a couple lines here. First: run the app. Run docker-compose up --detach. Our docker-compose file explains how to build this application in a Docker container, and it also explains how to run the application and listen on the correct port. So we're gonna do that to bring the application up, and then we'll use this --detach flag to leave it running in the background. That way we can move on to the next step in the job and leave the application running. The next step is gonna be to run HawkScan to scan the app. For that we use the uses keyword: stackhawk/hawkscan-action at v1.3.2, the current version. We're gonna use the API key that you entered, except that you haven't entered it. Oh dear. So I'm gonna enter this, and this job's gonna fail. But let's go ahead and enter these details anyway; I have a little bit of a flow problem because I didn't do my API key. So we're gonna reference the secrets stash: we'll put this in as HAWK_API_KEY, and I'll show you what that refers to in a minute. This should all work as soon as we get that secret in there. So I'm gonna go ahead and commit this file. There you have it. This whole thing is also detailed in the workshop guidebook, if you want to reference it, and Nick has posted the additional steps in Discord as well, so you can just copy and paste those in.
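
Concretely, the two new steps appended to build-and-test.yml look roughly like this (reconstructed from the walkthrough; the hawkscan-action version will move on over time, so pin whatever is current):

```yaml
      # ...appended after the existing "Clone repo" and "Build the app" steps
      - name: Run the app
        run: docker-compose up --detach        # start the app and leave it running in the background
      - name: Run HawkScan
        uses: stackhawk/hawkscan-action@v1.3.2 # the StackHawk scanner action
        with:
          apiKey: ${{ secrets.HAWK_API_KEY }}  # repository secret we'll add next
```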

Okay, so you probably have an action running now that's got this change, and it's probably gonna fail on the HawkScan step because we didn't add that secret. So let's head into the secrets stash. In your repository, go over to the Settings tab, and then over here on the left, go to Secrets. This is where we're gonna add our secret. Before you hit this button, go back to wherever you stashed your API key and copy it. It should start with hawk. followed by a bunch of letters and numbers, a dot, and a bunch more letters and numbers. If you don't have that saved somewhere, we can go create another one right now, which is what I'm gonna do. So I'm gonna head into my username menu, then Settings, and go to API Keys. And yeah, these are old API keys from previous organizations that I've set up — that's something that I'll alert our developers to. So I'm gonna create a new API key, call it workshop V2, and hit continue. Then I can copy that key — starts with hawk., a bunch of letters and numbers, a bunch more letters and numbers — and put it in my paste buffer.

Configuring API Key for StackHawk

Short description:

To configure the API key for StackHawk, go to the Actions secrets settings and create a new repository secret. Name it HAWK_API_KEY and paste in the key value. Make a minor change to a file to trigger another workflow run. Check the Actions tab for the previous run, which should have failed due to the missing API key. Now, go to the currently running job in the build-and-test workflow and monitor the build process. Once the app is built, the HawkScan step will be executed.

Go back to Actions secrets — again, this is under the Settings tab, then Secrets. I'll create a new repository secret, call it HAWK_API_KEY to match what we put in our workflow file, paste in the value of the key, and hit Add secret. All right. So now that should match what we added to the workflow — build-and-test.yml references HAWK_API_KEY. Now I'm gonna make a minor change just to tickle a file: I'll edit it and remove a blank line, or maybe add a line that says tickle, just to create a change that will kick off another workflow. So go ahead and do that, and it'll kick off another Actions run. Let's go over to Actions. Here's my previous run, and mine failed — yours probably did too if you hadn't already entered your secret API key. If I click into it, I'll just verify: it should have failed on the HawkScan step, and there's a console message here that says, error, the API key supplied isn't valid — basically because there was no API key supplied at all, so it couldn't authenticate to the StackHawk platform. So now we'll go back to our current job that's running. Under all workflows, go to Build and Test, find your most recent run that's in progress, click into it, and let's watch it build. For me, it's currently building the app, and that might take a little while — a couple minutes, maybe. Then it'll run the app, which worked last time, so I think the run-the-app step will work, and then we should get to the HawkScan step.

Workflow and Scan Progress

Short description:

Nick's workflow is running and a HawkScan scan has started. The scan has found high, medium, and low severity issues. The plugin summary shows the progress, and the findings tab provides a summary of the vulnerabilities. One of the findings is a mutation called super-secret-private mutation that allows remote command execution and exposes sensitive information.

What's up, guys? My name is Nick. If you've gotten to this point, you've got a new Actions workflow running, you've got your HawkScan API key stashed as a secret, and the scan has kicked off — check my last comment in Discord and give it a thumbs up so we know you're caught up. I also want to make sure we're getting the settings right, because I'm planning on running another one of these in August, so if you haven't given feedback yet, I hope you will — I'll make sure to read the notes, and we'd love to hear from you.

All right, and this will take a little while. While mine is building — occasionally the building-the-app step just takes way too long, and I'm not sure why — let's wait for it to run. You can let yours run, and if you have any errors, let us know and we can start troubleshooting them.

Mahindra, that's a good question. The scanner gets all of that information through the /graphql introspection endpoint, which exposes what data is available and what types of queries are available. By default, a lot of libraries — Express, for example — expose a lot of that information, so there's generally plenty for the scanner to go on. It's actually a good idea to curate the GraphQL endpoint and what metadata is available, so you're not giving people clues as to what is queryable in your database, but by default a lot of the libraries and modules that you might use to create a GraphQL endpoint provide a lot of detail, and that's what we're using here.

So if I head back to the Scans section, I can see my scan has started, which is cool — we can see the run-HawkScan step scanning the app. And since it shows up as having started in the Scans section of the StackHawk website, I can tell that the API key must have worked. If I click into it, it shows me details about where it's at so far. We've got a progress bar; it has been scanning for about a minute now, and so far it has found 2 high, 14 medium, and 39 low severity issues. I clicked over to the plugin summary tab — there are also findings and paths tabs. On the plugin summary, it's showing me, hey, here are some of the plugins I've got and what I've completed so far. There are lots of big general categories, and each plugin within them generally runs a pretty large set of tests.

Under findings, it gives me a summary of what it has found. We've found a SQL injection bug and remote OS command injection — these are both high severity issues, and they sound pretty bad, and they are. Cross-domain misconfiguration, CSP scanner wildcard directive — these start to get a little more esoteric, but they're still definitely worth looking into. And then a bunch of low severity issues that you should look into if you have time, but those are easier ones to kick down the road. Let's click into one of them. This is my favorite. What it did was find, through introspection, that there's a mutation called super-secret-private mutation — it was of course exposed through the introspection endpoint, because that's what a lot of the standard libraries do. It sent in this mutation with this variable: basically a command injection payload that runs cat /etc/passwd.

Issue Details and Jira Integration

Short description:

The tool provides detailed information about each issue found, including remediation steps and reference material. It also allows you to validate the issue by running the same query against your application. The page shows the specific path where the issue was found and allows you to mark it as a false positive, accept the risk, or assign it. The Jira integration provides powerful features for creating tickets with all the necessary details and a link back to the issue.

So that's the request that it made, and the response it got back was the /etc/passwd file. So cat /etc/passwd worked, and that's arbitrary command execution — a terrible, terrible bug — so it's rightly flagged as a high severity issue. What you can see from this is that you've got a lot of detail for each one of these issues it found, including some details on how to remediate the issue, a little bit more background about it, and then a full cheat sheet that you can link out to that tells you how to prevent this sort of thing.

This links out to a blog article for Python, and although the details there are specifically for Python, it's good general information on how to fix a remote OS command execution bug, so you're getting plenty of reference material to work from. You've also got the specific request that we made that indicated there could be a problem, and the response that we got back from the application that confirmed it. And there's a validate button here, which gives you a curl command that you can use to run the exact same query against your application, if you've got it running and reachable from your workstation.

Now, for you guys — unless you've gone ahead and done this — you probably haven't downloaded the application and run it locally. That's why I'm running it locally: so I can demonstrate that the validate button really works. So I hit validate, copy the curl command to the clipboard, and paste it into my terminal. We're doing a curl POST, sending in this mutation and variable, and when I hit return we get the same response — the contents of the /etc/passwd file — which is the evidence that we've got a remote OS command execution bug.

Finally on this page — and I'll invite questions — you can see that this finding is on a specific path called mutation. If you have normal API routes, that's what will show up here, so you might find this kind of issue on multiple routes in a normal API, and we would have a list of routes here. For each one of these paths it shows a status of new, which means we haven't seen this issue before and it hasn't been triaged in any way — we haven't marked it as a false positive, we haven't accepted the risk, and we haven't assigned it to anybody. But from here we can mark it as a false positive, accept the risk, or assign it out. I don't have the integration set up right now, but the Jira integration on this is really powerful. If this were hooked up to Jira, you'd be able to select which project in Jira you want to add the ticket to, and it would have all of this detail filled in to create a new ticket, including a link to these scan results, a summary of the scan results, the mutation, the variables that were sent in with it, the cheat sheet you can link out to for more details, and so forth. So if you play with it, the Jira integration is definitely worth trying.

Let me back up and show you another entry. I think cross-site scripting (reflected)... if I click into that... I forget some of these ones. So if I click into the cross-site scripting page — oh, there's a mutation as well, here we go — here's what you'd find with a normal API that's not GraphQL: all of the paths on which these issues were found. We've found it on the root path, as well as on a number of mutations and queries. Cookie without SameSite attribute is another one. And again, you could select all of these and say, hey, I want to assign all of these to a ticket or to another developer through Jira. Let me show you an example of what I mean by the Jira integration, just so you can get a better feel for it — I think I have that set up in my own account, so I'm going behind the scenes here a little bit. If I take this one and assign it out — let me zoom in so you can see it a little better — I can hit send to Jira, and here's the kind of detail you'll get: you can select the project that you want to send it to, and you get a nice summary, title, description, criticality, path, and then a link back to this page so that you can look at all the details, including the evidence that was found.

StackHawk Scan Results

Short description:

StackHawk provides detailed scan results with a link back to the scan results. As you run more scans, you'll see a summary and the ability to triage issues. By default, StackHawk won't break your build, but you can set a threshold to fail the pipeline if there are high severity issues. Triage those issues, and your builds will start working again. In the applications section, each application has cards for different environments, showing a bar graph of alerts. Resolving issues will decrease the alerts, and triaged issues are shown as a gray bar below the line.

And there it is. So you get some good detail here and a link back to the scan results. That, in a nutshell, is StackHawk results. Let me give you a little bit more of a tour of what else is in here. As you run more scans, you'll see more scan results filter in, with a little summary as you go. And as you start to triage things — this scan result initially had two high severity findings, and we just triaged one, so now it's got one new high severity finding and one triaged. That comes into play later: by default, StackHawk will never break your build, but if you want, you can set a threshold that says, hey, if I see any high severity issues, fail the scan so that the pipeline bails. If you triage those issues, they no longer count against you. So that's a pretty common pattern with StackHawk: you get a new high severity issue, you triage it, and then your builds start working again. Then over in Applications, we've got a set of cards — each application, like vuln-graphql-api, has one card for each environment. So far we've just got the development environment. It shows you a bar graph, and if we were to continue running scans, you'd see a bar for each one. As you resolve issues, hopefully your red, yellow, and green alerts go down with each scan, so you have a visual indicator of how you're doing on resolving your bugs. You'll also see triaged issues as a gray bar that descends below the line.
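For what it's worth, that build-breaking threshold is a stackhawk.yml setting. A minimal sketch is below — the hawk.failureThreshold key is how I remember it from the docs, so verify the exact name there before relying on it.

```yaml
# stackhawk.yml -- sketch: fail the scan (and therefore the pipeline step) when
# untriaged high-severity findings remain. Key name recalled from the docs; verify it.
hawk:
  failureThreshold: high
```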

Authentication and Integrations

Short description:

Is there any way to set up login authentication before scanning the app? StackHawk integrates with various CI/CD platforms, including GitHub Actions. It also offers notifications integrations with Slack, Datadog, and Microsoft Teams. Project management is available through Jira Cloud and Jira Data Center. You can invite users to join the platform and share results with each other. Try these tools on your own applications and see what you find. If a scan takes longer than the six-hour limit of GitHub Actions, there are options to mitigate long scans, such as breaking the scan into pieces.

I've got a question: is there any way to set up login authentication before scanning the app? Fantastic question — and Nick provided a link to the authenticated scanning docs. That's a very important issue, because most GraphQL applications, and most important APIs, typically have authenticated sections. And yes, we've got a really detailed set of guides for how to set up authentication so the scanner can get into those sections, and we've been working on additional features for that as well.
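To give a rough idea of the shape of that configuration — and this is only a sketch from memory of StackHawk's authenticated-scanning guides, so treat every key, path, and value here as an assumption and follow the linked docs for the real schema — a username/password login might look something like this in stackhawk.yml:

```yaml
# stackhawk.yml -- authenticated scanning sketch. All field names, routes, and cookie
# names here are assumptions for illustration; check StackHawk's authentication guides.
app:
  authentication:
    loggedInIndicator: "\\QLog Out\\E"   # regex the scanner uses to confirm it is logged in
    usernamePassword:
      type: JSON                         # login request body format (FORM is another common type)
      loginPath: /api/login              # hypothetical login route for this app
      usernameField: username
      passwordField: password
      scanUsername: ${SCAN_USERNAME}     # supplied via environment variables / CI secrets
      scanPassword: ${SCAN_PASSWORD}
    cookieAuthorization:
      cookieNames:
        - "session"                      # hypothetical session cookie to carry on requests
    testPath:
      path: /api/me                      # hypothetical authenticated route used to verify login
      success: ".*200.*"
```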

Let me head over to Integrations. Under Integrations, we've got a bunch of CI/CD integrations. These are really just links out to our documentation for each of these platforms, and really, we integrate with any platform — it's pretty straightforward, with just a Docker container and a YAML configuration file, so any build system that can support that can support StackHawk. We're also working on a CLI version of HawkScan that doesn't require Docker at all, so we're gonna remove that Docker requirement shortly. It's not out yet — I'm not sure if I'm supposed to be talking about it, but it's pretty awesome and I can't wait. We've also got notifications integrations with Slack, Datadog, and Microsoft Teams. The Datadog integration is all about sending scan results to Datadog, so they sit alongside your other logs and you've got an event record in the same place as everything else. There's project management through Jira Cloud as well as Jira Data Center, the on-prem version of Jira, and finally generic webhooks. You can click into any of these and set up a new integration — they're all pretty straightforward and simple. And finally, you can invite users. This is really designed to be a team platform, so you can invite all of your coworkers to join and share scan results, Jira tickets, and so forth with one another. And that is StackHawk.

I wanna kinda open the floor for questions at this point, because that's about all the material we had, and I'm really pleased with the timing — we're just about at the top of the hour. So if anybody has questions, I wanna make sure people can fit them in before they need to go on to the next workshop. One question that we get sometimes is: what's the next step, where would you go from here? And I would say, try these tools on your own applications. If you've got a home project, that's a great place to start, and if you've got a work project, all of these tools are really useful for that too. CodeQL, again, comes at a price if your code is in a private repository, but the others are free: GitHub Actions is free for 2,000 minutes a month, and StackHawk is free for one application. Give it a shot, see how it works on your applications, and see what you find. You can also go back through the PRs that Dependabot created and the issues that CodeQL found and see if you can resolve them.

I got a question here: if GitHub Actions has a limit of six hours per job, what happens if your scan takes longer than that? Any other options? It's a good question. I don't know that we've run into a lot of scans that take six hours, but we actually have a good blog post on how to mitigate long scans. Nick, could you see if you can find that? It's a pretty recent blog post, I think. Oh, quick on the draw — yeah, that's the one, it's basically about how to mitigate long scans. There are a number of options: you can break the scan into pieces, so GraphQL might be a single component, and you can limit the spidering.

Optimizing StackHawk and Authentication

Short description:

Tuning options in StackHawk allow limiting spidering and paths checked. Running components in parallel is recommended for large applications. Benjamin suggests separate scans for mutations and queries. Consider using self-hosted runners in AWS for longer time limits and fewer resource constraints. Authentication step gives deeper scanner access. Seed data for consistent scan results. StackHawk offers support and a Discord channel for questions and assistance. Try StackHawk and other tools like Snyk for easy-to-use SCA and SAST utilities.

There's lots of tuning that you can do with StackHawk to limit the spidering and limit the paths that you're checking. You can break it down into components and run a bunch of components in parallel. That's usually how I'd address it if you've got a really large application.
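As a flavor of that tuning, here's a sketch. The key names (app.excludePaths and the hawk spider/scan duration limits) are how I remember them from the StackHawk docs, so treat them as assumptions and check the docs before using them.

```yaml
# stackhawk.yml -- sketch of ways to keep a single scan short.
# Key names recalled from the docs; verify them before relying on this.
app:
  excludePaths:
    - "/admin.*"              # hypothetical: regex for paths to skip entirely
hawk:
  spider:
    maxDurationMinutes: 5     # cap how long the spider phase runs
  scan:
    maxDurationMinutes: 60    # cap how long the active scan phase runs
```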

Here we go. Okay. Great suggestion, Benjamin: try one scan for mutations and another scan for queries. It's a really good idea. Another option depends on the pipelines that you use, too. Not everybody's using GitHub Actions, and if you go deep on GitHub Actions, I've seen a lot of people end up going with self-hosted runners. You can run the runners in AWS, for instance, so you don't have to run them on-prem — just run them out in AWS or wherever — and there you don't have the same time limits or resource constraints that you have on GitHub-hosted runners.
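One way to act on Benjamin's suggestion — under the same assumptions about the graphqlConf fields as the earlier sketch — is to keep two nearly identical config files that differ only in the operation value and run them as separate pipeline jobs:

```yaml
# stackhawk-queries.yml (sketch) -- scan GraphQL queries only; a second file with
# operation: MUTATION would cover the other half in a parallel job.
app:
  applicationId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # placeholder
  env: Development
  host: http://localhost:3000
  graphqlConf:
    enabled: true
    schemaPath: /graphql
    operation: QUERY
    requestMethod: POST
```

You'd then point each job's HawkScan step at the corresponding file — the action has an input for overriding the configuration file name, so check its README for the exact parameter.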

So, yeah, as you try it on your application, the next big thing to do to really make it shine is that authentication step — that typically gives the scanner much deeper access. It's also really good, when you're doing DAST scans, to seed the data and always start from the same consistent set of data every time you run the scan, especially if you're running mutations, because mutations change the data, and that will change the nature of the scan results. I'm gonna stick around for a little while and continue answering questions as they come.

I hope everybody got something out of this, and I hope you learned some things that you can take back to your own applications, whether they're for work or for your home projects. And we are always around: if you try the free version of StackHawk, you can send us emails at support@stackhawk.com and we will reply. We really pride ourselves on good customer support, even if you're just using the free plan. Oh yeah, and as Nick is mentioning, we also have a Discord channel — you can ask us questions there, so come on over. Oh, that's cool — Magna Logan, you use StackHawk in your DevSecOps training. Where did you first learn about StackHawk? Is this a course that uses StackHawk? That's awesome, super cool. Well, thank you all so much. I'm so glad that everybody was able to follow along — it looks like we had really consistent results. We really appreciate you being here and taking the time. Keep an eye out for StackHawk, give it a shot, and try those other tools too. Also check out Snyk — I want to put in a word for them, because they've really got great SCA and SAST utilities as well, definitely worth a look, and both really easy to use. Thanks a lot, everybody. Take care. I'm going to sign off now, but if you have any additional questions, come join our StackHawk Discord and we'll answer them there.
