JS Security Testing in GitHub Actions


This workshop will focus on automating software composition analysis, static application security testing, and dynamic application security testing using GitHub Actions. After a brief introduction covering the different types of application security testing and the importance of finding security vulnerabilities before they hit production, we'll dive into a hands-on session where attendees will add three different security testing tools to their build pipelines.

101 min
04 Jul, 2022



AI Generated Video Summary

Today's workshop focuses on automating build and security tests for a Node.js application using tools like StackHawk, Dependabot, CodeQL, and GitHub Actions. GitHub Actions is a powerful CI platform with a marketplace of Actions and built-in secrets management. CodeQL is a SAST utility that scans code for vulnerabilities, while StackHawk is a dynamic application security testing (DAST) tool. The workshop covers enabling code security and analysis, configuring StackHawk, and running scans locally. Overall, the workshop provides practical guidance for integrating security into software development pipelines.

1. Introduction to Workshop and Agenda

Short description:

My name is Zachary Conger, a Senior DevOps Engineer at StackHawk. Today, we'll automate the build and security tests for a Node.js application. We'll use various tools like StackHawk, Dependabot, CodeQL, and GitHub Actions to ensure application security.

My name is Zachary Conger. I am a Senior DevOps Engineer here at StackHawk, and I've been a developer, automator, tester, observer, and operator of IT systems for many years. I'm also a musician, cyclist, and photographer, and I love doing these workshops. I love seeing people's reactions to the tools and how easy it can be to add security to your pipelines.

The bird we are repping today is StackHawk. StackHawk helps developers find, triage, and fix application security bugs before deploying to production. It is one of the tools that we'll be using today to automate security in your pipeline, but certainly not the only tool. In fact, we're going to be starting with a couple of other ones.

Our agenda today: we're going to automate the build of a Node.js application, and we're going to automate security tests for that application as well. All you need is a web browser and Discord. We're going to take a sample application and fork it into your own GitHub account. We're going to use GitHub Actions to automatically build that application. Then we're going to add Dependabot to scan the app's dependencies for known vulnerabilities. Then we'll add CodeQL to scan the code base and look for vulnerable patterns in it. And finally, we're going to add StackHawk to dynamically scan the running application for vulnerabilities, all within the build pipeline using GitHub Actions.

2. Introduction to GitHub Actions

Short description:

GitHub Actions is a powerful continuous integration platform built into GitHub. It's easy to get started: just add a GitHub Actions configuration file. It uses YAML as its configuration language and has a marketplace of Actions. It's event-driven and has a built-in secrets management platform. GitHub Actions provides 2,000 free minutes per month. Let's begin with GitHub Actions by forking the vulnerable node express repo to your own account.

ZACHARY LOTOS. We got a question in the Discord, which I'm thrilled about: should we trust the repo locally in VS Code, or should I just use GitHub.com? DREW REINHARDT. Yes, you can trust this application. It's okay to bring it down. You should fork it to your own repo. Yes, you can also use it in GitHub Codespaces. We're doing all of this work in the browser only to make it easy for attendees to follow along and to minimize any unpredictable outcomes. I encourage you to follow along in the browser, but if you want to bring it down to your workstation or to GitHub Codespaces, that's fine too. If you're wondering what Zach is talking about, we will go over all of that in just a second. So no worries if you're not as familiar with GitHub; we'll be covering all that. Somebody has jumped ahead in the workshop guidebook. An overachiever. All right, so getting started.

The first thing that we're going to work on is GitHub Actions. GitHub Actions is a powerful continuous integration platform that's built right into GitHub, so it's super handy to use and super easy to get started with. If you have a repo in GitHub, you can light this thing up just by adding a GitHub Actions configuration file. GitHub will read that file and start building from it, if it finds it, as long as you've got GitHub Actions enabled in your repository and for your organization, which it is by default. So by default, this just works if you add a configuration file. Some organizations will turn it off, and sometimes if you fork a repository that already has GitHub Actions workflows defined, GitHub will disable Actions just so you don't mistakenly run builds. It's a powerful CI system built into GitHub, it uses YAML as its configuration language, and it's got a huge marketplace of something called Actions, the basis of the name. Actions are like Jenkins plugins: little pieces of functionality that are made really accessible, so you can add functions for various things with just a couple of lines of YAML. Everything that we do today, I think, has an action associated with it; CodeQL does, StackHawk does. It's an event-driven platform, so it's driven off of events like pushing code to your repo or opening a PR, or you can send other kinds of webhooks to kick off events. You can also have one workflow kick off another workflow, so it's really flexible, and you can build complicated pipelines if you want to, but it's very easy to get started with as well.
There's also a built-in secrets management platform in GitHub Actions, so if you have any secrets that you need (and we'll have at least one example of a secret that we'll need to inject into our pipeline), you can stash them in GitHub's secrets store so you don't have to put those secrets into your code base and into your GitHub repo, which is generally a no-no. You don't want to put secrets in your Git repos. It's also really accessible in that they provide 2,000 free minutes per month. Last time I checked, which was a while ago, they may have changed that, but that's a lot of build time to work on your projects. I use it for personal projects all the time. Very handy platform, easy to get going, easy to start. Let's go ahead and begin with GitHub Actions. If you have the guidebook open, we're going to start with step one, continuous integration workflows and GitHub Actions, and I will provide a link to that very spot. You can read from there what we're going to do. So the first step is to fork this application repo, the vulnerable node express repo. This is just a simple test app that we often use to test various security tools against. What we're going to do is just fork it to your own account. So hit the Fork button up at the top right. It should prompt you with a good default repository name within your own organization. Give it a description if you like; it's just a vulnerable node express application. Create that fork.
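As a quick illustration of that secrets store, a workflow step can pull a stored secret into an environment variable like this (a sketch; the secret name and scanner command here are made up for illustration):

```yaml
steps:
  - name: Run a tool that needs an API key
    run: some-security-scanner scan   # hypothetical command
    env:
      # Injected at runtime from the repo's Settings > Secrets,
      # so the key never lands in the code base or Git history
      SCANNER_API_KEY: ${{ secrets.SCANNER_API_KEY }}
```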

3. Using the Discord Server

Short description:

In a few moments, you should have it forked over to your account. Join us in the JS security testing channel on the Discord server. React with a thumbs up if you have forked the repo. Make sure to go to the rules channel, read and agree to the rules, and give a thumbs up. Then come to the security testing channel and give us a thumbs up on the fork-the-repo step. Let us know if you have any trouble forking the repo.

And then in a few moments you should have it forked over to your account. Awesome. And Zach, I'm going to go ahead and jump in here for a little bit of insight on how we're going to be using the Discord server. If you are in the Discord server, make sure you join us in that JS security testing channel, and I will be posting steps as we go. So if you're in there right now, you can see I just posted "fork the repo." If you have gone ahead and done that, please react with a thumbs up so we know people are ready to move on. We do a lot of these and know people move at all different paces, so we use that thumbs up as a barometer to know that people are with us and following along, and that Zach and I haven't been saying craziness that no one understood. So make sure you go into that Discord, go to the rules channel first, and like that message. Give it a thumbs up, and that should show you the other channels. Someone just pinged us with that; if that solved your problem, let us know. But actually, Zach, do you mind if I grab the screen share so I can show this one last time? Of course. Just since this is going to be the piece that holds the whole workshop together. So you're going to go into this rules channel, you're going to read those rules and agree to them, you're going to like it with a thumbs up, and then you're going to come into this security testing channel and give us a thumbs up on that fork-the-repo step so we know that you are with us. I will post that link one last time in the Zoom chat so people can follow along with us if they would like. But once you've forked that repo, please go ahead and react so we know folks are ready to move on. Awesome. All right. That's the last time I'll give that spiel, I promise. Okay. We've got nine joiners. If you're having any trouble forking the repo, please let us know in the Discord channel, and I can circle back and help you out.
I will go ahead and move on.

4. Adding GitHub Actions Workflow

Short description:

Just out of interest, what are those JetBrains icons in GitHub? JetBrains has some plugins that you can add to GitHub. We're going to add a GitHub Actions workflow. A workflow in GitHub Actions is a YAML configuration file; each YAML file is a workflow. You can have any number of jobs and steps. We'll have a single job with steps to build and test this application. We'll create a file at .github/workflows/build-and-test.yml and copy the configuration into it.

Just out of interest, what are those JetBrains icons in GitHub? When you fire up IntelliJ, there's a setting for it; I can't remember where it is. I found it deep within their settings labyrinth. They have some plugins that you can add to GitHub. I don't really use them very often, though. I usually just grab this, but I think the idea is you can click these and clone the repo in IntelliJ IDEA. I probably should use that; I'd save a lot of time.

All right. We've got this cloned down. Now what we're going to do is add a GitHub Actions workflow. A workflow in GitHub Actions is some amount of YAML configuration; each YAML file in a GitHub Actions setup is a workflow. You define one workflow per YAML file. I guess you could probably cram a second document into the same file with a little --- separator, but generally one file is one workflow. Within a workflow, you can have any number of jobs, and those jobs can have any number of steps. We're just going to have a single job with a couple of steps to build and test this application. I'm going to copy it directly from the guidebook in my case. So we're going to create the file .github/workflows/build-and-test.yml and copy this configuration file into it. I've just copied the whole file, and I'll walk through it after I've copied it into place. In your web browser, you should have this Add file, Create new file option. And of course, this is in the vulnerable node express repo, the one that you've forked. Sometimes people try to do this in the workshop guidebook repo, and that won't work, of course.

5. Creating GitHub Actions Workflow

Short description:

We'll create a GitHub directory called '.github' and a subdirectory called 'workflows' to store the GitHub Actions workflows. The workflow is triggered by a push to the main branch or a pull request to any branch. You can add other triggers like API requests or Cron jobs. The job section defines the sequence of tasks. We'll run the 'build and test' job on an Ubuntu 20.04 instance. GitHub Actions provides a virtual machine with sufficient resources. The steps include checking out the code, installing Node.js version 14, setting up caching, and running NPM install to install dependencies. Finally, we'll run unit tests and commit the file through the web interface.

Okay. So we'll create .github, and the .github directory is where a lot of GitHub control files end up going. Then there's going to be a subdirectory of that called workflows, and that directory is where GitHub always looks for GitHub Actions workflows. We'll call this one build-and-test. You can call it whatever you want, of course, but that's what I've chosen. And then I'll just paste in this whole configuration.

So the major parts of this file are the name of the workflow, which we're calling build and test, and the on section, which defines the triggers or events that will kick this workflow off. So if I make a push to the main branch, which is the default branch here, it will kick this workflow off; or if I make a pull request to any branch, it will also kick this thing off. You can add other kinds of triggers in here, such as: if somebody makes an API request to GitHub referencing this workflow, kick it off; or if some other workflow successfully completed or failed, kick this off. Or you can set up a cron entry here and say, just run this thing every week at 5 p.m. on a Tuesday.
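Those trigger options map onto the on: block roughly like this (a sketch; the workflow_dispatch and schedule entries are illustrative extras, not part of the workshop's file):

```yaml
on:
  push:
    branches: [main]      # pushes to the default branch
  pull_request:           # pull requests to any branch
  workflow_dispatch:      # manual or API-triggered runs
  schedule:
    - cron: '0 17 * * 2'  # every Tuesday at 5 p.m. UTC
```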

Then there's the jobs section. So we've got one job defined. And if you want to define multiple jobs, you can. By default, the jobs all run in parallel, but you can force sequencing by having one job depend on another. We just have one job today and we want to run all this stuff sequentially. Give it a name, build and test. And we say, hey, we want this thing to run on a Ubuntu 20.04 instance. So in the background, GitHub Actions, when this workflow runs, is going to spin up a virtual machine running Ubuntu 20.04 and that virtual machine is going to have, I can't remember like 30 gigs of disk and 7 GB of RAM. Pretty capable machine. Generally does the trick for most build functions. There's also an option to host your own runner if that's not enough hardware for you, and you can choose whatever kind of runner or virtual machine or physical machine you'd like.

Then we get into the steps. We're going to check out the code using a built-in, or official, GitHub action. Any action that starts with the keyword actions is an official GitHub action, and it's coming from a GitHub organization called actions. I'm going to take a quick diversion here and just show you that. This organization, actions, has a bunch of repos, and most of these repos are, in fact, actions. And just to talk about actions a little bit more (actions being the plugins), you can go to the GitHub Actions Marketplace, and from there you can search for various actions. There are tons of them; people contribute them all the time, and they're pretty easy to write. In fact, Node.js is the preferred language for writing these actions. So if you're interested, this is a fun place to check out other people's code as well. Back to the new file. We're going to check out the code, and that will check out the exact Git hash of the thing that you just pushed, the thing that just triggered this workflow. Then we're going to install Node.js, version 14. This is another official GitHub action that is all about setting up Node.js; you know how difficult it can be to get just the right version of Node and just the right version of npm and all that stuff. We'll get version 14, and we'll set up caching; GitHub Actions has a caching function that caches things like your node_modules. Then we'll run npm install to install dependencies. You could do npm ci if you want; this is just a little bit faster with caching. Then finally, we'll run unit tests. With all of that, just go ahead and commit this file through the web interface.
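Putting those pieces together, the build-and-test workflow looks roughly like this (a sketch reconstructed from the walkthrough; the exact file in the workshop guidebook may differ in details like action versions):

```yaml
# .github/workflows/build-and-test.yml
name: Build and Test

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    name: Build and Test
    runs-on: ubuntu-20.04
    steps:
      # Check out the exact commit that triggered the workflow
      - name: Check out code
        uses: actions/checkout@v2

      # Install Node.js 14 with npm caching enabled
      - name: Install Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '14'
          cache: 'npm'

      - name: Install dependencies
        run: npm install

      - name: Run unit tests
        run: npm test
```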

6. GitHub Actions and Security Testing

Short description:

Once you've set up your workflows, you can check the progress and details of each step from the Actions tab in your repository. The first action is to check out the code, followed by installing Node.js and dependencies. Unit tests are then run, and finally, some cleanup is performed. Now, let's move on to adding security tests to the workflow. We'll focus on software composition analysis (SCA), which examines the dependencies in your code base, such as the package.json and package-lock.json files, and cross-references them with a catalog of known open source libraries and dependencies.

When you do that, if I did that right, you should start to see your workflow kick off. Hey, there it goes. From the Actions tab, you can see all workflows. We've got one workflow called Build and Test, and in that workflow we've got a single run going right now. If I click into it, you can watch as it sets up the job, sets up a virtual machine, checks out the code, installs Node.js, and now it's installing dependencies, which it will do for a while. This is probably a good place to checkpoint again and make sure everybody has gotten to this point. You've created a new build-and-test workflow configuration under .github/workflows, and did you get an action running? If you have, go ahead and give us a thumbs up on that YAML snippet so we know you're able to follow along. Zach, I'm thinking of some of the issues we've seen before. I know if you miss the period in .github, it does not set up a GitHub Actions workflow. If you're not working in the right repo, you'll have problems. So if you're getting error messages, or you click into Actions and nothing is there, please let us know. Drop the question in the Discord or the Zoom chat. If we don't get this part set up correctly, none of the rest of the workshop will work for you. So we always like to take a lot of time here and make sure everyone was able to be successful with that piece of code and establish their GitHub Actions pipeline. Again, if you are having problems, please, please, please tell us so we can help you and you can follow along for the rest of the workshop. Awesome. It looks like a lot of people are getting this. And again, if you go over to the Actions tab in your repository, you should see your build and test workflow running. Maybe by now it's completed; mine looks like it's completed. And so you can look at all of the details of each one of these steps.
From the very top, the first thing you see is a little rundown of what it's about to do and what it's about to kick off. So we've got an Ubuntu 20.04 virtual machine, a virtual environment provisioner, whatever that means, token permissions, and so forth; that's just setting up the runner. Then that first action that we defined in our workflow is to check out the code. You can see it's syncing the repository and pulling it down so we can start working on it. Then we install Node.js, which just runs through that GitHub action with pretty terse output. Then we install our dependencies, and we get a familiar flood of messages about installing those dependencies. We run unit tests. Then it does some cleanup after all of that; some of these actions have a little bit of cleanup activity that they do at the end of a run. That's great. Looks like everybody has followed along, and hopefully everybody's action has also worked successfully. If you had any problems with that, if you've gotten any errors, mention it in the chat and we will try to address that. All right. That's GitHub Actions. Now we're going to start adding some security tests to your workflow. We're going to look at three different kinds of security testing, and we're going to add three different tests to the workflow. The first is called SCA, or software composition analysis. This general form of security testing is exemplified by Dependabot, which we're going to use today. Snyk (S-N-Y-K) also has a software composition analysis product that's really nice to use, and there's an open source project out there that does a similar thing. Software composition analysis operates on your code base, but what it basically does is take a look at the dependencies that you pull in. So, in the case of JavaScript, we're looking at the package.json and the package-lock.json file.
And then it cross-references that with a catalog of open source libraries and dependencies that it knows about.

7. Introduction to SAST and DAST

Short description:

Static Application Security Testing (SAST) analyzes your code for vulnerabilities and reports suspected vulnerabilities with file and line details. It finds your bugs, not bugs in your dependencies. SAST suffers from high false positives, but it's easy to automate. Dynamic Application Security Testing (DAST) operates on running code by probing your application and analyzing the responses to detect vulnerabilities. It provides evidence and details of suspected vulnerabilities. DAST testing has low false positives and is slower than SAST and SCA. We'll start with SCA and address questions about CodeQL.

And it reports on any known vulnerabilities in specific library versions. Because of this, it really doesn't have false positives in one sense, in that anything that it flags is a true vulnerability. It's a truly vulnerable library that you should update in your code base. You should not be pulling those in. But in a way, it kind of does have false positives in that it doesn't know to what extent you're using a particular library. So you may not actually express the vulnerability that is present in that library. So just something to be aware of, but it's so easy to use these tools that it's generally a good idea to stay on top of the recommendations that they make for upgrading your dependencies.

A huge advantage of this kind of testing is that it's really fast and easy to implement. The next form of testing we're going to do is called SAST, S-A-S-T, which stands for Static Application Security Testing. And this is exemplified by CodeQL, which we're going to use today. So it's built into GitHub, but also by Snyk Code, which is a really great new product by Snyk that we mentioned partially because we integrate with it in a really interesting way that I'll describe later. But it's a very good tool, easy to automate. SonarQube is also a very popular option out there for SAST.

The way SAST works is that it operates on your static code, but instead of looking at your dependencies, it's actually looking at the code that you wrote. It analyzes your code for patterns, code smells, that would indicate you might have a vulnerability. It reports on those suspected vulnerabilities, and this is the big advantage: it can point to exactly the file, and the line within that file, that has the suspected vulnerability. It finds your bugs, not the bugs of the people who wrote your dependencies; it's not even going to look at your dependencies. It does suffer from high false positives. The reason for that is it can find patterns that are not good patterns and that could lead to vulnerabilities, and you should probably fix those patterns and take the advice from your SAST utility. But the vulnerabilities it finds may not be expressed in the runtime application when you actually bring it up and make it available to users, just because, for various reasons, you may not have exposed the code paths that your SAST utility found. It's a little bit slower than SCA, but it's generally also very easy to automate in pipelines.
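For reference, a minimal CodeQL workflow for a JavaScript project looks something like this (a sketch based on GitHub's starter workflow; the GitHub UI can generate an equivalent file for you, and exact action versions may differ):

```yaml
# .github/workflows/codeql.yml
name: CodeQL

on:
  push:
    branches: [main]
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # needed to upload results to code scanning
    steps:
      - uses: actions/checkout@v2

      # Set up the CodeQL tooling for the languages to analyze
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: javascript

      # Run the query suites and upload the findings
      - name: Perform CodeQL analysis
        uses: github/codeql-action/analyze@v2
```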

And then finally, we're gonna be looking at DAST, which stands for Dynamic Application Security Testing. This is exemplified by StackHawk, the company that Rebecca and I work for, but also by OWASP ZAP, which is a really popular open source DAST utility, popular with penetration testers, for instance. It's also the project that StackHawk is based on: we have taken OWASP ZAP and wrapped it in a bunch of extra tooling to make it easier to automate. Burp Suite, from PortSwigger, you may also have heard of. It's a very popular pen testing tool, not quite as easy to automate as StackHawk or OWASP ZAP, and really one of the older utilities in the field, still pretty popular with security professionals doing manual penetration testing. DAST operates on running code. It does not look at your code base at all, and it doesn't look at your dependencies. Instead, you need to bring your application up, get it running and listening on a web port, generally via HTTP or HTTPS. What a DAST utility will do then is start to probe that application. It will actually make malicious requests to your web application, your API service, whatever it may be, look at the results that it gets, and analyze those results to see if it was able to successfully exploit a vulnerability. If it gets that evidence and it matches the pattern, it will report on that suspected vulnerability and give you the input and output details that led to that conclusion. It also finds your bugs, and a huge advantage of DAST testing is that it does not have high false positives. There are some false positives, but generally if it finds something, it's got the receipts: it can show you the exact evidence that it was able to exploit a vulnerability and present that back to you so that you can test it yourself and verify.
It tends to be a little bit slower than SAST and SCA, but that's highly dependent on the depth of your code and, in some cases, what language it's written in. Node applications it's gonna scan pretty quickly; Ruby applications, maybe a little less quickly. And of course it depends on the size of the application: if it's a giant old monolith, it could take a while to scan, and if it's a brand new shiny microservice, it's gonna be a lot quicker. So that's the overview, and we'll start with SCA. I see that there's a little bit of chat going on, and I wanted to catch up on that. Yeah, so the first question is actually a troubleshooting question: someone can't add the file to their forked repo. Maybe you could help us troubleshoot there, and then we can get to the questions about CodeQL following that. Let's see. So in your forked repository, are you sure you're in the forked repo and not the original repository? Maybe you can show, yeah, exactly, how to know if you're there.

8. Introduction to Security Testing Tools

Short description:

So this is the one that I have forked. We'll move on. Thank you for flagging it. But how does DAST really work? It's sort of like a classic penetration test where you're trying to hack a running application, a running web service, a running API. Another way to think about it is we're trying to input malicious data into fields and see what your app sends back. Looks like Socket is a new SCA tool. And there was a question about CodeQL, which we'll be using today. Let's do it. The first one we'll look at is Dependabot, which is SCA, or software composition analysis. It's built right into GitHub.

So this is the one that I have forked. You should see your name here for the organization, and then see that it has been forked from the upstream vulnerable node express repo. It's easy to accidentally be working in the wrong tab; I've done that many times, unfortunately. Let's see if that maybe works. It's not working. Okay, well maybe it will still... It still cannot add. Huh, I'm not sure why that would be. Is it in your own personal organization? Maybe if you're working in your company's GitHub, the organization settings won't allow it the way your personal repo would. Interesting. And so when you say Add file, Create new file. Something like this, all right. All right, we'll move on. Thank you for flagging it. Sorry we weren't able to fix that one in real time. Sometimes when we bring along another tech person, it can be handy to have them engage on the side. But we did get a couple of questions about different types of security testing tools, Zach. So the most recent one, and there's a little bit of chatter, so I'm trying to sort through them: how does DAST really work? Does it look at the stack and data? Does it look at the opcodes? Why are the false positives lower than SAST? Okay. So the way it works, it's sort of like a classic penetration test where you're trying to hack a running application, a running web service, a running API. You're actually sending in malicious requests. We'll get to that when we start talking about StackHawk, and I can show you in more detail. But it sends in malicious requests and then looks at the response for evidence that the attack was successful. So for example, if you wanna do a cross-site scripting reflection attack, a common way to do that would be: hey, I found a search field on this application; I'm gonna enter a script tag into that search field and hit return.
And if I get that script tag played back to me and it executes a script action, then I know that I have successfully exploited that vulnerability. That is how a DAST scanner works, if that makes sense. Yeah. Another way to think about it is we're trying to input malicious data into fields and see what your app sends back, and if your app sends back things that it shouldn't, then we'd know we'd be able to get access to data that we shouldn't have, we being the DAST tool. Hopefully that helps clarify it. I know there's some chatter about other SAST and SCA tools. It looks like Socket is a new SCA tool; I have not heard of it, but I will certainly check it out after this workshop. And there was a question about CodeQL, which we'll be using today. So, Zach, what do you say we get going with these different types of security testing tools? Let's do it. All right.

Okay. And so, the first one we'll look at is Dependabot, which is SCA, or software composition analysis. It's built right into GitHub. It's a free service for all GitHub repositories. By default, it is enabled on public repositories, and if you have a private repository, it is also free and easy to enable. It also turns out that for forked repositories it isn't turned on by default, which is handy for this workshop: it means we can go turn it on so you can see how to do it. Again, it finds libraries with known vulnerabilities, and in the case of Dependabot, it automatically issues PRs to fix the vulnerabilities that it finds. It does have some false positives in practice, but it's super easy to keep up on things when Dependabot is opening PRs for you and offering to fix the problem.

9. Enabling Code Security and Analysis

Short description:

To enable code security and analysis in your forked repository, go to the settings section and turn on the dependency graph, Dependabot Alerts, and Dependabot Security Updates. You can also enable Version Updates for non-security-related updates. We'll cover code scanning later.

So let us begin with that. And what we're gonna do is in your forked repository, just jump over to the settings section. And under settings, look for code security and analysis. And in here, there's a number of options that you can turn on. We're gonna turn on these first three, and then we'll come back to the others later. The first thing we wanna enable is this dependency graph to understand your dependencies. This allows GitHub to go into your package.json file and your package-lock.json file, run through those dependencies, find the sub-dependencies, and reconcile all of that stuff so it can know exactly which dependencies you're pulling in. And we're gonna enable Dependabot Alerts. So this is the thing that will create PRs for you. Actually, no, this is the one that will alert you and let you know that it's found vulnerabilities. And then this next one, Dependabot Security Updates, will allow Dependabot to open pull requests and offer to resolve those problems automatically. This other one, Version Updates: you can enable this if you want Dependabot to open PRs even when it hasn't found a security problem, but it knows that there's an update to your library. Might be handy; you might want to turn that on to keep you sort of ahead of the game. And then finally, don't worry about this code scanning thing, we're gonna work on that one later.
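For reference, the Version Updates feature is driven by a .github/dependabot.yml file that GitHub creates when you enable it. A minimal sketch for an npm project might look like this (the weekly interval is just an example value):

```yaml
# .github/dependabot.yml -- minimal sketch for an npm project
version: 2
updates:
  - package-ecosystem: "npm"   # watch package.json / package-lock.json
    directory: "/"             # where the manifest lives
    schedule:
      interval: "weekly"       # how often to check for new versions
```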

10. Enabling Dependabot and Introduction to CodeQL

Short description:

If you have enabled Dependency Graph, Dependabot Alerts, and Dependabot Security Updates, your Dependabot is already in action. You can check the security tab in your repository to see the flagged issues. To enable Dependabot, go to the settings section of your repository and enable the first three features: dependency graph, Dependabot alerts, and Dependabot security updates. Dependabot is user-friendly and automatically generates pull requests to address security issues. If you encounter any problems, let us know in the Discord channel. In the security tab of your forked repo, you will see a number of Dependabot alerts. Some issues can be resolved automatically, while others require manual intervention. Each vulnerability found by Dependabot includes a summary and a description of the problem. Dependabot opens pull requests to fix these issues, which can be reviewed and merged. Because Dependabot opens these as ordinary pull requests, the GitHub Actions workflow is triggered, ensuring that the application can still be built and tested. Next, we will explore CodeQL, a static application security testing (SAST) utility that scans code for vulnerabilities. CodeQL has an open source library of tests and is free for research and open source projects. It can be used on public projects for free, but there is an additional cost for private repositories.

So if you have enabled Dependency Graph, Dependabot Alerts and Dependabot Security Updates, your SCA, your Dependabot is already in action and it's probably already found some stuff. And you can tell that by looking up at the security tab in your repository. In my case, I've already got 17 flagged issues. So click into the security tab and you can learn more about that. But first, let's see if everybody has gotten this far with me. So again, in the repo that you forked, I just went over to settings and then over on the left-hand side I looked for code security and analysis, and I enabled the first three features: dependency graph, Dependabot alerts and Dependabot security updates. I'll come back and review. I was just gonna say, if you missed how Zach got there originally, you're gonna go to settings and then you're gonna go to code security and analysis on the left hand side and again, enable those first three options. If you are running into any problems in this, certainly let us know. I love Dependabot because it's super user-friendly, easy to enable, and I love that it automatically generates these pull requests for you. So very light lift security tooling that's easy to get going, so I wanna make sure anyone that wants to is successful with that. So let us know if you are successful with a quick thumbs up in the Discord, and if you have problems, please let us know so we can help you troubleshoot those as well. Back to you, Zach. Thanks Rebecca. Alright then, now if you go to your security tab in your forked repo, you should see a number of alerts here. And if you look on the left-hand side, you'll see Dependabot alerts. We've got 17 of them. And there we go. Turns out this application has a ton of insecure dependencies, or dependencies with known vulnerabilities. And there we go, that's tar. So it's also started to open some pull requests to address some of these issues. Some of these issues it won't be able to resolve automatically, and others it will.
I'm gonna pick on the first one that it seems like it can resolve automatically, based on this little icon over here indicating that it has opened a pull request. So we've got a prototype pollution in lodash. Lodash is a dependency that I've pulled in to this application. And it's got a review security update, which is really bringing me over to this pull request that it has opened for us. But for each of these vulnerabilities that it's found, it also gives you a nice little summary. So, hey, the package I found is lodash. It's got a vulnerability, the affected versions are anything prior to 4.17.20, and the patched version is this one, which I suspect is the thing that it is offering to update us to. And then it gives a little description of the problem. And some of these descriptions are more detailed than others, but they all tend to be pretty handy and enough to give you an idea of what the problem is. Then if I go to pull requests, I can see that there are a number of these PRs opened by Dependabot offering to fix these issues. Here's the lodash one. It's actually going to bring us up to 4.17.21, and a cool thing about this is that since it has opened a PR, the PR kicks off our GitHub Actions workflow. And so this thing has already indicated, it's already tested to make sure that we can build and test our application. So, as long as our testing is adequate, it should be safe to merge this pull request. And that'll just go through and fix the application for us. Update that dependency, and that's one less thing to worry about. So that's Dependabot. And again, just a quick overview. You just go over to settings in your repository, find code security and analysis, enable these first three Dependabot related features, and it's up and running. That's Dependabot. And then the next one that we're going to look at is CodeQL, which again is a SAST utility. Other examples are Snyk Code and SonarQube, very popular options.
Again, SAST utilities in general scan your code. They look for code patterns that may cause vulnerabilities. In the case of CodeQL, they've got an open source library of tests that they run against code, and they're constantly getting new queries submitted to them from the open source community. It's free for research and for open source projects. You can light it up for free on your public projects, such as this repository that we just created, the fork of the vuln_node_express application, but it does cost something extra if you want to use it on private repositories.

11. Enabling CodeQL and Running the Workflow

Short description:

You need the security license from GitHub. In your forked application repo, go to the security tab and click on the 'code scanning alerts' button. Configure CodeQL alerts by clicking the big green button. This sets up another GitHub Actions workflow called CodeQL analysis. It runs on every push to the main branch or pull request, as well as on a schedule. The workflow checks out the code, initializes the CodeQL tool, auto builds the application, and runs an analysis. Let us know if you were able to enable CodeQL and if the workflow is running. Deploy tools like this to protect your data and ensure code quality.

So you need the security license from GitHub. All right, so let's just jump into it. For this one, in your forked application repo for vuln_node_express, go back to the security tab and look for this last button here, code scanning alerts. Click on that. And then click on this big green button that says configure CodeQL alerts. I think it's interesting, they recently changed this interface. It used to come to this page and it would feature CodeQL upfront. But I think it's interesting to note that under code scanning, there are a bunch of other code scanning tools out there. For instance, I think Snyk is down here. Yeah, I think you might. It's on the next page. Anyway, there are a bunch of different SAST utilities that are available from this page. But the default one is the GitHub-supplied one called CodeQL. Just notable. So let's click that big green button. And what it does is it just sets up another GitHub Actions workflow. So, as you can see, it's prompting us to add this new file called codeql-analysis under the workflows directory under the .github directory. And what this workflow does is it runs every time we try and do a push to the main branch or a pull request to the main branch. And it also runs on a schedule. So this is a cron job definition for when to run this thing. It runs at 22 UTC on a Wednesday. So it just picks some random time. It has a single job. It will run this job that it calls analyze. It sets some permissions for itself, and this is really a good best practice for Actions workflows, to set these permissions so that your workflow can't do damage to your own repository. So then it has this strategy; it's got different strategies for how it might fail. Fail-fast would just fail the whole run on the first failure it hits. Then this matrix statement: for every language that it finds in your codebase, it will set up one of these matrix entries.
And so let's say you had three different languages that it covers; it would run three of these jobs in parallel, one against each of those languages. But in our case, it just found one, JavaScript. It's a NodeJS application. And then it goes through the steps. First step, check out the code; next step, use the CodeQL action to initialize the CodeQL tool. And it's gonna try and auto build your application. So it's gonna look at your repo and see if it can figure out how to build it. In our case, all it needs to do is transpile some TypeScript. And then finally, it runs the analysis. So it runs a query against your code. And if I check this in, as you should, then it should start running. So I've checked that in, it's living alongside my build and test workflow. If I go over to actions, now you should see that both of those workflows are available, and in workflow runs, you'll see both of those workflows got kicked off in parallel. So one is our original build and test workflow, and the other is this new CodeQL analysis workflow. And now is a good time to let us know if that worked for you. If you were able to enable CodeQL, and if you were able to commit that new workflow file, are you seeing your action run? And if not, let us know what errors you're seeing. I know last time we did this workshop, someone clicked the CodeQL button, but it didn't pop the YAML, so that was an interesting one. I'm trying to think of other things we've seen with CodeQL, Zach, I'm trying to think through other errors we've seen that we can troubleshoot. Usually this one works out pretty well, and I think the reason is because by this time folks are pretty familiar with what the workflow is, and thankfully that workflow file is automatically set up for you. In some cases, with your code base, you might need to make a couple of tweaks to that workflow file that it created, but not in our case.
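Putting that walkthrough together, the generated workflow looks roughly like this. This is an abbreviated sketch; your generated file will have its own cron time, pinned action versions, and extra comments.

```yaml
# Sketch of .github/workflows/codeql-analysis.yml (abbreviated)
name: "CodeQL"
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '0 22 * * 3'            # an arbitrary weekly slot (Wednesday, 22:00 UTC)
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write        # lets the job upload alerts to the Security tab
    strategy:
      fail-fast: false
      matrix:
        language: [ 'javascript' ]  # one entry per language CodeQL detected
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
      - uses: github/codeql-action/autobuild@v2   # figures out how to build the app
      - uses: github/codeql-action/analyze@v2     # runs the queries, uploads results
```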
It's great, it's as easy as that. Deploy tools like this to make sure that you're keeping that data protected and that you're protecting yourself from those vulnerabilities, and, you know, this makes it just another aspect of code quality. What did I miss, Zach? No, I think that's really important to keep in mind, that it's so easy to set this stuff up that, you know, why not do this for all of your projects, including your home projects? Don't let the hackers attack your greenhouse.

12. CodeQL Analysis and SQL Injection

Short description:

In GitHub Actions, you should see the Perform CodeQL Analysis step. It shows what tests are performed and the issues found. If CodeQL runs correctly, it will produce results in the security tab. One high severity issue called Database Query Built from User-Controlled Sources is found in the file search.js on line six. The input is unconditioned and can lead to SQL injection attacks, potentially causing significant damage.

It's just not right. It's so easy to prevent. Protect those plants, I'm obviously biased, but, yeah. And let me just show you really quickly, in your GitHub Actions, in the action run, when you click into it, you should see this step, Perform CodeQL Analysis. It's kind of interesting, if you have some time later, to take a look at what it does as it goes through here; usually it's got a little bit more detail than this. Let me re-render this page. You can see what kinds of tests it's doing, what kinds of issues it's finding. But then, in the end, if this has all run correctly, it should produce results to the security tab. So let's go check on the security tab. And you should see, if your CodeQL has run and completed, you should have at least one code scanning alert. And I'll talk through that. So if you're over here, security, code scanning alerts, you click into there, and you'll see we found one high severity issue. And it's called Database Query Built from User-Controlled Sources. We'll click into that. And this is the great thing about a SAST tool in general, is that it can show you exactly where the problem lies. So it's in this file, search.js, on line six. We've got a line that says, hey, I want to make a DB call. I'm just constructing this SQL query, and I'm pulling in some search text for that. And CodeQL was able to determine that that input is unsanitized input from a different spot. Let's see. Usually it shows you. Yeah, here we go. User-provided value. So it shows you where this value is getting pulled in, and it can see that you've done nothing other than to drop that into a query, which is unsafe. You should really be checking that input to make sure you don't have any escape text in here. Cause of course the problem is that this is a great way to do SQL injection attacks. So you could drop some escape characters in there and write in a whole other SQL query.
And depending on your permissions, people could do a lot of damage from here. They could run commands on the machine. They could drop tables. They could drop the whole database potentially. And that's definitely something to be avoided. This is a case where SAST has found a true problem. We'll verify a little bit later that this is actually also expressed in the runtime. This is a real problem.
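A quick sketch of why that flagged pattern is dangerous. The table name and query shape here are invented for illustration, not copied from search.js:

```javascript
// Mirrors the flagged pattern: user input concatenated straight into SQL.
function buildQueryUnsafe(searchText) {
  return `SELECT * FROM items WHERE name = '${searchText}'`;
}

// A normal search is harmless:
console.log(buildQueryUnsafe('plants'));

// But escape characters let an attacker end the string literal and
// append their own statement:
const malicious = "x'; DROP TABLE items; --";
console.log(buildQueryUnsafe(malicious));
// The resulting text now carries a second statement: DROP TABLE items

// The usual fix is a parameterized query, where the driver handles quoting,
// e.g. with node-postgres:
//   client.query('SELECT * FROM items WHERE name = $1', [searchText])
```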

13. SAST and False Positives

Short description:

SAST looks for code patterns where user inputs are not checked and sanitized. It may produce false positives if sanitization is done by libraries or if unused functions are flagged. The runtime application may not be affected.

Is that, could you talk a little bit more about that? Like the real problem? That was kind of a second part of a question we didn't get to, but why does SAST have more false positives than DAST? So SAST is looking for code patterns like this, where it sees a user input, for instance, and it doesn't see any evidence that you've done anything to check that input and sanitize it. But there might be, for instance, some library that you've got that has done the sanitization for you, which the SAST utility is unaware of. And so it doesn't realize that you actually did sanitize it. Or you might have this function where you are pulling in this unsanitized user text and dropping it into a SQL query, but if you never use this function and make it available to end users, then SAST is still gonna flag it. But in fact, it may not be expressed in the runtime application, if that makes sense.
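As a sketch of that false-positive scenario, here the input IS sanitized, but by a helper the scanner may not model. The escaping function is naive and hypothetical, used for illustration only; real code should prefer parameterized queries:

```javascript
// Naive single-quote escaping, standing in for a sanitization library
// that a SAST tool might not recognize.
const escapeForSql = (s) => s.replace(/'/g, "''");

function buildQuery(searchText) {
  const safe = escapeForSql(searchText); // the step the scanner may miss
  return `SELECT * FROM items WHERE name = '${safe}'`;
}

console.log(buildQuery("x'; DROP TABLE items; --"));
// The single quote is doubled, so the injected text stays inside the
// string literal instead of terminating it, yet a pattern-matching
// SAST tool may still flag the concatenation on the return line.
```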

14. CodeQL Settings and Performance

Short description:

I added settings to CodeQL and it's taking a long time to finish. Occasionally, the virtual machines used for CodeQL can be slow. Disabling the query feature for now can improve performance.

I added these settings to CodeQL and it's taking a long time to finish. So I'm reading a user question here. Oh, interesting. Okay. So the queries, is that an additional function that you found for CodeQL? Let me see what mine looks like. Seven minutes, 35 seconds. Let me see how long mine took. Now, one thing I've noticed is occasionally you can get a slow runner. So they're provisioning these virtual machines on the fly, and I think it's probably possible that you could end up on a relatively slow runner. I mean, it's not as bad as AWS, but it's the cloud, what are you gonna do? So you have, let's see here. So you've got, with languages and queries. So you've found an additional, okay, so you lit up this feature. Which is great. I would probably recommend that you disable that query feature for now. That's probably the thing that's taking a little bit longer; definitely play with that after the workshop. But since we're gonna be running a couple more actions workflow runs, that might just slow you down in the meantime. But kudos to you for jumping ahead, that's awesome.

15. Introduction to Stackhawk and Account Setup

Short description:

StackHawk is a modern dynamic application security testing tool based on the open source project OWASP ZAP. It provides a standalone scanner that can be run anywhere and a GitHub Action for easy integration with GitHub Actions. StackHawk offers a simple YAML configuration and integration with various CICD platforms, Slack, MS Teams, Jira, and webhooks. It also has enhancements for scanning GraphQL applications. To get started, sign up for a free StackHawk developer account at app.stackhawk.com, set up an API key, and configure your first application. The standalone scanner allows for fast scans by running it close to your application, avoiding latency issues. If you encounter any issues during sign-up, reach out for assistance.

All right, so that was SAST. Hopefully everybody's had a cool glass of water and we're ready to jump into StackHawk. So Rebecca and I are from StackHawk; a little bit about StackHawk. It's a modern dynamic application security testing tool. That's DAST. And the scanner technology in StackHawk is based on the open source project, OWASP ZAP. OWASP ZAP has been around for like 10, 11 years now. It's headed up by Simon Bennett. And Simon actually has joined StackHawk. He's a distinguished engineer for us. He spends now 100% of his time working on OWASP ZAP. And he's always looking for help. If you're interested in contributing, there's tons of work to be done to make OWASP ZAP even better than it is today.

So HawkScan itself is, again, we take OWASP ZAP, we wrap it in a Docker container, and also a new CLI that you can download to your computer for easily running scans. It's configured with a simple YAML configuration, which is kind of a unique thing among DAST utilities. Most DAST utilities have a more GUI-oriented configuration, and when you try and break those down into things that you can stick into a code repository, that tends to be a lot of different files. So we've got a simple YAML config. We pride ourselves on being easy to integrate into CICD pipelines. We've got integration guides for tons of different CICD platforms. We've also got an online platform for tracking your scans, and we integrate with Slack and MS Teams for team notifications. We've got an integration with Jira so that you can automatically create tickets from issues that are found in the utility. And we've got webhook support so you can send your scan data to other utilities if you like. And many GraphQL scanning enhancements. So if you've got a GraphQL application, those can be difficult to scan with DAST utilities. We've really done a lot of work to make that better with StackHawk.

So let's jump into it. I'm gonna refer back to the workbook here, step four. So the first step is gonna be to sign up for a StackHawk developer account and you can click the link there, but the secret is that that's just a link to app.stackhawk.com, app.stackhawk.com. And I'll post a link to that. So if you click through to that, you should get a prompt to log in, and if you're new here, as we all are, you can say sign up for a new account, click on that, and what we're going to set up is a free account, so don't worry, you're not going to spend any money on this. I usually sign up with Google, my Google credentials, you can also use GitHub credentials, or set up your own email username and set up your own password, local to the stackhawk platform. I don't recommend that, you can if you want to, it just takes a little bit longer, you've got to go through the email verification step, so I'm going to use Google, and I'm going to pick my alter ego to set up a brand new account. So, once you set up your new account, this is the page you will be met with, a little welcome page, from here you can change the name of your organization, call it Zachary Congers Workshop Organization, gives you a little information about the credentials provider, and third-party access, you can hit continue. Awesome, Zach, maybe we just wait here, in case anyone did choose the email, email sign up method, I know that can take a little bit longer, so we'll just wait here a second, so no one misses walking through the modal, since there's a couple steps there, give us a thumbs up, if you were able to successfully sign up for an account, so we don't move forward without you, otherwise, I mean, hopefully those invites, the internet is cooperating those, on their way to your inbox, if you're using an email sign-up. That's right. 
Yeah, so just head on over to app.stackhawk.com, log in with your choice of credentials providers (Google, GitHub or your own email), and then just click through that first welcome page. You can change your organization name if you want to, and then once everybody gets here, spoiler, we're going to select scan my application, and continue from there. And then the first step is going to be, we're gonna set up an API key so that the scanner can talk back to the platform, and then we're gonna set up our first application in the platform. And let's just talk a little bit about how this all works. So StackHawk has a standalone scanner that you can run anywhere, and we've also got a GitHub Action associated with it, and the GitHub Action makes it particularly easy to run it in GitHub Actions. But that standalone scanner, you can also run it on your workstation, you can run it in your data center. And the idea is that we wanna get that scanner as close as possible to your running application, and that's the key to getting good, fast scans with a DAST scanner. Some solutions out there will run a scan from the cloud, and so they're reaching across the internet to your application, and the problem with that is that generally your test environments are not available on the public internet; they probably should not be. But even if they are, that just adds a lot of latency and it can slow DAST scans down. So that's why we've got a standalone scanner that you can run anywhere. All right, we've got six joiners. If anybody's having any trouble getting signed up, let us know in the chat and we can swing back and help you out. If you've gotten this far, click through that first welcome page where you can change the name of your organization. Then when you get here, we're gonna select this scan my application button and hit continue. And then the first thing that we are prompted to do is to generate an API key.
So the API key is something that we're gonna plug into the scanner so that it can communicate back to the platform and send all of the scan results back for further analysis and use. You can follow these instructions to grab a copy of the key and put it in a usable form in a .hawk.rc file on your local machine.

16. Setting Up API Key and Repository Secret

Short description:

We have online help available. Copy the API key and go to your repository. Under Settings, find Secrets, then Actions. Create a new repository secret called 'Hawk_API_key' and paste the API key value. Give us a thumbs up once you've done this. We can assist if you encounter any issues. We'll run through the process again after the first scan.

Hey, and we've got online help. You can go ahead and clear this. We have it live, we don't need the chat help. We have something even better. Thanks, Shawn. Shawn's an actual person. That picture is of an actual human who will actually help you. Anyway, so yeah, you can copy this to your local machine if you want to; it's also easy to create new API keys in the future, so don't sweat it if you didn't do that. But importantly, before you move on from here, you should copy this, cause we want to stash this in the GitHub Actions secret store. So copy the API key (you can hit this little Copy button here to grab it), and then go back to your repository with the forked vuln node express application. We're gonna add this secret to this repository. So go under Settings, and then under Security, find the Secrets button, then find the Actions button. So we're in Settings, Secrets, Actions. And in here, it's just a secret store. So we can say, hey, I want to create a new repository secret with that Secret button. That was too fast. So there's this button here to add a new repository secret; click that. We're gonna call it Hawk, oops, Hawk underscore API underscore key, and we will be referring to that variable name in our workflow later. So it's just important that the name matches in the two places. In my examples, I'll use this Hawk API key name, and then just drop the value of your new API key in there and hit add secret. So again, from the app getting started wizard, I copied the API key and went over to Actions secrets. So in your repo under settings, Actions secrets, we add the secret. We will be calling that variable with that Hawk underscore API underscore key name here in a moment. I'm sure Zach mentioned this, but just really encourage folks to use that same API key name to limit errors as we go on to set this up in GitHub Actions.
Give us a thumbs up if you've done that, if you have tried and it didn't work, let us know and we can help you troubleshoot. But we will need to have that value stored in GitHub in order to successfully execute that scan. So give us a thumbs up once you were there. Cool. And somewhere down the line, sometimes people forget, it's cool. I'll probably run through this again one more time after we run our first scan just to make, like if anybody has missed it, you just let us know. We can run through it again. Cause it's easy to recreate or create new API keys as well.

17. Entering App Details

Short description:

Next, we'll enter the app details, including the app name, environment name, and host to be scanned. The app name can be chosen freely, while the environment name is set to development. The host should be entered as 'http://localhost:3000'.

All right. All righty. Moving on. Next step. So if you've got your API key, stashed it locally, and stashed it in GitHub Actions secrets, let's move on to the next step. And we are going to enter our app details. So what we're gonna do here is, we're gonna set up the initial parameters for the application, and it's going to generate a new configuration file for us. So I always just name the app after the repo that hosts the app. In this case, that's vuln underscore node underscore express. You can call it whatever you like though, and the environment name I'm gonna give it is development. You've got these three choices. These are really arbitrary. They're kind of like tags that we're applying that help us sort your scan results in the platform. And I'll show you a little bit more of that later. So I'll call it development, and then finally, the host that we're gonna be scanning is http colon slash slash localhost colon 3000. Just a reminder, that is not HTTPS, that is just HTTP.

18. Configuring StackHawk.yaml File

Short description:

When configuring the StackHawk.yaml file, ensure that there is no trailing slash in the host name and that HTTP is used. Specify 'dynamic web application' as the application type and 'other' as the API type. Download the configuration file and create a StackHawk.yaml file in the base of your repository. Ensure the spelling of 'StackHawk' is correct. Commit the file to your repo.

Oh my gosh, I left a trailing slash in my Discord message. Can you edit? Yeah. Discord always adds the trailing slash. It's so irritating. Do not add the trailing slash. Okay, there, if I formatted it as code. That's worse. So on that host, make sure you're using HTTP. And then just make sure there's no trailing slash on that host name or it won't let you advance.

Someone beat us to a thumbs up, thank you for that. If you're clear on that and you have HTTP configured, give us a thumbs up. Otherwise, we will just give everybody a minute here. The thing to really get right is that host name. We can be a little bit flexible on application name and environment name.

All right, Zach, looks like we've got folks following along. I think we can go on. All right, cool. All right, next step. Application type. Now, we support all kinds of application types. We've got tools to help get better scans for OpenAPI, SOAP, or GraphQL applications, so that's why we ask you to specify here. In this case, it's really simple: dynamic web application. We don't have an OpenAPI spec, so just pick that first option, dynamic web application. And then the API type is just other, because we don't have any special hints for the scanner. It's just gonna use a traditional web spider to crawl the site and look for endpoints to scan. So dynamic web app and other, and hit next, and now it's going to give us an initial configuration file. So what we need here really is just this; there are some other things that you can do from here, but let's not worry about that for now. We can come back to that later, and come back to it on your own time. We just need this configuration file; we're gonna have GitHub Actions run this for us. So hit that download button. I'm gonna find it in Finder and open it with Atom so that it doesn't open in Xcode, which I don't understand why it always wants to open in Xcode, but I hate that utility for opening YAML files anyway. And so this is what you should see once you've downloaded it. You've got a simple StackHawk configuration. Really the only things that we have lit up here are an application ID, which is a unique identifier for the application that we just set up in the platform; the environment, and here you can just change this to anything and we'll create that environment on the fly on the backend; and then the host that we're gonna be scanning. So just make sure this is http, not https, localhost:3000, no trailing slash. So let's take this whole thing, and you wanna create this StackHawk.yaml configuration file at the base of your repository. I'm gonna go ahead and hit finish here.
So at the base of my repository, go into your repo webpage, go to the base, add a file, create new file. I'm about to sneeze. No sneezing. Call it stackhawk.yaml and just paste that in. And we will reference this in another workflow here. Make sure you're doing .yml if you're wanting to just kinda copy paste what we put in the Discord chat. If you had .yaml we will get an error, I learned that last week. And then just double check your spelling of StackHawk; if we don't spell it correctly we'll also get an error. So that's stackhawk.yml. Zach, I didn't know you were so talented and could write YAML on the fly. Thank you, yeah. There's a lot you don't know, so much. There you go. All right, so stackhawk.yml, just paste that in, and commit that sucker to your repo.
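For reference, the downloaded file is a short sketch like the one below. The applicationId shown is a placeholder; use the value from your own download, and match the file extension your workflow references:

```yaml
# stackhawk.yml -- minimal sketch of the generated configuration
app:
  applicationId: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  # placeholder; use yours
  env: development                 # arbitrary tag; created on the fly in the platform
  host: http://localhost:3000      # http (not https), no trailing slash
```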

19. Editing Build and Test Workflow

Short description:

We just covered a lot. Download the file from the StackHawk platform and open it in a text editor. Create the file in GitHub and give a thumbs up. Edit the build and test workflow by adding a few lines. Run the application in the background using npm run start. Run HawkScan with the secret API key. Commit the changes and check the results in the build and test workflow. The unit tests have passed, the Node API service is running, and HawkScan is authenticating and starting the scan.

Awesome. We just covered a lot. So just a reminder: you download it out of the StackHawk platform, and you open it, in Xcode if you're feeling risky, or in any other text editor. And then you're gonna create that file in GitHub. Once you've created that file in GitHub, give us a thumbs up so we know folks are following. Otherwise, we can go through any step that you need, if we went too fast or you just wanna see it again, so we can help get that squared away.

Okay. So where were we? So we've added this file. Now that won't automatically do anything new. It will kick off your Actions, because we just added a file to the repo, but nothing new is gonna happen in the latest Actions workflow run. Now we are gonna light up the StackHawk configuration and make this thing work by editing our existing workflow, build and test. This is the first one that we set up ourselves originally. So click into your build and test file, and that's under your repo, the .github directory, slash workflows, build and test. We're just gonna add a few lines to this. And I would recommend that you go to the workbook or watch the chat for us to paste this in, but you can just copy this entire updated workflow config. What we're gonna do is just add a couple lines to it. So, initially our workflow checked out the code, installed Node.js, installed the dependencies for our application, and ran unit tests. What we're gonna add: again, for a dynamic scanner to work, we need to run the application itself. So we're gonna do that. We're gonna use npm run start and give it an ampersand so that it runs in the background. That means that once this thing kicks off, it will run our application, our daemon, in the background, and we'll move on to the next step even though that application is still running. And in that step we're gonna run HawkScan. It's going to use the HawkScan action, and it's gonna pull in that secret API key that we stashed a little bit earlier. So, update that. I'm gonna copy this and drop it into my workflow configuration and replace it entirely to avoid any formatting issues, and then commit. All right. Cool. So if you've done this and you've added these few lines to your build and test workflow, then you should see the Actions kick off again. Look for your build and test, click into the build and test workflow, and look at the current run, and you should see results starting to come through. So hopefully you're seeing that.
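As a sketch of what those added lines look like (this is not the exact file pasted in the chat; the step names, action versions, and the HAWK_API_KEY secret name are assumptions, so use whatever name you gave your secret earlier):

```yaml
# .github/workflows/build-and-test.yml (sketch)
name: Build and Test
on: push
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm install
      - run: npm test
      # New: daemonize the app so the scanner can reach it on localhost:3000
      - name: Start the app in the background
        run: npm run start &
      # New: run HawkScan using the stackhawk.yml at the repo root
      - name: Run HawkScan
        uses: stackhawk/hawkscan-action@v2
        with:
          apiKey: ${{ secrets.HAWK_API_KEY }}
```

The ampersand on npm run start is what lets the workflow move on to the HawkScan step while the app keeps serving requests.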
I'm gonna just walk through this one more time just in case anybody missed a step. So what I did was, from the root of my repo, I clicked into the .github/workflows directory. I edited the build and test workflow and added these few additional lines. And this entire configuration, Rebecca pasted that whole configuration to the Discord chat. So you can just copy this entire thing, edit this file, replace all of your text with what she pasted into the chat, and then commit that file. And that should kick off your Actions run again. And this time, when you go over to the Actions tab, look for your build and test workflow and click into the current run, you should be able to see the results. So we've got, I've already passed the unit tests. I have daemonized the Node API service. And now HawkScan is beginning to run. So you can see what's happening with HawkScan. It's already begun in my case to authenticate to the platform. So the scanner is up and running. Successfully authenticated to the platform using the secret API key. Parsed the configuration file. Gave us a little summary here of what it's about to do: scanning localhost port 3000 in the development environment. Gave us a link to the scan results. So even though the scan is not complete, there's a little placeholder here. You can command-click through to this.

20. StackHawk Scan Results and API Routes

Short description:

There we go. You should see the scan status. It's showing me a plugin summary. Each plugin has a set of tests built around a theme, such as path traversal or SQL injection. If you have tried to kick off a scan in GitHub but you're getting an error, let us know so we can help you troubleshoot. In this run HawkScan step, you can see all the scan details, including the scan results in the StackHawk platform. The scan shows what the spider found when it first crawled your application, goes through passive and active scans, and provides a summary of the results. The real action is back on the platform, where you can find the plugin summary and individual scan results. StackHawk can find API routes by feeding it an OpenAPI spec or using a classic web spider that looks at the root and robots.txt file. It can also handle SOAP WSDL files and GraphQL introspection endpoints or schema files.

There we go. You should see the scan status. So in this case, we're still scanning. It's showing me a plugin summary. So these are all the plugins that it's running. Each plugin has a set of tests built around a theme. So the path traversal plugin has a number of tests that it's gonna run against the application, all built around path traversal, or SQL injection, or server-side code injection, remote OS command injection, et cetera. If you have tried to kick off a scan in GitHub but you're getting an error, let us know what those logs are saying so we can help you troubleshoot.

In this run HawkScan step, if you click into your current Actions run, you should see all this detail about the scan, including where to go to find the scan results in the StackHawk platform. Now that mine is complete, I can show you the rest of what happens during a scan. And this is what you would see if you ran this from your own computer, for instance, if you wanted to run a scan against an application. A lot of times, developers will run it on their own workstation to scan their own application before they submit it. But what it'll show you is: hey, here's what the spider found when it first crawled your application. In our case, it found just a couple of pages. Then it goes through the passive scan and the active scan. And when it's all complete, it shows you a quick summary of the results. In this case it found a SQL injection problem, cross-site scripting, a couple of medium severity issues, and some low severity issues. But the real action, although this summary is handy, is really back on the platform. And again, you can click this link to get there. And from here, you will see the plugin summary tab from the individual scan results. First of all, if you just log into the platform and go to the scans page, you can find your latest scans at the very top of the list in the scans section of the platform. This will show you a little summary of: hey, this scan found five paths, two high severity issues, three medium, three low, and this is when it completed. You can click into that to get to the scan details. It shows us the findings by default. We've got a SQL injection issue, cross-site scripting, a couple of mediums, a couple of lows. And then of course, the paths that it found. And when it runs the probes, it runs every probe that it's got against each one of these paths. You can also, well, go ahead. There's a question on that paths topic.

Yes. Like we needed these questions in the audience. They're so great. How does StackHawk find out about the API routes and supported queries? Do the routes still show up if they're nested or have a weird name? Nested, great bird pun. I'm sure that wasn't the intention, but I'm pulling that one out. Nested, I get it, very canny. So to find out about API routes, you can feed it an OpenAPI spec, like a Swagger document, and that will tell it exactly what API routes are available. And if you've got an API service, that is the best way to go. If you don't have an OpenAPI spec, it's not that difficult to create one. If you have a Postman collection, there are scripts out there that can take your Postman collection and convert that into an OpenAPI spec. That's really an ideal way to scan APIs. Otherwise, by default, what we have is a classic web spider. So it looks at the root and it also looks at your robots.txt file, and it tries to find any links that are available at the root or in your robots.txt. It goes to those pages that it finds, it looks for more links in there, it goes to those. It's a classic web spider. We can also read a SOAP WSDL configuration file if you've got a SOAP API, and also look at a GraphQL introspection endpoint, or a schema file if you prefer. So you can take a GraphQL schema, we can take that and come up with a good scan plan from that. Hopefully that helps.

21. StackHawk Workshop and Application Tour

Short description:

I'm gonna paste into the discord chat a workshop, Zach and I did a year ago that was on GraphQL security testing. If you wanna check that out, great. Lars, you've got great questions. Thank you for that question. Would we define the schema or swagger file in the root StackHawk.yaml? Yes, you can actually add an OpenAPI spec directly to the stackhawk.yml. At this point, if you've gotten a scan working, that is like you've done the hard part and the rest of this is pretty straightforward. Let's do one more change to your StackHawk configuration file. We're gonna add a section called hawk, for some special HawkScan features. We're gonna set the failure threshold to high. Now the scan will run again, and hopefully this time it will actually fail because we've got some high severity issues. Let's just make sure that's running. It is running for me. Hopefully it's running for you as well. Now while folks are working on that piece, I'll come back to my little tour of the application. So latest scan, click into that to see details.

I'm actually gonna paste into the Discord chat a workshop Zach and I did a year ago that was on GraphQL security testing. It's similar to this one; I think in that one we're running the app locally, not in GitHub, but you can see a little bit about that OpenAPI piece. In that case, it's a GraphQL introspection endpoint, but we get into a little bit more around how to configure the scanner for APIs. So if you wanna check that out, great. If not, there's great docs on this topic too.

Yeah, and if that's the workshop I'm thinking of, we're gonna be doing that at HasuraCon a little bit later this month, I think.

The question, Lars, you've got great questions. You've gone deep on this application. The missing anti-click-jacking header versus X-frame options header not set. I'm not sure why that's showing up different in the command line versus the platform. I would trust the platform, but I'll look into that question, I'll take it back to the engineers and find out. Thank you for that question.

Would we define the schema or swagger file in the root StackHawk.yaml? Great question. Yes, you can actually reference an OpenAPI spec directly in the stackhawk.yml, so that is an option. Generally we're looking for either an OpenAPI JSON file or an OpenAPI endpoint in your application, if your application actually serves it itself. And there are other options to provide your OpenAPI spec as well. That's all documented at docs.stackhawk.com.
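As a sketch of that option (the openApiConf key names here are from my reading of docs.stackhawk.com, so verify them there), the spec gets referenced under app in stackhawk.yml:

```yaml
app:
  applicationId: 00000000-0000-0000-0000-000000000000  # placeholder
  env: Development
  host: http://localhost:3000
  openApiConf:
    # Point at a spec file checked into the repo...
    filePath: openapi.yml
    # ...or, if the app serves its own spec, at a route instead:
    # path: /api/openapi.json
```

With a spec configured, the scanner can hit every documented route directly instead of relying on the spider to discover them.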

All right, I'm gonna continue. So, at this point, if you've gotten a scan working, you've done the hard part, and the rest of this is pretty straightforward. There's more you can do with the scanner, and I'll talk a little bit about that later, but at this point I kinda wanna give you a little tour of what you get on the StackHawk platform from this point.

So, just to kind of set up the story: say you have opened a PR with some code changes and HawkScan ran. One thing you can do... Oh, actually, we should break the build first before I go through this little talk. Let's do one more change to your StackHawk configuration file. So what we did was we ran a scan and it found some really high severity issues, and I'll go into more detail in a minute. But I wanna show you that that's not all we can do. We can also break the build depending on how bad the problems were that we found. So we've got one last section we can add to that stackhawk.yml configuration file, and it's just this right here. We're gonna add a section called hawk, and that's for some special HawkScan features. We're gonna set the failure threshold to high. So copy that content and let's put it in our configuration file here. So this is the stackhawk.yml configuration file at the root of your repository. Edit that, and add this section to the very end: hawk, failure threshold, high. And what that'll do is cause the scanner to fail, or exit with a non-zero status code, and that way it'll break your build. If you commit this... Yeah, there we go. Commit that change. Now the scan will run again, and hopefully this time it will actually fail, because we've got some high severity issues. You can set that to high, medium, or low if you wanna be extremely strict. And then let's just make sure that's running. It is running for me. Hopefully it's running for you as well. Again, I just went to the base of my repo, edited the stackhawk.yml configuration file, and added this hawk failure threshold high. Now while folks are working on that piece, I'll come back to my little tour of the application. So latest scan, click into that to see details.
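The section being added is just this (matching the hawk.failureThreshold setting in the StackHawk docs):

```yaml
hawk:
  # Exit non-zero (breaking the build) if the scan finds any
  # findings at or above this severity: high, medium, or low
  failureThreshold: high
```

Appended to the end of stackhawk.yml, this is what turns the scan from informational into a gate on the pipeline.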

22. Vulnerability Analysis and Silent Scan

Short description:

When analyzing vulnerabilities, the scan details provide explanations, remediation steps, risks of not addressing the issue, and guides for fixing the problem in different languages. Request and response data from the attack are also provided. The validate button generates a curl command to recreate the attack. The build and test workflow can be configured to set a failure threshold for detecting new bugs. However, there is no silent switch for making the starting and stopping progress of the scan more silent. The no color option reduces the output but does not eliminate it. SAST and DAST have opposite trade-offs.

I got all the findings. Let's click on one of the findings to see what's going on here. So this SQL injection vulnerability is something that CodeQL also found. So that's really interesting, and it tells us that this is a serious issue that we should really take a look at. Oh, we got locked out. Nice. Nice.

Okay, SQL injection. When I click into scan details for the SQL injection vulnerability, and for any vulnerability really, you'll get this sort of presentation of scan details. We don't assume that developers understand all of these security terms, so we'll give them a little primer on what this is that we found. We tell them what it is, generally how you remediate this kind of problem, and what the risks are if you don't remediate it. Since we are looking at the running application, we don't know or care what language it's written in, so we will give you some guides on how to fix this in several different languages. And even if we haven't captured your exact language, you can generally get the point from these little helpers. We also give you request and response data from the attack. So for this attack, here's exactly what we did: we sent a request to the application with these headers, and we entered this into the body of the request. We saw that there was a search text field and we tried to insert some data. And the response that we got indicates that we were able to inject some unwanted content into the SQL query, and even do some math inside it. In this other info section, we try to explain in a more friendly manner what exactly happened. There's also a validate button. If you click this validate button, it gives you a curl command that recreates the attack. That way, if you're a developer whose PR just got blocked because of this issue, you can use this command to recreate the attack on your own local running application and make sure that you fixed it before you submit your next commit to the PR. Cross-site scripting, same thing. We give you a little bit of information about what cross-site scripting is, how you remediate it, and what the risks are if you don't.
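To make that remediation guidance concrete, here's a minimal sketch (my own illustration, not StackHawk's code or the workshop app) of why a payload like this works against string-built SQL, and what the parameterized fix looks like. No database is involved; it only builds the query text:

```javascript
// A typical injection probe: the quote closes the string literal early
const userInput = "x' OR '1'='1";

// Vulnerable: user input concatenated straight into the query string
const vulnerable = "SELECT * FROM items WHERE name = '" + userInput + "'";
console.log(vulnerable);
// => SELECT * FROM items WHERE name = 'x' OR '1'='1'
// The OR '1'='1' clause is now part of the WHERE and matches every row.

// Remediation: keep SQL and data separate with a placeholder, and let a
// parameterized-query API bind the value (shown here as plain data)
const parameterized = {
  sql: "SELECT * FROM items WHERE name = ?",
  params: [userInput],
};
console.log(parameterized.sql, parameterized.params);
```

With a placeholder, the driver sends the payload as a literal value, so the quotes never reach the SQL parser as syntax.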
And here are some examples of how to fix it in different languages. And we've also got request and response data for that. In this case, the malicious text was to try and drop a script tag into a search field, and the response that we got back was that the script tag got reflected back to us. There's a validate button to recreate the attack. We go back to my GitHub Actions run, and we should see, yeah, the build and test workflow ran again. And after adding that failure threshold high, we got a failure. So, over threshold error, two findings with severity greater than or equal to high. And we exit with the meaning-of-life error code, 42. And this is how we run it in our own environment. We set the threshold to high so that if we introduce any new bugs, we know about it right away, the PR is blocked, and the developer is right there in the context of what they just wrote. So this is the very best time to try and address a new bug that you've introduced. Is there a silent switch? Yeah, I did wanna go back to that question we got earlier, which we said we'd come back to, which is: is there a way to make the starting and stopping progress for HawkScan more silent so it doesn't spam the Action logs? And in that thread: the scanner can repeatedly send that starting scan engine message, any tips on that? This has been bothering me too, I have to be honest. I don't think we do have a silent flag for that. There is a no color option. You can set an environment variable called NO_COLOR to true, and it will give you less of this stuff, but it still prints a lot of output. This was really designed to look nice on the console if you're running it locally, but I agree, this is kind of a lot of chatter, and I am definitely gonna take that back to engineering and see if we can address that.
As far as I know, we haven't, it's not on the roadmap to address that, but it's a pet peeve of mine as well. Thank you, Lars, for validating my opinions. So there you have it. I think that is about all we have for the workshop. There's one other thing that I wanted to mention, which is, we talked about SCA and SAST and DAST and we've seen how easy it is to instrument all of these things in your own GitHub actions workflow. SAST and DAST are really interesting because they have the exact opposite set of trade-offs.


StackHawk Integration and Running Locally

Short description:

With a SAST scan, it can show you exactly where in the code base the problem is. DAST has low false positives but doesn't show the exact location. StackHawk integrates with Snyk to combine the two and cross-correlate results. The scanning doesn't really cover browser-based attacks, though front-end applications can still be scanned. StackHawk can be run locally as a Docker container or using the hawk CLI. Running locally is easy and provides detailed information for troubleshooting.

With a SAST scan, you get high false positives, but it's really nice because it can show you exactly where in the code base the problem is. DAST has the exact opposite set of issues. If it finds something, it's generally a real problem; we've got very low false positives, but we're not looking at your code, so we can't show you where the problem is. So we have an integration with Snyk that combines the two. You can tie your Snyk account to your StackHawk account and tie your results together, and if we find a problem and they also find the same problem in a particular application, we can cross-correlate those. Then you know: hey, my DAST scan says it's a problem, so I know it's actually expressed in the runtime, we better address that one; and the Snyk scan can tell you where to go to fix it. And I think I have an example of a scan result to show you, but I need to log in to my other account. So I will log out here and come back in with my StackHawk account. And yeah, so if you had that integration active, then you would see Snyk's little patch icon here in the scan results summary. The patch icon shows up for any result where we found it and so did Snyk. Then I can click in on that, I get the details, and I can see Snyk Code's take on it. Okay, anti-CSRF token scanner. We found it, Snyk found it. Here are the files that you need to look at. If I click into one of those files, we highlight exactly the line where you need to look to fix that issue. Super cool combination. We are working actively on a similar integration for CodeQL, which is the SAST tool that we set up today in the workshop, so watch for that to come soon. Question: does the scanning also work for browser-based attacks, i.e. on the front end side? No, not really. For those, a browser-based scanner is still ideal, although you can still scan front-end applications.
We also have an AJAX Spider, so that we can take a single-page application, render its JavaScript in the scanner, find links, and then scan the API behind it. But really what it's doing is scanning the API behind that site. So that's the Snyk integration. See, did I get through all the points? An indicator if we found the same vulnerability, links back to the code, and the same StackHawk DAST details as before. It's a really cool integration, and we're looking forward to doing as many of them as possible with other tools. We've got another question, which is: is it easy to run locally? Zach, you know how I love the StackHawk CLI because it makes me feel like I'm a real developer. Can you talk about different options for running locally? Yeah, you can run it easily locally. We distribute it as a Docker container, so if you're comfy with Docker, you can run it that way. Hesitant to use verbose true and debug true? Yeah, oh my gosh, Lars, you're not kidding, but it can be very handy in some troubleshooting situations. We give you literally all of the details there, Lars. Okay, is it easy to run locally? Let me show you how easy it is to run locally. You can run it as a Docker container. For Mac in particular, you can brew install the hawk CLI, and you can also run that on Windows and Linux. And if everybody has time, I don't think we have a whole lot more to cover, so I can fire up a quick demo of that and show you what it looks like. Where's my VS Code? Okay, so in this example, this is how a lot of developers may wish to run it. So in this case, I've got an application here called Java Spring Vulny, another vulnerable application. I'm going to get that up and running; it's going to listen on port 9000. I've got a StackHawk configuration already lit up for it. Seeding the database, and now I can simply run hawk scan. Actually, just hawk: the CLI has a bunch of built-in help.
I'll blow this text up before Rebecca tells me to.

StackHawk Scanning and Next Steps

Short description:

My eyes can't see. Me too, I can barely see. We've already run Hawk init and Hawk scan, which makes running actual scans easy. If you prefer Docker, you can deploy the Docker container locally with a simple Docker run command. The CLI is also available, which is super easy to use. For more details, visit docs.stackhawk.com. Thank you for joining us and don't hesitate to reach out if you need any support with security in your pipelines.

My eyes can't see. Me too, I can barely see. So it's got built-in help, so it can tell you what commands are available. I've already run hawk init, which creates a properties file. Hawk init will also prompt you for an API key, so it can stash your API key automatically for the scanner. And then to run an actual scan, it's really as easy as hawk scan, and it kicks off a scan. Demo gods. The demo gods are kind. Today. And if you really love working with Docker, we can also deploy the Docker container locally with just a docker run command. The CLI just came out in January. It's super easy to use, which is why we're showing it here. But in our docs, you'll also see how to run that Docker version locally as well, if that's what you prefer. We still recommend the Docker container for CI/CD. You can use the CLI in CI/CD as well; it's just that we've got tons of experience and a large install base for the Docker method. So, for the moment, it's still a bit more supportable, but the CLI, I think, is gonna be coming to CI/CD systems soon as well. So, thank you for asking. Glad that worked. Boom. Results. Back to the scan platform. So, there you have it.
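For reference, the local options mentioned here look roughly like this (commands based on my reading of docs.stackhawk.com; HAWK_API_KEY is a placeholder for your own API key, so check the docs for the exact invocation):

```shell
# Option 1: the hawk CLI (Homebrew on macOS; Windows and Linux builds exist too)
brew install stackhawk/cli/hawk
hawk init     # prompts for and stashes your StackHawk API key
hawk scan     # runs a scan using ./stackhawk.yml

# Option 2: the Docker container, mounting the directory with stackhawk.yml
docker run --rm -v "$(pwd)":/hawk -e API_KEY="$HAWK_API_KEY" \
  stackhawk/hawkscan
```

Both run against whatever host is configured in stackhawk.yml, so the app you're scanning needs to be up first.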

And we are nearing the top of the hour, so I wanna close this out and thank everybody so much for joining us. There are some next steps: with StackHawk in particular, you can go to docs.stackhawk.com to look at all the details on how to run the scanner, how to interpret the results, and how to get the most out of it. We also have a number of integration guides for different continuous integration platforms, if GitHub Actions is not your platform of choice. And we've got a great blog too, where we've got a lot of thought leadership pieces on how best to secure your applications, plus technical tips and tricks, walkthroughs, and stuff like that. So thank you so much. Again, my name is Zach. Thank you, Rebecca, so much for helping out; these things always go a lot easier when you're here. Check us out at stackhawk.com. Yes, and if you're interested, we're happy to set up a one-on-one demo with Zach, who was a DevOps Engineer and is now a Solutions Architect. If you have a custom environment and you want to see how you can configure this to run, we'd love to help you with that. And we also have some other great Solutions Architects and Engineers that'd be happy to help if you run into any questions. So, thank you all. It's been so fun hanging out with you, whether it's morning like it is for Zach and I, or whatever time of day it is wherever you are. Don't hesitate to reach out if we can ever be of support to get security embedded in your pipelines. So thanks everybody. Thanks a lot. Thanks, take care.

Watch more workshops on topic

React Day Berlin 2022React Day Berlin 2022
86 min
Using CodeMirror to Build a JavaScript Editor with Linting and AutoComplete
Using a library might seem easy at first glance, but how do you choose the right library? How do you upgrade an existing one? And how do you wade through the documentation to find what you want?
In this workshop, we’ll discuss all these finer points while going through a general example of building a code editor using CodeMirror in React. All while sharing some of the nuances our team learned about using this library and some problems we encountered.
TestJS Summit 2023TestJS Summit 2023
48 min
API Testing with Postman Workshop
In the ever-evolving landscape of software development, ensuring the reliability and functionality of APIs has become paramount. "API Testing with Postman" is a comprehensive workshop designed to equip participants with the knowledge and skills needed to excel in API testing using Postman, a powerful tool widely adopted by professionals in the field. This workshop delves into the fundamentals of API testing, progresses to advanced testing techniques, and explores automation, performance testing, and multi-protocol support, providing attendees with a holistic understanding of API testing with Postman.
1. Welcome to Postman- Explaining the Postman User Interface (UI)2. Workspace and Collections Collaboration- Understanding Workspaces and their role in collaboration- Exploring the concept of Collections for organizing and executing API requests3. Introduction to API Testing- Covering the basics of API testing and its significance4. Variable Management- Managing environment, global, and collection variables- Utilizing scripting snippets for dynamic data5. Building Testing Workflows- Creating effective testing workflows for comprehensive testing- Utilizing the Collection Runner for test execution- Introduction to Postbot for automated testing6. Advanced Testing- Contract Testing for ensuring API contracts- Using Mock Servers for effective testing- Maximizing productivity with Collection/Workspace templates- Integration Testing and Regression Testing strategies7. Automation with Postman- Leveraging the Postman CLI for automation- Scheduled Runs for regular testing- Integrating Postman into CI/CD pipelines8. Performance Testing- Demonstrating performance testing capabilities (showing the desktop client)- Synchronizing tests with VS Code for streamlined development9. Exploring Advanced Features - Working with Multiple Protocols: GraphQL, gRPC, and more
Join us for this workshop to unlock the full potential of Postman for API testing, streamline your testing processes, and enhance the quality and reliability of your software. Whether you're a beginner or an experienced tester, this workshop will equip you with the skills needed to excel in API testing with Postman.
React Summit Remote Edition 2021React Summit Remote Edition 2021
87 min
Building a Shopify App with React & Node
Shopify merchants have a diverse set of needs, and developers have a unique opportunity to meet those needs building apps. Building an app can be tough work but Shopify has created a set of tools and resources to help you build out a seamless app experience as quickly as possible. Get hands on experience building an embedded Shopify app using the Shopify App CLI, Polaris and Shopify App Bridge.We’ll show you how to create an app that accesses information from a development store and can run in your local environment.
TestJS Summit - January, 2021TestJS Summit - January, 2021
173 min
Testing Web Applications Using Cypress
This workshop will teach you the basics of writing useful end-to-end tests using Cypress Test Runner.
We will cover writing tests, covering every application feature, structuring tests, intercepting network requests, and setting up the backend data.
Anyone who knows JavaScript programming language and has NPM installed would be able to follow along.
Node Congress 2023Node Congress 2023
63 min
0 to Auth in an Hour Using NodeJS SDK
Passwordless authentication may seem complex, but it is simple to add it to any app using the right tool.
We will enhance a full-stack JS application (Node.JS backend + React frontend) to authenticate users with OAuth (social login) and One Time Passwords (email), including:- User authentication - Managing user interactions, returning session / refresh JWTs- Session management and validation - Storing the session for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.
Table of contents
- A quick intro to core authentication concepts
- Coding
- Why passwordless matters
Prerequisites
- IDE of your choice
- Node 18 or higher
JSNation 2022
41 min
Build a chat room with Appwrite and React
APIs and backends are difficult, and we need websockets. You will be using VS Code as your editor, along with Parcel.js, Chakra-ui, React, React Icons, and Appwrite. By the end of this workshop, you will have the knowledge to build a real-time app using Appwrite and zero API development. Follow along and you'll have an awesome chat app to show off!

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career

Remix Conf Europe 2022
23 min
Scaling Up with Remix and Micro Frontends
Do you have a large product built by many teams? Are you struggling to release often? Did your frontend turn into a massive unmaintainable monolith? If, like me, you’ve answered yes to any of those questions, this talk is for you! I’ll show you exactly how you can build a micro frontend architecture with Remix to solve those challenges.
Remix Conf Europe 2022
37 min
Full Stack Components
Remix is a web framework that gives you the simple mental model of a Multi-Page App (MPA) but the power and capabilities of a Single-Page App (SPA). One of the big challenges of SPAs is network management, which results in a great deal of indirection and buggy code. This is especially noticeable in application state, which Remix completely eliminates, but it's also an issue in individual components that communicate with a single-purpose backend endpoint (like a combobox search, for example).
In this talk, Kent will demonstrate how Remix enables you to build complex UI components that are connected to a backend in the simplest and most powerful way you've ever seen. Leaving you time to chill with your family or whatever else you do for fun.
JSNation Live 2021
29 min
Making JavaScript on WebAssembly Fast
JavaScript in the browser runs many times faster than it did two decades ago. And that happened because the browser vendors spent that time working on intensive performance optimizations in their JavaScript engines. Because of this optimization work, JavaScript is now running in many places besides the browser. But there are still some environments where the JS engines can’t apply those optimizations in the right way to make things fast. We’re working to solve this, beginning a whole new wave of JavaScript optimization work. We’re improving JavaScript performance for entirely different environments, where different rules apply. And this is possible because of WebAssembly. In this talk, I'll explain how this all works and what's coming next.
TechLead Conference 2023
35 min
A Framework for Managing Technical Debt
Let’s face it: technical debt is inevitable and rewriting your code every 6 months is not an option. Refactoring is a complex topic that doesn't have a one-size-fits-all solution. Frontend applications are particularly sensitive because of frequent changes to requirements and user flows. New abstractions, updated patterns and cleaning up those old functions - it all sounds great on paper, but it often fails in practice: todos accumulate, tickets end up rotting in the backlog and legacy code crops up in every corner of your codebase. So a process of continuous refactoring is the only weapon you have against tech debt.
In the past three years, I’ve been exploring different strategies and processes for refactoring code. In this talk I will describe the key components of a framework for tackling refactoring and I will share some of the learnings accumulated along the way. Hopefully, this will help you in your quest of improving the code quality of your codebases.

React Summit 2023
24 min
Debugging JS
As developers, we spend much of our time debugging apps - often code we didn't even write. Sadly, few developers have ever been taught how to approach debugging - it's something most of us learn through painful experience. The good news is you _can_ learn how to debug effectively, and there are several key techniques and tools you can use for debugging JS and React apps.
React Advanced Conference 2021
19 min
Automating All the Code & Testing Things with GitHub Actions
Code tasks like linting and testing are critical pieces of a developer’s workflow that help keep us sane, preventing syntax or style issues and hardening our core business logic. We’ll talk about how we can use GitHub Actions to automate these tasks and help keep our projects running smoothly.
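As a sketch of what that automation might look like, here is a minimal GitHub Actions workflow that lints and tests a Node.js project on every push and pull request. The file path and the `lint`/`test` script names are assumptions about a typical project's package.json, not something from the talk itself:

```yaml
# .github/workflows/ci.yml -- lint and test on every push and pull request
name: CI
on: [push, pull_request]

jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
          cache: npm
      - run: npm ci
      - run: npm run lint   # assumes a "lint" script in package.json
      - run: npm test
```

Because the workflow triggers on `pull_request`, failing lint or test steps surface directly as failed checks on the PR, which is what keeps broken code from merging.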