Security Testing Automation for Developers on Every Build

As a developer, you need to deliver fast, and you simply don't have the time to constantly think about security. Still, if something goes wrong, it's your job to fix it. Security testing blocks your automation, creates bottlenecks and delays releases, especially with GraphQL... but it doesn't have to.

NeuraLegion's developer-first Dynamic Application Security Testing (DAST) scanner enables developers to detect, prioritise and remediate security issues EARLY, on every commit, with NO false positives or noisy alerts, and without slowing you down.

Join this workshop to learn different ways developers can access NeuraLegion's DAST scanner & start scanning without leaving the terminal!

We will be going through the set up end-to-end, whilst setting up a pipeline for a vulnerable GraphQL target, running security tests and looking at the results.

Table of contents:
- What developer-first DAST (Dynamic Application Security Testing) actually is and how it works
- See where and how a modern, accurate dev-first DAST fits in the CI/CD
- Integrate NeuraLegion's scanner with GitHub Actions
- Understand how modern applications, GraphQL and other APIs and authentication mechanisms can be tested
- Fork a repo, set up a pipeline, run security tests and look at the results

82 min
14 Dec, 2021


AI Generated Video Summary

This workshop focuses on security testing automation for developers, with a specific emphasis on GraphQL. NeuraLegion offers a comprehensive security testing solution for developers, supporting various types of applications and providing actionable results with remediation guidelines. The tool integrates seamlessly into CI/CD pipelines and prioritizes accuracy by minimizing false positives. Support and assistance are available 24/7, and the tool provides detailed information about findings and multiple ways to copy requests for debugging. Overall, the workshop highlights the importance of putting security testing into the hands of developers and offers practical solutions for integrating security into the development process.

1. Introduction to Security Testing Automation

Short description:

Thank you for joining this workshop on security testing automation for developers. Today, we will have a specific focus on GraphQL. We will start with a brief introduction and then jump into a hands-on technical workshop. Feel free to ask questions in the chat or on our Discord.

So, thank you to those of you joining this workshop on security testing automation for developers on every build. And, obviously, with this being GraphQL Galaxy, we're going to have a specific focus on GraphQL, particularly for those of you already using GraphQL with JavaScript, though the platform covers more than that, as you'll see in the introduction that I'll start off with.

And I will actually go through an agenda, so let's perhaps not ruin that. But my name's Oliver Moradov, VP here at NeuraLegion. And we're joined today by Bar Hofesh, who you can see, who's our CTO and co-founder. So, I don't know if you want to say hello, Bar, but… Hi, everyone. There you go, so you can differentiate our voices at the very least.

So, what are we going to be looking at today? We're going to do a very, very brief intro. What we want to do is get very hands-on, we want to get technical, we want to jump straight into this hands-on technical workshop with you. But let's just set the scene at the very least. I'll give a very, very brief intro into why security testing is important, what dev-first DAST is, and the NeuraLegion platform, and then we'll jump straight into the hands-on workshop. So, if you do have any questions, by the way, do please feel free to put them either in the chat or, the one I'll be keeping a closer eye on, our Discord.


Introduction to Resources and Questions

Short description:

There is a specific GraphQL Galaxy channel. We have the GraphQL Discord with GitNation and the NeuraLegion Discord. Our docs are a valuable resource. Feel free to ask questions in the chat. Bar is a fountain of knowledge and I will be monitoring the chat.

And there is a specific GraphQL Galaxy channel, which I've just realized is misspelt. So, I'm going to change that, because I spelt GraphQL wrong. Let me save the changes. And I will add the link in there if you haven't already got it.

So, we have one. There is the GraphQL Discord with GitNation. You can use that, of course; we'll be monitoring it. Or indeed, there's the one on the NeuraLegion Discord. If you look in the chat, you should be able to see it there.

What else? For those of you that are going to be playing along with us, and I hope that's going to be all of you, our docs are always going to be a really good resource for you moving forward. If you haven't already done so, do go to forward slash sign up and sign up either now, or you can go through that with Bar. So, Bar and I will be taking you through the whole process step-by-step, as we do whenever we run these sorts of workshops. Most people are a few steps ahead of us, but they always have questions.

And talking about questions, do please put them in the chat. Don't be shy, I don't bite. No question is silly or stupid in any way. We want to have a conversation. It's quite a nice, small group actually here, so let's keep the conversation flowing. We're here to be as helpful as we can, whether that's to test your GraphQL schemas or indeed any form of AppSec testing. Bar is a fountain of knowledge, so please do feel free to use that. I'm not a fountain of knowledge, so don't try and use me for that, but I will be monitoring the chat and will obviously be able to answer any questions you have.

Introduction to NeuraLegion and Security Testing

Short description:

NeuraLegion is passionate about putting security testing into the hands of developers. Their DAST tool enables developers to scan web apps, APIs, and mobile applications, including GraphQL, seamlessly integrated into CI/CD pipelines. The technology ensures no false positives and provides clear, actionable results with remediation guidelines. Many organizations are moving from traditional DAST tools to NeuraLegion due to its effectiveness. Security testing needs to be put into the hands of developers to keep up with rapid release cycles and the rise of APIs. NeuraLegion's dynamic application security testing tool allows testing of GraphQL schemas early and often, matching developer speed and integrating security at all phases. Different security testing approaches exist, such as software composition analysis and static analysis, but NeuraLegion's tool provides a comprehensive three-dimensional view of the application, considering microservices and APIs.

So, just a quick word about NeuraLegion. For those of you that don't know, we've been going for a few years now. We've got a global team of, I think, about 70 people now, or maybe more; every day there's a new starter. We're very passionate about AppSec, particularly putting security testing into the hands of developers.

Okay, so: developer-first or developer-focused DAST, Dynamic Application Security Testing, to scan your web apps, your APIs, whether that's REST, SOAP or, of course, GraphQL, which is our focus today, server-side mobile applications and, of course, their corresponding APIs. And we're all about putting security testing, putting DAST, into the hands of developers. You probably will have heard of DevSecOps and Shift-Left; you have other tools that are in the developer's hands, but DAST is typically something that's carried out later on in the process. So, we're all about putting DAST into the hands of developers, building a scan surface from the very first unit test, enabling developers to start to use DAST in an effective way and to maximize adoption, seamlessly integrated across your CI/CD pipelines.

But one of the fundamental pillars of our technology is no false positives. So, you're confident and can start trusting the reports and the outputs that our engine is giving. And unlike other DAST engines or DAST scanners, you don't need to go through that manual validation. You get clear, actionable results as each vulnerability is detected with remediation guidelines. So, you can start to remediate accordingly, or at the very least, prioritize because prioritization is often probably one of the biggest issues that people have when it comes down to security testing.

Okay, a select customer list. We don't need to go through the names, but what I do want to say is that whether you're a startup or an enterprise organization with a thousand developers, people are moving away from their traditional DAST tools to NeuraLegion. Many might be using fantastic open-source technology as well, but actually want to couple it with ours, or are moving completely away from their traditional DAST tools to start using NeuraLegion. And this is for a multitude of different reasons, which perhaps we'll get on to. And like I said, I'm not going to be talking for too long about this; we want to start getting hands-on as soon as possible.

Why is security testing so important? Well, applications are and continue to be the weakest link, right? But actually, it's the rise in APIs that's really resulting in an exponentially growing threat model and attack surface. And actually, when you look at GraphQL in particular, the majority of the tools out there do not provide support for this, right? So when it comes down to security testing, you may find that a lot of this is going to be happening in a manual way. Actually, what we want to do is to give you the ability to test your GraphQL schemas as early and as often as possible. Developers aren't stopping. Security can't, either. This is where everyone talks about the need for security testing to keep up with your rapid release cycles, to keep up with the speed of DevOps, to keep up and maintain that speed, to keep up with your CICD. And traditional DAST tools are built for security professionals. But actually, those silos need to be broken down now. Security testing needs to be put into the hands of developers, which is that shift-left methodology. And DAST predominantly has been the missing part of that shift-left side of things. Security must match developer speed with integration at all phases, pushing pre-release testing earlier in the SDLC. Any questions so far? By the way, don't be shy. Feel free to interrupt me as often as you like.

So, there are different security testing approaches out there. This is just really to set the scene. You may be familiar with software composition analysis, like Snyk or WhiteSource; with the Log4j issue that happened recently, this is where those tools really come into the fold: checking your dependencies, checking your libraries, making sure they're all up to date and patched. You may be using static analysis, SAST tools: SonarQube, for example, and GitHub has CodeQL built in. But a dynamic application security testing tool, a dev-first one, looks at the built, compiled application from the outside in, performing what is effectively an ethical hack against it, from the same perspective as an attacker. It sees the application in an almost three-dimensional form, with all the microservices and APIs working together, whereas static analysis looks at things in a relatively one-dimensional space: it sees the source code, but doesn't really understand how the different parts or microservices work and function together as a built, compiled application. And that's really what you want to be able to test. This is what you may be spending a lot of money on with periodic annual penetration tests; our tool is going to look at the application layer in an automated way. But I don't know if I've missed anything there or if there's anything you want to add. You can just say no and I can move on. That sounds pretty good. You do your job well.

Security Testing Solutions for Developers

Short description:

We offer comprehensive security testing solutions for developers, particularly in the context of modern DevOps methodologies. Our tool supports various types of applications, including web apps, internal apps, and APIs like GraphQL, REST, and SOAP. We provide multiple ways of discovering the attack surface, such as crawling the application, using HAR files, and supporting Swagger/OpenAPI and Postman collection schemas. Our technology is designed to be user-friendly and optimized for developer needs, with built-in scan optimizations and the ability to configure scans directly from the CLI or via a global YAML configuration file. We prioritize accuracy by minimizing false positives, enabling developers to quickly identify and address security issues. Our engine automatically validates every issue, providing trustworthy results that can be actioned immediately. Our seamless integration with your pipeline ensures fast and efficient security testing without slowing down development.

Good job, Oli. I need to start getting paid more, I think. I got promoted to a panelist before, and now I'm doing my job as co-host. Anyway, let's just look at this very briefly, because we're not here to sell and pitch, we're here to get into a nice workshop. So why do security teams trust us and why do developers love us? I will need you in a second, because I want you to delve into things in a bit more depth.

There are a number of things that we looked at with security testing; we looked at other DAST tools that have been around for 20-plus years. But there are lots of pain points, lots of limitations, particularly when it comes down to security testing for developers with these modern DevOps methodologies and CI/CD. One of which is coverage, and this is very pertinent to GraphQL. You want to be able to test your web apps, your internal apps, and the APIs, whether they're GraphQL, REST or SOAP, for example. Microservices and these modern technologies and modern application architectures need to be handled effectively.

Authentication is a really, really big aspect, and Bar will go through the multiple different authentication mechanisms that we can handle. But what you really need is multiple ways of discovering the attack surface. So you'll see, we'll go through it. We support a crawler, where we crawl the application, detect the entry points, extract the parameters and build the attack surface. We support HAR files, HTTP Archive files, which you may be producing already with your QA automation. So if you're using things like Selenium or Cypress, your functional scripts can all be exported as a HAR file, and we can use these to then also scan your applications. The benefit of this is that it's very scope-defined, because whatever is included in the HAR file, let's say you interacted with a search field or a specific entry form, will define the scope of the test. And this will enable you to run tests in a very few minutes instead of several hours. And, perhaps more pertinent to this call, full support for Swagger/OpenAPI and Postman collection schemas. Okay, so we can get onto that.
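Since a HAR file is just JSON, it is easy to see why it makes such a tight scope definition: every request your Selenium or Cypress run made is listed under `log.entries`, and that set of entry points is what the scan targets. A minimal, self-contained sketch (the file contents and URLs below are made up purely for illustration):

```shell
# Create a tiny hand-written HAR file; real ones come from your browser
# dev tools or your QA automation (Selenium, Cypress, etc.).
cat > demo.har <<'EOF'
{
  "log": {
    "version": "1.2",
    "entries": [
      { "request": { "method": "POST", "url": "http://localhost:5013/graphql" } },
      { "request": { "method": "GET", "url": "http://localhost:5013/search?q=test" } }
    ]
  }
}
EOF

# List the recorded entry points: exactly the URLs you interacted with,
# which is why a HAR-based scan stays narrowly scoped and fast.
grep -o '"url": "[^"]*"' demo.har | cut -d'"' -f4
# → http://localhost:5013/graphql
# → http://localhost:5013/search?q=test
```

Only the interactions captured in the file become part of the attack surface, which is what keeps these scans down to minutes.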

Usability and adoption: we will show you the multiple different scan optimizations that are built into our technology. Having been to meetups pretty much across the globe, DevOps ones, developer-focused ones, obviously ones with a focus on security testing, even penetration testing meetups, they all have the same problem with DAST. The tools are hard to use, they're hard to configure, and they've got far too much noise. The results just are not trustworthy at all. The tools get disabled, or they need a lot of manual validation to understand whether the issues are really there. So, we'll show you the built-in scan optimizations that we have within our technology, really removing the heavy lifting from you, the developer. And, in fact, it'd be great for the few people that are online just to put in the chat: are you a developer? Are you an architect? Are you in security? Let's try and understand who we've got here, understand your thought process, and we can tailor little pieces to specific people. But more importantly, it's built for developers, right? You can configure the scans directly from the CLI, use the multiple commands that we have in our docs to hook, to call, to schedule, to retest, really controlling the scans via code, individually or indeed using a global YAML configuration file. But the no-false-positives really is one of our fundamental pillars, okay? Security folks love all the edge cases; they want to know everything. They want to go through engine logs all day and all night and try to find everything. This is not sustainable or scalable for development. This is not scalable for developers trying to match your rapid release cycles. Developers want facts, okay? You want to be able to run a very quick test that's going to last a few minutes against your application, using your schema, for example, with GraphQL. You want to understand if the issue's actually there.
And then you want to be able to prioritize: okay, which ones are we going to fix now? Which ones are we going to accept the risk on? And perhaps which ones are we going to do later? But most importantly, what can we fix now, so that we're pushing secure product into production? You can't do this with other DAST tools, because they produce too many false positives; you need that manual validation. And going back to that customer list: if you're a startup, you haven't got the time or the inclination to start finding which ones are your real true positives. If you're a massive enterprise with a thousand developers, it gets even worse. You've got a thousand developers pushing new commits into production every single day. The amount of technical debt, as a result of the security debt and the manual validation requirements, is often insurmountable. And so our engine automatically validates every issue, and you can start trusting the results. This means that you can start to action these now: chip away at your technical debt, but more importantly, whenever you test, you know the issue is going to be there and it's real. Seamlessly integrated across your pipeline, and we can go through the integrations. Speed: again, we don't want to slow developers down. Short, small, incremental tests that run for minutes, not days, adjusting the scan scope with each test or each commit.

Introduction to Testing and Workshop Setup

Short description:

We scan for the technical OWASP Top 10 and MITRE Top 25 issues, as well as business logic tests. Our engine understands responses and uses natural language processing. We aim to reduce reliance on manual testing and find vulnerabilities early. Traditionally, DAST is used at later stages, but we prioritize early testing. We can automatically open tickets for validated issues and integrate with Slack and other platforms. Let's get started with the workshop!

And in terms of payloads, we scan for the regular sort of technical issues, the OWASP Top 10, the MITRE Top 25, but we're also the only automated tool that can test for and detect business logic vulnerabilities. And we'll go through that as well.

This alludes to the fact that our engine, which was built off the back of an AI-guided fuzzing technology, actually understands the responses we're getting back. We have natural language processing. We're looking at the responses we're getting back, we can understand what we're looking at, and we can use this to manipulate our tests to carry out logic-based attacks. It's all about reducing your reliance on manual testing and putting this straight into your pipeline. Let's try and find as much as we can before things go to production, so you're producing secure product as early and as often as possible. And then, generating the right tests for the right targets. This all comes back to our scan optimizations, which we'll get on to during the workshop.
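As a toy illustration of that response-driven idea (this is not NeuraLegion's actual engine, just the general principle): a dynamic scanner decides whether a finding is real by looking for evidence in the live response it got back, rather than by pattern-matching source code.

```shell
# Pretend this is the body the target returned after we sent an injected payload.
response='You have an error in your SQL syntax; check the manual for the right syntax'

# A (very crude) response-evidence check: only flag the issue if the
# response itself shows the payload was actually interpreted.
if printf '%s' "$response" | grep -qi 'sql syntax'; then
  echo "sql-injection: confirmed by response evidence"
else
  echo "no evidence in response: not reported"
fi
# → sql-injection: confirmed by response evidence
```

A real engine goes far beyond string matching, but the principle is the same: validation happens against what the running application actually did, which is what allows findings to ship without manual triage.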

Are there any questions so far, by the way? I haven't even been looking at Discord. Apologies for that. I did say I would be. Any questions for me yet? No? Okay. No problem. I can see we've had a few of you joining the Discord, which is great. Do go to events and then GraphQL Galaxy 2021, and let's use that channel for any questions. Ah, unless you're using the GitNation one, then feel free to use that as well. I'll flick between the two. So, this is what it should look like.

Traditionally, DAST has been used at stages 4 and 5. We're really about doing it as early as possible. Push commits to GitHub, trigger the CI, run a scan, and the build will pass or fail. How many of your builds are failing because of false positives? That isn't going to happen with us. Open a ticket automatically. You often can't do that with other tools, because you don't know if it's an actual issue. Well, with NeuraLegion, you can, because you know it's a validated issue, and then you can start to prioritise. And we have integrations with Slack and a whole host of others. But if there are no questions, which it appears there are not, let's get cracking with the workshop. Are you ready? Let's do it.

Okay, great. One thing I do want to know from everyone that's joined is whether you're joining in with us. I really want to know who's signing in, who's playing along. I'm just going to stop sharing now, by the way, and it's going to go over to you. So, don't be shy. Like I said, we're here to have an open conversation and discussion and, of course, to help you out as we go along. So do feel free to let us know what's going on. Yeah, yeah.

So, before we go in, there are three things that we're going to work with today. One is, of course, our application. Olly, if he hasn't done it yet, will now send the link again in the chat and also in Discord, so it will be very obvious where you need to go. We're going to work with the NeuraLegion example actions repository specifically, and I will note again: the GraphQL branch, which is why we're here today, right? Working with GraphQL. And we're going to be testing and running against the Damn Vulnerable GraphQL Application, which for the time of this workshop we'll leave running. Don't worry, it doesn't matter where it is; it's already set up and configured in the repository. But just so you know what we're going to scan: this is, of course, just an open project on GitHub.

Signing Up for a Free Account

Short description:

To get started, sign up for a free account in Neurolegion. You can sign up with your GitHub, Google, or email. Provide your full name, email, and a secure password. After creating your account, verify your email and log in. Let us know if you need any assistance or if you're ready to proceed.

You can just get it from the dolevf Damn-Vulnerable-GraphQL-Application repository and spin up your own for testing and just learning about GraphQL vulnerabilities. The first thing we're going to do is sign up for a free account in NeuraLegion. You can do it either by going to the main application and, on the login page, just going down: you don't have an account, so click try it for free. Or you can go directly to the sign-up page, which will of course bring you here directly, because, I don't know, maybe you like shortcuts. You can sign up with your GitHub, you can sign up with your Google; I'm going to use the sign-up with your email. Now, a few things. You need to put your full name. You see, full name. I don't actually put my full name. That's a problem. Full name, email. I'm putting my email here. Password: please use, you know, a super-secure, awesome password. All right. And click on Create free account. Woo-hoo! If you're here, you should have gotten a free account, and basically you should have an email waiting for you in your inbox asking you to verify your email. You can just click on the button, which will open the page allowing you to log in, basically taking you back to the login page, but now you actually do have an account. Yeah. Is everyone good with that? Are we all signed up, signed in, etc.? Let us know if at any point you want me to take it a bit slower, or if you want me to go over some kind of a step again. Anything? Meanwhile, I'm just going to click on sign-in, and I'll give you a moment to catch up or to tell me: let's go. Yeah, I think, just looking here, someone already has a user and has already finished the workshop. They've basically done it, so it's over to everyone else to give me and Bar the workshop, and we'll be tested at the end. All right, so I think we're good to go.
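If you do want to spin up your own DVGA instance to practise against, the project runs in Docker. A sketch along those lines (the image name, port and flags here are assumptions; defer to the project's README for the exact command):

```shell
# Pull and run the Damn Vulnerable GraphQL Application locally.
# Image name and port are assumptions based on the public project;
# check the DVGA README before relying on them.
docker pull dolevf/dvga
docker run -d -t -p 5013:5013 --name dvga dolevf/dvga

# The GraphQL endpoint should then be reachable on localhost, e.g.:
#   http://localhost:5013/graphiql
```

Running your own copy means you can break it, scan it and reset it freely without depending on the instance used in the workshop.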

Setting Up Organization and Forking Repository

Short description:

Create a new organization and give it a name. You can choose between using Docker, NPM, or the Windows Installer to set up the repeater. Once set up, copy the repeater ID and token to the CLI. Confirm the connection and proceed. Fork the repository from NeuraLegion's GitHub. Enable the workflows, then open the CI workflow and click enable workflow again.

All right, so I'm creating a new organization and giving it a name. You can just use… no, don't use that one, that's mine. I'm going to create a new one, because I don't have it yet. Now, an organization is basically your workspace, allowing you to share information and scan results and just collaborate with your friends or colleagues, obviously. So just create whatever you want. You can change the name afterward, so it isn't a big decision.

Once you click on create, we will start the wizard. Now, the wizard here is a few steps to set up the Repeater. The Repeater, in the Nexploit CLI, is a tool that will allow you to scan local instances or local development environments, or, in the CI/CD context, to scan stuff inside closed networks, like a Docker Compose setup or things that happen inside a CI/CD machine that isn't routed to the outside world. So it's going to be pretty quick to set up. You can choose between using Docker, NPM or the Windows Installer, if you like that sort of thing. You can just copy the docker run command and paste it in your terminal. I'm going to go with Docker. If you want, you can use NPM, of course, or the Windows Installer. Once you've got that in, you can click Next, which will ask us: are you sure you did it? Yes, we did.
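The wizard hands you a ready-made docker run command with your IDs filled in; it looks roughly like the sketch below. The image and environment-variable names here are illustrative placeholders, so copy the wizard's exact snippet rather than this one:

```shell
# Illustrative only: run the Repeater agent in a container so scans can
# reach targets inside your local network or CI machine.
# Replace the placeholders with the values the wizard generated for you.
docker run -d --restart unless-stopped --name repeater \
  -e REPEATER_ID=<your-repeater-id> \
  -e NEXPLOIT_TOKEN=<your-repeater-token> \
  neuralegion/repeater
```

Running it detached with a restart policy means the agent stays up between scans, which matters once it is wired into a pipeline.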

Now, you will see that the wizard has automatically generated a repeater ID and a token for us. You can just copy those, take them to the CLI and paste them in. If everything is okay and working, you should very quickly be able to click on Refresh Status, and you will see it connected. When this is done, it means we can go forward. Just click Done, and tada! We are in. So this is where I want everyone to be. So just let me know that you're right there with us, because our next part is over in GitHub land. So before we start forking and cloning and the YAML editing, I just want to ensure that you're all on the same page. Are we all there, everyone? Sakis is ready. Nice. Brain vision all set. The brain vision… I think it's brain invasion. That would be good, but it's envisioned. You didn't think that one through, did you, Mark? Okay, so we've got Mark, we've got Sakis, and I'm assuming others... Altrix, are you still with us? Avinash? I just want to make sure people are up to speed so we can move on. All right.

So as I said, our next step is to fork the repository. Just go to the NeuraLegion example actions repo and click on fork. Obviously, fork it wherever you can. And once it's done, we just have something to do. So we're going to go to Actions, which is the most important part: understand the workflows, blah, blah, blah, go ahead, enable them, don't worry about that. Then click on the CI workflow, and you need to again click enable workflow. Yeah, it's very important. Usually it's just one click, but GitHub changed it, so it's two clicks. You need to enable that twice.

Setting Up Workflow and Secrets

Short description:

To set up the workflow, we need a Repeater and a Nexploit token. We'll create new ones for a different test scope. Go to settings and add new repository secrets. Create a new repeater in the NeuraLegion app and copy its ID. Next, create an API key in user settings and copy it. Paste the values into the action secrets and ensure everything is correct. Once set up, we can proceed.

So again, go to actions in your fork, enable actions, and then go to the actual workflow and enable the workflow. Once you got that all set up and ready, remember that what we're working with is the GraphQL branch. That's what is interesting for us for this demo, this workshop. So I will say, okay, I guess we're all set.

For this workflow to work, we need two things. We need a repeater. Just zoom in a little bit, Bar, just to. So for this to work, we need two things. We need the repeater secret setup and an exploit token secret setup. You remember in the wizard, when it gives us the token and repeater it did auto-generated that. So those are the things we needed, but we're not going to use the ones that the repeater, that the wizard has generated. We're going to create new ones because we need a different scope for the test. So how do we actually get those things and where do we actually put them?

First, let's go to settings and go to secrets. Here we want to add new repository secret. The first one will be the repeater. Where do we get this value from? We go back to the NeuroLegion app. We go to repeaters. We're going to create a new repeater. You can even give it a specific name so you will remember it. So CICD, repeat there, and click create. Then we can just copy the ID of that repeater that we just created and put it here. All right, once this is set, we are almost done. We have just one more thing to put here. So new repository secrets. The next one is an exploit token. Right? So an exploit token. And we're going to take that from going to our user settings. All the way down, we have create API key. Now let's give our API key some kind of a name, we can call it CI, CICD. And there is specific things that we actually need here. So we need scan start, scan read, all kinds of things. But for this purpose, let's just select all because we really just want to get this thing running. Also, it's really super not secure to do that. But again, you can just delete it or you can use the knowledge base to see which exact scope you want. But again, for this example, you don't need anything extra. So we'll click create. Now you will see this screen. You need to copy that. Before you close it, go to action secrets, paste that value here. Make sure that everything seems to be okay and click add secret. Now you can close that. Now once we know that everything is okay, because once this is closed, it can't be recreated but you can just create a new API key. So it's not a big deal if something happened to it. Once those two things are set up, we can go back. Let's just make sure, are those two things set up with everyone? Just a yes or a no or a some form of interaction just so that we're not moving too quickly, too slowly. If we are moving too slowly, then tell us to get on with it, it's easy. But they're all typing at the same time, which is good. 
First, let's go to settings and go to secrets. Here we want to add a new repository secret. The first one will be the repeater. Where do we get this value from? We go back to the NeuraLegion app. We go to Repeaters and create a new repeater. You can give it a specific name so you'll remember it: CICD repeater, and click create. Then we can just copy the ID of the repeater we just created and put it here. All right, once this is set, we are almost done. We have just one more thing to put here. So, a new repository secret: the next one is the Nexploit token. Right? We're going to get that by going to our user settings; all the way down, we have create API key. Now let's give our API key some kind of a name; we can call it CICD. And there are specific scopes that we actually need here: scan start, scan read, all kinds of things. But for this purpose, let's just select all, because we really just want to get this thing running. Also, it's really, super not secure to do that, but you can just delete the key afterwards, or you can use the knowledge base to see exactly which scope you want. For this example, you don't need anything extra. So we'll click create. Now you will see this screen. You need to copy that. Before you close it, go to action secrets, paste that value here, make sure everything seems okay and click add secret. Now you can close that. Once this screen is closed, the key can't be shown again, but you can always create a new API key, so it's not a big deal if something happens to it. Once those two things are set up, we can go back. Let's just make sure: are those two things set up with everyone? Just a yes or a no, or some form of interaction, so that we're not moving too quickly or too slowly. If we are moving too slowly, then tell us to get on with it. But they're all typing at the same time, which is good.
Is that the correct key? NEXPLOIT_TOKEN, yes. And I created the repeater secret with the ID of the repeater. Great.
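Inside the workflow file, those two repository secrets get consumed roughly like this. The secret name NEXPLOIT_TOKEN matches what was just created; the step layout, the REPEATER secret name and the CLI flag names are assumptions about the example repo, not its exact contents:

```yaml
# Sketch of a workflow step consuming the two repository secrets.
- name: Start NeuraLegion scan
  env:
    NEXPLOIT_TOKEN: ${{ secrets.NEXPLOIT_TOKEN }}
    REPEATER: ${{ secrets.REPEATER }}
  run: |
    # The CLI authenticates with the API key and routes traffic through
    # the repeater we created (flag names are assumptions; see the docs).
    nexploit-cli scan:run \
      --token "$NEXPLOIT_TOKEN" \
      --repeater "$REPEATER" \
      --name "GraphQL CI scan"
```

Keeping both values in repository secrets means the fork's workflow runs without ever committing credentials to the repo.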

Running the GraphQL Scan

Short description:

Do you need us to go back to any specific points? We need to choose the GraphQL branch for the test. We'll start a scan based on any push to the GraphQL branch. We'll spin up the testing environment, install dependencies, and set up environment variables. Then we'll start a new scan specifically for header security. The target is the vulnerable GraphQL application. We'll wait for the scan to run, polling the status every 30 seconds with a 10-minute timeout, and use a breakpoint to fail on any issue. Once the scan starts, we can see it running in the UI.

Do you need us to go back to any specific points? And the API key? I still have the Nexploit token. Okay. Good job, everyone. So, Sakis, are you good? Sakis is all set. Great. All right, so there's a little bit of work to do up front, but once it's done, it's done. So yeah, good. All right, so once this is done, we have just one more thing to do.

So we go back to our forked repository. We need to choose the GraphQL branch, right? Because that's the actual one that we want to run the test with. We're going to go into the workflows a bit, just to talk about what is actually happening here. We're going to start a scan based on any push to the GraphQL branch. We're going to spin up the testing environment, install some dependencies, and install our NexPloit CLI tool. Then we're going to set those secrets up as environment variables. Then we're going to start a new scan — we're going to test specifically for header security. You would of course add more and more tests, but we wanted this to run fast so we can get the information and just see what's going on. The target, as I said, is the vulnerable GraphQL application — that's what we're going to scan. And you can see that we're using the repeater that was spun up as part of the repository setup, and the token that we configured. Then we're going to wait on the scan while it's running: we're going to poll the status every 30 seconds, with a timeout of 10 minutes, and we're going to break on any kind of issue. So we're using the breakpoint here, which basically says: if there is any issue, just stop everything. Oh, sorry — Mohammed just asked if we could share the actions that we're using. If you look above, you should have the link to set up an account, but I think he's looking for the GitHub link. The original GitHub repository is neuralegion slash examples — let me just share it here again. Right, so you just need to fork that. That's the repository. Okay, so once we understand what we're doing, we can go back to the main repository.
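The workflow being described boils down to something like the following trimmed sketch of the example action. The CLI flags and the npm package name follow NeuraLegion's published examples from memory, and the target URL is a placeholder — verify everything against the command list in the docs before relying on it:

```yaml
# Sketch of the example action (not a verbatim copy).
name: NexPloit scan
on:
  push:
    branches: [graphql]          # any push to the GraphQL branch starts a scan

jobs:
  security-test:
    runs-on: ubuntu-latest
    env:
      NEXPLOIT_TOKEN: ${{ secrets.NEXPLOIT_TOKEN }}
      REPEATER: ${{ secrets.REPEATER }}
    steps:
      - uses: actions/checkout@v2
      - name: Install the NexPloit CLI
        run: npm install -g @neuralegion/nexploit-cli
      - name: Start a scan (header security only, to keep the run fast)
        run: |
          SCAN_ID=$(nexploit-cli scan:run \
            --name "GraphQL CI scan" \
            --test header_security \
            --crawler "$TARGET_URL" \
            --repeater "$REPEATER" \
            --token "$NEXPLOIT_TOKEN")
          echo "SCAN_ID=$SCAN_ID" >> "$GITHUB_ENV"
        env:
          TARGET_URL: https://your-vulnerable-graphql-app.example  # placeholder
      - name: Poll every 30s, time out after 10m, fail on any issue
        run: |
          nexploit-cli scan:polling \
            --interval 30s --timeout 10m \
            --breakpoint any \
            --token "$NEXPLOIT_TOKEN" "$SCAN_ID"
```

The `--breakpoint any` polling step is what turns a finding into a failed build, which is the whole point of running this on every push.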
Remember the GraphQL branch, and then we're going to click edit here, and we're going to, I don't know, add a bug to the system. You can do any kind of edit or change to the readme, really — or even one of the YAML files, if you know what you're doing — anything that will push directly to the GraphQL branch and start the build process. Once you click on commit changes, you should see here that there is a build running, right? There is a CI job being run. And the moment it starts running, we will also see it here. So here you can go to Scans. Once we hit the part where we're actually making the call to start the scan, we will see our scan running here. Right now it's just building the container, grabbing some resources and environment variables — all kinds of DevOps magic. And once we hit the start NexPloit scan step, we will be able to see the scan running in the UI as well. Any questions meanwhile? Well, Mohamed says that it's the same GitHub link as the previous workshop.

CI Scan and GraphQL

Short description:

Our CI has reached the point where we are starting the scan. You can see the direct link for your scan and all the configurations. The scan is in progress, scanning for pages, mapping the target, and finding URLs and forms. GraphQL is a query language where we send queries and set parameters. You can play around with the vulnerable app and find resources on securing GraphQL. The scan is still running.

Yes, but we're using a different branch. Yeah, so we were showing the GraphQL version now. So that's going to take a few more seconds. Meanwhile... Sorry, just quickly, Mark says he's quite a bit behind. Where is the API key? You go to the top right hand side with the profile figure, maybe, Bar, you can show it. And you click, yeah. So you click user settings and here, you can create an API key. Just give it any kind of a name. As to not waste too much time, just select all in the sense of scopes, and click create. This is where you get the value for the API key. And for the repeaters, just take it from here. Go to the repeaters tab, click on create repeater, and you can just give it any name and you will have the ID here as well. Yeah. And just to say, just for those, there are two places to get an API key, one is in your organization tab on the left. But that's if you have one of our pro-tier licenses, I can't remember which one it is, but that will give you an organizational level key. This is user, a user API key, so there is a difference. So obviously you will be able to use that, but obviously when you want to start working on more of, sort of expanded projects, and collaborate, etc. with other teams and colleagues, then obviously you'll need an organization key, but that is the difference. So if you are looking in the docs, there is a difference between a user API key and an organization API key. So here in this example, we're using the user API key, not the organizational one, just in case that comes up.

So meanwhile, while we were explaining that, our CI, at least for me, the CI has reached the point where we are starting the scan. You can see that you also get the direct link for your scan. Here you can just click on that. It will take you to where you need to be. And there's the pulling, starting, waiting on issues, using all of the configurations that we have. So in a few moments you could see that the scan has started and we will be able to see all kind of interesting information about it. Right now you can go to configuration. And you can already see all of the settings that we made through the CI, which is pretty cool. So the scanned hosts, you can see who initiated it, which should be you, not me, hopefully. You will see the crawler target. And of course, the test that we actually wanted to run. And of course, all kind of configurations. And also your repeater. So that's your repeater ID, just so you know which repeater was actually used to run that scan. While the...

Yeah, so for me, it's now in the running state. It moved from pending, which means it's already in progress. We're using the crawler, so it will now start to look for pages to scan. It will map the target, find all kinds of URLs, and look at all the forms. And the most important part is that it will go over the GraphQL parts, which, as we know, is the interesting thing. As you most likely know, GraphQL is a bit interesting because, unlike other API styles like REST, this is a query language, which means that we send some kind of a query and we can set parameters on it. You can really play around with this vulnerable app if you want. It has some very good information attached to it about how to actually secure GraphQL — there are good resources here, and the GraphQL-in-40-minutes video is very interesting as well. All of that being part of OWASP is pretty amazing. So, this is still running.
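To make the "query language" point concrete: a GraphQL call is just an HTTP POST whose JSON body carries the query text plus a variables object — which is exactly what a scanner has to pick apart. A minimal sketch (the operation and field names here are made up, not taken from the workshop target):

```python
import json

# A GraphQL request: one endpoint, but the body carries an arbitrary
# query plus a dict of variables -- each variable is a potential
# injection point for a scanner. Names below are purely illustrative.
query = """
query FindPaste($id: ID!) {
  paste(id: $id) { title content }
}
"""
payload = json.dumps({"query": query, "variables": {"id": "1"}})
print(payload)  # this JSON string is what gets POSTed to /graphql
```

Unlike REST, the URL alone tells you almost nothing here — all the interesting structure lives inside the body.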

Understanding GraphQL Scanning

Short description:

In the scan, we detect and parse GraphQL queries correctly, breaking them down to the lowest and most complex levels. You don't need to import any schema or configure anything. We automatically find and map the GraphQL endpoints in your application. Rest assured that we know how GraphQL works and can scan it effectively.

Let me just see if we get some information... cool. All right, a bit about what we can now start seeing here. In the scan itself, other than the request average and the discovery of entry points, we also see passed parameters, which is all of the parameters we detected that can be tested. In the site map, you can see all of the pages we've already found — all kinds of resources, including the GraphQL endpoint, which I guess is the most interesting one for us. And I think one of the most important things about our capabilities is that while other tools look at something like a GraphQL query and say, okay, maybe it's some kind of JSON, maybe not, maybe it's just a random string, we actually know how to detect it and parse it correctly. And I don't just mean like plain JSON, where we have some kind of parameters — GraphQL has its own schema, with its own way of configuring things and its own names. So you can see that we really know how to break all of those down, so that we can run those kinds of tests and verifications at the lowest and most complex levels possible. So if you're using GraphQL, you can rest assured that we know what GraphQL is, how it actually looks and how it actually works. We know how to parse it and we know how to scan it. And this is all from crawling — you don't need to import any schema or do anything like that. We find those parts of your application automatically, map them, and show them here for you as well, so you can actually see what we're doing.
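Here is a toy sketch of the difference being described — treating a GraphQL body as an opaque string versus parsing it into individually testable parameters. This is not NeuraLegion's actual code, just the idea:

```python
import json

def graphql_injection_points(raw_body: str):
    """Toy sketch: recognize a GraphQL-shaped request body and surface
    each variable as its own testable parameter, instead of fuzzing the
    whole body as one opaque string."""
    body = json.loads(raw_body)
    if "query" not in body:
        return []  # not GraphQL-shaped; a real scanner would fall back to plain JSON handling
    return [("variable", name, value)
            for name, value in body.get("variables", {}).items()]

raw = ('{"query": "query($search: String!){ pastes(filter: $search){ id } }",'
       ' "variables": {"search": "test", "limit": 10}}')
print(graphql_injection_points(raw))
```

A real engine goes much further — it also parses inline arguments inside the query text itself — but even this naive version shows why "it's just a POST body" is not good enough for GraphQL.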

GraphQL Endpoint Detection and Schema Upload

Short description:

We have multiple GraphQL endpoints in the application, and we will detect them automatically. The main point for all GraphQL queries is usually the slash GraphQL, and the query syntax is the only thing that changes. The scan will run and find the necessary information. The crawler provides automation, but you can also upload the Schema to define the test scope. If you want to test the back end or APIs directly, you can choose the API schema for the API endpoint. We support open APIs Swagger and Postman collection. You can even edit the schema before the actual scan.

So a bit about— Are there any questions, actually, about what Bar mentioned just now? One second — we have a question in the Discord, under GraphQL: we have several GraphQL endpoints that we stitch together in the FE; how can we test them all in one go here?

So: we have several GraphQL endpoints that we stitch together in the front end — how can we test them all in one go? The beautiful thing is that you don't need to do anything. We will detect them. In this application, too, there are multiple GraphQL entry points, so you can see what we found. Let me just show you. We have multiple GraphQL endpoints here, and if you filter for 'graphql', I think it will come up with all of them. Let me just refresh that. We're still crawling the target, but I think it has four or six GraphQL endpoints, and once we finish scanning, you will see all of them mapped here. For each one of them, there is a unique set of parameters that you can verify and query and see that we have found correctly. So that's not a problematic situation for us — we built the logic around it. Usually everyone just uses slash graphql as the main entry point for all the GraphQL queries, and it's just the query syntax which changes. So it shouldn't be any problem scan-wise.

Okay, so that's the scan. It will run, it will find some stuff, and it will be done. When that happens, we will see that our scan either ends in a failure, if we found an issue, or finishes by hitting the timeout, because this application is quite large, so the parsing might take a bit more time. Yeah, just to add to what Bar mentioned: we used the crawler here, which obviously provides the most automation. If you were, however, to upload the schema, Bar, then that would be a lot quicker, right? Yeah. We wanted to show you both aspects, so that you can see that you've got a lot of automation there, but that you can also define the scope of the test by uploading the schema. I don't know if we've got that available here? Yeah, I will show you that as well. Just taking another resource into this example: we have Broken Crystals. That's a vulnerable application that you can also use in your tests. If you want to play around with it, it sits in the same example actions — you just change to the broken-crystals branch. You've already set up the secrets and the repeater, so there is nothing more for you to do. In that case you can also just click on the little pencil, change something, do the commit, and that will start a Broken Crystals build and run against it, which will also show you all kinds of vulnerabilities. So it's pretty cool. Anyway, if you want to do an API test and you don't have a UI, or you just want to test the back end or the APIs directly, a nice option we have is, when you choose the target, instead of doing automatic crawling, you can go via an API schema for an API endpoint. That way, you can just point at the actual schema. I'm just going to get the JSON. And I'm going here. Just to slow it down a little bit: you can see there are three options here. You can upload it from a file on your disk.
You can use a pre-uploaded file, or you can link to the file — if your documentation is publicly available, like Bar is showing now, you can just literally add the link and click import, which will fetch it and automatically parse the whole API. We support both OpenAPI (Swagger) and Postman collections. And you can even go into the schema itself and edit it on the fly. So, for example, if you want to scan only a specific POST endpoint, and you want to edit it before doing the actual scan, you can do all of that from here. You don't need to do anything special — it's all here, ready for you.

Scan Options and Optimizations

Short description:

You have multiple options for running scans, including using OpenAPI (Swagger) and Postman schemas, the crawler feature, HAR file recordings, and importing schemas. You can also choose from various templates or create your own. The scan optimizations allow you to customize the scan behavior, such as stopping the scan if the target becomes unresponsive or optimizing for speed and accuracy. You can control which parts of the request to test, including the body, URL query, fragment, headers, URL path, artificial URL query, and artificial URL fragment.

You can just take it away and have fun with it. So that's a pretty neat feature. Again, open API Swagger, and Postman. You can use any of those.

Other than that, we also have, of course, the crawler, which we just showed — you just put in the URL of what you want to scan and it will automatically map and scan it. And there's the HAR file. The HAR file allows you to do a recording with your browser, or with Selenium or any kind of QA automation, so that you can record a specific scenario and then scan it again and again, allowing you to really pinpoint specific parts of your application that you want to test. It's especially useful for things like: okay, we just added a new feature — I don't know, some contact-us form. You don't want to scan the whole application again and again; you just want to pinpoint those detailed scenarios. So either you can create that recording yourself, or you can import the schema and just choose the new endpoint, or you can ask your QA automation people to just slap that onto Cypress or Selenium and have it all in one go. All of those options are available.

One more thing that I just want to show: we talked about header security in that example action, but that's just one of all the things that we can actually test for. When you go to the tests part, you have all of the tests that we can actually do, which, as you can see, are quite numerous. So you can choose which ones make more sense and which are less relevant. And of course, one of your options when you start a new scan — sorry, let me just create a new scan — is to create it from... where are my templates... App Settings. Yeah, you see, we updated the UI — oh, that's Scan Templates, sorry. We updated the UI, and now with the zoom I can't find anything. But yeah, we have templates that you can just use to run scans: you can start from OWASP Top 10 or MITRE Top 25 — from 2018, '19 and 2020 — or run a Light Scan, Passive Scan, Deep Scan, or API scan. Those will automatically choose the relevant tests for your scenario and let you avoid wasting time — just test what is relevant for you. You can also create your own templates; you don't need to use the global ones that we've created. It's basically like creating a new scan: you choose, configure, and then just use that instead of configuring the whole scan every time. — Sorry, before you show them that, could we please just go through the scan optimizations in Create Scan? Because I did say that I would talk about it — well, actually I said that you would talk about it, and you haven't spoken about it yet. All right. So, attack surface optimization. When you create a new scan, you have this very nice little section about how to optimize the scan so that it will be faster and find more things, which is something that we always want.
So this part really gives you a lot of control over how you want to behave against the target you're scanning. For example, if the target might become unresponsive, and in that situation you want to stop the scan automatically, you have the option here: stop scan if target does not respond for a given time. You have the smart scan optimization, which, as you can see, uses a lot of automatic decisions — based on algorithms and contextual analysis — to decide which things to scan and which to skip. If you want a more robust scan, you can disable that, but it means you will pay for it in scan time. The optimizations are always there to try and narrow the scope — test less, but test what matters. Of course, if you don't care about time and you're not doing this in the CI/CD, you can just go Hail Mary and open everything up. 'Skip static parameters' lets you control whether we skip parameters that don't seem to change how the target behaves — values where changing them doesn't appear to affect the target in any way — so we can skip those. And if there are entry points in the target which take longer to respond and we don't want to waste that time — say, if it takes more than one second to respond, just skip it — we have that option here too. The last part is about which parts of the request we actually want to test. The default is the body, the URL query and the fragment; you can also add headers, the URL path, an artificial URL query and an artificial URL fragment.
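Two of those optimizations — skipping static parameters and skipping slow entry points — boil down to a simple decision per entry point. A toy model of that logic (not the product's implementation, just the idea):

```python
from dataclasses import dataclass

@dataclass
class EntryPoint:
    baseline_response: str
    mutated_response: str   # response after changing the parameter's value
    response_seconds: float

def should_test(ep: EntryPoint,
                skip_static_params: bool = True,
                max_response_seconds: float = 1.0) -> bool:
    """Toy model of two scan optimizations described above:
    - skip entry points that are too slow to be worth the scan time
    - skip parameters whose mutation doesn't change the target's behavior"""
    if ep.response_seconds > max_response_seconds:
        return False  # too slow: not worth the scan time budget
    if skip_static_params and ep.baseline_response == ep.mutated_response:
        return False  # parameter appears static; changing it did nothing
    return True

print(should_test(EntryPoint("<html>A</html>", "<html>A</html>", 0.2)))  # False: static
print(should_test(EntryPoint("<html>A</html>", "<html>B</html>", 0.2)))  # True: reactive
```

Disabling the static-parameter skip corresponds to passing `skip_static_params=False` here: you test everything, and you pay for it in scan time.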

Security Testing Solutions and Authentication

Short description:

We offer comprehensive security testing solutions for developers, particularly in the context of modern DevOps methodologies. Our tool supports various types of applications, including web apps, internal apps, and APIs like GraphQL, REST, and SOAP. We provide multiple ways of discovering the attack surface, such as crawling the application, using HAR files, and supporting Swagger/OpenAPI and Postman collection schemas. Our technology is designed to be user-friendly and optimized for developer needs, with built-in scan optimizations and the ability to configure scans directly from the CLI or via a global YAML configuration file. We prioritize accuracy by minimizing false positives, enabling developers to quickly identify and address security issues. Our engine automatically validates every issue, providing trustworthy results that can be actioned immediately. Our seamless integration with your pipeline ensures fast and efficient security testing without slowing down development.

All of those will increase the scope of the scan allowing you to test more parts of the target but of course will take more time to do so. Anything else, Oli, that you think we should cover?

Yes, I do: authentication. Oh — yeah, I mean, in Create Scan, I think we covered everything, right? Ah, sorry — in Create Scan, yeah, you can also schedule scans. These are all things, by the way, that are really self-explanatory; I'm sure you'll be able to pick them up. One of the things you can do is click here on your user settings: under Help you have both the knowledge base and the API documentation. Everything we've done here is fully controllable by API, and we have our own Swagger that you can access freely here in API Docs. We also have our knowledge base, which has information about our system, how to use it, the tests that we can do, how to do things, and how to integrate. For example, for CI/CD you can see that we support a lot of integrations: we just demoed GitHub Actions, but we also support Azure Pipelines, JFrog, CircleCI, Jenkins, Travis CI, and GitLab. You can also see in the example actions that we have a lot of examples. We played around with GraphQL, and I just told you about Broken Crystals, but we also have a CircleCI project set up to show you specifically how to use that kind of YAML, a scan with a HAR file, a scan that uses Swagger, and an ID enumeration test which showcases a business logic vulnerability, which is pretty awesome. Which is one thing, by the way, that you didn't focus on when you went through the scan setup. So maybe — I know I touched upon it before — maybe you, as the person that built the engine, can give them a little bit more detail on why these are so cool, why they're so important, and how we're leading the way here. So basically we have three sections of vulnerabilities: the standard common vulnerability tests, the business logic vulnerability tests, and third-party tests.
The business logic vulnerability tests test for things which require much more than just sending the right payload into the right value. We're looking at the context of the page that we are seeing, evaluating whether it's the right page for this kind of test, and then we try to play around with values while continually evaluating what actually happened from the eyes of a user. So we're using a lot of context analysis, and we're using some AI here. If you specifically want to know what kind of AI: in those cases we use some image recognition for detecting different views, which is pretty awesome — especially when you want to know whether something you did actually changed the state of the target. Also, we have our own web driver integration: we created our own proprietary driver for headless Firefox, which allows us to test those business-logic-based vulnerabilities by driving the browser around and making it execute all kinds of actions inside the pages. So again, that's pretty cool. And that's also what allows us, in the common vulnerabilities part, to be false-positive free, which is something that Oli talked about. When we're discussing CI/CD, and tests that should run quite quickly while also being very precise, having false positives is terrible, because it means we're failing the build for no reason. This is where having zero false positives becomes not just a nice-to-have but really critical for your success. So that was the tests — and regarding authentication: I guess you all have applications that you've developed, and most of those applications have some kind of login screen. Maybe not — and that's also cool — but I guess those of you who develop stuff like that also created some kind of a login mechanism.
Testing your website or application without configuring authentication means that you're just stuck on the login screen, right? We can't go further than that. And this is where authentication comes into play. We have a few forms of authentication: form authentication, header authentication, API call, OpenID Connect, custom multi-step authentication, browser-based authentication and NTLM — anything you might think of. Custom multi-step allows you to really script your way through as many authentication calls as you want, all the way to embedding parts of the responses into subsequent requests — you really have free rein over the flow of the authentication. There's also the browser-based authentication, which lets you very easily just put in the URL of the page that has the login, the names of the fields and what values should go in them, and start scanning. So we support all of those. And of course, if you need any information or any help setting those things up, we have documentation which is quite robust: all of the configuration options with screenshots, and some even have videos attached to show you, step by step, how to configure them — how to make them work, which setting goes where, and how they should behave. All of that is available in the documentation. We've really tried to make it as self-service as possible, and we have many users that never need to reach out to us, because the documentation really is clear, simple and easy to follow.
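The custom multi-step idea — run a sequence of calls, extract pieces of earlier responses, and embed them into later requests — can be sketched like this. It's a toy simulation: the real product configures this declaratively, and every name, URL and field here is made up:

```python
def run_auth_flow(steps, send):
    """Toy multi-step auth: each step is a request description; values
    extracted from a step's response are substituted into later steps
    via {placeholder} names."""
    context = {}
    for step in steps:
        # Fill placeholders like {token} with values extracted earlier.
        headers = {k: v.format(**context) for k, v in step.get("headers", {}).items()}
        response = send(step["url"], headers, step.get("body"))
        for var, key in step.get("extract", {}).items():
            context[var] = response[key]
    return context

# Fake transport standing in for real HTTP calls.
def fake_send(url, headers, body):
    if url == "/login":
        return {"access_token": "abc123"}
    ok = headers.get("Authorization") == "Bearer abc123"
    return {"status": "authenticated" if ok else "anonymous"}

steps = [
    {"url": "/login", "body": {"user": "u", "pass": "p"},
     "extract": {"token": "access_token"}},
    {"url": "/me", "headers": {"Authorization": "Bearer {token}"},
     "extract": {"result": "status"}},
]
print(run_auth_flow(steps, fake_send))  # -> {'token': 'abc123', 'result': 'authenticated'}
```

That response-into-request chaining is exactly what gets a scanner past the login screen and into the authenticated parts of the app.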

Support and Testing Assistance

Short description:

If you have any questions or need assistance, feel free to reach out to us via live chat or email. Our support engineers are available 24/7 to help you succeed. We also encourage you to run tests against the intentionally vulnerable application, Broken Crystals, and let us know how it goes. The scan against Broken Crystals took 36 seconds to find the first vulnerability, which was the open buckets vulnerability, providing access to AWS S3. The tool provides detailed information about findings, resembling a QA report.

If you do however have any questions, then — I don't know if you can see it on the screen — you can also reach out to us via live chat, and our support engineers are always there to help. We want you to succeed, so please feel free to reach out, either to support at, or by hitting the chat icon at the bottom right of the screen, and ask us any questions. Someone is available pretty much 24 hours, so we'll be able to assist as best we can.

Do we have, I'm conscious that Bar has been talking for a lot, but he's been covering really, really good, good ground, some really interesting topics that hopefully you guys all wanted to discuss. But are there any questions at the moment that you would like answering?

Bra in vision is typing: "Do these configurations live in your..." — might need to make this a bit bigger, okay — "...in your platform, or do you create a config we can put in our app repo?" Oh, he dropped it himself. So, which configuration specifically? I think he may be talking about the YAML configurations.

Oh, the YAML configuration. So you can pretty much just, okay, so you can just pretty much copy paste what we did in the example actions. It's just a very, very simple way to start that. But as we said, you can have all of this information and how to create those kind of things specifically for your application just by following the documentation. So even if- Click on command list even, okay. So you have command list and that's the place where you have all of the options for the command line. So you can just change it. You can see what's relevant for you, what's not, really what each option actually does. What we have is really just an example. Example actions, right? But it just shows a few predefined scenarios, but just changing that to, you know, in the end, right, if we go into one of those example actions, the only thing we actually have here, which is of interest, I guess, to all applications, is just installing our CLI and then showing you how to set up the environment variables, running the scan, which is this thing, and then pulling and waiting for issues to show up. That's it. That's the only part that you care about. And your target can either be Broken Crystal or my mega cool awesome application or some local thing that is running inside of the Docker Compose, here inside of the CI context, if that makes sense. Yeah. Makes sense. Thank you. Okay. Just in case you haven't got that open. No problem Mark, I'm assuming. Are there any other questions from anyone else? More importantly, has that worked for everyone? And as Bar mentioned, feel free to just run a test against Broken Crystals. It will find, I mean, it is an intentionally vulnerable application. Please don't send any bug bounty reports to us because they are intentionally in there. Bar's laughing only because we receive probably hundreds every day. And, you know, do let us know how you get on. Oh, we can see here actually that the scan against Broken Crystals that he did. 
Yeah, it finished after just a few seconds. Right, okay. So as you can see, this one is more scoped and more defined, so it really took 36 seconds to find the first vulnerability, which is the open buckets vulnerability — basically access to AWS S3 that allows you to list everything that's in there. You can really see examples of what we found: we enumerate the information and show it very nicely, and you can see the requests that we made and the responses. So we have a lot of that. The tool itself — not just the way we parse GraphQL or the way we crawl, but also how we give you, as a developer, the information about what we did and how we did it. All of the details about findings look a bit like a QA report; at least that's what I was aiming for.

Test Information and Debugging

Short description:

During the test, we provide actual information about what we're testing, the conditions, and the vulnerability. Our documentation provides detailed information on possible findings and how to fix them. We offer code blocks, examples, and verification guidelines. Additionally, we provide multiple ways to copy the request as a curl command, allowing for easy debugging and validation without running multiple scans.

So you can see what is checked during this test — you have the actual information of what we're testing and what we're doing, what our conditions are, and then what happened. So you know why you got this information and why we decided that there is a vulnerability here.

Also pretty cool, I think, is in our docs again: that's in... let's see... the vulnerability guide. All right. So in the vulnerability guide you have information about all of the possible findings that we have, and inside each and every one of those is information about how to fix them. For example, in broken JWT authentication, if you click on that, you really have — beyond the more in-depth information about the actual vulnerability — remedy suggestions with actual code blocks and examples of how to fix something like that: how it should look, what you should do and how to verify all kinds of things. So we really lean into the "you're a developer, we're talking to you as a developer" idea — from developers, to developers. That's the main idea of our DAST and our solution.

One thing we didn't show, by the way, just to show it now, and you can perhaps show it here, is the multiple different ways a request can be copied, for example as a curl command, just to talk a bit more about debugging. Sure. So for example, when I have this kind of finding, I want to be able to verify it. I can either click on the modify script, which gives you a curl-like capability here inside the UI. But if you want, you can also just take the request: you have copy as curl, raw request, URL, copy headers, and you can just copy that, put it inside your terminal and run it, basically getting the same information. So instead of running a new scan and a new scan and a new scan, you can just rerun a curl command until you manage to fix the issue and validate that, and then run one new scan to ensure that indeed there are no more issues.
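The copy-as-curl flow translates directly into a script as well. As a small, hypothetical sketch (the endpoint URL and headers are placeholders, not values from the workshop target), you can rebuild the same GraphQL POST the scanner made and replay it while you iterate on a fix:

```python
import json
import urllib.request

def graphql_request(endpoint, query, variables=None, headers=None):
    """Rebuild the GraphQL POST copied from a finding, ready to replay."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode()
    hdrs = {"Content-Type": "application/json", **(headers or {})}
    return urllib.request.Request(endpoint, data=payload, headers=hdrs, method="POST")

# To actually send it (against your own target only):
#   with urllib.request.urlopen(graphql_request(url, query)) as resp:
#       print(resp.status, resp.read().decode())
```

This gives you the same quick fix-and-retry loop as pasting the curl command into a terminal, just from code.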

Q&A and Summary

Short description:

Any other questions? Can you see how this can be used moving forward? We'd like to show you another example of a finding. It's a nice one that touches upon the levels our engine goes through to validate every finding. We really go far to ensure everything we detect is validated. It's a nice feature to have and something to rely on. We're happy to answer any other questions. Just a wrap up, you've hopefully signed up for our free tier account. You have more functionality with the SuperDuper account for two weeks. Don't forget the crawling for Seamless Automation. Upload your OpenAPI or Swagger documentation, Postman collections, and HAR files. Simplicity with YAML configuration files and you're good to go.

Any other questions from those here? Has your scan been successful? Are you all up to speed? More importantly, can you see how this can be used moving forward, whether for GraphQL or indeed not? Everyone's gone shy.

Okay, Bar, anything else that we needed to cover? Because I know that we like to give people a lot of their time back, and you can really see how simple it can be. Again, it can be used either as a standalone scanner or, more importantly, integrated into your pipeline to test every build, every commit, every pull request. But if there are no specific questions, then there's one thing that I'd like to show you, and then we can just wrap up with a bit of a summary. But speak now or forever hold your peace, as they say.

Okay, Bar, if I could just take... oh, you shouldn't let me, okay, fine. I'd like to show you, because I do like to show it if you haven't seen it already, another example of a finding. And it's a nice one that touches upon what Bar was mentioning with the proprietary headless browser that we've got. Because as developers, the last thing you want is to start chasing your tail, chasing ghosts, or, to coin a joke I invented at my last workshop, chasing the tail of a ghost. I'm going to trademark that, actually.

So, reflected cross-site scripting. It's a nice one to show you because it shows the lengths our engine goes to in order to validate every finding, which you're not going to get with other DAST tools. And this is really why we like to shout about this, because it's all about accuracy and actionable results for developers. You can see this particular reflected XSS has been found in the query by changing the value of the XML parameter; it identifies the value that was injected here. Okay? Now, we give the remedy suggestions that Bar already touched upon, we have further reading for this, and we have a diff-like view of the request: what we deleted, what we added. We have the headers and the response here, and you can see the header and the body. But you can see that we actually provide you with an automated screenshot of the popup that was executed with this specific payload. Okay? So this is just showing you what Bar was mentioning earlier: we really go very, very far to ensure that everything we detect is validated. If it's an SQL injection, we've exfiltrated the database name and perhaps a little bit more. And Bar, if you want to give some other examples of other validations that we provide, feel free to jump in.
But we're running this headless browser, we're looking for the reflection, we're taking a screenshot; you know it's there, fix it. No false positives, which really, really is important. So hopefully you'll all agree that's a very nice feature to have and something for you to rely on.

I am going to wrap up now; I just want to go through a summary slide, which will take me about two or three minutes. But before I do so, if any other questions have popped into your heads, obviously we're happy to answer those. So just as a wrap-up: you've hopefully successfully signed up for our free tier account. Do please give it a go. For two weeks you've actually got a SuperDuper account, our Pro something or other; you would have got an email confirming which of those accounts you've got for two weeks. That gives you a lot more functionality for you, your colleagues and your teammates to use. Don't forget, you've got the crawling for seamless automation; that's what you used in this GraphQL branch, so we just crawled it, we detected the GraphQL endpoints, and we started to test. You can upload your OpenAPI or Swagger documentation as well as your Postman collections, and the HAR files, so that you can start to leverage the existing QA automation you're carrying out already: upload the HAR files and run very, very scoped, defined tests. Simplicity: the simple YAML configuration files and you're good to go. Going back to Mark or Brain Vision's question: yeah, you can copy our one and then manipulate it for yourself, or go to our docs, look at the command list, and search for configuration files.
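For intuition, the reflection part of that validation can be reduced to: send a unique probe payload, then check whether it comes back verbatim (vulnerable) or HTML-escaped (fixed). The real engine goes much further, executing the page in a headless browser and screenshotting the popup; this Python sketch only illustrates the string-level check, with a made-up probe value:

```python
import html

PROBE = '<script>alert("xss-probe")</script>'  # unique marker payload

def is_reflected_unescaped(body, probe=PROBE):
    """Vulnerable: the probe appears verbatim in the HTML response."""
    return probe in body

def is_reflected_escaped(body, probe=PROBE):
    """Fixed: the probe only ever appears HTML-escaped in the response."""
    return html.escape(probe) in body and probe not in body
```

A bare string match like this is exactly why naive scanners produce false positives: reflection alone does not prove execution, which is why the headless-browser step matters.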

Configurations, Support, and Newsletter

Short description:

All the information is there for you to start configuring the tool. We have built-in scan optimizations, minimizing the need for extensive configuration. Our tool fully supports microservices, single page applications, web sockets, and APIs. We prioritize removing false positives, providing actionable results without noise or alert fatigue. Explore our organization's integrations, documentation, and use the system to scan your projects. For any questions or queries, reach out to us on Discord, email, or chat. Follow us on Twitter and read our blog for useful information and code examples. Don't forget to sign up for the newsletter.

All the information is there for you to then start to put together your own configurations. It's really about optimizing the automation there, but we also went through and discussed how we have built-in scan optimizations within the tool. And I hope you'll all agree there really is not much configuration with the tool. If you look at other DASTs, there can be hundreds of different configurations. Of course you can get a bit more granular with certain regexes et cetera, but we've really tried to make things as easy as possible.

Microservices, single-page applications, web sockets and, of course, APIs are fully supported. We've spoken about no false positives, but that really is one of the fundamental pillars of our technology. We remove those: no noise, no alert fatigue, actionable results, seamlessly integrated. So do click on Organization and look at the integrations that we have, or go to our docs. More importantly, please use and abuse the system to scan your projects accordingly.

If you do have any questions or queries, do put those in the Discord that you're already part of, either in the GraphQL Galaxy channel or in our tech support channel under Support on Discord. Email us at... And what else? Chat to us. Follow us on Twitter. Do read our blog, by the way, because there's some really useful information about specific vulnerabilities, where we provide code examples as well. And if you haven't already done so, do sign up for the newsletter.
