As a developer, you need to deliver fast, and you simply don't have the time to constantly think about security. Still, if something goes wrong it's your job to fix it, but security testing blocks your automation, creates bottlenecks and just delays releases...but it doesn't have to...
NeuraLegion's developer-first Dynamic Application Security Testing (DAST) scanner enables developers to detect, prioritise and remediate security issues EARLY, on every commit, with NO false positives/alerts, without slowing you down.
Join this workshop to learn different ways developers can access Nexploit & start scanning without leaving the terminal!
We will go through the setup end-to-end: configuring a pipeline, running security tests and looking at the results.
Table of contents:
- Learn what developer-first DAST (Dynamic Application Security Testing) actually is and how it works
- See where and how a modern, accurate dev-first DAST fits into CI/CD
- Integrate NeuraLegion's Nexploit scanner with GitHub Actions
- Understand how modern applications, APIs and authentication mechanisms can be tested
- Fork a repo, set up a pipeline, run security tests and look at the results
JS Security Testing Automation for Developers on Every Build
AI Generated Video Summary
This workshop introduces developers to security testing automation using NeuraLegion's Developer First DAST. It covers the challenges of application security testing, the limitations of static analysis tools, and the benefits of using DAST tools. The workshop includes hands-on exercises on forking a repo, running scans, analyzing results, and testing authentication mechanisms. NeuraLegion's DAST features include API security testing, automatic validation of findings, seamless integration into pipelines, and optimization of scan speeds. The workshop also covers setting up CI workflows, running scans, and analyzing vulnerabilities. Participants can ask questions and receive continued support beyond the workshop.
1. Introduction to Security Testing Workshop
This is a hands-on workshop on security testing automation for developers. We encourage interaction and questions in the Discord. We will provide continued support beyond the workshop. The agenda includes an introduction to security testing, and an overview of NeuraLegion and our DAST technology. We will then proceed with the workshop, covering forking the repo, running a scan, analyzing the results, and testing authentication mechanisms. All the necessary assets are available in the chat and Discord.
This is a very hands-on workshop on security testing automation for developers on every build. Again, it's going to be very hands-on. I think you will very quickly realize that we want this to be as fun, laid back and chilled as possible. But also we want you to be as interactive with us as possible.
So any questions, any issues, any jokes, whatever it is that you might want to throw out there, do so in the Discord ideally, because that way we can build up continuous conversation. You'll also find a lot of information on there. And like I mentioned, continued support beyond this workshop for any issues that you have. We do monitor it with our support engineers and basically the whole company to ensure that you're successful in your security testing.
So a brief agenda for today. I don't know if you're all at work, at home, whatever it might be, but we're going to go through a very brief introduction into security testing, why it's so important, a bit of an intro about NeuraLegion and about our DAST, just so you can understand it in a bit more detail. And then we're going to go straight into the workshop. So if you haven't already done so... I can already see a number of familiar names that have already signed up, which is great. But we're going to fork the repo, we've got the example actions there if you haven't seen it already, we're going to run a scan together, we're going to look at the results, understand the results and go through authentication mechanisms, how you can test APIs. Basically, by the end of this one hour, 40 minutes (I'll try and give you 20 minutes back of your time), you'll see just how quick and easy it is, and that actually you can now go away and start automating your security testing with our DAST technology. And what you'll need, it is in the chat, it is in the Discord server. Perhaps if you're watching this at a later time, these are all the assets that you'll need to play along with us, if you weren't able to do it live. But again, they're all in the chat and they're all in the Discord if you haven't seen it already.
2. Introduction to NeuraLegion's Developer First DAST
NeuraLegion is a developer-focused dynamic application security testing tool. It allows developers to build the scan surface from unit tests, schedule scans, and call scans as code. The tool automatically validates findings and provides developer-friendly remediation guidelines. Application security testing is crucial due to the vulnerability of applications and the growing attack surface. Static analysis tools have limitations and often produce false positives. Dynamic Application Security Testing (DAST) tools like NeuraLegion's Developer First DAST provide a comprehensive security scan by looking at the built application from the outside in. DAST tools can identify real-world vulnerabilities and conduct penetration tests.
So a quick word about NeuraLegion, if you haven't already done your homework, which I hope most of you have: we were founded in 2018. We are a global team of developers, security researchers, ethical hackers, I suppose this is something that we are also very, very passionate about, Bar's laughing because he spearheads that side, but very, very passionate about application security testing, but more importantly, application security testing for developers. And we really do feel that we are changing the way that AppSec is being carried out, typically done by security professionals as well as security teams, but actually we've been built from the ground up to really provide a developer-focused dynamic application security testing tool to test your web apps, your internal apps, your APIs, whether that's REST, SOAP, or indeed GraphQL, server-side mobile applications, and of course their corresponding APIs. And we really are about giving you, the developer, the ability to build the scan surface from your very first unit tests, staying within your environment. Carrying out scans, scheduling scans, calling scans as code, with the command list as part of the CLI, seamlessly integrated into your development pipelines. And one thing that we'll get onto, and I'm sure you're all putting your hands in the air and saying, finally: a tool that actually automatically validates every finding, so no false positives, and actually gives you, the developer, developer-friendly remediation guidelines, actionable results, removing the noise, so that actually you can start to fix security bugs early and often as part of your pipeline. I haven't even introduced myself. Oli here, VP at NeuraLegion, and we're joined today by Bar Hofesh, our CTO and co-founder. Bar, say hello. Hi everyone, nice to meet you. Just want to make sure you can hear me and your microphone is working. And it's also good to know that actually, I haven't just been speaking for three minutes and no one can hear me. What?
No, just kidding. Very good. So if we could all just, you know, just want to make sure everyone can hear. If you can put a hi in the Discord, ideally, if not in the chat, let us know where you're from. And again, any questions, queries... Favorite meme, favorite emoji, whatever. Whatever it might be. We're all here to have a nice relaxing hour and a half. And hopefully we're going to learn something. Which Discord channel, James? It is the TestJS one. So under Events, and then TestJS, and you should see it there. So let's... Oh, yeah. And a little bit of gloating. You can see here a sort of selection of customers that are using our innovative technology, and they range from government, defense, insurance, financial services, anything from startups with a team of two to eight developers, all the way up to teams with 500-plus developers, who are moving away from their legacy tools and moving to NeuraLegion. And we'll go through very, very quickly the differences and how we feel that we're changing the security testing space and making it very easy for developers to adopt it.
So first of all, just why is application security testing so important? A few quick quotes here, taken from Forrester's The State of Application Security. Applications are and continue to be, they always have been, and they probably always will be, the weakest link in terms of security. A large proportion of the attack surface that malicious users and hackers are going to be trying to exploit is going to be on the application layer. We're seeing a massive rise in the use of APIs, and that translates into a very, very different threat model and an exponentially growing attack surface. And we really need to make sure that our products are intrinsically secure by design. I'm sure many of you hate that time of the year when you get clobbered with a pen test report with issues that need to be fixed on things you worked on three months ago, six months ago, or a year ago. You're not stopping. You're developing new features, new products at breakneck speed, and security testing is something that needs to keep up. And that's why we talk about shift left. Okay, so shifting security testing left, earlier in the process, ideally into your hands, into the developers' hands, so that security testing can match your rapid release cycles, right? Integrate it into your pipeline, picking up issues early, fixing them at the most efficient time possible, and hopefully, the more often you're picked up on issues, the less often you're going to be making these mistakes. No one wants to produce insecure software, but it's really about being secure by design, finding issues as early as possible.
Now, let's have a look at some of the different types of security testing that you may already be familiar with, that you may be including in your pipelines already. And in fact, for those that are on the Discord, or put it in the chat: which of these tools are you already using in your pipeline? Are you using SCA, Software Composition Analysis, looking at your dependencies, looking at your libraries? Snyk, WhiteSource, JFrog, amongst many others, are really leading the way with this type of security testing. Pablo uses Sonar. Okay, Jalena is using Snyk, great tool as well. Really, really good to look at the libraries and dependencies that you're already using. WhiteSource, Checkmarx, wow! Okay, great. All the Israeli ones. They are actually Israeli, that's very, very true. But I noticed that no one yet has actually mentioned any DAST tools, which is quite interesting. If you're holding that back because I haven't asked for it, please put those in there as well. It would be good to try and understand what you're looking at, and perhaps we can look at the differences, or try and understand the issues and pain points that you've been experiencing so far and how our technology might be able to deal with them. We then have your static analysis, like Susanna, who uses Checkmarx, for example. SonarQube is another one that's just been mentioned in the Discord. But these are tools that are looking at your code base, looking for vulnerabilities, almost like a spell check, and they look at things in a sort of one-dimensional space. When you're looking at microservices, when you look at single-page applications, you know, the use of APIs, et cetera, while static analysis is a great tool to find things, there are two or three problems with it. Number one, they're plagued with false positives. Developers are often running around chasing their tail, chasing ghosts, or chasing the tail of a ghost, however you might want to say it.
You know, great, but actually it misses a lot of vulnerabilities, a lot of issues, because the compiled application, the built application, actually runs very, very differently. All the different microservices working together need to be looked at in a very different, dynamic way, in the compiled or built application, and this is where DAST, or Dynamic Application Security Testing, comes into play, like NeuraLegion's Developer First DAST. So we look at the built, compiled application, looking at it from the outside in, looking at it like a malicious user or a hacker interacting with your application, to try and find real-world, live vulnerabilities within your target applications. And this is how you can really do a very, very complete, comprehensive security scan. This is what your penetration testers will be conducting, either using automated tools like NeuraLegion's, or indeed doing things in a manual way, or perhaps in a manual way using other tools that they use for pentesting. So really looking at it in a three-dimensional way, looking at authentication mechanisms, being able to understand true logic-based attacks, for example. And Bar, I don't know if I've missed anything or you want to add anything to that? No, I think it was pretty, pretty comprehensive. Basically the differences between looking at the code and looking at the actual product. Once we compile, once we start running, you know, all of those microservices, all of those interactions between the different parts of the system become real, which means that something like a database connection to or from your application is something that a DAST can actually verify, right? Because when it's still code, it's just, you know, words, strings and text. There is no functionality there yet, so running a DAST actually means: all right, that's real, that's something which is there, and we can verify it and give you actual answers.
Yeah, I noticed that no one yet has mentioned which DAST they're using. Are you trying to keep us on our toes, everybody? Well, you're not using DAST.
3. Introduction to Participants' DAST Usage
If you're not using any DAST at the moment, please let us know. We want to understand your challenges and what you appreciate about the DAST tool you're using. This will enable us to have an interactive conversation and address your questions in real-time. We have participants who are already using WhiteSource and Checkmarx, and others who are interested in onboarding themselves. By the end of this workshop, you'll see how simple it is to start using our DAST technology.
If you're not using any DAST at the moment, please put it in the chat or the Discord. Not using one, that's why I'm here. It'd be great to understand if you are using one, what problems you've got, what you love about the DAST that you're using, and then that way we can have more of an interactive conversation. And like I said, we're not just gonna wait till the end to answer your questions. We want this to be dynamic, like as dynamic as our testing, and to answer these queries and questions on the fly.
So, Susannah has already mentioned she's using WhiteSource and Checkmarx, great. But not using them herself, just getting the report from another department, nice. Okay, good. This is actually something that, I think Susannah on the Discord mentioned she's a team lead on the dev side, you can actually onboard yourself and start using. Joseph, not using one at the moment. Well, I think by the end, in an hour's time, you're going to realize just how easy it is to start using our DAST technology. At least that's the plan.
4. NeuraLegion's DAST Features
We have looked at the pain points in the DAST market and built our technology to address them. Our goal is to enable developers to perform security testing efficiently.
So, what is it that we have done that's different? Okay. What is it that's special, let's say, about NeuraLegion's DAST? These are the sort of features that we're going to be demonstrating within the workshop, and this is something that you'll see as you start to use the technology, running us and benchmarking us against others, perhaps, if you're going to be doing that. But what we really did was look at the DAST market, the dynamic application security testing space, and ask: where are all the pain points? And I've been to meetups on pretty much every single continent, pre-COVID, of course, and I hope to go to many, many more over the next 12 months. But whether you're talking to penetration testers or whether you're talking to developers, they all have the same issues, right? And this is what NeuraLegion's technology has been built from the ground up to address. But more importantly, to enable developers to do that.
5. Overview of NeuraLegion's DAST Features
We provide full support for API security testing, including SOAP, REST, and GraphQL. Our technology enables you to run scoped, defined tests by leveraging crawling functionality or consuming HAR files. We validate every finding automatically, removing false positives and providing actionable results. Our tool integrates seamlessly into your pipeline and offers a user-friendly UI. We optimize scan speeds and minimize configuration requirements, making security testing accessible to developers.
So we looked at coverage, okay? As I mentioned in my introduction, we really make it very easy for you to run security tests against all your assets from an application-layer perspective. So whether that's your web applications, your websites, or your internal applications, maybe those that aren't public facing, giving you the ability to test applications that aren't public facing by using the Repeater or the CLI, which, if you haven't already done so, you would have seen when you signed up, and which we'll get onto in about three or four minutes' time.
We have full support for API security testing, whether that's SOAP, REST or GraphQL. A microservices-based architecture is not gonna be a problem for our DAST tool. And we'll go through the multiple different authentication mechanisms that we support. And we do this by enabling you, the developer, or perhaps your QA team, or indeed those of you from a security background, to use multiple different methods for discovering the attack surface.
So you may or may not be familiar with our crawling functionality. This provides you with the most automation. This is where our engine will crawl the target application, detecting all the entry points, extracting the parameters, building the attack surface, and then leveraging that to start carrying out attacks, mimicking basically an automated ethical hack. But we can also consume HAR files, HTTP Archive files. Now, some of you may or may not be familiar with these, but these are recorded interactions with the target application. Your QA team may already be using the likes of Selenium, Cypress.io, and other QA automation tools, generating their functional scripts.
And actually you can leverage these existing functional scripts, save these as a HAR file, and upload those into our engine, which we'll demonstrate, to then start running the tests. What this enables you to do is run scoped, defined tests, because whatever's included in the recording defines the scope of the test, and it won't go outside of that. So you could record an interaction, perhaps with a login process, or a search form, or a contact-us form, for example. Maybe it's gonna be a checkout process in your shopping cart. And this is also valid when wanting to test your APIs, or perhaps your single-page applications; basically, that is your API schema. We can consume your OpenAPI documentation or indeed your Postman collections, to really enable you to use multiple different discovery methods to build the attack surface and to run these scope-defined tests.
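To make the HAR idea concrete, here is a minimal sketch of what such a recording contains and how it defines scope. The file below is a hand-written HTTP Archive with two recorded requests; a scope-defined scan would only attack the endpoints listed in it. The host `example.test` and the file name are placeholders for illustration, not anything from NeuraLegion's documentation.

```shell
# A minimal, hand-written HAR (HTTP Archive) file. Real ones are exported
# from browser dev tools or QA tools like Selenium/Cypress and contain far
# more detail (headers, bodies, timings); only the shape matters here.
cat > login.har <<'EOF'
{
  "log": {
    "version": "1.2",
    "entries": [
      { "request": { "method": "POST", "url": "https://example.test/login" } },
      { "request": { "method": "GET",  "url": "https://example.test/account" } }
    ]
  }
}
EOF

# List the recorded endpoints: this is the entire attack surface a
# scope-defined scan would cover, and nothing outside it gets touched.
grep -o '"url": "[^"]*"' login.har
```

Running the snippet prints the two recorded URLs, which is exactly the scope the recording hands to the scanner.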
And this, going down to point three, enables you to run things at speed. Instead of scans running for hours or days, you run scans that last minutes or a very few hours, subject to the size of the scan itself. By the way, if you have any questions (I'm going a little bit fast because I wanna get to the workshop, guys), then please feel free to put those in the Discord, where Bar is already very active. I've got the chat open as well. But usability and adoption. Everyone talks about shift left. Okay, so we've already discussed your SCA, very much done towards the left, by developers. Static analysis, absolutely, it is already pretty much done by developers, on the left. So what do people really mean when they talk about shift left? For us, or for me anyway, and for a lot of the customers that I showed you before, what you want to do is shift the DAST part, or shift the manual testing where they're going to be leveraging DAST tools anyway, into the hands of developers. And this is really where our technology comes into play.
So you can configure scans directly from the CLI. Okay, this is going to be the Repeater, which we're going to install in a few minutes' time once we start. And you can control the scan as code. Okay, so I'll show you the full command list. Bar, just make a note, perhaps you can show the CLI, just to showcase how that looks from a developer's perspective. Building the scan surface from your very first unit tests. Leveraging those mock files to build the scan surface as early as possible to start carrying out the tests. But one fundamental pillar of our technology that you won't find with other DAST tools, and the biggest pain point, whether you're a developer, a pen tester, or a security architect, it doesn't matter what you are: removing the alert fatigue, removing the noise of false positives, is really, really key. The last thing you need is to go through that manual validation process that can take days or indeed weeks.
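Purely as an illustration of the "scan as code" idea, a CLI-driven scan typically looks something like the sketch below. The command and flag names are assumptions modelled on the nexploit-cli pattern described in this workshop, not copied from its documentation; the workshop itself demonstrates the real commands, so check the CLI's own help output for the exact ones.

```shell
# Illustrative sketch only -- command and flag names are assumptions;
# run `nexploit-cli --help` for the real interface.

# Start a scan from the terminal: the token authenticates against the
# SaaS, and the crawler URL tells the engine where to build the attack
# surface. The target host is a placeholder.
nexploit-cli scan:run \
  --token "$NEXPLOIT_TOKEN" \
  --name "scan for commit $GIT_SHA" \
  --crawler https://example.test

# Poll the scan and break the build when an issue at or above a chosen
# severity shows up -- this is the "scan as code" gate in a pipeline.
nexploit-cli scan:polling \
  --token "$NEXPLOIT_TOKEN" \
  --breakpoint medium_issue \
  "$SCAN_ID"
```

The design point is that both steps are plain terminal commands, so the same two lines drop straight into any CI script without a UI in the loop.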
In my experience, in our client's experience, what happens is you normally tend to do it by hand or just push it to the side. You just ignore the output because you don't actually know what's real and what's there. And that actually leads to a continual technical and indeed security debt because you're just constantly shifting it. Well, it doesn't really matter. From a security standpoint, we need to release this product, we need to release this new feature, but we've got 500 vulnerabilities here that we don't really know what's true or what's not. But we've told our customers that we're going to be putting this new feature out. So we have to do it, we take that risk.
So what we've done is we've built a tool that automatically, not manually, validates every single finding. It verifies every finding, removes the false positives, and only gives you actual results that are there, that you can action immediately, without that manual validation process. So every time a vulnerability is detected by the NeuraLegion scanner, you know it's there, and we'll show you the proof of concept and all the information that you need to then go and remediate immediately, or at least now you can start to prioritize. I think prioritization is the biggest issue, and when you look at CI/CD, when you look at DevOps, which many of you may be implementing now, you might have a mature methodology or you might even be very, very mature, this is still going to be a blocker. It's still going to be a bottleneck, this manual validation. You'll see that NeuraLegion has completely removed that, so you can start to fix early and often. Integrated seamlessly across your pipeline; we'll showcase some of the integrations that we have. We also have an API, to be as integrated into your environment as you like, but more importantly, you can stay within your environment, and security can have a nice glossy UI, which we'll also touch on.
I've already alluded to a few things about speed. So, scope-defined tests: there are multiple different scan optimizations built into our technology that will include or exclude irrelevant tests, irrelevant parameters that are just going to slow down the test. We're all about saving you time, maximizing or optimizing scan speeds so that you can test early and often without slowing you down. And from a payloads perspective: everyone's talking about shift left, let's put security testing into the hands of developers. But developers are developers; developers aren't, and shouldn't be expected to be, cybersecurity professionals. You're not expected to be penetration testers. And actually, what you find with a lot of these tools, DAST tools in particular, is that they're built for security professionals. A lot of the DAST tools have been around for a very long time. As I mentioned, the customer list that I showed you, they all used the legacy, traditional DAST, if we want to call it that, but are moving across because they want to put security testing into the hands of developers: to make it easy for you to use, easy for you to understand the results, and to get accurate results that are actually actionable. But you can't put a tool into a developer's hands where you have to configure every scan for every target. So you'll see that our technology needs very, very minimal configuration, and it's our engine that does a lot of the heavy lifting.
6. Security Testing and Account Creation
We cover SQL injection, cross-site scripting, and business logic tests. Our technology allows for automated testing of security risks and vulnerabilities. DAST can be done earlier in the development process, involving developers and QA. NeuraLegion integrates with various tools and systems, providing actionable results and automating ticket generation. We also offer a chance to win a Nintendo Switch by signing up and running a scan. Now let's begin by creating a new account on our application.
So I mentioned the scan optimizations, for example. But we can also generate, therefore, the right tests against the right target, not to slow you down, and, more importantly, so that you don't need to do that configuration. Let our engine do all the heavy lifting. You'll see here that we cover SQL injection, cross-site scripting, and we'll go through the testing categories. But you'll see here that we also carry out business logic tests, okay? We're the only DAST tool that can do this in an automated way. This is typically what's carried out manually by your penetration testing team. But this looks to bypass the validation mechanisms within your application or APIs. It looks to break the logic within your applications and APIs. So not your typical security issues per se, like trivial attacks or injections. How can I sign up for a Bronze tier and get admin rights? Can I manipulate a date in a reporting field to cause a denial of service? A big security risk, for example. And these are the things that you can have in an automated way with our technology. And this is what it should look like. And we're going to be getting on, by the way, to the workshop in a few minutes. We're pretty much there.
Normally, DAST, Dynamic Application Security Testing, is carried out at stages four and five, perhaps by your security team or by a third party. Actually, with NeuraLegion you can do this a lot earlier. You know, trigger the CI, leverage those unit tests I mentioned. Subject to the maturity of your team, or your methodologies, your QA can actually now start to be part of it. So yes, we're built for developers, but also for QA. You don't need to be a security professional. They're already creating these HAR files, these HTTP Archive files, with their functional scripts, so leverage those to now start running security tests. But ideally, and this is what we're going to be doing today: push a commit to GitHub, trigger the CI, initiate a DAST scan. The build will either pass or fail based on the severity or issue type that you've included in the configuration file, which I'll also go on to; as I mentioned, you can start configuring scans as code with these YAML-based global configuration files. But at least you know that with NeuraLegion your builds aren't going to be failing for false positives. They're going to be true, real issues. And then automatically a ticket can be generated in JIRA or your other ticketing system, without having to put it in a queue, wait for it to be manually validated and then generate a JIRA ticket. And from a compliance perspective, from a security perspective, from a prioritization perspective, when looking at your pipelines, this really, really is fundamental and key to understanding what needs to be fixed and when, right? You'll see we have integrations with Slack, we have integrations with Monday.com if you're using that, as well as a multitude of others.
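The "push a commit, trigger the CI, initiate a DAST scan" flow can be sketched as a GitHub Actions workflow. The workshop's example-actions repository contains the real, working version; the action names, inputs, secret names and target URL below are assumptions for illustration only.

```yaml
# Hypothetical sketch of the commit-triggered DAST flow described above.
# Action names and inputs are placeholders -- use the example-actions repo.
name: dast-scan
on: push

jobs:
  security-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Kick off a scan for this commit, named after the commit SHA.
      - name: Start Nexploit scan
        id: start
        uses: NeuraLegion/run-scan@v1
        with:
          api_token: ${{ secrets.NEXPLOIT_TOKEN }}
          name: GitHub scan ${{ github.sha }}
          crawler_urls: '["https://example.test"]'

      # Fail the build only on real, auto-validated findings at or above
      # the chosen severity -- a red build always means a true issue.
      - name: Break build on findings
        uses: NeuraLegion/wait-for@v1
        with:
          api_token: ${{ secrets.NEXPLOIT_TOKEN }}
          scan: ${{ steps.start.outputs.id }}
          wait_for: on_high
```

Splitting "start the scan" from "wait and gate" is what lets the severity threshold, not a human review queue, decide whether the pipeline proceeds.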
Welcome everyone and thank you for joining us. This is the beginning of the actual technical workshop. There are two things that we need to do, and we're going to do them together, side by side. By the way, this nose, it's my dog. You can say hi. That's great. This is being recorded, so that's going to live on for eternity now. Nice. So, you have two resources. We also put them in the Discord. One is our GitHub example actions, it's this repository, and the other one is our sign-up page, right? Oli will also push them through both Discord and the Zoom chat. Right? I just put it on the Zoom chat. By the way, something I didn't mention, which is a bit naughty: if you want to win a Nintendo Switch, we do have a Nintendo Switch giveaway, and the best thing about this is, if you sign up for an account, which you're going to be doing now, because obviously you're doing our workshop, we'll put your name in the hat three times. And what we're also going to be doing in this workshop is running a scan. And if you run a scan, we'll put your name in the hat 10 times. So you're actually 10X-ing the chances of you winning. So yeah, it'd be great for one of you who are at the workshop to win. We get some serious questions in the Discord. I guess I'll have to answer that. The sword, yeah, it's basically used for testing for broken vulnerabilities. You just hit the server until something breaks and then you're like, vulnerable! So yeah. What is that, is that like a nod to your D&D or something? It's a sword. Okay, so we're going to begin with creating a new account in our application. So again, the address is here on my screen: app.neuralegion.com, sign up. And you know, just... The link is in the chat and the Discord, so just feel free to join up. Basically, we have a few options here. We can sign up with GitHub, we can sign up with Google, or we can sign up using an email.
I'm going to sign up with an email because I've exhausted all other options through all of those demos. So I'm going to use the email option. First of all, you will need to put in your name. So put in Bar Hofesh. But yeah, no. Like use your own name. I know you want to use mine, but it's just so you'll know it's you.
7. Account Creation and Repeater Setup
To create an account, input your email and password. Verify your email through the confirmation link. Join the Discord or chat to participate. Create an organization to collaborate and share vulnerabilities. Start the onboarding wizard to install the nexploit-cli tool and the Repeater. The Repeater allows scanning of local environments. Choose to run the CLI and Repeater in Docker, NPM, or Windows. Docker is recommended. Run the provided command and verify the response.
The next thing is the email. So whatever email works for you, or if you used one of the other options. I'm just going to use my Gmail, which I'm going to totally change so that it will actually let me create an account. I'm going to input the password. Everyone look away please while I'm inputting the password. Thank you. Right. So we need a good password, so it can't be your birthday or something like that. Make sure it's something proper. Thank you. Thank you. It is a good password. And we're going to create the account. Once we click on create a new account, we're going to get a confirmation email, which means I'm going to scoot over to my email and click on the verify link, which should open this new window. If that opens up, it means that, hooray, you're signed up now. Can we please have a hands up or a yes or something on the Discord or chat, whichever you use. If you're using Discord, do it in there, please; if not, do it in the chat. It'd be great to see who's actually playing along and who isn't. Thank you, Joseph. Yeah. If you're not playing along, just put a reason why at least. I'm working, I can't be bothered, whatever. I don't trust Oliver. All is okay. It's good to know who's joining in with us. Sebby already created an account. James is with us, Paulio is here. Salut, Sim. Sebby, you know, you're making me look bad with the speed that... You see, everyone has already finished the workshop. They're done and they're looking at me, this grandpa. All right, so I'll start- Josep, hurry up, come on. I'll start moving faster, okay? I got you guys. All right, so- This is turning into a standup comedy thing as opposed to a technical workshop. They're all waiting for you, come on. All right, all right. So logging in, we have that create your organization section. What's the point of the new organization?
So I can just read it down if you want, but by and large it just means that this is the workplace that you can collaborate in. Basically, if you want to invite your friends to your organization, you want to create collaboration with your team, you want to share your vulnerabilities — either you're all working as a team on some open source project or in one company against one product, or you're a team of pen testers doing bug bounties and you just want to share the things that you found so you can say, I found more, right? Whatever it is, we accept, and you can create this organization and invite your friends. So I'm just going to create — no, I didn't want this, I want this one. So, bars.org. Those who excel will get an invite to my organization, which I will most likely delete after this demo, but you'll still get an invite, which is a show of respect. So I'm going to create this organization, where we will start the onboarding wizard. What's the point? This little wizard will show us how to install the nexploit-cli tool, which is basically the command line interface to start new scans. This will also allow us to start the repeater, which will let us scan local environments. Basically, if you have some kind of a project running locally, even from the first lines of Node.js or HTML, and you just want to run your server and test it at that stage, the repeater will allow us to do it. It's a kind of little proxy that allows you to run the scans locally. It is, by the way, an open-source repeater or proxy, for anyone wondering. And because we are a SaaS technology — software as a service, that is, where you don't need to have all the infrastructure, et cetera, around it — this is also a sort of workaround for being able to scan and test those internal applications.
So if you're wondering how this might work within your environment, your security architect, or whoever it may be, can actually lock down the repeater, control all the inbound and outbound traffic, et cetera, which then enables you to start scanning your non-publicly-facing applications, your development environment, whatever it might be, so that you have that additional layer of security without having to worry about using a SaaS-based technology. And because it's open source, you can just look at the code and see what it's doing there. It's pretty cool, I think. All right, I guess our next step is to decide how we want to actually use the repeater. I've just seen Pablo's message. I'm still trying to ignore it. I can't concentrate now. All right, so we have three options for running the CLI and repeater: either in Docker, in npm, or, if you want to use Windows, you can. I don't know, that's your own choice — respect everyone's choices in life, even using Windows. So you have all of those options. Personally, I'm just going to use the Docker option. Just copy that, run it in your terminal, and make sure that the response makes sense. So you run it, you see that it returns some kind of a version. If it says anything — 8.6 or five or two, whatever — it means it's working. Obviously, if it's npm, you can just install it through npm. If it's the Windows installer, then you can just download the MSI installer. Next, yeah, I did that.
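For reference, the three install routes Bar describes look roughly like this. The Docker image and npm package names below follow NeuraLegion's public docs at the time and may have changed, so treat them as assumptions and prefer whatever the onboarding wizard shows you:

```shell
# Option 1: Docker — run the CLI from the published image (image name assumed)
docker run --rm neuralegion/nexploit-cli --version

# Option 2: npm — install the CLI globally (package name assumed)
npm install -g @neuralegion/nexploit-cli
nexploit-cli --version

# Option 3: Windows — download the MSI installer from the wizard, then verify
# from PowerShell or cmd:
#   nexploit-cli --version
```

Any version number in the output (8.x, 5.x, whatever) means the install worked, which is all this step checks.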
8. Running Connectivity Test and Forking the Repo
Next, we're going to run a test to see the connectivity. The wizard has created a repeater ID and an API key for us. The ID allows you to choose which repeater to run a scan with. The token is super secret, so keep it safe. The downloaded nexploit-cli file is the MSI installer for Windows; it installs an npm environment and our CLI. If you have Docker or npm already installed, you can use them. Otherwise, use the Windows installer. Once connected, you can refresh the status. If you've reached this point, the workshop is done. Clicking done will bring you to the organization and dashboard. The most important feature is the dark theme, but you can also use the high contrast or light mode. We now need to go to our GitHub account and fork the repo. Make sure to choose the broken-crystals branch. There's a question about the technical differences between the three options and whether Docker requires Kubernetes.
Thank you. Next, we're going to run a test to see the connectivity. You can see that our wizard has already created a repeater ID and an API key for us. We're going to talk about it. The ID is used so that, if you have multiple installations of the repeater, you can choose whichever repeater you want to run a scan with. The token is super secret, so never share it with anyone — especially not in some kind of a recorded event showing a lot of people your screen. So, like in Lord of the Rings: keep it secret, keep it safe. That's the idea. What's the downloaded nexploit-cli file for? I guess you mean the MSI installer? That's if you want to install the MSI, right — the Windows installation. What it's going to do is install an npm environment and then use our CLI so you can run it on Windows. If you have Docker on Windows, you can just use Docker. If you already have an npm environment installed on Windows, just use npm. If you don't have any of those, use the Windows installer, why not? So I'm going to copy that, paste it into my terminal, and wait for it to connect. And once it's connected, I can just do refresh status and see that it's connected. If you got to this point, it means all done, right? The workshop is done and you can go home. Thank you very much. Great having you all here. WSL is a very valid option, by the way, Ray. So if you're using WSL, you can either install Docker there or npm, but then you don't need the Windows installer, right? Because it's the Windows Subsystem for Linux. All right. So clicking done will bring us — ta-da! — to our organization and dashboard. Now you have two main options here, and to be honest, that's the most important feature of our product. The most important thing to do now, actually, is to zoom in. Oh no, okay, sorry. So, all right.
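The connectivity step above boils down to one command: start the repeater process with your API key and repeater ID. A sketch, assuming the flag names from the public CLI docs (`--token`, `--id`); check `nexploit-cli repeater --help` for the exact ones:

```shell
# NEXPLOIT_TOKEN — the API key from the wizard; keep it secret
# REPEATER_ID    — identifies which repeater installation this process is; not secret
nexploit-cli repeater --token "$NEXPLOIT_TOKEN" --id "$REPEATER_ID"

# Docker equivalent (image name assumed):
docker run --rm \
  neuralegion/nexploit-cli repeater --token "$NEXPLOIT_TOKEN" --id "$REPEATER_ID"
```

Once the process connects, the refresh-status button in the wizard should flip to connected.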
So the most important feature is the dark theme. So if it looks like that and it burns your eyes, then just remember it's here and you can enable dark theme, right? But if for accessibility needs or for anything else you like the high contrast one, then sure, you can also use this one if you want. Yeah, let's have a vote. Can we do polls on here, I wonder. Dark mode or light mode. Let's see — I think I know what the answer is going to be, but let's... Just write it in the Discord. Thumbs up means dark, and thumbs down means light, yeah, so... Well, four for dark mode. So just... Grey mode, all right. Okay, I'm gonna put in a feature request specifically for that, and we'll see if we can include it in our next sprint. Okay. Pink mode, that's a good thing. We should do that for... I've got a few ideas around that. Now you need a full, you know, Photoshop-style editor where you can put in your RGB and fully template it... No, I'm not developing that. Anyway, once we got here... Nineties mode — sorry, Pablo. Pablo's won it. Nineties mode would be good. Almost like a Mario Kart version, with like 60 levels of toolbars up top, a lot of viruses and, you know, links to all kinds of tools that were installed automatically by all kinds of adware. I remember those days. Anyway, for us to move forward, we now need to go to our GitHub account. Woo-hoo. So let's just push on. Right? I'm going to send all of you this link again in the Discord and also here. Everyone, make sure to come here and fork the repo. Why? Because this is what we're going to showcase and show all of the interactions with. So I'm going to also do that with you now. I'm going to click fork and choose my user. Yay. Right. Now, what we care about is the broken-crystals branch. Right? We have many branches — we're going to talk about it — but right now we want the broken-crystals one, which will also be the default.
So I guess it should be okay right now. We have a question: what is the difference between the three given options from a technical perspective? Which one manages resources better — memory and so on? Does the first option, Docker, need the Kubernetes tooling to run? So...
9. Setting Up CI Workflows and Secrets
In this workshop, we create a CI process using the repeater, set up a vulnerable environment called Broken Crystals, run a scan locally using Docker Compose, and analyze the vulnerabilities and UI. We created the user for NeuraLegion and forked the repo. Next, we set up environment variables, including the repeater ID and the Nexploit token. To do this, go to Settings, then Secrets, and add the repository secrets. Copy the repeater ID from the Repeaters section and paste it. For the Nexploit token, create an API key in user settings with the necessary scopes. The organization API key is for larger teams. Finally, configure the CI workflows and enable them.
That's not one question. No... Oh, we have a question. Once we have a question about Kubernetes, it means that we finished the workshop. Basically, in regards to what is best, I think the best answer here, it depends, right? It depends on your environment. If you have high availability needs, you have clustering in your environment and you're a huge enterprise and you want to run all of it together and create redundancy and high availability, Kubernetes running multiple images of the repeater sounds like the best approach. If you're a developer that already, that just wants to check their own code locally and you already have an NPM environment set up already, just use the NPM approach, because why not? If you have Windows, because I don't know why, then just use the MSI installer or the Windows Subsystem Linux. All of those are options. It depends. All right. So once we fork the repo, we have two things that we need to do. First, we need to go into Actions, right? And click on I understand my workflows, go ahead and enable them. Nice. And then I saw last time that they also added something new. So there is a workflow called CI and enable workflow. Okay. So both of those enable workflows and then click on CI and click on enable that one because otherwise it's not going to run. That's the first thing that we need to do. I'm not even going to answer this one. All right. So once this has been configured, let's talk a bit about, what are we actually doing here, right? What are we doing in this workshop? We are going to create a CI process where we're pulling the repeater, you can call it the local scanner. We're going to set up a vulnerable environment called broken crystals. We're going to run a scan against this environment locally, right inside of the confinement of the CI using Docker Compose. And then we're going to see how, one, we find vulnerabilities and it stops the build, basically fails the build, and also how it looks in the UI. 
So right now what we did is we created the user for NeuraLegion, which is great because that's our first order of business. And then we forked the repo, which is the second thing. Just going a bit over the flow again — it's really just that. So we set up the environment, we install the npm tool, then we set up a few environment variables, which we're going to do right now. Then we start the scan, wait on the results of the scan, and stop the scan in case of failure. So our next step is to create those environment variables. We have two secrets that we need to set up: the Nexploit token and the repeater. Where do we actually get those, and how do we set them up? Let's go to Settings in our repository and then go to Secrets. Inside Secrets, we want to add a new repository secret. The first one we're going to add is the repeater. Now, to make sure that we don't have any typos, let's just copy-paste, because you know how it is. Just copy-paste the word repeater from the YAML file, and then we're going to go into the account, click on Repeaters, and create a new repeater. This is important, because this one is already running. So unless you want to stop that one in your CLI — which is okay, you can do it, because we're not going to use it — and then you can use its ID. If you didn't, or you just don't want to risk it, just create a new one; call it, my cool new repeater. Here you can say, I don't know, whatever you want. Okay, and I'm going to create this repeater. I'm going to copy the ID — basically just double-click on that — go to the secrets, and paste. All right, making sure it's all here, and click add secret. Right, it's here, which means we got one down, one more to go. The other one is the Nexploit token. I'm going to also copy-paste that into the Discord and into the chat.
The chat, and we're going to take this one from, if you click on your user settings and right here, scroll down, yeah it will be recorded. Scroll down to manage your user API keys, create API key, give it some kind of a name, demo API key. Alright, whatever name you want, as you can see, we really like to allow whatever text you want to add there. Now there are specific scopes that we need. For this to run, we need, okay, don't worry Udo, I'm going to go back over this step again. I'm just going to ask if everyone's up to date, do we need to slow down? All right, so let's do it again. So from the main screen, click on repeaters. This is where we create a new repeater. Then going back to the main screen, there is the little user profile. Under that, there is user settings. Click on user settings, scroll down all the way, there's create API key. Once this is clicked, it will show you that create API key screens. Just want to say here actually, that we've taken you specifically to the user settings to create your API key. That's because with the free account that you've signed up for, you have the ability of creating a user API key. Now, as part of the larger subscriptions to our technology, that's where actually if you went to organization, you'd be able to get an organization level API key. Perhaps if you have previously been playing around with the technology, I know that a lot of you signed up for the account last night or earlier today, you need the user API key, not the organization API key, otherwise it will not work. Or you may realize that you can't create an organization API key because you don't have that functionality enabled with the free tier subscription. Why would they want an organization API key, by the way, Bo? The organization API key is specifically good for situations where you have a large company and a large team. 
For example, you don't want to risk a CI with a user being deleted or suspended or for example, someone leaves the company and then their user is deleted from the system and suddenly your CI doesn't work because the API key is invalid. Then organization level API key is just our system keys. They stay there until they are being manually deleted. We'll talk about it. Technically, you need to put there the repeater ID.
10. Setting Up API Key and Repeater
We created the API key and the Nexploit token. The API key is secret and grants access to various functions. The repeater ID can be obtained from the organization's Repeaters section. Now, let's move on to starting the scan, or CI job. Make a change in the README.md file and commit it to trigger the workflow. The account is completely free, and there are no restrictions on the capabilities of the scanner. The scanner itself is not open source. To run the repeater, follow the instructions provided in the Discord channel.
But if you did- We'll get onto. We were already there, but I'll go back to there again just to make sure. Continuing- Oh, the hands. No, you did the hands again so that means you are getting down to business again.
We created the API key. There is a specific scope. You can read about it in the knowledge base right now, just select all because we don't want any issues. This is the most insecure way of doing it, but right now we're just demoing. If you want, you can just select all, click Create, and you get to the API key. Now, this is a secret API key, especially if you enabled all of the scopes, this API key will allow you to start scans, add users, do all kinds of things in the organization. Make sure this is kept secret. Don't share it, don't show it, don't leak it anywhere, don't paste it on Twitter and say, woo-hoo. This is secret. Copy it before you close the window.
Then in the Actions secrets, this is the Nexploit token. Make sure this is copied, this is here, and then you can do Add Secret. About the repeater — where do we get the repeater? You go to the organization, you click on Repeaters, and you can create a new repeater here. You can just copy the ID and paste it. That's everything we need for the repeater. Can we get a yes, thumbs-up again? It'd be good to know that we're not going too fast. If we're going too slow, let us know. I can see from the back end a lot of activity. A lot of people seem to be with us. Great, Jamie, Halina, Stan, Harikrishna. Wonderful. Okay. Timor, Mahomed. Great. Joseph, wonderful. All right.
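As an aside — since the theme of the workshop is not leaving the terminal — the same two repository secrets can be added with the GitHub CLI instead of the Settings page. The secret names must match what the workflow YAML references (REPEATER and NEXPLOIT_TOKEN here); the values below are placeholders:

```shell
# Add both repository secrets from the terminal (run inside your fork's checkout)
gh secret set REPEATER --body "your-repeater-id"
gh secret set NEXPLOIT_TOKEN --body "your-user-api-key"

# List secret names to confirm — values are never echoed back
gh secret list
```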
So, now is the money time. If we managed to set up everything okay and everything is working smoothly, our next step is to start the scan, or basically start the CI job. How do we do that? We'll go to the README.md, click on the edit button, and change something. In this case, I don't know, I always just love to add a bug. Let's be honest, I usually just add many bugs, because that's what we do in development. Don't let Oli know.
Is this completely free and open source? Is what completely free and open source, Teymel? Are you talking about the scanner? While we get a clarification on that, I'm going to make any kind of change — you can do any change you want — and click on commit changes. Once I've done this, we should already see that there is an action starting to run. Update README should have kicked off a job. What's happening now is that we're basically running the workflow. Just again, for those who weren't with us... Zoom in. Zooming in. And just to answer Tymor's question: the account that you've registered for is completely free. You can look at the different tiers on the NeuraLegion website, by the way, but this is a completely free account that you have signed up for. There are, of course, going to be some limitations, like the organizational keys that I mentioned, some sort of reporting limitations, but you get the full-blown DAST in all its glory. There are no restrictions that we've placed on the comprehensiveness of the security testing; we want you all to benefit from that. The scanner itself is not open source. We are a corporate entity with a proprietary technology, so no, that's not open source, but you are free to continue to use this account for as long as you like. Enjoy it. Start running tests. Like I mentioned, there are no restrictions on the capabilities of the scanner, okay? And, once Bar jumps onto the UI, you'll see you get all the testing categories and the logic-based attacks included from a security standpoint. Pleasure. Pleasure. Okay, Bar — configuration file. I'm just answering a few things here, basically: if you've already signed in, or you want to know how to run the repeater, I just answered in the Discord. That is the way to run it manually. Here we're running it inside of the CI, right? So what we do is get the npm package and set up the secrets.
Then we get the environment set up through Docker Compose. Then we run the nexploit-cli — this is how you actually run a scan through the CI. You can also run it through the UI once you've connected the repeater; we're going to cover that as well.
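The scan-start step inside the CI reduces to a single CLI call along these lines. The flag names follow the public docs but should be verified against `scan:run --help`; the target URL is whatever your Docker Compose file exposes, and `localhost:3000` is an assumption here:

```shell
# Start a scan against the locally composed Broken Crystals target, via the repeater
nexploit-cli scan:run \
  --token "$NEXPLOIT_TOKEN" \
  --repeater "$REPEATER" \
  --name "GitHub Actions scan" \
  --crawler http://localhost:3000
```

The command prints the new scan's ID, which the later wait-on-results step uses.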
11. Running a Scan and Configuring Breakpoints
The scan has been started with a specific ID and is currently in pending mode. The scan will wait until a high severity issue is found or a timeout of 10 minutes is reached. The breakpoints and timeout configurations can be customized based on the specific scope. The tool provides 100% validated results without false positives. The length of the scan depends on the size of the application. Running short, incremental tests against every change is recommended for immediate feedback and validation. The technology is designed to enable organizations to perform security testing efficiently.
But this actually gives you some information about how to run a scan. Then you have a command example of how to wait on a scan with a specific ID until it finishes or reaches some kind of condition. We also have the full documentation in the Discord, or we just shared it there. We have all of the options. I'm also going to put it in the chat, by the way. So what I've shared on the Discord is all our commands from our docs. I'll put that in the chat for any of those that aren't on there, so that everyone has visibility.
All right, so what happened right now is that we built the environment with Docker Compose, we got some logs running, and we started a scan using the configuration that we set up. The scan now lives here. So: scan was started with ID, and you get the ID — the ID is right here. If you go now to your UI, you will already see that there is a scan in pending mode. This means that you've been successful and you've got a scan running. You can see right here that we are waiting on the scan to actually start, right? Waiting for issues. It's going to wait until the breakpoint has been reached. What's the breakpoint? That's a good question. It depends on what we configured. Right now, in this workflow, the breakpoint is set as breakpoint high severity issue. So on the first high severity issue we find, we're going to fail the build — basically exit with a nonzero status code. Do note that we also have the timeout option, which means that if in 10 minutes we didn't find any kind of vulnerability, we say: all right, don't waste our time. We will continue scanning this application, or do whatever we want in our own flow — just don't stop the CI. And this will release it as if we didn't find any security vulnerabilities. What are the optimal configurations here? It's really dependent — thank you, Vader — on your own configuration. Right. So if you say, I don't mind waiting as much time as needed, you can just remove the timeout, and then the only breakpoint is the scan ending or the breakpoint being found. The breakpoint can be either high issue, medium issue, or any issue. And then someone says, on each open AWS S3 bucket. They say that. I don't believe that. Anyway, let's go to our actions. We can see that the action has failed, which generally would be bad, but for this demo it means it's good. Let's see. So we got to the breakpoint: the CLI found the first high severity issue. Oh, oh, no. Yeah.
So we will go after the after this example, and we will go into more details and how we can change it and how we can configure it. Right now, it's a good general example, but it really depends on your specific scope.
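The breakpoint-and-timeout behaviour just described corresponds to the CLI's polling command. A sketch — the option values (`high_issue`, the 10-minute timeout format) are assumptions based on the docs, so verify them against `scan:polling --help`:

```shell
# Block the CI until the breakpoint is hit, the timeout elapses, or the scan ends.
# A hit breakpoint makes the command exit non-zero, which is what fails the build.
nexploit-cli scan:polling \
  --token "$NEXPLOIT_TOKEN" \
  --breakpoint high_issue \
  --timeout 10m \
  "$SCAN_ID"
```

Dropping `--timeout` gives the "wait as long as needed" behaviour Bar mentions: the only remaining stop conditions are the scan ending or the breakpoint firing.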
All right. Now, let's see how it looks in the UI. Sorry, one thing is just worth bearing in mind. I just want to always go back to the value part from a developer's perspective — perhaps if you're watching this later, or maybe you weren't listening before. Going back to the breakpoint, by the way, and the timeout. First of all, don't forget that any issue the engine detects has been 100% validated by our engine. Okay? For us, a false positive finding is a bug. Okay, we don't present false positives. So anything that you detect is there, and you'll see, when we go through the results, the proof of concept and the validation that we provide for you there. That applies to your breakpoints too — high, medium severity — your build's not going to fail for a false positive, and you can be confident of that. Going back to the timeouts, by the way: you might think that 10 minutes is a long time, or a short time, but it really depends — the size of the application is going to determine the length of the scan that's required. But if you're using the scanner as you should be — testing every build, every commit, perhaps using a HAR file or your API schemas (Bar, we need to make a note to show the API schema editor) — you can run a test on a single entry point with a few parameters, and 10 minutes is going to be more than enough for that, right? So run short, incremental tests against every little change, every sort of thing that you did, and get immediate feedback, 100% validated. You don't need to do that manual validation to know it's there. And that's really how the tool is supposed to be used. And I know many of you might not be at that stage.
A lot of organizations that we speak to aren't at that stage because they didn't know how to do it or they realized they weren't able to do that with the scanners that they were using. And this is really what our technology has been built for. So hopefully you can see the value in that and how you can start to really utilize it from that perspective.
So I guess at this point, we want to look a bit about how the scans look. So, yeah, we got our CI process canceled, right? It failed. Usually, this will be inside of a PR and not a direct commit to the master main branch. Right? Right? I'm sure no one here pushes directly to master. No one will do that thing.
12. Explanation of Scan Results and Stopping Criteria
In the UI, we see general information about the scan, including the time it took and the severity of the issues found. We stopped the scan once we found the first high severity issue. The configuration parameters for running the scan are visible, including the repeater ID. We stop after the first high risk to fail fast and save time. It's similar to failing a build for formatting issues or unit test failures. The decision to stop or continue the scan depends on the specific situation and requirements.
Right? All the team leaders are nodding now. Who says no? Okay. So, not allowed, exactly. So this will happen inside of a PR, and then you will have the information to say: this is a failure, you need to fix that. Of course Pablo pushes to main. He was talking about orgies before, so it makes sense he pushes to main. All right, so what does it actually look like in the UI? If we go into our scan, we will see a few interesting things. First of all, we get just general information: how much time did the scan actually take, from the scan engine's perspective? We found the first high severity issue in 25 seconds. That's pretty cool, right? 27, sorry. It's a seven, so... Maths — telling the time was never his forte. English isn't my main language, I'm allowed to make mistakes. You are, you are, but only one per workshop. And so that's it, that's the last mistake; otherwise it's a red card. Damn, all right, I'll be extra careful now. So we got the elapsed time, we got the average scan speed. You can see that technically the scan didn't complete, and that's okay, because we stopped it. It's in stopped status because we said: fail once we get to the first high severity issue. So we found two low severity issues and then one single high severity, and when that happened, we just said stop, we don't care anymore. We reached our breakpoint; from here on out, this PR, this commit, is already canceled. We need to first fix it and then test it again. Salad China said that his took 10 seconds. Hmm. I don't know what the hmm means, but there is a clear explanation as to why that might be, by the way. The clear explanation is that he's just a better scanner than I am — just a better person. I'm sorry, I failed you all. Going into the configuration, you can actually see what the parameters for running the scan were: the crawling targets, and you can see which security tests.
Okay, we're asked: is there a reason why we stopped after the first high risk instead of running to completion? Yeah, there is a good reason, and I'll talk about it in a second. So — this is the configuration, and you can see your repeater ID here. This is also why I said the repeater ID isn't a secret; it's just a unique identifier, so you know which repeater you actually want to use for the scan. But it's not a secret, and it's going to be quite visible in multiple places. Let's talk about the issues that we found. Before that — is there a reason you stop after the first high risk and not let it run completely? Let's consider the following situation: you have an application with 200 API endpoints, you just did a massive refactor, and you want to test all of it. It's exactly the same logic as with formatting issues or unit test failures: you want to fail the build, fix it, and then run it again. You just want to fail fast, because if you say, okay, security is one of my unit tests, one of my criteria for a successful CI, and you get into a situation where you know this won't be merged, there is no need to waste time, so to speak, right? It's fail-fast logic. Obviously, you can also go with different configurations. You can say: listen, don't fail on anything, just continue on and I'll manually do something in the end. So basically, if the scan finished and we didn't find anything, sure, we can send it on to merge; but if we find something, it's something that we will need to fix. And if we need to fix it and run CI again, there is no point in the CI continuing to run. But it really depends. You can even just scan the production environment if you want — you don't need to scan in the CI specifically. It really depends on what you have, right?
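The fail-fast reasoning above is ordinary exit-code plumbing, nothing scanner-specific. A minimal sketch with a mocked scan step standing in for the real polling call (the function name and exit code are hypothetical):

```shell
# run_scan mocks the CI's security step: exit 0 when clean,
# non-zero when the breakpoint (first high severity issue) is hit.
run_scan() {
  findings="$1"
  if [ "$findings" = "high" ]; then
    return 50   # hypothetical breakpoint exit code
  fi
  return 0
}

# The CI treats any non-zero status as a failed step and stops immediately,
# exactly like a failing unit test or formatter check.
if run_scan "high"; then
  echo "build: continue"
else
  echo "build: failed at security gate"
fi
```

Swapping the mock for the real CLI call keeps the surrounding CI logic identical, which is why security can slot in next to the existing unit-test gate.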
13. Running Manual Scan on Broken Crystal
We found an open bucket in Broken Crystals, which is a high-severity vulnerability. S3 buckets are often used for storing sensitive information, and if misconfigured, can lead to major security breaches. We will demonstrate how to run a manual scan on Broken Crystals and explain the different configurations and testing categories. Please note that Broken Crystals is intentionally vulnerable and not intended for bug bounty reports.
All right. Going into the actual issue — open bucket. Who said they found an open bucket? Oh my God, it's true. So apparently we have an open bucket in Broken Crystals. Two CSRFs. So I'll talk a bit, because I see there is a real need to understand the differences. Here, by the way, I also have the two CSRFs. Why do we actually sometimes have differences in timing? Unlike a SAST solution or a static code scanning solution, our target is live, which means response times in milliseconds, which URLs we pick up first, how the application behaves — those things affect the scan rate. So even if you run a Docker container with your application and it has one more CPU or a bit more RAM and it reacts more quickly, then it will change — not the results, but maybe a little bit of how the scan behaves. It's just expected, because you're testing a real live application and not just static code. Which is also exactly what gives us the capability to find stuff like this. Because what did we actually do in this test? We detected in the response that there is an S3 bucket. So we went here, right — we can see that this is the endpoint which has the configuration option. We went through it. We saw that there is an S3 bucket. And then we verified that it's actually open and leaky by listing all of its contents and even showing them to you here. So, right, we can actually... Well, wait for the real vulnerabilities. No, I'm just kidding. For those that can't see the Discord, Udo is scared to check a real project of his. So yeah, open bucket — even though it might sound like, okay, just an open bucket. And if we look at the findings: all right, this is a nice cool picture of a plane inside a forest. And we have maybe another picture here of a cow stuck in a tree.
Now, right now this is just a very embarrassing picture to have on your website — a cow stuck in a tree — because I don't know why you would host such a thing on your websites and applications. But consider that it might not just be a funny picture; there is worse. So I trust you, I don't need to explain. So yeah, this is just a silly picture. But usually S3 buckets are used for storing your customers' uploaded information — maybe PDFs, maybe all kinds of internal logs, all kinds of resources that you don't want people to start listing and downloading freely. So this is a pretty high-severity vulnerability, because it might allow people to do very nasty things. A lot of the most major security breaches and information leaks were due to open or misconfigured AWS S3 buckets — even the U.S. government has exposed citizen information this way, and I think it was the post office that was hacked and had a huge amount of citizen information in there. So this is a very critical thing when it's there. Other than that, we found a few other things — but because we only failed on high severity, we didn't fail the build on those. Still, there are a lot of interesting things out there.
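To make the open-bucket check concrete, here is a rough sketch of the idea — this is not NeuraLegion's actual implementation, just an illustration. An unauthenticated GET on a bucket URL returns an XML ListBucketResult when listing is public, and an Error document (e.g. AccessDenied) when it is not; a scanner can use that to verify the bucket really is open and enumerate what it leaks.

```javascript
// Decide from the raw response body whether the bucket is publicly listable.
function isBucketPubliclyListable(responseBody) {
  if (responseBody.includes('<ListBucketResult')) return true; // objects were enumerated
  if (responseBody.includes('<Error>')) return false;          // AccessDenied / NoSuchBucket
  return false;
}

// Extract the exposed object keys for the finding report.
function listExposedKeys(responseBody) {
  return [...responseBody.matchAll(/<Key>([^<]+)<\/Key>/g)].map(m => m[1]);
}
```

With the bucket from the demo, the second function is what would surface keys like the plane and cow pictures — or, in a real breach, customer PDFs and internal logs.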
Now, how can we actually scan the target and see everything that's going on there? This is what I'm going to show now: how to run a non-CI scan. Just before we go into that — are there any questions about the CI-based trigger? Doo, doo, doo, doo. No, please don't sing, come on. All right, I guess there are none. But I think it's good to show the create-scan flow manually, and just touch upon the different configurations, the things that we've mentioned, the testing categories, and maybe explain the logic-based stuff. And then we can perhaps go on to the authentication side. Right. So first of all, this is Broken Crystals. This is the thing that we're actually going to show the manual scan on. I'm just pasting the Broken Crystals link, because this is what we're going to scan, and I'll also explain what entry points and parameters mean as I go through. It's going to be pretty cool and interesting, so stay with us. A reminder about the API scan — yes, we will cover that as well. So, going into a manual scan, we're going to click on Create Scan. In Create Scan, we have two options. We have Standard, which is the more straightforward, no-configuration level of scanning. So for example, I have a website, I want to scan it with automated crawling: I'll just paste the URL, for example Broken Crystals. I can just use one of the open repeaters — I just started my default repeater — and I can run a scan. Specifically for Broken Crystals, because it's our own application for this kind of testing, you don't actually need a repeater at all, and the platform will allow you to start scanning it even without one. But for all other targets, at least in the free tier of our solution, you will need to run through the repeater. So just to confirm, Broken Crystals is an intentionally vulnerable application.
So don't start sending us bug bounty reports or anything along those lines about anything that you find. It's got a lot of vulnerabilities in there to actually enable you to test it — to run a complete scan from start to finish and look at the results, with a bunch of different vulnerabilities. I think we're introducing more and more different types of vulnerabilities in there, and it can actually help you understand vulnerability issues, remediation, perhaps where you've gone wrong. So I would even, right now, just run a full and complete scan against brokencrystals.com, come back and have a look at it later, and go through all the results, because it's really interesting. So, a bit about what we're going to do. We're going to run the scan first. I'm going to give it a name.
14. Scan Target and Test Types
We're going to use broken crystals as the scan target. We offer different discovery types, including crawling the website, uploading HAR files, and importing API schemas. Our system allows you to customize the scan parameters and optimize the scan process. We perform various tests, including open buckets, broken JWT authentication, LDAP injection, and business logic vulnerability tests. Our technology combines machine learning and context analysis to automatically detect and manipulate vulnerabilities. This advanced feature allows us to bypass validation mechanisms and break the logic within applications and APIs.
So, BrokenCrystalsDemo. And for Project, we're going to use the default one. We're going to talk a bit about projects later, but they're for bigger organizations. They allow you to gather all of the information about a specific application, and then later look at diffs, unresolved issues, and recurring problems — a lot of high-level information. So from a VP of R&D or team leader perspective it's really useful; as a single developer, I guess you will just prefer to use the standard scans.
Scan targets — so we're going to use Broken Crystals. A bit about our discovery types. The crawler is the most straightforward thing: you just put in the URL, and we will automatically find all of the entry points, all of the URLs in the target. We will just crawl the website, gather all of the information, and start scanning it. The other option is a recording, or HAR file. HAR stands for HTTP Archive. If you're using some kind of test automation — because this is TestJS, I'm sure all of you know Selenium, and all of you know Cypress.io, I hope — then all of those tools can export a HAR file of that automated testing. This file can be uploaded into the system, allowing us to test just that specific scenario: one single button press, logging in as one of the users, adding a new organization, or submitting the contact form on Broken Crystals, for example. So, if you go to Broken Crystals — I just opened my network tab — I can do something like, I don't know, subscribe to the newsletter. I'll click Subscribe, and as you can see, a request has been sent. I hope you can see that. Then I can just right-click on that and Save all as HAR. This is the manual way to do it. I can also refresh the whole site, and then it will capture everything, and again I can export all of this information as a HAR file and use that for the scan. The other option, the OpenAPI option, basically allows us to upload an API schema. Why would we want to upload an API schema? Let's say we have an application and we want to test a microservice. We want to test only the backend. We want to test something that doesn't have any kind of crawlable UI. We can use either a Swagger file, an OpenAPI specification, or a Postman collection to do automatic mapping of the target. So for example, in Broken Crystals, you can get the schema by clicking on API Schema and API Reference.
This will take you to the API. Then you can click on the JSON one, take this, just copy it, and put it here in Import File From URL. Click on Import and you will see that it automatically imports the OpenAPI file. If you click on Open in Schema Editor, you will see that we automatically parse the API schema, and we allow you to edit it, change it, and modify it. This is specifically useful if you want to say: okay, I just want to test only this single entry point. I don't care about the other ones — just this one, or those two, or just everything inside of, I don't know, App Controller, right? Then you just do Save and Close, and it will keep just the relevant parts of the schema that you care about. So this is one of the options. And of course we have the crawler, which I guess we already covered. I'm going to use the crawler for this example, just because I think it's the most obvious one. We have a lot of information here about which parts of the request we want to scan. By default we test the URL query, the fragment, and the body of the request, but we can add more. There are also a lot of configurations around scan optimization and other things we could go through, but that's, I guess, more advanced usage. If you want, these are probably the most relevant: "stop the scan if the target is not responsive for longer than X" — if the thing that you're scanning just goes down and stops responding, you want to stop the scan, because we don't want to waste time scanning and getting timeouts. Or "skip entry points if the response takes longer than X", which allows you to say: okay, there is an entry point that is a file download that takes forever, I don't care about it, just skip it. A bit about what entry points and parameters mean in the context of the scan, because it was asked. Entry points are unique URLs — each URL, split by method.
So a GET to this URL, or a POST to this URL with more information — each one of those is what we call an entry point. A parameter is an actual modifiable parameter. So ?id=1 is a parameter; and in a JSON body with all kinds of information, the fields inside the JSON are what we call parameters. I'm going to go through the sitemap in a moment and show you how it looks through the eyes of our scanner, but that's the general idea. Going to scan tests: those are all the tests that we actually do. You remember open buckets — open buckets is here. But as you can see, we have a lot of tests: anything from broken JWT authentication, LDAP injection, local file inclusion, brute forcing of logins, cross-site scripting, default logins, directory listing, secret tokens, server-side template injection, to file upload vulnerabilities — all of those are here. And if you were afraid before, then be more afraid now: there are also tests for business logic vulnerabilities. This is a very advanced kind of testing. Right now we're the only solution that actually does those automatically, because we combine machine learning, context analysis, and other techniques to actually understand the context of the server that we are scanning. And then we can manipulate all kinds of information in ways that usually require what we call human smarts, right? So business logic vulnerabilities are also an option, and— I love the way, Bar, you just scoot over that. Like, it's just like, yeah, you know. It's one of the most advanced features, but— You know, we just press down the pedal and we end up going at warp speed, whereas everyone else is going, you know. Yeah, as Bar mentioned, it's the ability to understand the context, understand the responses that we're getting back from the servers, from the application, and then being able to subsequently manipulate and change the attack vectors and process to try and break the logic, as I mentioned.
So for those of you that perhaps arrived a little bit late: being able to bypass the validation mechanisms that you have, break the logic within your applications, break the logic within your APIs. Bar, just quickly, for one minute.
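The entry-point and parameter definitions above can be made concrete with a toy sketch — this is not the real crawler, just an illustration of the counting rules: an entry point is a unique method + URL pair, and parameters are the modifiable inputs (query-string keys, JSON body fields) attached to the requests in a HAR file.

```javascript
// Summarize a HAR object into entry points and parameters, as the
// workshop defines them.
function summarizeHar(har) {
  const entryPoints = new Set();
  const parameters = new Set();
  for (const { request } of har.log.entries) {
    const url = new URL(request.url);
    // Entry point: method + URL, so GET /users and POST /users count twice.
    entryPoints.add(`${request.method} ${url.origin}${url.pathname}`);
    // Parameters from the query string: ?id=1 contributes "id".
    for (const key of url.searchParams.keys()) parameters.add(key);
    // Parameters from a JSON body: each top-level field counts.
    if (request.postData && request.postData.mimeType.includes('json')) {
      for (const key of Object.keys(JSON.parse(request.postData.text))) parameters.add(key);
    }
  }
  return { entryPoints: [...entryPoints], parameters: [...parameters] };
}
```

So a HAR with GET /api/users?id=1 and POST /api/users (JSON body) yields two entry points, and "id" plus the body fields as parameters — which is what the scan statistics are counting.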
15. ID Enumeration Vulnerability
ID enumeration is a common vulnerability where certain methods or approaches can leak information about users that should not be accessible. By enumerating multiple IDs, we can identify and prevent this vulnerability. Many enterprise customers have experienced IDOR vulnerabilities.
What is an example of ID enumeration? So, ID enumeration — I guess a lot of you are using this kind of approach, because it's very RESTful. Let's say you have, for example, users/5/profile, and based on the ID in the path after users, you know that you need to show the information about user ID 5. The thing is that sometimes these methods or ways of working might leak information about other users that you don't actually want exposed. So if I'm authenticated as user 6 and I suddenly change this in the URL and go to user 5, or user 4, or user 2, maybe I can suddenly see information about other users that I shouldn't be getting. This test validates that this situation does not happen. It enumerates multiple IDs, making sure that the structure remains the same — so we can understand the answer — but the information changes. And then we can tell you: oh no, there is an ID enumeration vulnerability. A bit about ID enumeration: we've already found it at a lot of our enterprise customers. This is exactly IDOR — Insecure Direct Object Reference. It happens, it's very common. So, you know.
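The enumeration check described above can be sketched in miniature. This is a simplified model, not the actual test engine: the endpoint shape, handlers, and in-memory "database" are all made up for illustration. The idea is: authenticate as one user, walk the neighbouring IDs, and flag any ID whose profile comes back successfully instead of being denied.

```javascript
// Toy user store: each profile belongs to one user.
const db = { 4: { owner: 4, email: 'dana@example.com' },
             5: { owner: 5, email: 'eve@example.com' },
             6: { owner: 6, email: 'frank@example.com' } };

// Vulnerable handler: never compares the requested ID to the caller (IDOR).
function vulnerableProfileHandler(authenticatedUserId, requestedId) {
  return db[requestedId] ? { status: 200, body: db[requestedId] } : { status: 404 };
}

// Fixed handler: only the owner may read the profile.
function fixedProfileHandler(authenticatedUserId, requestedId) {
  if (authenticatedUserId !== requestedId) return { status: 403 };
  return vulnerableProfileHandler(authenticatedUserId, requestedId);
}

// The "scanner": as one user, enumerate nearby IDs and collect any that leak.
function findIdorLeaks(handler, asUserId, idsToTry) {
  return idsToTry.filter(id => id !== asUserId && handler(asUserId, id).status === 200);
}
```

Against the vulnerable handler, enumerating IDs 4–6 as user 6 leaks users 4 and 5; against the fixed handler, nothing leaks — which is exactly the signal the real test is looking for.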
16. Authentication Mechanisms and Support
Veyda asks if the free plan provides enough scan hours for CI. The answer is a bit nuanced, but it's recommended to try it out. The free tier offers five scan hours, which should be sufficient for testing. Even if the scan hours are limited, using the tool can help find vulnerabilities and improve the security of a startup's product. Different authentication mechanisms can be set up to ensure a proper deep scan of the application. The tool supports various authentication types, including form authentication, header authentication, API call, OpenID Connect, and custom multi-step authentication. ReCAPTCHA can be bypassed for automated testing. The documentation provides detailed instructions on creating and configuring authentications. The NeuraLegion team is available to assist and provide support through various channels, including Discord, email, Twitter, and LinkedIn.
17. Overview of Projects and Organization Tab
Whether you're a small startup or an enterprise customer, we're here to provide support and make things smooth for you. Projects give an overview of scans and findings. You can mark findings as resolved or recurring. We once found a vulnerability in a Bitcoin exchange that allowed unauthorized transfers. The organization tab allows for configuration, including adding people, groups, and API keys. Various pipeline and ticketing system integrations are available. We're wrapping up soon and welcome questions. Code-based testing is for vulnerabilities in your code, while scanning tests the running application. Large organizations test their own solutions and third-party vendors.
Whether you're like Veyda with a small startup trying to get your product to market, or you're an enterprise customer, we don't shy away from support. We really are genuinely here to help, so don't forget to reach out to us. We're here to make things as smooth as possible for you. Yeah.
So, a bit about projects, because we said we were going to talk about them. Projects, as you can see, give you a high-level overview of all of the scans you did, based on hosts — basically, based on applications. So for example, here you can see all of the findings; you can look at them, you can even mark them.
So for example, if you say: listen, this environment — brokencrystals.com — is a testing environment, and we have the open bucket just for development purposes, please don't show me this vulnerability again — you can just click on Ignore. Or if you say: all right, my team has resolved this thing — then it will be marked as resolved, and if a new scan finds it again, it will be flagged as recurring, and you will get a notification that one recurring issue has been found. Which basically means someone lied, or just didn't do their job correctly — so that's a pretty important thing to know.
Does this work for blockchain-based software? I guess you mean whether this can scan blockchain things. If that's what you meant, then here's a funny anecdote. One of the first POCs we ever did, years ago — I love this movie. Yeah, all right. So one of the first customers we had, one of the first POCs we did, was with a Bitcoin exchange. Our system detected that the exchange's function for withdrawing a deposit from the exchange was not working correctly — some kind of client-side validation error — basically allowing you to automate a script to pull all of the Bitcoins from that exchange into your own wallet, without them even noticing, because it went bit by bit. So that was a pretty cool finding, one of the first things where we went: whoa, a nice and cool situation. I think they had something like $60 million worth of Bitcoin in their exchange, at a very, very low valuation at the time. I think they had around 5,000 Bitcoins in the exchange, and it was, I think, around $2,000 US per Bitcoin, or maybe $10,000 — whatever it was, it was a lot of money. It was not fun, and it was discovered in production. Obviously they fixed it, but those things happen, so scan your... That was the logic-based one. Yeah, it was one of the business-logic-based vulnerabilities: basically transferring Bitcoin to your wallet even though there was zero Bitcoin in the account. It's like going into your online banking with no money in it and being able to transfer money into another bank account of yours.
All right, so the next thing, I guess, is accounting and billing. This is where you pay us money. No, I'm just kidding. The next thing is the Organization tab. The Organization tab is basically where you can do all kinds of organization-level configuration. If you want to add people, groups, team members; if you want to do — not gonna answer that — all kinds of configurations which are domain-specific or organization-specific. You can configure roles, you can create API keys. Obviously, these are the organization-level API keys, and unless you have the right license for the organization level, they won't be available. You can also see all the possibilities for pipeline integrations and ticketing system integrations that we have: anything from GitHub, Xgen, CircleCI, Jenkins, Azure Pipelines, Travis, GitLab, JFrog. For ticketing we have Azure Boards, GitHub Issues, GitLab, Slack and Jira. Wherever you want your vulnerabilities to go, or whatever pipeline you want us to run in, the possibilities are here for you to configure. So, yeah. We're going to be wrapping up relatively soon — I don't want to keep you all for that long. Do we have any questions about anything that you've seen? Put them in the chat or the Discord; obviously we can answer them now. Were you holding tight, waiting until we got to the end? Before I move on — don't leave yet — I want to show you all a really, really cool feature: our automatic validation of a finding, which I think you'll really, really like. And then I'll just wrap up with a bit of a summary. Udo asks: code-based versus scanning — when do you do what? You read that, Bar? In the Discord? Yeah, I'm reading that. So, code-based versus scanning — and scanning also applies when you don't have access to the code.
So basically, this is what we talked about with SAST versus DAST. You have two things to do: one is to test your code for vulnerabilities, and the other is to test your application for actual runtime vulnerabilities, because there are things you can only see when the application is actually running. For the code-based side: if you're using Node.js, for example, you know how when you pull in packages with npm, it tells you, oh no, this package has a high-severity vulnerability? That's the code-based side — where you actually test your code, or see reports about your code. If you use Snyk or other SAST tools, you can see things about your code. And when you do scanning, that's DAST: Dynamic Application Security Testing, running an actual test against your running, live application. It can happen either against some kind of Docker image inside the CI, or against some website that you have. Obviously, it's also an option when you don't have access to the code. So if you have a WordPress website that you're hosting, or any kind of website where you didn't write every line of code yourself, it's not a bad idea to run a DAST on it too, just to make sure that everything is okay. That's what a lot of large organizations are doing: they're testing both their own solutions and also solutions that they use from third-party vendors. Yeah, absolutely. And that's why, going back to the slide we had before, with such a broad range of...
18. Demo of Crawler Scan and Automated Validation
We carried out a crawler scan against Broken Crystals and found numerous issues, including SQL injection and reflected cross-site scripting. We provide details of the injected value, remedy suggestions, further reading, and possible exposure. You can review the request, copy it as a curl command, and access the response. We also provide automatically generated screenshots of identified vulnerabilities. If you haven't already, sign up for our Nintendo giveaway and access our documentation and videos for additional assistance. Security testing automation with NeuraLegion's technology is easy and simple. Share your thoughts on Discord and consider integrating our technology into your pipelines. You can automate crawling, leverage QA automation, upload API specs and Postman collections, and configure scanning scenarios using YAML files. Our technology removes false positives and provides immediate, validated results. Prioritize and fix vulnerabilities without wasting time on manual validation. Start securing your applications and integrate our technology into your pipelines. Thank you for the positive feedback.
Well no, just a broad range of different clients — whether they're startups with two or three developers, right up through to tier-one banks. "When can we get a demo about how to use the sword? Is it hard to put it back on the wall?" Let me just share my screen for one second, actually, because what I would like to show everyone is another example of a scan that we carried out. So this is a crawler scan against Broken Crystals — a much slower scan that we carried out just for demo and configuration purposes. But you can see here, if you were to run — and hopefully you've all run the Broken Crystals scan just by putting the URL in manually — you can see there are a lot more issues. We actually used the OWASP Top 10 2017 template here; we have multiple different templates that you can utilize. But what I want to show you is some of the issues. You can see here we have SQL injection. Reflected cross-site scripting is always one that I like to see — because how do you know that the reflected cross-site scripting, that XSS, is actually there? A little bit different to the open S3 bucket that Bar showed you. You can see here that we provide you with the value that was injected within this specific malicious scenario. Remedy suggestions: okay, how can I fix this? Further reading: in this instance, these are all articles and knowledge-base entries around XSS that we've written ourselves. Possible exposure: okay, what does this mean if I'm not a techie, if I'm not a cybersecurity person, if I've never heard of XSS before? You can see here we provide a diff-like view of the request, so you can see the value that was injected. You can, by the way — I don't know if Bar showed you this — copy this request as a curl command, copy it as a raw request, copy the URL, copy the headers.
Okay — all the information's there for you to go straight into debugging mode. We give you the response: the headers and the body. But one thing, if I just refresh this page — one thing that we also provide: Bar mentioned the XSS that was generated, that produced this popup, right? We actually provide you with an automatically generated screenshot of that popup. So we're running a browser in the background looking for the reflection of this cross-site scripting. A malicious user could, for example, take your unassuming customers to a site or a page that they control, requesting credit card information, requesting personally identifiable information — whatever it might be. It could be a page that clones and mimics the site that you have. The lengths that we go to to validate the issues — to automatically validate these issues — you can see here: we're actually going through it, providing you with a screenshot. You know it's there, 100% validated, so you can now start to fix it, knowing the thing is real. Yes, Hare Krishna — screenshots too. Is there anything that we're not going to provide you? If you have any ideas of things that we're missing, by the way, let us know — it'd be good to see if you can think of anything. But yeah, if you haven't already done so — this is the boring stuff — go to the link in the chat and the Discord. You might as well enter your email address and sign up for our Nintendo giveaway. Like I mentioned, because you've been through this with us, because you're going to run a scan — you have until, I think, early next week to run your tests — we'll put your name in the hat 10 times. All of our information and all of the support can be found at docs.neuralegion.com — D-O-C-S dot neuralegion.com. You can find all the information you need there. We also have all the videos on our YouTube channel.
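The reflected-XSS validation described here can be sketched in a few lines. This is only a minimal model of the reflection check itself — the real engine drives a headless browser and screenshots the resulting popup, and the marker value below is a made-up placeholder. The core idea: inject a unique marker payload, then confirm the response echoes it back unescaped.

```javascript
// Build a probe with a unique marker (a real scanner randomizes this per request).
function makeProbe() {
  const marker = 'xsstest123'; // hypothetical marker value
  return { marker, payload: `<script>alert('${marker}')</script>` };
}

// The page is vulnerable only if the payload comes back verbatim;
// an HTML-escaped echo (&lt;script&gt;...) is safe output encoding.
function isReflectedUnescaped(probe, responseHtml) {
  return responseHtml.includes(probe.payload);
}
```

Pairing this check with a browser screenshot of the executed alert is what turns a suspicion into a 100%-validated finding with zero manual triage.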
Everything from creating a header authentication object, installing via npm, connecting the repeater — but you've done that already — adding a new user, setting up a custom role, using API schemas, setting up a scan with Postman collections. Keep an eye on the channel: we're adding more videos basically on a weekly basis. So for any additional information or assistance you need, we've got your back. But just as a little bit of a final wrap-up: I hope you've realized that security testing automation for developers can actually be really, really easy and really, really simple with NeuraLegion's technology. Obviously we're being interactive here — let's hear your thoughts in the chat, let's see your thoughts on Discord. How easy was it? Can you see yourself now using this technology in your pipelines? Whether you're a startup or an enterprise company, how easy has it been for you to use? It's all about how you can start automating today — and how you've already started automating, if you've gone through this with us. Crawling: add the URL and let us do the hard work. HTTP Archive (HAR) files: leverage your QA automation, record the interaction with a specific flow and upload it immediately. Upload your OpenAPI specs and your Postman collections; open up the API schema editor within our technology and include or exclude the endpoints, parameters, et cetera, that you want to test. We showed you the very simple global YAML configuration files, and you're good to go. Use our examples, look at the command list on the docs, change it up, run your own scans — be in control of the scanning from the CLI using those commands as well. Run scans, rerun scans, stop, start, include certain testing scenarios. Really, really good to see on the Discord — Paoleo: really easy and self-explanatory. Absolutely.
James: definitely interested in wiring this into a GitHub Actions workflow. Nate, you've done it already — it really, really is that easy. What can you test? Web applications, internal applications, microservices — for us, no problem whatsoever. Single-page applications, WebSocket-based architectures, as well as SOAP, REST, or indeed GraphQL APIs. It really, really is simple to use. Don't forget: every finding that we detect is automatically validated by the engine, removing false positives, so you can start working on it right away. You know, Veyda is at a startup — the last thing you want to do is start manually validating all these issues. You said you're working around the clock trying to create your product, create new features; don't waste that time manually validating. You will hear other AppSec talks mention triaging, right? Triage means: what are we going to work on? What's real? There's a lot of work that goes into triaging, which includes manually validating issues. You don't need to do that with our technology. Real results, immediately reported — you don't have to wait until the end of the scan. Prioritization is easy: start fixing, be secure by design. And again, integrate it across your pipelines — all of the integrations are on our docs and on our website. Thank you very much for all the lovely comments in the chat and on Discord, they're coming in really, really well.