Make your CI/CD pipeline smarter with static analysis


CI/CD pipelines have become the norm in software development, and so has linting, which is a basic form of static analysis. In this webinar I would like to demonstrate how you can go beyond simple linting and improve your pipeline to provide additional insights into your code, allowing you to deliver more reliable and secure applications.


Prerequisites:

Familiar with CI/CD concepts.

67 min
05 Jul, 2021

Video Summary and Transcription

Tibor Blenici, a languages developer since 2016, will be our speaker today. To start the presentation, a poll was conducted to determine if participants use static analysis in their CI/CD pipelines. Static analysis works by analyzing the syntax and semantics of code. Control flow analysis and graphs are used to detect bugs in code. Taint analysis is a form of data flow analysis that tracks how data flows through a program. SonarQube is a free service for 15 popular languages, while SonarCloud is free for 24 languages for open source projects.

1. Introduction to Tibor Blenici and Yassine Camoun

Short description:

Tibor Blenici, a languages developer since 2016, will be our speaker today. He has expertise in the Java analyzer and JavaScript. Tibor will be joined by Yassine Camoun, another colleague from SonarSource, to answer questions at the end.

And our speaker today is going to be Tibor Blenici, who has been with us since 2016. He's been a languages developer that whole time. Those of you who were here a few minutes ago heard us talking about how Tibor has worked on the Java Analyzer as well as JavaScript with some excursions into other languages, but of course JavaScript is his true love.

So we'll get started in just a second. I've got a little bit of housekeeping to do. I do want to say that Tibor will be joined by another of our colleagues, Yassine Camoun, who's been with SonarSource for a couple of years, also on the languages development side. They'll be taking questions at the end. If you have questions, go ahead and enter them through the Q&A function, not the chat but the Q&A function in the webinar, and we'll go through those at the end and answer them. So with that, and without further ado, Tibor, let's kick it off.

2. Introduction to Static Analysis and Demo

Short description:

To start the presentation, a poll was conducted to determine if participants use static analysis in their CI/CD pipelines. The speaker then discussed the bugs that can be caught with static analysis and demonstrated integration with a CI/CD pipeline and Visual Studio Code. The speaker logged into SonarCloud using their GitHub account and analyzed a project called Strapi. They explained that static analysis is the analysis of code without executing it and discussed the different levels of implementing static analysis.

Okay, so welcome everyone. To start this presentation about a smarter CI/CD (continuous integration/continuous delivery) pipeline with static analysis, I would like to start with a poll. The question is exactly the topic of the presentation: do you use static analysis in your CI/CD pipeline? So I've opened up the poll, and we have voting in progress here. Right now, SonarQube and ESLint are about neck and neck, with some votes for "no" and "seems complicated". So hopefully, Tibor, you can dispel the "seems complicated" and we'll win over the "no" people today. Exactly. So thanks to everyone who voted, and let's start.

So on today's agenda, what I would like to talk about first is what kind of bugs you can actually catch with static analysis. After this bit of theory, I will show a demo of integration with your CI/CD pipeline, and afterwards a short demo of how you can integrate our tools into the Visual Studio Code editor. But before I start the theory part, I will do a quick preparation for my demo, because it can take a minute, so we can optimize the time.

OK, so I will switch my screen to the browser, and I will log in to SonarCloud. SonarCloud is a software-as-a-service platform where you can do static analysis of your project. I'm going to log in using my GitHub account; you can also use a Bitbucket, GitLab or Azure DevOps account. So I log in with GitHub, and I end up on this landing page with my projects. The project I'm going to analyse is called Strapi. This is a CMS platform written on top of Node.js, and it's a quite popular open source project. What I did here is fork this project into my personal profile on GitHub, because I don't want to interfere with the original project. So I forked the original repository into my GitHub account, and I'm going to analyse it on SonarCloud.

And actually, it's quite simple. I will click on Analyse new project. Now, I am a member of two organisations on GitHub: SonarSource, which is my employer, and my personal organisation. For the purpose of this demo, I'm going to use my personal organisation. Here it shows all my repositories from GitHub. Some of them, you may notice, are grayed out. The reason is that my personal organisation is not on a paid plan, so I cannot analyse private repositories, but I can analyse public ones. The repository I forked is called Strapi, and it's here.
You might wonder why I have Strapi 2 here. This is just in case the demo goes wrong and I still want to show you something, so please ignore that. And I'm going to click on Set up now. This will create a project linked to this GitHub repository. What's happening in the background is that we clone your repository and start the analysis on it. It can take a few minutes, so for now I will continue with my presentation.

So here we are. What actually is static analysis? From Wikipedia, if you just google it: static analysis is the analysis of code that is done on the source code without executing that code. This is in contrast with dynamic analysis, which is done during program execution. So we are not running the program; we are just looking at the code and treating it as data. And how does it actually work? How do you implement something like static analysis? There are different levels.

3. Understanding Static Analysis and Its Levels

Short description:

Static Analysis works by analyzing the syntax and semantics of code. One rule at the syntax level is that all branches in a conditional structure should not have the same implementation. A specific example from the browser Quest project demonstrates this rule. The abstract syntax tree (AST) is used to compare the consequent and alternate branches of a conditional expression, ignoring whitespace and minor syntactical details. Going further, semantic analysis involves understanding the meaning of the code. An example of a mistake found through semantic analysis is a variable being assigned twice in a chained assignment.

So there are different levels at which you can do static analysis. The first level is the syntax level: we look at the source code and build something called an Abstract Syntax Tree. One rule that works on the syntax level is: all branches in a conditional structure should not have exactly the same implementation.

OK, I should probably first explain what a rule is. A rule is a check: basically a piece of code which looks for a particular pattern in the code. We have many rules, and they are coded as S followed by a number; this one is rule S3923. We don't really have 3,923 rules, because there are some gaps in the numbering; it's just an identifier. This rule detects the pattern where you have a conditional expression and both the true and the false branch are the same. The piece of code you see on the screen is from the BrowserQuest project, a browser-based game, as far as I understand. Inside this project, you can find this function, which tests whether you are playing on mobile and should return the number of frames per second the game should be initialized with. And it returns the same value whether you are on mobile or not. This might have been intentional; however, it's rather suspicious, because why would you test the condition if you are not interested in returning something else? So you might wonder: I don't need any sophisticated analysis, I don't need any abstract syntax tree for this to work. Well, let me explain further.

So what does the abstract syntax tree for this conditional expression look like? You have a root node, the conditional expression, and it consists of three child nodes: the test, which is the expression before the question mark, and then the consequent and the alternate, i.e. the true and the false branch. What this rule does is take the consequent and alternate branches and compare whether they are identical. However, this comparison is a bit smarter, because it ignores whitespace and some minor syntactic details, like parentheses. That's why we talk about an Abstract Syntax Tree: this is the abstraction. We abstract away from the whitespace and from some syntactic details. For example, if you were to implement this rule with a regular expression, you would be able to detect this particular pattern. However, the same idea can be expressed a bit differently: you can put a newline after the colon, or you can put a literal into parentheses. Working on the Abstract Syntax Tree level allows you to detect those variants as well.

So this is the kind of basic syntax tree analysis we have for some of our rules, which are rather simple. If you want to go a bit further, to the next level, we are talking about semantic analysis. Semantics means understanding not only the syntax, but also a little bit of the meaning of the code. What do I mean by that? What we see on the screen is a pattern where we are working with some URL string. First we split it by the question mark, then we split it by the hash symbol, and then we do some replacement. But the variable is assigned twice in this chained assignment, so this is likely a mistake.
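The two patterns just described can be condensed into a few lines. Both snippets are hypothetical reconstructions of the flagged code, not the exact BrowserQuest or Next.js sources: the first has identical consequent and alternate branches (rule S3923), the second contains the redundant chained assignment that semantic analysis flags.

```javascript
// Hypothetical reconstruction of the S3923 pattern: both branches of the
// conditional return the same value, so the mobile test is pointless.
function initialFrameRate(isMobile) {
  return isMobile ? 50 : (50); // AST comparison ignores the parentheses
}

// Hypothetical reconstruction of the chained-assignment mistake: `url` is
// assigned the same value twice in one statement, which is redundant and
// suggests the developer meant to assign to a different variable.
function cleanUrl(url) {
  url = url.split("?")[0];
  url = url = url.split("#")[0].replace(/\/$/, "");
  return url;
}
```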

4. Understanding Symbols and the Symbol Table

Short description:

To detect patterns and understand symbols, we maintain a symbol table with variable declarations and references. The same identifier can appear multiple times in the table for different variables.

It doesn't make any sense to assign the return value of replace into url and then to assign that to url again, so the developer probably intended to do something else. By the way, this snippet comes from the Next.js project. To be able to detect such a pattern, we need to understand that url is not just a token with three characters, but something we call a symbol. And to detect that different occurrences of url actually point to the same symbol, we maintain something usually called a symbol table. In the symbol table, we keep a list of variable declarations and all their references. The same identifier can actually appear multiple times in the symbol table, because a variable with the same name can be declared in multiple scopes. Those are different variables, so there will be different symbols.
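As a minimal illustration of why the symbol table matters (the names below are made up), the identifier `count` refers to two distinct symbols, because each declaration creates its own binding in its own scope:

```javascript
// Two declarations, two symbols, one identifier: an analyzer that only
// matched the token "count" could not tell these variables apart.
function outer() {
  let count = 1;            // symbol #1, scoped to outer
  function inner() {
    let count = 10;         // symbol #2, shadows symbol #1
    return count + 1;       // refers to symbol #2, yields 11
  }
  return count + inner();   // `count` here is symbol #1: 1 + 11 = 12
}
```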

5. Control Flow Analysis and Graphs

Short description:

To detect bugs in code, we use control flow analysis and control flow graphs. By analyzing the flow of execution and identifying unreachable code, we can find bugs such as multiple throw statements or blocks not connected to conditions. This analysis helps us raise issues and improve code quality.

If we go to the next level, we can talk about control flow analysis. One example of a control flow rule is S1763: all code should be reachable. What you can see in this snippet is an if condition, and in its body we throw an error. However, after we throw, there is another throw statement. This is strange, because the second throw can never be executed: we always leave the current function when the first throw runs, so the second throw is unreachable. By the way, this snippet is from the Keystone.js project, and it is likely a bug. To detect such issues, we build something called a control flow graph. In the control flow graph, we abstract away all the control flow statements (for loops, if, switch, and the other statements which change the flow of execution) and keep only the expressions inside blocks, with arrows between the blocks. This diagram shows the control flow graph for a loop with an if condition. It tests whether x equals zero; if true, you branch to y equals one, and if false, you invoke foo; then the branches join again, you invoke bar, and you go back to the test condition. This code is really just an example, it doesn't make concrete sense, but this is what the representation behind the control flow graph looks like. So if you wanted to detect a bug like the one I showed before, what we actually do is traverse this control flow graph looking for a particular kind of block. This is another control flow graph, and the difference from the previous one is that there is a block which is not connected to the if condition.
We traverse the flow graph and find any block which has no entering arrow. That means the block cannot be reached in any way, so this foo invocation is unreachable code, and we raise an issue there. Unreachable code is sometimes also called dead code; that's why there is a skull next to it.
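A minimal sketch of the traversal just described, with a hand-built graph (the block names are illustrative): walk the control flow graph from the entry block and report any block that is never reached.

```javascript
// Depth-first walk over a control flow graph given as an adjacency list;
// any block not visited from the entry is unreachable (dead code).
function unreachableBlocks(cfg, entry) {
  const seen = new Set();
  const stack = [entry];
  while (stack.length > 0) {
    const block = stack.pop();
    if (seen.has(block)) continue;
    seen.add(block);
    for (const successor of cfg[block] || []) stack.push(successor);
  }
  return Object.keys(cfg).filter((block) => !seen.has(block));
}

// `throwBlock` has no entering arrow, so it is reported as dead code.
const cfg = {
  entry: ["test"],
  test: ["then", "exit"],
  then: ["exit"],
  throwBlock: ["exit"], // not a successor of any block
  exit: [],
};
// unreachableBlocks(cfg, "entry") → ["throwBlock"]
```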

6. Cross-Procedural Analysis and Suspicious Arguments

Short description:

To perform more sophisticated analysis, we need to go cross-procedural, which means analyzing the interactions between functions. For example, in JavaScript, passing extra arguments to a function that doesn't expect them is suspicious. In the Strapi project, we found a function invocation with a scope argument, but the called function doesn't declare any arguments. This is either a missing declaration or a confusing use of the argument.

Okay, so let's level up. If you want to go even deeper and do even more sophisticated analysis, you need to go cross-procedural. In static analysis we use the term procedure, mostly for historical reasons; in JavaScript you would use the word function for the same purpose. So cross-procedural means analysis across function boundaries. For example, we have rule S930, which says function calls should not pass extra arguments. JavaScript is actually quite flexible about how many arguments you can pass to a function, and it is common to pass fewer arguments than the function declares, because the function can handle the undefined arguments with default values or something similar. However, it is rather suspicious to pass an argument when the function expects fewer. In this case, in the issue we raised on the Strapi project, you see a function invocation with a scope argument, but the function being called doesn't declare any parameters, so there is no way this scope can be used inside. It's rather suspicious that someone would want to invoke this function with this argument: either the parameter was forgotten in the declaration and there is some piece of logic missing inside this init cancel catcher, or the argument should not be passed, because it's confusing.
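The shape of the S930 finding can be sketched as follows (the function and argument names are hypothetical, not the actual Strapi code): the callee declares no parameters, so the argument the caller passes can never be used by name, yet JavaScript accepts the call silently.

```javascript
// Declares no parameters, so any argument passed to it is inaccessible
// by name inside the function body.
function initCancelCatcher() {
  return "catcher registered";
}

const scope = { id: 42 };
// Extra argument: JavaScript ignores it instead of raising an error,
// which is exactly why a static check for this pattern is valuable.
const result = initCancelCatcher(scope);
```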

7. Taint Analysis and Data Flow

Short description:

Taint analysis is a form of data flow analysis that tracks how data flows through a program. The source of data is usually the request object in a JavaScript web application, which comes from the user and can be malicious. The data flows through different endpoints and functions, ultimately reaching a sink, such as a database. It is important to ensure that any malicious or malformed data are sanitized along the path. The presented issue is detected in the Owasp Juice Shop project, an intentionally vulnerable application. The issue is raised on a SQLite query statement, where the data flow is analyzed to identify potential vulnerabilities. The flow of data is visualized, showing the steps of constructing the tainted value and invoking the query. This interface is useful for understanding the full flow of data in complex vulnerabilities.

Okay, so if you put all these levels together, you reach something which, at least for us, is somehow the holy grail. This is when everything comes together and you are doing what is usually called taint analysis, which is a form of data flow analysis. This kind of analysis tracks how data flows through your program. Every time you have some data in your program, you have a source of that data. If we imagine, for example, a JavaScript web application, the source of data would usually be the request object. This is something sent by the browser, so ultimately it comes from the user, and we cannot trust the user to provide correct data: it can be malicious, either intentionally or unintentionally. This is what we call a source. Then this data flows through your program; you might have different endpoints and different functions invoking some logic. In the end, the data ends up somewhere we usually call a sink. Most commonly the sink would be a database, or it could be a file on the file system, something on the server which we expect to be able to trust. If you have a database, you expect to trust the data which is there. So you need to make sure that on every such path, any malicious or malformed data is sanitized. You can see here that, for example, one path has this square called a sanitizer. It represents some sort of sanitization function, such as escaping. And then there is one path, in red, which doesn't have the sanitizer.
This is the path where we would like to raise an issue and show the user that there is a problem, because if the user provides malicious data here, it will flow effortlessly to the sink, we will store it in the database, and we are at risk of something bad happening. So let me go back to the issue we are detecting here. Of course, if you just analyse some random projects on GitHub, as I usually do, you will not find these kinds of vulnerabilities every day. They are quite rare, because people are generally careful not to write such code. This snippet comes from the OWASP Juice Shop project, which is an intentionally vulnerable application. I am just going to show you what this issue looks like on one of our instances. This is the project I analysed in SonarQube. SonarQube is our on-premise solution; in principle it's the same engine behind it as in SonarCloud, although SonarQube you operate yourself, on your side, while SonarCloud is operated for you. So I analysed this OWASP Juice Shop project, and here you can see the issue from the presentation. What this is showing you is that the issue is raised here on the SQLite query statement. If you are not familiar with SQLite, that's a database library, and we are constructing a query for the database here. And what this query is... Tibor, would you zoom your browser just a little bit? Ah, thank you. Yes, sure. That's good, thank you. Okay. So we raised the issue on this query statement, and here on the left you can see how the data flows through the application. There are two steps, two and three, which are on the same line, so they are put together: three is the invocation of the query, and two is the construction of the tainted value. The tainted value is built into this string using a template string.
What we do here is concatenate a SELECT statement with request.body.email, which can be controlled by the user. And the first step is actually the configuration of the Express.js route: it says that if, from the browser, you reach this rest users.login endpoint, this login function should handle it. So you can see that the UI shows you, on one page, the full flow of the data through your application: we start in server.js, then it goes to this login function, and then here we construct the query. This one is quite simple to follow, so you probably don't need this interface, but in practice these issues can get quite long. Real vulnerabilities usually consist of many steps, and it is quite useful to have an interface which shows you the full flow on one page. Okay, let's get back to the presentation. Okay. Sorry, it just jumped back to the beginning.
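The flow just walked through can be condensed into a small sketch (an illustration of the vulnerable pattern, not the literal Juice Shop source): the attacker-controlled email is concatenated into the SQL text, so a crafted value changes the query's structure.

```javascript
// Source: `email` stands in for req.body.email, i.e. user-controlled input.
// Sink: the returned string would be passed to the database query call.
function buildLoginQuery(email) {
  return `SELECT * FROM Users WHERE email = '${email}'`;
}

// A crafted input breaks out of the string literal and injects SQL:
const injected = buildLoginQuery("' OR 1=1 --");
// injected === "SELECT * FROM Users WHERE email = '' OR 1=1 --'"
```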

8. Demo Analysis and Time Considerations

Short description:

Before we continue with the demo, let me mention that the analysis of the project on SonarCloud is still ongoing. The process may take longer because this is a real production instance and because of the additional analysis of previous PRs. Please bear with us for a moment.

So I will go here. I think I'm almost done anyway, so I will go back to our demo. Let me close this one. As you remember, before I started with these slides, we kicked off the analysis of this project on SonarCloud, and it's still going on. Okay, I was expecting it to be done already, but this is not always predictable, because this is a real production instance. There may be a lot of load, which can make the setup take a bit longer. Let's maybe wait for one minute. I think it might be useful here, Tibor, to mention that this does vary a little bit by time of day, but also, what's happening is that I think you're on the new beta interface, so it's analyzing not just the current state of the main branch, but also the last five PRs that you would have, if you had any PRs on this project. And so that can also add some time to the initial analysis. Yes. Thank you. Indeed.

QnA

SonarSource Analyzer and Rule Database

Short description:

SonarQube is a free service for 15 popular languages, while SonarCloud is free for 24 languages for open source projects. Our analyzer for JavaScript is built on top of ESLint, but we have added additional rules and advanced security analysis. Writing our own analyzers allows us to address questions, bugs, and feature requests directly. We have a database of rules for different languages, including JavaScript, with around 250 rules. The rules cover common vulnerabilities, such as injection attacks, and provide explanations and examples of problematic code and how to sanitize it.

And OK. Maybe, if there are any questions on this first part of the presentation, we can take some. So we've got a question here asking whether it's similar to Codacy. Personally, I don't know Codacy. I've heard about it, but I never tried it myself, so I don't know. So I've done a little bit of looking at Codacy, and my understanding is that Codacy has a possibly comparable interface. Codacy does not write their own analyzers; Codacy collects other analyzers, including some analyzers from SonarSource, to cover the various languages that they offer, and uses free static analyzers to provide their service. The distinction is that I believe, and I'm not going to swear to this, that Codacy is a paid service, whereas SonarQube is free for 15 popular languages, and SonarCloud is free for 24 languages for open source projects. So hopefully that answers the question. Please go ahead. I will add a bit to that. Indeed, you might wonder, because in the first poll I mentioned ESLint. So I wanted to explain a bit how we also use ESLint as, let's say, the engine of our analysis. Our analyzer for JavaScript is actually built on top of ESLint, and some of the issues I showed before are the same as you would get with ESLint rules. However, we added a bunch of rules on top of that. Some of the rules we added are available as open source, but some of them are a bit more complicated and use other technology we developed in-house; the taint analysis is a good example of that. Those are not really available anywhere else. So I think that answers the next question that came in, which is: since we already have TypeScript and ESLint on our project, what could SonarSource provide in addition that's not already provided by TypeScript and ESLint? And I guess that answers it, unless you want to add anything else? Yes, exactly.
So on top of ESLint we built our own rules, and especially the taint analysis, which does more advanced analysis for security issues. And so does it? Yes. Sorry. We have a follow-up question from Nicolay, who asks: what are the advantages of writing your own analyzers? In addition to what you've mentioned, which is going above and beyond, I want to mention that at SonarSource we do have experience with aggregating other people's analyzers. A thousand years ago, when we first launched, we did exactly that. And what we found was that we became the middleman when people had questions, bugs, or feature requests. All we could do was say, we'll talk to the person who wrote the analyzer, versus being able to actually deal with the problem: fix the bug, improve the rule, et cetera. So hopefully that answers that. OK. Strapi somehow resists getting analyzed, so I will go a bit further and show you one of our websites, called rules.sonarsource.com. This is our database of all the different rules we have. As I mentioned before, a rule is basically a piece of static analysis which checks for a particular pattern in the code. This database covers all the languages. If you select JavaScript, we have something like 250 rules, and here I filter the injection rules. These rules are mostly based on the taint analysis technique I was showing before, and you can find the usual suspects here, the most common vulnerabilities, such as the one I was showing before: database queries should not be vulnerable to injection attacks. Or, for example, operating system commands should not be vulnerable to injection attacks, and similar. So this is the database of all the rules. We also have this inside SonarCloud, with a description. If an issue is raised, you can actually read about it; I can show that on the issue I was showing before. You can read what the issue is about, and you have this "Why is this an issue?" section.
Here is a piece of text explaining why this piece of code is problematic, here is an example showing the kind of bad pattern you should not use, and here is also an example of how it should be sanitized.
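Since the analyzer is built on top of ESLint, a rule like S3923 can be sketched against ESLint's rule API. This is a simplified illustration, not SonarSource's actual implementation; the driver at the bottom fakes the ESLint context so the shape of the API is visible without a full lint run.

```javascript
// Simplified S3923-style rule: report a conditional expression whose
// consequent and alternate have identical source text once whitespace
// is stripped (a crude stand-in for real AST equivalence).
const identicalBranchesRule = {
  meta: { type: "problem" },
  create(context) {
    const sourceCode = context.getSourceCode();
    const normalize = (node) => sourceCode.getText(node).replace(/\s+/g, "");
    return {
      ConditionalExpression(node) {
        if (normalize(node.consequent) === normalize(node.alternate)) {
          context.report({ node, message: "Both branches are identical." });
        }
      },
    };
  },
};

// Hand-rolled context standing in for ESLint, just to exercise the rule.
const reported = [];
const fakeContext = {
  getSourceCode: () => ({ getText: (node) => node.text }),
  report: ({ message }) => reported.push(message),
};
const visitor = identicalBranchesRule.create(fakeContext);
visitor.ConditionalExpression({
  test: { text: "isMobile" },
  consequent: { text: "50" },
  alternate: { text: " 50 " }, // whitespace differs, content does not
});
```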

Database Injection and Analyzing Projects

Short description:

To prevent database injection, use parameterized queries. After the analysis finishes, you will see different categories of issues, such as reliability issues (bugs), maintainability issues (code smells), vulnerabilities, and security hotspots. It's important to prioritize vulnerabilities; the thousands of code smells can initially be ignored. In the Strapi project, a false positive vulnerability was detected in a JSON Web Token call. The issue claims that the signing algorithm is not configured, but passing an empty options object uses a safe default algorithm. You can mark this as a false positive and explain the decision to your colleagues. Next, creating a branch and adding a file to the root of the repository will be demonstrated.

In the case of database injection, the most common way to deal with it is to use parameterized queries. The places where the data is to be inserted are these question marks, and then you provide the arguments to the query invocation separately. That way, the framework or library you are using deals with the problem for you.
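A minimal sketch of the parameterized form (the object shape here is illustrative; a driver such as node-sqlite3 would take the SQL and the parameter array as separate arguments, e.g. `db.all(sql, params, callback)`): the SQL text stays constant and the user data travels separately, so it can no longer alter the query structure.

```javascript
// The query template is fixed; user input only ever fills the `?` slot.
function parameterizedLogin(email) {
  return {
    sql: "SELECT * FROM Users WHERE email = ?",
    params: [email],
  };
}

// Even a hostile input leaves the SQL text untouched:
const safe = parameterizedLogin("' OR 1=1 --");
// safe.sql is still "SELECT * FROM Users WHERE email = ?"
```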

OK, let's give one last chance to Strapi to get analyzed. And it managed. Thank you.

OK, so after the project is analysed, this is what you will see. There are some things which are not fully configured, but I will not go into that now. Here you have these two columns with three rows, showing the different categories of issues which were detected. We have reliability issues, which you would usually call bugs; maintainability issues, which we call code smells; vulnerabilities, which are issues you probably really want to look at; and something we call security hotspots. A security hotspot is not strictly a vulnerability; it's a sensitive part of your code, for example database access or something similar, which you want to keep an eye on. As you can see, there are quite a few: something like 1.4k code smells. And you might wonder: wow, I don't want to review all of that, that's too much. And you should not. That's the important thing: you should probably look at any vulnerabilities, but you can largely ignore those thousands of detected code smells. I will explain why in a minute.

Let's first check this vulnerability here. This is the open source Strapi project, which is public on GitHub, and if we find a vulnerability here, that's really important, so let's check it out. This is a JSON Web Token call. Actually, it would be quite unlikely to detect a real vulnerability in such a project just like that, and this is in fact what is called a false positive: the issue is raised, but it's not a real issue. It says that the algorithm for signing the JSON Web Token is not configured, although if you pass an empty options object, the library uses a default algorithm, which is safe. So you can just ignore this issue, and the way to do that, and this is maybe a bit different from what other static analysis tools offer, is that you can actually manage the issue. What you can do here is mark it as a false positive and provide a short explanation for your colleagues: the default is actually safe. You might wonder why I'm presenting you this bug in our static analyzer. I wanted the demo to be somehow real; also, we already fixed this problem, so I think it will soon disappear. So let me comment this out. Now, if I go back to my summary page, this will display zero.

Okay, let's do something more interesting. Just a little bit, Tibor. Zoom out or in? In, zoom in the browser. Okay, thank you. Okay. So I will switch to the console, and I'm going to create a PR against this project. First I will create a branch, which will be called demo, and I will open it in my editor. Okay. And I'm going to add a file to the root of the repository.

Adding Code to Strapi and the SonarLint Extension

Short description:

I'm adding a JavaScript file to the Strapi repository with a vulnerable code snippet. After pushing it to GitHub, I open a pull request. SonarCloud will automatically start analyzing the pull request. In the meantime, I explain the SonarLint extension for VS Code, which provides analysis similar to SonarCloud. The code snippet is immediately highlighted with a yellow wiggly line, indicating a suggestion to remove or edit the conditional structure.

So I'm not actually adding anything to Strapi itself. I don't really understand how Strapi works in detail, but I will just add a JavaScript file to the root of the repository, and I will copy and paste in a piece of code which I know is vulnerable. I'm going to save that and switch back to the console. This is the file I added; I add it to the index and commit it. And I will push this to GitHub. Okay. After I push this to GitHub, I am going to open a pull request. Now I need to be careful not to push this to the upstream project. That happened to me when I was preparing this presentation; I think they were not very happy with my contribution. Okay, so I am adding this one file with this short code snippet. What this code is doing, and I'm going to zoom in again a bit: this is a very simple Express application. This is an Express router configuration, and the handler is calling sendFile with something from the request. If you don't know Express, this is just some bad code. If you know Express, you probably understand why this is bad. Okay, so I'm going to create the pull request. What happens on SonarCloud is that, if I refresh here, it will discover that there is a pull request going on and it will automatically start an analysis on it. This will take a minute, but hopefully not as long as the first analysis, and if there are any questions, I can take maybe one or two. "So we've had a couple of questions in the chat that Yassine answered. And as a reminder, there's a separate Q&A function, so please do put your questions there. One of the questions was asking for a link to find the rules, and I think Yassine pointed them to that. And Yassine also pointed out that we do analyze React, Vue, and Angular projects. But no other questions right now."

So while this is happening, I will talk a little bit about VS Code. As you might have seen, in VS Code I have an extension installed, which is called SonarLint. This is one of our companion projects: the VS Code extension. We have the same extension for other editors as well, but I will focus on VS Code here. If you install this in your project, it will provide you the same kind of analysis as I was showing before. So if I write the piece of code I had in the presentation earlier, the if statement... I'm not sure if I can zoom this one. Ah, yes. OK, that's maybe better. So this is the extension, it's called SonarLint, and I already have it installed. Once you have written the piece of code from the presentation earlier, you will see that it's immediately highlighted with this yellow wiggly line. And if I hover over it, you will find this message, the same one that SonarCloud or SonarQube would raise, which says that you should remove this conditional structure or edit the branches.

SonarLint Extension and eslint-plugin-sonarjs

Short description:

The SonarLint extension in Visual Studio Code provides suggestions to remove or edit conditional structures. The same rule description as on the SonarSource website is available within the extension. Another demo is shown using the eslint-plugin-sonarjs project, which allows analysis using GitHub Actions. Manual setup of the project is demonstrated, including creating secrets and configuring the GitHub workflow YAML file for JavaScript projects.

And if I hover over it, you will find this message, the same one that SonarCloud or SonarQube would raise, which says that you should remove this conditional structure or edit its branches so that they are not all the same. You also have the number of the rule which raised this: it's the SonarLint extension, then "javascript", a colon, and the rule number. If you click on "view problem", it shows this panel, and if you click on the quick fix, it allows you to open the rule description, which opens a panel on the side. You have the same description as I was showing earlier on the rules.sonarsource.com page, but inside Visual Studio Code.
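The rule in question flags conditionals whose branches all do the same thing. A hypothetical example of the kind of code it would highlight (the function and values are invented for illustration):

```javascript
// Both branches are identical, so the condition has no effect;
// either the duplication or the condition itself is a bug.
function discount(price, isMember) {
  if (isMember) {
    return price - 10;
  } else {
    return price - 10; // same as the other branch — the rule flags this
  }
}

console.log(discount(100, true));  // 90
console.log(discount(100, false)); // 90
```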

So, let's check again how our... okay, somebody added another PR there, interesting. Okay. While this is happening, I will show you another demo and switch context a bit. I will show you another project, which is called eslint-plugin-sonarjs. We took some of our rules and created an ESLint plugin, so if you are already very much entrenched in ESLint, you can use this plugin to get some of the rules we developed. Obviously you will not get all of them, and you will not get all the nice integration we have. I picked this project because I wanted to analyze it, and I will show you how you can analyze it using GitHub Actions. This time I will set up the project a bit differently: I will not use the automatic analysis for which we are waiting way too long here, but I will use GitHub Actions to execute the analysis and push the result to SonarCloud. This way you can set up any sort of CI you are using. If you are using some on-premise continuous integration service, you can integrate with that, or with GitHub Actions, or Travis, or CircleCI, or whatever. Okay, so I will again go to SonarCloud, to my projects, and I'm going to add a new project. This time I will not start here, because that would trigger automatic analysis; instead I will click on the link to create the project manually. I will again choose my personal organization and pick a project name. You can see some leftover values from when I was testing this presentation, so we are going to pick the next one. Okay, and I click on set up. Now it will show me different CI services. We have direct integration with GitHub Actions and with Travis CI, but you can do the same thing with other CI tools. I'm going to choose GitHub Actions. First, it asks me to create a secret in my repository.
So the secret is named SONAR_TOKEN. I will go to the settings of my repository, then secrets, and I already created it when I was preparing this, so I forgot to remove it. Okay, so it's this secret; I could just update it. You would click here on "new repository secret", or, you know what, let me remove it so you see how it would be if you were really starting from scratch. Okay, so we add a new repository secret. The name is SONAR_TOKEN, the value I copy and paste from here, and I add the secret. Secret added, let's continue. Now it asks me to create the GitHub workflow YAML file. If you are not familiar with GitHub Actions, this is a bit of configuration that tells GitHub Actions what should be executed. It's asking me what my project is about: there are different setups for Java projects with Maven or Gradle, then for .NET; we are in JavaScript, so we fall under "other". It's asking me to create this kind of workflow, and what this workflow does is that on every push to the master branch, or on a pull request, it executes the SonarCloud action.

Setting up Project and Configuring Analysis

Short description:

The setup involves checking out the repository, running the scan, and using the GitHub token secret for authentication. The project is opened in the editor, and a workflow is added to the .github directory. Additional steps are added to set up the node, run tests with coverage, and execute the Sonar cloud scan. The next step is to configure the analysis by creating a sonar-project.properties file with the necessary parameters.

And it has the following steps. First it checks out the repository, and then it runs the scan. It's using the GITHUB_TOKEN secret, which is implicitly provided by GitHub on your repository, and then the secret we just created. These secrets are used to authenticate between SonarCloud and GitHub.

Okay, so I will switch to my editor. I'm sorry, I will just take some water. So this is the eslint-plugin-sonarjs project; I already have it checked out here, so I will just open it in the editor. This is the project where we are going to add the workflow, and the workflow should be added in the .github directory. I will create a new folder called workflows here, and in it a new workflow file called build.yml. So I could just copy and paste this one, which would execute the scan and nothing else. But I actually also want to build this project and run the tests, so I will add some additional steps before the SonarCloud scan. I'm just going to copy and paste them because I prepared them before. What I want to do is set up Node and then run the tests with coverage. So: set up Node version 12, run yarn build, run yarn test with coverage. I'm going to save this, and let's see what the next step is in setting up the project manually.
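Put together, the workflow assembled in this step looks roughly like the sketch below. Treat it as an illustration: the action versions, Node version, and script names are the ones used in the demo and may differ for your project.

```yaml
# .github/workflows/build.yml — sketch of the workflow from the demo
name: Build
on:
  push:
    branches: [master]
  pull_request:
jobs:
  sonarcloud:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0          # full history improves analysis accuracy
      - uses: actions/setup-node@v2
        with:
          node-version: 12
      - run: yarn build            # TypeScript compilation
      - run: yarn test --coverage  # produces the LCOV report
      - uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```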

Okay. So we did the second step, setting up the workflow for the GitHub action, and next I need to configure my analysis. Usually this is done by putting a sonar-project.properties file into your source repository. Sorry, I have a lot of windows open. I'm going to create the sonar-project.properties file here, and in it I can put the analysis parameters I want to be used. I have the file already prepared, so I don't need to type it out; I can just change a few things. Most important is the project key, which is number four now, and this is my organization. The project description is not mandatory. Then I configured the directory with the sources. This project is structured such that you have a src directory and a test directory, so I just configure which are the sources and which are the tests.
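The resulting file looks roughly like this sketch; the key, organization, description, and directory names are illustrative values from the demo and will differ in your own setup:

```properties
# sonar-project.properties (illustrative values)
sonar.projectKey=my-org_eslint-plugin-sonarjs-4
sonar.organization=my-personal-org
sonar.projectDescription=ESLint plugin with a subset of SonarJS rules
sonar.sources=src
sonar.tests=tests
```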

Configuring Coverage and Test Execution Reports

Short description:

In this part, the speaker explains the purpose of configuring coverage and test execution reports. They add the necessary files to the repository and push the changes. The speaker then mentions checking the GitHub Actions and invites questions from the audience.

And I will explain later what the purpose of this is. Then this is probably the most important part: I configure where my coverage report is. We are using Jest in this project, and the coverage report from Jest is in the LCOV format, which is a format we support. And then there is also the test execution report, which basically just says which tests were executed, which failed, and which succeeded. Okay, so I am adding these two files to my repository. Let me see: we have the GitHub workflow file and sonar-project.properties. I'm going to add both of them to the commit, so there is build.yml and sonar-project.properties, and I will just commit them; the message will be "add SonarCloud analysis". I'm just doing this on the master branch because this is a demo, but you would probably do it on some other branch. Okay, so I'm going to push this, and let me just check here... yes, this was the last step we needed to configure. I'm going to push, and let's see what happens in this project. Let's get back to the repository: Actions, and I have the action executing. So this is running the GitHub Action and it will soon start, hopefully. Okay. Maybe I can take some questions if there are any.
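The two report properties mentioned here look roughly like the following. The paths are illustrative, and the test execution report assumes a Jest reporter (for example jest-sonar-reporter) configured to emit SonarQube's generic test execution format:

```properties
# coverage report produced by "yarn test --coverage" (LCOV format)
sonar.javascript.lcov.reportPaths=coverage/lcov.info
# test execution report (which tests ran, passed, failed)
sonar.testExecutionReportPaths=test-report.xml
```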

SonarCloud Analysis and SonarLint

Short description:

While waiting for the SonarCloud analysis to finish, there was a question about HIPAA compliance and the possibility of signing a BAA. SonarCloud may not be HIPAA-compliant, but SonarQube provides the same analysis on-premise. For more serious environments, it is recommended to reach out to salespeople for guidance. The SonarCloud GitHub action is executed, including the build and coverage steps. SonarLint in the IDE runs its analysis locally but can be connected to SonarCloud or SonarQube for up-to-date analyzers and synchronized issue triaging. The more advanced taint analysis is not executed locally in SonarLint, but its issues can be viewed when connected to SonarCloud.

So while we're waiting for that, Tibor, we did have a question in the Q&A asking whether SonarCloud is HIPAA-compliant or whether it's possible to sign a BAA. I'm not sure what a BAA is, but I just want to point out that HIPAA is about protecting patient information, and presumably you're not going to have any patient information in your code, so I'm not certain that it would be relevant. Probably SonarCloud is not HIPAA-compliant, but you do have the same underlying analysis in SonarQube, which runs on-premise. So if security is an issue for you, if you need to tightly control what's going on with your code, then maybe instead of SonarCloud you would use SonarQube. And I would also add that if you're asking such a question, you're probably thinking about using it in some, let's say, more serious environment, so I would really recommend reaching out to our salespeople, and they will surely answer with certainty what the best way forward is. That's a great point; we have consultants who professionally answer this particular type of question. Okay, so it's starting to run; it's taking a bit longer than usual. Now this is executing the SonarCloud GitHub action. It will also execute the build and coverage steps I added. So now it's checking out the repository and starting the Node container, and now we are running the build. This project is in TypeScript, so this is running a TypeScript compilation. There are some warnings we should maybe check later. I'm going to zoom in again, because I think it resets every time. And now we are executing the tests with coverage. Hopefully all of them will pass; I didn't modify anything, so it should still be green. Okay, all tests passed, we have the coverage, and now we are starting the SonarCloud scan.
While I'm looking at this, I'm thinking that maybe the order of steps is not the optimal one, but it should not matter, actually. So while we're waiting for that to finish, Tibor, we've got another question: does SonarLint in the IDE use the cloud for analysis, or does it work without internet? Okay, so yes and no. The analysis SonarLint does runs locally, so you don't need to be connected to any service; you just install the extension, you are completely offline, and all the analysis is done offline. However, you have the option to connect it to SonarCloud or an on-premise SonarQube. This is something we call connected mode, and you can configure it in your settings. What that gives you is that you always have up-to-date analyzers: it will use the same version of the analyzer that is used on SonarCloud. The reason for that is that we are fixing issues and adding new rules, so you always want the same setup locally that you are using in your continuous integration; otherwise you can have discrepancies between the two. It might be that locally you see some issue which is a false positive that was already fixed, and the fix is already on SonarCloud. So you always want to run the latest analyzer to have the latest version of the analysis available. That's one reason to use connected mode. The second reason is that, as you saw earlier, I marked an issue as a false positive, and this is also reflected in your code editor. If someone marks an issue as a false positive, you benefit from their work of triaging the issues, and the issue will disappear from your editor when you are in connected mode. So you can be fully offline if you wish, and you can connect to SonarCloud if you want to get some benefits from doing that.
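In VS Code, connected mode is configured through the SonarLint settings; a sketch with placeholder values (your connection ID, organization key, and project key will differ):

```json
{
  "sonarlint.connectedMode.connections.sonarcloud": [
    { "connectionId": "my-org", "organizationKey": "my-org" }
  ],
  "sonarlint.connectedMode.project": {
    "connectionId": "my-org",
    "projectKey": "my-org_my-project"
  }
}
```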
One final little remark: the taint analysis I was mentioning before, the more advanced analysis, is not executed locally in SonarLint, and it's not that it's computed in the cloud either; it's just that the SonarLint extension cannot do that by itself, as of now. So what SonarLint does is that if you have an issue from such a rule, it will display it only if you are connected to SonarCloud. Okay, let's switch back. So, this has finished the SonarCloud scan.

Opening SonarCloud and GitHub Actions Demo

Short description:

One way to open it is by going directly to SonarCloud or finding the analysis successful line in the logs. The GitHub actions demo provides more control over the analysis parameters and additional options compared to the automatic way. The automatic way is suitable for smaller projects with minimal configuration. However, if you have your own CI, you can execute tests and coverage, which will be displayed in the analysis.

So one way to open it: I could go directly to the project on SonarCloud, or here in the logs you will find an "analysis successful" line with a hyperlink going there. One reason I wanted to show this GitHub Actions demo is that it gives you a better opportunity to control the parameters of the analysis, and it allows you more options than the automatic way. The automatic way is perfect for smaller projects or a quick setup, and it's really nice: it works quite well, and there is very little configuration you need to do. But if you are using your own CI, you can execute the tests and the coverage, and we will display that information.

Demo of Project and Test Coverage

Short description:

This part showcases one of the speaker's projects and highlights the high test coverage. The speaker demonstrates the use of colored-coded gutters to indicate test coverage and points out specific areas that are not fully covered.

So here, this is one of our own projects, a little bit of self-promotion. Because we write these tools, we also use them. You see that there are almost no issues and the coverage is quite high, because we really care about having very high coverage in our projects. This is showing you the test coverage, and here I can open any file. The files are sorted by ascending coverage, so this file is the least covered one. If I open it, I have this gutter with color-coded markers: this part is fully covered by tests, here a branch is missing, and this return is not executed. It also shows you the blame information, so you know who the author is, when it was committed, and you have the SHA-1 of the commit.

Using GitHub Actions for CI/CD

Short description:

GitHub actions can be used instead of automatic analysis for coverage display. Jenkins is not required if using GitHub actions. GitHub actions serve as CI/CD.

So one reason why you would want to use GitHub Actions, for example, instead of the automatic analysis I used for Strapi, is that you would like to have coverage displayed. Otherwise they are almost equivalent. Okay. "If I'm using GitHub Actions, do I have to set up Jenkins too?" No, no, no. You either use GitHub Actions or Jenkins; you would probably not use both. "So GitHub Actions is my CI/CD?" Yes, GitHub Actions is your CI/CD, exactly. Okay, so I will close this eslint-plugin-sonarjs GitHub Actions project. Sorry that I'm kind of interleaving my demos; I just want to optimize the waiting time.

Analyzing PR on Strapi and Quality Gate Checks

Short description:

The PR on Strapi has been analyzed, revealing bugs, code smells, and one vulnerability related to JSON web tokens. SonarCloud shows only the issues specific to the pull request, following the scout rule of leaving the codebase in a better shape. Bi-directional connection between GitHub and SonarCloud allows for easy tracking of issues and comments on the pull request. A quality gate in SonarCloud determines if the checks have passed or failed, and this information is pushed back to GitHub. The failed checks, including SonarCloud code analysis, can be configured based on the repository's requirements.

Okay, so this PR on Strapi is analyzed. Just to recap: we analyzed the main branch of Strapi, and we have this many bugs and this many code smells here. There was also that one vulnerability about the JSON Web Tokens, which I closed. Some duplications are computed; coverage is not computed, because we didn't execute the tests, so we actually don't know the test coverage. If I open the pull request... I really would like to know who is adding these PRs here, but okay. So we analyzed this pull request, and just to remind you, the pull request was about me adding one vulnerable file to the root of the project. As you can see here, the bugs are at zero, the security review is at zero, and there is one code smell and one vulnerability. If I open the vulnerability, this is the one I added in my PR, and SonarCloud is smart enough not to show you issues which are in the main branch, but only the issues which are in your PR. It's doing some sort of diff, so you are only dealing with the things you are about to add to the code base. This follows the scout rule: you leave the place in a better shape than you found it, and you are not adding more technical debt. If I open this issue, it shows me the code, and indeed I am taking user data and doing some dangerous operation with it, so I should probably fix this. Also, one thing I wanted to add: if I go back to GitHub and open the pull request I created, here is SonarCloud. So there is a kind of bi-directional connection: not only does GitHub push data to SonarCloud so you can check your issues there, but SonarCloud will also comment on your PR with a small summary to show you how your PR is doing.
So it's showing us that there is this one vulnerability, and I have the option to click on it, which redirects me back to this interface. Okay. So that would probably be all for the demos, if I didn't forget something. We had another poll we wanted to run, about the extension in your editor: do you use a static analysis extension in your editor? Please vote quickly. "So I've opened the poll; we've got votes coming in already." And maybe while people are voting, if there is any question, I can answer it. "We've got a question: is the PR blocked from SonarCloud? So I think they're asking about the checks. Is it blocked?" Yes, indeed. I didn't go into the detail. In SonarCloud, we have this concept of a quality gate. This is a sort of gate which is either green or red, depending on whether you passed or failed. There is a default configuration, and by default you cannot have any vulnerability. Here, the quality gate has failed, and this information is pushed back to GitHub, which will say that some checks have failed; one of the failed checks is this SonarCloud code analysis. Then, depending on how you configure your repository, there is a setting specifying which checks are required for branches to be merged. Okay, I did not configure this here, but I believe there is such a field.

SonarCloud Code Analysis and Status Checks

Short description:

There is a requirement for status checks to pass before merging. SonarCloud code analysis can be checked, and if it fails, it can prevent branch merging. The configuration allows for making this check optional.

Yes. So there is this "require status checks to pass before merging" option, and I can check the SonarCloud code analysis. If I do this... okay, it's going to ask me for a password. I will just quickly... ta-da. So indeed, I now require the SonarCloud status check to pass. If I configure this, it will actually prevent me from merging my branch if the SonarCloud analysis failed. This can be made optional or not, as you wish. SonarCloud will create a check on the PR, and you have a way to configure whether this is a blocker or not. I hope that answers the question. "I think it probably does."

Scaling large codebases, especially monorepos, can be a nightmare on Continuous Integration (CI) systems. The current landscape of CI tools leans towards being machine-oriented, low-level, and demanding in terms of maintenance. What's worse, they're often disassociated from the developer's actual needs and workflow.Why is CI a stumbling block? Because current CI systems are jacks-of-all-trades, with no specific understanding of your codebase. They can't take advantage of the context they operate in to offer optimizations.In this talk, we'll explore the future of CI, designed specifically for large codebases and monorepos. Imagine a CI system that understands the structure of your workspace, dynamically parallelizes tasks across machines using historical data, and does all of this with a minimal, high-level configuration. Let's rethink CI, making it smarter, more efficient, and aligned with developer needs.