DevOps 2.0: The ultimate JS delivery pipeline


Buddy is a CI/CD tool that lowers the entry threshold to DevOps by simplifying pipeline configuration to the bare minimum, enabling teams to embrace CI/CD without dedicated engineers. During the presentation, we will show you how to create a fully functional JavaScript pipeline and adapt it to the workflow in your company in a couple of minutes. We are going to put special emphasis on features that help secure code quality, share some tricks that speed up deployment (trigger conditions, cache, Docker, changeset deployments, parallelization), and show you how to preview deployed websites with no server infrastructure using Sandboxes – a unique feature of Buddy.


Prerequisites:

Code editor and Git installed

GitHub account

164 min
17 Jun, 2021

Video Summary and Transcription

The DevOps 2.0 Workshop covers topics such as CI/CD, Git workflows, Docker deployment, and automated browser testing. It emphasizes the importance of continuous improvement, transparency, and adaptation in CI/CD setups. The workshop also highlights the use of pipelines for feature branches, pull requests, and vulnerability fixes. Additionally, it explores visual testing, smoke tests, and SEO metrics for application monitoring and optimization.

1. Introduction to DevOps 2.0 Workshop

Short description:

I'm Tom with Buddy CI/CD, presenting the DevOps 2.0 workshop. We'll show you ways to have the best time with your JS delivery pipelines. Feel free to ask questions in the Discord chat. You can participate by listening or following along. If you get lost, don't worry, we'll provide a recording for you to catch up.

I'm Tom. I'm with Buddy, the CI/CD tool with which you can get the fastest deployments ever. The topic and name of today's workshop is DevOps 2.0, and we are going to show you a few ways to get to a point where you are having the best time of your life with your JS delivery pipelines.

Let me check if I'm sharing the screen, because I don't think I am right now. Right. First of all, let's start with a bit of housekeeping. Even though we have three hours slotted for this workshop, we are on a pretty tight schedule. Since I'm not very good at multitasking, as you might have noticed, I've asked Paul on the Discord chat to take over answering questions. So at any point during this presentation, please feel free to post whatever questions you have in the Discord chat. Paul and my other friends from the Buddy team are there to answer all of them the very minute you ask them. I won't be switching over to the Discord chat myself, so as not to get distracted.

It's a workshop, right? But the participation mode is all up to you. If you want to just sit here with us and listen to what I'm saying, that's perfectly fine. If you want to follow along and try out what I'm doing, that's great as well. And if you get lost at any point, something goes wrong, you're confused about what happened and need some time to troubleshoot, don't worry. That might be a good moment to stop trying to follow my every footstep, because we're recording this workshop, and when we're done you will get a recording in your email. You will be able to rewatch from the point where you got lost, follow the remaining steps, and do the required troubleshooting.

2. Workshop Setup and CI/CD Concepts

Short description:

To follow today's activities, do three things: go to the Discord chat, fork the repository, create an account in Buddy. Continuous integration, delivery, and deployment are practices for delivering software. Continuous integration helps avoid integration hell. Continuous delivery extends automation to the release process. Continuous deployment automates the entire journey of code to production.

And for those of you who want to follow today's activities and see the things I'll be talking about in the flesh, there are three things you should do first. To get started, head over to the Discord chat for this workshop.

First of all, fork the repository of the application that we're going to be using as our sample app in this workshop. The link, as I mentioned, is in the Discord chat. You can see I have forked it to my own GitHub account just a few moments before the workshop so that we are on the same page.

The second thing to do is creating an account in our CI/CD tool, which is Buddy. Simply go to Buddy.Works, create an account, and you'll be able to create all those great pipelines with me.

The third thing to do: if you're planning to follow along with what I'm going to show you today, you're going to need a couple of features that are not available to all users upon registration. So make sure to create your account now, copy the workspace link, and send it over in the Discord chat so that our team can sort you out and enable the additional features you'll need in the later part of the workshop.

If you're wondering what the workspace link is, that's the link you get right after you sign in to Buddy. As you can see, my workspace is JSNation and I'm going to be creating my projects in this workspace.

And one more thing: since we all love free stuff and goodies, we've thought about that and prepared a goodie bag, a swag bag so to say, for you. If you want to grab it, there's a form in the same Discord chat. Leave us your email and we'll sort you out with a shipment of Buddy-themed goodies.

So now I'm going to switch over to my slides and we are going to do a bit of theory. I'm also going to take a sip of water, because I'm already parched; it's very hot in here. This is the full name of our workshop, and the next slide introduces one of the most important things we are going to be talking about today. I think this introduction is going to be a nice bit of warm-up for all of us, and it might be necessary, as we have listed this workshop as suitable for everybody; you don't have to be an expert in these matters. We are going to attempt to give you the knowledge you need to understand everything you hear and see today.

So without further ado, let's start. We are on a tight schedule. We have three hours slotted, but we should be done in about two hours and 15 minutes, I suppose. We'll see about that.

So first of all, CI/CD: we have two abbreviations, but there are actually three concepts behind them. These three concepts are continuous integration, continuous delivery, and continuous deployment. And keep in mind (this is a very basic thing, but I see people get it wrong quite often) that none of these three is a tool. So if somebody asks you whether you're using continuous integration in your project, or what you think about continuous delivery, don't start talking about tools. None of these is a tool. These are practices, approaches, that all focus on creating an efficient and automated process for delivering software.

As you can see in this nice graph, each of these processes goes a step further along the timeline of the life of your code, and I'm about to tell you in this little piece of theory how they do that.

So first of all, what's continuous integration? To keep it short, when you hear continuous integration, you should be thinking about merging your changes back to the main branch as often as possible: continuously integrating your changes with the main line of code. And whenever people talk about continuous integration, automated tests are always in the back of their heads. So whenever you integrate your code with the main line, automated tests run and make sure that nothing breaks, that when the code you've just created or refactored goes to the main line, nothing blows up. And what does continuous integration help with? It helps you avoid what's known as integration hell.
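
In shell terms, what a CI server does on every push boils down to something like this; a minimal sketch, assuming a Node.js project with an npm test script:

```bash
# Roughly what a CI run does on each push to the mainline:
git fetch origin
git checkout main
git pull --ff-only origin main   # integrate the latest mainline changes
npm ci                           # clean, reproducible dependency install
npm test                         # the push is only "green" if this exits 0
```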

If you haven't heard of integration hell, or if that term is unfamiliar to you, imagine a situation where you work on a feature for, let's say, a month and a half. Then you're done and your manager says: well, okay, you and your two other developer friends are going to integrate on Monday, because on Thursday we're releasing. So from Monday to Thursday you have three days to integrate your code and make sure it plays nice with all the features developed by the other developers in your team. And this can be very stressful. Things might go wrong. You haven't really tested your changes against what other people in your project have done, so you have no idea how this is going to go. Continuous integration, meaning automated testing and getting your changes integrated into the mainline as you go, helps you mitigate that and be much calmer and less stressed.

Now, on to continuous delivery. You can think of continuous delivery as an extension of continuous integration. Continuous integration is all about taking your code, building your application, testing it, and getting those changes integrated into the mainline. What continuous delivery does is extend this automation to the release process. Everything gets built and your app gets ready to be deployed with just the click of a button. And this click of a button, this human element, is crucial to what continuous delivery is. The automation is introduced up to a point, so you still have control over when your application, your code, gets released to production. You still have the opportunity to look over the logs or manually test the app on the staging environment; you keep a high degree of control over when you release your code into the hands of users or customers.

And then, finally, the third of the trio: continuous deployment, which again takes the entire idea one step further. What I mean by that is that in the previous two practices, the journey of code from development and writing through automated tests has been automated, but continuous deployment automates the deployment as well. So provided your changes don't break anything and pass all the automated tests defined in your project, your application gets to production very, very quickly.

So those are the three concepts behind the two abbreviations. And you might ask yourself: why would you want to put so much emphasis on the continuous aspect of these three concepts? There's an old wisdom that if you practice more, you get better, and it's been paraphrased to fit the IT world a little better: if it hurts, do it more often. Things like releasing, integrating, and testing may not hurt you physically, but they might be problematic, they might be stressful, they might create situations that you are not sure how to handle.

3. Benefits and Pillars of CI/CD

Short description:

Continuous integration and delivery offer several benefits, including fewer errors, less stress, and constant feedback from users and customers. The 2010 Standish Group report revealed that 50% of software features are hardly ever used, highlighting the importance of feedback and customer input. When implementing CI/CD, it's crucial to remember the continuous aspect and focus on three pillars: transparency, inspection, and adaptation. Your CI/CD setup should be transparent, visible, and clear to everyone involved. Inspection ensures that successes are amplified and repeated, while mistakes are fixed and not repeated. Constantly adapting and evolving your setup is essential for continuous improvement.

But if you do these things more often, you will get a little bit better at them each and every time.

Now, I've just... Okay, sorry, I was checking if my camera is on. It is on, but maybe you're not seeing it because my presentation is full screen right now. Right. So this is the main reason why you should adopt this continuous approach: basically, practice makes perfect.

And with this practice-makes-perfect approach, there are benefits in it for you. The first would be that there are fewer errors. The changes you make are tested more often if you integrate more often. And I think you will all agree that it's much easier to fix bugs that crop up after small changes than to make huge changes and then try to figure out what actually went wrong.

The second benefit is the aspect I mentioned that I think is very important with continuous integration: there is less stress, no integration hell. You don't wait until the last moment to integrate. And when you are integrating as you go, you get many opportunities to automate mundane and error-prone tasks, so you don't have to do them yourself and worry that you're going to make mistakes.

Now, the third benefit would be the constant feedback you get. This feedback is twofold, because you're not only getting feedback on the quality of the code you yourself are writing, creating, or editing, but you're also getting constant feedback from the users of your application and from your customers.

It's not that unusual to see a lot of work, a lot of thought, a lot of passion even, put into the development of a project, only for it to turn out that the project is full of features that nobody really wants. And why is that? It's often because the devs are guessing what the customers want instead of getting a feedback loop going and finding out what features their users or customers actually want.

And this has been proven a few times already, but I think the finding that first made waves with the revelation that a lot of software features are hardly ever used was the 2010 Standish Group report. If you go online and research this report a bit, you'll see it's been picked apart because only a few internal apps were scrutinized, but the figure should still give you pause: 50% of features are hardly ever used. Would you want your project to get that stat? I wouldn't want mine to have 50% of its features hardly ever used.

So remember these benefits. But when you introduce CI/CD, this continuous integration and continuous delivery or deployment, is it okay to call it a day, to just high-five yourself and be proud of yourself? Well, you can be proud of yourself, because it's not that easy, but can you call it a day? I say nope. You cannot really do that.

Remember that I put emphasis on the continuous aspect of continuous integration, delivery, and deployment. That word, continuous, should keep ringing in your ears as you look at your setup, and here are three pillars of a successful CI/CD setup that keep that continuous approach, its spirit so to say, going.

First of all, your CI/CD setup should be transparent. Transparency is one of the three pillars. The process and practices you introduce should be visible and clear to everyone. Everyone should be using them, and there shouldn't be anyone working on your project who's above the law, who's not adhering to the process. If you're going CI/CD, if you're dedicated to getting this thing done right, make sure that everyone understands it and uses it.

The second pillar is inspection. Don't just set it and forget it. You should look at your setup continuously, again, and make sure that everything that went well because of the setup you have in place is amplified and repeatable, and that everything that went wrong, any gaps you might have in your process, gets fixed so that with the next release or integration you don't repeat the same errors. And as you inspect the process, pillars two and three are directly connected: with constant inspection, you should constantly adapt your setup, making it evolve to suit your needs a little bit better.

So this has been a very brief introduction to CI/CD. I hope you're warmed up; I know I am. Right now I'm going to move on to a bit of live demoing. I'm going to show you what you can do with Buddy and how you can implement various CI/CD practices to automate the delivery and testing of your application.

4. Setting up the Buddy Pipeline

Short description:

Let's set up a Buddy pipeline called Prod that will build, test, and deploy the application. The pipeline will run automatically when changes are pushed to the main branch. The first action will install dependencies and build the application. The next action will run the unit tests. Assuming the app builds and the tests pass, the built application will be transferred to a server using the SFTP action. Finally, a script will be run on the server to deploy the app.

Let me just check again if everything is okay here. Okay, I have my video, I have my audio. Right. So we're done with the basics; we're over the intro. Time to get our hands dirty. And we'll need a buddy, so to say, to use as the app that's going to be molded and processed by all the pipelines we are going to create today in Buddy.

For those of you who haven't had a chance to check out the repository we've made available yet, that app is a simple counter app written in React. Remember to fork it and you'll be able to try out everything I'm going to be doing right now. What you can see here is the initial view of your workspace after you sign in to Buddy, and to start working with a repository you click Create a New Project. Right here you can see that my Buddy workspace is directly integrated with my GitHub account, which allows me to take any of the Git repositories available to me, both personal and from other organizations I'm a member of, and start working on it with Buddy.

So what I'm going to do now is filter the repository list to find the name of the app. And here it is. Now the repository is imported into Buddy. You've seen how quick that was, and all I have to do now is go over to the Pipelines tab and start automating the different things we can achieve with Buddy. The first pipeline, the first thing we are going to automate, is a very basic delivery pipeline. The idea behind this pipeline is, first of all, that it runs automatically whenever you push new changes to the main branch of your project. Second, the pipeline is going to build your app and test it, and then it's going to transfer the files over to a server and run the application there. So let's set up a Buddy pipeline that does just that. Let's call this pipeline Prod; it's a pipeline we can consider our production pipeline, and it's going to build, test, and deploy the application. As I mentioned, I want the pipeline to run every time it detects changes pushed to the main branch of my project, so I have chosen the On Push trigger mode, and I'm going to add the pipeline.

Right, so as you can see, depending on the contents of your repository, Buddy suggests actions it thinks would be good to add to your pipeline. And I agree that a Node.js action is a good choice here. So my first action is going to install all of the app's dependencies and build the application. Oh, I'm sorry. Right, just a note: I'm going to try to stay quiet when I type to avoid making mistakes. I'm still going to make a lot, so please forgive me for that. Okay, so I'm going to add this action. That's the first one.
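
Under the hood, the Node.js action runs the same commands you would run locally. A minimal sketch, assuming the standard React build script:

```bash
# Action 1: install dependencies and produce a production build.
npm install       # pull in the app's dependencies
npm run build     # emit the static build (./build for a React app)
```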

Now, the next thing I want this pipeline to do with the code in my repo is run the unit tests defined for this application. So I'm going to click the plus icon here. I need another Node.js action, so I'm using the filter field to find one, and what I want it to do is run npm run test. This command triggers the tests that are in the application repository. Now, remember that these actions, these visual representations, are the steps the code goes through, and the order in which it all happens is from top to bottom. Whenever any of these actions fail (we can define the behavior, but the most basic one is that the pipeline fails), the run stops. So let's assume that the app builds successfully and the tests pass. What we want to do next is transfer the built application to a server, and I'm going to do just that with the SFTP action. Now, I set up a server beforehand; it's a simple Node.js live-server. What I'm going to do with this action and the next one is transfer the build to that server and then run a script that starts the app there. First of all, I need to define the path where the app is built in the pipeline filesystem. So this is the path, and then I need to point the action to the server where I want the app transferred. Now let me enter the server details here. Right. I'm going to authenticate with my private SSH key, so I'm going to enter the passphrase for my private key. As you can see, I can upload the key straight from my disk or paste its contents here. I think uploading the key from the disk is very convenient, so I'm going to do just that. And right here I want to specify the directory to which the contents of the app are going to be copied. If I'm not mistaken, I've set up this path to house the app. Okay, something went wrong. Let me check the server details and the passphrase, just to make sure. Upload the key from home. And let's see if... okay. All right. And now let's see. Okay, there we go. This is the beauty of live demoing.
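
Replicated by hand outside Buddy, the test and transfer steps look roughly like this; the host, user, key, and target directory are placeholders, not the actual demo server:

```bash
# Action 2: run the unit tests; a non-zero exit code fails the pipeline.
npm run test

# Action 3: copy the build output to the server over SSH/SFTP.
scp -i ~/.ssh/id_rsa -r build/ deploy@example.com:/var/www/counter-app
```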

5. Running Scripts and Pipeline Workflow

Short description:

To run a script on the server, connect to it and execute the necessary commands. You can save server details or SSH key as an environment variable for reuse. Set up a Slack notification to receive failure alerts. Push changes to trigger the pipeline. Buddy uses caching for faster subsequent runs. Visit the server to see the deployed app. Consider the limitations of a single-branch workflow.

When you get confused, you get a blackout and you're not really sure what to do for a moment. But now it's okay: I found the directory where I want to put my app, so I'm going to add this action. And now the last thing to do, or next to last, is actually running the script that starts the app on the server.

So to do that, I have to connect to the server and run some commands on it. The first command makes the run script, which is one of the build artifacts, executable; then I simply run the script. And again, I need to specify the server details.

Now, you might wonder: do you really have to do this manually every time you add a new action? Well, you don't. I could easily save the server details or my SSH key as an environment variable in my Buddy project and then reuse it across multiple pipelines. But as I'm starting as fresh as you guys are right now, I wanted to show you how this is done when you're doing it for the very first time. So now, let me go quiet for a second and set this up, again uploading my private key from the disk. And here I have to specify the directory in which these commands are to be executed, and this looks good to me. So I'm going to add this action.
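
Done by hand, the SSH action's job amounts to the following. The script name run.sh and the paths are assumptions; the actual artifact names aren't shown on screen.

```bash
# Connect to the server and start the app from the deployed directory.
ssh deploy@example.com <<'EOF'
cd /var/www/counter-app
chmod +x run.sh    # make the run script from the build artifacts executable
./run.sh           # fire up the app
EOF
```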

Now, this pipeline, this set of actions running from top to bottom, should get my app built and tested (it tests the app's basic logic with unit tests), then upload the built app to the server and make it run there. But what if something goes wrong? Well, in Buddy we have a few different categories of actions. I've added these four actions in the primary actions category, so these are the main actions that run in the pipeline. But you also have actions that run on failure, actions that are supposed to run when something was wrong with the pipeline but it went back to normal, and two other categories of actions. The most useful for many of us is going to be the category of actions that run on pipeline failure.

So here, I think a very good practice is to send out a notification when something goes wrong with your pipeline. What I'm going to do is add a Slack notification that's going to send a message to a test channel I have set up on my Slack. And you can see here, in the content of the Slack message, that it's going to inform me if there's a failed execution. So the whole idea of this setup is that if everything goes okay, I'm not going to get any notifications, and if something goes wrong, I'm going to get a notification sent to the Slack channel I defined. Okay, so this looks good.

Remember that I set this pipeline to run on push. So now we have to push some changes to the repository to trigger it. I'm going to leave you with this view and open my terminal window. Actually, no, I'm first going to edit the code of the app so that we have some changes to push, right? And let me think how to show you that the application has been built and deployed. As you can see, this is the initial state of the code of our app, and the header reads Best voting application ever. So maybe let's change it to something more specific, like Hello JS Nation. Now, if everything goes according to plan, and I sure hope it does, this pipeline should get triggered automatically by me pushing changes to the main branch of the repository, and when we open the app running on the server, we should see the counter with this Hello JS Nation header. So let's do that. Let's see if there are unstaged changes. Yes, there are. I'm going to add them all, then commit them with a message. Oh, let's say V1; that's good enough. And now I have to push the changes to my repository. Right, and as you can see, the push has completed and the pipeline is triggered and running. You can view it this way if you're not interested in the details, but you can also go into the Activity tab and see exactly which step is currently running. You can inspect the logs of each step, make them full screen; whatever works for you when it comes to inspecting what the pipeline is doing at the moment.
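
For reference, the push sequence from the demo is just the usual three commands:

```bash
git add -A               # stage the header change
git commit -m "V1"       # "that's good enough"
git push origin main     # pushing to main triggers the Prod pipeline
```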

What you have to keep in mind when you're working with Buddy is that there's caching involved, so the first run of every pipeline is going to take a little longer than each subsequent run. On the first run, whatever Buddy can cache is cached, and subsequent runs use this cache, so everything takes a little less time than initially. But even though this was the first run of the pipeline, you can see that it took us, what, a minute and eight seconds. So this is very quick. Now let's go and visit that server I set up and see our app: Hello JS Nation. The app is running. This is the app that's going to be our companion for this workshop. Now, how it works, which is good to know for later when we discuss tests: the app starts with the counter at zero. This button is the add button; when you click it, it adds one. And this button is the subtract button; it subtracts one from what you see in the counter. So that's the app, our companion for the next steps. We are going to be exploring different ideas and different pipelines with this little thing here.
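
As an aside, a run script that serves a static build with live-server could be as small as the sketch below. This is only a guess at its contents; the actual script among the build artifacts isn't shown in the workshop.

```bash
#!/usr/bin/env bash
# Hypothetical run.sh: serve the static build on port 80 without
# opening a browser (live-server defaults to port 8080 otherwise).
npx live-server . --port=80 --no-browser
```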

Now, this pipeline runs on push to the project's main branch. This is something we could employ in a very simple workflow where we push every change to master: there's a single pipeline that runs some basic tests, makes sure that everything is okay, and automatically deploys our application to a server we defined. And while this might be a viable solution in the short term, with time you're probably going to see that working with a single branch is problematic and can create a lot of trouble for you, quite a kerfuffle, when you have to do a few things in parallel. For example, imagine you're working on a new feature and it's, like, halfway merged.

6. Setting up Pipelines for Feature Branches

Short description:

When using feature branches, it's recommended to dedicate one long-running branch as the production-ready branch and delegate other work to different feature branches. To set up pipelines for feature branches, create a new pipeline triggered by different criteria. Remove deployment actions from the new pipeline and leave only the actions for building and running unit tests. Sending notifications on pipeline failure is also recommended. To demonstrate, create a new branch and make changes that cause the pipeline to fail. After receiving a notification, fix the app and push the changes again to trigger the pipeline.

So imagine half of that feature work is merged into the main branch, and all of a sudden it turns out there's a serious bug on production. What you instinctively, naturally want to do is patch up the app, but your master branch is not deployment-worthy, because you've taken it apart, you've dug deep, and you still have a few things to do. So it's a good practice, a good idea, to dedicate one long-running branch to be your production-ready branch.

Then all of your work, whether refactoring or bug fixes or whatever else you might want to do with your code, you should delegate to different feature branches. Now, how would you want to set up your pipelines when you start employing feature branches? As you remember, we set this one up to run only when there are new changes pushed to the main branch of your project, but with feature branches you're obviously going to push new changes to new feature branches. So what I suggest is creating another pipeline that's triggered by different criteria, so to say. Let me show you how to do that.

So our starting point can be this production pipeline. You can hover over the pipeline bar and click Copy, which allows you to copy the pipeline. You can also copy pipelines over from your other projects. Even at this point, you can choose whether you want to copy all actions from this pipeline or only selected ones, but to show you, I'm going to clone all actions. As you can see, my pipeline has been duplicated, so I'm going to edit it. First of all, I'm going to change the name of the pipeline so that we don't get confused. This pipeline is going to run for branches and it's going to run unit tests, so maybe let's call it Unit tests branches: a very descriptive name, but it should work. Again, the On Push trigger mode is just fine for this one, as you want your unit tests to run whenever you push something new to one of your branches, so that you are sure that at no point in time are you breaking your app. But instead of this pipeline being triggered by the single main branch, I'm going to change the trigger to branches by wildcard. What that does is expect pushes to all branches other than the project's main branch. So if I create any other branch and push any changes to it, Buddy is going to take that branch and run automated tests on it. This looks fine; I'm going to save it. Now I'm going to switch over to actions and show you how to adjust this pipeline to fit this use case better. What we need to do is get rid of the two actions that were added to deploy the application to the server and run it. I could simply use this switch to turn them off, but just to keep things nice and clean, I'm going to remove these two actions. If you open an action, on the right-hand side there's a Remove this action button which lets me completely remove an action from a pipeline. So what we are left with now is two actions: building the app and running the unit tests. One more action that I'm going to leave in this pipeline is sending notifications to the channel I defined, because you really want to know when your pipeline fails, right? So this looks fine. Now, to show you how it works, I'll have to create a branch, and maybe let's first see what happens when this pipeline breaks. And from what I can see, I do not have my Slack open, so let me open Slack, because remember, when the pipeline fails, I'm going to get a Slack notification on my channel. Okay, right. So over to the code of my app. As I mentioned, we have unit tests defined for the application; they test the logic of the app and expect the counter to start at zero, the result to be one after an addition, and zero again after a subtraction. So the easiest way to show you what happens when there's an error in the app logic and the unit tests fail is simply altering the initial state of the counter. You can see that we simply have to change this state from zero to one to make this poor little app blow up. Well, maybe not blow up, but it's going to be a bit confused, as you'll see in a moment. Okay. But you know what? I haven't... oh, there's caps lock. I'm going to restore the state of this file, and the first thing I'm going to do is create a new branch from my project's main branch. Am I the only one who gets confused with the naming?
For all those years it was master; now it's main. Right, so, the project's main branch. I'm going to run... let's call this branch break-stuff. Why not, right? Okay, so now that we are on the break-stuff branch (a Limp Bizkit reference; Limp Bizkit's not the best band, but they had a few good songs), I'm going to change the initial state of the counter. Now, when I push these changes to this branch, this pipeline will run, fail, and send a notification to the Slack channel. So let's check that. You can see that there are unstaged changes; I'm going to add them all and commit with a message like yikes, because I know that I'm doing something wrong. And now I need to push, and of course, as this is the first push to a new branch, I have to set that up. Okay, and now, three, two, one, bombs away: the pipeline is triggered, and I expect nothing else from it but a nice fat red failure showing up here in a minute. I'm going to use this moment to hydrate a bit. Right, so remember that our main production pipeline took about a minute and eight seconds to run from start to finish, including the deployment to the server, so I'm expecting this one to run in under a minute, I guess. We might not be able to get it done in under a minute. And now the tests. As you can see, as I predicted, the tests have failed. Now, the interesting aspect of this pipeline was the action that was supposed to run on failure: sending a notification to the Slack channel I defined. So let's switch over to Slack and check if I got a new message. 3:47 PM, which was a few seconds ago: you can see that there was a failed execution on the break-stuff branch in my project. Right, so what should I do now? Well, maybe it would be a good idea to fix the app, just so that we don't leave it handicapped for the future. So I'm going to revert the initial counter state to zero, again add the changes, and commit something like fix. Here we go. And now I'm going to push this. Let me switch quickly over to Buddy and show these two things at the same time. When I push again, the pipeline should be triggered. It is triggered indeed.
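
Condensed into shell, the whole break-and-fix cycle on the branch looks like this (branch name as in the demo):

```bash
git checkout -b break-stuff      # branch off the project's main branch
# ...change the counter's initial state from 0 to 1...
git add -A && git commit -m "yikes"
git push -u origin break-stuff   # first push to a new branch sets upstream
# pipeline fails, Slack notification arrives; revert the change:
git add -A && git commit -m "fix"
git push                         # pipeline runs again and passes
```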

7. GitFlow Workflow and Staging Environment

Short description:

Subsequent runs take less time due to caching. First execution is the longest. The pipeline runs successfully. GitFlow workflow divides branches into master, feature, and develop. It allows for new features and releases. GitFlow is widely adapted but has some critics. Another pipeline is needed for staging environment. Sandboxes in Buddy are virtual machines for staging. Sandboxes can be integrated with pipelines. Create a new sandbox with desired configurations. Install Node.js and Live Server. Set up the app in the staging environment.

You can see, as I mentioned, that thanks to caching, subsequent runs take a little less time. The first execution always takes the longest; subsequent executions are snappier and quicker. And this pipeline, since there is nothing wrong with the app, is going to run successfully. As you can see, we can go to Activity and see that execution number two, with the commit message fix, has been successful and took only 29 seconds to run. Okay. So that's it for a very basic setup. Now at least we are not working directly on our production branch when we are working on new features. Just to prepare ourselves for the upcoming parts of this workshop, I'm going to change the trigger mode of this pipeline to manual so that it doesn't get triggered while I show you new things. Now we've created these two basic pipelines, and with this setup, where we have the main branch and feature branches for different features, we are getting very close to a workflow that's very popular and not so new. So let's go back to the presentation, do a little bit of theory, and discuss a couple of Git workflows that can help you organize the work and the CI/CD setups in your projects.

The first Git workflow I'd like to discuss, and the one that the two pipelines I just created are getting us closer and closer to, is GitFlow. GitFlow, as I mentioned, is not a new invention; it was described in a blog post back in 2010 by Vincent Driessen. The main premise of this workflow is dedicating different long-running branches, and a couple of different short-running branches, to different purposes. So we have our master branch, which, as I mentioned, is supposed to be production-ready; its code should be ready to deploy at any given time. Then a clone of that branch is the develop branch, and this branch is set up to accept whatever new features or changes you and your team are working on and preparing. And how do you work on these changes? You branch off from the develop branch and create new feature branches. When you're done working on a feature or refactor, you merge your changes back to the develop branch. Now, how do you create new releases in this workflow? At some point, when there are enough new features, maybe a set amount that you and your team have decided you need before the next release, a release engineer or release manager comes, inspects the develop branch, and says: hey, we have the five new features here that we said we'd deliver in the next quarter, so let's take these and release them. So the release engineer or manager branches off from the develop branch, takes the code at that moment, and says this is going to be our release. You then tweak the release branch, make sure that everything there is as tight as can be, that all kinks are ironed out and nothing is missing from that release. And when you're ready, you create the release and push it to the master branch. Then you synchronize: you merge the master branch back to develop, and you can start the cycle over again. So you have two types of short-lived branches: number one is the feature branch, and the other is the release branch, though I think it can live a little longer than a feature branch. Then you have the hotfix branches, which focus on fixing bugs found on the master branch. Now, this workflow has been adopted very widely. Remember that it was first described back in 2010, so nowadays, if you research GitFlow, a lot of people frown upon it and say it should no longer be recommended. But I think everybody should try it; when it's managed the right way, it can work very well for projects of different sizes. And remember that this is just an idea, a blueprint of a workflow that you can use and then tailor further to suit your needs. This GitFlow workflow has a three-tier division between branches, at least in my opinion; there are three main tiers: the master branch, the feature branches, and the develop branch.
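
In raw Git commands, one GitFlow cycle looks roughly like this. Branch and version names are illustrative; Driessen's original post also recommends --no-ff merges so the branch history stays visible.

```bash
# Long-running branches: main (production-ready) and develop.
git checkout -b develop main

# Feature work: branch off develop, merge back when done.
git checkout -b feature/dark-mode develop
# ...commit work...
git checkout develop && git merge --no-ff feature/dark-mode

# Release: branch off develop, stabilize, merge to main, tag,
# then sync main back into develop and start the cycle over.
git checkout -b release/1.2.0 develop
# ...iron out the last kinks...
git checkout main && git merge --no-ff release/1.2.0
git tag -a v1.2.0 -m "Release 1.2.0"
git checkout develop && git merge --no-ff main
```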

Now let's close the presentation for a minute and go back to our pipeline setup. As you can see, we have the pipeline that runs for our production, so for our master branch, and we have our pipeline that runs unit tests on branches. We could rename it to, say, Feature, so that we know it runs on feature branches. What we're missing in this three-tier pipeline division is a third pipeline that would build our app and deploy it from the develop branch to some sort of staging environment. To save some time, I'm not going to create another long-running branch in my repository, but I can show you how you could set up a pipeline to release an app to a staging environment. You might be thinking: well, you've just shown us how to deploy the application to a server, so how is this going to be different? Well, when you think about creating a staging environment, there's the aspect of infrastructure: you have to host that staging environment somewhere, and as we know, the more servers, the more money you have to pay for infrastructure. If you're using Buddy, you can benefit from one of our new internal features, which is just great for setting up these staging environments. This feature is called sandboxes. If you look over at the left navigation bar, you can find it right after Pipelines, so I'm going to click it. What are sandboxes? Again, not the main focus of this workshop, but to explain it in a few words: sandboxes are virtual machines running different distros of Linux. There are a few to choose from, they run internally on Buddy infrastructure, and you can set up such a virtual machine and integrate it almost seamlessly with your pipelines; for example, use it as the deployment target for your application, acting as a staging server where you can test your app manually or run some additional tests, which, by the way, I'm going to show you in a later part of this workshop. So right now, let's quickly create a new sandbox. As I mentioned, there are a couple of Linux distros to choose from, and with playbooks you can define what you want your sandbox to come pre-installed with. I'm going to need Node.js, since I'm going to use a Node live-server to run the app. You can of course define the name of the sandbox; I'm going to call it staging so that it fits our GitFlow theme. You can adjust the resources you give the sandbox, but I'm going to leave the default value and click Create sandbox. It takes a few seconds to launch; as you can see, it's already online, very quick, and it waits for a service at port 80. So what do I have to do with this sandbox now? As I mentioned, it's a Linux virtual machine and it comes with Node installed, so I have Node.js and npm here. What I want to use to run this app in the staging environment is live-server, so I'm going to install that. Okay. And just quickly, I'm going to create a new directory. Okay, it's here. Hmm.
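
The sandbox preparation amounts to two commands in its terminal. The directory name below is a placeholder of mine; the transcript doesn't name the one created on screen.

```bash
# Inside the staging sandbox (a Linux VM with Node.js preinstalled):
npm install -g live-server   # the static server that will run the app
mkdir -p ~/staging           # target directory for the pipeline's upload
```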

8. Deploying to Staging Server

Short description:

We set up a pipeline to deploy the built application to a sandbox server. The pipeline runs the Sandbox upload action to transfer the files and the Run commands in sandbox action to run the script that fires up the app. We also added a notification action for transparency. The pipeline ran successfully, and the app is fully functional on our staging server.

Right. So now we have this sandbox running Node live-server, and I have created a directory where I'm going to copy the built application. Let me switch over to Pipelines. Since this pipeline in our scenario is going to be very similar to the production pipeline, I'm just going to copy the Prod one. I'm going to clone all actions again, and let's tweak that. I'm going to call it Stage, or we could even name it Develop after the branch name, but let's call it Stage: deploy to staging environment. That's going to be the name of the pipeline. I'm going to trigger it manually from the main branch, just to show you the app running on a staging server within Buddy infrastructure. Now for the actions. Again, I need to get rid of the two deployment actions, and in their place I want to do basically the exact same thing, but instead of transferring the files to an external server, I want to transfer them to the sandbox server I just set up in front of you. So let's filter the actions by the word sand, and here you can see that we have a few actions that deal with sandboxes. The first action I need to add is Sandbox upload, and this one is going to upload the files from the build directory to the directory I created. So let's add this action. And now, as you remember, the final step before the app actually runs is running the script that fires it up. So again, I'm going to filter by sand, and here you can see Run commands in sandbox, with the SSH icon, so it looks familiar, right? We have this terminal view. Again, I need to make the run script executable and then simply run it. And as with the actual server, I need to tell Buddy in the context of which directory these commands need to be run. Let me find that staging directory you saw me create; it's here. I'm going to use it and add this action. Now, I'm going to leave the notification action here, as I think it's a really good practice to add notification actions to your pipelines, to be as transparent as you can. But for now, let's run the pipeline and see how our app looks when it runs on our staging server. Again, as I mentioned, I'm running the pipeline off the main branch because I didn't want to spend additional time configuring the repository and didn't want to create the develop branch. For today's workshop, it's technically enough to show you everything running on the main branch and a few feature branches. But if you were to employ this in your project, you'd want to create a separate develop branch where you would run this pipeline, preferably on push. Then, as you remember from the presentation where I explained GitFlow, whenever your work on a feature branch is done and you merge the feature back to develop, this pipeline will trigger, test the app again with whatever tests you want, and then deploy the app to a staging server where your QA team could check it out, play around with the app a bit, and inspect it on their own. Right. So, as you can see, the pipeline has run successfully, again in about one minute nine seconds; as you remember, subsequent runs get a bit quicker every time. Now let's switch to Sandboxes and check out our app running on Buddy infrastructure. As you can see, it has the same heading we set up with our first push to the main branch.
So, hello JS Nation, and the app is fully functional. As you can see here in the address bar, the application is running on our very own staging server and not on the external server I set up, which has a different address.

9. Exploring Forking Workflow and Branch Protection

Short description:

Git flow is not the only Git workflow. The forking workflow is common in open source projects. Forking protects the main repository, allowing changes to be made in a cloned version. Pull requests are used to propose changes to the main project. To ensure code quality, pipelines can be set up to run on pull requests. Branch protection rules can be used to prevent merging of code that doesn't pass the chosen pipeline.

Okay, so this has been GitFlow. So far we've been over the basics and created three pipelines that would work great together with GitFlow. But GitFlow is not the only Git workflow, and it's not necessarily the best one. I'm not telling you to use one or another, but I think it's worth exploring some other workflows, so let's do that.

Another Git workflow you're going to encounter in the wild is the forking workflow, and when you see forking, I bet it's an open source project; when you have to fork things, in 90% of cases it's going to be open source. Now, why this assumption that this is a good workflow for open source projects? Well, first of all, it protects the main repository of the application: no changes are made directly to it. Instead, the application is forked, and forking is nothing else but server-side cloning. Just as when you clone the application to your local machine, when you fork it, you clone it within the infrastructure of your Git provider's servers. So the idea behind forking is that you clone the application within the Git provider's infrastructure, then you clone that fork to your local machine, and only then can you start working on any features or refactors, committing to your copy of the main repository.
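
The local half of the forking workflow, with hypothetical repository names, looks like this:

```bash
# Fork on the Git provider first (a server-side clone), then:
git clone git@github.com:your-user/counter-app.git   # clone your fork
cd counter-app
git remote add upstream git@github.com:original-owner/counter-app.git
git fetch upstream                  # track changes in the main project
git checkout -b my-feature          # commit work to your copy only
```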

Then, when your work is ready, you create a pull request. I bet all of us are familiar with what a pull request is, but in a nutshell: you're asking the maintainers of the main project, the initial project, to have a look at your changes and add them to the main line of their code. This is a very secure way to work on code for large distributed teams, which are very common in the open source world. Now, if you're the owner of a repository that you expect to be forked, you might think it would be a good idea to validate the changes people propose for the main line of your code with some sort of test suite you define, so that you don't have to test everybody's ideas manually.

So let's step back from the theory again and have a look at creating a pipeline that takes these pull requests coming from forks into consideration. And let's also have a quick look at securing your own repository, your own mainline of code, and making sure that nothing goes untested before it is merged to the main branch. So, quickly, let's have a look at how you could create a pipeline that runs when somebody creates a pull request because they've forked your repository and want to introduce some changes. What next? What I would do is take the pipeline that runs our unit tests and copy it, because for a quick sanity check, to ensure that nobody tries to introduce changes that break our app, the quickest way is running the unit tests. What we need to change, though, is what triggers the pipeline.

As you can see here, we can have the pipeline triggered by a single branch, a single tag, or all branches or tags matching a wildcard. But what we're missing is pull requests triggering the pipeline. Doesn't Buddy have such a feature? It sure does, if I'm talking about it, right? But by default, just to keep things nice and secure, it's disabled. If you're not expecting anybody to fork your repository and this is disabled, then if somebody forks your repo and tries to run some malicious code in a pipeline you set up, they won't have a chance, because the pipeline won't trigger automatically: you haven't enabled pull request support in your project. But if you want to enable it, it's very simple. When you're in your project, go over to Project settings in the left navigation bar, and if you scroll down, you'll see a Pull requests area that allows you to enable support for pull requests from forks. All you need to do is click that switch to make it green and save the changes. Now, when I go back to Pipelines and open this copy of the unit tests pipeline, you'll see that I can choose more trigger modes; I can define that pull requests trigger this pipeline. As you can see, I can choose pull requests by wildcard, which means that all pull requests will trigger this pipeline. So on push, whenever new changes are pushed to any pull request for your repository, this pipeline will run and check them. Let's rename the pipeline so that it's clear: this will be PRs, and it will run on PRs from forks, right?

Now, let's say you have this pipeline setup implemented, either a pipeline that runs unit tests on a feature branch or one that runs unit tests on one of the forks, and the tests fail. With the current setup, there's nothing really stopping you or somebody else on your team from merging changes that don't pass your pipeline into the code mainline, which would break it, right? So what do you do to ensure that nothing that doesn't pass a chosen pipeline can be merged into the mainline? It's very easy to set up; three words you need to know: branch protection rules. This is a feature in GitHub. It's available with other Git providers as well, but I'm going to show you how it works in GitHub. If you have a look at your repository, there's this bar on top, a row of different tabs; these are the repository settings. And if you go over to Branches, you can see branch protection rules. Let's add a new rule. The first thing we need to do is define a branch name pattern, or the specific name of a branch that's going to be protected by this rule. What I'm interested in showing you is securing the project's main branch, so I'm going to set this up for the main branch; it's going to be protected by the rule I'm about to define.

Now, the first of two options that is of interest to us is requiring status checks to pass before merging. What this means is that GitHub will expect a status check to be successful — to get a successful status from Buddy — before it allows you to merge code into the main branch. You can see that the field is empty, but thanks to the integration between Buddy and GitHub, I can start typing the pipeline name; let me quickly check what the name was — it started with 'feature'. So if we try... sorry, not this, but this. Okay, right. So why didn't it find the pipeline name with 'feature' in square brackets? Because the initial name of the pipeline was 'unit tests on branches', and the name hasn't refreshed yet in the integration between Buddy and GitHub. But this is the same pipeline — if you remember, 'unit tests on branches' was its first name before I renamed it. So this is that pipeline, and GitHub will require it to pass — to be green — before it allows code from any branch to be merged to the main branch. The other option that should be checked, for your own good and for that transparency pillar you should remember from our introduction, is to include administrators. Nobody's above the law: nobody can push directly to master, nobody can merge anything that doesn't pass the pipeline we have chosen here. I'm going to save that and show you how it works. I'm again going to go back to our break stuff branch, but first, let's create a pull request — because now I won't even be able to push directly to master; setting up this protection rule effectively turns off the ability to push changes directly to master, so every change has to go through a pull request. Now, here's a tip for everybody working on forked repositories — I've done this wrong so many times: when you want to merge something into your fork, not the repository you forked, make sure to choose the right base repository. So I'm going to choose this one here — my fork — and I want break stuff merged into the main branch. So here's a pull request, and as you can see from the commit history, the first commit was the 'yikes' commit, which broke the app, and the next one was the fix commit, which actually fixed it. You can see that I can merge this pull request because the required check — the unit tests on branches pipeline — has passed. But let's see what happens if I break the app again and make sure the unit tests fail.
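Since the protection rule turns off direct pushes, the day-to-day flow from the command line looks roughly like the sketch below. The branch name matches the demo; the rejection message is the kind of thing GitHub prints (you'll see the same "protected branch hook declined" wording in the logs later on):

```sh
# A direct push to the protected branch is now rejected by GitHub:
git push origin main
# ! [remote rejected] main -> main (protected branch hook declined)

# So every change travels through a branch and a pull request instead:
git checkout -b break-stuff
git commit -am "yikes"
git push origin break-stuff
# ...then open a pull request against main and wait for the required check
```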

10. Fixing Pull Requests and Trunk Development Model

Short description:

To fix a broken pull request, change the trigger mode of the pipeline to manual, fix the application, and push the changes. In the trunk development model, there is a single main branch called the trunk, where long-running code resides. Feature branches are created for small chunks of each feature, and as soon as a piece of the feature is completed, it is merged to the trunk. This approach emphasizes small, well-tested changes and frequent releases. However, it requires continuous deployment and may not be suitable for all projects.

If you remember, my favorite way to break this app is to change the internal state of the counter. So I'm going to change it to one again — but first let's make sure I'm on the right branch. Yes, I am. Let's save it. Are there changes? Yes, there are. Now let's add this change and commit it as 'break again'. Okay, ready? And I'm going to push this change. Again, this triggers our unit tests on branches pipeline as well as our PRs from forks pipeline. Let's cancel the execution of the forks pipeline — we don't need that — and since I'm working on a very limited set of branches with a lot of pipelines, let me make sure that pipeline doesn't get in our way again. In the meantime, as you can see, the pipeline has failed: I broke the app, the unit tests don't pass anymore, and you can see that I cannot merge this pull request. So what can I do to make this pull request mergeable again? I can simply fix the application. First of all, as I mentioned, I'm going to change the trigger mode of that pipeline to manual, so it doesn't run on its own when we push changes to the branch. Okay, now back to the application: I'm going to return the counter to its original initial state, which is zero. Now stage, commit — 'fix again', that's going to be the name of the commit — and push. Right, and here we go: the pipeline runs again. And now, since we've run this pipeline a couple of times, let's check the name — remember the name that was outdated? Yep, you can see that since we've run this pipeline a few times already, the new name has been added to the list of available pipelines. So I'm going to make the check with the current name a required check and remove the check with the old name, so that GitHub doesn't get confused — I can see it's already getting a bit mixed up about which check should report a green status. Let's see how this works out for us: can we merge this or can we not? It says it's still pending — yes, it is. When this runs successfully, the status will be reported back to GitHub and we'll see that we can merge the branch into our code mainline, which lets us finish this epic journey with a branch that was broken and fixed, then broken again and then fixed again. Well, that's how life goes: we are all initially fixed, then we're broken, then we're fixed again, and so the story goes. Okay, enough philosophy — back to Git workflows. The last Git workflow we discussed was the forking workflow, along with branch protection rules and running tests when somebody creates pull requests to your repository after forking it. So: Git workflow number one, which is maybe dated but widely adopted — I think it can work just fine if you plan it out well. Then we have forking, which is very common in open source and also allows big groups of collaborators to work securely on one repository. And now let's have a look at something considered very modern, preached by many as what all software development projects should adopt right now: the trunk development model. Why trunk? Because we have a single, long-running main branch, and this branch is considered to be the trunk of our beautiful code tree, right?
We already have the term branches, so why not add another biology-related term to our Git terminology. So our master branch — our long-running mainline of code — is the trunk of the project, and we work on code on feature branches. On the surface this seems very similar to GitFlow, but as you can see from this diagram, there are only two main types of branches: your long-living master branch and your short-lived feature branches. And when I say short-lived, I mean it. With GitFlow, a feature branch basically lived for as long as you worked on the feature. With trunk development, you create branches for small chunks of each feature. That might be a bit unclear, so let me elaborate. Imagine you're developing a feature and you need to do, let's say, three things: change something in a form on your website, update a database schema to accommodate that change, and maybe change some styling to make the updated form look just right. If you worked in GitFlow, you'd do all three steps on a single feature branch, because you are, in fact, working on a single feature — maybe from a single task that your product owner defined for you. But in the trunk development model, you'd chop this one feature into three smaller chunks, and as soon as you're done with one piece, you'd merge it to master (see the sketch after this paragraph). So: very small changes, integrated extremely quickly, tested extensively with automated tests — a big part of working in the trunk development model is creating automated tests that make sure whatever you've written works just fine within the context of the app. Let me consult my notes to tell you more. Yes — of course, the smaller the change, the better. This idea of doing small chunks often is a recurring theme, and the trunk development model takes it to the next level, so to say. And why would you do that? Here comes the idea of risk: how much work you're doing within a given period of time, and the time between releases. Because not only are you integrating very small chunks of code into your code mainline, you are also releasing your application, your website, whatever, to the public very often. And if you look at this graph of risk against time between releases, you can see that the risk gets bigger as you make the time between releases longer, while with short periods between releases the risk stays relatively small. So with trunk development — small chunks, well tested with automated tests, releasing often — this is the way to go. Many people argue this is what software development projects should do right now. There are definitely many things right with that approach, but it's not perfect for everyone. Working in the trunk development model implies that you are going to be employing continuous deployment, and this might not be viable for your project if it's not very mature, or if you don't have a team big enough or experienced enough — for example, if you have a lot of junior developers working with you, people whom you need to micromanage a bit because they're at the beginning of their journey.
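To make the "three chunks" idea concrete, here's a minimal sketch of how that one form feature might land on the trunk as three short-lived branches. Branch and commit names are made up for illustration:

```sh
# Chunk 1: the form change -- branch, commit, merge as soon as it's green
git checkout -b form-field main
# ...small, well-tested edit...
git commit -am "Add email field to the signup form"
git push origin form-field        # open a PR, tests pass, merge to main

# Chunk 2: the schema update, on its own short-lived branch
git checkout main && git pull
git checkout -b schema-email
git commit -am "Add email column to the users table"
git push origin schema-email      # PR, tests, merge

# Chunk 3: the styling tweak, same drill
git checkout main && git pull
git checkout -b form-styling
git commit -am "Style the updated signup form"
git push origin form-styling      # PR, tests, merge
```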

11. Setting up Pipelines and Release-on-Tag Workflow

Short description:

Setting up pipelines in this model involves creating two types of pipelines: one for testing changes on branches and another for testing and deploying the app on the production environment. The release-on-tag workflow is recommended for projects like npm packages or Dockerized applications. It involves creating a pipeline that builds a Docker image with the application, pushes it to a Docker Hub repository, and connects to a server running Docker. Docker is a virtualization technology that allows you to bundle an application with its dependencies into a single image, making it portable and easy to run on any machine. Building a Docker image requires a Dockerfile, which defines the image's contents and build process.

So again, keep in mind that this is something to aim for as you move forward, not something you should force yourself to implement, for example, right after this workshop. The gains from adopting this model: you deliver quickly, and you'll probably have a better relationship with your clients, because the feedback loop is small and you can often check what sticks and what doesn't — you'll know whether the direction you're heading is the right one or whether you should change it. You'll probably get better stability, and you'll detect bugs quicker, since there are so many automated tests running in this model. What it requires from you: discipline, automation, and something that might be hard to obtain — approval from management. Management tends to like big releases; frequent deployments and small updates they tend to look at a bit suspiciously. So, good luck convincing your managers to employ this workflow. But if you can, it could be highly beneficial for your project.

Now, how would you set up your pipelines in such a model? Let's look at what we have already. You'd really be good to go with two pipelines. One that tests the changes on your branches — so the pipeline we set up, the 'feature: unit tests on branches' one, running on all your branches with unit tests and maybe some additional tests you define. And then a pipeline that tests the app again in the production context and deploys it to your target infrastructure. So, really, two types of pipelines — but they need very well-written tests that exercise your app thoroughly. Because remember: as soon as you push and it goes green, it gets to production, and there's no turning back.

Now, from the practical point of view, I think it's worth having a look at what I call a release-on-tag workflow. Again, it's not standalone or very different from the workflows I described in the previous section, but it's a workflow that works great if you're building something like an npm package that needs to be versioned, because your package is used as a dependency, in different versions, in different projects, and you're creating releases in your repository by tagging. Or maybe you're using Docker in your project — Dockerizing your application and versioning the Docker images. So how would you handle a workflow where you release a new version to production when you create a new tag? Well, of course I could add a new pipeline just as you've seen me do a couple of times already and walk you through it step by step, but instead of adding actions manually, I'm going to use the import pipelines option, which lets me import a new pipeline described in a YAML config file. I have such a file on my desktop, in the pipelines directory — and a little spoiler from the file names: there are two other pipelines that I'm going to import and show you later, but first let's focus on this one. As you can see, we have a new pipeline in our repository called 'release on tag push'. Let's dive in and check what this pipeline is all about. Let me take a sip of water first — talking a lot, not drinking enough. Now, the trigger mode should be familiar to you: it's the on-push trigger mode, so whenever something gets pushed to the repository, this pipeline will run. But what is it paying attention to? All tags — tags by wildcard. In practical terms: whenever a new tag is created in our repository, this pipeline will run. And what actions does it perform? First of all, it builds a Docker image with our application. Let me stop at Docker for a second. I think at this point most of us know more or less what Docker is — and if somebody among us doesn't, you really should get to know it, because it's the future, or maybe even the present, of development, and it's very widely used. In two words: it's a virtualization technology that bundles the application together with its entire environment and dependencies into a single file, a single image, and allows you to run it on any machine. Without recreating an environment for the app, all you need to do is install Docker on a supported machine and use it to run that image, which is in fact the application in a little jar with all the things it needs. You just run it with Docker and the app runs beautifully, the same on all machines that run Docker. Basically, just as with a console: you put the disc in and it plays beautifully whether you have a PlayStation 4 or a PlayStation 4 Pro — you insert the disc and don't worry about anything. So that's what we're doing with our app: we're building a Docker image. How do you build a Docker image?
To build a Docker image for your application, you need to create a Dockerfile — and if you haven't noticed already, we do have a Dockerfile in our directory, at the top level of the repository. The Dockerfile defines what's going to be in our Docker image and how it's going to be built. So we're building a Docker image with our application using that Dockerfile. Then, with the second action, I'm pushing the image to my Docker Hub image repository, and I'm tagging the image using an environment variable: the Buddy execution tag. Basically, Buddy reads the tag that I created in the repository and applies it to the execution of this pipeline, and this execution tag also gets applied to the Docker image that we build and push to the Docker Hub repository. So that's the second action. And then there's the final action, which connects to a server. This is yet another server I have set up, on external infrastructure, and the only thing fancy about it is that it runs Docker. It has to: if I want to run a Dockerized app, I need the server to have Docker running.
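The workshop doesn't show the file's contents on screen, but a typical Dockerfile for a small React-style counter app looks roughly like this — a sketch in which the base images, the build command, and the port are assumptions, not the repo's actual file:

```dockerfile
# Build stage: install dependencies and produce the static bundle
FROM node:14-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: serve the built files with nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```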

12. Deploying Dockerized App and Auditing npm Packages

Short description:

To deploy a dockerized app, the server needs to have Docker running. The app connects to the server, pulls the image from the repository, and runs it. The pipeline can be triggered by creating a new release and tag. We have discussed various aspects of CI/CD, including protecting the main branch, running pipelines on pull requests, and automating app deployment with Docker. It is important to ensure project security by auditing npm packages. The npm audit command can identify vulnerabilities and provide remediation options. A pipeline can be set up to run npm audit and fail if vulnerabilities are found.

So the action connects to the server, pulls the image from the image repository, and runs it. As I mentioned, you can use environment variables in your projects to store different data; in this example, the image repository environment variable defines the image repository from which Buddy pulls the image, and it pulls the image with the execution tag — that's what I explained a minute ago. It stops the app if it's already running on the server and then runs it using the freshly pulled image.
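Spelled out as the shell commands the action runs over SSH, the deploy step is roughly the sketch below. The port mapping and the container name "app" are illustrative; the repository variable and the execution tag (written here as $IMAGE_REPO and $BUDDY_EXECUTION_TAG) are the ones described above:

```sh
# Pull the image that this execution's tag points at
docker pull "$IMAGE_REPO:$BUDDY_EXECUTION_TAG"

# Stop and remove the previous container, if one exists
docker stop app
docker rm app

# Run the freshly pulled image under the same name
docker run -d --name app -p 80:80 "$IMAGE_REPO:$BUDDY_EXECUTION_TAG"
```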

Now, what I've done here additionally, to make sure this imported pipeline works in every case, is change the exit-code handling to execute all commands regardless of the result of the previous command. Because if your server is vanilla and you've never run any Docker containers on it, the docker stop app command fails — there is no container called app on the server — and without that setting, the subsequent commands wouldn't be executed. So that's the pipeline, and it's very easy to trigger. I just need to go over to my repository, and on the right-hand side of the view you should see the releases section. When you click 'create a new release', you create both a new release and a new tag — at a basic level, the two are interconnected. So I'm going to create a new release tagged v665 — not entirely evil — and call it my first release. And now, when I publish this release, we should see our 'release on tag push' pipeline run. What you can see here is not the tag I created — 'V1' is the name of the last commit pushed to the master branch — but the tag is present here, and it is v665. As with all these actions, the first execution might take a while; in my experience it should be under two minutes. We can check on this pipeline in a moment. Now let me see what else we have in store.
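By the way, the GitHub release UI is just one way to get there — pushing a tag from the command line triggers the same pipeline, since the tag is what the trigger watches for:

```sh
git tag v665
git push origin v665   # the pushed tag is what fires "release on tag push"
```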

So, a little summary of what we've done so far. First of all, we glossed over the basics of CI/CD. Then we created three separate pipelines that mimicked the division used in GitFlow. We discussed protecting your main branch by requiring specific pipelines to run successfully before you allow code to be merged into it. We also made sure that your pipelines run when pull requests are created against your repository. And now we've discovered how you can Dockerize your app and automate deploying it to a server when you create a new tag. As you can see, our pipeline has run successfully — the last action passed — and at this address we should have our app running. Okay, 'hello JS Nation'. And as you can see, I'm overusing 'as you can see' a bit, right? We now have the same app deployed in three different environments: the first is the server from the production deployment pipeline, this one is our Dockerized app, and this one is the app running on the staging server created with a Buddy sandbox. Now, what else can you do to improve your code? The answer is smart automation. But before we dive into the obvious answer to the question 'what should we automate?' — we can all hear the voices in our heads chanting tests, tests, automate the tests — the first thing we should really consider is making sure our project is secure. The great majority of us enjoy the benefits of npm packages, and truth be told, no matter how secure our own code is, there can still be vulnerabilities in the npm packages — in the dependencies of our project — that make our application insecure, maybe not directly, but indirectly. More often than not, we devs aren't even aware that such security holes exist in our project. And why not? Because we're not auditing often enough, or not auditing at all — and we really should, because there are very cool, convenient tools to do it. One tool that is literally at our fingertips, which many of us don't even realize is there, is npm audit: a very cool, very simple command. I'm going to show you how you can use it to fix some of the vulnerabilities (a hard word to say if you're Polish) in your app's dependencies — how to fix them quickly and even automate patching these insecure packages in your repository. Again, to save some time, I'm going to import a new pipeline from a YAML file. I have such a pipeline right here, and as you can see, it's called 'npm audit'. Let me just change one thing — I've set the trigger mode to manual, because I want to run it myself. Now, what is npm audit? Let me open the documentation for this little command, because it has a very good description; I think it's clear as day if I just read it out loud: the audit command submits a description of the dependencies configured in your project to your default registry and asks for a report of known vulnerabilities. So there's the list of your dependencies, there's another list of dependencies with known vulnerabilities, and the command checks the former against the latter. If any vulnerabilities are found, then the impact and appropriate remediation will be calculated. If the fix argument is provided, then remediations will be applied to the package tree.
So not only does this command give you an idea of how many vulnerabilities there are in your dependencies, it also allows you to fix them. Of course, as with many automated solutions, not every vulnerability can be fixed automatically — but many can, and let me show you how. Here's the pipeline I set up. The first command it runs is npm audit, and you can see I'm running it with the audit-level flag. What does this flag do? It makes the command fail. How and why? Due to its exit-code handling, npm audit can be considered to have run successfully even when vulnerabilities are found, because it did its job: it ran the audit, found this many vulnerabilities, and says 'here you go, do with that whatever you want'. But to increase the security of my project, I want this command to actually throw an error when vulnerabilities are found — and the audit-level flag sets the minimum severity at which the command exits with an error.
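A quick sketch of the difference in plain shell (the exact exit-code behavior varies a bit between npm versions, so treat this as the general idea rather than a guarantee):

```sh
npm audit                     # prints the report; the step may still be green
npm audit --audit-level=low   # exits non-zero on anything "low" or worse
```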

13. Running the Pipeline and Fixing Vulnerabilities

Short description:

I can set the threshold of vulnerabilities that triggers an error with the audit-level flag. There are four available levels: low, moderate, high, and critical. The next actions run on failure if vulnerabilities are found. If vulnerabilities require manual attention, a Slack notification is sent; otherwise fixes are automatically pushed to the branch. It's a good practice to run this pipeline alongside unit tests to ensure there are no security vulnerabilities. Running the pipeline manually is a viable option. The npm audit action failed, but the audit fix is running, and the next action fixes the vulnerabilities. The pipeline is prevented from pushing to the main branch by the branch protection rules, but changes can still be pushed directly to other branches, so the pipeline can be run on the break stuff branch to find vulnerabilities and push changes directly. You're not able to push to master after employing branch protection rules. The pipeline run is successful, and the changes appear in the pull request.

And with the audit-level flag I can set the severity threshold at which found vulnerabilities trigger an error. If we jump back to the documentation — here they are. There are four available levels: low, moderate, high, and critical. So you can make this command as sensitive or as insensitive as you want, and I've decided to make it fail on even the slightest vulnerabilities. Then, since I want this action to fail when vulnerabilities are found in my project, the next actions run on failure. The npm audit fix action runs only if the initial action fails — and I've also used another Buddy feature: trigger conditions, which you can find here. I've chosen the option to run this action only if an environment variable has a specific value, and the variable I defined is the Buddy failed-action-logs variable, in which the logs of the last failed action are saved by default. The contents of this variable must not contain the phrase 'vulnerabilities require manual review'. What does this mean in practice? If npm audit runs and doesn't find any vulnerabilities, fine — the pipeline runs successfully and nothing needs fixing. If it fails and reports that there are this many vulnerabilities and they can be fixed automatically, then npm audit fix runs. But if the phrase I just read out loud appears in the logs, it means those vulnerabilities require my manual attention — it's not enough to just run npm audit fix — so in that case a Slack notification is sent to my Slack channel instead. Again, I've defined that with a trigger condition, only this time the failed-action logs have to contain the phrase. So if there are vulnerabilities that require manual review, instead of doing anything, the pipeline just lets me know: 'hey man, it's above my pay grade, I'm not touching that — you handle it.' But if, one, there are vulnerabilities (so the first action fails) and, two, all of them can be fixed automatically, then the fix runs and the fixes it creates are pushed automatically to the branch on which the pipeline was executed. That push action has the same trigger condition as npm audit fix: if the failed-action-logs variable does not contain the manual-review phrase, it's okay for Buddy to push the fixes — not to the main branch, but to the branch on which the pipeline was triggered. That's quite a lot, right? It might look complicated, or I might have made it sound complicated, but believe me, it's very logical, and trigger conditions are a very powerful tool in Buddy that you can use to tailor your pipelines to do almost anything — probably the only thing they can't do is make coffee, and I bet you could manage that with pipelines as well. Anyway: a good practice would be to have this pipeline run alongside the pipeline that runs unit tests, so that with every push you're ensuring there are no security vulnerabilities on the branch you're working on. The whole chain, roughly, is sketched below. Now, let me show you how this works.
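Here is the chain's logic as a plain-shell sketch — the grepped phrase is the one from the logs above, and the echo stands in for the Slack notification action:

```sh
set -o pipefail   # make the pipe's exit code reflect npm audit, not tee

if ! npm audit --audit-level=low 2>&1 | tee audit.log; then
  if grep -q "vulnerabilities require manual review" audit.log; then
    # Above the pipeline's pay grade: notify a human instead of auto-fixing
    echo "Manual review required -- sending Slack notification"
  else
    # Everything can be patched automatically: fix and push to this branch
    npm audit fix
    git commit -am "npm audit fix"
    git push
  fi
fi
```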
In practice, I want this pipeline to run once, when I tell it to, so let's just click run pipeline. This is also a perfectly viable way to trigger pipelines in Buddy: you don't have to resort to automated triggers — you can run pipelines manually or on a recurring schedule. So I run the pipeline on the repository's main branch, and I suspect — I'm almost sure — that there are vulnerabilities in the dependencies of this little counter app of ours. Let's wait patiently and see the result of the npm audit. Yep, as predicted, the npm audit action failed, and the audit fix is running. Looking at the logs, I can see that yes, there are 109 vulnerabilities, but the logs don't say anything about having to fix them manually. Since that's the case, the next action — the one that fixes these vulnerabilities — runs, so it's fixing away. From this pipeline we'll be moving on to some more testing... oh right, see, I actually forgot about something. What I did is prevent this pipeline from completing successfully, because I protected my main branch with a branch protection rule. As you can see here in the logs: 'protected branch hook declined' — a required status check is expected. So Buddy is not able to push to my main branch automatically, because all changes to the main branch have to go through the pipeline we defined a while back. The securing mechanism works very well indeed. But to show you how this pipeline runs anyway: we're not protecting the other branches with any rules, so we can still push changes directly to them. So instead of running this pipeline on our master branch, I'm going to run it on the break stuff branch — there it will again find the vulnerabilities, but it will be able to push changes directly to that branch. So even though we stumbled a little on this step, we learned something, or at least I was able to show you something cool. The takeaway: you're not able to push to master after you employ branch protection rules. Let's not wait for this pipeline to complete... oh, never mind — it took 43 seconds. It takes me much more time to collect my thoughts than for this pipeline to run. Again, machines win. So now, if we go to our pull request, you can see on the commit list that we have a commit from Buddy, and this commit contains changes to the package-lock file and to the yarn.lock.

14. Automating Vulnerability Fixes and Browser Tests

Short description:

Automation saves time. We've automated many things in this workshop, including security audits and browser tests. Selenium, Puppeteer, and Cypress are popular tools for automated browser tests. Cypress, designed for testing, supports multiple browsers and mobile websites. To create a pipeline for browser tests, optimization techniques can be used to run them as fast as possible. The Cypress tests pipeline uses parallelization to run multiple tests simultaneously. The tests visit the app, find buttons, perform actions, and check expected results. Running steps in parallel saves time.

So here we go: this is how you can automatically fix your vulnerabilities with npm audit. Right — we still have one hour left until the end of this workshop, and we're looking at smart automation. We've used it to increase the security of your project; now (sorry for hitting the microphone, I hope your ears are okay) let's talk about something a little less exotic: time. You can probably guess that automation saves time. Even though we've been at this for two hours already, with me rambling a lot and all that theory, we've automated so many things — just look at all the pipelines we've set up and run today. Let's say we spent an hour coming up with all of that. One hour spent — and how much time could it save us in the long run? A quick lesson in hypothetical math: say this automation saves 10 minutes a day per developer. Over roughly 250 working days, that's about 2,500 minutes — a little over five working days a year. Five days during which you are free to do anything else, either work-related or just browsing YouTube videos, if you play your automation game just right. Now, aside from automating these security audits, there are other common things you could automate. You're probably all familiar with browser tests, right? Browser tests are often used to create integration testing scenarios: with these tools you create automated scenarios in which browsers are opened and the tools interact with the websites or apps they open, testing their behavior. And here are three popular and important tools for automated browser tests — tests that are integration tests, because they exercise the entire application, interacting with it directly as if they were a user. The first logo, to the left, is Selenium's; to the right we have Puppeteer; and then there's Cypress. Selenium is the most popular one. It supports multiple programming languages and multiple browsers. But it wasn't created as a dedicated testing tool — it was created as a browser automation tool, not a testing tool per se, and it shows in the way it's built. Even so, it still works great for testing, hence its overwhelming popularity. Then we have Puppeteer, a Google-created tool, which comes with the caveat that it works with JavaScript only and with Chrome only. Again, it wasn't really created as a dedicated testing tool, but it works great for testing — as long as you're testing JavaScript on Chrome. It works for many people, but it's probably not the perfect solution for cross-browser testing or for covering the most use cases. And then we have a contender in this area, a tool that's growing in popularity and gaining traction: Cypress, created by a company of the same name. It was designed with the purpose of testing in mind.
And therefore it has a couple of cool built-in tools, such as command logging and test debugging. It's relatively simple to use, it allows for cross-browser testing because it supports multiple browsers, and it supports mobile websites. So it's maybe not the most popular tool on the market, but from our point of view here at Buddy, it's one of the more important ones — feel free to try it out. Maybe it will suit you better than, for example, Selenium, right?

Now let's have a look at how we could create a pipeline that runs browser tests on our application. These browser tests are integration tests, and what we should know about integration tests is that they tend to take some time to run. You can imagine that running a headless browser and going through a defined scenario — clicking buttons, filling in forms, logging the results, et cetera — takes a while. So it's not uncommon to see such tests relegated to some pipeline that doesn't run very often, or that runs only on a staging environment, simply to save time. Time is money, right? But we can't recommend that. Integration tests like the browser tests I'm about to show you can detect a lot of things wrong with your application, so instead of relegating them to a pipeline that rarely runs, or that runs at the very last mile before deployment, we should create a smart pipeline that uses some optimization techniques to run as fast as it possibly can. Again, I'm going to use the import feature to pull a pipeline I created before this workshop into Buddy and show you what I mean. Okay — we don't need two of these, right? So this is how you delete pipelines in Buddy: you must type in the full name of the pipeline to confirm that you understand you're getting rid of the pipeline, all its executions, the logs, et cetera. So: we have the Cypress tests pipeline. Let me show you what it does. This pipeline uses parallelization to run two Cypress tests simultaneously. Instead of waiting for one test scenario to finish before running the next, we can have multiple testing scenarios — multiple steps — run in parallel, and as you can see from the names, we have test-add and test-subtract running in parallel. Let's open the app and let me show you what these tests are supposed to do. They visit the app running at a set address and find a button. test-add finds the add button, clicks it, and then checks the value the counter shows; the expected result is one — when you open the counter and click once, you should get one. The subtract test is similar: Cypress visits the address, clicks the subtract button, and expects minus one as the result; the add scenario is sketched in code below. So instead of running these two steps one after another, I'm saving time by running them in parallel. Let's see that in action.
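In code, the add scenario boils down to a few lines of Cypress. This is a sketch: the URL, the selectors, and the file path are assumptions, not the workshop repo's actual markup, and the subtract spec mirrors it with an expected value of -1:

```js
// cypress/integration/add.spec.js -- minimal sketch of the "test-add" step
describe('counter: add', () => {
  it('shows 1 after a single click on the add button', () => {
    cy.visit('https://our-staging-server.example'); // assumed app address
    cy.contains('button', 'Add').click();           // assumed button label
    cy.get('[data-testid="counter-value"]')         // assumed selector
      .should('have.text', '1');
  });
});
```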

15. Parallel Execution and Test Feedback

Short description:

Running actions in parallel can save time in your pipeline. When steps are run in parallel, they can provide better feedback and allow for faster test execution. This is especially beneficial for integration tests with Cypress, as running them in parallel can make the runtime easier to manage and provide more comprehensive results.

Again, you can run actions in parallel in many other cases. Remember to keep in mind the three pillars, and to inspect and adapt: when you see that your pipeline has grown chunky and takes quite some time to run, look at it and think about which steps don't need to run in the regular order — which steps could run in parallel to save you some time. This is beneficial with these Cypress integration tests: run in parallel, their runtime is much easier to digest than simply waiting for one test to finish before the next one starts. And it gives you another benefit when steps fail. When steps run in the basic order, top to bottom, and the first test — the one testing adding — fails, the other test won't even run, right? But if we run these two tests in parallel, they both start at the same time and they will both show you a result. So it could be that one of your test scenarios runs successfully — every point you wanted to cover there works — while the other fails for some reason. You get better feedback about what to look at and fix before you move on.
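Outside Buddy's UI you can approximate the same pattern in plain shell: start both specs at once, then wait on each separately, so a failure in one doesn't hide the other's result. The spec paths are the assumed ones from the sketch above, and note that truly parallel Cypress runs on a single machine may need separate configs:

```sh
npx cypress run --spec cypress/integration/add.spec.js      & add_pid=$!
npx cypress run --spec cypress/integration/subtract.spec.js & sub_pid=$!

fail=0
wait "$add_pid" || fail=1   # collect each result independently
wait "$sub_pid" || fail=1
exit "$fail"
```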

16. Implementing Visual Tests in Buddy

Short description:

Visual testing is a useful way to detect changes in your app's appearance. By comparing the production version with the staging version, you can ensure that the app has not changed unexpectedly. To implement visual tests in Buddy, you need two servers: one for production and one for staging. By adding a visual test action to the pipeline, you can compare the two versions of the app. Approving the results confirms that the changes are intentional. Visual tests are important for ensuring that your app not only functions properly but also looks great. Cypress, Selenium, and Puppeteer are popular tools for automated browser tests.

All right — we have audited our dependencies and their vulnerabilities, and we've taken a quick look at browser integration tests, in our case with Cypress. Now, one more kind of test — especially useful in modern-day web development — that we can implement and run entirely on Buddy infrastructure, which is very cool, is visual testing. In its simplest form, visual testing looks at your app, a website or a specific area of a website, detects changes, and lets you know: hey, something changed in that area, maybe you should have a look at it. Did you really want it to look this way? Please check and confirm.

I want to implement a test scenario where the visual tests compare the existing version of the app on production — which we know looks great and runs great — to a version of the app I have just deployed to the staging environment. This gives me a good idea of whether the app has changed in some funky way before I decide to move along with the deployment process. Now, I said we can achieve this entirely within Buddy infrastructure — how? First of all, we need two servers. The first is the server that already has the app running on production: our production server. The other is the server that's going to run the new version of the app: our staging server, the one running on one of our sandboxes. And to bring these two together and compare the way they look, all we need to do is add a single action — and I'm going to show you just that right now.

The best starting point for this showcase is copying the staging pipeline, since it already has 90% of the steps we need to make this work. So let me clone that pipeline and have a sip of water. Okay, here's the pipeline, copied. Let me change its name so we don't get confused looking at all these pipelines — we've created quite a few in two hours and twenty minutes, a lot of pipeline work here. Okay: visual tests. All I need to do to get this pipeline to perform these cool visual tests is add a dedicated visual tests action — let me show you how it looks and how it works. The visual tests action takes two addresses. The first, to the left, is the URL — the address we want to test. The second is the baseline address, which, by the way, is optional: as I mentioned, you can either just check whether areas you defined have changed, or you can compare two addresses — you need the baseline address when you start comparing two versions of the app. And that's exactly what we're going to do: deploy our new changes to the staging server and then compare how the app looks there to how it looks on the production server. As you can see, I've chosen the production server as our comparison baseline and the staging server as the one to be compared against it. To show you that this actually works, I'm first going to remove that branch protection rule so I can quickly push straight to the main branch. I know this is not very secure, but for the purposes of this workshop I hope you can forgive me for breaking my own rules. So, deleted. Now that there are no protection rules, I can quickly push changes to the master branch. Let's change something that alters the app visually — I think the simplest and quickest thing is, again, changing the header text, from 'hello JS Nation' to 'visual tests rule', right? And — oh, what branch am I on? Let's go back to the main branch and introduce the change: 'visual tests rule'. Okay, we can see there are unstaged changes, so let me put the pipelines list in the background, stage all the changes, create a commit — let's call it v2 — and push the changes. Okay. I can run this pipeline manually; we can stop this other execution, we don't need it right now. Since we've disabled the branch protection rule and can push to master — and master is naturally a branch, too — our pipeline that runs unit tests on branches is also triggered by pushing to master. But for now, let's focus on the pipeline that runs our visual tests and have a look at its execution. With 35 minutes left, we are nearing the end of this workshop. I hope you're having fun with this counter app — you can break it in oh-so-many ways and deploy it to oh-so-many environments, and I hope you'll explore these possibilities with Buddy. We have two more things to discuss to make our code one bit better, and then we can call it a day. Okay, back to our visual tests.
Now, the visual tests action by default pauses in a waiting state and asks you to approve the results. And when you click approve, you're not just saying 'yes, I approve' — you're confirming that you have actually looked at the app. I'm not sure you'll be able to see it with me sharing the screen; let's try to zoom. Okay — as you can see, the visual tests mark the area where changes have been detected. That's the behavior of the difference tab, but if we use the swipe option, we can swipe between the two versions of the app: the first version says 'hello JS Nation' — that's our baseline — and the other is the version we've just deployed to our staging server. And we know that 'visual tests rule' is as good a heading for this app as any, so I'm going to approve. As a result, our pipeline ran the visual tests successfully. There you go — yet another way to test your application. Looks might not matter much, or at all, when it comes to people, but when it comes to applications, you'd better get them sorted out: there are so many beautiful, functional apps out there that yours should not only work well but look great too. These visual tests might come in handy, and it's great that you can run them entirely in Buddy. Right? So: auditing your packages for vulnerabilities; automated browser tests with Cypress, Selenium, or Puppeteer — I chose Cypress today as it's testing-oriented, but you can pick whatever you want; Selenium is hugely popular, and I can see why. Whichever tool you use, just run these automated tests. They are a great source of information about your app.

17. Smoke Tests and Recurrent Pipeline

Short description:

Smoke tests are rudimentary tests that poke your app to see if its basic functionality is working. They are often integration tests, such as automated browser tests. When running smoke tests, be mindful not to overload your app. A simple smoke test can be achieved by using a ping action to check if the app is online and responsive. You can set up the pipeline to run recurrently at specific time intervals. Running the test recurrently is a great idea, especially if you deploy your application frequently. You can trigger one pipeline to run another pipeline once a set of actions in the first pipeline finishes running. This allows you to deploy the application and then run the SmokeTests pipeline to check if the address is reachable.

And now the last two testing categories that I'm going to discuss today, before we run out of time. First: smoke tests. Smoke tests are the kind of tests that bring the phrase 'plug and pray' to mind, and the origin of the name kind of points there. The name comes from a very rudimentary method of testing — not software, but hardware. Imagine you're an inventor and you've come up with a great invention that, let's say, makes coffee twice as fast as anything else on the market. You've assembled all the components of the Coffee Maker 3000, and you finally plug it in. Is it good when you see smoke coming out of the thing? No, it's not. So it's a very rudimentary test: before you start pushing buttons and making coffee lightning fast, you need to ensure there isn't smoke coming out of it. It's the same with software. Smoke tests are very rudimentary tests that basically poke your app and see whether its basic functionality works. And just as you have to have your coffee maker assembled to plug it in, you have to have your app running already to run smoke tests on it. Very often smoke tests are integration tests — the automated browser tests, for example, make for good smoke tests. One thing to keep in mind: when you run smoke tests, make sure they're not too hard on your app. If I were to smoke test Buddy, I wouldn't want my automated test to open the pipelines tab and add 5,000 new pipelines, because that could overload the database and create unnecessary traffic — it might turn out that my smoke tests generate so much load that there aren't enough resources left for the actual users of the app. So keep that in mind: smoke tests can be simple but sweet, and let me show you how simple a smoke test can be. Look, there are so many pipelines already — I'm going to add yet another one, because why not. Let's call it smoke tests, triggered manually. For a smoke test, it's enough to check that the app is online and responds, right? You can achieve that very simply with a ping action that pings a host name — let's use the host name we have here. And there we have a very simple pipeline; I think it's refreshing to see one after all those four-, five-, seven-step pipelines with actions running in parallel, environment variables, conditions, et cetera. This pipeline: just one step. And I think it would be a good idea to run this pipeline recurrently, meaning it runs at specific time intervals. In Buddy you have huge flexibility in defining the intervals at which the pipeline runs: every five minutes, every seven days, whatever. Daily pings sound okay to me — maybe even more frequent. Let's see if this actually works. I'm going to leave the trigger mode on manual, but remember that you can have such a pipeline run recurrently.
I'm going to save it and run it. And you can see that it took no time at all — 17.2 milliseconds — and the address with our app responded, so we can assume it's there. Having such a test run recurrently is a great idea. But what if you set it to run, say, every 12 hours, and you deploy your application more often? Well, you can add an action that triggers one pipeline once a given set of actions in another pipeline finishes running. For example, take the first pipeline we created, the one that deploys the application to the very address we just tested: at the very end of it, I could add one more action — what was its name? Trigger Another Pipeline... no, Run Next Pipeline, sorry. Here you define which pipeline should be triggered at the end of this action chain — you can even run a pipeline from another project. I'd want the smoke tests pipeline to be triggered, so after all the deployment actions run, the one final thing this pipeline does is go over to the smoke tests pipeline, run it, and ping to check whether the address is reachable.
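If you'd rather express the same smoke test as a command than as Buddy's ping action, a single curl does the job — a sketch in which $APP_URL stands in for our demo server's address:

```sh
# Fail unless the app answers within 5 seconds with a non-error status
curl --fail --silent --show-error --max-time 5 "$APP_URL" > /dev/null
```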

18. SEO Metrics and Workshop Summary

Short description:

We can create a pipeline to check SEO metrics, such as the Google Lighthouse report, dead links, and SSL certificate validity. The pipeline can run recurrently or be triggered by another pipeline. The Lighthouse report provides insights on how Google sees our application. The link validator action checks for dead links, and the SSL certificate action verifies its validity. These actions can run in parallel to optimize the pipeline runtime. After running the pipeline, we can review the results and make necessary improvements. Throughout the workshop, we covered various topics, including creating delivery pipelines, implementing GitFlow, protecting branches, conducting tests, auditing npm packages, and exploring Docker. We also discussed SEO smoke tests. Thank you all for attending the workshop and creating your Buddy accounts. Have fun building your 2.0 pipelines!

Now, finally, one more thing you could test — and I'm going to just gloss over it, as I bet you're tired and so many things have happened during this workshop. Look at the number of pipelines we've created! I hope you're having fun. If you've forked the repo and pinged our guys on the Discord chat, you have sandboxes available and can run pipelines in parallel with sandboxes — you can do many cool things. I hope you've been trying to follow along, and if you got lost at some point, I hope you'll go back to the recording, debug, and find out what went wrong. So, to finish off, one last thing, also important: SEO, Search Engine Optimization. All those metrics and stats — the way Google sees our application — are very important and not always very clear, but there are a few obvious things we can do to make our application look better to Google. And again, these are actions built into Buddy. We can simply create another pipeline that runs recurrently, or — similarly to the smoke tests — one that is triggered by another pipeline. Let me just add these few ideas to the smoke tests pipeline and give this ping action some friends, right? First of all, when it comes to SEO, an important thing to check is the Google Lighthouse report, and we have a dedicated action built right into Buddy that gets a Lighthouse report for your application. All you need to do is provide the URL at which your application lives. Then you choose the criteria: whether your app should be treated as mobile, as desktop, or both — I'm getting my tongue twisted a bit; since this is a desktop counter, let me just choose desktop. And here you choose the thresholds below which the report is considered unfavorable and the action should fail. I've tested this counter app before and it easily clears these thresholds, so I know this action is going to run successfully — let me add it, and we'll see the report later. What else can you do for SEO? Well, it would be very embarrassing to have any 404s on your website, right? You've been working on it for so long, and then somebody finally finds it on the web and it turns out there are 404s on the first level — yikes. So we have a link validator action: again, just provide the address where the app lives. You can specify the depth to which the action crawls to check for dead links, and there are a few other configuration options you can explore on your own — there isn't really that much more to explore in terms of dead links, so let's leave the action with its default config. And you can also check your SSL certificate: whether it's there, whether it's expired — a good check to run. Sorry if I saved that too quickly, but you can check whether the certificate is valid and whether it's due to expire in the next seven days, 14 days, or one month. It gives you a heads-up when your certificate is about to expire, so you can take care of it before it actually does and you don't get penalized in the way your site or app is positioned. So I'm going to save that action. And again, these actions run very quickly.
The Lighthouse report actually takes a few seconds to run from start to finish, but the best setup here, I think, when it comes to optimizing the runtime of this pipeline, is having the first action run on its own, and then the other three run in parallel. So if the app is available at the address we specified, these three actions can run in parallel, and our pipeline will handle Lighthouse, link validation, and SSL certificate verification at the same time.
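Conceptually the layout is "ping first, then fan out." Below is a rough Node sketch of that shape, just to illustrate the idea rather than what Buddy runs internally; the host name is a placeholder:

```js
// smoke-parallel.js: sketch of the "ping first, then fan out" pipeline layout.
const https = require('https');
const tls = require('tls');

const HOST = process.env.APP_HOST || 'example.com'; // hypothetical host

// 1) Availability ping: everything else depends on this, so run it first.
function ping(host) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    https.get(`https://${host}/`, (res) => {
      res.resume(); // drain the body
      res.statusCode < 400
        ? resolve(Date.now() - start)
        : reject(new Error(`HTTP ${res.statusCode}`));
    }).on('error', reject);
  });
}

// 2) Certificate check: how many days until the cert expires?
function certificateDaysLeft(host) {
  return new Promise((resolve, reject) => {
    const socket = tls.connect(443, host, { servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      resolve((new Date(cert.valid_to) - Date.now()) / 86400000);
    });
    socket.on('error', reject);
  });
}

(async () => {
  const ms = await ping(HOST); // sequential gate
  console.log(`App responded in ${ms} ms`);

  // Fan out: these checks are independent, so run them concurrently.
  const [daysLeft] = await Promise.all([
    certificateDaysLeft(HOST),
    // ...the Lighthouse and dead-link checks would slot in here as well
  ]);
  if (daysLeft < 14) throw new Error(`Certificate expires in ${daysLeft.toFixed(0)} days`);
  console.log(`Certificate valid for another ${daysLeft.toFixed(0)} days`);
})().catch((err) => { console.error(err.message); process.exit(1); });
```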

After a two-hour and 41-, 42-minute-long adventure, let's run this pipeline and see how our application does in terms of SEO friendliness. You can see that, as expected, there's a response from the address, this time in 17.7 milliseconds. So, if I remember correctly, that's 0.5 milliseconds slower than before. Unacceptable! And now the Lighthouse report is being run and link validation is being done. Again, remember that the first run of every pipeline always takes a little bit longer than subsequent runs, and that's fine.

So go create, implement, build, and tighten your setup. And remember about making it transparent, remember about the protection rules and making sure that everybody is included in those rules. I've seen so many cases where the admin had privileges that let them push straight to the master branch like there's no tomorrow, and they would often break something and usually not suspect their own changes to be the culprit of everything that went wrong.

You can see that the pipeline failed because the certificate verification action failed, since, obviously, our app does not have a certificate. And I think that this is a very fitting last pipeline run for this workshop.

To sum it up (again, I started the summary a bit earlier): we went through some theory, and I hope it wasn't too much for you. Then we created our first delivery pipeline. We extended it to run on different branches automatically. We mimicked the three-tiered division that's customary in GitFlow. We looked at protecting our main branch with GitHub protection rules, very useful, and at protecting our repository in case somebody wanted to fork it and create pull requests against it. We looked at different testing scenarios, so we had browser tests and visual tests, and remember that you could run visual tests 100% within your internal infrastructure if you use sandboxes. We also took care of some security concerns with npm audit, a very cool command. I recommend you start using it today and check out the documentation, because there are even more cool things you can do with it (a quick sketch of wiring it into a pipeline follows at the end of this section). We looked, very briefly, at Docker. I'm really sorry if I haven't explained Docker enough, but there are so many great people on YouTube who explain Docker much better than I do, so look them up if you still don't know what Docker is about. And then we looked at SEO smoke tests.

And now, after two hours and 46 minutes, I think it's time to say goodbye. Yep, I see that some of you guys are leaving for another appointment, so I'm going to end the workshop right here. We are still going to be available for a couple of minutes on the chat. Thank you all for your attention. If you're watching this after the workshop has happened, remember to create your Buddy account and to fork the repo; it's going to be there for you and it's going to stay there, so don't worry, it's not going anywhere. Have fun creating your 2.0 pipelines for your apps. Have a good evening or good afternoon, I don't know how many time zones are represented here, but I'm out, Buddy's out.
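As promised in the summary, here's a small sketch of gating a pipeline step on npm audit. The JSON field names follow the `npm audit --json` output shape of npm 7+, so treat them as an assumption to verify against your npm version:

```js
// audit-gate.js: sketch of failing a pipeline only on high/critical findings.
const { execSync } = require('child_process');

let raw;
try {
  raw = execSync('npm audit --json', { maxBuffer: 50 * 1024 * 1024 }).toString();
} catch (err) {
  // npm audit exits non-zero when it finds anything; the JSON is still on stdout.
  raw = err.stdout.toString();
}

// npm 7+ puts severity counts under metadata.vulnerabilities (assumed shape).
const { metadata } = JSON.parse(raw);
const { high = 0, critical = 0 } = metadata.vulnerabilities;

console.log(`high: ${high}, critical: ${critical}`);
if (high + critical > 0) {
  console.error('Blocking deployment: fix the findings or run `npm audit fix` first.');
  process.exit(1);
}
```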

Scaling large codebases, especially monorepos, can be a nightmare on Continuous Integration (CI) systems. The current landscape of CI tools leans towards being machine-oriented, low-level, and demanding in terms of maintenance. What's worse, they're often disassociated from the developer's actual needs and workflow.Why is CI a stumbling block? Because current CI systems are jacks-of-all-trades, with no specific understanding of your codebase. They can't take advantage of the context they operate in to offer optimizations.In this talk, we'll explore the future of CI, designed specifically for large codebases and monorepos. Imagine a CI system that understands the structure of your workspace, dynamically parallelizes tasks across machines using historical data, and does all of this with a minimal, high-level configuration. Let's rethink CI, making it smarter, more efficient, and aligned with developer needs.