Deploy a Web Component App and Set Up a Continuous Integration Workflow


    Join us for a workshop in which you’ll deploy a simple Node.js app built with Web Components and set up a Continuous Integration (CI) workflow. You’ll learn about the power of the Lightning Web Runtime (LWR) and GitHub Actions.

    111 min
    12 Apr, 2022

    AI Generated Video Summary

    We'll be using Node version 14 or 16 and GitHub Actions for continuous integration. We'll create a Lightning Web Runtime application and assemble web components into a single-page app. We'll add code quality tools and explore configuration files. We'll set up testing, create a Git repository, and configure CI workflows using GitHub Actions. Finally, we'll explore bonus tasks such as adding badges to the readme and creating pre-commit hooks with Husky.

    1. Introduction to Workshop

    Short description:

We'll get started with the workshop. We'll be using Node version 14 or 16 and GitHub Actions for continuous integration. We'll create a Lightning Web Runtime application and assemble web components into a single-page app. Then, we'll set up code-quality tools and put the project in a new GitHub repository. In the third part, we'll configure a workflow using GitHub Actions and test the continuous integration workflow. Finally, we have bonus tasks if we make good progress. Let's get started with the first part of the workshop, where we'll create a Lightning Web Runtime application using the npm init lwr scaffolding tool.

We'll get started. One of the key things you want to check when looking at the prerequisites is making sure you have the right Node version. We're looking for Node version 14 or Node version 16. You want to run node -v in your terminal to make sure that you are using the correct Node version. Some people may have multiple Node versions on their systems, so let's make sure you don't run into any surprises by not using the right version.

All right. Well, let's get started. Hello, everyone. Thanks for joining. My name is Philippe Ozil. I'm a DevOps Advocate at Salesforce. In today's workshop, we're going to take a look at how you can deploy a Web Components app and how you can set up a continuous integration workflow using GitHub Actions. All of the instructions, again, are in this GitHub repository; you can find the link in the chat. And we're going to be together for the next two hours. I'll be giving you the instructions and you can try to replicate them on your computer. One thing I want to stress is that I'll be trying to look at the chat, so don't hesitate to ask questions as we go forward. This is an interactive format, and I'll try my best to answer while moving forward. We will also try to take a break in the middle so that people who are late can catch up.

Before we start the workshop, there are a number of things that we need pre-installed, so I hope that you got the information before joining. We will be using Node, one of the recent versions that are on long-term support: version 14 or 16. You'll need a GitHub account because we'll be using GitHub Actions, and you'll also need the Git command line so that we can run a few commands to manage our repository. During the course of the workshop, we're going to be following a couple of main tasks, and all of these tasks are described in this GitHub repository. We'll start by creating a project using the Lightning Web Runtime. This is a technology built by Salesforce which allows you to create a web application that uses web components. And we'll be using a framework called Lightning Web Components to assemble those web components into a single-page app. I'll explain a bit how this works as we move forward. Once we have a very basic Hello World sample app, we'll set up some code-quality tools. This is going to involve Node.js and some common JavaScript tools. We'll also put this project in a new GitHub repository. After that, we'll move to the third part, where we'll set up continuous integration. For this, we'll be using GitHub Actions. We'll configure together a workflow which runs a number of tasks before it passes. And then we'll put our continuous integration workflow to the test by making a contribution to our app, just like if you were an external contributor. So we'll be doing a pull request, making sure CI passes, and we'll be building some additional web components. Finally, if we make good progress, which I hope we will, I have a number of bonus tasks that we can cover during the session. So this is the program for the workshop. Let's get ready and start with the first part.

To get started with Lightning Web Runtime, we're lucky because there is a little scaffolding tool, npm init lwr, which allows us to create a Lightning Web Runtime application with a little interactive script. So I'm going to open a terminal on my machine, and I'm simply going to type the command npm init lwr. This will fetch the initialization package, and at some point it will go into an interactive mode where it will ask me some questions. The first question is: what's the name of the project? Here, you can be creative if you want, but make sure that you update this string if you do. If you follow the rest of the instructions from the workshop, I'm assuming that you keep the default project name. So let's keep it for now: lwr-project. This will be used again in all the URLs and paths moving forward. Then we get a second question: what kind of project do we want? Here, we do not want a static site. We are going to be using a single-page application, because Lightning Web Runtime lets you build two kinds of sites, and the one we want right now is a single-page application, since it will let us build some web components. So we take the second option, single-page app. Next, we have a question about the technology we want to use to build our Lightning Web Components: essentially JavaScript or TypeScript. We want to keep the first option, which is Lightning Web Components, or LWC, with JavaScript. The output looks a bit strange, but that's because I have an update pending for npm; I'm not using the latest npm. Anyway, what matters is that everything passed and my project is here. The next step I need to perform is to go into this project. I'm going to clear my screen so that you can follow. Now that I've created my project, I can go inside my project directory. Don't forget this, because the next commands rely on being run inside the project.
Then you can install the project dependencies using npm install. This will fetch all of the dependencies that the scaffolding tool set up for me.

    2. Exploring Project Structure and Files

    Short description:

    In this part, we will open a code editor and explore the project structure and files. We'll start by examining the package.json file, which contains the dependencies for the project, including the Lightning Web Components (LWC) framework and Lightning Web Runtime (LWR). We'll also discuss the available scripts, focusing on the 'dev' script for running the Node server in watch mode. Additionally, we'll explore the lwr.config.json file, which maps the routes and modules for the project. We'll then delve into the structure of a web component, consisting of JavaScript, HTML, and optional CSS files. Finally, we'll address any concerns or issues raised by participants.

Based on the answers that I provided, it will retrieve the dependencies from the npm registry. This can take a couple of seconds depending on your network speed, and it's assembling the list of dependencies here. The next step is going to be to open a code editor. Here you're very free to use whatever editor you prefer. For my part, I use VS Code, so I'm just going to run code . in the current project, but you can use another editor if you want.

The idea here is that you want to open the project directory. Before I give you the next instructions, I want to give you a tour of the project. I'm going to wait just a few seconds for you to catch up; let me know when you're ready to proceed. We're going to look at the project structure, the different files and folders that were created thanks to the scaffolding tool, and how they are organized. If the install is still running in the background, don't worry, you can follow what's going on here. Let's take a look at what has been generated for us.

So, first of all, we're going to look at package.json. This is a very simple Node project descriptor. You'll see that there are very few dependencies here; there are two of them, actually. The first one, lwc, is the framework that we use to build web components. It's Lightning Web Components, a Salesforce framework that builds on standard web components with just a bit of syntactic sugar to make it easy to use and quick to deploy. The second one is lwr, for Lightning Web Runtime, and this is really the backend side of Lightning Web Components. It's a framework that runs on Node.js and helps us aggregate a number of Lightning Web Component pages. It handles routing, caching, and things like that which sit more on the backend of our server. These are really the two dependencies we have in a default project. Now, there are also a number of scripts available here that we'll be using later on, and I will explain some of them. The one we're going to be relying on for this workshop is dev. This essentially runs the Node server in watch mode. And we have a little shortcut: instead of running Node directly, we are using the lwr script to set up our server so that it runs the right tasks for us in the background. And we're using serve mode, which does live updates. So let's look further down at the different files we have.
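To give you an idea, a freshly scaffolded package.json looks roughly like this. Treat it as a sketch: the exact script commands and version ranges depend on the scaffolding tool's release, so match whatever was generated for you.

```json
{
  "name": "lwr-project",
  "type": "module",
  "scripts": {
    "dev": "lwr serve",
    "start": "lwr serve --mode prod"
  },
  "dependencies": {
    "lwc": "^2.0.0",
    "lwr": "^0.6.0"
  }
}
```

The "type": "module" entry is worth noticing now, because we will remove it later when we add Jest.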

We also have an lwr.config.json file, and this is essentially what maps the different routes and modules for our project. Our project is very simple. If we go into the source folder, we have some static assets, like images. And in modules, we have some web components. The web components are placed under the modules folder and they have a namespace: example is the default namespace we have, and app is the name of our web component. So we have one web component named app. And in the routes, you can see in our configuration that we are serving this web component, example/app, as the root component on the home page of our Lightning Web Runtime server. Alright? Seems pretty simple, right? Now, let's go over the structure of the web component. A web component is simply three different files. Some of them are optional, but it boils down to a JavaScript file, an HTML file, and a CSS file. The CSS is optional; sometimes you don't have it, but generally you want the JavaScript and the HTML file. The JavaScript part is really kept to a minimum here. We're using ECMAScript modules, and one of the key things defined in this module is a JavaScript class that extends LightningElement, which is provided by the framework. LightningElement is simply a wrapper around standard web components, to put it simply. I'm looking at the chat. I see someone has a problem with a missing Python. I don't believe that we're using Python in this set of libraries. Anybody else have issues with Python? I don't. I've never seen that, and I'm not even sure I have Python installed on my end. There is normally no relationship between this project and Python; it's pure JavaScript here. Maybe you'll need to reinstall Node or something, I don't know. Sorry about that. Okay, cool. We got it. Perfect. Back to the description of Lightning Web Components.
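For reference, the generated lwr.config.json maps the example/app component to the root route. A sketch of it looks like this (the exact keys in your generated file may differ slightly):

```json
{
  "lwc": {
    "modules": [{ "dir": "$rootDir/src/modules" }]
  },
  "routes": [
    {
      "id": "example",
      "path": "/",
      "rootComponent": "example/app"
    }
  ]
}
```

The rootComponent value is the namespace/name pair from the modules folder, which is why the folder layout under src/modules matters.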
So essentially, this is the core part of the component, the JavaScript part. And we're using other key elements of the web component standards.

    3. Exploring Web Component Structure and Live Update

    Short description:

    We explored the structure of a Hello World application using web components in the Lightning Web Runtime. We learned about the template HTML5 tag and how it allows us to inject fragments of pages as web components. We also discussed the concept of shadow root, which provides isolation and protection for web component content. Additionally, we saw how to run the developer mode for our application and observed the live update feature in action. Finally, we examined the lwr-cache folder, which contains compiled code and modules for efficient performance.

    For example, here we're using the template HTML5 tag. So this is a standard tag that is part of the HTML5 spec. And it allows us to build fragments of pages that can be injected as web components. And everything that is inside this template will be injected as an HTML component or web component within a page.
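The scaffolded app.html is exactly such a template. A sketch of it (the image path and file name are based on the description above, so your generated file may differ):

```html
<!-- app.html: everything inside <template> becomes the component's rendered content -->
<template>
    <img src="./resources/lwc.png" alt="LWC logo">
    <h1>Hello World!</h1>
</template>
```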

    So very simple, but this is the structure of our Hello World application. Now let's move on to the next step, and this is where I'll need you to type npm run dev in a terminal. So at this point, if your IDE supports it, you can run these tasks directly within your IDE. So for example, here in my VS Code, I can just open a new terminal, and I can run this directly from here, npm run dev. I will close my other terminal. We no longer need this one. And we can have everything here in a single window.

    So at this point, we've run the developer mode for our Lightning Web Runtime application, and we can see that our application is running on port 3000. So all we need to do now is open this URL localhost 3000 to get it running. And I'm going to do that in a new tab. Here you can see as I opened the app, the Node server is actually watching the changes for the resources that are being viewed in the app. And this is the Hello World app that is represented by this template here that we built. So just an image that is in the assets folder and our Hello World H1 element here. Very simple, right?

Now let's take a closer look, because it's interesting: you could build something like this with raw HTML, but I'm going to show you why this is a bit more advanced than just a static HTML site. What I did here that you cannot really see is that I pressed F12 in my browser so that I could open the developer console for Chrome. And if I inspect my elements, for example, if I inspect Hello World, I can now explore the elements of my page.

And I want to bring your attention to two things. Actually, let me increase the font size here. First, this element, example-app, is a custom element, and it is a web component. With custom elements, you can declare your own HTML subcomponents, which are web components. One of the other standards that is part of web components is the shadow root. You can see here the sub-element we have, called shadow-root. This is not a DOM element per se, but it's a wrapper around the content of my web component. And what this brings to my web component is isolation. Essentially, anything I define inside it, under the web component (the main tag, etc.), is isolated from the rest of my page. So, for example, if I had some CSS rules defined at the top of my page, they would not reach inside my web component; they're isolated. Anything that I do in terms of CSS within my web component is strictly bound to the context of the shadow root. It sets aside one element of my DOM tree, making sure it's isolated from the rest of the page. Here you can see we are in open mode. Open mode means that you can still query elements of my web component from within the parent. But there are other shadow root modes; you can work, for example, in closed shadow root mode. When you close the shadow root, the component acts as a black box, meaning that if I have JavaScript code running in my parent component, I'm not able to query elements within the shadow root. So it's a way to protect your content from interference from other JavaScript content. And this is really how you've built your first web component using the Lightning Web Runtime. Pretty easy, right? We'll be adding more to this, but these are really the basics here.
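To make the open vs. closed distinction concrete, here is a minimal vanilla sketch to run in a browser; it is not part of the workshop code, and the element and class names are made up for illustration:

```html
<my-box></my-box>
<script>
  class MyBox extends HTMLElement {
    constructor() {
      super();
      // mode: 'open'   -> the page can reach inside via element.shadowRoot
      // mode: 'closed' -> element.shadowRoot returns null; the component is a black box
      const shadow = this.attachShadow({ mode: 'open' });
      // The <style> below only affects content inside this shadow root,
      // and outer page styles do not reach in.
      shadow.innerHTML = '<style>p { color: red; }</style><p>Isolated content</p>';
    }
  }
  customElements.define('my-box', MyBox);

  // With mode 'open' this finds the <p>; with 'closed', shadowRoot would be null.
  console.log(document.querySelector('my-box').shadowRoot.querySelector('p'));
</script>
```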

So at this point, I hope everyone was able to see the Hello World application and that everyone was able to configure it. What we're going to do now is make sure that the live update works. We're going to open our HTML file, the app.html file that is under modules/example/app, and change something to make sure the app refreshes, because we're supposed to be in watch mode. So let me just close this and put it a bit to the side. I'm just going to replace the keyword world with lwr. If everything goes well, our browser will refresh. So I'm changing the file, and now one thing that I forgot to write in the instructions, but which is actually very important: you need to save the file, because otherwise the change won't be detected. As soon as I hit Cmd-S or Ctrl-S to save the file, you can see that my browser got the change and recompiled our code. You can see here it detected the changes and updated the page. So this is how the watch mode works, and this is how you can do development at runtime. You can also see that in my project I have a folder called lwr-cache, which contains our compiled code and modules, all of the different web components that are inside our project, wrapped in a convenient cache. I won't go into the content of the cache, but just to show you that when you're building this, there is a cache being put in place so that it goes really fast. Obviously the cache is not used that much when we are in watch mode, but as you move into production mode, the cache here is pretty fast and efficient.

I hope that everyone's still following. I'll be looking at the rest of the instructions now. So we have our first web component application. I went through the elements a bit quickly; you can also do this yourself if you want to inspect the components and see, again, the custom elements and the shadow root elements. You can really see the basics of web components there. So this is the first part of the workshop: really, the basics of setting up a project. Let's continue with the second part now, unless you have questions, obviously, which I can take any time.

    4. Adding Code Quality Tools

    Short description:

In this part, we'll add code quality tools to our project. We'll modify the package.json file to remove the 'type: module' entry and add a list of development dependencies for testing, formatting, and linting. These dependencies include Jest, Prettier, and ESLint, along with their respective plugins. We'll also add scripts to run these tools and copy the necessary configuration files. This will ensure that our project has the necessary code quality tools and configurations in place.

For the second part, we'll be looking at how we can add some code quality tools. Right now, we saw that our project has a minimal number of dependencies; it's not doing much. So we can improve it a bit by adding some tooling. What we're going to do now is open our package.json file again, which contains the project dependencies, and modify a few things.

The first thing we want to do here is remove the "type": "module" entry. The reason we're doing this is that we're going to be adding some tests later on in the project, and we're going to be using Jest for this, but Jest doesn't really work well with ECMAScript modules. So we're going to go back to a regular project. I'm just going to remove this "type": "module" line, and we're going to work in a classic Node project here. Then I'm saving this.

The next thing I want to do after this is add a bunch of dependencies. There's a large list of dependencies here, and I will go over them as the script runs. There's a big, big line of dependencies we're going to add. I suggest that you use the little convenience icon for copying the entire line instead of trying to retype everything, because you don't want to miss anything here. So just copy the line with all the dependencies; these are all development dependencies. You can see there are quite a few of them. I'm going to hit enter, and this is going to add all of these dependencies to my project. While this is working, we'll see in a few seconds the list of dependencies being added here, and I'm going to explain a bit what these do, because I think it's important that we understand the list of dependencies. We're essentially adding three key things: Jest for testing, Prettier for formatting, and ESLint for linting.

However, it's not as simple as adding the dependencies directly. We have to add a couple of plugins for each of these dependencies. So let's start with Jest. Here you can see the Jest version that we're adding. This is the base Jest library, which allows you to test vanilla JavaScript. But we are building Lightning Web Components, so we have a few specific syntax rules and a few things we want to mock for Jest usage. So we'll be adding, for example, this plugin here for the Jest preset, which adds a few LWC-specific features to Jest. We'll also be using Prettier. Prettier is a popular formatting tool which essentially makes sure that everybody working on your project has the same formatting rules applied to their code. For Prettier, the vanilla version is actually good enough; we don't need extra configuration dependencies for this one. For ESLint, however, which is linting for JavaScript, we have a bunch of plugins. We have a specific plugin for Lightning Web Components, and we have some other plugins and rules for the underlying technologies, plus some rules for working with imports and for Jest test linting. So that's a lot of extra plugins, and you can install them all with a single command, which I think is quite convenient.
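Your package.json should end up with a devDependencies block along these lines. The package names match the tools just described, but treat the exact list and version ranges as an approximation; the workshop repository's install command is the authoritative source.

```json
"devDependencies": {
  "jest": "^27.0.0",
  "@lwc/jest-preset": "^11.0.0",
  "prettier": "^2.0.0",
  "eslint": "^8.0.0",
  "@salesforce/eslint-config-lwc": "^3.0.0",
  "eslint-plugin-import": "^2.0.0",
  "eslint-plugin-jest": "^25.0.0"
}
```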

Now that I have installed all of these dependencies, I need to make sure that I can run them, because they're in my node_modules folder, but without scripts to run them, I cannot use them. So the next step is to copy this bunch of JSON and add it to the scripts that I already have here. I'm going to add a new line at the end. Don't forget the comma so that your JSON is still well formed, and don't leave a trailing comma at the end. So I've added four lines here. I've used the standard test command, a command built into npm that can be run directly, and this one will call Jest. For format, I will be calling Prettier, and I'm passing the --write flag, which rewrites files that do not match the formatting rules. Then I pass in a list of filters for the different file types that Prettier is going to be looking at. I'm also adding another command here, format:verify, which does pretty much the same thing, except instead of --write, it has a --check flag. When I'm working in my local environment, I'll be using format, but when I'm working in CI mode, I will call verify. Verify performs the same checks that write mode does, except it just throws an error when it sees that a file does not respect the formatting rules. It will not try to overwrite the file itself; it will just fail when there is an error, which is very convenient for running continuous integration. Finally, the last script we've added is lint, and this calls ESLint on all of the JavaScript files within the source directory of our project.
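The four added scripts look something like this; the exact file-type globs are an approximation of the workshop's snippet:

```json
"scripts": {
  "test": "jest",
  "format": "prettier --write \"**/*.{css,html,js,json,md,yaml,yml}\"",
  "format:verify": "prettier --check \"**/*.{css,html,js,json,md,yaml,yml}\"",
  "lint": "eslint src/**/*.js"
}
```

Keeping format and format:verify pointed at the same glob list is the important design point: local runs rewrite files, while CI runs the identical check read-only and fail fast.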

Now we're almost done with setting up our tools. One thing we still need is to copy a bunch of configuration files. This is the part which is, I would say, not the best part of setting up a new JavaScript project: there are loads of different files you need to set up and configure, and each of the different plugins we have comes with its own configuration files. You want to use this link here to open the right directory in your project, and you're going to grab those files from the Git repository. Either you copy them one by one or you clone the entire project and move them into the right folders. We'll be using the CI folder later, but we essentially need those configuration files. If you want to copy everything, you can also copy CI; it doesn't matter.

    5. Exploring Configuration Files

    Short description:

In this part, we'll explore the different configuration files for our developer dependencies. We'll discuss the purpose and content of each file, such as .eslintrc for linting rules and the recommended LWC rule set it extends. We'll also cover the importance of excluding certain files and folders, like the cache and coverage folders, from the Git repository. Additionally, we'll examine the .prettierignore file for preserving automatically generated files and the .prettierrc file for setting formatting rules, including the override for Lightning Web Components. Finally, we'll look at the jest.config.js file, which extends the defaults of the Jest plugin and provides shortcuts for module imports.

It doesn't matter. We'll be using the content of the CI folder in other steps. But for now, I need you to grab .eslintrc, .gitignore, .prettierignore, .prettierrc, and the Jest config file. All of these are specific configuration files for our developer dependencies. I'll just copy them here. We want to put them at the root of our project, so be careful to place them at the right level. I will explain the content of these files just after; I want to make sure everyone has enough time to copy them.

Right. So let's take a look at these different files. I'll look at them in the same order that they're defined. The first one, .eslintrc, tells us which linting rules we're going to apply to our project. What it does is start by extending one of the plugins we've added: a set of rules for ESLint that is provided for us. Remember, we added this particular module in our development dependencies, the LWC ESLint configuration, and it comes with a list of bundled ESLint rules. We're using the recommended set of rules. There are other rule sets that you can follow; there are stricter and more relaxed ones, and you can basically use these as a starting point. We've also added an override for our test files. This makes sure that our configuration and test files are linted for the Node environment and not just a browser environment. The difference between our JavaScript files located under src/modules and, for example, the jest.config.js file is that everything that is a web component goes under modules, and that is going to be served to a client: it's JavaScript code for the browser. But what is in the other folders, like the Jest config file, actually runs on Node.js. So the environment is different, and the linting rules obviously differ too, because a browser doesn't behave like a Node backend. The other override rule here is not used yet, but we'll see later that we're going to be adding some tests. Tests do not run in a browser; they will also run in Node, so again a different set of rules and different global variables, for example. So these are the basics of our ESLint rules.
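Put together, the .eslintrc ends up looking roughly like this. This is a sketch based on the description above; the exact config name and override globs in the workshop repository may differ.

```json
{
  "extends": ["@salesforce/eslint-config-lwc/recommended"],
  "overrides": [
    {
      "files": ["**/*.test.js", "jest.config.js"],
      "env": { "node": true }
    }
  ]
}
```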

For .gitignore we have quite a few entries, mostly temporary junk files being excluded. It's not critical if you forget some of them, but one thing that is important, and specific to this project, is adding the cache folder to the ignore list, because you don't want to include the cache in your Git repository; it can generate a lot of noise and unnecessary changes. So make sure that you at least have this one; the other entries are more optional. The next one is .prettierignore. Prettier is going to check the formatting of a lot of different folders and files, so we want to make sure that we do not touch certain files that are automatically generated. The .vscode entry, in my case, covers my VS Code preferences. The coverage folder is created at the root of your project if you generate code coverage for your tests; you don't want to reformat it, because it is automatically generated, and by the way, this is also something you want to exclude from Git. The dist folder is created when you build the package; we're not going to distribute the package or use production mode, so we won't have a dist folder. And there's, again, the cache and the resources. The resources are excluded here, and actually I made a mistake: it's not resources, it should be assets, but this only matters if you start adding some text files to your static assets. For example, if you have a JSON data file that you want to keep as-is so it doesn't get rewritten, you could exclude assets. In the end, most of what will be rewritten is the configuration at the root of the project and everything that is under modules as you work on your web components.
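As a sketch, the two ignore files cover entries like these. The cache folder name below is an assumption based on the transcript; match whatever cache folder actually appears at your project root.

```
# .gitignore (sketch)
node_modules/
lwr-cache/
coverage/
dist/

# .prettierignore (sketch)
.vscode/
coverage/
dist/
lwr-cache/
src/assets/
```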

The next one is .prettierrc, which is pretty close to a template with the base formatting rules. You can obviously fine-tune them to your liking, but this is, I would say, a basic set of instructions. I will not detail all the rules here, but what really matters is the override, which is critical for working with Lightning Web Components. When you are working with Lightning Web Components, most of the syntax in your HTML files is standard HTML, but there are certain things, like templating directives, which are specific to Lightning Web Components. So we need an override: instead of the default HTML parser, we need to use the Lightning Web Components parser in Prettier when working with HTML files. Otherwise, you'll see some errors. So it's important to have this particular override for HTML files in Prettier. And finally, the last thing we need is jest.config.js. This is going to be useful when we add tests, and what it does is extend the defaults of the Jest plugin for Lightning Web Components, which is a good start. It also provides shortcuts for our modules. You've noticed that our first web component is nested pretty deep, several levels down in our project: src/modules/example/app. So the app.js class is something you would otherwise need to import with a long path. Instead of having to import it with the full relative path, we're building a shortcut with a regular expression: when we want to refer to the app.js class, we can just import it as example/app.
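The HTML override described above looks roughly like this in .prettierrc (base formatting options trimmed; Prettier ships an "lwc" parser specifically for LWC HTML templates):

```json
{
  "trailingComma": "none",
  "overrides": [
    {
      "files": "**/*.html",
      "options": { "parser": "lwc" }
    }
  ]
}
```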

    6. Setting Up Testing and Git Repository

    Short description:

    We'll be using Native Shadow DOM for testing web components. We'll run Prettier for formatting, ESLint for checking JavaScript files, and Jest for running tests. The test script will fail initially as we have no tests yet. Next, we'll set up a local Git repository using the Git CLI and commit our changes. Finally, we'll create a new repository on GitHub to push our code.

    This is a pretty convenient shortcut we'll be using in our tests. And another thing we are using here is a parameter for the Lightning Web Components test mode: Native Shadow DOM. That is because there are several options for configuring Shadow DOM in tests. You can actually use a synthetic Shadow DOM, which is a polyfill for older browsers that do not support the Native Shadow DOM. But here, we're using the real thing: the Native Shadow DOM supported by the browser itself.
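Putting the pieces together, a jest.config.js matching this description could look like the following sketch. The preset name, the mapper pattern, and the native-shadow flag are assumptions based on the LWC Jest tooling; check the workshop's provided file for the exact contents:

```js
module.exports = {
    // Extend the defaults from the Jest preset for Lightning Web Components
    preset: '@lwc/jest-preset',

    // Shortcut so tests can import "example/hello" instead of the full
    // relative path src/modules/example/hello/hello
    moduleNameMapper: {
        '^example/(.+)$': '<rootDir>/src/modules/example/$1/$1'
    },

    // Use the browser's real (native) Shadow DOM instead of the synthetic polyfill
    globals: {
        'lwc-jest': {
            nativeShadow: true
        }
    }
};
```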

    Alright, that's it for the tour of the different configuration files. It was a lot of information. Let's move forward. Cool. So we have all of our project settings. Let's now make sure that our scripts are running. So go back to the terminal and start running those scripts. The first one is running Prettier with npm run format. At this point when you run it, you'll see that it scans all the different files there. And I don't know if the screen will show you that, but some of these files are showing up a bit more grayish. Actually, it's because I also have syntax highlighting in my terminal. But you will see some differences here. Oh, yup. Here, adding the link to the repo. So you'll see that some of these files actually appear more visible than others, because some of them have been rewritten. In my case, the CSS was rewritten, the JSON file here was rewritten, et cetera. Now that I've run Prettier, I can move on and use the next command, which is running ESLint. So I'm gonna do npm run lint. This will scan all my JavaScript files and check them for errors. Now, everything is fine. You saw my first JavaScript file is very empty, there's really nothing in it. So it passes linting without any issues. And the next thing we want to do after that, the third and final check, is running tests with npm test. Notice how we don't have to call the run part here. This is because test is one of the default commands built into npm. So we could do npm run test, but there's an even shorter way: just npm test. And this calls Jest here. What you'll notice is that the test run will fail. It's giving us an error. And the reason why there's an error, well, it's because it found no tests. So this is perfectly fine: no tests were found, so the test script is failing. It is indeed checking for tests in our project. So the configuration is working, it's just that we don't have any tests at this point. We will add some tests later on.
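For reference, the three scripts discussed here typically map to entries like these in package.json. This is a sketch: the exact commands and glob patterns may differ in the workshop repository, and the format:verify variant is the one the CI part uses later:

```json
{
    "scripts": {
        "format": "prettier --write \"**/*.{css,html,js,json,md,yaml,yml}\"",
        "format:verify": "prettier --check \"**/*.{css,html,js,json,md,yaml,yml}\"",
        "lint": "eslint src/modules",
        "test": "jest"
    }
}
```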
So make sure these three scripts do work and don't worry about the error when you run npm test for now.

    All right, now that we've set up all of the tooling dependencies, at least to run in our local environment, we can move forward and start pushing our code to a source-controlled repository, and we'll be using GitHub to do that. The first thing we need to do, before even talking about GitHub, is to set up our local Git repository. Here we're going to be using the Git CLI to do a couple of things. I'm going to run the git init command here, and this creates an empty Git repository on my machine. So this is just on my computer, this is not online or anything, and there's no history information at this point. But you can see already that my command line changed here. I'm working on a main branch at this point, and I do have some changes in my project. Obviously, we can do this with the command line, but we could also use a graphical user interface to do this. And you can see here that these are all the files that I have in my project, obviously without the files that are covered by .gitignore. We're going to be using the CLI here so that we can move forward very quickly. I'm going to do git add . at the root of my project. This essentially takes all the files that have been modified in my project and puts them in staging mode. So when I do this, it will take the 14 files that I have modified and put them in staging mode so that they're ready to be committed. And I will run the last command here, which is a commit, with a message. I'm using the conventional commit style here. If you're interested, you can read more about it. It's basically a structure to describe what a commit does. There's a commit prefix, whether it's a feature, a fix, or something that deals with the build or CI, and then a short message just after. So here I'm just saying this is a feature, this is the initial commit for my repository.
    And then I commit my changes, and now I have a local Git repository with a small commit history of just one commit with my changes in there. Again, everything is just on my computer at this point. Nothing is on GitHub. So the next step is to go to GitHub, where you want to log in and create a new repository. So I'm going to just use this URL here. I'm going to create a new project here.
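The local-repo steps above can be sketched as a short shell session. To keep it safe to run anywhere, this sketch works in a throwaway directory with a placeholder file and identity; in the workshop you would run the git commands at the root of your real project instead:

```shell
set -e
scratch=$(mktemp -d) && cd "$scratch"        # throwaway directory for the demo
git init -b main                             # empty local repository on a "main" branch
git config user.email "demo@example.com"     # placeholder identity for the commit
git config user.name "Demo"
echo "{}" > package.json                     # stand-in for the real project files
git add .                                    # stage everything not covered by .gitignore
git commit -m "feat: initial commit"         # conventional-commit style message
git log --oneline                            # history now contains a single commit
```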

    7. Creating GitHub Repository and Setting Up CI

    Short description:

    To start from scratch, delete the previous project and create a new repository on GitHub with the same name as the project. Follow the provided instructions to set up the remote repository and push the local main branch to GitHub. After completing the instructions, reload the GitHub page to see the files in the repository. In part three, we'll focus on continuous integration by adding CI to our GitHub repository.

    Actually, I will need to delete something first. I had a previous project, so I'll just make sure that I delete it before we start. In my repositories, let's see. Yeah, here, let me just get rid of the previous project that I did for my tests. And I will re-create it, and we can go over it together as I re-create the project. All right, deleting this project. There you go. So now I can start from scratch.

    So when you're on the GitHub home page, you already have, next to recent repositories, a New button that you can use. I think this is the simplest way to go forward. So I'm going to click on New here. And this will ask me some basic questions on how I want to create my repository. For the sake of simplicity, I'm going to keep the same name as the project. So I'm going to call it lwr-project. And this is the name that you should have in your editor at this point: the root folder name is the default name from when we created the project. So I'm going to keep it this way. You can make it public or private; it doesn't really matter here. For what we're going to do, we can keep it private, it's perfectly fine. And you don't have to check anything here. Just go directly to the bottom and click Create repository.

    Now that the repository is created, you have a URL that you can remember easily to get back to it. And there are sets of instructions here. What we want to do is follow the second set of instructions, because we already have a local repository. So simply copy this whole block of three lines, and run it in your terminal, inside your project. So copy, paste. There you go. The first line adds a remote origin, saying that there's going to be a remote repository tracking your local repository. The second line is actually not needed: we're already on the main branch, but this would force the local branch to be renamed to main. And the third one is really important. We are pushing the local main branch to the remote origin, so to GitHub. And we are making sure that we keep track of this branch as we move forward with all the commits. So I'm going to hit enter, and it will do everything for me. Now, you want to make sure that the last line here correctly says that branch main is set up to track origin/main, origin being the remote, which is GitHub.com. If everything went well when you ran these instructions, you can now go back to the page here on GitHub and do a reload or refresh, and you should be seeing all the files from your project. And obviously the readme is just the boilerplate readme, but at least you have your own configuration files that you just added, and you can see in the package.json that you do have the extra scripts you added, everything we did so far.
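The three lines GitHub shows can be sketched like this. Since a snippet can't reach github.com, this sketch stands up a local bare repository as a stand-in for the empty GitHub repo; in the real workshop the remote URL would be something like https://github.com/<your-user>/lwr-project.git:

```shell
set -e
remote=$(mktemp -d) && git init --bare -q "$remote"   # stand-in for the empty GitHub repo
work=$(mktemp -d) && cd "$work"
git init -q -b main
git config user.email "demo@example.com" && git config user.name "Demo"
echo "# lwr-project" > README.md
git add . && git commit -q -m "feat: initial commit"
# The same three lines as in GitHub's "push an existing repository" block:
git remote add origin "$remote"   # on GitHub this is the https/ssh URL of your repo
git branch -M main                # a no-op here, we are already on main
git push -u origin main           # push and start tracking origin/main
```

The last command prints the "set up to track" message mentioned above, which is what confirms the link between the local and remote branches.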

    I'm going to leave you a few seconds to make sure that everyone catches up. So this is the end of part two, and we'll be looking at the next step afterwards. We'll keep waiting a few more seconds for everyone to get ready for part three. In part three, we'll be looking at continuous integration. We'll be using our GitHub repository and adding CI to it, so that the code quality tools we set up locally also run remotely.

    8. Setting Up CI Workflows

    Short description:

    We'll be building two CI workflows: one for the main branch and another for checking pull request content. The workflows will ensure that the main branch passes a set of checks and that pull requests are verified before merging. Although the workflows will be similar, production environments may have different checks for the main branch and pull requests.

    All right. I'm going to give you a couple more seconds, and then we'll be looking at GitHub Actions to test our project and set up CI workflows. We're essentially going to be building two CI workflows. One of them is going to be running on the main branch: it's going to be making sure that everything on the main branch passes a set of checks. And we're going to be adding another one, which will be checking the content of pull requests. So the scenario we're going to follow is that each time someone opens a pull request, we want to verify the content of the pull request before merging it into main. And then once it's been merged into main, we want to rerun a check to make sure that the main branch is always valid and always passes CI. The two workflows are going to be nearly identical. Obviously, in a production environment, you would have different checks that you'd be running. You'd probably have more checks running on the main branch than you have on PRs, for example.

    9. Implementing GitHub Actions for CI

    Short description:

    For continuous integration, we'll use GitHub Actions. Set up GitHub Actions by creating the nested .github/workflows folders and copying the provided CI YAML files. The workflows are triggered by a push on the main branch or a pull request on any branch. The CI workflow includes a job that runs Prettier, ESLint, and Jest to check formatting, check linting, and run tests. The unit test part is commented out to ensure CI passes before adding tests. Commit and push the configuration files to the remote GitHub repository. Check the status of the CI workflow on the GitHub website in the Actions tab.

    All right, let's take a look at part three now. For continuous integration, we're going to use GitHub Actions. And setting up GitHub Actions is pretty simple: you need a special set of directories and files, and everything is going to take place in our project folder. So what I'm going to do now is create some new folders. At the root of my project, I'm going to create a .github/workflows path, so two nested folders: the first one is .github, and then workflows under it. Now, we are going to be copying two files that were provided in the base repository with the instructions here. If you copied them over from the previous part, you can just move the content of the ci folder directly under workflows here. If you haven't copied those two files already, you can go and open the directory here and copy the content of these two files. So that's what I'm going to do here. I'm going to create the two files, ci-pr.yaml with its contents, and ci.yaml. As you can see by the name of these files, these are two CI workflows. One of them works directly on the main branch; the other is the one that is going to be running on pull requests, so on other branches that you want to merge into main. I mentioned this earlier, but the two files are very similar. If you compare them, you'll see that there's very little difference: only four lines are different. Obviously the name is different, but also the trigger for the CI workflow. The rest, all the steps, are identical. So let me quickly cover the base structure of a GitHub Actions CI workflow. The CI workflow is configured as a YAML file, and it has a name. This is what is going to be shown in the user interface that you can see in GitHub to follow the execution of your tasks. It has a set of triggers that are described by the on property here. There are a number of triggers.
    The first one that I've added here is a manual trigger. workflow_dispatch means that you can use the github.com user interface to manually trigger the CI workflow. If you don't have this trigger here, you cannot start it manually. So this one is very convenient for testing: if something goes wrong, you can execute a workflow manually at any time. The other trigger is the one that differs between the two workflow files. This one is the base CI workflow, and it's being triggered when there's a push on the main branch. It's quite simple to read: when there is a push on the main branch, it will run the CI workflow. After that, the file is structured in jobs. Jobs are the high-level groups of tasks, I would say, that are visible in the user interface. In our case, we only have one job, which is called format, lint and test. And this is the same set of instructions that I asked you to test when we were working on part two. It's going to run Prettier, it's going to run ESLint, and it's going to run Jest to run tests. And then, most importantly, you have the list of steps that the job is going to follow. I'm not going to cover all of these in detail, but what we're going to do is check out the content of the GitHub repository and install a couple of Node dependencies, with some specific instructions to cache the dependencies so that it goes faster. And then, if you scroll down to the end, these three last instructions are the ones that are really important. We're gonna use the scripts that we added earlier to check the formatting. Here, we're gonna use the verify version of the format script, because we are not gonna rewrite the files; we just wanna make sure that they match the Prettier configuration style rules. Then we're gonna run linting. And finally, we will run tests.
    Notice that the last part, the unit test part, is actually commented out. The reason why I've commented this out is that you do not have tests at this point in your project, so we want to make sure that CI passes before we start adding tests. This one is disabled for the moment; we will enable it later on. But you already have the template in here so that you can move forward faster. Cool, so we saw our two YAML files. Again, very similar files: they just have different triggers, because one works on a push on the main branch and the other one works on a pull request on any branch. All right, so now that we have our two configuration files for the CI workflows, what we need to do is commit them. I'm gonna be running the same commands I ran earlier to add these two files to my remote GitHub repository. So I'm gonna do git add .github, which is the name of the folder that contains my YAML files. This will stage my .github folder. I'll add a commit message, "ci: add CI workflows"; notice again the conventional commit style: this is a task that has to do with CI, so I'm adding a little ci prefix here. And once I have committed locally, I'm going to push my local commits to the remote GitHub repository, so I'm going to do git push. All right, and at this point, the CI workflow is going to be deployed to my remote repository and it will start to run. So the next step is to use the GitHub website to check the status of my CI. I'm going to go back to my project here, and you can look at the Actions tab, which tracks the execution of the different CI workflows. If I click on the Actions tab, you can see that there is one workflow run in progress at the moment, and this is based on the commit I just did; this is the name of my commit, ci: add CI workflows. I can click on the workflow run to see more details. Here, you can see the list of jobs. In my case, there's only one job in my CI workflow.
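As a sketch, a ci.yaml along the lines described above could look like this. The action versions, Node version, and script names are assumptions; the files provided with the workshop instructions are the source of truth:

```yaml
name: CI

on:
    workflow_dispatch:      # manual trigger from the GitHub UI, handy for testing
    push:
        branches: [main]    # in ci-pr.yaml this trigger becomes pull_request instead

jobs:
    format-lint-test:
        runs-on: ubuntu-latest
        steps:
            - name: Checkout source code
              uses: actions/checkout@v2
            - name: Install Node
              uses: actions/setup-node@v2
              with:
                  node-version: 16
                  cache: npm   # cache dependencies to speed up later runs
            - name: Install dependencies
              run: npm ci
            - name: Verify formatting
              run: npm run format:verify
            - name: Run linting
              run: npm run lint
            # Unit tests are commented out until we add tests in part four:
            # - name: Run unit tests
            #   run: npm test
```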

    10. Fixing CI Formatting Issue and Taking a Break

    Short description:

    In this part, we fixed the formatting issue in our CI workflow. We intentionally introduced the error to demonstrate how to diagnose and fix it. By running the formatting scripts, we rewrote the YAML files and pushed the changes to the GitHub repository. The CI workflow was rerun and this time it succeeded. We then took a break before moving on to the next part, which involves contributing to the app.

    So I can click on the name of the job, and here I can see the steps of my workflow. So it's gonna be setting up my workflow. I have a bunch of warnings, but nothing serious here. And we're gonna see all the different steps being executed one by one, and we'll see if there are any errors. Alright, so in this first run, you'll notice that there is an error. And this is actually intentional; this is something that I placed on purpose in the instructions. You'll see that the Prettier formatting step failed. When you open this step here, you can see its details and look at its output. You can see the command that was run, npm run format:verify; this was the command for this particular step. And below, you can see the output of the command itself. What you can see here is that Prettier is reporting formatting issues for the two YAML files we added for our CI rules. And to tell you the truth, I made this on purpose: I added extra spaces in the configuration file so that we'd have a first CI run that fails, so that we can see how we can diagnose these errors. So CI failed here. Okay, that's fine. We can fix it. In order to fix it, all we need to do is go back to our project and run our formatting script, npm run format, and this will rewrite our two YAML files. So if I open this window a bit more, you can see that these two have been modified. In my terminal, they show up as white instead of gray; this is because they have been rewritten. And you can see here, I have some files which have been changed and that I need to commit and push. So I will apply the same bit of logic that I did earlier: I will add everything, commit with a conventional commit message, and push my changes. So I'm copying all three of those instructions and dropping them in my terminal, and that's it. What this does is fix the formatting and push our changes to the GitHub repository.
    Now, because I made a change again directly on main, which is not really a best practice (you generally want to work with pull requests), it will rerun our CI workflow. So I can go back to the homepage of my project and into the Actions tab. And because I just pushed another commit to the main branch, it triggered another workflow run. You can see our workflow run with the name of our commit, again: fix formatting. So let's take a look at it. Let's take a look at our job and wait for the different steps to run. And this time, because I ran Prettier and it fixed my formatting, we should have our first workflow that succeeds. We'll just wait a couple of seconds for it to run and we can see the output. Generally, installing the npm dependencies is the long part, but this should be even shorter now because you normally have a cache that should be working. For some reason, I still find this kind of long. I wonder if my cache is really working; maybe there's something wrong with the computation. Someone raised a good point: if it's not working for somebody, check whether you're using master or main as the name of the branch. Normally, in the instructions provided when you were linking your local repository with the remote repository, there was git branch -M main, which was supposed to rename your local branch to main, and that would avoid any conflicts. The other question was: what was the change you did in the Prettier config again? So what I did was rerun Prettier so that it removes some extra spaces in the CI files. Okay. I don't remember exactly where, but I left some extra blank space at the end of a line, and this was forcing CI to fail. Once you fix this, you'll see that the new CI workflow run formats all files and doesn't report any errors: all matched files use Prettier code style. And then again, there's a bunch of other things we do, but it's running fine. Cool. All right.
    So this is how you set up CI, and we'll be adding tests to the workflow later on as we contribute to the app. Before we jump into contributing to the app, I suggest we all take a small break. This will allow people who are lagging a bit behind to catch up. So let's take a five-minute break and join again in five minutes. I will stay online and... oh yeah, good point: the cache wasn't created because the workflow didn't finish the first time. Good point, thanks. Yes, the first run failed before the caching step could complete. So yes, let's take a five-minute break and resume in five minutes. I will stay online here for questions if there are any. Otherwise, it's a good time to stretch your legs and grab a drink. After that we'll move on to part number four, which will be contributing to the app. All right, let's see. I think we're done, so let's take a look at the next step. So we're gonna be doing a contribution to the app.

    11. Creating New Web Component and Adding Tests

    Short description:

    In this part, we'll create a new branch and a new web component called 'hello', right next to the 'app' component. We'll create the necessary HTML and JavaScript files for the component, including a template with dynamic variables. We'll then include the new component in our app by editing the app.html file. After saving the files and restarting the LWR app, we'll see the new component in action. We'll verify that the component is reactive by replacing a variable value and observing the changes. Finally, we'll explore how web components work by inspecting the source code in the developer console. Next, we'll move on to adding tests for our web component using Jest. We'll create a subfolder named '__tests__' in the hello directory and follow Jest conventions for writing tests.

    And the idea behind the next step is that we're gonna be in the shoes of a contributor, not necessarily the author of the app, maybe someone else. And the first thing we're gonna do when we contribute to an app, obviously, is create a separate branch, because we're gonna be adding some features to the project. So I'm gonna go to my terminal and check out a new branch. This branch is called my-first-lwc. So we're gonna create a new web component. And there we go, I created my branch. You'll notice that the name of the branch appears in your prompt in the terminal. Now we're gonna create a new directory for a hello component. So you go into your source directory, then modules, then example, and under example, you're gonna create a new folder called hello, right next to app. So notice I'm at the same level as the app folder. Make sure you place it in the right directory. We're working in the example namespace, and this is our second web component, called hello. And then we're gonna create a couple of files. We're gonna create a hello HTML file with a basic template. It's actually very similar to what we already have in the app HTML file, except this time we're gonna be using some dynamic templating. I will explain the structure of this file just after. I'm just gonna make sure that I copy all of them first, and then I will explain what these do. So two files: a hello.html file and a hello.js file with the provided source code. The HTML file contains the template. This is standard HTML, and we do have one thing which is non-standard, which is the interesting part: that's the templating syntax for bringing dynamic variables into our HTML. And here you can see we have a greetings variable, which we display in a header, but which we also use as the value of an input. So it's very, very simple code here. We have one text input which has a handler we'll use to update the variable.
    And then we're displaying the same variable in the header. When we go to the hello JavaScript file, we have again a class that extends LightningElement, which is a web component with a few extra features. It has a variable whose default value is 'World'. And then we have an event handler, which is very much standard JavaScript for handling an input event: we're just taking the value of the input and assigning it to the greetings variable. Now, what is nice with the templating structure here is that the template is what's called reactive. Each time a variable changes, the DOM is re-evaluated for changes. So whenever we change the value of greetings, the component will be re-rendered. So when I type something in the input, the title here will change. And that's what we're going to see when we run our code again. So very simple code here. Obviously, there's a lot more behind Lightning Web Components, but I just want to give you a small taste of it. Now that we've added our new component, we need, of course, to include it in our app. So the next step is to edit the app.html file and include a reference to our sub-component. To do that, I need to take this line with h1 Hello LWR and replace it with this, which is a reference to our child component. You'll notice that I am not using a self-closing tag. You could be tempted to add a slash here to have a self-closing tag, but this is not something Web Components support. This is not specific to Salesforce; it's just that Web Components require you to have both the opening and closing tags for custom elements. So again, you'll find here that example is the namespace and hello is the actual name of the component that we're using. What is gonna happen at runtime is that the content of our child Web Component is injected within the app Web Component.
    So save this file, and remember to save all of your files, obviously, because otherwise it will not work. If you closed your server because we were using some Git commands, you need to restart the LWR app in dev mode: that's npm run dev. It will compile our different components, and then we can refresh the page we had earlier. And there we go. Now you can see that we have a new title here and we have our input below. The next thing we want to do is make sure that our web component behaves in the right way. So I'm going to replace World with LWR, and you can see that as I type in the input, the header here is being updated. That's because the component is reactive. Very simple, very little JavaScript involved, and it works fine. Again, if you want to check how web components work, you can open your developer console and, for example, inspect the source code. You can see here there's a bunch of custom elements, like example-hello and example-app, and they both have their shadow roots open at this point. We're really using standard web components in our browser. This is how we can build complex apps with components which are isolated from one another. And it's even more fun when you start playing with CSS, because you can build pretty interesting UIs with that shadow DOM encapsulation, I would say. All right. I hope that everyone was able to get to this point. Now that we have the first web component we actually wrote, let's work on adding some tests to it. We're gonna be creating a test for our web component, because what use is CI, I would say, if it doesn't run any tests? So the next step is to write some tests. To do this, we're gonna be using Jest, and we're gonna follow some Jest conventions. We are gonna create a subfolder in our hello directory here, and it's gonna be named __tests__.
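Reconstructed from the description above, the two component files likely look roughly like this. This is a sketch, not the exact workshop source: the variable and handler names are inferred from the narration, and the template is shown as a comment since LWC templates live in their own .html file:

```js
// hello.html -- the template; {greetings} is LWC's reactive binding syntax:
//
// <template>
//     <h1>Hello {greetings}!</h1>
//     <input type="text" value={greetings} oninput={handleInput} />
// </template>

// hello.js -- the component class backing the template
import { LightningElement } from 'lwc';

export default class Hello extends LightningElement {
    greetings = 'World'; // default value shown in the header and the input

    handleInput(event) {
        // Assigning to the field triggers a reactive re-render of the template
        this.greetings = event.target.value;
    }
}
```

In app.html, the component is then referenced with explicit opening and closing tags, something like `<example-hello></example-hello>`, since custom elements do not support self-closing tags.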

    12. Creating Test for Hello World Component

    Short description:

    The name of the subdirectory and file is important. It needs to end with .test.js. Use the same name as the component for all files. Copy the provided template code and we'll explain what the test does before running it.

    Now, the name of this subdirectory is important, because there is a specific file-name pattern that Jest uses to pick up tests in this directory. So make sure you do name it correctly. And then, under this new directory, you wanna create a hello.test.js file. Again, the name of the file is important: the suffix matters. It needs to end with .test.js, otherwise the test will not be picked up. And as a convention, we use the same name as the component for all files, so it's always named hello, with different extensions afterwards. I'm gonna grab the provided template; again, make sure to copy the entire block of code. It's a bit more verbose than what we did so far, because writing tests is a bit more complex than just writing the component itself. I'll let you do this and I will explain a bit what the test does. After that, we'll run the test, obviously, just to make sure that it does pass. I'll give you a couple more seconds to copy everything.
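To recap the naming conventions, the hello component folder ends up looking like this (assuming the standard __tests__ directory convention used by the LWC Jest tooling):

```
src/modules/example/hello/
├── hello.html
├── hello.js
└── __tests__/
    └── hello.test.js
```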

    13. Exploring Test Suite Code

    Short description:

    Let's take a look at the code in our test suite. We import the Lightning Web Components runtime method to create elements and a reference to our Hello class. The import path uses a shortcut mapper for convenience.

    All right, let's take a look at the code in our test suite. What we have first are a couple of imports. The first import is a method imported from the Lightning Web Components framework that allows us to create elements, and it's something that we'll need to use to inject the element into our test page. Then we import a reference to our Hello class, which contains our Lightning Web Component. You'll notice the import path here, which is using the configuration we saw in jest.config.js: there is a shortcut mapper for the import path. Instead of writing src/modules/example/hello/hello.js, we can just use example/hello, because we have this shortcut pattern in place in the module name mapper.

    14. Test Suite and Clean Page

    Short description:

    We have a first test suite in Jest where we add web components to an empty document. After each test, we remove any child from the body to ensure a clean page for the next test.

    We have a first test suite. This is a Jest test suite. Inside this test suite, we have one test, and we also have an afterEach block, a special set of instructions that runs after each test. The way we work with Jest here is that we have an empty document, and we're gonna be adding our web components to that document. But between tests, we need to make sure that nothing is left over from the previous test runs. So after each test, we remove any child from the body of our test page. Whatever it is, we just loop: as long as there's a first child attached to our document body, we remove it. Very simple. This way, we can make sure that the page is clean for each test.

    15. Running Test for Hello World Component

    Short description:

    We're running a test for a small Hello World component to ensure the correct greeting message is displayed when input is entered. We create the element using the createElement method, providing the name of the Web Component and a reference to the JavaScript class. After creating the element, we append it to the page and check that the default greeting is displayed. It should be Hello World, not Hello test.

    Now, for our specific test. The cleanup above happens for all tests; this is the test we're actually running. It's just one test for this small Hello World component — obviously, you may want more as you build more complex pages. What this test does is simply make sure that we display the right greeting message when something is typed. For our test, we'll enter "test" in the input box and make sure the greeting updates. One thing common to all Lightning Web Components tests is that we first need to create the element. This uses the createElement method we imported from the framework above. We provide two things: the name of the web component — the name of the custom element, which is what you'll see in the DOM tree — and a reference to the JavaScript class that holds the definition of our web component, the code defined in hello.js. Once we have created the element, we append it to the page. Some elements have attributes and properties, and you can set those properties before appending to the page; our component is very simple, so we're not modifying any properties and we append it directly. At this point, we do a first check: we make sure the default greeting is displayed before changing anything. So we're actually checking that it's not already "Hello test" — it should be "Hello World", because that's the default.
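The createElement call described above has this shape. This is a sketch using a minimal stand-in for the real createElement from the 'lwc' package; the tag name c-hello is an assumption based on the component described in the workshop.

```javascript
// Stand-in for createElement from the 'lwc' package, just to show the
// shape of the call: a custom-element tag name plus the component class.
function createElement(tagName, { is }) {
  return { tagName, componentClass: is };
}

class Hello {} // stand-in for the class exported by hello.js

const element = createElement('c-hello', { is: Hello });
// In the real test we would now do: document.body.appendChild(element);
// and then assert the default greeting before changing anything.
```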

    16. Testing Web Component and Pushing to GitHub

    Short description:

    To check the content of the H1 element within a web component, we need to access its shadow root. We programmatically simulate input events to update the component's variable and check if it re-renders correctly. After running the test, we can push the changes to GitHub, add the test file to the repository, and enable the tests in the CI workflows. Remember to uncomment the necessary lines in the YAML files and run 'npm run format' to avoid formatting issues. Finally, commit and push the changes to create a remote branch on GitHub.

    Now, you'll notice something. We cannot just do a document.querySelector to find our H1 tag — that will not work. I can prove it by opening our web component here. If this were standard vanilla JavaScript, we could do document.querySelector and grab our H1 element. But we can't do that here, because the H1 is nested inside a shadow root, isolated from the body of the document. To get to the H1 element, we need to use element.shadowRoot to go inside the shadow root of our element, and then call querySelector on that — otherwise the H1 is not visible from the document body. So we go to the element that holds the H1, call shadowRoot, and run the query selector on it. This is how you inspect what's happening inside your web component. Then there's a regular expect, making sure the text content displayed in the H1 is not "Hello test". So that's the initial behavior. Next, we want to make sure that when we enter something in the input element, we can see the change happen. Obviously, we cannot literally type into the input element, because we don't have a screen here — but we can do it programmatically. We grab a reference to the input element, set its value manually, and dispatch an input event, just as if a user were typing on a keyboard. This fires the oninput event handler, which in turn updates the variable, and at that point we can check whether the component re-renders with the right text. Now, to account for the fact that we need to wait for the component to re-render, we await a promise that automatically resolves.
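The auto-resolving promise works because awaiting it yields one turn of the microtask queue, letting a scheduled re-render run before the assertion. Here is a plain-JavaScript sketch of the same ordering, with no LWC involved — the framework's re-render is simulated by a queued microtask.

```javascript
// Simulates the "wait for re-render" pattern: a framework schedules an
// update as a microtask, and awaiting a resolved promise lets that
// microtask run before we make our assertion.
async function checkAfterRerender() {
  let greeting = 'Hello World';
  // Simulated re-render scheduled by the framework after an input event:
  Promise.resolve().then(() => { greeting = 'Hello test'; });

  // Checking right now would be too early -- greeting is still the default.
  const tooEarly = greeting;

  await Promise.resolve(); // yield so pending microtasks can complete

  return { tooEarly, afterWait: greeting };
}
```

Without the await, the assertion would see the stale value; with it, the simulated re-render has already run.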
So it's just a way to make sure we leave a moment for the page to refresh, and after that we can check the text content of our H1 element and make sure it matches "Hello test", as we would expect. So this is how it works. Remember that we need to introduce this asynchronous step whenever there's a re-render of the page; otherwise we'd check the content of the H1 too quickly, before the web component has had time to update. Now that we've seen the code behind the test, we can just run it: npm test. Jest detected the right test file, ran it, and you can see the little green check mark in front of our test, followed by the status of all our tests. We have one passing test, so everything is green. Sounds pretty cool, right? Now we can push all of this to GitHub. Going back to the instructions, we add our new test file to the repository: we add the source directory, which contains the new test file and, of course, our new web component. Then we commit the new changes, with a message saying this is a feature — our first web component. Notice that we're not pushing yet; we want to do one more thing before pushing to the remote repository on GitHub. We want to enable the tests in the CI workflows. Remember what we saw earlier: they are disabled by default because we didn't have any tests, so they would have failed otherwise. Now we can go back to .github/workflows, look at the two YAML files, and uncomment the last lines in each file. Make sure not to uncomment the line that is just a regular descriptive comment; you only want to uncomment the last two lines, the name part and the run part.
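Once uncommented, the re-enabled step in each workflow file would look something like this. The step name and surrounding indentation are assumptions; only the name/run pair matters.

```yaml
      # Run the Jest unit tests (this pair was commented out until now)
      - name: Unit test
        run: npm test
```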
Do that for both files, the CI PR and the CI YAML files. Make sure you save all of your files — don't forget — then close them. At this point, what I recommend, although it's not written in the instructions, is to run npm run format so that you don't have any formatting issues; there's a chance you added an extra white space or an extra blank line somewhere. Let's save a CI run by not failing CI on a formatting issue. Cool. Now, we changed the two CI workflows and we want to make sure... Actually, I need to add everything. I modified more than just my CI workflows — you can see I have five changes, because the formatter rewrote my HTML and JavaScript files as well. So I'm going to do it quick and dirty: git add everything, with the message "added unit test to workflow". I'm committing everything. Obviously that commit message isn't quite right — ideally I would redo the first commit, "feat: first web component", properly. So now I have a commit that includes two things: the reformatted web component and the CI workflow changes. At this point, everything is committed in my local Git repository. You can double-check by running git status, which will show you that there's nothing to commit. Make sure there are no staged or unstaged files and that you're ready to go. Once everything is ready, you can do a git push to take your changes — all your commits from your local repository — and create a remote my-first-lwc branch on github.com. This creates the branch remotely and makes sure your remote branch is tracking your local branch as well.
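The Git sequence from this step can be replayed in a throwaway repository like so. The push is shown commented out, since it needs the GitHub remote; the commit message is the conventional-style one suggested above.

```shell
set -e
repo=$(mktemp -d)                      # throwaway repo so nothing real is touched
cd "$repo"
git init -q
git checkout -q -b my-first-lwc        # the workshop's feature branch
git config user.email "you@example.com"
git config user.name  "You"
echo "demo" > hello.js
git add .                              # stage the new component and tests
git commit -q -m "feat: first web component"
git log --oneline                      # shows the commit we just made
# git push -u origin my-first-lwc      # would create the remote branch on GitHub
```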
Now, once I've done this, we can go back to our GitHub repository, and at this point you should see a yellow banner at the top of your project saying that a branch has had recent pushes. This means you were able to push the branch to GitHub.

    17. Submitting Pull Request and CI Workflow

    Short description:

    Now, we're gonna do the next step as a contributor. We'll submit a pull request and wait for the CI workflow to run. After confirming the merge, we can delete the branch. The CI workflow will be triggered and run successfully. GitHub Actions is a powerful framework that allows for different sets of instructions based on branches.

    Now, we're gonna do the next step as if we were a contributor. The contributor would submit a pull request by clicking "Compare & pull request". Here you can see that you can provide a message for your pull request. If you scroll down, you can see the list of commits that were made, and in the list of changed files you can see what I did earlier: adding the new component, changing a few files. I can give it a name; I'll just keep the default one, I think it's good enough. If you want, you can also use a conventional-style commit message — adding "feat:" in front and removing the uppercase — following the conventional commits convention. Normally you would probably also add some comments describing what you've been doing, and once you're ready, you can just create the pull request.

    Right, now wait a couple of seconds, and this page will refresh automatically. You'll see that the CI PR workflow is being triggered; the part after the slash is the name of the first job being worked on. Format-lint-test is actually the only job for this workflow, so it's the only one that appears. We can click on Details and see live updates of what's going on. You can see that at this point the cache was there, so installing the dependencies was very fast. It ran my only test, the test passed, and the job completed automatically. The overall run lasted less than 10 seconds or so — very, very fast. Now that the cache is properly set up, you can see that installing the npm dependencies was skipped, because we were able to restore the Node cache. All of the steps succeeded: style and formatting were okay, our test passed, and linting went great — no linting issues. You can always open the details of the steps to look at their outputs; here you can see our test passed successfully. And that's your first successful CI run. You can now go back to your pull request, click on it, and see a little check mark next to it showing a successful run, plus the details at the bottom of the PR showing the different workflows and jobs that ran on this particular branch.

    So that's part two: we created the PR and waited for the CI workflow to run. Now we can actually merge the pull request, since we've seen that the workflow passes. One thing I'm not showing you yet: at this point, there's no protection on the merge button. In real-life production projects, you can add a protection on this button that disables it when jobs are not passing, and I highly recommend it — it prevents merging pull requests when there are issues with the CI jobs. This is part of the bonus steps for the workshop, but I want to finish the basic steps first; we can talk about the bonuses after. So let me just finish this part. All I need to do now is confirm the merge. I can add another comment with my merge, and then confirm. Now the branch is merged, and I can delete my branch. This is the default behavior: by default, the branch is kept alive on github.com, but we generally don't need it. There's an option you can enable in your GitHub project to automatically delete branches that have been merged; otherwise, you can delete them manually, because you won't be using them afterwards. So I'm deleting the branch here. Now I can go back to my project and see that my pull request was merged, along with my commit messages. And if I move to the Actions tab, we'll see a new CI run — actually it went so fast I didn't even have time to finish talking; it was already validated. What happened — sorry, I went a bit fast — is that while we had our pull request open, we ran the CI PR workflow, the workflow that runs on pull requests. After we merged the pull request, it triggered the CI workflow. In our case, it doesn't make much difference, because the two files are pretty much identical.
These two files run the same tasks. But you could have different sets of instructions depending on which workflow is running. And since the first one, CI PR, passed, CI also passed very quickly — there's really no difference between the two; they do the same tasks. Still, it's interesting to see how you can play with multiple branches. For example, you could have a user acceptance testing branch with a different set of instructions, or a release branch with its own instructions. You can use patterns for the branch names in the filters here. I've directly named my main branch, but you can write more advanced patterns. If you're interested in looking into this, we have some examples: look at one of the projects I maintain at Salesforce, the Trailhead sample applications such as LWC Recipes. I have a bunch of more advanced GitHub Actions there, and one thing we use in our workflows, for example, is a pre-release branch. We use branch-name patterns to run our CI workflows on specific branches: "spring" with the year number — spring22, for example, or summer22 — which are specific Salesforce releases. When we name our branches this way, prerelease/ followed by the name of the release, we have a specific CI workflow for pre-releases. That's something I find very convenient. Obviously, there's a lot more you can learn about GitHub Actions — it's a very powerful framework.
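A branch filter along the lines described here might look like this in a workflow file. This is an illustration, not the workshop's actual file; note that GitHub's branch filters use glob patterns, and the prerelease naming follows the convention mentioned above.

```yaml
on:
  push:
    branches:
      - main
      - 'prerelease/spring*'   # e.g. matches prerelease/spring22
```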

    18. Exploring Bonus Tasks and GitHub Settings

    Short description:

    There are a bunch of steps and tests that you can add to workflows. We used shell scripts in this workshop, but there are also more advanced pre-packaged steps available. You can import steps for Java, JavaScript, Python, and more. For example, you can use Codecov to track code coverage and generate reports. The GitHub Actions marketplace offers a wide range of options. We covered the main steps for working with a small project, including Lightning Web Runtime and Lightning Web Components. We also explored configuring CI workflows with GitHub Actions and setting up developer tooling on our local machines. Additionally, we discussed bonus tasks such as disabling the merge button until the CI job passes, adding a CI badge to the project readme, creating a pre-commit hook with Husky and lint-staged, and tracking code coverage with Codecov. I'm available to answer any questions or cover the bonus tasks. Let's get started!

    There are a bunch of pre-built steps and actions that you can add to your workflows. What we used in this small workshop today was mainly shell scripts, but there are also more advanced, pre-packaged steps. You can import steps that run on Java, JavaScript, Python, and so on, and that implement more complex behavior. For example, if you go further with the bonus steps, you can use Codecov, a tool that looks at your local code coverage and uploads it to an external service; with it, you can get nice reports on the evolution of your code coverage, for instance. There's a whole marketplace of GitHub Actions you can pick from — you don't need to script everything yourself, the way we did here for things specific to our project.
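As an example of a pre-packaged marketplace step, uploading coverage to Codecov is typically a one-step addition to a workflow job like this (the action version pinned here is an assumption):

```yaml
      - name: Upload code coverage
        uses: codecov/codecov-action@v3
```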

    So that's a wrap. We covered the main steps you need when working with a small project. I gave you a sense of what LWR, the Lightning Web Runtime, is, and how Lightning Web Components helps you create web components very quickly. We saw how to configure CI workflows to run automatically with GitHub Actions, and we also saw the basic setup for developer tooling on our local machines.

    Now, I'm still going to answer some questions, and I'll probably also go over the bonus tasks, if you want. These aren't described with step-by-step instructions, but they're things you would want to do to take this project a bit closer to production-ready. For example, the first item on the list is making sure the merge button on the pull request is disabled as long as the CI job doesn't pass. That's pretty simple to do — just a few settings in GitHub. Another bonus task is adding a CI badge to the project readme, which shows the status of CI directly on the project homepage. Another is creating something called a pre-commit hook. We use two libraries for that, Husky and lint-staged, which automatically apply Prettier and linting to our code before we commit. It's automated, so we avoid issues caused by forgetting to run Prettier before committing. And lastly, another thing that's pretty convenient, and again totally optional, is keeping track of code coverage with a tool like Codecov. This tends to be useful because, especially with open-source applications, people can make a lot of contributions without necessarily taking the time to write proper tests. With Codecov, you can set rules that make a PR fail if it doesn't provide sufficient code coverage. All right — if you have questions, please go ahead; otherwise I'll take a look at those bonus tasks and walk you through them. All right, I'll go ahead and cover those bonus tasks. The first thing I want to do is make the CI PR job mandatory for merging pull requests. I'm on my project homepage here — let me open it a bit more. We have a Settings tab, and I'm going to do two things.
The first thing, on the General settings page, is something that isn't actually in the bonus list but that I think is worth adding. When you scroll down, there's a checkbox called "Automatically delete head branches". Again, this isn't mandatory and it's totally up to you, but one thing I like to do is automatically delete branches once I've merged my pull requests, because it removes some noise as you merge pull requests in your project. It's under the Pull Requests section: "Automatically delete head branches". The other thing we want to do, which is more specific to how we control our code, is in the Branches menu, where we can define something called branch protection rules. Ah — I just realized this is not a default setting. I forgot about this because I always use our enterprise setup when working with GitHub, so you may not be able to do it if you're using the free GitHub plan, but I can show it to you in another project. Basically, you can specify rules for specific branches. If you work with an enterprise application, you can go into Settings, then Branches, and from there define branch protection rules. A branch protection rule — like this one on main — lets you specify requirements for merging into that branch. First of all, you can prevent people from pushing directly to your branch. I think this is a really good best practice: you want to work with pull requests so that your team can do code reviews before accepting incoming changes. You can also require approval from a list of code owners — basically a list of people who are approvers for your pull requests. And going down a bit further, you can also require some status checks to pass before merging.
So these are basically jobs from your CI workflows that are mandatory for PRs. In my case, on my main branch, I have two mandatory jobs, and if these two jobs haven't passed, people cannot merge the pull request. I want the scratch org tests and the format/lint/LWC tests to pass, and once those tests pass and I have approval from one of my code owners — only then can I merge the pull request. There is one exception: project administrators are able to bypass these rules and can force-merge any PR. (Sorry, excuse me, just a moment — sorry for the interruption there; that was my child barging in. But yeah, we're almost done.) So, branch protection rules. I totally forgot that this is a GitHub Enterprise feature, but I think it's very important when you work on a production project to have some level of control over who does what and who can merge — otherwise it can quickly become chaos. That was the first thing I wanted to show you; you may or may not be able to do it. The next step is pretty easy to do.

    19. Adding Badges to Readme and Using a Dashboard

    Short description:

    To add a badge to your product's readme, go to your project homepage, click on actions, and select the CI workflow. Create a status badge and copy the code to your readme. You can preview the badge and commit the changes. Badges are convenient for tracking project status and can be used in a dashboard to monitor multiple products. Clicking on a badge takes you to the workflow and allows you to view job details.

    And actually, I think it's quite important to do: adding a badge to your project readme. It's pretty straightforward. Go to your project homepage, click on Actions, and from there pick one of the workflows. You want to take CI, because obviously you want to track the CI workflow in your readme — not the CI PR workflow. From the three little dots there, you can create a status badge. This gives you the default code you need to place in your readme to get the badge. I'm just going to grab it, and after that you can either do this in your IDE or do it quick and dirty: pick your readme file, edit it directly in the web editor in the browser, add the markdown for your badge, and preview it right there. You can see "CI passing", and I can commit it by scrolling down, leaving the default "commit directly to the main branch", and committing the changes. There we go — my project homepage now shows this little "CI passing" badge. Now, if I make some mistakes and introduce errors, this will turn red and you'll see that CI is no longer passing. Another convenient thing about this badge is that you can click on it to jump straight to the workflow, automatically filtered to runs of this particular CI workflow. You can see that updating the readme triggered another CI run on my project; I was lucky I didn't include any stray white space, because Prettier could have failed the run. So, again: the badge comes from the three little dots after selecting CI — "Create status badge". This brings up a template for adding the badge, and the default options are fine.
You keep the default branch and the default event, copy the markdown, and add it to your readme file. It's convenient for the project, and as you go forward, you can add more badges. If I go back to one of my other sample applications, like this one, you can see more badges — other CI workflows, and also code coverage from other tools, for example. And that's just a small subset; some projects have many more. Badges are really convenient for keeping track of everything at a glance. You can also do something more advanced. Because I manage quite a few apps, I built a little dashboard for myself: using those badges, I pulled them into another project. In this dashboard view, you can see that some projects are failing. That's another use case: when you want to monitor multiple projects and you don't want to navigate all over the place, you can include those badges in a central place — in my case it's not really a project, just a readme file, but it displays them all in one spot. I think this is very convenient. And from this interface, I can click on one of the workflows and navigate directly to the jobs that are failing or not — for example, click on "packaging failing" to see what's going on, what the error is, what the problem is. So that's a useful little thing you can do with badges.
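The badge markdown that GitHub generates has this general shape. OWNER and REPO are placeholders for your own account and repository, and the workflow file name must match the one in .github/workflows.

```markdown
![CI](https://github.com/OWNER/REPO/actions/workflows/ci.yml/badge.svg)
```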

    20. Creating Pre-Commit Hooks with Husky

    Short description:

    We won't have time to cover everything, but I will show you how to create pre-commit hooks with Husky. You can publish Lightning Web Components on NPM or a CDN by separating them into other NPM modules. The LWR config file allows you to manage other bundles, import dependencies, build other routes, and do more advanced configurations. To set up Husky, create a .husky folder and add dependencies for Husky and lint-staged. Then, create a pre-commit script in your Node project and a post-install script to install Husky. Provide specific instructions for lint-staged to filter and run operations on relevant files.

    Next. I think we won't have time to cover everything, but I'll try to show you how to create pre-commit hooks with Husky, because that's one of the common things we do. Oh, sorry — there's a question: "Can I publish Lightning Web Components on NPM or a CDN? Is there extra configuration I need to do, or does LWC create isolated components that I can publish?" If you want to separate your components into other NPM modules, you can. This is visible in your LWR config file. In my case, I only have a single set of LWC components, which are local modules in my project, but I could separate them and import dependencies by specifying other entries here. The syntax for this is described in the resources for the workshop: on the homepage of the project, you'll see I've left a link to the Lightning Web Runtime documentation. From that documentation, you'll find the configuration for the project and see how to manage other bundles, import dependencies, build other routes, and do more advanced configurations. And yes, you may want to place your JavaScript files on CDNs so that your website loads faster; you'd do that by working with separate packages and modules that you import into your main project. So, let's see if I can cover Husky. The configuration we use is similar to what we have in our sample applications — those don't use Lightning Web Runtime, but the Husky configuration is the same. I'm going to copy the parts I need, since I don't remember the exact configuration by heart. To set up Husky so that it automatically runs before commits, you need to create a .husky folder. No — actually, let's do things in the right order: first, we need to add the dependencies.
    To add the dependencies, I need to do two things: npm install --save-dev — these are development dependencies — husky, and lint-staged. Let me dump that command in the chat so people can grab it from there, and then run it. This adds the dependencies for these two tools; after that, I'll copy the configuration we need. All right, we've installed Husky and lint-staged. Husky provides the ability to run pre-commit hooks, and lint-staged lets us execute certain tasks on a restricted set of files — it's basically a filter that Husky can use. Now that the development dependencies are set up, I can create a new .husky folder at the root of my project, and inside it a file named pre-commit — that's the name of our trigger, pre-commit, with no file extension. I'll simply grab its content from an existing project. Normally this can be generated with a script — I don't remember the exact command to pre-fill it — so instead I'll share a link to the sample app, which is actually using LWR; you can grab the Husky configuration from there, along with the rest of the syntax. Looking at the pre-commit file, its content basically tells Husky to run a Node script called precommit. So we now need to write this precommit script in our Node project so that it does what we want. I'm going into package.json to add a new script there — a new property at the end — and I'll simply copy it from the project I shared with you, since there's no need to retype everything.
    The new script to add is simply called precommit, and actually we have two things to do: add this precommit script, and also add a postinstall script that installs Husky for us. The postinstall script is normally what takes care of adding the .husky folder — it can set things up automatically for people who don't have it yet. So, two things: precommit and postinstall. postinstall is a standard npm script that runs automatically in the user's environment after npm install, and precommit is the custom script we just wrote, which calls lint-staged. Now, what we want to do — oh yeah, we're running out of time — is provide some specific instructions for lint-staged. Again, I'm not going to reinvent the wheel; I'll grab the configuration from the project I shared and paste it at the end of the file, after the development dependencies. Sorry, it's a bit small — I'm adding my lint-staged configuration. Now, lint-staged, as I mentioned, acts pretty much like a filter: Husky provides the list of changed files to lint-staged, and lint-staged applies operations to every file matching its filters, based on file extensions. There are a few entries here that aren't really relevant to our project, like triggers and XML files, and so on. For each of these file types, we run prettier --write — and not on all files at once; it actually loops and calls Prettier on each individual file matching the filter. The same applies to ESLint: we run ESLint on individual JavaScript files under the src folder. And one last thing: we also run Jest, but only on the specific files that have changed. So we don't run Prettier or ESLint on all the files in our repository.
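Putting the pieces together, the package.json additions described here have roughly this shape. This is a sketch modeled on the sample apps mentioned above — the script names and glob patterns are assumptions, not an exact copy of the workshop project.

```json
{
  "scripts": {
    "precommit": "lint-staged",
    "postinstall": "husky install"
  },
  "lint-staged": {
    "**/*.{css,html,js,json,md,yaml,yml}": ["prettier --write"],
    "src/**/*.js": ["eslint"]
  }
}
```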

    21. Running Tests and Formatting with Husky

    Short description:

We rerun tests only for changed files, ensuring quick execution. After running npm install, the Husky Git hooks are installed, enabling the pre-commit trigger. By adding changes and committing them, Husky automatically reformats files using Prettier and runs tests to catch errors. To make the pre-commit file executable, change its permissions using chmod. Once it is executable, Husky detects changes in staged files, runs the pre-commit script to fix formatting issues, and runs tests before the code is pushed online.

We just rerun tests for files that have been changed, so this is very quick. Right. So let's see. Now that I've added all of this, I'm gonna run npm install again so that it calls the postinstall script. If I scroll back up to the end of my install output, I can see this line, which is very important: Husky Git hooks installed. This means that Husky is installed and has wired up my pre-commit hook. If I did everything correctly, at this point I can do a git add. Actually, let me first introduce some Prettier issues: I'm gonna add some whitespace here so you can see it working. This would normally need to be reformatted by Prettier. So I'm gonna stage all my changed files and commit them with a message. Oh, yeah, I need to make sure that the pre-commit file is executable. It's not, so I need to change its permissions. This will depend on your OS of course, but I need to cd into .husky and run a chmod, with +x, to make the pre-commit file executable. Now that the script is executable, I'll introduce some new changes and redo the commit so that the change is detected. Committing again and, oh, I need to pull, I'm still on my branch. Okay, never mind, I got burned because I was going off script. But you see the point: if I were back on my main branch, I would commit, and Husky would look at my staged files and automatically run Prettier to fix the formatting issues on the files that are gonna be committed, so that I would not have any issues. It will also run tests, so that I can catch basic errors in the code that I'm trying to push online.
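To see why the chmod matters, here's a small self-contained sketch (the demo folder and hook body are just for illustration; `git config core.hooksPath .husky` mimics what `husky install` configures):

```shell
# Set up a throwaway repository with a Husky-style hooks folder
mkdir -p demo
git -C demo init -q
git -C demo config user.email "demo@example.com"
git -C demo config user.name "Demo"
git -C demo config core.hooksPath .husky   # what `husky install` configures

# A pre-commit hook that is not executable is silently skipped by Git
mkdir -p demo/.husky
printf '#!/bin/sh\necho "pre-commit hook ran"\n' > demo/.husky/pre-commit

# The fix from the workshop: make the hook executable
chmod +x demo/.husky/pre-commit

# Now the hook runs before every commit
echo "hello" > demo/file.txt
git -C demo add file.txt
git -C demo commit -q -m "test commit"
```

In the real project the hook body runs `npm run precommit` instead of an echo, so lint-staged formats, lints, and tests the staged files before the commit is recorded.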
