End the Pain: Rethinking CI for Large Monorepos

Scaling large codebases, especially monorepos, can be a nightmare on Continuous Integration (CI) systems. The current landscape of CI tools leans towards being machine-oriented, low-level, and demanding in terms of maintenance. What's worse, they're often disassociated from the developer's actual needs and workflow.

Why is CI a stumbling block? Because current CI systems are jacks-of-all-trades, with no specific understanding of your codebase. They can't take advantage of the context they operate in to offer optimizations.

In this talk, we'll explore the future of CI, designed specifically for large codebases and monorepos. Imagine a CI system that understands the structure of your workspace, dynamically parallelizes tasks across machines using historical data, and does all of this with a minimal, high-level configuration. Let's rethink CI, making it smarter, more efficient, and aligned with developer needs.

25 min
15 Nov, 2023

Video Summary and Transcription

Today's talk discusses rethinking CI in monorepos, with a focus on leveraging the implicit graph of project dependencies to optimize build times and manage complexity. The use of NX Replay and NX Agents is highlighted as a way to enhance CI efficiency by caching previous computations and distributing tasks across multiple machines. Fine-grained distribution and flakiness detection are discussed as methods to improve distribution efficiency and ensure a clean setup. Enabling distribution with NX Agents simplifies the setup process, and NX Cloud offers dynamic scaling and cost reduction. Overall, the talk explores strategies to improve the scalability and efficiency of CI pipelines in monorepos.

1. Introduction to Rethinking CI in Monorepos

Short description:

Today, I would like to talk about how we could potentially rethink how CI works in monorepos. My name is Juri Strumpflohner, and I have been using monorepos for six years. I am also a core team member of NX and a Google Developer Expert in web technologies and Angular.

All right. So today, I would like to talk a bit about how we could potentially rethink how CI works compared to the current CI situation that we have, with a particular focus on monorepos, and potentially large monorepos, and how we could optimize that. Before we go ahead: my name is Juri Strumpflohner. I've been using monorepos for probably six years already. For about four years now, I've also been a core team member of NX, which is a monorepo management tool. I'm also a Google Developer Expert in web technologies and Angular, and an instructor on Egghead, where I publish courses on web development and developer tools.

2. Considerations for CI in Monorepos

Short description:

When working with monorepos, we need to consider the local developer experience, automation and rules, and task pipelines. Current CI solutions are not optimized for monorepos and require low-level manual maintenance. Developers want a high-level way of defining their CI structure and need strategies to ensure scalability and manageable speed and throughput.

So when we go in the direction of a monorepo, it doesn't come for free, right? There are some considerations that come into play. One big one is obviously the local developer experience. How do we structure a project in a monorepo? How do we make sure there's consistency in how these projects are set up? Which versions do they use? How are they configured, such that we can also have some team mobility between projects, and such that maintenance stays manageable? Automation and rules around those projects are also a very important part, especially looking at maintenance and the longevity of such a monorepo.

And also features like task pipelines, being able to run things in parallel. Because clearly, in a monorepo, we don't run just one project anymore, but potentially a series of projects between which there are also dependencies. So we need to be able to build dependent projects first before we actually run our project. Those are things we don't want to do manually, but rather want tooling support for. But today I would like to specifically focus on the elephant in the room whenever we talk about monorepos, one which often doesn't get attention immediately, which is a mistake: CI. The current CI situation is basically not optimized for monorepos, because it is very machine-oriented; we need to spell out the exact instructions that we want processed. It is a very instructional approach, and very low level in that sense as well. It requires a lot of maintenance, because, as I mentioned, we no longer run just one project and that's it; we run a series of projects, multiple projects. So we need strategies for tuning the CI to make sure it keeps working even as our monorepo structure changes and more projects come into the monorepo. It is also, I would say, a bit removed from what developers want, because as a developer, I would want a more high-level way of defining my CI structure, my CI run, my CI pipeline, in the sense of saying, hey, I want to run all the projects that got touched in that PR, rather than having to fine-tune every single aspect of that pipeline. And as I said before, current CI systems don't really work for monorepos; they are designed to be much more general purpose and geared towards single-project workspaces. So today I would like to dive into some of these aspects, specifically looking at speed and throughput, because that is one major thing we need to pay attention to; otherwise our monorepo becomes a problem. If we have good collaboration going on locally within teams, but our pipeline takes over an hour for each PR, that's going to be a problem.

3. Managing Complexity and Maintainability in CI

Short description:

To keep CI in monorepos manageable and maintainable, leveraging the implicit graph of project dependencies is key. By parallelizing runs and using a task pipeline, we can optimize build times and account for inter-project dependencies. Additionally, running only affected nodes allows us to selectively test and build upstream projects. These considerations are crucial for an efficient and scalable CI configuration.

And also looking at the complexity and maintainability aspect, because clearly, as I mentioned before, we don't want to have to handle all the parallelization and the manual spinning up of machines ourselves, things like that. So I want to look a bit into how a potentially different approach could help with that complexity and maintainability. This part is actually pretty obvious: if we have more projects in a single repository, the CI time will add up, because now it's not just one project that needs to be built, tested, linted, and have end-to-end tests run for it, but five, ten, a hundred, and those are realistic numbers. I've been working with clients that have over 600 projects in a single monorepo. So we're potentially talking about these sizes.

And so, as I mentioned before, if we don't pay attention to the CI aspect of a monorepo, we basically run into congestion. We get results like these, which are simply not manageable, because developers will find workarounds. They need to ship; they have a sprint going on where they need to deliver certain features. And if each PR run takes over an hour, they start collecting features into one PR. Then we have those gigantic 200-plus-file changes, which are impossible to review, and the quality obviously suffers quite a lot as a result.

So what we want is obviously something like this: we want to keep that curve as low as possible and ideally almost flat, not something that grows exponentially with more and more projects in our monorepo. How do we do that? One of the fundamental things that every monorepo implicitly has behind the scenes is a graph. Projects in a monorepo reference each other. That might be through some dependency resolution mechanism, depending on the tool you're using, but it could also just be a reference in a package.json, where you reference the source of the other package you depend on. So there's a graph being built in a monorepo, which reflects the dependencies.

And so we can leverage that graph information, saying things like: if you run the build for this project, or serve it if it is an application, then we can look at the configuration of your monorepo and say, we might also need to build its dependencies first, because they might be an input that needs to be pre-built for that project to actually run. Those are things that we can specify, and for which we usually need a tool, so that we don't have to configure it manually and change it over time as the structure changes. We clearly don't want everything to run strictly one after the other, but rather want runs parallelized as much as possible to save time. At the same time, as you can see here, we need to account for the dependencies between projects, where one might depend on the output of the previous one, as you've just seen. So we need a mechanism that is intelligent enough that when it parallelizes, it also takes those constraints into account. This is usually referred to as a task pipeline, where we can define in a quite easy way what these dependencies look like, also leveraging the graph behind the scenes. And this is something we do at NX, for instance, because that is part of the foundation you need in a monorepo.
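
As an illustration, here is a minimal sketch of how such a task pipeline is typically declared in an NX workspace's nx.json (the exact keys depend on the NX version; "^build" roughly means "first run the build target of this project's dependencies"):

```json
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "cache": true
    },
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```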

Another thing is being able to run just the affected nodes, on CI but also locally. With affected nodes, I mean basically: if you change a project here in the middle, we can infer that we potentially need to rerun the tests, builds, and end-to-end tests of these upstream projects as well, because they depend on the project that we changed. We might need to rerun them because otherwise we could have introduced a breaking change, so they no longer build or their tests no longer pass, and we'd need to fix that. With such a mechanism, which we call affected, because it's literally a command you can run, nx affected, where you give it a target for which you want to calculate the affected nodes, we get the ability to run only a subset of the entire set of projects in the monorepo on our CI system. So you can already see these are some considerations that should play into our CI configuration to make sure the curve we saw before stays as low as possible.
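
For reference, a typical invocation looks roughly like this (flag syntax per recent NX versions; the base branch is just an example):

```shell
# Run lint, test and build only for projects affected by the changes
# between the base branch and the current commit.
npx nx affected -t lint test build --base=origin/main --head=HEAD
```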

4. Optimizing CI with NX Replay and NX Agents

Short description:

NX Replay is a distributed remote caching mechanism that optimizes computation by storing and replaying the output of previous computations. This feature, along with NX Agents, enhances the efficiency of CI pipelines in monorepos. By leveraging cache hits and selectively recomputing affected nodes, we can significantly speed up the CI process. Additionally, when changing a node with many dependent projects, such as a design system or authentication libraries, the computation can be further optimized.

Overall though, what we've discovered from working a lot with companies and large monorepos is that there are three aspects that matter. The ones we've just seen, parallelization and affected, are kind of prerequisites, but the ones that make a real difference are roughly these three. Now, these are some marketing terms as well, so don't be distracted by them. NX Workflows is something that's coming further down the road, so here we are focusing on NX Replay and NX Agents.

So what NX Replay is, is basically a computation cache. We call it Replay, and as I said, it's kind of a marketing term if you want, but it's basically distributed remote caching that you can add to your projects. It's called Replay because it literally replays computation. Meaning: if you have a project, NX will compute a hash key out of various sources. That is obviously the source code itself, environment variables, and runtime values that you might also feed into that key. It computes that key and stores the output alongside it. So if we run the computation again and there is a matching key, we restore and basically replay both the terminal output and the artifacts, because some of the tasks might produce compiled JavaScript files or CSS files or some other sort of processed output. Those are replayed as well. That's why it's called NX Replay.
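
A quick sketch of what that looks like in practice, assuming caching is enabled for the build target (the project name is hypothetical):

```shell
npx nx build my-app   # first run: executes the build and stores terminal output + artifacts under the computed hash
npx nx build my-app   # second run with identical inputs: NX replays the cached output and artifacts instead of rebuilding
```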

Now, if we look back at our affected situation, where we change the node in the middle: we push it up as a PR to our repo, our pipeline runs through, and it identifies the affected nodes. But during processing, one of the upstream projects actually fails its tests because, as we said before, we changed something and the tests don't pass anymore. We need to fix it. So we check out that PR again, fix it, and push it up again. Now, if we just had affected, we would need to rerun all these nodes again. But if you look at it, there's no point in recomputing some of them, because they literally didn't change. So part of that now-smaller affected graph doesn't need to be recomputed, because there is a matching cache key and it can simply be replayed. We're left with the two projects you can see here marked with the hourglass, which need to be recomputed. As you can see, we can constrain that path even more and make it even faster. And obviously, as more team members work on our projects, we naturally get more cache hits out of it.

Another point, though, is what happens in the worst-case scenarios, where we change a node that simply has a lot of dependent projects, because it is a central part. It could be, for instance, a design system, which is a common one, but also some authentication logic libraries or things like that, where a lot of our projects depend on it.

5. Distributing Tasks with NX Agents

Short description:

When dealing with projects that have many dependent projects, such as design systems or authentication libraries, distributing tasks across multiple machines can improve efficiency. However, manual distribution can be challenging due to varying task durations and the need to tune the number of machines. Custom scripts can help distribute tasks, but they require maintenance and don't adapt well to changing project structures. To address these difficulties, NX agents have been developed to dynamically distribute tasks based on historical data and the project graph. With NX agents, you describe what tasks to run without specifying how they are distributed. Additionally, NX agents offer dynamic scaling, fine-grained distribution, and automatic rerunning of flaky tasks. By activating distribution with a simple command, tasks can be distributed across multiple machines, improving the efficiency of CI pipelines.

And so if we change such a node, we very often hit a worst-case scenario where the affected projects are almost the entire graph, because nearly everything depends on that project. What is usually done there is to distribute. So rather than just parallelizing things as much as possible within the same machine, we try to distribute them across different machines.

Here you can see that: distribute them across machine one, machine two, machine three. What you can also see, though, is that it is a very uniform kind of distribution, right? That's usually what happens if you set it up manually, because distribution is hard; you need to code it somehow. And a very straightforward, potentially naive way of doing it is to just cut the work up by the different tasks you have and run them on the different machines.

And here you can see that the different running times can lead to low efficiency, because some tasks might take a long time, so the entire run takes longer, while other machines are already idle because their task, say the linting, was quicker and they're already done. The number of machines is also static: usually you define the number of machines, and then that number is basically fixed, and you need to keep tuning it over time as your monorepo structure gets bigger and grows. And there's also complexity in spinning up those machines, depending on the project or CI provider you're using.

And so what I see very often are custom scripts on CI. For instance, NX has a programmatic API, so you could compute the affected nodes programmatically and then try to shuffle them in some intelligent way, distributing them across different machines and even dynamically generating pipelines. This is an example for GitLab, which allows you to dynamically spin up nodes and therefore have some sort of distribution going on. But as you can see, it's a very static thing, because it's hard-coded into your code and cannot really adapt to a changing monorepo structure. So this needs a lot of maintenance.
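
Just to make the idea concrete, such a hand-rolled script often looks roughly like this (a sketch only; `nx show projects --affected` exists in recent NX versions, while the chunking and job generation are entirely provider-specific and omitted here):

```shell
# Compute the affected projects for this PR...
npx nx show projects --affected --base=origin/main --head=HEAD > affected-projects.txt

# ...then split that list into N chunks and generate one CI job per chunk,
# e.g. by writing a dynamic child-pipeline YAML for GitLab. That generation
# logic is what ends up hard-coded and hard to maintain.
```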

And so, having seen some of the difficulties that teams face in creating these custom scripts, we've been looking into what we call NX Agents, basically to help with that distribution: specifically with setting up the machines, but also with the dynamism of the distribution itself, so that you don't need to tune it continuously as your monorepo structure grows. Instead, the distribution happens dynamically based on the number of tasks being run, but also based on the running time of those tasks, using historical data.

NX Agents leverage the project graph, because they have access to the NX graph behind the scenes, so they know how the projects are structured and what dependencies there are, which is a major point when you distribute, so that tasks can be spread out in a way that is efficient. And you describe what you want to run, not how: you just say, I want to run the affected build, lint, and end-to-end tasks in a certain way, but you don't define how they are distributed.

One big part that comes with the first version of NX Agents is the whole dynamic scaling aspect, spinning up more machines depending on the PR, but also fine-grained distribution and flaky task rerunning. We have specifically focused on tests and end-to-end tests for now, but in theory we can detect any flaky task and automatically rerun it on CI. So let's have a look: what does describing what to run look like? All you need for the distribution to activate is basically this line, specifically the --distribute-on flag, where you say: I want to distribute the tasks that run right after this across 15 machines, in this case of the type linux-medium-plus-js. Those are names of machine templates with certain characteristics, which you can define based on the needs you have; it's almost like a Docker setup. And then you run the actual commands. So here you can see how you describe what to run, and not how.
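
Sketched out, the CI step looks roughly like this (syntax follows the NX Cloud documentation at the time of writing; the machine count and launch-template name are just examples):

```shell
# Hand the run over to NX Cloud and ask it to distribute the tasks below
# across 15 agents of the given launch template.
npx nx-cloud start-ci-run --distribute-on="15 linux-medium-plus-js"

# Describe *what* to run; NX Agents decide how it gets distributed.
npx nx affected -t lint test build
npx nx affected -t e2e-ci
```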

6. Enabling Distribution with NX Agents

Short description:

Enabling distribution with NX Agents leverages project graph information to split tasks into fine-grained ones, which are then distributed across specified machines. The coordinator machine on NX Cloud infrastructure manages the distribution, allowing for automatic balancing and scalability. This infrastructure can be hosted on-premises as well. On your CI, you define the distribution, while the cloud infrastructure handles the management.

So we simply say: enable distribution, this many machines, these are the commands. That's it. What happens then is that the project graph information, which is available on CI, is leveraged. You trigger that script, as we have just seen, where you run the affected build with distribution enabled. That is passed to a coordinator machine on the NX Cloud infrastructure, which takes those tasks, tries to split them up into more fine-grained tasks, and distributes them across the number of machines that you specified.

And the interesting part is that if we see we need more machines, we can add some or even expand them dynamically, without having to change the actual configuration underneath. That helps greatly with the configuration overhead and complexity we mentioned initially, because things can now balance out automatically: the coordinator simply has, say, four machines now, and the whole distribution can happen in a different, potentially more efficient way. And as I said, this is hosted on NX Cloud infrastructure, which can also be on-prem; if you need to have it on-premises, that works as well. But the core part here is that on your CI you just define the distribution and kick it off, while on the cloud side the whole infrastructure is managed for you.

7. Fine-Grained Distribution and Flakiness Detection

Short description:

Fine-grained distribution allows for splitting end-to-end tests into individual file level test runs, improving distribution efficiency and balancing machines. NX can automatically create targets for each file, enabling running tests at the file level. A concrete example shows how NX distributes end-to-end tests across different agents, reducing run time from 90 minutes to 10 minutes. Flakiness detection detects and reruns flaky tasks on different machines to ensure a clean setup.

So what does fine-grained distribution look like? One of the big concerns we often have is that machine in the middle, the end-to-end test, the red block that you can see. Very often you have those gigantic Playwright or Cypress suites that run for a given application, and they're really hard to split up because they're just one big block, and they occupy an entire machine, which can make the whole distribution inefficient.

So we looked into this with a feature landing in NX 18, which allows us to dynamically create tasks, or targets, for an end-to-end test at the file level, because Cypress as well as Playwright allow you to run single files with a filter. NX can do that automatically, with the result that we can split those suites up into individual file-level test runs. We can group them in some intelligent way, and now we have much more fine-grained pieces to distribute, so we can balance out the machines much better and get much better results.

So let's look at a concrete example: here's an NX workspace with a couple of Next.js applications with Cypress end-to-end tests, and I have a couple of tests here which are very similar and which have some sleeping and waiting in them to artificially bump up their running time. I also have NX Console installed, which lets me see what targets are defined for such a project. So if I go here and click on that link, we can see we have an end-to-end test target, but NX automatically detects that this is a Cypress end-to-end test, and so per file it gives us a dynamically created target.
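
To make that tangible, this is roughly how you could inspect and run those generated targets from the terminal (project and target names here are hypothetical; the exact per-file target naming comes from the NX Cypress/Playwright plugins):

```shell
# List the project's targets, including the per-file ones generated by NX,
# e.g. something along the lines of "e2e-ci--src/e2e/checkout.cy.ts".
npx nx show project my-app-e2e

# Run all the fine-grained end-to-end pieces; on CI these are what get
# distributed individually across agents.
npx nx run my-app-e2e:e2e-ci
```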

And so that means we can now not just run the entire end-to-end suite, but also run each single file automatically, and this is what NX will do at the CI level. So if I make a couple of changes on these pages here, let's say we write "HideArea3" here, and also do the same for the flight simulator, just to have a couple of changes going on so our tests will change. Now, if I add that and push it up, CI will run my script here.

And if you look at the script, it has been auto-generated by NX. We have enabled distribution here with --distribute-on, for now on five Linux machines, so this is a static number of machines I'm running the distribution on. What I want to run is, first of all, all the tests and builds, and then the e2e-ci targets, which are the fine-grained, distributable CI pieces. Here I just have a run-many, so I want to run all of them. So if I go to my repository here, you can see the run that has been started; right now the end-to-end tests are running.

And if I go to the CI view in NX Cloud, we see there's an in-progress pipeline execution, which, as you can see, already distributes those end-to-end tests across the different agents. You can see one test per agent being run, and this depends highly on how many agents we have at our disposal and how the distribution happens. You can also see that some things have already been completed, and here you can see how they were stacked up and how the distribution is happening. The important part is that with a simple flag, we now distribute all of these tasks across the available machines. And we have seen examples where we went from a 90-minute end-to-end run on CI down to 10 minutes, just with this kind of fine-grained distribution by NX Agents.

Similarly, we can do flakiness detection, because based on the cache we can understand whether a task is flaky: if a test run on CI has run before with a certain hash key, and we encounter that same hash key again down the road but it now leads to a different result, we can detect that and know for certain that the task is flaky. So we can move it to a different machine, just to make sure we have a clean setup and a clean startup phase, and rerun it. Here you can see, for instance, an attempt on the left-hand side where one test run failed and we knew it was potentially a flaky test; a retry started automatically, and the second one then ran through properly.

8. Rethinking CI for Monorepos and NX Cloud

Short description:

Surface information and retry failed pipelines. Dynamic scaling optimizes machine usage and reduces costs. NX Cloud simplifies distribution setup with a single flag. Task splitting and flakiness detection improve efficiency. Get connected with NX Connect and explore our resources for more information.

And that was marked as such. So we can, first of all, surface that information, but as you can see, we can also retry automatically, so the whole pipeline succeeds rather than fails. You can see that run happening here.

Another part here is the dynamic scaling, because clearly some PRs can look very different. You have some PRs which run all those end-to-end tests because the affected parts require them, while others, like PR 2 or PR 4, are super quick because we just changed one project and only need to run its tests, that's it. If we have a static number of machines, that can be suboptimal, because some of them might just sit idle, or the distribution ends up being one task per machine, which is really inefficient, also cost-wise.

And so we also allow you to not just specify the machines directly, but to provide a dynamic changeset file, where you define machines for small, medium, and large changesets. Right now, with this type of distribution, we will automatically, based on the affected nodes compared to the entire set of nodes in the project graph, distribute accordingly and launch, say, four, eight, or twelve machines. In the future, we can go even more fine-grained, where you can potentially define what percentage of the graph classifies as small, medium, or large, which gives you even more flexibility.
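
As a sketch of what such a configuration might look like (file location and keys follow the NX Cloud docs at the time of writing; machine counts and launch templates are just examples), the --distribute-on flag then points at this file instead of a fixed machine count:

```yaml
# .nx/workflows/dynamic-changesets.yaml
distribute-on:
  small-changeset: 4 linux-medium-js
  medium-changeset: 8 linux-medium-js
  large-changeset: 12 linux-medium-js
```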

This has been just a glimpse into how we want to start rethinking how CI works, especially for monorepos. What we really try to do at NX is give you the end-to-end experience. We have the smart-monorepos side, the local developer experience, where we come with tools that help you structure monorepos and facilities that let you easily automate them. But we also want to make sure you're not left alone on CI with complex setups and complex scripts, so that you succeed there as well. Because if CI doesn't work, you can have the best local developer experience, but it will simply be a problem, and you will move away from monorepos again just for that reason. That's the gap we try to fill.

We do that with NX Cloud, which is our product there. You can connect just by running nx connect, and that will walk you through the setup of the whole distribution. And again, the major point here is the simplicity of defining such a distribution with just a single flag. That single flag gives you the distribution across machines; the dynamic scaling of those machines depending on the size of the PR, so you actually save costs and don't run machines unnecessarily; the task splitting, which is the combination of NX being able to dynamically split those tasks and NX Agents then picking up the more fine-grained tasks on CI and distributing them in a much more efficient way; and finally, the flakiness detection of tasks, which can be end-to-end tasks, but potentially any other task that could be flaky as well. Definitely check out our YouTube channel, where we publish deeper dives into this and more. Otherwise, get in touch and reach out: if you have questions, my DMs are open on X, and we have a community Discord, which is pretty active. Or you can go to the docs on nx.dev, where we have all the details. Thanks.

Check out more articles and videos

We constantly curate articles and videos that might spark people's interest, skill us up, or help in building a stellar career.

DevOps.js Conf 2022
27 min
Why is CI so Damn Slow?
We've all asked ourselves this while waiting an eternity for our CI job to finish. Slow CI not only wrecks developer productivity by breaking our focus, it costs money in cloud computing fees and wastes enormous amounts of electricity. Let's take a dive into why this is the case and how we can solve it with better, faster tools.
DevOps.js Conf 2024
25 min
Atomic Deployment for JS Hipsters
Deploying an app is all but an easy process. You will encounter a lot of glitches and pain points to solve to have it working properly. The worst is: now that you can deploy your app in production, how can't you also deploy all branches in the project to get access to live previews? And be able to do a fast revert on demand? Fortunately, the classic DevOps toolkit has all you need to achieve it without compromising your mental health. By expertly mixing Git, Unix tools, and API calls, and orchestrating all of them with JavaScript, you'll master the secret of safe atomic deployments. No more need to rely on commercial services: become the perfect tool master and netlifize your app right at home!
DevOps.js Conf 2021
33 min
How to Build CI/CD Pipelines for a Microservices Application
Top Content
Microservices present many advantages for running modern software, but they also bring new challenges for both deployment and operational tasks. This session will discuss advantages and challenges of microservices and review the best practices of developing a microservice-based architecture. We will discuss how container orchestration using Kubernetes or Red Hat OpenShift can help us and bring it all together with an example of Continuous Integration and Continuous Delivery (CI/CD) pipelines on top of OpenShift.
React Summit 2022
21 min
Scale Your React App without Micro-frontends
As your team grows and becomes multiple teams, the size of your codebase follows. You get to 100k lines of code and your build time dangerously approaches the 10min mark 😱 But that's not all, your static CI checks (linting, type coverage, dead code) and tests are also taking longer and longer... How do you keep your teams moving fast and shipping features to users regularly if your PRs take forever to be tested and deployed? After exploring a few options we decided to go down the Nx route. Let's look at how to migrate a large codebase to Nx and take advantage of its incremental builds!

Workshops on related topic

React Summit 2023
145 min
React at Scale with Nx
Top Content
Featured Workshop
Free
We're going to be using Nx and some of its plugins to accelerate the development of this app.
Some of the things you'll learn:
- Generating a pristine Nx workspace
- Generating frontend React apps and backend APIs inside your workspace, with pre-configured proxies
- Creating shared libs for re-using code
- Generating new routed components with all the routes pre-configured by Nx and ready to go
- How to organize code in a monorepo
- Easily move libs around your folder structure
- Creating Storybook stories and e2e Cypress tests for your components
Table of contents:
- Lab 1 - Generate an empty workspace
- Lab 2 - Generate a React app
- Lab 3 - Executors
- Lab 3.1 - Migrations
- Lab 4 - Generate a component lib
- Lab 5 - Generate a utility lib
- Lab 6 - Generate a route lib
- Lab 7 - Add an Express API
- Lab 8 - Displaying a full game in the routed game-detail component
- Lab 9 - Generate a type lib that the API and frontend can share
- Lab 10 - Generate Storybook stories for the shared ui component
- Lab 11 - E2E test the shared component
Node Congress 2023
160 min
Node Monorepos with Nx
Top Content
Workshop
Free
Multiple APIs and multiple teams all in the same repository can cause a lot of headaches, but Nx has you covered. Learn to share code, maintain configuration files and coordinate changes in a monorepo that can scale as large as your organisation does. Nx allows you to bring structure to a repository with hundreds of contributors and eliminates the CI slowdowns that typically occur as the codebase grows.
Table of contents:
- Lab 1 - Generate an empty workspace
- Lab 2 - Generate a node api
- Lab 3 - Executors
- Lab 4 - Migrations
- Lab 5 - Generate an auth library
- Lab 6 - Generate a database library
- Lab 7 - Add a node cli
- Lab 8 - Module boundaries
- Lab 9 - Plugins and Generators - Intro
- Lab 10 - Plugins and Generators - Modifying files
- Lab 11 - Setting up CI
- Lab 12 - Distributed caching
DevOps.js Conf 2022
76 min
Bring Code Quality and Security to your CI/CD pipeline
Workshop
Free
In this workshop we will go through all the aspects and stages of integrating your project into a code quality and security ecosystem. We will take a simple web application as a starting point and create a CI pipeline that triggers code quality monitoring for it. We will cover a full development cycle, starting from coding in the IDE and opening a Pull Request, and I will show you how you can control the quality at those stages. At the end of the workshop you will be ready to enable such integration for your own projects.
DevOps.js Conf 2022
155 min
Powering your CI/CD with GitHub Actions
Workshop
You will get knowledge about GitHub Actions concepts, like:
- The concept of repository secrets.
- How to group steps in jobs with a given purpose.
- Jobs dependencies and order of execution: running jobs in sequence and in parallel, and the concept of matrix.
- How to split logic of Git events into different workflow files (on branch push, on master/main push, on tag, on deploy).
- To respect the concept of DRY (Don't Repeat Yourself), we will also explore the use of common actions, both within the same repo and from an external repo.