1001 Packages – Strategies for Managing Monorepos


When working with a monorepo, there are multiple challenges: package installation, package linking, development processes (build, lint, test) and deployment processes. These challenges vary with the type of artifacts in our monorepo (microservices, front-end apps, packages, etc.). We will explore different approaches and tools for monorepos and their pros and cons.

24 min
25 Mar, 2022

Video Summary and Transcription

This Talk discusses strategies for managing monorepos, including release strategies, building strategies, development processes, and linking packages. The speaker highlights the challenges and complexities of monorepos, such as large codebases and potential coupling of software parts. They also mention the importance of suitable tooling for successful monorepo management and the potential for standardization in the future. Additionally, the speaker shares their personal journey in programming, starting at a young age and expressing their love for the field.


1. Introduction to Monorepos

Short description:

When I think about a monorepo, I think about an elephant. My name is Tali Barak, I work for Youbeak. Today we're going to talk about strategies for managing monorepos and how we tackle them. In the monorepo world, we still have one repository in the source control, but we are generating multiple artifacts from the same repository. In the JavaScript world, a monorepo is one repository with multiple connected package.jsons. It is crucial to understand the complexities that we encounter with a monorepo, which forms a Directed Acyclic Graph.

When I think about a monorepo, I think about an elephant. You might think this is because a monorepo is a big thing, or because it's gray (although this one is red), but it's actually because of this story. The story about six blind people who are trying to figure out what an elephant is. Each one is touching a different part of it, and they try to understand whether it's a spear, a snake, a tree, or what this thing is. And sometimes when I talk with people about monorepos, this is how I feel. Everyone sees it slightly differently.

So my name is Tali Barak, I work for Youbeak. And today we're going to talk about strategies for managing monorepos and how we tackle them. It's called 1001 packages, but it's not just about packages.

So let's start: what is a monorepo? You can find all kinds of definitions, so this is my definition. In order to understand it, we first need to understand what an artifact is. Here is a dictionary definition: an artifact is an object made by a human being. In software, we're talking about things like packages, tools, a website or front-end application, maybe a mobile application, a back-end server or part of a server, which is a service. All of these are artifacts.

Traditionally, we would have one repository, and from each repository in your source control you would create a single artifact of the above. In the monorepo world, we still have one repository in the source control, but we are, in fact, generating multiple artifacts from the same repository. And if we focus, since this is a DevOps JS conference and we are talking about the JavaScript world, it actually means that in one repository we would have multiple package.json files, because in the JavaScript world what identifies an artifact is a package.json file. So this is our monorepo.

But there is one more thing. If we have a monorepo that looks something like that, it is likely that one package.json, let's say inside foo, would point to another package.json; there would be a dependency inside this monorepo. So here is my definition of a monorepo in the JavaScript world: one repository with multiple connected package.jsons. In fact, what we have in the monorepo is a graph of all the internal packages and how they are connected, and this is crucial for understanding all the complexities that we later encounter with a monorepo. This is called a DAG, a Directed Acyclic Graph, because each package points to others, and hopefully there are no cycles, because that's not a good thing. And we are familiar with the tools, even in this conference.
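As a concrete sketch of one edge in that graph (the `@acme/*` names are hypothetical), the package.json of foo might declare a dependency on the internal package bar that lives in the same repository:

```json
{
  "name": "@acme/foo",
  "version": "1.0.0",
  "dependencies": {
    "@acme/bar": "1.0.0"
  }
}
```

Here `@acme/bar` is not fetched from the public registry; it sits under, say, `packages/bar` in the same repository, and this dependency is one arrow in the DAG.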

2. Strategies for Managing Monorepos

Short description:

A few tools like Lerna, NX, Yarn, PNPM, and TurboRepo help us deal with monorepos. Before deciding on a tool, let's discuss the strategies to employ in our monorepo. The common use cases include open source tools, microservices, and design systems. Monorepos offer easy code sharing, atomic commits, unified publishing, and deployment. However, they also come with challenges such as a large codebase, difficulty in tracking changes, security concerns, and potential coupling of software parts. Despite these challenges, let's explore the strategies for treating a monorepo, starting with the release strategy and its application in microservices.

A few of them were discussed. We have tools like Lerna and NX. We know about package managers like Yarn and PNPM, and tools like TurboRepo. All of them are great tools that help us deal with monorepos.

But before we jump in and decide which tool is best for us, let's stop, and let's talk about the strategies we want to employ in our monorepo. And I'm a developer, so I start counting from zero.

And the first strategy is about scope: what do we want to have in our monorepo? The common use cases that we see are things like open source tools; a lot of the tools that we know and work with are built as a monorepo. Microservices, where you have multiple applications, is another. A design system can also be a great use case for a monorepo. Or, as we know from the Google and Facebook discipline, just put all of the company's code in one place, but that's not a requirement for a monorepo.

And should I or should I not use a monorepo? There are two great articles: "Monorepo, please do" and "Monorepo, please don't". The good thing about a monorepo is that it is easy to share code. You can make changes in atomic commits that span multiple packages or multiple projects at once. You can have unified publishing and be able to deploy or publish all of your packages at once. But we know one thing very well: there are no free lunches. So, with the good things come some things that can challenge us. To start with, it's a large codebase. A large codebase has lots of commits, and maybe lots of people working on it; it's not always easy to find your way around and see what everyone did. Also, if you need some security policy and you don't want people to access certain parts of the repository, then a monorepo is not the right tool to use. And the fact that all the code is stored in one place might encourage people to couple the different parts of the software more than is acceptable. And of course there is all the configuration for the monorepo itself; when you use a monorepo, you will encounter some challenges there too.

But let's say that you decided that you actually want that. Yes, I will go with the monorepo. So, let's discuss now about what are the strategies that you want to have when you decide and what are the ways that you should treat your monorepo. And let's start with the first strategy, which is the release. So let's take two common use cases that people use monorepo for. The first one is having microservices. So you can have multiple services that are all stored in your repository and with some packages that those services depend on.

3. Managing Monorepo Releases

Short description:

When you have multiple frontend applications in the same repository and want to share code between them, you can bundle the applications instead of publishing packages. You can manage the release pace by using a fixed approach, where all packages and changes are published together, or an independent approach, where each package has its own changelog.

So when you deploy, you also need to install those packages. A somewhat similar, but very different case is when you use it for frontend applications, when you have multiple frontend applications in the same repository. A very classic example is having your customer-facing application on one side and the internal dashboard on the other, and you want to share a lot of code between them. In this case, because the applications are typically bundled, you don't really need to publish the packages; you can just bundle them and then release them.

And then the question is, how am I managing it? What is the release pace, or cadence, that I'm going to use? One way of doing that is unified; the Lerna term for this is fixed. That means every time I decide to publish or deploy, I will publish all the packages and everything that was changed with a single version, and I will only have one changelog with all the changes for the different parts. If you look, for example, at the Angular repository, you will see that this is what they do: whether or not a package was changed, it will be included in the next release with the new version. Another approach is to make it independent. That means that when you are making a change to some package or some service or some application, you deploy and publish it regardless of all the other packages, and it will have its own changelog; we also see that in some examples. So this is the first thing you want to understand about how you're going to work with your monorepo.
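With Lerna, for example, this choice is a one-line setting in `lerna.json`: a concrete version string means fixed mode (everything is released under one shared version), while the keyword `independent` gives each package its own version and changelog. A minimal sketch:

```json
{
  "version": "independent",
  "packages": ["packages/*"]
}
```

Replacing `"independent"` with a concrete version such as `"1.4.0"` switches the same repository to the fixed/unified cadence.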

4. Building Strategies and Dependency Graph

Short description:

The second strategy for building a Monorepo involves two main categories of tooling: individual build scripts in each package or using builders/executors defined at the root of the Monorepo. Different strategies can be employed to determine what to build, ranging from building everything to using a dependencies build that analyzes changes and the dependency graph. Tools like Lerna, NX, and TurboRepo optimize the build process and cache intermediate artifacts for faster builds.

The second strategy, which is directly related to the first one, is how you're going to build your Monorepo. We have two main categories of tooling for this. The first one is to have a build script in each package, where you can run whatever you want, such as webpack or TypeScript. The other approach, used by tools like NX, is to define builders or executors at the root of your Monorepo. These executors are wrappers that know how to deal with each package, allowing for different types of builders to perform specific jobs. But how do you decide what to build every time? There are several strategies to choose from. The naive approach is to build everything regardless of where the change was made. Another extreme is to build only what has changed and publish it, installing the changes in other packages or applications from an external artifact registry. A more sophisticated approach is to use a dependencies build, where the tool analyzes the changes and the dependency graph to determine what needs to be built and in which order. Tools like Lerna, NX, and TurboRepo take this a step further by optimizing the build graph and caching intermediate artifacts, resulting in faster builds.
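The per-package variant is the simplest to sketch: each package carries its own scripts in its package.json, and the workspace tool just runs them (the package name and the specific commands here are illustrative, not prescribed):

```json
{
  "name": "@acme/ui",
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "lint": "eslint src",
    "test": "jest"
  }
}
```

In the centralized alternative, these commands move out of the individual packages and into builder/executor definitions at the workspace root, which is the approach NX takes.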

5. Strategies for Building Monorepos

Short description:

If we're working on a CI, we can do a differential build by path. The most sophisticated approach is using a dependencies build, which analyzes the changes and the dependency graph to determine what needs to be built and in which order. Tools like Lerna, NX, and TurboRepo optimize the build process and cache intermediate artifacts for faster builds. TurboRepo, in particular, centralizes the build and uses pipelines to further optimize and minimize the build process at every stage.

If we're working on a CI, we can do a differential build by path. GitHub Actions, for example, has support for that, so you can tell it to build just the paths that changed. But the most sophisticated approach, and this is what all the tools take pride in, is using a dependencies build. You know what was changed by doing some sort of Git diff from the previous commit. The tool knows the dependency graph, because it knows the relationships between your packages, and that results in a build graph that determines what needs to be built and in which order. This is, for example, the way Lerna works: you can tell it to build a package and all the other packages that depend on it. NX and TurboRepo take that one step further. Not only do they know the changes and the dependency graph, they can actually cache some of the intermediate artifacts, and then you have a smaller build graph, which means faster builds. When the build is centralized and there is some tool that controls everything and knows exactly what needs to be built, you can optimize even further. This is what TurboRepo, for example, is doing: you define some sort of pipeline, and then it can optimize, run in parallel, and minimize what needs to be built at every stage.
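In TurboRepo this centralized knowledge lives in a `turbo.json` pipeline (field names as in the TurboRepo versions current around the time of this talk): `"dependsOn": ["^build"]` means "build my dependencies first", and `outputs` tells the tool which artifacts it may cache and replay instead of rebuilding. A minimal sketch:

```json
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": []
    }
  }
}
```

With this, an unchanged package whose cached `dist/**` is still valid is skipped entirely, which is exactly the smaller build graph described above.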

6. Development Processes and Linking Packages

Short description:

The third strategy is about development processes, specifically testing and linting. There are three ways to approach this: global testing, local build, and centralized configuration. When it comes to linking packages within a monorepo, there are two strategies: package.json dependencies and implicit relationships. Package managers like npm, YARN, and PNPM also support this aspect of monorepos.

The third strategy, again tightly related to the previous one, is the development processes. I mostly talk about testing and linting, unit testing in this case. And we will see three ways of doing that, very quickly. One of them is global testing. That means I will have only one command at my workspace root, and I will run the tests for all the spec or test files I can find in my repository, every time. Naive but effective.

The second one is doing a local setup. That means each package has its own test script, and you can decide which packages you want to run it on, maybe just some of them, with rules similar to the building. And the third one is centralized, similar to the build: you have one workspace configuration that defines how the tests or the linting are done in each one of the packages, and you can use it to control which one to use. I would also like to mention Jest here, which has a projects feature, which is also a centralized configuration. You can define a specific Jest configuration for each project. So if you have things that need Babel, or need different presets and so on, you can configure each one of them.
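The centralized Jest setup mentioned here uses the `projects` option: one root configuration points at every package, and each package can still carry its own local Jest settings (presets, transforms). Jest accepts this directly in the root package.json:

```json
{
  "jest": {
    "projects": ["<rootDir>/packages/*"]
  }
}
```

Running `jest` once at the workspace root then discovers and runs each package's tests with that package's own configuration.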

The fourth strategy is about linking. And when I say linking, I mean how we manage the relationships between the packages that exist in my monorepo. This is not about external npm packages, only the internal ones. So, again, two strategies used by different tools. The first one is that the package.json in each one of the packages specifies the packages it depends on, including the internal ones. So you can say: this version. You can change the version, or you can have a generic version. The other one, and this is what NX is doing, is that you can have an implicit relationship, which actually happens when you do the import. You don't need to specify for each package which packages it depends on; you can just import, and the relationship is there. And how do we do that? Again, if you take it to the extreme, the almost poly-repo way, you just make a change to the package, build it, publish it to the artifact registry, and then install it from your private registry or whatever into the same repository. The other way, and this is one thing that is important, is that the package managers like npm and, to some extent, YARN and PNPM also support this part of monorepos. This is the part they know how to do pretty well.
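The explicit workspace support in npm, Yarn, and PNPM boils down to one field in the root package.json that says where the internal packages live (with PNPM and modern Yarn you can additionally use the `workspace:` protocol in place of a concrete version when one package depends on another):

```json
{
  "name": "acme-monorepo",
  "private": true,
  "workspaces": ["packages/*", "apps/*"]
}
```

With this in place, `npm install` (or `yarn` / `pnpm install`) links the internal packages into node_modules instead of fetching them from a registry, which is the under-the-hood linking described below.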

7. Strategies for Monorepo: Linking and Installation

Short description:

If you use Lerna with YARN, Lerna delegates package linking to YARN. For TypeScript and modules, specify the dist folder for correct builds. Another option is using TypeScript paths for building multiple applications. For installation, you can specify all packages at the workspace root for easier upgrades and fewer version mismatches. Alternatively, each package can have its own dependencies with a local install or hoisting for a single node model at the root. Remember to verify dependencies to avoid errors during deployment.

Actually, if you use Lerna and you want to use it with YARN, Lerna will simply delegate the package linking to YARN. And in this case, we do specify what are the packages, but when we are doing the installation or we want to link between them, it will install the node modules as if they are normal packages, but under the hood, it will actually link to the folder in the repository where the package is stored.

It works beautifully when you're working with plain JavaScript. It is slightly less great if you work with something that requires a build process, like TypeScript or modules and so on, or if you need Babel; then you would need to point to the dist folder rather than the source, and make sure that everything is correctly built.

I would like to mention another, somewhat less orthodox way: using TypeScript paths. Instead of specifying the packages, you can specify paths in your TypeScript configuration to the different packages and just use them. It could be a good way if you are building multiple applications. So not purely monorepo, but also a possible way.
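This TypeScript-paths approach maps import specifiers straight to source folders in tsconfig.json, skipping package linking entirely (the `@acme/shared` name and folder layout are hypothetical):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@acme/shared": ["packages/shared/src/index.ts"],
      "@acme/shared/*": ["packages/shared/src/*"]
    }
  }
}
```

The compiler then resolves `import { x } from "@acme/shared"` to the source files directly, and the bundler compiles everything together (the bundler usually needs a matching alias configuration). This is why the approach suits bundled applications rather than published packages.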

And the last strategy is about installation. Here I refer to the external modules, the npm modules that you want to add to each one of the packages. So, some strategies here. You can specify all the packages that you need at your workspace root. This is only useful if you're building an application where there is some sort of bundling process; otherwise, obviously, you won't be able to install things like microservices or other packages separately, because the dependencies will not exist there. But if you're working with applications and using bundling, this might be a good way. You will have fewer problems trying to upgrade and fewer mismatches between versions. This can be useful.

But if you go the more traditional way, meaning each package has its own package.json with a set of dependencies, then the first option is to do a local install. That means you have the dependencies, and under each package you have a node_modules folder where they are installed. You can then have different versions for different packages; it makes each package very autonomous.

Another option, and again the package managers support this, is hoisting. Everything is still defined at the package level, but when it is installed, it is actually installed under a single node_modules at the root, unless there are version mismatches. The reason this works is that when Node is searching for a package, it will just go up the tree and try to find the package somewhere higher in the hierarchy. The risk here is that you forget to add some package in one of your packages; you won't get any problem during testing or during local development, because it is available thanks to another package. But then, when you deploy just this package to your production or staging machine, you might get some errors. There are lint rules for this; there's an ESLint rule that verifies it, so we don't have to rely on our memory and brain power.
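The lint rule referred to here is `import/no-extraneous-dependencies` from eslint-plugin-import: it reports any import of a module that is not declared in the package's own package.json, catching dependencies that only resolve by accident of hoisting. A minimal `.eslintrc.json`:

```json
{
  "plugins": ["import"],
  "rules": {
    "import/no-extraneous-dependencies": "error"
  }
}
```

Running ESLint in CI with this rule turns the "works locally, breaks on the production machine" scenario into a build-time error instead.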

So to sum it up, if we want to go with the monorepo, first we need to understand what is the scope, what do we want to put in our monorepo? It doesn't have to be all of the code. Then we want to define what are the strategies that we are going to use for the release, the build, the development process, how we are going to link the different packages and what is our installation method.

8. Monorepo Processes and Tooling

Short description:

When we decide what processes to use, we need to consider the capabilities of the tools available. Thank you for your attention. The audience showed a preference for using monorepos in frontend applications, followed by using them for both backend and frontend. This aligns with the popularity of microservices. Managing a monorepo for both backend and frontend poses an interesting challenge. The question of package managers as replacements for monorepo tools like Lerna and NX was raised, highlighting the importance of suitable tooling for successful monorepo management.

And when we decide what are the right processes for our organization, we can check what capabilities the tools bring and what is the right tool for us. And that's it. Thank you very much for that. And those are my contact details.

Hey, Tali, how are you? Hey, I'm good. How are you? Awesome, awesome. Really energized after your talk; look at these notes which I took, and I'm sure the audience must have taken lots of notes today.

So... Okay, so you asked the audience the question: do you use or plan to use a monorepo in your organization? Yeah, frontend application is winning with 48%, and then 32% of people are using it for both backend and frontend. Yeah, so this was a multiple-answer question, so it doesn't sum up to 100%; it is over, in fact. And it's pretty much what I expected, I have to say. I was expecting that monorepos are more popular for frontend applications than they are for backend. I guess it also has to do with the popularity of microservices in general. And the amazing thing is actually the 32%: like one third of the audience are using it for both backend and frontend. I think that's a pretty interesting challenge for people, to manage such a monorepo.

Yes, definitely. It's pretty interesting. And I also imagined that frontend would be the highest number for monorepos. Yeah. That's awesome. So let's take some questions. I see a question: are package managers a replacement for monorepo tools like Lerna and NX? Yes, I mean, this probably came up in some of the other talks; we had the PNPM and NPM talks. So definitely the tools are important, and the advancement they made towards dealing with monorepos is what actually enabled them. It's great to think that you want to have a monorepo, but it won't go far if you don't have the right tooling, unless you want to come up with all kinds of scripts. So they cater for part of it, mostly the package linking and the package installing.

9. Managing Multiple Teams and Domains in Monorepos

Short description:

In a monorepo, you need additional tools for building and publishing, as package managers do not cover all the functionality. Multiple teams in a monorepo can work, but it requires proper management of commits and pull requests. Consider the noise and potential messiness of having all the code in one place. Monorepos can be created based on different domains, such as microservices-driven domains. However, managing different tooling, including test runners and frameworks, can add complexity to a monorepo environment.

But then on top of it you need the actual tools for building and sometimes for publishing like Lerna. So there is a lot of functionality and the package managers do not cover all of it. Just a subset I would say.

Mathias Gheno asks: what is your opinion about multiple teams in one monorepo? Well, I think teams is a very inaccurate description, I would say, because you don't have a monorepo for one person. So once you have multiple people, it depends. It can definitely work. I think it is challenging. The whole monorepo is a decision that has to be considered seriously. Having multiple teams means a lot of commits, a lot of pull requests; you need to properly manage them. Even if I don't talk about the tooling itself, it means there is a lot of noise around, and sometimes, because we are humans and have a limited amount of attention that we can give to something, if we see all that code around us, it can get messy. I'm not saying it will not work, but just think about it. Do you really want all of this noise in one place? Do you really want to run a build every time someone from a totally different team pushes something? I don't know. That's a consideration, that's a strategy. Yeah, subjective, depends upon your scenario, I guess, yeah.

Okay, so do you think that monorepos can be created considering domains of the company, like microservices-driven domain? Can be created considering domain? Yes, definitely. I would expect in a monorepo to have different domains and then you can publish it for micro frontends. I don't think I really mentioned micro frontends, but yes, you can put in a monorepo an application that is like micro frontend build or my microservices. Absolutely. That's actually, I think, where monorepo will shine. I would definitely go with that approach.

Florian Rappel asks: quite often I struggle between setting up the tests to just run over everything versus running the tests individually. Would it make sense to have either different test runners or different test environments for individual packages? What are your thoughts? I have plenty, on every subject. No, but I will take the question one step further: how do you deal with different tooling in general, not just test runners? What if you use different linters, or even frameworks; would you have your React and Angular in one place, and so on? And again, this is where you get more and more complexity. Not all tools work so well in a monorepo environment.

10. Complexities and the Future of Monorepos

Short description:

In a monorepo environment, not all tools work well. Different test runners may be required, but ideally, putting everything in a unified tooling environment can benefit a monorepo. When structuring monorepos with Deno, although there is no package manager, tooling is still necessary for running tests, linting code, and sharing files. The future of monorepos looks promising, with more tools supporting them and the potential for standardization. It will be interesting to see how monorepos evolve without the challenges posed by package management. The speaker expresses excitement about the future of monorepos and the possibilities it holds. They also mention the presence of new tools and the potential for further development in the monorepo space.

And again, this is where the complexity is; you get more and more complexity. Not all tools work so well in a monorepo environment. I don't want to name names, but I've encountered tools that are very problematic to work with in a monorepo. Sometimes working in a non-root path is not supported, and then later on they might add support. So would it make sense to have different test runners? Yes, if you have some legacy project. But I think that if you put things in the monorepo, maybe it's a good time to unify and work in a unified tooling environment, which is where a monorepo can actually help you. Thank you.

So next question. Sebi asks: how would you structure monorepos with Deno, as there is no package manager and therefore no actual packages, just files and folders? Maybe this is the future, when we don't have to deal with all these problems. But even if you put up a Deno application, and I only have a little knowledge about Deno to be fair, you still need to run tests, you're probably still linting your code; I mean, Deno developers probably want to do that. So you still need some of the tooling. And yes, the package horror show you can skip, but you still have some other things. And I can also imagine, though I did not work in that environment, that at some point you will want to share some files. You have your Deno mapping files, from the packages to the actual implementation, so you might want to share them between your microservices and so on. But it will be very interesting to see how this evolves when we don't have all the package troubles. That's interesting.

So, just expanding on it a bit, because throughout this conference we saw different tools, and there were multiple talks on monorepos. I just wanted to ask: what do you see the future of monorepos looking like? What do I see? So, actually, the Deno example is a good one. Definitely in the short term, I mean in the coming few years, we are going to see more and more tools supporting monorepos, tools that did not before, and we already see a lot of tools coming into play. You have TurboRepo now as a new player, Lerna resurfaced at some point, so we see tools, and I guess this is not the end of it. There will probably be more tools around it. And at some point, maybe we will be able to standardize the way to work with monorepos, and we won't have to reinvent everything every time we build a new project. But I think it will be a very interesting and nice future, if you ask me. It's interesting, yeah, interesting for sure. Really looking forward to good times ahead, I would say. Good times are coming. Yes, cool. So I would just like to ask a non-technical question, because I'm really amazed by you and your experience. I would like to know a little bit about your journey, like how you got started with web development. Wow.

11. My Journey in Programming

Short description:

I started programming at the age of 13 in 1983. I got my first computer and knew that programming was what I wanted to do. I joined the Army and was part of a computers unit. Over the past 30 years, I have been involved in programming and love it.

Yeah, so I started programming at the age of 13, and that was in 1983. So everyone is now invited to do the math; you end up with 51. I got my first computer. I actually saw someone programming and I knew this is what I want to do. Where I live, we have compulsory military service, so when I joined the Army, I was in a computers unit, and then university. And ever since, for over 30 years, this is what I'm doing. I love it. I was away from programming for some time, doing some other things, but always in this area.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

React Advanced Conference 2021React Advanced Conference 2021
19 min
Automating All the Code & Testing Things with GitHub Actions
Top Content
Code tasks like linting and testing are critical pieces of a developer’s workflow that help keep us sane like preventing syntax or style issues and hardening our core business logic. We’ll talk about how we can use GitHub Actions to automate these tasks and help keep our projects running smoothly.
DevOps.js Conf 2022DevOps.js Conf 2022
33 min
Fine-tuning DevOps for People over Perfection
Demand for DevOps has increased in recent years as more organizations adopt cloud native technologies. Complexity has also increased and a "zero to hero" mentality leaves many people chasing perfection and FOMO. This session focusses instead on why maybe we shouldn't adopt a technology practice and how sometimes teams can achieve the same results prioritizing people over ops automation & controls. Let's look at amounts of and fine-tuning everything as code, pull requests, DevSecOps, Monitoring and more to prioritize developer well-being over optimization perfection. It can be a valid decision to deploy less and sleep better. And finally we'll examine how manual practice and discipline can be the key to superb products and experiences.
DevOps.js Conf 2022
27 min
Why is CI so Damn Slow?
We've all asked ourselves this while waiting an eternity for our CI job to finish. Slow CI not only wrecks developer productivity by breaking our focus; it also costs money in cloud computing fees and wastes enormous amounts of electricity. Let's take a dive into why this is the case and how we can solve it with better, faster tools.
DevOps.js Conf 2022
31 min
The Zen of Yarn
In the past years Yarn took a spot as one of the most common tools used to develop JavaScript projects, in no small part thanks to an opinionated set of guiding principles. But what are they? How do they apply to Yarn in practice? And just as important: how do they benefit you and your projects?
In this talk we won't dive into benchmarks or feature sets: instead, you'll learn how we approach Yarn’s development, how we explore new paths, how we keep our codebase healthy, and generally why we think Yarn will remain firmly set in our ecosystem for the years to come.
DevOps.js Conf 2024
25 min
End the Pain: Rethinking CI for Large Monorepos
Scaling large codebases, especially monorepos, can be a nightmare on Continuous Integration (CI) systems. The current landscape of CI tools leans towards being machine-oriented, low-level, and demanding in terms of maintenance. What's worse, they're often disassociated from the developer's actual needs and workflow.

Why is CI a stumbling block? Because current CI systems are jacks-of-all-trades, with no specific understanding of your codebase. They can't take advantage of the context they operate in to offer optimizations.

In this talk, we'll explore the future of CI, designed specifically for large codebases and monorepos. Imagine a CI system that understands the structure of your workspace, dynamically parallelizes tasks across machines using historical data, and does all of this with a minimal, high-level configuration. Let's rethink CI, making it smarter, more efficient, and aligned with developer needs.

Workshops on related topic

React Summit 2023
145 min
React at Scale with Nx
Featured Workshop (Free)
We're going to be using Nx and some of its plugins to accelerate the development of this app.
Some of the things you'll learn:
- Generating a pristine Nx workspace
- Generating frontend React apps and backend APIs inside your workspace, with pre-configured proxies
- Creating shared libs for re-using code
- Generating new routed components with all the routes pre-configured by Nx and ready to go
- How to organize code in a monorepo
- Easily moving libs around your folder structure
- Creating Storybook stories and e2e Cypress tests for your components
Table of contents:
- Lab 1 - Generate an empty workspace
- Lab 2 - Generate a React app
- Lab 3 - Executors
- Lab 3.1 - Migrations
- Lab 4 - Generate a component lib
- Lab 5 - Generate a utility lib
- Lab 6 - Generate a route lib
- Lab 7 - Add an Express API
- Lab 8 - Displaying a full game in the routed game-detail component
- Lab 9 - Generate a type lib that the API and frontend can share
- Lab 10 - Generate Storybook stories for the shared ui component
- Lab 11 - E2E test the shared component
Node Congress 2023
160 min
Node Monorepos with Nx
Multiple APIs and multiple teams all in the same repository can cause a lot of headaches, but Nx has you covered. Learn to share code, maintain configuration files, and coordinate changes in a monorepo that can scale as large as your organisation does. Nx allows you to bring structure to a repository with hundreds of contributors and eliminates the CI slowdowns that typically occur as the codebase grows.
Table of contents:
- Lab 1 - Generate an empty workspace
- Lab 2 - Generate a node api
- Lab 3 - Executors
- Lab 4 - Migrations
- Lab 5 - Generate an auth library
- Lab 6 - Generate a database library
- Lab 7 - Add a node cli
- Lab 8 - Module boundaries
- Lab 9 - Plugins and Generators - Intro
- Lab 10 - Plugins and Generators - Modifying files
- Lab 11 - Setting up CI
- Lab 12 - Distributed caching
DevOps.js Conf 2022
152 min
MERN Stack Application Deployment in Kubernetes
Deploying and managing JavaScript applications in Kubernetes can get tricky, especially when a database also has to be part of the deployment. MongoDB Atlas has made developers' lives much easier; however, how do you take a SaaS product and integrate it with your existing Kubernetes cluster? This is where the MongoDB Atlas Operator comes into play. In this workshop, the attendees will learn how to create a MERN (MongoDB, Express, React, Node.js) application locally, and how to deploy everything into a Kubernetes cluster with the Atlas Operator.
React Summit 2023
88 min
Deploying React Native Apps in the Cloud
Deploying React Native apps manually on a local machine can be complex. The differences between Android and iOS require developers to use specific tools and processes for each platform, including hardware requirements for iOS. Manual deployments also make it difficult to manage signing credentials, environment configurations, track releases, and to collaborate as a team.
Appflow is the cloud mobile DevOps platform built by Ionic. Using a service like Appflow to build React Native apps not only provides access to powerful computing resources, it can simplify the deployment process by providing a centralized environment for managing and distributing your app to multiple platforms. This can save time and resources, enable collaboration, as well as improve the overall reliability and scalability of an app.
In this workshop, you’ll deploy a React Native application for delivery to Android and iOS test devices using Appflow. You’ll also learn the steps for publishing to Google Play and Apple App Stores. No previous experience with deploying native applications is required, and you’ll come away with a deeper understanding of the mobile deployment process and best practices for how to use a cloud mobile DevOps platform to ship quickly at scale.
DevOps.js Conf 2022
13 min
Azure Static Web Apps (SWA) with Azure DevOps
Azure Static Web Apps were launched earlier in 2021, and out of the box they can integrate with your existing repository and deploy your Static Web App from Azure DevOps. This workshop demonstrates how to publish an Azure Static Web App with Azure DevOps.