Next Generation Code Architecture for Building Maintainable Node Applications


In today's fast-paced software development landscape, it's essential to have tools that allow us to build, test, and deploy our applications quickly and efficiently. Shipping features fast requires a healthy, maintainable codebase, which can be tricky and daunting to sustain, especially in the long run.

In this talk, we'll explore strategies for building maintainable Node backends by leveraging the tooling that Nx provides. This includes how to modularize a codebase, use code generators for consistency, establish code boundaries, and keep CI fast as your codebase grows.


30 min
14 Apr, 2023


Video Summary and Transcription

Today's Talk focused on code architecture, modularization, and scaling in software development. The speaker discussed the benefits of separating code by domain and using tools like Nx to improve productivity and enforce a modular architecture. They also highlighted the importance of automating library creation and configuration. Additionally, the Talk covered code scaling and deployment strategies, including caching and automated code migrations. The speaker emphasized the flexibility and scalability of Fastify and the advantages of using a monorepo for front-end and back-end development.


1. Code Architecture and Scaling

Short description:

Today, we will discuss code architecture and building maintainable node applications from a tooling perspective. We often see a problem with scattered features across different folders, which hinders scalability and causes merge conflicts. A better approach is separation by domain, which allows for atomic and localized features. We will also explore domain modules, automation, and code scaling as the product grows.

So let's dive right in! Quite a mouthful of a title, actually, but what I want to look into today is code architecture and building maintainable Node applications, from a tooling perspective.

The main reason is that, independently of front-end or back-end projects, as part of my consulting and work with clients, I often see a structure like this. It's perfectly fine when you start a new project, but the main problem is that when I add a feature to the product, it ends up scattered across those different folders, based on the structure we have here.

And the thing is, this is a separation by type. So all the APIs, whether REST or whatever we're using, maybe tRPC, are in the API layer, while the services are in the service layer and the data access is in the data-access layer.

And the project doesn't really scale. As you add more features, you won't just have one file per layer, as in this very, very simple example. And as you add new team members, they constantly work across these same folders, and it's very easy for them to run into problems like merge conflicts and the like.

So there's an alternative to that, which is separation by domain. I'm pretty sure you've seen this; a lot of people do it, and I think it's the better approach, simply because you can now sort the different features into their own areas, so each becomes more atomic and more localized to one single area of your entire product.

And again, you get the benefits out of that. These are the things I want to touch on today: domain modules and how we can structure them, how we can add automation to help us stay within those domain modules, and then a bit about code scaling, in the sense of what happens if I keep adding more of these, maybe adopt a monorepo, and so on. How can I make sure my code scales as my product gets bigger?

2. Introduction to NX and Modular Architecture

Short description:

I'm currently the senior director of developer experience for Nx, a Google developer expert and AI instructor. Nx is open source and helps improve developer productivity. It can be used in both monorepo and single-project setups. Modularizing by domain boundaries improves maintainability, flexibility, reusability, and testability. However, importing modules from different domains can still happen accidentally. Nx addresses this issue by providing guardrails and a modular architecture. The base layer of Nx includes the workspace, while plugins offer technology-specific tooling automation.

I'm currently the senior director of developer experience for Nx, a Google developer expert and AI instructor. Nx is open source, and it's a tool for helping you improve developer productivity. It's a set of tools and techniques you can adopt incrementally: start at the lower level, then add more on top of it. We're known for monorepos, but today I'm actually talking more about the standalone-project side of it. So you can't just use Nx in a monorepo; it's also useful for a single project.

So why modularize by domain boundaries? Nowadays you almost need to ask ChatGPT just to make sure you're on the right track, but I actually came up with some good answers. First, obviously, there's the maintainability part of it, because as we mentioned before, you get those small, more cohesive features, and those modules are nicely encapsulated. Then flexibility, because you can potentially go ahead and rip out a module, since it's reasonably self-contained (not always that easy, but possible). And reusability: as you start splitting things up, you might see patterns that are similar across domains, so you can extract them even further and actually reuse them in your code. Testability is also a nice side effect, and there's more along those lines, because now that you have modules, you can test a single one in isolation as well.

So what prevents me from doing something like this, right? Because now I have my nice structure, structured by modules, a domain-driven kind of approach. But nothing actually prevents me from just importing. Let's say my order service imports something from the product-list API, because someone has a function exported in their API layer and I'm just importing it. Maybe not even intentionally; my IDE auto-completes and pulls in that utility function. Can we do better? Can we have something in place to actually restrict that a bit more? This is where we at Nx started thinking quite a lot, because, as mentioned in the intro, we did consulting for quite large companies, which usually have large codebases. They run into these types of issues continuously, because they have 60 to 300 developers on that same codebase and they keep adding features all day. So you want to have some guardrails in place.

Now, as mentioned, Nx is known for the monorepo side of things. But if you look at the architecture of Nx, it is actually modular itself. At the base layer (you can see it at the very top there) is your workspace. That can directly use the base layer of Nx, which means you get the task running and you get caching, which is useful for monorepos. But on top of that, optionally, you can also have plugins. You can imagine plugins as technology-specific tooling automation that makes your life easier: they can generate code, abstract some of the lower-level build tooling, provide code migrations, all that sort of thing. And these plugins are actually very interesting in a single-project setup as well. Code generation doesn't really have anything to do with monorepos; it's just an ergonomic tool you can use to make your life easier.

3. Standalone Product and Modularization

Short description:

Standalone projects were introduced to repurpose the structure of Nx. A generator produces a single Node application with Fastify, Express, or Koa. Extracting logic into separate local libraries provides encapsulation and modularity. The application imports and registers the libraries, making it lean. Nx provides path mapping for cleaner imports. The extracted parts become integrated into the application, which serves as the bundling container for deployment. Further modularization of the local modules is possible.

So standalone projects were introduced a couple of months ago, well, almost half a year ago, where we kind of repurposed how the structure of Nx looks. We don't have the usual apps and libs folders; for a standalone project you have a source folder at the very root. And there's a generator for that: for Node, for instance, you can use the Node standalone preset, and it won't generate the normal Node monorepo setup but just a single Node application with Fastify, with Express, with Koa. We have a couple of templates there that make your life easier when getting started.

And the structure looks like the following. You can see there's a source folder with an app in it, and I've already added a couple of features, again organized by domain: we have the orders, product details, product list, things like that. One approach you already have in a monorepo is one app and multiple libs, and you can take that same approach to modularize your standalone application. So rather than having that single, if you want, monolithic codebase where you just have folders, you can actually extract them and create separate local libraries. I'm calling them local libraries because we're not going to publish them, but they have their own setup, their own build process, their own testing. So you can extract that logic into more modular parts.

Now, what's the advantage? Well, first of all, these are more encapsulated, because each of these libraries comes with an index.ts file, which is your public entry point, if you want: your public API to the outside world within that single project. Here, for instance, I'm exposing the routes for that library (this is my orders library, say), and then internally I can have further facilities; maybe a dedicated function for each route handler, and so on. You can basically add whatever you need in your project, but you can already see how this is modularized.

Now, this is a Fastify application. I use this example because Fastify has some built-in mechanisms that play really nicely with modularization. Matteo Collina actually has a talk later today, I think one of the last talks, so you should definitely check it out, where he'll go deeper into Fastify and why it has that modularity built in. But you can use this with Express, as mentioned, with Koa, and with other Node frameworks as well. At the upper level, in this case Fastify, all you do is import that library (you can see it has kind of an npm scope) and register it with Fastify. There's probably an even better way; I'm not a Fastify expert, so maybe Matteo will unveil some better way to load it dynamically, since Fastify has auto-loading capabilities, for instance. But this is how you plug it in, and the application gets really, really lean.

Now you might wonder how I can just import that at my-node-app/modules and so on. The reason is that whenever you create such a library, Nx already sets up the TypeScript path mappings, so the import is much nicer: you don't have weird relative imports to some location within your project. Again, this is just developer ergonomics. So what we end up with is a structure almost like this, where the Node app is our application, and the parts we extract become almost independent. We can't run them on their own, but they're integrated into the application, which bundles them together. Your application really becomes the bundling container, the deployment part: in the end, when you deploy the app, that's what gets built; it pulls in and compiles the modules, and then you ship it to your server. But we can actually go further and modularize those local modules even more.
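Before moving on, here's a minimal sketch of what that wiring might look like (the file names, npm scope, and routes are hypothetical, not taken from the talk's actual demo):

```ts
// modules/orders/src/index.ts -- the library's public entry point
import type { FastifyInstance } from 'fastify';

// Only what is exported here is visible to the rest of the workspace;
// services and data access stay internal to this module.
export async function ordersRoutes(app: FastifyInstance) {
  app.get('/orders', async () => {
    return [{ id: '1', status: 'shipped' }];
  });
}
```

```ts
// src/main.ts -- the application just imports and registers the module
import Fastify from 'fastify';
import { ordersRoutes } from '@my-node-app/modules/orders'; // resolved via TypeScript path mapping

const app = Fastify({ logger: true });
app.register(ordersRoutes);

app.listen({ port: 3000 }, (err) => {
  if (err) throw err;
});
```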

4. Code Structure and Enforcement

Short description:

This is an example of how you can structure your code by splitting it into API, data access, and services. However, there is currently no mechanism in place to enforce the rules of importing from other libraries. We may want a more streamlined flow from API to services or data access.

And this is just an example; you can structure them however you want. I'm using the same API/service/data-access layer structure here, just for simplicity. But it's one potential way of splitting things up and structuring them.

And at the code level, it looks like this: you have the API, the data access, and the services. These are just the respective folders, where each of them is its own library, situated below the orders folder. That becomes my domain boundary.

But still, if you notice, we don't have a mechanism in place to enforce these rules yet, right? We now have a clearer API; we have the public index.ts file of each of these libraries, so it's stronger than just having folders. But there's still nothing that prevents me from importing directly from some of these other libraries. I can reach right in. For instance, from within the orders data-access library I could access some API function, which we potentially don't want. We might want a more streamlined flow, from the API going through the services or the data access.

5. Managing Dependencies with Nx Rules and Automation

Short description:

To handle different domain areas accessing features of another domain area, Nx defines rules in two dimensions: type and scope. Types include API, service, and data access, while scopes encompass orders, profile management, and product checkout. Tagging types and defining rules allow for controlled dependencies between domains. Automation through linting ensures these rules are enforced automatically, preventing manual checks during pull requests. Linting also offers extensions for various editors, such as VS Code.

And then, what happens if a different domain area accesses features of another domain area? Something like that can be perfectly legit, right? It should just be conscious, a place where I consciously allow such an import. To handle those situations, what we did in Nx is define rules, and they usually come in two dimensions.

First, there's the type dimension. The type dimension is basically: what type of project am I, and which type of project can depend on which other type of project? The second dimension is more about the scope, or domain area: which scope can depend on which other scope? There might be some that can share things and others that cannot. In our simple example (it really depends on your actual project structure), the types could be api, service, and data-access. The scopes are orders, profile management, product checkout. And usually you also have some shared scopes, for entities or maybe just utility functions, things along those lines.

Now, to add those types, you need to tag the projects. That's why we added a configuration to the project where you can specify a string. This can be an arbitrary string: I usually write them with a colon, so type: plus the name, and scope: plus the actual value, but this is completely free; you can come up with your own notation if you want. Once you've tagged them, you can go ahead and define the rules. So you can say: there's type:api, which can depend on services, on utils, on entities, maybe even on data access, depending on how you want to have it. And the same for scope, where you say this domain can access this other domain. You can go arbitrarily deep, depending on how complex you want to get and how strict you want to be. And then comes automation.

This is a crucial part, because obviously you don't want to check this manually on every PR; you want it automated. And linting is actually a pretty good candidate for that, because it's static code analysis: you look at the rules you have, at the tags you've associated, and you know which file imports which other file and which projects they reside in, because you have that information. So it's all a matter of wiring those things together and enforcing them via a custom lint rule, which we actually wrote and integrated into Nx. And the cool part is that linting has extensions for various editors; here's an example for VS Code, for instance.
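As a rough sketch of how the two pieces fit together (the tags, scopes, and project names are illustrative): each library's project.json carries tags, and the @nx/enforce-module-boundaries lint rule turns forbidden imports into errors. It's shown here in ESLint flat-config form; many workspaces express the same thing in .eslintrc.json instead.

```ts
// eslint.config.js (sketch); assumes each library is tagged in its project.json,
// e.g. "tags": ["type:api", "scope:orders"]
import nx from '@nx/eslint-plugin';

export default [
  {
    plugins: { '@nx': nx },
    rules: {
      '@nx/enforce-module-boundaries': [
        'error',
        {
          depConstraints: [
            // type dimension: an API may use services, data access, and shared utils
            { sourceTag: 'type:api', onlyDependOnLibsWithTags: ['type:service', 'type:data-access', 'type:util'] },
            // services may not reach back up into APIs
            { sourceTag: 'type:service', onlyDependOnLibsWithTags: ['type:data-access', 'type:util'] },
            // scope dimension: orders code may only depend on orders and shared code
            { sourceTag: 'scope:orders', onlyDependOnLibsWithTags: ['scope:orders', 'scope:shared'] },
          ],
        },
      ],
    },
  },
];
```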

6. Automating Library Creation and Configuration

Short description:

You can use plugins and generators to automate the process of creating libraries and setting up configurations. This makes the whole process easier and avoids manual copy-pasting. The generators let you set up an entire library with its testing and TypeScript configuration, ensuring it references the necessary workspace-level definitions. You can also create your own plugins and use them locally to automate tasks. These plugins can be used to modify existing projects or create new files, such as Dockerfiles and project.json configurations.

So you get that information right as you write the code: you immediately see that this cannot be imported from that other part, because the rules don't allow it. And obviously, it's also something you can run on the command line in your CI pipeline, so you've ensured that the things going into the main branch are consistent.

Cool. So one thing you might think now is: yeah, this is nice, but it's a lot of effort. You need to create those libraries, associate the tags, come up with the rules. Well, the rules you probably only need to think through once, and then they evolve over time, but still, it's a fair amount of ceremony, to some degree.

And this is where another feature of those plugins comes in: the generators. They exist specifically to make that process lighter and easier to approach, in that they allow you to scaffold some of these things. For instance, all the libraries we've seen are just the result of running a command to generate them automatically. The usual shape is: the plugin, then the generator you invoke, then the parameters you give to that generator. The parameters obviously depend on the generator you're running, but running such a command sets up an entire chunk of the library with its testing configuration and TypeScript configuration, making sure it references the workspace-level TypeScript definitions, so path mappings and all of that work out of the box.

This obviously matters because it makes the whole process easier; if you had to copy, paste, and adjust manually, it would be cumbersome. Here's a dry run of it: you can add a dry-run flag and see what it would generate without touching the file system. You get an idea: for instance, running the library generator of the node plugin here would create that modules/checkout/api library with the readme and everything in there. And interestingly, you can also build your own.

From the Nx core team, we ship a couple of plugins that we use ourselves (which makes sense, because we use them with our clients) and we maintain them. There are over 80 community plugins, and those are just the published ones. A lot of people actually just use plugins locally in their workspace to automate things; you don't necessarily even want to publish them. Just create a plugin locally, living in that same repository, to automate the generation of libraries. What often happens is that such a generator grabs an existing Node library generator and runs it first; it basically wraps it, and then adds, removes, or adjusts things on top based on company guidelines, project guidelines, and so on. And it's not just libraries. Generators are technology-independent in the sense that you can literally just write files.

There are templates that have placeholders. Here, for instance, something we added to our own node plugin is setting up Docker: it creates a Dockerfile for you, and it already creates the corresponding configuration in project.json.
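As a sketch of what such a local, wrapping generator might look like (the file layout and names are hypothetical):

```ts
// tools/my-plugin/src/generators/domain/generator.ts (sketch)
import { Tree, formatFiles, generateFiles, joinPathFragments } from '@nx/devkit';
import { libraryGenerator } from '@nx/node';

interface DomainSchema {
  name: string;
}

export default async function domainGenerator(tree: Tree, options: DomainSchema) {
  // 1. Run the stock Node library generator first (the "wrapping" step)...
  await libraryGenerator(tree, {
    name: options.name,
    directory: `modules/${options.name}`,
  });

  // 2. ...then layer project conventions on top from local template files,
  //    whose placeholders get substituted with the options passed in.
  generateFiles(tree, joinPathFragments(__dirname, 'files'), `modules/${options.name}`, options);

  await formatFiles(tree);
}
```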

7. Code Scaling and Deployment

Short description:

So we have now a target that we can run. We've touched on domain modules automation, but code scaling is still missing. By expanding into smaller pieces, we achieve a more modular setup, making team allocation and replacement easier. We can test individual pieces and deploy different targets at varying frequencies. NX provides features like caching, distribution, and automated code migrations, ensuring scalability alongside a monorepo structure.

So we now have a target that we can run. And here specifically, you can see at the bottom that whenever you run the Docker build, it depends on the actual build of the project. So we run the build first, then wrap it up in Docker, and then we have a Docker container that we can run and deploy.
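In project.json terms, that dependency looks roughly like this (a sketch, rendered as a TypeScript object literal for readability; the actual file is JSON, and the target and image names are hypothetical):

```ts
const targets = {
  'docker-build': {
    // Nx runs the app's regular build target first, then builds the image
    dependsOn: ['build'],
    command: 'docker build -f Dockerfile . -t my-node-app',
  },
};
```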

So we've now touched on domain modules and automation, but one piece that's still missing is code scaling. What does that look like? Well, this is the current situation: if we really expand these into smaller pieces, we already get a more modular setup. Allocating teams is easier. We have the boundary rules that ensure we don't have weird references across those projects. It's potentially easier to replace a piece; it's kind of the microservices philosophy, if you want, where you can add a new service beside the old one and then remove the old one, and if they're self-contained enough, that can be easy, because you have a very clear API around them. And obviously we can test and lint pieces individually: there's no need to test all the projects in our workspace; we can just test the products API if that's the only thing we changed.

Things like that help, but it also helps when, at some point, you need not just code scaling but a different deployment frequency. Maybe there's one part of the product you need to scale more, or deploy more often. That could be the point where you switch to a monorepo: you say, well, we don't just have one Node app, we add another application, and it just imports the modules that make sense for it. Now we have two deployment targets that we can deploy at different frequencies, maybe services that scale differently, because one part doesn't need all the scaling the other does. And you can see that with this modularization, that move is much easier: all we do is add a second app, import those namespaces again, and that's almost it.

And if we talk about speed: Nx obviously comes from a monorepo scenario, so it scales perfectly alongside this. The features I've highlighted in the overall architecture diagram (caching, distribution, workspace analysis, automated code migrations) are all built in. These are the things you'll potentially need when you really scale up to multiple apps and hundreds of libs, so you're not left alone at that point. I call them layers of speed, because you can add them on top of each other: you can start with the intelligent parallelization Nx provides, where it runs your projects in parallel based on their dependencies, then add caching, then distribution on top of that.
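A hypothetical second deployment target illustrates how little is involved; it just imports the same local libraries as the first app:

```ts
// apps/admin-api/src/main.ts (hypothetical second application)
import Fastify from 'fastify';
import { ordersRoutes } from '@my-node-app/modules/orders'; // the same local library, reused

const app = Fastify({ logger: true });
app.register(ordersRoutes);

app.listen({ port: 3001 }, (err) => {
  if (err) throw err;
});
```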

So what does this look like in real life? I think I have a couple of minutes, so I really just want to give you a high level overview of what this project looks like. So this is exactly such an application.

8. Fastify and NX Structure

Short description:

This is Fastify. NX knows the structure based on imports and visualizes it. It optimizes by running specific tests and respects build orders. Caching speeds up subsequent runs. Automation is possible with local plugins and generators. A new domain can be generated with a chosen setup. Thank you for your attention and feel free to ask questions.

This is Fastify. So if I go in here, you can see the Fastify imports. I have my routes here that I imported, and these are the modules that I extracted, right? If I look, for instance, at the orders, I have an API here; it has an index.ts, it has the routes in there, and from there I structure it however I want. And again, this is just an example. I can actually run npx nx graph to visualize it, and it shows what the structure looks like in terms of the modules.

What Nx does behind the scenes is that it knows the structure based on the imports, and this is basically just a visualization of that. You can click on these edges and see: why does this edge exist, why is there a connection? You can figure it out and debug it. But Nx also uses that information for speeding things up: if you change something in the checkout services, it will only run the tests of the checkout service, the checkout API, the Node app, and so on. You can do such optimizations where you don't need to run everything. And that's coming from the monorepo background that Nx is in.

Similarly, if I want to run all the linting and testing of this workspace, you can see it now runs the tests for all the projects and modules in there, which I could also run individually, right? If I'm just working on one, I could run it for that one alone. It runs them all in parallel, and it also respects potential build orders between the packages: if one package depends on another, that one is built first, so it's parallelized in an intelligent way. This took about 13 seconds now, and if I run it again, it's immediate, because the caching kicks in: if some tests already ran before, and as part of some other build I run those same tests again, they won't be re-executed; it benefits from the cache and replays the output. Plus, I can automate things: here, for instance, I've generated such a local plugin, which has a generator, and all it has is template files with some placeholders in them, and I can just run it. I don't even have to run it over the CLI; we've even developed an editor extension where I can run my local plugin, give it a name, say new-domain, see the dry-run output directly below, and just run it, and it generates a new domain in here with the setup that I want, depending obviously on what setup you choose. So that's a super quick, high-level overview.

I'll be outside, so just find me and we can dig a bit deeper if you have more questions. Otherwise, I'd like to thank you for your attention. Thank you ever so much. Would you like to take a seat with me? I've got some questions for you. Just a reminder to all those in the room and online: you can still ask questions using Slido until we run out of time. Thank you so much for the talk. Found it really interesting.

9. Making Domains Communicate

Short description:

To make two different domains talk to each other, you can import code via path mapping. Opening up domains depends on the product structure. Services can talk between each other, or you can have dedicated internal APIs. You have the freedom to define boundaries within your project.

One of the first questions we had was: how do you make two different domains talk to each other? I guess the question is about the boundary rules, right? If you just want to share code, you can simply import it via those path mappings I showed before. In terms of how you want to open those domains up, it really depends on your product structure. I've seen people say that service layers can always talk to each other (that would be the most lightweight way), where the API cannot directly grab a service from another domain, but rather has to go through, if you want, the business layer, and you open it up that way. Or you even have dedicated internal APIs: a dedicated project where you re-export things from your own service layer. So you can go as fine-grained as you want there, within the boundaries of your own project. You can define them completely freely.
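For instance, such a dedicated internal-API library could be as small as a re-export file (a sketch; the library and function names are hypothetical):

```ts
// modules/orders/internal-api/src/index.ts (sketch)
// The orders domain deliberately re-exports a curated slice of its service layer.
// Boundary rules can then allow other domains to import only this library,
// not the orders services themselves.
export { getOrderSummary, getOrderStatus } from '@my-node-app/modules/orders/services';
```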

10. Module Boundaries in Lerna Projects

Short description:

Lerna is a simple tool that delegates task running to Nx for caching benefits. It doesn't have additional module-boundary rules; those are provided by plugins in Nx. This approach allows for a thin layer on top of existing monoliths, without deep buy-in. The nx package itself doesn't provide these rules; they must be added through plugins.

Next question is: can you use module boundaries in Lerna projects? Lerna doesn't have that right now. So, for context, we started maintaining Lerna about a year ago. If you remember the Nx diagram, Lerna sits at the base: it does the task running (and it actually delegates a lot of that to Nx behind the scenes to get all the caching benefits), but that's kind of it; it's kept simple on purpose.

What it has in addition is the publishing: automatic version changes, semantic versioning, that type of thing. But beyond that, it's kept as simple as possible, so it doesn't have those additional module-boundary rules and things like that. The module boundaries come almost exclusively from the plugins that Nx provides on top. We've actually discussed integrating some of those aspects into Nx directly, but on the other hand, we want to keep it as simple as possible, because we usually have the situation where people say: hey, I already have my monolith, I don't want any deep buy-in; I just want a very thin layer on top that speeds things up. And to keep that as small as possible, the nx package itself doesn't provide any of those additional boundary rules; rather, you add the plugins on top.

11. Module Structure and Nested Routes

Short description:

In the case of nested routes like slash order, slash ID, slash products, the structure of a module depends on the specific requirements. It is possible to have these nested routes within the same project and API, or even create a sub-module to handle them. The approach of starting with a simpler structure and then extracting as needed is often preferred. Fastify is a flexible framework that allows for easy delegation of routing to sub-modules.

Thank you. How would a module work, or be structured, in the case of nested routes, like /order/:id/products? It really depends; that's probably also related to the talk Matteo is going to give. I don't know his talk exactly, but he's going to go into that modularization approach. You could either just have those nested routes within the same project, in the same API, or you could go a step further, have a sub-module, and delegate to that. Usually I go simple first and then extract, but there's no limitation in that sense. The limitation is really what Fastify can do for you, and what I've seen so far (again, I'm not a Fastify expert) is that it's actually pretty flexible; you can easily delegate that to a sub-module with internal routing.
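One way that delegation might look in Fastify (a sketch; the sub-module and routes are hypothetical):

```ts
import type { FastifyInstance } from 'fastify';
import { orderProductsRoutes } from './products'; // hypothetical sub-module

export async function ordersRoutes(app: FastifyInstance) {
  app.get('/orders/:id', async (req) => {
    const { id } = req.params as { id: string };
    return { id };
  });

  // Delegate the nested part to the sub-module under a prefix.
  await app.register(orderProductsRoutes, { prefix: '/orders/:id/products' });
}
```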

12. Monorepo and Modular Monolith

Short description:

We define the difference between a monorepo and a modular monolith. Monorepos are well suited to co-locating the front end and back end. tRPC can be used to leverage a shared routing mechanism. The build for Node TypeScript projects with multiple modules leverages esbuild behind the scenes.

Cool. We've got lots of questions and definitely not enough time to do them all. So, just a reminder (I'll do it again in a moment): there's a speaker Q&A room near the reception for those of you in the venue, and for those of you online, go to Spatial Chat and click Q&A to join that space, because I just know we won't get through them all. But we'll try our best.

How do you define the difference between a monorepo and a modular monolith like the one you showed? Exactly, yeah. So at Nx, our team has a web page called monorepo.tools where we collect different monorepo solutions (we got a lot of contributions from different monorepo tool providers) and where we list what a monorepo is for us. We usually say it's when you have multiple applications. If you just have one application, sure, you could technically see it as a monorepo, because you have an app and multiple packages. But that standalone project is our modular-monolith approach, if you want: you have one project, and it's the default. In fact, if you run nx serve, it serves that project; you don't have to specify the name. Once you add another application as a deployment container, a deployment bundling system, then I would start speaking of a monorepo. But obviously the line is fuzzy; there's no one hard rule.

Do you see this architecture for full-stack applications as well, somewhat like tRPC? Yeah, yeah, it totally works. I mean, especially the monorepo scenario works perfectly fine. The standalone project came just recently, but we've had support for Node for a long time, because a lot of our clients and Nx users had a scenario with React on the front end and a Node back end. With them co-located, you could already share types. Now, things like tRPC make that even easier, because you have the routing mechanism built in, so you can leverage it more. We actually have a blog post: if you go to dev.to/nx, there's a blog post about how to use tRPC in an Nx workspace, for instance. Again, it's about how you stick them together, how they fit in; I think there's even a generator that makes that setup easier, things like that. But yeah, monorepos are very, very well suited to having the front end and back end co-located and sharing things between them, especially types.
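As a tiny illustration of that sharing (the names are hypothetical), a shared types library can be imported by both sides:

```ts
// libs/shared/types/src/index.ts (sketch)
export interface Order {
  id: string;
  total: number;
}
```

Both the React front end and the Node back end can then use `import type { Order } from '@my-org/shared/types';` and stay in sync.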

Next question. I think we might have time for two more. How does the build work for Node TypeScript projects with multiple modules? Do you need a bundler, or is TypeScript compilation enough? Yeah, so basically, we leverage esbuild behind the scenes. If you look at the project configuration, there's an esbuild-based executor that we use to build the TypeScript. And for Node, you wouldn't necessarily need an actual bundling step. The one thing we additionally do concerns those modules with the TypeScript path mappings: once you compile them from TypeScript to JavaScript, you obviously don't have the TypeScript path mappings anymore. So we basically have a layer in between where we implement a module-name resolver that maps them: we generate a map for those modules so that you can still deploy the result as if it were a single Node application. It sits in a dist folder, the modules are copied there precompiled, and they're linked via that module resolver, which is the entry point of your Node application. That's how we solved that.
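For context, the path mappings in question are the ones in the workspace's tsconfig.base.json, roughly like this (a sketch, rendered as a TypeScript object literal; the actual file is JSON, and the alias names are hypothetical). After compilation these aliases no longer exist, which is why the module-name resolver layer described above is needed:

```ts
const compilerOptions = {
  paths: {
    '@my-node-app/modules/orders': ['modules/orders/src/index.ts'],
    '@my-node-app/modules/orders/services': ['modules/orders/services/src/index.ts'],
  },
};
```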

13. Bundling and Deployment

Short description:

You can bundle your code into a single file without even needing node_modules. This allows you to deploy serverless functions or modules outside of your monorepo. The bundled output, or Docker container, consists of a few files that can be deployed directly to a provider. There's a flag that enables this bundling into a single file. Thank you for your questions, and please join the speaker Q&A section for further discussion.

But you don't need any bundling mechanism at all. You can, though: we added the option of bundling everything into one single file, where you don't even need node_modules. That was mostly our idea of: what if you want to deploy serverless functions, things like that, which are simple, but which you might still deploy out of your monorepo or this modular monolith? Then you might want just that bundled output, a couple of files or a slim Node Docker container, that you deploy straight to some provider, like an edge-function provider or something.

And so you can configure that. I don't remember the exact flag, but there's a flag you can enable (it's there in the configuration by default) that bundles them into a single file. That's possible.

Thank you ever so much. We have run out of time for questions. There are some lovely questions yet to be answered, including one from Felix. Thank you so much for submitting it. I'm going to encourage all of you to head out to the speaker Q&A section by reception if you want to keep chatting with Juri. Those of you online, once again, you can hit chat in Spatial Chat in order to join that physical space. And can we give a huge round of applause to Juri for his time. Thank you so much.
