Decoupling in Practice


Deploying decoupled and microservice applications isn't just a problem to be solved on migration day. Moving forward with these architectures depends on what your team's day-to-day workflow will look like post-migration.

The hardest part of this is often the number of vendors involved. Some deployment targets are best suited to specific frontend frameworks, while others cater to CMSs and custom APIs. Unfortunately, their assumptions, workflows, APIs, and notions of security can differ considerably. While there are advantages to relying on a strict contract between apps, where backend and frontend teams' work is each limited to a single vendor, this isn't always realistic, whether because you're still experimenting or simply because your organization isn't yet large enough for that kind of specialization.

In this workshop, you'll have a chance to explore a different, single-vendor approach to microservices, using Strapi and Next.js as an example. You'll deploy each app individually, establishing from the start a workflow that simplifies customization, introducing new features, investigating performance issues, and even swapping out frameworks.


- Getting started

- Overview of Strapi

- Overview of workflow

- Deploying the project

- Switching services

- Adding the frontend


- A trial account created

- The CLI installed

Chad Carlson
102 min
11 Apr, 2023



Video Summary and Transcription

Platform.sh is a platform-as-a-service provider that supports various frameworks and runtimes. In this workshop, a two-application project called FoodAdvisor is deployed, using Next.js as the front end and Strapi as the back end. The process involves creating a Strapi application, managing content collections, building the Strapi API, setting up the database and admin panel, deploying the project to Platform.sh, integrating with GitHub, configuring routes, and deploying the back end. The workshop also covers deploying the front end, configuring the front end and back end, and additional topics such as email settings in Strapi.


1. Introduction to Platform.sh

Short description:

Hello, I'm Chad Carlson, the DevRel team manager at Platform.sh. We provide a platform as a service with support for various frameworks and runtimes.

Hello again, my name is Chad Carlson. I manage the DevRel team at Platform.sh. This is my second Node Congress and my second time doing a workshop; previously, I did a version of this one as well.

My background is primarily in Python, but since joining Platform.sh I've been tasked, for a few years now, with a lot of the exploration of this decoupled topic on our platform. What I've often come across, and what many people do, is use two separate deployment targets for the front end and the back end of an architecture like this, whether that's many, many microservices or just a backend CMS and a frontend framework like Next.js or Gatsby, whatever it may be.

So what makes platform a little bit different is that, first of all, we're a platform as a service provider, and we don't have any restrictions on the types of frameworks that you can deploy on our platform. So that can be Next.js, Gatsby, Nuxt, Express, Remix, but it can also be Django, FastAPI, Drupal, Magento, all these different things that may fall into a number of different runtimes. So we have support for all of them.

2. Deploying Multi-App Projects on PaaS

Short description:

Platform.sh is a platform-as-a-service provider that supports various frameworks and runtimes, such as Next.js, Gatsby, Nuxt, Express, Remix, Django, FastAPI, Drupal, and Magento. It also allows you to add resources to a single environment and deploy copies of containers across each environment. This is especially helpful for security and data obfuscation, as well as for making changes to the front end and back end simultaneously. In this workshop, we will deploy a two-application project called FoodAdvisor, with Next.js as the frontend client and Strapi as the backend headless CMS. We will explore the logic of the back end, write the platform configuration, and deploy the project to production.

If you are interested, we'll probably be hopping back and forth to the documentation while we're doing things. Here are our public docs, with all of the runtimes, all the programming languages, that come with built-in support on our platform.

So as we'll see in a little bit, if I want to define an application that uses Node 18, I can do that while, at the same time, pulling from a backend Drupal CMS, which many of you are probably familiar with, for a content team in the back; I can deploy that with the same `type` key but using PHP 8.2 instead. On top of that language-agnostic support, we also allow you to keep adding resources to a single environment, whether that's production, staging, or development, and deploy copies of those containers across each environment. Earlier I talked about having two different deployment targets for the front and back end, with separated workflows and separated teams making changes to each. On Platform.sh, as we'll see, a single environment in what we call a project, for our repository, can hold a fully deployed set of applications with the front and back end included. Then, as we make changes, creating a new branch creates a new environment, and I get exact copies of both of those app containers in each environment. That isn't necessarily the most important thing all the time for developing new features, but there are two cases I've come across where it becomes especially helpful. One is the security and PII (personally identifying information) angle. If we have a production website that users log into and we want to create development environments for any particular change, it's very easy within this model to sanitize the database to exclude, obfuscate, or totally fake that kind of production data in those environments. It all becomes part of the same workflow.
And the second is if we're doing any kind of migration, adding a different front end, or adding a secondary front end for mobile, whatever the case may be, and we need to make changes not just to the front end or just to the back end; we're really dealing with changes to that contract in the middle, the API itself. We may need to change both apps simultaneously, so it becomes really helpful to have them together in their own set-aside environment.
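The multi-runtime setup described above can be sketched as two app configs living in one project. This is a hedged sketch, assuming illustrative names and paths, not the workshop's actual files; each snippet would live in its own `.platform.app.yaml`:

```yaml
# frontend/.platform.app.yaml (sketch): a Node 18 frontend container
name: frontend
type: "nodejs:18"
---
# backend/.platform.app.yaml (sketch): the same `type` key selects a
# different runtime, e.g. a Drupal-style backend on PHP 8.2
name: backend
type: "php:8.2"
```

Both containers deploy together into every environment of the project, so a new branch gets exact copies of both.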

So that's a quick word on the project and what we're going to be doing today. Our goal here is to deploy a two-application project on Platform.sh, and the project we're going to work on is called FoodAdvisor. I'm sure many of you are familiar with Next.js; that's going to be our frontend client, and it's going to consume data from a backend headless CMS called Strapi. If you haven't heard of it, Strapi simply provides a UI for defining collections, whether that's a blog post or, in this case, the content of the restaurant review website we're deploying: blogs, users, reviews, restaurants, food categories, things like that. Strapi makes it pretty easy to build those content types in the UI on the back end, to then put together a production API that can be consumed by something like Next.js. This is Strapi's main demo, and I claim no credit for developing the applications themselves, but we can take a look at how they're put together to understand the logic, especially of the Strapi back end, and then add the platform configuration around it to deploy this thing to production. That's where we'll be focused today. So I'm going to share a link with all of you: a repository I've set up for us to go through this workshop, which I'll add both to Discord and to the Zoom chat. It's basically the starting point for how we're going to build this demo and then deploy it piece by piece. From that main repo I just shared, use the template option and create a new repository in your own namespace. If you have any questions, I'm pretty much Chad W Carlson on everything; plain Chad Carlson is taken pretty much everywhere. So we're going to go ahead and clone this repository locally.
And while that's going: the way this is structured, a brief description of the workshop is in the main README, license information for the upstream FoodAdvisor demo is included up front, and the structure of what we're going to go through is included there as well. Like I said, the goal here is to talk a little bit about this back end and how it's put together, deploy it, and connect the front end. Then, if we do have some time, we'll start talking about what it would mean to switch to production services and develop new features like adding email support. Most of the steps are inside the help set of subdirectories, and `client` is the frontend code, which we'll get to towards the end. Let's see, is this done? Yes. All right, let's open this in VS Code.

3. Creating a Strapi Application and Managing Content

Short description:

We will start by creating a Strapi application in a subdirectory called API. We will use yarn to generate the project. After navigating to the API subdirectory, we will run the development server scripts to build the back-end UI. From there, we can create an admin account and manage content collections. In this case, we will create an 'articles' collection with fields for title and body. After saving the fields, we can create an article and check the API to ensure it is functioning correctly.

And feel free to, like I said, let me know if something is not very visible; I'll try to keep track of everything. So where do we want to start? If we go into the first subdirectory, we can get ourselves started. As far as what I have on my computer right now: I'm working with Node 18 and the latest version of Yarn, I have Git, and I have an account on GitHub, obviously, to create from that template, as well as the two tools I talked about before.

Let's create a Strapi application. We can use Yarn to do that, with `yarn create strapi-app` in a subdirectory called `api`. We can run that quickly to take a look at what this looks like. Let's see if this works. I say that because there is an error that comes up with some Node apps, which I've noticed on macOS, having to do with the sharp image library. If you hit an error at this stage or otherwise, you can export the environment variable noted in the help steps, and that usually does the trick. That matters, because we'll generate the project twice in this workshop.
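The commands for this step look roughly like the following. This is a sketch following the transcript; the `--quickstart` flag, which selects Strapi's default SQLite setup, is the one used later when the project is regenerated:

```shell
# Scaffold a new Strapi project into ./api
yarn create strapi-app api --quickstart

# Start the development server; the admin UI builds and serves at
# http://localhost:1337
cd api
yarn develop
```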

All right, looks like we're good. If I go into that `api` subdirectory, let's see what we've got. We're in develop; it runs my development server script and builds the backend UI, which we'll be able to see at localhost:1337. This is where we get started. From here, as you'd imagine, create an admin account quickly so we can take a tour. So now I've created an admin account. Anything to do with plugins sits around here; we're not really going to deal with that. Settings for my server are included here. Anything that gets uploaded, images and whatnot, goes here. And here is where I can manage my content for existing collections. In this case, the only thing that comes prebuilt, like in many other frameworks, is the user model. Then I have this space, which I can edit because I'm in development mode, where I can create new collections. So from here, let's create one. A real basic example: I want to create articles, or blogs. Strapi automatically generates where that's going to end up in the API, which I can edit; right now, if I'm going to get a single article it's `article`, otherwise `articles`. I want a couple of different fields for this: a title, with short text, and another field, body, which I definitely want to be required. That's fine for now, so I'm going to save those fields. All right, it's restarted. Now I have Articles within the Content Manager section of my back end. Let's make sure I'm not missing anything, and create something. This is just going to be my first article, and for the body I'll use the lorem ipsum text I included inside the steps, which I use all the time, and save that. All right. We've saved the article, which means we can do things like check out what is going on with the API, now that we have this collection and some content in there.
Like we saw before, it should be at `/api/articles`.

4. Building the Strapi API

Short description:

When building the content, permissions may not be set up correctly. By going into settings and roles, we can enable public access to the API endpoint. After publishing the article, we can retrieve the full article from the endpoint. Strapi allows us to build APIs using basic components. We can create different collection types for the API, such as users, authors, restaurants, reviews, and categories. After setting up the API, we can delete the default directories and generate a new API subdirectory. The content types define the schema for the API, including attributes like title, slug, image, body content, and category relationships.

And the first message we get is that this is not something we are allowed to see. That probably means our permissions aren't set up correctly, which is what I'd expect.

Okay, so when I built my content, there was nothing there for permissions. I need to go into Settings and Roles, and we're making a public request. Inside the Public role, for unauthenticated users, we'll see that new collection defined; there's Article, and I really just want to fetch all of the articles rather than just one. So I can enable public users to hit the `find` action at that endpoint, and save. If I go back to the endpoint and refresh, I now see that I am not forbidden, but there's nothing in this `data` array. So if I go back to the Content Manager, I see that the article is actually still in the draft stage. If I publish the article and hit the endpoint again, I get the full article. It's got an ID already attached to it, my custom attribute fields that I set up, title and body, and some other built-ins like the created and updated timestamps, the size of the page, and so on. So now we're building out an API from some pretty basic things. If I wanted to pull information about the users registered in this back end, I could do the same thing, and we can build out a lot of other collection types to flesh out this API. But this is the basic workings of how Strapi helps you do that.
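A frontend client consumes responses in this shape. As a small illustration (the payload below is a hand-written example matching the collection built above, not captured output), Strapi v4 wraps every entry in `{ id, attributes }` inside a top-level `data` array, so a tiny unwrap helper keeps frontend code tidy:

```javascript
// Example payload in the shape GET /api/articles returns once the
// article is published and the public `find` permission is enabled
const response = {
  data: [
    {
      id: 1,
      attributes: {
        title: "My first article",
        body: "Lorem ipsum dolor sit amet...",
        createdAt: "2023-04-11T00:00:00.000Z",
        updatedAt: "2023-04-11T00:00:00.000Z",
        publishedAt: "2023-04-11T00:00:00.000Z",
      },
    },
  ],
  meta: { pagination: { page: 1, pageSize: 25, pageCount: 1, total: 1 } },
};

// Flatten each { id, attributes } entry into a plain object, which is
// handier to pass into frontend components
function unwrap(payload) {
  return payload.data.map(({ id, attributes }) => ({ id, ...attributes }));
}

const articles = unwrap(response);
console.log(articles[0].title); // "My first article"
```

The same helper works for any collection, since the envelope is identical across Strapi v4 endpoints.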

So now that we've gotten a little familiar with how this works, I'm actually going to remove this `api` subdirectory, and you can do the same. Make sure we're in the right place; I don't need this anymore. OK. We already have an initialized repo, so we can skip most of those steps, since we started from a template. Let's create a new app with the same command we used before: `yarn create strapi-app` called `api`. In this case, the only other flags included are `--quickstart`, which just sets up the same defaults with the base user collection, and `--no-run`, which stops it from starting the development server automatically. What we're going to do after that finishes is build out all of the collections included in this demo application, called FoodAdvisor, a restaurant review site. Like I said, I'm going to have a few collections based not just on blog posts but on the authors of those blog posts and the relationships between them. I'm going to have collections for the restaurants themselves, the reviews left by users, categories for those restaurants, and a few more. So let's start doing that. I tried to include all of the commands that are needed here, so let's go through them. The first one is, oh, right, just cleaning up some things. By default, we get these two directories set up for us, so let's go ahead and delete them, because we're going to move some things in here. That deletes the default `api` and `extensions` subdirectories, but we'll fill them back out. Now, inside of help (let's make this a little bigger), we have this first stage, which deals with generating the API, and what I want to do is move some things in here. So that first command is going to move the API.
Well, I guess I should have shown you that the first time, but we can take a look at it now. If I run that command, it builds out a new `api` subdirectory. This is how Strapi handles the schema for this API. By default the database itself is SQLite, and we'll get into that. Then there's the representation of those content types. This is very similar to the Article type I set up previously: we say this content type has the following definition; it's called `articles`; here are the endpoint variations that end up building the API; and we do want pieces of content in this type to have a draft/publish schedule. There are some other attributes included here beyond the ones I defined before: it has a title; a slug for the URL, which we'll use to pull it into Next.js; an image; the main body content; and a few other blocks that get defined, which we'll take a look at. It has a category, which in this case is a relationship to a different content type.
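A trimmed sketch of what such a Strapi v4 `schema.json` looks like may help here. The field names follow the transcript; the actual FoodAdvisor file has more attributes, so treat this as illustrative:

```json
{
  "kind": "collectionType",
  "collectionName": "articles",
  "info": {
    "singularName": "article",
    "pluralName": "articles",
    "displayName": "Article"
  },
  "options": { "draftAndPublish": true },
  "attributes": {
    "title": { "type": "string", "required": true },
    "slug": { "type": "uid", "targetField": "title" },
    "image": { "type": "media", "multiple": false },
    "body": { "type": "richtext" },
    "category": {
      "type": "relation",
      "relation": "manyToOne",
      "target": "api::category.category",
      "inversedBy": "articles"
    }
  }
}
```

The `relation` block is the part discussed next: `manyToOne` here means each article points at one category, while a category collects many articles.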

5. Setting up the Database and Admin Panel

Short description:

We set up the code part, moved the components and extensions, copied over the scripts to build the database, and ran the seed command. Now we have an initialized SQLite database and a collection of uploaded images. The admin panel is accessible at localhost 1337 with pre-initialized credentials. We can log in and skip the tour.

We'll see that there's a Category collection here, and it has a many-to-one relationship: an article belongs to one category, while a category can contain many articles. That is what this definition does here. It has publishedAt; author is the same idea, many-to-one in the opposite direction, where an author may have written many articles, and that's this attribute here. That's really all that's worth going through; the rest are built-in handlers for this collection. But we can see that by moving what we had in that help directory, I've actually created quite a few collections: article, blog page, category, and so on. We're not done yet, though; that just sets up the code part. Next I'm going to move the components, which we saw referenced in that schema. Those were the blocks we looked at: if I want a block of the website dedicated to showing related articles on a blog post, this is what sets it up, and it can then be pulled into an attribute associated with the blog post we're currently viewing. CTA, FAQ, a lot of these blocks get reused on the back end of the API when setting up the model. We're going to do the same with another folder called extensions. In this folder, the content type is users-permissions. This simply says: any user who is signed in has all the attributes of a normal user, but are they allowed to comment on an article or leave a review on a restaurant, based on their permission type? We include that in the users-permissions extension, which comes with Strapi. And the last few things I'm going to do are copy over the scripts that allow us to actually build out this database. If we look at the structure now, it's basically how the server is configured, a place for migrations, a public directory where we can upload images, and where our APIs are defined.
So if I move this over, I get a seeding script. It's going to pull from a data file, which is also inside this help directory, so we'll take a look at the consequences of running it. We'll go ahead and copy that zip out of the help directory, and now we'll see the zip file here inside of `api`, where Strapi is set up. The last thing we need to do is add a few dependencies up here that will let us actually run this seeding script. All right, that's done, so let's see what the consequences are. Let's go ahead and run seed. Now I should have two things: my initialized SQLite database here in `.tmp`, and a collection of these uploaded images inside `public/uploads`. That really should be all we need, so let's go ahead and run `yarn develop`. Okay. If we again go to localhost:1337, here is the same admin panel we saw before. In this case, however, the database has been initialized with the admin user already. Those credentials are included inside the README file as well: an address like admin@strapidemo.com and a dummy password. So we can go ahead and log in. It looks exactly the same as before, but the tour shows that we don't need to do any of those steps anymore; we've completed them already. So let's skip the tour.

6. Verifying Strapi Initialization and API Setup

Short description:

We have initialized the media library with all the necessary images and predefined attributes for different collections. The content manager already has data filled out for these collection types, such as restaurants, categories, and reviews. We can leverage this data to build the front end. Strapi has streamlined the process by setting up permissions and providing an API for us to add new restaurants and collections. We can now proceed to build the back end and consume it with any front end of our choice.

Verify that all of our images are here inside our media library, including what the final site is going to look like. If I try to create additional collections, I see that I already have them: Article, Category, Page, Place, and so on, with all of the attributes predefined based on what we had in those subdirectories. And in the Content Manager, I have data already filled out for these collection types. If I go to Restaurants and open the first one, it has a description, opening hours, an address, some social networks, categories, and some images. We're going to leverage these things, ultimately, to build up that front end, but everything is initialized how we want. And you can verify that those permissions have been set up already for you: I can go to localhost at `/api/articles` again; I think the other ones here are categories of restaurants, built out for us; I can go into the restaurants themselves; another one is reviews. You get the idea. So with some seeding, we've streamlined essentially the same thing we did at first: we built a few collection types inside the Strapi code base and filled out a SQLite database for now. And since the permissions have been set appropriately, we already have this API available. We can add new restaurants, add new collections, whatever we want to do, and use them to build out this back end, to be consumed by whatever front end we want. That's what Strapi has taken care of for us so far.
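The spot checks from this step can be run against the local dev server. The endpoint paths below follow the collections named in the transcript; with the seeded database, each should return a populated `data` array:

```shell
# With `yarn develop` running, hit each seeded, publicly readable collection
curl http://localhost:1337/api/articles
curl http://localhost:1337/api/categories
curl http://localhost:1337/api/restaurants
curl http://localhost:1337/api/reviews
```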

7. Deploying the Project and Integrating with GitHub

Short description:

Let's save the changes and deploy the API subdirectory. Platform.sh is a platform-as-a-service provider that supports various runtimes. We'll create a new project and choose a location, then integrate our GitHub repository with the project for continuous integration and deployment.

So, let's go back to that generate-API README. Oh hey, we've got 23 people already. Hi everyone; that's more than my last check. Again, feel free to leave a message in the Discord or in the main Zoom chat, and I'll try to respond if you have any questions or maybe want me to slow down.

Okay, so let's go ahead and commit these changes, because next we'll actually deploy this thing. Let's do that now. So that should be updated here. Yep, here is the `api` subdirectory we just created. I'm trying to think if there's anything worth sticking on; I don't think so. Let's get to deploying this thing.

Okay, so the next thing we want to do is deploy this. Like I said, Platform.sh is a platform-as-a-service provider; it's not specific to any runtime. Today we're talking about JavaScript, but it could just as well be any other language used to build that back end. Right now we're going to use the platform to deploy this one JavaScript application initially. So let's do that. I'm going to create a new project with this command. You probably won't have the same initial choice of organization to put it in, but if you do, go ahead and just select your own username. This is going to be Node Congress 23. Then I have a choice of where in the world I want to put this server; usually we tell people to put it closest to where you think visitors will request your site from, or closest to you if that makes more sense. I'm choosing a location in Charleston, since I'm in Florida. I do want to set it as a remote for my repository, and I am okay with paying $0. All right, the platform is going to put together this project for us; this will be the ultimate target we deploy these applications to. We'll see, once this is finished, that I now have a project in the us4 region, which, like I said, is in Charleston, United States. It has a project ID associated with it, which we'll see matched here, and a Git URL. We don't actually need that Git URL, because I now have a remote set up called `platform` that uses it; we're going to continue pushing to GitHub, but it has been set up for us. All right, let's take a look at the URL that's output from creating the project; right-click on the tab and open it. This will change in a minute; this is just a hash of my username. There we go. It may be a little small on your screen, but this is the management console, the main UI for dealing with projects and deployments on our platform.
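A sketch of the project-creation step with the CLI follows. The exact prompts can vary by CLI version; the title and region here just mirror what's chosen in the transcript:

```shell
# Create a new Platform.sh project interactively; the CLI prompts for an
# organization, a title ("Node Congress 23"), a region (e.g. one near
# Charleston, US), a plan, and whether to set the new project as a Git
# remote for the local repository
platform project:create
```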
In the console, you have the option to organize projects into organizations. That may map to a GitHub organization for you, or, if you're an agency putting together projects for a lot of different clients, it could very well be that, say, the PHP team versus the Next.js or Express team each exist in their own organization; it provides a little extra flexibility for organizing your projects. And that same project ID, if at some point you need it, is also available from this three-dot dropdown in the upper right. So if I go to this projects link: obviously, my job is deploying these things all the time, so I have many, many projects in this space. You will likely only have one, unless you had a chance to play around with it before we started. So there's a singular project here for us to work from. All right. So what is getting set up right here? What we want to do is take our GitHub repository and integrate this project with it, so it becomes a mirroring remote of that GitHub repo. We can then continue going through the review process, pull requests, and GitHub Actions for our CI, and let this really be the CD aspect of how we deal with both production and our deployments of those revisions. Right now I have my default branch selected as main, because that matches my current repo. So really we just need to set up this integration. Let's do that.

8. GitHub Integration and Environment Setup

Short description:

To integrate GitHub with Platform.sh, generate a GitHub token using the provided link. Use the CLI command to set up the integration, linking the token to the repository with the project ID. Answer a few questions about how Platform.sh should handle the integration: building pull requests, draft pull requests, post-merge behavior, and data cloning. Platform.sh takes care of handling data between environments, eliminating the need for a separate process. The integration will synchronize with GitHub and create an environment associated with the branch.

We're in the 02-deploy section of this help directory. All right. What I need to do first is generate a GitHub token, which I can do with this link here; I have included it here as well. Call the token whatever you want. If you're keeping the repository you created from the template I shared in chat, we need at least the `public_repo` and `admin:repo_hook` permissions for this integration to work, and that should be enough to start sending things to the platform.

Inside this README is also what you need if it's a private repo. So go ahead and use this link to create a new token. I'm not going to do that on screen, because who wants a token out there? And we are going to use this command to set up the integration. Let's enter it, and then I'll enter it secretly to actually run it. We're going to use the CLI to add an integration of type GitHub, using the token you just created, to link the repository to the project ID. We can do that in parts right now, if you will.

So let's see here: `platform integration:add`, with type GitHub. Like I said, if I go to that project, I can get the project ID right here. Then there should be a repository, which I can get from here; I don't need to include the full URL, because we have the type there, I just need the username and the repo. And I think that was it: type, token, repo, project. All right, so now, again, entered secretly. Here is what it's going to ask: a few questions about how you want the platform to handle the integration. The first one is, do I want to build pull requests? I do. Draft pull requests? Sure. Build post-merge? No, I want the state of the current commit. Clone data? We'll see a little bit about the consequences of this, in the way that environments relate a branch or a pull request to an environment on the platform. Part of configuring these app containers is that the platform takes care of a lot of how data is handled between environments, so we don't need a separate process for, say, ensuring that a staging environment of our API contains the same data and endpoints as production; a lot of that inheritance is done for you. So I'm going to say true, because I want to look at that behavior. I want to know about all the branches on GitHub, but I want to get rid of them when they're deleted on GitHub, so we want to synchronize. So we're going to continue, and if I go back to the project, we should see some things happening. Actually, we see a failure. Before, I was on my project view; now I'm on the environment view for the repository, which has only one branch, main, so this is now the environment associated with that branch.
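Assembled, the integration command from this step looks roughly like the following. The flag names follow the Platform.sh CLI; the angle-bracket placeholders are yours to fill in and are left unexpanded here:

```shell
# Link the Platform.sh project to the GitHub repository; pushes and pull
# requests on GitHub will then create and update environments on Platform.sh
platform integration:add \
  --type github \
  --project <PROJECT_ID> \
  --token <GITHUB_TOKEN> \
  --repository <username>/<repository>
```

After this runs, the CLI walks through the build/clone/prune questions described above.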

9. Creating Platform Files and Configuring Routes

Short description:

If I look at the GitHub activity, I'll see an error indicating the missing .platform.app.yaml and routes.yaml files. We need these files to specify the programming language, build process, deployment, and startup configuration. Let's create a new branch and write these files. We'll create api/.platform.app.yaml and .platform/routes.yaml. The routes configuration defines the routes, including redirect rules. The {default} placeholder in the config file is used for generated URLs. The app configuration specifies that all requests should go to an app called strapi, which uses a JavaScript app container and Node 18.

If I go ahead and look at that activity where GitHub pushed my project, I'll see an error that looks like this, which is what we expect. We just haven't done anything; Platform has no idea what to do with it. Excuse me. So what's missing? Right now it's saying I don't have a .platform.app.yaml file. So I don't have anything there that's saying: what programming language is it? How should it be built? How should it be deployed? How should it start? So we need that file. And then I'm also missing a routes.yaml file, which is going to say: when somebody requests this environment, where does it go? Which application container does it go to? So we definitely need those two files.

So let's write them. Let's go ahead and create a new branch. We're going to call this platformify. All right. So where am I? On api. So I want to create a new file at api/.platform.app.yaml. And I actually want to create a new shared directory at the root called .platform. Actually, we did that already when we set up the integration, so if for some reason it didn't get created for you, go and create it. And I want to create a new file in that directory called routes.yaml. OK, so now we've got two files that we need to do something with: .platform.app.yaml and routes.yaml. So if we go back to the readme, I believe that I've included some steps on what to do here. I'm actually going to duplicate this; I don't want to keep jumping around. All right. So we created the project integration. Here we go. So, again, this is what your structure should look like at this point: .platform.app.yaml nested within the api directory of the Strapi app we just created, and routes.yaml within this .platform directory. All right. So first, if we go into routes.yaml, this is what our config looks like. The idea here is that I have two top-level keys for those main routes. If we start at the bottom, actually: I want requests that go to the api subdomain to go straight to an application we'll define in the other file, called strapi. That's really all this is doing. And then I want a paired route with that upstream one that's going to handle a redirect. So in this case, anything that goes to www gets redirected to the same route that we defined up here. And one thing to take note of in this config file is this {default} placeholder that we see three times in this file. This really just says: for our main branch, for example, when we finally do put this together and configure it, Platform.sh is going to generate a URL for that environment. It's going to be something like main-biglonghash.us-4 (which is the region that we chose) .platformsh.site, or something.
But for any new branch that we create from then on out, a similar generated URL is going to be created. What this config does, what it's smart enough to figure out, is that when I create a new branch where I'm going to focus on just Strapi or, eventually, the front end, I want this same configuration to apply, and so this {default} just becomes the place that that generated URL gets substituted into. Cool, so that's our routes configuration.
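Assembled from the description above, the .platform/routes.yaml being discussed looks something like the following sketch (the exact route keys come from the workshop readme; treat this as an illustration of the shape, not the verbatim file):

```yaml
# .platform/routes.yaml (sketch)
# "{default}" is the placeholder Platform.sh fills in with the generated
# URL of whichever environment (branch or pull request) is being deployed.
"https://api.{default}/":
    type: upstream
    upstream: "strapi:http"
    id: api

# Paired route: anything hitting the www subdomain is redirected
# to the upstream route defined above.
"https://www.api.{default}/":
    type: redirect
    to: "https://api.{default}/"
```

Note the {default} placeholder appearing three times, as described, and the `id: api` key, which comes back later when the front end looks up the back end's URL.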

Now, if I go ahead and go to my app config, I don't have anything here, but I have a snippet included in the readme. So let's take a look at what it is and what it's doing. Like I mentioned in the routes.yaml file, I want all requests to go to an app called strapi; we'll see that key matched here on the name attribute. This file's purpose, as I may have mentioned, is saying: I need an app container running JavaScript to build this back-end application. That's what it's doing. So I want it to be called strapi (it has to be a unique name inside the environment), and I want it to use Node 18. Platform will make certain assumptions, if you'd like it to, about how you'd like applications built and deployed.

10. Building and Deploying the Backend

Short description:

The build flavor attribute in the platform configuration file allows us to specify how the application should be built. By setting the build flavor to 'none', we override the default behavior of running 'npm install' and use yarn instead. This ensures that we have the latest version of yarn available. We also want to build a production version of the backend using yarn. The goal is to create a reusable build image that can be used when creating new environments. The build image contains the static state of the repository at a specific point in time. Data inheritance is handled differently from code inheritance. While the build image is read-only, we can still write to the database and upload images. To replicate the environment-specific configurations from the local '.env' file, we create a new file called '.environment' and copy in the contents from the readme.

And so one of those places is this build flavor. If you take a look at the docs, go to JavaScript. Or is it not there? Bear with me; can't spell either. Here we go. So what we see here is this build flavor set of attributes. What Platform will do automatically, if this were not here, is detect my package setup and run npm install when this thing is built. But here we want to use yarn rather than npm, so build flavor none essentially overrides that assumption. I do want to use the latest version of yarn and have that available. I want to build a production version of this backend, and in this case I'm building it using yarn. I want my build to be repeatable across environments, and that's going to be important when we start doing branching things: I really want everything to be as static as possible until I explicitly make a difference. So I use this frozen-lockfile flag to ensure that. And then I'm going to build the backend; locally we were just running the development server, but we're really going to do a build and start here on the environment.

Okay, so let's build the back-end Strapi app. In this case, for the start command, I had to provide the path, and I didn't want to hide any exports, but that's really all it's doing: it's running yarn start as my start command. Okay, so that'll set up my build image for this Strapi app. And the goal of this file, besides just building and deploying the app, is to create a reusable build image. So when I create a new environment, I can keep that build image in the creation of the new environment, because that's sort of our whole goal at Platform: in the act of branching, to create true staging environments automatically. So that's two parts. One is how we build images; we'll see here that a lot of these assumptions are made just so that Platform can take care of moving build images around rather than using an external service. The other is how data is handled. In this case we're not dealing with any services yet; we're just dealing with the database, which, like I said before, is going to sit in this .tmp directory, which is reflected here. And we want data to be inherited as well. But data is different than code; we want it to be writable at runtime, as a major example, and that conflicts with our repeatable-builds objective. So what this portion of the file is saying is: everything above here is going to be a static build image associated with the state of this repo at this point in time, and everything here is stuff that can be affected at runtime. In this case, I want to be able to upload images and I want to be able to change the database. So how they inherit across environments is going to be a little bit different. But once this thing is deployed, the whole file system is read-only except for the things that we explicitly define here. In this case, public/uploads is where we had the images, so let's keep that writable, and then let's still be able to write to the database.
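Putting the pieces just described together, the api/.platform.app.yaml under discussion looks roughly like this. The keys are Platform.sh's real schema; the specific commands and mount paths are reconstructed from the narration, so check them against the workshop readme:

```yaml
# api/.platform.app.yaml (sketch)
name: strapi                 # must match the upstream name in routes.yaml
type: "nodejs:18"

build:
    flavor: none             # override the default npm install behavior

dependencies:
    nodejs:
        yarn: "*"            # have the latest yarn available at build time

hooks:
    build: |
        yarn --frozen-lockfile   # repeatable installs across environments
        yarn build               # production build of the Strapi back end

web:
    commands:
        start: "yarn start"

# The deployed file system is read-only except for these mounts.
mounts:
    "/.tmp":
        source: local
        source_path: database    # the SQLite database lives here
    "/public/uploads":
        source: local
        source_path: uploads     # uploaded images stay writable
```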

All right, so that is almost it. The other thing that we need: inside the api subdirectory, two files were created when we created that project, and this is really the important one. This .env file said: we want to serve from this port, this is the database we want to use, and it set up some initial keys, salts, and secrets that are used internally in the application. We need to replicate this inside the environment by setting them as proper environment variables, because otherwise these aren't going to be available; this .env file is, for good reason, not committed. So let's create a new file here called .environment and copy in what's included in that readme.

11. Configuring Environment and Uploading Data

Short description:

This part covers the process of configuring the environment and creating a pull request. It also discusses the deployment and failure due to missing data. The solution is to upload the data using additional CLI commands.

So what does this look like? It has some generic app keys that we could probably make more secure if we'd like, but for right now this line stays the same, and I'm able to pull from the environment and do some things. So I've got two salts and two secrets that are going to pull from variables that are already exposed by Platform inside the environment. One is this entropy value, which is going to be unique to this project: just a big long hash that I can use for these three values. And then another one is associated with the current commit; it's for the tree aspect of Git rather than the commit itself, but it's another random value that's actually going to change between deployments, and it's something we can use here. Otherwise, I need to set two things: that I want to continue to use SQLite, and that I want it to stay at the same location it existed at locally. And I think we should be good here, yeah.
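The .environment file being read through could look something like this sketch. PLATFORM_PROJECT_ENTROPY and PLATFORM_TREE_ID are real variables Platform.sh exposes; the Strapi variable names on the left follow Strapi's usual .env conventions, but the readme's exact file may differ:

```bash
# api/.environment (sketch) -- sourced at runtime inside the app container.
# Generic app keys; these could be made more secure later.
export APP_KEYS="toBeModified1,toBeModified2"

# Salts and secrets pulled from variables Platform.sh already exposes.
# PLATFORM_PROJECT_ENTROPY: a random hash unique to the project.
export API_TOKEN_SALT="$PLATFORM_PROJECT_ENTROPY"
export ADMIN_JWT_SECRET="$PLATFORM_PROJECT_ENTROPY"
export JWT_SECRET="$PLATFORM_PROJECT_ENTROPY"

# PLATFORM_TREE_ID: tied to the Git tree, so it changes between deployments.
export TRANSFER_TOKEN_SALT="$PLATFORM_TREE_ID"

# Keep SQLite, at the same location it had locally.
export DATABASE_CLIENT="sqlite"
export DATABASE_FILENAME=".tmp/data.db"
```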

Okay. So let's go back here. We'll see that I should have these three files changed. "Let's platformify." Oh, it doesn't like that; I can't use an exclamation point in my commit messages. Wait, what happened? It's not even in my history. That's fine. So, on what was my branch, platformify: push origin platformify. All right. So first let's see what happens here. I've added my configuration. I go back to my project; I don't see any changes there, but I do see that new branch available for a pull request. So what we see here is what's actually called an inactive environment. The branch is tracked, but it doesn't activate; it doesn't try to build and deploy that configuration I just did. But I can create the pull request, based off how we set up the integration: "Adds app, routes, and variable config." All right, so let's create that pull request. We should see, in this case, since I created the token, that it's my face that shows up here, but I now should be triggering that environment, which we'll see here. Let's see. It may take a second for this to show up on the branch diagram. There it is. I can go ahead and filter out the inactive environments and go to the pull request environment. So let's give that a second, but I can take a look at the logs too while this is going. We see I'm building an application, strapi, and I'm building with Node 18 at this unique cache key that characterizes the current state of the environment. And what it'll do is go through what I have placed in that config file. In this case, it's going to install dependencies for a bit. And if you notice inside here, I have the output of my build, but I also have this section that says Platform is going to provision Let's Encrypt certificates automatically for this environment. All right, there we go. So the environment should have finished its deployment now. And we have a failure. So why do we have a failure here? I'm guessing it's because we don't have any data.
So, we are going to upload that data. I have some additional commands from the CLI. They're going to take what we've already set up here, inside of public/uploads and .tmp. So, I am going to run those commands now. I'm going to upload everything from public/uploads to that mount, public/uploads.
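The CLI commands being run are along these lines (the environment name comes from the walkthrough's PR1; mount names must match the ones defined in the app configuration):

```bash
# Sketch: push local data into the environment's writable mounts.
# Upload the locally seeded images to the public/uploads mount.
platform mount:upload --environment pr-1 --mount public/uploads --source ./public/uploads
# Do the same for the SQLite database in .tmp.
platform mount:upload --environment pr-1 --mount .tmp --source ./.tmp
```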

12. Setting up Environment and Deploying to Production

Short description:

We set up the environment with the necessary images and database. After uploading the data, we verified its presence. When encountering an issue, we investigated the logs and made necessary adjustments. We then merged the changes into production, deactivated the environment, and pushed the new commit to the main branch.

Oh, right. But I do need to include the environment in this case. So we'll set up all those images now. And then we'll do the same for that database that we have locally. I forgot again. So in this case, environment PR1. And upload that file. And we'll see that it's done that. I can also SSH into that container on the PR1 environment, and we'll see what I have. So if I go to, let's say, .tmp: there's my database, and there are all my images. Let's get out of there.

What I want to do now is redeploy that environment. Now that I've made those changes, yes I do. There we go. Now that I have the data uploaded, everything looks fine; I should be able to verify all that data is still here. Hmm, something's going on here. What is going on? I want to investigate. I can SSH into the environment and check the logs. What is the issue? Let's get out of here and try to just run the seed script from scratch again. All right, it looks like it's having trouble finding things. So I am going to commit that instead. All right, yeah, let's try committing this file, because I'm not really sure why it's having such an issue here. Oh, there we go; it just needed the commit, I guess. So I don't think I need to do that, then. Let's go back and find our credentials, just because I don't remember what they are. And it doesn't look like our images are connected correctly, so I am going to run the seed script. Let's see. Hmm. Try re-uploading these images, maybe. Ah, okay, I just needed to re-upload them. So here is all the data that we had locally, and all of the collection types that we're now testing on this PR1 environment associated with this pull request. So we can update this status here: it's passed our one check. Obviously, we could include many more tests that run beforehand and in response to the deployed site. For example, we could do a check like: how many articles do we have on the articles endpoint? But everything looks good for what we want to test here. I want to actually merge this into production, because we've done what we wanted to do here. So let's go ahead and merge those changes. And I don't need that branch anymore. And we'll see that I'm now deleting and deactivating that environment.
And if I go back to the project view, I now have a new commit being pushed to main once those two events of shutting down that previous environment have finished. And so we can get this set up on our production environment before we move on to the next step of actually putting a front end in front of this.

So we've managed to get this set up. We have the data like we did before, but we tested what we need to do here. One thing to point out before I do that is if we look at that activity for that push to main, we'll see the three commits, the two regular and the merge commits and we'll see that same building the application line we saw before.

13. Configuration and Build Image Reuse

Short description:

The purpose of this configuration is to define what needs to be done, what variables are required at build time, and what actions are taken afterwards. In the pull request environment, a build image is created with this configuration. When the pull request is merged, the production state is updated to match the unique state of the pull request environment. As there are no changes and a build image is already associated with this state, it can be reused.

But we'll actually see this line, which, like I had mentioned: the purpose of this configuration is to clearly define what needs to be done, what variables need to be present at build time, and what actually gets done after the fact (things like the start command, and we'll see a deploy hook in a second) to create a build image. And so in that isolated environment for the pull request, we created a build image with all this configuration. And then when we merged it, the state of production is now at that unique state that previously was set aside on that pull request environment, associated with this hash. And since there are no changes and we already have a build image associated with this value that's now in production, I can reuse that build image.

14. Deploying and Verifying the Production Environment

Short description:

If I make a revision to my site and promote it to production, even on a Friday, I can feel pretty confident that nothing's going to break because I've already tested this build image in as close to production conditions as possible. I need to upload the database and images again. We have our production environments and all the migrated data. On our production site, we can check what articles and restaurants are available. Now we want to move on to the next part, which is adding a front end to this thing.

So if you take a look at our website or documentation, you'll see things that say "deploy Friday." This is one reason why we write that. Essentially, if I make a revision to my site and promote it to production, even on a Friday, I can feel pretty confident that nothing's going to break, because I've already tested this build image in as close to production-like conditions as our workflow makes possible. We don't really see the greatest example of that now, because we need to fix the data, but like I said, once we figure out what to do here, this build image is reused.

What do I need to do here? I need to upload those things again, which I can do by copying those steps. We can go back to the deploy section and replicate them. I'm going to do the database first and see if that helps the image issue from before. I should already be on main, so I'm going to move the database in. Yes. And then I want to do the same thing with my images. And let's go ahead and give this a redeploy for good measure. So this is done; let's go ahead and see how we're doing, and see if we need to rerun that like we did on the PR environment.

So we have the API, we have our production site, and we do. So let's go ahead and re-upload the database. I think I did have a commit, so let's just do something simple: let's add another variable, rebuild is true, and go ahead and push that. A rebuild commit, just to help our migration here. So we'll rebuild that, and that should allow us to pick up that newly migrated data. All right, cool. So now we have our production environment, and we have all of our migrated data. Let's double-check quickly that we don't need to re-upload the images like we did last time; we will do that if we need to. So it was admin at the strapidemo.com address, and our media looks okay. All right. So we are now fully migrated. We have the repo that we're working from. We just merged the pull request that migrates this local app onto Platform. Here's our production environment. We only have one environment visible now, because we deleted branches, and we don't have any open pull requests on our production app. We have that API that we had initially set up locally. Okay. I think it's picking up the wrong URL. Yeah. And so we have that here. So on our production site, we can check what articles are available, we can check what restaurants; all the things we did locally are now on our production site. Great. So now we want to move on to the next part, which is adding a front end to this thing. So let's go ahead and create that.

15. Creating the Frontend and Deploying to Platform.sh

Short description:

We create a decouple branch and work on the client sub-directory, which contains a Next.js front-end provided by the upstream repo called food advisor client. The front-end is set up with pages for handling slugs, lists of restaurants, and individual restaurants. We want to pull data from the Strapi backend API. After installing the necessary dependencies and running Yarn dev, we can browse the restaurants and their associated reviews, images, opening hours, and descriptions. We also have a list of blog articles. Our goal is to deploy this application to Platform.sh and set up a workflow that works across all environments.

So let's go ahead and create that. We want a decouple branch, and in this case what we're going to be working from is this client subdirectory that's already included. This is a Next.js front end provided by the upstream repo, called food advisor client. It's going to use Next.js, and it is set up with a few pages automatically: the entry definition of my application, how I want to handle individual slugs, how I want to handle a list of restaurants, and individual restaurants. And all of this is going to be based off of how I pull from the Strapi back-end API. So let's try to do that. If I go back to my terminal, I'm going to go into the client subdirectory, and I'm still using yarn, so I'm going to stay with this approach of not rewriting the lock file. All right. So I have that installed, and I should be able to run yarn dev at this point. And I'm loading from this development environment. Let's see. This is at 3000. All right. I should get a 404 here, because I haven't defined anything yet. Right? Yeah, 3000. All right. So here is our 404. What we want to do is tell Next to pull from this environment. We have that environment here, from that URL dropdown, at least for the main environment. So we can go into, again, the client subdirectory, to this .env.development. We can see that this is using a test URL that I was using when I set up the repo. So I can go ahead and replace that with the current back end for the project. And the only change that I really need to make to what gets copied from Platform is to remove the trailing slash. I may need to reload this site. Oh, no. All right.
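The local change described here amounts to a single line; the URL below is a placeholder for whatever back-end URL your own main environment was generated with (note the trailing slash removed):

```bash
# client/.env.development (sketch) -- placeholder URL, no trailing slash.
NEXT_PUBLIC_API_URL=https://api.main-abc123.us-4.platformsh.site
```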
So, now that I've given it the right URL, and because the permissions for the restaurants and blog articles and everything have been set to public, I can pull from the back end on my production environment and build this front-end site using Next. In this case, I have a webpage from which I can browse. Does this link not work? Let's see. Oh, there we go. I can browse all the restaurants that are included in that API. I can look at an individual restaurant and see the reviews that have been associated with it, all the images that have been uploaded for this restaurant, the opening hours attribute, the description, and then the reviews themselves. So, this is our front-end site. I also have a list of blog articles. So it's a pretty basic restaurant review site that's pulling from our production API. So now the task is not just deploying this application to Platform.sh, but doing so in a way that sets up our future workflow so that this continues to work across all environments. This .env.development file is going to be useful when we're working locally, but not so much when this thing is actually deployed. So let's go ahead and close this for a moment. We are on the decouple branch. Let's go ahead and SSH into the production environment for a moment. All right. So now I'm in the container. Here is my file system; this is the Strapi side. In the same way that I said that Platform comes with a few built-in environment variables, I was mentioning this one, which is just associated with the project itself. I think the other one I mentioned was this one.

16. Automatic Configuration of Back End Location

Short description:

You can automatically fill out the location of the back end by leveraging the built-in environment variable called PLATFORM_ROUTES. This variable is base64 encoded and contains information about the current environment, including the location of the back end. By decoding the base64 value and using a tool like jq, you can extract the URL of the back end. This URL can then be used to build the front end and can be easily configured for different environments. In the client subdirectory, there is a file called '.environment' that defines the variable value for the back-end URL. This file is sourced at runtime and uses the PLATFORM_ROUTES variable to decode the base64 value and extract the back-end URL. The extracted URL is then used in the front-end application.

You get it. This is the tree ID, which can also be used for secrets, maybe ones that need to be changed more frequently than the entropy. There are actually quite a few: the environment type, the entropy we just looked at, the project ID. Where is the one I want? A useful thing would be: how can I automatically fill out this value of the location of the back end? One way we can do that is by leveraging this built-in environment variable called PLATFORM_ROUTES, which is essentially a variable form of this configuration right here, where the {default} placeholder has been filled out with the URL of the current environment.

So, we'll see that this thing is base64 encoded. If I take PLATFORM_ROUTES, base64-decode it, and pipe it through a built-in tool called jq that's in the container, I can see that the information about the current environment is actually available here. So, for my production environment, I'm able to pull the location of my back end, so that I can not just pull from this back end to build the front end, but do something very similar across multiple environments.
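To make the decode step concrete, here is a self-contained sketch. The PLATFORM_ROUTES value below is fabricated (a single api route) and base64-encoded the same way the real variable is; a python3 one-liner stands in for jq so the sketch runs without extra tooling:

```shell
# Fabricate a sample PLATFORM_ROUTES value (the real one is provided by Platform.sh).
PLATFORM_ROUTES=$(printf '%s' '{"https://api.main-abc123.us-4.platformsh.site/":{"type":"upstream","upstream":"strapi","id":"api"}}' | base64 | tr -d '\n')
export PLATFORM_ROUTES

# Decode it and pull out the URL of the route whose id is "api",
# dropping the trailing slash like the .environment file does.
echo "$PLATFORM_ROUTES" | base64 --decode | python3 -c '
import json, sys
routes = json.load(sys.stdin)
url = next(u for u, r in routes.items() if r.get("id") == "api")
print(url.rstrip("/"))
'
# prints: https://api.main-abc123.us-4.platformsh.site
```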

So if you take a look inside of the client subdirectory that came in the repo, you should see a file already there, just like the back end's, called .environment. And what it's going to do is exactly that. It's going to define this variable value (sorry, I'm going small for the moment), NEXT_PUBLIC_API_URL, that was in this .env.development file. And it's going to do that in this .environment file, which gets sourced at runtime, pulling from PLATFORM_ROUTES and doing the same base64 decode and jq. And what we're going to look for in that big long object is something with the id "api", which you may not have noticed inside of this routes.yaml configuration that's right here, on my upstream route definition.
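The lookup being described boils down to roughly this in client/.environment (a sketch; jq is available in the container, and the id "api" matches the upstream route definition in routes.yaml):

```bash
# client/.environment (sketch) -- sourced at runtime.
# Decode PLATFORM_ROUTES, find the route whose id is "api", and strip
# the trailing slash before handing the URL to Next.js.
export NEXT_PUBLIC_API_URL=$(
    echo "$PLATFORM_ROUTES" \
    | base64 --decode \
    | jq -r 'to_entries[] | select(.value.id == "api") | .key' \
    | sed 's:/*$::'
)
```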

17. Configuring Front End and Back End

Short description:

I can pull the current back-end location, and we have a .environment file for the front end. Inside the client subdirectory, there is an ignored .yaml file that, once renamed, will serve as the app configuration file.

So I can take a moment here to test this. I go ahead and just copy this value and paste it in here. The only other thing that this file is doing is the same as I did before: just dropping that trailing slash, essentially. But whatever the case, with this line here in this .environment file, I can pull the current back-end location. So we have that: a .environment file for the front end that we just built locally. And then also inside this client subdirectory, I have an ignored .yaml file that's going to take the place of my app configuration file, so we can rename it as such. So, if we take a look like we did before, what is this doing?

18. Configuring Front End Build and Deployment

Short description:

We want to have two different dependencies available at build time: yarn and PM2. PM2 will be used as a process manager to ensure smooth recovery when the front-end application is running. To handle Next.js's memory requirements, we delay the front-end build until later in the process. The start command pulls the application name from package.json and uses PM2 to start the application on the PORT environment variable. We make the required directories writable at runtime and update the routes to point the root domain at the front end. After creating a new pull request, we can test the migration and verify the status. The front end builds in an isolated space, and the data from the production environment is backed up and inherited by the new environment. The front end is then built, and we can test the communication between the front end and back end before promoting it to production.

I want two different dependencies available at build time. Again, I don't want to use npm; I want to use yarn. So here's yarn, and I also want to use PM2 as a sort of process manager, so that between deployments, when one app has deployed and one has not yet, I can recover fairly smoothly while this front-end application is running. So we're going to use PM2 to run this front-end server. This right here is a bit of configuration that really just tries to handle the fact that Next.js needs quite a bit of memory when it's building. We're going to have to delay the time that this front-end application is built until a little later in the process than we normally would. So we're not going to build Next at build time; we're actually going to build it effectively at runtime, because these app containers deploy in parallel along this build-deploy pipeline.

All I have to say is: I have a build hook that installs my dependencies using yarn. And then I'm actually going to delay way down into a post-deploy hook to build that front-end site. It's going to leverage that back-end URL that gets defined in .environment, and it's going to build the site when we know for sure that the back end has already been deployed and is running. We can take a look at the start command for this Next site, in which we'll see that the name of the application is pulled out of package.json. I'm going to use PM2 to then start that application on another built-in environment variable, PORT. And right before I do all that, I'm going to do another yarn build. The start command runs after the build phase, at deploy time, so essentially this is just going to continue to try to build the application until that back end is available. And then, same as before, Next requires these two directories, so we're going to make them writable at runtime. PM2 also needs somewhere to stash its caching, so we're going to make that writable at runtime too. And so this becomes our configuration for running the front-end site alongside that back end; they're going to run in a single environment. The only thing left to define is this right here, and we can actually just copy it. So I have a set of routes set up for the back end. I don't need an ID. I want it to be at the main, root domain, so I'm going to get rid of those subdomains. And I want it to point not to strapi but to the Next.js app, using the name that we gave it here. And before I push, let me double-check that I didn't leave anything out. Nope, nope. Okay, cool. So: application configuration, routes configuration, and then something that's going to allow us to pull in that back-end URL.
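Assembled from this walkthrough, the client app's configuration looks roughly like the sketch below. The keys are Platform.sh's real schema, but the exact commands, app name, and mount paths are reconstructions from the narration, so defer to the workshop's own .yaml file:

```yaml
# client/.platform.app.yaml (sketch)
name: nextjs                 # the name the root-domain route points to
type: "nodejs:18"

build:
    flavor: none

dependencies:
    nodejs:
        yarn: "*"
        pm2: "*"             # process manager for smoother recovery between deploys

hooks:
    build: |
        yarn --frozen-lockfile   # install only; defer the memory-hungry Next.js build
    post_deploy: |
        yarn build               # build once the back end is deployed and running

web:
    commands:
        # Keep trying the build until the back end is reachable, then serve
        # the app under PM2 on Platform.sh's built-in PORT variable.
        start: "yarn build && pm2 start yarn --no-daemon -- start -p $PORT"

mounts:
    "/.next":                    # Next.js build output, writable at runtime
        source: local
        source_path: next
    "/.pm2":                     # PM2 needs a writable home for its cache
        source: local
        source_path: pm2
```

The routes file then gains a root-domain route whose upstream is this app's name, alongside the existing api subdomain route for Strapi.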
All right, so we go back and we can create a new pull request for that change, and we can test this migration of adding the new front end. We should get a new environment, where we'll see the same status code that we saw before on merge for that backend Strapi app. In this case, the same hash was discovered for Strapi on Node 18, so it's going to reuse that entire build image. Just by creating the new pull request, we're already starting to build an exact replica of this production API in an isolated space. This front end is going to build, and then we can fiddle with making sure the communication between the two works before promoting the whole thing to production. The other thing to point out: in the same way that the build image was detected when creating the new environment of our production API, the same goes for all the data. I'm creating a new environment; it has a cloned build image, but it's also going to take a backup of the data that's currently in production. If you remember, in this case that's the SQLite database inside of .tmp, plus all of our uploaded images. Those are going to have a snapshot taken, and then they're going to be inherited by this environment for us to work with while we test out this new front end. And here we're going to see the front end building for the first time. All right. Looks like this thing built.

19. Deploying Front-end and Additional Topics

Short description:

We have deployed the front-end in a new environment for testing. The service graph shows the composition of the environment, including the router container and the Strapi JS and Next.js app containers. We will review the changes and merge them. The deployment should be faster this time due to the reuse of build images. We will cover additional topics in the remaining time, including setting up email support in Strapi.

All right, it made all of our slugs for restaurants, so let's take a look at that. We should now have two URLs. Like I said, this is a copy of the production backend, so I should be able to hit the articles endpoint and get the same data as before, even though we are not in production right now. And then I should have... here we go: the site that we set up locally, now running in this isolated space. You can see the URL here; PR2 is the new environment where we can now test those changes in isolation. If we look at this service graph in a little more detail, this is the composition of our environment: a single router container, where all that configuration went for how requests should be directed, either via the API subdomain to the backend or to our actual front end, and then the two application containers, the Strapi app and the Next.js app, existing in parallel in the one environment. Whereas if we go back to main, the only things we had there were the router and Strapi containers. This is our new feature for this pull request's environment: adding that front end. And for the most part, everything looks good. There are things we may want to fiddle with; we want to take this through review to see if Next.js is really the direction we want to stick with, or whether we want to do pieces of it in another front-end framework; we want other stakeholders to take a look at what we've done. But for the most part, I think it looks pretty good. So, as we've seen before, we could review those changes properly, but we can just revisit them here. We could git-ignore this development file because it's not super relevant. But whatever the case, this looks good as far as the behavior that we wanted, so we're going to go ahead and merge this just as we did before and delete the environment.
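The router configuration described here might be sketched as a routes.yaml roughly like the following, assuming the two apps are named `strapi` and `nextjs` (both names are assumptions):

```yaml
# routes.yaml -- sketch; app names "nextjs" and "strapi" are assumed.
"https://{default}/":
  type: upstream
  upstream: "nextjs:http"    # the front end serves the root domain

"https://api.{default}/":
  type: upstream
  upstream: "strapi:http"    # the backend lives on the api subdomain
```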
And like before, this should be a little bit faster than it was, because we're going to be able to reuse two separate build images, but it will potentially take a moment for this deployment to finish. We are in our last 17 minutes, so I don't want to totally wrap up yet, but I do want to cover some things that I didn't have enough time to really address in this part of the workshop. If you're curious and want to contact me or someone on our team, I know Marin is with me here in the chats. Inside of this same repo, if you were able to move it over to your own namespace, there's one more set of directions for something you can do, as well as another potential change that you could set up. Let's see where we're at. Let's go for the smaller change first. Strapi by default does not set up email for things like "forgot my password". (My video's still off; let's turn that back on.) A very simple way of setting that up, which we can see inside the configuration, is to go into the .environment file. Oh no, this isn't actually here, so I'm going to open it as an issue: adding email support. What we need to do is modify the .environment file so that it's slightly different; it needs to pull a few more variables. So the first change is edits to .environment, and it will look like this. Let me uncomment; I think GitHub knows how to do that, so let me do it here first. All right, so we're going to update that .environment file to do something that looks like this. Platform.sh gives you the SMTP host automatically inside the environment, so we can assign it to the variable that Strapi expects, and we can set the port that's provided by the platform. Then we can do something similar to getting the backend URL: in this case, rather than looking for an ID, we look for what's considered the primary route, which is just the route that goes to the root domain, and that will be our front-end application. So it's going to grab that URL and use it to set things like the default "from" and "reply-to" variables that Strapi expects.
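A minimal sketch of what those .environment additions could look like. `PLATFORM_SMTP_HOST` and `PLATFORM_ROUTES` are variables Platform.sh injects automatically; the `SMTP_*` and `MAIL_*` names are hypothetical and should match whatever your Strapi configuration reads.

```bash
# Additions to .environment -- sketch; SMTP_*/MAIL_* names are hypothetical.

# Platform.sh injects the SMTP proxy host automatically; it listens on port 25.
export SMTP_HOST="$PLATFORM_SMTP_HOST"
export SMTP_PORT=25

# PLATFORM_ROUTES is base64-encoded JSON keyed by URL; the entry flagged
# "primary": true is the route serving the root domain, i.e. the front end.
export PRIMARY_URL="$(echo "$PLATFORM_ROUTES" | base64 --decode \
  | jq -r 'to_entries[] | select(.value.primary == true) | .key')"

# Derive "from"/"reply-to" defaults from that domain.
export MAIL_DEFAULT_FROM="no-reply@$(echo "$PRIMARY_URL" | sed -E 's#https?://##; s#/$##')"
export MAIL_DEFAULT_REPLY_TO="$MAIL_DEFAULT_FROM"
```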

20. Configuring Plugins and Email Settings

Short description:

We need to add a plugins configuration file and configure email settings for outgoing emails. In a development environment, you need to manually enable outgoing emails, which are handled by the built-in SendGrid proxy.

And then the other thing that we want to do, if I open this quickly, and I think it's going to be in this last... no, this one, not this one... I guess none of them. We need a plugins config: another file, config/plugins.js. Let's go ahead and look at that quickly. It's going to look like this: it pulls those environment variables from the environment and uses a provider called Nodemailer to set up our email. I include that here as something interesting to explore, because if we take a look at... well, let's double check this first. Yeah, our migration to production went just fine. What I was saying is that if you go into our project settings, or sorry, our environment settings for the main environment, outgoing emails are enabled by default, which makes sense. But if you start playing with this on a development environment, that's not the case; you need to turn it on. So that's typically something else we'll do in a workshop like this: turn on that flag, configure email with the built-in SendGrid proxy, and start testing things like resetting usernames and passwords for accounts.
