Deploying a decoupled restaurant review site to production with Strapi and Platform.sh


Node.js has become an increasingly popular runtime for building and deploying backend APIs. In a world of legacy CMSs adopting decoupled implementations, plenty of frameworks have sprung up that classify themselves as "headless" CMSs: designed from the start to provide an easy way to personalize content models, administer permissions and authentication, and serve a content API quickly.


Strapi, one of the leaders in this space, has recently released v4 of the framework, and with Platform.sh it can be deployed alongside a number of frontends within the same project, drastically simplifying the development experience of working with decoupled sites. In this workshop, we'll deploy a Strapi demo application that has been configured to serve a restaurant review site.


Piece by piece, you will add database services, tests, and frontends, all within the safety of isolated development environments. At the end, each participant will have a functioning decoupled site and a greater understanding of working with decoupled sites in production.

Shedrack Akintayo
Chad Carlson
134 min
15 Feb, 2022


Video Summary and Transcription

The workshop covers the use of Platform.sh and Strapi for building and deploying decoupled applications. It explores setting up projects, configuring databases, middleware, and plugins. The workshop also covers building and deploying the application, syncing data, and exploring the site. It highlights the benefits of Platform.sh, such as isolated development environments and easy deployment of multi-app architectures. The workshop concludes with a discussion on handling drafts and implementing live previews.


1. Introduction to Platform.sh and Strapi

Short description:

Hey, I'm Chad Carlson from Platform.sh. We'll be using Strapi's official demo app, called FoodAdvisor, for our demo today. It contains a Strapi application in the API subdirectory and a Next.js app in the client subdirectory. We'll start by logging in and then creating a project.

Hey, I'm Chad Carlson from Platform.sh. So, who here has heard of Platform.sh before? If you have, go ahead and leave a message. If you haven't, no big deal. I'll just take the silence as nobody's heard of this before. Heard of, but not used. Okay. Great. How about Strapi? Does anybody already have some familiarity with Strapi? We'll do a little bit of an overview when we get to that section. Yeah. Heard of, but not used.

I'm not surprised. Okay. All right. Created a sample. Cool. All right. Maybe the best thing to do here is to start, for those of you who have heard of Strapi or are unfamiliar with it: what we're going to be using for our demo today is based off of the official demo app that Strapi provides, called FoodAdvisor. What this application contains is a Strapi application within this API subdirectory and a Next.js app inside of client. Inside of API, we're going to have a few commands. You're going to seed some sample data initially into a SQLite database, and that's going to set up our backend. It's going to contain restaurants, authors, a few blog posts, and the categories and tags that go along with those collections. The frontend app is just a simple Next.js app that's set up to consume all that data from the Strapi API and present a frontend that looks sort of like this screenshot. We will get to that at the end.

So what I'd like to start with — let me try and zoom this a little bit — is to go to the terminal you're going to be working from. When we interact with the Platform.sh CLI, everything's gonna have this platform prefix here. We're going to initially do login, and all that's going to do is generate a temporary SSH key for the CLI to use, based off of a temporary token on your account. And so... Oh, I already got ahead of myself. Sorry, guys. Once you have installed the CLI, go ahead and click this link right here. This will take you to the page to register an account for the workshop, and it'll take you to a screen to create a new project. Don't worry about doing that right now; we'll do that through the CLI here. But once you've created that account, you can log in with the email that you used and authorize the CLI to use your account. We'll see now that I'm logged in. There's my email if you want to get in touch with me. All right. So first we're gonna go ahead and get this demo repo. Oops. There is one SQL migration dump file in here, so it might take a second. It should be a directory called node-congress. In here, just like I described before in the original FoodAdvisor demo app, are an API and a client subdirectory. All the instructions, if you want to follow along with the repo, are in this docs subdirectory under instructions. So I'm gonna go ahead and use the CLI here to create a project. When I do that, I can name it node-congress, and I'm gonna give it a region.
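Roughly, the CLI session just described boils down to the commands below. This is an illustrative sketch: the transcript doesn't give the repository address verbatim, so `<repo-url>` is a placeholder, and `project:create` walks you through the prompts interactively.

```shell
# Illustrative Platform.sh CLI session for this step.
platform login                        # opens a browser to authorize the CLI

git clone <repo-url> node-congress    # fetch the FoodAdvisor-based demo
cd node-congress

platform project:create               # interactive: title, region, plan, branch
```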

2. Setting Up Project and Exploring Repository

Short description:

We'll pick a development project, default for environments, storage, and a main default branch. The project is effectively the equivalent of our repo and detects configuration files to provision infrastructure. We have an API and a client subdirectory. We start with Strapi, setting up the data source and configuring the SQLite database. Strapi is a CMS that works as a backend for any frontend and can also work standalone. The API folder is structured with source and config folders for specific configurations.

Just pick something that's close to you. Most of these are gonna be defaults on your side, but we're gonna pick a development project, defaults for environments and storage, and main as the default branch. Then it'll ask you: do you want to make that project the remote for the repo we're working with? Go ahead and say yes, then we'll confirm it, and Platform.sh will start building that project for you.

Welcome to the chat. Hey Abandeep. No, at this point, we have just the 30-day free trial. We're in kind of a development phase of trying to get a free tier for our platform, but right now it's the free trial. But if you contact me and you want to continue playing with it, either on the Slack channel or on my email, which I can attach here, it's not difficult for us to extend the trial so you can continue playing with it. So we got a URL here for what we call the Management Console. I can open a new tab, paste it into the address bar, and we'll be taken to the individual project. All of this is going to be our deployment area for the repository. It doesn't matter if we integrate a repository on GitHub or we push something locally, like we're going to do in this workshop: the project is effectively the equivalent of our repo, and from it, Platform.sh is going to detect a few configuration files to automatically provision infrastructure based off of what we commit, and to handle some of the inheritance that makes Platform.sh do what it does. That includes access permissions, environment variables, and the same infrastructure and the data within it, across all of the environments that roughly correspond to each branch of our repo. So we'll see here I have a main environment that is inactive, because I haven't pushed anything to it for my main branch, and that's the only branch I have on this repo. Now that we have that set up, let's take a look at the repository that we just cloned.

All right, like I showed before, we have an API and a client subdirectory. I just want to make sure I've got everything covered here. What we're going to do first is start off with Strapi. We want to make sure that we have our data source set up before we do anything else. Like I said, that's within the API subdirectory here, for those of you not familiar with Strapi. From v4 onward, it has a config subdirectory, and what we will mostly be concerned with is this database.js file, which in this case says all of our data is going to come from a SQLite database in the .tmp directory, which we haven't created yet. Then there's going to be a source directory, which is where components are defined. I talked previously about how we're effectively making a restaurant review app to deploy, so there will be content types like restaurants, blog posts, categories, or reviews. Within our components, we have restaurants — sorry, we have a blog page, an article that defines an individual content type for one of those entities I described, categories, some global pages, locations, restaurants, a restaurant main page, and a review. So this repo already has all those collections defined for you ahead of time. What we're gonna do is seed that SQLite database in .tmp. So I'm gonna go ahead and go into API. Shedrack, is there anything else that you wanted to say about the structure of Strapi while I continue doing this?

Yeah, sure, sure. Is this when I share my screen? Or, I think, okay, you need to do this. So the way Strapi is structured — Strapi is actually... Hold on, let me just pull this up here. Before Shedrack gets started: what I'm gonna do is install dependencies, run the seed command, and run this API locally. For those of you who are on macOS, check the repo, because there is this catch that I know I hit sometimes on my computer. If you go ahead and copy this command, you'll notice that it will halt, so you'll just need to export this environment variable and then everything should be fine for installing the dependencies. Sorry, Shedrack. Okay, yeah, thanks. So Strapi is basically like any CMS that you can think of, but this time it's running on Node. The way it's built, it's built to work as the backend — a headless CMS, basically — for whatever frontend you might need to connect it with. And Strapi also has the power to work standalone. So if you need just a standalone CMS, Strapi can do that too. If you check the API folder, you would see that it's separated into folders named based on the work they are doing, but the most important parts are the source folder and the config folder. And okay, there's also the database folder, but that's for when you run database migrations. So: the source folder and the config folder. Now, the config folder can consist of whatever type of configuration you want specifically for your Strapi app: how your admin should look, what API tokens you need — that's where it should be.

3. API, Crontax, and Database Configuration

Short description:

The API in Strapi 4 is separated into several files for better separation of concerns. The cron tasks file is used for scheduling tasks like updating articles at specific times. The database.js file is used for configuring the database, with support for SQLite, MySQL, and Postgres. It connects to the local SQLite database for now, but can be modified to connect to other databases. In the workshop, we'll create a database relationship and connect it with Strapi.

Then the API — the API folder is just basically the way Strapi 4 is laid out. Before Strapi 4, these pieces didn't exist as separate files; in Strapi 4 they are trying to do separation of concerns so that your admin file or your database file does not get choked up. So if you've used Strapi 3 before, you would see that all of these particular files look new, but they were actually part of the server.js and the admin.js files. What Strapi did with version 4 is separate all of those functions into several files, so that if you need to make any single change, you can easily make it. Basically, separation of concerns. The next thing is cron tasks. If you need a cron — basically, to maybe update your articles at a particular time of the day, or publish an article at a particular time of the day — that's what your cron tasks file is for. And if you look at this, one is already defined here to publish at a particular time of the day. Then, database.js is your best friend, generally. I know everybody uses a CMS because they need a content management system, yeah? You need somewhere you can store your content. So, the database.js file consists of the configuration for your database. Configuration for SQLite is different from the way you would do it with Postgres and the way you would do it with MongoDB. Strapi 4 does not have support for MongoDB yet, but you can still use SQLite, MySQL, and Postgres with Strapi. So what this is basically doing is trying to connect to your running database. Platform.sh gives you services — database as a service, basically — so to connect to that database, you can modify this particular file.
But since we've not yet pushed to Platform.sh, this particular file is just connecting to your local SQLite database. When we push, as we go further in the workshop — I think Ivan was asking a question — you'll see where we create a database relationship and how we connect it with Strapi. So that's what the database.js file is for.

4. Middleware, Plugins, and Server Configuration

Short description:

Middleware is for when you need extra middleware; then there are plugins. Strapi has support for installing several plugins, including custom ones. The server.js file is important, as it runs the Strapi server on a specified port. The config folder is crucial for configuring the application. The source folder contains various files for the admin dashboard and API. The workshop will deploy the data on a frontend and connect it to Strapi. The goal is to create a decoupled application and deploy it on Platform.sh.

Then middleware is basically for if you need an extra middleware, guys. Then plugins. The plugins file, generally: Strapi has support for installing several plugins, and you can create your own custom plugin. The way Strapi is built, you can install plugins from the admin dashboard, but you can also build your own personal plugins.

Yeah, so if you have any extra special requirements, any special behavior you need to create for yourself, the plugins.js file is where you probably want to do that. Now, like in every Node.js app, the server.js is one of the most important parts, because we need to run a Node.js server. The server.js is basically setting up port 1337 for our host to listen on. It's just a simple server config that has the host and a port, generally. That's entirely what the config folder is for, which is one of the most important parts of the app.

Then the source folder, which is where a lot of the magic goes on: the UI, et cetera, for the admin dashboard, and the extensions your admin dashboard needs. If you bootstrap a plain Strapi app, a lot of the content you see here would actually not be there, because it's bare. But since we already have a built application, you'll see that it's really separated into various parts: admin, API, components, extensions, and your index, which is your entry point. It's really much easier. So, Chad has now just started the Strapi server — if you check the repo, there's an instruction there on how to get to this particular point. You can just run yarn develop inside the API folder, then for your admin account, create it with the credentials shown: the email admin@example.com, and the password you can use is what is being displayed there. And after that, I would like you, Chad, to go to this particular link, the articles — the /api/articles. Can you go there, Chad?

Yeah, sure. The only thing I was gonna add is I ran the yarn seed command, and what it did is initialize this SQLite database, and it gave me all of our images for our restaurant review app in this public uploads. Yeah. As for these credentials, just so you know, you can change these later. You should probably use these. It's not a big deal if you don't, but later on in the demo, these will be credentials that you'll have to use, and you can update them. So I just kept them consistent here, so you wouldn't have to remember a couple of different passwords. Like I said, we can change them later. I created an admin user with those creds, and that gave me the dashboard, and you will see that this has the same structure reflected that was in the source API subdirectory here, and our seed command has initialized all of this initial data to go along with all these different collection types, including the images that were a part of that seed. And so what Shedrack is asking me to do is — Strapi provides this dashboard essentially on top of the database, which at this point is SQLite. But we'll switch it to a different service, and the whole point here is we're gonna begin serving an API on top of this. So you'll see inside the instructions, I have this right here. So this is our local server at 1337. So here's the dashboard, but instead I'm just gonna go straight to the articles collection type, and we'll see four of that seeded data there. Sorry, go ahead, Shedrack. You want to go? You want me to keep going? Oh, yeah, yeah, sure. So this, as you're seeing here, is what we got from seeding the data; if you navigate to .tmp/data.db — the data.db file is basically what has given us these particular articles.
So what we are going to do, essentially — the entire concept of this workshop — is we are going to take this data, this particular article data we are seeing here, the images, the files, and we're going to deploy it, then serve it on a frontend. That's basically the entire idea. We're going to add the frontend and connect it to Strapi and see this particular data displayed in a very interactive form, then we're going to take the entire app and deploy it on Platform.sh as a decoupled application. That's basically the entire goal of this workshop. All right, so let's go into Platform.sh a little bit. We have a local server. Great, now we just need to get this deployed. So we have already... Let's make sure everything looks okay here. Sweet. So every application is going to have at least three configuration files on Platform.sh, associated with three different container types. The first one is: how do we want traffic directed to our application? In this case, I want all traffic to go to an application container directly, which in this case we'll name in a second — strapi, for this backend — at this placeholder domain. What's interesting about this placeholder is that when we create a branch in a little bit, we'll get a development environment to go along with that branch, and a URL is just going to be generated and substituted for this default placeholder. So that's what that default means there. We're also gonna get an ID. We'll see in a little bit where that becomes handy; essentially, it associates this route with an ID.

5. Configuration Overview and Project Commit

Short description:

The upstream name must match the name of the application in the .platform.app.yaml file. The configuration file for the Platform.sh app is described. The application container named strapi contains Node.js 12. It overrides the default behavior of the Node.js container and defines a build hook to install dependencies using yarn and build the application. Platform.sh provides assurance by tying infrastructure and builds to commit hashes and slugs. Mounts are defined for write access to the file system at runtime. Stop the local server, commit the changes, and push to the project.

Yeah, should I? Yeah, yeah. So I just needed to point out the upstream: the name of your upstream has to match the name of your application in your .platform.app.yaml file. So whatever name you are giving the upstream here, it has to match whatever name you're going to give in your .platform.app.yaml configuration file. You have to keep that in check, because I've seen a lot of people — even myself, using Platform.sh for the first time — getting errors because we omitted that particular detail. So you have to keep that in mind going forward. So we'll see that here. Just remember this label here, strapi. Then we also have one other route definition, which is a redirect from the www subdomain. And all of this together is going to end up at a generated URL at this api subdomain on our environment.
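As a sketch, the routes file being described might look roughly like this. The key point is that the `upstream` value references the same name (`strapi`) declared in the app's `.platform.app.yaml`; treat the exact domains and the `id` value as illustrative assumptions rather than the workshop repo's verbatim file.

```yaml
# .platform/routes.yaml (sketch)
"https://api.{default}/":
  type: upstream
  # "strapi" must match the name: key in the app's .platform.app.yaml
  upstream: "strapi:http"
  id: "api"

"https://www.api.{default}/":
  type: redirect
  to: "https://api.{default}/"
```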

The next configuration file is the .platform.app.yaml file that Shedrack just described. In this case, I'm going to have an application container named strapi, so that traffic from our router container goes to the application. It's going to contain within it Node.js 12. Like I said before, this is a major version: between deployments, you don't have to define minor and patch versions of the runtime language of the app container — we will deploy them for you as they are released. There are stages of an application container: build stages and deploy stages; we're all familiar with these. In here, what I have defined is that I'm going to override some of the default behavior of a Node.js container, which in this case is npm. From that, I'm going to define what's called a build hook, and in that build hook I'm going to say: install my dependencies using yarn, which I installed at the beginning, and build the application. Then I can define a start command. In this case, I don't have any deployment steps, which would be in a deploy hook — we'll see that in a little bit — but we are going to have a start command. In this case, I'm going to pass the production environment variable and start the application. The only other relevant thing worth looking at here is that Platform.sh tries to provide a bit of assurance when you deploy applications. When I said that your project is effectively a repository, that means that all of the logic that goes into infrastructure and individual builds is tied to commit hashes — to the slugs associated with an individual commit. What gets built on top of that is a rule: in the build hook, you can write to the file system, but as soon as the build hook is finished, everything is read-only.
When we do it that way, we're going to be able to do some interesting things. Namely, we can associate all of our infrastructure configuration and all the code in our repository with a single commit ID. Then we can reuse the slug created from the commit hash and move that build around wherever we want it to go. In this case, when we create a branch, we'll get the exact same deployment that we have on production. When we do a merge, we'll be able to take what we had in the development environment and effectively move it to production without any fear that, in doing so, we're going to ruin our production site. Because of that, you may actually need to write to the file system at runtime. If you do, we have it defined in these mounts here. We just showed in the local situation that we have this .tmp directory where the database is, so we can define a mount here that allows us to continue to write to that database at runtime. If it's not defined explicitly within your mounts, it's not going to have write access. That's the kind of assurance that gives you repeatable builds and a great deal of security when you actually deploy this thing. Same thing for the uploads that have our images in them, because at runtime we may want to add more images — when we create new articles, for example.
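Pulling the pieces just described together, an app config along these lines would express the name, runtime, build hook, start command, and writable mounts. This is a hedged sketch under the assumptions stated in the transcript (Node.js 12, yarn, a `.tmp` SQLite mount, and `public/uploads` for images); the workshop repo's actual file may differ in details like `disk` size or `source_path` names.

```yaml
# api/.platform.app.yaml (sketch; exact keys in the workshop repo may differ)
name: strapi          # must match the upstream in routes.yaml
type: "nodejs:12"     # major version only; minor/patch updates are managed

dependencies:
  nodejs:
    yarn: "*"         # make yarn available during the build

hooks:
  # Build hook: the only stage with write access to the file system.
  build: |
    yarn install
    yarn build

web:
  commands:
    # Start command: run Strapi in production mode.
    start: "NODE_ENV=production yarn start"

# Writable paths at runtime; everything else is read-only after build.
mounts:
  ".tmp":
    source: local
    source_path: "tmp"
  "public/uploads":
    source: local
    source_path: "uploads"

disk: 1024
```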

I think that's a decent overview of our configuration. Let's go ahead and stop the local server, and we're going to commit to the project. In this case, you should have a remote defined as platform. I believe I have all the commands inside the instructions listed with platform as the remote. So I'm going to commit the changes — I guess I didn't make any changes as I was going through this. All right, and then we're going to do git push platform main and push this up to the project. So go ahead and do that yourself.

6. Building and Deploying the Application

Short description:

We're building the application for the first time on the backend, which will take a moment. We can initialize the environment from the commit and see the activity of the push. The build will be a combination of application code, configuration files, routes, and services. We're doing the same installation locally and gaining new participants. You can find the repository link in the chat or Discourse. We'll deploy the Strapi app and migrate the data we generated locally. We'll upload the database and photos directories to their mounts and refresh the page to create a user again.

This is going to take just a minute as we build this application for the first time on the backend. So go ahead and do it yourself locally until we can all catch up to the same point. We'll see that the environment is getting initialized from that commit just now. Based off of our configuration file, we'll see the activity of the push: all the commits I have on the repo, the application name, the runtime that I'm using, and then the tail end of the hash that's used to identify this specific build. In this case, the build will be a combination of application code and everything that we have in those Platform.sh configuration files — the app.yaml file, routes, and services — all lumped into one hash. We'll see when we branch that we can actually reuse this hash and save some of our time on builds.

Right now we're doing the same installation locally, and we're doing compilation. I do see that since we started, we've gained a couple new people. So hi, welcome. I'm Chad Carlson from Platform.sh, with Shedrack Akintayo. If you go ahead — hi, buddy — hey, if you go and check out the chat in Zoom, or preferably inside Discourse, it'll have a link to the repository that we're working with, which is platformersh-workshops.com slash Node Congress, and then some starting steps, and then you can get to this same page, the instructions page that we're working from here. Some of these deployments will take a little bit, so I'm sure you guys can catch up if you come a little late.

Okay. Not sure where that happened. Okay. All right, first deploy. So what we have here is the finished activity for the commit for our main environment, the generated URL, and we're actually going to see a picture of what our cluster looks like right now. One thing that I didn't describe initially is that there's a third configuration file, called a services.yaml file. This is where we're going to define our MySQL database. But right now we don't have anything, so we don't see it in the cluster — we just see a Node.js app for Strapi, and we have the router container inside of our cluster. We go to our generated URL — the TLS handshake didn't go through there — and we see that we have our deployed Strapi app that we had locally. If I go to admin, we'll see the same login screen that we had before. Before we do that, we want to migrate some of the data that we generated before, which we have locally. We can do that by taking those directories that we defined as having write access — those mounts — and uploading the documents that we seeded locally. In this case, let's do this one first: I'm going to upload my local API's .tmp — everything inside it, which is that SQLite database — to its mount. Go ahead and continue. Then I'm going to do the same thing for the photos, and we'll see all those getting uploaded. So we'll go ahead and refresh this and create our user again. There's that, in case you didn't have it in front of you again. Okay. Anybody else getting the same error on your side? I'll tell you what the issue is in a second; I just need to track it down. Let's see what the error is. Yeah, I saw it before, but I thought I'd fixed it.
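The mount uploads just described map to the CLI's `mount:upload` command. The paths and the `-A strapi` app name below assume the demo's layout as described in the transcript, so treat these as illustrative rather than verbatim workshop commands.

```shell
# From the repo root: copy the locally seeded SQLite database and the
# uploaded images into the environment's writable mounts.
platform mount:upload -A strapi --mount .tmp --source ./api/.tmp
platform mount:upload -A strapi --mount public/uploads --source ./api/public/uploads
```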

7. Creating New Environment and Making Changes

Short description:

We're going to take what we had initially locally and put it on our production environment, but we're going to switch that out with an actual production service next. If fixing the SQLite setup doesn't fix the issue, we're just going to move on, because what Platform.sh is really useful for is using production services. That's what we're gonna add now, and we're gonna forgo the SQLite stuff. In the next step, we're going to create a new environment. By copying this command with the Platform.sh CLI, it's going to create a new branch locally and make an exact copy of what we had on production on this new development environment. We'll see exactly what I said before: if we look back at our initial activity, we'll see this commit hash on the production environment.

Got it in. Got it in. What did you put in the environment? We don't need that much. I don't think you need to link them yet. I got stuck on it still, so that's why I was putting it there. No, it's not... I think it's just one of these. Finished.

Is it just reload? While Chad is still working on this, does anybody have any questions they'd like to ask? You can just drop them in the Zoom chat or in Discord, really anywhere you want. And is anyone stuck somewhere? Well done, Chad. We're probably all stuck on the same step here. I'll figure out what the issue is in this configuration. Great, it's deploying successfully. Yeah. This is really annoying. Sure. So check the console again, check the logs, check the uploads again. This might just be a copy error. When we were working on this, did you encounter this particular error? Only one other time. Right, I'm going to try and sort out those issues in just one second. Anyway, anybody have any questions?

All right. I'm just doing this to check whether it can fix this initial step on your side, but if fixing the SQLite setup doesn't fix the issue, I'm just going to move on, because that's not the purpose of this demo anyway. This was just supposed to take what we had initially locally and put it on our production environment, and we're going to switch that out with an actual production service here next. Oh, it's not gonna do it. We're gonna move on. Okay. So, what we don't see here is that SQLite database on production, but that's okay. Oh, my camera is still off. Because what Platform.sh is really useful for is using production services. So that's what we're gonna add now, and we're gonna forgo the SQLite stuff. In the next step — whoops, next step here — we're going to create a new environment. By copying this command with the Platform.sh CLI, it's going to create a new branch locally, and it's going to make an exact copy of what we had on production. It hasn't deployed yet, but it's going to put it on this new development environment, and we'll see exactly what I said before: if we look back at our initial activity, we'll see this commit hash on the production environment. Because we haven't changed anything with an additional commit in creating this new environment — we just did a branch — we're going to reuse the build from the production environment. In this case, we're going to forgo all the dependency installation that we had before, and everything's just going to get moved to this new space. So if I go back to the project level of the management console, we'll see that the environment is about to be created. Now, while that's going on, let's go ahead and make some changes.
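The environment-creation step just described boils down to one CLI command. The branch name `develop` below is an arbitrary example, not the name used in the workshop.

```shell
# Create a new branch plus a matching development environment that
# inherits production's data and reuses its build.
platform branch develop
```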
So like Shedrack and I talked about before, in Strapi we're going to this API subdirectory.

8. Database Configuration and Relationship Definition

Short description:

We need to change the database.js file to save data to an Oracle MySQL database. We'll copy the database mysql.js file from the docs subdirectory and paste it into the database.js file. The platformsh-config library is used alongside service definitions, which expose credentials through environment variables. The library decodes the Base64-encoded PLATFORM_RELATIONSHIPS JSON object to access those credentials. Let's define the rest of the configuration to see how it all fits together.

One of the most important parts of our configuration is this database.js file. And so right now it's loading from this SQLite database, or at least it is locally. What we want to do is change this to save our data to an Oracle MySQL database. So if you go into the docs subdirectory of the repo, you will find this file, database mysql.js. So go ahead and copy that file, go back to the database.js file and paste it there. So let's look at what's going on here. First thing is it's going to load a library that's already been installed in the repo called platformsh-config. So when we define a service, we're going to put that in the services.yaml file in a second. And that's going to do things like give it a name and tell us how much disk space it has. And then we're going to place a definition called a relationship in our application definition. Now once we do that, that's actually going to expose all the credentials to access that service container inside the application container. And it does that through environment variables. In this case, there will be a Base64-encoded JSON object called PLATFORM_RELATIONSHIPS. And what this library does is decode it. In this case, it's a module for Node.js, so that inside of our application we can easily access those credentials and use them in our application. So from here, let's go ahead and define the rest of it so that we can see how this all fits together.
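As a rough illustration of what that decoding amounts to (the relationship name and credential fields below are made-up examples shaped like Platform.sh's variable, not the library's actual internals):

```javascript
// Minimal sketch of what platformsh-config does under the hood:
// PLATFORM_RELATIONSHIPS is a base64-encoded JSON object keyed by
// relationship name, each holding an array of credential endpoints.
function decodeRelationships(encoded) {
  return JSON.parse(Buffer.from(encoded, 'base64').toString('utf8'));
}

// Illustrative payload (not real credentials):
const sample = Buffer.from(JSON.stringify({
  mysqldatabase: [
    { host: 'database.internal', port: 3306, username: 'user', password: '', path: 'main' },
  ],
})).toString('base64');

const cred = decodeRelationships(sample).mysqldatabase[0];
console.log(`${cred.host}:${cred.port}/${cred.path}`);
// -> database.internal:3306/main
```

In practice you would let the platformsh-config library do this decoding for you and just read the credentials it hands back.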

9. MySQL Container Configuration and Data Migration

Short description:

I have defined a relationship, a new service, and I've told Strapi how to connect to it. I included a dump from a service container called Food Advisor SQL. I used a SQLite to MySQL conversion library for Python to convert the database. We have a drop-down menu that provides URLs and commands to clone the repo. We continue to use the SQLite database until runtime. The service graph now includes a router container, an application container, and the SQL database. We need to do a migration to add data to the service container. We run the platform SQL command to upload the dump file and migrate the restaurant data to the updates environment's Strapi app.

Right now, I have an empty services.yaml file. But if I instead go into my docs, we'll see that I have a new services.yaml file that I can place there in this .platform hidden directory. And here's our configuration to get a new MySQL container. I give it a name, I say the type, which version I want, and how much disk is associated with it. And that's it. As soon as we push this, we get a whole new change in our infrastructure on this updates environment. So keep in mind this name for the service, dbmysql.
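A minimal sketch of what that services.yaml could look like; the version number and disk size here are assumptions, so check the copy in the repo's docs directory for the exact values:

```yaml
# Hypothetical .platform/services.yaml sketch. The service name follows
# the talk; the version and disk figures are assumed, not confirmed.
dbmysql:
    type: oracle-mysql:8.0
    disk: 1024
```

The key under which the service is defined is the name the application configuration will refer to when declaring its relationship.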

I need a way to, what's the right way to put it, allow access to this service container from within the application container. Nothing else in the world will be able to access this other than the application container. So again, if I go inside my docs directory, and if I go to the Strapi app.yaml file, we'll see something that's pretty close to what we had before but with a few changes. So I'm going to go back into API, to my .platform.app.yaml file, and paste it in there. In this case, we have pretty much the same build hook. The only real change that's happened here is the definition of a new relationship, which in this case is going to be pointing at the dbmysql service container that has the following type. And we're going to name the way we interact with that container through this relationship: mysqldatabase. And so with that relationship name, we'll see that that's what we have here, mysqldatabase as the relationship name. This block right here is relevant because, in order to make sure that these builds are usable across environments and do things like I described before, of saving that build ID on branch and merge events, one of the other restrictions, other than no write access at runtime, is that these containers can't talk to each other during build. And so what we do is we temporarily tell Strapi: hey, continue to use the SQLite configuration while you're building. And then once we get into a deploy state on Platform.sh, which is what this if statement here checks, we're going to load that relationship and grab the credentials from the environment variables there, and then use those to connect to the database. And that's what we'll see inside of our activity when we push here: we're going to use the default build hook and the default SQLite database until those service containers are available, and then switch over as soon as they are.
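The relationship addition being described could be sketched like this; the names follow the talk, and `mysql` is the conventional endpoint name for MySQL service containers:

```yaml
# Sketch of the relevant addition to api/.platform.app.yaml.
# Format is <relationship name>: "<service name>:<endpoint>".
relationships:
    mysqldatabase: "dbmysql:mysql"
```

At build time this relationship isn't populated yet, which is why the build hook keeps the SQLite configuration and the switch to MySQL only happens once the deploy hook runs.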
So I have defined a relationship and a new service, and I've told Strapi how to connect to it, and that should be it. So what I'm going to do is, I should be on the updates branch there, I should have changed these three files, and I'm going to go ahead and click finish. So we'll go ahead and push those changes up. And what I have included inside of this repo is a dump for one of the service containers, called food advisor SQL. I took the SQLite database that seeds automatically, and then I used a pretty interesting tool that I found recently, a SQLite to MySQL conversion library for Python. I uploaded it to a service container, because it does its conversion by interacting with the actual live service container, and then I downloaded that dump file. There should be some documentation on how I did that inside this example repo as well. It'll just take a minute to deploy those changes; hopefully you have the same going on on your side. If there's anything interesting to put in here: I guess you saw, while I was trying to troubleshoot the SQLite issue, me SSHing into the application container. You're able to do that for every single environment, so here I'm on the updates environment. It looks like we're still in build, so I won't be able to do it until this thing deploys, but we have these drop-downs here that give the URL where our routes configuration is directing traffic, a raw SSH link, and then commands to actually clone the repo from the project itself. What you saw was me just using the Platform CLI, just using platform ssh, and then, especially at the end of this when we have multiple application containers going, you can specify which app you want to troubleshoot and SSH into. So you see here we have that if statement that I was talking about during the build step, where we're just going to continue to use the SQLite database until this thing hits runtime. And our service graph hasn't updated yet. All right, now we see that it has.
We now have a router container, the application container, and the SQL database. And I believe initially these TLS certificates are not liking my environment names. This thing's going to fail. That's because we have a service container, but we don't have any data in it, so it doesn't know what to do with it. So we're going to do a migration. So we're going to copy this command, and we'll see that this is not in this folder, but if we go into API, we should… Oh, sorry, not API. We're going to go into docs. And then in docs, we're going to add that food advisor SQL dump. So we are on the updates branch in the updates environment. So I'm going to run platform sql, not on main; we're just going to use the current environment, with the food advisor SQL dump. And so then we're just going to give that a second to upload that dump file into our service container and actually migrate all of our restaurant data onto the updates environment's Strapi app. All right.

10. Merging Updates and Working on the Front End

Short description:

We log in to the admin panel and migrate the Strapi app to an Oracle MySQL container. We merge the updates environment into production. While the merge is happening, we can work on the front end as we now have a live API. We update the .env.development file in the client subdirectory to match the deployed environment. We run the migration to the main environment and note that data doesn't flow up. In case of merge conflicts, it's advisable to set up an integration to GitHub or GitLab for better conflict resolution.

A lot of data, a lot of data. All right, so now data is there. This is going to be fine. We're going to go to our admin login, and the dump is already going to contain those same credentials that we had earlier. So this is why I was prefacing it as using them, because then you don't have to reset it. So I'm going to do Admin Example, admin1234, looking good. I'm going to go ahead and log in. And so now our Strapi app has all the same content that we had locally, but it's been migrated now to a Oracle MySQL container. And so we have everything that we had locally, everything is already published. So now what we want to do is merge this into production. So let's get the command for that again. So we're going to use the CLI again, and we're going to merge the updates environment into production. Yes I do. All right, and so that is going to take just a second. And so while that is happening, I'm just going to preface it that part of the benefit of having this development environment is that we could test our migration outside of our production app. So we will have to run it one more time for production, but if our migration failed for some reason, we wouldn't have a broken production site obviously because we tested out on this development environment. So we'll go ahead and give that a second to merge into production. While that is happening though, we can start to work on this front end because we have a live API now. Same as I did before, I can go to API articles. And there's all our articles on our development environment. This is deploying, so I'm OK. Go here, still on the updates branch. So in this case, I am going to go into the client subdirectory now for the first time. And so here we have a NextJS app. We see that we don't have anything specific to platform.sh on it except for this one script, which I'll go to in a second. So what we're going to do is just check out what this looks like. 
We'll see that for the front end, we have this .env.development file that is expecting a locally running Strapi instance. So instead, we are going to update this to what we have deployed. So it's going to be this. We're going to remove this trailing slash because it cares a lot about that. We'll see the same thing on our production environment there. So I'm just going to do two at once so that we can keep this rolling. When you get a chance in between these, go ahead and just do what we did before. We're going to desktop, Node Congress, docs, and going to.... Still on the updates branch. I'm just going to run that same migration, but this time to the main environment. Actually, I think it's going to give me, no, okay, we're good. So when you get a chance, go ahead and do the same thing. You can put this environment flag to make sure that you import to the right environment, but with the same import, like I said, that gives you some protection so that you can test out imports ahead of time on a development environment. But it's a reflection of the way our model works. So every time you create a new development environment, you get all the same infrastructure and code on a development environment. And it comes with all of the data that happens to be in the production environment, but data doesn't flow up. Makes sense, I mean, when we do a merge, code and infrastructure move up the chain to a parent environment, but not the data. So we test out that migration in development and then go to production. Go ahead, Shedrack, sorry. Yeah, sorry for interrupting. So, like, I'm curious, right? What if I have a development environment that is more updated than my main environment and I try to merge them, right? Would Platform automatically do the updates for me, or is there going to be a merge conflict? If I have, I'm sorry, can you say that again, man? So if I have updates, you know how Git's merging of different branches works.
When there's a conflict, it would need a merge. So is there a situation where the development environment and the main environment will have a merge conflict? I guess that could happen, in which case, that's why it's always a good idea: when I do projects, typically I'll set up an integration to GitHub or GitLab, because it'll give you better visibility of how to resolve those conflicts.

11. Syncing Data and Deploying Front End

Short description:

To restart your current environment, you can use the sync feature in the management console or CLI. After installing dependencies and updating the environment file, you can run the app and view the restaurant review site. To deploy the front end app, additional routes and configuration files are needed. A script will dynamically detect the backend URL of Strapi based on the environment. The script will write the URL to a file used by the Next.js app. The backend URL will change depending on the environment, and the script ensures the correct URL is used for each environment.

But should you get into that situation, there is a part inside of the management console and that you can get through the CLI, which is called sync. So this gives you the ability to re-sync either just selectively data or re-sync code. So if you did something and you broke something and you wanna restart your current environment, this is what it does, it effectively pulls down to that environment so you can continue working. Yeah, exactly.

Okay, so I went ahead and installed the dependencies on my front-end app. I updated this .env.development environment file on the front end. And I'm gonna give you a look at what this final deployed app is gonna look like. I'm gonna run yarn dev within the client subdirectory after updating that environment file. Let me double check here. Yeah, my production app is okay. I'm gonna go back to updates. In this case, my front end is here at port 3000. So we'll give it a second to build the pages from what I have on Strapi. So here's the app that I was talking about here. So it is pulling data from those API articles and API restaurants endpoints to build out a restaurant review site. This is Strapi's demo. I can go to the restaurants list page and go ahead and take a look at it. Let's go to Kinsei in San Francisco and see hours, information, total reviews, a collection of pictures that are all part of our Strapi backend, and comments left by a bunch of users with an average rating for the restaurant. I also can check out the blog posts that are served up, and check those out individually, and this front-end app will automatically take the content there to create individual pages for blog posts as well, according to the styles in the template. So that all looks good there. So I'm gonna go ahead and shut down the server, and I'm going to platformize this.

So one of the first things that we'll need to change here is we will have to tell Platform.sh how to direct traffic to this front-end app once we actually deploy it. In this case, that will mean adding another set of routes pointing to a new application container. So we had this pair for the api subdomain, which our local front end was just pulling from, pointing at the Strapi app, and now we'll use the root domain, on the www subdomain, to define a new upstream to a new app called Next.js. We're gonna give it an ID, and that's gonna be the extent of our new routes configuration. But we need one of these too: we need something that tells Platform.sh how to build and deploy it. So I am going to go ahead again into my docs subdirectory and go to a file called nextjs.app.yaml, and then in client, I'm gonna add a new .platform.app.yaml file and include that. So once again, let's go through this. I have a name of the application, I have a version of Node.js, and we're gonna restrict this to yarn once again. I'm going to install those dependencies. And in this case, we actually do have some more steps here, so let's go through those. During build, I am going to grant permissions to this script here in the build hook, to run in my deploy hook. And what this is gonna do, in the same way that there was a config reader library that read from our environment variables to pull service credentials, there are other variables available there, like the platform routes variable, which is gonna be important because we want the front-end app to dynamically detect the backend URL of Strapi, so that it can use it in its own builds. But that value is going to change in every environment that we're in. So I guess I can take a second to show this. Here's our updates environment. Any URL that's generated here is gonna contain this platform environment, kind of not quite a hash, but it's gonna have a unique ID with the environment name.
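The resulting pair of routes could be sketched roughly like this; the app names and route IDs are assumptions based on the talk:

```yaml
# Hypothetical .platform/routes.yaml with both upstreams.
"https://www.{default}/":
    type: upstream
    upstream: "nextjs:http"
    id: nextjs

"https://api.{default}/":
    type: upstream
    upstream: "strapi:http"
    id: api
```

The `id` on each route is what lets a deploy-time script pick out the Strapi backend's resolved URL for the current environment.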
And it'll be the same for every application that we have inside this environment. Whereas if we go to our main environment, it'll be a little different; we'll have main here. So our backend URL is gonna change depending on the environment we're in. And we wanna leverage the fact that we can create as many environments as we want to dynamically define the backend URL. So what this script is gonna do is it's going to read our PLATFORM_ROUTES environment variable. And within there it's going to have essentially what is actually generated and filled in for these default terms. And it's going to pull out the one with the ID api. And then it's going to write, to a .env file that the Next.js app is gonna use, a public API URL, which is effectively going to be exactly what I put here. It's going to be the current environment's backend minus this trailing slash. And then it's going to put a preview secret here.
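The core of that lookup can be sketched in plain Node.js; the environment name and URLs below are invented for illustration, and the real script in the repo may differ in its details:

```javascript
// PLATFORM_ROUTES is a base64-encoded JSON object mapping each resolved
// URL to its route definition, including the `id` set in routes.yaml.
function findApiUrl(encodedRoutes) {
  const routes = JSON.parse(Buffer.from(encodedRoutes, 'base64').toString('utf8'));
  const url = Object.keys(routes).find((u) => routes[u].id === 'api');
  // Strip the trailing slash, since the front end cares about it.
  return url ? url.replace(/\/+$/, '') : null;
}

// Illustrative routes object for an environment named "updates":
const sample = Buffer.from(JSON.stringify({
  'https://api.updates-abc123.eu.platformsh.site/': { id: 'api', type: 'upstream' },
  'https://www.updates-abc123.eu.platformsh.site/': { id: 'nextjs', type: 'upstream' },
})).toString('base64');

console.log(findApiUrl(sample));
// -> https://api.updates-abc123.eu.platformsh.site
```

The value this returns is what would be written into the .env file as the front end's public API URL.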

12. Deploying the Application and Exploring the Site

Short description:

We set up the necessary files and scripts for the deploy hook. We build the application and start the Next.js server. The deploy hook allows us to grab values and pull data from the backend Strapi app. The configuration files are updated, and a new application container shows up in the cluster. We now have an isolated environment where the Next.js front end pulls data from the current instance of Strapi. We merge the changes to production and everything looks good. We can filter and view restaurants based on categories. The merging process is almost complete, resolving the SQLite database issue. The frontend app is deployed and we can explore the site.

I'm not gonna do much with preview secrets in this workshop, but it's going to set up this .env file, and then it's going to run that script in the deploy hook. Again, in the build hook no other containers are available, but they are available in the deploy hook, so that's when we can actually run it and grab these values. Then we're going to build the application. There's also a start command; in this case, we're going to start the Next.js server. We're going to do a quick rebuild and start using the built-in variable PORT, because all app containers run on port 8888 on Platform.sh. And then, yeah, other than that, we're going to have a few settings that dictate how much memory is used inside of our deploy hook and start command. And that's going to give us enough space to actually pull all the data from our backend Strapi app and build the front end. I believe that that's all the steps that we need to change here. I updated the routes YAML, I updated the app YAML file. Make sure I have it in the right place. Not found it yet. And so we'll do another push there. So what we will see inside this environment, because this thing is moving now, is yet another application container show up in our cluster. So in this case, it'll be our front end that's pulling data from the back end, which is pulling and storing data from our database. So every environment that we create from here on out is gonna have the same configuration, until we explicitly change the version of any of our service or app containers. So I'm gonna go ahead and let this go for a second, and I'm gonna mute because I think my wife is making smoothies in the background and it's probably very noisy. And then as soon as this is completed, we'll go ahead and check out what deployed. Yeah, I was wondering what the noise was.
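The start command being described might be sketched like this; the exact command is an assumption (the talk only mentions a quick rebuild and then starting the Next.js server on the PORT variable):

```yaml
# Sketch of the web block in client/.platform.app.yaml.
web:
    commands:
        start: "npx next build && npx next start -p $PORT"
```

PORT is provided by the platform at runtime, so the same configuration works unchanged in every environment.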
And just because it took so long, I'm gonna give it one redeploy to make sure that we're okay here. All right, and now it won't take this long from here on out, but we should be able to view our service tree. My mic is on, right? Our service tree now contains the Strapi, Next.js, and MariaDB containers. We have a new URL here that has our front-end app. So now we have an isolated environment where our Next.js front end is pulling from the current instance of Strapi on this environment. I'm gonna go ahead and merge so that we can get this up to production. And we'll see all the same stuff that we saw locally, even a little bit faster now that it's not restricted to our local instance. So here's that same Kinsei restaurant, all the reviews that go along with it, the blog for our backend. Everything's looking good. Let's go back to the articles. I can filter by categories. I guess I don't have the Meat or American categories. Which ones do I have here? All right, I have one with the European category tag. I'm gonna go back to restaurants and see what my filtering looks like here. I'm gonna look at Guatemalan restaurants here. And I can view an individual restaurant that fits that criteria. And so this should be nearly done merging to production. Get us something nice and production-ready instead of that SQLite database issue. Let's see what else we have on this. I'm gonna try European food, the Mint Lounge. Everything looks good. I'm gonna scroll through all the different images. I haven't looked at this a lot to see if there's any image highlight feature on that slideshow.
This allows you to start repo, documentation. Oops. Sorry.

13. Deploying Multi-App Environment

Short description:

We're almost done with deploying the multi-app environment. We just need to fix the issue with the SQLite stage. The production services and deployment process are looking good. Next.js has been a great front-end tool for decoupled deployments on PlatformSH. We're still working on resolving the issue with the first deploy. Our solution keeps everything within the same repo, allowing for easy development and compatibility testing between the front and back ends. We now have a decoupled architecture between Next.js and Strapi, ready for further development. We can add a domain to the production environment and continue with our development. We can also provision a development environment with the same infrastructure and data as the production environment. If you have any questions or need help, feel free to reach out to us. Check out our community channel for more resources and answers to your questions.

Did everybody following there have trouble getting to this last step, of actually deploying the multi-app environment? It's okay if you just wanted to watch now and then download this later and try it yourself. Just obviously be aware that there's something we need to fix on that SQLite stage, but it looks like everything to do with the production services and actually deploying this thing is looking good here. So once again, we're just gonna need to do this first build to get it to production, and then we will be all set to start developing from here on out.

Yeah, Next is a pretty interesting front end. I've been doing it a lot more. When I started doing this decoupled stuff on Platform, I was working a lot with Gatsby, and I only recently switched to Next.js to prepare this demo app, but then also as a front end for Drupal, which has been pretty interesting. A new tool came out recently, specifically for Drupal and Next.js. But I've liked it, I don't want to say a lot more than Gatsby, but it's been solving my multi-app deployment demos a lot better than that situation did. And it really gets stuck there on the first deploy; I've got to figure out what's going on there. It looks like we're stuck on trying to start Strapi. Well, like I said, because of that first time where it's not available at all, it's essentially writing the backend URL to a symlinked .env file. So the very first time it does it, that data's not available to even create that file. So it's essentially starting the server with an under-configured .env file. And so it takes a second, it seems like, on the first push when migrating, to fill out the file in time to actually make that connection. Okay. Yeah, it's still trying to figure out what it's going to do with that container, I guess. But yeah, I would be curious to hear everybody else's deployment solutions here, because most of the time when I see a pattern like this, it's putting the front end on Vercel and deploying the back end elsewhere. And so what's interesting about our solution is that everything stays within the same repo so that you can create a development environment. And then, whether you're somebody who's changing the back-end API schema and you're trying to make sure that the front end is still compatible with what you've changed it to, you can do that with full copies of both the front and back end in that environment, or vice versa: if you're working on the front end, you can have your own development copy of that API application inside the development environment. There we go.
So we have our API. Let's go ahead and refresh. We've got our Next.js front end, and then we should have, great, we have our production environment: a decoupled architecture between a Next.js and a Strapi application. So now we're at this point free to add a domain to this production environment and continue on with our development. And since we've completed the migration to production, since we have all of our infrastructure and code set up on production, if we go either through the CLI or through the management console, we can now come here and say: let's say we want to add a shop to the front-end application of some kind, that doesn't specifically go through Strapi but uses something external like Shopify, but we still want a copy of the API that doesn't necessarily conflict with anything going on production-wise. We now will be able to provision this development environment called add shop that will have the exact same infrastructure and all the same data that we migrated to production. All of the restaurants, all the blog posts, any of the data that we might want at our disposal when we're writing tests for the changes will be available to us there in that isolated environment. That is... Yeah, that's pretty much what I wanted to cover today, and hopefully you guys were able to follow at your own pace, or you'll check the recording later or check the repo for the instructions and set up the pattern yourself. One other thing that I did want to add is, if you go to our Platform.sh templates organization on GitHub and search for Strapi, you will find actually a number of different templates that use the base Strapi or Strapi v3 back end by itself. But these two here are two separate kinds of front ends that are pulling data from the Strapi backend, just like I'm showing you in this demo.
Otherwise, obviously you can get rid of your search and go to JavaScript and see quite a number of different ones that we have there. So hopefully this has been interesting and useful. We've set aside time, we don't need to stay for the full three hours, but Shedrack and I were already planning on being available to help anybody either get to the point that I got to here, or ask any questions about Platform or what Node is like on Platform. Happy to stick around and answer any of those questions, and really appreciate you guys all coming out. Yeah, thanks everybody for coming out for this workshop. If you need any help, please just let us know. If you have any specific question, or you need to understand some particular concept in Platform.sh, message us. Happy to answer your questions. One second. Yeah. Yeah, I just want to talk about the community channel. Our community site is very useful; it's divided into various sections, from how-to guides to tutorials, to questions and answers. You can come here and check it. You'll definitely find an answer to your question on the community site.

14. Leveraging Isolated Environments

Short description:

You can leverage isolated identical development environments and do some pretty interesting work. You can try out different versions of Node.js and get everything identical to what we had on production. The containers will inherit the relationship to the same database and have the same data. You can skip rebuilding the front end if you have already built it on a previous version of Node.js. Just give it a refresh start command and test on the new version. Easily provision infrastructure in isolated environments.

So please feel free to do that. Yeah, same goes for Slack. You can go to chat.platform.sh and join our Slack workspace; Shedrack, myself and the rest of our DevRel team will be there, along with, I mean, you could have a conversation with the CTO potentially inside of our public Slack channel. We're all there to help people get used to Platform.sh, because I know it's maybe a little different than some other deployment platforms that you're used to. But if you are able to accept some of the few rules that I talked about, of write access being revoked post-build, handling things through environment variables, maybe a little extra time during the initial migration, you really get into a position where you can leverage these isolated, identical development environments and do some pretty interesting work. And go into our repository here and say, well, I have a front end on Node 14 and I have a Strapi backend on Node 12. But if I go to the documentation and I go to the Node.js section, I see we actually have support for 16. So I can go ahead and go into both of these app.yaml files and say, let's try out Strapi on 16. Push that, and you'll actually get everything completely identical to what we had on production, with the one change that now it runs in a Node 16 container. And I mean identical. I mean, that Node 16 container will inherit the relationship to the same database, to a copy of that database, and we'll have that same data. It will also be smart enough to detect that I've already built that front-end application on Node 12; I have that commit hash associated with it, that build ID, so I can skip rebuilding the front end. I can just give it a refresh start command, because we have a new version of the backend, and I can test: all right, let's test on Node 16. Same for the front end. And you can just really easily provision infrastructure that way in these isolated environments.

15. Exploring Multi-App Pattern and Troubleshooting

Short description:

All right. We discussed the variations of the multi-app pattern. We're here to answer your questions and provide resources. Let's review the SQLite issue. I found a Python package for converting SQLite to MySQL for production. You can open a tunnel to your services and access the running service container on your development environment. Update the database.js file to be less strict about being on platform. Use the tunnel to retrieve credentials for the live service container. Run sqlite3mysql, pointing it at the data.db file, and provide the necessary credentials.

All right. Well, I guess, let's go here. Showed you the, yeah, these were the variations I was talking about I'd done before: a simple Gatsby front end, and then two other variations, or three total, on the multi-app pattern, in this case Gatsby pulling in from a WordPress CMS, Strapi, and Drupal. So there's a lot of resources here to see the same pattern in our GitHub organization.

Thanks, Irving. Yeah, we'll stick around for a bit. If you guys have any questions, feel free to ask them. Shedrack and I are definitely available to point you to some resources or, you know, heck, I was planning on being around, so if there's something else interesting that you want a demo of, I'm happy to do that. But yeah, a lot of resources around multi-app and Node on platform.

Let's see. Thanks, Shane. Yeah, hopefully you guys were able to follow along. If not, check out the repo and do it on your own time. I tried to be pretty detailed with the instructions. We'll take a look to see what was going on there with the SQLite stuff. But like I said, it wasn't the point of the workshop anyways.

Oh, I wanted to check, do you see that, Shedrack? There's an error there on the poll in that the date is invalid on these articles. You see this? Well, I guess they're on the comments. Yeah, I can't see it. Let's review. Go to one of these. Mm-hmm. There's no field for date, apparently. No. But... That's fine. So my... Okay. So my dump was good except for the createdAt and updatedAt dates. That's pretty good though. The last time I did this, I wasn't able to get a full dump of all the data prepared for MySQL, and I found, what was it? What was it? I don't know if any of you all listening have come across this, if... I don't know if you do demos like we do. But the Food Advisor repo comes with that seed command to put together the SQLite database. But what I really wanted to show you was this production service of using MySQL. And I've Googled it a few dozen times trying to figure it out. And I found this one this week. So it's a Python package that I installed via pip. And so what I did was I installed it and then when you... One other thing that you can do when you start developing this locally, which is obviously the next step, is that you can update this database.js file to not be so strict on being actually on platform. And you can open a tunnel to your services. So what's my branch right now? I'm still on updates. So I can do platform tunnel:single, which will then give me access to the running service container on my development environment. So obviously you don't do this in production. And then where did I have my notes here? Was it this one? No. Oh. It doesn't matter. I can just look it up in the documentation. So, I can open a tunnel to the application. Now that I have the tunnel open I can open a...
So now I have a mocked environment variable for platform relationships, which I can decode the same way and pipe through this jq library, which I'm leveraging inside the app container, to get my credentials for the service container that's running live on my environment. And so what I did is I used that open tunnel with the credentials to then run sqlite3mysql. I pointed it at the data.db file that came with the Food Advisor demo, and I gave it the database name, which in this case on platform is the path credential, plus the user and port. It's still localhost, so I just need the user, and in this case when I open the tunnel I'm at 30000 instead of 3306.
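The decoding he does with base64 and jq can be done the same way in Node, since PLATFORM_RELATIONSHIPS is just base64-encoded JSON. A sketch; the variable is mocked here so the snippet runs anywhere, whereas on a real Platform.sh container it is injected for you, and the relationship key (`database`) matches whatever name you gave it in your app config:

```javascript
// PLATFORM_RELATIONSHIPS is base64-encoded JSON; mock it for local runs.
if (!process.env.PLATFORM_RELATIONSHIPS) {
  process.env.PLATFORM_RELATIONSHIPS = Buffer.from(JSON.stringify({
    database: [
      { host: '127.0.0.1', port: 3306, username: 'user', password: 'secret', path: 'main' },
    ],
  })).toString('base64');
}

const relationships = JSON.parse(
  Buffer.from(process.env.PLATFORM_RELATIONSHIPS, 'base64').toString('utf8')
);

// Take the first (usually only) endpoint of the relationship named "database".
const db = relationships.database[0];
console.log(`mysql://${db.username}@${db.host}:${db.port}/${db.path}`);
```

Those decoded fields (user, password, host, port, and `path` as the database name) are exactly the credentials handed to the conversion tool over the open tunnel.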

16. SQLite to MySQL Conversion and Serverless DB

Short description:

The SQLite database file was converted to MySQL using a flag and the provided password. The converted file was uploaded to the live database container and then to the repository for use. The conversion process was successful, with the exception of the dates. Only the reviews section contained dates. There was a discussion about serverless DB services, but no recommendations were made due to limited experience.

And then the only other flag that I had to put was without foreign keys. And then I entered, obviously, the password that comes up in the credentials, and so that actually converted that SQLite database file. I didn't even convert it to a dump; it just connected to my database and uploaded the whole thing for me in the right form. And so then for this demo, we have... I really don't like this little thing in Zoom. Yeah, it's not my favorite. Inside the documentation, go to the services section: we were using MySQL, and then I used our export command to just dump what I had converted from SQLite to the live database container, and then uploaded that to the repo so that you all could use it. That made me happy, because the last time I did a version of this workshop I kind of had to use a shortened version of the data, but this helped get pretty close to the full SQLite data actually uploaded to the service container. That's not necessarily relevant, other than being a cool tool I found recently that might be interesting to any of you. Yeah, beyond any of that... What's that, Shedrack? Yeah, yeah, so when you were updating the workshops I was just wondering how you planned to migrate the SQLite to MySQL database, so it really is quite useful. Yeah, it helped me out there. Like I said, the last time I kind of manually pieced together like 20 rows of this gigantic SQLite database and gave up there, but this was able to convert it pretty well. And it looks like the only thing that didn't get converted was dates... I don't know if it's dates everywhere. Let's go to blog posts. These have dates on them? No, I guess the only thing in here in this sample that has dates are the reviews. So that was the only thing that didn't come through, looks like. Do you recommend any serverless DB... Shedrack, you might know better than me about that. You don't know about that. Do you recommend any serverless DB service?
I do not really know so much about serverless because I just played with like a bunch of just Netlify functions and rest, but I cannot honestly recommend any, to be honest, because I have not really used serverless a lot. I'm still on the whole server based train even though serverless has a server. Yeah, it's quite, I don't have any recommendations, but it's just funny how serverless actually runs on a server. Irving, what are you trying to use it for? Because yeah, I don't have a ton of experience with it either. I came in with mostly Python background and then learning PHP.

17. PlatformSH Features and Benefits

Short description:

I've seen a lot of people using DynamoDB and other similar services for no-code configuration. PlatformSH provides more than just hosting, it offers orchestration for workflows, webhooks, and CI tasks. The development environments inherit data and environment variables from the production environment, making it easy to enforce standards and manage security. The API can be scaled to multiple applications, allowing for the creation of SAS products. The platform also facilitates dependency management and keeps infrastructure up to date. It's a comprehensive DevOps platform with many features. Feel free to try the demo, visit the website, and join the Slack community for further discussion.

Okay. No, no code. I think I've seen a lot of people using DynamoDB, but that's the only time I've seen that. Of course, I mean no-code configuration. Oh yeah, I've seen Fauna, I've also seen tuna... Yeah. Yeah. Hey, so, yeah, we wouldn't consider it just hosting. I mean, part of what we're trying to provide is all of the orchestration that would go along with you rebuilding this workflow on Actions or any other webhooks that you would need to attach to the repository where you're doing your development. And now that I've made this migration, I can copy this all over to GitHub and make an integration to the same project. And I already have my association between a branch or pull request equals a development environment, so I can make it run alongside all of my existing CI tasks, all of the tests that I have set up for individual environments; I can wait for a deployment to then do any visual regression or any other tests that have to run on the deployed environment. So I created this environment, add-shop, while I was talking. And if I go here, and I go to the camera stuff again. So this was the production app that we just built, this was the environment we were building it on. And after I finished all that and got it on production, I created this branch. So now that I have this branch, I can start doing work, like I said, changing the infrastructure. I get all of the build that I did before. But I also get all the data. This is a fully built out, identical copy of the production environment. So it's hosting, but it's also this inheritance paradigm, I guess: I will get all of my production data on every development environment that I create. And I can tie it into the logic of Git that I'm already using. And that same inheritance happens for things like, I can set... Let's say for my production environment, I can set environment variables that are specific to this app.
So let's say I needed some credentials to talk to Shopify in production. I may not want those to be inherited by anything, but I could very well create a staging environment that I then make children off of, and give that staging environment its own Shopify test credentials that will be inherited by every development environment as well, built into our API. So those kinds of things, the inheritance, the abstraction of all the DevOps tasks, are all kind of baked into Git, with infrastructure as code that's managed. So you can't change anything of your infrastructure until you commit it, but you don't have to commit it down to the patch version. Security updates are applied automatically. So we tend to consider it as more than just hosting. So if I go to my production environment, I can schedule or take individual backups of the environment, and I can actually use those backups to then sync to child environments. One of the interesting things on top of, isn't it interesting that you can have a front end and back end in the same project, is that this standard of a branch equals an environment, and how inheritance works, makes enforcing standards of how your team of developers works on a single project really easy and auditable, security-wise. But this API can be scaled out to as many applications as you want. So you could take this decoupled pattern of a front end and a back end with a similar CMS back end, and you could make a SaaS product that then initializes as many on-demand projects for customers as you wanted to, via the same "create a project, initialize from this repo" flow. And then you get these isolated projects that within them have isolated environments where, say, your in-house developers can make customizations and personalizations for whoever the customer may be. And establish this fleet of apps that have all the DevOps already cooked into them, rather than you having to build them from scratch, just like in the single-site case, but for 100 sites.
And then you can do things like define commands to run Yarn upgrade on every one of your applications in this upstream repo, and then just say: there are my 100 sites, run Yarn upgrade on all of them, keep the upgrades in a development environment and just run my tests. If any of them fail, they need inspection. But if they pass, then merge into production and keep your dependencies up to date that way. So it's a DevOps platform, I guess, is sort of how we look at it, with all those little features in there. Sure thing. Yeah. Please, you know, try out this demo. Come take a look at our website and join our Slack, because you'll find Shedrack and I both in there. And we're happy to talk about what it is you're deploying, unless you want to share now. And then, yeah, help you see how what you're working on would fit, or at least run, on platform. Thanks, everyone. Thanks, everyone. Do you know, Shedrack, does this... because I saw this environment variable, the one you said that was too much. Yeah.

18. Preview Setup and Implementation

Short description:

I have a preview secret on the Strapi side and on the client side. Let's give it a try and see if it works. I was building the same thing for a Drupal backend and wanted to figure out the preview part. It was a pain to do with a Gatsby frontend, but it's already set up with Next. I think the workshop code for the live preview is already implemented. I got a preview URL set up for a draft version. I just need to include something on the Strapi index file.

Where is it? Well, it's on both sides. Thanks, Nazeem. Thank you for coming. I have a preview secret on the Strapi side, and I'm also supposed to have a secret on the client side. It's the one I rewrote. Because with Next, I'm supposed to be able to ping that front-end and refresh what's been updated on the back-end, right? Yeah, yeah. Well, I mean, let's give it a try. I doubt it'll work. But it could. Let's see. Article... These are my blogs, I guess. Wait, let's clean up all these tabs before I do it on a different environment. Let's go to Add Shop. Here's my API, and here's my front end. All right.

I'm building the same thing for a Drupal backend, and I really want to figure this part out, because doing preview with a Gatsby front end was not simple. Yeah, definitely. I think I'll try to adapt it when I get more time. Yeah, like I had to run a development server on non-production environments and set up all kinds of stuff, but from what I understand, Next doesn't work like that, from when you and I have talked. So I want to go to Articles. And let's see, blog, why are Chinese hamburgers? Okay. Let's go to that. Like we do. Um, so that just does this. And then, we'll save. Editing a published version. Okay. Hey, look at that. Yeah, so for you guys who stuck around, that's already set up. I was hoping that it would be. It was such a pain to try to do this same thing with the Gatsby frontend. It's still nice, but some different logic had to be done. Maybe I'll read this and maybe that's just... I needed a little time to figure out the difference. Is the workshop code for the live preview... how was it implemented? Looks like it's already implemented, you know, at least on an update. I didn't know, I wasn't familiar yet with whether in Strapi v4 I got a preview URL setup, so I could view a draft version. I think I got three preview secrets. Extensions. Yeah. Okay, so it's just that I need to include something on the Strapi index file. This is for v3, maybe there's a better way now.

19. Handling Drafts and Questions

Short description:

But it looks like I need to add something. Oh, I have to change how I handle drafts and published articles. Drupal already has that built-in logic to give you a preview URL. I'm exploring all the changes in Strapi v4. Thank you all for coming. Gustavo, do you have any questions? Thank you, Gustavo. Reach out to us if you need anything. Have a great day!

But it looks like I need to add something. Or did it say this was... Oh, I have to add a new URL. Oh, I have to make one for each collection. That thing gives me a preview button. I have to change how I handle drafts and published articles. Let's see. It automatically changes your draft. Yeah, let's see. I'm assuming this is gone. Okay. The draft version. Because that's what was interesting about the Drupal back end version: Drupal already has that built-in logic to give you a preview URL, and so it sort of leverages that. Which, it looks like, so does Next, which is what the Drupal one is leveraging: it'll give you this API preview endpoint where you could look at a draft copy, essentially, that's already updated. Yeah, anybody still hanging around, let me know if you want to see something, otherwise I'm just kind of exploring all the changes in Strapi v4 I haven't had a chance to look at yet.
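For the draft handling being discussed: Strapi v4's REST API exposes unpublished drafts through the `publicationState=preview` query parameter when Draft & Publish is enabled on a collection. A small hypothetical helper to build such URLs (the base URL and collection name are placeholders):

```javascript
// Build a Strapi v4 REST URL, optionally including unpublished drafts via
// the Draft & Publish `publicationState=preview` query parameter.
function strapiUrl(base, collection, { drafts = false } = {}) {
  const url = new URL(`/api/${collection}`, base);
  if (drafts) url.searchParams.set('publicationState', 'preview');
  return url.toString();
}

// Published entries only (the default, equivalent to publicationState=live):
console.log(strapiUrl('http://localhost:1337', 'articles'));
// Drafts included, which is what a preview route would fetch:
console.log(strapiUrl('http://localhost:1337', 'articles', { drafts: true }));
```

A Next.js preview handler would typically call the drafts variant only when preview mode is active, so regular visitors never see unpublished content.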

So, thank you again for coming. Thank you everybody for coming. Gustavo, you're the last one, my friend. Do you have any questions or anything for us? I'm going to take your silence as a no. Thank you, Gustavo. Take care of yourself. Appreciate you coming again; feel free to reach out to us, we'll be at the rest of Node Congress for the next few days. So if you think of anything, go and drop us a line inside the thread or ping us directly, and have a great rest of your day. All right.

Watch more workshops on topic

React, TypeScript, and TDD
React Advanced Conference 2021
174 min
React, TypeScript, and TDD
Top Content
Featured Workshop (Free)
Paul Everitt
ReactJS is wildly popular and thus wildly supported. TypeScript is increasingly popular, and thus increasingly supported.

The two together? Not as much. Given that they both change quickly, it's hard to find accurate learning materials.

React+TypeScript, with JetBrains IDEs? That three-part combination is the topic of this series. We'll show a little about a lot. Meaning, the key steps to getting productive, in the IDE, for React projects using TypeScript. Along the way we'll show test-driven development and emphasize tips-and-tricks in the IDE.
Web3 Workshop - Building Your First Dapp
React Advanced Conference 2021
145 min
Web3 Workshop - Building Your First Dapp
Top Content
Featured Workshop (Free)
Nader Dabit
In this workshop, you'll learn how to build your first full stack dapp on the Ethereum blockchain, reading and writing data to the network, and connecting a front end application to the contract you've deployed. By the end of the workshop, you'll understand how to set up a full stack development environment, run a local node, and interact with any smart contract using React, HardHat, and Ethers.js.
Remix Fundamentals
React Summit 2022
136 min
Remix Fundamentals
Top Content
Featured Workshop (Free)
Kent C. Dodds
Building modern web applications is riddled with complexity And that's only if you bother to deal with the problems
Tired of wiring up onSubmit to backend APIs and making sure your client-side cache stays up-to-date? Wouldn't it be cool to be able to use the global nature of CSS to your benefit, rather than find tools or conventions to avoid or work around it? And how would you like nested layouts with intelligent and performance optimized data management that just works™?
Remix solves some of these problems, and completely eliminates the rest. You don't even have to think about server cache management or global CSS namespace clashes. It's not that Remix has APIs to avoid these problems, they simply don't exist when you're using Remix. Oh, and you don't need that huge complex graphql client when you're using Remix. They've got you covered. Ready to build faster apps faster?
At the end of this workshop, you'll know how to:- Create Remix Routes- Style Remix applications- Load data in Remix loaders- Mutate data with forms and actions
Vue3: Modern Frontend App Development
Vue.js London Live 2021
169 min
Vue3: Modern Frontend App Development
Top Content
Featured Workshop (Free)
Mikhail Kuznetcov
Vue3 was released in mid-2020. Besides many improvements and optimizations, the main feature Vue3 brings is the Composition API – a new way to write and reuse reactive code. Let's learn more about how to use the Composition API efficiently.

Besides core Vue3 features we'll explain examples of how to use popular libraries with Vue3.

Table of contents:
- Introduction to Vue3
- Composition API
- Core libraries
- Vue3 ecosystem

Prerequisites:
IDE of choice (IntelliJ or VSC) installed
Nodejs + NPM
Developing Dynamic Blogs with SvelteKit & Storyblok: A Hands-on Workshop
JSNation 2023
174 min
Developing Dynamic Blogs with SvelteKit & Storyblok: A Hands-on Workshop
Top Content
Featured Workshop (Free)
Alba Silvente Fuentes
Roberto Butti
2 authors
This SvelteKit workshop explores the integration of 3rd party services, such as Storyblok, in a SvelteKit project. Participants will learn how to create a SvelteKit project, leverage Svelte components, and connect to external APIs. The workshop covers important concepts including SSR, CSR, static site generation, and deploying the application using adapters. By the end of the workshop, attendees will have a solid understanding of building SvelteKit applications with API integrations and be prepared for deployment.
Back to the Roots With Remix
React Summit 2023
106 min
Back to the Roots With Remix
Featured Workshop
Alex Korzhikov
Pavlik Kiselev
2 authors
The modern web would be different without rich client-side applications supported by powerful frameworks: React, Angular, Vue, Lit, and many others. These frameworks rely on client-side JavaScript, which is their core. However, there are other approaches to rendering. One of them (quite old, by the way) is server-side rendering entirely without JavaScript. Let's find out if this is a good idea and how Remix can help us with it?
Prerequisites- Good understanding of JavaScript or TypeScript- It would help to have experience with React, Redux, Node.js and writing FrontEnd and BackEnd applications- Preinstall Node.js, npm- We prefer to use VSCode, but also cloud IDEs such as codesandbox (other IDEs are also ok)

Check out more articles and videos

We constantly think of articles and videos that might spark GitNation people's interest, skill us up, or help build a stellar career

Don't Solve Problems, Eliminate Them
React Advanced Conference 2021
39 min
Don't Solve Problems, Eliminate Them
Top Content
Humans are natural problem solvers and we're good enough at it that we've survived over the centuries and become the dominant species of the planet. Because we're so good at it, we sometimes become problem seekers too–looking for problems we can solve. Those who most successfully accomplish their goals are the problem eliminators. Let's talk about the distinction between solving and eliminating problems with examples from inside and outside the coding world.
Jotai Atoms Are Just Functions
React Day Berlin 2022
22 min
Jotai Atoms Are Just Functions
Top Content
Jotai is a state management library. We have been developing it primarily for React, but it's conceptually not tied to React. In this talk, we will see how Jotai atoms work and learn about the mental model we should have. Atoms are a framework-agnostic abstraction to represent state, and they are basically just functions. Understanding the atom abstraction will help you design and implement state in your applications with Jotai.
A Framework for Managing Technical Debt
TechLead Conference 2023
35 min
A Framework for Managing Technical Debt
Top Content
Let’s face it: technical debt is inevitable and rewriting your code every 6 months is not an option. Refactoring is a complex topic that doesn't have a one-size-fits-all solution. Frontend applications are particularly sensitive because of frequent requirements and user flows changes. New abstractions, updated patterns and cleaning up those old functions - it all sounds great on paper, but it often fails in practice: todos accumulate, tickets end up rotting in the backlog and legacy code crops up in every corner of your codebase. So a process of continuous refactoring is the only weapon you have against tech debt.In the past three years, I’ve been exploring different strategies and processes for refactoring code. In this talk I will describe the key components of a framework for tackling refactoring and I will share some of the learnings accumulated along the way. Hopefully, this will help you in your quest of improving the code quality of your codebases.

Debugging JS
React Summit 2023
24 min
Debugging JS
Top Content
As developers, we spend much of our time debugging apps - often code we didn't even write. Sadly, few developers have ever been taught how to approach debugging - it's something most of us learn through painful experience.  The good news is you _can_ learn how to debug effectively, and there's several key techniques and tools you can use for debugging JS and React apps.
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Top Content
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk here.
Fighting Technical Debt With Continuous Refactoring
React Day Berlin 2022
29 min
Fighting Technical Debt With Continuous Refactoring
Top Content
Let’s face it: technical debt is inevitable and rewriting your code every 6 months is not an option. Refactoring is a complex topic that doesn't have a one-size-fits-all solution. Frontend applications are particularly sensitive because of frequent requirements and user flows changes. New abstractions, updated patterns and cleaning up those old functions - it all sounds great on paper, but it often fails in practice: todos accumulate, tickets end up rotting in the backlog and legacy code crops up in every corner of your codebase. So a process of continuous refactoring is the only weapon you have against tech debt. In the past three years, I’ve been exploring different strategies and processes for refactoring code. In this talk I will describe the key components of a framework for tackling refactoring and I will share some of the learnings accumulated along the way. Hopefully, this will help you in your quest of improving the code quality of your codebases.