How to Solve Real-World Problems with Remix


- Errors? How to render and log your server and client errors

a - When to return errors vs throw

b - Set up a logging service like Sentry, LogRocket, or Bugsnag

- Forms? How to validate and handle multi-page forms

a - Use zod to validate form data in your action

b - Step through multi-page forms without losing data

- Stuck? How to patch bugs or missing features in Remix so you can move on

a - Use patch-package to quickly fix your Remix install

b - Show a tool for managing multiple patches and cherry-picking open PRs

- Users? How to handle multi-tenant apps with Prisma

a - Determine tenant by host or by user

b - Multiple databases or a single database with multiple schemas

c - Ensure tenant data is always kept separate from other tenants'

195 min
21 Nov, 2022

AI Generated Video Summary

Welcome to my inaugural remote workshop for RemixConf Europe. We'll cover error logging, multi-page forms, patching Remix, and multi-tenant apps with Prisma. We'll discuss expected and unexpected errors, code and error handling, and server-side error logging. Additionally, we'll explore handling form fields and errors, using typed JSON and overriding default functions, and managing data for multi-page forms. We'll also touch on storing session and form data, using standard fetch and form data in a Remix app, and multi-tenant Remix apps with Prisma. Finally, we'll delve into patching Remix to overcome limitations and testing with Remix and Playwright.

1. Introduction to RemixConf Europe Workshop

Short description:

Welcome to my inaugural remote workshop for RemixConf Europe. We'll cover error logging, multi-page forms, patching Remix, and multi-tenant apps with Prisma. Expected errors should be returned to the client as JSON payload, while bad client requests should receive appropriate responses.

Welcome, everyone, to this, my inaugural remote workshop here for RemixConf Europe. Today we're going to be talking all Remix. You know, Remix all the things. Yay! My name is Michael Carter. You may know me online as Kiliman. It's short for Kilimanjaro. I actually had a consulting company called Volcanic Technologies. All my computers are named after volcanoes, and I wanted a short nickname that I could use online, and Kilimanjaro just happened to sound really good. So that's where that came from. Again, this is my first time giving a workshop, I should say, though I've been part of several others. It's going to be a little bit different from what you're probably used to, in that we're not really trying to build an app. I've been using Remix since day one, when they first started licensing it. In fact, I got one of the first licenses when Kent tweeted the Buy link. Apparently, he tweeted it a little too early, because the license I got was their Dollar Test License. Anyway, I've been using Remix, and I've been part of the group that helped define the API, just by using it and giving feedback. So when they finally released it back in November, I was excited for everybody else to be able to use Remix. With that being said, this workshop is primarily about the things that I've learned using Remix, as well as over the years of doing web development in general.

We're going to cover four main topics. Let me go ahead and share my screen real quick so we can see what we're going to be talking about. I'm going to zoom this a little bit more. Sorry, I forgot to hit the actual share. OK, you should be able to see my screen now. So we're going to be talking about error logging: what really are errors in Remix, and how do you handle them? Multi-page forms: that was a topic a lot of people have asked for. With Remix loaders and actions, what is a good way to deal with forms? I'll also talk about how I patch Remix. As you know, Remix is a fast-moving framework. The people on the team are very smart, but they are a small team, and there are going to be issues that come up. There are also a lot of pull requests and bug fixes sitting out there, open and waiting to be merged. But you have an app to build and you've got to keep moving. So I'm going to tell you how I manage patches for Remix so that I can get unblocked and continue building my app. And then finally, we're going to talk about multi-tenant apps and how to handle them with Prisma. That's another question I see a lot on Discord: how to handle multi-tenant apps. This one is actually pretty interesting because there are two parts to multi-tenancy, which we'll discuss. I focus more on the backend part: how to access the database for multiple tenants. And then if we have time, I will go back over the testing part, which we had in the discussion group last week. I have the code and all my notes about that. So without further ado, I guess we'll go ahead and get started on the error logging portion.
I'm going to go ahead and start my timer. Okay, so we're going to have a break probably about an hour and a half in. I'm sure people are going to need to get drinks, bathroom breaks, what have you. And I know that after about an hour and a half my voice is probably going to go. So let's go ahead and get started. So here we're talking about error logging. And just in case nobody has seen it yet, let me post the link to the GitHub repo in the Zoom chat. Sorry, I've got the microphone right in my face and I have to look around it to get to the screen. So follow along with the code there. Alright, let's go back to the error logging slides, number one.

When it comes to errors, what exactly is an error in Remix? To me, there are three different categories. We have expected errors; we have bad client requests, because remember, clients can send stuff to your app regardless of whether you have client-side validations, so you always have to make sure you validate on the server; and then finally you have unexpected server errors. Those are things like null or undefined references, an API you called incorrectly, or the database being down. So, what are expected errors? These errors are typically caused during request validation. When you have a form, you typically will have some client-side validation, and once the data passes that, the form is submitted; then you also want to do server-side validation. You never want to assume that your client-side validation is going to catch those errors. The main reason is, again, anybody can post to your endpoint, so if you don't validate on the server, they can bypass any rules that you may have on the client. When you have a validation error and the data is not in the correct format, we want to let the user know that there are problems so that they can fix them and submit the data again. So, these types of errors should always be returned to your client from your action. Do not throw an error or a response. I see a lot of people think, oh, hey, I got this validation error and I'm going to throw a response saying bad request — but then you end up in your catch boundary and not back in your form route. So when you have expected errors, namely with validation, don't throw an error or a response. Simply return that data in your JSON payload. The next category is bad client requests.
These are typically things like invalid URL params. For example, your user ID is supposed to be a number, but they send a string. Or they send you a user ID and that user doesn't exist. Those are bad client requests, and you would send a not found response. Another example is unauthorized access to a resource: a user changes the URL to try to access somebody else's notes. You want to verify that even though the note may exist, that user has access to it, and if not, throw an unauthorized response.
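The three categories above suggest a simple pattern for actions: return expected validation errors as data, throw a `Response` for bad client requests, and let truly unexpected errors propagate. Here is a minimal, self-contained sketch of that pattern — the function and field names are illustrative, not Remix APIs, and a `Map` stands in for the database:

```typescript
// Sketch of the three error categories in an action-like function:
// RETURN expected validation errors, THROW a Response for bad client
// requests, and let unexpected errors propagate to the error boundary.
type User = { id: number; name: string };

const db = new Map<number, User>([[1, { id: 1, name: "Ada" }]]);

function editUserAction(
  params: { userId: string },
  form: { name?: string }
): { errors: Record<string, string> } | { user: User } {
  // Bad client request: param isn't a number -> throw a 400 Response
  const userId = Number(params.userId);
  if (Number.isNaN(userId)) {
    throw new Response("userId must be a number", { status: 400 });
  }

  // Bad client request: resource doesn't exist -> throw a 404 Response
  const user = db.get(userId);
  if (!user) {
    throw new Response("Not Found", { status: 404 });
  }

  // Expected error: validation failed -> RETURN it so the form can render it
  if (!form.name) {
    return { errors: { name: "Name is required" } };
  }

  return { user: { ...user, name: form.name } };
}
```

Thrown responses land in the catch boundary; returned error objects come back through action data so the form route can re-render with messages.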

2. Handling Expected and Unexpected Errors

Short description:

Errors should typically only be thrown by libraries or external services, because if it's an error you could have anticipated, you should have dealt with it and then let the user know that there was a problem. By throwing an error, you're just kind of throwing your hands up in the air and saying, oh, hey, sorry, can't do anything. So, like I said, it's much better to do the checks or verify your inputs before you call your functions. This category should strictly be for unexpected errors. Most of these errors will be rendered in your error boundary.

So here, these are the cases where you want to throw a new response. I actually have some helper functions that I'll show you, so I know what types of responses to throw. You're essentially saying: I'm unable to process your request with the information you provided. For the most part, you will want to display this error in your catch boundary. This is where you could say, hey, that note is not available, you're not authorized to see it, or this is an invalid request because you sent a string instead of a number, and so forth. And then finally, the last category is the unexpected errors. These are truly unexpected, because you want to try to avoid bugs. You should do defensive coding: verify that you don't have a null value before you access it, or use the null coalescing operator, and make sure that you call your APIs correctly. But there are some things that are just beyond your control, such as the database being down; your app can't plan for that and just has to try to recover gracefully. I recommend that you don't throw errors directly in your application code. Errors should typically only be thrown by libraries or external services, because if it's an error you could have anticipated, you should have dealt with it and then let the user know that there was a problem. By throwing an error, you're just kind of throwing your hands up in the air and saying, oh, hey, sorry, can't do anything. So, like I said, it's much better to do the checks or verify your inputs before you call your functions. This category should strictly be for unexpected errors. Most of these errors will be rendered in your error boundary. And if you don't have an error boundary in your route, the error ends up bubbling all the way up to your root, which may not necessarily be a good user experience. So, try to minimize errors.
I also don't recommend using try-catch in your application code, because, again, errors should be limited. The only time an error should be thrown is because you didn't do defensive coding and check for that potential error beforehand. And unless you can actually do something about the error, it's a lot easier to just inform the user that there was an error; you're probably going to have some error logging anyway, so you'll be notified when these errors occur. Next slide.
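The helper functions mentioned above are thin wrappers around `new Response` with the right status code. A sketch of what they might look like — the names come from the talk, but the exact bodies are assumptions:

```typescript
// Hypothetical response helpers modeled on the ones described in the talk.
// Each wraps `new Response` with the right status, so you can `throw` it
// from a loader or action and have it land in the route's catch boundary.
export const badRequest = (body = "Bad Request") =>
  new Response(body, { status: 400, statusText: "Bad Request" });

export const unauthorized = (body = "Unauthorized") =>
  new Response(body, { status: 401, statusText: "Unauthorized" });

export const forbidden = (body = "Forbidden") =>
  new Response(body, { status: 403, statusText: "Forbidden" });

export const notFound = (body = "Not Found") =>
  new Response(body, { status: 404, statusText: "Not Found" });
```

Centralizing these means every loader and action throws consistent responses, and the catch boundary can switch on `caught.status` to pick a message.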

3. Code and Error Handling

Short description:

We'll start by discussing expected errors and performing validation with forms and params. I've created a helper library called Remix Params Helper, which uses Zod for data validation and conversion. By using get params or throw, I can verify the expected format and type of data. If the validation fails, I can return an appropriate response, such as a not found error. Additionally, I use typed JSON helpers to serialize the JSON payload and include metadata for non-JSON types. This allows for easier handling of complex types and nested properties. Server-side validation can be demonstrated by leaving a required field blank and submitting the form. The get form data function returns errors, data, and fields, allowing for customized error messages and easy access to validated data.

So now we're going to actually start talking code. All right, so the first class of errors, like we said, are the expected errors. So what you want to do is we're going to do some validation, not only with forms, but validating params and so forth. I actually wrote some helper libraries. You may have seen some of the packages that I've created. One of them is called Remix Params Helper. I use Zod to do my validations. It's a great library. You can define what the structure of your data should look like, and then you validate it. My helper not only does the validation, but it also parses the data and converts the data from the form or the Params or search Params into the format that Zod expected. So by default, everything pretty much tends to come in as a string. But if you have numbers or booleans or any other type, it will convert that final result so you don't have to do the conversion after the fact.

So here is my typical loader. Actually, I should probably show the route first. If you saw my remix-flat-routes discussion, this should be familiar to you. Here is my route, users.$userId.edit. So I have a userId param, and I'm expecting it to be a number. Here's how I do that: I get my params, and then I have my helper function, get params or throw. I pass in the params object and the schema — the format that I expect the data to be in: a userId that's a number. There are variants of these helpers: get form data or throw, get params or fail. Fail throws an error; I rarely use that. Throw throws a response, and I'll show how that works shortly. At this point, I want to verify that everything I'm expecting is correct so that I can focus on the happy path. I check for all my errors up front, and once I get past that — if I haven't thrown or returned something — I know that everything is good and I can return my response. So here, get params or throw is going to return a userId, and as you can see, because it knows from Zod that userId is a number, my variable is typed; I get type inference. Zod is very good at giving you your types correctly. Now that I've got a userId, I'm going to call my get user function and return my user. This is not a real database — it's just a static file — but it's going to return my user, and it knows that it's a user type. If I don't have a user, again, I'm doing defensive coding, so I want to check to see if my expectations have been met.
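The param-validation step described above can be sketched as follows. The real remix-params-helper takes a zod schema; here a map of hand-rolled coercers stands in for it so the sketch stays dependency-free, and the helper names mirror the talk rather than the library's exact API:

```typescript
// Sketch of a getParamsOrThrow-style helper. A coercer map plays the role
// of the zod schema: each coercer parses/converts one param or throws.
function getParamsOrThrow<T extends Record<string, (raw: string) => unknown>>(
  params: Record<string, string | undefined>,
  schema: T
): { [K in keyof T]: ReturnType<T[K]> } {
  const out: Record<string, unknown> = {};
  for (const key of Object.keys(schema)) {
    const raw = params[key];
    // Missing or malformed params are bad client requests: throw a 400
    // Response so it renders in the catch boundary, not the error boundary.
    if (raw === undefined) {
      throw new Response(`Missing param: ${key}`, { status: 400 });
    }
    out[key] = schema[key](raw);
  }
  return out as { [K in keyof T]: ReturnType<T[K]> };
}

// Coercer playing the role of z.number(): converts the string and validates.
const toNumber = (raw: string): number => {
  const n = Number(raw);
  if (Number.isNaN(n)) throw new Response("Expected a number", { status: 400 });
  return n;
};

// Usage, roughly analogous to:
//   getParamsOrThrow(params, z.object({ userId: z.number() }))
const { userId } = getParamsOrThrow({ userId: "42" }, { userId: toNumber });
```

Because the return type is mapped from the schema, `userId` is inferred as `number` — the same type-inference benefit the talk attributes to zod.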
So if I don't have a user — if they pass in a userId that is a number, but happens to be some number that's not in the database — then I'm going to throw a new response using not found. These are just helper functions; you'll see them in the utils folder in the repo. All they do is return a new Response. I have one for not found, one for bad request, for unauthorized, for forbidden; the invalid and success ones I'll show you shortly when we're actually doing form validation. So here it's going to return a 404 error, and that's going to end up in my catch boundary. At that point, I'm going to return my data. As you can see, I'm using my typed JSON helpers. Typed JSON is basically a replacement for the standard json function and the use loader data and use action data hooks. What it does is serialize the JSON payload but also include metadata for types that are non-JSON, like dates, BigInt, sets, and maps. If you're familiar with SuperJSON, that's another library that does this, and it definitely has support for far more complex types. My library is focused on the common types like dates, and it's also a lot smaller than SuperJSON because it is more limited — it's like two and a half K minified, compared to almost 10K for SuperJSON — so when you want to minimize your client payload, this is a useful package. The reason why I like my typed JSON is that I don't get the whole serialized-object wrapper around my JSON responses. So when I return a typed JSON here, I'm returning the user as a user type. When I actually consume it in my component over here — again, use typed loader data with typeof loader, using the standard type inference that you get with Remix now — my user object is actually a user.
It's not that serialized-object optional-wrapper stuff, which I think makes it really hard when you return objects with nested properties that have complex types; it's very hard to get access to those, and I'll show you that in a little bit. So let me go ahead and launch my app so we can actually see this in action. Sorry if you see me moving around — I'm dodging my microphone. So here's our users route. This one is pretty simple, similar to what I've done here, but it's just returning the list of users. When I click on a user, I'm now in my user edit route. Notice that again it used the typed loader data, and here are the fields. I'll talk about this get field helper shortly; it's mainly helpful when validation fails. I've done no client-side validation here, because typically what you'll do is put things like the required attribute, set some kind of pattern, or use an email input type, and client-side validation will flag bad input. But I want to show you how validation works server side and how you can render those errors. So here, I'm going to just leave a field blank and hit submit, and now it shows that age is required. If you look at my action, I do the get params or fail again just because I need to get the userId. Then I call my helper function get form data, passing in the request — it will do the await request form data — and the form schema. And here's the form schema: this is a Zod schema with a required name, an email that's an email type (so it'll validate that it's a valid email), and age as a number. Because the submit failed, it knows that age was required. So what happens here? Get form data returns a tuple of errors, data, and fields. Why three different things?
Well, errors, obviously, are form validation errors. If any of the Zod validation fails, I get an errors object that's keyed by the property name, with the error message as the value. In this particular case the error was on age, so it'll be errors.age, and the message was "required". You can also use Zod's custom error messages to customize the messages however you want. The data that's returned is, again, typed — based on the schema, this is a fully inferred type. So it has name, email, and age. This will be valid as long as the schema validation passes. And then finally, the fields. Why do we need the fields object? Well, the fields object is there because when validation fails, you want to return the submitted values along with the message.
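The errors/data/fields tuple described above can be sketched like this. Hand-rolled rules stand in for the zod schema the real helper uses, and the function name is invented to avoid implying it is the library's actual API:

```typescript
// Sketch of a getFormData-style helper returning the [errors, data, fields]
// tuple: errors for failed validation, typed/converted data on success,
// and the raw fields always echoed back for no-JS re-rendering.
type Rule = { required?: boolean; type?: "number" | "email" };
type Errors = Record<string, string>;
type Fields = Record<string, string>;

function getFormDataSketch(
  form: FormData,
  schema: Record<string, Rule>
): [Errors | null, Record<string, string | number> | null, Fields] {
  const errors: Errors = {};
  const fields: Fields = {};
  const data: Record<string, string | number> = {};
  for (const [name, rule] of Object.entries(schema)) {
    const raw = (form.get(name) ?? "").toString();
    fields[name] = raw; // always echoed back so the form can re-render no-JS
    if (rule.required && raw === "") {
      errors[name] = "Required";
    } else if (rule.type === "number") {
      const n = Number(raw);
      if (raw === "" || Number.isNaN(n)) errors[name] = "Expected a number";
      else data[name] = n; // converted, like zod coercion
    } else if (rule.type === "email") {
      if (!raw.includes("@")) errors[name] = "Invalid email";
      else data[name] = raw;
    } else {
      data[name] = raw;
    }
  }
  // Data is only trustworthy when every field validated, hence the tuple.
  return Object.keys(errors).length > 0
    ? [errors, null, fields]
    : [null, data, fields];
}
```

Note how `age` comes back as a real `number` on success — that conversion step is the main convenience the talk attributes to the helper.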

4. Handling Form Fields and Errors

Short description:

When you have JavaScript enabled, React will keep the fields that were submitted. If you don't have JavaScript enabled, Remix will re-render the form on the server with the initial data. To ensure correct default values, use the get field helper and pass in the original data and fields. The get field helper will return the correct default value based on whether there are fields or not. By redirecting after a successful action, you avoid having the post entry in the history and the possibility of duplicate charges. If there are errors, you can check for specific field errors using the errors object. Remember to re-enable JavaScript for the best experience.

Now, when you have JavaScript enabled, React will keep the fields that were submitted. So even if I typed some junk and hit submit — see, now it says expected a number, received a string — the field value is still there, because React kept it. The default value only gets updated when the component remounts, and the form doesn't remount; it's just being re-rendered with new data. However, if for whatever reason you don't have JavaScript enabled, when Remix re-renders your form on the server, it's going to be just like the initial page render. So your default value would end up being whatever the original user data was: instead of seeing what I typed, I would see 52 again. So what we want to do is return the fields. Fields basically just returns your form data back to you, and then when setting up the form, I call the get field helper. You pass in the original data — your initial data — your fields, and the property name. If there are no fields, it uses the initial value, like the user's name; if there are fields, it uses those as the default value. That way, if you re-render without JavaScript enabled, the form fills back in with the correct field values. Let me disable JavaScript and refresh to show that. And let me also show you what would happen if you didn't do this and just used the initial data directly as the default value, because that's what you would typically do. So here, I'm going to refresh — 52.
I type something else, hit submit, and I get 52 back, because without JavaScript, Remix is always going to server-side render, and when server-side rendering, it's always going to reinitialize the default value — in this case to whatever the existing age was. By using this helper, I ensure that I get the correct one. And what's nice is that get field has type inference. So if I pass a bad property name, I get a "not assignable to parameter of type keyof User" error, because get field is generic: it knows the type of the initial data and will only allow you to access its keys. Even though fields is keyed by string, this parameter always has to be a key of the initial type, so it'll give you a type error. And then finally, before we get any further: again, we had the user from typed loader data — that was our initial data — and typed action data is the data that gets returned from our action. If we have any errors, we return invalid. Invalid is another helper function that takes our data and returns it as a bad request, but this data is in a specific shape: there's an errors key and a fields key, and they have different types — the errors value will be an errors type and the fields will be a fields type. We also have another helper called success, and that's typically for when you return actual data from your actions. My recommendation is that unless you're using fetcher forms, you always redirect from your action on success and only return if there are errors. If, for example, you're doing something like a like button, you probably want to use a fetcher form, have the action, and then just return a success response.
But if you are doing anything else, I would recommend redirects. The main reason is that by redirecting, you don't get into that state where the POST is still in your history — the "don't refresh after a submit or your credit card may be charged twice" problem. By redirecting, you get rid of that POST entry in the history. And if we do have any errors, they're keyed by the field name, so I can just check for a specific field's error and render that message. That's why, when I had the error on age, errors.age rendered that message. I'm going to come back over here and re-enable my JavaScript, just so I don't forget. Okay. All of this is in my notes, so if you miss any of the talk or want to review it later, you can — yeah, I was up until midnight trying to get this workshop finalized. So let's go to the next slide.
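The get field helper described in this section can be sketched in a few lines — the name comes from the talk, but this implementation is an assumption:

```typescript
// Sketch of a getField-style helper: prefer the submitted field values
// returned by a failed action (so the form re-renders correctly even
// without JavaScript), and fall back to the loader's initial data.
function getField<T extends Record<string, unknown>>(
  initial: T,
  fields: Record<string, string> | undefined,
  name: keyof T & string
): string {
  // fields present => a submit happened (validation failed): echo it back.
  // fields absent  => first render: use the initial data as the default.
  return fields ? fields[name] ?? "" : String(initial[name] ?? "");
}

// Usage, roughly:
//   <input name="age" defaultValue={getField(user, fields, "age")} />
```

Because `name` is constrained to `keyof T & string`, passing a property that doesn't exist on the initial data is a compile-time error — the type-inference behavior described above.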

5. Server Side Error Logging

Short description:

Remix allows logging of server-side errors to the console by default. However, it is recommended to log errors to an error logging service like Sentry or Bugsnag. If your logging service does not have a Remix package, you can customize it by creating a patch. The patch creates a handle error export in your entry.server, which is called whenever an error response is caught. You can import Bugsnag and follow their instructions to start logging errors. By using the handle error export and passing in the request, error, and context, you can customize the error logging process. You can also upload source maps for production errors and use the source command to import environment variables in scripts.

All right, so server-side errors. This is all well and good — you've got your validations — but you're eventually going to have actual unexpected errors, and you need to know when those occur. Right now, by default, Remix will simply log the server-side error to your console. A lot of hosts will give you access to those console logs, but it's still a big wall of text which you have to scroll through or parse. So typically you're going to want to log your errors to some error logging service, like Sentry, Bugsnag, or LogRocket. Some services like Sentry actually have a package for Remix that you can install, which hooks into Remix's pipeline to make sure the errors are logged properly. And because the Sentry one is open source, if your logging service of choice does not have a Remix package, you can go look at it and see if you can customize it for your service. However — and this was prior to Sentry having their package — I wanted to be able to log bugs, and there was really no easy way to get access to the error in Remix. As somebody who has to get their work done and move on, I could have just complained and posted an issue on GitHub saying, hey, why isn't there a way of accessing the errors? Instead I just went ahead and hacked the source. We'll talk about patching in a little bit, but that's essentially what I did: I created a patch which adds a handle error export to your entry.server. So we now have a new export called handle error, and anytime Remix catches an error, it will call your handle error function and pass in the actual request, the error that occurred, and the context.
This is the context that would come from getLoadContext if you were using Express or something like that. I'm just using Remix App Server, so I don't really have context, but in the example that I posted online, I actually used Express and context and showed how you could use it to store the current user. So when you are notifying Bugsnag, you can pass in the current user, and in the Bugsnag console you'll see which user had the error. Aside from the handle error export, I imported Bugsnag — just follow the directions that they have — and start Bugsnag with the API key. At that point, I can start calling notify. So this is my Bugsnag console, and I'm going to go back to my app. Okay, this is demo time, so hopefully it works; it was working last night. I throw an error. As you can see, it threw an error. I get the wall of text in my log, but notice here in the console the "notify Bugsnag" message — my handle error got called. Now if I go over to my logging here and refresh... of course, it figures, the demo isn't cooperating. Ah, here is the error, though. Let me just go back to a previous one, because it actually did show the stack trace when I first did it. One of the problems is that in development you have the stack trace by default, because Remix enables the source maps. Remix compiles your app down into a single file, but when you are getting actual errors, you want to see — if I click on here — what line and file that particular error occurred in.
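The handle error wiring described above might look like the following sketch. Note the export exists because of his patch, not stock Remix; the signature follows the talk's description (request, error, context), and a local `notify` stub stands in for `Bugsnag.notify`:

```typescript
// Sketch of a handleError export like the one described in the talk.
// `notify`/`reported` are stand-ins for a real service client such as
// Bugsnag.notify or Sentry.captureException.
type ErrorReport = { error: Error; metadata: Record<string, unknown> };

const reported: ErrorReport[] = []; // stand-in for the logging service

function notify(error: Error, metadata: Record<string, unknown>): void {
  reported.push({ error, metadata }); // a real notifier would send this off-box
}

// Called by the (patched) server runtime whenever a request handler throws.
export function handleError(
  request: Request,
  error: Error,
  context?: Record<string, unknown>
): void {
  notify(error, {
    url: request.url,
    method: request.method,
    // e.g. the current user stashed in getLoadContext when using Express:
    user: context?.user ?? null,
  });
}
```

Keeping the request and context alongside the error is what lets the logging service attribute each failure to a URL and a user.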
What I was trying to do — and I think I screwed this up — is I uploaded my source maps for the production build, because I wanted to show that you can have source maps for your production errors. But I think it's also screwing up my dev errors. So let me delete those uploaded source maps — I never did get this thing to work; as you can see, I was up late last night. I think those source maps were interfering with my error log here, so when we restart the app, hopefully that will correct it. Okay, so we got this error now; let's come over here and refresh. Hmm, development... well, it is showing me where the error is, but before, it was actually showing color-coded syntax highlighting and all that kind of stuff. I don't know what happened there, but that just goes to show you that you can still log your server errors, even though Remix doesn't provide that out of the box. I actually have a script to upload source maps — I'm still working on it, but if you're using Bugsnag and want to know how to do it, this is how. Here's a cool trick, and I'll show it again a little bit in my Prisma section. As I said, I was hoping somebody would yell out if there was a question; I haven't been following the chat, I just zoned in on my code, so I'll go back over and look at those questions soon. But here's a cool feature: if you have scripts that need to access keys that are part of your environment, you can use the source command to import those environment variables, because the environment file is basically just a regular shell script anyway.
The `set -a` and `set +a`: `set -a` automatically exports any variables defined while it's in effect, so everything sourced from the .env file becomes visible to child processes, and `set +a` turns that back off afterward. And then, so here's just a curl command to do the upload with those keys.

So, all right, let's go back over your questions here. Okay, Andre: at some point, could you elaborate on throwing versus returning in actions and loaders? I'm not sure if I covered that in my talk or if you still have questions about it. Again, return invalid data from your actions. Okay, well, loaders aren't really... because you're either going to return data or not, because that's the point of loaders. It's the actions that really have issues. Actually, I guess throwing is important. Again, throw responses. Don't throw errors. So throw an invalid response, or throw a not-found or an unauthorized response. A nice thing about throwing is that you can have helper functions that actually do the throw. So if, at the start of your loader, you require a user, you can call a getRequiredUser helper, and that function can go in and verify that you have an actual user. And if you don't have an actual user, then it can throw a redirect to your login page, for example. That way you don't have to check in your loader itself whether you have a valid user and then return the redirect. So throw responses. And again, don't return data in your actions, except for validation errors, or if it's not a full form navigation (if you're just using fetchers or whatever), then you can return data, obviously, so you can get that data. But typically you'll want to redirect on successful form actions.

Nested fields: yes, I'm not sure if I have an example, but what I could do is we can just create one. I did this this morning, cause I wanted to show you why I use my typed loaders instead of the standard ones. Because here I'm using standard Remix functions now.
So when I'm returning those, so I'm returning an invalid response with all these errors, this is what ends up happening: my useActionData type gets mangled by the serialized-object, undefined-to-optional wrapper.
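The throw-a-redirect guard mentioned in the throwing-versus-returning answer can be sketched with standard web APIs. This is a sketch, not the speaker's code: `getUser` is an assumed stand-in for a real session lookup, and in Remix you would use its `redirect` helper rather than building the Response by hand.

```typescript
// The pattern: a guard that *throws* a redirect Response instead of
// returning one, so loaders that call it never need a null check.
type User = { id: string; name: string };

// Assumed stand-in for a real session/cookie lookup.
function getUser(request: Request): User | null {
  const cookie = request.headers.get("Cookie") ?? "";
  return cookie.includes("userId=") ? { id: "1", name: "demo" } : null;
}

function redirect(location: string): Response {
  return new Response(null, { status: 302, headers: { Location: location } });
}

function requireUser(request: Request): User {
  const user = getUser(request);
  // A thrown Response short-circuits the loader; the framework sends it as-is.
  if (!user) throw redirect("/login");
  return user;
}
```

A loader can then start with `const user = requireUser(request);` and use `user` unconditionally.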

6. Using Type JSON and Overriding Default Functions

Short description:

To access the internals of the wrapped type, the speaker uses a type form and passes along the returning type to the loader data. The type JSON can be standalone, with Remix acting as a wrapper for remix-specific tasks. The speaker acknowledges the possibility of better ways to simplify and clean up the code, as they are not a TypeScript expert. They mention using the type JSON for a few months and have been using it in their code. The speaker also talks about a command script called GenRemix, which generates a file called Remix in the app directory to handle package re-exporting. They explain how they override default functions in the Remix node, JSON, redirect, metafunctions, and react, and use their typed versions in their code. The speaker concludes by mentioning the ability to export a test function that uses the Remix package and demonstrates an example using loader data and a date type.

So I have that, but now because it's wrapped in here, it's hard to access this when I wanna get typing. So even though my GET invalid that caused these GET errors where it checks for data and checks to see if the particular key's in the data and returns the data as a type, all of that goes by the wayside because I don't, the initial data that I'm passing in doesn't have the full errors. I haven't figured out exactly how to like unwrap this because I really just want to get access to the internals of that, not the serialized stuff. So that's the main reason why I use my type form because I just pass along whatever type that I'm returning to my loader data. And the assumption is that when it's deserialized back into a native, from JSON, that any type conversions that it needs, such as, you know, date, string to date, so forth will automatically be done before I have my data. And it does support nested fields. I don't have an example, but I will try to come up I will try to come up with one and update the, the repo with that.
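As a rough illustration of what the typed-JSON approach does under the hood, here is a simplified sketch. It is not the actual remix-typedjson implementation: it only handles Date values at the top level, and the metadata format is invented for the example.

```typescript
// Serialize values JSON can't represent (here, Date) alongside a
// metadata record, then use the metadata to restore them on
// deserialization, so a Date round-trips as a Date, not a string.
type Meta = Record<string, "date">;

function serialize(obj: Record<string, unknown>): string {
  const meta: Meta = {};
  const plain: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value instanceof Date) {
      meta[key] = "date";
      plain[key] = value.toISOString();
    } else {
      plain[key] = value;
    }
  }
  return JSON.stringify({ data: plain, meta });
}

function deserialize<T>(json: string): T {
  const { data, meta } = JSON.parse(json) as {
    data: Record<string, unknown>;
    meta: Meta;
  };
  for (const key of Object.keys(meta)) {
    if (meta[key] === "date") data[key] = new Date(data[key] as string);
  }
  return data as T;
}
```

The real package handles nested values and more types, but the shape of the trick, data plus a metadata record describing what to revive, is the same idea.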

Again, the... Yes, the recording. Again, this is the first time I've worked with GitNation, so my understanding is that the recordings will be available for people that are members there, or anybody that bought a ticket to the conference. Okay. Again, if you have a question and I miss it, you can just shout out and say, hey, got a question. You don't even have to say the question; you could type it out, just kind of get my attention so I don't miss you. All right.

Okay, that wasn't too bad, 40 minutes. Sorry, I'm losing my voice already. A lot of the stuff, like typedjson and my params helper, although they're packages, in most cases they're a single file. And so for me, when I'm working on stuff, every project's a little bit different, and you end up wanting to tweak your code, you come up with a better pattern or what have you. And I don't necessarily wanna go through the whole publish-and-import process, so I just copy the file directly into my project and edit it there. Eventually, some of the enhancements that I've made will make it into the actual package. I did break some of the API; I changed some of the way that it's done. I think it's simpler this new way, so it will be a breaking change. Luckily, I'm still on version zero-point-something, and it's not like it's a major change, so anybody can pretty much do a search and replace for stuff.

So yeah, typedjson again. The typed JSON part can actually be standalone, and the Remix part is just a wrapper that handles all the Remix-specific stuff. All right, was that the last of the slides? Yes, so thank you. All right, we got through one. One down and, what, three or four to go. Anyways, I hope this is interesting to you.
It's more, I'm kind of just, you know... it may not necessarily be your typical workshop where you're doing hands-on stuff, but I think that this helps. Okay, yeah, I'm pretty sure there are better ways to do it. I'm not a TypeScript expert. I kind of just hack at it until I get it into a form that works for me, so I'm sure there are a lot of ways that I can simplify or clean things up. The funny thing is that I've been using the typed JSON for a while, since, I don't remember exactly, two or three months ago when I first wrote it. And so all of my code has been using that. In fact, I actually have a helper; as you know, I write a lot of code. So let's go to here, OCLI. Back over here. Yeah, I'm gonna have to rethink this microphone placement. I have this command script called GenRemix, and I'll show you how that works here.

What GenRemix does is: I have a config file here where I can specify which packages I want to re-export. One of the issues we had, and I'm not sure how many people started with early Remix, is that they used to have a meta package called just remix that re-exported all the platform-specific ones, so from node, Cloudflare, Vercel and so forth. And you didn't have to worry about which package something was in; it was in the remix virtual package. However, what they were doing was actually rewriting files in node_modules, which ended up being bad form, especially for people that use plug-and-play and those kinds of things. So instead, what I do is just generate a file in my app directory called remix. One second, let me find what the command is. And you have it set up as a postinstall script, so anytime I update Remix, it will regenerate the correct one. And I think I'm going to have to... I've got to install this, because I am actually overriding... This is one of the things I like about this. Again, because I'm lazy and I want to...
What I do is I override the default functions in @remix-run/node (json, redirect, the meta functions) and @remix-run/react, all of those, with my versions. So now in my code, I can just use json and useLoaderData, and it will instead use my typed versions. That way I don't have to remember to reach for the typed version explicitly. So here, all it did was go through all the packages that I exported, and it will then create a new file called remix that has all the imports. So here are the imports that I'm gonna override. This is not my fault: they actually exported this as a JSON function, but it should have been exported as a type. So this helper does go through and figure out which ones are values and which ones are types. And again, I think these are all just issues with the original exports.

So one of the nice things about that is that I can go in now into a route and... it's gonna go ahead and create a new one, test that. So I can export, let's say, a loader. One second. Sorry. Okay. When I go to export, see, now I can add the import for my remix package. So everything is gonna be there. I can select my remix package, json, and let's say message is "Hello World", and then we'll say date is new Date(). Okay. So then I can come over here and export default function Test. We'll useLoaderData from my remix, typeof loader. It should be a date. And then I can say return: message at date. Sorry, I converted date with toLocaleDateString.

7. Handling Dates and Type JSON

Short description:

I use typed versions in Remix export, JSON, and Loader data to handle date conversions. The type JSON package provides typing without the extra payload of Super JSON. I discussed the benefits with the Remix team and hope they consider importing it. Rendering dates in a client-only approach can avoid hydration issues. Using the time element in HTML with UTC values and converting to local time or using relative time can provide a consistent display. Be mindful of content layout shift when rendering dates on the client. I use the type JSON package to ensure consistent data between the server and client.

Okay, so as you can see, I was able to use... oh, that should be typed. I'm using my remix export. I have json, I have useLoaderData, but these actually use the typed versions, because as you can see here, it knows that this is a Date. If you were to do this in standard Remix, date would be treated as a string, because that's the JSON-serialized version. So here we're going to have my toLocaleDateString call. Because it's a Date, it can still do that. And if you were using the standard serialized one and you didn't have typeof loader, you just used it plain, then everything would be any, because date would be a string. You could still call this function, but you would get a runtime error, because a string doesn't have that as a function. So now that I have that, I'm going to go ahead and go back to my app. This time I'm going to test, and here you go. See, I got the hello world.

Did you ever discuss this with the Remix team? Because I would find it quite a useful addition to standard Remix if they included this functionality there. We've... In terms of the typed JSON, or are you talking about the... Yes, the typed JSON. Yes, there is an actual discussion. Sergio actually originally opened the discussion about making SuperJSON the default, but the Remix team was hesitant because SuperJSON is such a heavy package. That's one of the reasons why I wrote typedjson: I wanted to have the typing, but I didn't necessarily want the extra payload. Hopefully we can get them to import it. I think it makes it a lot easier, because you don't have to worry about doing the conversion. Because yes, when they say type safety, they're basically saying: yeah, we're gonna tell you what useLoaderData actually returns, which is a JSON-serialized version of whatever payload you sent. But it's not type safety in the sense of: hey, if I pass in a Date, I want that to be a Date on the other side of the network.
So that's one of the reasons why I did this, especially when you're dealing with Prisma and you have date fields all the time, so I didn't wanna have to worry about doing the conversion in the loader or at the component level. And you get full type refactoring too. So if I wanna rename this to today... oh, yes. I wish; that is one of the things it does. It's saying, hey, today is the variable, but you called it date here, so it doesn't rename it. And I think there's a way to get it to not do that, but at least it knew that the variable here was today. It's just that when you destructure, you can override the actual property name, and that's why it kept this as date. So here, I'm gonna change that. Change this to today. So here.

So: do you have a hydration issue with toLocaleDateString? My app is full of logs about that, a time zone mismatch from server to client. Should I render dates client-only? Yes, that's one of the things that I typically do. If I'm gonna have a bunch of dates, there are a couple of things you can do. What I will do is use the time element (HTML actually has a time element), and I would put the dateTime value as the UTC value. I typically try to save all my stuff as UTC. Then I would render the text with the local time, so I would convert that to a local date-time. Or, if I don't really care about the exact time but just want to know whether it was now or, like, three hours ago, then I would use one of those helpers that give you the how-long-ago type of thing. So instead of it being a specific date-time, it would just say five hours ago. And because five hours ago will be the same on the server as it is on the client, you won't get the hydration issues.
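A hydration-safe relative-time helper along those lines might look like the sketch below. The cutoffs and wording are arbitrary choices for the example; in JSX you would render the result inside a `time` element with the UTC value in its dateTime attribute.

```typescript
// "5 hours ago" is the same string on server and client because both
// compute it from the same UTC instant, so the markup matches on
// hydration regardless of the viewer's time zone.
function timeAgo(utcIso: string, now: Date = new Date()): string {
  const then = new Date(utcIso);
  const seconds = Math.floor((now.getTime() - then.getTime()) / 1000);
  if (seconds < 60) return "just now";
  const minutes = Math.floor(seconds / 60);
  if (minutes < 60) return `${minutes} minute${minutes === 1 ? "" : "s"} ago`;
  const hours = Math.floor(minutes / 60);
  if (hours < 24) return `${hours} hour${hours === 1 ? "" : "s"} ago`;
  const days = Math.floor(hours / 24);
  return `${days} day${days === 1 ? "" : "s"} ago`;
}

// In JSX you'd render it roughly as:
// <time dateTime={utcIso}>{timeAgo(utcIso)}</time>
```

Coarse buckets like these can still differ if a request straddles a boundary, but for most UIs that's an acceptable trade-off.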
Part of the problem though is that you do have to worry about content layout shift because if you don't render the date time on the server, it's basically gonna be blank. And then depending on how that is in your layout, once it actually renders on the client, it may cause the text to shift a little bit. So try not to put those, if you're gonna use those kind of times, make sure that they're either in a place that's not gonna be an issue or you perhaps set some kind of width on it to ensure that it'll automatically take up as much space as you expect to begin with and then it will fill in with the text. So... So that's, hopefully that answers that question. So, yeah, so again this, I was just kind of showing you some of the tips and techniques that I use. I use my little remix import file. The type JSON gives me the ability to, I don't have to reason about it. It's whatever data that I pass in and return from a loader I'm going to get that same exact data on the client. Now, granted, again, the type JSON is limited in the types of values it is, but I haven't really had any use for things beyond the typical stuff that I already do, so...

8. Multi-page Forms with Remix

Short description:

This part covers multi-page forms with Remix. Different ways to render and process multi-page forms are discussed, including the example of a single route with a single form where inputs are shown or hidden based on the current page. The speaker emphasizes the use of loaders to manage state and store data directly in the fields. The example app demonstrates how data is retained when navigating back and forth between pages. The loader retrieves the current page's data from the session object, and the layout component handles the rendering of each page. The form uses buttons with different values to determine the action to be taken. The speaker also mentions the use of Tailwind and the Remix app server in their app.

All right, so that concludes the error handling portion plus miscellaneous. So the next one is, this one's actually going to be pretty simple mainly because I just created, I created the sample on CodeSandbox a while back to answer somebody's question. I was planning on doing more advanced stuff here, but just ran out of time. So I just copied the sample. It has some commentary on it, so... But we'll go over that as well. So let's go ahead and, we'll go ahead and close out all of the... Let's save everything. Okay, so now we close the error logging, go to multi-page forms.

Anyways, like I said, this is my first time. So I'm hoping that you guys are okay with the format, that it's not your typical workshop. So here's the discussion about the multi-page forms with REMIX. We're talking about the multi-page forms, okay. We're gonna show a sample showing how to render and process multi-page forms. Okay, pretty simple, straight forward there. All right, rendering multi-page forms. All right. As you know, there are many ways to do this. It all really depends on your app, what works, how your form is. Some are complex, some are simple. You just wanna be able to collect some data. But you don't wanna have one giant, long scrollable form. So there are different ways you can do this. For example, you could have a single route with a single form and simply show or hide the inputs based on which page the user is currently on, which is the example that I'm gonna be showing. Because again, I was just trying to make do the simplest thing possible. I'm actually gonna show you an example from an actual app that I'm working on. It's not open source, but I can show you a little bit of it. And I'll probably actually create a sample that takes some of the concepts from this app and make it an open source example.

So here, if you're going to store everything on a single page, in a single form, you're gonna always want to return the current page's data, because we're not using any local state. That's one of the nice things about Remix, especially with forms. Back in the bad old days, everything was useState and event.preventDefault. Here, we're using loaders to manage our state, and we're storing the state directly in the fields themselves. So they're uncontrolled inputs, not controlled ones. This way, when the user navigates back, you won't lose the data.

So let me go ahead and start that app so we can actually see it while I discuss it. As you can see from my packages, I use Tailwind, and I still use the Remix App Server, so I still have to generate the Tailwind output directly when I run in watch mode. And I like to always clean my build folders in between, so I have a pre-build that runs rimraf, which is kind of a cross-platform version of rm -rf. I delete my build and public/build folders; that's why you see so much stuff happening at the beginning. Must not have... yeah, I guess I didn't update my .env. That's all right, it still works.

So let's go ahead and launch my app. Okay, so here is a multi-page form. Now, these photo ones don't actually do anything, but I will show you how to do file uploads in the app that I'm actually building, which is pretty cool. So here's this multi-page form. It's actually... that's funny. Because I'm storing the data in session, it still remembered the session data, so my form data. As you go through, all of this is data that I had already filled in. So let me go back here, and we'll actually delete those cookies, because I'm using session-based storage. We'll talk about different types of storage shortly. So here, now I've cleared my session and I don't have anything. So here I can go ahead and type in web developer and click Next.
And if I go back to previous, that data still shows up. And the way this works is: in my route, here's my loader. Okay, this doesn't have all the stuff that I normally do with my params helper, because again, I literally coded this on CodeSandbox, so it's just kind of default Remix out of the box. So here I'm getting the page number, and I default to 1 if it's not present. I get my session, a standard Remix session object; it's cookie-based. And as long as I'm not on the last page, so less than 4, I'm going to get the current data for that page out of session. The key is "form data" plus the page number, so each page gets its own key in the session object. And then if I don't have a value, I just create an empty object, and then return that with the page number and the data that I got.

So in my component, my layout component, I get the loader data: I get the page, and I get the data. And then I simply check the page number for whether the previous or next button should be displayed. And then here's the actual form part. We have a single form at the top, and then each page: if page equals one, I render this; if page equals two, render this. So there's nothing to it. All I did was take a form that was three pages long and required a lot of scrolling, and instead just chunked it up. Still a single form. The next and previous buttons are standard submits. But here, as we know, if you give a button a name and a value, when you click on that button, that's the value that will be submitted under that name. So even though they're buttons with the same name, when I click on the previous button and read "action", it will be "previous"; if I clicked on the next button, "action" will be "next". Again, buttons always submit. So when I hit next, let's go ahead and just go over here. When I hit next, all right, so here's my post.
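The per-page loader logic just walked through can be sketched framework-free. The `form-data-page-<n>` key format and the in-memory Map standing in for Remix's cookie session are assumptions for illustration.

```typescript
// Each page's values live under their own session key; the loader
// returns the current page number plus that page's saved data so the
// inputs can re-render their values when the user navigates back.
type Session = Map<string, Record<string, string>>;

function loadPage(url: string, session: Session) {
  // Default to page 1 if no ?page= param is present.
  const page = Number(new URL(url).searchParams.get("page") ?? "1");
  // For pages before the final one, pull that page's stored data
  // (or an empty object if the user hasn't filled it in yet).
  const data = page < 4 ? session.get(`form-data-page-${page}`) ?? {} : {};
  return { page, data };
}
```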

9. Managing Data for Multi-page Forms

Short description:

The payload for the form is extracted using the qs package. The page, action, and other fields are collected in the data object. The session is updated with the data for the current page. If the action is 'next', the current page is incremented by one; otherwise, it is decremented by one. The session is committed and the cookie is set. The same process is repeated for the second page. Fields are optional, allowing users to skip them. The final page collects all the data from the session and renders it as JSON. Forms can be implemented in various ways depending on the use case and desired user experience. Complex forms can be split into separate routes or components. Other storage options for multi-page form data include storing each page in session or using external storage solutions like FileBase, Database, or Redis.

When I hit next, all right, so here's my post. My payload: page 1, username, about, file upload. But here, action was "next", because that's the one I clicked on. So now in my action... oh, this is real old. Oh, I know why I did that. Okay, I want to get the payload, and remember, this is what the payload actually looks like by default. Hopefully you guys can see it; it basically looks like a query string. Oops, I think I went a little too far. Okay, so it looks like a query string, and the reason why I did that in this particular case is because I actually do have a multi-checkbox list: you can check off multiple items, and you need to be able to get those as individual values. It will do name=value&name=value2, and you'd typically use getAll, which returns an array. I wasn't doing anything specific at the page level in this example, so I wanted to make it very generic, so it didn't matter what the page was doing. So I'm using this package called qs, and qs.parse will actually handle those multi-value names directly. If I were to use my params helper, the params helper also does that; it will convert duplicate names into an array value. But I was just putting this together really quickly, so that's why; I was trying to remember why I did it that way.

So here I'm extracting the page out of my parsed payload (let's go back to the parsed version), I'm extracting the page out, the action out, and all the rest of the fields I'm just collecting in data. Then I'm getting my session and setting that page's data with everything else.
So username, about, file upload, excuse me, all of those will be set in the data, the session variable, for page one. Then it checks to see if the action is "next": if so, I add one to the current page; if not, I subtract one from the current page to go previous. And then I redirect to that page. And again, anytime you're dealing with sessions that use cookie storage, you always have to commit the session anytime you mutate it; you have to always return that cookie. And one thing people miss: if you use session.flash... flash is one of those where you can set a value, and then the next time it's read, it will automatically remove that value. You typically do that with toast messages or whatever; they show up on the next page. Because, remember, as I said before, you should always redirect after a successful action. Well, if you want to display a success message after that redirect, there's no other way to render that message. So you would set it in a flash, and then on the resulting page, you would read from the flash and render that message. However, reading from flash, even though it's a get, does mutate the session object by removing that key from the session. So you still have to make sure you commit the session and set the cookie. That's a gotcha some people don't realize, because you would assume that gets are non-mutating, but unfortunately the flash one isn't.

So here, because I'm in an action and mutating the data, I'm going to commit my session and set that cookie. So now I'm on the second page, and I can do the same thing. So let me see. And I don't have to fill in all of the data. That's one of the nice things about these types of forms: sometimes the user doesn't know all the data right away.
They might want to just skip that and come back to filling in the form later, and if you require them to fill in everything, it's not a good user experience. So by making all these fields optional, they can fill them in as they go. And then at the final stage, when you're doing, like, an order form or whatever and they actually pay, you must have all that other stuff completed, and you can give some feedback in terms of which data is still required. So here I'm gonna go ahead and hit next. And as you can see when I posted, here's my payload for form two: there's my page, first name, last name, email. So it still has all the keys; there's just no text associated with them. And I still have the action "next". So in this case, session page two now has this data. And if I go back to the previous one, because my loader always returns whatever the current data is for that particular page, I can re-render the form with the previous data in there. Sorry, I think I actually went back all the way... or did I?

All right, so this is where I was talking about the multiple checkmarks. If you look here, if I check multiple and say next, I'm gonna get... yeah, here, on page three. This is standard HTML: if you have a checkbox field, or any fields that have the same name, you get multiple values. So here are my checks and comments. These were the contact checkboxes, named email, and the value is what's returned if it's checked. If you have multiple checked, the name gets sent multiple times, and what I want is for that to be treated as an array. So with that being said, when I get to the final page, the page is four now, my final page, I'm just gonna go ahead and collect all the data that's in the session.
So I just spread page one, two, and three into a single data object and then return that and for that final page, I'm just simply rendering that as JSON. So as you can see, all of the values that I collected are showing up and email is even showing as an array as requested. So any questions there? Again, forms, there's probably as many ways to do forms as there are web pages. So it really is all dependent on your use case and the UX that you want to provide to your user. About 15 more minutes and then we'll take a break. Let's see. Let's go back to my notes here. So again, this is about... So that was the simple form, just you have a page, you show the page, and then if it's not the current page, then it's hidden. So those values kind of disappear from the browser, that's why you have to return those previous page data there because there's nothing holding it. So you don't have to do things like... If that section is not shown, have a hidden input to store that data. And again, you're not using React you state to manage that data, everything is stored on the server and then returned as needed. So I think that's one of the reasons why I like Remix is that it just gets a lot of that boilerplate stuff that you used to have to do with any type of form based applications. You just do what... It's kind of like the simplest way and the simplest way it turns out to be the correct way and you don't have to really... It's definitely much easier to reason about. So the other option, again, would be, especially if you have a really complex form instead of having everything in one big route like this, what you may wanna do is create separate routes. So you may have a top level route and then each page would be a separate route that gets rendered in the parent's outlet. So that way you can break those out into separate things and people can even bookmark them. 
But the nice thing about it is that the actual storage and processing is pretty much still the same, but each page can now have its own logic. Remember how I said the loader and the action were very generic? By separating them out into separate routes or separate components, each of those routes can have its own logic. Yeah, I'm gonna get to that next, and then I'm gonna actually show you this demo of my multi-section form. Because, like I said, I have a really complex route; it was about 1,200 lines long, because I was just copying and pasting when I was prototyping it. When I did the refactor, I used flat routes with the co-location feature. This was a multi-section form with collapsible panels, so I extracted each form section into its own independent component, which I could co-locate in the route folder. So I could import those without having to put them in some app components folder. And I'll show you that in a little bit. But yeah, good question: cookie storage has a size and payload limit, so what other storage options are available? That was the next slide.

So, managing data for multi-page forms. Okay, there are several ways you can store the data for multi-page forms as the user navigates. Because that's one of the things you want to do: you always want to store the data they submit. One, it allows them to come back later, whether through previous/next navigation or even at a future time. Maybe they weren't able to complete the form and want to pick up where they left off; having some way to do that results in a good UX. So you can either store each page in session, like we did.
And you can do sessions that are not cookie-storage sessions. All sessions are cookie-based at some point, but the cookie will typically just hold a session ID that you then use to access some external storage, whether it's file-based, a database, Redis, any of those kinds of things. So the cookie just stores the session ID and not the actual contents of the session. In our case, though, we were using cookie storage, so the actual data was stored in the cookie itself. In fact, if I go back over here and look... as you can see, and I'm not sure how well you can see that. No, there's a... I keep forgetting the actual command for it. Okay, there it is.
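The action side of the flow from this section (save the posted page's fields, then redirect next or previous) can be sketched with the standard URLSearchParams API instead of qs. The key format and the Map session are illustrative assumptions; in Remix the redirect would also carry the committed session cookie.

```typescript
// Parse the posted fields, save them under the current page's session
// key, and compute the next/previous redirect. Duplicate field names
// (e.g. checkboxes that share a name) become arrays via getAll, which
// is the behavior qs.parse provided in the demo.
type SessionData = Map<string, Record<string, string | string[]>>;

function handleFormPost(body: URLSearchParams, session: SessionData) {
  const page = Number(body.get("page"));
  const action = body.get("action"); // "next" or "previous" (button value)
  const data: Record<string, string | string[]> = {};
  for (const key of Array.from(new Set(body.keys()))) {
    if (key === "page" || key === "action") continue;
    const values = body.getAll(key);
    data[key] = values.length > 1 ? values : values[0]; // duplicates -> array
  }
  session.set(`form-data-page-${page}`, data);
  const nextPage = action === "next" ? page + 1 : page - 1;
  // In Remix you'd redirect here with the committed Set-Cookie header.
  return { redirectTo: `/form?page=${nextPage}` };
}
```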

10. Storing Session and Form Data

Short description:

When storing session data, consider the size limitations of cookies and the potential for data loss. Storing data in a database allows for durable storage and cross-device access. However, required fields and relations may pose challenges. Alternatively, using a key-value store can simplify storage but may require additional processing to save the data into the final models. Storing data locally in the browser using local storage or IndexedDB is suitable for offline support but adds complexity. The speaker shares a project using Remix Flat Routes and explains how Zod schemas are used on the server to avoid bundling unnecessary code. The project also demonstrates features like drag-and-drop image uploading and recording and uploading videos using signed URLs with Cloudflare.

See, you can see that the session object is pretty big. So, as the form gets larger, the cookie value will get a lot larger, and you may exceed that limit. Also remember, the cookie gets sent on every request, even requests for CSS or JavaScript or images. It's not just sent on meaningful route navigation. So yeah, I don't typically recommend using cookie-based sessions unless your session data is going to be relatively small: things like the theme, or the current user ID, the authentication-type cookies.

Now hopefully I can remember how to... Okay, there it goes. It went away. All right, so let's go back to my doc. I can close that now. All right, so we can either store the data in the session, store it in the database, or use local storage in the browser. Storing in the session is the simplest. As the user navigates, the submitted form data is added to the current session, keyed by page number. When the final page is submitted, you should have all the data needed to persist it to your database. As we saw, when we got to page four, all that data was there. And at that point is when you validate to make sure they filled in all the required fields, and if not, show links back to the pages where they can fill those in. So the pros, obviously, are that it's simple to use and the data persists as long as the session exists. As you saw before, I filled out that stuff last night, came back, and because it was stored in my cookie, all those fields still persisted. The cons: as we discussed, with cookie storage the session can get pretty big depending on the data. If the session expires before they've finalized it, they lose that data. And the other con is that you can't continue from a different browser.
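The per-page session flow described above can be sketched as a small framework-free helper. The names (`savePageData`, `allPagesComplete`) are mine, and a plain object stands in for Remix's session so the merge logic stands on its own:

```typescript
// Sketch: merge one page's submitted fields into the session-held draft,
// keyed by page number. A plain object plays the role of the Remix session.
type Draft = Record<string, Record<string, string>>;

export function savePageData(
  draft: Draft,
  page: number,
  formData: Iterable<[string, string]>,
): Draft {
  const pageData = Object.fromEntries(formData);
  // keep earlier pages untouched; overwrite only the current page's entry
  return { ...draft, [`page${page}`]: pageData };
}

export function allPagesComplete(draft: Draft, totalPages: number): boolean {
  // final-page check: every page must have been submitted at least once
  return Array.from({ length: totalPages }, (_, i) => `page${i + 1}`)
    .every((key) => key in draft);
}
```

In a real action you would read the draft with `session.get(...)`, merge with `savePageData`, and commit the session in the redirect's `Set-Cookie` header.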

So the other option is to store the data in the database. One nice thing about that is that you can durably save the data, so you don't have to worry about it going away if the session expires. And because it's a centralized store, the user can come back and pick up where they left off regardless of which device they're on. One of the potential issues with storing in the database is that, depending on your data model, you may have problems with required fields or relations. If you have a model with required fields and you want to save only partial data, the model is going to reject it: it's going to say, oh, that field can't be null, it's got to have a value. So you either make all those fields optional, which is obviously not ideal, or you create a separate data model just to store this transitional data. It can get a little tricky. Or you could just have a simple key-value store, kind of like a session, where you store the blob of JSON as the value. Then it's not until the final page that you save it into the actual data models. Because sometimes you have real relations: you have to create a project before you can add team members to it. So if you haven't had a way to create that project yet, and the user is on another page adding team members, you may have issues, because you can't save the team members without a reference to the project ID, which hasn't been created yet. So a lot of times what you'll do is save all that data in a key-value store and then populate your final models later.
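The key-value fallback described here can be sketched like this. `upsertDraft` and `loadDraft` are hypothetical names, and a `Map` stands in for a Prisma model (something like `model FormDraft { userId String @id; data String }`) so the merge logic is runnable on its own:

```typescript
// Sketch: a key-value draft store; a Map plays the role of the table.
const drafts = new Map<string, string>();

export function upsertDraft(userId: string, partial: object): void {
  // merge new partial data over whatever was saved before (like an upsert)
  const prev = drafts.has(userId) ? JSON.parse(drafts.get(userId)!) : {};
  drafts.set(userId, JSON.stringify({ ...prev, ...partial }));
}

export function loadDraft<T = Record<string, unknown>>(userId: string): T | null {
  const raw = drafts.get(userId);
  return raw ? (JSON.parse(raw) as T) : null;
}
```

Only on the final page would you validate the assembled draft and write it into the real models, in the order their relations require.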

And then the final option is storing the data locally in the browser. Yes, you can use local storage, you can use IndexedDB. However, I don't recommend that as a primary option. It would have to be a very specific use case where you need the user to be able to work offline. One of the issues is that once the server is no longer directly involved in storing the data, you kind of revert back to the old method of maintaining local state, and that just adds a lot of additional code. So again, I would only use that in specific use cases. In fact, in my example that I'll show, I have an option to record video, and I don't want the user to record 10 minutes of video and then find out when they go to save it that something's broken. So what it actually does is, as they're recording, each of the video blobs gets stored in IndexedDB, and when they stop the recording, I bundle up all those blobs and send them to the server. So again, those are specific use cases. Because ultimately, you want to do what's best for the user and enhance their user experience. And losing data is probably one of the worst things that can happen to a user. So anything that you can do to mitigate that is always helpful.
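A minimal sketch of that record-then-upload idea: in the browser the chunks would be written to IndexedDB from the recorder's `dataavailable` events, but here a plain array stands in for that store so the assembly step is clear:

```typescript
// Sketch: buffer recorded media chunks, then assemble one Blob for upload.
const chunkStore: BlobPart[] = [];

export function saveChunk(chunk: BlobPart): void {
  // in real code: write to IndexedDB keyed by recordingId + sequence number
  chunkStore.push(chunk);
}

export function assembleRecording(mimeType = "video/webm"): Blob {
  // combine all buffered chunks into a single Blob to send to the server
  return new Blob(chunkStore, { type: mimeType });
}
```

The IndexedDB step is what protects the user: if the upload fails, the chunks are still on disk and the upload can be retried.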

All right, so let me actually show you. This is the actual project that I'm working on. I'm not sure how well you can see this, but as you can see, this is using Remix Flat Routes. So this is my routing structure, and I can see at a glance what all my routes look like. And this is the one with the message form. I actually split my route into two separate files: the main route file and then a route.server file. All my server stuff, I keep in the route.server file. The nice thing about this is that when I'm using Zod schemas, because it's all on the server, I don't have to worry about the Zod package, which is almost 20 KB. I don't want that in the client bundle. By only using the Zod schema in the server code, I don't have to worry about it getting bundled. So let me actually... oh yeah, this one doesn't have the auto port, let me kill port 3000. All right, don't look at that, you guys, because it's not actually live yet. Okay, here it is. So here are my collapsible forms, so I can click... why isn't it? Oops. Something isn't working, figures. I don't wanna go to the test one. Okay, so this is one where I can actually drag pictures. I can drag and drop, and of course it's not working here, but it will go out... and this is actually using Cloudflare Images. I have an API call that will go out and fetch a private, secure URL to upload directly to. I fetch that from my Remix server, and when it comes back down, I get the URL, then I upload directly to Cloudflare from the browser. I don't want the browser to upload to my server and then have my server turn around and upload it to Cloudflare. But also, because of the API keys and stuff, I can't just upload directly to Cloudflare.
So there's an indirection where I first request from their API a URL that's specific to that upload. Yeah, signed URLs. Basically, yes, thank you. And so I do that for the photos. I'm not sure why it's broken right now. I also have the ability to record audio, and then record video. And this is the one I was talking about where I record a video, it stores the data in IndexedDB, and then uploads. Same thing: gets a signed URL and uploads to Cloudflare Stream, which is pretty cool. Cloudflare Images, if you haven't used it, is kind of like Cloudinary and some of these other third-party services: it will regenerate the image based on whatever the user's device is. For example, with Images, you can specify in the URL things like cropping and widths, and it will determine what formats the device can handle and automatically generate images for that. Same with the video: you upload the raw video and it does all the conversions to the different formats a device needs. Depending on its capabilities, you'll either get a high bit rate version or a lower bit rate version. So yeah. And I'm not trying to sell anybody on Cloudflare, but they do have some great services and they're relatively easy to use. On the video panel here was...

11. Using Standard Fetch and Form Data in a Remix App

Short description:

You can still fall back to other ways. The use fetcher is great for specific tasks, but it doesn't support async await. Standard fetch can be used in a remix app to make remote calls and get results. Form data is useful for handling form-related data.

This is that... oh, attachment. Okay, here it is. The attachment uploader is where I fetch. See, this is one of the things that people don't realize: everything doesn't have to go through Remix. You can still fall back to other ways. For example, useFetcher is great for the types of things that it's good at. However, it doesn't support async/await. So if you want to fetch some data and then, in that same event handler, handle the response, a fetcher doesn't give you that, because you have to wait for the transition to happen. You'd have to have a useEffect watching the transition, checking when the type is done, those kinds of things. But here, I just want to get the actual URL. So I call a standard Remix resource route, api/upload-url. There it is, upload-url. This is a standard loader in a resource route; it has no component. It's going to go and call Cloudflare, based on what type of media I'm uploading, do the fetch, and then just return the response JSON. So I just await and get that response. It's basically making an RPC call and getting the data back. So you can still use standard fetch in a Remix app. I get the JSON, get the upload URL, and then this particular one is using Dropzone. For every file that gets dropped, it calls this to get all the information it needs. One of the requirements is the actual URL that you want to upload to. And then it goes in and uploads that.
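A hedged sketch of that resource-route pattern: `uploadEndpointFor` and the endpoint URLs are placeholders (the real app calls Cloudflare's direct-upload APIs with an API token), and the loader uses only Web-standard `Request`/`Response` so it stays self-contained:

```typescript
// Sketch of a resource route that returns a one-time upload URL as JSON.
export function uploadEndpointFor(type: string): string {
  // hypothetical mapping; the real app hits Cloudflare Images or Stream
  return type === "video"
    ? "https://api.example.com/stream/direct_upload"
    : "https://api.example.com/images/direct_upload";
}

// Resource route loader: no component, just data. Real code would
// fetch(uploadEndpointFor(type)) with credentials and forward the
// signed URL from Cloudflare's response.
export async function loader({ request }: { request: Request }) {
  const type = new URL(request.url).searchParams.get("type") ?? "image";
  return new Response(JSON.stringify({ uploadURL: uploadEndpointFor(type) }), {
    headers: { "Content-Type": "application/json" },
  });
}
```

The client side then does a plain `await fetch("/api/upload-url?type=image")` inside the drop handler, which is exactly what useFetcher can't do in a single event.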
And what's cool is that it uses XMLHttpRequest to do the upload, because fetch does not provide progress events, but XHR does. That's why, although it didn't work in my demo, you actually see a little progress meter as it's uploading, which is kind of cool. Once the images are finally uploaded, I get a status change, and when the status is done, this is where I use a standard fetcher. Here, I go in and create a new FormData with the image URL that I got back, and here's my action name, add attachment, and then I submit. Then in my action, I just check: is that an add-attachment request? And if so, process it. So the fetcher is great when you're dealing with form-related data, but standard fetch works great when you just need to make a remote call and get the results back so you can continue processing. That's my takeaway: make sure you understand that you don't always have to do everything with Remix. And I think that was it for this slide, yep.

12. Multi-Tenant Remix Apps with Prisma

Short description:

We're going to be talking about multi-tenant remix apps using Prisma. A tenant is a group or an organization where data and users are separated from other tenants. There are two parts to a multi-tenant application: determining the tenant and accessing the correct data for that tenant.

So like I said, welcome back, everyone. We're going to be talking about multi-tenant Remix apps using Prisma. It's a mouthful. So, slide number one. Okay. What do we mean by multi-tenant? A tenant is a group or an organization where data, and commonly users, are separated from other tenants within the same application. So you have a SaaS-type application: a company signs up and can then manage its own users, but that company will have its own separate data. Data from company A will not be seen by company B, nor will any mutations cross over. There are typically two parts to a multi-tenant application. One is determining which tenant the user is making the request for. And two, getting the correct data for that tenant and ensuring no data from another tenant is returned or mutated. For determining a tenant, there are many ways to do this. In our test app, we'll be using the host header to determine the tenant: for example, tenant1.remix.local will be for tenant1. And I'll show you how we onboard a tenant shortly. But again, there are so many different ways to do that, and that's not what we're focusing on here. We're actually focusing on the backend: once we determine what the tenant is, how do we access the data and make sure we're accessing the correct data for that specific tenant?

13. Multi-Schema Support for Tenant Isolation

Short description:

To ensure data isolation and prevent accidental exposure of tenant data, we use multi-schema support provided by Postgres. Each tenant has its own schema, physically separating their data. Prisma allows us to specify the schema file for migrations and accessing specific schemas. A public schema contains the tenant model, while each tenant has a separate schema with the same tables. Accessing tenant-specific data requires specifying the schema within Prisma. This approach eliminates the need for filtering and ensures data isolation between tenants.

That's a very common way to do that. So if you have a projects model, you'd have a tenant ID, and whenever you're querying, you would add a filter: projects where tenantId equals tenant A. And you have to make sure you do that every time. So, although simple and workable, it's also very dangerous, because it's easy to accidentally expose data from one tenant to another, since it's all literally in the same database in the same tables, and you always have to make sure you include the correct filter. Usually you would have some sort of middleware in your framework that ensures the tenant ID is always included. There are plenty of multi-tenant libraries that will hide the tenant ID filtering for you. However, this precludes you from making direct queries or using other database tools that are not tenant-aware. If you had some kind of SQL query tool, you would now have to ensure that you did the "where tenantId equals" yourself, and any other tooling that accesses the database would always have to know to do that filtering. In fact, at the company where I used to work, we did work for the pharmaceutical industry, and that was a big issue. Mixing data from one pharmaceutical company with another was a big no-no, it could definitely lose you your job, and there were regulations and all that kind of stuff. So we actually had separate databases for each client, and some clients, like Pfizer, wanted their data on a totally separate machine; they didn't even want their data mingling with a competitor's. This was pre-cloud days, where everything was on-premise, so yeah, we had a whole server room with a bunch of different servers for different clients. So in this example, I did not want to do it the tenant ID way. I think that's a dangerous way; although it's simple, it has too many drawbacks.
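For contrast, the tenant-ID-column approach being warned about usually pairs with a helper that forces the filter in so callers can't forget it. This is a sketch with illustrative shapes, not Prisma's actual generated types:

```typescript
// Sketch: always AND the tenant filter into a where clause.
type Where = Record<string, unknown>;

export function scopedWhere(tenantId: string, where: Where = {}): Where {
  // tenantId is spread last, so even a caller-supplied tenantId is overridden
  return { ...where, tenantId };
}
```

Usage would look like `db.project.findMany({ where: scopedWhere(tenant, { archived: false }) })`. Even with a helper like this, raw SQL and external tools still bypass the filter, which is exactly the drawback described above.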
So, for this example, I took a different route. Instead of filtering the data, I isolate the data, so that it's impossible to query data for the wrong tenant. In this particular case, we're going to use the multi-schema support provided by Postgres. A schema is basically like a namespace: all your models and tables are assigned to a specific namespace, and you can't access that namespace without explicitly specifying the schema. But Prisma doesn't support that directly, so I'll show you how I did it. Each tenant will have its own schema, and Postgres will ensure that you can't access other tenants' data. All right. So, let's go to number two: Prisma schema. A typical Prisma app consists of a single schema with all the models defined within it, so you'll typically have a Prisma folder. And here, as you'll see, there are actually two separate folders, because we're creating two separate sets of schemas, and we'll talk about that shortly. Although Prisma does have multi-schema support, that's used to group Prisma models together: you still have a single database and a single set of tables for each model. So it's not what I'm referring to with multi-schemas. It's kind of like being able to group models: let's say you have a big app with an accounting section, so you have an accounting schema with all those models, and then an HR schema with those. They're all in the same database, just separated by schema. Here, we're actually saying we're going to have the same set of models, but each tenant gets its own separate schema so they can't interact with each other. So here is how we'll be doing it. As stated before, Postgres lets you duplicate your models for each tenant, namespaced by the schema name. Here, we have a public schema that contains the tenant model. Okay.
Let me, hold on, I actually have it here live. Sorry, one second. All right. So this is a tool called Beekeeper, and I'm connected to my Postgres database, which is running in Docker. I have a public schema, which has the tenant model. All it has is a name, host, and so forth. The host is the tenant: the host name is the prefix for the domain. Then I actually generate a separate schema for each tenant. So here's tenant1, tenant2, tenant3, and as you can see, each of them has the same tables; they're just in separate schemas. And by doing that, you cannot access tenant2's data from tenant1, because it's physically separated, and vice versa. Same with tenant3. In fact, if we go in here and look at the notes: here is a note that was created in tenant1, and if I double-click here, these are notes that are in tenant2. So even though they have the same table names, they are physically separated, and the only way you can access them is by specifying which schema you want from within Prisma, and I'll show you how we do that. Okay, so that was that screenshot. Back to the slides. All right. In order to support this model, we first define a public schema. So let's go back to our... we have separate folders for each schema. We have a schema.prisma, exactly like you would typically have in a Prisma project. Oh, speaking of Prisma, does everybody here pretty much know how Prisma works and has used Prisma? Sorry, I probably should have led off with that. Prisma is just an ORM for accessing relational databases using TypeScript. So here, we have a tenant model that matches the table structure we just saw: ID, name, host. One of the nice things about Prisma is that it does allow you to specify the schema file.
So we can generate and run migrations by specifying which schema file we're using: here, either the public schema or the tenant schema. So here's our tenant schema, and this is the one that has users, notes, and so on. I basically took the Blues Stack, so that's what the models are based off of. For setup, I have a Docker Compose file: here's my multi-tenant Prisma container running in Docker. And then we run the setup script. The setup script basically sets up your environment variables: it takes the example env file and copies it into place. By the way, I have an extension called Cloak that ensures that if I'm sharing my screen, I don't accidentally share my secrets. It's unlike the one that's built into Visual Studio Code, in that this one actually changes the theme: the TextMate scopes are set so the text is the same color as the background. One of the issues we'd seen was that Kent was using the other way, and he accidentally showed his environment variables on screen for just a single frame of a YouTube video, before he had a chance to obfuscate the keys, so he ended up having to shut down his stream and go in and revoke all those keys. Because Cloak changes the colors for this particular type of file, you'll never have that flash of unprotected content. Anyway, the actual URL looks like this, and we don't really care, it's basically the same: the base Postgres connection with the username and the database that we want.

14. Multi-Tenant Prisma Setup

Short description:

We specify the schema in the connection string to access the schemas. The initial setup involves migrating the public schema and generating the client for each schema. Prisma creates an alias package, @prisma/client, that re-exports the generated client folders. The onboarding route checks for existing tenants and adds new tenants using separate Prisma clients. The DB server initializes the Prisma clients for the public and tenant schemas, connecting to the respective schemas and returning the clients. Tenant clients are stored in a map and initialized on first access.

The way we're able to access the schemas is that we specify the schema in the connection string. That's the easiest way to select it, because Prisma doesn't let you go in and change how it actually queries the database. Normally, if you were to do a straight SQL query, you would do SELECT * FROM schema.table_name. If you've ever seen that, sometimes you get an error saying "cannot find main.user" or something: it's looking for a specific schema.
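The connection-string trick can be sketched as a tiny helper using the WHATWG URL API; `withSchema` is my name for it:

```typescript
// Sketch: select the tenant schema via the Postgres connection string.
// Prisma's Postgres connector reads the `schema` query parameter.
export function withSchema(databaseUrl: string, schema: string): string {
  const url = new URL(databaseUrl);
  url.searchParams.set("schema", schema); // e.g. ?schema=tenant1
  return url.toString();
}
```

So `withSchema(process.env.DATABASE_URL!, "tenant1")` yields a URL ending in `?schema=tenant1`, and every query through a client built from it is confined to that schema.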

Okay. So, for the initial setup, like I said, we migrate our public schema. I have a script here, migrate, and what it does is: you pass in the schema and the tenant that you want. The schema is the schema file, and the tenant will end up being the schema in the database. So, here's our schema file, and then I source from the .env file and change my database URL to pass the schema in, and then it runs prisma migrate. This is the same command that you would normally use when you had a single schema, but because I'm changing the database string in the environment and passing in which schema file I want, that's where the magic happens. Going back to our setup: finally, I generate the client. Remember, the client is all the TypeScript stuff, and because we have two different schemas, we need two different clients. So my prisma generate will generate for the particular schema, and I specify the folder. Unfortunately, you can't specify the output folder on the command line itself; you have to do it inside the schema file. So in my generator client block, I specify that I want the client for the public schema to be in prisma-client-public. If you look at node_modules and go to .prisma, the client folder is typically where all the generated Prisma stuff would be. But here we actually want two separate ones. So we have one for public; this is all the public stuff, the same kind of thing. The only thing we really care about is the index TypeScript definitions, but unfortunately you still have to have all of the rest in there. And then the tenant one has its own TypeScript definitions. So by specifying which folder to output to, we can have separate tenant client files. All right.
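The per-schema generator output described here would look roughly like this in each schema file. The output paths are illustrative, guessed from the transcript's folder names:

```prisma
// prisma/public/schema.prisma — public client emitted to its own folder
generator client {
  provider = "prisma-client-js"
  output   = "../../node_modules/prisma-client-public"
}

// prisma/tenant/schema.prisma would have the same block, but with
// output = "../../node_modules/prisma-client-tenant", so the two
// generated clients never collide.
```

Each `prisma generate --schema=...` run then emits its client, with its own TypeScript definitions, into its own folder.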
So there's a bunch of indirection in Prisma. Prisma also creates an alias package called @prisma/client, and all it does is re-export the generated folders. What I did, because I needed to keep track of the separate schemas, is create a prisma folder in app. I have one file for public, where all I do is re-export prisma-client-public, and then for tenant I re-export prisma-client-tenant. So it's the same kind of thing, just an indirection: I'm not directly accessing the generated Prisma packages anywhere in my app; this handles all that for me. Okay. And we'll get to having it actually running in one second. So now that we've got that, let's go ahead and run it. Okay. So here I just created a simple index route for onboarding a new tenant. It's just a simple form asking for the tenant name, and it posts when you submit. I didn't create a separate onboarding app; it's the same app, so I kind of have to do a little bit of trickery here, especially in the root route. I check to see if I have a user, and users are tied to tenants, so you can be logged in under one tenant, but you won't be logged in on another. Then for the tenant ID, it looks for the tenant in the host; in fact, I don't want this to be localhost, I want everything to be remix.local. Okay. So here, if I don't have a user but I do have a tenant ID and I'm on the root, it redirects me to the login. The loader just has a couple of things to make that work. And this onboarding page doesn't even have a user check because, again, it's a test demo. So here I'm onboarding. And what I'm going to do first is check to see if I have that tenant already. So let's say I'm going to do My Test Tenant, and I say tenant1. Okay, and I hit Add Tenant.
It's going to say, hey, that already exists, because that tenant's already in the database. And if it doesn't exist, then it will add the tenant. And this is where the model stuff comes into play. When I call getTenantById here, notice that typically you would import prisma from some back-end server module that actually does the connection, and then it would be prisma.tenant and whatever method you're calling. But here we're actually creating two separate Prisma clients. So let's go to our db.server. All this global stuff is only there because of how Remix reloads your modules when your routes change, so ignore that. The main thing is that we're importing the Prisma client twice, aliasing one as public and one as tenant, from the specific packages. Then we first initialize the public Prisma client, here. This basically makes sure that our database URL is set and the schema is the public schema, then initializes the client, connects, and returns it: that's the public client. Then I also create a map here for tenant clients. These will be initialized on first access, a separate one for each tenant. So here is when we access the client: we call prisma, but instead of empty parameters, we pass in the tenant ID. Using that, it modifies the database URL, passing it the schema for that tenant, and then initializes the client, just like before. And this part just checks to see if I already have that client connected, and returns it if so.

15. Creating and Provisioning Tenants

Short description:

We create a new tenant by calling the public client and getting the tenant find unique. The provision tenant script runs a migration based on the new tenant schema. After provisioning the tenant, we log in and create a new account. The join route ensures a valid tenant ID and retrieves the user ID for that tenant. The blue stack handles the redirection and registration process. The modified methods pass in the tenant ID and update the connection string to access the specific schema. Adding a note demonstrates the connection to the tenant schema in the database.

So back to our tenant: because the tenant records live in the public schema, we call the public client and do tenant.findUnique there. Back to our route. Once a new tenant is added (here's the table that manages that), it then calls provisionTenant. And all provisionTenant does is run a script to do a migration based on that new tenant schema: migrate dev, the script we saw earlier. All it does is get the tenant and create a new connection, and if the schema does not exist, it will automatically be created as part of running the migration. Then it calls migrate dev with that schema. So let's go ahead and do that: we're going to create tenant4. If you look down in my terminal, you'll see that it says the provisioning is starting. So click on Add Tenant. It's provisioning the tenant, and it does take a minute, because it's actually running the migration scripts, just like you would if you were running the migrate deploy on your command line. And again, this is something you would normally do from an administration tool. So here we've got "successfully created". And if I go back to Beekeeper and refresh, you'll see that tenant4 was created, and if I refresh my schemas, you'll see the tenant4 schema was created, with its own separate tables, all empty: there's no user. All right. So now I can go log in to the tenant. If I hit that, and let me bring up the URL: it's now tenant4.remix.local, so it knows that I'm in the tenant4 database, essentially. Let's try it. Let's see what happens if I try to log in as my user from a previous tenant.
Invalid email or password — because that user is in a different tenant; it doesn't exist in tenant four. So now I can sign up, and again I'm still on tenant four, joining tenant four. So I'm going to go ahead and — sorry, this stupid microphone is in my way — create an account. Now I'm in the tenant four notes app, and if I look at tenant four's user table again, there's my new account that just got created. So how does that work? Go into the join route. We require a tenant ID: when the loader runs, it first pulls the tenant from the URL. It splits the host name, and if there are at least three segments, it takes the first one as the tenant. requireTenantId then makes sure the tenant ID actually has a value — if there were only two parts, like remix.local, it would be undefined, and that would throw the 404 error. So coming back here: as long as I have a valid tenant, I get the user ID for that tenant to see if there's a user. If there is, that means I've already joined, so it redirects me; otherwise it returns here. Again, this is all part of the Blues Stack — that's what I'm trying to say — so I didn't really have to change much; I just added the tenant support. And then when I actually do a registration, you can see it calls getUserByEmail, so I had to modify these methods to always pass in the tenant ID. By doing that, it updates the connection string to point at that tenant's schema, and then it returns you the tenant client.
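The host-parsing logic described above can be sketched roughly like this — the helper names are illustrative, not the exact Blues Stack code:

```typescript
// Sketch of deriving the tenant from the request host.
export function getTenantFromHost(request: Request): string | undefined {
  const host = new URL(request.url).hostname; // e.g. "tenant4.remix.local"
  const parts = host.split(".");
  // A tenant subdomain needs at least three segments: tenant.domain.tld.
  return parts.length >= 3 ? parts[0] : undefined;
}

export function requireTenantId(tenantId: string | undefined): string {
  if (!tenantId) {
    // No tenant subdomain (e.g. plain remix.local): treat as not found.
    throw new Response("Not Found", { status: 404 });
  }
  return tenantId;
}
```

Throwing a `Response` from a loader is standard Remix practice; the nearest CatchBoundary renders the 404.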
So it knows that the User and Note models live in the tenant schema, whereas with the public client you wouldn't have users or notes, because those models don't exist in the public schema. This is the only way you can access a particular tenant: everything after that is always directed at that schema in the database, so you don't have to worry about adding any kind of where filters — it's always connected directly to that tenant's schema. So go ahead and add a new note: "Hello from tenant four. Testing one two." Okay. And then if you go into my notes, you'll see my new note, and it only exists in tenant four. Deletes work the same way. Let me create another note and refresh over here — make sure you can see that — and again, we have the other note.
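One way to sketch the tenant-scoped client idea: cache one client per tenant, keyed by a schema-specific connection string. The factory is injected here to keep the sketch self-contained; in the app it would construct a `PrismaClient` whose datasource URL ends in `?schema=tenant_<id>` (the naming is an assumption):

```typescript
// Hedged sketch of a per-tenant client cache.
export function makeTenantClientGetter<T>(
  baseUrl: string,
  create: (url: string) => T
) {
  const clients = new Map<string, T>();
  return function getTenantClient(tenantId: string): T {
    let client = clients.get(tenantId);
    if (client === undefined) {
      // First request for this tenant: build a client bound to its schema.
      client = create(`${baseUrl}?schema=tenant_${tenantId}`);
      clients.set(tenantId, client);
    }
    return client;
  };
}
```

With Prisma you would pass `(url) => new PrismaClient({ datasources: { db: { url } } })` as the factory, and write data helpers like `getUserByEmail(tenantId, email)` that call `getTenantClient(tenantId).user.findUnique(...)`.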

16. Migrating Tenant Schemas and Handling Migrations

Short description:

This section covers migrating all schemas for the tenants. A script is used to retrieve existing tenants and run migrations for each one. The script ensures that all tenant schemas are up to date with the latest migrations in the database. The developer experience remains largely unchanged, with the only difference being the need to run the migration script across all tenants. The technique for migrating the production server is similar to the development environment, with the use of the 'migrate deploy' command instead of 'migrate dev'.

I delete it, refresh — that note is gone. This ensures we're only able to access the data in the specific tenant. Here's something interesting: I'm going to select a note, and if you look here, you can see the URL includes the note ID. Now I'm going to copy that and log out. If I go into tenant one — see, I was still signed in to tenant one — and paste that URL with an ID that only exists in tenant four, then because I'm in tenant one, it says "note not found." So there's no way to accidentally leak data across tenants: just because you happen to have an ID, and happen to be logged into a specific tenant, you still cannot access the data in another tenant.

Okay. Let's check the next slide — I probably skipped through some here. Yeah, using Prisma: this talks about how the public and tenant clients work, and then how to call your functions, passing in the tenant ID, to create users. So, again, I wrote all this last night because I wanted to make sure I covered everything — but also so that anybody who wants to review it, even if they haven't seen the discussion, can at least follow along.

And then the final one. Okay — every app: you've released your first version, you've added your first feature, and you're always going to have some kind of migration. You've got to add a new model, add a new field, what have you. Well, now you have all these tenant schemas out there, and you've got to make sure existing tenants also get the new tables — otherwise you'll have users in various states against a single application, and things will break. So I actually have another script. Again, scripts are good: they automate stuff so you don't make mistakes. I should probably kill the app first.

Okay, so what this script does: this is where you go in and migrate all the schemas. So let's go to our Note model, and we're going to add a new field — I had tested this one — isFavorite, a Boolean defaulting to false. Will it create a failed migration? If it does, my demo will probably stop working — I'll leave that as an exercise for the viewer. So I've changed my schema, and now I want to migrate. This is where I run prisma migrate-all. Before I do that, let me show you the script, because one of the first things it needs to know is which tenants exist. I have another function here — and this one is a TypeScript file, and what's cool is that you can still reference code in your app folder, even though the script lives completely outside it. Here I'm importing the public client by going up the folder hierarchy into my app's Prisma server file. So the same thing I would do in my Remix app, I can do from external scripts. It's basically an async function that calls the public client's tenant.findMany. It returns all of the tenants in the current database, and here I just care about the ID — findMany actually returns objects with an id property, but I really just want the string, so I map that out and then output the tenant IDs. In fact, before I run the migration, let's do this so you can see what it looks like: ts-node, then the get-tenants script. That connects to the database and returns the four tenants. All right. So now let's go ahead and run the migrate-all script.
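A rough reconstruction of that get-tenants helper, with the client injected so the sketch is self-contained (the real script imports the public Prisma client from the app folder; only the client shape is assumed here):

```typescript
// Minimal shape of the public client this helper needs: a tenant model
// whose findMany returns rows with an id.
type PublicClient = {
  tenant: {
    findMany: (args: { select: { id: true } }) => Promise<{ id: string }[]>;
  };
};

export async function getTenantIds(client: PublicClient): Promise<string[]> {
  const tenants = await client.tenant.findMany({ select: { id: true } });
  // findMany returns objects like { id: "tenant1" }; keep just the strings.
  return tenants.map((t) => t.id);
}
```

Run as a script, it would print one tenant ID per line, which is exactly the format the migrate-all loop below consumes.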
So it goes out, connects, gets the tenants, and now it loops through each tenant and does the migration. The first time through — because we've changed the schema — prisma migrate dev asks me for a name for the migration. I'll say "add is favorite," and it creates the migration for add-is-favorite. Then for every subsequent tenant, it reruns that same migration, but this time against a different schema, so it's syncing each one up. Regardless of how far behind a particular tenant schema is, once you run the migration it will always be caught up to the latest migrations in the database. In the migrate dev script I told it to skip client generation, because I didn't need it regenerating for every single tenant; at the very end, I tell it to regenerate once. Sometimes Visual Studio Code doesn't pick up the new TypeScript types, so if you reload the editor, you can go back into the Note model — okay, here's our select — and yes, isFavorite is now part of our schema. So our model has changed, and we can go back to Beekeeper. Refresh tenant four: the Note table now has an isFavorite column. Same with tenant one, tenant two, and so forth. So our developer experience hasn't really changed all that much — the only thing we need is a script that goes out and runs those migrations across all the different tenants. For production, the technique for migrating the production server will be similar to dev. And again, it was late.
I'm going to leave it as an exercise for you guys. But I think it's pretty self-explanatory. I think the only thing you'll need to do is instead of calling migrate dev, you call migrate deploy. And you know, it'll run through all the migrations for you. You'll still call this script in order to get a list of all the existing tenants on the production side. And do it that way. I think that was the last slide.
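The migrate-all loop described above might look something like this in shell — the script paths and the `?schema=` URL format are assumptions based on the narration, not the workshop's exact script:

```shell
# Hedged sketch of a migrate-all script for multi-tenant schemas.
BASE_URL="${DATABASE_URL:-postgresql://user:pass@localhost:5432/postgres}"

# Build the per-tenant DATABASE_URL from a base URL and a tenant id.
tenant_url() {
  echo "$1?schema=tenant_$2"
}

migrate_all() {
  # The get-tenants script prints one tenant id per line.
  for TENANT in $(npx ts-node prisma/get-tenants.ts); do
    echo "migrating schema tenant_${TENANT}..."
    DATABASE_URL="$(tenant_url "$BASE_URL" "$TENANT")" \
      npx prisma migrate dev --skip-generate
  done
  # Regenerate the Prisma client once at the end, not per tenant.
  npx prisma generate
}
```

For production, the same loop would swap `migrate dev` for `migrate deploy`, as noted above.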

17. Making Prisma Work with Multiple Schemas

Short description:

I found a way to make it work inside Prisma by changing the database name and connection string. The multischema feature in Prisma is not designed for generating multiple schemas from a single schema file. It can also be used with SQLite by specifying a different file for each tenant.

Yeah. So thank you. Hey, three down. All right — I hope that was interesting. It was kind of a challenge; I was curious how it would work with Prisma, because you don't have direct control over the SQL generation. Once I found that someone else had created a sample that changed the connection string, that was my aha moment on how to do it. And that approach would work if you wanted separate databases as well: instead of a single database with separate schemas, you could just change the database name in your connection string. So here, instead of /postgres, it'd be /tenant1 or /public or whatever — you'd just change the database name, and because Prisma will always create the database during the migration if it doesn't exist (or create the schema, as we saw), that just works. Rafael asks: could it work with the Prisma preview feature multiSchema? My understanding, based on a cursory search through their GitHub issues, is that multiSchema is for a single database where schemas are used for application grouping — a schema for accounting, a schema for HR, and so on — and each schema in your Prisma schema file still has different models. It's not about taking a single Prisma schema file and generating multiple identical database schemas from it. That was one of the reasons I built this: I couldn't figure out another way for Prisma to support it. I know people were asking for this and were confused, because multiSchema works a different way, and the Prisma team is still discussing whether to support this approach — a single schema file with multiple Postgres schemas. And this doesn't just work for Postgres, either.
Again, if you want to use SQLite, with one SQLite database per tenant, you would just do the same thing in your database URL: it's a file URL, and you would specify a different file for each tenant. So, hopefully that made sense. And for those who missed the workshop — I know Sabine works for Prisma and said he was interested in this, so hopefully I'll get a chance to show him, or he can watch the workshop recording.
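To make the two variants concrete, here are connection-string sketches for both options just described — all names (file paths, credentials, the `tenant_` prefix) are illustrative:

```typescript
// SQLite variant: one database file per tenant.
export function sqliteTenantUrl(tenantId: string): string {
  return `file:./data/tenant_${tenantId}.db`;
}

// Postgres separate-databases variant: change the database name instead of
// the schema (Prisma creates the database on migrate if it is missing).
export function postgresTenantUrl(tenantId: string): string {
  return `postgresql://user:pass@localhost:5432/tenant_${tenantId}`;
}
```

Either function slots into the same place in the provisioning flow: only the `DATABASE_URL` handed to Prisma changes.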

18. Patching Remix and Overcoming Limitations

Short description:

In this section, the speaker discusses the process of patching Remix to fix bugs or add features. They emphasize taking responsibility for resolving issues and not solely relying on open source maintainers. The speaker explains the limitations of using patch package and introduces their approach of editing the original source code to overcome these limitations. They highlight the challenges of patching generated files and the need to update both server-side and browser versions of packages. By avoiding the limitations of patch package, the speaker aims to find effective solutions to problems in Remix.

All right. So we've got about 25 minutes left. If we don't have any other questions on the Prisma stuff, I'll go ahead and talk about how I deal with problems in Remix when they come up, or features I want to add that may not necessarily be something the core team wants. Let's get to it. So this talk is about patching Remix to fix bugs or add features, and the techniques I use to update Remix whenever I get stuck — again, when I want to add something like handleError, for example. If it's not something I can do as a library or something external to Remix, and I have to actually touch the code, then this is the process I use. All right: unblocking yourself. This is what motivates us as developers. We're developing an application, we're using some third-party code, and either there's an error, or some feature is missing, or we don't necessarily like the way it's working. Ultimately, we're still responsible for building our own app. And although it may seem cathartic to create a scathing GitHub issue about how terrible the package is and berate the maintainers into dropping everything to fix your bug or add your feature right now — you see that often as an open source maintainer: "you suck, why isn't it working?" It's like, dude, you're getting it for free. Calm down. So I'm a firm believer that you're responsible for your app. Period. Don't blame other people if it doesn't work; you've got to figure it out. So I treat any open source software as software I didn't have to write — if I didn't have to write that code, I'm already 90% ahead in the game.
So, any issues that arise that block me from moving forward are my responsibility — I'm ultimately responsible to my users and my boss to resolve the problem and move on. I can't tell the boss, hey, that feature's not going to be done because I'm waiting on some overworked, underpaid open source maintainer to fix this bug. Granted, it's easier said than done. Hopefully there's a workaround while a bug fix is being worked on — with so many different people doing so many different things with the code, yours may just be one of those edge cases, and as long as a workaround exists, that's probably the best solution. But if you have no other choice, then digging into the source is the way to do it. And like I said, sometimes you just have to go in, hack at the offending code, and beat it into submission. That's pretty much what I've been doing over the past couple of years working with Remix: any time I see a bug, I'll post about it and either do a PR or create an issue — at least give it some visibility — but I'll go ahead and just fix it locally if I need to, and move on. Okay, sorry, I thought I had another slide in there. So: using patch-package. A lot of times you'll see people say, hey, if you've got a bug, use patch-package to fix it, and that's great — it definitely gets you 90% there. patch-package, if you haven't used it, is a tool that enables you to modify packages in node_modules and then generate a patch that can be reapplied as necessary.
So basically, what you're doing is going into node_modules and actually editing the source, then running patch-package, which compares your modified version with the official version and writes any differences to a patch file. You commit that patch, and the next time you reinstall your packages, it applies the patch as if that bug fix were already there. And that works great — it literally saves you from pulling out what little hair I have left. However, as amazing as this tool is, it does come with some limitations. First, it only supports a single patch file per package. Sometimes you have multiple bugs, and each bug is independent, but they happen to be in the same file — in the compiler, or the server component, or whatever. You can't pass around one patch that fixes one bug and a second patch that fixes the other, because you would have to modify the file with both bug fixes and generate a single combined patch. The other problem is that patches are generated for a specific version of the package. In the example I'm going to show, that's Remix 1.7.5. So I make my patches, everything's great, and then Remix comes out with 1.7.6. When you update and patch-package tries to apply your patches, it sees, hey, this patch is for 1.7.5, but you're using 1.7.6. It will warn you and still try to apply the patch, and as long as the new version of the package doesn't make any major changes around your particular change, it typically works — just like with any other version control.
A patch is simply trying to merge a change on top of an existing file, and as long as the underlying code hasn't changed too much, it typically applies cleanly — but sometimes you'll get a conflict, and when patch-package hits a conflict, it bails out and returns an error. So that's one issue. If you have a bunch of patches, sometimes that's exactly why you don't want to update: you don't want to deal with fixing patches that no longer apply cleanly. The final limitation, probably the hardest one to work around, is that you're patching the generated files, not the original source. You're patching the output of whatever build system the package uses, with all the transpiling and minifying, and sometimes the code you see in node_modules doesn't even closely resemble the original source. You're trying to figure out where the bug is when all the variable names are single-letter-and-number combinations, and you can't figure it out. So although it's possible, if the package you're trying to fix really mangles its code, it's pretty much impractical. And one quirk of Remix is that for some packages, like the React package, it builds two versions: one for the server side and one for the browser. Remix does all the React rendering server-side as well as on the client, so the same code — useLoaderData and all those kinds of things — runs on the server as well as the client. What ends up happening is that if there's a bug or patch you want to apply, you have to actually update it in both packages. So that's one of the drawbacks of patching — but at least you have the option.
But again, when I run into a tool whose limitations cause me problems, I try to figure out what I can do to fix that — and as I said, it's better if you can just avoid the problem altogether.
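For reference, the standard patch-package loop described above looks roughly like this — the package path in the comments is just an example:

```shell
# Standard patch-package flow, sketched as a helper function.
make_patch() {
  # 1. Hand-edit the built file in node_modules first, e.g.
  #    node_modules/@remix-run/server-runtime/dist/server.js
  # 2. Then record the diff against the pristine package as a patch file
  #    under patches/ (committed to the repo):
  npx patch-package "$1"
  # 3. Re-apply automatically after every install with a package.json script:
  #    "postinstall": "patch-package"
}
```

Usage would be `make_patch @remix-run/server-runtime` after editing the built file in place.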

19. Patching Remix to Overcome Limitations

Short description:

I patch the original source code to overcome limitations and create custom patches for Remix. By editing the original source instead of patching the transpiled files, I can fix bugs and add new features. Each bug fix is a separate branch off the base Remix repo. After creating the patches, I execute yarn build to generate the packages with the patches baked in. Then, I use a script called apply patches to copy these packages to my project's node modules. This process allows others to easily access the patched code without going through the entire process. When upgrading to a new version, I pull the latest Remix version and create a new branch to cherry pick the patches. I fix any conflicts against the original source code. Let's now take a look at the process using my GitHub client, Get Kraken.

So the way I do it is I actually patch the original source. Overcoming the limitations: as with most programmers, when presented with a challenge, I try to find a solution to overcome it, and I've been working on some scripts that let me sidestep the limitations of patch-package.

So, one: I edit the original source. Instead of patching the transpiled files, I keep a separate repository that is a clone of the Remix repo, and I edit the original source there to fix the bug. Each bug fix is a separate branch off the base Remix repo. Then, when I want to apply multiple patches, I create a branch off of the base version and cherry-pick the patches I want. And finally, when generating the patch — because ultimately the patches need to apply to the transpiled version — I run yarn build in the Remix repo, and it generates the same packages that are distributed on npm. These built packages now have my bug fixes baked in, because I've cherry-picked those patches directly into the Remix source. Then I have another script called apply-patches that copies these packages into my node_modules. I'll show you this whole process in a minute: once the build is done, I take the node_modules that were built and copy them directly on top of the node_modules in my project. It's effectively a hand-edited node_modules, because the build I copied in already has all the necessary code changes. Now I actually run the patch-package command, and it creates patches containing the changes for whatever handful of fixes I've added, all built in. Then I can commit that, and anybody who clones my code gets the patches automatically — they don't need this whole process. This process is for someone like me creating patches, not for a consumer of patches. And then, for versioning patches — again, the issue we had: if you upgrade to a new version, does the patch apply cleanly?
If not, here's what I do: I pull the latest version of Remix on top of the main branch, then create a new branch and cherry-pick the patches in again. If there are any conflicts, I can fix them — but I'm fixing them against the original source code, not some transpiled version that looks nothing like the original. We'll talk about the enhancements later. So — enough talking, let me show you. Okay. This is my Git client, GitKraken. Not sure how many of you have used it or are familiar with it; I still use the console every now and again, but the GUI definitely makes it a lot easier to visualize the branch structure. Sorry — it's been a long morning. So here's the main branch, where I imported Remix 1.7.5 — all the source code. In fact, here's my pull-remix script, where I can specify what version I want: it goes out to GitHub, downloads the zip of the entire archive for that particular version, then I unzip it and rsync it directly into my sources folder. So this basically looks like what you'd get if you cloned the Remix repository directly. Then — here's the handle-error patch. I branch all of my patches off of the base Remix repo, so here's my handleError commit, and if you look at the code, this is the change. And what's cool is I can do all the types and everything: I went in and actually added a HandleError function type, and here I'm modifying the places where Remix catches the error.
By default, if the server mode isn't test, Remix just logs the error to the console, and there's no other place where we get access to it. So this is where I go in and call handleError, passing in all the information it needs to make that call. There are several different places, because Remix has different entry points: you've got the document request, the data request, and the resource request — it differentiates between them, because resources are the routes that don't have a default export — so there were several places where I needed to add that call. So I make that patch. Then let's pretend we're working with this patch: we check out the branch, so now I'm on the handle-error patch, and I run yarn build. Okay. So now I'm building base Remix 1.7.5 with my specific patch, because I've already applied it to this particular branch. And if you go into the packages folder, that's where Remix builds the node modules.
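Stepping back, the pull-remix script shown in GitKraken a moment ago can be sketched as a small shell script — the GitHub tag/zip URL format and the local paths are assumptions, not the verbatim script:

```shell
# Build the archive URL for a tagged Remix release (tag format assumed).
remix_zip_url() {
  echo "https://github.com/remix-run/remix/archive/refs/tags/remix@$1.zip"
}

pull_remix() {
  ver="$1"
  # Download the source archive for the requested version...
  curl -sL "$(remix_zip_url "$ver")" -o /tmp/remix.zip
  # ...unzip it, and rsync the unpacked tree over the local sources folder.
  unzip -qo /tmp/remix.zip -d /tmp/remix-src
  rsync -a /tmp/remix-src/*/ ./sources/
}
```

The rsync keeps the sources folder under the fork's own git history, so each new upstream version lands as a single commit on main.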

20. Patching Remix and Applying Patches

Short description:

Here, the speaker discusses the process of creating and applying patches to the Remix codebase. They explain how the code is transpiled and show the changes made to the server runtime. The speaker also mentions the importance of creating branches and using the correct build version. They highlight the issue of including stack traces in production builds and propose a solution to remove them. The speaker demonstrates the process of creating a new patch and discusses the challenges of patching specific parts of the code. They conclude by mentioning the creation of a new branch and the need to check out the patched version.

So this is the exact same structure: if you were to go look at your node_modules now, the structure and all of the files are the same. That change was in server-runtime, so I go into server-runtime, into the dist (distribution) folder, and look at the server file. Again, this is the original TypeScript transpiled into CommonJS — and here's my code right there. Scroll down a little further: here's me passing in handleError. And let's go to handleResourceRequest — sorry, just one second — let's try handleDataRequest. Okay, here's where it does the server-mode test, and here's my code that was already in there.
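For context, the user-facing half of this patch is a `handleError` export from entry.server, along these lines. The signature sketched here mirrors the `handleError` export Remix later shipped officially, so treat the exact argument shape as an assumption for this patched build:

```typescript
// Sketch of the entry.server export the patch enables: called by the
// patched server runtime for document, data, and resource requests.
export function handleError(
  error: unknown,
  { request }: { request: Request }
): void {
  // Forward to your logging service (Sentry, LogRocket, Bugsnag, ...);
  // here we just log with the request context attached.
  console.error(`Error handling ${request.method} ${request.url}:`, error);
}
```

This is the hook that finally gives server errors somewhere to go besides the console, which ties back to the error-logging portion of the workshop.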

Alright, Vladimir — I'm glad you were able to join us. Like I said, this is being recorded, so you can watch the rest of it later, and the code is on GitHub, so if you have any questions or comments, please add them to the discussions. So again, that's how that works. We've now added that patch, and what I typically do is first create a branch off of main called patched-remix-1.7.5 — and what's really cool is that in GitKraken I can just drag and drop commits onto it. So now let's actually create a new patch; this was one of the things that I noticed. We're going to first check out our main branch. Okay. And if I do yarn build again, you'll see that my handleError code disappears, because again, we're on the non-patched version: if I go to server-runtime's server file, we no longer have the handleError call at the beginning of the request handler. This is clean version 1.7.5.

So one of the things that I noticed is that when you throw an error, even on a production build, the response includes the stack trace. And typically you don't want a stack trace in your production build. Let's actually go back to the error-logging part of the workshop, back to our app, the error route. Okay: npm run build — I want to do this in production mode. Okay, so let's go to our network tab and throw the error. What you'll see is that it returns the error, and the response also includes the stack trace. The problem is that it's now showing the user code that you don't necessarily want to show up on the client. Really, all you want is for this error to be returned, and for the error to be logged on your logging service with all its stack trace detail — but you don't want to expose that to the client. So back in our patch repo — I believe it's serializeError; this is where the error is actually serialized. What we want to do here is check the server mode: if it's production, delete the stack. And I want to make sure I'm not touching the part of the code the other patch modifies — if I were to put it in the same place, I wouldn't be able to cleanly apply the other patch. So let's find where the server mode is visible. Okay, I'm going to go ahead and save that and then look in GitKraken... my bad, I was actually editing in the wrong spot. I was editing node_modules — I'm so used to doing that — when I really wanted to edit back in the server-runtime source. So let's do that again, in server-runtime, where we can check the server mode. Okay, let me copy that over now.
Okay, so down here — I'm not sure if this will let me... we're not going to worry about that particular one. Actually, I think we do need the serverMode check there. Well, let's just not worry about that for now. So now, if you look at my working changes, here are the changes that I made. Okay, what I'm going to do now is create a new branch — let's call it patches/error-stack. So now we have a new patch, and I'll commit it as "remove error stack in production." Okay, so now I've got this patch branched off of main, off the original base one. What I want to do now is check out my patched branch — the one that already has all of my existing patches, but doesn't have the error-stack one.
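The serializeError change being described could look something like this — the ServerMode values mirror Remix's internal enum, but treat this as a sketch rather than the verbatim patch:

```typescript
// Server modes as Remix distinguishes them internally (values assumed).
export enum ServerMode {
  Development = "development",
  Production = "production",
  Test = "test",
}

export function serializeError(error: Error, mode: ServerMode) {
  const serialized: { message: string; stack?: string } = {
    message: error.message,
  };
  if (mode !== ServerMode.Production) {
    // Only expose the stack trace in non-production builds; in production
    // the stack should go to the logging service, never to the client.
    serialized.stack = error.stack;
  }
  return serialized;
}
```

The point of keeping this in its own branch is exactly what the narration says: it must not overlap with the handle-error patch, or the two can no longer be cherry-picked independently.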

21. Streamlining Development with Patch Management

Short description:

To apply patches, right-click on the desired commit, select 'cherry pick,' and commit immediately. Then, run 'Yarn build' to ensure the distributed package contains the patch. Use the 'apply patches' script to remove existing patches, copy the node modules, and run 'patch package' on Dev and server runtime. Future plans include creating a UI to select patches, upgrading patches when Remix releases new versions, and applying existing PRs. Additionally, a project for testing with Vitest and Playwright has been developed, allowing for mock data and inline testing code in the route file using React Testing Library.

So all I need to do is right-click on this commit and say cherry pick. Do I want to commit immediately? I say yes. And there, now my patched version has my error-stack change as well. So if I go ahead and yarn build, the built package, the distributed one, should have the delete-error-stack code in it. So now let's go back over to my app and run my apply-patches script. Okay, so what this script is going to do is first remove any patches that I may already have, then copy the node_modules from my current build and replace the ones in my current application. This extra step is there just because... actually, I haven't figured out how to make sure that rsync keeps the executable permission. So once it's in sync, I run patch-package on dev and server-runtime. I'm going to try to figure out a way to know which packages were modified with patches so I don't have to specify them manually, but for now, that's how I'm doing it. So when I run the patches, as you can see, it's gone ahead and copied the files over; it's now running patch-package on dev, and then it will run it against server-runtime. What I plan on doing is creating kind of like a UI where you can check which patches you want and have it automatically do it in the background for you. I just didn't have a chance to finish it beforehand. But as you can see, it's now run patch-package against dev and server-runtime. So if I look in my patches folder, and I look at the server-runtime patch and search for delete, see, now my delete-stack function is in there. And now if I go ahead and npm run build to recreate a production build, npm start, whoops, refresh, clear my network, throw the error, and if I'm not mistaken...
Oh, okay, I'm not sure what I did. I must have broken something. But as you can see, something is different. So, yeah, obviously I would test to make sure that the patch is doing what it's supposed to. It's probably related to that part where I didn't have access to the server mode. But what it should have done is, when it returns the response, it would have deleted the stack property. So you would only have seen the message and not the stack. And that way, you would still have the error here, which would then be logged on your error logging service, but it would not show up in the network trace. So... oh, Andre, thank you, I will try that, the rsync -P flag. Anyway, that's how I manage all the patches. I have a whole handful of other ones I haven't updated to use this new method; this is kind of trial and error to see what workflow works best. As I showed here in the enhancements, I'm also working on being able to apply an existing PR. So once I get the GUI tool set up, you'll be able to point to a PR that's in Remix and have it applied as a patch. Because there are a lot of good PRs out there that, unfortunately, the Remix team has just been so busy that they haven't been able to get to. But this allows you to at least get those fixes or new features in. And it gives them some real-world exposure, because now, instead of having it only available inside of a test, you can actually apply it to an application, run it, and provide feedback: say whether it works as intended, or propose some fixes to the PR. So that's one of the things I want to be able to do.
So: be able to cleanly apply patches, pick and choose which patches you want, and upgrade those patches when a new version of Remix comes out. Because in some cases a future version of Remix may actually fix the bug that you had to patch, and if so, you don't want that patch in there anymore. So that's what I've been working on, and hopefully I explained it well enough. It's mostly just a bunch of Git stuff and syncing folders; it's not anything magical, but it does allow you to work around some of those limitations I discussed about patch-package. So, oh, here: server mode is not defined. So I must have something in there that's wrong. So I guess that's it for that part. I know we are late on time. If you notice in the repository, I do have a project there about doing testing with Vitest and Playwright. It's actually based on an example that Jacob Ebey from the Remix team had posted. I know that a future version of Remix is going to include some testing helpers so that you can actually test routes, not just loaders and actions. Part of the problem with testing right now is that a lot of the routing stuff uses hooks or other parts of Remix that require Remix context. So in order for you to be able to test a component in isolation, you have to mock all that. So this particular project shows you how to mock out things like your loader data and forms in order to be able to test. And what's cool about this particular example is that he incorporates the test in the route file itself. So if you didn't see my discussion last week, this will be kind of interesting for you. So all this part right here is your typical Remix app: you've got your loaders, your default component, loader data, search params, forms, links. And here is the actual test code.
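The apply-patches workflow described above might look something like this as npm scripts. The script name, the paths, and the package list are assumptions for illustration; they are not the actual tool from the workshop repository:

```json
{
  "scripts": {
    "applypatches": "rm -f patches/@remix-run+*.patch && rsync -a ../remix/build/node_modules/@remix-run/ node_modules/@remix-run/ && patch-package @remix-run/dev @remix-run/server-runtime"
  }
}
```

The order matters: old patch files are removed first, the patched build is synced over the installed packages, and then `patch-package <pkg>` regenerates a fresh patch file for each modified package by diffing `node_modules` against the pristine install.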
So just like you can inline your loaders and actions, which are server-side code, you can now inline your test code. It's basically guarded by checking for NODE_ENV equals test, and then all your testing code is right here. So we have a mocks file that sets up all of the Vitest mocks for you, and then you just go in and provide the return values. Like here: if you want to mock the data that comes from the loader, you just mock the return value of useLoaderData and pass in whatever data you're expecting. Same with any other hooks; you would mock those too. Then when the route renders and it calls useLoaderData, because the loader itself didn't actually run, it's just going to return whatever data you specified as your mock return value. This way you can test different scenarios, whether it's the happy path or some kind of error condition: you just generate different mock return values and then make sure that your component reacts accordingly. The main thing is that it renders using React Testing Library, so all your expectations should be against the rendered markup, not the React component. So here, this one is just checking to see if the link was rendered. In the route it's a Remix Link component with a `to` attribute, but when we actually test it, we use getByRole. Testing Library recommends that you don't try to find specific elements, but query by role instead. So this is just going to return the link, which is the rendered anchor tag, and then we call getAttribute('href').
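Here is a minimal, framework-free sketch of that inline-test pattern. The real project uses Vitest's `vi.mock` and React Testing Library; the stand-in hook, the string-rendering "component," and all names below are illustrative assumptions, not the actual workshop code.

```javascript
// Framework-free sketch of the co-located test pattern described above.
// A stand-in useLoaderData lets the test control what the "hook" returns,
// so the real loader never has to run.

let mockedLoaderData; // the test sets this, like vi.mock's mockReturnValue

function useLoaderData() {
  // The component calls this; the loader itself is never executed in tests.
  return mockedLoaderData;
}

function Greeting() {
  // A tiny "component" that renders to a markup string instead of JSX.
  const { name } = useLoaderData();
  return `<a href="/hello?name=${name}">Hello ${name}</a>`;
}

// Test code co-located with the route, guarded exactly as in the transcript,
// so it is stripped from any non-test build path.
if (process.env.NODE_ENV === "test") {
  mockedLoaderData = { name: "test" };
  const markup = Greeting();
  // Expectations run against the rendered markup, not the component tree.
  console.assert(markup.includes("Hello test"), "renders mocked loader data");
}
```

Swapping in a different `mockedLoaderData` value is how you'd exercise the error path versus the happy path without touching the loader.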

22. Testing with Remix and Playwright

Short description:

Remix allows you to test against the result, rather than the implementation, making it easier to ensure your app is working as intended. You can mock return values for hooks and perform unit tests for loaders and actions. By passing in controlled inputs and verifying the responses, you can test different scenarios. Playwright is a powerful tool for end-to-end integration tests, automating user actions and verifying expected results. It offers a scripted approach, allowing you to specify step-by-step actions. Playwright is fast and provides readable test files. To simplify testing routes that require authentication, you can create an API endpoint to get a logged-in user automatically. Thank you for your feedback and feel free to reach out on Twitter, Discord, or GitHub for further assistance.

So remember, Remix is going to render that Link as an anchor tag, an a tag with an href pointing to the actual route. So this is what you're going to run your expectations against. Testing Library does not want you to test the implementation; it wants you to test the result, because the result is closer to what the user sees. And if you are testing what the user sees or does, then you can be more confident that your app is working as intended. Same thing here: you can mock the return values for any hook and pass in whatever values you need, and so forth.

Then this section shows you how to do unit tests for your loaders and actions. A loader is simply a function that Remix happens to call at certain points, and it passes in things like a request, context, and params. So in order to test your loader, all you need to do is create a new Request object with whatever data you're expecting, run your loader, get the response, and then do assertions against that response. So here we're calling the loader, we're expecting the status to be 200, and then we get the JSON payload and inspect it to make sure we're getting the correct data. And you just run through different scenarios. Here's one with no name parameter; here's one where you pass a name in the query string but don't provide a value; and here's one where you do have a value, so instead of the default, the result should be hello-test. Note that in this case you're not mocking anything. When you're testing your loaders and actions, you don't want to mock them; you actually want to run them and make sure that what they're doing is correct. So you're passing in controlled inputs and then verifying the responses. The nice thing about that is that you can find all your different edge cases, pass in a request that matches each one, and verify that your loader or action responds correctly.
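Because a loader really is just a function from Request to Response, this can be sketched with nothing but Node's built-in Request/Response (Node 18+). The route path and the `hello-<name>` defaulting behaviour below are assumptions made to mirror the transcript's example, not the actual workshop code:

```javascript
// Framework-free sketch of the loader unit test described above:
// build a Request with controlled inputs, run the real loader (no mocks),
// and assert against the real Response.

async function loader({ request }) {
  const url = new URL(request.url);
  // Assumed behaviour: greet the ?name= query param, defaulting to "world".
  const name = url.searchParams.get("name") || "world";
  return new Response(JSON.stringify({ greeting: `hello-${name}` }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}

// One scenario per edge case: just vary the Request you pass in.
async function main() {
  const response = await loader({
    request: new Request("http://localhost/hello?name=test"),
  });
  const data = await response.json();
  console.log(response.status, data.greeting); // 200 hello-test
}
main();
```

The same shape works for actions: construct a Request with a method of POST and a FormData body, call the action, and assert on the returned Response.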

And then finally, this example also shows you how to do end-to-end integration tests using Playwright. Playwright is similar to Cypress in that it will launch a browser in the background, run against a real server, and automate tests: go to the URL, wait until some text has been rendered, then find an input and type into it. So it's very scripted; you specify step by step what the user would be doing. They would fill the name field with "test", then click on the submit button, and then you wait until the page re-renders and check that it has the text you expect. Let me actually go to... oh, here it is: run test. So here it is running the unit tests. It runs through all your routes, and there are your test cases that have passed. And then here's the one running the integration tests. It's actually launching now: it finds all the tests, launches three worker processes, runs Chrome against each of those, runs through the script, and verifies that everything is correct. So: change this name on submit. Yeah, see, I wanted to show off when I was doing the discussion that it will fail. It should technically be hello-test. So if I comment that back out and rerun it, it should pass now, because... oh yeah, I got it, sometimes the server doesn't reload. So now those tests pass because I have the correct value. So if you're interested in using Vitest or Playwright, check this out, because it's got all of the setup done for you. It makes sure that globals are set up, and it even includes code coverage. So if I npm run coverage, it will launch a server to show the test coverage. I'm not going to worry about getting a hundred percent on that.
So yeah, everything's already pre-configured, and it shows you examples of how to use it. I think it makes it really simple to do your testing, especially with the fact that you can create your tests while you're creating your routes. I think that's pretty slick. And if your route files get too big, if you use Remix flat routes you can co-locate your route file with your test file, so you can have them all in the same folder instead of having them separated. Yeah, from what I understand, Playwright is actually really fast compared to Cypress. I'm not sure if that's because they've optimized things or if they're doing less stuff than Cypress, but for general testing it works really well. I actually like the way the test files read, too. It's very readable, and it makes it simple to say, yeah, I understand exactly what that page is trying to do. If you have an app that requires authentication for routes: each test is basically a brand new session, opening the browser and going to a page. So instead of having to go to your login page and log in with an actual user for every test, I took a page out of Kent's test scripts. What he does is create an API endpoint to get a logged-in user automatically. That endpoint is obviously only active during testing, but he just hits it to get an active user, and then the rest of the script can go on. That way he doesn't have to do the whole login sequence for every test. So that is basically my workshop. It's not your traditional hands-on one, but I think this is one of those things that's kind of like a blog series, but in video form. Well, thank you, everybody, for sticking it out with me. I appreciate the feedback. And again, if you have any questions, you can follow me on Twitter at Kiliman. I'm on Discord as well as Kiliman, and you know my GitHub.
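That test-only login endpoint might be sketched like this, as a plain function (a Remix resource-route loader is just a function from Request to Response). The route behaviour, cookie name, and session value are all invented for this demo; they are not Kent's actual test helpers.

```javascript
// Hedged sketch of a test-only login endpoint: only active under
// NODE_ENV=test, where it hands back a session cookie for a seeded user
// so end-to-end scripts can skip the login form.

async function testLoginLoader({ request }) {
  // Outside of tests, behave as if the route doesn't exist.
  if (process.env.NODE_ENV !== "test") {
    return new Response("Not Found", { status: 404 });
  }
  // A real app would create the session through its session storage;
  // here we just return a fake cookie for an assumed seeded user.
  return new Response(null, {
    status: 204,
    headers: {
      "Set-Cookie": "__session=seeded-test-user; HttpOnly; Path=/",
    },
  });
}
```

A Playwright script could hit this endpoint once at the start of a test and reuse the resulting cookie for the rest of the run, instead of driving the login form in every test.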
So definitely feel free to post any comments or feedback in the discussions there, and hopefully I'll be able to help out any way I can. That's just one of the things I like to do: I like to help people and teach people stuff. All right, well, that's it, folks. Hope you enjoy the rest of your week, and I will talk to you soon.
