How to Solve Real-World Problems with Remix


- Errors? How to render and log your server and client errors

a - When to return errors vs throw

b - Set up a logging service like Sentry, LogRocket, or Bugsnag

- Forms? How to validate and handle multi-page forms

a - Use zod to validate form data in your action

b - Step through multi-page forms without losing data

- Stuck? How to patch bugs or missing features in Remix so you can move on

a - Use patch-package to quickly fix your Remix install

b - Show a tool for managing multiple patches and cherry-picking open PRs

- Users? How to handle multi-tenant apps with Prisma

a - Determine tenant by host or by user

b - Multiple databases, or a single database with multiple schemas

c - Ensure tenant data is always kept separate from other tenants



Welcome, everyone, to my inaugural remote workshop for RemixConf Europe. Today we're going to be talking all things Remix. My name is Michael Carter; you may know me online as Kiliman, which is short for Kilimanjaro. I used to have a consulting company called Volcanic Technologies, all my computers are named after volcanoes, and I wanted a short nickname I could use online; Kiliman just happened to sound really good. This is my first time giving a workshop, and it's going to be a little different from what you're probably used to, in that we're not really trying to build an app. I've been using Remix since day one, when they first started licensing it. In fact, I got one of the first licenses when Kent tweeted the purchase link, and apparently he tweeted it a little too early, because the license I got was their one-dollar test license. I've been part of the group that helped refine the API just by using it and giving feedback, so when they finally released Remix publicly back in November, I was excited for everybody else to be able to use it. With that said, this workshop is primarily about the things I've learned using Remix, as well as from doing web development in general over the years. We're going to cover four main topics; let me share my screen so we can see them. The first is error logging: what are errors in Remix, and how do you handle them?
So: what are errors in Remix, and how do you handle them? Then multi-page forms, a topic a lot of people have asked about: with Remix loaders and actions, what is a good way to deal with forms? I'll also talk about how I patch Remix. Remix is a fast-moving framework; the people on the team are very smart, but it's a small team, and issues come up. There are a lot of pull requests and bug fixes sitting out there, open and waiting to be merged, but you have an app to build and you've got to keep moving. So I'm going to show you how I manage patches for Remix so I can get unblocked and continue building my app. Finally, we'll talk about multi-tenant apps and how to build them with Prisma. That's another question I see a lot on Discord. It's interesting because there are two parts to multi-tenancy, which we'll discuss; I focus more on the back-end part, how to access the database for multiple tenants. And if we have time, I'll go back over the testing material from the discussion group last week; I have the code and all my notes for that. So without further ado, let's get started on the error logging portion. I'm going to start my timer. We'll probably take a break about an hour and a half in; people will need drinks and bathroom breaks, and I know that after about an hour and a half my voice is probably going to go. Let's get started.
So here we're talking about error logging. First, let me post the link to the GitHub repo in the chat, in case you haven't seen it yet, so you can follow along with the code. (Sorry, I've got the microphone right in my face and have to look around it to see the screen.) All right, back to the error logging slides. When it comes to errors, what exactly is an error in Remix? I see three different categories. First, expected errors. Second, bad client requests: remember, clients can send anything to your app, regardless of whether you have client-side validation, so you always have to validate on the server. And third, unexpected server errors: null or undefined references, an API you called incorrectly, the database being down, those kinds of things. So what are expected errors? These are typically caused during request validation. When you have a form, you'll usually have some client-side validation, and once the data passes, the form is submitted. But you also want server-side validation; never assume your client-side validation will catch everything, because anybody can post to your endpoint directly and bypass any rules you may have in the client. So what happens when you get a validation error and the data is not in the correct format?
We want to let the user know there are problems so they can fix them and submit the data again. These types of errors should always be returned to the client from your action. Do not throw an error or a response. I see a lot of people think, hey, I got a validation error, so I'm going to throw a response saying bad request; but then you end up in your CatchBoundary instead of back in your form route. So for expected errors, namely validation errors, don't throw an error or a response; simply return that data in your JSON payload. The next category is bad client requests. These are typically things like invalid URL params: for example, your user ID is supposed to be a number but the client sends a string, or they send a user ID for a user that doesn't exist, in which case you'd send a Not Found response. Or unauthorized access to a resource: for example, a user changes the URL to try to access somebody else's notes. Even if the note exists, you want to verify that this user has access to it, and if not, throw an Unauthorized response. These are the cases where you do want to throw a new Response, and I have some helper functions, which I'll show you, that I use so I know which types of responses to throw. You're essentially saying: I'm unable to process your request with the information you provided. For the most part you'll display these errors in your CatchBoundary, where you can say that note is not available, you're not authorized, or invalid request because you sent a string instead of a number, and so forth. And the final category is the unexpected errors, which are truly unexpected.
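The throwable response helpers mentioned above live in the repo's utils folder; as a rough sketch (these exact names and signatures are assumptions based on the talk, not copied from the repo), they are just thin wrappers around `new Response`:

```typescript
// Hedged sketch of throw-style response helpers, assuming simple message/status pairs.
function badRequest(message = "Bad Request") {
  return new Response(message, { status: 400 });
}

function unauthorized(message = "Unauthorized") {
  return new Response(message, { status: 401 });
}

function forbidden(message = "Forbidden") {
  return new Response(message, { status: 403 });
}

function notFound(message = "Not Found") {
  return new Response(message, { status: 404 });
}

// In a loader or action you would throw these so Remix renders your CatchBoundary:
//   if (!user) throw notFound("User not found");
```

The point of throwing rather than returning is that the thrown Response short-circuits the rest of the loader, so the happy path below it never runs.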
These are not errors you plan for, and you want to avoid turning bugs into them. Do defensive coding: verify that you don't have null values before you access them, or use the null coalescing operator, and make sure you call your APIs correctly. But some things are just beyond your control, such as the database being down; your app can't plan for that and just has to try to recover gracefully. I recommend that you don't throw errors directly in your application code. Errors should typically be thrown by libraries or external services. If it's a condition you could have checked for, you should have dealt with it and let the user know there was a problem; by throwing an error yourself, you're just throwing your hands up in the air and saying, sorry, can't do anything. It's much better to do the checks before you call your functions. So this class of errors should be strictly for the unexpected. These errors will be rendered in your ErrorBoundary, and if you don't have an ErrorBoundary in your route, they bubble all the way up to your root, which may not be a good user experience, so try to minimize them. I also don't recommend using try/catch in your application code, because, again, errors should be rare, and the only time one should be thrown is because you didn't do the defensive coding to check for that potential error beforehand. Unless you can actually do something about the error, it's easier to just inform the user, and you're probably going to have error logging anyway, so you'll be notified when these errors occur. All right, next slide. Now we're going to actually start looking at code, beginning with the first class of errors: the expected errors.
So we're going to do some validation, not only with forms but also with params and so forth. I wrote some helper libraries for this; you may have seen some of the packages I've created. One of them is called remix-params-helper. I use Zod to do my validations. It's a great library: you define what the structure of your data should look like, and then you validate against it. My helper not only does the validation, it also parses the data from the form, the params, or the search params, and converts it into the format the Zod schema expects. By default, pretty much everything comes in as a string, but if you have numbers or booleans or any other type, it will convert the final result, so you don't have to do the conversion after the fact. Here's my typical loader. Actually, let me show the route first: if you saw my remix-flat-routes discussion, this should be familiar. The route is users.$userId.edit, so I have a userId param, and I expect it to be a number. I get my params and call my helper function getParamsOrThrow, passing in the params object and the schema, which says I expect a userId that is a number. There are variants: getFormData, getFormDataOrThrow, getParamsOrFail. The Fail variant throws an error, which I rarely use; the Throw variant throws an invalid response, and I'll show how that works shortly. At this point, I want to verify that everything I'm doing is correct so that I can focus on the happy path, so I check for all my errors up front.
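To make the "parse and coerce the params" idea concrete, here is a deliberately simplified sketch of what a getParamsOrThrow-style helper does. The real remix-params-helper takes a Zod schema; this stand-in uses a tiny hand-rolled spec instead, so the names and shape here are illustrative assumptions, not the package's API:

```typescript
// Simplified stand-in for a getParamsOrThrow-style helper: coerce string params
// into declared types, throwing a 400 Response when coercion or presence fails.
type ParamSpec = Record<string, "string" | "number" | "boolean">;

function getParamsOrThrow(
  params: Record<string, string | undefined>,
  spec: ParamSpec,
) {
  const result: Record<string, string | number | boolean> = {};
  for (const [key, type] of Object.entries(spec)) {
    const raw = params[key];
    if (raw === undefined) throw new Response(`Missing param: ${key}`, { status: 400 });
    if (type === "number") {
      const n = Number(raw);
      if (Number.isNaN(n)) throw new Response(`Expected number: ${key}`, { status: 400 });
      result[key] = n;
    } else if (type === "boolean") {
      result[key] = raw === "true";
    } else {
      result[key] = raw;
    }
  }
  return result;
}

// Usage in a loader: const { userId } = getParamsOrThrow(params, { userId: "number" });
```

With the real helper, the Zod schema also drives full type inference, which the hand-rolled version above does not attempt.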
Once I get past that, if I haven't thrown or returned something, I know everything is good and I can return my response. getParamsOrThrow returns a userId, and because Zod knows userId is a number, my variable is typed as a number; I get type inference, and Zod is very good at giving you correct types. Now that I have a userId, I call my getUser function and return the user. Don't worry about the data source: this isn't a real database, just a static file, but it returns my user, typed as a User. If I don't have a user, again, I'm doing defensive coding, so I check whether my expectations have been met. If they passed in a userId that is a number but happens not to exist in the database, I throw a notFound response. notFound is one of the helper functions you'll see in the utils folder in the repo; all it does is return a new Response. I have one for notFound, one for badRequest, one for unauthorized, one for forbidden. I'll show the invalid and success helpers shortly, when we do more validation. So here it throws a 404 error, which ends up in my CatchBoundary. Past that point, I return my data, and as you can see, I'm using my typed JSON helpers. typedjson is basically a replacement for the standard json function and the useLoaderData and useActionData hooks. It serializes the JSON payload but also includes metadata for non-JSON types like dates, BigInt, Sets, and Maps. You may be familiar with superjson, another library that does this.
superjson definitely has far more support for complex types. My library focuses on the common types like dates, and it's also a lot smaller because it is more limited: about 2.5 KB minified versus almost 10 KB for superjson. When you want to minimize your client payload, that's a useful difference. The other reason I like typedjson is that it doesn't wrap my JSON responses in the whole SerializeObject wrapper. When I return typedjson here, I return the user as a User type, and when I read it in my component with useTypedLoaderData<typeof loader>, using the standard type inference you get with Remix now, my user object is actually a User, not that SerializeObject-with-optional-wrapper type, which I think makes things really hard when you return objects with nested properties that have complex types; it's very hard to get access to those, and I'll show you that in a bit. So let me launch the app so you can see this in action. (Sorry, you'll see me dodging my microphone.) Here's our users route; it's pretty simple, similar to what we just saw, just returning the list of users. When I click on a user, I'm in the user edit route, which again uses the typed loader data. Here are the form fields. I'll talk about this getField helper shortly; it's mainly helpful when validation fails. I've done no client-side validation here, although typically you'd add things like the required attribute, a pattern, a numeric field, or an email input type.
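The core typedjson idea described above, serializing plain JSON alongside metadata that records which keys hold non-JSON types, can be sketched minimally like this. This is my own illustrative reduction handling only flat objects and Date values; the real package also covers BigInt, Set, Map, nested fields, and more:

```typescript
// Minimal sketch of the typed-JSON idea: plain JSON plus a metadata map so that
// non-JSON types (here, only Date) can be restored on the client.
function serialize(obj: Record<string, unknown>): string {
  const meta: Record<string, string> = {};
  const plain: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value instanceof Date) {
      meta[key] = "date";            // remember this key was a Date
      plain[key] = value.toISOString();
    } else {
      plain[key] = value;
    }
  }
  return JSON.stringify({ json: plain, meta });
}

function deserialize(payload: string) {
  const { json, meta } = JSON.parse(payload);
  for (const [key, type] of Object.entries(meta as Record<string, string>)) {
    if (type === "date") json[key] = new Date(json[key]); // revive the Date
  }
  return json;
}
```

This is why, with the typed hooks, a loader that returns a Date gives you a real Date on the client instead of the serialized string.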
If I typed something that wasn't an email, client-side validation would flag it, but I want to show you how validation works server-side and how you can render those errors. So I leave the field blank and hit submit, and now it shows that age is required. If you look at my action, I do getParamsOrFail first, just because I need the userId. Then I call my helper getFormData, passing in the request; it awaits request.formData() and validates against the form schema. The schema is a Zod schema with a required name, an email of email type (so it validates that it's a valid email address), and an age that's a number. Because validation failed, it knows that age was required. getFormData returns three things: errors, data, and fields. Why three? Errors, obviously, are the form validation errors: if any Zod validation fails, I get an errors object keyed by property name with the error message. In this particular case the error was on age, with the message "Required"; you can also use Zod's options to customize the error messages. The data that's returned is typed: based on the schema it's a fully inferred type with name, email, and age, and it will be valid as long as the schema validation passes. And finally, the fields. Why do we need the fields object? Because of what happens when validation fails. When you have JavaScript enabled, React will keep the values that were submitted, so even if I type some junk and hit submit, see, now it says "Expected number, received string".
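The three-part result described above can be sketched roughly as follows. This hypothetical validateForm inlines the checks that the real getFormData delegates to a Zod schema, just to show the { errors, data, fields } shape:

```typescript
// Sketch of the { errors, data, fields } result shape. The real helper validates
// against a Zod schema; here the checks are inlined for illustration.
type FormResult =
  | { errors: Record<string, string>; fields: Record<string, string>; data?: undefined }
  | { data: { name: string; email: string; age: number }; errors?: undefined; fields?: undefined };

function validateForm(fields: Record<string, string>): FormResult {
  const errors: Record<string, string> = {};
  if (!fields.name) errors.name = "Required";
  if (!fields.email || !fields.email.includes("@")) errors.email = "Invalid email";
  if (!fields.age) errors.age = "Required";
  else if (Number.isNaN(Number(fields.age))) errors.age = "Expected number, received string";
  // On failure, hand back both the messages and the raw fields for re-rendering
  if (Object.keys(errors).length > 0) return { errors, fields };
  // On success, the data is fully parsed: age is a number, not a string
  return { data: { name: fields.name, email: fields.email, age: Number(fields.age) } };
}
```

The action then returns the errors/fields branch to the client, or proceeds with the typed data branch.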
As you can see, the field value was still there; React kept it. When I set defaultValue, default values only get applied when the component mounts, and the form doesn't remount, it just re-renders with new data, so React keeps what the user typed. However, if for whatever reason JavaScript isn't enabled, when Remix re-renders your form on the server, it's just like the initial page render, so the defaultValue would end up being whatever the original user data was; instead of the junk I typed, I'd see 52 again. So what we do is return the fields: fields basically just hands your submitted form data back to you. Then, when I set up the form, I call the getField helper. You pass in the original data (your initial data), the fields, and the property name. If there are no fields, defaultValue uses the initial data's value; if there are fields, it uses the submitted field value as the default. That way, even when you re-render without JavaScript, the form refills with the values the user actually submitted. Let me show what would happen without this helper, since that's what you'd typically write: I refresh, the age shows 52, I type something invalid, hit submit, and I get 52 back, because without JavaScript, Remix always server-side renders, and server-side rendering re-initializes defaultValue to whatever the existing age was. By using this helper, I ensure I get the correct value.
What's nice is that getField is type-inferred: it's generic over the type of the initial data and will only let you access keys that exist on it. Even though fields is keyed by string, the name parameter always has to be a key of the initial type, so if I pass an invalid name I get a type error: not assignable to parameter of type keyof User. Before we go further: we had the user from the typed loader data, which is our initial data, and the action data is what comes back from our action. If we have any errors, we return invalid. invalid is another helper function that takes our data and returns it as a bad request, but in a specific shape: an errors key and a fields key. On the client, my getInvalid helper expects that shape, and errors and fields have different types, so errors comes back as an errors type and fields as a fields type. There's also a getSuccess helper, typically for when you return actual data from your actions. My recommendation is that unless you're using fetcher forms, you always redirect from your action on success and only return when there are errors. If you're doing something like a like button, you probably want a fetcher form: run the action and just return a success response. But for anything else, I'd recommend a redirect. The main reason is that by redirecting, you don't end up in the state where the POST is still in your browser history, which is how you get the "don't refresh after submitting or your credit card may be charged twice" problem.
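Here is a hedged sketch of the getField helper described above. The exact signature is my assumption from the talk: prefer the re-submitted value from fields (returned by the failed action) over the original data, so the form re-renders correctly even without JavaScript:

```typescript
// Sketch of a getField-style helper: returns the defaultValue for a form input,
// preferring the user's re-submitted value over the original loaded data.
function getField<T extends Record<string, unknown>>(
  initial: T,
  fields: Record<string, string> | undefined,
  name: keyof T & string,
): string {
  // If a failed submission returned fields, use the value the user typed
  if (fields && name in fields) return fields[name];
  const value = initial[name];
  return value == null ? "" : String(value);
}

// Usage in JSX (actionData comes from useActionData on a failed submit):
//   <input name="age" defaultValue={getField(user, actionData?.fields, "age")} />
```

Because the helper is generic over the initial data, passing a name that isn't a key of that type is a compile-time error, which is the type-safety point made above.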
Redirecting gets rid of that extra POST entry in the history. If we do have errors, they're keyed by field name, so I can check whether there's an error for a given field and render its message; that's why errors.age rendered earlier, because there actually was an error on age. Let me come back over here and re-enable my JavaScript before I forget. So that's validation, and all of this is in my notes, so if you miss any part of the workshop you can review it later; I was up until midnight trying to get this workshop finalized. On to the next slide: server-side errors. This is all well and good, you've got your validations, but you're eventually going to have actual unexpected errors, and you need to know when they occur. Right now, by default, Remix will simply log the server-side error to your console. A lot of hosts will give you access to those console logs, but it's still a big wall of text you have to scroll through or parse. So typically, you'll want to log your errors to some error logging service, like Sentry, Bugsnag, or LogRocket. Some services, like Sentry, actually have a package for Remix that you can install; it hooks into Remix's pipeline to make sure the errors are logged properly. If your logging service of choice doesn't have a Remix package, the Sentry one is open source, so you can look at it and see if you can adapt it for your service. In my case, this was before Sentry had their package; I wanted to be able to log bugs, and there was really no easy way to get access to the error in Remix.
As somebody who has to get their work done and move on, I could have just complained and posted an issue on GitHub asking why there's no way of accessing the errors. Instead, I went ahead and hacked the source, and we'll talk about patching in a little bit, but essentially I created a patch that adds a handleError export to your entry.server. Remix will now call your handleError function any time it catches an error, passing in the actual Request, the error that occurred, and the context. This is the context that would come from getLoadContext if you were using Express or something like that. I'm just using remix-app-server here, so I don't really have context, but in an example I posted online I used Express and showed how you could use the context to store the current user, so that when notifying Bugsnag you can pass in the current user and see in the Bugsnag console which user hit the error. Aside from the handleError export, I imported Bugsnag, just followed the directions they provide, and started Bugsnag with the API key; at that point I can start calling notify. So here's my Bugsnag console, and back in my app, this is demo time; hopefully it works, it was working last night. I throw an error, and as you can see, I get the wall of text in my log, but notice I also get the "notify Bugsnag" message in the console, so my handleError got called. And if I go over to my logging dashboard and refresh... there it is, 47 seconds ago.
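The handleError export that the patch adds might look roughly like the sketch below. The signature (request, error, context) is taken from the description above, but the notify function here is a local stand-in for a logging service call such as Bugsnag.notify, so treat the details as assumptions:

```typescript
// Sketch of a handleError export for entry.server (shape assumed from the talk).
type LoadContext = Record<string, unknown>;

// In a real app this would be Bugsnag.notify / Sentry.captureException; here it
// just records what would have been sent, so the flow can be demonstrated.
const notifications: { message: string; url: string }[] = [];

function notify(error: Error, request: Request) {
  notifications.push({ message: error.message, url: request.url });
}

// Remix (with the patch) calls this whenever it catches a server-side error.
function handleError(request: Request, error: Error, _context?: LoadContext) {
  // Forward the error plus request info (and, via context, the current user
  // if your adapter provides one) to the logging service.
  notify(error, request);
}

// Example invocation, as the framework would do it:
handleError(new Request("https://example.com/users"), new Error("boom"));
```

The context parameter is where an Express getLoadContext could stash the current user, which is how the Bugsnag console ends up showing which user hit the error.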
It just took a second to show up. Let's look at that error from 47 seconds ago... hmm, that's weird. Let me go back to a previous one, because it actually did show the stack trace when I first did it. Here's the issue: in development, you get the stack trace by default. Remix enables source maps, and it compiles your app down into a single file; but when you're getting actual errors, you want to see what line and file the error actually occurred in. What I tried to do, and I think I messed this up, is upload my source maps for the production build, because I wanted to show that, yes, you can have source maps for your production errors. But I think that's also interfering with my dev errors, so let me delete those uploads from over here. (I never did get this thing to work; as you can see, I was up late last night.) Yes, delete this one too. I think those source maps were interfering with my error log here, so let me restart the app, and hopefully that will correct it. We get the error, come over here, refresh... nope, I'm not really sure why it's doing that. It is showing that the error is in the root layout, but before it was showing color-coded syntax highlighting and all that, so I don't know what happened there. Still, this at least demonstrates that you can log your server errors even though Remix doesn't provide a built-in way. I actually have a script to upload source maps.
I'm still working on that script, but if you're using Bugsnag and want to know how to upload source maps, this is how. And here's a cool trick, which I'll also show a bit later in the Prisma part: if you have shell scripts that need access to keys from your environment, you can use the shell's source command to import those environment variables, because a .env file is basically just a regular shell script anyway. The set -a before the source auto-exports any variables that get assigned while it's active, and set +a turns that behavior back off. Then it's just a curl command to do the upload. (Sorry, I was hoping somebody would yell out if there was a question; I haven't been following the chat, I just zoned in on my code, so let me go back and look at the questions.) Andre asks: at some point, could you elaborate on throwing versus returning in actions and loaders? To recap: return invalid data from your actions. Loaders aren't really the issue for returning, because a loader either returns data or it doesn't; that's the point of loaders. It's actions that cause confusion. Actually, throwing is important in both: throw responses, don't throw errors. Throw an invalid response, a notFound, or an unauthorized response. The nice thing about throwing is that you can have helper functions that do the throw for you. For example, if at the start of your loader you require a user, you can call getRequiredUser, and that function can verify that you have an actual user; if you don't, it can throw a redirect to your login page.
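The source trick for loading .env variables into a script can be sketched like this. The file path and variable name are made up for the example; in a real project you would source your existing .env and then call curl with the real key:

```shell
# Write an example .env (in a real project this file already exists)
cat > /tmp/example.env <<'EOF'
MY_VAR=hello
EOF

# `set -a` auto-exports every variable assigned while it is active;
# `set +a` turns that behavior back off afterwards.
set -a
. /tmp/example.env
set +a

# MY_VAR is now exported, so child processes (like curl) can see it:
sh -c 'echo "$MY_VAR"'
```

Without set -a, sourcing the file would set the variables in the current shell only, and commands like curl would not inherit them.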
With a helper like getRequiredUser, you don't have to check inside the loader itself whether you have a valid user and then return the redirect; the helper throws the response for you. So: throw responses. And in actions, don't return data except for validation errors. If it's not a full form, if you're just using fetchers or whatever, then you can return data, obviously, so the caller can use it; but typically you want to redirect on successful form actions. Nested fields: yes, I'm not sure I have an example, but we can create one. Here's something I did this morning to show why I use my typed loaders instead of the standard ones. In this version I'm using the standard Remix functions. When I return an invalid response with all these errors, my useActionData gets mangled with this SerializeObject/UndefinedToOptional wrapper. Because everything is wrapped in that, it's hard to get at the typing. Even though my getInvalid calls getErrors, which checks whether the particular key is in the data and returns the data with the right type, all of that goes by the wayside, because the initial data I'm passing in doesn't have the full errors type, and I haven't figured out exactly how to unwrap it; I really just want access to the internals, not the serialized wrapper. That's the main reason I use my typed version: I just pass along whatever type I return from my loader, with the assumption that when the payload is deserialized from JSON, any type conversions it needs, such as string to Date and so forth, are done automatically before I get my data. And it does support nested fields.
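Going back to the getRequiredUser pattern from the Q&A, a minimal sketch might look like the following. The session shape here is a simplified assumption, not the real Remix session API, and requireUser is a hypothetical name for the helper described:

```typescript
// Hedged sketch of a throw-a-redirect helper (the talk calls it getRequiredUser).
type User = { id: number; name: string };

function redirect(location: string) {
  return new Response(null, { status: 302, headers: { Location: location } });
}

function requireUser(session: { user?: User }): User {
  // Throwing a Response short-circuits the loader; Remix turns it into the redirect.
  if (!session.user) throw redirect("/login");
  return session.user;
}

// In a loader: const user = requireUser(session);
// No null checks needed afterwards; `user` is guaranteed to exist.
```

This is the payoff of "throw responses, don't throw errors": the call site stays on the happy path, and the unauthenticated case is handled in one place.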
I don't have a nested-fields example in the repo yet, but I'll try to come up with one and update it. And yes, there will be a recording. This is the first time I've worked with Git Nation, but my understanding is that recordings will be available to members there and to anybody who bought a ticket to the conference. Again, if you have a question and I miss it, just shout out, or type something to get my attention so I don't miss you. All right, that wasn't too bad, 40 minutes, though I'm losing my voice already. A lot of this stuff, typedjson and my params helper, are packages, but in most cases they're a single file. Every project is a little bit different, and you end up wanting to tweak the code as you come up with better patterns, and I don't necessarily want to go through the whole publish-and-import process each time, so I just copy the file directly into my project and edit it there. Eventually, some of the enhancements I've made will make it into the actual package. I did break some of the API; I changed how some things are done, and I think the new way is simpler, so it will be a breaking change. Luckily, I'm still on version 0.x, and it's not a major change, so anybody can pretty much do a search and replace. Also, typedjson itself can be standalone; the Remix part is just a wrapper that handles all the Remix-specific pieces. All right, that's the last of this slide. One topic down, three or four to go. I hope this is interesting to you.
It may not be your typical hands-on workshop, but I think this format helps. And yes, I'm pretty sure there are better ways to do it — I'm not a TypeScript expert; I kind of hack at the types until they're in a form that works for me, so I'm sure there are plenty of ways to simplify or clean things up. The funny thing is I've been using typedjson for a while — two or three months, since I first wrote it — so all of my code already uses it.

In fact, I have a helper for this. As you know, I write a lot of code, so I have a command script called genRemix. I'll show you how it works. genRemix reads a config file where I specify which packages I want to re-export. Remember when Remix first launched — I'm not sure how many of you were there — they had a meta package called just `remix` that re-exported all the platform-specific packages: Node, Cloudflare, Vercel, and so forth. You didn't have to worry about which package an import lived in; it was all in the `remix` virtual package. However, the way they implemented it was by rewriting files inside node_modules, which turned out to be bad form, especially for people using Yarn Plug'n'Play and similar setups. So instead, my script generates a file in my app directory called `remix`. I have it set to run on postinstall, so any time I update Remix, it regenerates the correct version. Actually, let me install this, because I'm overriding...
This is one of the things I like about it — again, because I'm lazy. I override the default Remix functions — json, redirect, and the meta functions from @remix-run/node, plus useLoaderData and the rest from the React package — with my typed versions. So in my code I can just use json and useLoaderData, and they resolve to the typed versions. I don't have to remember to reach for the typed names explicitly.

So here — the script went through all the packages I re-export and created a new `remix` file with all the exports, including my overrides. (This part is not my fault: they actually exported this as the `json` function when it should have been exported as a type. The helper goes through and figures out which names are values and which are types. I think those are all just issues with the original exports.)

One nice thing is I can now go into a route and try it. I'll create a new one, test. I export a loader, and when I add the import, I can select my `remix` package. So: json, with `message: "hello world"` and `date: new Date()`. Then `export default function Test()`, use useLoaderData — the typed-loader version from my module — and render the message and `date.toLocaleDateString()`. As you can see, I'm using my remix re-exports — I have json, I have useLoaderData — but these are the typed versions, because the editor knows `date` is a Date. If you did this in standard Remix, `date` would be typed as a string, because that's the JSON-serialized version.
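The generator step can be sketched as a small function that emits the re-export module's source text. This is an assumption about how such a script might look, not the actual genRemix implementation — the config shape, names, and override module path here are all illustrative.

```typescript
// Sketch of a genRemix-style generator: given which exports you want from
// the platform package, emit one module that re-exports everything, with
// selected names overridden to point at typed local versions.
interface GenConfig {
  valueExports: string[];                 // runtime values, e.g. "redirect"
  typeExports: string[];                  // type-only names, e.g. "LoaderFunction"
  overrides: Record<string, string>;      // export name -> local module with the typed version
}

function generateRemixModule(cfg: GenConfig): string {
  const lines: string[] = [];
  const overridden = new Set(Object.keys(cfg.overrides));
  // re-export non-overridden values from the platform package
  const plainValues = cfg.valueExports.filter((n) => !overridden.has(n));
  if (plainValues.length > 0) {
    lines.push(`export { ${plainValues.join(", ")} } from "@remix-run/node";`);
  }
  // type-only names get `export type` so they don't become runtime exports
  if (cfg.typeExports.length > 0) {
    lines.push(`export type { ${cfg.typeExports.join(", ")} } from "@remix-run/node";`);
  }
  // overrides win: these names come from the typed local modules instead
  for (const name of Object.keys(cfg.overrides)) {
    lines.push(`export { ${name} } from "${cfg.overrides[name]}";`);
  }
  return lines.join("\n");
}
```

Run on postinstall, a script like this keeps the generated file in sync with whatever Remix version is installed, without touching node_modules.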
So here, because it's a Date, toLocaleDateString still works. If you were using the standard serialized types — and if you didn't use the `typeof loader` generic at all, so everything was `any` — you could still call that method, but you'd get a runtime error, because a string doesn't have it as a function. Now let me go back to my app and navigate to test. And there you go: I got the "hello world" and the date.

Question: did you ever discuss this with the Remix team? I'd find it quite useful if they included this functionality in standard Remix. In terms of typedjson? Yes — there's an actual discussion. Sergio originally opened one proposing superjson as the default, but the Remix team was hesitant because superjson is such a heavy package. That's one of the reasons I wrote typedjson: I wanted the typing without the extra payload. Hopefully we can get them to adopt it — I think it makes things a lot easier, because you don't have to do the conversions yourself. When they say "type safety," they're basically saying: we'll tell you what useLoaderData actually returns, which is the JSON-serialized version of whatever payload you sent. But it's not type safety in the sense of "if I pass in a Date, I want a Date on the other side of the network." That's why I built this — especially when you're dealing with Prisma and you have Date fields all the time, I didn't want to do the conversion in the loader or at the component level. And you get full type-driven refactoring too.
So if I want to rename this to `today`... Yes, that's one of the things it does: it's saying `today` is the variable, but I aliased it as `date` here, so it doesn't rename the property. I think there's a way to change that behavior, but at least it knew the local variable was `today` — when you destructure, you can alias the actual property name, which is why it kept `date`. So I'll change these references to `today`.

Question: do you have a hydration issue with toLocaleDateString? My app is full of bugs about that — timezone mismatch from server to client. Should I render dates client-only? Yes, that's something I typically deal with. If I'm going to have a bunch of dates, there are a couple of options. What I usually do is use the `<time>` element — HTML actually has one — and put the UTC value in its `dateTime` attribute; I try to store everything as UTC. Then I render the text content as the local time. Or, if I don't care about the exact time and just want a rough sense — was it just now, or three hours ago? — I use one of those relative formatters, so instead of a specific datetime it says "five hours ago." Because "five hours ago" is the same on the server as on the client, you won't get the hydration mismatch. Part of the problem, though, is content layout shift: if you don't render the datetime on the server, it's basically blank, and depending on your layout, once it renders on the client the text may shift a little.
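The "five hours ago" approach mentioned above works because a relative time is a difference between two instants, so it's timezone-independent and renders identically on server and client. A minimal sketch (a real app might use `Intl.RelativeTimeFormat` or a library instead):

```typescript
// Relative time formatter: same output on server and client, so no
// hydration mismatch. `now` is injectable so the function is testable.
function timeAgo(then: Date, now: Date = new Date()): string {
  const seconds = Math.floor((now.getTime() - then.getTime()) / 1000);
  if (seconds < 60) return "just now";
  const minutes = Math.floor(seconds / 60);
  if (minutes < 60) return `${minutes} minute${minutes === 1 ? "" : "s"} ago`;
  const hours = Math.floor(minutes / 60);
  if (hours < 24) return `${hours} hour${hours === 1 ? "" : "s"} ago`;
  const days = Math.floor(hours / 24);
  return `${days} day${days === 1 ? "" : "s"} ago`;
}
```

In the component, this pairs naturally with the `<time>` element: put the UTC instant in `dateTime` and the relative string in the text content, e.g. `<time dateTime={utc}>{timeAgo(new Date(utc))}</time>`.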
If you use those kinds of client-rendered times, make sure they're either in a place where the shift won't be an issue, or set a width on the element so it reserves the space you expect up front and then fills in with the text. Hopefully that answers the question.

So again, I've been showing you some of the tips and techniques I use. I use my little remix re-export file, and typedjson gives me the ability to not have to reason about serialization: whatever data I return from a loader is exactly the data I get on the client. Granted, typedjson is limited in the types of values it supports, but I haven't needed anything beyond the typical ones.

All right, that concludes the error-handling portion, plus miscellaneous. The next one is actually going to be pretty simple, mainly because I created a sample on Code Sandbox a while back to answer somebody's question. I was planning on doing more advanced stuff here but ran out of time, so I just copied the sample. It has some commentary on it, and we'll go over it as well. Let me close everything and save. Okay — error logging done, on to multi-page forms. Like I said, this is my first time, so I'm hoping you're okay with the format. Here's the discussion: multi-page forms with Remix. We're going to show a sample demonstrating how to render and process multi-page forms — pretty straightforward. Rendering multi-page forms: as you know, there are many ways to do this. It all depends on your app and what your form is like. Some are complex, some are simple.
You want to collect some data without one giant, endlessly scrollable form, and there are different ways to do it. For example, you could have a single route with a single form and simply show or hide the inputs based on which page the user is currently on — which is the example I'm going to show, because I was going for the simplest thing possible. I'll also show you an example from an actual app I'm working on. It's not open source, but I can show a bit of it, and I'll probably create an open-source sample based on its concepts.

If you're doing everything on a single page with a single form, you always want to return the current page's data from the loader, because we're not using any local state. That's one of the nice things about Remix, especially with forms: no more of the bad old days where everything was useState and event.preventDefault(). Here we use loaders to manage state, and the state lives in the fields themselves — uncontrolled inputs, not controlled ones. That way, when the user navigates back, the data isn't lost.

Let me start the app so we can look at it while I talk. As you can see from my packages, I use Tailwind; I still use the Remix app server, so I generate the Tailwind CSS separately when running in watch mode. I also like to clean my build folders, so I have a prebuild that runs rimraf — a cross-platform version of rm -rf — to delete my build and public/build folders. That's why you see so much happening at startup. Looks like I didn't update my .env — that's all right, it still works. Let's launch the app. Okay, here's the multi-page form.
Now, these photo inputs don't actually do anything, but I'll show you file uploads in the app I'm building, which is pretty cool. So here's the multi-page form. That's funny — because I'm storing the data in the session, it still remembers my form data: as I step through, all of this is data I'd already filled in. Let me delete those cookies, since I'm using session-based storage (we'll talk about the different storage types shortly). Now the session is cleared and the form is empty. I can type in "web developer," click Next, and if I go back with Previous, that data still shows up.

Here's how it works. In my route, here's the loader. This doesn't have all the stuff I normally use, like my params helper, because I literally coded it on Code Sandbox — it's default out-of-the-box Remix. I get the page number, defaulting to 1 if it's not present, and I get the session — a standard cookie-based Remix session. As long as I'm not on the last page (page < 4), I read the current page's data out of the session: the key is "formData-page-" plus the page number, so each page gets its own key in the session object. If there's no value, I create an empty object, and then I return the page number and the data. In my layout component, I get the loader data — the page and the data — check the page number to decide whether the Previous and Next buttons should display, and then render the form: a single form at the top, and each page rendered conditionally — if page equals 1, render this block; if page equals 2, render that one.
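The per-page session keying just described can be sketched with two small helpers. The session is modeled here as a plain `Map`; in Remix it would be the object returned by `getSession()`, and the key naming follows the `formData-page-N` convention from the demo.

```typescript
// Each wizard page's fields are stored under their own session key, so
// navigating back re-renders the form with the previously entered values.
type PageData = Record<string, unknown>;

function getPageData(session: Map<string, PageData>, page: number): PageData {
  // Default to an empty object so a fresh page renders blank inputs
  return session.get(`formData-page-${page}`) ?? {};
}

function setPageData(session: Map<string, PageData>, page: number, data: PageData): void {
  session.set(`formData-page-${page}`, data);
}
```

The loader calls `getPageData` for the requested page; the action calls `setPageData` with whatever was just submitted before redirecting.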
So there's nothing fancy here: all I did was take a form that was three pages long and required a lot of scrolling, and chunk it up — still a single form. The Next and Previous buttons are standard submit buttons. But as we know, if you name a button and give it a value, clicking that button submits that value under that name. So even though both buttons share the same name, clicking Previous submits `action=previous`, and clicking Next submits `action=next`. Buttons always submit.

So when I hit Next — let's look — here's my POST payload: the page-1 fields (username, about, a file upload), and `action=next`, because that's the button I clicked. Now in my action — oh, this is really old; actually, I remember why I did it this way. I want to get the payload, and by default the raw payload looks like a query string (hopefully you can see it — oops, went a little too far). The reason I parsed it this way in this particular case is that I have a multi-checkbox list where you can check multiple items, and the body will contain `name=value1&name=value2`; typically you'd call formData.getAll() and it would return an array. I wasn't doing anything page-specific in this example — I wanted the action to be completely generic, so it wouldn't matter what any page was doing — so I use the qs package, whose parse function handles the multi-value fields directly. If I were using my params helper, it does that as well.
It converts duplicate names into an array value — I was just using qs here for speed. That's why; I was trying to remember. So in the action I parse the payload, extract `page` and `action`, and collect all the remaining fields into `data`. Then I get my session and set that page's data to everything else — username, about, file upload, all of it goes into the session value for page one. Then I check the action: if it's "next," I add one to the current page; if not, I subtract one to go to the previous page. And then I redirect to that page.

One important thing: any time you're dealing with sessions that use cookie storage, you must commit the session whenever you mutate it — you always have to return that cookie. And here's something people miss with session.flash: flash lets you set a value that is automatically removed the next time it's read. You typically use it for things like toast messages that show up after a submit. Remember, as I said before, you should always redirect after a successful action — but if you want to display a success message across that redirect, there's no way to render it directly, so you set it in flash, and on the resulting page you read from flash and render the message. However, even though reading from flash happens in a GET, it mutates the session object by removing that key — so you still have to commit the session and set the cookie there too.
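The page-navigation part of the action can be sketched as a pure function over the posted payload: pull out `page` and the clicked button's `action`, gather the remaining fields, and compute the redirect target. `URLSearchParams` stands in for the parsed form body (duplicate keys collapse to the last value in this sketch; the multi-value case needs `getAll`).

```typescript
// Given the posted payload, decide where the wizard goes next and which
// fields should be stashed in the session for the current page.
function handleWizardPost(payload: URLSearchParams): {
  page: number;
  fields: Record<string, string>;
  redirectTo: string;
} {
  const page = Number(payload.get("page") ?? "1");
  const action = payload.get("action"); // "next" or "previous", from the clicked button
  const fields: Record<string, string> = {};
  payload.forEach((value, key) => {
    if (key !== "page" && key !== "action") fields[key] = value;
  });
  const target = action === "next" ? page + 1 : page - 1;
  return { page, fields, redirectTo: `?page=${target}` };
}
```

In the real action you would store `fields` under the page's session key, then `redirect(redirectTo, { headers: { "Set-Cookie": await commitSession(session) } })`.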
So that's a gotcha some people don't realize, because you'd assume gets are non-mutating — but unfortunately the flash read is. Here, because I'm in an action and actually mutating the data, I commit my session and set the cookie.

Now I'm on the second page and do the same thing. And notice I don't have to fill in all the data. That's one of the nice things about these forms: sometimes the user doesn't know all the data right away and wants to skip ahead and come back to it later. If you require them to fill in everything up front, it's a bad user experience. So all these fields are optional while stepping through, and at the final stage — say it's an order form and they hit "pay now" — you require everything to be complete and give feedback about which data is still missing.

So I hit Next, and here's the payload for page two: the page number, first name, last name, email. All the fields are there, just with no text in them, and the action is still `next`. Now the session's page-two key has this data. And if I go back to the previous page — because my loader always returns whatever the current data is for that page — the form re-renders with the previous data filled in. Sorry, I don't think I went back all the way... there. This is where I had the multiple checkboxes: watch what happens when I check several and hit Next.
On page three there's a standard HTML checkbox group — multiple fields sharing the same name. Here are my contact checkboxes: comments, email, and so on. The value attribute is what's sent if the box is checked, and if several are checked, the same name is sent multiple times. What I want is for that to be treated as an array.

With that done, when I get to the final page — page four — I just collect all of the data from the session: I spread pages one, two, and three into a single data object and return it. On that final page I simply render it as JSON. As you can see, all the collected values show up, and the email preferences come through as an array, as requested.

Any questions there? Again — there are probably as many ways to do forms as there are web pages; it really depends on your use case and the UX you want to provide. About 15 more minutes, then we'll take a break. Let me go back to my notes.

So that was the simple version: you show the current page, and pages that aren't current are hidden — their inputs aren't rendered, so those values disappear from the browser. That's why you have to return the previous pages' data from the loader: nothing in the DOM is holding it. You don't need tricks like hidden inputs to carry data for sections that aren't shown, and you're not using React useState to manage it — everything is stored on the server and returned as needed. That's one of the reasons I like Remix: it removes a lot of the boilerplate you used to need for any form-based application.
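The checkbox-to-array behavior described above — duplicate names collapsing into an array, which qs and the params helper both do — can be reproduced with nothing but `URLSearchParams`, as a rough sketch:

```typescript
// Collapse duplicate form field names into arrays, leaving single-value
// fields as plain strings (mirroring what qs.parse produces for
// repeated keys like email=comments&email=newsletter).
function collectFields(payload: URLSearchParams): Record<string, string | string[]> {
  const grouped: Record<string, string[]> = {};
  payload.forEach((value, key) => {
    if (!grouped[key]) grouped[key] = [];
    grouped[key].push(value);
  });
  const result: Record<string, string | string[]> = {};
  for (const key of Object.keys(grouped)) {
    result[key] = grouped[key].length > 1 ? grouped[key] : grouped[key][0];
  }
  return result;
}
```

So `email=comments&email=newsletter&page=3` yields `email` as an array and `page` as a string, which is exactly the shape you want to stash in the session for a checkbox page.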
You do the simplest thing, and the simplest thing turns out to be the correct thing — and it's definitely much easier to reason about.

The other option, especially if you have a really complex form, is that instead of having everything in one big route, you create separate routes: a top-level parent route, with each page as a child route rendered in the parent's Outlet. That breaks the pages into separate files, and users can even bookmark them. The nice thing is that the storage and processing stay pretty much the same, but — remember how generic the loader and the action were? — by separating pages into their own routes or components, each one can have its own logic.

Yes, I'm going to get to that next, and then I'll show you the demo of my multi-section form. I had a really complex route — about 1,200 lines, from all the copying and pasting while I was prototyping. When I did the refactor, I used remix-flat-routes with its co-location feature: it was a multi-section form with collapsible panels, and each form section became its own independent component, co-located in the route folder. So I can import them without putting them in some shared app/components folder — I'll show you that in a bit. But yeah, good question: cookie storage has a size and payload limit — what other storage options are available? That's the next slide: managing data for multi-page forms. There are several ways to store the data as the user navigates, and that's the first thing you want to get right.
You always want to store the data the user submits. For one thing, it lets them come back later — whether that's Previous/Next navigation or returning at a future time. Maybe they weren't able to complete the form; giving them a way to pick up where they left off makes for good UX.

You can store each page in the session like we did — and sessions don't have to use cookie storage. All sessions are cookie-based at some level, but typically the cookie holds just a session ID that you use to access external storage: file-based, a database, Redis, any of those. The cookie stores the session ID, not the actual contents of the session. In our case, though, we were using cookie session storage, so the data itself was stored in the cookie. In fact, if I look at the request — I keep forgetting the command... okay, there it is — you can see the session cookie is already pretty big. As the form gets larger, the cookie value gets larger, and you may exceed the limit. Also, remember that the cookie is sent with every request — even requests for CSS, JavaScript, or images, not just meaningful route navigations. So I don't typically recommend cookie-based sessions unless your sessions stay relatively small: things like the theme, or the current user ID — the authentication-type cookies. Now, hopefully I can remember how to... okay, there it goes. Let me close that and go back to the doc.
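Since cookie sessions ship the whole payload on every request and browsers cap a single cookie at roughly 4 KB, it can be worth sanity-checking the size before committing. This is a rough estimate only — real cookie sessions add signing and encoding overhead beyond the base64 inflation modeled here:

```typescript
// Rough size check for cookie-based session data. Base64 inflates the
// JSON by about 4/3; the signature a framework appends is ignored here.
function estimateCookieBytes(sessionData: Record<string, unknown>): number {
  const json = JSON.stringify(sessionData);
  return Math.ceil((json.length * 4) / 3);
}

function fitsInCookie(sessionData: Record<string, unknown>, limit = 4096): boolean {
  return estimateCookieBytes(sessionData) <= limit;
}
```

A small auth-style session (user ID, theme) passes easily; a multi-page form with long text fields can blow past the limit, which is the point the transcript makes about preferring server-side session storage for form data.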
All right, so we can store the data in the session, store it in the database, or use local storage in the browser.

Storing in the session is the simplest. As the user navigates, the submitted form data is added to the session by page number; when the final page is submitted, you have all the data you need to persist to your database — as we saw on page four. That's also the point where you validate: did they fill in all the required fields? If not, link back to the pages that need attention. Pros: it's simple, and the data persists as long as the session exists. As you saw, I filled that form out last night, came back, and because it was stored in my cookie, all those fields still persisted. Cons: with cookie storage the session can get big depending on the data; if the session expires before they finalize, they lose the data; and they can't continue from a different browser. If they start on their phone and then go to their computer, the data is tied to the first browser's session, so they have to start over.

The second option is to store the data in the database. The big win is durability: you don't have to worry about the data going away if the session expires. And because it's a centralized store, the user can come back and pick up where they left off regardless of which device they're on. One potential issue, depending on your data model, is required fields and relations.
If your model has required fields and you try to save partial data, the database will reject it — "that field can't be null; it has to have a value." So either you make all the fields optional, which is obviously not ideal, or you create a separate data model just for this transitional data. It can get a little tricky, so a simple approach is to store it like a session: a key-value store, with the blob of JSON as the value. Only when the user reaches the final page do you save into the actual data models. Sometimes you have real relations — you have to create the project before you can add team members to it. If the project hasn't been created yet and the user is on a page entering team members, you can't save them, because the project ID they'd reference doesn't exist yet. So a lot of the time you save everything into a key-value store and build the final models at the end.

The final option is storing the data locally in the browser. Yes, you can use localStorage or IndexedDB, but I don't recommend that as a primary option. It would have to be a very specific use case, like needing to support the user being offline. One of the issues is that once the server is no longer directly involved in storing the data, you have to revert to the old method of maintaining local state, and that adds a lot of additional code.
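The key-value draft approach described above can be sketched with an in-memory store: partial page data is merged into one blob per draft, and only the finalize step hands the data off to the real models. An in-memory `Map` stands in for the database table here — with Prisma this would plausibly be an upsert on a dedicated draft model, but that schema is an assumption, not something from the transcript's codebase.

```typescript
// Draft storage for multi-page forms: each page merges its fields into
// a single JSON blob keyed by draft ID. Required-field validation and
// relations are deferred until finalize, when real models are written.
class DraftStore {
  private drafts = new Map<string, Record<string, unknown>>();

  savePage(draftId: string, pageData: Record<string, unknown>): void {
    const existing = this.drafts.get(draftId) ?? {};
    this.drafts.set(draftId, { ...existing, ...pageData }); // shallow merge per page
  }

  finalize(draftId: string): Record<string, unknown> | undefined {
    const data = this.drafts.get(draftId);
    if (data !== undefined) this.drafts.delete(draftId); // draft discarded once persisted
    return data;
  }
}
```

Because the blob has no schema, pages can be saved in any order with any subset of fields, which sidesteps the required-field and missing-relation problems until the one place they actually matter.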
So again, I would only do that in specific use cases. In my example, I have an option to record video, and I don't want the user to record ten minutes of video and then find out, when they go to save it, that something's broken. So as they record, each video blob gets stored in IndexedDB, and when they stop recording, I bundle up all those blobs and send them to the server. Ultimately you want to do what's best for the user and enhance their experience — and losing their data is about the worst thing that can happen to a user — so anything that mitigates that is always worth doing.

All right, let me show you the actual project I'm working on. I'm not sure how well you can see this, but it uses remix-flat-routes, so this is my routing structure — I can see at a glance what all my routes look like. This is the one with the message form. I actually split my route into two separate files: the main route file, and a route.server file where I keep all my server-only code. The nice thing is that when I'm using Zod schemas, since they only live in the server code, I don't have to worry about the Zod package — which is almost 20 KB — ending up in the client bundle. Let me see if I can run it... this one doesn't auto-pick a port, so let me kill port 3000. All right — and don't look too closely at that, because it's not actually live yet. Okay, here it is: my collapsible form sections. I can click... something isn't working, figures. Okay, let me zoom in: this is where I can drag pictures in.
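The buffer-then-upload pattern for the video recorder can be sketched as a chunk accumulator. In the real app the chunks go into IndexedDB so a crash can't lose the recording; this in-memory version just illustrates the accumulate-and-flush shape:

```typescript
// Accumulate recorded chunks as they arrive, then concatenate them into
// one payload when recording stops. In a browser the chunks would be
// persisted to IndexedDB between push() calls rather than held in memory.
class ChunkBuffer {
  private chunks: Uint8Array[] = [];

  push(chunk: Uint8Array): void {
    this.chunks.push(chunk);
  }

  flush(): Uint8Array {
    const total = this.chunks.reduce((n, c) => n + c.length, 0);
    const out = new Uint8Array(total);
    let offset = 0;
    for (const c of this.chunks) {
      out.set(c, offset);
      offset += c.length;
    }
    this.chunks = []; // buffer is empty after a flush
    return out;
  }
}
```

The flushed payload is what gets handed to the upload step once the user stops recording.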
Okay, so I can drag pictures in — and of course it's not working right now — but drag-and-drop goes out to Cloudflare Images. I have an API call that fetches a private, secure URL to upload directly to. I fetch that from my Remix server, and once it comes back with the URL, the browser uploads directly to Cloudflare — instead of the browser uploading to my server and my server turning around and uploading to Cloudflare. Also, because of the API keys involved, the browser can't upload to Cloudflare on its own, so there's an indirection: I first request, from their API, a URL that's specific to that upload. Yes — signed URLs, basically. Thank you.

I do that for the photos — not sure why it's broken right now — and I also have the ability to record audio and video. Video is the case I mentioned: it stores the data in IndexedDB, then does the same thing — gets the signed URL and uploads to Cloudflare Stream, which is pretty cool. Cloudflare Images, if you haven't used it, is like Cloudinary and some of the other third-party services: it regenerates the image for whatever the user's device needs. For images, you can specify things like cropping and widths in the URL, and it determines which formats the device can handle and automatically generates images for it. Same with video: you upload the raw video, it does all the conversions to the formats a device supports, and depending on the device's capabilities you get a higher or lower bitrate version.
So yeah, I'm not trying to sell anybody on Cloudflare, but they do have some great services and they're relatively easy to use. On the video panel here... this is that... oh, attachment. Okay, here it is. The attachment uploader is where I fetch — see, this is one of the things people don't realize: everything doesn't have to be Remix. You can still fall back to other approaches. useFetcher is great for the things it's good at, but it doesn't support async/await. If you want to fetch some data and then handle the response in that same event handler, a fetcher doesn't give you that, because you have to wait for the transition to happen — you'd need a useEffect that fires when the fetcher's state is done, and so on. But here I just want to get the URL from my API. This is still a standard Remix resource route — api/upload-url. There it is, upload-url. It's a standard loader with no component, and it calls Cloudflare based on what type of media I'm uploading, does the fetch, and just returns the response JSON. So on the client I just await the response and get the data — it's basically making an RPC call. You can still use standard fetch in a Remix app. So I get the JSON, get the upload URL, and — this particular one is using Dropzone — for every file that gets dropped, it calls this to get all the information it needs. One of the requirements is the actual URL to upload to, and then it goes and uploads there.
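The resource-route pattern described above could look roughly like this. This is a hedged sketch, not the workshop's code: the route path, env var names, and the exact Cloudflare endpoint paths here are assumptions for illustration.

```typescript
// Sketch of a loader-only resource route that fetches a one-time
// direct-upload URL server-side and passes it to the browser.

export function uploadEndpointFor(kind: "image" | "video", accountId: string): string {
  // Choose the (assumed) Cloudflare direct-upload endpoint by media type.
  return kind === "video"
    ? `https://api.cloudflare.com/client/v4/accounts/${accountId}/stream/direct_upload`
    : `https://api.cloudflare.com/client/v4/accounts/${accountId}/images/v2/direct_upload`;
}

// Resource route: no component exported, just a loader returning JSON.
export async function loader({ request }: { request: Request }): Promise<Response> {
  const kind =
    new URL(request.url).searchParams.get("kind") === "video" ? "video" : "image";
  const res = await fetch(uploadEndpointFor(kind, process.env.CF_ACCOUNT_ID ?? ""), {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.CF_API_TOKEN ?? ""}` },
  });
  // Pass the signed upload URL straight through to the browser.
  return new Response(JSON.stringify(await res.json()), {
    headers: { "Content-Type": "application/json" },
  });
}
```

On the client you'd `await fetch("/api/upload-url?kind=image")` and read the JSON in the same event handler — the RPC-style call the transcript describes.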
And what's cool is that it uses XMLHttpRequest to do the upload, because fetch doesn't provide progress events but XHR does. That's why — it didn't work in my demo — you'd normally see the little progress meter as it's uploading, which is kind of cool. Once the images are finally uploaded, I get a status change, and when the status is done, that's where I use a standard fetcher: I create a new FormData, set the image URL I got back, specify my add-attachment action, and submit. Then in my action I just check whether it's an add-attachment request, and if so, process it. So FormData is great when you're dealing with form-related data, but a standard fetch works great when you just need to make a remote call and get results back so you can continue processing. That's my takeaway: you don't always have to do everything the Remix way. Let's close that — I think that was it for this slide. So, welcome back, everyone. We're going to be talking about multi-tenant Remix apps using Prisma — it's a mouthful. Slide number one. So what do we mean by multi-tenant? A tenant is a group or organization whose data — and commonly users — are separated from other tenants within the same application. Say you have a SaaS application: a company signs up and can manage its own users, but that company has its own separate data. Data from company A will not be seen — or mutated — by company B. There are typically two parts to a multi-tenant application: one, determining which tenant the user is making the request for; and two, getting the correct data for that tenant and ensuring no data from another tenant is returned or mutated. So, determining the tenant.
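The "submit a FormData with an action discriminator, branch on it in the action" flow described above can be sketched like this. Field names and the `_action` convention here are hypothetical illustrations, not the workshop's exact code.

```typescript
// Sketch of the FormData + action-discriminator pattern: the client
// builds a FormData for fetcher.submit, and the server action branches
// on the `_action` field.

export function buildAttachmentForm(imageUrl: string): FormData {
  const form = new FormData();
  form.set("_action", "add-attachment");
  form.set("imageUrl", imageUrl);
  return form;
}

export function handleAction(formData: FormData): string {
  switch (formData.get("_action")) {
    case "add-attachment":
      // Persist the attachment URL (stubbed here for illustration).
      return `added:${formData.get("imageUrl")}`;
    default:
      throw new Error(`Unknown action: ${String(formData.get("_action"))}`);
  }
}
```

In the real route, `handleAction` would be the body of the Remix `action`, and the client would call `fetcher.submit(buildAttachmentForm(url), { method: "post" })`.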
There are many ways to do this. In our test app, we'll use the Host header to determine the tenant — for example, tenant1.remix.local will be tenant1. I'll show you how we onboard a tenant shortly, but again, there are many ways to do that part, and it's not what we're focusing on here. We're focusing on the back end: once we know the tenant, how do we access the data and make sure it's the correct data for that specific tenant? Typically in a multi-tenant database you'd have a single database where all the tenant data is stored, each model would have a tenantId column, and all queries would filter by it. That's a very common approach. So if you have a projects model, it has a tenantId, and whenever you query you do projects where tenantId equals tenantA — and you have to make sure you do that every single time. Although simple and workable, it's also dangerous, because it's easy to accidentally expose one tenant's data to another since everything is literally in the same database, in the same tables, and you always have to remember the filter. Usually you'd have some middleware in your framework that ensures the tenantId is always included, and there are plenty of multi-tenant libraries that hide the tenantId filtering for you. However, that precludes making direct queries or using other database tools that aren't tenant-aware: if you pull up a SQL query tool, you'd have to remember the where tenantId equals yourself, and any other tooling that touches the database would have to know to do that filtering too. In fact, at the company where I used to work, we did work for the pharmaceutical industry.
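To make the tenantId-column approach concrete, here's a hedged sketch of the kind of helper such middleware would provide. `Where` is a simplified stand-in for Prisma's generated filter types, and the model names are hypothetical.

```typescript
// Sketch of the single-database, tenantId-column approach: every query
// must carry the tenant filter, so a helper merges it in unconditionally.

type Where = Record<string, unknown>;

export function withTenant(tenantId: string, where: Where = {}): Where {
  // Spread the caller's filter first, then the tenant filter, so the
  // tenant constraint can never be overridden by the caller.
  return { ...where, tenantId };
}

// Hypothetical usage with a Prisma-style client:
// db.project.findMany({ where: withTenant("tenantA", { archived: false }) });
```

This is exactly the fragility the transcript warns about: the safety only holds if every call site (and every external tool) remembers to go through the helper.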
And that was a big issue: mixing data from one pharmaceutical company with another was a big no-no — you'd definitely lose your job, and there were regulations on top of that. So we actually had separate databases for each client. In fact, some clients, like Pfizer, wanted their data on a totally separate machine; they didn't even want their data commingling with a competitor's. This was pre-cloud, when everything was on-premise, so we had a whole server room with a bunch of different servers for different clients. So in this example, I did not want to do it the tenantId way. It's simple, but I think it's dangerous and has too many drawbacks. I took a different route: instead of filtering the data, I isolate it, so it's impossible to query data for the wrong tenant. In this particular case, we'll use the multi-schema support provided by Postgres. A schema is basically a namespace: all your models — your tables and so on — are assigned to a specific schema, and you can't access that schema without explicitly specifying it. Prisma doesn't support this directly, though, so I'll show you how I worked around that. Each tenant will have its own schema, and Postgres will ensure you can't access other tenants' data. All right, let's go to number two: the Prisma schema. A typical Prisma app consists of a single schema file with all the models defined in it, in a prisma folder. Here, as you'll see, there are actually two separate folders, because we're creating two separate sets of schemas — we'll talk about that shortly. Prisma does have multi-schema support, but that's used to group Prisma models together: you still have a single database and a single set of tables for each model.
So that's not what I mean by multi-schema here. Prisma's feature is for grouping — say you have a big app with an accounting section, so you put those models in an accounting schema, and your HR models in an HR schema. They're all in the same database, just separated by schema. What we're doing instead is having the same set of models, but each tenant gets its own separate schema so they can't interact with each other. So here's how we'll be doing it. As stated before, Postgres lets you duplicate your models for each tenant, namespaced by the schema name. We have a public schema that contains the tenant model. Let me pull it up — I actually have it here live. All right. This is a tool called Beekeeper, and I'm connected to my Postgres database, which is running in Docker. I have a public schema, which has the tenant model — all it has is a name, a host, and so forth. The host is the prefix for the domain. Then I generate a separate schema for each tenant: tenant1, tenant2, tenant3. As you can see, each of them has the same tables; they're just separated. Because of that, you cannot access tenant2's data from tenant1 — it's physically separated — and vice versa; same with tenant3. In fact, if we go in here and look at the notes: here's a note that was created in tenant1, and if I double-click here, these are notes in tenant2. Even though they have the same table names, they're physically separated, and the only way to access them is by specifying which schema you want from within Prisma — and I'll show you how we do that. Okay, that was that screenshot. Okay, slides. All right.
In order to support this model, we first define a public schema. So back in the project, we have separate folders, each with a schema.prisma, exactly like you'd typically have. Oh, speaking of Prisma — does everybody here pretty much know how Prisma works and has used it? Sorry, I probably should have led with that. Prisma is an ORM for accessing relational databases from TypeScript. So here we have a tenant model that matches the table structure we just saw: id, name, host. One of the nice things about Prisma is that it lets you specify the schema file, so we can generate and run migrations by saying which schema file we're using — the public one or the tenant one. Here's our tenant schema, the one with users, notes, and so on; I basically based the models on the Blues Stack. For setup, I have a docker-compose file — here's my multi-tenant Prisma container running in Docker — and then we run the setup script, which sets up your environment variables by copying the .env.example into place. Now, this is cool: I have an extension called Cloak so that if I'm sharing my screen, I don't accidentally share my secrets. Unlike the one I believe is built into VS Code, this one actually changes the theme: the TextMate scopes are set so the text is the same color as the background. Because one of the issues we saw was with Kent — he was using the other approach, and he accidentally showed his environment file on screen.
It was visible for just a single frame of his YouTube video, before the tool had a chance to obfuscate the keys, and he ended up having to shut down his stream and rotate all those keys. Cloak avoids that because it changes the colors for that file type up front, so you never get that flash of unprotected content. The actual URL looks like this — it's basically a standard connection string: PostgreSQL, username, and the database we want. The way we're able to access the per-tenant schemas is by specifying the schema in the connection string. That's the easiest way to select it, because Prisma doesn't let you change how it actually queries the database. Normally, in straight SQL, you'd write select * from schema.tablename — if you've ever seen an error like "cannot find main.user", that's it looking for a specific schema. So for the initial setup, we migrate our public schema. I have a migrate script here: you pass in the schema and the tenant you want — the schema being the schema file, and the tenant ending up as the schema name in the database. It sources the .env file, rewrites the DATABASE_URL to pass the schema in, and then runs prisma migrate. It's the same command you'd normally use with a single schema, but because I'm changing the connection string and choosing the schema file, that's where the magic happens. So let's go back to our setup.
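The connection-string trick described above is small enough to show directly. This is a minimal sketch, assuming the standard Prisma convention of a `schema` query parameter on a Postgres `DATABASE_URL`:

```typescript
// Sketch of the trick described above: Prisma selects a Postgres schema
// via the `schema` query parameter on the connection string, so
// switching tenants is just rewriting the URL.

export function urlWithSchema(databaseUrl: string, schema: string): string {
  const url = new URL(databaseUrl);
  // Set (or replace) the schema parameter for the target tenant.
  url.searchParams.set("schema", schema);
  return url.toString();
}

// e.g. urlWithSchema("postgresql://u:p@localhost:5432/app", "tenant1")
// yields a URL carrying "?schema=tenant1".
```

The migrate script in the transcript does essentially this before invoking `prisma migrate`, which is why the same command works unchanged for every tenant.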
And then finally, I generate the client. Remember, the client is all the TypeScript stuff, and because we have two different schemas, we need two different clients. My prisma generate runs for a particular schema, and I specify the output folder. Unfortunately, you can't specify the output folder on the command line itself; you have to do it inside the schema file. So in my generator client block, I specify that the client for public should go in its own prisma-client-public folder. If you look in node_modules under .prisma/client, that's where all the generated Prisma code typically lives, but here we want two separate ones. So the public one has all the public stuff — the only thing we really care about is the TypeScript definitions, but you still get all the rest — and the tenant one has its own TypeScript definitions. By specifying which folder to output to, we get separate tenant client files. All right. And then, yeah, there's a bunch of indirection in Prisma: it also creates an alias package, @prisma/client, and all it does is re-export the generated folder. What I did, because I needed to keep track of the separate schemas, was create a prisma folder in app. I have one module for public that just re-exports the generated public client, and a tenant one that re-exports — well, that should be an @ sign — the generated tenant client. It's the same idea, just an indirection: I'm not directly accessing the generated Prisma code anywhere in my app; this handles it for me. Okay, and we'll have it running in one second. We're on four now. Okay. So now that we've got that, let's go ahead and run it. Okay.
So here I just created a simple index route for onboarding a new tenant — a simple form asking for the tenant name and host. When you submit, I first check... I didn't create a separate onboarding app; it's the same app, so I have to do a little bit of trickery, especially in the root route. I check whether I have a user — users are tied to tenants, so you can be logged in under one tenant but not under another. Then for the tenant ID, it looks for the tenant in — actually, I don't want this to be localhost; I want everything on remix.local. Okay. So here: if I don't have a user, but I do have a tenant ID and I'm on the root, it redirects me to the login, and so forth. The loader just has a couple of checks to make this work, and the onboarding route doesn't even require a user, because it's a demo. So for onboarding, I first check whether that tenant already exists. Let's say I use my test tenant and enter tenant1 and hit Add Tenant — it says, hey, it already exists, because that tenant is already in the database. If it doesn't exist, it adds the tenant, and this is where the model stuff comes into play. When I call getTenantById here — notice: typically you'd import prisma from some back-end module that actually does the connection, and you'd call prisma.tenant and then whatever method you need. But here we're creating two separate Prisma clients. So let's go to our db.server. All this global stuff is only there because of how Remix reloads your modules in development when your routes change.
So ignore that part. The main thing is that we import the Prisma client from each of the specific packages, aliased as public and tenant. First we initialize the public Prisma client: we make sure the DATABASE_URL is set with the public schema, initialize the client, connect, and return it. That's the public client. Then I create a Map for tenant clients. There will be a separate client for each tenant, initialized on first access. When we access a tenant's client, we call the same initializer, but instead of empty parameters we pass in the tenant ID. That modifies the DATABASE_URL, setting the schema for that tenant, and then initializes the client just like before. This function just checks whether I already have that client connected and returns it if so. Back to our tenant: because the tenant table lives in the public schema, we call the public client and do the tenant findUnique there. Let's go back to our route. Once a new tenant is added — here's the table that manages that — it calls provisionTenant. All provisionTenant does is run a script to do a migration for that new tenant schema: the migrate dev script we saw earlier. It gets the tenant, creates a new connection, and if the schema doesn't exist, it gets created automatically as part of the migration; then it runs migrate dev with that schema. So let's go ahead and do that — we're going to create tenant4. Change this to four.
And if you look down in the terminal, it says the provisioning is starting — so I click Add Tenant and it's provisioning the tenant. It does take a minute, because it's actually running the migration scripts, just like you would if you ran the migration from your command line. And again, this is something you'd do from an administration tool. So here: successfully created. And if I go back to Beekeeper and refresh, you can see tenant4 was created, and if I refresh my schemas, you'll see the tenant4 schema was created, and it has its own separate tables, all empty. There's no user yet, so now I have to log into the tenant. If I hit that — and look up at the URL, it's now tenant4.remix.local — it knows I'm in the tenant4 database, essentially. So, back up. Let's try it: if I try to log in as my user from a previous tenant, I get invalid email or password, because that user does not exist in tenant4. So now I can sign up. I'm still on tenant4 — it knows it's tenant4 I'm joining — so let me move this microphone out of my way and create an account. Okay, now I'm in tenant4's notes, and if I look at tenant4 in Beekeeper again and go to the user table, there's my new account that just got created. So how does that work? If we go into the join route: we require a tenant ID. When the loader runs, it first gets the tenant ID — it pulls the tenant from the URL by splitting the host name, and if there are at least three segments, I grab the correct one. Then requireTenantId just makes sure the tenant ID has a value.
If there are only two parts, like remix.local, it would be undefined, and that throws the 404 error. So coming back here: as long as I have a valid tenant, I get the user ID for that tenant to see if there's a user. If I do have a user, that means I've already joined, so it redirects me; otherwise it just returns. Again, most of this is the Blues Stack — that's what I was trying to say — so I didn't have to change much; I just added the tenant support. Then when I actually do a registration, you can see it calls getUserByEmail. I had to modify these methods to always take a tenant ID: when you go into getUserByEmail, the prisma function you import from db.server now takes the tenant ID, and with that it updates the connection string to point to that schema and returns you the tenant client. So it knows that the user and note models live in the tenant schema — whereas with the public client you wouldn't have users or notes, because those models don't exist in the public schema. By doing this, this is the only way you can access that particular tenant; everything after that is always directed to that schema in the database. You don't have to worry about adding any kind of where filters — you're always connected directly to that schema. So let's add a new note: "Hello from tenant four, testing one, two, three." I hit save, and if you go into my notes, you'll see my new note — and it's only in tenant4. Deletes work the same. Let me create another note... save... and refresh over here so you can see it. Again, we have the other note.
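The host-splitting logic described above is simple enough to sketch directly. A minimal version, assuming the "first label is the tenant" convention (`tenant1.remix.local` → `tenant1`) and Remix's convention of throwing a `Response` for error boundaries:

```typescript
// Sketch of host-based tenant resolution: the first hostname label is
// the tenant, and hosts without a tenant prefix get a 404.

export function getTenantId(host: string): string | undefined {
  // Strip any port, then require at least three dot-separated labels.
  const parts = host.split(":")[0].split(".");
  return parts.length >= 3 ? parts[0] : undefined;
}

export function requireTenantId(host: string): string {
  const tenantId = getTenantId(host);
  if (!tenantId) {
    // Thrown Responses are caught by Remix's CatchBoundary.
    throw new Response("Not Found", { status: 404 });
  }
  return tenantId;
}
```

A loader would call `requireTenantId(request.headers.get("host") ?? "")` before touching any tenant data.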
And if I delete it and refresh, that note's gone. So this ensures that we're only able to access the data in the specific tenant. And here's what's interesting. Let me select a note — you can see in the URL that it has a note ID. Okay. I'll copy that and log out. Now if I go into tenant1 and paste — even though I'm in tenant1, and I was signed into tenant1, that ID only exists in tenant4. See: VRSD8, VRSD8. Because I'm in tenant1, it says note not found. So there's no way to accidentally access data in another tenant just because you happen to have an ID and be logged into some tenant. Okay, next one. I probably skipped through some slides here. Yeah — using Prisma: this talks about how the public and tenant client versions work, and this talks about how to call your functions, passing in the tenant ID to create users. I wrote all this last night to make sure I covered everything, and also so anybody who wants to review it — even if they missed the discussion — can at least follow along. And then the final one: migrations. Every app — you release your first version, you ship your first feature, and you're always going to have some kind of migration: add a new model, add a new field, what have you. Well, now you have all these tenant schemas out there, and you have to make sure existing tenants also get the new tables. Otherwise you'll have tenants in various states, and with a single application, things will break. So I have another script — scripts are good; they automate stuff so you don't make mistakes. I should probably kill that. Okay.
What this script does is migrate all the schemas. So let's go to our tenant schema. We'll go into the note model and add the field I tested this with: isFavorite, a Boolean defaulting to false. I was also going to demo a failed migration, but if I do, my demo will probably stop working, so I'll leave that as an exercise for the viewer. So I've changed my schema, and now I want to migrate — this is where I run prisma migrate-all. Before I do that, let me show you the script, because the first thing it needs is the list of all the tenants that exist. This one is actually a TypeScript file, and what's cool is that you can still reference code in your app folder even though the script lives outside it. Here I'm importing the public client by going up the folder hierarchy to my app/prisma db.server — the same import I'd use inside my Remix app, I can use from an external script. It's an async function that uses the public client to call tenant.findMany, which returns all the tenants in the current database. I only care about the ID — findMany gives me objects with an id property, but I just want the strings — so I map that out and output the tenant IDs. In fact, before running the migration, let's run this so you can see what it looks like: ts-node, and then prisma/get-tenants. It connects to the database and returns the four tenants. All right, now let's run the real script: prisma migrate-all. Okay, so we're going to migrate all.
So it connects, gets the tenants, and then loops through each tenant and runs the migration. The first time through — because it's generating the migration in the tenant folder's migrations — it asks me for a migration name, so I'll say add-is-favorite. It does that, and it has created the add-is-favorite migration; the first run generates the migration itself. Then for every subsequent tenant, it reruns the same migrate script, but this time for a different schema, syncing it up — making sure those schemas are up to date. So regardless of how far behind a particular tenant schema is, once you run the migration, it always catches up to the latest migrations in the database. In the migrate dev script, I told it to skip client generation, because I didn't need it to regenerate for every single tenant; at the end, I regenerate the tenant client once. Sometimes VS Code doesn't pick up the new TypeScript types, so just reload the window and you can go back in. Let's go to our notes model — where's getNote — okay, so we have a select, and I believe we can now say isFavorite... yeah, see, isFavorite is now part of our types, so our model has been updated. And back in Beekeeper, if I refresh tenant4, the note table now has an isFavorite column — same with tenant1. So our developer experience hasn't really changed much; the only thing we need is a script that runs those migrations across all the different tenants. For production, the technique for migrating the production server will be similar to dev — and again, it was late, so I'm going to leave that as an exercise for you.
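The migrate-all loop described above can be sketched like this. This is a hedged sketch, not the workshop's script: the CLI invocation in the comment is illustrative, and the testable parts are the ID extraction and the per-tenant environment the loop would hand to `prisma migrate`.

```typescript
// Sketch of the migrate-all loop: list tenants from the public schema,
// then run the same migrate command once per tenant with that tenant's
// schema baked into DATABASE_URL.

export function toTenantIds(tenants: { id: string }[]): string[] {
  // findMany returns objects; we only need the id strings.
  return tenants.map((t) => t.id);
}

export function buildMigrateEnv(baseUrl: string, tenantId: string): Record<string, string> {
  const url = new URL(baseUrl);
  url.searchParams.set("schema", tenantId);
  return { DATABASE_URL: url.toString() };
}

// Illustrative driver (would shell out to the Prisma CLI per tenant):
// for (const id of toTenantIds(await publicDb.tenant.findMany())) {
//   execSync("npx prisma migrate dev --schema prisma/tenant/schema.prisma", {
//     env: { ...process.env, ...buildMigrateEnv(process.env.DATABASE_URL!, id) },
//   });
// }
```

For production, the same loop would run `prisma migrate deploy` instead of `migrate dev`, as the transcript notes.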
But I think it's pretty self-explanatory. The only difference is that instead of calling migrate dev, you call migrate deploy, and it runs through all the migrations for you. You'd still call the get-tenants script to get a list of all the existing tenants on the production side and do it that way. I think that was the last slide. Yeah. So, thank you — hey, three down. All right, I hope that was interesting. It was kind of a challenge; I was curious how it would work with Prisma, because you don't have direct control over the SQL generation. Once I found that someone else had created a sample that did the connection-string change, that was my aha moment on how to do it. And this would also work if you wanted separate databases instead of a single database with separate schemas: you'd just change the database name in the connection string. So instead of /postgres, it would be /tenant1 or /public or whatever — you just change the database name, and because Prisma will always create the database (or the schema) during migration if it doesn't exist, it works the same way. Rafael asks: could it work with the Prisma preview feature multiSchema? My understanding, based on a cursory search through their GitHub issues, is that multiSchema is for taking a single database and splitting it into schemas for application-level grouping — a schema for accounting, a schema for HR, and so on. Each schema in your Prisma schema file would still hold different models; it's not taking a single Prisma schema file and generating multiple database schemas from it.
That was one of the reasons I built this: I couldn't figure out another way for Prisma to support it. I know people were asking for this and were confused, because multiSchema works a different way, and Prisma is still discussing whether to support this pattern — a single schema file mapped to multiple Postgres schemas. And this doesn't only work for Postgres, either. If you want to use SQLite with a database per tenant, you'd do the same thing in your database URL: you'd just specify a different file for each tenant. So hopefully that made sense, including for those who missed the workshop live. I know Sabin, who works for Prisma, said he was interested in this, so hopefully I'll get a chance to show him, or he can watch the recording. All right, we've got about 25 minutes left, so if there are no other questions on the Prisma stuff, I'll go ahead and talk about how I deal with problems in Remix when they come up — or when there's some feature I want to add that may not necessarily be something the core team wants. Let's go to the slides — oops, I should be doing it over here. Okay. So this talk is about patching Remix to fix bugs or add features, and the techniques I use to update Remix whenever I get stuck — or when I want to add something like handleError, for example. If it's not something I can do as a library, external to Remix, and I actually have to touch the code, this is the process I use. Okay. So: unblocking yourself. This is what motivates us as developers: we're building an application, we're using third-party code, and either there's an error, or some feature is missing, or we don't like the way it works. We still have to build.
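The SQLite variant mentioned above is just a different shape of connection string. A tiny sketch, assuming a hypothetical `./data` directory layout and Prisma's `file:` URL convention for its SQLite connector:

```typescript
// Sketch of per-tenant SQLite: instead of a Postgres schema per tenant,
// point each tenant at its own database file.

export function sqliteUrlFor(tenantId: string): string {
  // Prisma's SQLite connector takes file: URLs; the directory is assumed.
  return `file:./data/${tenantId}.db`;
}
```

The rest of the approach — the tenant client map, the migrate-all loop — stays the same; only the URL-building changes.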
Ultimately, we are still responsible for building our own app. It may seem cathartic to create a scathing GitHub issue about how terrible the package is and berate the maintainers into dropping everything to fix your bug or add your feature right now. And you see that very often as an open source maintainer: "this sucks, why isn't it working?" Dude, you're getting it for free. Calm down. I'm a firm believer that you're responsible for your app, period. Don't blame other people; if it doesn't work, you've got to figure it out. So I treat any open source software as software I didn't have to write. I'm already 90% there; if I didn't have to write that code, I'm ahead in the game. Any issue that blocks me from moving forward is my responsibility, and I'm ultimately responsible to my users and my boss for resolving it. I can't tell the boss, hey, that feature isn't going to be done because I'm waiting on some overworked, underpaid open source maintainer to fix a bug. Granted, it's easier said than done. Hopefully there's a workaround while the bug fix is being worked on; with so many different people doing so many different things with code, yours may just be one of those edge cases, and as long as there's a workaround, that's probably the best solution. But if you have no other choice, digging into the source is the way to go. And like I said, sometimes you just have to go in, hack at the offending code, and beat it into submission.
And that's pretty much what I've been doing over the past couple of years working with Remix: any time I see a bug, I'll post about it and either do a PR, create an issue, or at least give it some visibility. But I'll also go ahead and fix it locally if I need to and move on. Okay, so: using patch-package. A lot of times you'll see people say, hey, if you've got a bug, use patch-package to fix it. And that's great; it definitely gets you 90% of the way there. patch-package, if you haven't used it, is a tool that lets you modify a package in node_modules and generate a patch that can be reapplied as necessary. Basically, you go into node_modules and edit the source directly, then run patch-package. It compares your modified version with the official version and creates a patch file containing the differences. You commit that patch, and the next time you install your dependencies, it reapplies the patch to the package as if the bug fix were already there. It works great; it literally saves you from pulling out your hair, the little I have left. However, as amazing as this tool is, it does come with some limitations. First, it only supports a single patch per package. Sometimes you have multiple independent bugs that happen to be in the same file, whether in the compiler or the server runtime or wherever. You can't pass around one patch that fixes one bug and another that fixes the other and apply both, because you'd have to modify the file with both bug fixes in a single patch. The other problem is that patches are generated for a specific version of the package.
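The workflow described above comes down to editing the file in `node_modules`, running `npx patch-package <package-name>` to generate a `patches/<package-name>+<version>.patch` file, and committing it. patch-package then reapplies the patch on every install if you wire it into a `postinstall` script, which is its documented setup:

```json
{
  "scripts": {
    "postinstall": "patch-package"
  }
}
```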
So, in the demo I'm going to show, the base is Remix 1.7.5. I make my patches and everything's great. Then Remix comes out with 1.7.6. When you update and patch-package tries to apply your patches, it sees, hey, this patch is for 1.7.5 and you're using 1.7.6. It will warn you and then try to apply the patch anyway. As long as the new version of the package doesn't make any major changes around your particular change, it works. It's just like any other source version control: a patch is simply trying to merge a change on top of an existing file, and as long as the underlying code hasn't changed too much, it typically applies cleanly. But sometimes you'll get a conflict, and patch-package will bail out and return an error. So if you have a bunch of patches, that's one reason you might not want to update: you don't want to deal with fixing patches that don't apply cleanly. The final limitation, probably the hardest to work around, is that you're patching the generated files, not the original source. You're patching the output of whatever build system the package uses, with all its transpiling and minifying. Sometimes the code you see in node_modules doesn't even closely resemble the original source, and you're trying to figure out where the bug is when all the variable names are single letter-number combinations. So although patching is possible, if the package you're trying to fix really mangles its code, it's pretty much impossible in practice.
And one quirk of Remix is that for some packages, like the React package, it actually builds two versions: one for the server side and one for the browser. Remember, Remix does all of its React rendering server side as well as on the client, so the same code, `useLoaderData` and all those things, runs on the server as well as the client. What ends up happening is that if there's a bug you want to patch, you have to update it in both packages. So that's one of the drawbacks of patching: at least you have the option, but when I run into something that makes my life difficult, I try to figure out how to avoid the problem altogether. The way I do that is by patching the actual original source. So: overcoming the limitations. As with most programmers, when presented with a challenge, I try to find a solution, and I've been working on some scripts that let me sidestep the limitations of patch-package. First, I edit the original source. Instead of patching the transpiled files, I keep a separate repository that is a clone of the Remix repo, and I edit the original source there to fix the bug. Each bug fix is a separate branch off the base Remix repo. When I want to apply multiple patches, I create a branch off the base version and cherry-pick the patches I want. Finally, when generating the patch, because the patches ultimately need to be applied to the transpiled version, I execute yarn build from the Remix repo. That generates the same packages that are distributed on npm, and those built packages now have my bug fixes baked in, because I've cherry-picked those patches directly into the Remix source.
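The branch-per-patch flow just described can be sketched as the sequence of git commands involved; the branch names and `main` base here are illustrative, not the repo's exact layout:

```typescript
// Sketch: given an imported base version and a list of per-fix branches,
// produce the command sequence for building a combined, patched version.
function patchedBranchCommands(
  baseVersion: string,
  patchBranches: string[]
): string[] {
  const branch = `patched-remix-${baseVersion}`;
  return [
    // Start a combined branch from the imported base version
    `git checkout -b ${branch} main`,
    // Cherry-pick each independent fix on top
    ...patchBranches.map((p) => `git cherry-pick ${p}`),
    // Build the npm-distributed packages with the fixes baked in
    `yarn build`,
  ];
}
```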
And then I have another script called apply-patches that copies those built packages into my node_modules. I'll show you the whole process in a minute, but once the build runs, I take the packages that were built and copy them directly on top of the corresponding packages in my project's node_modules. It's like a hand-edited node_modules, because the build already has my patches and all the necessary code changes in it. Then I run the patch-package command, so it goes in and creates the patch. That patch now has the changes for whatever handful of fixes I've added, all built in, and I can commit it. Anybody who clones my code will always get the patches; they don't need to go through this whole process. This process is for someone like me creating patches, not for someone who's just a consumer of patches. As for versioning patches: again, the issue is whether a patch applies cleanly when you upgrade to a new version. What I do is pull the latest version of Remix on top of the main branch, then create a new branch and cherry-pick the patches in again. If there's a conflict, I can fix it, but I'm fixing it against the original source code, not some transpiled version that looks nothing like the original. We'll talk about the enhancements later. So, enough talking; let me show you. Okay, this is my GitHub client; it's called GitKraken. I'm not sure how many of you have used it or are familiar with it. I still use the console now and again, but this definitely makes it a lot easier to visualize the branch structure. Sorry, it's been a long morning. So here was the main branch.
So this was the original branch where I imported Remix 1.7.5, with all the source code in it. In fact, here's my pull-remix script, where I can specify which version I want. What it does is go out to GitHub and download the zip archive for that particular version. I then unzip it and rsync it directly into my sources folder. So this is the source here; it basically looks like what you'd get if you cloned the Remix repository directly. Then, here is my handleError commit. I branch all of my patches off of the base Remix repo. Here's my handleError export, and if you look at the code, this was the change. What's cool is I can do all the types and everything, so I actually added a HandleError function type, and here's me modifying the places where I wanted it called. Here's where Remix catches the error: if the server mode doesn't equal test, it just logs the error to the console, and there's no other place where we get access to it. So this is where I go in and call handleError, passing in all the information it needs. There were several different places, because Remix has different entry points: you've got the document request, the data request, and the resource request. It differentiates them because resource routes are the ones that don't have a default export. So there were several places where I needed to make that call. So, I make that patch. Then let's pretend we're on this patch here and check it out; now I'm on the handleError patch branch, and then I run yarn build.
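The core of a pull-remix script like the one described is just building the GitHub archive URL for a tagged release; the `remix@<version>` tag format and everything else about the script (download, unzip, rsync) are assumptions here, not the workshop's exact code:

```typescript
// Sketch: GitHub serves a zip of any tag at a predictable URL.
// Assumes Remix tags releases as remix@<version>.
function remixArchiveUrl(version: string): string {
  const tag = `remix@${version}`;
  return `https://github.com/remix-run/remix/archive/refs/tags/${encodeURIComponent(tag)}.zip`;
}
```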
Okay, so now I'm building base Remix 1.7.5 with my specific patch, because I've already applied it to this branch. If you go into the packages folder, this is where Remix builds the node_modules packages. The structure and all the files are exactly what you'd see if you looked in your own node_modules. That change was in server-runtime, so I go into server-runtime, into the dist folder, and look at server.js. Again, this is the original TypeScript transpiled into CommonJS, and here's my code right there. A little further down, here's me passing in handleError. Let's also look at the resource request and the data request: here's where it does the server mode test, and here's my code already in there. All right, Vladimir, I'm glad you were able to join us; like I said, this is being recorded so you can watch the rest later, and the code is on GitHub, so if you have any questions or comments, please add them to the discussions. So, we've now added that patch. What I typically do is first create a branch off of main called patched-remix-1.7.5, and what's really cool in GitKraken is that you can just drag and drop commits onto it. Now let's actually create a new patch, because this was something I noticed. First we check out the main branch, and if I run yarn build again, you'll see my handleError code disappears, because we're on the unpatched version. If I go to server-runtime's server.js, we no longer have the handleError call at the beginning of the request handler. This is the clean version, 1.7.5.
So one of the things I noticed is that when you throw an error, even in a production build, the response includes the stack trace, and typically you don't want a stack trace in your production build. Let me go back to the workshop app, the error logging one, and its error route. I'll run `npm run build`, because I want this in production mode, then `npm start`. Let's go to the network tab and throw the error, and what you'll see is that the response also includes the stack trace. The problem is that it's showing the user code that you don't necessarily want exposed on the client. All you really want is for this error to be logged to your logging service with the full stack trace, without exposing it to the client. So let's go back to the Remix source. Okay: serializeError. This is where the error is actually serialized. What you want to do here is check the server mode; for this particular fix, I'm going to say: if the server mode is production, delete error.stack. Oh, my three hours are up; one second, let me tell the timer to stop. Okay, so this is the patch we're going to make, and I think there are a couple of other places. And remember, this is where we patched our code before for handleError, so we want to make sure we're not touching that part of the code; if I put the change in there, I wouldn't be able to cleanly apply the other patch. I'm not even sure serverMode is visible there. Okay, I'm going to save that, and then if we look in GitKraken, we should see... ah, my bad.
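The patch being described amounts to something like the following sketch; it mirrors the idea rather than Remix's exact serializeError internals, and the names are illustrative:

```typescript
// Strip the stack trace from serialized errors in production, so the
// client response only carries the message.
type SerializedError = { message: string; stack?: string };

function serializeError(error: Error, serverMode: string): SerializedError {
  const serialized: SerializedError = {
    message: error.message,
    stack: error.stack,
  };
  if (serverMode === "production") {
    // Your logging service still sees the full error server side;
    // only the serialized payload sent to the client loses the stack.
    delete serialized.stack;
  }
  return serialized;
}
```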
I was actually building in the wrong spot; I was editing node_modules, I'm so used to doing that. I really wanted to make the change back in the server-runtime source. So let's do that again and copy the change over. There's one other spot I'm not sure we'll be able to handle, so let's not worry about that particular one for now. So now, if you look at my working directory, here are the changes I made. I'm going to create a new branch for this serialize-error patch. So now we have a new patch, and I'll commit it as "Remove error stack in production". So now I've got this patch, branched off of the original base. What I'm going to do now is check out my patched branch, the one that already has all of my existing patches but doesn't have the error stack one. All I need to do is right-click on that commit and say cherry-pick; do I want to commit immediately, yes. And there: my patched branch now has the error stack fix as well. So if I run yarn build, the built, distributed package should have my delete-error-stack code, and there it is. Now let's go back over to my app and run my apply-patches script. What this script does is first remove any patches I may already have, then copy the node_modules packages from my current build over the ones in my application. This part at the end is only there because I haven't yet figured out how to make rsync keep the executable permissions.
Once everything's in sync, it runs patch-package on the dev and server-runtime packages. I'm going to try to figure out a way to detect which packages were modified by patches so I don't have to specify them manually, but for now, that's how it works. So when I run it, you can see it's copied the files over, and now it's running patch-package against dev and then against server-runtime. What I plan on doing is creating a kind of UI where you can check which patches you want and have it do all this automatically in the background for you; I just didn't have a chance to finish it beforehand. But as you can see, it has now run patch-package against dev and server-runtime. If I look in my patches folder now, at the server-runtime patch, and search for "delete": there's my delete-stack change. And now if I do `npm run build` to recreate a production build, then `npm start`, refresh, and clear my network tab... okay, I'm not sure what I did; I must have broken something. But as you can see, something's different. Obviously I would test to make sure the patch is doing what it's supposed to; it's probably related to that one spot where I didn't have access to the server mode. What it should have done, when it returns the response, is delete the stack property, so you'd only see the message and not the stack. That way you'd still have the full error here, which gets logged to your error logging service, but it doesn't show up in the network trace. Oh, Andre, thank you, I will try that: `rsync -p`. So anyway, that's how I manage all the patches. I have a whole handful of other ones I haven't updated to use this new method yet, so it's kind of trial and error to see what workflow works best.
As shown on the enhancements slide, I'm also working on being able to apply an existing PR. Once I get the GUI tool set up, you'll be able to point at a PR in the Remix repo and apply it as a patch, because there are a lot of good PRs out there that, unfortunately, the Remix team has just been too busy to get to. This lets you at least get those fixes or new features in, and it gives them some real-world exposure: instead of the change only being exercised inside its tests, you can apply it to an application, run it, provide feedback, say, yes, it works as intended, or propose fixes to the PR. So that's one of the things I want to do: cleanly apply patches, pick and choose which patches you want, and upgrade those patches when a new version of Remix comes out. In some cases a future version of Remix may actually fix the bug you had to patch, and then you don't want that patch in there anymore. That's what I've been working on, and hopefully I explained it well enough. It's mostly just a bunch of git operations and syncing folders; nothing magical, but it does let you work around the patch-package limitations I discussed. And there it is: "serverMode is not defined"; I figured something there was wrong. So I guess that's it for this part. I know we're late on time. If you look in the repository, I do have a project there about doing testing with Vitest and Playwright. It's actually based on an example that Jacob Ebey from the Remix team had posted.
I know a future version of Remix is going to include some testing helpers so that you can test routes, not just loaders and actions. Part of the problem with testing right now is that a lot of the routing code uses hooks and other parts of Remix that require the Remix context, so to test a component in isolation you have to mock all of that. This particular project shows you how to mock things like your loader data, forms, and all that kind of stuff. And what's cool about this example is that it puts the tests in the route file itself. If you didn't see my discussion group session last week, this should be interesting for you. So here is a typical route: this part right here is your typical Remix app. You've got your loaders, your default component, loader data, search params, forms, links. And here is the actual test code. Just like you can inline your loaders and actions with your server-side code, you can now inline your test code, guarded by checking for NODE_ENV === "test". All your testing code lives right there. There's a mocks file that sets up all of the Vitest mocks for you, and you just provide the return values, like right here: if you want to mock the data that comes from useLoaderData, you mock the return value and pass in the data you're expecting. Same with any other hooks. Then when the route renders and calls useLoaderData, since the loader itself obviously didn't actually run, it just returns whatever data you specified as your mock return value. This way you can test different scenarios, whether it's the happy path or some kind of error condition.
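As a dependency-free sketch of the co-location pattern just described (the real example uses Vitest and a shared mocks file; the function here is purely illustrative):

```typescript
// A route module exporting its normal code, with tests co-located at the
// bottom behind a NODE_ENV guard so they're stripped from production.
export function greeting(name: string | null): string {
  return name ? `Hello ${name}` : "Hello world";
}

if (process.env.NODE_ENV === "test") {
  // In the real example this block holds vitest describe/it calls and
  // vi.mocked(useLoaderData).mockReturnValue(...) style mocks; plain
  // assertions keep this sketch dependency-free.
  console.assert(greeting("test") === "Hello test");
  console.assert(greeting(null) === "Hello world");
}
```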
You can just provide different mock return values and then make sure your component reacts accordingly. The main thing is that rendering uses React Testing Library, so your expectations should be against the rendered markup, not the React component. So here, this one is just checking whether the link was rendered. In the component it's a Remix Link with a `to` prop, but when we test it, we use getByRole: Testing Library recommends that you don't try to find specific elements, but query by role instead. This returns the link, the anchor tag, and then we call getAttribute("href"). Remember, Remix renders that Link as an `<a>` tag with an href pointing to the actual route, and that's what you do expectations against. Testing Library does not want you to test the implementation; it wants you to test the result, because the result is closer to what the user sees, and if you're testing what the user sees or does, you're more confident your app works as intended. Same thing here: you can mock return values for any hook and pass in the values, and so forth. Then the next section shows how to do unit tests for your loaders and actions. A loader is simply a function that Remix calls at certain points, passing in things like a request, context, and params. So to test your loader, all you need to do is create a new Request object with whatever data you're expecting, run your loader, get the response, and do assertions against it. So here we run the loader and expect the status to be 200.
Then we get the JSON payload and inspect it to make sure we're getting the correct data. And you just cover the different scenarios. Here's one with no name parameter; here's one where you pass a name query string but don't provide a value; and here's one where you actually have a value, so instead of the default, it should be "hello test". When you're testing your loaders and actions, you don't mock them; you actually want to run them and make sure what they're doing is correct, passing in controlled inputs and verifying the responses. The nice thing is you can enumerate all your different edge cases, pass in a request that matches each one, and verify that your loader or action responds correctly. And finally, this example also shows you how to do end-to-end integration tests. It uses Playwright. Playwright is similar to Cypress in that it launches a browser in the background; it runs a real server and automates the tests, going through the routes: go to this URL, wait until some text has been rendered, find some input and type into it. It's very scripted: you specify, step by step, what the user would be doing. So they'd fill the name field with "test", click the submit button, and then you wait until the page re-renders and check that it has the text you expect. And here I'm running it: here's the unit testing run; it goes through all your routes, and there are your test cases passing. And here's where the integration tests run.
And now it's actually launching: it finds all the tests, launches three worker processes, runs Chrome against each of them, runs through the script, and verifies everything is correct. And see, when I was doing the discussion, I wanted to show that it can fail: here it should technically be "hello test". If I comment that back out and rerun it, it should pass now. Oh yeah, sometimes the server doesn't reload. So now those tests pass because I corrected things. If you're interested in using Vitest or Playwright, check this project out, because it's got all the setup for you: it makes sure the globals are set up, and it even includes code coverage. If I run `npm run coverage`, it launches a server to show the test coverage. So everything's already pre-configured, with examples of how to use it, and I think it makes it really simple to do your testing, especially since you can just create your tests while you're creating your routes; I think that's pretty slick. And if your route files get too big, if you use remix-flat-routes you can co-locate your route file with your test file, so they're in the same folder instead of being separated. From what I understand, Playwright is actually really fast compared to Cypress. I'm not sure if that's because they've optimized things or because it's doing less than Cypress, but for general testing it works really well. I also like the way the test files read: it's very readable, and it makes it simple to say, yes, I understand exactly what that page is trying to do. One more thing: if you have an app that requires authentication for routes, each test instance is basically like opening a brand-new browser and going to a page.
So instead of having to go to your login page and log in with an actual user in every test, I took a page out of Kent's test scripts. What he does is create an API endpoint that returns a logged-in user automatically. That endpoint is obviously only active during testing, but he just hits it to get an active user, and then the rest of the script can carry on. That way he doesn't have to do the whole login sequence for every test. So, that is basically my workshop. It's not your traditional hands-on one, but I think it's kind of like a blog series in video form. Well, thank you, everybody, for sticking it out with me; I appreciate the feedback. Again, if you have any questions, you can follow me on Twitter at Kiliman; I'm on Discord as Kiliman as well, and I'm on GitHub, so definitely feel free to post any comments or feedback in the discussions there, and hopefully I'll be able to help out any way I can. This is one of the things I do like to do: I like to help people, and I like to teach people stuff. All right, that's it, folks. Hope you enjoy the rest of your week, and I will talk to you soon. Bye.
195 min
21 Nov, 2022
