Remix Persistence With DynamoDB


Remix is the best React framework for working with the second most important feature of the web: forms. (Anchors are more important.) But building forms is the fun part: the tricky part is what happens when a user submits a form! Not the client-side validation logic but the brass-tacks backend logic for creating, reading, updating, destroying, and listing records in a durable database (CRUDL). Databases can be intimidating. Which one to choose? What are the tradeoffs? How do I model data for fast queries? In this talk, we'll learn about the incredibly powerful AWS DynamoDB. Dynamo promises single-digit millisecond latency no matter how much data you have stored, scaling is completely transparent, and it comes with a generous free tier. Dynamo is a different level of database, but it does not have to be intimidating.

41 min
18 Nov, 2022

Video Summary and Transcription

DynamoDB is a next-generation key-value database that is low-latency, scalable, and easy to use. It offers advantages such as local development options, a generous free tier, and fast performance. Common misconceptions about DynamoDB being expensive or hard to learn are debunked. The talk covers topics like basic modeling, separating concerns, working with DynamoDB in Remix, and building a DynamoDB client. Overall, DynamoDB is a powerful database that integrates well with Remix and provides efficient data access patterns.

1. Introduction to DynamoDB

Short description:

Today I'm going to talk about persistence with DynamoDB, a next-generation key-value database. DynamoDB is a low-latency, wide-column key-value store that allows querying by key and storing values as JSON. It is a completely managed database with no patching or software upgrades required. Additionally, DynamoDB scales to zero, meaning you only pay for what you use.

Hi, everybody, it's a real pleasure to be here, and I'm stoked to be part of this Remix movement with you. Today I'm going to be talking to you about persistence with DynamoDB. My name is Brian LeRoux; you can find me under that name on the various socials that still exist, and I work for Begin.com.

So, before I get into Dynamo, I'm just going to talk about persistence in general. Persistence is a really important requirement for dynamic web applications. It is an essential complexity for anything that's personalized: anything that's got an authentication step, where we're saving some data about a person and need to do that in a secure and fast way, is going to need a database. And you can't do this with a flat file system. You don't want to do this with a flat file system, because you'll run into concurrency issues.

So, traditionally, people would choose a relational database, and DynamoDB is a kind of next-generation database that is key-value based. And so, most organizations out there these days have settled on Amazon for infrastructure, and Dynamo is the flagship key-value database for AWS. So, if you're using AWS, you probably would like to learn more about Dynamo.

So, what exactly is Dynamo? It's a low-latency, wide-column key-value store. That's a really fancy way of saying that we query by key, we store values as JSON, and we can have a different shape of columns on every row. So, for every row in my database, I could have different item attributes, and it's all good. Dynamo is a completely managed database, and this means that there's no patching and no software to upgrade, and that's really nice. Also really nice is that it scales to zero, which is a fancy way of saying that you only pay for what you use, and you don't pay for anything else. So, it's 100% utilization. You're not keeping a big cluster of servers around just to meet demand that you may or may not have. You just pay for what you use, and happy days. You move on from there.

2. Advantages of DynamoDB

Short description:

DynamoDB is the flagship managed database for the pioneer of the cloud, used by Amazon themselves. It offers great local development options, a huge free tier, and fast performance regardless of data size. With only a few key API calls and excellent compatibility with AWS Lambda, DynamoDB stands out from traditional relational databases in terms of speed and integration.

So, the next big question that I often get is: why would I choose DynamoDB out of the millions of database options available to us out there? The key one for me is that it's the flagship managed database for the pioneer of the cloud, and they use it themselves at Amazon to back Amazon.com's retail business. So it's just a good choice from that regard. There are great local development options. It's got a huge free tier. It's fast no matter how much data you store, which is kind of a science fiction dream for databases. It used to be that as you added servers you would add latency, and you definitely would add problems, which we just don't experience with a managed database. Its API surface is very small, so there's not a whole lot to learn; there are really only about six key API calls to deal with Dynamo, which is really great. It's got a good SDK for just about every runtime you could think of. And the sleeper feature, my favorite feature, is that it works really well with AWS Lambda. "Single-digit millisecond latency for queries, no matter how much data you're storing" is generally the line that gets tossed around, and this is a really big deal, because a lot of databases are quite slow today, especially traditional relational databases. They can be made fast, but they're never gonna be that fast, and they just don't work as well with Lambda, for a variety of reasons.

3. Why Not DynamoDB?

Short description:

So, the first big trade-off with DynamoDB is that it's an Amazon-only thing. However, there are examples of people cloning Dynamo for other key-value stores. Another valid criticism is that it's a bit weird to learn, especially the query syntax. But the myths about DynamoDB being expensive, requiring upfront knowledge of access patterns, and being hard to modify and migrate are not true. The DynamoDB Free Tier offers a generous amount of storage and requests for free, making it a cost-effective choice. You don't need to know access patterns upfront, and both relational and key-value databases require a schema declaration.

So, the next question would be: well, why not DynamoDB? Obviously there are trade-offs; this isn't gonna be a trade-off-free decision. The first big thing is that it's an Amazon-only thing, so people get worried about that. This isn't actually technically totally accurate, though: there are examples of people cloning Dynamo now for other key-value stores. We saw this happen with a lot of other Amazon services; S3, very famously, has been cloned quite a bit now. So that'll probably continue to happen.

A very valid criticism, though, is that it's just a bit weird to learn. Modeling for a wide-column key-value store is very query-centric: you have to think about how you wanna get the data before you store it, unlike relational data, where we normalize everything and can kind of query ad hoc. The other thing that's a bit weird is that the query syntax is a bit strange. It's not the same as relational, although I would argue that this is just a familiarity thing: if you had learned Dynamo first and then learned Postgres after, you'd probably find Postgres weird. So not really a major thing.

The "why not DynamoDB" conversation also has a bunch of myths, and I wanna address these head-on. One of the first myths is that it's expensive. Another is that you need to know all of your access patterns completely upfront before you do anything; that's not true. I've heard that it's hard to modify and migrate; that's also not true. Sometimes people say you can't use SQL; that's not true. And we'll address the biggest red herrings of all, lock-in and scaling, at the end, 'cause it'll be funny.

So, first one: is it expensive? Well, the DynamoDB free tier gives you 25 gigabytes stored on disk per month for free, and it's 25 cents for every gigabyte after that. That's a lot of starter data; 25 gigs is a ton. And that's just what storage on disk costs. You also have to pay for reads and writes, and basically it works out to about 200 million requests per month in the free tier. So, storing 25 gigs, 200 million requests per month: I think this is not expensive at all, this is very cheap. And even if, as you scale, you find these numbers get out of hand, it still speaks to using Dynamo for at least prototyping your application. It's gonna get you really far for very little. As an aside, the joke I used to like to say is: yeah, Dynamo's expensive, but so are DBAs. And that joke really only lands if you've had to shard a database, so I'm not gonna make the assumption that anyone knows what I'm talking about there.

Another big myth is that you need to know access patterns upfront. DynamoDB modeling is different than relational database modeling; it's not better, it's just different. In both databases, relational or key-value, you need to have a schema of some kind. You need to say that these values are expected and that's how I'm gonna query for stuff. So, it really doesn't matter which database you choose: at some point, you're going to have to declare a schema.

4. Debunking Common Misconceptions

Short description:

You're gonna have to say I have an accounts table or I have a songs table or whatever. And this is true, whether you're dealing with relational or a wide column store. Sometimes people say you can't use SQL with DynamoDB. That used to be true. That's no longer the case. We now have a query language called PartiQL. So, the other big red herring that gets thrown around is whether or not you're locked in. And at a certain point, you're always locked in. The biggest lock-in of all is choosing upfront complexity. Sometimes people think that if you use a database like Postgres, that means your data is inherently portable. And I think that's a lie. You're going to have to go through a huge amount of pain to move that data between different database providers, regardless of the database you choose. This one's fun, and it comes up all the time. People are like, well, does it scale? Yeah. It super does.

You're gonna have to say, I have an accounts table, or I have a songs table, or whatever. And this is true whether you're dealing with relational or a wide-column store. So, I don't think it's a great argument. You're going to be migrating and learning about your app, and as it gets bigger and you understand the access patterns more, you're gonna be able to store the data in more efficient ways. And that's true no matter what database you choose.

Sometimes people say you can't use SQL with DynamoDB. That used to be true; it's no longer the case. We now have a query language called PartiQL. Sounds fun. It looks like SQL, so I guess if you find that fun, then it is. And you can click through and check it out. It's not just limited to Dynamo; it's actually supposed to be a broader idea. I think right now Dynamo is probably the only one using it, though.

So, the other big red herring that gets thrown around is whether or not you're locked in. And at a certain point, you're always locked in. I would argue that the biggest lock-in of all is choosing upfront complexity, and sharding and managing a running instance for a database is definitely a lot of complexity. Sometimes people think that if you use a database like Postgres, that means your data is inherently portable, and I think that's a lie. You're going to have to go through a huge amount of pain to move that data between different database providers, regardless of the database you choose. So, to me, it's not a very good argument. There's a much longer, more rational written argument on Martin Fowler's website that you can check out. I just don't think lock-in is a good reason to choose anything. If you've already selected Amazon as a vendor, you're probably going to be there for a while.

This one's fun, and it comes up all the time. People are like, well, does it scale? Yeah. It super does.

5. DynamoDB Overview and Terminology

Short description:

Amazon DynamoDB powers high-traffic Amazon properties and systems, including Alexa and Amazon.com sites. It scales and is used by many organizations for various applications. DynamoDB is easy to scale, has a generous free tier, supports an SQL-like syntax, and works well with AWS Lambda. To learn more, I recommend picking up a book by Alex DeBrie and following him on Twitter.

So Jeff Barr said last year: Amazon DynamoDB powers multiple high-traffic Amazon properties and systems, including Alexa, the Amazon.com sites, and all the Amazon fulfillment centers. Over the course of the 66-hour Prime Day, these sources made trillions of API calls while maintaining high availability, with single-digit millisecond performance peaking at 82 million requests per second. That scales.

And it's also not just Amazon. If social proof is how you like to make decisions, there are a ton of logos of people using DynamoDB at scale for a variety of applications. I think we can put that one to bed.

So, as a very fast recap: DynamoDB scales up and down dynamically, has a generous free tier, supports an SQL-like syntax, is fairly straightforward to migrate around, and works really well with AWS Lambda. Yeah, it scales, and yeah, it's a bit weird to learn, and we're gonna get into that.

Before I get into it too much further, if you just wanna zone out and not pay attention to the rest of my talk, the one thing I would recommend you do is pick up this book and follow Alex DeBrie on Twitter, if you're still there. This is an immediate buy for your whole team if you're using Dynamo; it is the best documentation written for DynamoDB by far. When we started adopting Dynamo, I wish this book had come out, because we would have saved a lot of time if it had. It is the cheapest thing you'll ever be thankful that you bought. It's gonna give you superpowers, and I can't recommend it enough.

So Dynamo, let's get into it a little bit deeper. Some terminology to be aware of. First of all, Dynamo has tables, and tables are collections of items. You can think of these like pages in a spreadsheet, and items are like a row in a spreadsheet, or maybe in a database, if that's what you're familiar with. Items have attributes. So if the table's name is accounts and a row is a user in accounts, it might have attributes like name, email, and address. The partition key is how we query for a specific item or a collection of items. A partition key is kind of like a primary key in relational terms. It's usually a fairly unique value, but it doesn't have to be. And it's how we get stuff. A sort key is a secondary way to query. A sort key might be a way to narrow down the collection of items, or a way to get a more exact representation of hierarchical data, to get a lower leaf in the tree, as it were. And then the last concept is indexes. Indexes are basically just like tables: another table with a different partition key and sort key schema, so that you can query by different stuff. And I'll explain what I mean by that.
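To put names on those terms, here's a hypothetical item sketched as the JSON Dynamo would store; the table and attribute names are made up for illustration:

```js
// one item in a hypothetical `blogs` table
let item = {
  accountID: 'account-123', // partition key: which account's blogs
  slug: 'hello-world',      // sort key: narrows down to one post
  title: 'Hello, World',    // plain attributes; every item can have a
  published: '2022-11-18'   //   different shape, and that's all good
}
```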

6. Basic Modeling and Single Table Design

Short description:

A partition key is a unique value for accessing an item. It's important to have the correct table names and indexes when provisioning DynamoDB tables. Primary keys and sort keys are essential for querying one-to-many relations. DynamoDB supports querying by various attributes, such as slug and published. Amazon recommends using a single table design whenever possible.

So, basic modeling. A partition key is a unique value for accessing an item. This right here is what we call an Arc file, a document we use to provision DynamoDB tables in the Architect open source project.

So we say: hey, I want a table named accounts with a partition key of accountID, and I'm going to have an extra index for the accounts table. Oh, I have a typo here; that should be accounts with an s, that's terrible. It should be accounts so that it matches here. The index is a different way to query: I might want to get my user by accountID, but I might only have their email, such as when they're registering or signing in or logging into the system. So I need an index to query by email. I like how my typo carried over to here, so sorry about that.
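Corrected, that Arc file looks roughly like this (a sketch, assuming current Architect syntax, where `*String` marks the partition key and `@tables-indexes` declares the extra index):

```
@app
myapp

@tables
accounts
  accountID *String

@tables-indexes
accounts
  email *String
```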

So, basic modeling, part two. There's a primary key concept, and then there's a secondary key concept, the sort key, sometimes called a secondary key. The sort key is for querying one-to-many relations. So if I was modeling a blog in my system, I've got accounts, and accounts have many blogs: one accountID would have more than one blog post. And we could have a sort key of slug to get an individual post. Now, this is cool, because we might also need to query blogs by when they were published, and so there's an index where the sort key is published instead. So it's kind of nice: now we can get all the blogs, we can get a blog by its slug, and we can get blogs by when they were published. (A sketch of this table definition appears below.) Cool. So: partition keys and sort keys, tables and indexes. That's the most basic high-level stuff; there's a lot more to it. One of the things to know about DynamoDB, and you hear this come up all the time, so I'm gonna address it now: what about single-table design? Amazon recommends that you model your application with as few tables as possible.
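Before moving on, here's a sketch of the blogs modeling just described, again in Arc-file terms (names assumed), with slug as the sort key (`**String`) and an index keyed on published instead:

```
@tables
blogs
  accountID *String
  slug **String

@tables-indexes
blogs
  accountID *String
  published **String
```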

7. Modeling Data and Separating Concerns

Short description:

Ideally, you should have one table to store complex relationships. However, for ephemeral and durable data, it's best to separate them into different tables. Single table design is an optimization step once you know your data access patterns. When modeling your app, consider using the Grunge Stack or working with DynamoDB locally. Put your data access logic in separate models to keep your application layer focused on business logic.

And ideally that means just one table and you can get really fancy with partition keys and sort keys to store a really complex relationships inside of your tables, inside of one table.

Now, I don't like to mix ephemeral data and durable data. What I mean by that is, if I have API keys or tokens which I'm expiring with a time-to-live, I don't wanna store those in the same place where I have user accounts, which are durable data that I wanna keep around forever and maybe back up on a regular cadence. I don't wanna back up short-lived tokens; I want to back up user accounts. So we don't wanna put those tables together, and you'll typically have more than one table, depending on the life cycle of the data involved.

Now, this is a personal opinion and everybody's got one. So take it for what it's worth. But I think single table design is an optimization step once you know your data access patterns. Until you know them, you should probably just lean towards trying stuff out and figuring out the best design for your app at the time. And that might mean that there's gonna be opportunities to improve it later. And that's not a bad thing. That's actually what will happen regardless of whether you design it for a single table or not.

Okay. That's a lot of hype. Let's get into modeling some stuff. First of all, if you're gonna create a Remix app to work with this, you're gonna wanna choose the Grunge Stack. That uses Architect under the hood, which gives you all these AWS superpowers. You don't have to do that, though, and I'm gonna show you how to work with DynamoDB directly on your machine, locally, without an Amazon account, because that makes things a lot easier. If you do check out the Grunge Stack, you will see in the generated code that it gives you this awesome note-taking application that is really close to how I would recommend you work anyways.

So typically, you'd have some kind of folder called models, and you wanna put all of your data access logic in those models. You don't want your application layer interacting with DynamoDB; you want it interacting with the entities of your system. In this case, the entities are User and Note. Whatever you wanna call them, whatever those things are that you're persisting, you wanna collect them into an object, so that your application layer reads really cleanly and is more focused on business logic, not on data access logic. It's really, really important to separate those concerns. Once this is all compiled, it'll all be mashed together in the right places. But if you dig into these models, you'll see that they encapsulate all the ugliness of working directly with the database, which isn't that ugly.
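As a sketch of that pattern, loosely following the shape of the Grunge Stack's generated note model (the exact file name and key scheme there may differ), a model hides the PK/SK plumbing behind an entity-shaped function:

```js
// app/models/note.server.js, roughly: routes only ever call getNote
import arc from '@architect/functions'

export async function getNote({ id, userId }) {
  const db = await arc.tables()
  // the partition/sort key details stay encapsulated in the model
  const result = await db.note.get({ pk: userId, sk: `note#${id}` })
  if (!result) return null
  return { id, userId, title: result.title, body: result.body }
}
```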

8. Working with DynamoDB in Remix

Short description:

But db.note.get with these PK and SK things is a lot less clear than the interface of just getNote. So that's a nicer way to work. I'm gonna create a new tab and make a directory called remix-dynamodb. I'm gonna echo a pair of curly braces into a file called package.json and install some dependencies for testing. Now, I'm gonna touch a file called app.arc and modify it. We will start by defining a DynamoDB table called cats with a partition key of catID. We'll be able to run this locally with Architect. Let's create test.mjs.

But db.note.get with these PK and SK things is a lot less clear than the interface of just getNote, right? So that's a nicer way to work. Now, I was gonna go through this and just sort of explain how all this works, but I feel like it starts with almost too much context. And I don't wanna confuse you with all this extra business that's going on. I just wanna show you the most basic, basic stuff.

So with that, I'm actually just gonna jump over to my terminal, and I should fix my example. Super funny. I'm gonna create a new tab, make a directory called remix-dynamodb, and jump in there. I'm going to echo a pair of curly braces into a file called package.json, and I'm gonna npm install tape, tap-spec, @architect/sandbox, and @architect/functions, saving them as development dependencies. It's gonna take a sec. We're all kind of used to the npm tax, but it's worth maybe talking about what's going on here.
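In shell terms, that setup is roughly:

```sh
mkdir remix-dynamodb && cd remix-dynamodb
echo '{}' > package.json
npm install --save-dev tape tap-spec @architect/sandbox @architect/functions
```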

So I've just created a package.json file and added some dependencies to it. Those dependencies are for testing, and that's it. And there we go. Now, I'm gonna touch a file called app.arc and modify it. I'm just gonna call the app myapp. And we will start by defining a DynamoDB table called cats. We'll give it a primary, or a partition key, sorry, I always wanna call it a primary key. We'll give it a partition key of catID. Typically I like to name tables in the plural, and then each row would be a singular type of thing. Now, with the way that Architect works, we'll be able to run this locally, which is really, really nice. But before I do that, let's pop over to package.json. Cool, I got all my deps. So we'll just add a test script: tape against a file called test.mjs, piping the results to tap-spec. Now, test.mjs doesn't exist yet, so let's create that.
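At this point the two files look something like this (a sketch):

```
@app
myapp

@tables
cats
  catID *String
```

And in package.json:

```json
{
  "scripts": {
    "test": "tape test.mjs | tap-spec"
  }
}
```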

9. Exploring Local Testing with the Arc Sandbox

Short description:

We add a test and run it locally using the Arc sandbox. The DynamoDB runtime allows us to test without deploying to the cloud. We get some output and can explore the architect DynamoClient for data access.

And we'll just add a silly test. Oops. Works. Now you can add all kinds of tooling to this, obviously, and you can get really, you know, buck nutty if you want, but I'm just trying to get this working. And I just want to like, do like, one little step at a time and see how it works, you know. Just taking baby steps. So we have one passing test that feels pretty good. But there's not a whole lot going on here.

So remember when I went into this app.arc file and defined this table called cats? It would be cool if we could take a look at that. So why don't we do that? Before we do, we're going to need the Arc sandbox. Architect ships with a local DynamoDB runtime based on a thing called Dynalite, and it lets us run Dynamo locally, which is nice: you don't have to deploy anything to the cloud to test out your DynamoDB. Now, it runs like a dev server, so we have to start and stop it, and we'll do that before and after our tests run. And I'm not sure why I get this screen thing going on. There we go. And end it. We'll see some weird output when we do this, so let's take a look at that. Oh, missing a comma. I should install ESLint, but I don't want this to turn into a configure-my-stuff story. You'll see here we got some output, and when the sandbox started it said tables created in local database, which is nice. I actually don't want to see that every time, so I'm gonna tell it quiet: true. And instead of just asserting "works", why don't we take a look at the Architect Dynamo client. @architect/functions is a collection of small functions for working with AWS, and one of those functions is a data access layer. So we're gonna say: let data equals await arc.tables.
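Put together, test.mjs at this stage looks roughly like this (a sketch; tape runs tests in order, so the sandbox starts first and ends last):

```js
// test.mjs
import test from 'tape'
import sandbox from '@architect/sandbox'
import arc from '@architect/functions'

test('start the sandbox', async t => {
  await sandbox.start({ quiet: true }) // quiet hides the startup output
  t.pass('sandbox started')
  t.end()
})

test('cats table exists', async t => {
  let data = await arc.tables() // reflects the tables defined in app.arc
  t.ok(data.cats, 'got a cats table client')
  t.end()
})

test('shut down the sandbox', async t => {
  await sandbox.end()
  t.pass('sandbox ended')
  t.end()
})
```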

10. Creating Data Access Layer and Writing Data

Short description:

This is gonna reflect back the tables that we defined in our arc file. If we were to deploy this, that arc file would generate the CloudFormation needed to create the database at AWS. We'll use arc.tables as our data access layer and run queries through it. At runtime, the table gets looked up by key, which gives you a Dynamo client that is easier to work with. Let's write some data to the table with a catID, name, and birthday.

So this is gonna reflect back the tables that we defined in our arc file. And if we were to deploy this, that arc file would generate the CloudFormation needed to create the database at AWS. So let's take a look at that.

So arc.tables gives us a data access layer built from that generated CloudFormation: for each table we defined in the arc file, we get a client we can invoke queries against.

The reason this is useful: I'm gonna quickly jump over to the Architect website and show you. Here on the left is an arc file, and on the right is the generated CloudFormation for that arc file. You'll see down here somewhere it defines a DynamoDB table, and that's great: we've got scope and data and IDs, and it's got a TTL on it. When this table gets generated, it'll just have a meaningless GUID for a name. If I call it cats, for example, there's no "cats" in here; it gets mashed in with what's generated. So at runtime, we actually look it up by key, which allows you to have this Dynamo client that's a little easier to work with. You're just working with data.cats, as opposed to working with some GUID.

Okay, so we got our table. Let's write some data to it. So: write a cat. result equals await data.cats.put. And I think I said in the schema that it would have a catID. And, you know, this isn't intended to be production code, so we'll just grab a timestamp for the catID. Give it a name (Sutro's my cat's name), yeah, and a birthday. Okay.

11. Writing Data to the Database

Short description:

We wrote a cat to the database and then we exited right away. By default, this will actually just run in memory, which is great for fast tests. In our case, for begin.com, we have thousands of tests that run in milliseconds.

And instead of saying t.pass, we'll say t.ok(result.catID, 'has a cat'), and log out the result here so we can take a look at it. Cool. So, we wrote a cat to the database, and then we exited right away, and it's gone, because the sandbox doesn't persist anything to the local file system. By default, this will actually just run in memory, and that's great, because as you add more and more tests and build out a bigger and bigger data access layer, the tests are gonna run fast. In our case, for begin.com, I think we have something like thousands of tests, and they run in milliseconds, which is how you want this to work.
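The write-a-cat test ends up looking something like this (a sketch; the timestamp ID is deliberately not production-grade, which will matter in a minute):

```js
test('write a cat', async t => {
  let data = await arc.tables()
  let result = await data.cats.put({
    catID: String(Date.now()), // demo-only ID; two fast writes can collide
    name: 'Sutro',
    birthday: '2015-06-01'     // made-up birthday for the demo
  })
  console.log(result)
  t.ok(result.catID, 'has a cat')
  t.end()
})
```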

12. Reading and Querying Data in DynamoDB

Short description:

Let's read all the cats and add a couple more. We can query the cats by cat ID, which is a recommended way to work with rows. This allows us to retrieve specific cats quickly and efficiently. Reading and writing data in DynamoDB is fast, taking only a few milliseconds. So, as far as CRUD operations go, we have create, read, and list with scan, and now we can round it out with a query. This is just one way to work with DynamoDB, and there's much more to learn about modeling and building larger applications.

So, why don't we read all the cats? We can add a couple more cats. Another cat, Tuxedo, how about that? And I'm gonna want this ID later, so I'm gonna declare myCatID up here, outside the scope of my test, and copy this chunk of code up here. So this catID will be outside the scope of the test, and later on I'll be able to look it up. But for now, let's just get all the cats.

So: data.cats.scan. Okay, so scan is typically considered bad practice. Typically. What scan will do is go through and read everything, and if your table is really big, that's gonna take a long time, and it's gonna have to paginate through multiple pages of data. So typically you wanna grab things by querying or by directly giving it a key. Scan is just the sloppy, fast way to work. But you know what? It also scales for quite a long time. So don't feel bad about using the tools you have available to you, and then iterating to improve. It doesn't have to be perfect upfront.
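A sketch of the list-everything test:

```js
test('read all cats', async t => {
  let data = await arc.tables()
  // scan reads the whole table, paginating on big tables; sloppy but fine here
  let cats = await data.cats.scan({})
  console.log(cats.Items)
  t.ok(Array.isArray(cats.Items), 'got cats')
  t.end()
})
```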

So let's read one cat: data.cats.get, and we're gonna get by catID. So if we see here, by catID we pulled back this one, and this is just screaming fast; we're just adding queries and it's no big deal. Reading and writing all take place in a few milliseconds. Not bad, kind of exactly what you want. So as far as CRUD goes, we've got create, we've got read, and we've got list with this scan business. Let's do a query, just to round it out. So this is read cats by catID, with a key condition expression. This is kind of the recommended way to work with rows. Now, you can do this in multiple different ways; this is just one way, and there's a whole lot to unpack about learning proper modeling and building up a larger application. I'm trying to give you a sense here; I'm not saying this is the only way or the best way. I'm introducing you to Dynamo for the first time. So we have a key condition expression saying: give me all the cats where the catID equals this value. And then we have to have an expression attribute value, and one of them is going to be catID, and we'll pass the catID we generated above.
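Both reads together look roughly like this, assuming the module-scope myCatID captured when the cat was written:

```js
test('read cats by catID', async t => {
  let data = await arc.tables()

  // read one cat directly by its key
  let cat = await data.cats.get({ catID: myCatID })
  t.ok(cat, 'got one cat by key')

  // query: give me all the cats where catID equals this value
  let result = await data.cats.query({
    KeyConditionExpression: 'catID = :catID',
    ExpressionAttributeValues: { ':catID': myCatID }
  })
  t.ok(result.Items.length, 'query found the cat')
  t.end()
})
```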

13. Inspecting Results and Performing Deletions

Short description:

That's gonna get us a result and we'll inspect the value of that result. As you design and learn how to model out DynamoDB applications, you'll use IDs to query on groups of items in hierarchies. You can add conditions to get more than one item by ID. Let's delete a row and have all the CRUD operations: create, read, update, delete. Use a testing framework like tape for node projects, as it's closer to the metal and provides more trust in test results. Grab the code to get the cats and delete a cat by ID, then check the result. Additionally, perform a scan to see the number of items in the scan.

That's gonna get us a result and we'll inspect the value of that result. So you can see it's really actually quite similar to running a scan. But it was prefiltered on the ID that we were looking at. Now as you design and learn how to model out DynamoDB applications, you'll use IDs as a way to like query on groups of items in hierarchies. And this is how you get more than one item by ID. And you can add all kinds of conditions to this, it doesn't have to be too complicated.

I'm gonna do another one of these, say cat number two, and we'll use it for this one. So: read all cats by catID. Has a bunch of cats. Cool, cool, cool. Let's delete a row, so we'll have all the CRUD now: create, read, update, delete. Destroy a cat? That sounds sad. Delete a cat row, that's a little more benign. I am a friend of the cats and I don't want to see them get hurt.

Okay, so I'm using tape here, by the way, very deliberately. Sometimes people in the React community think you should use Jest. Don't use Jest to test Node projects; it's not very good for that. It's bloated, it's slow, and it patches globals. You want something as close to the metal as possible for your tests. You don't want your tests to be a murder mystery; you want them to be a place where you can trust that what's going on is what you thought was going on. All right, so let's grab some code from there: we're gonna get the cats, and we're gonna say delete, and it's just by catID, so we can steal that code. And then we'll see if we get a result of some kind. And sadly, yes, we will. Actually, the right way to do this, or the right way to learn, I should say, is to do a scan afterwards and see how many items are in the scan. Call this one result. Oh, and delete the second one. Okay. I think I know where I messed up; it was back up here when creating this. Yeah. Nope, still have an error. Delete cat row... got cats. Oh: read cat by catID.
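The delete test, verified with a scan as described, comes out roughly like this:

```js
test('delete a cat row', async t => {
  let data = await arc.tables()
  await data.cats.delete({ catID: myCatID })

  // verify with a scan: the deleted cat should be gone
  let cats = await data.cats.scan({})
  t.notOk(cats.Items.find(c => c.catID === myCatID), 'cat row deleted')
  t.end()
})
```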

14. Building a DynamoDB Client

Short description:

That's a very fast tour of building a DynamoDB client in one file with three or four dependencies. As you mature the project, you would likely want to have a separate module for interacting with the cats code. This allows your application to consume that model independently, freeing you from worrying about how the components compose. It's just the tip of the iceberg when it comes to learning DynamoDB.

I got one. And then delete row... has cats. Let's just see what I got here. Oh, we deleted both cats, didn't we? I deleted this cat, and did I delete both cats? Where'd the other cat go? Going off script. Read cats by catID: we have a Tuxedo, we have a Tuxedo... that's interesting. It's almost like we overwrote our cat up here. Did cats.put overwrite it? Oh, I think it did, because these puts might have happened so close together that they shared an ID. Yeah. That's what happened: they shared an ID. Now, this is obviously for demo purposes. You could use UUIDs here, and you probably wanna key on some other data like email too. But let's just see. So yeah, now we've got just one back, and it's doing what we expect. Cool. So that's a very fast tour of building a DynamoDB client in one file with three or four dependencies. As you mature this project, you'd likely wanna have a thing called cats, right? And you wouldn't be interacting with the Dynamo code in here; you'd be interacting with the cats code. So the exercise for the reader is to move this into some kind of model, and then your application would consume that model. It would be tested independently, separate from the UI of your application, and you wouldn't have to worry about how these things compose; they can compose nicely together as you build out your app. Okay, so that was the highest-level, fastest speed run of learning Dynamo I could throw at you. It obviously only barely scratches the surface.
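As a sketch of that exercise (file name and function names are made up; UUIDs fix the shared-ID bug from the demo), a cats model might look like this:

```js
// models/cats.mjs: the app talks to these functions, never to Dynamo directly
import arc from '@architect/functions'
import { randomUUID } from 'node:crypto'

export async function createCat({ name, birthday }) {
  let data = await arc.tables()
  // random UUIDs avoid the Date.now() collision we just debugged
  return data.cats.put({ catID: randomUUID(), name, birthday })
}

export async function getCat(catID) {
  let data = await arc.tables()
  return data.cats.get({ catID })
}

export async function listCats() {
  let data = await arc.tables()
  let result = await data.cats.scan({})
  return result.Items
}

export async function deleteCat(catID) {
  let data = await arc.tables()
  return data.cats.delete({ catID })
}
```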

15. Recap and Conclusion

Short description:

DynamoDB scales dynamically, has a huge free tier, supports SQL-like syntax, and works well with Lambda. It's a bit weird to learn, but if you want to give it a try, follow me on social media and join the Architect Discord for assistance.

A big recap here. DynamoDB scales up and down dynamically. It's got a huge free tier, supports SQL-like syntax, it's fairly easy to migrate, and it works super well with Lambda. Yes, it scales, and yeah, it's a bit weird to learn. I'm out of time, so please follow me on the various socials at Brian LeRoux. If you want to give this a try, give the Grunge Stack a go in Remix, and if you're having any problems, just join us at the Architect Discord. You can find us at arc.codes. We're always happy to help. Thanks so much.

16. DynamoDB and Remix Integration

Short description:

DynamoDB is a great complement to Remix because it provides excellent performance and scaling characteristics. While tools like Postgres are safe and widely used, they don't work well with ephemeral compute and Cloud Functions. DynamoDB, on the other hand, is a key-value store that is well-suited for these scenarios. Although DynamoDB is relatively new and AWS-specific, it offers a higher level of abstraction and can be seamlessly integrated with Remix. Overall, Remix provides a clean and efficient way to handle data access patterns, whether you choose to use Postgres or DynamoDB.

Let's have Brian join me for Q&A, and let's first go over the poll that we had, which is: what database do you use? And we see the results. Most people chose Postgres, DynamoDB did beat out MongoDB, and nobody chose other. So, a very interesting response, and I know we were talking earlier and you kind of said, oh, everybody just wants to play it safe and stuff like that. Keeping all the jokes aside, I do want to talk about: how does DynamoDB complement Remix, in your opinion? Why is it such a great combination?

Yeah, and I didn't want my talk to be like, here's how to build a form with Remix. I felt that everybody kind of understood that part, but to me the most exciting part about Remix is that we've got loaders and actions co-located with our forms, and those forms are real database forms. They're going to talk to a backend directly. And so the hard part then becomes: what's that form going to do, and how do I talk to a database? In my view, Dynamo is probably the best in class of managed databases today. I get into that pretty heavy in the talk, but just empirically speaking, it's got probably the best performance and the best characteristics for scaling (maybe not the best pricing characteristics). Tools like Postgres are great and they're fine and they're gonna get you a really long way, and I think it's totally a safe choice; everyone's using Postgres. But the problem with Postgres is that it doesn't speak to ephemeral compute that well, to things that are talking through cloud functions, and Remix is really designed to work on the edge with cloud functions. So it's gonna work better with a key-value store like Dynamo or Mongo. I'm actually surprised that Dynamo beat Mongo; maybe it's just 'cause it was a Dynamo talk. But Dynamo is still a bit new, it's also still in this corner around AWS, and not everybody's super comfortable going to AWS; they want a higher-level abstraction. So it doesn't surprise me that we'd see these kinds of scores in the poll. And I think for Remix users, whatever it is you choose to put in your loader or action, whether it's Postgres or DynamoDB, that's the beauty of Remix: you get this nice layer that makes data access patterns really slick and clean.
