GraphQL Workshop Medley: Build Cloud Native Apps with an Open Source Distributed SQL Database


YugabyteDB is a popular open-source distributed SQL database designed and built from the ground up for cloud native applications. YugabyteDB lets developers easily scale RDBMS workloads for internet-scale apps while natively handling cloud infrastructure outages, supporting cross-region and cross-datacenter deployments. In this workshop, participants will get firsthand experience implementing GraphQL applications using three of the most popular GraphQL engines, Hasura, Prisma, and Spring GraphQL, connected to the YugabyteDB database.

This workshop provides quick start guides for GraphQL developers getting started with YugabyteDB. It includes steps for creating a YugabyteDB database instance, configuring the database for the respective GraphQL engine, and best practices for writing GraphQL queries against YugabyteDB. By the end of the session, developers will know the main concepts needed to get started with YugabyteDB and GraphQL and to solve the business use case at hand.

155 min
06 Dec, 2021


Video Summary and Transcription

The GraphQL Workshop Medley covers Distributed SQL, Yugabyte, and Popular GraphQL Servers. YugabyteDB bridges the gap between SQL and NoSQL systems, offering strong SQL features and horizontal scalability. It integrates seamlessly with GraphQL and supports regional and multi-cloud deployments. The workshop explores creating Hasura and YugabyteDB instances, configuring the cluster and database, applying migrations, and working with Hasura, Prisma, and Apollo. It also covers scaling GraphQL workloads with YugabyteDB, query tuning, and optimizing performance.

1. Introduction to Workshop and Agenda

Short description:

Welcome to the GraphQL Workshop Medley! We'll cover Distributed SQL, Yugabyte, and popular GraphQL servers. I'm Eric Pratt, a senior solutions engineer at Yugabyte. With me are Nikhil, an engineer on the ecosystem team, and Marco Rejcic, a solutions engineer at Yugabyte. We'll provide an overview of distributed SQL databases and GraphQL, and then dive into hands-on sessions. Join us for a real-time poll app, a social post app, and a space explorer app. Time permitting, we'll discuss tuning performance for GraphQL queries with distributed SQL.

Welcome everybody. My name is Eric Pratt. Today we're going to be going over our GraphQL Workshop Medley: getting started with Distributed SQL, Yugabyte, and some popular GraphQL servers. As I said, my name is Eric Pratt. I'm a senior solutions engineer here at Yugabyte on the cloud team. Previously, I was a premium support engineer over at DataStax. I have two others with me, and I will let them introduce themselves.

Hey everyone, my name is Nikhil. I'm one of the engineers on the ecosystem team. We build integrations with popular developer tools like GraphQL, the Spring Framework, and a host of other cloud-native projects. If you have any questions on the integrations with distributed SQL and GraphQL, you can reach out to us on our Slack. That's what we do daily in our engineering efforts. Prior to this, I was at Ubuntu as a senior data architect, building cloud-native solutions. Thanks, Brad.

Hi everyone. My name is Marco Rejcic. I'm also a solutions engineer here at Yugabyte. Previously I was in solutions engineering at Oracle, focused on both cloud and on-premise technologies. So excited to speak to you today. As Nikhil mentioned, we're very active on our community Slack channel. So whether you download Yugabyte locally or use the cloud, if you have any questions, feel free to find us there.

This is our workshop agenda. We're just going to give you a brief overview of getting started with distributed SQL databases and GraphQL. We're going to have a brief overview of our open source database, the open source offering from Yugabyte. And then we're going to do some hands-on sessions for implementing these. So you're going to do a real-time poll app, and that'll be with Marco and Hasura. For me, we have a social post app with Prisma and Yugabyte. And then finally, Nikhil will be doing a space explorer app, which I think is pretty cool, with the Apollo platform and Yugabyte. At the end of those, if we have some time, we will be going over tuning performance for GraphQL queries with distributed SQL and some do's and don'ts that we've come across as we've worked with customers over the last year or so.

2. Introduction to GraphQL and YugabyteDB

Short description:

Getting started with GraphQL, you can query and mutate data via GraphQL constructs, evolve the API without versioning, and use out-of-the-box pagination and filtering. GraphQL is a robust query language for your API, and examples compare it with generic REST APIs to demonstrate data retrieval. We will explore different GraphQL frameworks such as Hasura, Prisma, and Apollo in relation to Yugabyte. YugabyteDB is an open-source distributed SQL database that bridges the gap between traditional SQL and NoSQL systems, aiming to be the go-to database for cloud-native applications. The founders have experience with Oracle and with building Cassandra and HBase at Facebook, where they encountered the need for scalable NoSQL systems due to rapid user growth.

So getting started with GraphQL, we have your app, your GraphQL server, and our database. We have abstraction layers that sit over the database so you can query and mutate data via the GraphQL constructs, build your schema, and evolve the domain models. We can evolve the API without versioning, which I think is kind of nice. And then you have out-of-the-box pagination and filtering, support for disparate data sources, and then finally eventing support.

So GraphQL, right, it's just a query language for your API. You build your queries, and you can get exactly what you need. You can combine a few different resources in a single request. It's pretty robust. And, like we said previously, you can evolve your API without versioning, and it supports event-based systems. So we have an example here of a query. If you're familiar with GraphQL, this should look pretty familiar. And then, you know, there are your generic REST APIs, so your GETs and POSTs; it supports all of those. Here are some examples that we have of how those work with a particular response. And then for GraphQL, we have our ways of retrieving data, right? Which is our posts. You can see the request there, and here's the query that we would run on this side. So let's say we want to get the author name and articles from a particular table, and then we have the response here that you'll get. As we work through a lot of these different workshops, we will be running through quite a few of these, so you'll get familiar with those for the different GraphQL engines, Hasura, Prisma, and then Apollo. So you get to see what each of those has and how they interact with Yugabyte. It's pretty cool. I think you guys are gonna like it.
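To make the request and response shape concrete, here is a minimal sketch of posting a GraphQL query like the one described above over plain HTTP. The endpoint URL and the author/articles field names are illustrative assumptions, not the exact schema used later in the demos.

```ts
// Hypothetical endpoint and schema, for illustration only.
const query = `
  query {
    author {
      name
      articles {
        title
      }
    }
  }
`;

async function run() {
  // GraphQL requests are ordinary HTTP POSTs with a JSON body.
  const res = await fetch("https://<your-graphql-endpoint>/v1/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  // Expected shape: data.author -> [{ name, articles: [{ title }] }]
  console.log(JSON.stringify(data, null, 2));
}

run().catch(console.error);
```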

So I'm gonna turn it over to Marco here, as we will be going over the open source distributed SQL database and the fundamentals of Yugabyte. So before we get into the fun part of the live demo, we wanted to just catch everybody up on what we're doing here at Yugabyte, why we feel this is important for cloud native applications, and really what YugabyteDB is for those that are unfamiliar. So YugabyteDB is a 100% open source distributed SQL database. Our goal is to make it the go-to database for cloud native applications. We're really trying to bridge the two worlds between traditional SQL and the strengths that those types of systems provide, and the strengths of typical NoSQL systems. Our founders have vast experience with Oracle as well as with building out Cassandra and HBase at Facebook in the mid 2000s. And if you know anything about Facebook in the mid 2000s, that's kind of where they had this crazy user growth and really had to start moving towards NoSQL systems, because the traditional database systems that they were using just could not scale to what they wanted.

3. Challenges of SQL and NoSQL Systems

Short description:

Once user growth started showing, SQL systems couldn't scale. We provide strong SQL features with Postgres compatibility, ACID transactions, and security. Downsides of SQL systems include resilience and high availability as bolt-ons, manual sharding, and separate instances for geographical distribution. NoSQL systems excel at high availability, horizontal scalability, and geographical distribution. However, they lack full SQL compatibility, data integrity, and multi-node transactions. YugabyteDB bridges the gap by combining strong SQL features, data integrity, and horizontal scalability.

And obviously this was very new for everybody running on the internet. Once this type of user growth really started showing, SQL systems could not scale at the speed you needed them to, right? So we continue to provide strong SQL features with our PostgreSQL compatibility, feature compatibility, ACID transactions, and security, all major factors in why users like SQL systems in the first place, right? Whether it's Oracle, SQL Server, MySQL, or Postgres, these are always the really important things, right?

You want strong SQL features, you want good performance, you want strong security, and you want ACID compliance and data integrity. The downside of those types of systems is that although there's some resilience and high availability, most of it is bolt-ons. You have the ability to scale horizontally, but you have to use manual sharding, right? It's not a core feature of the database; it wasn't something the database was created around, right? And then for geographical distribution, typically you're gonna have to have separate instances of your database in order to accomplish that.

So, we take on the benefits of the NoSQL world, where these three things really are something those systems are good at, right? You have a cluster running multiple nodes where high availability can be taken care of; if one of those nodes goes down, you have other nodes that are able to take on that traffic. You're able to scale horizontally, right? That's the key piece for those systems and why they were built in the first place, scaling reads and writes, and then geographical distribution. The downside of NoSQL systems, obviously, is in that phrase NoSQL: although some of these systems continued to add SQL features as time went on, they don't have full SQL compatibility, they're still missing a lot of features that are used in applications today, and they don't have data integrity or the ability to run multi-node transactions and make sure they're ACID compliant. So, we're really bridging those two worlds together, making sure that you have strong SQL features and data integrity, but at the same time you also have built-in HA and horizontal scalability and the ability to run your database on any cloud or your own private data center, along with different regions across the entire world as you grow as a company.

4. Choosing Postgres as Inspiration for Query Layer

Short description:

We chose PostgreSQL as the inspiration for our query layer due to its popularity and community support. We reused the Postgres query layer, similar to Aurora, to minimize changes for users transitioning to YugabyteDB. Unlike Google Spanner, which introduced a new syntax, we focused on providing the SQL features that users are familiar with. Our goal is to reduce friction for new users and allow them to use the same drivers and tools as they would with Postgres, making the transition seamless.

The first real fork in the road that we came to was: if we're gonna design the perfect distributed SQL database, what route do we want to go down, what database do we want to focus on and use as inspiration for our query layer? And what we decided to do is go with PostgreSQL. We saw a lot of popularity. We loved the community. As this visual on the left-hand side shows, from 2013 to where we're at today, and this visual cuts off at 2019-2020, but even to the end of 2021, we see Postgres popularity really growing, more so than some of the other NoSQL systems out there that have been extremely popular in the last five or ten years too, right? So we really wanted to continue to build off of that. What we did is we reused the Postgres query layer, similar to the approach that Aurora took, right? Aurora allows you to reuse your MySQL and Postgres applications without a lot of changes, similar to YugabyteDB if you're coming from the Postgres end. And that's a little bit different from what Spanner did, right? Google Spanner was one of the first distributed SQL databases, but their approach was, hey, we're gonna build a new syntax. And the issue that they ran into is that users are still looking for a lot of those RDBMS features, a lot of SQL features. And we really saw Amazon Aurora benefit from that as time went on. Aurora today is much more popular than Google Spanner. We actually saw Google Spanner in the last month come out with more PostgreSQL feature functionality, because I think they've noticed that that's gonna be something important that they're gonna have to introduce too. So YugabyteDB kind of blends these two things together, with the Aurora approach on the query layer. We took their approach where we were like, hey, we want the least amount of friction possible for new users coming to YugabyteDB. We wanna make sure we have all the SQL features that Postgres provides. We wanna have the ability for users to continue to use the same drivers, et cetera, in order to make the least amount of changes necessary to run on YugabyteDB, instead of developing a new SQL syntax, instead of rewriting Postgres or any other open source database in a different language, right.
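Because the query layer is the Postgres query layer, the standard Postgres drivers work unchanged. As a minimal sketch (hostname and credentials are placeholders, assuming a Yugabyte Cloud cluster), the Node "pg" driver can connect to YugabyteDB's YSQL port and run a query exactly as it would against Postgres:

```ts
import { Client } from "pg"; // the standard node-postgres driver, unchanged

async function main() {
  const client = new Client({
    host: "<your-cluster-host>",       // placeholder Yugabyte Cloud hostname
    port: 5433,                        // YSQL, YugabyteDB's Postgres-compatible API
    user: "admin",
    password: process.env.YB_PASSWORD,
    database: "yugabyte",
    ssl: { rejectUnauthorized: false } // Yugabyte Cloud connections are TLS-encrypted
  });
  await client.connect();
  const { rows } = await client.query("SELECT version()");
  console.log(rows[0]); // reports a PostgreSQL-compatible version string
  await client.end();
}

main().catch(console.error);
```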

5. YugabyteDB Storage Layer and Scaling

Short description:

The second piece with YugabyteDB is the storage layer, which follows the Google Spanner approach for scalability and high availability. YugabyteDB supports RDBMS features like stored procedures, triggers, and row-level security, making it compatible with SQL systems. It also offers automatic sharding, eliminating the need for manual sharding in traditional systems. Each node in a YugabyteDB cluster acts as a master node, allowing for vertical scaling or adding more nodes to the cluster. YugabyteDB can be seamlessly integrated with GraphQL, allowing for the creation of microservices and APIs that can scale across multiple nodes. Regional and multi-cloud deployments are supported, providing redundancy and low latency for global applications. YugabyteDB offers both synchronous and asynchronous replication for data integrity and low latency in different regions. By combining Yugabyte Cloud and other managed services like Hasura Cloud, developers can build a fully managed GraphQL stack without the need for manual infrastructure management.

The second piece with YugabyteDB is the storage layer, right. For the query layer we took the Aurora approach; for the storage layer we really took the Google Spanner approach, because we wanted both scalability and HA. If you know anything about Amazon Aurora, you know that although it scales reads really well, write scaling isn't nearly up to par. So, following Google Spanner and using Raft consensus, we're able to add more nodes to your cluster as you need them, right.

And just to double-click on the RDBMS feature set, this is what's supported today on Yugabyte as of version 2.9; you can see a full list of this in our docs as well. Things like stored procedures, triggers, row-level security, user-defined types, et cetera. You'll see that not many NoSQL databases, if any, nor NewSQL or other distributed SQL systems, are gonna have that, because it's almost impossible to do without reusing the PostgreSQL code itself. So if you're using any of these today, rest assured you can use them with YugabyteDB as well if you decide to make that transition.

Now to double-click on the scaling portion. Over here on the left is what you've seen for years, right: your typical, hey, we have a single-server database, we have the clients coming in, and all our data is on a single node. If we want to scale to handle more client traffic, we're kind of bottlenecked. Depending on whether your bottleneck is storage or CPU or RAM, really all you can do is increase the size of your instance, scaling vertically. And eventually you get to this crossover point where it just becomes too expensive, right? Or maybe the physical limitations are there. You're relying on the manufacturers of these servers, who are choosing your CPU and RAM relationships, right? And sometimes that's what makes you think, okay, now I've got to start sharding, all right? Well, up to this point, you would have had to shard manually. If you're using MySQL or Oracle, they have the ability to shard, but they don't do automatic sharding for you like YugabyteDB does; you have to manually shard your data across a cluster, and you have to manage all of that. Anytime there are changes, you're gonna have to go ahead and make those changes to the shards themselves and handle the sharding in the application, and this is very time consuming. If you're somebody who is doing this today, or has done this in the past, or has even just heard about it, it's very time consuming, and it takes a lot of experience as well. So finding the people that know how to do it and how to do it right is also extremely challenging. So YugabyteDB's goal was to take this away from the user and make sure that the database itself could do this automatically.

So every time you create a database in YugabyteDB (take a look over here on the right side), we split that data into chunks, or shards, and then we distribute those shards across all of the nodes in your cluster. The cool part about this is that now every single one of these nodes is gonna be acting as the quote-unquote master node, accepting client traffic, having its own CPU and RAM to serve that traffic. Anytime that you need to scale, you can still vertically scale if you'd like, right, go from two cores to four cores, to eight, 16, 32, et cetera, or you can just add more nodes to the cluster. And these nodes can be in the same region or in different regions; we're very flexible when it comes to the type of topology that you're running. The key part is all these nodes are identical, right: every node has its own Postgres query layer and its own DocDB storage layer running on the YugabyteDB node within a single cluster.

So if we take a look at a user on their cell phone, if you have a mobile application or a web application using GraphQL connected to something like the Hasura GraphQL Engine, then Hasura communicates with the backend YugabyteDB engine; it communicates with that query layer that's Postgres compatible. As you'll see when I run through the Hasura demo, we'll be using the Postgres connector itself. There's no special YugabyteDB connector; we're using the exact same one. And then that communicates with the data, right? So connecting to any node, YugabyteDB will cache the different shard locations and be able to direct the traffic to where it needs to go, and then you can add or remove nodes anytime without any downtime. So that's different from how you'd have to do it with traditional systems, right? Let's say you had a monolithic application and a single-server database: you'd have one app connecting to GraphQL, connected to the GraphQL server, hitting that single-node Postgres instance, in this example.

Now, if we take a look at what it's gonna look like with YugabyteDB and GraphQL, as you kind of see today, you're broken up into microservices, each one using GraphQL APIs, connecting to the same cluster. As you build out more microservices and more APIs, you can hit the same cluster and add nodes to that cluster, right? So you're not stuck on a single node. If you start with a three-node cluster, you can grow that to a six-node cluster or a nine-node cluster, et cetera. A lot of times in our conversations, users will create microservices, right, for their application code, and they'll break their massive application code into different sections. And sometimes they'll still be running it on a single-node database, whether that's Postgres, MySQL, or Oracle. It doesn't really matter, because all of them are the same type of system, right? It doesn't scale as easily. So a key part of any type of modernization effort for your applications is gonna be the database, because if you continue to run it on a single-node instance, you're eventually gonna run into another bottleneck, and removing that bottleneck is a benefit that YugabyteDB provides.

And there are different topologies with any YugabyteDB cluster. Every YugabyteDB cluster has a minimum of three nodes. So if we assume every single one of these little visuals is a single node, you can run across availability zones. Alright, so availability zone one, availability zone two, availability zone three. Your data is being distributed across these three nodes running in three different availability zones, and it's being synchronously replicated, using Raft consensus to make sure that there's data integrity between them. So you have consistency across zones, but obviously you don't have protection against regional failures. That's where the regional topology comes in. If you want region-level redundancy, you're gonna want to have your nodes in three different regions rather than three different availability zones. Technically, they can be in three different availability zones as well, but they're gonna be in three different regions, right? So that's kind of the key there. We're expanding the distance that we want to be protected across.

And then the last one is multi-cloud. If you want to run on AWS and GCP and Azure, and you want to have X amount of nodes in each, you can do that as well. And that obviously takes it to another level, right? Where if something were to happen to a cloud provider in a particular region, you have other nodes in other cloud providers that will make sure that your cluster is still up and running. And these clouds can actually be a private data center as well. So this doesn't necessarily have to be a public cloud like AWS. If you have your own data center, you can have a node in your own data center and then other nodes that are maybe in public clouds. One of the key areas of conversation that we're having with users is, as they expand globally, how do we reach our users with lower latency in these new regions that we're pushing our app into? And that is something that YugabyteDB is really good at, where we allow you to scale and add nodes to new regions as needed. And you can accomplish this in two different ways. One of them is using synchronous replication and the idea of geo-level partitioning, where you pin specific rows to specific regions. We partition on a particular column, which lets you choose: hey, if we have a row come in and the geo column is US, it'll automatically pin that data to the United States. If it's EU, it'll go to the EU. If it's Asia, it'll go to Asia. Right? So that comes up a lot when you talk about data compliance, especially with the EU and GDPR compliance; we almost always need to make sure that, hey, EU data stays in the EU. But at the same time, if you wanted to, you could query across the different regions, even though the data is stored in its particular region. YugabyteDB also has the concept of asynchronous replication. So if you want to have two separate clusters synced up with asynchronous rather than synchronous replication, that's also something that we can do, and we've seen users accomplish that for low latency in different regions as well. So using Yugabyte Cloud and other managed services, let's say, for example, Hasura Cloud, we've really seen developers start to embrace this type of architecture where they don't have to deal with the infrastructure piece. They don't have to go to the command line and run different scripts and different types of commands in order to stand up these different technologies. These technologies are fully managed for them by the people that built them in the first place. And by using them together, you really get a fully managed GraphQL stack. And that's what we're going to show you today on the Hasura piece.
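As a rough illustration of the row-pinning idea, here is a hedged sketch using standard PostgreSQL list partitioning on a geo column, which YugabyteDB supports through its Postgres-compatible YSQL layer. Pinning each partition to a specific region additionally uses YugabyteDB tablespaces, which is omitted here; the table and column names are made up for the example.

```ts
import { Client } from "pg";

// Hypothetical schema: partition user rows by a "geo" column so each row lands
// in the partition for its region. Region pinning itself would be done with
// YugabyteDB tablespaces attached to these partitions (see the YugabyteDB docs).
const ddl = `
  CREATE TABLE app_users (
    id    bigint NOT NULL,
    geo   text   NOT NULL,
    email text,
    PRIMARY KEY (id, geo)
  ) PARTITION BY LIST (geo);

  CREATE TABLE app_users_us   PARTITION OF app_users FOR VALUES IN ('US');
  CREATE TABLE app_users_eu   PARTITION OF app_users FOR VALUES IN ('EU');
  CREATE TABLE app_users_asia PARTITION OF app_users FOR VALUES IN ('Asia');
`;

async function main() {
  const client = new Client({ connectionString: process.env.YB_DATABASE_URL });
  await client.connect();
  await client.query(ddl);
  // A row with geo = 'EU' is routed to app_users_eu automatically.
  await client.query(
    "INSERT INTO app_users (id, geo, email) VALUES ($1, $2, $3)",
    [1, "EU", "user@example.com"]
  );
  await client.end();
}

main().catch(console.error);
```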

6. Creating Hasura and YugabyteDB Instances

Short description:

It removes the need to install and manage things yourself. And then with the Yugabyte piece added in, you have the luxury of scaling linearly as your traffic grows. You can just add nodes to your cluster, and those nodes can be in different cloud providers and different regions. So that's just a quick hello from the Yugabyte team, giving you an idea of what YugabyteDB is at a high level. If you have any questions, feel free to keep asking them in the chat. Now, getting to the fun part, starting off with Hasura, we're going to show you how to use the GraphQL console and the backing GraphQL engine to interact with YugabyteDB and its Postgres compatibility. We'll create a Hasura Cloud instance, create a YugabyteDB Cloud instance, configure Hasura to communicate with that instance, and run the application to show you what you can do with it. The application today is a real-time polling app built using React, powered by Hasura Cloud for the GraphQL engine and YugabyteDB for the backend. First, create a YugabyteDB Cloud instance, create a Hasura Cloud instance, and make sure you have network access. Let's go to hasura.io/cloud to get started. Create a project and save the admin secret. Now, let's go to cloud.yugabyte.com to create a YugabyteDB cluster. Sign up and create an account, then create a free tier cluster. Follow the steps and click Next.

It removes the need to install and manage things yourself. You click a couple of buttons through a UI and it does that work for you. And then with the Yugabyte piece added in, you have the luxury of scaling linearly as your traffic grows, right? You can just add nodes to your cluster, and, as we kind of talked about, those nodes can be in different cloud providers, different regions, et cetera.

So, that's just a quick hello from the Yugabyte team, giving you guys an idea of what YugabyteDB is at a high level. Obviously, we'd love to continue this conversation with you guys. If you have any questions, feel free to keep asking them in the chat; whichever one of us is not presenting will take a stab at answering them for you. And we'll go from there.

So now, getting to the fun part, right? Starting off with Hasura, what we're going to do really quickly is show you how to use the GraphQL console and the backing GraphQL engine to interact with YugabyteDB and its Postgres compatibility. So, the steps that we're going to run through: we're going to create a Hasura Cloud instance, we're going to create a YugabyteDB Cloud instance, and then we're going to configure Hasura to communicate with that instance, prove it out, and make the necessary changes to the schema and the application in order to use that Hasura Cloud instance. And then we're going to run the application and show you what you can do with it.

So, the application we're going to be running today is called the real-time polling app. It's built using React, it's going to be powered by Hasura Cloud for the GraphQL engine piece, and then the backend is going to be YugabyteDB. We're going to show you how you can cast votes on the poll and look at the results in real time. We'll create a database and show you how the data exists in there as well. So it'll be fun.

The first thing that we're going to have to do, and I'll put this in the chat here: the Yugabyte Hasura Cloud workshop is going to be the repo that we're using today. It calls out some prerequisites and some technical requirements. We're going to go through all of this with you. It reiterates the application that we're going to be working on. What we'll be using today for the hands-on session is going to be this document right here, and we're slowly just going to go through these steps together. So the first thing: create a YugabyteDB Cloud instance, create a Hasura Cloud instance, and make sure that we have network access. That's step one. So let's first go to Hasura. If you go to hasura.io/cloud, we'll be able to go ahead and get started. I've already gone ahead and created my Hasura account. It's very quick. You can actually just sign in with your Google Cloud credentials, which is what I did. And you can go ahead and create a project. Free tier. Okay. And of course, if you want to use a paid one, you can. I'm using the free tier because I want to show you guys how you can use the free tier on Hasura and YugabyteDB, not have to pay anything, and play around with some of these different applications, and change the code to maybe look a little bit more like what you guys are looking to do. I'm going to change this to YugabyteDB demo. Let's go ahead and save that. Okay. And then one of the things that we're going to need here: we're going to go ahead and save a copy of this because we're going to need it later. So I'm going to save it; I already have my cluster created, so I'm going to add that. And then the last thing we need here is this admin secret. So let's go ahead and copy that over here as well. So that's everything on the Hasura side. Really quick to create an account and create a project. Now, really quickly, let's go ahead and do it on the YugabyteDB side. If you go to cloud.yugabyte.com, you'll be able to log into your account if you already have one. If you do not, you can go to /signup and it'll take you to this page. Marco, can you just increase the size of your screen a bit? Yep. How's that look? Is that better? Yeah, that's good. So, it's just typical personal information in order to sign up. We don't require you to put in any type of credit card. So you can go ahead and create an account, start using Yugabyte Cloud, and then you'll be introduced to our Getting Started section. Here, I've already gone ahead and created a cluster, just because it takes a little bit of time, so that's already ready for me. Obviously, you guys will not have any clusters, but you'll be able to go ahead and click on Getting Started, create a free tier cluster, and you'll be able to create your free Yugabyte Cloud cluster. We have a limit of one per account, so you'll see here that I cannot create one. But I'm gonna go ahead and just walk you guys through creating a paid cluster. Obviously, you don't have to do that; it's the same process. So what you'll see here is, I'm gonna go ahead, this one's created in AWS US-West1, so I'll just keep it in the same region. I'll do the same. So here, I'll call it Hasura demo. It's not letting me, because I already have that name. And then I'm gonna go ahead and do this one, US-West1. If I want availability zone tolerance or node-level tolerance, I can do that as well. Soon, we'll have regional level as well. And then, go ahead and click Next.

7. Configuring YugabyteDB Cluster and Hasura Cloud

Short description:

To create a YugabyteDB cluster, download your credentials and choose between default or custom credentials. There are two APIs available: YSQL for Postgres compatibility and YCQL for Cassandra compatibility. After downloading the credentials, create a cluster and configure IP allow lists. You must have at least one IP in the allow list to connect to the cluster. Configure the Hasura Cloud instance to use YugabyteDB Cloud by launching the console and connecting to the existing database using the provided connection string.

The first thing you'll have to do is download your credentials. If you want to keep the default credentials, go ahead and do that. If you want to add your own, you can do that as well. You'll see here that it calls out two different APIs. YSQL is the Postgres-compatible API that we spoke about a little bit during the presentation portion. YCQL is our Cassandra wire-compatible API, if you have any NoSQL workloads that you wanna test out. You'll see here that you can use the same credentials for each one, or you can do it differently. So really, it just depends on the level of control you want. But you'll have to download your credentials either way before you can create your cluster.

Once you do download those credentials, go ahead and get them copied over so that you have them available. Make sure all of that is secure and nobody else has access to your passwords. And then go ahead and click on create a cluster. Like I mentioned, I already have my cluster set up, so I'm gonna go ahead and take you through what that looks like once it is up and running.

You'll see an overview section. We have an idea of what our fault tolerance is. If you're running a free tier cluster, you'll see that there's only one node. So if you're looking to test things like high availability and horizontal scalability, you will not be able to do that with the free tier cluster. But if you have a particular use case in mind and you wanna try something like a POC, you can reach out to me or Prad or anybody on the sales engineering team or the sales team, and we can get you set up with a free trial as well, in order to test out an actual full YugabyteDB cluster. You'll see some more information: the level of encryption, when it was created, key metrics. I don't have anything running right now, so you don't really see anything there. Nodes and tables (we have a single node), different things we can do on the performance side, looking at the activities that are happening, and then other settings too. So, one of the things that's called out here is that in order to connect to this cluster, you must have at least one IP in the allow list. Right now, I don't have any. So let's go ahead and create our IP allow lists. I'm gonna create a new one. This one's just gonna be my home network, meaning that if I want to use this from my local laptop, I can. Connection from: IP address or range. I don't know my IP, so I'm gonna go ahead and just click on this button to detect it and add it in there, and go ahead and click Save. Let me run it again. Okay, so it's gonna go ahead and apply that. Here, I do wanna add another one for my Hasura cluster, so let me really quickly go to Network Access and add another IP address. This one's gonna be called Hasura, in order to connect from my Hasura project. So for the IP range, I'm gonna go back to my Hasura project, copy this over here, and I will add that here. As you can see, it can be either a specific IP address or addresses or an entire range, so I've done both. My home network one is a range; the Hasura one is a particular IP address, because I knew what it was. So you'll see here, green denotes that, hey, this is an active network allow list, right? You'll see here that it's being used by this particular cluster, and it'll show all the other clusters that might be using it. You can't delete it if it's being actively used. So if I wanted to delete this particular allow list, I would have to go into the Hasura cluster and first take it off of that, and then I'd be able to terminate it, right? The Hasura one I'm not using yet, so I'm allowed to delete it here if I need to. So let me go back here, let me go to Settings. I can also go to Quick Links, Edit IP Allow List, and just add this one. So you can see here the different allow lists that you create. It's as simple as clicking them on and off here. Okay, so it looks like it's applying that one as well. Soon we'll have both available to us. All right, awesome, so that takes care of the first step. Let me take a quick sip of water and we'll continue.

Next step: what we wanna do is go ahead and configure the Hasura Cloud instance to use the YugabyteDB Cloud cluster. So go back to our Hasura project and we're gonna launch the console. Right now we're on the API tab. We wanna maneuver over to the Data tab, and then we're going to use the Connect existing database tab in order to get our connection working. So for our database name, we'll see here that we already have one that we need to use called yugabyte-cloud-instance, and I'll show you why we need to use that in a little bit, but let's just go ahead and copy and paste it over. We're gonna continue to use the PostgreSQL driver. YugabyteDB is Postgres compatible, so we'll be able to use the exact same driver in order to connect. Database URL is the way to go. You'll see here they have a little connection string for us. If you go back to the yugabyte-cloud-instance, you actually have a connection string readily available for your connection to Hasura. If you click on Connect, there are gonna be three different connection methods. One is with the Cloud Shell, right, and we'll use that a little bit to verify once we get the data migrated over. You can do a client shell, where you download the YugabyteDB client and use it from your remote server, or you can connect straight through here, right. You have a little option here to optimize for Hasura Cloud, so let's go ahead and do that. I'm just gonna copy this. Make sure that you're using the right API, and then I'm gonna go over here and make the necessary changes I need in order to do this.

8. Creating a New Database in Yugabyte

Short description:

To create a new database specifically for the application or demo, launch the Cloud Shell and access the yugabyte database. Check that there are no tables and create a new database named hasura for the migration. Yugabyte supports all PostgreSQL-related shorthands and compliant queries, allowing the use of existing PostgreSQL knowledge.

You'll see that SSL is authenticated; if you do not wanna use SSL, well, the SSL mode defaults to require anyway, so whether you use this or not it's gonna be using the SSL mode of require. But we're gonna go here. This is Yugabyte, right? So one thing I wanna make sure of is that I'm not actually using the yugabyte database. It's just best practice: I wanna go in and create a new database specifically for my application, or specifically for this demo that I'm running. So I'm gonna go ahead and launch the Cloud Shell, and it'll just open up a new tab that allows you to access your database directly. So once this pops up, I'll be able to put in my password and take a look and prove to you guys that there are no tables in there and that we're actually starting from scratch. All right, so we're in the yugabyte database. Let's go ahead and check: there's nothing here. Let's list all of our databases. We have a number of system databases that are created for us by default. So I'm gonna go ahead and create database hasura. That way, when we run our migration using the Hasura migration feature, we will be able to connect to the hasura database, and that's what's gonna create all of our tables. All right, so when we come back in here, we'll be able to do that. So let me... So as you can see, right, all the PostgreSQL-related shorthands, all the PostgreSQL-compliant queries, you can run on Yugabyte. So you don't have to learn a new SQL language. All the existing PostgreSQL knowledge you can use with Yugabyte as well.
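As a hedged sketch of the same flow from code rather than the Cloud Shell (hostname and password are placeholders), the full Yugabyte Cloud connection string can be used with the standard Postgres driver to create the demo database and list what exists:

```ts
import { Client } from "pg";

async function main() {
  // Placeholder connection string; note the YSQL port 5433 and sslmode=require.
  const admin = new Client({
    connectionString:
      "postgresql://admin:<password>@<cluster-host>:5433/yugabyte?sslmode=require",
  });
  await admin.connect();

  // Create a dedicated database for the demo, mirroring `create database hasura;`.
  await admin.query("CREATE DATABASE hasura");

  // List non-template databases; the new "hasura" database should appear.
  const { rows } = await admin.query(
    "SELECT datname FROM pg_database WHERE NOT datistemplate"
  );
  console.log(rows);

  await admin.end();
}

main().catch(console.error);
```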

9. Connecting Database to Hasura

Short description:

For security reasons, it's recommended to create your own database for applications. When using the cloud, logging in as the yugabyte user restricts certain actions like creating a schema. Update the database user and password in the connection string and connect the database to Hasura. Clone the workshop repo and install the Hasura CLI. Edit the config file in the Hasura directory to ensure the migration can connect to the database. Edit the endpoint and admin secret in the config file.

Yeah, exactly. I'd also like to point out that the default database that is created is yugabyte, but that's owned by the postgres user. For security reasons, we give you an admin user, and that admin user has limited functionality on the default yugabyte database. So you'll want to create your own database for your applications. That way you can create schemas and other things like that, or the application can create its own schema for a certain thing. So you wanna make sure of that, and my demo will do the same thing, but I just wanted to point that out while we're looking at this here. Yup. Thanks, Brad. And that is specific to the cloud as well. If you're running open source YugabyteDB, you won't have any issues logging in as yugabyte or as postgres if you wanted to. Again, I think from a best-practices standpoint, it's typically better to just create your own database anyway, but just to give you guys a heads up: if you're using the cloud and you log in as the yugabyte user, you're not gonna be able to do certain actions like creating a schema, for example, on the yugabyte database; you're gonna have to create a new database and then create a new schema.

Okay, so I think we're good for that. Now we're gonna go back to here. We're going to put hasura here, because that's the database we wanna connect to. 5433, this is the port that you wanna use for YugabyteDB, right? Here we have the hostname. You can also find the hostname by going to Settings and then Hostname right here. And Eric, I saw somebody raise their hand. I don't know if they've posted it in the chat or in Discord, but if you guys wanna take a look... I think this is Mary. Any questions? Go ahead and ask. Oh, okay. Never mind, it was a mistake. False alarm. Alrighty, so okay. So now we're back here. We have saved our database user and our database password, so let's go ahead and update that. I didn't create my own password, as you can tell; I used the Yugabyte-provided one. This is really what it looks like: it's your typical PostgreSQL connection string. The difference here is that we're gonna have our Yugabyte Cloud hostname as well as the Yugabyte-specific port of 5433. We're calling out our database and the rest. So let's go ahead and copy all of that over. Let's go to the connection pool... not the connection pool, but the Connect Database tab in Hasura, and then let's go ahead and connect our database. And then shortly we should have this. There we go, in the blink of an eye. So now we have this here. Let's go ahead and view our database. And you'll see we don't have any tables. We don't have any other types of relationships, right? So that takes us to the next step, where we're gonna want to clone the workshop repo. It's already available for you right here. You can copy and paste it right here, the entire thing, right? If you want to actually go through the repo as well, you can do that; you can just click Code and copy it right here. So we'll go ahead and clone that. You'll see here that I don't have anything really going on. I'm gonna go ahead and clone this. Now it is here. Let me go into it, and then you'll see all the different pieces of that repo that we were looking at earlier; this is actually what we're reading from in the browser. Okay. So we've got to go ahead and do that. If you haven't already, please install the Hasura CLI. If you're using something like Homebrew, you can just do brew install hasura-cli and it'll download it for you. So for now, what we're gonna do is go to the hasura directory. We're gonna have to edit our config file. When we run the application, we wanna make sure that the migration can actually connect to our database. So let's go ahead and take a look at that. We're gonna go under hasura and config.yml, and there are two main areas that we'll have to edit. We'll have to edit the endpoint for our application, and then we're gonna have to edit the admin secret to make sure that we have the right authentication, right? We already saved that here under Hasura. Can we increase the font size of the IDE a little bit? Yeah. Yeah, this is good. Okay, is this good? Yeah. Okay. So let me go ahead and copy this, I'll move this over to here. We probably don't need this much space.

10. Applying Migration and Verifying Setup

Short description:

We updated the endpoint and the admin secret, and ran the migration. The tables and views have been tracked, and the foreign key relationships have been created. We verified the setup by running a sample query in the Hasura GraphQL console, which returned the expected results. We also confirmed the data on the YugabyteDB side using SQL queries. Now, we need to edit another file to ensure that the application can connect.

Okay, so I will go ahead and change this. You'll see that instead of the original hasura-yb-demo endpoint, it's now going to be yugabytedb-demo, or whatever your naming convention was. We'll go ahead and change this as well, to MySecret, and then we're gonna go ahead and save this and exit out of that.

So the next part here is gonna be actually applying the migration itself. We went ahead and updated the endpoint and the admin secret; now we're gonna run the migration. So let's go ahead and do that here. Let's go ahead and run this. Awesome, so the migration applied. Now we'll be able to go here and see that there are some untracked tables. Awesome. So here we go. We have the tables for that particular application that we're gonna need. So what does it mean to have an untracked table? It just means that the tables or views are not exposed over the GraphQL API. So let's go ahead and expose them. We'll track those tables first, and then once that is completed, we'll track anything else that comes along with it, right? So if you take a look here, you'll see that we wanna track all the different tables and the relationships for them. And then we'll have to actually add another relationship ourselves, to show you guys what that looks like if you need to do this manually and don't have the ability to do it how we did today. Okay? So it looks like all of our tables and views have been tracked; now we wanna track the different foreign key relationships. Let's go ahead and do that as well. We'll give it a minute. Awesome, relationships have been created. Let's go ahead and refresh one more time, and let's make sure that there are actually some relationships existing in here. So let's take a look at the options table, relationships, and it looks like there are some relationships already there. Let's go ahead and create our own relationship now, on top of some of the ones that have already been created for us. So let's go to poll results, right? Following this, we want to create an array relationship for poll results: relationship name option, on the poll results table, referencing poll ID to the options table's poll ID. All right. So I'll go to poll results, I'll go to relationships, configure manually. We'll do an array relationship. We're going to call it option, singular, not plural. Yeah, okay. Public schema, reference: we actually want to reference the options table, on poll ID for each. Okay. Let's go ahead and save that. Awesome. Another step done.
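Once the option relationship is tracked, Hasura can serve nested queries that join poll results with their options in a single request. A hedged illustration is below; the exact field names come from the workshop's migration and may differ slightly from what is shown here.

```ts
// Illustrative query against the tracked tables; field names are assumptions.
const pollResultsQuery = `
  query {
    poll_results {
      poll_id
      votes
      option {
        id
        text
      }
    }
  }
`;

// Rough shape of each returned row; "option" is an array because we created
// an array relationship from poll_results to options.
interface PollResultRow {
  poll_id: string;
  votes: number;
  option: { id: string; text: string }[];
}

export { pollResultsQuery };
export type { PollResultRow };
```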

Step six: verify the setup. Navigate to the Hasura GraphQL console and run the GraphQL mutations and queries present in the graphql.js file. So let me go ahead and open this. There are a number of different queries that we can run; we'll go ahead and just run this one to prove it out. And as you scroll through this, you'll see that the application uses subscriptions, and a couple of different things happen depending on what somebody votes for in the polling app, and then obviously it makes those changes for you. So that's been added. Let's go ahead and go to the API tab here, and let's run that sample query just to make sure that we're getting what we need. And there we go. So we're running the query here, and we see the results: the ID of the poll itself, what the question is (what's your favorite frontend framework), and then different IDs for the options that you have: React, Vue, VanillaJS, Angular, and Ruby. So let's really quickly take a look on the Yugabyte side too, at what that looks like. Let me go ahead and reopen this Cloud Shell, connecting to the hasura database this time. We'll confirm that, and then let me go ahead and get my password. And then what I'm gonna do there is just use SQL to show you that the different tables in there have the same kind of results. Okay, so we go in here, we're connected to hasura. We type backslash d, and we'll see the list of relations that we now have. All of these different ones here, you're able to see if you take a look at the Data tab. So we're just confirming, on the Hasura side and the YugabyteDB side, that we see what we wanna see. If I do select star from option, for example, you'll see that our options map to the different options that you see if I run this again, right. If I take a look at what the actual poll is, you'll see that it's the same question. If we wanted to add more polls, we could, right? If we wanted to take a look at the users, there's only been one user so far, right? And then we have a number of different votes already cast: when it was created, the ID of it, the user. We just used the admin user for all of them. You'll see that there have already been some votes that have taken place. If I actually wanna see the poll results, and you'll be able to see this once we show the visual, you'll see that we have one, two, four, seven, eight votes already cast. So now that we've done that, we'll go to the next step, which is the last step before actually running the application. We'll wanna edit another file in order to make sure that it's able to connect to the application.
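The real-time behavior mentioned here comes from GraphQL subscriptions. As a hedged sketch (endpoint, header, and field names are placeholders, assuming the graphql-ws client), subscribing to the poll results over a WebSocket might look like this; Hasura pushes a new payload whenever a vote changes the underlying rows in YugabyteDB.

```ts
import { createClient } from "graphql-ws";

// Placeholder Hasura Cloud endpoint and admin secret.
const client = createClient({
  url: "wss://<your-project>.hasura.app/v1/graphql",
  connectionParams: {
    headers: { "x-hasura-admin-secret": process.env.HASURA_ADMIN_SECRET ?? "" },
  },
});

// Illustrative subscription; field names follow the poll_results table used above.
client.subscribe(
  {
    query: `
      subscription {
        poll_results {
          poll_id
          option_id
          votes
        }
      }
    `,
  },
  {
    next: ({ data }) => console.log("live poll results:", data),
    error: (err) => console.error(err),
    complete: () => console.log("subscription closed"),
  }
);
```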

11. Working with Hasura, Prisma, and Apollo

Short description:

To connect to our Hasura instance, open the Apollo.js file and update the hostname and Hasura Secret. Remove the HTTPS portion from the hostname. Run the commands in the root of the repo to install the necessary dependencies and start the application. Access the application on localhost and vote for different options. Use the Hasura interface to view the GraphQL pieces and see real-time updates in the database. Hasura subscriptions are used for performance and scalability. We will cover Prisma and Apollo in the next part of the workshop.

This time, it's to make sure it's able to connect to our Hasura instance. So let's go ahead and open up that particular file. It'll be under source, Apollo.js. And then the two main areas that we're gonna have to change are the hostname, right here, and then the Hasura secret again. For the hostname, please make sure that you are not including HTTPS. I made that mistake myself; it's gonna cause some UI issues. So let's just grab everything but the HTTPS portion. Let's go to hostname; that's gonna be right here. So ours is gonna look like this. Go back, grab the secret. Here we go, okay. Let's go ahead and save that. We're done with that one.

Now we will go ahead and see what's next. Okay, we already did all of these, and now we just go to the root of the repo and run these commands. So let's go ahead and do that. Root of the repo, ls, npm install. Let's run this real quick. And it should be done here in a second. Okay. Then just make sure it's opening up on here. Cool. I'm gonna go in and hit npm start. Okay. Actually, I'm just gonna open it up. So if I go to my localhost, it'll show the application. We see here the same options that we saw when we were verifying our testing on both the Yugabyte side as well as the Hasura side. We'll see there are already some votes that we've cast. Right: two, three, one, one, one. So this adds up to eight, just like we have within our YugabyteDB demo. And then you'll see the different options to cast a vote. So if I go ahead and click on React, let's go ahead and vote for React. We'll see that that's now up to three as well. If I wanted to take a look at what any of these GraphQL pieces look like, we could do that through the Hasura interface as well. So if I go back over here, let's say now I select star from poll results. It'll show... so now you'll see here that React is now up to three. So as we make changes here live, we'll be able to not just update the UI, but it's also gonna update the database immediately, all going through Hasura. So real quick, now it's up to four. Behind the scenes, it's actually using Hasura subscriptions; the third query at the bottom just shows that. Yeah, that one, correct. Whatever performance you might be seeing with, say, Postgres, you will see the same kind of performance with Yugabyte. Later on, we're gonna see how we were able to benchmark GraphQL subscriptions to scale to like a million records, and easily scale out the database. You can start out small, and then, when you have more users start using your API, you can scale out the database and scale up the subscriptions as well. Ultimately, that's what we wanted to show off with this. Yeah, and make sure to stay on the Zoom, because we will be going through that in a little bit. So we just finished up the Hasura portion. Now we're gonna play around with Prisma and then Apollo as well. So, you know, Eric, I'm gonna turn it over to you. Nikhil, Eric, is there anything else that you guys think would be good to show while I'm already here, or anything you guys wanna comment on? Yeah, this was awesome, Marco. I think we have covered all that we wanted to cover. Let's just ask the attendees if they have any questions. So folks, if you have any questions regarding how to get started with Hasura and Yugabyte, please feel free to ask us. If you're following along and have any trouble getting the app working, you can let us know as well. Yeah, and if you instead want to ask us over Slack, there are a lot of YugabyteDB users there, as well as admins like myself, Nikhil, and Eric. If you guys have any questions, we're happy to have those conversations over there as well. So in this part, we're gonna be working with Hasura... excuse me, Prisma. So I'm just gonna get back out of here. We just went through the Hasura part, but next we're gonna be working with Prisma. So you can kind of see, we're gonna get started with Prisma. We're gonna create a little Prisma instance and create the Yugabyte Cloud instance. 
I hope you already have it up. Maybe if you didn't quite get it all the way up during the Hasura part, hopefully it's up and working now. And then this will be the stack that we're gonna be using. So we'll have an Apollo server, you have GraphQL Nexus, a Prisma Client, and Prisma Migrate.
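As a rough sketch of how that stack fits together (the model, field names, and port are assumptions, not the workshop repo's actual code), an Apollo Server can serve a Nexus-built schema whose resolvers read from YugabyteDB through Prisma Client:

```ts
import { ApolloServer } from "apollo-server";
import { makeSchema, objectType, queryType } from "nexus";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient(); // connects using the datasource in schema.prisma

// Hypothetical Post model, for illustration only.
const Post = objectType({
  name: "Post",
  definition(t) {
    t.nonNull.int("id");
    t.string("title");
  },
});

const Query = queryType({
  definition(t) {
    t.list.field("posts", {
      type: "Post",
      // Prisma Client issues the SQL against YugabyteDB's Postgres-compatible API.
      resolve: () => prisma.post.findMany(),
    });
  },
});

const server = new ApolloServer({
  schema: makeSchema({ types: [Post, Query], outputs: false }),
});

server.listen(4000).then(({ url }) => {
  console.log(`GraphQL server ready at ${url}`);
});
```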

12. Creating Prisma App and Database Migration

Short description:

In this part, we will create our Prisma app, perform a live migration of the data, and create a new database table in real time. We will work with the Apollo server and configure the URL string to connect to our Postgres-compatible YugabyteDB instance. After installation, we will open the schema.prisma file and modify the Postgres URL. Finally, we will create and seed the database.

So we're gonna actually run through creating our Prisma app, and we're gonna do a live migration of the data, and we'll create a new database table in real time, get that started, and kind of work through Apollo. The Apollo server, you'll kind of see the UI. It leads nicely into what Nikhil is gonna be doing for his portion. So definitely, if you guys can get this going and started, that'd be great.

And so, let's see, I'm gonna post the repo in the chat here. This is what we'll be working on. I've already got it cloned, so here it is. Can everybody see everything okay? Is it big enough? Do I need to make anything bigger? Let me see, are we good? Marco? Making it a little bit bigger wouldn't hurt. Yeah, I think that's good. Great. So we'll do that, all right.

Let's get a terminal session open. This is where we're going to be working. So, if you haven't already, maybe go create a particular folder. I created this Yugabyte Prisma workshop folder, because we're going to be downloading the example and installing all the dependencies really quickly. So you can see here, I also have a Yugabyte Cloud cluster set up. This is actually a paid one, I have three nodes. You can see here, I have it in US West. I do have some tables currently, but we'll go quickly, and actually we'll just delete these real quick. As that's going, we can roll through. So we're going to install it all. I wanted to put this here. Kind of like we did with the server demo, we're gonna switch to the Postgres provider. We'll work through it and I'll show you where we're going to make these changes for Prisma. So you'll have kind of the same thing: we'll set the Postgres provider, we'll have a very similar URL, and we'll create and seed the database, and run through that. Let's see. So let me do that. These are all the tables that it will create. All right. So now, with a fresh, clean database, there are no tables anymore. Actually, you'll need to just increase the font a little bit on the browser. On the browser? OK. Yeah. Perfect. A little better. I'll try to make sure I keep it going. Okay. So let's go ahead and download, if you haven't. Like I said, create a folder for this if you'd like. But we'll go ahead and just copy this one. And I'll get this a little bigger too. I'm going to leave that for you guys. So we should have our GraphQL folder with everything for Prisma already in there. So let's hop in there. And you should be able to see, okay, we have everything loaded up. So now we're going to go ahead and install. So similar to our server demo, we're just doing npm install, and it's going to go through and get everything set up for us. So now what we're going to do is allow it to connect to our YugabyteDB instance using the Postgres API. So use whatever IDE you like, or vi, however you want to do it. What we'll do here is we'll open this. So I'm already... Here's the GraphQL folder. In the prisma folder, we'll have our schema.prisma. So we'll go ahead and open that. And this is where we will configure everything. So you have your datasource db, and we're going to set the Postgres provider and URL. So similar to how we did it with Hasura, you can just go to connect, right? We connect our application. Since the first thing that we did early on was Hasura, that's what this is optimized for, but it's the same for Prisma. We can just take this URL and insert that into our URL string. So I already have mine here, and I have a very secure password because it's fun, and so if we come over here, we can just add that in. And then we'll save that, and if we come back to our repo and our shell, we can go ahead and create and seed the database.
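For anyone following along, the change in prisma/schema.prisma looks roughly like this; the host, user, and password are placeholders you would take from the Yugabyte Cloud connect dialog, and 5433 is YugabyteDB's default YSQL port. This is a sketch, not the exact file from the repo.

```prisma
// Sketch of the datasource block in prisma/schema.prisma.
// <cluster-host> and <password> are placeholders for your Yugabyte Cloud values.
datasource db {
  provider = "postgresql"
  url      = "postgresql://admin:<password>@<cluster-host>:5433/yugabyte?sslmode=require"
}

generator client {
  provider = "prisma-client-js"
}
```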

13. Setting Up and Using Prisma with a Database

Short description:

We run the command npx prisma migrate dev to create and connect to our database and create tables. It may take a little longer than expected. Once the migration is complete, we can see the new tables. We also have a seed.ts file to insert data into the database. After running the command npx prisma db seed, the data is loaded. We start the GraphQL server by running npm run dev and can interact with the database using Apollo at localhost port 4000. We can create queries and mutations using the GraphiQL-style interface. We create a user by running a mutation and can also create drafts for social posts. We can retrieve the drafts and verify their creation.

So we can just run this command, npx prisma migrate dev, and this will go ahead and connect to our database and create some tables. So we'll run this. Now this does take a little while. I noticed for Prisma it definitely takes a little bit to connect. But as we come back, we can see we already have one new table started, and as it goes through and syncs, the rest will be created. So it takes a couple of minutes to sync up and get everything created. I'm not sure why Prisma takes a little bit, I haven't dug into it that much yet, but it does take just a little bit longer than I think it does for Hasura. But eventually once this runs... There we go, now it's applying the migration script. So, running the SQL statements. Let's see some more tables. Right, now we have our user and profile. How about a profile table? That's interesting. That's fun. Right, everything's in sync now. So as we can see, we actually have a lot of extra tables I'm quite surprised about. Okay, it seems I cleaned it up. That was odd. Not gonna lie, I've run through this a few times. It seemed to want to create some extra tables that it hasn't done before, so I apologize for that, but that was interesting. Gotta love live demos. All right, so from there we have another file called seed.ts. So if we go look at that real quick, we'll open it and what we'll see is we have just some data. So you can construct the user data and we can have this file with some data, and it's gonna go ahead and insert that data into the database. So we can run npx prisma db seed and it's gonna go ahead and load that data for us. So yep, you can see that, all right, it's created those users. So if we come back and hop into our Cloud Shell, get back to the Prisma tables. I should really keep this open more, but just let that open up. Right. Apologies for the delay. Sometimes it takes a little bit; it creates its own instance for the shell. Any questions so far? Is everybody able to get this going as well? Please feel free to ask, I can answer any questions while we're waiting here. Let's give it a refresh, try again and get back in there. There we go. OK, so we can query the user table, select star from user, and we can see that we have three users and their IDs, and we're ready to keep moving. So now we're gonna actually start the GraphQL server, so all we have to do is run npm run dev, and it will start it up. Okay, that starts up nice and quick. So then we can just go to localhost port 4000 and we will have our Apollo UI. So you just click in here, and now we have our UI and we can interact with our database nice and easily. If you're familiar with GraphiQL, we can start building our queries here. So I have some queries already set up that we can start with. So let's go ahead and copy this one. But instead of that user, let's go ahead and create me; we'll use my name and my email. So if we run this mutation, boom, it goes and makes my user. We can check it out. There I am. If we click down here, there are a few more different queries we can run. So let's go ahead and create a new draft for a social post, right? So go create this draft. Nice and easy. Run that. Now we've created a draft for this user Alice about joining the Prisma Slack. Right?
So once that's there, we can come back and query it; we can say, okay, let's go get that draft back and see if it actually was created. So nice and easy. Boom. Here it is, the draft that was created by Alice. All right. No problem. So this all works pretty well. We can run this.
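The mutations we just ran in the playground look roughly like the following; the field and argument names come from the Prisma GraphQL example we cloned, so treat the exact shapes and sample values as illustrative.

```graphql
# Create a user (illustrative name and email).
mutation {
  signupUser(data: { name: "Eric", email: "eric@example.com" }) {
    id
  }
}

# Create a draft post for an existing user.
mutation {
  createDraft(
    data: { title: "Join the Prisma Slack", content: "https://slack.prisma.io" }
    authorEmail: "alice@prisma.io"
  ) {
    id
    published
  }
}
```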

14. Migrating Database and Updating Application Code

Short description:

We're going to migrate the database and update the application code live. We'll create a new table called Profile, allowing users to link a profile to their user ID. Let's continue with this section.

So. Right. So here we have all the published posts by their authors. We can see that it's pretty much only Alice; Mahmoud has also created one. We won't run through all of them, but if you want to, you're more than welcome to. There are a bunch of different examples here of GraphQL queries that you can run. We can create posts, we can delete posts. We can do whatever we want. But the main part, one of the things I thought was really cool about Prisma, was evolving this app. So in this section here under evolving the app, we're going to migrate the database and we'll update the application code. And we do it all live, which is quite cool. So we'll create a new table called Profile. It'll allow users of this particular application to create a profile that they can link to their user ID. And as we do this, it's all live, so we don't actually take anything down. It's quite cool. So we can start this section here. I'll see if there are any questions so far, anybody, anything on Discord? If not, we'll just keep moving through.

15. Adding Column to Profile Table

Short description:

We're going to add a new Profile table that will be linked to our user table. After running the migration, the new table will be created. The migration process may take some time, but it doesn't heavily impact CPU usage. Once the migration is complete, we'll move on to updating the application code to interact with the new table.

So what we'll do now is I'm actually going to add another shell here. We'll go to our workshop folder. So we're here, and what we're going to do is reopen our schema.prisma. So we have that here; I'll get rid of that. So here's the schema. Right. So what we're doing is we're going to add another model, a Profile table, and it will be linked back to our user table. So we'll just go here and we'll grab this. This is what we're adding, so it's nice and easy for you guys. All right. So now we have our new model, right? Our Profile, with its ID, the bio, the user, and the user ID. This will be linked back to our user table, where it'll add that profile field. So what we can do is now migrate this, and it's going to go, in real time, and create our new table. So let's run this here. Okay. And so this takes a little bit, right. Similar to what we did before when we created our initial tables, it's going to go ahead and create that profile table. If we look here, okay, we can now see our profile. It does this migrating part as it gets everything ready, and then kind of cleans up after itself as it goes through. Once it's done, we'll see that it'll be synced again; you can see it's not quite there yet. Once this syncs up, we'll be good to go. This is the only part I found kind of interesting: it does take a little bit. Like I said, I haven't really looked into the Prisma part as to why, but it does take a little bit of time. We can see, if we come to Performance, that there is a little bit of latency creating those. But we don't hit a lot of CPU while we're doing these operations, so it's not that intense. These aren't particularly large nodes either, right? We only have four CPUs and two gigs of RAM on each, so not particularly big, but it's okay. It's just going through and creating those still. That's all right. It does give us time for questions, if anybody has them, again. In the next part we'll actually go through and update the application code. That part's a little more involved; we do have to make quite a few different changes to the code so we'll be able to interact with our new table, to query and look at it, so if you have any questions on that part, I'll take it a little bit slower.
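The model added to prisma/schema.prisma looks roughly like this; it's a sketch following Prisma's "evolving the app" example, so field names may differ slightly from the workshop repo. The existing User model also gets a matching `profile Profile?` field, and the migration is then kicked off with something like `npx prisma migrate dev --name add-profile`.

```prisma
// Sketch of the new Profile model, linked one-to-one to User.
model Profile {
  id     Int     @id @default(autoincrement())
  bio    String?
  user   User    @relation(fields: [userId], references: [id])
  userId Int     @unique
}
```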

16. Table Creation and Application Code Migration

Short description:

We encountered some issues with the table creation and migration, but after restarting and using the correct file, we were able to successfully create the profile table. We then proceeded to modify the application code to interact with the profile table. We added the profile object type and adjusted the user object type to include the profile field. After applying the migration and verifying that the profile table was in sync, we continued with the application code migration. We made the necessary changes to the schema.ts file to incorporate the profile object type, allowing us to interact with the profile table.

Okay, so then it looks like we finished back up. We should have our table. Did it not do that? It didn't look right. Well, we'll restart it. Let's see if we picked it back up. It did not create our... Mine didn't update the table. Give me one second here. Let's see, the profile. Is it there? Nor is it in here. Let's see. Maybe the CLI timed out when it started. Yeah. Okay. Could the problem be that it's not the correct file? All right, well that's on me. All right, let's do this again with the correct file, how about that? Okay. All right, let's try and run this one more time. So, migrate that, add the profile. Actually, I know what I did wrong: I didn't save the file, so that's on me. Sorry, guys. So, we'll run through this one again. And while that's going, we'll look at the next part as it runs through. So, in the source file for the schema.ts, we'll run through this. This is the application code. We're going to create our Profile object type. This allows us to interact with our profile table. And then here, we're going to adjust the User object type, right, our user table, and we're going to add that profile field. We also have to include it in our makeSchema section. And then we will also have to create the GraphQL mutation, so that we can run mutations against those tables. And so we'll walk through this one. It's not a lot of crazy code changes, but we will have to make some. One thing I noticed in the Prisma examples, if you go to... I pulled this from Prisma, their examples. In this particular part, I had to change a little bit in order to get it at least working for me, and we can talk about that later. It's going through and creating those tables. So, the interesting part, I thought, was, as you saw with Hasura, it creates it quite quick. In Prisma it does take a little bit here, so I apologize for the delay. But please ask any questions now; if anybody is trying to catch up, now is probably the time. So, we'll take a look here. Okay. Now we're applying the migration, finally. It's adding that profile. And now we can see our profile table. Now we can see it's in sync. We didn't take the application down this time. It's still running; we could still interact with it. So, that's fine. So, now that that's done, we'll move on and actually do the application code migration while everything's still running. So, we can get rid of this guy, we can come back here and we'll open our schema.ts. So, if we come back to the Yugabyte Prisma workshop folder, we go to GraphQL, we're gonna go to source, go find our application code, and we want our schema.ts file, the TypeScript file. So, here it is. Can everybody see that all right? I'll get rid of the chat. Okay, a little bit bigger so that we know what we're doing here. Okay, so this is all the application code. And if we come down to the user type, what we'll do here is we'll just make a little room for our profile, copy this part here, and we'll add it to our code. And then we'll just get rid of this part. Okay, so now we've added our Profile object type. This is going to allow us to interact with our profile table.
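The Profile object type added to src/schema.ts looks roughly like the snippet below. It follows the Prisma GraphQL (Nexus) example this demo is based on, so names such as `Context` and the resolver shape are assumptions rather than the repo's exact code. The User type gets a matching `profile` field resolved the same way.

```ts
// Sketch: Profile object type for the Nexus schema (src/schema.ts).
import { objectType } from 'nexus'
import { Context } from './context'

const Profile = objectType({
  name: 'Profile',
  definition(t) {
    t.nonNull.int('id')
    t.string('bio')
    // Resolve the related user through Prisma's fluent API.
    t.field('user', {
      type: 'User',
      resolve: (parent, _args, context: Context) => {
        return context.prisma.profile
          .findUnique({ where: { id: parent.id || undefined } })
          .user()
      },
    })
  },
})
```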

17. Adding Profile to User Table

Short description:

We're going to add this particular section to the User type to interact with the new relation. We'll clean it up a bit and then move on to the makeSchema section. After adding the profile, we'll include the profile GraphQL mutation. Once we save and apply the changes, the application will update without any downtime. We can then test the new mutation by adding a bio for a user. The user now has a profile, and we can verify the link between the user and profile tables. This concludes the workshop, but you can explore more mutations in the Prisma documentation and experiment with the example provided. Feel free to create your Yugabyte Cloud instance and try out different queries and performance optimizations.

And then, we're going to add this particular section here to the User type, so we can interact with the new relation that we created for our users table. So, this one gets a little interesting; you just gotta clean it up a little bit. I'm just gonna indent it a little. It doesn't really matter, you don't technically have to do it, but it's all right. Not the greatest, but we won't spend too much time making it pretty.

So, then we have to come down to our makeSchema section here, so we have our makeSchema and let's add our Profile type there. Don't forget the comma. And now we're going to add our profile GraphQL mutation. So, let's find this one. We have to come back up, actually. We find our mutation section. Sorry, a little fast here, but here are our mutations. So you'll see some of the different ones in here. This is where you can create your draft. Here's how you toggle publish on a post. So these all relate back to a lot of those mutation fields we saw earlier, where we had that long list. So let's see here. Let's just make a little space here. There we go. We'll add that. Just line it up a little bit. Right there, and we will then get rid of this. Okay. So let me just check our braces here; I just want to make sure everything is okay. That looks all right. Okay. So I'm going to bring this back up. What we'll see here is our application is still running. So once we save this, it's going to go update the application so that we are able to use everything again. Right? So we'll just hit command-S to save this, and we'll see it over here, applying everything. And we're ready to go. If there was an error, you would see it over here. So I'll show you real quick what that looks like. In the Prisma example that they have on their site, I've opened a GitHub issue for it, but if, let's say, we remove this and save it, now we have an error. So you can see that it'll tell you what the problem is, live. So this would be an error that I hit using the sample code. And to resolve that, what we did was we added this, and that takes care of it. I don't know how familiar everybody is with application code and such; I won't go into the details of it, but that allows us to get past it and keep working with our application.
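The mutation wired in here, based on the same Prisma example, looks roughly like this fragment. It goes inside the Mutation type's `definition(t)` block in src/schema.ts; `UserUniqueInput` and the argument names are the example's and may differ in the repo.

```ts
// Sketch: addProfileForUser mutation field, placed inside the Mutation
// objectType's definition(t) block. Assumes arg, nonNull, and stringArg
// are imported from 'nexus' alongside objectType.
t.field('addProfileForUser', {
  type: 'Profile',
  args: {
    userUniqueInput: nonNull(arg({ type: 'UserUniqueInput' })),
    bio: stringArg(),
  },
  resolve: async (_parent, args, context: Context) => {
    return context.prisma.profile.create({
      data: {
        bio: args.bio,
        user: {
          connect: {
            id: args.userUniqueInput.id || undefined,
            email: args.userUniqueInput.email || undefined,
          },
        },
      },
    })
  },
})
```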

So now that that's good, we can come down here and now we can test the new mutation. We can say, okay, for Mahmoud, he's gonna have this bio, or anybody; this is just the example we have here. We can add this and run it, and now this user has a profile. So if we come back in and we, let's say, select star from user, we can see, right, we have him here. He has an ID of three. Now we can look at our profile table. And there he is. You see, this is the ID of the row in our profile table, but we can see that it is linked back to our user, ID of three, so those tables are linked. We did that all live without having to take down our application server. I think that's pretty cool, at least on the Prisma side. And that kind of sums up this particular workshop. You can go back to the actual Prisma example; they have a lot more mutations and things you can play around with. So you can go through and check out the Prisma docs. This example that we did here, you have there in the chat; you can go mess with it by all means. Please go create your Yugabyte Cloud instance and run through this. We can see our live queries in Performance. They're not running at the moment, but let's go ahead and create another one for me, and we'll have a look.
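For anyone re-running this on their own cluster, the mutation we just tested looks roughly like the following; the email and bio are simply the sample values from the Prisma example, so swap in one of your own seeded users.

```graphql
# Add a profile (bio) for an existing user, then read it back through the relation.
mutation {
  addProfileForUser(
    userUniqueInput: { email: "mahmoud@prisma.io" }
    bio: "I like turtles"
  ) {
    id
    bio
    user {
      id
      name
    }
  }
}
```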

18. Apollo GraphQL and Yugabyte DB

Short description:

In this part, we will dive into Apollo GraphQL and YugabyteDB. The Apollo platform provides more control over implementing GraphQL servers and includes features such as Apollo Studio for reviewing types and monitoring servers. An Apollo server can easily connect to YugabyteDB using Node.js ORMs like Sequelize. We will create a data store for YugabyteDB using Sequelize and build a space explorer app that interacts with SpaceX's read-only APIs. The app will allow users to book seats on Crew Dragons and store information in the YugabyteDB data source.

This part's kind of cool. I like snowboarding, personally. I just can't wait for the season to start. I live in Denver, if you guys are interested, and I'm waiting for snow. Still don't have any snow. So let me just pull this out and we'll see if we can catch that live query. As it runs, it runs pretty quick. That ran way too quick, so it didn't quite catch it in time, but it's going. There is a separate section for slow queries; we'll talk a little bit about that later in our query tuning. But we can go ahead and look at our profile that's related to my user ID in our users table. So I think it's a pretty cool example. I will be turning it over to Nikhil here, and he's gonna run through an even cooler example. What I liked about this is that we're building upon it, and you'll see he's gonna work through Apollo and have a really cool example. We'll give it a minute. And we'll start off with Apollo GraphQL and YugabyteDB. This is the last of the GraphQL servers we wanted to showcase today with YugabyteDB. As you could see, all we were doing was the existing steps that you would use with any other database, say Postgres or MySQL; that will continue to work with YugabyteDB as well. And if you're a Postgres shop and you're looking to use a highly scalable system because you have hit some of the limitations of a single-node Postgres, then YugabyteDB can be a good fit for such use cases where you need scalability and an always-on kind of setup where you cannot take outages. Some of the deployment topologies that Marco explained in the beginning will help us a lot to build an always-on architecture. In this workshop we just wanted to showcase how simple it is for anyone to get started with YugabyteDB, whether it is OSS or cloud; it's just a few configuration changes, and all the existing GraphQL concepts that you are aware of will continue to work with YugabyteDB as well. Okay, sounds good. Let's get started with Apollo GraphQL. So the Apollo platform actually provides a few more things compared to what Prisma provides, and it's also similar to how the Hasura GraphQL engine is, but one difference is that with the Apollo platform you have a lot more control over how you are implementing your GraphQL server. These are the four different categories that the Apollo platform touches upon. It allows you to build your graph; in Apollo's perspective, when we say graph, it is nothing but building the GraphQL queries or GraphQL types themselves. So obviously you can build your schema and write all the resolvers that are required for queries and mutations using the Apollo Server APIs, and most Node.js applications use Apollo Client for fetching the information from the server. It doesn't matter if it is Prisma or Hasura or Apollo itself, you will be using some Apollo client for fetching the information from the backend GraphQL server. And one other thing that the Apollo platform provides is Apollo Studio, which is nothing but a UI where you can go ahead and review all the types and run some GraphQL queries. In addition to that, Apollo Studio has the capability of monitoring your Apollo servers as well: what is the throughput, what is the latency, and things like that. All those things you can monitor using Apollo Studio.
And one important thing is that Apollo Studio is free software; you just have to create an account on Apollo Studio. Only if you want to use some of the more advanced analytics features that Apollo Studio provides, for identifying usage patterns and things like that, do you have to go for the more advanced paid versions of Apollo Studio. Otherwise, everything we have here will continue to work with the free version of Apollo Studio. And obviously, like all the other GraphQL servers, you can also federate a bunch of different data sources, whether it is different databases, your data lake, different REST APIs, or even SOAP APIs; you can federate all of that using the GraphQL server. So similar to how we got started with the other two GraphQL engines, with Apollo you can easily connect to YugabyteDB using the Apollo Server APIs. Behind the scenes, since it's a Node.js-based implementation, the Apollo server can use any Node.js ORM it supports. In today's session we'll be using Sequelize. Sequelize is one of the popular Node.js ORMs out there. So all the code that you write for instantiating and doing the CRUD operations will continue to remain the same across databases; only the way you instantiate the Sequelize object itself, or create a new store, changes between different database implementations. So if you already have a GraphQL server that you have implemented with Postgres, it will be super easy to connect it to YugabyteDB. And if you are using any other databases like MySQL or SQL Server, with just minimal code change you can migrate your application from those databases onto YugabyteDB. We'll show what those steps are. As part of this workshop, we'll go over how to create a data store for YugabyteDB using Sequelize. And obviously all the APIs that Sequelize provides, like findOne, find by user ID, and so on, all continue to work with the YugabyteDB database. It's similar to how we were working with the other demos and workshop apps that we went through. Today for Apollo, we'll be building a space explorer app. This is a cool app that I found for Apollo, and we have extended this application to use YugabyteDB. The workshop repo is here; there's a bit.ly link for the YugabyteDB Apollo workshop. I'll just paste that for everybody to see here. You can navigate there and you'll be redirected to the workshop we'll be building. Similar to the prerequisites we had for the previous workshops: a little bit of understanding of GraphQL, familiarity with YugabyteDB fundamentals, like how to create tables and how to instantiate a cluster, which we have already gone through, and some of the other things. This needs to change: instead of Hasura GraphQL, it should be Apollo Server. Okay, what we'll be building today: a space explorer app. It's a futuristic app which is able to hit the read-only APIs that SpaceX provides, where it has a list of all the rocket launches they are doing, and you can book a seat on one of the Crew Dragons that they have. And obviously, whenever you are making a reservation, you will want to store your information in a data source. For that we'll be using YugabyteDB to store the data.

19. Building the Schema for the Space Explorer App

Short description:

We'll be building the GraphQL server part using Apollo, and there is also a Node.js/React.js client where you can see all the upcoming rocket launches, add a rocket launch to the cart, and check out the cart from your profile. We'll be doing the hands-on session starting with building the schema, configuring the data source using Sequelize for YugabyteDB, writing the query resolvers and then the mutation resolvers. If you compare Hasura and Apollo, the difference is that in Hasura all these things are a black box. Once you track your database table, Hasura takes over building all the query resolvers and the mutation resolvers behind the scenes. If you want more granular control of building the queries and of what should happen in your resolvers, then you can start using the Apollo Server APIs, where you'll get more control over how to build the resolvers. And finally, once we have the server up and running, we will run our client UI application, which has the actual UI for this app. The first thing that we'll be doing as part of this rundown is building the schema for our space explorer app. As I explained, there are two domains that we are tackling here, and those are the domains for which we want to define our schema. The first domain is the rocket launches themselves, and the second one is the user reserving a seat on those launches. So our schema should be able to handle things like fetching a list of upcoming rocket launches, fetching a specific launch by its ID, logging in a user, and letting a user book a launch. You also have the ability to put the launch, the reservation, into a cart, check out, and cancel a previous booking; some of the basic write operations that you would want to do with any domain. And the first thing, as I said, we'll be building the GraphQL types, queries, and mutations that are required for our domain. If you navigate to source and schema.js, you'll see an empty file. And the first thing you would do when building a GraphQL schema, since this is an Apollo server, is bring in the Apollo Server API so that we can build our GraphQL types.

It has two components. We'll be building the GraphQL server part using Apollo, and there is also a Node.js/React.js client where you can see all the upcoming rocket launches, add a rocket launch to the cart, and check out the cart from your profile. So we'll see how the schema for this looks, what mutations we need, and what queries we are gonna be building as part of this application.

If you scroll a little bit back, as I said, we'll be doing the hands-on session starting with building the schema, configuring the data source using Sequelize for YugabyteDB, writing the query resolvers and then the mutation resolvers. If you compare Hasura and Apollo, the difference is that in Hasura all these things are a black box: once you track your database table, Hasura takes over building all the query resolvers and the mutation resolvers behind the scenes. If you want more granular control of building the queries and of what should happen in your resolvers, then you can start using the Apollo Server APIs, where you'll get more control over how to build the resolvers. And obviously we'll try to connect to Apollo Studio and run some sample queries. And finally, once we have the server up and running, we will run our client UI application, which has the actual UI for this app.

Okay, the hands-on sessions are actually linked here. If you click on this, it will navigate to the workshop.md. Obviously we have been talking about creating an instance on Yugabyte Cloud; I have pre-created a cloud instance for myself for this workshop. And if somebody joined new and you want to create a new cluster, you can go here, click on add cluster, and pick the Yugabyte Cloud free tier instance. It's limited to one per user account, so if you have already created one, you'll not be able to create one more. And all the other options that you have for the paid version are similar to the free tier as well: you can give your cluster name, the availability zone, which cloud provider you want to run it in, all those things you can select here. In order to save some time, we have already pre-created that instance. As you know, the cluster is not pre-provisioned; we create the VMs and the containers that are required for running YugabyteDB on the fly. That's why it takes a few minutes to configure the VM and the container, so we just wanted to save that time. I have created an instance already; this is the instance I'm going to be using. As with Hasura previously, there are a few things you would want to have from your instance: the credentials that are required, as well as the hostname that you will use for connecting to this instance.

Cool. Hopefully, since this is the third time we are explaining it, you are now experts in how to work with Yugabyte Cloud instances. Once you have created a Yugabyte Cloud instance, the other thing you would want to do is configure network access so that you can connect to this instance. For me, I have already set that up, since we have already gone through twice how to add a new IP. The next thing that we would want to do is set up an account on Apollo Studio. Okay, let me quickly go to studio.apollographql.com. If you go here, you can navigate to the GraphQL schema and all those things, and you can go ahead and preview it. We'll come back to this page once we go along with the hands-on session. Cool. As I said, let us go ahead and first clone the project itself; I'll go to GitHub and clone. If you are using an IDE like Visual Studio Code, we can go ahead and create a new terminal here. In this new terminal, I'll go to the temp directory and I'll say git clone. I've cloned the project, and I'll navigate into the project. And if you see here, there are a few folders: the client folder, the server folder, and the server initial folder. The server folder is actually the completed code. If you are feeling lazy and you don't want to go step by step through building the server itself, you can just go ahead and start the server there. But for today's walkthrough, we'll be using the server initial project. I'm going to import the server initial project from this repo. Let me go and say Open Folder. Okay. Move to server initial. Since I'm the author, I trust myself. Let me start the terminal again so that we can start working on this. Okay, cool. Hopefully, you have set up your IDE and you have the server initial project in your IDE. Now let's navigate back to the workshop hands-on session. Any questions so far on the file structure of the project itself? If not, I'll just continue with the hands-on session.

Cool, okay. As I explained, the repo has three folders: the server, the server initial, and the client itself. So the first thing that we'll be doing as part of this rundown is building the schema for our space explorer app. As I explained, there are two domains that we are tackling here, and those are the domains for which we want to define our schema. The first domain is the rocket launches themselves, and the second one is the user reserving a seat on those launches. So our schema should be able to handle things like fetching a list of upcoming rocket launches, fetching a specific launch by its ID, logging in a user, and letting a user book a launch. You also have the ability to put the launch, the reservation, into a cart, check out, and cancel a previous booking; some of the basic write operations that you would want to do with any domain. And the first thing, as I said, we'll be building the GraphQL types, queries, and mutations that are required for our domain. If you navigate to source and schema.js, you'll see an empty file. And the first thing you would do when building a GraphQL schema, since this is an Apollo server, is bring in the Apollo Server API so that we can build our GraphQL types.

20. Building GraphQL Server and Connecting Data Source

Short description:

We define the GraphQL server and build object types, queries, and mutations. We use the SpaceX API to fetch launches and transform the REST response into GraphQL. Mutations interact with the database to store information. We connect the Apollo server to the data sources, including the SpaceX REST API and YugabyteDB. We create reducers for the YugabyteDB data source to handle user creation, trip booking, cancellation, and fetching launches by user ID. We set up connectivity between the Apollo server and YugabyteDB using Sequelize as an ORM with the Postgres dialect.

I'll go ahead and first copy the content for creating our GraphQL server. The syntax is pretty straightforward. The first thing, you will be bringing in the required dependency, gql, and you're gonna define a bunch of type definitions and export those type definitions so that they can be used later on in our index.js. And the second step is for us to build the object types. Obviously, GraphQL is a very rich type-based language, so we have a bunch of types defined for launch, rocket, user, mission and the trip data. These are the ones that we want to create; we'll go ahead and copy the types that are required for us and place them here.
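The types pasted in here follow the Apollo full-stack tutorial that this workshop extends; a trimmed sketch of src/schema.js at this point might look like the following, with field names that may differ slightly from the workshop repo.

```js
// src/schema.js — sketch of the object types (trimmed).
const { gql } = require('apollo-server');

const typeDefs = gql`
  type Launch {
    id: ID!
    site: String
    mission: Mission
    rocket: Rocket
    isBooked: Boolean!
  }

  type Rocket {
    id: ID!
    name: String
    type: String
  }

  type User {
    id: ID!
    email: String!
    trips: [Launch]!
  }

  type Mission {
    name: String
    missionPatch: String
  }
`;

module.exports = typeDefs;
```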

And once we have the types, the next thing that we want to do is build or define the GraphQL queries. For each GraphQL query that we define, we obviously have to write the resolvers, because there are no auto-generated resolvers for us in Apollo. When we are building an Apollo server, whatever queries we need for our specific domain or business use case, we have to write those resolvers as well. So for us, the first query that we want to run is the launches query, right? We will be querying the SpaceX API to fetch a bunch of launches, and whatever information we are getting from the REST response, we want to transform into GraphQL based on these queries. So for each query, we can define what attributes we need from the REST response. That is what we are gonna do with this type Query here. We are gonna define a query here, and then, hopefully this is big enough, let me make my page a little bigger. Okay, cool. And after that, obviously we want to write a few mutations, right? These are the mutations where we'll be interacting with the database to store a bunch of information. So I'm gonna go and copy the mutations. It's pretty straightforward, right? If you see here, I'm saying, okay, book trips, cancel trip and login. And as you can see, book trips is going to use the trip update response. So whenever I run that mutation, this is the response that's gonna go back to the client, so the client will see a message saying whether it was successful or not, and which launches were actually booked. This is how simple it is for you to build the GraphQL types themselves. As you can see, there is not a lot of complicated code that you have to write for building the types. It's pretty straightforward to design your own GraphQL types that you need as part of your application.
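The Query and Mutation types added on top of that look roughly like this; again a sketch modeled on the Apollo tutorial, and the exact signature of login, for example, may differ in the workshop repo.

```graphql
type Query {
  launches: [Launch]!
  launch(id: ID!): Launch
  me: User
}

type Mutation {
  bookTrips(launchIds: [ID]!): TripUpdateResponse!
  cancelTrip(launchId: ID!): TripUpdateResponse!
  login(email: String): String # returns a login token
}

type TripUpdateResponse {
  success: Boolean!
  message: String
  launches: [Launch]
}
```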

Okay, I'm gonna quickly go and save this. We're not gonna launch the server now because we have not yet connected the data source. Let me go ahead and connect the data source first, or talk about the data sources themselves. As you can see, we have a folder here called data sources, and in our data sources we have two things happening. We have a SpaceX REST API data source. As we were saying, Apollo server is capable of doing a federation across multiple data sources, whether it is different databases or REST APIs. So in this particular example, we are going to federate the data sources between a REST API response and the YugabyteDB database that stores the information. Our first data source is the SpaceX REST API data source, published by SpaceX. If you go to launch.js here, we can see how the reducers are written. The reducers are nothing but formatting whatever REST response you get from the REST API and putting it into a GraphQL-friendly format. Since it's read-only, it is not very interesting for our specific database use case; you can go ahead and just review how the reducers are written for the REST response. What we are interested in is writing the reducers for the YugabyteDB data source itself. For the YugabyteDB data source, there are a few things that we want to do at the database. The first thing is to find or create a user: if the user is not there, we want to insert that user into the database, and if it's already there, we just want to fetch the information. That's it, right? The second one is booking the trips; this is again an insert into a table in the database. Cancel is obviously removing or deleting the entry from the database, and getting launches by user ID is a select by user ID that you can run to get the launches for a user. And another one is to determine whether a user is booked on a launch or not. So these are the queries that we'll be building. Before building the actual reducers for these queries, the first thing we want to do is set up the connectivity between our Apollo GraphQL server and the YugabyteDB database. Let's go back here, go to utils.js and copy this code. I'll explain what is happening in this piece of code, one by one. Let me first copy this content here, and I'm going to place it in and start with this. So this util is just a cleaner way of coding; you could have had this createStore in the index itself, but from a code readability perspective, we are just creating a utils.js. So what we are actually doing is, the first thing, we are creating a new Sequelize instance. Sequelize, as I said, is an ORM; it supports connecting to multiple databases. For us, we'll be using the Postgres dialect. As we have been saying since the beginning, YugabyteDB is a Postgres-compliant database, and since we are using Yugabyte Cloud, there are a few things we need: the hostname, and the username and password that we were using. Okay, let me now navigate back to my instance. I'll copy the hostname here. And the other thing is the password. I have a password somewhere; let me get my password.
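The store setup being copied into utils.js amounts to pointing Sequelize at the Yugabyte Cloud instance with the Postgres dialect. A minimal sketch, with placeholder host and password, is below; the real workshop file may define more columns and options.

```js
// src/utils.js — sketch of createStore() using Sequelize against YugabyteDB.
const { Sequelize, DataTypes } = require('sequelize');

module.exports.createStore = () => {
  // <cluster-host> and <password> are placeholders for your Yugabyte Cloud values;
  // 5433 is YugabyteDB's default YSQL port.
  const db = new Sequelize('yugabyte', 'admin', '<password>', {
    host: '<cluster-host>',
    port: 5433,
    dialect: 'postgres',
    dialectOptions: {
      // Yugabyte Cloud only accepts TLS connections; supply the CA file here
      // if you also want the client to verify the server.
      ssl: { require: true, rejectUnauthorized: false },
    },
    logging: false,
  });

  // Table definitions used by the user data source.
  const users = db.define('user', {
    email: DataTypes.STRING,
    token: DataTypes.STRING,
  });

  const trips = db.define('trip', {
    launchId: DataTypes.INTEGER,
    userId: DataTypes.INTEGER,
  });

  return { db, users, trips };
};
```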

21. Configuring GraphQL Server and Defining Tables

Short description:

We configure our GraphQL server to use YugabyteDB and define our tables using the Sequelize API. We create user and trips tables and write reducers for the user data source using the Sequelize API. After creating the store and the necessary reducers, we start the application and use Apollo server to generate the GraphQL API. Compared to Hasura, Apollo server offers more control over complex queries and database execution. We can explore Apollo Studio to review the objects, types, mutations, and queries created. The Apollo server seamlessly integrates with the data sources created in the code, allowing for easy review and understanding of the supported queries and mutations.

Let's put my password here, and I'm gonna save it. One other thing that you can see here is we are gonna connect over SSL, because the Yugabyte Cloud instance only communicates over TLS/SSL. And if you want authentication, if you want your client to authenticate the server, you should download the root certificate and configure it in the SSL block here. That is actually implemented in the completed code; if you go back to it, I'll show you how to configure the authentication as well. If you see here, all you would need is to download the root certificate from the Yugabyte Cloud instance and specify the CA file. Once the CA file is set, the Node.js client we are building, the GraphQL server, will authenticate the server as well, which means you will know you are connecting to the right server; you are avoiding man-in-the-middle kinds of attacks. For now, since it's just for demo purposes, it's not a production-ready app; if you are building a production-ready app, you have to configure all these things. So that's how simple it is for us to configure our GraphQL server to use YugabyteDB. After that, what we are doing is defining our tables. Sequelize provides an API called db.define, where you can define all your tables and columns and specify the corresponding data types. We are creating two tables, user and trips, which are going to be used in our index.js for starting this GraphQL server. So once we have created our store, the next step for us is to go and create our mutation resolvers. If I go to user.js, you'll see that in our user.js, what I am doing first is getting the store. This is just a data source that we are building; the user data source takes a store, and you can pass in any store. In our case, we will be passing in a YugabyteDB store; you could very well pass in a SQLite or MySQL store or any of those things. And now let's copy this code, which contains our reducers. Okay, let me copy the reducers. As you can see, the reducers are pretty straightforward. What I'm doing is defining all the methods that we wanted to create here. If you go into the implementation, we first do a validation check on whether there is an email ID and whether it's a valid email, and then we use the Sequelize API. All we are doing is using Sequelize to query the users table, and we are calling findOrCreate; these are a common set of APIs that Sequelize provides. If you are already familiar with Sequelize, you will know how simple it is to get started with that. So this findOrCreate will just run a select star query on the database. If it finds some data, it will just return it; if it doesn't find data, it then moves ahead and does an insert. So all these things are kind of a black box to the user; he just needs to be familiar with the APIs that Sequelize provides.
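The findOrCreateUser reducer being discussed looks roughly like this; it's a sketch following the Apollo tutorial's user data source, so the isemail validation and method names are assumptions about the workshop repo rather than its exact code.

```js
// src/datasources/user.js — sketch of the user data source.
const { DataSource } = require('apollo-datasource');
const isEmail = require('isemail');

class UserAPI extends DataSource {
  constructor({ store }) {
    super();
    this.store = store; // the Sequelize store created in utils.js
  }

  initialize(config) {
    this.context = config.context;
  }

  async findOrCreateUser({ email: emailArg } = {}) {
    const email = (this.context && this.context.user && this.context.user.email) || emailArg;
    if (!email || !isEmail.validate(email)) return null;

    // findOrCreate issues a SELECT first and only INSERTs when no row matches.
    const [user] = await this.store.users.findOrCreate({ where: { email } });
    return user;
  }
}

module.exports = UserAPI;
```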

Once this is saved, the rest of the things are pretty straightforward. If you're looking at that destroy call: if you want to delete an entry, you just use the destroy API. You can take a look at how easy it is to write reducers. Once we have written the reducers, I think it's time for us to go ahead and start the application itself. Before we go and start our server, let's see how we are using the store and the reducers we have written. If you see here, the first thing we are saying is, we are creating a store. Once we do this, this is going to tell the Node.js application to make a connection to YugabyteDB, and then we are creating the launch API and user API data sources, which are going to be passed into the Apollo server. If you see here, we are gonna pass the data sources and the resolvers that are going to be used by the Apollo server for generating the GraphQL API, or serving the GraphQL queries themselves. That's how simple it is. Within three or four steps, you can design your GraphQL types, write the required data sources and the reducers that are required for your schema, and start the GraphQL server. As you can see, even though there are a few more steps that you have to do compared to Hasura, there is a lot more control you get here. It depends: convenience versus control. If you are quickly trying to do something, a prototype, or if you do not want to mess with the backend code, you can use Hasura. But if you want a lot more control, you're writing complex queries, and you want to know how the queries get executed on your database, you can use Apollo server. Okay, let me save this. I'm gonna do npm install. Hopefully that's about it. Okay. npm install. Okay, let me see. There was a typo. Okay, this takes a minute or two. Any questions so far, folks, on the GraphQL Apollo Server or the concepts we just talked about? Okay. Okay, we have built our app, the Node.js application. Let me go ahead and say npm start. This should start my server. Okay, awesome. As you can see, there's one thing that I'm doing when I configure my Sequelize instance: I'm setting db.sync with force true, which means that if there were tables already created, I want them dropped and new tables created. So automatically, the Sequelize ORM is able to drop the tables and create the new tables for us. If I go back here, okay, the connection is closed. Let me go back and launch the cloud shell. We can see that there is no data. And we can now go here and explore Apollo Studio. We can go here and see all our objects and the different types we created. As you can see, we created three different mutations: we can book the trips, cancel the trip, and we can log in the user. And obviously we can query a few things: launch, launch connections. So for all the data sources you create in your code, the Apollo server is able to surface that information in Apollo Studio, where you can go ahead and review everything. It is very straightforward for you to figure out which queries and mutations you are supporting in your GraphQL server.
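Putting it together, index.js wires the store and the two data sources into Apollo Server along these lines; this is a sketch, and the workshop's index.js also builds the auth context and other details omitted here.

```js
// src/index.js — sketch of wiring the store and data sources into Apollo Server.
const { ApolloServer } = require('apollo-server');
const typeDefs = require('./schema');
const resolvers = require('./resolvers');
const { createStore } = require('./utils');
const LaunchAPI = require('./datasources/launch');
const UserAPI = require('./datasources/user');

const store = createStore(); // connects to YugabyteDB via Sequelize

const server = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources: () => ({
    launchAPI: new LaunchAPI(),      // SpaceX read-only REST API
    userAPI: new UserAPI({ store }), // YugabyteDB-backed user/trips store
  }),
});

server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`);
});
```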

22. Running Mutations and Starting the Client

Short description:

We create a user and log in with an authentication token. Then, we run mutations to book trips using the GraphQL API. The trips are successfully booked, and we can see the launches where we have a seat. We explore the client implementation, which uses Apollo Client for querying the database. The client is built using React JS and is configured to connect to the server. We install the necessary dependencies and start the app. The client is up and running, and we can interact with it.

It is very straightforward for you to figure out which queries and mutations you are supporting in your GraphQL server. And obviously there are a few object types we are supporting; these are the types we created. You can easily see what each of them has, drill down into each one, and see what you would want to include in your GraphQL query. So once this is done, it looks like my connection is also here. Let me go and get my password and copy this. If you see here, I'll do select star from users; there shouldn't be any data since we created a new set of tables. And I'll also check trips. I also have some residual tables from another application, that's Marco's app, but that's okay. As you can see, for the new tables that we created, since we dropped them, there is no data. Now what I'm going to do is go to Apollo Studio and run a few mutations. The first thing, I want to create a user, so I'm going to enter my email and say, login user. And once we log in the user, we will get the authentication token. Let's make a note of this authentication token; we'll be using it in the next mutation we'll run. If we go back to our code here, there are two mutations that we are running. One is the actual logging in of the user, and the other is booking trips for that user using the GraphQL API. So, since we have created a new user, if I go into the database itself, I can say select star from users, and I should be able to see an entry there. As you can see, we created a new entry now for my user. Let me use my login token for actually creating a new booking entry. Let's go back to our code, the readme documentation, and copy the mutation here. And if you go to the headers here, there is an authorization header that we need. Okay, perfect, it's already there. I'm gonna say book trips. This should be able to book us a trip. The trips were successful, and these are the three launches where I have a seat. Hopefully this comes through in our lifetime, where we are able to do space travel the way we fly on flights today. And maybe, for whatever you are seeing in the future, you will be the one implementing the website or portal for booking flights on space launches. Cool, let me go ahead and see if the data is reflected here. I'm gonna say select star from trips. The three entries we booked are showing here, awesome. Now we can do a similar thing using the client that we have. Let me create a new terminal here so that we can start a new one. Okay, I'm going to create a new terminal and navigate into client. Let me restart the app so that the data gets refreshed. Okay, I think there is no more data. Okay, perfect. Now, let me go to the client. The client is actually built using React.js. Since this is a backend-focused workshop, we're not going to delve much into how the client has been implemented. It's pretty straightforward. If you look, there are GraphQL queries on the client side, each of which will be used for querying against the server that we have created. If you go into the client and into the index itself, you'll see some of the pages. As you can see, we are using Apollo Client here for querying the server, and Apollo Client is configured to go to the default location, which is where we will be running our server.
And obviously, all the APIs that are required, all the queries that we have, are already documented here. If you are interested in the client implementation, please feel free to go ahead and look into this code. For now, I'm gonna just do npm install and start the app. Okay. Let's give it a minute for the install to complete. Let me go back to the tutorial. Okay. Awesome. Now I'll say npm start; this should start the client. Oh, cool, it's up. Oh wow, I'm already logged in. Cool.
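For reference, the two mutations run in Apollo Studio a moment ago look roughly like this; the email and launch IDs are just illustrative, and the token returned by login goes into the authorization HTTP header for bookTrips.

```graphql
# 1. Log in (returns a token in this sketch of the schema).
mutation LoginUser {
  login(email: "you@example.com")
}

# 2. With {"authorization": "<token>"} set as an HTTP header, book some launches.
mutation BookTrips {
  bookTrips(launchIds: [67, 68, 69]) {
    success
    message
    launches {
      id
    }
  }
}
```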

23. Scaling GraphQL Workloads with Yugabyte DB

Short description:

In this part, we'll explore scaling GraphQL workloads using YugabyteDB, a distributed SQL database. We'll discuss the challenges of scaling vertically with traditional databases and the need for resiliency in the cloud. YugabyteDB allows you to start small with a three-node cluster and easily scale out by adding nodes. It provides load balancing and handles increased traffic and data. In addition, YugabyteDB ensures data availability, even in the event of GraphQL server or database outages. We'll also share a case study where we scaled GraphQL subscriptions from 25,000 to 1 million subscribers within the same cluster. This demonstrates the linear scalability of YugabyteDB. If you're interested, we can provide details of the benchmark setup. Finally, we'll touch on query tuning and best practices for working with distributed SQL. Eric Pratt will cover this in the next part of the workshop.

As you can see, I'll just log out, and this is the app, the UI part of the client. We can go ahead and sign in; I'll log in with my yugabyte.com email. This should now have an entry for me as a user. And if I go to the launches in my profile, I don't have any trips booked. I'll go to home. This is Crew-1, the first Falcon rocket that SpaceX used for launching astronauts into space. We would all love to be on that rocket launch, right? I'd love to be on that one too. And there are entries for the other interesting ones as well. Okay, let's click on another one and say add to cart. I go to my cart, I have two rocket launches that I want to be on, and I'll say book all. Now, if I go back to my profile, I have two launches that I'm booked on. We can see that in the trips table as well. So this is how simple it is, folks, for us to get started with GraphQL and configure it with a distributed SQL database like YugabyteDB. Obviously here, we just wanted to show how easy it is for you to get started. But if the app that you have built with YugabyteDB and GraphQL goes, let's just say, viral, how do you handle such situations? That is the next part we wanted to talk about: how do you actually scale your GraphQL workloads? I'm going to take a quick 10 minutes to show how you can scale out your GraphQL workloads using YugabyteDB. We found out how easy it is to get started with all the popular GraphQL engines; if there is any other engine that you want, it is as simple as getting the right credentials and configuring it to work with YugabyteDB. And obviously you've built an app using the GraphQL API, right? What would happen next, and what is the current bottleneck we are seeing with our customers? Pretty much, many folks get started with, say, a Postgres database. And what we are seeing is, as more and more queries are being served by the GraphQL APIs, the resource utilization on the database also increases. In some customer use cases, they have hit the ceiling of how much you can scale vertically. There is only so much you can scale vertically, right? There will be a CPU threshold that you will hit at some point in time. And also, if this single instance of your database goes down, your entire app goes down. You shouldn't be in that situation in the cloud environment today. There can be cases where an entire cloud region goes down. As recently as two months ago, the entire US East and West of AWS was down, and a lot of internet-scale applications, I cannot name them, were all down. So you shouldn't be in those situations. That's why you need to think about scaling as well as resiliency of your architecture. When moving to a cloud native database like Yugabyte, you get all these things out of the box. With YugabyteDB, you can start off small. You can start off with a three-node cluster, handling, say, 500 requests per second. And if you want to scale out, if you're going viral and you want to handle, say, 5,000 requests per second within a span of a few hours, you can do that by adding new nodes to the YugabyteDB cluster.
So YugabyteDB can load balance the query requests and handle more traffic as well as more data: as your data grows, you're not only handling a larger volume of data, you can scale out the compute as well. The other thing that can happen is outages. Your database can go down, or your GraphQL server itself can go down. Most of the time the GraphQL server is stateless, and all of the state is maintained in the database, so if you lose a GraphQL instance you can keep spawning new ones, but it becomes essential for the data to remain available. If you have a single node instance like Postgres with no replicas, or where read replicas are hard to configure, you will have outages in your architecture. That's where distributed SQL comes in, and that's what YugabyteDB provides out of the box: Marco explained the different topologies you can run YugabyteDB in, multi-region and multi-cloud, so that your apps are always on.

We wanted to figure out how far we could scale linearly. For that we did an exercise to scale out GraphQL subscriptions, starting small with a pretty straightforward use case: simulate an e-commerce application where users place orders. In situations like Black Friday or Cyber Monday the traffic increases considerably, so you scale out your database for those events, and when the traffic dies down you can scale back down. In this exercise we're just showing the scale-out part. We started with 25,000 subscriptions on a three node cluster, and we were able to scale that same cluster from 25,000 subscribers to 1 million subscribers without any downtime and without stopping the GraphQL servers from taking user traffic. That's how linearly scalable it is: you can start small, and as your application gains traction, you can scale out. This is the benchmark setup; everything was run on Kubernetes so that it was easy for us to scale out each tier we wanted to benchmark. Please feel free to ask in the chat if you need any details on the benchmark setup itself.

Cool, that's all I had on scaling. Now for the most interesting part, which Prat is going to cover: with distributed SQL there are a few things you need to know, like how you tune queries and what the do's and don'ts of working with distributed SQL are. Let me stop sharing my screen and hand control back to Prat.

24. Query Tuning with YugabyteDB

Short description:

Query tuning with YugabyteDB involves performance tuning and query debugging. Performance tuning focuses on OS-level metrics like memory, CPU, and IO to determine whether queries are performing poorly or whether more nodes are needed. Query debugging involves identifying slow queries and analyzing them using tools like Hasura's analyze function or pg_stat_statements in Postgres. Resetting pg_stat_statements and ordering queries by total time descending helps identify the top 10 worst queries. The Yugabyte Platform offers a slow queries tab for enterprise users. Running an explain plan can reveal sequential scans, which are expensive in a distributed system like YugabyteDB because the entire table, which may be spread across multiple nodes, has to be scanned; as more nodes are added, sequential scans can take longer. The explain plan provides information on the cost, number of rows, and width of the scan.

All right, I'll share my screen here again. So, query tuning with YugabyteDB. As Nikhil mentioned, most of the things we've been working with are ORMs, and they have particular ways they like to query your database. Even if you write your own SQL, for instance in Hasura, which has an interesting feature where you can write your own SQL, once you track that table Hasura changes how it interprets your SQL and how it decides to build the query for itself.

So where do we start? You have performance tuning versus query debugging. Performance tuning usually involves OS-level metrics: you're looking at memory, CPU, and IO, and how the queries affect those statistics. Do we have enough horsepower? As Nikhil was saying, at what point do we scale out versus try to tune queries? Often we're looking at these metrics to decide whether we have some poorly performing queries or whether we just don't have enough nodes to satisfy the workload. That's the great part about YugabyteDB: once we realize we need more CPU, more IO, or we need to serve more queries, we can just add nodes. It really is that simple. Query debugging is about how we identify slow queries and how we analyze them to find where the problems are. We can do that in Hasura, Prisma is a little different, and then there's explain versus explain analyze, what they do and what they are.

Hasura has a really nice built-in function that lets you run the equivalent of an explain analyze in Postgres. In the Hasura console, in the API section, there's a button called Analyze. It gives you the generated SQL and the execution plan. What we see here is one of the queries we ran against the polling application. The generated SQL shows the select for the data we're looking for, and then these LEFT OUTER JOIN LATERAL clauses, that one always gets me, against the different tables we have to pull information from. These are not easy to read, and it's not easy to decipher what they're actually doing. Below that we have the execution plan for the query, and again, these are usually tough to look at: a nested loop left join, then down to a function scan, which gets materialized, and so on. Each of these parts has to have the piece below it satisfied before the plan can work its way back up. In a plan like this, a lot of the time what we're looking for is sequential scans and maybe some of these aggregates, and in a minute I'll talk about what all of that means. A simplified sketch of the kind of SQL a GraphQL engine generates for a nested query is shown below.
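To make the shape of that generated SQL more concrete, here is a minimal, hypothetical sketch. The poll and poll_option table names are illustrative, not the workshop's actual schema, and real Hasura output differs in detail (it wraps the whole result in JSON aggregation), but the lateral-join pattern is the same:

```sql
-- Hypothetical schema: a poll and its options (names are illustrative).
-- A nested GraphQL query such as { poll { question options { text } } }
-- is typically compiled into a single statement of roughly this shape:
SELECT
  p.id,
  p.question,
  opts.options
FROM poll AS p
LEFT OUTER JOIN LATERAL (
  SELECT json_agg(json_build_object('id', o.id, 'text', o.text)) AS options
  FROM poll_option AS o
  WHERE o.poll_id = p.id
) AS opts ON true;
```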
But the first question is: how do I identify a slow query in YugabyteDB? Since we're Postgres compatible, we have pg_stat_statements built in, and it helps us dig into problem queries. It shows the query ID, the query text, the number of calls, the total time it took, the min, max, and mean times, and the number of rows it pulled back. It's a great view to use when you're trying to debug problem queries. One thing to keep in mind is that pg_stat_statements aggregates all of the queries, so the first thing you generally want to do is reset it. That way, if you don't know which query is running poorly, maybe because you have a bunch of different queries running at the same time, you start from a clean slate. Running select pg_stat_statements_reset() completely wipes that view. It's just a statistics view in Postgres, so it doesn't impact your application. Then the query I mostly like to use is the one below, where we select the database ID, the query, and so on, and order by total time descending: give us the top 10 worst queries so we can pick them and start working on them. A sketch of both statements is included at the end of this section.

I'll also give a plug for our Yugabyte Platform, the enterprise product, which has a slow queries tab, and as you saw in my earlier example, there's a slow queries view in Yugabyte Cloud as well. Both of these are just a view over pg_stat_statements presented in a nice UI, where you can see how long some of these queries took. That's the shameless plug for our enterprise users.

OK, so we've figured out which slow query it is. Now what? The first thing I like to do is run an explain plan. Here's a really basic explain select star from a table foo, and what we see is a sequential scan on that table. That means we have to scan the entire table to find all of the data. As Marco pointed out earlier, we shard the table across multiple nodes, so sequential scans in YugabyteDB are rough: the entire table has to be scanned, that data may be spread across multiple nodes, and we have to do that on all of them. As you add more and more nodes, these scans can take longer and longer. In the plan output we see the sequential scan on foo, a cost, the number of rows, and the width. The cost is a range, number dot dot number, because it shows the cost of starting the operation and the cost of getting all rows, and by all I mean all rows returned by this operation, which here is everything on the table. In this example the startup cost is zero and the total cost is 155.
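Here is a minimal sketch of the two statements described above. The column names follow the Postgres 11-era pg_stat_statements view that YugabyteDB's YSQL layer is based on (total_time, min_time, and so on); newer Postgres versions rename some of these, for example to total_exec_time:

```sql
-- Reset the statistics so only new traffic is captured.
SELECT pg_stat_statements_reset();

-- After letting the workload run, list the 10 most expensive queries
-- by total execution time.
SELECT dbid,
       query,
       calls,
       total_time,
       min_time,
       max_time,
       mean_time,
       rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```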

25. Understanding Query Analysis

Short description:

The starting cost for a sequential scan is zero, since it just reads the pages and returns the rows from top to bottom. The 'rows' field is the estimated number of rows returned by the operation, and the 'width' field is the average number of bytes in a single row. Importantly, a plain explain does not execute the query against the database; it only shows the plan the optimizer would use.

So the starting cost is zero, and for a sequential scan it will always be zero, because we just read the pages and return the rows: start at the top and go all the way down, that's pretty much it. Rows is self-explanatory; it's the estimated number of rows the operation returns. Then you have width, which is Postgres's estimate of how many bytes, on average, are in a single row returned by this operation. Since our example table just has a single int column, stored as four bytes, the width is 4. One other important thing to note is that this query is not actually executed: we're not running it against the database, the planner is just working out what it would do. So we're OK here. A rough illustration of this output is shown below, and then we'll go to the next slide.
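As a rough illustration of what that plain explain output looks like: only the zero startup cost, the 155 total cost, and the 4-byte width come from the example above; the row estimate here is made up for the sketch.

```sql
EXPLAIN SELECT * FROM foo;
--                        QUERY PLAN
-- -------------------------------------------------------
--  Seq Scan on foo  (cost=0.00..155.00 rows=1000 width=4)
-- (1 row)
```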

26. Analyzing Query Performance with Explain Analyze

Short description:

Explain analyze shows where time is actually spent in a query, which is essential for understanding slow queries. Sequential scans on large tables can be slow, and a high loop count in a plan is a hint that an index is needed. Indexes in YugabyteDB are strongly consistent and can have an immediate impact on performance: working through explain analyze plans to find where time is being spent, and then adding an index, has reduced query times from minutes to a couple of seconds.

So let's look at explain analyze. It's very similar to explain, but it actually executes the query and shows you where the time is spent, and this is very important: when you're trying to understand slow queries and problem queries in a database, it's all about where you're spending your time. Here we have explain analyze select star from a table t with limit 100. You can see the limit, the cost is 0 to 9.33, the rows are 100, because we're limiting to 100 rows, and the width looks to be about 608 bytes. Then you can see the actual time, the number of rows, the number of loops, and that we're doing a sequential scan on table t. Because this actually runs the query, it shows the real time and where it was spent: we return 100 rows and loop once. The plan estimates the number of rows in the whole table, but we're limiting to 100, so we only grab the top of it; still, since it's a sequential scan, we potentially have to walk the whole table, which is why sequential scans are so rough in YugabyteDB.

Let's look at something a little more complex. We have a table of breweries with the brewery ID, name, city, and state, with the brewery ID as the primary key. Then we have beers, with a beer ID, name, and ABV; if you're familiar with beer, you'll understand what some of these are, and I like beer, so I think some of them are good. Beers has a foreign key constraint on brewery ID that references the breweries table. We're going to run some queries against this and take a look; a sketch of the schema and the join is shown right after this walkthrough. Here we have an explain where we select some columns from beers and breweries with a join between them, and now we can look at the query plan. We do a nested loop, with a sequential scan on beers, a startup cost of zero, a thousand rows, and its width, and then an index scan on breweries using the breweries primary key. This query plan shows the execution plan the optimizer came up with based on its knowledge of the data; again, the query is not executed, it shows how it would be executed, so there's no timing, but it still has some important numbers. In line one there's no startup cost, and the nested loop returns a thousand rows. A nested loop takes a single row from the first row source, the driving row source, and matches it against the second row source, the probing row source, until the driving row source is exhausted, so it works its way down the plan. In line two we see the sequential scan on beers, with no startup cost and a total cost of about a hundred; this row source must produce 1,000 rows for the nested loop above it to return 1,000 rows. Then we do an index scan on breweries using the breweries primary key index; each probe of that row source costs 0.11 and returns one row, which you can see at the bottom. Those are some pretty important statistics about how the query runs.
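Here is a minimal sketch of that example. The column names and the exact query are reconstructed from the description above rather than taken from the workshop's slides, and the plan you get will vary with your data:

```sql
-- Reconstructed schema from the walkthrough above (names are approximate).
CREATE TABLE breweries (
  brewery_id INT PRIMARY KEY,
  name       TEXT,
  city       TEXT,
  state      TEXT
);

CREATE TABLE beers (
  beer_id    INT PRIMARY KEY,
  name       TEXT,
  abv        NUMERIC,
  brewery_id INT REFERENCES breweries (brewery_id)
);

-- Join beers to their breweries and inspect the plan without running it.
EXPLAIN
SELECT b.name, b.abv, br.name AS brewery, br.city, br.state
FROM beers AS b
JOIN breweries AS br ON br.brewery_id = b.brewery_id;
-- Typical shape of the resulting plan: a nested loop with a sequential scan
-- on beers driving an index scan on the breweries primary key, one probe
-- per beer row.
```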
That's essentially explain analyze, and those are the things I like to run through when I'm looking at queries, especially slow ones. With GraphQL, the ORM builds its own queries, because that's how it works, and the really long queries it creates can be tough to read. If we come back to this generated SQL, which is cut off here, these are very long statements, so you have to dig through and work down each portion of the query to figure out what it's doing. They're much more involved than our small examples, and you have to work down the execution plan to figure out where you're actually spending the time.

One of the things to watch is loops. That number is how many times the executor has to loop through to get all of the data for a particular part of the plan. If there's a piece where you only get ten rows but have to loop through a hundred thousand times, that's going to be a problem, and it's going to be very slow. When you see something like that, you want to look at why it has to loop that many times; maybe you need an index there. Indexes in YugabyteDB are very advantageous and we use them all the time. If you're coming from Cassandra, we also have a Cassandra-compatible API, and you may remember that indexes in Cassandra can be a little tough; our indexes are strongly consistent, so if you see something like a sequential scan, we encourage you to figure it out and probably add an index, because we don't want to scan the entire table. Adding an index can greatly increase query speed. I had a customer recently with a join on a particular column that was present in two different tables. In a single monolithic Postgres database that's not going to be too bad, because it just hits that one database, runs through it, finds the data, and comes back, and even in YugabyteDB it wasn't performing that poorly, I think it was coming back in four or five hundred milliseconds. But once we added the index, it started coming back in two milliseconds: an immediate impact just from adding an index. In another case, a GraphQL one in particular, I was working with a customer whose queries were taking minutes because they joined a few different tables. It got pretty intense working through how the data was gathered and joined across those three tables, and through the explain analyze plans, but as we figured out where the time was going, especially where the query was aggregating different columns, we realized we were spending all the time gathering the data for one particular column, looping over that part of the plan a ton of times just to get a small amount of data. We added an index there, and while it wasn't perfect, the query came down from about 1.5 minutes to only a couple of seconds. A minimal sketch of that kind of fix is shown below.
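As a minimal sketch of that kind of fix, reusing the hypothetical beers/breweries schema from above (the real customer schema and column names are not shown here):

```sql
-- Before: the join column on the probing side has no index, so each probe
-- falls back to scanning the whole (distributed) beers table.
EXPLAIN ANALYZE
SELECT br.name, count(*) AS beer_count
FROM breweries AS br
JOIN beers AS b ON b.brewery_id = br.brewery_id
GROUP BY br.name;

-- Adding an index on the join column lets the planner probe it directly.
-- YugabyteDB secondary indexes are strongly consistent, so reads through
-- the index return the same data as the base table.
CREATE INDEX idx_beers_brewery_id ON beers (brewery_id);

-- Re-run the same EXPLAIN ANALYZE and compare where the time is now spent.
```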

27. Optimizing Queries for Yugabyte DB

Short description:

Adding an index can significantly improve query performance. Adjusting the join order of a query can also have a positive impact. It's important to consider your SLAs and the specific needs of your application. ORMs are primarily designed for single-instance Postgres databases, so you may need to optimize queries for distributed systems like Yugabyte. Reach out to us for assistance in optimizing your queries and leveraging Yugabyte's capabilities.

And that was just from adding an index. It did take a bit more time; we had to work through the plan a little further to get it to something that actually performed the way you'd want. But that was another example.

My last example, I can't really share the queries because they're customer data, but I like to talk about it, is that even just adjusting your join order can help. I had a customer joining a couple of tables, and it was a rough query: it was taking about one and a half seconds to come back, and they needed it back in the milliseconds, a couple of hundred milliseconds. We just adjusted the join order of the query itself for YugabyteDB, and that brought it down quite a bit, from about 1.5 seconds to three or four hundred milliseconds, which was acceptable for that particular customer. A hedged sketch of the idea is shown below.
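The actual customer queries aren't shown, so here is only an illustrative sketch of the idea, using the hypothetical schema from earlier. Note that by default the Postgres planner (and therefore YSQL) may reorder joins on its own; lowering join_collapse_limit is one way to make the written order stick while you experiment:

```sql
-- Force the planner to join tables in the order they are written,
-- for this session only, so different orderings can be compared.
SET join_collapse_limit = 1;

-- Ordering A: drive the join from the large table.
EXPLAIN ANALYZE
SELECT b.name, br.name
FROM beers AS b
JOIN breweries AS br ON br.brewery_id = b.brewery_id
WHERE br.state = 'CA';

-- Ordering B: drive the join from the filtered, smaller table instead,
-- so fewer rows have to be matched before the join completes.
EXPLAIN ANALYZE
SELECT b.name, br.name
FROM breweries AS br
JOIN beers AS b ON b.brewery_id = br.brewery_id
WHERE br.state = 'CA';
```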

The last piece involved is knowing your SLAs: what response times do you need to meet? The ORM creates the query in the way it thinks is best, but these tools are built mainly for single-instance Postgres databases. So with YugabyteDB, take a look at where you're spending your time, reach out to us, ping us, and we'll work with you, look at some of these queries, and try to guide you on the best way to use them in a distributed system like YugabyteDB.
