1. Introduction to GraphQL
The speaker is a technical support specialist with a background in DevOps and Linux administration, currently learning Kubernetes and cloud-native technologies. GraphQL is a query language built around a schema that solves the over-fetching and filtering problems of REST APIs: it lets you request exactly the data you need from the server. This workshop is self-contained and provides an overview of GraphQL.
Hello, everyone. A little about me: I am a technical support specialist at Solo, and I joined about a month ago. I have a background in DevOps and Linux administration, so Kubernetes and all these cloud-native technologies are a whole new thing for me, but I'm learning as I go.
When you open the link, you will see a landing page. Click the start track option, and it will take around two minutes to create the environment. Meanwhile, you can watch the video that gives an overview of GraphQL.
So what exactly is GraphQL? GraphQL is a schema query language. If you have heard of SQL, the famous query language behind MySQL and PostgreSQL, it's something similar, but this time for APIs. With REST APIs, most of the time you query an endpoint, get a lot of information back, and then have to filter it yourself. This can get cumbersome: say you just want a user's name, but because REST is REST, you get a bunch of extra details like their phone number and address, and then you have to filter it. It also causes unnecessary calls. If you have, say, a shopping cart, you might have to make multiple REST API calls just to get a little bit of data, so you need multiple round trips just to pull some information from the backend. In GraphQL, by contrast, you have a server on the backend that acts as a single front door to your application. You tell that server, in a schema definition language, exactly what you want and how you want it. For example, if you have a pet store application with pets and stores, and you want all the pet names in a store, you can just ask GraphQL for that, and GraphQL will give you only that. You get what you ask for, without all the noise that is unnecessary and not required.
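To make that concrete, here is a minimal sketch of a GraphQL query against a hypothetical pet-store API; the field and type names are illustrative, not taken from the workshop's demo application.

```graphql
# Hypothetical pet-store query: all names here are illustrative.
query {
  store(id: 1) {
    pets {
      name   # only pet names come back; no breed, price, or other noise
    }
  }
}
```

A REST endpoint like `/stores/1/pets` would typically return every field of every pet; here the response contains only the names, shaped exactly like the query.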
Okay, so a little bit about Instruqt. This is a self-contained workshop: everything that you require is inside it, so you do not have to switch over to your local machine or anything like that. You can run everything in this lab, inside this web browser. You will have UI buttons that you can click on, and in case a tab is not loading, there is a refresh button on the right-hand side. You will also see a bar that can be resized in case the text is too small. The workshop will give you a basic overview of what GraphQL is and how it works.
2. Integration of GraphQL in BlueEdge
GraphQL is integrated within our BlueEdge product, serving as an API gateway for microservices. Unlike typical GraphQL deployments, we have integrated it within our product. This enterprise feature is not available in the open source version. Contact us for a trial license to test and explore its capabilities.
So it's very useful in terms of microservices, where you have multiple REST APIs. We have a product called BlueEdge that is a sort of entry point to your application, like an API gateway: everything passes through it. We have integrated GraphQL within this product, so there is no separate GraphQL server running. Usually, when you deploy GraphQL, you have to deploy it with a server; Apollo GraphQL is a common one, and there are various other servers available that you can deploy and use. But we have it integrated within our product. Also, this is an enterprise feature, so the open source version does not have it. In case you would like to try it out, you can always reach out to us, and we will help you with a trial license that you can use for your testing.
3. Installing BlueEdge and Deploying Demo Application
Let's begin by installing BlueEdge. We provide automatic schema generation in GraphQL, and mutations are used for CRUD operations. BlueEdge is installed in the blue-system namespace. We will then deploy the demo application and test its REST API. In BlueEdge, we create virtual services as the entry points into the system.
So yeah, let's begin. First we will install BlueEdge. You can just copy the commands on the right and hit Shift+Insert, and it will paste the commands onto the screen for you. You do not need to copy and paste using right-click; you can do it directly with the keyboard.
One feature that we provide is automatic schema generation. For those who don't know, GraphQL is a schema language: you have to define a schema for your APIs, give that schema to the GraphQL server, and tell it, this is what my GraphQL schema looks like. There are two special root types in GraphQL: Query and Mutation. The Query type is the entry point to your GraphQL server, or your GraphQL backend; inside it you define the data your clients can access, your types, and your functions. Mutations are something like CRUD: you can create, update, and delete through them, but using GraphQL. You can also define various input types.
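As a rough sketch (the type and field names below are made up for illustration), a schema with the two special root types might look like this:

```graphql
# Query is the read entry point; Mutation holds the CRUD-style writes.
type Query {
  getUser(id: Int!): User                # fetch data
}

type Mutation {
  createUser(input: UserInput!): User    # create/update/delete-style operation
}

input UserInput {
  name: String!
}

type User {
  id: Int!
  name: String!
}
```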
So we have installed BlueEdge on the cluster now. If you run this command, you will be able to view all the rollouts happening. BlueEdge is installed in a namespace called blue-system, so if you do kubectl get pods in it, you'll be able to see all the pods in this namespace, and you can see they are all running right now. We will now deploy the demo application. All the files required for this workshop are within this Instruqt module itself, so you do not need to copy and paste from anywhere. In case anyone has a question, you can always post it in the chat. I'm not sure if anyone is using the Q&A section, but I'm keeping an eye on the chat, so if any question comes up I can answer it. So we have deployed the demo application and, in case everyone is following along, we will just test the REST API now. These are REST APIs, and in BlueEdge, for anything to enter the system, we create something known as a virtual service.
4. Creating Virtual Services and Using Glue Edge UI
We will create a virtual service, similar to Apache's virtual host. The prefix in the URL routes queries to the upstream backend server. The BlueEdge UI provides a read-only view of virtual services and upstreams. BlueEdge automatically creates upstreams from deployed services. Automatic schema generation is available for gRPC and REST services.
If anyone has deployed Apache applications, you must have heard of virtual hosts; this is something similar. So we will create this virtual service, and we can also view its contents. You can see we have set a prefix: whenever requests come in with a URL prefixed with /blogs, we route them to the upstream. The upstream in this case is a backend server, or backend service, in the blue-system namespace. We can just click on this, and you can see that we got the address from the JSON spec of this service and made a curl request to it. You can see the data that is returned: ID, user, username, title, content. A lot of things come back, and if I wanted just the ID, I'd probably have to create another route, something like /id or /user, get-user, get-id. That is excess information, so let's see how GraphQL solves this issue.
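The virtual service being described might look roughly like the following; the API group and exact CRD fields are placeholders I am assuming for illustration, so treat this as a sketch rather than the lab's actual file.

```yaml
# Sketch of a prefix-routed virtual service (field names are assumptions).
apiVersion: gateway.blueedge.example/v1   # placeholder; use your install's API group
kind: VirtualService
metadata:
  name: blogs
  namespace: blue-system
spec:
  virtualHost:
    domains: ["*"]
    routes:
      - matchers:
          - prefix: /blogs          # requests whose URL starts with /blogs...
        routeAction:
          single:
            upstream:               # ...are routed to this backend service
              name: blogs-upstream
              namespace: blue-system
```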
We will now expose the BlueEdge UI. There is an edge UI that comes with BlueEdge, and you can view everything in this UI too; it's a read-only UI, more or less. In case it shows you a blank page, just hit refresh and you'll be able to see it. If you go here, you can see the different virtual services, for example the one we created just now, and the various upstreams that we have. BlueEdge automatically creates upstreams from services that you deploy, so you do not need to create any upstream by hand; BlueEdge will take care of that. We also have a CLI, bluectl, though it doesn't seem to be installed at the moment. We have automatic schema generation for gRPC and REST services: all you have to do is label the service, and BlueEdge automatically picks it up and generates the schema from it. As you can see, there are no GraphQL API CRD objects created right now.
5. Labeling Service Blocks and Generating Schema
We will label the blogs service and generate the schema. The generated GraphQL API contains an executable schema, and we use a REST resolver for the REST API. Mutations are used for CRUD operations: we have a POST mutation for posting blogs and a GET query for retrieving all blogs. The schema definition is the GraphQL schema definition, and inputs can be passed by the user.
So what we will do now is label the blogs service. Now that we have labeled it, in a few seconds we will see that the schema has been generated. You can also check what schema was generated, and in case it is not what you wanted, you can always edit it.
So you can see a lot of things have been generated. Let's take it step by step. We have something called an executable schema in GraphQL; that is where your schema is generated. And we have something called resolver functions.
In this case, since this is a REST API, we are using a REST resolver. If it were a gRPC service, you would have a gRPC resolver. We give this a name, which is like an alias: it tells you what sort of operation this is, for example a mutation. If you remember, mutations are basically CRUD: you can create, read, update, and delete with them.
So this one is for posting blogs. You can also see from the generated schema that we have these arguments; they are sent to the REST API from GraphQL, so you do not need to pass them separately — they go automatically, provided you give this input. This is a POST mutation, and we also have a GET query here where you can get all the blogs. The upstream referenced here is the backend REST service. And if you notice, we have something called a schema definition: this is the GraphQL schema definition, the kind of schema from the link I shared before, the one you give to GraphQL so that it lets you query it. And if you notice here, there's an input; the user can pass these fields as inputs.
6. GraphQL Types and Mutations
The content, ID, and title are different scalar types. There is also a custom input type for the user. The mutation allows posting blogs using the blog input. The resolve name points to an alias, which determines the REST API that GraphQL will call.
The content is a string type, the ID is an int type, and the title is a string type. And if you notice, the user is another input: if you look below, it's called user input and it has a username. So this is a custom type, you could say, a custom data type. We also have a mutation where you can post blogs using this blog input. And if you notice here, there's this thing called a resolve name, and it points to this alias, which is located here. So in case you're wondering how GraphQL knows which REST API to reach out to when it needs data, this is how it does it.
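Putting the pieces together, the generated schema being walked through might look roughly like this; the directive syntax and exact names are illustrative, not copied from the lab output.

```graphql
# Sketch of the generated types: scalar fields, a nested custom input,
# and a mutation whose resolve name aliases the backing REST call.
input BlogInput {
  content: String
  id: Int
  title: String
  user: UserInput            # custom input type nested inside another input
}

input UserInput {
  username: String
}

type Blog {
  id: Int
  title: String
  content: String
}

type Mutation {
  # the resolve name is the alias mapping this mutation to a REST resolver
  postBlog(blog: BlogInput): Blog @resolve(name: "Mutation|post-blog")
}
```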
7. GraphQL Types and Virtual Service Creation
There are two special types in GraphQL: mutation and query. Query serves as the entry point for fetching data, and the Blog and User types are reachable through it. In the BlueEdge UI, you can explore the generated inputs, resolvers, and raw config. We then create a virtual service with a GraphQL endpoint so you can query it and get answers. Copy the unique URL from the command and paste it into the GraphQL explorer's URL bar. Running it without a query will return an error.
These are the two special types: one is mutation, one is query. Query is your entry point: whenever you write a query, this is where it starts, and it resolves to this get function. Then we have a Blog type and a User type defined within it, so when you query getBlogs, these are the fields you can fetch.
In case what I have explained is a bit confusing, all of this is also explained on the right-hand side within this Instruqt track. And if you go to your BlueEdge UI, you can play around with it: we have introspected this GraphQL spec for you, and you can see all the generated inputs and the resolvers here. You can also view the raw config. There's also a GraphQL explorer that we will expose in the next few steps.
So let's create a virtual service now with a GraphQL endpoint, so you can query it and get your answers from it. We have created the virtual service; if you run kubectl get vs, you can see this default virtual service. To query the GraphQL endpoint, you have to use the command here, which will give you a URL. Copy this URL — it will be unique for everyone — and go to the GraphQL explorer. If you want to see where it is, go to the main UI, click on APIs, and under the GraphQL section you should see this default blog. Click on Explore, click on Show URL Bar, and replace the URL with the one you got when you ran the echo command. Now if you run this without a query, it will just give you an error.
8. Querying Data in GraphQL
In GraphQL, you get exactly what you ask for. The data is returned in the same order as requested, eliminating any guesswork. This is different from using Python dictionaries, where the order is not guaranteed. GraphQL provides a more structured and predictable way of retrieving data.
So currently we get nothing. Let's try a sample query. On the right-hand side, you will have a GraphQL query; paste it and you will get this query, myQuery. If you run it, you will get the answers. And if you notice, we got exactly what we asked for: we ran a function called getBlogs, and we wanted the content, just the ID, and, inside the user, just the username. We got the same response that we asked for. So in GraphQL you get what you ask, and the way you want it is the same way you asked it. It's a very explicit declaration; there is no guesswork about how your data will come back. Will the user come above the ID, or will the ID come at the top? However you ask for the data, that is the same way GraphQL returns it to you. If you have used Python, dicts used to be unordered, which could be a real problem since you did not know the order of the keys; it is much better now that Python dictionaries preserve insertion order. This is something similar to that.
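For reference, the query being run looks roughly like this, and the response mirrors it field for field (the values in the comment are illustrative):

```graphql
query myQuery {
  getBlogs {
    content
    id
    user {
      username
    }
  }
}

# The response comes back in the same shape and order, e.g.:
# { "data": { "getBlogs": [
#     { "content": "...", "id": 1, "user": { "username": "alice" } }
# ] } }
```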
9. Benefits of GraphQL Aggregation
GraphQL is useful for aggregating data from multiple REST APIs into a single response. It eliminates the need for multiple API calls and allows the backend servers to handle the heavy lifting of resolving and retrieving the data. This is particularly beneficial for mobile apps, as it reduces reliance on the user's network and device capabilities.
One of the very frequent challenges in REST is aggregation of data, when you have to pull data from several places. As I was saying before, there is a very particular use case for GraphQL when it comes to aggregation. Aggregation is when you have multiple REST APIs, but you want your query to be answered in a single response. GraphQL is very useful when you have mobile apps or anything like that, because you will have just one endpoint, one query that you have to hit, and the backend — your powerful servers running in data centers — will do the heavy lifting of resolving and gathering all the data. So you don't have to rely on the user's network or on how modern or old their phone is. You just hit the single GraphQL endpoint and get the data.
10. Aggregation and Schema Generation
Aggregation allows combining data from multiple REST APIs into a single query. Automatic schema generation is supported, but custom schemas can also be used. The GraphQL schema defines one-to-many relationships and allows for specific data requests. The right-hand side provides additional explanations. Questions can be asked in the comment box or Q&A section. The query defines the entry point, and a GraphQL API is created. A virtual service is then defined as the entry point for the GraphQL API.
So aggregation is this thing where, for example, if I have three REST APIs and I have to get data that is related to all three, but I don't want to hit all three REST APIs. So that is where I can just aggregate them in one query and I can get back the answer.
So let's begin with this third section. As I said before, we have automatic schema generation. But you know, there are always edge cases or times that you want to have your own schema, so even that is supported. So if you notice, there is this configuration tab here. If you open this tab, you can go to this file that is called gql1-gql.yaml, and all this is already created here for you.
So we have two resolvers here, both REST resolvers. One will fetch the blogs for you — all the blogs will be fetched — and one will fetch the comments on those blog posts. You'll notice there is a blog ID given here with parent.id. And if you notice here, we have this thing called comments. As you know, one blog can have multiple comments, right? So we have a one-to-many relationship defined here, and the brackets around this comment type are called a list. If you look at the schema definition, there's a Blog type, and you will notice that the comments field is in this list format. Beside it, we have resolved it to getComments, which is defined above.
If you highlight it, you will see that this is resolved from this REST API. What we have here is a one-to-many relationship: one blog can have many comments. With a plain REST API, you would probably have to query the blogs first, then collect all the IDs from the response, and then fire those at the comments endpoint to get all the answers. In GraphQL, you just define the schema: we said we want the ID, the user, the content, the title, and the comments, and comments is a Comment type with its own fields — ID, user, comment. So you can ask GraphQL for only the comment ID, for example, or just the user who commented, or the number of comments. And we also have a User type that has a username. Everything I'm explaining is also on the right-hand side, so in case anything is hard to follow, you can always refer to it.
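A sketch of the one-to-many part of that schema (the resolver alias and exact field names are assumptions):

```graphql
type Blog {
  id: Int
  user: User
  content: String
  title: String
  # Square brackets make this a list: one blog, many comments.
  # The resolver fetches the comments for the parent blog's id.
  comments: [Comment] @resolve(name: "Query|get-comments")
}

type Comment {
  id: Int
  user: String
  comment: String
}

type User {
  username: String
}
```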
Once we have gone through this, in case anything is confusing, you can always post it in the comment box; I'm not sure if everyone is using the Q&A section. And this is the query — the Query type, your entry point, basically. So we will apply this now, and you should get a message that says GraphQL API created. Now if you do get graphqlapi, we should see a new blogs-graphql API defined. This is what we have sent to GraphQL: we have defined our own GraphQL API. Now that we have defined it, let's define the virtual service for it. As I mentioned, a virtual service is like an entry point, or if you want to think in Apache terms, it's like a virtual host, where you tell it the URL, the prefix, et cetera.
11. Configuring Virtual Service and Running Queries
You can give an entry point to your upstream by configuring the virtual service, and the GraphQL API can be mapped to the virtual service using the GraphQL API reference. There is a single endpoint for GraphQL. You can switch back to the BlueEdge UI, go to the API section, and click on blog-graphql to explore and run queries. SOAP-to-REST conversion is supported as a path to generating a schema. The larger query aggregates data from different APIs: it defines the type and structure of the data to be retrieved, and the comments can be requested along with their IDs and users. Let's also explore other fields on the user.
And you can sort of give an entry point to your upstream. Upstream in this sense is your backend services. You can see that this virtual service has been configured. If you want, you can also check what is there in this virtual service. You can see we have defined this GraphQL API reference, which is pointing to the blogs-graphql that we defined earlier. So this is how we sort of map the GraphQL API to the virtual service.
And you notice we have just one single endpoint for GraphQL. Now that we have this, we can switch back to the BlueEdge UI and go to the API section. In case you don't have the URL anymore, there is a command given just below that, and you can get it once again. Just click on blog-graphql, then go to the Explore section. If you run this, you still get the blogs, but let's change the query.
We do not support automatic GraphQL generation from SOAP, but you can convert your SOAP API to REST and then generate a schema from the REST API; we have a way to do that conversion. So now we have a much larger query. Before running it, let's go through it and see what exactly is happening, because you shouldn't just run something and stare at the output. You'll notice this getBlogs is what we have in our Query type; if you go up here, you'll see it is of the type Blog. In Blog we have ID, user, content, title, and comments, which is in turn a Comment type. So you see, it's aggregating different APIs right now, and we can request whatever we want. Here we tell it: I want comments, and in each comment I want the ID and the user. In the content we have the ID, the user, and the comment. And from the user we only have the username right now. Let's try something else.
12. GraphQL Schema Stitching and Super Graph Creation
You can request specific data from GraphQL and receive exactly what you ask for. The amount of data returned is exactly what is required and requested. We will now move on to combining multiple GraphQL schemas through schema stitching, creating a super graph. In the demo, we will deploy a service called users and integrate it with the blog and comment services using a Resolver called get-user-details.
So if we remove this ID — within the comments we have removed the ID — you notice we don't get the ID field back, right? Similarly, you have a nested structure, and we've just told GraphQL: give me this data, and that's all I want.
And if I remove this entire comments block, or for example the user block, and run it, I won't have that block in the response. You see, we don't have that data, but we have the remaining data that we require. So you get exactly what you ask: no excess data, no less data, just the amount that you requested.
Is everyone with me on this? Is anything confusing, or is it exciting? You can just tell me about it; otherwise I can't tell whether you're following along. Going once, going twice. Okay, let's move on. You have all seen that you get a JSON response from this. So far we had multiple REST sources; now we will combine multiple GraphQL schemas into one schema — this is called schema stitching — and the smaller schemas are called sub-schemas. We will create a supergraph, which is a combination of multiple graphs. Let's see how this is done in the demo.
So you will be deploying a service called users with a preexisting GraphQL API. We have a resolver called get-user-details, which we will integrate with our blog and comment services. If you look, we have this GraphQL API defined here, with a remote upstream, and we have defined a schema. In it, we have a User type and a Query type — the entry point. We have getUserDetails, and it takes a username, which is a string. So you notice we have now given it a parameter to filter on.
13. GraphQL API and Schema Stitching
We give get-user-details a username, and it will give us the details of that user. The user type includes ID, username, last name, and first name. The username is required as an input. We have a remote executor and a service called users. We create a GraphQL API for this service. We see the remote query and schema stitching. We haven't created a virtual service yet. We will stitch these schemas together to get a unified output. Let's look at the stitch schema.
So we give this get-user-details a username, and it will give us the details of that user. What kind of details it will give us is this user type. So if you notice up here, this type is defined with ID, username, last name, and first name. And if you notice here, there's an exclamation mark after this string. So this means that it is required. So you will have to provide this username as an input to this function basically. Without that, you will get an error. So let's apply this.
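The schema fragment being described looks roughly like this (the field names follow the walkthrough; treat it as a sketch):

```graphql
type User {
  id: Int
  username: String
  firstName: String
  lastName: String
}

type Query {
  # The "!" makes username mandatory: calling this without a
  # username argument returns an error instead of data.
  getUserDetails(username: String!): User
}
```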
Okay. You will notice we have something called a remote executor; first, let's see what it is. Apart from the namespace, we give it a service name, users. Basically, this is the service behind the API. And if you notice, we haven't given any REST resolver or anything like that; we have just referenced the upstream mentioned here. You can see this is just a normal Kubernetes service, and we haven't given any URL or anything like that. Now we create a GraphQL API for this. Once it's created, if you click here, you'll see the next section on schema: this is a remote query, and this is where schema stitching comes in. If we go here and view APIs, we see remote GraphQL users with getUserDetails; we haven't created a virtual service for it yet. You can see all these definitions created in the GraphQL API. So let's move on: we will now stitch these schemas together so we can get a unified output.
14. Stitching Schemas and Creating Super Graph
We are stitching existing schemas into one using type merge in GraphQL. By merging on the username field and query, we can create a super graph that combines the remote and blog schemas. This allows querying both using a single GraphQL endpoint.
So let's take a look at this GraphQL API, which is called a stitched schema. Now we are stitching schemas: we are not creating any schema definition here, we are stitching existing schemas into one, and we are telling it how to merge them. This is called a type merge in GraphQL. We merge on the field called username and on the query: we give it username as the merge field, and we give it the arguments, so it will send the username argument and stitch both the remote schema and the blogs schema into one supergraph. You'll have only one GraphQL endpoint, but you can query both using the username.
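The stitched-schema config with its type merge might look roughly like the following; the exact CRD layout is an assumption, shown only to illustrate the merge idea:

```yaml
# Sketch of a stitched schema: two subschemas merged on User.username.
spec:
  stitchedSchema:
    subschemas:
      - name: blogs-graphql            # schema generated from the blogs service
        namespace: blue-system
      - name: users-graphql            # remote users schema
        namespace: blue-system
        typeMerge:
          User:
            selectionSet: '{ username }'   # field both schemas share
            queryName: getUserDetails      # query that fills in the rest
            args:
              username: username           # map the merge field to the query arg
```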
15. Stitching Schemas and Running Queries
We have applied the stitched schema and updated the virtual service to expose the endpoint. The GraphQL API reference now points to a combination of all previously generated schemas. In the BlueEdge UI, we can see the stitched GraphQL API and the merged subgraphs. Ruffles asked whether schemas can share common types: if two schemas define the same type, they are merged into one type with all the fields, and the merge configuration can handle foreign-key relationships. We have queries for getBlogs and getUserDetails, but only getBlogs appears here. The query retrieves comments and user details, and the Blog type includes the User type.
So let's apply this stitch schema also. So we have applied the stitch schema. Now we will update the virtual service. So now we have updated our virtual service to expose the end point. Basically, you'll notice here, we have given a GraphQL API reference to the stitch schema that is a combination of all schemas that are generated and that we have defined previously.
So now let's go back to the BlueEdge UI and go to the APIs. You'll notice we have a stitched GQL here, and you'll notice we have subgraphs. These are GraphQL APIs by themselves, and we have just combined them into this one stitched GraphQL API. You can also see how the types are merged. Ruffles is asking: can the schemas share common types? Are you saying that if there were, say, two Blog types, one in each schema? If that's the case, then they are merged together: both types will be merged into one type with all the fields. You have to specify the merge config, and the merge happens according to that. In case you have foreign-key relationships, you can specify the merge config accordingly. So, yeah, let's move on.
So, we have a query now, and you'll notice there is something called getBlogs, and another function called getUserDetails, but only getBlogs is used here. You'll see how this all comes together. Let's run the query: we call getBlogs and ask for the comments, and within each comment we ask for the actual comment text and the id. We also wanted the user, so we ask for the user's username, first name, and last name. Now, if you see here, the blog type has a user field, and the user type is defined here.
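The query being run might look roughly like this; the field names (getBlogs, comments, user, and so on) follow the description above, but the workshop's exact schema may differ:

```graphql
# Illustrative query against the stitched schema.
query {
  getBlogs {
    comments {
      comment   # the actual comment text
      id
    }
    user {
      username
      firstName
      lastName
    }
  }
}
```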
16. Securing GraphQL APIs with External Authentication
So, we have a single supergraph in this case. Let's see how we can secure your GraphQL APIs with external authentication. We will create an API key to secure the API. The key is stored base64-encoded in a secret, and an auth config is created to guard access between your applications and the backend. The virtual service attaches the auth config so that the validity of the API key is checked on each request.
So, you notice that these are both merged together based on the user, so we have a single supergraph in this case. Is everyone with me up to here? You can see that getUserDetails returns the username.
OK, then let's move on. Like any other API, you also have to secure your GraphQL APIs, so let's see how we can do that. With Gloo Edge we have something known as external authentication (extauth), where you can integrate OIDC, OPA, LDAP, basic authentication, and API keys.
OK, so we are going to create an API key, and we will secure the API using this key. So let's create the key. This is the actual value of the key, stored base64-encoded in a secret; if you open the secret you will see the encoded value rather than the plain key. Now we have to create something called an auth config. This is the mechanism that Gloo uses to guard access between your applications and the backend. This is how you create an auth config; we are going to use API key auth in this case. You can see the command here that applies the authentication file with kubectl, and the secrets are matched using the label team: infrastructure. So we have created the auth config, and now let's update our virtual service to attach it. When a request comes in, the virtual service will check whether the API key is valid, and only then will we get an answer.
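Put together, the secret and the auth config might look roughly like the sketch below. The resource names, the api-key header name, and the placeholder key value are assumptions; only the team: infrastructure label comes from the walkthrough.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: infra-apikey
  namespace: gloo-system
  labels:
    team: infrastructure        # the label the auth config selects on
type: extauth.solo.io/apikey
data:
  api-key: <base64-encoded-key> # replace with your base64-encoded key
---
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: apikey-auth
  namespace: gloo-system
spec:
  configs:
    - apiKeyAuth:
        headerName: api-key     # header clients must send
        labelSelector:
          team: infrastructure  # match secrets carrying this label
```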
17. Securing API with API Key and Data Loss Prevention
Let's go to the Gloo Edge UI and explore the stitched GQL API. Add the API key to the request headers to secure your API. Data loss prevention can be used to mask sensitive values in access logs. Define a DLP block and specify the regex for the username to be masked; only 60% of the characters will be masked.
Let's go to the Gloo Edge UI once again. Click on APIs, go to the stitched GQL API, and click on Explore again. OK, you should see this URL; if it is not there, you can run the command given in the chat. When we run this query now, we see an error saying "unexpected end of data". Let's double-check that this URL is right.
OK, so let's add the key to the request headers. If you notice, there's a small bar down here that says query variables and request headers. Let's add the key there and run the query again, and you'll notice we got the result we were looking for, all the details that we wanted. So this is how you can secure your API with an API key: you give the user an API key, and they will be able to query. You also have features like rate limiting and a web application firewall that you can apply to secure your data.
The next feature that we have is something called data loss prevention. When you are handling sensitive data, you typically want to mask sensitive values, for example in access logs. You can use data loss prevention to mask that data. If you notice here (let me make this quick), we have defined a DLP block in our virtual service, where all the traffic enters. We define a masking action named test and give it a regex that matches the username; anywhere this regex matches, the match will be replaced with the mask character, the asterisk. And we will mask only 60% of the value: if it has 10 characters, only 6 of them will be masked and the remaining won't. So let's try it.
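A DLP block of the kind described might look roughly like this in the virtual service options. The action name test and the 60% figure come from the walkthrough; the regex itself is a placeholder assumption:

```yaml
spec:
  virtualHost:
    options:
      dlp:
        actions:
          - actionType: CUSTOM
            customAction:
              name: test
              regex:
                - '"username":\s*"[^"]*"' # placeholder pattern for username values
              maskChar: '*'
              percent:
                value: 60                 # mask only 60% of each matched value
```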
18. Configuring Virtual Service and Simulating Failure
We have configured the virtual service and demonstrated how to mask sensitive data in your applications. Resiliency is crucial to prevent service downtime and to recover from failures. In Kubernetes, running replicas ensures high availability. We will simulate a failure by modifying the upstream and scaling the deployment down to zero.
We have configured the virtual service. Now let's run the query again, and you'll notice that in this username field here, 60% of the data is masked. So this is how you can mask sensitive data in your applications.
Before the next feature: is everyone following along? Has everyone reached this point? Any questions or queries, anything like that? We have almost reached the end of the session, so now is a good time to ask.
So if there are no queries, let's move on to the last part of the section: resiliency. Resiliency is a very important feature; you do not want your services going down. There is no such thing as guaranteed 99.99% uptime forever: failures happen, and what matters is how you recover from them. Services do go down, and the network is never reliable; you will always see some service failing or some network traffic being dropped.
That is why, even in Kubernetes, you run replicas: if one replica goes down, you have two other replicas still running. So let's take a look at how we do that here. It's already created for us: we have a users backup service. We check that the deployment is running, and we can see that it is. Now we will modify this upstream so that we can simulate a failure; this will update the default upstream. Let's apply it. We have now created an upstream that can fail over. We can run the query one last time and see that everything is still working as expected. Now let's simulate a failure: we will scale the deployment down to zero and check whether the pods are running.
19. Simulating Failure and Resiliency
The pods are terminating, simulating a failure. The failover section switches the backend to users-backup when no pod is running. Resiliency is built in by specifying the backup service URL.
We'll see that it is terminating here; it will take some time. The pods are terminating, and that simulates a failure, because nothing is running: we have no users pod running right now, so there is no users service. Let's see what happens.
So you notice that nothing failed, because we gave this failover section here, which says that in case of a failure, switch the backend to users-backup. So if the health check fails and there is no pod running, it automatically switches to the backup service that we have in the default namespace; this is just a service URL. This is how resiliency is built in: you just specify the backup, and traffic fails over to it.
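The failover behaviour described above can be expressed on a Gloo Edge upstream roughly as follows. The service names and port are assumptions; the idea is that the prioritized endpoint list names users-backup as the fallback:

```yaml
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: users
  namespace: gloo-system
spec:
  kube:
    serviceName: users          # primary service (assumed name)
    serviceNamespace: default
    servicePort: 8080
  failover:
    prioritizedLocalities:
      - localityEndpoints:
          - lbEndpoints:
              # Backup endpoint used when the primary's health check fails.
              - address: users-backup.default.svc.cluster.local
                port: 8080
```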
So this is it. This is the end of it. Thank you for staying around. Have a nice day.