Live e2e test debugging for a distributed serverless application


In this workshop, we will build a testing environment for a pre-built application, then write and automate end-to-end tests for our serverless application. In the final step, we will demonstrate how easy it is to understand the root cause of a failing test using distributed tracing, and how to debug it in our CI/CD pipeline with Thundra Foresight.

Table of contents:
- How to set up and test your cloud infrastructure
- How to write and automate end-to-end tests for your serverless workloads
- How to debug, trace, and troubleshoot test failures with Thundra Foresight in your CI/CD pipelines

146 min
15 Nov, 2021



Video Summary and Transcription

Thundra is a tracing and debugging solution for applications, tests, CI pipelines, and workflows. It offers products like Thundra Foresight for monitoring CI/CD builds and Thundra APM for tracing and debugging microservices. Thundra provides distributed tracing, time travel debugging, and chaos engineering support. It helps troubleshoot test failures, identify and fix bugs, and simulate potential failures in production. Thundra supports Java, JavaScript, and Python runtimes, and integrates with popular CI providers.


1. Introduction to Thundra and its Products

Short description:

Thanks for joining our workshop. Today we'll talk about Thundra and its products. Thundra is a tracing and debugging solution for applications, tests, CI pipelines, and workflows. It can be used in various environments and is integrated with popular CI providers. Thundra Foresight is the main product, allowing monitoring and troubleshooting of CI/CD builds. Thundra APM helps trace and debug microservices and serverless applications on any platform. Distributed tracing is crucial for monitoring microservices.

Thanks for joining our workshop, I hope you will find it useful. Let me introduce myself first. This is Serkan, co-founder and CTO of Thundra, which traces and debugs modern cloud applications, including serverless applications, and CI pipelines. I am honored to have been recognized as an AWS Serverless Hero, and I am also a co-organizer of the Cloud and Serverless Turkey meetup, and was part of the team that organized many national and international AWS-centric events in Turkey. I have been actively working and researching in the serverless area for five years. Welcome again, everyone, and now I am giving the microphone to Ilker to introduce himself.

Thank you Serkan. I am Ilker. I have been working as a Front-End Engineer at Thundra for three years. Yeah, Ozan, you can continue.

Thanks Ilker. This is Ozan. I am a Solutions Engineer at Thundra.

Okay, thank you Ozan. Let's start with the boring part before the hands-on part of our workshop. Let me talk a little bit about today's agenda. First of all, we'll talk about Thundra in general, without taking too much of your time, of course: what kind of product Thundra is and which problems it offers solutions to. Later we are going to talk about how we can write end-to-end tests for serverless applications and how we can automate them. Then we will get to the fun part of the workshop and see how we can debug and trace the failures in our end-to-end tests using Thundra, through different examples, test failures, and use cases.

First of all, I would like to briefly introduce Thundra. Basically, Thundra is a tracing and debugging solution for applications, tests, CI pipelines, and workflows. This means that developers, testers, and QA engineers can use Thundra in many different environments, from development to production: during local development by integrating it with their IDE; in CI/CD for pull request, build, and release pipelines; in staging before going into production; and then, of course, in production itself. Additionally, Thundra is platform agnostic, so you can use it both in serverless environments, such as AWS Lambda, and in non-serverless environments such as Docker, Kubernetes, and VMs, and in many clouds, such as AWS, Azure, and Google Cloud. Thundra is also integrated with the most popular CI providers, like GitHub, GitLab, CircleCI, Bitbucket, Jenkins, and TeamCity, and many others are coming. Another good point is that you can still use Thundra locally, where you are developing in your IDE; you don't need to use Thundra only in a remote environment, and you get the same capabilities in your own development environment. Besides that, Thundra is integrated with many popular runtimes, including Java, Node.js, Python, Go, and .NET, and with many web frameworks, such as Express, Koa, and Hapi for JavaScript and Node.js, Spring for Java, Flask, Tornado, and FastAPI for Python, and ASP.NET for .NET, to provide in-depth debugging and rich tracing capabilities.

OK, so Thundra basically has three main products, integrated with each other. The first one is Thundra Foresight, which is the product we are going to use mostly today in our workshop. With Thundra Foresight, you can monitor your CI/CD builds from one place. By finding the bottlenecks in your CI process and optimizing your CI times, you can reduce your CI cost and go to production with confidence. Additionally, by monitoring test runs and execution metrics, you can historically follow the failures and performance problems in your tests. Moreover, the most important feature is that you can troubleshoot problems very easily by tracing and debugging the distributed flow between your tests and your applications in your end-to-end tests. And with our time travel debugging feature, you can record your test execution line by line and take snapshots during the execution. By snapshots, I mean snapshots of the local variables, arguments, return values, properties of objects, and so on; you can then replay the problematic test runs and debug them without having to reproduce the problem locally. Sometimes it is not feasible, or very hard, to reproduce the same test failure locally for many reasons, like having the same database state or the same application state, so it is very crucial to be able to debug the failure in the real environment, the target environment. Thundra Foresight works in integration with many CI tools and testing frameworks. Currently, we have integrations with GitHub, GitLab, Bitbucket, Travis CI, Circle CI, Jenkins, TeamCity, and other CI tools, and we are expanding our CI ecosystem integrations according to the feature requests coming from our customers and users. At the moment, the JavaScript and Java runtimes are supported, over the Jest, JUnit, and Selenium integrations.
On the other hand, Cypress and TestNG integrations are on the way for the JavaScript and Java runtimes, and Python runtime support, over PyTest, is planned to be released by the end of this month. This means that Python users can also use Thundra Foresight in their test and CI processes and have all the cool features provided for the other runtimes we are going to talk about today. Now let me continue with our second main product.
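To make the snapshot idea concrete, here is a toy sketch of our own (not Thundra's actual agent API) that records the local variables and intermediate values at points of interest, so a problematic run could be inspected after the fact:

```javascript
// Toy illustration of execution snapshots: record the local variables
// at interesting points so a failing run can be inspected later.
// This is our own sketch, not Thundra's agent API.
const snapshots = [];

function capture(label, locals) {
  snapshots.push({ label, locals: { ...locals } });
}

function applyDiscount(price, rate) {
  capture("entry", { price, rate });
  const discounted = price * (1 - rate);
  capture("before-return", { price, rate, discounted });
  return discounted;
}

applyDiscount(100, 0.5);

// After a failure, "replaying" means walking through the recorded states:
console.log(snapshots);
```

A real agent does this automatically, down to line-by-line and without code changes; the sketch only shows the kind of data (arguments, locals, intermediate values) such a snapshot holds.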

The second main product is Thundra APM, which helps developers trace and debug their microservices and serverless applications running on any platform, like Docker, Kubernetes, VMs, AWS Lambda, Fargate, and even locally. Especially with the rise of microservices, distributed tracing has become a very crucial part of the monitoring process.

2. Introduction to Thundra APM and Thundra Sidekick

Short description:

Developers can use Thundra APM to understand how their applications interact with various components. Thundra Sidekick allows debugging applications in the targeted environment, including production. It eliminates the need for breakpoints and introduces trace points.

And with Thundra APM, developers can get a picture of how their applications interact with each other, with databases, cloud services, third-party APIs, and so on. So when there is a problem in the business flow, developers can trace and debug the whole transaction end-to-end to pinpoint issues and find the problematic component or microservice. And, of course, Thundra APM and Foresight are deeply integrated with each other, so developers can trace the whole flow in their end-to-end tests easily and go to production with confidence. Let me continue with our final product, our third main product, Thundra Sidekick, which enables developers to debug their applications in the targeted environment, even in production, not only locally. The motivation is that sometimes it is hard to reproduce an issue locally for many reasons. So we believe that developers should be able to debug their applications in the real environment, against real services and real data, not the simulated ones. In this context, Thundra Sidekick allows developers to debug their applications without stopping at a breakpoint. We have a new term for this, the trace point, which enables developers to debug their applications on the target, in the real environment, without pausing the application at the point where the trace point is put. And I think that is already enough for the intro.

3. Workshop Overview and Technology Requirements

Short description:

Today we will sign up for Thundra, clone a sample blog site application, run it locally, and simulate AWS services using LocalStack. We will then explore the Thundra APM console, familiarize ourselves with Thundra APM, and proceed to Thundra Foresight. We'll create a test project, run tests with injected bugs, and use Thundra Foresight to trace and debug. Now let's move on to the fun part of the workshop. We need Node.js, Python, Docker, the AWS CLI, and a Thundra account. Sign up and select Thundra Foresight. Creating a project is essential, and we have integrations and UI demos available. Let's get started with the built-in test framework integrations and the manual integrations.

Let me talk about what we are going to do today and how we will go through the workshop. At first, we will sign up to Thundra, of course, to create our account and get our API key, to be used in our tests and applications. Then we will clone our sample blog site application from the GitHub repo and run it locally to have a quick look. The blog site application consists of serverless applications implemented in the Node.js runtime for the AWS Lambda platform. Locally, we are going to use LocalStack to simulate the AWS services, and we are going to use the Serverless Framework to deploy the applications to LocalStack. So basically, we are going to run an environment for the AWS cloud on our local machines by using LocalStack, without needing to deploy the application to a remote environment. Later we will go to the Thundra APM console and see the picture of how the applications behave and interact with each other; at this point we will become familiar with the Thundra APM product and get the point of what Thundra APM does. Then we will go through Thundra Foresight. At first, we'll go to the Thundra Foresight console to create our first test project, to be used in our end-to-end tests during the workshop. Then we will run our tests, written with the Jest framework, which are already available for you in different branches of our sample application in the GitHub repo. And of course, they will fail, because we have explicitly injected bugs into the application to fail the tests. So we'll use Thundra Foresight to trace and debug the failing tests and troubleshoot the failures with the information provided by Thundra Foresight and APM. Okay, I think I have completed the boring part, and I am giving the stage over for the fun part of the workshop. Thank you all again. Let me stop my screen share. Thank you Serkan, let me share my screen. Yes. Okay, thank you Serkan for the introduction. These are the technologies we need today. First, we need Node.js version 10 or newer.
We also need Python 3.6 or newer, Docker, and the AWS CLI. If they are ready on your machine, you are good to try these demos with us today. And finally, you need a Thundra account to monitor your tests and applications. So let's go and create our new accounts. Here I will create a new account. Sorry. Yeah. You need to go through this sign-up step to create a new account. This is our application selection page. These are the apps Serkan mentioned before, as you remember: Foresight, Sidekick, and APM. Today, we are going to mostly use Foresight to see how our tests are doing, and we will use APM to see what happens under the hood of our application. So first, let's start with Thundra Foresight. These are the onboarding steps we provide; you can spend more time reading them later if you want. Creating a project is the essential part; this is where everything starts, actually. We also have some integrations prepared for you for some open-source projects. After me, Ozan will show them to you. One more thing: I also have some UI demos on my screen, and a link to them is coming up soon, so please pay attention to those. So, let's go. Yep. We are good to go. So let's create a project and pick a test framework integration. And also, we have some manual integrations.
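As a quick sanity check for the Node.js requirement above, you can compare the running version's major number against the minimum; the helper below is our own sketch, not part of the workshop repo:

```javascript
// Check whether the running Node.js satisfies a minimum major version;
// the workshop asks for Node.js 10 or newer.
function meetsMinimumMajor(versionString, minMajor) {
  // versionString looks like "v14.17.0"; parseInt stops at the first dot.
  const major = parseInt(versionString.replace(/^v/, ""), 10);
  return major >= minMajor;
}

console.log(meetsMinimumMajor(process.version, 10));
```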

4. Cloning the Sample Application and Getting API Key

Short description:

Our project is ready. We will clone our sample application and guide you through the steps. If you need assistance, our team is available on Discord and Zoom. Let's wait for everyone to create their Thundra account and get their API key before running the application locally. Don't hesitate to ask questions or seek help. Let us know when you're done and we'll continue by starting the application.

Besides JavaScript, we're also very familiar with Java; Foresight works really well with Java. And actually, our project is created. If you want, you can spend more time reading these options later. So, our project is ready. We will go to the Preferences tab, and here are your API key and project ID. We will need these two bits of information later, so please remember where to find them when you need them. But no worries, we will be guiding you through all these steps. Yep.

So, right now, we will clone our sample application. Actually, I assume that most of you have created your projects; these projects are essential in our demos. So, in the following, we will first clone our sample application. Let's continue with that. This hands-on guide actually explains the steps, but I will also run them one by one, so do not worry. And if you need any assistance, our team will help you in the Discord channel or in the Zoom chat, so do not worry about that either. So I will go to my IDE and first... Chrome... paste. Yes. And after the cloning, you should check out the base branch. Oops, sorry. Yep. And I think it's good. So, let's open this in a new window. Okay. So this is today's repo. The most important part is putting the right API key here. My terminal is also ready. Maybe we can wait for everyone to create their Thundra account and get their API key, and then we can continue by starting the application locally. Yeah, actually this is a good point to wait. Thank you. Did everyone get it? Yeah, please just create your Thundra account and get your API key as Ilker did; we are waiting for you here, and once everyone gets to this point we can continue by running the application locally, so we stay in sync with each other. So we are waiting at this point for you to get your API key and put it into your Makefile before starting the application. Let's wait a couple of minutes, right? And please don't hesitate to ask any questions; whenever you need help, just ping us through the Zoom chat or the Discord channel, and we are happy to help you. I posted the repository in the Discord chat, but I will post it here as well, so anyone who wants to follow but doesn't have access to Discord can find it here.
I see from our dashboard that we have a couple of sign-ups to Thundra, but I am not sure everyone has had a chance to go and check it out. Yeah, please just let us know when you are done, and let us continue by starting the application. Right now we do not have any test runs, but during our demos, you will see this screen fill up with test results. Okay.

5. Initializing API Key and Running Commands

Short description:

Let's continue by putting in our API key and running the necessary commands in the Makefile. While waiting for the components to start, we can also initialize the front-end part. Remember to save the endpoint URL, as it will be used in our frontend app. After initializing the React app, you can see the frontend application. You can add blogs and edit the content.

I think we have waited long enough at this stage, so I will continue by putting in my API key. So let's go to our Makefile. Then I run make install first, then make start. Let's check. Actually, do not worry about these retries; LocalStack on Docker takes a bit of time to be up, so these are retries on the LocalStack side. This is the starting process; it may take a while.

While these things are coming up, I will actually initialize our front-end part as well. By the way, when you run this the first time, it may take some time to fetch the required Docker images, I mean the LocalStack Docker image, to mimic the AWS environment locally. So downloading the LocalStack Docker image can take a few minutes. Yes, our backend actually seems fine. Please note that this endpoint URL is important; save it somewhere and be sure not to lose it, because we will put it in our frontend app.
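While LocalStack is starting, one way to wait for it programmatically is to poll its health endpoint and check that the services you need report as running. The helper below is our own sketch, and the payload shape shown is an assumption about what such a health response looks like, not LocalStack's guaranteed format:

```javascript
// Decide whether the LocalStack services we need are ready, based on a
// health JSON payload (assumed shape: { services: { sqs: "running", ... } }).
// The helper and the sample payload are ours, for illustration.
function servicesReady(health, required) {
  const services = (health && health.services) || {};
  return required.every((name) => services[name] === "running");
}

// Sample payload such as a health endpoint might return while starting up:
const health = {
  services: { sqs: "running", dynamodb: "running", es: "starting" },
};

console.log(servicesReady(health, ["sqs", "dynamodb"]));       // true
console.log(servicesReady(health, ["sqs", "dynamodb", "es"])); // false
```

A startup script could fetch the health endpoint in a loop and call this check until it returns true before running the demo.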

I am initializing our React app. Yes, it came alive. You see the upper part; we will put our endpoint URL here, but we have to keep that blog path part, so be careful about that. And we set our URL. So this is our frontend application; this is what you will see when you clone the provided repo. It is a simple application. You can add blogs. Let me edit here, for example: the title is "test monitoring", the post is "founder of...", and the username is Ilker. And then we submit it...

6. Exploring Thundra APM

Short description:

Let's explore how the application looks in Thundra APM. We can see the details of the 'post blog' and 'search blogs' Lambda functions, as well as the full trace. Thundra APM utilizes LocalStack to run the applications locally without deploying to AWS.

Actually, I will show you how this application looks in Thundra APM. So let's go to Thundra APM. At this point, it's important to mention that one account is usable for all three products, so you can easily switch from Thundra Foresight to Thundra APM with just a few clicks. Let's go to APM.

When we land on the Thundra APM page, we need to provide our name, company, and position. At this stage, we have just submitted our blog post, which is shown under the 'Submit Blogs' section. Here, we can see the details of what happens behind the scenes. For example, when I click the submit button to add a sample blog, the 'post blog' endpoint is called, and we can see the invocation details. The body of the request contains the data we sent to the 'post blog' Lambda function. We can also see the 'search blogs' Lambda function and its trace. If we open the 'post blog' Lambda function, we can see the full trace.

In Thundra APM, there are three Lambda functions in use. We are utilizing LocalStack to deploy and run the applications locally without the need to build and deploy all the packages to the AWS environment and create resources.

7. Exploring Thundra APM and Distributed Tracing

Short description:

One account is usable for all three products, so we can go to the APM from Thundra Foresight easily. This is our first landing on Thundra APM. We can see the details of the backend Lambda, the invocation, the body sent to the Lambda function, and the trace of the Lambda. There are three Lambda functions in use, triggered from the API gateway, the SQS queue, and DynamoDB Streams. DynamoDB is used as a key-value store, and Elasticsearch is used for searching through blog post titles and contents. Thundra enables us to see the entire flow end to end and identify issues in the microservices.

One account is usable for all three products, so you should be able to go to the APM from Thundra Foresight easily, with a few clicks. Yeah, my internet is having a bit of a problem. So, I need to... what was the thing? I think it was test.js.summit.2021. Yeah. I wish I had saved it. Yeah, let's go to APM.

This is our first landing on Thundra APM, so we need to give our name, company, and position. At this stage we have just submitted our blog post, and it is shown here under the Submit Blogs part. Here you can see what is done under the hood, actually. For example, when I click the submit button while adding a sample blog, this 'post blog' endpoint is called, and this is the invocation. When you click it, you can see all the details about the backend Lambda, all this information. And this is the body going into that Lambda function; as you see, these parts are the data we have just sent with our blog post to the 'post blog' Lambda. And this is the 'search blogs' Lambda; when you click it, again, you can see all this information, and from it you can see the trace of the Lambda. Maybe it is better to open the 'post blog' Lambda function, which triggers the whole flow, and then we can see the full trace. Yes. Maybe you can explain it a little bit more. Let me quickly jump in at this point.

And here you can see that there are three Lambda functions. Basically, we are using LocalStack here to deploy and run the applications locally, without needing to build and deploy all the packages to the AWS environment and create the resources; that is the benefit of using LocalStack. The first Lambda function is triggered from the API gateway. It just gets the request, sends a message to SQS so the request can be processed asynchronously later, and then returns the Accepted response code. Then another Lambda function, the blog post processor, is triggered from the SQS queue; it gets the message from the queue, processes it, sends a notification to the SNS topic, and then writes the blog post item to DynamoDB. We have also configured DynamoDB Streams to stream the DynamoDB changes, I mean the inserts, updates, and deletes, and this stream triggers another Lambda function, the blog post replicator. Basically, this Lambda function gets the inserted, updated, and deleted items from the DynamoDB stream and applies the changes to Elasticsearch. So we are using DynamoDB as a key-value store to access the blog post items, and we are also using Elasticsearch to be able to search through the titles and contents of the blog posts. We are using DynamoDB Streams to keep the DynamoDB and Elasticsearch states synchronized with each other: whenever we add, update, or delete an item in the DynamoDB table, we also apply the mirrored changes to Elasticsearch. So with Thundra, we can automatically see the whole flow end to end.
Without this kind of distributed tracing, it is very hard to find the problematic microservice or component in your whole flow, because with the rise of microservices there are many applications in a business flow, so when there is an issue, its origin might come from a totally different microservice.
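The flow just described can be sketched as three small helpers, one per Lambda, that build the payload each step would hand to the AWS SDK. The resource names and item fields here are our own illustrative assumptions, not the workshop repo's actual code:

```javascript
// Sketch of the three-step flow: API-triggered Lambda -> SQS ->
// processor Lambda -> DynamoDB -> DynamoDB stream -> replicator Lambda
// -> Elasticsearch. Resource and field names are illustrative; real
// handlers would pass these params to the AWS SDK clients.

// 1) API-triggered Lambda: enqueue the request for async processing
//    and return 202 Accepted immediately.
function buildSqsMessage(queueUrl, blogPost) {
  return { QueueUrl: queueUrl, MessageBody: JSON.stringify(blogPost) };
}

// 2) SQS-triggered processor Lambda: write the blog post item to
//    DynamoDB (the SNS notification step is omitted here).
function buildDynamoPut(tableName, blogPost) {
  return {
    TableName: tableName,
    Item: { ...blogPost, status: "submitted" },
  };
}

// 3) Stream-triggered replicator Lambda: mirror each INSERT / MODIFY /
//    REMOVE stream record into Elasticsearch so both stores stay in sync.
function streamRecordToEsAction(record) {
  const id = record.dynamodb.Keys.id.S;
  if (record.eventName === "REMOVE") {
    return { action: "delete", id };
  }
  const image = record.dynamodb.NewImage;
  return {
    action: "index",
    id,
    doc: { title: image.title.S, content: image.content.S },
  };
}
```

Keeping the write path asynchronous (the API Lambda only enqueues) is what makes the distributed trace interesting: a single user action fans out across four services, which is exactly what the trace map stitches back together.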

8. Exploring Thundra APM and Distributed Tracing

Short description:

These Lambda functions retrieve content from DynamoDB, apply changes to Elasticsearch, and keep DynamoDB and Elasticsearch synchronized using DynamoDB Streams. Without distributed tracing, finding issues in a complex microservices flow is challenging. Waiting for everyone to catch up before continuing with tests. Explaining the blog submission process and the triggering of Lambda functions. Observing changes in the APM functions list. Editing and accepting a blog post, and observing changes in the trace. Publishing the post and observing changes in the APM functions list and trace.

In this context, having proper distributed tracing is very important for finding the origin of an issue in your business flow. And I think this is the right point to wait for everyone to catch up with us, before we continue with the tests. Yeah, actually I can spend some more time explaining this application. Ah, yeah. Sorry, sorry Serkan. No, no problem. You have around five minutes to catch up. So, at this stage, we have just submitted our blogs, and we have a setup such that when you submit your blog, another person can review it. When your blog is reviewed, you will see it under the reviewed blogs, and once a revision is done, you can just publish your blog. Let's go back to our Submitted Blogs list here. By clicking this, I open a modal, and I can just edit that blog here: "Thundra Foresight is an awesome tool."
And I will accept that post. After that, we can see our post under the Reviewed Blogs part. While we are doing this, we are triggering some Lambda functions, and all of these can actually be seen in the APM functions list. As you see, we make lots of 'search blog posts' requests, so its count is increasing much faster. We have the blog post replicator called twice here, and this 'review blog post' is a new one: when we clicked the Accept Post button in that modal, we invoked that Lambda. So let's explore more with this invocation; maybe we can see our additions here. Here you can see what our post has become: as you remember, we edited our post, and it captures that. When we go to its trace, it is still there, but as we use the application, our topology gets more complicated; with APM, you can easily observe what has been done. For example, I can show you these messages. This one, for example, is my latest attempt, when I edited the blog post; it comes from the review blog post part, and I think this one comes from my initial blog post. Yep, as you remember, this is the initial post. So you can observe what has been done with the help of Thundra. Let me finish by publishing that post. Yeah, click publish. Yes, everything is fine. Let's publish this post. Now we can see it under the published blogs. It may take a while; let's try it again. Yep. Here you can see the post; this is the published state of that post. So when we go to the Thundra APM functions list now, yeah, this is the new thing: the 'publish blog post' function got called. So we captured that, and we have just changed the state of the post to published.
And when we go to the trace of that invocation, yeah, as well as the previous attempts, the third one is also added here. This one came from the publish action that we just did.

9. Thundra Foresight and Thundra APM

Short description:

This is a quick review of our application and the issue causing latency in the blog post search. The delay is not related to the application or Thundra itself, but rather to the spinning up of a new Docker container when triggering an endpoint. The delays in the UI are not caused by the application logic or the Thundra agent. These are the Lambda functions we will be testing today. Thundra Foresight has a playground environment with selected open source projects integrated. It provides information on test runs, specific tests, and performance. Thundra APM allows for a deep-dive analysis of specific tests and provides logs and performance data. The trace map button in Thundra APM shows the trace chart and spans for the specific test, allowing integration with other services.

Yeah, actually, this was a quick review of our application. Sorry for interrupting, just to clarify: you may have noticed that there is some latency while searching the blog posts here. In fact, the issue is not caused by the application itself, or by Thundra itself. The reason is that when we trigger an endpoint, LocalStack spins up a new Docker container to run the serverless application, to run the Lambda function. So that is why we see some delays after we push a button to trigger a flow; they are not related to the application logic itself or to the Thundra agent. Okay. And as you can see, you can also check the duration of that invocation here. Thank you for the clarification. So, as my last words here, these are the Lambda functions we are going to test today; you will see these a lot, with these traces. This was a small intro; we will see these topologies a lot today in the following examples, especially with the tests. So, thank you for that part. Ozan, maybe you can continue with the Thundra Foresight explanation. So, let me talk about what Thundra Foresight is. Before I deep dive into the concepts we are solving, I want to show you our Foresight playground environment; my colleagues can share the links with you as well.
In this environment — which is public to everyone — our team has carefully selected a few open source projects, a curated list of about seven projects that we integrated with Thundra Foresight. Apart from those, on the homepage you will see many more open source projects that we use to populate the data in this environment. For example, we are going to use LocalStack a lot today, so if we go to the LocalStack test page, I will show you a very simple test case and how it looks in Thundra Foresight and Thundra APM. Let's select this one, which has a couple of tests in it. Let me give a quick description of what's happening on this screen. This is your project's test run overview page: if you run your tests locally or in your CI environment, each workflow run as a whole shows up here, and you will see the execution time, how many tests passed, failed, or were skipped, and which branch it ran on — all the metadata you need to understand what's going on. Let's select a more complex test run to see what these blocks are doing. On the left side you have all the test suites, separated from each other, and within each one you see the specific tests. Here you will also see the most erroneous tests, the slowest test suites, and the slowest specific tests. This information board gives you the ability to take action: if your CI runs are too slow, or if you have a faulty test, you will see it immediately.
For example, clicking this test suite, we also have the information for the before-all/after-all and before-each/after-each hooks. Let's say I want to check one of these tests — now we are in the deep dive, right into the specific test that is failing by throwing a Redis connection exception. We see the logs here, which are caught by the Thundra agent, and the stack trace of the error. We also have a couple of tabs here. Another important one is the Performance tab, which shows a histogram of the recent runs within your data retention — say the last ten runs — so you can see how long the specific test took and whether it failed or passed in the past. This is quite helpful for catching the performance effect of your changes. And here is the Logs tab: if you have any logs for that specific test, you will see them there (in this case we don't have any). For the screenshots tab, I have a very good demo that we will come to later: if you have a front-end test with Selenium or BrowserStack, the screenshots you record during your test will show up here. Now let's see how it looks in Thundra APM.
Hoping this is an integration test and not a unit test — on the left-hand side we see a Trace Map button. Since you use one account for all three products, you can click this button and go directly to the APM page for this specific test. We clicked it, and you see the trace chart and the spans for this test and what's going on inside them. If this test is integrated with other services, you will be able to see them here, which we'll show in the upcoming demos with our sample project. But let's go back to LocalStack, where I know there is a very simple test with an SQS queue attached to it. This test passed in the last ten runs or so, and if we click the Trace Map button, we see that this specific test is actually talking to AWS SQS.

10. Thundra's Solutions and Features

Short description:

We are solving difficult problems related to distributed systems and serverless environments. Thundra provides distributed tracing for the Java, JavaScript, and Python runtimes. By integrating your Lambda functions with Thundra, you can visualize the trace map and spans on the dashboard. We also offer time travel debugging, which allows you to record and replay invocations, making debugging serverless architectures easier. We are expanding our framework support as well: Selenium and BrowserStack are already supported for the Java runtime.

For this test, you will be able to see that interaction here, and if you click on it, you will see the tags and the logs that come with it. I hope this cleared up a lot of questions, but if you have more about our live environment or our products, feel free to ask. Since the environment is public, you can go and check out all the tests available there. Now that we have shown our products, I'm going to talk about the main concepts. At Thundra, we are solving a couple of problems that are difficult — but not impossible — to solve. We know distributed systems are hard: hard to develop, hard to maintain, hard to observe. This is especially true in serverless environments. Thundra evolved from the serverless monitoring market, but now we also serve your applications and your tests, and our distributed tracing solution originated there. Currently distributed tracing is available for all of our runtimes, but our main runtimes are Java, Node.js, and Python — Go and .NET are available too, but since most of our users are on Java, JavaScript, and Python, they drive us toward those environments, so most of our flagship features land first on these three. Let's talk about what we mean by distributed tracing. Say you want to develop an application integrated with different services, very similar to what Ilker showed before, and this is the flow.
It doesn't have to make sense; for the sake of the presentation, let's say this is what you want to do. Once you integrate your Lambdas with Thundra, you will see an almost identical trace map on the Thundra dashboard. You can't instrument managed services like DynamoDB directly — that isn't possible — but Thundra puts all the pieces together and completes the puzzle for you. You only need to integrate your Lambda functions; we catch inbound and outbound requests and fill in the rest. Along with the trace map, we also have a trace chart where all the spans are visible. This is one of the main concepts we're trying to solve at Thundra. I hope there aren't any questions so far. Let's continue with the second concept, which is quite exciting: chaos engineering. It isn't something you get the chance to do every day, but it saves a lot of pain when it's done right, at the right time and in the right place. As a practice, you run a controlled experiment on your distributed system: you simulate a failure, inject latency, create a false network partition, and see how the whole architecture behaves. When you do these experiments in a controlled environment while observing them, you might catch big failures before they ship to production, saving a lot of pain for a lot of people.
Distributed testing is something we've shown so far, and we'll also show chaos engineering in our upcoming demos. Our main differentiating feature is time travel debugging, which has two parts. The first is recording the invocations — the requests coming to your services, your Lambdas. The second is replaying them within your retention time: you can revisit your invocations and play back the whole snapshot of an invocation, watching the variables and method states change. You get live-style debugging of something that already happened in the past. The reason we spent so much time on time travel debugging is that, as I mentioned, we come from the serverless market, and debugging serverless architectures is quite difficult — reproducing a tricky error that happens out of your sight is nearly impossible. With time travel debugging enabled for your services, when an error happens it is already captured, and you will see it on Thundra. We will demonstrate this feature as well, so you'll see how it works. Okay, so let's talk about the future and what's ahead for Thundra Foresight and Thundra as a product. There are two main things. The first is the Screen Debugger, which I already showed you in the screenshots tab; we are currently working on expanding the frameworks we support. For example, Selenium and BrowserStack are already supported for the Java runtime, and for the JavaScript and Node.js runtimes these framework integrations are on the way.
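The record-and-replay idea behind time travel debugging can be sketched with a simple wrapper that snapshots each invocation's input, captured locals, and outcome. Everything here (`recorded`, `capture`, the snapshot shape) is a hypothetical illustration of the concept, not Thundra's implementation:

```javascript
// Sketch of "time travel debugging": record each invocation's input and
// captured state so a failure can be replayed and inspected later.
const recordings = [];

function recorded(name, fn) {
  return (event) => {
    const snapshot = { name, event: JSON.parse(JSON.stringify(event)), steps: [] };
    // capture() notes the local variables at a given point in the handler
    const capture = (point, locals) => snapshot.steps.push({ point, locals: { ...locals } });
    try {
      snapshot.result = fn(event, capture);
      return snapshot.result;
    } catch (err) {
      snapshot.error = err.message; // failures are recorded, not lost
      throw err;
    } finally {
      recordings.push(snapshot);
    }
  };
}

// A handler instrumented with capture points.
const handler = recorded("review-blog-post", (event, capture) => {
  const state = event.state;
  capture("validate", { state });
  if (state !== "submitted") throw new Error("blog post must be in submitted state");
  capture("transition", { state: "reviewed" });
  return { state: "reviewed" };
});

try { handler({ state: "submited" }); } catch (_) {} // misspelled state fails

// "Replay": the past invocation's error and locals are still available.
const replay = recordings[0];
console.log(replay.error);           // the recorded failure message
console.log(replay.steps[0].locals); // locals at the "validate" point
```

The value is in the `finally` block: the snapshot is kept whether the invocation succeeded or failed, so an error that happened hours ago can still be stepped through.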

11. Thundra APM Screen Debugger

Short description:

We are working on an upcoming release in November or December. We have a screen debugger feature that captures screenshots and recordings for front-end tests. It integrates with BrowserStack to trace commands and correlate front-end and back-end operations. You can visually debug the application and see how it looked during the test execution.

I don't know if we have an exact release date, but I hope it will be available at the end of November or in December. Let's see how it will look — I have a quick screenshot demo for screen debugging. In this picture, we captured screenshots and screen recordings for our front-end tests, which can be seen on the Thundra APM dashboard. We integrate with BrowserStack to trace the commands and capture screenshots, and we can correlate front-end operations and screenshots with the back-end operations triggered by client actions. With the screen debugger, you can visually debug the application from the end user's perspective and see how it behaved during the test execution.

12. Thundra Screen Debugger and CI Pipeline Monitoring

Short description:

You can use the Thundra screen debugger feature to see how your application looked during the test execution. We also have a working example of the screen debugger using BrowserStack. CI Pipeline Monitoring is a feature we are working on to make things easier for developers: it will let you enable Thundra Foresight with a few clicks. You install the GitHub application, and Thundra Foresight hooks itself into your workflow runs, providing a dashboard with information from different repositories and CI providers. We are excited about this feature and its capabilities. The architecture page shows how the Lambdas in the example interact with each other. After that, we will use Foresight to demonstrate the sample tests.

So you will be able to see how the application looked during the test execution. Here you can see one request highlighted in blue, and then the state transitions into the processing state, highlighted in green — or rather the finished state; yes, the finished state — and then it transitions into the processing state. This is an example from a test failure. Basically, you can see how your application looks from the end user's perspective by using the Thundra screen debugger feature.

Okay. Sorry — sorry, Ozan, again. Not at all. Thank you, Serkan. I actually have a different demo for BrowserStack. I believe you can see my screen now. Here is a working example of the screen debugger. We used BrowserStack here, which is available for the Java runtime, and you can see a sample to-do app, its stages, and how this test ran. On the right-hand side there is a "See on BrowserStack" link that came from this test run. When you click it, you are redirected to the BrowserStack dashboard, where all the front-end test recordings are available — you can play them there and use all the capabilities of that framework, integrated seamlessly with Foresight. This is a good example of how the screen debugger is going to work. I hope that clarifies things; if you have any questions, feel free to ask.

Let's go back to the second part of our feature rundown. CI Pipeline Monitoring is something we are also working on, which will be available at the end of November. Currently you need to instrument your code, change a lot of configuration, and change your CI pipeline to enable Thundra Foresight to see your tests. We are trying to make this easier for our users and for developers: when this feature is released, you will be able to enable Thundra Foresight with a few clicks. You install the GitHub application, grant the permission, and Thundra Foresight hooks itself into your workflow runs — you will see runs from different repositories, multiple sources, and multiple CI providers in one dashboard. Unfortunately I don't have a demo, since it's still in development, but I have a few screenshots of roughly how it will work. Here we have three different repositories with the latest five workflow runs' information — average execution time, success rate, and so on — as well as the test run information parsed from your results file. Within these repositories we also have different charts, and with this granularity you will see a lot of information for each workflow run. One step further, you will see the duration and success-rate charts and all the past runs for a specific workflow, as well as the test runs linked from it. So you can enable Thundra Foresight by code together with CI pipeline monitoring, and they all work together. One step further still, you will see the jobs and the steps for a specific run, and all the metadata available to understand the origin of that execution. This is something we are quite excited about, and I hope you will feel the same.
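The roll-ups described here — average execution time and success rate over recent workflow runs — are straightforward aggregations. The run-record shape below is an assumption for illustration, not Foresight's actual data model:

```javascript
// Sketch: compute the dashboard roll-ups from the latest five workflow runs.
// Field names (durationSec, conclusion) are illustrative.
const runs = [
  { id: 1, durationSec: 120, conclusion: "success" },
  { id: 2, durationSec: 180, conclusion: "failure" },
  { id: 3, durationSec: 150, conclusion: "success" },
  { id: 4, durationSec: 90,  conclusion: "success" },
  { id: 5, durationSec: 160, conclusion: "success" },
];

// Average execution time across the runs.
const avgDuration = runs.reduce((sum, r) => sum + r.durationSec, 0) / runs.length;

// Success rate as a percentage of runs that concluded successfully.
const successRate =
  (runs.filter((r) => r.conclusion === "success").length / runs.length) * 100;

console.log(avgDuration); // 140
console.log(successRate); // 80
```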
I think that's my time, and I hope these few minutes gave you some deep insight into Thundra as a product and its future. I'll pass it to Ilker now to start the workshop demo and run the first tests for our project. Thank you, Ozan. Let me share my screen. Actually, I want to explain something more about our APM page. We have just played with the sample application, and as a result we have these invocations and these functions listed here. I want to show one more page, the architecture page. This example actually has seven Lambdas, and from the architecture page you can see how they interact with each other in a bird's-eye view. After that, I will start using Foresight and show you the sample tests. Before that, I just wanted to show you that the application architecture actually looks like this.

13. Exploring Test Environment and First Test Run

Short description:

Our architecture page helps understand the lambdas and services. I will explain the review blog post test. Put the correct API key and project IDs in the preferences. Our first test is running, checking the project. We add a blog post and transition its state from submitted to reviewed. We encounter errors and explore how to fix them. The test appears in the latest test runs page. There is a delay before marking the test run as completed. The test has failed with an error message.

Our architecture page really helps you see what these Lambdas and services are doing. Maybe after the workshop, while you are playing with this application, you can visit the architecture page as well.

From here I want to go to our Foresight application, closing all these other tabs. First, I will revert all my changes, then check out the test-1 branch. My branch is ready, and as you see, right now we have one test, the review blog post test. I will explain the details later. The most important part is putting the correct API key and project ID here — we also explain all these details in the README files of these branches, so you can be prepared. Let's go to the preferences of the app: copy your API key and put it here, and put your project ID here. We are ready, and now I will just run the tests. Let's see... what happened? Ah, I should run `make install` first. Sorry. In the README file we also have a description of how to set up this test environment, so you can go at your own pace. Now our first test is running. When we click the project — it's still not captured here — maybe I can explain what we are doing with this test. As I said, if you remember, when I first added a blog post, its first state was submitted, and here we test that a blog post's state transitions from submitted to reviewed. In this test, we first add a simple blog post and make sure it is correctly added by looking at the search results — we search for submitted posts here. After we make sure our post is correctly submitted and in the submitted state, we try to change its state from submitted to reviewed.
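The three steps of this test — add a blog post, verify it appears among submitted posts, then transition it to reviewed — can be sketched against an in-memory stand-in for the API. The function and field names below are assumptions for illustration, not the workshop repo's actual code:

```javascript
// In-memory stand-in for the blog API, to illustrate the e2e flow only.
const db = new Map();

function addBlogPost(post) {
  // New posts always start in the "submitted" state.
  db.set(post.id, { ...post, state: "submitted" });
}

function searchByState(state) {
  return [...db.values()].filter((p) => p.state === state);
}

function reviewBlogPost(id) {
  const post = db.get(id);
  if (post.state !== "submitted") {
    throw new Error("To review, blog post must be in submitted state");
  }
  post.state = "reviewed";
  return post;
}

// The three steps of the test:
addBlogPost({ id: "1", title: "Hello serverless" }); // 1. add the post
const submitted = searchByState("submitted");        // 2. verify it's submitted
console.log(submitted.length);                       // 1
const reviewed = reviewBlogPost("1");                // 3. transition the state
console.log(reviewed.state);                         // "reviewed"
```

In the real test the same three steps go through LocalStack-hosted Lambdas, which is why each step can fail independently somewhere in the topology.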
Here we will probably encounter some errors, and we will explore together what they are and how to fix them. As you see, while our test is running, it has already started to appear on our latest test runs page, and because it is still running, you can't do much yet. This can take a while — up to two minutes — for us to detect that the test run has been completed. We are working on minimizing the delay between the test run actually completing and us detecting that it has completed; currently there can be a delay of up to two minutes before the run is marked as completed. Please go ahead. Okay, this is our first test. As you see, our test has failed with an error message, and when I refresh the page... yes, it has been two minutes; it has now been updated.

14. Exploring Failed Test and Error

Short description:

I refresh the page and see that one of our test suites has failed. The failed test is UpdateBlockStateToReview. Let's investigate further by exploring the test suite and using the TraceMap. The test consists of three parts: posting a blog, searching for the blog in the submitted state, and transitioning the blog to the reviewed state. We encounter an error during the transition. Let's dive deeper and analyze the error.

I refresh the page, and as you see, we have one entry under the failed tests here, but we still have to wait a little more while some things are sorted out in the background — in a minute it should be ready to be clicked, and then we will explore what failed. If you couldn't keep up with the first part with our front end, don't worry: we have three sample tests, and this is the first one. We won't go back to our front-end application, so there is still time for you to get started by creating a project and checking out this branch after cloning our repo. The only things you have to do are put the API key and project ID in the Makefile, then run `make install` first — don't make the mistake I did. After that, when you run `make test`, you can replicate what I have been doing. Let's refresh... now our test is ready. We see that it failed, so let's jump in. This is our test run overview page: one of our test suites has failed, and we see it here, so let's go deeper. This is our test suite, the review blog post test file, and under it we have one test, UpdateBlockStateToReview, as you can see from its name. That test has failed, so we wonder what happened to our project and why. Let's go one step deeper — you can observe more with these tabs — and finally, let's go to the Trace Map and see what the issue is. As I explained before, the test has three parts. First, we post a blog post — let's click and explore; I don't remember the fields right now... yes, here they are. First, we post a sample blog, and these are the fields.
After that, we searched for our blog post to see whether it is available among the submitted blogs. Ilker, sorry for interrupting — let's give a five-minute break after your example is finished. Sure — I think I have at most ten more minutes; after that, we will take a five-minute break. Is that okay? Sure, cool. Let's complete the first example and then break. So, as you can see, we searched for our blog post in the submitted state, and there is no problem there. But finally, when we try to transition the state of that submitted blog from submitted to reviewed, we get an error, and we can follow the error from here. We got an error — what is it? Let's go deeper.

15. Troubleshooting Blog Post Review Error

Short description:

We encountered an error in the review blog post part. The issue was with the initial state of the blog post, which was misspelled in the DynamoDB query. Thanks to Thundra, we were able to quickly identify and fix the bug; without Thundra, it would have been challenging to pinpoint the exact problem. Now I will rerun the test, but it may take a few minutes. Feel free to ask for assistance if you encounter any issues.

Let's go deeper. We get an error here. So tell me more — what could have gone wrong? It says: "To review, blog post must be in submitted state." So maybe we have an issue with the initial state of the blog post. From here we can see that it looks up the state in DynamoDB as "submited" — with a single t, when it should be a double t. Then we can look at our API code to see what is wrong, and here we have prepared it for you: this is exactly the thing that is wrong; it should be written with a double t. So we caught the bug. Without Thundra, it could have been very hard to figure out which part was wrong: I know that the failing part is the review blog post part, but without seeing this topology, I might have started looking at the replicator or the search blog post function — it could have been very confusing to decide where to start. I fixed the issue here, and now I just rerun the test. It could still take a couple of minutes. In the meantime, if you have any issues running the test, you can always ask us, and we are happy to help you make it work.
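The class of bug we just fixed — a state string hard-coded with a typo in one place — is easy to reproduce and easy to prevent with a shared constant. The store shape and function names below are illustrative, not the workshop repo's actual code:

```javascript
// A shared constant avoids the "submited" vs "submitted" mismatch entirely.
const BlogPostState = { SUBMITTED: "submitted", REVIEWED: "reviewed" };

// Buggy version: queries the store with a misspelled string literal.
function findSubmittedBuggy(items) {
  return items.filter((p) => p.state === "submited"); // single "t": finds nothing
}

// Fixed version: uses the shared constant everywhere the state is referenced.
function findSubmittedFixed(items) {
  return items.filter((p) => p.state === BlogPostState.SUBMITTED);
}

const items = [{ id: "1", state: "submitted" }];
console.log(findSubmittedBuggy(items).length); // 0 — the failing test's symptom
console.log(findSubmittedFixed(items).length); // 1
```

Note how the buggy query fails silently — it returns an empty result rather than throwing — which is exactly why the trace map was needed to locate which component in the flow was at fault.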

16. Exploring Test 2 and Chaos Testing

Short description:

In the second part of the workshop, we will use the distributed tracing, chaos engineering, and time travel debugging features of Thundra to troubleshoot test failures. The topology provided by Thundra helps identify problematic components in the flow. The latest run of our test is successful, and there are no issues in the trace map. Now, let's move on to test 2 by checking out the TestJS Summit 2021 Test 2 branch. We will run an add-blog-post chaos test to observe the behavior of our architecture under chaos conditions.

Okay — the rerun is now visible; let's check it. You can see the current run is still going, and I can also show it from here. Yes, this next one is still running, as I expected. Maybe while we are waiting for the test to complete, I can talk about the second part of the workshop. In the second part we have two more end-to-end test examples, and we are going to use the distributed tracing, chaos engineering, and time travel debugging features of Thundra to troubleshoot the remaining test failures. The time travel debugging feature might be especially interesting for you: it lets you trace your application line by line and replay invocations to reproduce the issue after you discover the test failure. Our example is very simple, but even this simple example includes five Lambda functions talking to each other and working together — so think about a harder case, say 20 Lambdas. You need to test how these Lambdas work together, and in that case it would be really hard to know where to look if you do not have a picture like this. The main point is that this topology really helps you understand where to look. Can you open the trace map here? Let's say there is an issue with the blog post replicator Lambda function, which is not directly triggered by the test itself.
In the test itself, we will see that the review-blog-post or post-blog-post Lambda function completes successfully, but there is an issue with an indirect Lambda function — like the blog post processor or the blog post replicator — which prevents the blog post from being saved to DynamoDB, or to Elasticsearch for later retrieval and search. In this case it would be hard to find the problematic component in the flow without a proper distributed tracing solution like Thundra. We need a couple more seconds. Is the test still running? No, the test has succeeded — we can see it from here, and I can also show it here. It passed, but we need a couple of minutes to sort out the background processing — just a couple of seconds, maybe. This test actually took around 250 seconds, which may sound long, but it depends on my local machine and LocalStack, so it could be improved. Right, Serkan? Yes — most of the time was taken by LocalStack's spin-up process to start the LocalStack instance. So, as you can see, we are good with the latest run of our test; there is no error now. When we click through, our previous run has an error and our latest run is a success, and when we click its trace map, there are no issues. We are good with our tests; everything is correct. By the way, sorry for interrupting, Ozan — did anyone have any issues with test 1, the previous example, running the test itself? Do you need help with it? Okay, I think we are good to go; if you have any questions or issues while running the previous example, just let us know. Okay, awesome, no worries. So for test 2 we need to check out the TestJS Summit 2021 Test 2 branch, which I've already done.
I've set the API key and the project ID according to the project Ilker created, so we are using the same credentials here. Let's see what we have: an add-blog-post chaos test. In this scenario, we inject a chaos error into our architecture and see how it behaves when we run the test. Let's make sure Docker is running, and since I'm using AWS CLI v2, I have to enable the Python virtual environment, which I'm going to do now with the following commands. Let's create and activate the virtual environment, and then I can run `make install`. I hope this font size is readable — running `make install` installs the npm dependencies as well as the Serverless Framework and LocalStack packages we need for this test.

17. Running Chaos Tests and Exploring Chaos Engineering

Short description:

Let's run our chaos tests and discuss Thundra's chaos engineering support. Chaos engineering simulates potential production failures so applications can be tested against them beforehand. Chaos can be created through network operations, or by injecting failures and delays at the client side. Thundra provides the ability to inject failures and latencies at the service, resource, and operation level, and offers various configurations for its chaos engineering support. With Thundra, you can inject failures without changing application or network configuration: the Thundra agent and its programmable configuration let you specify the services and resources for chaos injection, avoiding application-level code changes. This helps simulate potential failures and delays before deploying to production. Let's check the test results in APM.

Let's wait a couple of seconds to make sure all the packages installed successfully. Okay, now we can run `make test` to run our chaos test. Let's run it and talk about what we are checking here, which is almost identical to test 1. The differences are that we skip the review part, and we also inject chaos here. While this is running, I'm going to log in to our Foresight dashboard with Ilker's account. Let's see what I did wrong here... Ozan, not Gmail — it should be Thundra. Oh, right, sorry — force of habit. Thank you for pointing that out. On to the Foresight app, and we already see the test here; it's processing. Maybe while the test is running, we can talk about the chaos engineering support. The motivation is that we want to simulate potential failures that can happen in production — third-party API failures, database failures, latency or delays — and test our applications against them before going to production. There are two approaches to injecting chaos into an architecture. The first is creating chaos through network operations: dropping the network connection between the services, the applications, the database, and so on. The other approach is injecting the failures and delays at the client side, as we do at Thundra. Since we already have the ability to instrument and trace calls to third-party APIs, services, and databases, we can also inject delays and failures into the communication with the outside world. Here you can see that we are injecting an error into the operations going to the Elasticsearch service. We have the ability to inject failures and latencies at the service level, at the resource level (like a table name or a queue name), and at the operation-type level. For example, we can say: inject latency only into write operations to the users table in DynamoDB, or only into write operations to the test queue in AWS SQS, and so on. We can also combine different criteria for the injection. Here we use a very simple configuration that injects failures into all operations to Elasticsearch, but we could narrow the scope to only specific Elasticsearch indexes, or only specific operations like reads, writes, or deletes. So there is a range of configurations available in Thundra's chaos engineering support. Thank you, Serkan. Here we are injecting at 100 percent, so every time we run the test, this configuration will inject the failure into our operations. Our test has failed, as we expected: it says it expects one blog post and isn't getting any. Let's see how it looks on Thundra — we just need to wait a little to make sure the run is closed. The good thing about Thundra's chaos engineering support is that you don't need to change your application-level or network-level configuration just to inject latencies or delays. Using the Thundra agent and its configuration — set programmatically or declaratively through environment variables — you can specify the points, services, resources, and operations where chaos is injected, without making any application-level code changes and without messing up your codebase just to inject failures or delays. This is another good reason to use Thundra's chaos engineering support to simulate potential failures before going to production. Let's check again — it failed. Let's see how it looks in APM: click on the detail page, we see the error message, and once we click the trace map button — sorry, it just logged me out. Sorry about that. Let's try again.
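Client-side chaos injection as described above can be sketched as a wrapper around outbound calls that fails some of them according to a configuration. The configuration shape and function names here are illustrative assumptions, not Thundra's actual configuration format:

```javascript
// Sketch of client-side chaos injection: wrap an outbound call and, per
// configuration, fail a percentage of matching calls.
const chaosConfig = {
  service: "elasticsearch",   // inject only for this service...
  operation: "*",             // ...for any operation type
  injectionPercentage: 100,   // 100% => every matching call fails
  error: "Chaos injection: elasticsearch unavailable",
};

function withChaos(service, operation, fn, config = chaosConfig) {
  return (...args) => {
    const matches =
      config.service === service &&
      (config.operation === "*" || config.operation === operation);
    if (matches && Math.random() * 100 < config.injectionPercentage) {
      throw new Error(config.error); // injected failure, no app code changed
    }
    return fn(...args); // non-matching calls pass through untouched
  };
}

// Wrap a (fake) Elasticsearch search call and a (fake) DynamoDB scan.
const search = withChaos("elasticsearch", "read", () => [{ id: "1" }]);
const scan = withChaos("dynamodb", "read", () => "ok");

let failed = false;
try { search(); } catch (e) { failed = true; console.log(e.message); }
console.log(failed);  // true at 100% injection
console.log(scan());  // "ok" — config only targets elasticsearch
```

Narrowing the scope — say to only write operations on a specific index — is just a matter of tightening the `matches` predicate, which mirrors how the resource- and operation-level criteria described above combine.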
For example, we can say: inject latency only into write operations to the users table in DynamoDB, or only into write operations to the test queue in AWS SQS, and so on. We can also combine different criteria for the injection. Here we are using a very simple configuration that injects failures into all operations against Elasticsearch, but we could narrow the scope down to specific Elasticsearch indexes, or to specific operations like reads, writes, or deletes. So there is a wide range of configurations available for Thundra's chaos engineering support. Thank you, Serkan. Here we are injecting at 100 percent, so every time we run the test, this configuration will inject the failure into our operations. And our test has failed, as expected: it says it expects one blog post and isn't getting any. Let's see how this looks in Thundra — we just need to wait a little bit to make sure it's closed. A good point about Thundra's chaos engineering support is that you don't need to change your application-level or network-level configuration just to inject latencies or delays. Using the Thundra agent and its configuration — set programmatically or declaratively through environment variables — you can specify the points, services, resources, and operations where chaos should be injected without making any application-level code changes, so you don't mess up your own codebase just to inject failures and delays. That is another benefit of using Thundra's chaos engineering support to simulate potential failures before going to production. Let's check again — it failed. Let's see how it looks in APM. Click on the detail page, we see the error message, and once we click the test map button... sorry, it just logged me out. Let's try again.
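To make the client-side injection idea concrete, here is a minimal sketch of a chaos wrapper around an outgoing call. This is NOT Thundra's actual API — the `withChaos` helper, the config shape, and the error message are all illustrative assumptions; Thundra configures this declaratively through its agent instead.

```javascript
// A minimal sketch of client-side chaos injection (illustrative, not Thundra's API).
const chaosConfig = {
  injectionPercentage: 100, // 100 => always inject, 0 => chaos disabled
  delayMs: 0,               // optional extra latency before failing/continuing
  errorMessage: 'Chaos error: Elasticsearch is unreachable',
};

// Wrap an outgoing call (e.g. an Elasticsearch request) and, with the
// configured probability, inject a delay and/or a failure instead.
async function withChaos(operation, config = chaosConfig) {
  if (Math.random() * 100 < config.injectionPercentage) {
    if (config.delayMs > 0) {
      await new Promise((resolve) => setTimeout(resolve, config.delayMs));
    }
    if (config.errorMessage) {
      throw new Error(config.errorMessage);
    }
  }
  return operation(); // no chaos this time: run the real call
}
```

Note how setting `injectionPercentage` to 0 — as the demo does later — turns the chaos off without touching the wrapped code at all.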

18. Troubleshooting Intermittent Test Failure

Short description:

In this example, we simulate a failure scenario in our architecture to demonstrate how we approach and resolve such situations. By injecting a failure condition into the Elasticsearch instance, we can observe the error and use Thundra to troubleshoot it. We can easily fix the issue by adjusting the configuration and rerunning the test. Note that we are using a shared Elasticsearch instance, but each attendee writes to a different index, ensuring data separation. Feel free to try the tests at your own pace, as the workshop will be available for later viewing. In the final example, we demonstrate Thundra's time travel debugging feature by troubleshooting an intermittent test failure. We explore the flow of a successful test execution and contrast it with a failing one. By leveraging Thundra Foresight and APM together, we can pinpoint the cause of the failure and resolve the issue. Let's switch back to the IDE to continue.

Okay. So, as you can see here, our whole architecture is shown with the test. Unlike the previous test, we didn't do any reviews: we posted a blog post and expected to see it in our Elasticsearch instance, and we see the error there in this PostReplicator lambda. If we click on it, we see that this is actually a chaos error that we named like this, along with its messages and all the details that come with it — of course, you will also see all the trace and request details. This might not feel like a real bug, but that isn't the point. The point is that in this architecture, writing into the Elasticsearch instance sometimes fails, and when it fails, how are we going to approach that situation? This mimics that exact failure scenario in a controlled environment. So let's quickly fix it. Since we made no code changes and declared the chaos through an environment variable — this configuration lives outside our architecture and is passed into the lambda — we can simply set the injection percentage to 0, and the call will succeed every time. You can also give complex conditions here as well. Let's run the test again; this will take a couple of minutes too. About the timing of the tests: since we are using LocalStack on our machines, as Ilker said, it really depends on your machine's resources, and since LocalStack creates all the Docker instances when it's triggered, it may take a couple of minutes to spin up and become available for our tests. Since there are many Lambda functions, it is highly recommended to give more CPUs to Docker so the applications can run and the tests execute properly. By the way, I forgot to mention: you can see that we are using an Elasticsearch instance here, but we are not creating it just for this test.
We have created a public Elasticsearch instance just for the workshop, so everyone can use it — the key point is that every attendee writes data to a different index on that shared instance. You don't need to worry about conflicting with other attendees' data, because we create a separate Elasticsearch index for each attendee. Just run make test and the application will use the specific Elasticsearch index assigned to you. You can also try this at your own pace, on your own time; this workshop will be available to watch later. As you can see, our test has passed — we simply disabled the chaos injection, which is the point of this stage. We can go back to our dashboard, and we see that there is one successful test. Once it finishes, we see the trace map with all the lines — all green now. To make sure, we check the test map with the previous runs, and in our APM we can see that everything is working just fine. I hope this example gave you some idea of how we implement chaos injection, what chaos engineering is in general, and how to integrate it into your tests and everyday tooling. I'll pass it to Serkan now and stop sharing. Thank you, Olsan and Ilker, for the demo. Should I wait a few minutes for everyone to try test one or test two before continuing with test failure three? In the meantime, let me share my screen. I will wait a few minutes here before continuing with example three — just ping us if you hit an issue while running the previous test failures, test one or test two. Okay, let me continue with the final example.
In this example, we are going to use our famous time travel debugging feature to troubleshoot an intermittent test failure. We call the failure intermittent because the test fails only for some inputs, not for all of them — sometimes it passes and sometimes it fails — so it is not easy to make the test fail on every run. Here you can see the flow of a successful test execution. We trigger three API Gateway endpoints: first we send the blog post message to the API Gateway endpoint, then the post blog post lambda function sends it to an SQS queue, which triggers another lambda function that writes the blog post to the DynamoDB table; the item is then replicated to Elasticsearch so it can be searched later. After that, we are able to get and search the blog post through the other API Gateway endpoints. That is the successful test execution. On the other hand, we also have a failing test execution. In that scenario, we post a blog post to the API Gateway endpoint and then try to get and search the blog post item — but unfortunately, we are not able to find it via search in our test. So we are going to use Thundra Foresight and APM together to pinpoint the reason for the issue. Let me switch back to my IDE.

19. Getting Tundra API Key and Test Project IDs

Short description:

Get the Thundra API key and project IDs from the Thundra Foresight console. Copy them into the Makefile so the Thundra Foresight and Thundra APM agents can use them. Run the test and analyze its details.

Get the Thundra API key and test project IDs from Thundra Foresight. Let me switch to the Thundra Foresight console to show you how to get the API keys. There's an issue with my Chrome, but I'm using the same credentials as Ilker and Olsan. In the Thundra Foresight console, I can see a test project created by Ilker. In the project preferences, you can find your API key and project ID. We'll copy these values into our Makefile for the Thundra Foresight and Thundra APM agents to use during the tests. I've copied the API key and project ID and switched back to my IDE. You can see that I've added them as attributes. Now, let's run the test and discuss its details.

20. Running and Troubleshooting the Test

Short description:

In the test, we create a blog post by triggering the blog post endpoint and verify that it is saved to DynamoDB. We then search for the blog post using the search endpoint. Because the flow is asynchronous, we check for verification periodically. The aim of the workshop is for the test to fail so that we can troubleshoot it using Thundra Foresight and APM. We identify a missing Elasticsearch interaction in the blog post replicator: the test fails because the saved blog post cannot be found through the search endpoint.

And then I'm going to run the test itself — but first, while the test is running, I should talk about its details. Before that, you should switch to our test three branch; let me share the branch name through the Slack — sorry, through the Discord channel.

Okay. So, let me start the test, and then I'll talk about what we are doing inside it while it runs. In this test, we do similar things to the first two examples: we start LocalStack before the test and shut it down right after. Inside the test itself, we first create a blog post by triggering the blog post endpoint. In the second step, we verify that the blog post was saved to DynamoDB by fetching it by its ID. Once step two has passed, we are sure the blog post was saved to DynamoDB and that we can retrieve the item. Then, in the third step, we try to search for the blog post item by calling the search endpoint. This endpoint searches for the blog post through Elasticsearch, and since we saved the blog post successfully in step two — verified by fetching the item by its ID — we expect the blog post to also be searchable here. You may also notice that we run the checks periodically. Since the flow is asynchronous and event-driven, calling the first endpoint simply returns an accepted status code; that doesn't mean the blog post has been processed and saved to storage. It just means the blog application received the request and sent it to the queue to be processed later. That is why we verify periodically, waiting for the check to pass eventually. In the third step, as I said, we expect the blog post to be searchable through the search endpoint, and then we finish the test.
Normally, all three steps should pass: we submit the blog post, verify that it can be retrieved, and then check whether the end user can search for it. The test is still running; in the meantime, let me see which tests run. Here, of course, we expect the test to fail — that is the aim of the workshop. We fail the test, use Thundra Foresight and APM to troubleshoot the failure, fix it, run the test again, and verify that it passes. Let me switch back to my IDE — the test is still running — and back to my presentation. As you can see, in the failing test execution scenario there is a missing Elasticsearch interaction on the blog post replicator side. We will see this behavior in a few minutes by checking the Thundra Foresight console, but it is already clear that, somehow, the blog post is not being indexed into Elasticsearch — which is why we cannot search for it. Normally, whenever we write to the DynamoDB table, the blog post should also be mirrored to the Elasticsearch instance to be searchable later. So the only difference between the successful and failing test executions is the missing Elasticsearch interaction in the blog post replicator — let me zoom in — yes, a missing Elasticsearch interaction from the blog post replicator lambda function. Okay, let me go back to my IDE; I hope the test has completed. Not yet — it is still running, and we are periodically logging the collected API responses here.
This is the data returned from the API: the response from the get blog post URL and from the search endpoint as well. When we get the search result, we expect there to be exactly one item, because we posted only a single blog post to the post endpoint. We then verify that the properties of the returned blog post match the properties of the blog post we posted in the first step. Okay, the test has failed. It says we expected one blog post item in the response, but there is none. This means we are not able to find the saved blog post through the search endpoint — yet since step two passed, we know we can get the blog post by its ID; we just cannot find it via search.
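The periodic verification described above can be captured in a small retry helper. This is a sketch of the pattern, not the workshop's exact code — the `eventually` name and the timing defaults are illustrative assumptions:

```javascript
// Retry an assertion until it passes or a deadline expires. Needed because
// the POST endpoint only acknowledges the request; the data flows through
// SQS -> Lambda -> DynamoDB -> Elasticsearch asynchronously afterwards.
async function eventually(checkFn, { timeoutMs = 30000, intervalMs = 1000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  let lastError = new Error('condition never checked');
  while (Date.now() < deadline) {
    try {
      return await checkFn(); // assertion passed: the async flow caught up
    } catch (err) {
      lastError = err;        // not yet: wait and retry
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
  throw lastError; // surface the last assertion failure on timeout
}
```

Step two and step three would then both be wrapped in this helper, e.g. `await eventually(async () => assertBlogPostExists(id))`, so the test tolerates propagation delay but still fails on a genuine bug.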

21. Troubleshooting Test Failure and Analyzing Code

Short description:

There seems to be an issue with synchronizing the DynamoDB content to Elasticsearch, resulting in the inability to search. By analyzing the trace map, we discovered a missing Elasticsearch interaction. The Lambda function responsible for replicating the blog post skips writing to the Elasticsearch instance due to a condition check. This condition check incorrectly handles the partition number zero, causing the failure in searching the blog post. Through time travel debugging, we traced the test execution line by line and observed the values of local variables. We identified the cause of the failure and the specific code responsible.

So there must be an issue while synchronizing the DynamoDB content to Elasticsearch for search. Let me go back to the Thundra Foresight console. It says the test is still running, because it takes some time for it to register that the test has completed. Okay — the test has completed with a failure, and since we have only a single test, there is one failed test here. When we click the test itself, we can see the same test failure message and the logs printed during the test execution. And when we click the trace map here — sorry.

Okay, so you can see that first we trigger the post endpoint, which sends a message to the process blog post queue; this triggers the blog post processor lambda function, which writes the data to the DynamoDB table; and that in turn triggers the replicator lambda function — but there is a missing Elasticsearch interaction here. Normally we would expect the Elasticsearch put request to be there. To understand the behavior of the lambda function itself, we click the blog post replicator lambda function, and you can see there is a debug item in these method spans. When you click the method, you can see the actual code of the selected function, and using our time travel debugging feature you can play forward and backward through the execution, observing the values of the variables on the right-hand side. For example, at the beginning of the saveBlogPostToIndex method, we can see the content of the actual blog post item, and then we step forward. Here we are making an optimization: instead of using a single Elasticsearch index, we write blog posts to different indexes. We generate a hash code from the username, take its modulo by the total partition count, and map each blog post, by its username, to a specific Elasticsearch index. So rather than a single index, we distribute blog posts across different Elasticsearch indexes per user, spreading the load and reducing the indexing overhead on any single Elasticsearch node.
So we get the blog post username — John Doe — then calculate the hash code (you can see the calculateHashCode call here), then take its modulo by the total index partition count, and we calculate that our partition number is zero. When we step forward, we can see that execution branches into the else statement. The reason is that even though the partition is not undefined, when the partition is zero the if condition evaluates to false, so we branch into the else statement. This condition check should not be written like this — it should say something like "partition is not equal to undefined". As written, the check also ignores and skips partition number zero. That is exactly what happened: we calculated the partition as zero, so we should have written to the partition-zero index, but because the condition evaluates to false, we branch into the else statement and skip the write to the Elasticsearch instance. That is why the blog post cannot be found through Elasticsearch, and that is why the test failed. We can also trace the test itself line by line. When we go to the blog post test here, you can see spans with a debug item. When I click one of them, you can see the test code itself, and as we step through the test execution, I can see the values of the local variables on the right-hand side. I'm building the URL, and you can see the URL of the blog post here; going forward, this is the result, and these are the headers.
You can observe all the properties from the captured snapshots of the variables taken while the test executes. We keep stepping forward like this, and here we are running another anonymous function; when we jump into it, we can see the actual deadline here.
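The falsy-zero bug walked through above reduces to a few lines. The hash function and partition count below are illustrative stand-ins for the workshop's code; the point is the truthiness check:

```javascript
// Map a username to one of N Elasticsearch index partitions.
const INDEX_PARTITION_COUNT = 10;

function calculateHashCode(str) {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = (hash * 31 + str.charCodeAt(i)) | 0; // simple 32-bit string hash
  }
  return Math.abs(hash);
}

function getPartition(userName) {
  return calculateHashCode(userName) % INDEX_PARTITION_COUNT;
}

// The buggy guard: a plain truthiness test. For partition 0 the condition
// is false, so the item silently skips indexing -- the intermittent failure.
function shouldIndexBuggy(partition) {
  if (partition) {
    return true;  // write to the partition's index
  }
  return false;   // skipped -- even though 0 is a valid partition
}
```

This is why the failure is intermittent: only usernames whose hash lands on partition 0 — like John Doe in the demo — trigger it.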

22. Tanda Time Travel Debugger

Short description:

With the Thundra Time Travel Debugger, you can trace both your test and your application to understand what's happening. Instead of replicating the issue locally, you can capture the problematic test execution and replay it to pinpoint the issue.

We are also able to run the actual task here, and when we jump into it, we can continue from the test execution in the main test method. We just submitted a search request, and you can see the search result here — there is no data. We can go forward and backward to see the values of the variables during the test execution. So with the Thundra Time Travel Debugger, you can trace both the test itself and the application itself — as we did for the PostReplicator lambda function — to understand what's going on inside the lambda function. Instead of trying to replicate the issue locally, we capture the problematic test execution and replay it line by line to understand what's going on under the hood. Sometimes the issue is in our own codebase, not in a remote or third-party service, and that is why this kind of time travel debugging capability is so useful for pinpointing issues.

23. Configuring Environment Variables for Tracing

Short description:

In the environment variables, specify the necessary configuration for your application or test. Thundra will trace the execution and capture variables, allowing you to understand the issue without making code-level changes.

So let me switch to my IDE to see what I did to enable this capability. In the environment variables, I set two configuration definitions: one says to trace, line by line, every file under the service folder, and the other is for the test itself — it tells Thundra to trace, line by line, every method and module under the tests folder and the service folder. That is why we can see the time travel debugging traces both inside the test itself and inside the blog post service method code from the main handler. So without making any code-level changes, you can specify the required configuration through environment variables for your application and your tests, and Thundra takes care of the rest — tracing your test execution and application line by line and capturing the variables, properties, and so on. Then we can understand the issue afterwards.

24. Fixing Application Code and Examining Test Results

Short description:

Let's fix the issue in the application code by addressing the incorrect zero check on the partition number. After saving the code, we rerun the test to verify the fix. Using the Thundra Foresight console, we can examine the flow and the interactions between the applications and services. Clicking on a specific application lets us drill into its details, such as the line-by-line trace of the blog post creation and search. The test is still running, so let's wait for the results. If you have any questions or need assistance, feel free to ask. Additionally, I noticed a question in the Discord channel about the backend language: it is Node.js. All the applications and tests are implemented on the Node.js runtime.

So, for example, let's go to the function in the application itself and apply the fix. Instead of the plain zero check, we require that the partition must not be undefined; if the partition is undefined, we skip it. The partition might be undefined if, for some reason, we could not calculate the hash code — that is why the check exists — but the check itself was written incorrectly because it didn't account for the zero value, so with a partition number of zero we branched into the else statement. Let me save the code and rerun the test. When I rerun it, the test should pass, because we just fixed the bug, and then we can verify the fixed behavior using the Thundra Foresight console.
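The fix described above is a one-line change to the guard. As before, the function name is an illustrative stand-in for the workshop's code:

```javascript
// Corrected guard: check explicitly for undefined so that partition 0 --
// a valid index -- is no longer skipped, while a failed hash calculation
// (which leaves the partition undefined) is still handled.
function shouldIndex(partition) {
  // BUGGY version was: if (partition) { ... } -- 0 is falsy and got skipped.
  return partition !== undefined;
}
```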

Okay, let the test run and go back to the Thundra Foresight console. With Thundra Foresight and the APM, you can first see the whole flow by checking the interactions between the applications and services, and when you click on one of the applications, you can narrow the scope and go into the details of the application itself. For example, when I click the blog post here, I can see the line-by-line trace and the blog post message — this is the actual blog post. I create the parameters and send a message to SQS. There is also a search blog post here; going into it, only the username and the state criteria were specified for finding the desired blog post item. I build the conditions: I skip the keyword, because I did not specify a keyword in the test, and I did specify the username, so that condition is created. There is no start timestamp, so I skip it, and no end timestamp either. There is a state, of course, so that triggers another condition. Then I calculate the hash code and the index name — and since the partition is zero, it is skipped, so I search by the wildcard instead. That is the final index name, and I search for the blog post through all the Elasticsearch indexes starting with that prefix. But since the blog post replicator lambda function was not able to index the posted blog post item, because of the bug in our application, the indexing phase to Elasticsearch was skipped. Let me go back to my IDE — the test is still running — and back to the Foresight console. The test is still running with the fixed code; I expect it to pass. Let's wait for it.
In the meantime, if you have any questions or need help with the test failure examples we demonstrated, or any question about the products we have talked about, we are here to help. By the way, I have seen a question in the Discord channel: in fact, the backend is written in Node.js — the applications and the tests themselves are all written in Node.js. Maybe we missed showing that part. This is the source folder, and these are the lambda functions and the codebase; basically, all the functions are implemented on the Node.js runtime. For example, this is the serverless.yml, and you can see there are many different lambda functions here.

25. JavaScript Runtime and Time-Travel Debugging

Short description:

We use JavaScript for all the applications and tests. The Lambda functions run inside Docker containers managed by the local stack. The code runs in the NoGS runtime. We can use the time-travel debugging feature to troubleshoot issues in our codebase without making changes. Distributed tracing helps identify problematic components, but debugging is necessary to understand the root cause. Time-travel debugging allows us to trace test and application execution, pinpoint bugs, and apply fixes. Both distributed tracing and time-travel debugging are crucial for distributed applications.

These are the search blog post lambda function, the post blog post lambda function, and the others. For example, this is the blog API implementation: these are the lambda handler implementations. They are triggered from the API Gateway endpoint and pass the business logic to the service layer, which handles the logic and returns the required response or data to the API (controller) layer. So basically we use the JavaScript language for all of the application and the tests. Mario, is that the answer you were looking for? Basically, we are targeting the JavaScript runtime. Okay, let me go back to the console; we are waiting for the test to be marked as completed by Thundra Foresight. In the meantime, let me check the chat. There is one question, but Ilker answered it to the host, so maybe attendees cannot see it. For Mohammed's question about PDF export: right now we don't have an export feature, but I believe you can use a screen capture to produce a report. And on Mario's message: we are not using any Python. Python is basically used by LocalStack; we are not relying on any Python-related code. Let me open my Makefile: yes, there is a Python requirement here, but the Python installation is required for LocalStack and awslocal. We are not using those directly from our applications — we depend heavily on the Node.js runtime and JavaScript.
In fact, the test itself uses the Node.js runtime on our local machine, while the lambda functions run inside Docker, with the Node.js runtime managed by LocalStack itself. These lambda functions — for example, the service JS files and the blog API JS files and the code here — run inside Docker containers created by LocalStack. We bundle all the code, deploy the application to LocalStack, and LocalStack runs it inside Docker containers. So all the code running here — the test itself and the application itself — is a pure JavaScript codebase. Okay, let me switch to the Foresight console. The test has passed, and when I go to the trace map of the test, I should observe the correct behavior from the replicator function. You can see there is an interaction with Elasticsearch here, and when I click the replicator lambda function, I can see the Elasticsearch interaction and the body of the Elasticsearch message. When I click the debug item, I can see the fixed code: I calculated the partition for the username John Doe, and the partition number is zero. Since I now check whether it is undefined or not — guarding against potential failures while calculating the hash code — the condition is handled correctly, we branch into the if statement, and I generate the index name. You can see the index name here, with partition zero as the suffix. Then I search Elasticsearch itself; when I jump into it, I can go to the target Elasticsearch operation — this is the Elasticsearch URL.
And this is the body of the submitted payload. Here we can see how to use the time-travel debugging feature to troubleshoot failures in our own codebase, because sometimes distributed tracing is not enough to pinpoint the issue; distributed tracing only tells you which component is problematic. With the distributed tracing feature, we identified that this Lambda function is the problematic component, but without a proper debugging feature it is hard to tell what is wrong inside it. That is why we used the time-travel debugging feature to understand what is going on under the hood. With time-travel debugging, without making any code changes in our codebase, we could trace the test and application execution and see that the function was skipping the save to the Elasticsearch instance because of a bug in our own codebase. We then applied the fix, re-ran the test, and the previously failing test execution passed. So with the help of the time-travel debugging feature, we can easily pinpoint issues caused by bugs in our own codebase in our end-to-end tests. Having both parts of tracing, distributed tracing and time-travel debugging, is crucial for your distributed applications, whether on your local machine, in your CI environment before going to production, or in production itself. Both parts are important for finding issues: with distributed tracing you find the problematic component, and with time-travel debugging you find the root cause inside it.
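The partition and index-name calculation stepped through in the debug session could be sketched like this. This is a hedged reconstruction: `hashCode`, `PARTITION_COUNT`, and the `post-<partition>` index naming are assumptions inferred from the walkthrough (the username "John Doe" mapping to partition 0), not the workshop's actual implementation:

```javascript
const PARTITION_COUNT = 7; // assumed number of Elasticsearch index partitions

// Classic 31-based string hash, kept in the 32-bit integer range with `| 0`.
function hashCode(str) {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = ((hash << 5) - hash + str.charCodeAt(i)) | 0;
  }
  return Math.abs(hash);
}

function indexNameFor(username) {
  const partition = hashCode(username) % PARTITION_COUNT;
  // Guard against an undefined partition, as the fixed code in the
  // session does, before building the index name the search will target.
  if (partition !== undefined) {
    return `post-${partition}`;
  }
  return null;
}
```

The key property the debug session relied on is that the same username always maps to the same partition, so the replicator and the search path must agree on this function.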

Watch more workshops on topic

React Summit 2023
170 min
React Performance Debugging Masterclass
Featured WorkshopFree
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
React Summit 2023
151 min
Designing Effective Tests With React Testing Library
Featured Workshop
React Testing Library is a great framework for React component tests because there are a lot of questions it answers for you, so you don’t need to worry about those questions. But that doesn’t mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents:
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn’t be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites:
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn
DevOps.js Conf 2024
163 min
AI on Demand: Serverless AI
Featured WorkshopFree
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
TestJS Summit 2022
146 min
How to Start With Cypress
Featured WorkshopFree
The web has evolved. Finally, testing has also. Cypress is a modern testing tool that answers the testing needs of modern web applications. It has been gaining a lot of traction in the last couple of years, gaining worldwide popularity. If you have been waiting to learn Cypress, wait no more! Filip Hric will guide you through the first steps on how to start using Cypress and set up a project on your own. The good news is, learning Cypress is incredibly easy. You'll write your first test in no time, and then you'll discover how to write a full end-to-end test for a modern web application. You'll learn the core concepts like retry-ability. Discover how to work and interact with your application and learn how to combine API and UI tests. Throughout this whole workshop, we will write code and do practical exercises. You will leave with a hands-on experience that you can translate to your own project.
React Summit 2022
117 min
Detox 101: How to write stable end-to-end tests for your React Native application
Top Content
Compared to unit testing, end-to-end testing aims to interact with your application just like a real user. And as we all know it can be pretty challenging. Especially when we talk about Mobile applications.
Tests rely on many conditions and are considered to be slow and flaky. On the other hand - end-to-end tests can give the greatest confidence that your app is working. And if done right - can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites:
- iOS/Android: macOS Catalina or newer
- Android only: Linux
- Install before the workshop
TestJS Summit 2023
48 min
API Testing with Postman Workshop
In the ever-evolving landscape of software development, ensuring the reliability and functionality of APIs has become paramount. "API Testing with Postman" is a comprehensive workshop designed to equip participants with the knowledge and skills needed to excel in API testing using Postman, a powerful tool widely adopted by professionals in the field. This workshop delves into the fundamentals of API testing, progresses to advanced testing techniques, and explores automation, performance testing, and multi-protocol support, providing attendees with a holistic understanding of API testing with Postman.
1. Welcome to Postman
- Explaining the Postman User Interface (UI)
2. Workspace and Collections Collaboration
- Understanding Workspaces and their role in collaboration
- Exploring the concept of Collections for organizing and executing API requests
3. Introduction to API Testing
- Covering the basics of API testing and its significance
4. Variable Management
- Managing environment, global, and collection variables
- Utilizing scripting snippets for dynamic data
5. Building Testing Workflows
- Creating effective testing workflows for comprehensive testing
- Utilizing the Collection Runner for test execution
- Introduction to Postbot for automated testing
6. Advanced Testing
- Contract Testing for ensuring API contracts
- Using Mock Servers for effective testing
- Maximizing productivity with Collection/Workspace templates
- Integration Testing and Regression Testing strategies
7. Automation with Postman
- Leveraging the Postman CLI for automation
- Scheduled Runs for regular testing
- Integrating Postman into CI/CD pipelines
8. Performance Testing
- Demonstrating performance testing capabilities (showing the desktop client)
- Synchronizing tests with VS Code for streamlined development
9. Exploring Advanced Features
- Working with Multiple Protocols: GraphQL, gRPC, and more
Join us for this workshop to unlock the full potential of Postman for API testing, streamline your testing processes, and enhance the quality and reliability of your software. Whether you're a beginner or an experienced tester, this workshop will equip you with the skills needed to excel in API testing with Postman.

Check out more articles and videos

We constantly think of articles and videos that might spark GitNation people's interest, skill us up, or help build a stellar career

JSNation 2023
29 min
Modern Web Debugging
Few developers enjoy debugging, and debugging can be complex for modern web apps because of the multiple frameworks, languages, and libraries used. But, developer tools have come a long way in making the process easier. In this talk, Jecelyn will dig into the modern state of debugging, improvements in DevTools, and how you can use them to reliably debug your apps.
TestJS Summit 2021
33 min
Network Requests with Cypress
Top Content
Whether you're testing your UI or API, Cypress gives you all the tools needed to work with and manage network requests. This intermediate-level talk demonstrates how to use the cy.request and cy.intercept commands to execute, spy on, and stub network requests while testing your application in the browser. Learn how the commands work as well as use cases for each, including best practices for testing and mocking your network requests.
TestJS Summit 2021
38 min
Testing Pyramid Makes Little Sense, What We Can Use Instead
Top Content
Featured Video
The testing pyramid - the canonical shape of tests that defined what types of tests we need to write to make sure the app works - is ... obsolete. In this presentation, Roman Sandler and Gleb Bahmutov argue which testing shape works better for today's web applications.
React Summit 2023
24 min
Debugging JS
As developers, we spend much of our time debugging apps - often code we didn't even write. Sadly, few developers have ever been taught how to approach debugging - it's something most of us learn through painful experience. The good news is you _can_ learn how to debug effectively, and there are several key techniques and tools you can use for debugging JS and React apps.
TestJS Summit 2022
27 min
Full-Circle Testing With Cypress
Top Content
Cypress has taken the world by storm by bringing an easy-to-use tool for end-to-end testing. Its capabilities have proven to be useful for creating stable tests for frontend applications. But end-to-end testing is just a small part of testing efforts. What about your API? What about your components? Well, in my talk I would like to show you how we can start with end-to-end tests, go deeper with component testing, and then move up to testing our API, coming full circle.
TestJS Summit 2022
20 min
Testing Web Applications with Playwright
Top Content
Testing is hard, testing takes time to learn and to write, and time is money. As developers we want to test. We know we should but we don't have time. So how can we get more developers to do testing? We can create better tools.Let me introduce you to Playwright - Reliable end-to-end cross browser testing for modern web apps, by Microsoft and fully open source. Playwright's codegen generates tests for you in JavaScript, TypeScript, Dot Net, Java or Python. Now you really have no excuses. It's time to play your tests wright.