GitHub Actions for Node.js Apps


GitHub Actions offer a convenient, feature-rich solution for building CI pipelines. Actions consist of composable steps controlled by YAML files checked into your code repo. Come learn how to perform tasks that are commonly required of modern Node.js codebases, such as package installation, linting, running tests as part of pull requests, building Docker images, and deploying when code is merged into the main branch.

32 min
01 Jul, 2021

AI Generated Video Summary

GitHub Actions allow for continuous integration tasks, defined in YAML files, that can be versioned and reviewed through pull requests. Workflows can be triggered by events such as pull requests or merges, and steps can refer to external GitHub repositories. Docker containers can be built and deployed using GitHub Actions, with configuration setup and deployment defined in YAML files. Values can be used and shared between GitHub Actions, and Node.js internals can be instrumented for performance monitoring.

1. Introduction to GitHub Actions

Short description:

Hi, I'm Thomas Hunter, and welcome to my talk, GitHub Actions for Node Apps. In this talk, I will give an overview of GitHub actions, which allow you to run continuous integration tasks related to your code base. GitHub actions are defined using YAML files and can be versioned and reviewed through pull requests. They are composable, allowing you to build dependency graphs and run code in parallel. Steps can refer to external GitHub repositories.

Hi, I'm Thomas Hunter, and welcome to my talk, GitHub Actions for Node Apps. The content from this talk is based on a book I recently published called Distributed Systems with Node.js.

Alright, let's dive into it. So first, we're going to look at an overview of GitHub actions. So you might be wondering what is a GitHub action? Well, it's a way to run continuous integration tasks related to your code base. And it's actually super convenient if your code already happens to be hosted on GitHub, which most repositories these days seem to be.

The way it works is it ultimately allocates a virtual machine for you somewhere, and then runs a bunch of code for you once a GitHub Action has been triggered. As far as billing goes, it's based on minutes: a free account gets 2,000 minutes per month, a Pro account 3,000 minutes per month, and the paid accounts come in different tiers with additional minutes you can pay for as well.

And so this provides continuous integration that's defined using code. These GitHub Actions end up being defined using YAML files that are checked into the repository. They live in the .github directory inside a folder called workflows, and you can create multiple YAML files, one for each workflow that you want to define. So they end up getting committed: they're versioned, checked in, and you can make pull requests to verify and review that they look good. And honestly, if you've used a system like Travis CI or CircleCI, this approach isn't going to be all that different. One thing that's nice, though, is you don't have to create a new account, authorize it, configure it, and all that stuff. Since everything lives under the GitHub roof, it all ends up working together pretty seamlessly.

One nice quality about these workflows is that they're composable. A workflow ends up being made of one or more jobs, and then a job is made of one or more steps. The different jobs run on different virtual machines, and you can specify dependencies: you can say that this job depends on that other job. By defining them that way, you can build out a dependency graph and run code in parallel. The individual steps end up getting run sequentially. Steps can actually refer to external GitHub repositories. So for example, this uses line here represents code that you would see within one of these workflow files, and it's saying that it's using actions/setup-node@v2.1.4.
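The uses line just described would appear inside a step list in the workflow file, something like this:

```yaml
steps:
  - uses: actions/setup-node@v2.1.4
```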

2. GitHub Actions and Workflow Example

Short description:

And so that actually ends up translating to a GitHub repository. Another thing you can do is define configuration used by these workflows. It's a way to just set key-value pairs that you can use within your workflows. If you adopt GitHub Actions, consider creating actions for common organization patterns. Let's look at an example workflow for a pull request. The output for these workflows is contextual within the pull request. Now, let's look at a workflow file. It represents boilerplate needed for different workflows. We have a workflow called pr-lint-test.yaml. It specifies the trigger for executing this workflow. We have a jobs clause with a build job defined.

And so that actually ends up translating to a GitHub repository. In this case, it's the actions organization, which is maintained by GitHub, and then the setup-node repository within it. And then that at symbol there references a tag, so this is saying that we want to use the v2.1.4 tag.

Another thing you can do is define configuration used by these workflows. And then that configuration ends up sitting inside the project settings. And that configuration is called a secret. Those secrets are sort of kept from the eyes of those who shouldn't necessarily see it. But essentially, it's a way to just set key value pairs that you can use within your workflows.

Another thing that you should consider doing, if you adopt GitHub Actions, is to create actions for common organization patterns. So, for example, if your company has perhaps a dozen Node repos, and they all deploy microservices within your infrastructure, it would make sense to create GitHub Actions that you can share amongst all those projects.

All right, now let's look at an example workflow; this is one that we're going to use for a pull request. Much like other CI solutions, if you've used them, the output for these ends up being contextual within the pull request. So in this case, we can see that there's this poorly crafted pull request with no description, but we can see at the bottom that the results of the workflow have been published. So they're very convenient to see contextually within a pull request.

So let's actually look at a workflow file. This represents a bit of boilerplate, which you'll need to use within these different workflows, but it's not too bad. In this case, we have a workflow called pr-lint-test.yaml. The first thing we do is define a name, which will be displayed in the GitHub user interface; in this case, the name is linter and acceptance test. We then have this on clause here, which specifies essentially the trigger for executing this workflow. What this is saying is that when a pull request is made against the main branch, it should kick off this workflow. You'll also notice that sort of dangling field there called workflow_dispatch. By providing that, you're able to manually trigger this workflow as well using the user interface. After that, we have this jobs clause, and inside there we have a build job defined. And we're saying that job is going to run on the latest Ubuntu image.
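The boilerplate just described might look like this (the exact filename spelling is an assumption based on how it's spoken in the talk):

```yaml
# .github/workflows/pr-lint-test.yaml
name: Linter and Acceptance Test

on:
  pull_request:
    branches: [main]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
```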

3. Running Steps in a Workflow

Short description:

The code is checked out using the checkout action. The Node environment is set up using the setup-node action, providing the latest LTS version. Dependencies are installed using npm install. The linter is executed, assuming a lint script is defined. A Docker container is built and started using the docker-compose command.

And so these end up living inside of a virtual machine. So this is going to run within a virtual machine with Ubuntu installed. That build job is then made up of individual steps. The first step here is that the code ends up getting checked out. A lot of the workflows that you'll write will probably begin with this step to check out the code, though not necessarily all of them. In this case, we're using the checkout action in the official actions organization, and we're using the floating tag called v2.

Alright, so here's some additional steps as well. The first thing we want to do is set up the Node environment, and this is using another GitHub action called setup-node. Then you can see that we have an additional property here called with. That's a way that we can provide arguments to these steps. In this case, there's an argument that that step accepts called node-version, and as you might have guessed, that represents the version of Node that we want to get installed, in this case the latest LTS. So this action provides the node binary, and it's probably also providing npm, yarn, and some other niceties that are applicable to a Node application.

After that, we're actually going to install the dependencies, so we're going to run npm install. In this case, there is no uses clause, it's just using the run clause. What that means is that we don't need an external action dependency for this; instead, we just execute the following shell command. And again, we're going to do a similar pattern with the linter. So this is how we run the linter; this assumes that we already have a lint script defined in our package.json file. At the bottom, you can see a screenshot of this step. Each one of the steps ends up executing, you get to see the output, and you can expand them. Here we see that the linting process took about one second, and we're able to see the output from the command. Standard out and standard error get displayed here so that you can view them and debug a failing action. So here's another one that's probably applicable to a lot of projects, and this one is going to build a Docker container. The name of the step is build and start Docker containers, and it's using the docker-compose command. The GitHub workflow virtual machines provide a bunch of convenient tools, and one of them happens to be Docker.
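Putting the steps walked through so far together, the build job's step list might be sketched like this (the lts/* alias is supported by current versions of setup-node; the lint script name comes from the talk):

```yaml
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js
        uses: actions/setup-node@v2
        with:
          node-version: lts/*   # resolves to the latest LTS release
      - name: Install dependencies
        run: npm install
      - name: Run linter
        run: npm run lint       # assumes a "lint" script in package.json
```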

4. Running Docker Compose and Deploying Containers

Short description:

We run Docker Compose to run the Docker containers in the background. We specify the environment for running npm tests. The Docker Compose YAML file is configured to listen on port 1337. We provide the hostname and port to the test suite. A screenshot shows that 110 tests passed. Another workflow is triggered when the pull request is merged into the main branch. The workflow file is called main-deploy.yaml. It sets up the Docker environment and builds and pushes the container to the registry.

And so here, we're running Docker Compose, referencing the Docker Compose YAML file, which is checked into the project. The up flag states that we want the Docker containers to run, and the -d flag states that we want them to daemonize and fork to the background. So once the containers are running, they go to the background and the next step continues. In this case, for running the npm tests, we're specifying an environment for the application.

And so here the Docker Compose YAML file has been configured to listen on port 1337, and so we provide that hostname and port to this particular test suite. At the bottom is a screenshot of the step. Once it's complete, we can see that 110 tests passed within one second. And if we were to scroll up, we would see the output from each individual test. So here's what the full workflow output looks like. Within a pull request, you can click it to get more information. Here we see we have the linter and acceptance test workflow, the build job has been selected, and each individual step is displayed on the right.
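The container and test steps described here might be sketched like this (the environment variable names are assumptions; the talk only says a hostname and port are provided to the test suite):

```yaml
      - name: Build and start Docker containers
        run: docker-compose up -d
      - name: Run acceptance tests
        run: npm test
        env:
          SERVER_HOST: localhost
          SERVER_PORT: "1337"
```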

Alright, now let's look at another workflow. This is a workflow that's going to run when our pull request gets merged into the main branch. So again, we have a bit of boilerplate. In this case, we have a new workflow file; this one's called main-deploy.yaml. Again, it has a name, in this case deploy to production. The on field is a little bit different: this is saying that when a push happens to the main branch, that's when we want to trigger this workflow. And again, I have the workflow_dispatch field in here, so that we can manually execute the code. Down here in jobs, we first have a build job. Again, this is saying that it wants to run on Ubuntu latest, and the first step here again is to check out the code. Alright, so now we have some additional steps. This one here, we want to set up the Docker environment, and this is using a repository provided by the Docker company. And then we want to build the container and push it to the registry, and so we're using another repository provided by the Docker company.

5. Configuration Setup and Deployment

Short description:

This part covers the complex configuration setup for the Docker container, including providing the context, setting the push flag, and defining tags. The tags include a specific tag for the build using the SHA name of the code. The deployment job is defined to run after the build job and uses SSH to execute commands on a remote server. A third-party SSH action repository is used for this process.

This one has a bit more complex configuration setup. The first thing we're doing is providing the context, which represents the path to essentially the root of the file system where we want the Docker container to get built. We also have this push: true flag that's saying that we want to actually push it to the remote registry. And finally, we have some tags defined, and this is using a multi-line YAML string.

These tags here are the full verbose long tag names. The first tag is for registry.foo.com/x/y. Admittedly, horrible tag names. And then finally, the most important part is what comes after the colon. We're saying that when we build this, we want it to be the latest tag, and we also want to specify a more specific tag for this particular build. In this case, the second tag is sha, then a hyphen, then the actual SHA of the code at the point in time that we checked it out to run this code. So, notice how there's that dollar sign, then the double curlies, then github.sha. That syntax allows us to inject variables into these scripts. There's a bunch of different variables that you can get from different sources, and one of them happens to be the actual Git commit hash, provided as github.sha. By tagging this with latest, we're able to pull this image referencing it as latest, but by also tagging it with a SHA, we're able to refer to it later and, sort of postmortem, refer back to previous releases.
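The build-and-push step, using the tags from the talk, might look like this (the action versions shown are assumptions):

```yaml
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push container
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: |
            registry.foo.com/x/y:latest
            registry.foo.com/x/y:sha-${{ github.sha }}
```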

So, now let's look at the actual deployment. Here we have a second job defined for this workflow; the name of this job is deploy. Here we have the needs field set to build. What that's saying is that this deploy job needs to run after the build job has completed. We're also specifying that this runs on Ubuntu latest again. And in here, we only have a single step, and this step is to deploy the application to a VPS. This is going to represent sort of a poor man's deployment process, and it's pretty simple: we're just going to use SSH to execute some commands on a remote server. But of course, you could build out a more complex integration using something like Kubernetes, for example. In this case, we're using a third-party step. This is by someone named Appleboy, and we're using their SSH action repository with a very specific tag name. And then this again accepts some configuration.

6. Configuring Deployment and Dispatching Workflows

Short description:

We pass in the host, username, and key from the secrets collection. The connection settings are not configured in the YAML file to avoid checking in confidential information. The script runs on the remote server, pulling the Docker image, stopping and removing the application, and then running it again. Secrets can be injected as environment variables into the application code. There may be some downtime when stopping and starting the application. Workflows can be dispatched from the Actions tab if the field is defined in the YAML file.

So, we're passing in the host, the username, and the key. These end up pulling the data from the secrets collection for this project. Again, we're using the dollar sign and double curlies, but this time we're using secrets followed by the name of the secret. For this project, there are three secrets defined: SSH host, SSH user, and SSH key. Those are pulled in and then passed into this step.

And so, you wouldn't really want to configure these connection settings within your YAML file, because then you'd have to check potentially confidential information into your repository. So, here's the final part of that deployment job, where we actually specify the script that's going to run on the remote server. What this is saying is that the first thing we'll do on that server is pull the Docker image from the remote registry. Then we'll stop the application, which would normally throw an error if the application wasn't running, so we just tack on the || true. We also remove the application, again continuing if it fails. And then finally, we run the application again. Here we're running docker run -d, so this is going to daemonize the process to run in the background; otherwise, the command would hang. We're also going to pass in an environment variable. This shows how you can take some of these secrets and then inject them into your actual application code. In this case, there's a secret called some value, and that's getting assigned to an environment variable called some value and then passed into the container. The name of the application is also set to my app, which is the same application that we had stopped previously. And then we're specifying that we're running the latest version of the application. And finally, we're saying that within the container, we execute the node binary using the specific entry point.
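The deploy job just described might be sketched like this; the secret names and image tags come from the talk, while the action tag, container name, and entry point file are assumptions:

```yaml
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy app to VPS
        uses: appleboy/ssh-action@v0.1.4   # exact tag is an assumption
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            docker pull registry.foo.com/x/y:latest
            docker stop my-app || true
            docker rm my-app || true
            docker run -d \
              -e SOME_VALUE=${{ secrets.SOME_VALUE }} \
              --name my-app \
              registry.foo.com/x/y:latest \
              node server.js
```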

And so there is a bit of a shortcoming with this approach, in that you're going to have some downtime as you stop and start the application. So this is definitely not something you would necessarily use in production, but it should give you a feel for the power that you have available to you using GitHub Actions. I mentioned earlier that the workflows are dispatchable if you have that workflow_dispatch field available in your YAML file. And so within the project, if you do have that defined, you can go to the Actions tab.

7. Running GitHub Actions and Defining Inputs

Short description:

Then here on the right side, there's this Run Workflow dropdown. You can configure the workflow and execute the code. You can define inputs in the YAML file and enter them in the dropdown to run the workflow with the correct target. GitHub Actions can be defined using node. The official GitHub Actions org provides an example repository called Hello World JavaScript Action. It needs a file called Action.yaml, which describes the action, specifies inputs and outputs, and defines the environment it runs in.

Then here on the right side, there's this Run Workflow dropdown. You can click that, configure the workflow, and then click Run Workflow, and it will execute the code. By default it allows you to select the branch, but you can actually configure the YAML file to have other inputs as well. For example, maybe you have a workflow that represents deployment. You could define a target for the deployment, like maybe you want to deploy to production versus staging. You could define those inputs in your YAML file, and then here within this dropdown, you could enter those inputs and click Run Workflow to actually execute the code with the correct target in mind.
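A hypothetical deployment-target input like the one mentioned would be declared under workflow_dispatch (the input name and values are illustrative, not from the talk):

```yaml
on:
  workflow_dispatch:
    inputs:
      target:
        description: 'Where to deploy'
        required: true
        default: 'staging'   # e.g. staging vs production
```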

All right, so finally, let's take a look at how some of these GitHub Actions are built. As a nice little bonus, these GitHub Actions can actually be defined using Node. The official GitHub Actions org provides an example repository that we're going to look at now; the name of this repository is Hello World JavaScript Action. I took the code from that for this presentation, but then cleaned it up a little bit so that it would fit within the slides, so this isn't exactly what you would find within the repository. A GitHub Action repository needs a few things. One of them is a file called action.yaml. Within this file, first you do a bit of describing the action: here we have a name and a description for the action. You can also specify the inputs, and these correlate to the with entries in that YAML file. Here we're saying there's a single input defined called whom, and the description of whom is who to greet. This input is not required, so required is set to false, and a default value of world is supplied. We can also specify outputs. In this case, we have a single output defined called time, and that time field will represent the time that this code gets executed. You can use these inputs and outputs to sort of chain values between these different steps within your build process. And finally, we're saying the environment that this runs in.
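An action.yaml along the lines described, adapted from the official hello-world example with the input renamed to match the talk (the description strings are assumptions):

```yaml
name: 'Hello World'
description: 'Greet someone and record the time'
inputs:
  whom:
    description: 'Who to greet'
    required: false
    default: 'World'
outputs:
  time:
    description: 'The time we greeted you'
runs:
  using: 'node12'
  main: 'index.js'
```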

8. Using Values in GitHub Actions

Short description:

This part explains how to use values in GitHub Actions. The code in index.js grabs input values, logs them, generates the current time, and assigns it to an output. The catch block handles errors and allows the task to fail. The example workflow file includes a hello world job that runs the hello world action using a GitHub repo.

So this is going to run using Node 12, and the entry point is going to be index.js. So this is what index.js looks like. It requires two packages, both of them provided in the npm org @actions.

So the first one is core, and the second is github. These allow us to interact with the GitHub Actions API in a convenient manner. The code here is getting wrapped in a try/catch. The first thing we're going to do is grab one of the input values: here we're calling core.getInput, looking for the whom value, and then we're assigning that value to a variable titled nameToGreet. And then we're just going to log that, so we're going to call console.log with that value. The output that we print to the log ends up getting displayed in the output for the action.

After that, we're just going to generate the current time, and then assign it to an output: core.setOutput, passing in the name and the value. There's also a bunch of data available at github.context.payload, which represents the event that triggered the action. In this case, we're not actually doing anything with it, but just keep in mind that it's there. And finally, we're wrapping everything with this catch. We catch the error, and then we call core.setFailed, passing in the error message string upon failure. This allows us to fail the whole task and provide some information to the user as well. And so this is how you can actually use these values.
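The real index.js depends on the @actions/core and @actions/github packages, which aren't reproduced here. This self-contained sketch mimics how the toolkit behaves: inputs arrive as INPUT_<NAME> environment variables, and setOutput records a named value for later steps.

```javascript
// Sketch of the hello-world action without the @actions packages.
// Simulate the runner providing the "whom" input:
process.env['INPUT_WHOM'] = 'World';

// Mimics core.getInput: reads the INPUT_<NAME> environment variable.
function getInput(name) {
  return process.env[`INPUT_${name.toUpperCase()}`] || '';
}

// Mimics core.setOutput: the real toolkit reports this to the runner.
const outputs = {};
function setOutput(name, value) {
  outputs[name] = value;
}

try {
  const nameToGreet = getInput('whom');
  console.log(`Hello ${nameToGreet}!`);

  // Generate the current time and assign it to the "time" output.
  const time = new Date().toTimeString();
  setOutput('time', time);
} catch (error) {
  // core.setFailed(error.message) would mark the whole step as failed.
  console.error(error.message);
  process.exitCode = 1;
}
```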

And so this represents like a truncated version of one of these workflow files. So here we've got this hello world step, or sorry, the job is called hello world. And then the first step here runs that hello world action. So here we can see that we're using that GitHub repo.

9. Using 'hello' Step and Poll Results

Short description:

We're using the name field to specify the step as 'hello' and providing the 'whom' value. Then we have another step that echoes the output, demonstrating how to use the value. We call steps.hello.outputs.time to access the variable from the previous step. Thanks for watching, follow me on Twitter at TLHunter. The biggest winner in the poll is repo integrated, with 40% choosing GitHub Actions and GitLab. It's a bit surprising, as I expected more dedicated third-party tools. But more companies are using GitHub Actions or GitLab for the convenience of integration with their git system.

We're giving this step an identifier of hello, and we're also using this with field so that we can provide the whom value. After that, we have another step that runs; the second step is just going to echo the output, sort of showing how you can use that value. Here we're calling steps.hello.outputs.time, and we're able to use that variable that we had outputted from the previous step.
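A truncated workflow using that action and its output might look like this; note the step needs an id to be referenced as steps.hello (the repository tag is an assumption):

```yaml
jobs:
  hello_world:
    runs-on: ubuntu-latest
    steps:
      - name: Hello world action step
        id: hello
        uses: actions/hello-world-javascript-action@v1   # tag is an assumption
        with:
          whom: 'Mona the Octocat'
      - name: Get the output time
        run: echo "The time was ${{ steps.hello.outputs.time }}"
```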

All right. Well, that's the talk. Thanks for watching. Feel free to follow me on Twitter at TLHunter. And this presentation is available online as well.

So Thomas, I would like to invite you to join me on stage so we can have a look at the results of your poll. Hey, Thomas, thanks for joining. Hey, thanks for having me. All right. Let's see what did the people answer. So the biggest winner, 40%, is repo integrated. So GitHub Actions and GitLab. And well, I hope this doesn't surprise you with your talk. What do you think? Yeah, just a little surprising. I thought it would have mostly been the dedicated third-party tools. Yeah. How come? Well, that's what most of my previous employers had used. Oh, yeah. Yeah. Yeah. For me, I think in the past, it was usually Indeed Circle or Travis or also Jenkins a lot, but more and more, I'm seeing companies using GitHub Actions or GitLab just because then it's integrated into the, well, like you said, into their git system, and that's more convenient, right? Oh, yeah. 100%, yeah. Yeah. So, not to say anything bad about these other solutions, because they have great products also with different value, but it is easier to step into if it's the same IDE and you don't have to worry about authentication, that sort of things.

QnA

Q&A on YAML Anchors, Moving Platforms, and Jenkins

Short description:

The first question is about YAML anchors and reusing steps/config in GitHub Actions. The second question is about moving GitHub Actions to another platform and using it for self-hosted repositories. The third question is about using GitHub Actions as an alternative to running a Jenkins server.

So, Thomas, we're going to go into the Q&A. The first question from our audience is from Lara M. When I last looked into the GitHub Actions, it didn't support YAML anchors in the workflow. Is this still the case? And if so, what is the best way to reuse these steps slash config? Actually, I must apologize. I don't know what a YAML anchor is. Okay. So, Lara, if you can give some info about what you mean, and we can get back to this question maybe later.

Next question is from Dara Mohammad. How straightforward is it to move GitHub Actions to another platform like GitLab? And also, is it possible to use GitHub Actions for my self-hosted repository? Yeah, good question. I will say that my employer, we're currently considering moving from one of these other solutions to GitHub Actions. And as far as I know, there's no easy way to automate that. Somebody might have a solution that automates moving from one to the other, but at the end of the day, I suspect it's going to take some manual work. Even if you sort of redesign these YAML files, there's usually going to be some reference to some CI tool-specific features. So, sadly, I think it is a bit of a manual process. But definitely something worth considering. Price is one thing, but the convenience of having everything under one roof is absolutely going to help productivity. Yeah. It's also in none of these companies' interest to provide, like, an export script, right? So, that's probably why it's hard to automate this. Only for GitHub would it be wise to have an importer. But maybe one day. Maybe one day. Or you can write it yourself, of course.

Next question is from PM Bonugul: is using GitHub Actions an alternative to running my own Jenkins server? Yeah, it's definitely an alternative to running your own Jenkins server. I suppose there's some caveats. With Jenkins, you can get a bit more control: you can put it on a really beefy build machine, or you could perhaps even run Jenkins on macOS if you're using it to build iOS apps, for example. And then with a different tool like GitHub Actions, I think they support the ability to run it on operating systems other than Linux, but then you're probably going to pay a premium that'll be much higher than some self-hosted Jenkins, for example.

Instrumenting Node.js Internals

Short description:

You can instrument Node.js internals like event loop delays or GC frequency to monitor performance. Measure event loop delay, file descriptor count, and memory usage. Dump this information to fail a build or provide a report for the CI run.

Yeah. Alright. Question from Alexi. Can you recommend ways to instrument Node.js internals, like event loop delays or GC frequency, to send to other services? Absolutely. There's actually a section of my book, Distributed Systems with Node.js, about instrumenting Node applications. But I suppose as it relates to continuous integration, maybe you have a test that's looking for regressions. If you're running an application, perhaps you want to make sure that the performance isn't dropping. To do that, there are a few ways to hook into the Node internals. You can measure things like event loop delay, file descriptor count, and memory usage. And then you could dump that information somewhere so that you could either fail a build, or at least provide a report in the output for the CI run. Yeah, it's definitely possible.

Sharing Cache and Deployment with GitHub Actions

Short description:

Someone asked if there is a way to share cache between on-pull request jobs and merge jobs. The speaker mentioned that a third-party GitHub action might exist for this purpose, using representations of the cache stored on disk. For deployment with GitHub Actions, the speaker recommended using the deployment method that best suits the infrastructure setup, as GitHub Actions can support various deployment tools.

Okay, I get feedback from someone that your volume is a bit low. Can you turn it up a little bit? Yeah, definitely. Awesome. Is that better? Oh, well, where is the Loud? No, it's fine. Sorry, lame joke. I couldn't resist.

Next question is from Austin. Is there a way to share cache between on-pull-request jobs and merge jobs? Oh yeah, good question. So the shared cache, that's usually referring to, for example, the node_modules directory. Off the top of my head, I can't think of an easy way to do it, but I would not be surprised to find somebody had created a third-party action that would allow you to take some sort of representation of what would be on disk. For example, with node_modules, one way to represent that is to hash the package-lock.json file. You could hash that and compress node_modules, then write that tarball to, like, Amazon S3. And then in a different run, you could have another action that checks S3 before running npm install: if a tarball is there for that hash, you download the file; if it's not, you would then perform an npm install. So, I would not be surprised if somebody has a third-party GitHub action out there just for that.
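As it happens, GitHub maintains a first-party actions/cache action that implements the hash-the-lockfile approach sketched in the answer. A minimal example (the key prefix is arbitrary):

```yaml
      - name: Cache node modules
        uses: actions/cache@v2
        with:
          path: node_modules
          key: modules-${{ hashFiles('package-lock.json') }}
```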

All right. So, if someone knows about this, let us know in the Discord channel so we can help out. We have time for one more question, and it's from Martin: what method of deployment do you recommend with GitHub Actions? Well, really, that all depends on how your infrastructure is set up. These CI tools are quite independent from your deployment process, so whether you're deploying to Heroku or Kubernetes, or maybe just SSHing into a remote machine, you can still use any of those deployment tools with GitHub Actions. I've got, like, a side project where it'll build an image, push it to a remote Docker registry, and then instruct Kubernetes to download that image and rerun the application. So, I would not actually recommend any particular deployment approach, but I would say that with GitHub Actions you can really support many different deployment tools. Whatever suits your needs. Yeah. So, like I said, we have no more time. We did get an...
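To make that Docker-plus-Kubernetes flow concrete, a workflow along these lines could work. The registry hostname, secret names, and deployment name here are all placeholder assumptions, and configuring cluster credentials for kubectl is omitted:

```yaml
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and push image
        run: |
          # Tag the image with the commit SHA so every build is addressable.
          docker build -t registry.example.com/my-app:${{ github.sha }} .
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login registry.example.com \
            -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker push registry.example.com/my-app:${{ github.sha }}
      - name: Roll out to Kubernetes
        # Assumes kubectl is already configured with cluster credentials.
        run: kubectl set image deployment/my-app \
          app=registry.example.com/my-app:${{ github.sha }}
```

The same shape applies to other targets: swap the last step for a Heroku push, an scp-and-restart over SSH, or whatever your infrastructure uses.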

Q&A on YAML Anchors, Debugging, and Sharing Cache

Short description:

We received questions about YAML anchors, debugging in actions, and sharing cache between pull request and merge jobs. The winner of the book will be contacted via Discord, and the book will be shipped directly to them. Thank you, Thomas, for joining us!

I'm going to just read the questions so you can judge them for your book giveaway. We got an answer from Lara, who was asking about... I only see her answer now... about YAML anchors, and she answered that YAML anchors let you reference data so you don't need to repeat it multiple times, keeping configuration more DRY. So that kind of hooks into the question we just had, right? Is there a way to share cache between pull request and merge jobs? So hopefully, YAML anchors can help with that.
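For anyone unfamiliar with the syntax: an anchor (&name) marks a YAML node and an alias (*name) reuses it, with the merge key (<<) folding a mapping into another. One caveat worth knowing is that GitHub Actions' workflow parser has historically not supported anchors, so this is a generic YAML sketch with made-up keys rather than a valid workflow file:

```yaml
# Define a reusable block once, under an anchor.
shared: &node-defaults
  runs-on: ubuntu-latest
  node-version: '14'

job-a:
  <<: *node-defaults     # merge in the shared keys
  script: npm test

job-b:
  <<: *node-defaults
  script: npm run lint
```

In tools whose YAML parsers do support anchors (GitLab CI is one example), this keeps repeated job configuration in a single place.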

Another question's from AMP. I'm having a hard time debugging something in actions. Any suggestions? I've set up a local environment with Docker Compose and everything is cool. But then, when I run it in actions, I get an error.

And one more question is from D. Nekto's act is awesome. I used to do my- Oh wait, this is not a question, this is just a comment on someone else. So, that's all the time we have. So, can you let us know, first, how the winner will be receiving their book? And then I'll give you a drumroll, and we can announce the winner. Yeah, definitely. I will contact the winner directly over Discord, get the email address, and I'll provide it to the publisher and then they'll be able to ship it directly to them. Oh, awesome. Okay. So, everyone, hold your breath, it's time for the drumroll. Cool. Yeah, I really like the question about the caching. That's an interesting one that I haven't personally thought about with GitHub until now, but I've used it in previous solutions. So, I like that one. The question is, is there a way to share cache between on-pull request jobs and merge jobs? Awesome. So, Austin is the askee. I don't know if that's a word, but the question asker. And Thomas will be contacting you and you will get this nice book. Be sure to let us know on Twitter when you receive it. Maybe with a nice photo. That would be awesome. And also tag Thomas, of course. And let him know what you thought of the book when you're done reading it. Thomas, thanks a lot for joining us. It's been a pleasure to have you again, and hope to see you again soon. Cool.

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career

Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk here.
React Advanced Conference 2021
19 min
Automating All the Code & Testing Things with GitHub Actions
Code tasks like linting and testing are critical pieces of a developer’s workflow that help keep us sane like preventing syntax or style issues and hardening our core business logic. We’ll talk about how we can use GitHub Actions to automate these tasks and help keep our projects running smoothly.
DevOps.js Conf 2022
33 min
Fine-tuning DevOps for People over Perfection
Demand for DevOps has increased in recent years as more organizations adopt cloud native technologies. Complexity has also increased and a "zero to hero" mentality leaves many people chasing perfection and FOMO. This session focuses instead on why maybe we shouldn't adopt a technology practice and how sometimes teams can achieve the same results prioritizing people over ops automation & controls. Let's look at fine-tuning the amount of everything-as-code, pull requests, DevSecOps, monitoring and more to prioritize developer well-being over optimization perfection. It can be a valid decision to deploy less and sleep better. And finally we'll examine how manual practice and discipline can be the key to superb products and experiences.
Node Congress 2022
34 min
Out of the Box Node.js Diagnostics
In the early years of Node.js, diagnostics and debugging were considerable pain points. Modern versions of Node have improved considerably in these areas. Features like async stack traces, heap snapshots, and CPU profiling no longer require third party modules or modifications to application source code. This talk explores the various diagnostic features that have recently been built into Node.
You can check the slides for Colin's talk here. 

Workshops on related topic

Node Congress 2023
109 min
Node.js Masterclass
Workshop
Have you ever struggled with designing and structuring your Node.js applications? Building applications that are well organised, testable and extendable is not always easy. It can often turn out to be a lot more complicated than you expect it to be. In this live event Matteo will show you how he builds Node.js applications from scratch. You’ll learn how he approaches application design, and the philosophies that he applies to create modular, maintainable and effective applications.

Level: intermediate
Node Congress 2023
63 min
0 to Auth in an Hour Using NodeJS SDK
WorkshopFree
Passwordless authentication may seem complex, but it is simple to add it to any app using the right tool.
We will enhance a full-stack JS application (Node.JS backend + React frontend) to authenticate users with OAuth (social login) and One Time Passwords (email), including:
- User authentication - managing user interactions, returning session / refresh JWTs
- Session management and validation - storing the session for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.
Table of contents:
- A quick intro to core authentication concepts
- Coding
- Why passwordless matters

Prerequisites:
- IDE of your choice
- Node 18 or higher
JSNation 2023
104 min
Build and Deploy a Backend With Fastify & Platformatic
WorkshopFree
Platformatic allows you to rapidly develop GraphQL and REST APIs with minimal effort. The best part is that it also allows you to unleash the full potential of Node.js and Fastify whenever you need to. You can fully customise a Platformatic application by writing your own additional features and plugins. In the workshop, we'll cover both our Open Source modules and our Cloud offering:
- Platformatic OSS (open-source software) — Tools and libraries for rapidly building robust applications with Node.js (https://oss.platformatic.dev/).
- Platformatic Cloud (currently in beta) — Our hosting platform that includes features such as preview apps, built-in metrics and integration with your Git flow (https://platformatic.dev/).
In this workshop you'll learn how to develop APIs with Fastify and deploy them to the Platformatic Cloud.
JSNation Live 2021
156 min
Building a Hyper Fast Web Server with Deno
WorkshopFree
Deno 1.9 introduced a new web server API that takes advantage of Hyper, a fast and correct HTTP implementation for Rust. Using this API instead of the std/http implementation increases performance and provides support for HTTP2. In this workshop, learn how to create a web server utilizing Hyper under the hood and boost the performance for your web apps.
React Summit 2022
164 min
GraphQL - From Zero to Hero in 3 hours
Workshop
How to build a fullstack GraphQL application (Postgres + NestJs + React) in the shortest time possible.
All beginnings are hard. Even harder than choosing the technology is often developing a suitable architecture. Especially when it comes to GraphQL.
In this workshop, you will get a variety of best practices that you would normally have to work through over a number of projects - all in just three hours.
If you've always wanted to participate in a hackathon to get something up and running in the shortest amount of time - then take an active part in this workshop, and participate in the thought processes of the trainer.