The following technologies and soft skills might be needed:
- Basic understanding of Git & GitHub interface
- Professional/intermediate English for communication, since all contributions to the Node.js org happen through GitHub Issues and PRs
- A computer with an IDE set up (tablets can also work, though collaborating on one is harder). We recommend VS Code together with the GitHub Pull Requests & Issues extension, which lets you work with Issues and Pull Requests straight from the IDE.
The following themes will be covered during the workshop:
- A recap of some GitHub UI features, such as GitHub Projects and GitHub Issues
- We will cover the basics of Open Source and go through the Open Source Guide
- We will recap Markdown
- We will cover Open Source governance, explain how the Node.js project works, and talk about the OpenJS Foundation
- This includes all the ways one might contribute to the Node.js project and how those contributions are valued
- During this workshop, we will cover Issues from the nodejs/nodejs.dev repository, as most of them are entry-level and do not require C++ or deep technical knowledge of Node.js.
- That said, we still encourage attendees who want to challenge themselves to pick "Good First Issues" from nodejs/node (the core repository) if they wish.
- Each attendee can choose an issue to work on alone, or sit together with other attendees and tackle issues through pair programming using the VS Code Live Share feature
- We can also set up Zoom breakout rooms for people who want to collaborate
- Claudio will be there to support all attendees and, of course, answer any questions about the Issues and the technical challenges they might face
- The technologies used within nodejs/nodejs.dev are React/JSX, Markdown, MDX, and Gatsby. (No Gatsby knowledge is needed, as most of the issues are platform agnostic.)
- By the end of the workshop, we'll collect a list of all the contributors who successfully opened a Pull Request (even if it's a draft) and recognise their participation on social media.
Node.js: Landing your first Open Source contribution & how the Node.js project works
AI Generated Video Summary
Today's workshop covers the Node.js project, governance model, collaboration, and contribution. The Node.js organization has important repositories and follows a governance model for proposing changes. Collaboration involves bug reporting, pull requests, and detailed contribution guides. Strategic initiatives aim to improve Node.js, and the website redesign initiative faced challenges. The workshop focuses on technologies like React, Next.js, Jest, and Storybook, and provides hands-on experience and collaboration. Participants can work on various issues, including component migration and testing, and contribute to the Node.js project.
1. Introduction to Node.js Workshop
Today's workshop will cover how the Node.js project works, the governance model, collaboration, managing projects, and making contributions. GitHub features like Projects, Issues, Actions, and the new file navigation design will be used. Recommended tooling includes Visual Studio Code with the Live Share and GitHub Pull Requests extensions. It's important to have the latest Node.js LTS version and to install Git and the GitHub CLI for a smoother contribution workflow.
So in today's workshop we will cover a little bit about how the Node.js project works, a bit of the governance model, how collaboration works, how certain areas of the project work, and then how you can make your very first contribution to the Node.js project.
So this is me, I'm Claudio van der. I'm a Senior Software Engineer at HubSpot, and I also do a lot of open source work in my free time: with the OpenJS Foundation, the GNOME Foundation, the Node.js project, and others, to name a few.
Before we start anything at all, I'm not sure how familiar each of you is with GitHub, so I wanted to recap a few of the GitHub features we are going to use today, which are available in the links on the slides. We're going to use GitHub Projects and GitHub Issues; the GitHub project we're going to use is also linked on the slides. We're going to use GitHub Actions as well; we won't actually write a GitHub Action, but it will be part of the workflow when you're making your contribution. You can also take a sneak peek at the GitHub Actions we use in the repository we're going to work on today, if you're familiar with them and want to read them. We also recommend enabling GitHub's new file navigation design. It's an experimental feature that is disabled for most people; on the slides I'll add a link explaining how and where to enable it. It's very, very useful, to be honest.
Now, for the tooling, I truly recommend Visual Studio Code with the Live Share and GitHub Pull Requests extensions, which are linked in the slides. Live Share will be very useful if you're having trouble during the workshop and want me to help you debug, and the GitHub Pull Requests extension basically enables all of GitHub's pull request features directly in Visual Studio Code, so you can see comments directly on your code and all sorts of other things. Finally, I recommend having the latest version of the Node.js LTS, which is the version used by the repository we are going to work on today. You can use Node.js 20 or 16, but there might be incompatibilities, so I really recommend installing Node.js version 18; npm, the package manager we'll use in today's workshop, will complain if you're not using the correct version. Any version of v18 is fine. Lastly, I also recommend installing Git and the GitHub CLI, because the commands you'll need today become much simpler with the GitHub CLI. It is very useful for things like checking out and cloning the repository or making a fork: very nice tooling that speeds up the steps of today's contribution. In my opinion, the GitHub CLI is a very cool tool and it really improves the workflow for interacting with GitHub.
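The version warning mentioned above comes from npm checking the repository's `engines` field. A rough sketch of that kind of check, with an illustrative helper (not a real npm API), looks like this:

```javascript
// Minimal sketch of the kind of check npm's "engines" field performs.
// satisfiesMajor() is an illustrative helper, not a real npm API;
// the required major version (18) mirrors the recommendation above.
function satisfiesMajor(nodeVersion, requiredMajor) {
  // Node versions look like "v18.17.0"; strip the "v", read the major part.
  const major = Number(nodeVersion.replace(/^v/, "").split(".")[0]);
  return major === requiredMajor;
}

// process.version reports the Node.js runtime you are currently on,
// so this prints true on any v18.x install and false otherwise.
console.log(satisfiesMajor(process.version, 18));
```

Any v18.x release passes this check, which matches the "any version of v18" guidance above.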
2. Node.js Organization and Governance Model
In the Node.js organization, there are important repositories such as node, /help, nodejs.org, and the discussions page. The governance model involves the TSC and the OpenJS Foundation. Collaborators can be regular, core, or TSC members. The governance model follows a process of proposing changes, receiving feedback, and iterating. An example request is the API docs metadata proposal, which involves a working group and the TSC. The governance model ensures the longevity and sustainability of the Node.js project.
So, the Node.js organization. In the next section we will talk about the relevant pages of the organization, the GitHub org, the Node.js governance and collaboration model, and working groups and initiatives and what the hell they are. If at any moment I'm talking too fast or you're having difficulties, please say so in the chat so I can accommodate everyone's needs.
So there are a few, let's say, very important repositories inside the Node.js org. Of course, the first one is node, the core, where the magic happens in Node.js. This is the repository where all the new features of the Node.js runtime are developed, plus all the bug fixes and collaboration in general: documentation, tooling, API docs, everything related to the runtime itself lives inside the /node repository. Then we have /help, a very useful but little-known repository designed for people who have questions about Node.js. Maybe you need help with a specific Node.js API, you're getting an error when writing code, or you just don't know the best practices for something. That's the best repository for asking questions, because there are a lot of people there answering them. Then we have the nodejs.org repository, which is, well, the Node.js website. Then there's the discussions page on nodejs.org, which is not a repository but a GitHub feature where the community connects with contributors and collaborators, and vice versa. It's a place for people to ask questions, make suggestions, or just talk about Node.js in general, and collaborators will also make announcements there from time to time. So it's very useful for people who want to interact with the wider community. Finally, two repositories that I personally find very important for the governance of the Node.js project itself. First, the build repository, where all the infrastructure of the Node.js project lives: the configuration files, the scripts, the cron jobs, all the things we use in our infrastructure, from all the Arduinos and devices we keep running Node.js on to verify that Node.js works across all kinds of platforms and architectures.
I am just kidding about Arduino; we don't actually support Arduino, but we do have a few, let's say, smart devices that we keep testing on, a few Raspberry Pis, and so on. Then we have the admin repository, which is pretty much a historical repository where people request things from the governance of the Node.js project. For example, if you want to add new members to the organization, create a new team, request access to certain resources we own, or suggest a new idea for the organization as a whole, that's the repository. And if you browse it, you'll see a lot of very administrative actions happening there, all 100% transparent.
3. Collaboration Life Cycle and Contribution Guides
Open source for big projects like Linux or Node.js requires a lot of cycles of feedback and iteration. Bug reporting involves triage teams determining if an issue is a bug or a user-side problem. Collaborators can contribute to bug resolution, but the triage team filters the noise. Pull requests can be merged quickly or take months, depending on complexity. The collaboration guides provide detailed information on contributing to Node.js, including the collaborator guide, contributing guides, resources for getting involved, and the code of conduct.
It is no joke that, in my opinion, open source for big projects is a very bureaucratic model, way more than companies, because there is this public sentiment that in open source things move very fast, or that there is no governance or real decision-making and people just build things because they want to. But the reality is that in really big projects like Linux, Node.js, or the GNOME project, it is the complete opposite. It requires a lot of cycles of feedback and iteration.
And now, the collaboration life cycle of the Node.js core repository. I'm going to tackle just one side of it, which is bug reporting. You will pretty much have a bug being reported, where people say: hey, this doesn't seem to be working, it's giving an error or it's not behaving as it's supposed to. Then somebody from the bug triage team will figure out whether this is really a bug on the user's side or an actual bug in Node.js; they will request more information and pretty much decide the future of that issue, and whether it should get attention from any of the contributors or collaborators. It's worth saying that any collaborator can chime in on the issue at any moment, comment, and ask for information. But in the early stages, because we have hundreds of issues being created every day, we leave it to the triage team to filter the noise.
Otherwise it would really make the collaborators' lives miserable, because there's only so much time each person has to contribute to open source when they're not being paid to work on it, which is the case for around 95% of the people who work in open source: they're just doing it in their free time. Then the bug gets worked on, if any contributor, newcomer or existing, decides: yeah, I can work on this, this makes sense, and opens a pull request. That's where the actual feedback iteration starts. Pull requests can sometimes be merged very quickly, in 24 hours or less if they are fast-tracked, which means an existing contributor is requesting something to be merged urgently; 48 hours in the most common scenarios, if it's a simple review; and if it's a more complex code change, it can take from a few days to, honestly, many months. Which is how it is. For example, Node.js still doesn't have support for the QUIC protocol, created by Google more than a decade ago, pretty much because, based on the existing governance and collaboration model, we can't simply have people reject that work outright either.
There are, of course, many other mechanisms in place to prevent work from going stale forever, but when something is very complex and makes a lot of fundamental changes to how the runtime works, in this case the HTTP module of Node.js, which is one of the most important modules in Node.js, it really needs to be spot on. You wouldn't want to be a regular Node.js user and have a behavior you rely on completely change and break what you're doing. In the end, the pull request gets shipped, and depending on the size of the change, the backwards compatibility, or the nature of the change, it can be backported, or it can only be added in the next major release or in a minor release. All of this is also covered in the collaboration guides linked on this slide. We have the Node.js collaborator guide, a huge document with everything you could imagine, from the smallest of details to more macro information. We have a folder inside our repository with a set of contributing guides, for example: what should I do when I want to add a new method to a certain class? What should I do if I'm changing this module? What should I do if I'm making API changes to the docs? So we have everything very well documented; it's just sometimes very well hidden. It's not a folder most newcomers would find by themselves. We also have a link to resources for getting involved with Node.js, which lists the numerous ways of getting involved with the project. And finally, the code of conduct, which is very important, because without a code of conduct it would be chaos.
4. Node.js Teams and Strategic Initiatives
The Node.js project has various teams and working groups that contribute to different aspects of the project. The security team ensures the security of Node.js, while working groups like streams, Docker, release, diagnostics, and more focus on specific areas. Strategic initiatives, led by the Technical Steering Committee, aim to improve Node.js and keep it relevant. These initiatives include performance, startup snapshot, core promises API, and others. The Next 10 strategic initiative ensures Node.js remains relevant in the future, while the single executable initiative allows shipping applications as standalone executables without requiring Node.js to be installed.
Not all of those people necessarily work on the core development of Node.js, meaning the runtime itself, but their work is still as important as any other. For example, we have the security team, which is not mentioned here, but which pretty much handles and ensures that Node.js is secure. From handling vulnerability reports to publicly disclosed CVEs, the security team does a lot to ensure that the code you run will not be exploitable.
Then we have a number of different working groups that work on specific parts of Node.js, the project or the runtime itself. We have, for example, the streams working group, which is responsible for the implementation of streams in Node.js. We have the Docker working group, basically the people who work on all the Docker pipelines we have in Node.js and who maintain our Docker images. We have the release working group, which is very important: it handles and manages all the release cycles of Node.js, the actual releasing, signing, and compilation of the binaries that are shipped to you, and also all the announcements, blog posts, and so on; pretty much the people who make Node.js releases happen. We also have the diagnostics working group, which is responsible for ensuring that Node.js has very good built-in debugging and diagnostics, which allow you to debug, and allow us to analyse and diagnose, Node.js itself.
5. Node.js Website Repository and Redesign
We will be working on the Node.js website repository, which contains various web technologies. The website includes the distribution of Node.js releases, API documentation, code coverage, Node.js status, and metadata. The website redesign initiative aims to keep the website relevant with a modern technology stack, refresh its look and feel, and provide a better collaboration model. The infrastructure of the website has some long-term issues, including caching problems and an outdated server. The initiative started in 2019 but faced challenges with the use of the Gatsby framework. The team eventually abandoned that project due to complexity, issues, and maintainability problems.
And it's something we just released in Node.js 20, which is the latest version of Node.js, released this week, and it's pretty much fascinating, in my opinion, but I'm biased. Let's get our hands into action.
So the repository we're going to contribute to today has a lot of web technologies in it, but it's not as complex as the runtime itself: it's the Node.js website. In the next section, I'm going to give an overview of the website repository, explain what the website redesign initiative is, help you get your environment ready, and then do another round of questions.
The Node.js website. Well, for most people, when you hit nodejs.org to download Node.js or go to the API docs, pretty much everything happens magically in the backend. Not that we actually have a backend; it just happens at the bottom of the iceberg. But actually, many parts of the website don't come from one repository; they're a collection of different things. Today we're going to approach the main website, the part where you download Node.js or read the blog posts, the actual website. But in other terms, what actually is the Node.js website? Well, it ranges from the distribution of Node.js releases, the API documentation, the code coverage, and the Node.js status page to the human-friendly website, and we also host a lot of metadata there. Even Visual Studio Code uses this metadata, which pretty much contains all the release information of Node.js. So there are a lot of really cool things in play, which is also a lot of information; I'm not expecting you to look at any of those. What we're going to approach today is just the website repository.
So, resources I would definitely recommend you take a look at: the website repository itself, which is always on the slides, and the contribution guidelines for the website repository. Pretty much all the information you need to get started is there, from how we do tests and coverage to how we create a new component; all the guidelines are there. And there is a list of issues, filtered by label, that you can work on today in the workshop.
So, well, I hope most of you have already opened the Node.js website. To be very honest, it looks old. That's because the original design was made more than a decade ago, and the technologies originally used in that repository are also from a decade ago, long before the times of React, JSX, or even Angular; those were different times. And exactly for reasons like these, the infrastructure being very old and the technologies behind the website being very old, the TSC created the website redesign initiative, which pretty much had the following goals: keeping the website relevant with a modern technology stack; refreshing the look and feel of the website; providing a better collaboration model that is friendlier for newcomers, in the technologies themselves, the tooling behind them, the documentation, and the resources available to contributors; and mitigating a few long-term pains in the website's infrastructure.
Regarding the last two points: on the previous point, the frameworks that were used until very recently, Handlebars and Metalsmith, are not very newcomer friendly. The framework we are using today, Next.js, pretty much works out of the box; there isn't much you need to learn to use it. And React is about as friendly as it gets for web development. Of course, the other frameworks are also great, like Vue and Angular, but I think, and this is a personal opinion of course, that for going from writing plain HTML to getting it rendered in your browser, React is pretty good. As for mitigating the long-term pains of the website infrastructure, we have a recurring issue. In a very short version: we have a server where the website runs which doesn't use any of the fancy technologies we have nowadays, like CDNs, edge networks, or replica nodes. It's just an old server that pretty much handles everything, sitting behind Cloudflare. I'm not sure if any of you know what Cloudflare is, but it's a content distribution network on steroids. And we have a few issues with caching.
So, for example, every time we publish a new build of the website, the whole cache gets purged. This causes long-standing issues which, in the end, can make downloading Node.js impossible, and many other issues which are very crude and horrible. We even had an incident with our infrastructure last month. The post-mortem and a very interesting read on the infrastructure behind the website are available on status.nodejs.org, including the incident itself and how it happened. There's also a blog post on nodejs.org itself; if you go to the blog page, you can read it if you're interested. I think it's a very interesting engineering experience, let's say. But the initiative isn't actually new; it has existed since 2019. You might ask: well, why does the nodejs.org website still look old? Well, back then, the people behind the initiative were a little bit ahead of themselves. They tried to use cutting-edge technologies that were very new, a framework at the time called Gatsby, and they tried to do what we call a big-bang migration, which basically means creating a whole new code base from scratch, building everything there, and discarding the existing Node.js website. A lot of people are against this, for very practical reasons: how do you ensure the migration is seamless when you pretty much rewrote the whole thing? Over time, the team realized Gatsby is very complex and has a lot of issues and pains. There are also issues on GitHub, in the Node.js organization, addressing exactly those topics, where we discuss all the problems Gatsby has. Ultimately it had a lot of bugs, performance issues, and maintainability issues. In the end, the whole thing went stale, and over time the people who were very invested in working on it abandoned the project.
6. Migration to Next.js
After rethinking our strategy, we explored the pain points of Gatsby and alternatives. Ultimately, we decided to try Next.js, which is available on the nodejs.org repository. The discussion about migrating to Next.js and why it was chosen is an interesting read for engineers looking for the best framework for their project.
A few years passed, and the initiative was revisited in 2022; three years had passed, almost four. So pretty much we had to rethink our strategy: what are the pain points that Gatsby brings, and what are the alternatives out there? The issues with Gatsby are the ones mentioned on the left side of the slide. Ultimately, after a lot of discussion, we decided: well, let's try Next.js, for many reasons, a few of which are described here. The whole discussion about migrating to Next.js, and why Next.js, is also available on the nodejs.org repository, and it's a very interesting read. Because a lot of us engineers, when we want to create a new project, go: okay, what is the best way to do this? Which framework, which tool? And looking into it, you realize there's no such thing as the best framework, only the best framework for what you need to do, right? There's always a better tool for what you need to do. But how do you know what the best tool is before even using it? So it's a very interesting read, in my opinion.
7. Website Redesign and Migration
We migrated the existing repository to Next.js, adopted new technologies like server-side rendering and Storybook, and are migrating the infrastructure to Vercel for server-side rendering and routing. We are also adopting a new design for the website and migrating components, layouts, and pages without replacing the existing ones. Storybook is essential for testing components without existing layouts or pages. We are now in the process of migrating components from the nodejs.dev repository to the nodejs.org repository.
Website redesign. Well, first we had to switch from Metalsmith and Handlebars to Next.js, right? That was a very big pull request, which is also available on the repository; I don't expect you to read it, because it's a very big pull request, but the link is there anyway if you want to read the discussion behind it. So first we needed to migrate the existing repository to a new framework while keeping the exact same pages, with the exact same look and feel and the exact same features, working in the same browsers, and being built in the exact same way on the exact same infrastructure. So there were a lot of requirements, and, wow, it's very interesting, because in the issue we explain all the shortcomings and how we made this possible.
Then the next step was to adopt new technologies. For example, we want server-side rendering which, for those who don't know what it is, means you have a server, like a Node.js server application, that renders the whole webpage on the server side and sends the client an already compiled, let's say, version of what you're seeing. So when you open the page and you have, for example, React on the client side, you get a faster initial load, because the initial HTML content you receive is already the first hydrated version of your application. Among other shenanigans, it's pretty much a way to skip React having to build your application the first time you open it, and that makes a few things faster. It has other benefits too, but that's the gist of it. We also wanted to adopt a framework called Storybook, which allows us to do manual testing. For those who don't know: yes, the site is using SSG at the moment; it's a fully static version of the website. We are in the process, which is point three, of migrating to Vercel; I'm going to get there. Storybook as a framework pretty much allows people to preview and test your components while you're creating them, to see how each one looks as a component, the smallest unit of code. So the next step was to adopt a few modern technologies alongside Next.js, and best coding practices for the modern era.
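The server-side rendering idea above can be sketched in a framework-free way: the server turns data into the final HTML string before anything reaches the browser. The names here are illustrative, not the actual Next.js API:

```javascript
// Framework-free sketch of server-side rendering: the server produces
// the fully rendered HTML, so the browser's first paint does not have
// to wait for client-side JavaScript to build the page.
// renderGreetingPage is an illustrative name, not a real Next.js API.
function renderGreetingPage(user) {
  // On the server, data has already been turned into markup...
  return `<main><h1>Hello, ${user.name}!</h1></main>`;
}

// ...so the client receives ready-to-display HTML and only "hydrates"
// it (attaches event handlers) instead of building the DOM from scratch.
const serverHtml = renderGreetingPage({ name: "Ada" });
console.log(serverHtml);
```

The key point is that the string above is computed once on the server, whereas in a purely client-side React app the equivalent markup would only exist after the browser downloaded and ran the application code.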
And then, migrating the infrastructure. There are a lot of things at play. For example, when you have a fully static SSG system, how do you create dynamic content? How do you do incremental builds? Incremental builds are the notion that, instead of building the whole website at once, which can take a lot of time depending on how many pages you have, you only build part of the pages up front; then, as people request new pages, it keeps building them on demand, which is very interesting. But there are a lot of complications there. This is also available on GitHub if you're curious, but pretty much the next step is to migrate to Vercel to allow server-side rendering, middlewares, routing, and so on. The main goal is to split the traffic that currently goes to the very old server I mentioned before, so that it only serves the distribution of the Node.js binaries, with the rest handling just the website. Because, well, the website gets updated often, while releases only happen once a quarter, or whenever we need to release a new version of Node.js. There are other very interesting engineering topics there, like how we will split the routing on Cloudflare, and it's all open on GitHub if you want to read it afterwards. The reason I'm mentioning all this is that, as an engineer, there are a lot of really cool things happening here in the migration of the website. Many would imagine it's just adopting a new framework or updating the design of the pages, but for projects as complex as Node.js, there are a lot of very high-level technical decisions and requirements that need to be made. So it's not as easy as you would imagine. Yeah. And finally, we want to adopt the new design. So there is a website you can actually open right now, called nodejs.dev. Oops, I typed the URL wrong. Hold on.
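The incremental-builds notion mentioned above can be modeled as a lazy, cached build step: a page is only built the first time it is requested, and every later request reuses the cached result. This is a conceptual sketch, not how Next.js implements it internally:

```javascript
// Conceptual model of incremental builds: instead of pre-building every
// page up front, build each page on its first request and cache it.
// This is a sketch of the idea, not the real Next.js machinery.
const builtPages = new Map();
let buildCount = 0;

function buildPage(path) {
  buildCount += 1; // stands in for an expensive static build step
  return `<html><body>Page for ${path}</body></html>`;
}

function requestPage(path) {
  if (!builtPages.has(path)) {
    builtPages.set(path, buildPage(path)); // built once, on first request
  }
  return builtPages.get(path); // every later request is a cache hit
}

requestPage("/en/download");
requestPage("/en/download"); // second request reuses the cached build
console.log(buildCount); // prints 1: the page was only built once
```

The complications mentioned in the transcript come from questions this sketch ignores, such as when to invalidate the cache after content changes.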
That is pretty much the old website redesign initiative made in Gatsby. It's pretty much what we want to accomplish: the whole new layout, new API docs, new everything, but on Gatsby. And the goal now is, well, to migrate the components we were able to build in this nodejs.dev repository over to the nodejs.org repository. That's pretty much the state we're in right now, and what I'm going to show each one of you. So what we have to work on is migrating the components. Each component and its dependencies need to be migrated, and we need to ensure the coverage and that the component works as expected. Then we migrate the layouts, which use those components plus extra styling that is exclusive to the layout. Then, finally, we migrate the pages, which are Markdown pages that will be built into final HTML pages by Next.js. And all of this needs to happen without replacing or touching the existing components, pages, and layouts. So all the work being done right now adds these new things without actually using them; then, the moment we have all the pages, layouts, and components done, we simply remove the old stuff and switch to the new stuff, which is very interesting. That's why Storybook, that manual-testing framework, plays an important role: if you don't have any layout or page to show a component on right now, how will we actually see the component, test it, and ensure it looks the way it's supposed to look? And pretty much, we are now very ready to start working on what we need to work on.
8. Installation and GitHub CLI
Install GitHub CLI, Visual Studio Code, and Node.js. You can clone the repository and install dependencies. Take a short break. GitHub CLI simplifies forking, cloning, and checking out branches. Feel free to use plain Git if you're confident; there is no need to install GitHub CLI.
So, if you haven't already, install the things I mentioned before: GitHub CLI, Visual Studio Code, Node.js. You don't need to clone the repository yet, but if you want, you can already clone it, install the dependencies, and read the CONTRIBUTING and README files in the meantime, because now we have another FAQ and also a short break. Because again, it is a lot of information.
I would recommend GitHub CLI. In my personal experience, it speeds up a lot of things. For example, you will need to fork the repository. With plain Git, you first need to go to github.com and fork it, then you need to clone, then you need to manually add the upstream remote, which is the original repository, and then you need to do a lot of other things. With GitHub CLI, you just paste one command and it does everything for you. So it is very, very useful for cloning, for checking out branches like PRs, or for forking; it has its advantages. But if you feel confident enough with plain Git — sure, that's fine. You're not required to install GitHub CLI.
9. Introduction to Technologies and Collaboration
We'll be using React, Next.js, Jest, Storybook, SCSS, and ESLint. We'll focus on migrating existing components, ensuring functionality, updating styles, and adding Storybook stories. The coding part is limited, but it will provide valuable hands-on experience and collaboration.
The technologies that we're going to use today are React, Next.js, Jest, Storybook, SCSS — which is pretty much CSS — and ESLint. This might sound like a lot of things, which it actually is. But the cool thing here is that since we're talking about migrating components that already exist, most of the work is ensuring that when you copy the code over, things keep working: changing imports, updating a few style things, adding the Storybook stories and the unit tests. It will be a very interesting experience, but the actual coding part is not that much. I'm not asking you in this workshop, with its very limited time, to code something from scratch. But it will give you very good hands-on experience with all these technologies and how they work together. And we'll all be collaborating together. Each person will probably work on their own issue, but if you have any questions, or want to use Live Share or whatever, I'll be here to help each one of you.
10. Overview of Technologies and Workflow
Today we will work with React, Next.js, Storybook, and Jest. You don't need to be an expert in these technologies. The tests are simple, focusing on component behavior. Templates and existing tests are available. Follow the instructions in the issues, migrate the component, add tests and Storybook stories. Ensure correct linting with ESLint and Prettier. Storybook allows hot reload for focused component rendering. Styles are compatible between the repositories. Create a draft PR for feedback. A Vercel bot generates Storybook previews. Connect on Discord for further discussion. Your opinion matters as we aim to improve the Node.js experience. Start cloning the repository and refer to slide 38.
On the next slide — if you have the slides open — there's a quick overview of what we do with each one of those technologies: React, Next.js, Storybook and Jest. If you're familiar already, I'm not really going to talk about it. You also don't need to be a pro in any of those technologies, because most of the tests are very simple. You just want to render the component and, if the component has props — suppose you have a button that changes its color or value every time you click on it — what you want to test in the unit test is that when you click the button, the value actually changes to what is expected. So it's very simple: just checking that the component behaves the way it's supposed to. A lot of templates and existing tests are there to help you, and for Storybook stories it's the same: we already have a lot of stories and templates that you can use, which are available in the contributing guidelines. I worked a lot last night and this morning to update those contributing guidelines to make them really well explained, but I'm very open to feedback. So if anything in those guides doesn't sound straightforward, or is contradictory, just give me the feedback — or you can even make a pull request yourself to add more material or propose changes. So this is basically the eight steps of how we're going to work here today. You will pick an issue — you can always go back to the slide — and follow the instructions of the issue, which hopefully are also very well explained. Then you will migrate the component and add the tests and Storybook stories. Then you want to ensure that all the linting is done correctly, so there are a lot of ESLint and Prettier
Rules that will enforce that your code follows the style we use across the whole repository. So if you do imports in the wrong order, or anything else that is out of line with our conventions — which is completely valid, because you're new to the code base and not supposed to know how we code in this repository — it will either give you a warning or autocorrect it for you. So it's always good to run ESLint and Prettier to ensure that everything's fine. Then you keep iterating. A cool thing about Storybook is hot reload: while you're coding your component, it reloads immediately, and it's really nice because it renders only your component, so you don't need to focus on anything outside the component. Also, the styles of both repositories are compatible. Each component has styles, right? When you migrate those styles, all the variables and so on are 100% compatible, so you don't need to scratch your brain over what to do to fix a style. And because of the amount of time we have, I would definitely recommend you folks create a draft PR, even if it's at the very beginning, very rough, so that we can give feedback and see what you're doing. The moment you create the PR, we have a Vercel bot that automatically creates previews of the Storybook: it will generate a link that anybody can open containing the Storybook, which is really cool. And if there's not enough time, you're not expected to finish today. A lot of the reading material I shared today and a lot of the things I talked about — I'm not expecting you to memorize them, but to read them afterwards, and I would love to connect, either on Discord or anywhere else,
To chat afterwards. You have an opinion, because you're the community, and we are here to serve the community, to be very honest. The work that we do is to create a better experience for you, using Node.js or anything surrounding the Node.js ecosystem. So your opinion matters. And these are the key commands you want to keep in mind. If you want, you can already start cloning the repository — these are exact commands that you can use with the GitHub CLI. Which is cloning, then we're going to the— Can you say the slide number, please? Then it's easier to jump to it. I guess I don't really see here where I am, I'm really sorry, but let me check what slide number we are on. Oh, 38, 38. Yeah, it pretty much doesn't show.
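The kind of unit test described earlier — render a button-like component, simulate a click, assert the value changed — boils down to a check like the sketch below. The real tests use Jest against the actual React components; here the state update is modeled as a plain, hypothetical function purely to illustrate the shape of the assertion:

```javascript
// Hypothetical sketch of the behavior a component unit test verifies.
// The actual repository tests render the React component with Jest;
// the state update is modeled as a plain function here for illustration.

// A button-like component's update: each "click" increments the value.
const click = (state) => ({ ...state, value: state.value + 1 });

// The test: start from the initial state, simulate a click,
// and assert the value changed to what is expected.
const initial = { value: 0 };
const afterClick = click(initial);

console.assert(afterClick.value === 1, 'value should change on click');
console.assert(initial.value === 0, 'original state is not mutated');
```

The point is not the code itself but the pattern: arrange an initial state, act on it once, and assert the observable outcome — which is all most of the migrated components need.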
11. Switching Branches and Choosing Issues
To switch to the major website redesign branch, use the git switch command and specify the upstream remote. Conventional Commits are recommended but not required. If you cloned only the main branch, use git fetch --all to fetch all branches. Push your changes using git push origin HEAD. Before committing, run npm run lint or npm run format. You can also use npx turbo lint for faster, cached linting. Choose the issues you want to work on, and consider creating a branch directly from the issue for automatic linking.
Oh, now it showed me — I just had to put my mouse in the very corner of the page: 38. Thank you, Uriel. Is it Uria, or how should I pronounce your name? Uria. Uria. Did I say it correctly? But most Germans are challenged by it anyhow. It's okay.
So, yep. Then you will fork the repository, right? You will switch to the branch. The base branch we're working on today is called major website redesign, which is where all the redesign work is happening. It's basically a huge feature branch, so that we don't pollute the main branch with all this development stuff.
The command is git switch. Pretty much, you ask Git to switch to a new local branch called major website redesign, and we tell Git that its remote is the upstream — the original repository — and not your fork itself. The reason we do this is that one of the common pains of forks is keeping your branch up to date. GitHub nowadays has a button you can press in the UI to sync forks, but I really like this command, because if you have branches that you want to track directly from the source repository, that's what you do: you just tell Git which remote you want to use, which in this case is upstream. And then, when you're on the major website redesign branch, you can check out your own branch — the actual branch where you're going to work on your stuff. We recommend Conventional Commits for commit names and descriptions, but you don't really need to abide by that if you don't want to.
Did you use the GitHub CLI commands? I'm doing it at the moment. Yeah, I see you on the command. Oh, you used git clone. The thing is that you'll probably need to manually add the upstream remote. I think the moment you fork the repository — if you forked in the GitHub UI itself, not with the GitHub CLI — it should already have the major website redesign branch, so you can always use git checkout major-website-redesign instead of the git switch command. The git switch command is useful if you have the upstream remote in your Git repository, because then you don't need to keep syncing your fork all the time — which matters, because it's a very common thing that you keep working on your feature.
Yeah, also the same: if by accident you happened to just clone the main branch, you can always use git fetch --all, which will fetch all the branches that are on the remote. Again, when you do a regular fork and a regular clone, you will not have the upstream remote added, and that's why GitHub CLI is really cool: it not only clones the repository, but configures all the remotes and shenanigans for GitHub. So yeah, this is slide 38. Another thing: once you're on your branch and you make changes, push your changes to your remote, which you can do with git push origin HEAD, or git push origin followed by the name of the branch. I just find that using HEAD is easier, because I don't need to remember the name of my branch — I'm very bad with memory, sometimes I forget the name of the branch, and running a command just to get the name of the branch is a hassle. And before committing — because we don't use Git hooks to automate pre-commit processes — run npm run lint or npm run format. You can also run, and I did not update the slides with this, but we use a tool called Turborepo, which is by Vercel: npx turbo lint. Basically, you can prefix pretty much any command with npx turbo. The contributing guidelines are already updated with this, just not the slides. The main benefit is that it caches your commands, so linting, which can take some time depending on your hardware, will just be faster. Now we want to start choosing the issues. Like I mentioned, we have a lot of issues for the website redesign. To be honest— Would you recommend directly creating a branch from the issue? Then they are automatically linked. You can.
12. Working on New Header Component
If you're working on the website redesign issues, keep the base branch as major website redesign. If you're working on other issues, use the main branch. Comment on the issues you're working on to keep track. Feel free to choose issues and ask questions. You can work on issues even if someone else expressed interest but hasn't opened a PR. I'll be sharing my GitHub tab, but you don't need to pay attention to it. If you have any questions about the workshop or Node.js in general, feel free to ask. You can work on creating a new header component by following the instructions in the issue. Make sure to use the appropriate technologies and components from the nodejs.org repository. Search for similar icons using the react-icons library; most scenarios will have a one-to-one match to what you need.
I mean, you need to keep in mind that the base branch needs to be major website redesign if you're working on the website redesign issues, but you can try to create it from there; that's not a problem. I was just going to say that you don't necessarily need to work on the website redesign issues. There are a few older issues here that are maybe just updating content, et cetera. But if you're working on any issue that is not website redesign, then your base branch is main, not major website redesign. Just let me know what tasks you folks are starting to work on. You can simply go to the issue — I think I commented here — and just comment on the issues that you're going to work on. Basically: "Hey, I would like to work on this specific part of the issue" if it has bullet points, or "Hey, I want to work on this issue" if it's just one item, because then I get a notification that allows me to keep track of what you all are doing. So feel free to use the next minutes to choose an issue, ask questions if you need any troubleshooting, et cetera, and I guess you folks can start working. Again, if any concept is unfamiliar, if any issue is unclear, or anything, let me know — I'm here to help. I'm actually going through the notifications now, because there's a lot of stuff here. Also, a very important thing: even if on any of those issues people already wrote "hey, I want to work on this", if there's no PR open for it, you can work on it. It's a very common thing in open source that people say they want to work on something and then either never do it or take too long. So you're free to work on whatever you want. It's usually first come, first served: the first one that opens the PR and gets it merged is the one, to be honest. For now I'll pretty much just be sharing my GitHub tab so you can see what I'm doing on GitHub, but you don't need to pay any attention to it.
But just, if you want to have a sneak peek of how I do stuff on GitHub, that's pretty much it. From now on the workshop has started, but you can still ask questions about any of the previous slides if you're still reading some of the content, or even unrelated questions about Node.js in general, like the project — feel free to ask, feel free to shoot. I'm looking at the moment at... we had a footer component. I see there is someone who wants to take the footer, but I guess I can look at the header. Yeah, you can pick the header, definitely. Okay, so I just create a new component in the components folder, next to the header, called new header? Yeah, pretty much you will create a new file — sorry, a new folder — called new header, because in this repository each component is encapsulated in a folder: all the tests, styles, et cetera of that component are there. And since we already have a header and footer component from the old layout, we cannot use the same name here, so you create a new folder called new header. And then, like you mentioned, in the issue some instructions are written down about what you need to do, and the links to the original components in the original repository are there. If you go to the bullet points — so this is the original header component — pretty much what you want is, in a certain way, to copy, paste and change certain things. For example, we don't use Gatsby on nodejs.org, we use Next.js, so you cannot use this import. Okay, so you will need to search in the nodejs.org repository: what do we use for localized links? And as you go through the other components that exist, you will go: "aha, so this is the name of the thing that we use for localized links."
Well, you'll also notice that we don't use Font Awesome, and we also don't have these logos here under the exact same file names, so you need to check in the public folder of nodejs.org: do we have an SVG that is the same as the one in this repository? If yes, just use it; otherwise, create a new SVG file. And for Font Awesome: we use another library on nodejs.org called react-icons, so you can search for similar icons. If you're not familiar with it, react-icons is pretty much just a packaging of famous icon libraries — all the Font Awesome and Material UI icons are there — which is really cool. So in most of the scenarios you'll have a one-to-one match to what you need.
13. Component Migration and Environment Detection
Some components use hooks, which may need to be migrated as well. Return types and props have specific guidelines, explained in the contributing guidelines. Translation keys can be copied from the original repository into the i18n folder. There is documentation available for migrating translations and adding translation keys. Even if someone has already made a PR for the footer, it may not be accurate or up to date. The detect-environment function has recently been rewritten, and that code can be repurposed for the detectOS utility. The rewritten code uses updated methods and does not support old browsers.
And when not — well, that's when you get creative. Some components also use hooks, so you might need to migrate the hooks too. There are also things that you might not want to bring over. For example, here we have a return type on this exact component called JSX.Element, and on the Next.js side we don't use the JSX types — they're not exported — so you don't actually need to do this. But it's all explained in the contributing guidelines: what types do I need for my components, and if I have props, what are the naming standards for the props, et cetera. Yeah. And if the component has— Where did you find the file you're now looking at? On nodejs.dev, I think.
So here on the issue, where you have the header component, you see that this bullet point is a link: if you click it, it goes to the actual original component. Some components also have translation keys. So when you want to copy the translations, you need to find the i18n folder in the original repository and just paste them into the i18n folder that we have on nodejs.org. There are actually guides for copying these translations in a file called translating.md, which is available here. So if you have questions about migrating translations, adding translation keys, or how to use translations, it's all documented there. And even though that person, Augustine Merrill, mentioned they want to work on the footer — if they haven't made a PR yet (I'm not sure if they have), you can always work on it. And even if they did, this workshop is a learning experience, so you can always work on the same thing.
All right, I see there is an open PR for it. For the footer? For both. Oh, I see. If it's the PR that I think you're talking about — the "migrate header and footer" one that has changes requested — you can ignore that one. That person didn't actually migrate the components; they renamed existing ones to a new name and misunderstood what the task was about. Fingers crossed that person will actually update their stuff, but I don't know. Interesting. Well, I'd say that since it's deprecated, it's probably not meant to be used, but I'm pretty sure it's still shipped. What might happen is that, because user agents are also deprecated in a certain way, it might not get updated, and it might not be accurate. I remember that in certain cases Windows 11 might not even ship an updated version of this. While we were doing the old version of the Node.js website, which is also not very fancy, we recently rewrote this detect-environment thing. It's in this file here, on this line — let me share; I'll also comment on the issue. Pretty much, to address the navigator.appVersion thing, we wrote this big chunk here, from line 83 to line 119. What you can do, if you want, is repurpose this, or at least up to line 106, and replace what is in the detectOS utility. So this switch, you could try to replace with lines 83 to 106, which is a little bit more up to date. It also uses a few other things — for example navigator.cpuClass and navigator.oscpu, which are very old properties that only exist on Internet Explorer and Firefox, respectively. That's what is mentioned in the comment. I'm not expecting people with such old browsers to still be around, especially because we don't support those browsers anymore. So if you want, you can just go from line 93, to be very honest, regarding for example the architecture. And regarding the platform, I'm pretty sure that what line 86 does should be enough.
14. Testing, Detecting OS, and Contributing
To handle unknown cases, an array of user agents can be created for unit testing. The detectOS function focuses on the familiar desktop OSs for downloading Node.js. A simple regex over the user agent is the recommended method for determining the platform. The issue with the lint fix command can be addressed by making a quick PR to add the necessary slashes in the package.json. To contribute: fork the repository, base your work on the major website redesign branch, create your own branch, make changes, commit, and push. The GitHub Pull Requests extension in Visual Studio Code allows for easy pull request creation.
What you can do for unit testing, if that's what you like: there are a few pages out there — even GitHub Gists — that have just a dump of a lot of user agents. You can create an array with all those user agents and have a unit test that, for each one of them, checks the expected output. That will help us handle unknown cases like this, where it's neither Mac, Win, nor Linux. Because right now on the Node.js website we only want to show Mac, Windows and Linux in the auto-detection, since those are the desktop environments people will have open to download Node.js. Yes, there will be browsers on other platforms, but detectOS has only one job: detecting the familiar desktop OSs so that you can download. So if you go here, you see that on this hero page macOS is pre-selected, and that's because it detected that I'm on macOS. If I'm on neither of these, I think it just selects macOS by default. It's up to you to decide what to do in the default case — whether you want to return unknown or null or whatever; that can be commented on and discussed in the PR. That's the main purpose: for this page to pre-select this. But as you can see, we still provide links for other environments and — oops — we also have this page with all the other ways of installing Node.js. Just giving you the context. But yeah, feel free to do what you believe is best. Now, I'm just commenting on the issue pretty much what I wrote here. You can. Yeah. Both the snippet and the legacy main, like I mentioned, use those two old APIs, oscpu and cpuClass, which only exist on Firefox and Internet Explorer, respectively. They still work, respectively, on Firefox and Internet Explorer.
But you probably don't want to port that, because it's a very old thing. Right now, the way to determine the platform is really the user agent with a simple regex — that's what I would say. And for the architecture — 64-bit, 32-bit, x86 — the thing that is in the hook already works, which is what I'm writing down here. So you found issues with the lint fix command. Do you want to share what is going on — maybe the output, et cetera? But it is doing it, no? It is in the package.json; that's what it should be doing. It should have the extra slashes already there. I know that I actually changed those things today, so I might have forgotten to re-add those slashes. And if that's the case — if you found a bug — feel free to make a quick PR just adding that change to the package.json, and I'd be more than happy to ship it. Yeah, pretty much you make a fork, you take the major website redesign branch as the base branch — so basically you fork and check out the branch. Then you just create your own branch, for example fix/use-detect-os, or in this case something like feat/migrate-detect-os, I guess, for both the utility and the hook. And then you just make your changes, commit and push. When you push, by default the push command will show a link you can click to open a pull request. If you're using the GitHub Pull Requests extension that I recommended before, you actually get a UI directly in Visual Studio Code where you can create a pull request from there, which is really neat. Indeed, it seems that when I made the changes recently I forgot to add the slashes there. As you can see, all the other commands that are there have the slashes, but I think I made a boo-boo here. Let me just check that we didn't miss anything else. But I wonder what exactly is failing in the test — is it not getting any match, or what exactly?
Let me just add the other slash. Let me just try... there you go. So I just make the change here, because this is a test. And I guess you can search on Google — I bet there are ways. Yeah, mocking it repeatedly, I guess. Oh, wow.
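On the architecture point above — reading the bitness from the user agent instead of the legacy navigator.oscpu (Firefox-only) and navigator.cpuClass (Internet Explorer-only) properties — a rough sketch could look like this. The function name and the token list are my own illustration, not the repository's actual hook:

```javascript
// Hypothetical sketch: guess the OS bitness from a user-agent string,
// instead of the legacy navigator.oscpu (Firefox-only) and
// navigator.cpuClass (Internet Explorer-only) properties.
const getBitness = (userAgent) =>
  // WOW64 means a 32-bit browser on 64-bit Windows; for choosing an
  // installer we still treat that as a 64-bit OS.
  /WOW64|Win64|x86_64|x64|aarch64|arm64/.test(userAgent)
    ? '64-bit'
    : '32-bit';

const ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36';
console.log(getBitness(ua)); // '64-bit'
```

Taking the whole string as input (rather than reading navigator directly) also keeps the function trivial to unit test with a dump of sample user agents.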
15. User Agent Matching and Simplification
Remove the x11 and Mobi parts, and reduce the enumerators to Mac, Win, and Linux, ordered accordingly. Accept false positives: Mac when it is actually iOS, and Linux when it's Android. Keep it simple.
It's actually correct, you know: X11 is just the windowing system for Linux, so it should return Linux, right? Or what is it returning, actually? What? You got me. Let me see what regex we actually have. I don't know how this one works; we'll open this one. Smells like an error, doesn't it? I don't know — it matches the word Mobi. I'm just very curious to see; there's always one simple way to test this thing. The user agent is supposed to be a string, right? navigator.userAgent — is it always a string? Let's see what happens if I try it. Yeah, it is a string. I'm just very curious, because I don't see anything really wrong. It's giving x11. Um, I just tested the one that you sent, and it's giving x11. To be very honest, I think what I would do is remove the Unix and Mobile entries from this type, just keep Mac, Win and Linux, and for everything else just return "other" or something. What we could do, for example, is: in the case where it matches x11, return Linux. But you don't really need to match x11, because — you see — this user agent has both x11 and the word Linux, so you can just remove the x11 part of the regex and, in my opinion, it would give you what you want. Let me see some more. For example, this one is actually mobile, right? This one that I just copied here gives Mac, but it's an iPad — which, to be fair, has the word Mobile at the end, but it will take the first match, right? First we have Win, Mac; we don't have the Mobile match anymore. It's also about the order you put these things in: if you add Mobile here, it won't necessarily win, because the regex will make the match based on the first part of the string that matches, and Mobile appears very late. I mean, to be honest, I don't mind if we have false positives.
We don't need to over-engineer here. If it's giving Mac on an iPad, sure — the worst that happens is it shows that you're on Mac, which is kind of... and if you click download, well, it's not like you can install Node.js on your iPad, so nothing really happens. It's a best-effort algorithm. I think what really matters is that we have at least one match, whatever it is, because in the end this is a nice-to-have. So, recapping: just remove the x11 and Mobi stuff, reduce the enumerators to Mac, Win and Linux, and order them — and instead of else you can... yeah, Android would always match as Linux, which is true. Which is fine. So then you have Mac, Win, Linux, and "other" for everything else, and yes, we would have false positives — Mac when it's actually iOS, and Linux when it's Android — but it is what it is. I think it's better for us to keep it simple rather than trying to match every kind of user agent, because, oh boy, there are a lot of user agents out there.
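Putting that recap together, a simplified sketch could look like the following, combined with the table-driven check over sample user agents suggested earlier. The names, return values, and sample strings here are illustrative, not the repository's actual utility:

```javascript
// Illustrative sketch of the simplified detection discussed above:
// only Win, Mac and Linux are matched, and false positives
// (iPad reported as Mac, Android as Linux) are accepted on purpose.
const detectOS = (userAgent) => {
  const match = userAgent.match(/(Win|Mac|Linux)/);
  return match ? { Win: 'WIN', Mac: 'MAC', Linux: 'LINUX' }[match[1]] : 'OTHER';
};

// Table-driven check over a handful of user agents, in the spirit of the
// unit-test idea mentioned earlier (sample strings are illustrative):
const cases = [
  ['Mozilla/5.0 (Windows NT 10.0; Win64; x64)', 'WIN'],
  ['Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)', 'MAC'],
  ['Mozilla/5.0 (X11; Linux x86_64)', 'LINUX'],
  // accepted false positives:
  ['Mozilla/5.0 (iPad; CPU OS 13_3 like Mac OS X)', 'MAC'],
  ['Mozilla/5.0 (Linux; Android 10; Pixel 3)', 'LINUX'],
  // anything else falls through to the default:
  ['Mozilla/5.0 (PlayStation 4)', 'OTHER'],
];

for (const [ua, expected] of cases) {
  console.assert(detectOS(ua) === expected, `${ua} -> ${detectOS(ua)}`);
}
```

Note that the x11 alternative is gone entirely: every X11 desktop user agent also carries the word Linux, so the shorter regex already returns the right answer.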