Node.js: Landing your first Open Source contribution & how the Node.js project works


This workshop aims to give you an introductory module on the general aspects of Open Source. Claudio Wunder from the OpenJS Foundation will guide you through how the governance model of Node.js works, how high-level decisions are made, and how to land your very first contribution. At the end of the workshop, you'll have a general understanding of all the kinds of work the Node.js project does (from bug triage to deciding the next 10 years of Node.js) and how you can be part of the bigger picture of the JavaScript ecosystem.

The following technologies and soft skills might be needed:
  - Basic understanding of Git & GitHub interface
  - Professional/intermediate English knowledge for communication and for contributing to the Node.js org, as all contributions require communicating within GitHub Issues/PRs
  - The workshop requires a computer with an IDE set up (tablets are also OK, but collaboration becomes more difficult otherwise). We recommend VS Code together with the GitHub Pull Requests & Issues extension, for working with Issues and Pull Requests straight from the IDE.

The following themes will be covered during the workshop:
- A recap of some of GitHub UI features, such as GitHub projects and GitHub Issues
- We will cover the basics of Open Source and go through the Open Source Guide
- We will recap Markdown
- We will cover Open Source governance and how the Node.js project works and talk about the OpenJS Foundation
  - Including all the ways one might contribute to the Node.js project and how their contributions can be valued
- During this workshop, we will cover issues from nodejs/nodejs.dev, as most of them are entry-level and do not require C++ or deep technical knowledge of Node.js.
  - That said, we still recommend that enthusiastic attendees who want to challenge themselves pick "Good First Issues" from nodejs/node (the core repository) if they wish.
  - Each attendee can choose an issue, or sit together with other attendees and tackle issues through pair programming with the VS Code Live Share feature
    - We can also open Zoom breakout rooms for people that want to collaborate
  - Claudio will be there to give support to all attendees and, of course, answer any questions regarding Issues and technical challenges they might face
  - The technologies used within nodejs/nodejs.dev are React/JSX, Markdown, MDX and Gatsby. (No knowledge of Gatsby is needed, as most of the issues are platform agnostic)
- By the end of the workshop, we'll collect a list of all the contributors who successfully opened a Pull Request (even if it's a draft) and recognise their participation on social media.



Transcription


So today's workshop will pretty much cover a little bit about how the node.js project works — a little bit of the governance model — also how collaboration works, how certain areas of the project work, and then how you can land your very first contribution to the node.js project. So this is me: I'm Claudio Wunder, I'm a senior software engineer at HubSpot, and I also do a lot of open source work in my free time — at the OpenJS Foundation, the GNOME Foundation, the node.js project, and others, to list a few. Before we start anything at all, I'm not sure how familiar each of you is with GitHub, so I wanted to recap a few of the GitHub features we are going to use today, which are available at the links on the slides. We're going to use GitHub Projects and GitHub Issues; the GitHub project we're going to use is also linked on the slides. We're going to use GitHub Actions — not actually write a GitHub Action, but it will be part of the workflow when you're making your contribution. You can also get a sneak peek of the GitHub Actions we use in the repository we're going to work on today, if you're familiar with GitHub Actions and want to read them. We also recommend you enable GitHub's new file navigation design. It's a new experimental feature on GitHub, but for most people it's disabled; on the slides I added a link showing how and where to enable it. It's very, very useful, to be honest. Now for the tooling: I truly recommend you have Visual Studio Code with the Live Share and GitHub Pull Requests extensions, which are linked in the slides. Live Share will be very useful when you're having trouble during the workshop and you want me to help you debug.
And the GitHub Pull Requests extension is for collaboration: it basically enables the whole GitHub pull request feature set directly in Visual Studio Code, so you can see comments directly on your code, and all other kinds of things. Finally, I recommend that you have the latest version of the node.js LTS, which is the version used by the repository we're going to work on today. You can use node.js 20 or 16, but there might be incompatibilities, so I really recommend installing node.js version 18 — not least because npm, the package manager we're going to use in today's workshop, will complain if you're not using the expected version. You can use any version of v18. Lastly, I also recommend you install Git and the GitHub CLI, because the commands you'll need today become much simpler with the GitHub CLI. They will be very useful — for example, for cloning the repository, checking out branches, and making a fork. It's just very nice tooling that really speeds up your contribution steps. In my opinion, the GitHub CLI is a very cool tool and it really improves the workflow for interacting with GitHub. So, the node.js organization. In the next section we're going to talk about the relevant pages of the organization: the GitHub org, the node.js governance and collaboration model, and working groups and strategic initiatives — and what they actually are. If at any moment I'm talking too fast or you're having difficulties, please say so in the chat so I can accommodate everyone's needs. So there are a few very important repositories inside the node.js org. Of course, the first one is node, which is the core, where the magic of node.js happens.
This is the repository where all the new features of the node.js runtime are developed, all the bug fixes, and all the collaboration in general — documentation, tooling, api docs, all kinds of things related to the runtime itself. They are all inside the node repository. Then we have help, which is a very useful but not well-known repository, designed for people that have questions about node.js: maybe you need help with a specific node.js api, you're getting an error when writing code, or you just don't know the best practices for something. That's the best repository for asking questions, because there are a lot of people there answering them. Then we have the nodejs.org repository, which is the node.js website. Then we have the discussions page on nodejs.org — which is not a repository, it's actually a GitHub feature — where the community connects with contributors and collaborators, and vice versa. It's a place for people to ask questions, make suggestions, or just talk about node.js in general. It's also a place where collaborators make announcements from time to time, so it's very useful for people that want to interact with the rest of the community. Finally, two repositories that I personally find very important for the governance of the node.js project. The first is the build repository, where all the infrastructure of the node.js project happens — the configuration files, the scripts, the cron jobs, all the things we use in our infrastructure, from all the Arduinos and devices we keep running node.js on to see how node.js behaves on different kinds of platforms and architectures. I'm just kidding about Arduino — we don't actually support Arduino. We have a few smart devices that we keep testing, a few Raspberry Pis, et cetera.
Then we have the admin repository, which is pretty much a historical repository where people request things from the governance of the node.js project. For example: you want to add new members to the organization, create a new team, request access to certain resources that we own, or suggest a new idea for the organization as a whole — that's the repository. If you browse it, you will see a lot of very administrative actions happening there, and they're 100% transparent. Going a little deeper into the governance model of the node.js project, there are two big entities that actually govern it. We have the TSC, the Technical Steering Committee, which pretty much manages accounts and resources, defines the technical priorities of the node.js project, manages teams and initiatives, and manages the whole GitHub organization. Then we have the OpenJS Foundation, of which I'm a member, where we handle the legal and compliance side of the project, and where we get the resources to keep the infrastructure running and cover all the other kinds of costs the project might have — even including travel costs when a collaborator needs to attend a certain event, for example. It safeguards the future of the project from a legal and, let's say, business standpoint. We also have the marketing and outreach of the node.js project happening there: Twitter, all the branding side of node.js, et cetera. And pretty much you could consider that there are three categories of collaborators on the node.js project. You have regular collaborators, who work in individual working groups, strategic initiatives, and teams, and contribute to the numerous aspects of the node.js project. Then we have the core collaborators. As the name implies, they work on the core side of node.js, the runtime itself, and usually they are dedicated to one or more technical specialities.
On the slide there is a link showing all the core teams that we have. Pretty much you have people that specialize in containers, people that specialize in the HTTP protocol, in DNS, in crypto, or in V8 — which is the javascript engine we use behind node.js — and all other kinds of things. Then we have the TSC, which, from both an administrative and a technical standpoint, is the authority of the project. They are pretty much veterans of the industry, with a lot of knowledge in maybe one specific thing like security, performance, or node.js itself. They keep node.js technically relevant and ensure the continuity of the project, on top of doing all its administrative tasks. Here's an example of how the governance model works. If I want to make a request — if I want to propose something to be changed — a collaborator opens an issue on the admin repository. Depending on the request, it might require input from other members, from strategic initiatives, or from working groups. For example, if I'm requesting that somebody be added to the internationalization team, I pretty much need to ask the other members of the internationalization team whether they're fine with it. The initial sentiment of the issue pretty much decides whether it goes further or not. If needed, the author provides a draft proposal of what they're requesting and why. If needed, the proposal gets presented to the TSC, either in a private meeting of the TSC themselves, or, if further discussion is needed, the author of the proposal is invited to the meeting to present it. The TSC will debate the feasibility of the proposal and might request changes; the author provides the changes. It's basically a cycle of iteration. It could be that in the end it never lands — because after many revisions the author realizes, well, there's no need to do this, or it's too complex — or it lands.
Depending on what it is, it will land either as a team, as a project, or as a resource. For example, if I want access to the node.js YouTube channel, it pretty much succeeds the moment I get access to the YouTube channel. So the governance model creates processes and standards for turning ideas into reality and ensuring the longevity and sustainability of the node.js project. An example of a request that has been in the works for a few months already is the api docs metadata proposal, which pretty much changes how the api docs are built, the tooling underneath them, and how our api docs pages look — in the source and also in the built version that you see in your browser. So there are a lot of actual structural changes inside this proposal. You can read more about it at the link in the slide, and more about the governance model at the other link that is also in the slide. An example of how the api docs proposal is going: we have a working group — Next 10, in this case; I will explain what Next 10 is in the following slides — that assesses the current state of the api docs. Well, they realize the current state of the api docs has issues with accessibility, internationalization, maintainability — from many different standpoints. So the working group gets together and starts to debate: okay, should we change it? And if yes, what should we do? Pretty much all the discussion first happens in the working group; then, if the whole working group has a positive sentiment that the change should land, they create an issue on the admin repository, which starts the involvement of the TSC.
And like I presented on the previous slide — on the previous timeline, I mean — it is a process of back and forth: the TSC members provide feedback and request changes, and the author reflects on the feedback, figures out whether it makes sense or not, defends their points, or actually thinks of changes that would please the TSC or that would make sense. It's all about compromises and finding the sweet spot. It's no joke that, in my opinion, open source for big projects is a very bureaucratic model — way more than companies — because there's this public sentiment that in open source things move very fast, or that there is no real governance or real decision-making, that people just build things because they want to. But the reality is that on really big projects like Linux, node.js, or the gnome project, it is the complete opposite: it requires a lot of cycles of feedback and iteration. Now, for the collaboration lifecycle of the node.js core repository, I'm going to tackle just one side of it, which is bug reporting. You will pretty much have a bug being reported, where people say: hey, this seems to not be working; it's giving an error, or it's not behaving as it's supposed to behave. Then somebody from the bug triage team will work out whether this is really a bug on the user's side or actually a bug in node.js; they will request more information and pretty much decide the future of that issue — whether it should get attention from any of the contributors and collaborators or not. It's worth saying that any collaborator can chime in on the issue at any moment, comment, and also ask for information. But in the early stages — because we have hundreds of issues being created every day — we leave it to the triage team to filter that noise.
Otherwise it would really make the collaborators' lives miserable, because there's only so much time each person has to contribute to open source when they're not actually being paid to work on it — which is the case for around 95% of the people that work in open source; they're just doing it in their free time. When the bug gets worked on — if any contributor, newcomer or existing, realizes, yeah, I can work on this, this makes sense — they will open a pull request. That's where the actual feedback iteration starts. Pull requests can sometimes be merged very quickly, in 24 hours or less, if they are fast-tracked, which means an existing contributor is requesting something to be merged urgently; 48 hours in the most common scenarios, if it's a simple review. If it's a more complex code change, it could take from a few days to, honestly, many months — which is how it is. For example, node.js still doesn't have support for the QUIC protocol, created by Google, well, more than a decade ago. And that's pretty much because, under the existing governance and collaboration model, you can't simply have people pushing through or rejecting work like that outright. There are, of course, many other mechanisms in place to keep a pull request from getting stale forever. But when it's very complex and it makes a lot of fundamental changes to how the runtime works — in this case the HTTP module of node.js, which is one of the most important modules of node.js — well, it really needs to be spot on. You wouldn't like to just be regularly using node.js and then have a behavior you rely on completely change and break what you're doing.
And in the end, the pull request gets shipped. Depending on the size of the change, backwards compatibility, or the nature of the change, it can be backported, or only added in the next major release or in a minor release. All of this is covered in the collaboration guides linked in the slide. We have the node.js collaborator guide, which is a huge document with everything you could imagine reading, from the smallest details to more macro information. We have a folder inside our repository with a set of contributing guides — for example: what should I do when I want to add a new method to a certain class? What should I do if I'm changing this module? What should I do if I'm making api changes, like to the docs? So we have everything very well documented; it's just somewhat well hidden. It's not a folder most newcomers would find by themselves. We also have a link to resources for getting involved with node.js, which lays out the numerous ways of getting involved with the project. And finally, the code of conduct, which is very important, because without a code of conduct it would be chaos. Moving on to the next part: in node.js, like I mentioned before, we have certain kinds of teams that handle different kinds of things in the project. Not all of those people necessarily work on the core development of node.js — meaning on the runtime itself — but their work is as important as any other. For example, we have the security team, which is not mentioned here, but which pretty much handles and ensures that node.js is secure, from handling vulnerability reports to publicly stated CVEs. The security team does a lot of work ensuring that the code you run will not be exploitable.
Then we have a number of different working groups that work on specific parts of node.js — either the project or node.js itself. We have, for example, the streams working group, which is responsible for the implementation of streams in node.js. We have the Docker working group, which is basically people that work on all the pipelines we have in node.js and pretty much maintain the Docker images we publish. We have the release working group, which is very important: it handles and manages all the release cycles of node.js — the actual releasing, signing, and compilation of the binaries that are shipped to you, and also all the announcements, blog posts, et cetera. Pretty much the people that make node.js releases happen. Then we have the package maintenance working group — because the npm ecosystem, or the javascript ecosystem, is tremendously big, and we want to ensure that we are talking with the maintainers of numerous projects and that node.js is good for people that want to create and maintain packages. We also have the diagnostics working group, which is responsible for ensuring that node.js has very good debugging and diagnostics built in, which allow you to debug — or allow us to analyze and diagnose — node.js itself. Then we have the node.js strategic initiatives. A strategic initiative is different from a working group. The main difference is that a working group is people working on aspects of the organization in general, sometimes on parts of node.js; strategic initiatives are fundamental initiatives that the TSC, the Technical Steering Committee, is pushing — new technologies, new things to improve node.js itself or to ensure that node.js stays relevant. Pretty much, you will see the performance strategic initiative.
It's there to ensure that node.js keeps being fast: how we can improve the speed of node.js, how we can improve the benchmarking, or pretty much the performance of the already existing parts of node.js. We have Startup Snapshot, which is an initiative to ensure, for example, that node.js loads fast. There are a few other runtimes out there that already beat us on how fast they initialize. By startup I mean: you enter the command in your terminal, or wherever, and how fast the runtime can bootstrap to the first output you get. The Startup Snapshot initiative is pretty much about how we can excel at that. The initial version is actually going to get released very soon, which is very exciting. We also have the Core Promises api initiative. A lot of node.js APIs are callback-based, as you might have seen already, and this strategic initiative is about porting parts of our code base over to Promise-based APIs, so that you can use those parts of node.js without really needing callbacks — you can use await, or Promises in general. There are a few other strategic initiatives mentioned here, and you can give them a read later if you wish; each one of these items is a link to the actual issue or repository of a working group or strategic initiative. And besides the Startup Snapshot I already mentioned, there's also the Next 10 strategic initiative, which is pretty much the team that ensures that node.js stays relevant for the next 10 years. It's also the team responsible for getting feedback from the community: what are the features that you want to see in node.js, what is the community saying, and how can we ensure that node.js stays relevant for you?
And the single executable application strategic initiative is pretty much about allowing you to create executables — apps, like you do with C# or Java — with node.js, without needing node.js installed to run those apps. It pretty much converts your javascript code into an app. It's similar to what Electron does, but built into node.js, and without actually having a whole browser embedded in your application — it's just the node.js binary embedded, so it's way smaller. It's something we just released in node.js 20, which is the latest version of node.js, released this week. And it's pretty much fascinating, in my opinion, but I'm biased. Let's get our hands into action. So the repository we're going to work on today is not the core of node.js, because, to be very honest, most of it is in C++, other parts are in javascript, and it is very advanced in general. Even for newcomers it can be a huge barrier: if you don't already have familiarity with the technologies, just starting out of nowhere on the core repository might be too hard for a workshop like this, right? So the repository we're going to contribute to today is something that has a lot of web technologies in it, but is not as complex as the runtime itself: the node.js website. In the next section I'm going to give an overview of the website repository, explain what the website redesign initiative is, get your environment ready, and then do another round of questions. The node.js website. Well, for most people, it's just where you go when you hit nodejs.org: you download node.js or you go to the api docs, and pretty much everything happens magically in the backend. Not that we actually have a backend — it just happens at the bottom of the iceberg. But actually, a lot of the parts of the website are not from one repository; they're a collection of different things.
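As a hedged sketch of the experimental single executable flow in node.js 20 (the flags and tooling are experimental and may change): you describe the bundle in a small config file, for example:

```json
{
  "main": "hello.js",
  "output": "sea-prep.blob"
}
```

Per the experimental docs, the preparation blob is then generated with `node --experimental-sea-config sea-config.json` and injected into a copy of the node binary (for example with the postject tool), producing a standalone executable.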
Today, what we're going to approach is the main website — the part where you download node.js or read the blog posts and so on; the actual website. But in other terms, what is the node.js website actually? Well, it's the distribution of node.js releases, the api documentation, the code coverage, the node.js status page, the human-friendly website, and also a lot of metadata that we host there — even Visual Studio Code uses this metadata, which pretty much holds all the release information of node.js. So there are a lot of really cool things in play, which is a lot of information; I'm not expecting you to look at all of those. What we're going to approach today is just the website repository. So, resources I would definitely recommend you take a look at: the website repository itself, which is also on the slides, and the contribution guidelines for the website repository — pretty much all the information you need to get started is there, from how we do tests, how we do coverage, to how you create a new component; pretty much all the guidelines are there. And there's a list of issues, with a label, that you can work on today in the workshop. So, well, I hope most of you have already opened the node.js website. And, wow, to be very honest, it looks old. That's because the original design was made more than a decade ago, and the technologies originally used in that repository are also from a decade ago — long before the times of react and JSX, or even angular; those were different times.
And exactly because of reasons like these — the infrastructure is very old and the technologies behind the website are very old — the TSC created the website redesign initiative, which pretty much had the following goals: keeping the website relevant with a modern technology stack; refreshing the look and feel of the website; providing a collaboration model that is just friendlier for newcomers — from the technologies themselves to the tooling behind them, the documentation, and the resources available for contributors; and also mitigating a few long-term pains behind the infrastructure of the website. Regarding those last points: the frameworks that were used until very recently, Handlebars and Metalsmith, are not very newcomer friendly. The framework we are using today, next.js, works pretty much out of the box — there's not much you need to learn to use it — and react is about as friendly as it gets for web development. Of course, there are other frameworks that are also great, like vue and angular. But I think — and that's a personal opinion, of course — for writing plain html and getting it rendered in your browser, react is pretty good. As for mitigating the long-term pains of the website infrastructure: well, we have a recurring issue. The very short version is that we have a server where the website runs which doesn't use any of the fancy technologies we have nowadays — CDNs, edge networks, replica nodes, or anything like that. It's just an old server which pretty much handles everything, sitting behind cloudflare. I'm not sure if any of you know what cloudflare is, but it's a content distribution network on steroids. And we have a few issues with caching: for example, every time we publish a new build of the website, the whole cache gets purged.
This causes long-term issues which, in the end, can make downloading node.js impossible, among many other issues that are very, very crude and horrible. We even had an incident last month with our infrastructure. The post-mortem of all of this, and a very interesting read on the infrastructure behind the website, are available on status.nodejs.org, including the incident itself and how it happened. There's also a blog post on nodejs.org itself; if you go to the blog page, you can read it if you're interested. I think it's a very, very interesting engineering experience, let's say. But the initiative isn't actually something new — it has existed since 2019. You might ask: well, why does the nodejs.org website still look old? Well, back then, the people behind the initiative were a little bit ahead of themselves. They tried to use cutting-edge technologies that were very new — a framework, at the time, called gatsby — and they tried to do what we call a Big Bang migration, which basically means creating a whole new code base from scratch, doing everything there, and discarding the existing node.js website. A lot of people are against this, for very practical reasons: how do you ensure a seamless migration when you pretty much discard the whole thing? Over time, the team realized that gatsby is very complex and has a lot of issues and pains. There are also issues on GitHub in the node.js organization, if you search, addressing exactly those topics, where we discuss all the problems that gatsby has, et cetera. Ultimately, it has a lot of bugs, performance issues, and maintainability issues. In the end, the whole thing just went stale, and over time the people that were most invested in working on it abandoned the project. Well, a few years passed, and the initiative was revived in 2022 — three years later, almost four. So pretty much we had to rethink our whole strategy.
What are the pain points that gatsby brings, and what are the alternatives out there? The issues with gatsby are mentioned on the left side. Ultimately, after a lot of discussion, we decided: well, let's try next.js — for many reasons, a few of them described here. The whole discussion about migrating to next.js, and why next.js, is also available on the nodejs.org repository, and it's a very interesting read too. Because as engineers, when we want to create a new project, we're like: okay, what is the best framework for my project? But there's no such thing as the best framework — only the best framework for what you need to do, right? There's always a better tool for what you need to do, but how do you know what the best tool is before even using it? So it's a very interesting read, in my opinion. And what's the current state of the website redesign? Well, first we had to switch from Metalsmith and Handlebars to next.js. That was a very big pull request, which is also available on the repository — I don't expect you to read it, because it's a very big pull request, but the link is there anyway if you want to read the discussion behind it. So first we needed to migrate the existing repository to a new framework and have the exact same pages with the exact same look and feel, the exact same features, working in the same browsers, built in the exact same way, on the exact same infrastructure. So there were a lot of requirements — wow. It's very interesting, because in the issue we explain all the shortcomings and how we made this possible. Then the next step was to adopt new technologies. For example, we want server-side rendering, which, for the people that don't know what it is, just means that you have a server — a node.js server application — that renders the whole web page on the server side and sends the client an already compiled, let's say, version of what you're seeing.
So when you open the page and you have, for example, React on the client side, the initial load is faster, because the initial HTML content you receive is already the first rendered version of your application. In other words, it's a way to skip React having to build your application the first time you open it, which makes a few things faster. It has other benefits too, but that's the gist of it. We also wanted to adopt a framework called Storybook, which lets us do manual testing. For those who don't know: yes, the site currently uses SSG; it's a fully static version of the website. We are in the process (that's point three) of migrating to Vercel; I'm going to get there. Storybook, as a framework, allows people to preview a component while you're creating it, to test it and see how it looks at the smallest unit of code. So the next step was to adopt a few modern technologies alongside Next.js, plus best coding practices for the modern era. Then comes migrating the infrastructure, where a lot of things are at play. For example, when you have a fully static SSG system, how do you create dynamic content? How do you do incremental builds? Incremental builds are the notion that instead of building the whole website at once, which can take a long time depending on how many pages you have, you build just the core pages, and as people request new pages, it keeps building them on demand, which is very interesting but brings a lot of complications. This is also on GitHub if you're curious. So the next step is to migrate to Vercel, to allow server-side rendering, middleware, routing, et cetera.
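That on-demand build idea can be sketched as a toy cache in plain JavaScript (purely illustrative, not the actual Next.js machinery):

```javascript
// Toy model of incremental builds: a route is built the first time it is
// requested and served from cache on every later request.
const buildCache = new Map();
let buildCount = 0; // how many expensive builds actually ran

function renderPage(route) {
  if (!buildCache.has(route)) {
    buildCount += 1; // the expensive build happens once per route
    buildCache.set(route, `<html><body>${route}</body></html>`);
  }
  return buildCache.get(route); // cached output is reused afterwards
}
```

The trade-off is exactly the one mentioned above: first visitors to a page pay the build cost, everyone else gets a static response.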
The main goal of this is that we can split the traffic: the very old server I mentioned before would serve only the distribution and binaries of Node.js, and the rest would serve just the website. The website gets updated often, while releases happen only once a quarter, or whenever we need to ship a new version of Node.js. There are a lot of interesting engineering topics in there, for example how we'll split the routing on Cloudflare. It's all on GitHub if you want to read it afterwards. The reason I'm mentioning all this is that, as an engineer, there are a lot of really cool things happening in this website migration. Many people imagine it's just adopting a new framework or updating the design of the pages, but for a project as complex as Node.js there are a lot of very high-level technical decisions and requirements involved. It's not as easy as you'd imagine. And finally, we want to adopt the new design. There's a website you can open right now, nodejs.dev (oops, I typed plain HTTP, hold on), which is that old website-redesign initiative built in Gatsby. It's pretty much what we want to accomplish: the whole new layout, new API docs, new everything, but in Gatsby. The goal now is to migrate the components we built in the nodejs.dev repository over to the nodejs.org repository. That's the state we're in right now, and that's what I'm going to show each of you. So what do we have to do? We need to migrate the components: each component and its dependencies must be migrated, and we need to ensure test coverage and that the component works as expected. Then we migrate the layouts, which use those components.
And the extra styling that is exclusive to each layout. Then, finally, we migrate the pages, which are Markdown pages that Next.js builds into the final HTML. All of this needs to happen without replacing or touching the existing components, pages, and layouts. So all the work being done right now adds these new pieces without actually using them yet. The moment all the pages, layouts, and components are done, we simply remove the old stuff and switch to the new, which is very neat. That's also why Storybook, that manual-testing framework, plays an important role: if no layout or page uses the component yet, how else would we actually see it, test it, and make sure it looks the way it's supposed to? And we're getting very close to starting the actual work. So, if you haven't already, install the things I mentioned before: GitHub CLI and Visual Studio Code. You don't need to clone the repository yet, but if you want, you can already clone it, install the dependencies, and read the contributing guide and the README in the meantime, because now we'll have another FAQ round and a short break; again, it's a lot of information. I'd recommend GitHub CLI from personal experience; it speeds up a lot of things. For example, you'll need to fork repositories, such as the nodejs.org repository. With plain Git, you first go to github.com and fork, then clone, then manually add the upstream remote (the original repository), then do several other things. With GitHub CLI, you paste one command and it does everything for you.
It's very useful for cloning, for checking out branches like PRs, or for forking. It has its advantages, but if you feel confident enough with plain Git, that's fine; you're not required to install GitHub CLI. The technologies we're going to use today are React, Next.js, Jest, Storybook, SCSS (which is pretty much CSS), ESLint, and Prettier. That might sound like a lot, and it is, but the nice thing is that since we're migrating components that already exist, most of the work is making sure things still work when you copy the code over: changing imports, updating a few styles, adding the Storybook stories and the unit tests. It will be an interesting experience, but the actual amount of coding is not that big. I'm not asking you, in this workshop's very limited time, to code something from scratch, but it will give you good hands-on experience with all these technologies and how they work together. And we'll all be collaborating; each person will probably work on their own issue, but if you have any questions or want to do a Live Share session or anything, I'll be here to help each of you. The next slide, if you have the slides open, is just a quick overview of what we do with each of those technologies: React, Next.js, Storybook, and Jest. If you're familiar with them already, I won't dwell on it. You also don't need to be a pro in any of these, because most of the unit tests are very simple: you just render the component and check how it behaves with a given prop. Suppose you have a button that changes its color or its value every time you click it.
What you want to test in the unit test is that when you click the button, the value actually changes to what's expected. It's very simple: just checking that the component behaves as it's supposed to. There are a lot of templates and existing tests out there to help you, and the same goes for Storybook: we already have a lot of stories and templates you can use, which are available in the contributing guidelines. I worked a lot last night and this morning updating those contributing guidelines to make everything really well explained, but I'm very open to feedback. So if anything in those guides doesn't sound straightforward, or contradicts itself, just let me know, or even make a pull request yourself to add more material or propose changes. So this is, basically, the eight steps of what we're going to do here today. You pick an issue (you can always come back to this slide), follow the instructions in the issue, which hopefully are also well explained, then migrate the component and add tests and Storybook stories, then make sure all the linting passes. There are a lot of ESLint and Prettier (I'm never sure how to pronounce it; I guess "Prettier") rules that enforce that your code follows the style we use across the whole repository. So if you do imports in the wrong order, or anything else that's out of pattern, which is completely understandable because you're new to the code base and not supposed to know how we code in this repository, it will either give you a warning or auto-correct it for you. So it's always good to run ESLint and Prettier to make sure everything's fine. Then you keep iterating. A cool thing about Storybook is that it hot-reloads.
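As a plain-JavaScript sketch of what such a unit test checks (the real tests use Jest and React; the component shape and labels here are made up):

```javascript
// Hypothetical toggle button modeled as plain state plus a click handler.
// A unit test "clicks" and asserts the visible value changed as expected.
function makeToggleButton(labels) {
  let index = 0;
  return {
    get label() { return labels[index]; },            // what the user sees
    click() { index = (index + 1) % labels.length; }, // what a click does
  };
}
```

In the repository the same assertion is written with Jest: render the component, simulate the click, check the rendered output.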
While you're coding your component, it reloads immediately, and it renders only your component, so you don't need to focus on anything outside it. Also, the styles of both repositories are compatible: each component has styles, and when you migrate them, all the variables and so on are 100% compatible, so you don't need to rack your brain over how to fix some style. Then, given the amount of time we have, I'd definitely recommend creating a draft PR early, even if it's very rough and very much a beginning, so that I can give feedback and see what you're doing. The moment you create the PR, we have a Vercel bot that automatically creates previews of the Storybook: it generates a link anyone can open containing your stories, which is really cool. And if there isn't enough time, you're not expected to finish today. Like a lot of the reading material I shared today, and a lot of the things I talked about, I don't expect you to memorize or fully understand them now, but to read them afterwards. And I'd love to connect on Discord or anywhere else to chat afterwards and hear your opinions, because you are the community, and to be very honest, we're here to serve the community. The work we do is to create a better experience for you using Node.js and everything around the Node.js ecosystem, so your opinion matters. So these are the key commands you want to keep in mind. If you want, you can already start cloning the repository; these are the exact commands you can use with GitHub CLI. First cloning, then you're going to... [Audience] Could you say the slide number please? Then it's easier to jump to the right one.
I don't really see where I am, I'm sorry, let me check... it's 38. It doesn't show... oh, now it does; I just had to move my mouse to the very corner of the page. It's 38. Thank you, Uriel. Is it Uria? How should I pronounce your name? Uria. Did I say it correctly? [Audience] Most Germans are challenged by it anyhow, it's okay. So then you fork the repository and switch to the branch. The base branch we're working on today is called "major website redesign", which is where all the redesign work is happening. It's basically a huge feature branch, so we don't pollute the main branch with all this development work. The command is git switch: you ask Git to switch to a new local branch called major website redesign, and you tell Git that its remote is the upstream repository, not your fork itself. The reason we do this is that one of the common pains of forks is keeping your branch up to date. GitHub nowadays has a button in the UI to sync forks, but I really like this command, because if you have branches you want to track directly from the source repository, that's what it does: you just tell it which remote to use, in this case upstream. Then, while on the major-website-redesign branch, you check out your own branch, the actual branch where you'll do your work. We recommend Conventional Commits for commit names and descriptions, but you don't strictly need to abide by that if you don't want to. [Audience] Did you use the GitHub CLI commands? Yeah, I'm doing it at the moment. I see from the command... oh, you used git clone. The thing is, you'll probably need to manually add the upstream remote.
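For a plain-Git clone, adding the missing upstream remote by hand looks roughly like this (a sketch; the URL is the nodejs.org repository discussed in the talk):

```shell
# Register the original repository as 'upstream' next to your fork ('origin')
git remote add upstream https://github.com/nodejs/nodejs.org.git

# Download upstream's branches so you can track them locally
git fetch upstream
```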
I think the moment you fork the repository, if you forked through the GitHub UI itself rather than GitHub CLI, it should already have the major-website-redesign branch, so you can just use git checkout major website redesign instead of the git switch command. The git switch command is useful when you have the upstream remote configured in your Git repository, because then you don't need to keep updating your fork manually, which is a very common chore when you work on your fork. Likewise, if you happened to clone only the main branch, you can always run git fetch --all, which fetches all the branches on the remote. Again, by default, a regular fork plus a regular clone will not add the upstream remote, and that's where GitHub CLI is really nice: it not only clones the repository but configures all the remotes and so on for you. So yes, this is slide 38. Another thing: once you're on your branch and you've made changes, push them to your remote with git push origin HEAD, or git push origin followed by the name of the branch. I find HEAD easier because I don't have to type the branch name; I have a bad memory, sometimes I forget it, and running a command just to get the branch name is a hassle. And before committing, because we don't use Git hooks to automate pre-commit processes, run npm run lint or npm run format. You can also use, and I didn't update the slides with this, a tool called Turborepo, by Vercel: just run npx turbo lint. Basically, prefix any command with npx turbo. The contributing guidelines are already updated with this, just not the slides. The main benefit is that it caches your commands, so sometimes linting can take some time depending on your hardware.
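Put together, the fork-and-branch workflow described above looks roughly like this (a sketch; the feature-branch and working-branch names are as spoken in the talk and may differ from the repository's actual names):

```shell
# Fork, clone, and wire up the 'upstream' remote in one step
gh repo fork nodejs/nodejs.org --clone
cd nodejs.org

# Create a local branch tracking the feature branch on upstream, not your fork
git switch -c major-website-redesign upstream/major-website-redesign

# Branch off for your own work
git switch -c feat/migrate-header

# ...edit, then lint/format before committing (Turborepo caches repeated runs)
npx turbo lint

# Push the current branch without typing its name
git push origin HEAD
```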
With this, it will just be faster. Now let's start choosing issues. Like I mentioned, we have a lot of website-redesign issues. [Audience] Would you recommend creating a branch directly from the issue, so they're automatically linked? You can. Keep in mind that the base branch needs to be major website redesign if you're working on the website-redesign issues, but yes, you can create it directly from there; that's not a problem. I should also say you don't necessarily need to work on the website-redesign issues: there are a few older issues here that are maybe just updating content and so on. But if you work on any issue that is not website redesign, then your base branch is main, not major website redesign. Just let me know which tasks you're starting on: simply go to the issue and comment on which part you're going to work. Basically, "hey, I'd like to work on this specific part of the issue" if it has bullet points, or "hey, I want to work on this issue" if it's a single item, because then I get a notification that lets me keep track of what you're all doing. So feel free to use the next minutes to choose an issue; ask questions if you need troubleshooting or anything else, and then you can start working. Again, if any concept is unfamiliar, or any issue is unclear, let me know, I'm here to help. I'm actually going through my notifications now, because there's a lot here. Also, a very important thing: even if someone already wrote "hey, I want to work on this" on an issue, if there's no PR open for it, you can work on it. It's very common in open source that people claim an issue and then either never do it or take too long. So you're free to work on whatever you want.
It's usually first come, first served: the first PR that opens and gets merged wins, to be honest. For now I'll just be sharing my GitHub tabs; you can see what I'm doing on GitHub, but you don't need to pay attention to it, it's just there if you want a sneak peek of how I work in GitHub. So, the workshop has started, but you can still ask questions about any of the previous slides if you're still reading the content, or even unrelated questions about Node.js in general, the project itself; feel free to shoot. [Audience] I see someone wants to take the footer, so I guess I can look at the header. Yeah, you can definitely pick the header. [Audience] So to create the component, I go to the components folder and add a component next to the header called "new header"? Yes, you'll create a new folder called new-header, because in this repository each component is encapsulated in a folder: all its tests, styles, et cetera live there. And since we already have Header and Footer components from the old layout, we can't use the same names, so you create a new folder called new-header. And as mentioned here, in the issue itself, some instructions are written down about what you need to do, and the links to the original components in the original repository are in the bullet points. So this is the original header component. Essentially you want to copy-paste it and change certain things. For example, we don't use Gatsby in the new repository; we use Next.js, so you can't keep this import. You'll need to search the nodejs.org repository for what we use for localized links.
Right, and as you go through the other components that exist, you'll find the name of the utility we use for localized links. You'll also notice that we don't use Font Awesome, and we don't have these logos under the exact same file names, so check the public folder of nodejs.org: is there an SVG matching the one in this repository? If yes, just use it; otherwise, create a new SVG file. As for Font Awesome: on the new nodejs.org we use another library called react-icons, so you can search for similar icons there. If you're not familiar with it, react-icons is just a packaging of well-known icon libraries, like Font Awesome and Material UI icons, which is really nice. So in most scenarios you'll have a one-to-one match for what you need, and when you don't, that's when you get creative. Some components also use hooks, and you might want to migrate those too. There are also things you won't want to carry over. For example, this exact component declares a return type of JSX.Element; in our Next.js setup we don't use the JSX types (they're not exported), so you don't even need that. And it's all explained in the contributing guidelines: which types to use for your components, the naming standards for props if you need them, et cetera. [Audience] Where did you find the file you're looking at now? This is nodejs.dev. In the issue about the header component, you see this bullet point is a link; if you click it, it takes you to the original component. Some components also have translation keys. When you need the translations, copy them: find the i18n folder in the original repository and paste the keys into the i18n folder we have in nodejs.org.
There are also guides for copying these translations in a file called translating.md, which is available here. So if you have questions about migrating translations, adding translation keys, or how to use translations, it's all documented there. Even though that person, Augustin Mauroy, mentioned they want to work on the footer, if they haven't opened a PR yet (I'm not sure if they have), you can always work on it. And even if they did, this workshop is a learning experience, so you can still do it. [Audience] I see there's an open PR for it. For the footer? For both? Oh, I see. If it's the PR I think you mean, "migrate header and footer", which has changes requested, you can ignore it: that person didn't actually migrate the components, they just renamed existing ones, misunderstanding what the task was about. I don't think they'll actually update their work, but I don't know. [Audience question about navigator.appVersion] Interesting. Well, I'd say since it's duplicated it should probably not be used, but I'm pretty sure browsers still ship it. What might happen, because user agents are also duplicated there in a certain way, is that it doesn't get updated and might not be accurate. I remember that in certain cases Windows 11 might not even report an updated version in it. In the old version of the Node.js website, which is also not very fancy, we recently rewrote this detect-environment code. It's in this file, on this line; let me share it. I'll also comment on the issue, but it pretty much addresses the navigator.appVersion question. We wrote this big chunk from line 83 to line 119. What you can do, if you want, is repurpose it, or at least the part up to line 106, and replace what's in detectOS in the utility: this switch could be replaced with lines 83 to 106, which are a bit more up to date.
It also uses a few other things, for example navigator.cpuClass and navigator.oscpu: those are very old properties that exist only in Internet Explorer and Firefox respectively, as the comment mentions. I'm not expecting people with such old browsers to still be around, and we don't support those browsers anymore. So if you want, you can just go from line 93, to be honest, for the architecture part. As for the platform, I'm pretty sure what line 86 does should be enough. For testing, if that's what you like, there are pages out there, even GitHub Gists, that have dumps of lots of user agents. You can create an array with all those user agents and write a unit test that, for each one of them, checks the expected output. That will help us handle unknown cases, where it's neither Mac, Win, nor Linux, because right now on the Node.js website we only want to auto-detect Mac, Windows, and Linux, since those are the desktop environments people will have open to download Node.js. Yes, there will be browsers on other platforms, but detectOS has only one job: detecting the familiar desktop OSes so that you can download. What it does is this: if you go here, you see that on this hero page macOS is pre-selected, because it detected that I'm on macOS. If I'm on neither of these, I think it just selects macOS by default. It's up to you what to do in the default case, whether to return unknown or null or whatever; that can be commented on and discussed in the PR. But that's its main purpose: letting this page pre-select the right option. And as you can see, we still provide links for the other environments.
And, oops, we also have this page with all the other ways of installing Node.js; just giving you the context. But feel free to do what you believe is best. Now, I'm commenting on the issue with what I wrote here. Both the snippet and the legacy main, as I mentioned, use those two old APIs, navigator.oscpu and navigator.cpuClass, which exist only in Firefox and Internet Explorer respectively. They still work there, but you probably shouldn't port them, because they're very old. The right way to determine the platform nowadays is the user agent, with a simple regex; that's what I would say. And for the architecture, 64-bit versus 32-bit or x86, the code already in the hook works, which is what I'm writing down here. [Audience] So you found issues with the lint:fix command? Do you want to share what's going on, maybe the output? But it is doing it, no? At least in package.json that's what it should be doing; it should already have the extra slashes there. Note that I actually changed those things today, so I might have forgotten to re-add the slashes. If that's the case and you found a bug, feel free to make a quick PR just adding that change to package.json; I'd be more than happy to ship it. Basically, you make a fork, use the major-website-redesign branch as the base branch (so fork and check out the branch), then create your own branch, for example fix-use-detect-os, or in this case something like feat/migrate-detect-os, for both the utility and the hook. Then you make your changes, commit, and push. When you push, by default the push command prints a link you can click to open a pull request.
If you're using the GitHub Pull Requests extension I recommended earlier, a UI opens directly in Visual Studio Code where you can create a pull request right from there, which is really neat. Indeed, it seems that when I made those changes recently I forgot to add the slashes; as you can see, the other commands there have them, but I made a boo-boo here. Let me check whether we missed anything else. But I wonder what exactly is failing in your tests: is it not getting any match, or what exactly? In your tests, on each run, you need to mock the navigator global, or not really mock it, but override the actual value of userAgent. In Jest you should be able to do that just by assigning navigator.userAgent = something, but I'm not sure; I haven't done that in a while, and I bet a quick search will turn up ways, probably jest.mock or similar. [Audience reports an unexpected result] Oh, that's actually correct: X11 is just the windowing system for Linux, so it should return Linux. Or what is it actually returning? Mobile? You got me. Let me see what regex we actually have. It's giving you "Mobi", for mobile: it matched the word Mobi. I'm just very curious to see; there's always a simple way to test this. The user agent is supposed to be a string, right? navigator.userAgent, is it always a string? Let's see what happens if I try... yes, it's a string. I'm just curious because I don't see anything really wrong; when I test the one you sent, it gives me X11. Honestly, what I would do is remove the Unix and Mobile entries from this type. Just keep Mac, Win, and Linux, and everything else falls into an "other" case, I guess.
What you could do, for example, is return Linux when it matches X11, but you don't really need to match X11 at all, because this user agent contains both X11 and the word Linux; X11 can be a little tricky, so you can just remove the X11 part of the regex and match on Linux. In my opinion, that gives you what you want. Let me look at some more. For example, this one is actually mobile: this one I just copied gives Mac, but it's an iPad. To be fair, it has the word Mobile at the end, but the regex takes the first match: first we have Win, then Mac, and we no longer have a Mobile entry. It's also about the order in which you put these alternatives: if we add Mobile here... oh, yes, the regex matches based on the first matching string in the user agent, so Mobile comes too late. To be honest, I don't mind false positives; we don't need to over-engineer here. If it says Mac on an iPad, fine; the worst that happens is it shows you're on Mac, which you kind of are, and if you click download, well, it's not like you can install Node.js on your iPad, so nothing really happens. It's a best-effort algorithm. What really matters is that we get at least one match; in the end this is a nice-to-have. So, recapping: remove the X11 and Mobi entries, reduce the enum to Mac, Win, and Linux, and mind the ordering, with everything else falling into the default case. Yes, Android will always match Linux, which is technically true, and fine. So you have Mac, Win, Linux, and "other" for everything else. And yes, we'll have false positives: Mac when it's iOS, Linux when it's Android, but it is what it is.
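The simplified detection we just settled on (only Mac, Win, Linux, everything else "other", false positives accepted) can be sketched like this; the function name and return values are illustrative, not the repository's actual API:

```javascript
// Best-effort OS detection from the user agent string. The regex relies on
// first-match position: an iPad UA contains "Mac OS X" and yields 'Mac',
// an Android UA contains "Linux" and yields 'Linux' -- accepted false positives.
function detectOsFromUserAgent(userAgent) {
  const match = userAgent.match(/(Win|Mac|Linux)/);
  return match ? match[1] : 'other';
}
```

This pairs naturally with the table-driven test mentioned earlier: loop over an array of real user agent strings and assert the expected value for each.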
I think it's better for us to keep it simple rather than trying to match every kind of user agent, because, oh boy, there are a lot of user agents out there. And thank you, everybody. Have a good one.
85 min
21 Apr, 2023
