GPU Accelerating Node.js Web Services and Visualization with RAPIDS


The expansion of data size and complexity, broader adoption of ML, as well as the high expectations put on modern web apps all demand increasing compute power. Learn how the RAPIDS data science libraries can be used beyond notebooks, with GPU accelerated Node.js web services. From ETL to server side rendered streaming visualizations, the experimental Node RAPIDS project is developing a broad set of modules able to run across local desktops and multi-GPU cloud instances.

26 min
20 Jun, 2022



AI Generated Video Summary

Welcome to GPU Accelerating Node.js Web Services and Visualization with RAPIDS. RAPIDS aims to bring high-performance data science capabilities to Node.js, providing a streamlined API to the RAPIDS platform without the need to learn a new language or environment. GPU acceleration in Node.js enables performance optimization and memory access without changing existing code. The demos showcase the power and speed of GPUs and RAPIDS in ETL data processing, graph visualization, and point cloud interaction. Future plans include expanding the library, improving developer UX, and exploring native Windows support.

1. Introduction to GPU Acceleration and Node Rapids

Short description:

Welcome to GPU Accelerating Node.js Web Services and Visualization with RAPIDS. RAPIDS is an open-source GPU-accelerated data science platform, and Node RAPIDS is an open-source modular library of RAPIDS-inclusive bindings in Node.js. Our main goal is to accelerate data science and visualization pipelines fully in JavaScript and TypeScript, and bring GPU acceleration to a wider variety of Node.js and JS utilities.

Hi, and welcome to GPU Accelerating Node.js Web Services and Visualization with RAPIDS. I'm Allan Enemark, and I'm the lead of the RAPIDS viz team here at NVIDIA.

So, RAPIDS is an open-source GPU-accelerated data science platform, and you can find more details at rapids.ai. And Node RAPIDS, which is the project I'm going to be talking about, is an open-source modular library of RAPIDS-inclusive bindings in Node.js, as well as some other complementary methods for supporting high-performance browser-like visualizations. It's currently in technical preview, but you can find more details about it in the rapidsai/node repository on GitHub.

And really, our main goal with this framework is creating something that can accelerate data science and visualization pipelines fully in JavaScript and TypeScript, which is something traditionally done mostly in, say, Python. Our second goal is bringing GPU acceleration to a wider variety of Node.js and JS utilities, since we feel the general community doesn't get as much access to these high-performance tools as we'd like.

2. Introduction to Node Rapids

Short description:

RAPIDS provides data science libraries, machine learning algorithms, and visualization tools. It is traditionally used with Python and C++, but can also be used on Windows through WSL 2. In the viz ecosystem, libraries like cuxfilter and DataShader are used for creating dashboards and server-side rendering. Node RAPIDS aims to bring high-performance data science capabilities to Node.js, allowing developers to leverage existing JS vis libraries and accelerate their applications. It provides a streamlined API to the RAPIDS platform without the need to learn a new language or environment.

So what do you get with RAPIDS, which is traditionally Python and C++? You get data science libraries such as DataFrame operations in cuDF; cuML, with a lot of GPU-accelerated machine learning algorithms; cuGraph for graph stuff; spatial, signal, and the like, with more being developed and continuously improved. The caveat is that these are mainly for Linux-based systems, so if you want to use Windows with them, you can, but it has to be through WSL 2.

So what kind of libraries in the viz ecosystem traditionally happen in Python? We have our own cuxfilter, a notebook-based cross-filtering tool where you can create dashboards in a few lines of Python code and then very quickly interact with hundreds of millions of rows of data in a pretty customizable way. And we make extensive use of another great viz library out there, DataShader, which is great at server-side rendering hundreds of millions of points. All of this is GPU-accelerated, and it's part of this great ecosystem of viz and analytics tools that lie on a spectrum between back-end C, C++, and Python on one end and front-side JS on the other. When it comes to data science and compute, the performance-focused analytics side, it all starts on the Python and C++ side and then gets translated into JavaScript for interface work, or just stays in JavaScript; there are some tools that sit in an intermediary spot, but really it starts there and ends up in JavaScript. What we're proposing is the inverse. We're going to start with the JS libraries and bring them back to this higher-performing back end in Node.js, giving them access to CUDA, cuDF, cuGraph, and all those sorts of things.

Our experience with this started a while ago, when we were making lots of demos for RAPIDS. In one case we built a great mortgage visualization with deck.gl and React, and the interface was fast and nice and fit all kinds of different screens, but the backend was a mess. We had multiple languages and multiple servers. It became unsustainable, and we basically gave up and said, well, let's just do it in Python and notebooks. But deep down we were sad, because there are all these great JS viz libraries, and the customization abilities you get from using them, that we were missing out on. It's a shame, because you end up with two continental divides: Python and C++ on one side, JavaScript and TypeScript on the other, and a chasm between them that separates their capabilities. On one side you get direct access to hardware; most of the HPC (high-performance computing), data science, and compute libraries live in this space. It's not the best usability, with a high learning curve, but this is the place to go for high performance. On the other side you have JavaScript and TypeScript, with the nice browser environment that's great for shareability, accessibility, and compatibility, and, in my opinion, somewhat more refined visualization and interface libraries. But you don't get that performance, because you're bounded by the browser sandbox. It's a shame, because the data scientists, engineers, and front-end folks on either side are siloed, yet they could mutually benefit from each other's tooling and experience. Hence Node RAPIDS, where we're hoping to give the Node.js dev community a streamlined API to a high-performance data science platform, RAPIDS, without the need to learn a new language or environment.
You can then leverage RAPIDS and Node.js features, accelerate the catalog of great JS viz libraries already out there without major refactoring, and run locally or through cloud instances. It's well suited for accelerated viz apps and Node service apps, and it helps these two communities work more closely together. So that's the high ideals; what's the actual meat and bones of this thing? Well, here it is: Node RAPIDS. It's a very modular, buffet-style library, where you pick and choose what you need for your use case. It's organized into a few main categories, the main one being memory management, which gives you access to CUDA and GPU memory.

3. GPU Acceleration and Architecture

Short description:

We have a SQL engine for multi-node and multi-GPU processing. The graphics column takes advantage of WebGL as a subset of OpenGL. GPU video encoding and WebRTC enable server-side rendering and interaction with JS. GPU hardware architecture allows for single or multiple GPUs, NVIDIA NVLink for expanded memory and compute power, and traditional cloud architecture with load balancing. Tasks can be separated for each GPU, allowing multiple users and heavy workloads. Consumer-grade GPUs are limited to three NV encoding streams. A multi-GPU, server-side running, multi-user system with massive compute power is possible with off-the-shelf JavaScript. Examples and demos will be shown.

We also have a really nice SQL engine that we bind to, which enables multi-node and multi-GPU work when needed. Then there's a whole data science wing, all from RAPIDS, so you have your cuDF and cuGraph stuff. And then this is the graphics column here, where we're taking advantage of the fact that WebGL is a subset of OpenGL. Really what we're doing with these bindings is letting you take your WebGL code, run it in OpenGL, and get the benefit of that performance increase from OpenGL. We're also doing things with GLFW bindings and node processes. But again, you can now take your luma.gl, deck.gl, or Sigma.js code (and we're hoping to add Two.js and Three.js) and basically run it in OpenGL without much effort.

Another component to this: since you're already on the GPU, you get the benefit of GPU video encoding. By taking advantage of that and using WebRTC, you can do server-side rendering, stream the result over to the browser, and have the browser-side JS interact with it like a video tag, which keeps the client lightweight. It enables a lot more capabilities in that sense. Or you can just do things like interacting with all of this in a notebook.

So, what do we mean by architecture? GPU hardware is a little bit different. It's pretty straightforward when you have a single GPU and you're just doing client-side rendering: all the compute happens on the GPU, you send the computed values over, and the client JS renders those few values. Pretty straightforward, and where GPUs excel. They excel so much that you can have multiple users accessing the same GPU data, and it's fast enough to handle that. Or, if you have particularly large data, NVIDIA has NVLink, so you can link multiple GPUs together and get expanded memory and compute power that way. Or you can go with a more traditional cloud architecture, where you have multiple separate GPUs, lots of child processes running on each GPU, and a load balancer running across all of them, with lots of instances of people accessing it. Basically, whichever GPU is free at that moment is the one serving up the information to that user. Pretty straightforward; you still tend to need a lot of GPUs, but it's not terribly unfamiliar.

Now, taking advantage of the server-side rendering and streaming components: if you have some pretty heavy workloads, you can separate out those tasks per GPU. One can do solely the compute work, and another the rendering and encoding for the client-side video. Again, it can do this well enough that multiple people can access the same system and the same data. The caveat is that consumer-grade GPUs are, I believe, limited to three NVENC encoding streams at once, so keep that in mind. Something kind of interesting and new, a little more on the wishful-thinking-but-possible side, is a multi-GPU, server-side, multi-user system, in this case something like a DGX, which has 8 GPUs. You can divvy up the compute and rendering between them, you have tons of GPU memory, and you basically have this mini supercomputer that you can access and control with off-the-shelf JavaScript, which is kind of wild. You'd normally think you need some sort of HPC-type software, with all the overhead of learning that stuff, but really there's nothing stopping you from leveraging a system this powerful with your run-of-the-mill JavaScript, which we think is pretty cool. But anyways, now for the good part: examples and demos. For this first one we're going to keep it simple.

4. ETL Data Processing in Node Rapids

Short description:

We'll demonstrate basic ETL data processing in a Node.js notebook. Using a 1.2GB dataset of US car accidents, we'll load it into the GPU, perform operations like filtering, parsing temperature data, and applying regex operations with cuDF. The dataset has 2.8 million rows with 47 columns. The entire filtering operation took only 20 milliseconds, and the regex operation took 113 milliseconds on the GPU. This showcases the power and speed of GPUs and RAPIDS.

We're basically just going to show off some basic ETL data processing in a notebook. The notebook runs Node.js and JS syntax, but we're using it as a placeholder for very common Node services, like batch jobs you'd need for parsing logs or serving up subsets of data, things like that.

So in this case we're going to do it live. Right now I have a Docker container running our Node RAPIDS instance, and a notebook. Here I have JupyterLab; if you're not familiar, it's sort of the data science IDE. And you can see all we're doing, like in any Node app, is require our @rapidsai/cudf module. We're going to load a 1.2-gigabyte dataset of US car accidents from Kaggle and read the CSV into the GPU. And it was actually extra fast today: it took under a second, which is kind of wild considering this is a 1.2-gig file. So how big is this? It's 2.8 million rows with 47 columns. For us, not that big a dataset, but for people typically used to working with data in Node.js, it's pretty good, and how fast and responsive it is is kind of impressive.

So we're going to look at the headers. It's sort of messy, and we'll go through this pretty quickly, but basically we get rid of the columns we don't need. Then we look at what's left and say, all right, there's some temperature data in here. Let's parse it and see what the ranges are as a sanity check. And sure enough, there are some really wonky values in there, as you always have with data work, so let's bracket it to something sensible. This whole filtering operation over 2.8 million rows took 20 milliseconds, which is kind of wild. Now this is looking better: we have basically 2.7 million rows left, and we can start doing some other operations. In this case we're going to stringify a column, and with cuDF you get regex operations, so we run a pretty complicated regex that masks out commonalities between these terms. And again, over those 2.7 million rows, it took 113 milliseconds. So you can see how powerful and fast it is on a GPU. At the end of this you find that, yeah, when there's cloud and rain, the accidents get more severe. But this is just the first showcase of what you can do with GPUs and RAPIDS.
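As a rough illustration of the pipeline just described, here is a CPU-side stand-in in plain Node.js. The real demo runs these steps on the GPU through `@rapidsai/cudf` (CSV read, column filters, string regex); the rows and helper functions below are made up for illustration and are not the node-rapids API.

```javascript
// CPU-side stand-in for the cuDF pipeline described above.
// Synthetic rows standing in for the Kaggle accidents CSV.
const rows = [
  { severity: 2, temperatureF: 71.0, weather: 'Light Rain' },
  { severity: 3, temperatureF: 210.0, weather: 'Overcast' },  // wonky value
  { severity: 4, temperatureF: 33.5, weather: 'Heavy Rain' },
  { severity: 1, temperatureF: -89.0, weather: 'Clear' },     // wonky value
];

// "Bracket" the temperature column to a sensible range,
// like the 20 ms filter step in the demo.
function bracketTemps(data, lo, hi) {
  return data.filter(r => r.temperatureF >= lo && r.temperatureF <= hi);
}

// Regex over a string column, like the cuDF string/regex step
// that masked out commonalities between weather terms.
function matchesWeather(data, pattern) {
  return data.map(r => pattern.test(r.weather));
}

const sane = bracketTemps(rows, -30, 115);       // drops the two wonky rows
const rainy = matchesWeather(sane, /rain|cloud/i);
console.log(sane.length, rainy);
```

The shape of the code is the point: the same filter-then-match flow, expressed on GPU DataFrames, is what runs in tens of milliseconds over millions of rows.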

5. GPU Acceleration in Node.js

Short description:

Now you can access GPU acceleration in Node.js, enabling performance optimization and memory access without changing existing code. We demonstrated this using Sigma.js, a graph rendering library, and a geospatial visualization with deck.gl and an Uber dataset of 40 million rows. With GPU memory loading and server-side compute, multiple instances can access the same GPU and dataset, providing real-time results and unique views of the data.

And now you get access to it in Node.js, which we think is pretty neat, in JS syntax, which is kind of cool. So we're going to step it up a bit more. The next demo uses Sigma.js, which, if you're not familiar, is a client-side-only graph rendering library. It's pretty performant and pretty neat, with a lot of functionality. What we did is basically take the graph loading out of system memory, load the graph into GPU memory instead, and then serve or stream it into the client-side app.

So in this case, ignore the numbers; it's actually 1 million nodes with 200,000 edges, and you can see it zooms and pans buttery smooth, even though it's WebGL. You get that great interactivity and all that, but the data is loaded in GPU memory, which means you're not bottlenecked by web browser limitations. Without a GPU, you're basically limited to 500,000 nodes, and then the tab craps out on you. As you can see here, it's using the GPU for the nodes and edges: straightforward Sigma.js with very few changes. What we're hoping to do in the future is enable lots of Sigma.js users to get that performance optimization and memory access without really needing to change much of how their code works.
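One reason serving graph data from the server stays cheap is that node positions can be shipped to the client renderer as one compact binary buffer instead of per-node JSON. A small sketch of that idea; the interleaved `[x0, y0, x1, y1, …]` layout is an assumption for illustration, not Sigma.js's or node-rapids' actual wire format.

```javascript
// Pack graph node positions into a single Float32Array so a server
// can stream them to a client renderer as one binary buffer.
function packPositions(nodes) {
  const buf = new Float32Array(nodes.length * 2);
  nodes.forEach((n, i) => {
    buf[2 * i] = n.x;       // interleave x...
    buf[2 * i + 1] = n.y;   // ...and y per node
  });
  return buf;
}

const nodes = [{ x: 0.5, y: -1 }, { x: 2, y: 3 }];
const packed = packPositions(nodes);
console.log(packed.length); // 4 floats for 2 nodes
```

At 8 bytes per node this is a fraction of the equivalent JSON, which matters when a million nodes have to cross from server to browser.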

For this next one, we're doing something similar: server-side compute with client-side rendering of a geospatial visualization. It uses deck.gl and an Uber dataset of 40 million rows. So, you know, not bad; still not that big for us. And you can see it's all computed into sources and destinations, so by clicking on each of these areas, you're computing how many of those trips went where, and at what time. Clicking on each of these is essentially instantaneous, and you get the results from 40 million rows. It's so fast you can basically just start scrubbing the values and still get that interaction back. What this means is you can get away with having multiple instances accessing the same GPU and the same dataset. And because this is a React app, the state is managed on the client side, so they all get their own unique view of the data. But you're still querying that same GPU, and because it's so fast, the values come back basically in real time. Again, like I said, where the GPU really excels is doing the server-side compute and streaming over the values, so you can get away with quite a lot, even on a single GPU. The next one is a bit more complicated; you can see we're building up modules as we go.
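The query behind each map click boils down to a grouped count over the trips table. Here's a tiny CPU stand-in for that aggregation; in the demo the groupby runs on the GPU with cuDF over 40 million rows, and the field names below are invented for illustration.

```javascript
// Given a picked origin cell, count trips per destination,
// the kind of query each map click issues to the server.
const trips = [
  { src: 'A', dst: 'B' },
  { src: 'A', dst: 'B' },
  { src: 'A', dst: 'C' },
  { src: 'B', dst: 'A' },
];

function tripsFrom(data, origin) {
  const counts = {};
  for (const t of data) {
    if (t.src !== origin) continue;
    counts[t.dst] = (counts[t.dst] || 0) + 1;
  }
  return counts;
}

console.log(tripsFrom(trips, 'A')); // { B: 2, C: 1 }
```

Because only the small result object crosses the wire, many clients can hammer the same GPU-resident table while each keeps its own client-side view state.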

6. GPU Acceleration and Visualization Demos

Short description:

This part showcases a video-streamed, server-side rendered graph visualization using luma.gl, OpenGL, and NVENC encoding. It demonstrates filtering and querying a dataset of 1 million edges and nodes in real time. The ForceAtlas2 layout algorithm is applied to compute the layout for each node and edge, streamed to the browser as video. Additionally, a point cloud dataset is used to showcase the ability to pan, zoom, and interact with the data. There are more demos and examples available, including graph renderings, deck.gl demos, and a multi-GPU SQL engine query demo.

This one is a video-streamed, server-side rendered and computed graph visualization, which is then streamed to the client as video. You can see we're using luma.gl, a JavaScript library, converting it to OpenGL, encoding that with NVENC, and streaming it with WebRTC.

We're going to use cuDF for the data frame and cuGraph for the graph layout compute. You can see we're starting to wrap this in a bit more of a GUI app. In this case we're just using a sample dataset: we select the edges and colors, and then render it. You get this giant sort of hairball, but it's 1 million edges and 1 million nodes that you can filter and query in real time. You can see how quickly we can query, clear the filter, and be back to a million nodes and a million edges. You can basically tooltip and hover over each of these.

Now, visually it doesn't make that much sense, but this is more of a performance benchmark, showing that you can still pick out each individual node. And now we're going to run ForceAtlas2 in real time. ForceAtlas2 is a layout algorithm, so it's computing the layout for every single node and every single edge, with all the forces being calculated, in real time. And it's doing this on, again, a million edges and a million nodes, all streamed as video to the browser. You can see it moving through the iterations in real time, which is pretty wild. Again, this is just a single GPU and you're able to interact with this much data. And I kind of love looking at this; it's like looking at the surface of the sun, which is pretty neat. For the last one, it's very similar, except we're using point cloud data instead. We don't need to do any graph layout, but it's the same video-streaming setup. In this case it's a point cloud of a building scan, so you can pan and zoom and interact with it. And because GPUs are pretty powerful, I'm just going to duplicate this tab and have multiple instances of the same dataset from the same GPU, each of which I can interact with uniquely per instance, because of how the data is handled. And again, this is a video stream. Pretty cool; you can use a single GPU for a lot.
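To give a feel for what a layout iteration computes, here is a heavily simplified force-directed step in plain JavaScript: pairwise repulsion between all nodes plus attraction along edges. This is not ForceAtlas2's actual force model (cuGraph implements the real algorithm on the GPU); the constants and formulas are illustrative only.

```javascript
// One simplified force-directed layout step: every node repels every
// other node, and every edge pulls its endpoints together.
function layoutStep(positions, edges, repulsion = 0.01, attraction = 0.1) {
  const next = positions.map(p => ({ ...p }));
  // Pairwise repulsion, falling off with squared distance.
  for (let i = 0; i < positions.length; i++) {
    for (let j = 0; j < positions.length; j++) {
      if (i === j) continue;
      const dx = positions[i].x - positions[j].x;
      const dy = positions[i].y - positions[j].y;
      const d2 = dx * dx + dy * dy || 1e-6; // avoid divide-by-zero
      next[i].x += (repulsion * dx) / d2;
      next[i].y += (repulsion * dy) / d2;
    }
  }
  // Attraction along each edge, proportional to the offset.
  for (const [a, b] of edges) {
    const dx = positions[b].x - positions[a].x;
    const dy = positions[b].y - positions[a].y;
    next[a].x += attraction * dx;
    next[a].y += attraction * dy;
    next[b].x -= attraction * dx;
    next[b].y -= attraction * dy;
  }
  return next;
}

const pos = [{ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 0, y: 1 }];
const moved = layoutStep(pos, [[0, 1], [1, 2]]);
```

Run repeatedly, steps like this pull connected nodes together while pushing all nodes apart, which is what you see animating through the iterations in the demo, just at a million-node scale on the GPU.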

That's a lot, and we have even more demos and examples; this is where to find them. We have graph renderings with GLFW, all the deck.gl demos with locally rendered OpenGL, and a really good multi-GPU SQL engine query demo, where we query all of English Wikipedia using multiple GPUs.

7. Future Plans and Community Engagement

Short description:

We have examples of UMAP clustering and spatial quadtree algorithms. Our future plans include continuing demos, external library bindings, specialized applications in cyber, graph, geospatial, and point cloud, and broader community adoption. We aim to bridge the gap between JavaScript and Node.js with better developer UX, npm installation, turnkey visualization applications, and Windows WSL 2 support. We're also exploring the possibility of native Windows support. Come check out our GPU-accelerated data science services and visualization with Node.js and RAPIDS. We're still in technical preview and welcome feedback on interesting use cases.

We have examples showing the UMAP clustering algorithm and a spatial quadtree algorithm. They're pretty cool; I recommend you check them out. Even if you don't install anything, we have links to YouTube videos of the demos.

So what's next? Going forward, we have three main guiding roadmap ideas. We're going to continue doing these demos and working on external library bindings; again, we're looking forward to Three.js and some more work with Sigma.js. From there, we'll start building more specialized applications, specifically around cyber, graph, geospatial, and point cloud use cases, maybe something a little easier for turnkey usage. And then, hopefully, broader community adoption, specifically in non-viz use cases. We're biased, being the viz team, toward viz stuff, but we know that providing these CUDA bindings opens up a lot more functionality that we're just not thinking of because we're not aware of it. So we'd love to hear back from the community about novel ways to use this.

So yeah, RAPIDS is a pretty incredible framework, and we want to bring this capability to more developers and more applications. We feel the JavaScript and Node.js dev communities can take advantage of some of the learnings and performance from the data science community, so what we're trying to do is bridge the gap between the two, in this case with a lot of viz use cases. Here's what we're working on next. Mainly, better developer UX: it's a bit of a bear to install right now, and if you're not familiar with Docker images there can be a bit of a learning curve, so we're hoping to make this installable through npm. It will be a bit of work for us, but we know it will make it a lot more approachable to JavaScript devs. Like I said, we're going to make a couple more visualization applications that are a bit more turnkey for general analysts. And hopefully we'll get full Windows WSL 2 support: currently you can run Node RAPIDS on WSL 2 on Windows, but there's no OpenGL support from NVIDIA yet (they're working on it), so you can do the compute-side stuff but not the render-side stuff. And then maybe native Windows support. That's a bit of a long shot, but it would be pretty neat for us to have, so you wouldn't need WSL 2 or anything. So with that, if any of this sounds interesting or has piqued your curiosity, come check it out: GPU-accelerated data science services and visualization with Node.js and RAPIDS. We're still in technical preview, working out the kinks, and it's still pretty new, but we'd love to hear feedback about all the interesting use cases you might have for us, and we're very responsive to that stuff.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

React Advanced Conference 2021React Advanced Conference 2021
27 min
(Easier) Interactive Data Visualization in React
If you’re building a dashboard, analytics platform, or any web app where you need to give your users insight into their data, you need beautiful, custom, interactive data visualizations in your React app. But building visualizations hand with a low-level library like D3 can be a huge headache, involving lots of wheel-reinventing. In this talk, we’ll see how data viz development can get so much easier thanks to tools like Plot, a high-level dataviz library for quick
easy charting, and Observable, a reactive dataviz prototyping environment, both from the creator of D3. Through live coding examples we’ll explore how React refs let us delegate DOM manipulation for our data visualizations, and how Observable’s embedding functionality lets us easily repurpose community-built visualizations for our own data
use cases. By the end of this talk we’ll know how to get a beautiful, customized, interactive data visualization into our apps with a fraction of the time

Node Congress 2022Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk

Node Congress 2022Node Congress 2022
34 min
Out of the Box Node.js Diagnostics
In the early years of Node.js, diagnostics and debugging were considerable pain points. Modern versions of Node have improved considerably in these areas. Features like async stack traces, heap snapshots, and CPU profiling no longer require third party modules or modifications to application source code. This talk explores the various diagnostic features that have recently been built into Node.
You can check the slides for Colin's talk

6 min
Charlie Gerard's Career Advice: Be intentional about how you spend your time and effort
Featured Article
When it comes to career, Charlie has one trick: to focus. But that doesn’t mean that you shouldn’t try different things — currently a senior front-end developer at 
, she is also a sought-after speaker, mentor, and a machine learning trailblazer of the JavaScript universe. "Experiment with things, but build expertise in a specific area," she advises.
What led you to software engineering?
My background is in digital marketing, so I started my career as a project manager in advertising agencies. After a couple of years of doing that, I realized that I wasn't learning and growing as much as I wanted to. I was interested in learning more about building websites, so I quit my job and signed up for an intensive coding boot camp called General Assembly. I absolutely loved it and started my career in tech from there.

What is the most impactful thing you ever did to boost your career?
I think it might be
public speaking
. Going on stage to share knowledge about things I learned while building my side projects gave me the opportunity to meet a lot of people in the industry, learn a ton from watching other people's talks and, for lack of better words, build a personal brand.

What would be your three tips for engineers to level up their career?
Practice your communication skills. I can't stress enough how important it is to be able to explain things in a way anyone can understand, but also communicate in a way that's inclusive and creates an environment where team members feel safe and welcome to contribute ideas, ask questions, and give feedback. 
In addition, build some expertise in a specific area. I'm a huge fan of learning and experimenting with lots of technologies but as you grow in your career, there comes a time where you need to pick an area to focus on to build more profound knowledge. This could be in a specific language like JavaScript or Python or in a practice like accessibility or web performance. It doesn't mean you shouldn't keep in touch with anything else that's going on in the industry, but it means that you focus on an area you want to have more expertise in. If you could be the "go-to" person for something, what would you want it to be? 

And lastly, be intentional about how you spend your time and effort. Saying yes to everything isn't always helpful if it doesn't serve your goals. No matter the job, there are always projects and tasks that will help you reach your goals and some that won't. If you can, try to focus on the tasks that will grow the skills you want to grow or help you get the next job you'd like to have.

What are you working on right now?
Recently I've taken a pretty big break from side projects, but the next one I'd like to work on is a prototype of a tool that would allow hands-free coding using gaze detection. 

Do you have some rituals that keep you focused and goal-oriented?
Usually, when I come up with a side project idea I'm really excited about, that excitement is enough to keep me motivated. That's why I tend to avoid spending time on things I'm not genuinely interested in. Otherwise, breaking down projects into smaller chunks allows me to fit them better in my schedule. I make sure to take enough breaks, so I maintain a certain level of energy and motivation to finish what I have in mind.

You wrote a book called
Practical Machine Learning in JavaScript.
What got you so excited about the connection between JavaScript and ML?
The release of TensorFlow.js opened up the world of ML to frontend devs, and this is what really got me excited. I had machine learning on my list of things I wanted to learn for a few years, but I didn't start looking into it before because I knew I'd have to learn another language as well, like Python, for example. As soon as I realized it was now available in JS, that removed a big barrier and made it a lot more approachable. Considering that you can use JavaScript to build lots of different applications, including augmented reality, virtual reality, and IoT, and combine them with machine learning as well as some fun web APIs felt super exciting to me.

Where do you see the fields going together in the future, near or far? 
I'd love to see more AI-powered web applications in the future, especially as machine learning models get smaller and more performant. However, it seems like the adoption of ML in JS is still rather low. Considering the amount of content we post online, there could be great opportunities to build tools that assist you in writing blog posts or that can automatically edit podcasts and videos. There are lots of tasks we do that feel cumbersome that could be made a bit easier with the help of machine learning.

You are a frequent conference speaker. You have your own blog and even a newsletter. What made you start with content creation?
I realized that I love learning new things because I love teaching. I think that if I kept what I know to myself, it would be pretty boring. If I'm excited about something, I want to share the knowledge I gained, and I'd like other people to feel the same excitement I feel. That's definitely what motivated me to start creating content.

How has content affected your career?
I don't track any metrics on my blog or likes and follows on Twitter, so I don't know what created which opportunities. Creating content to share something you built improves the chances of people stumbling upon it and learning more about you and what you like to do, but that's not guaranteed. I think over time I accumulated enough projects, blog posts, and conference talks that some conferences now invite me, so I don't always have to apply anymore. I sometimes get invited on podcasts and asked if I want to create video content, things like that. Having a backlog of content helps people better understand who you are and quickly decide if you're the right person for an opportunity.

What pieces of your work are you most proud of?
It's probably that I've managed to develop a mindset where I set myself hard challenges on my side projects, and I'm not scared to fail and push the boundaries of what I think is possible. I don't have a particular favorite project; it's more the creative thinking I've developed over the years that I believe has become a big strength of mine.

Follow Charlie on Twitter
