useMachineLearning… and Have Fun with It!


Machine learning is seen by many as the next step in artificial intelligence towards a new stage of human evolution, and thus it helps us find new approaches to solving real-world problems. Phew... That sounds complex... And how is that supposed to be fun? Well, beyond the big issues of our time, it is ultimately just another tool that we can play with. While it is important to first understand the core concepts of machine learning, we can quickly go way beyond that. Get ready for some unexpected examples of how to get started with machine learning in your React application!

Nico Martin
9 min
06 Jun, 2023


Video Summary and Transcription

Nico, a freelance frontend developer and part of the Google Developer Experts program, provides an introduction to machine learning in the browser. He explains how machine learning differs from traditional algorithms and highlights the use of TensorFlow.js for implementing machine learning in the browser. The talk also covers the use of different backends, such as WebGL, and the conversion of audio into spectrograms for model comparison. Nico mentions the use of overlapping analysis windows for improved detection accuracy and the availability of speech command detection and custom model training with TensorFlow. Overall, the talk emphasizes the benefits of using and training machine learning models directly on the device.


1. Introduction to Machine Learning in the Browser

Short description:

Hi, everyone. I am Nico, a freelance frontend developer from Switzerland and part of the Google Developer Experts program. Today, I will give you a short introduction to machine learning in the browser. Classically, we define rules and conditions for our algorithms, but machine learning takes a different approach: it trains algorithms with input and output data. TensorFlow.js allows us to use machine learning directly in the browser with JavaScript.

Hi, everyone. My name is Nico. I am a freelance frontend developer from Switzerland. I'm also part of the Google Developer Experts program for web technologies, which basically means that I just spend way too much of my free time just playing around with all kinds of new browser technologies.

And today I am here to give you a short introduction to machine learning in the browser. In the past years I have given quite a lot of talks, mostly in English, some in German, but only two talks in Bärndütsch, which is our local Swiss German dialect. In September 2021, I gave my first ever talk in Swiss German, which luckily was recorded. So let me just show you a short clip of that. And so on and so forth. As you can see, I actually managed to use the words schlussendlich and im Endeffekt over 35 times in about 30 minutes, which was extremely annoying to me afterwards. Both words basically mean finally or in the end.

Now, in February 2023, my second talk in Bärndütsch was just around the corner, and it was enormously important to me to somehow stop this habit. So I was looking for ways to detect those words while I am talking. The most obvious option would be the Web Speech API for voice recognition in the browser. The problem is that it works quite well for German, but not for Swiss German, let alone Bärndütsch. But then again, voice recognition is nothing more than a machine learning model, right? And can't we run those directly in the browser? Of course we can.

So in this lightning talk I won't be able to dive deep into the details, but I do want to give you a quick overview. At its core, machine learning is a completely different approach to writing algorithms. Classically, when we write an algorithm to solve a problem, we define a set of rules and conditions, pass it an input, and get an output. That works quite well for simple problems, but as soon as we have more complex input data, we need a new way to process it.

Machine learning takes a different approach. Here the idea is that we train the algorithm with predefined inputs and outputs, and the algorithm finds the patterns itself. So we have a lot of input data along with the expected output, and the machine learns to predict the expected output for similar inputs. This trained algorithm is the core of machine learning, and it is called a model.

And that is where TensorFlow comes into play. TensorFlow is an end-to-end open-source machine learning platform that allows you to use existing pre-trained models, but also to train new models or extend existing ones for your own use case. And since 2019, with TensorFlow.js, we can even use it directly in the browser with JavaScript. Like any machine learning task, TensorFlow.js depends on quite complex mathematical operations. Those operations are processed in so-called backends.
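The contrast between the two approaches can be sketched in a few lines of plain JavaScript. This is a toy illustration, not anything from the talk: instead of hand-writing a rule, we "train" a trivial one-parameter classifier by searching for the threshold that best fits labeled examples.

```javascript
// Classic approach: the rule is fixed by the programmer.
function isLongWordClassic(word) {
  return word.length > 8;
}

// Machine-learning approach (toy version): the rule — here a single
// length threshold — is *learned* from labeled examples instead of
// being written by hand.
function learnThreshold(examples) {
  // examples: [{ input: number, label: boolean }]
  let best = { threshold: 0, errors: Infinity };
  for (let t = 0; t <= 20; t++) {
    const errors = examples.filter((e) => (e.input > t) !== e.label).length;
    if (errors < best.errors) best = { threshold: t, errors };
  }
  return best.threshold;
}

// Training data: inputs together with the expected output.
const trainingData = [
  { input: 3, label: false },
  { input: 5, label: false },
  { input: 10, label: true },
  { input: 14, label: true },
];

const threshold = learnThreshold(trainingData);
const predict = (x) => x > threshold; // the "model": a learned rule
```

Real models learn millions of parameters instead of one threshold, but the shape of the process — examples in, a predictive function out — is the same.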

2. Machine Learning in the Browser

Short description:

The web can use different backends, such as WebGL, for machine learning. Audio can be converted into spectrograms to compare with models. Overlapping analysis windows improve detection accuracy. TensorFlow offers speech command detection and allows training custom models with Teachable Machine. Machine learning in the browser enables using and training models directly on the device.

For now, the web can use a couple of different backends, depending on the browser and the operating system. The most performant option would be the WebGPU backend, but that requires the WebGPU API, which is only available in Chrome Canary behind a flag. So in my example I am using WebGL, which is the most performant backend available in most browsers right now.
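The fallback logic described here can be modeled as a simple preference list. In real TensorFlow.js code you would call `tf.setBackend(name)` and `tf.ready()`; the `pickBackend` helper below is a hypothetical sketch of the selection order only, not part of the library.

```javascript
// Preference order described in the talk: fastest first,
// CPU as the last resort.
const BACKEND_PREFERENCE = ["webgpu", "webgl", "wasm", "cpu"];

// Hypothetical helper: pick the most performant backend that the
// current environment reports as supported.
function pickBackend(supportedBackends) {
  for (const name of BACKEND_PREFERENCE) {
    if (supportedBackends.includes(name)) return name;
  }
  return "cpu"; // ultimate fallback
}
```

In a browser without the WebGPU flag enabled, `pickBackend(["webgl", "wasm", "cpu"])` lands on `"webgl"`, matching the choice made in the talk.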

We have probably all seen basic examples of image recognition, like face landmark detection, where we pass an image as input and receive the positions of the key points in the face. Images work quite well with machine learning because, in the end, a machine learning model expects numerical input and returns numerical output, and an image is nothing more than a 2D grid of numerical RGB values.
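What "images are just numbers" means in practice can be shown by flattening raw pixel data into the normalized numeric array a model expects. In TensorFlow.js, `tf.browser.fromPixels` does this for you; the function below is an illustrative stand-in that mirrors the idea on data you would get from `getImageData` on a canvas.

```javascript
// Sketch: convert raw RGBA pixel data into a flat, normalized
// RGB array of shape [width * height * 3], values in [0, 1].
function pixelsToInput(rgba, width, height) {
  const out = new Float32Array(width * height * 3);
  for (let i = 0; i < width * height; i++) {
    out[i * 3] = rgba[i * 4] / 255;         // R
    out[i * 3 + 1] = rgba[i * 4 + 1] / 255; // G
    out[i * 3 + 2] = rgba[i * 4 + 2] / 255; // B — alpha is dropped
  }
  return out;
}
```

A single orange pixel `[255, 128, 0, 255]` becomes roughly `[1, 0.5, 0]` — three plain numbers a model can consume.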

Now, in my case, I want to recognize certain words, and well, words are not images, right? Except when they are. In the end, each piece of audio can be converted into a spectrogram, so let's imagine we have 100 recordings of me saying the word schlussendlich. We now have 100 images of this two-second clip that we can compare with the spectrogram of my talk. Of course, a spectrogram of the whole talk that grows over time is hard to compare with my two-second clip, but we can split the whole track into two-second parts and compare each of those with our model. The problem is that we will miss quite a lot of the words, because we can't be sure that a split actually cuts out a word as a whole. The solution is to add an overlap. In this case, we have an overlap factor of 0.5, which means we have more images per second to analyze. The bigger the overlap, the more images there are to analyze, and the more accurate the detection. In my example, I even needed an overlap factor of 0.95 to get a meaningful result.
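The windowing arithmetic above can be sketched with a hypothetical helper that computes where each analysis window starts. With overlap 0 the windows sit back to back and a keyword can be cut in half at a boundary; with a high overlap factor each window starts only a small fraction of a window-length after the previous one, so hardly any position is missed, at the cost of many more windows to run through the model.

```javascript
// Sketch: start times (in seconds) of fixed-length analysis windows
// over a recording. overlapFactor 0 → back-to-back windows;
// overlapFactor 0.95 → each window advances by only 5% of its length.
function windowStarts(totalSec, windowSec, overlapFactor) {
  const hop = windowSec * (1 - overlapFactor); // step between windows
  const starts = [];
  for (let t = 0; t + windowSec <= totalSec; t += hop) {
    starts.push(Number(t.toFixed(3))); // round off float drift
  }
  return starts;
}
```

For a 10-second recording and 2-second windows: no overlap yields 5 windows, overlap 0.5 yields 9, and overlap 0.95 yields 81 — which is why a high overlap factor makes detection more accurate but also more expensive.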

Similar to the face landmark detection, TensorFlow also offers speech command detection, and just like before, we can import it, create a recognizer, and start listening. The default model looks for a couple of predefined keywords, but of course my Swiss German words are not in that list, so I need to train my own model. With Teachable Machine, Google published a web app that allows you to train your own image or audio model based on your own input data. On the right you see my training data, where I have around one hour of me just talking as the background class, and then 50 and 70 examples of the two keywords I want to detect. With Teachable Machine, I can now train the model in the browser, and it generates the model for me. All I need to do is pass the created model and the metadata to the create function, and it will use the new model to detect my custom input. So my slides are running in the browser, and I can now just activate the listener. That might take some time. Now, every time I say words like im Endeffekt, it will trigger the buzzer. And it actually worked quite well at my latest Swiss German talk. So I really hope that I was able to inspire you with this short insight into machine learning in the browser: we can use models, and we can train new models, all directly on the device, in the browser. For more and deeper knowledge, I can also recommend the free course Machine Learning for Web Developers by Jason Mayes from Google. And with this, I would like to thank you for your interest, and I wish you a nice rest of the conference.
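The listening flow described above can be sketched as follows. The model/metadata paths, threshold values, and the `triggerBuzzer` stub are placeholder assumptions; the `speechCommands` calls follow the tfjs-models speech-commands API (`create`, `ensureModelLoaded`, `listen` with an `overlapFactor` option), but this is a sketch of the pattern, not the talk's actual code. The `detectKeyword` helper is pure JavaScript and hypothetical.

```javascript
// Pure helper: return the label with the highest score if it clears
// the confidence threshold, otherwise null.
function detectKeyword(labels, scores, threshold) {
  let bestIdx = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[bestIdx]) bestIdx = i;
  }
  return scores[bestIdx] >= threshold ? labels[bestIdx] : null;
}

// Placeholder reaction to a detected keyword.
function triggerBuzzer(word) {
  console.log(`Buzz! Detected: ${word}`);
}

// How this plugs into a Teachable Machine audio model in the browser
// (not executed here). The two URLs stand in for the model.json and
// metadata.json files that Teachable Machine exports.
async function startListening(speechCommands) {
  const recognizer = speechCommands.create(
    "BROWSER_FFT",
    undefined,
    "./my-model/model.json",    // placeholder path
    "./my-model/metadata.json"  // placeholder path
  );
  await recognizer.ensureModelLoaded();
  recognizer.listen(
    ({ scores }) => {
      const word = detectKeyword(
        recognizer.wordLabels(),
        Array.from(scores),
        0.9 // illustrative threshold
      );
      if (word && word !== "_background_noise_") triggerBuzzer(word);
    },
    { probabilityThreshold: 0.75, overlapFactor: 0.95 }
  );
}
```

The `overlapFactor: 0.95` option mirrors the overlap discussed earlier: the recognizer analyzes a new window after only 5% of a window-length of fresh audio.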
