Beyond JavaScript: Maximizing React With Web APIs


This lightning talk explores how you can enrich your React projects with Web APIs. From handling data to interacting with browser features, attendees will gain insights into leveraging Web APIs for an efficient React development experience, highlighting the out-of-the-box capabilities that browsers offer.

10 min
13 Nov, 2023

Video Summary and Transcription

This Talk explores how to use browser and Web APIs to enhance React applications, covering categories such as the Fetch API, Device APIs, Storage APIs, Audio and Video APIs, and the Shape Detection API. It explains how Web APIs are implemented in React applications and suggests using native JavaScript functions or NPM modules like React Speech Recognition. The Talk demonstrates the use of the Battery and Face Detection APIs in a React application, including features like getting the battery percentage, live coordinates, and speech-to-text functionality. It also mentions the possibility of creating augmented reality face filters with the Face Detection API. The conclusion highlights the availability of resources on the Mozilla docs and a GitHub repository for further learning and code samples.


1. Introduction to Browser and Web APIs

Short description:

In this part, we will explore how to leverage browser APIs and Web APIs to enhance React applications. These APIs can handle tasks ranging from machine learning to GPU utilization. We will also discuss different categories of Web APIs, such as the Fetch API, Device APIs, Storage APIs, Audio and Video APIs, and the Shape Detection API.

Thank you for joining us. This is my first time in GitNation Talks. I'm super excited. Thank you!

Of course, this is something we deal with on a daily basis. We love and hate JavaScript because it's so popular and so capable; you can do a bunch of things with it. But at the same time, you want your applications to ship as little JavaScript as possible, or at least as few NPM modules as possible, to make them lighter. Because, of course, when you have a lot of NPM modules, your applications do become a lot heavier, and your dependencies can grow pretty big, like hundreds of megabytes in size.

So, the idea I want to portray here is: can you rely on the web browser on which you are running your React applications, instead of having to use NPM modules or write everything in JavaScript yourself? The simple answer is yes. We have browser APIs and Web APIs that essentially allow us to very easily do anything from machine learning tasks like face detection all the way to standard things like leveraging your GPU for 3D rendering, where the browser can fetch GPU information and use it with something like WebGPU. All of this can be handled with the help of a suite of different browser and Web APIs that you can apply to your own use case, and we'll be exploring some of them and how you can integrate them inside a React application.

So, there are a bunch of different categories of these Web APIs. A lot of you might be aware of these, but for the folks who are not, I'll quickly give a rundown of the broad categories I've defined. The first one is the Fetch API, which allows you to make HTTP requests. Next are the Device APIs, which essentially allow your browser to replicate a lot of the capabilities you typically have on mobile devices, whether you're using a browser on your laptop or on your phone. For example, the Geolocation API allows you to fetch your coordinates and render them inside your application. You can also draw graphics with the help of the Canvas API. Then there are the Storage APIs, which cover things like local storage; if you want to store, say, the duration of a particular task your app is tracking, the Storage APIs make that possible. And, of course, you have the Audio and Video APIs, which allow you to do a bunch of things with audio and video processing. And then the Shape Detection API, which is personally my favorite.
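To make the Storage category concrete, here is a minimal sketch of persisting a task's duration with the Web Storage API. The function names and the injected `storage` parameter are my own, not from the talk; in the browser you would pass `window.localStorage`:

```javascript
// Persist and read back the duration (in ms) of a named task.
// `storage` is any object with the Web Storage API's getItem/setItem
// methods -- in the browser, pass window.localStorage.
function saveTaskDuration(storage, taskName, durationMs) {
  storage.setItem(`task-duration:${taskName}`, String(durationMs));
}

function loadTaskDuration(storage, taskName) {
  const raw = storage.getItem(`task-duration:${taskName}`);
  // Web Storage returns null for missing keys and stores only strings.
  return raw === null ? null : Number(raw);
}
```

Because Web Storage only holds strings, the number is serialized on write and parsed on read.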

2. Implementing Web APIs in React Applications

Short description:

You can do a bunch of different things, like detecting faces, with the Shape Detection API. The browser teams implement Web APIs, which start in an experimental phase and are later released as stable features. In React applications, you can implement Web APIs like battery status and speech recognition using native JavaScript functions or NPM modules like React Speech Recognition.

You can do a bunch of different things, like detecting faces, with the help of the Shape Detection API. The Shape Detection API, I would say, is still in an experimental phase.

Now, if you're curious how these browser APIs or Web APIs come into being: if we talk about the main web browsers, the Chrome platform team implements a lot of these browser APIs. Many of them are currently experimental. So when you use these Web APIs, you'll find that some are enabled by default, while others might not be supported yet. If you want to use those, they will most likely be in an experimental phase, and you'll have to go to your Chrome flags to enable them. For instance, if you want to enable the Face Detection API, you'll have to turn on the Experimental Web Platform features flag in your Chrome flags to make it available.

So the way these Web APIs come to fruition is that the browser teams implement them, and they start in an experimental phase. Once they reach a point where they are stable enough, they are released as stable features and then enabled in browsers by default.
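Since these APIs graduate from experimental to stable at different times, it's worth feature-detecting before calling them. A minimal sketch, with a `globalObject` parameter standing in for `window` so the check can be exercised outside a browser (the function name and return shape are illustrative assumptions, not from the talk):

```javascript
// Report which of the talk's Web APIs are available on a given
// global object (pass `window` in the browser).
function detectSupportedApis(globalObject) {
  return {
    // FaceDetector only exists behind Chrome's experimental flag.
    faceDetection: "FaceDetector" in globalObject,
    battery:
      "navigator" in globalObject &&
      typeof globalObject.navigator.getBattery === "function",
    // Chrome historically exposes the prefixed webkitSpeechRecognition.
    speechRecognition:
      "SpeechRecognition" in globalObject ||
      "webkitSpeechRecognition" in globalObject,
  };
}
```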

Now, let's quickly take a look at how you can actually implement these inside your React applications. The first basic example I like to quote is the battery status. The Battery Status API gives you the battery status of your device, and you can render that. What you see over here is the navigator: with most Web APIs, you're using the navigator interface, which identifies the user agent, in this case whatever device you're using. As you can see in the code, I'm just calling navigator.getBattery(). I'm not installing a third-party NPM module to do that; I'm using a native JavaScript function I get from the browser API, and I'm rendering the charging status and my current battery percentage.
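The battery example can be sketched roughly like this. The helper takes a navigator-like object so it can run outside a browser; the function name and return shape are my own, not the talk's exact code. In a React component you would call it from `useEffect` with `window.navigator` and store the result in state:

```javascript
// Read the battery level and charging state via the Battery Status API.
// `nav` is a navigator-like object exposing getBattery(); in the
// browser, pass window.navigator.
async function readBatteryStatus(nav) {
  const battery = await nav.getBattery();
  return {
    // BatteryManager.level is a fraction between 0 and 1.
    percentage: Math.round(battery.level * 100),
    charging: battery.charging,
  };
}
```

In React, a hedged usage would be `useEffect(() => { readBatteryStatus(window.navigator).then(setBattery); }, [])`, guarded by a feature check since `getBattery` is not available in every browser.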

Another example shows where you can use NPM modules as well: speech recognition. You get a browser API for doing live speech recognition inside your browser. The react-speech-recognition library provides a custom React hook over the Web Speech API, so instead of using the Web Speech API directly, you can install this NPM module and get that capability out of the box. I'll quickly show a demonstration before we move further. The first demonstration is in our app.js. Here you'll see I'm actually using a bunch of different Web APIs. First, I'm setting some state for my battery level and my location, so I'm using the Battery API and the Geolocation API for the coordinates; I'm making a simple fetch request to show how you can fetch data; and of course I'm using react-speech-recognition, the NPM module that provides the React hook. Very similar to the code sample in the slides, you just use the navigator object and then any function it supports. In this case, I'm running one call to fetch the battery status, similarly one for the Geolocation API, then the Fetch API to fetch some data, and finally the transcript, where I'm using speech recognition to produce a live transcript. So I'll quickly go ahead and run this demo, and I'll quickly refresh.
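The geolocation piece of that demo can be sketched as a small Promise wrapper over the callback-based Geolocation API. The `geo` parameter and function name are illustrative assumptions; in the browser you would pass `navigator.geolocation`:

```javascript
// Wrap the callback-based Geolocation API in a Promise.
// `geo` is a geolocation-like object (navigator.geolocation in browsers).
function getCoordinates(geo) {
  return new Promise((resolve, reject) => {
    geo.getCurrentPosition(
      (position) =>
        resolve({
          latitude: position.coords.latitude,
          longitude: position.coords.longitude,
        }),
      (error) => reject(error)
    );
  });
}
```

From a React effect you could then do something like `getCoordinates(navigator.geolocation).then(setLocation)`, keeping in mind the browser will prompt the user for permission.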

3. Using Battery and Face Detection APIs

Short description:

I demonstrate how to get battery percentage, live coordinates, and speech-to-text functionality in a React application. Additionally, I showcase the face detection API, which can detect features of a face and provide bounding box coordinates. I also mention a JS Nation talk on creating augmented reality face filters with the face detection API.

So as you can see, I get my battery percentage as 81, I get my live coordinates of my location, and if I start to do the speech to text, hopefully it works. Hi, I'm at React Summit, and this is my lightning talk. Perfect. So this works out of the box.
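Under the hood, react-speech-recognition wraps the Web Speech API. For illustration, here is a bare sketch of a live-transcript helper over that API, with the recognition constructor injected so it can be exercised outside a browser (in Chrome you would pass `window.webkitSpeechRecognition`); the helper name and callback shape are assumptions of mine, not the library's code:

```javascript
// Build a speech-recognition instance that reports a running transcript.
// `SpeechRecognitionCtor` is the browser's SpeechRecognition (or
// webkitSpeechRecognition) constructor; `onTranscript` receives the
// accumulated text each time new results arrive.
function createTranscriber(SpeechRecognitionCtor, onTranscript) {
  const recognition = new SpeechRecognitionCtor();
  recognition.continuous = true;      // keep listening across pauses
  recognition.interimResults = true;  // surface partial phrases live
  recognition.onresult = (event) => {
    let transcript = "";
    // Each result holds ranked alternatives; take the top one.
    for (let i = 0; i < event.results.length; i++) {
      transcript += event.results[i][0].transcript;
    }
    onTranscript(transcript);
  };
  return recognition;
}
```

In the browser you would call `recognition.start()` to begin listening; the react-speech-recognition hook does this bookkeeping for you and exposes the transcript as React state.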

Another example I'd like to quickly showcase is face detection. I'm not using any machine learning model here; this is supported by Chrome itself. Now, in order to experiment with face detection, you will have to set up your Chrome flags: go to your Chrome flags and enable the Experimental Web Platform features flag. Keep that in mind if you want to use this. The main thing I am using here is the FaceDetector. It detects whether there is a face inside a canvas element, and it returns the coordinates where it finds the different features of your face, so it can locate your eyes, your lips, and so on. You get a bounding box wherever it uniquely detects a face. That's pretty much what I'm doing: I'm fetching the coordinates I get from the Face Detection API, and I'm just rendering a bounding box.
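The face-detection step can be sketched like this, with the detector injected so the logic is testable outside Chrome. In the browser, with the experimental flag enabled, you would pass `new FaceDetector()` and a canvas, image, or video element as the source; the function name here is illustrative:

```javascript
// Detect faces in an image source and return their bounding boxes.
// `detector` is a FaceDetector-like object whose detect() resolves to
// an array of detected faces, each with a boundingBox rectangle.
async function detectFaceBoxes(detector, imageSource) {
  const faces = await detector.detect(imageSource);
  return faces.map((face) => ({
    x: face.boundingBox.x,
    y: face.boundingBox.y,
    width: face.boundingBox.width,
    height: face.boundingBox.height,
  }));
}
```

Each returned rectangle can then be drawn over the video frame with the Canvas API (`ctx.strokeRect(x, y, width, height)`) to get the live bounding box shown in the demo.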

Now this might work, or it might not. Let's see. If I click start, you'll see that it picks up my face, and it does a pretty decent job, I would say. And if I move, the bounding box follows along. Not 100% accurate, but you get the picture. You can do a lot of fun things with this. In fact, there was a JS Nation talk this year, in 2023, about creating augmented reality face filters with the help of the Face Detection API, so definitely check out that full-length talk. But those are a couple of examples. So, of course, you get direct APIs that you can use inside your React hooks.

4. Conclusion and Resources

Short description:

You can achieve a lot of different things with these native APIs, including using custom React hooks. Check out the resources on Mozilla docs for more information on the main Web APIs. There's also a GitHub repository with code samples. In conclusion, thank you for attending and enjoy the rest of the React Summit talks!

Or, of course, you get support for custom React hooks for some of these APIs. But these are just a broad sample of the many different things you can achieve with these native APIs.

So definitely check these out. With that, here are some resources you can take a quick look at, covering the main Web APIs on the Mozilla docs. And there's also a GitHub repository that goes into a lot more detail on how to use each of these APIs, with code samples as well.

But with that, I'll conclude my talk. Thank you so much, and I hope you enjoy the rest of the React Summit talks. Thank you very much.
