ML on the Edge

The world is filled with billions of small, connected, intelligent, and compute-efficient smartphones. What if we could tap into this power and do more on the edge? It turns out that ML fits perfectly here. Let us explore the ML Kit library to bake intelligence into React Native applications.

7 min
02 Aug, 2021

Video Summary and Transcription

This Talk discusses machine learning on the edge and its benefits for mobile applications. ML on the edge utilizes the computing power of mobile devices for secure, real-time processing and offline capabilities. ML Kit, Google's SDK, provides easy integration of ML solutions in mobile apps without extensive ML expertise. The Talk covers the setup of Firebase and ML Kit integration in React Native projects, showcasing the possibilities of applying filters and generating avatars with ML on the edge.

1. Introduction to ML on the Edge

Short description:

I am delighted to talk about machine learning on the edge and how it can improve mobile applications. ML on the edge leverages the computing capacity of mobile devices, making processing more secure and real-time and providing a better offline experience. ML Kit, Google's SDK, offers base and custom APIs for vision and language tasks, making it easy for mobile developers to integrate ML solutions without extensive ML expertise. Integrating ML Kit in a React Native project involves setting up the project as a Firebase one and updating the build.gradle file.

So, let's get started. I am delighted to talk to you all about the topic, machine learning on the edge. I am Sangeeta, and I work as a developer at Amazon. For the past four years or so, I've been building mobile applications professionally and as side projects. Naturally, I'm always on the lookout for ways to make apps faster, smarter, more secure, and able to deliver better customer experiences.

As such, when Google rolled out ML Kit at their I/O conference, promising machine learning on the edge, I had to check it out. Now, what is ML on the edge and why should one care about it? Traditionally, building ML solutions required developers to gather data, build models, train them, tune them, deploy them to some remote server in the cloud, and have the results served to mobile devices on the so-called edges of the internet.

Now, as we all know, with the passage of time, mobile devices have only grown much more efficient in their computing abilities. Why don't we leverage the computing capacity of the devices locally, instead of doing the processing somewhere remote in the cloud? That is ML on the edge. Now, what are the benefits of doing so? Not having to transfer data back and forth across the globe means easy latency and bandwidth wins. Keeping all of the processing local to the device means the data is more secure, the results are more real-time, and you are able to provide a better offline-first experience to your customers. And finally, this greatly reduces the barrier for any mobile developer with little to no ML expertise to integrate ML solutions in their applications.

Hopefully, y'all are sold now on the idea that ML on the edge is interesting and are curious to know more about it. Now, let's try to understand how we go about achieving this. ML Kit is Google's SDK, which encompasses all of their machine learning expertise in a simple yet powerful package. It is built on top of TensorFlow Lite, a mobile-optimized version of TensorFlow, and can be used for both Android and iOS development. Now, the APIs that ML Kit offers can be broadly classified into two types: base and custom. The idea is that if your need is not satisfied by the available base APIs, then you're free to build your own custom TensorFlow Lite models and have them run on mobile devices. The available base APIs can be further classified based on their usage into vision and language. For example, smart replies, barcode scanning, face detection, image detection and labeling, and so on. Again, mobile developers need not necessarily understand the machine learning magic that happens under the hood. All of that is cleanly abstracted away from you and is available as out-of-the-box APIs, which can be leveraged with just a few lines of code.

Talking about code, next, let's understand what it takes to integrate ML Kit in a React Native project. Note that ML Kit can be used for both native and React Native development, but for the purpose of this talk, I'll be focusing on the React Native workflow for Android; the process should be pretty similar for iOS as well. The first step for integrating ML Kit is setting up your project as a Firebase one. Now, Firebase provides a set of tools that makes application development very easy. Tools such as logging and authentication, all of this heavy lifting, are available as part of Firebase. So the developers at Google thought, why should machine learning be any different? That's why ML Kit is available as part of Firebase as well. In order to get started with Firebase, you go to the Firebase console and enter your package name. This generates a Firebase configuration file, which is placed in the root of your project folder. Next, we update the build.gradle files to declare Google Services as one of our dependencies and apply the plugin, as sketched below.
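As a minimal sketch, assuming a standard React Native Android project layout, the Gradle changes for this step typically look something like this (the plugin version is illustrative; use whatever the Firebase console recommends for your project):

```groovy
// android/build.gradle (project level): declare the Google Services Gradle plugin
buildscript {
    dependencies {
        // Illustrative version; pick the one suggested by the Firebase console
        classpath 'com.google.gms:google-services:4.3.10'
    }
}

// android/app/build.gradle (app level): apply the plugin so the
// google-services.json configuration file is picked up at build time
apply plugin: 'com.google.gms.google-services'
```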

2. Firebase Setup and ML Kit Integration

Short description:

This section covers the setup of Firebase and React Native Firebase, installation of the ML Vision module, and using ML Kit APIs for text recognition and face detection. The possibilities of ML on the edge are endless, including applying filters and generating avatars. Overall, we learned about the benefits of machine learning on the edge and integrating ML Kit into React Native projects.

This enables us to use Firebase products in our project. With Firebase set up, let's move to the React Native section of the code base. React Native Firebase is the officially recommended library for using Firebase in React Native development. To use it, we first install the React Native Firebase app module using either npm or Yarn. Following this, based on our use case, we install the required module. In this case, I am installing the ML Vision module and updating my firebase.json file to set the corresponding model flag to true; a sketch of this step follows.
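As a rough sketch, assuming the v6-era react-native-firebase packages (package names and firebase.json keys have changed across releases, so treat these as illustrative): install the modules with `npm install @react-native-firebase/app @react-native-firebase/ml-vision` (or the Yarn equivalent), then enable the on-device models in the firebase.json file at the project root:

```json
{
  "react-native": {
    "ml_vision_ocr_model": true,
    "ml_vision_face_model": true
  }
}
```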

Now for the fun part. With Firebase hooked up and the required ML models installed, we use the APIs to process our input. In this case, I await the download of the Vision model, after which I provide the path to my local image to the text recognizer's process-image API. This processes the image and returns an array of text blocks, one for each piece of text on the image. Each text block contains information such as the actual text within it, its bounds and coordinates, and the language of the text. Here it is in action. We see that "walk on the grass" has been accurately recognized, and the different text blocks in the output have been overlaid on top of the image. This is another example, using ML Kit's face detection API. Here I've given it an image, and it has accurately determined the face contours and given us their coordinates. Consider this a stepping stone for applying filters, generating avatars from the image, and so on. The possibilities are simply endless.
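A minimal sketch of what those two calls might look like with the v6-era @react-native-firebase/ml-vision module (method and field names follow that API and may differ in newer releases; the image path is hypothetical):

```javascript
import vision from '@react-native-firebase/ml-vision';

// Hypothetical local file path; in a real app this might come from the camera or gallery
const localImagePath = '/path/to/local/photo.jpg';

async function analyzeImage() {
  // On-device text recognition: returns the recognized text broken into blocks
  const textResult = await vision().textRecognizerProcessImage(localImagePath);
  textResult.blocks.forEach((block) => {
    console.log(block.text);                // the actual text in this block
    console.log(block.boundingBox);         // bounds, useful for overlaying on the image
    console.log(block.recognizedLanguages); // detected language(s) of the text
  });

  // On-device face detection: returns bounds and contours for each detected face
  const faces = await vision().faceDetectorProcessImage(localImagePath);
  faces.forEach((face) => {
    console.log(face.boundingBox);
  });
}
```

The bounding boxes returned by both recognizers are what an overlay like the one in the demo would be drawn from.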

Now, that is pretty much what I wanted to cover as part of this talk. Today we learned what machine learning on the edge is, what its benefits are, what ML Kit is, and what it takes to integrate ML Kit into your React Native project. I hope I have inspired you to consider using ML on the edge in your next mobile application. If you have any questions, feel free to drop me a DM on Twitter or send me an email. Thank you so much for watching.

Check out more articles and videos

We constantly think of articles and videos that might spark your interest, skill you up, or help you build a stellar career.

6 min
Charlie Gerard's Career Advice: Be intentional about how you spend your time and effort
Featured Article
When it comes to career, Charlie has one trick: to focus. But that doesn’t mean that you shouldn’t try different things — currently a senior front-end developer at Netlify, she is also a sought-after speaker, mentor, and a machine learning trailblazer of the JavaScript universe. "Experiment with things, but build expertise in a specific area," she advises.

What led you to software engineering?
My background is in digital marketing, so I started my career as a project manager in advertising agencies. After a couple of years of doing that, I realized that I wasn't learning and growing as much as I wanted to. I was interested in learning more about building websites, so I quit my job and signed up for an intensive coding boot camp called General Assembly. I absolutely loved it and started my career in tech from there.

What is the most impactful thing you ever did to boost your career?
I think it might be public speaking. Going on stage to share knowledge about things I learned while building my side projects gave me the opportunity to meet a lot of people in the industry, learn a ton from watching other people's talks and, for lack of better words, build a personal brand.

What would be your three tips for engineers to level up their career?
Practice your communication skills. I can't stress enough how important it is to be able to explain things in a way anyone can understand, but also communicate in a way that's inclusive and creates an environment where team members feel safe and welcome to contribute ideas, ask questions, and give feedback. In addition, build some expertise in a specific area. I'm a huge fan of learning and experimenting with lots of technologies, but as you grow in your career, there comes a time when you need to pick an area to focus on to build more profound knowledge. This could be in a specific language like JavaScript or Python or in a practice like accessibility or web performance. It doesn't mean you shouldn't keep in touch with anything else that's going on in the industry, but it means that you focus on an area you want to have more expertise in. If you could be the "go-to" person for something, what would you want it to be? And lastly, be intentional about how you spend your time and effort. Saying yes to everything isn't always helpful if it doesn't serve your goals. No matter the job, there are always projects and tasks that will help you reach your goals and some that won't. If you can, try to focus on the tasks that will grow the skills you want to grow or help you get the next job you'd like to have.

What are you working on right now?
Recently I've taken a pretty big break from side projects, but the next one I'd like to work on is a prototype of a tool that would allow hands-free coding using gaze detection.

Do you have some rituals that keep you focused and goal-oriented?
Usually, when I come up with a side project idea I'm really excited about, that excitement is enough to keep me motivated. That's why I tend to avoid spending time on things I'm not genuinely interested in. Otherwise, breaking down projects into smaller chunks allows me to fit them better in my schedule. I make sure to take enough breaks, so I maintain a certain level of energy and motivation to finish what I have in mind.

You wrote a book called Practical Machine Learning in JavaScript. What got you so excited about the connection between JavaScript and ML?
The release of TensorFlow.js opened up the world of ML to frontend devs, and this is what really got me excited. I had machine learning on my list of things I wanted to learn for a few years, but I didn't start looking into it before because I knew I'd have to learn another language as well, like Python, for example. As soon as I realized it was now available in JS, that removed a big barrier and made it a lot more approachable. Considering that you can use JavaScript to build lots of different applications, including augmented reality, virtual reality, and IoT, and combine them with machine learning as well as some fun web APIs felt super exciting to me.


Where do you see the fields going together in the future, near or far?
I'd love to see more AI-powered web applications in the future, especially as machine learning models get smaller and more performant. However, it seems like the adoption of ML in JS is still rather low. Considering the amount of content we post online, there could be great opportunities to build tools that assist you in writing blog posts or that can automatically edit podcasts and videos. There are lots of tasks we do that feel cumbersome that could be made a bit easier with the help of machine learning.

You are a frequent conference speaker. You have your own blog and even a newsletter. What made you start with content creation?
I realized that I love learning new things because I love teaching. I think that if I kept what I know to myself, it would be pretty boring. If I'm excited about something, I want to share the knowledge I gained, and I'd like other people to feel the same excitement I feel. That's definitely what motivated me to start creating content.

How has content affected your career?
I don't track any metrics on my blog or likes and follows on Twitter, so I don't know what created different opportunities. Creating content to share something you built improves the chances of people stumbling upon it and learning more about you and what you like to do, but this is not something that's guaranteed. I think over time, I accumulated enough projects, blog posts, and conference talks that some conferences now invite me, so I don't always apply anymore. I sometimes get invited on podcasts and asked if I want to create video content and things like that. Having a backlog of content helps people better understand who you are and quickly decide if you're the right person for an opportunity.

What pieces of your work are you most proud of?
It is probably that I've managed to develop a mindset where I set myself hard challenges on my side projects, and I'm not scared to fail and push the boundaries of what I think is possible. I don't prefer a particular project; it's more around the creative thinking I've developed over the years that I believe has become a big strength of mine.

Follow Charlie on Twitter
ML conf EU 2020
41 min
TensorFlow.js 101: ML in the Browser and Beyond
Discover how to embrace machine learning in JavaScript using TensorFlow.js in the browser and beyond in this speedy talk. Get inspired through a whole bunch of creative prototypes that push the boundaries of what is possible in the modern web browser (things have come a long way) and then take your own first steps with machine learning in minutes. By the end of the talk everyone will understand how to recognize an object of their choice which could then be used in any creative way you can imagine. Familiarity with JavaScript is assumed, but no background in machine learning is required. Come take your first steps with TensorFlow.js!
React Advanced Conference 2021
21 min
Using MediaPipe to Create Cross Platform Machine Learning Applications with React
Top Content
This talk gives an introduction to MediaPipe, an open-source machine learning solution that allows running machine learning models on low-powered devices and helps integrate the models with mobile applications. It gives creative professionals a lot of dynamic tools and utilizes machine learning in a really easy way to create powerful and intuitive applications without requiring much, or any, prior knowledge of machine learning. We then see how MediaPipe can be integrated with React, giving easy access to machine learning use cases when building web applications with React.
JSNation Live 2021
39 min
TensorFlow.JS 101: ML in the Browser and Beyond
Discover how to embrace machine learning in JavaScript using TensorFlow.js in the browser and beyond in this speedy talk. Get inspired through a whole bunch of creative prototypes that push the boundaries of what is possible in the modern web browser (things have come a long way) and then take your own first steps with machine learning in minutes. By the end of the talk everyone will understand how to recognize an object of their choice which could then be used in any creative way you can imagine. Familiarity with JavaScript is assumed, but no background in machine learning is required. Come take your first steps with TensorFlow.js!
ML conf EU 2020
32 min
An Introduction to Transfer Learning in NLP and HuggingFace
In this talk, I'll start by introducing the recent breakthroughs in NLP that resulted from the combination of transfer learning schemes and Transformer architectures. The second part of the talk will be dedicated to an introduction to the open-source tools released by HuggingFace, in particular our Transformers, Tokenizers, and Datasets libraries and our models.

Workshops on related topic

ML conf EU 2020
160 min
Hands on with TensorFlow.js
Workshop
Come check out our workshop, which will walk you through 3 common journeys when using TensorFlow.js. We will start by demonstrating how to use one of our pre-made models - super-easy-to-use JS classes to get you working with ML fast. We will then look into how to retrain one of these models in minutes using in-browser transfer learning via Teachable Machine and how that can then be used on your own custom website. Finally, we will end with a hello world of writing your own model code from scratch to make a simple linear regression to predict fictional house prices based on their square footage.
ML conf EU 2020
112 min
The Hitchhiker's Guide to the Machine Learning Engineering Galaxy
Workshop
Are you a software engineer who got tasked with deploying a machine learning or deep learning model for the first time in your life? Are you wondering what steps to take and how AI-powered software is different from traditional software? Then this is the right workshop to attend.
The internet offers thousands of articles and free-of-charge courses showing how easy it is to train and deploy a simple AI model. In reality, however, it is difficult to integrate a real model into your current infrastructure and to debug, test, deploy, and monitor it properly. In this workshop, I will guide you through this process, sharing tips, tricks, and favorite open-source tools that will make your life much easier. By the end of the workshop, you will know where to start your deployment journey, what tools to use, and what questions to ask.
ML conf EU 2020
146 min
Introduction to Machine Learning on the Cloud
Workshop
This workshop will be both a gentle introduction to machine learning and a practical exercise in using the cloud to train simple and not-so-simple machine learning models. We will start by using Automated ML to train a model to predict survival on the Titanic, and then move on to more complex machine learning tasks such as hyperparameter optimization and scheduling series of experiments on a compute cluster. Finally, I will show how Azure Machine Learning can be used to generate artificial paintings using Generative Adversarial Networks, and how to train a language question-answering model on COVID papers to answer COVID-related questions.