Scaling Distributed Machine Learning to the Edge & Back


This talk will cover why and how organizations are distributing data storage and machine learning to the edge. By pushing machine learning to the edge, we can geographically distribute learning so that the models will actually learn different things relevant to specific locations. By delivering both edge database and compute in a single platform, more people can transition to a distributed architecture. The performance gains from this new architecture cement the value that mobile edge computing brings.

21 min
05 Jun, 2023

AI-Generated Video Summary

This talk explores JavaScript's role in distributed machine learning at scale, discussing the lack of tooling and the accessibility of machine learning deployments. It also covers cloud-based machine learning architecture, machine learning at the edge, and the use of HarperDB for simplified machine learning deployment. The concept of iterative AI and model training is also discussed.

1. Introduction to JavaScript ML

Short description:

Hi, welcome to my talk for JS Nation, entitled To the Edge and Back: JavaScript's Role in Distributed ML at Scale. I am a recovering developer, father of two daughters, based in Denver, Colorado. I work for HarperDB, a Distributed Application Platform built entirely in Node.js. Today, I will explore the JavaScript machine learning ecosystem, tactical architecture, and systems and methods for delivering performant access to machine learning and AI.

Hi, welcome to my talk for JS Nation, entitled To the Edge and Back: JavaScript's Role in Distributed ML at Scale. My name's Jackson Repp. I am a recovering developer and father of two daughters, based in Denver, Colorado. I've been a part of eight startups, so I've had two exits and five of what I call opportunities for learning. And now I work for HarperDB, which is a Distributed Application Platform. We've been around six years, and we've got a lot of production deployments and a fairly robust community.

So when I talk about HarperDB as the place I work, I think what's of more interest to JS Nation is the fact that we are, in fact, built entirely in Node.js. We've leveraged the language you already love. It was one of those things where we looked around and we could have chosen any language, but we realized there were tremendous benefits in terms of simplicity and availability of resources and deployment platforms. Where can JavaScript run? So we love to focus on the JavaScript community, and machine learning is obviously one of those things that has expanded dramatically in the very recent past. How does that get done? What are the logistics behind it? That's what I wanted to explore today.

So the syllabus for this course, I guess, would be understanding the JavaScript machine learning ecosystem. What are the resources we have available to us to build these amazing, cool technologies that function maybe closer to the user, leveraging a language we all love? Then we have a section called tactical architecture, which is sort of how people do it now, or how people did it in the past, and where we think it's going over time. How do we continue to deliver performant access to machine learning and AI and these incredibly complex models, when running them takes so much horsepower and you don't necessarily have all of the horsepower in the world sitting on your phone or, perhaps, in a browser? And finally, systems and methods. How can we approach this problem? What are the considerations we need to keep in mind when we're planning a system that is truly distributed and iterative? I'll outline what those architectures look like.

2. Machine Learning Tooling and Tactical Architecture

Short description:

People become aware of machine learning and its potential applications. However, the lack of tooling requires developers to write low-level code to train models and build applications. With the right infrastructure, machine learning deployments become more accessible. ChatGPT has gained significant attention and offers a comprehensive and fast solution. JavaScript is a great choice for pushing machine learning to the edge, with libraries like TensorFlow.js and mobile platforms like CoreML and ML Kit. The hierarchical nature of accessing data suggests opportunities for cloud, near edge, far edge, and mobile deployments. The tactical architecture involves training, testing, and deploying models.

First, people become aware of it, right? They know that machine learning is a thing. They know it can help them identify stuff in a photo, or that they can make recommendations using it. But the tooling isn't there. So you're out writing super low-level code to train a model, to build something that can act on user input and give you a recommendation or a classification, or accomplish whatever that end goal might be.

And then the infrastructure gets built out behind that to support what we're now capable of deploying because we have the tooling. And with that infrastructure, deployments become more accessible, which you can obviously roll out to a wider audience. So if you look at awareness, the number one thing that everybody's talking about is ChatGPT, to the point that the last three weeks of earnings calls have included mentions of AI and ChatGPT in products that didn't even seem like they would take advantage of them, because the stock price goes up, because everybody's so excited and aware. And ultimately, we want to deliver this product, this solution, this result. And it's simple, accessible, comprehensive, and fast. ChatGPT nailed all of those things, and it's tremendous if you've ever used it. You know that there's usually a wait to get in line, and commercial accounts are hard to come by and expensive, because it takes a tremendous amount of resources to do something as impressive as what ChatGPT does. Now, obviously it's also a little terrifying in terms of the scope of what it can do. It's a very large model that's been trained on lots of pieces of data, and not everybody needs to deploy a fully comprehensive human-speaking chat engine, but there are a million other applications for machine learning, especially at the edge, that can leverage a lot of the best practices ChatGPT put in front of us in terms of accessibility.

We look at the tooling, then, that we have to continue to push this logic out to the edge, right? How do we get closer to those users? And JavaScript, obviously, being on every client device and running just about everywhere, is a great choice for that. And while machine learning models and AI have traditionally run on servers with lots of power, a la ChatGPT training a giant model, there are lots of libraries available. TensorFlow.js is the JavaScript cousin to kind of the king of machine learning platforms, sponsored by Google. But you've also got lots of other platforms that are available to take data in, generate a model, and ultimately push that out and run it on the edge, as well as mobile platforms like CoreML and CreateML on iOS and ML Kit for Android. So there are lots of ways to push this out as far as you can. Now, again, there's horsepower required to ultimately create and use models, so it really depends where you're going to do it. Traditionally, we've done this in the cloud, right? We run a big server with lots of GPU, and we build big models. And then we set up infrastructure on the edge or in another cloud region to leverage that model, take requests from inbound clients, run their data against the model, and get some sort of classification or resulting dataset out of it. But as we continue to look at the hierarchical nature of, say, how we access data, there's probably an opportunity for bifurcation or trifurcation, a division of responsibilities across the cloud; the near edge, i.e. servers that are just in regions closer to you; the far edge, i.e. AWS Local Zones or on-prem, things that are very, very close to you; and finally, things you're actually carrying around with you: a mobile app, a browser on your phone, or something running on a laptop.
So there's lots of things that needed to be put in place and have that tooling so that we could actually deliver the results at a more local level. So we look at a tactical architecture, again, the basics are we want to train a model, we want to test it and validate that it works, and then we want to deploy it. We want to put that out there and have it actually start doing things for us.
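The train → test → deploy basics above can be sketched in plain JavaScript. This is a deliberately tiny, dependency-free illustration (a one-weight linear model fit by gradient descent), not TensorFlow.js; all data and numbers are invented:

```javascript
// Train → test → deploy, minimally. A one-feature linear model
// (y ≈ w*x + b) fit by stochastic gradient descent on invented data.
function train(data, epochs = 500, lr = 0.05) {
  let w = 0, b = 0;
  for (let e = 0; e < epochs; e++) {
    for (const { x, y } of data) {
      const err = w * x + b - y; // prediction error for this sample
      w -= lr * err * x;        // gradient step on the weight
      b -= lr * err;            // gradient step on the bias
    }
  }
  return { w, b };
}

// Test: mean squared error on data the model never saw.
function validate(model, holdout) {
  return holdout.reduce((s, { x, y }) =>
    s + (model.w * x + model.b - y) ** 2, 0) / holdout.length;
}

// "Deploy": freeze the weights behind a pure predict function.
const deploy = (model) => (x) => model.w * x + model.b;

const trainSet = [{ x: 1, y: 2 }, { x: 2, y: 4 }, { x: 3, y: 6 }]; // y = 2x
const model = train(trainSet);
const predict = deploy(model);
console.log(validate(model, [{ x: 4, y: 8 }]) < 0.01); // true: low holdout error
console.log(predict(5)); // ≈ 10
```

In a real pipeline the `train` step would be a library call (e.g. `model.fit` in TensorFlow.js), but the shape of the flow is the same: fit on one set, validate on a holdout, then freeze the result behind a predict function.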

3. Cloud-Based ML Architecture

Short description:

A traditional cloud-based architecture has a data source, either static from a data lake or streaming in from applications, clients, and sensors; an ML pipeline that handles training and testing; ML ops tooling such as Kubeflow; and distribution out to infrastructure like Kubernetes that runs the models. This is the architecture of a lot of machine learning applications.

We want to put that out there and have it actually start doing things for us. If I look at a traditional, old-school cloud-based architecture, I've got a data source, either static from a data lake or some giant database, or streaming data that comes in from applications, from clients, from sensors. Then I have an ML pipeline where I'm accomplishing all of the training and the testing. Then I have some sort of ML ops, which is a super hot keyword right now, and there are lots of tools; Kubeflow is one of them, and it works very well with Kubernetes. And then there's the distribution out to the infrastructure that will run those models, and I just used Kubernetes here because everybody knows it, and it's ubiquitous. So this is the architecture of a lot of machine learning applications.
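As a rough sketch of that flow, here is the pipeline reduced to four plain-JavaScript stages: ingest, train, evaluate, and an ML-ops-style promotion gate. The "model" is a trivial majority-label stand-in and the threshold is invented; this is not a Kubeflow API:

```javascript
// The cloud pipeline as four composable stages.
const pipeline = {
  // data source → pipeline: drop rows that have no label
  ingest: (source) => source.filter((r) => r.label !== undefined),
  // "train": a stand-in model that remembers the majority label
  train: (rows) => ({
    label: rows.filter((r) => r.label === 1).length >= rows.length / 2 ? 1 : 0,
  }),
  // "test": accuracy against a holdout set
  evaluate: (model, holdout) =>
    holdout.filter((r) => r.label === model.label).length / holdout.length,
  // ML ops gate: only promote models that clear the quality bar
  promote: (model, accuracy, registry, minAccuracy = 0.7) => {
    if (accuracy >= minAccuracy) registry.push(model);
    return accuracy >= minAccuracy;
  },
};

const stream = [{ label: 1 }, { label: 1 }, { label: 0 }, {}]; // one unlabeled row
const registry = []; // stands in for the serving infrastructure (e.g. Kubernetes)
const rows = pipeline.ingest(stream);
const model = pipeline.train(rows);
const accuracy = pipeline.evaluate(model, [
  { label: 1 }, { label: 1 }, { label: 1 }, { label: 0 },
]);
console.log(pipeline.promote(model, accuracy, registry)); // true: 0.75 ≥ 0.7
```

The promotion gate is the part that ML ops tooling formalizes: a model only reaches the serving tier if it clears an agreed quality bar.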

4. Machine Learning at the Edge

Short description:

And it's very similar to something along the lines of ChatGPT. It's all central cloud-based, lots of horsepower in the infrastructure to build and run those models, but ultimately it's all in one place. The next iteration is to build a big model using the high-horsepower infrastructure of the cloud, and then to push that model out and retrain or augment it with data that comes from more local or regional clients. So you take that model and you ship it out to the edge, and then you retrain it or augment it with local or regional data, and that offers a more customized experience. You did most of the work in the cloud on that high-horsepower engine, and now at the edge you can use more discrete resources to retrain and still provide that result in a timely manner to clients. Hierarchical knowledge and hierarchical classification and recommendation is how our bodies work. We call this ensemble learning. There are refinements that need to be made at every level to ensure that the model is both relevant and performant enough at the edge, because there may be hundreds of thousands of devices calling in, or millions, or billions, and they want a recommendation. How do you have a model that's localized enough and performant enough to deliver those results out at the edge?

And it's very similar to something along the lines of ChatGPT. It's all central cloud-based, lots of horsepower in the infrastructure to build those models and to run them, but ultimately it's all in one place. It's not much closer to the user than perhaps the closest infrastructure component.

So client devices may be waiting a while to reach out and get that workload performed. Perhaps, in the case of ChatGPT, for example, you're put in a queue and have to wait your turn to interact with it, because there are limitations on what that cloud architecture can accomplish.
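That queueing behavior is easy to picture as a fixed concurrency limit on the central service. A minimal sketch, with the limit and job names made up:

```javascript
// A central inference service with fixed capacity: once the concurrency
// limit is hit, later clients wait in line for a slot to free up.
class InferenceQueue {
  constructor(maxConcurrent) {
    this.maxConcurrent = maxConcurrent;
    this.active = 0;
    this.waiting = []; // clients queued behind capacity
  }
  submit(job) {
    if (this.active < this.maxConcurrent) {
      this.active++;
      return { status: 'running', job };
    }
    this.waiting.push(job);
    return { status: 'queued', position: this.waiting.length, job };
  }
  finish() {
    this.active--;
    if (this.waiting.length > 0) {
      this.active++;
      return { status: 'running', job: this.waiting.shift() }; // next in line runs
    }
    return null;
  }
}

const q = new InferenceQueue(2);   // the cloud can serve two clients at once
console.log(q.submit('a').status); // 'running'
console.log(q.submit('b').status); // 'running'
console.log(q.submit('c'));        // queued at position 1 — the ChatGPT wait
```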

The next iteration of that, then, is to build a big model using the high-horsepower infrastructure of the cloud, and then to push that model out and retrain it or augment it with data that comes from more local or regional clients. So you may have, for example, a store, but think of a massive multinational corporation that sells lots of products at thousands of outlets around the world. They know a lot about the general purchasing behaviors of their audience. They know when it's cold, people buy jackets; when it's hot, people buy flip-flops. That is, one would hope, universal, except it's not, because there are certainly regions where having your feet exposed in a flip-flop is considered rude, and so they don't sell that many flip-flops there. So a machine learning model that was trained on the global data set perhaps wouldn't be the best source of recommendations for a population in an area where cultural or weather or lots of other factors are in play, but they don't have access to that when they're building that one central model.
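The retailer example can be sketched as a global prior blended with regional purchase rates. Products, weights, and the blend factor are all invented for illustration:

```javascript
// Recommend the highest-weighted product.
function recommend(weights) {
  return Object.entries(weights).sort((a, b) => b[1] - a[1])[0][0];
}

// "Retrain at the edge": blend the global prior with locally observed
// purchase rates, so regional behavior shifts the recommendations.
function fineTune(globalWeights, regionalSales, blend = 0.5) {
  const total = Object.values(regionalSales).reduce((s, n) => s + n, 0);
  const tuned = {};
  for (const [product, w] of Object.entries(globalWeights)) {
    const localRate = (regionalSales[product] || 0) / total;
    tuned[product] = (1 - blend) * w + blend * localRate;
  }
  return tuned;
}

// Global model loves flip-flops; this region buys sneakers instead.
const globalWeights = { 'flip-flops': 0.6, sandals: 0.3, sneakers: 0.1 };
const regionalSales = { 'flip-flops': 2, sandals: 10, sneakers: 38 };

console.log(recommend(globalWeights));                          // 'flip-flops'
console.log(recommend(fineTune(globalWeights, regionalSales))); // 'sneakers'
```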

So you take that model and you ship it out to the edge, and then you retrain it or augment it with local or regional data, and that offers a more customized experience. You did most of the work in the cloud on that high-horsepower engine, and now at the edge you can use more discrete resources to retrain and still provide that result in a timely manner to clients. But we could build this out even further because, again, knowledge, much like the human brain, is a hierarchical process. The human brain will take in an image through your eyes. It will immediately try to classify the shape. I see an outline in the darkness if I'm in the jungle, and the shape looks like it might be a tiger. All I see, though, is the profile of that tiger, the silhouette. What the human brain will do is look at the edges of that profile. It will rotate that profile around and see if it can classify it: have I ever seen that shape, that silhouette before? Then imagine sound as an input. Is there what I would consider to be the growl of a tiger? Is it moving in a way that I traditionally associate with a tiger? Is it getting closer to me? As I begin to do those classifications, the interesting thing about the human brain is that there are lots of levels where it's able to take in different sensory data, but any single one of them can trigger the chemical reaction that says run, or reach for that gun against the tree, or make a loud noise, or kind of give up because it's over. Hierarchical knowledge and hierarchical classification and recommendation is how our bodies work. If this on the screen is sort of the edge-based, two-tier version, you can imagine that an iterative solution may be able to add layers and layers of knowledge on top of a model that is generated centrally and then distributed out, and that continues to get more and more refined the closer it gets to the edge.
We call this ensemble learning. When we talk about the flow of training a model in the cloud, it is more performant to use, say, a TensorFlow that's written in Python. But the further out you get, the lower the resources available, the closer to the edge, and perhaps the more restrictive or sandboxed the environment is, you begin to see lots of places you can use JavaScript to continually refine those models and then ultimately deploy them. So JavaScript becomes an excellent tool as you get out across the far edge or on-prem, or even running on clients, right? I can train something using the camera on my phone, I can retrain that model, and now all of a sudden I can tell it that the thing it's looking at is in fact a tiger. That model will exist on my phone, and anytime it sees a silhouette like that, it'll be classified as a tiger, and all of that will happen locally. However, you can see that this is a lot of moving parts. On a slide it looks synchronous, effectively copy and paste, and when you're creating a PowerPoint presentation you absolutely are copying and pasting, but there are refinements that need to be made at every level to ensure that the model is both relevant and performant enough at that edge. Because the devices that are calling in, there may be hundreds of thousands of them, or millions, or billions, and they're calling in and they want a recommendation. How do you have a model that's localized enough and performant enough to deliver those results out at the edge? So when we look at the systems and methods by which we would implement a solution like that, there are a lot of considerations. The ML tooling for JavaScript has some limitations in terms of what it can perform, and we're trying to design this iterative system to make as much use of the most appropriate client wherever we can.
So on a server you have to look at host resources, and you have to look at complexity: how many factors am I considering when I'm putting in data that's going to make a recommendation or a classification? And then sheer data volume, right? How much data am I using to train my model and to test it? Do I have the capacity to store that out on the edge, on a phone or in a browser, or easily accessible? Or am I leveraging terabytes of data in the cloud and pushing it out to the edge? And then when I get out to the edge, what is my phone capable of? What is the browser capable of? What's that sandbox? What do those restrictions mean for me? And finally, it comes down ultimately to user experience. Is it fast enough? Is it good enough? Is the accuracy high enough? And as I built that model in the cloud with lots of resources, how interoperable is it out at the edge? Take, for example, TensorFlow.
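Stepping back to the hierarchical picture from this section, the escalate-on-low-confidence idea can be sketched as a chain of tiers, each a stand-in "model" with a confidence score. Tiers, labels, and the threshold are all invented:

```javascript
// Each tier wraps a small "model" (a lookup table here) and reports a
// confidence score alongside its answer.
function makeTier(name, knowledge) {
  return (input) => {
    const label = knowledge[input];
    return label !== undefined
      ? { tier: name, label, confidence: 0.9 }
      : { tier: name, label: null, confidence: 0 };
  };
}

// Walk the chain (device → far edge → cloud) and stop at the first tier
// confident enough to answer; otherwise escalate.
function classify(input, tiers, minConfidence = 0.5) {
  for (const tier of tiers) {
    const answer = tier(input);
    if (answer.confidence >= minConfidence) return answer;
  }
  return { tier: 'none', label: 'unknown', confidence: 0 };
}

const tiers = [
  makeTier('device',   { silhouette: 'tiger' }),               // thin local model
  makeTier('far-edge', { growl: 'tiger' }),                    // regional model
  makeTier('cloud',    { growl: 'tiger', stripes: 'tiger', meow: 'cat' }),
];

console.log(classify('silhouette', tiers).tier); // 'device' answers locally
console.log(classify('meow', tiers).tier);       // escalates to 'cloud'
```

Most questions never leave the device; only the ones the local model can't answer pay the latency cost of going up the chain.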

5. Machine Learning Deployment and HarperDB

Short description:

The models you generate with Python have to be run through the TensorFlow.js converter. Consider what you're trying to accomplish and what can be accomplished at the edge. Complexity can be a challenge, especially when scaling up. HarperDB is an integrated machine learning platform that simplifies and reduces complexity. It combines a database, applications, and distribution logic. By leveraging HarperDB, you can handle training, distribution, and replication of models. Clients can access the data and models, and iterative AI allows for localized model training and deployment to client devices.

The models you generate using the Python version of TensorFlow have to be run through what's called the TensorFlow.js converter. And there are some limitations to the structures of those models. So you need to consider: what are you trying to accomplish, and what can you accomplish at the edge?

But then there's the other hierarchical nature of it. We saw all those layers previously, and we talk about complexity sort of killing it. It's no good if it's super performant but nobody in the world can maintain it. Because God forbid I become successful and need to scale it up. If you cannot get a hold on all those moving parts, and you might have 10, 15, a hundred moving parts in an application stack of micro frontends and microservices, APIs, and all that stuff, which is fine, but if you want to be in a hundred places so that you're close to all of your users, well, now I've got 10,000 moving parts to worry about. That's obviously no fun, and it raises the total cost of ownership of maintaining a system like this.

So you consider data storage, volume on disk, the business logic, my training workflow, my ML ops, my distribution, the infrastructure and the horsepower thereof, and ultimately the load: what is the volume of client apps that are going to be calling in and trying to get access to this information? So at this point, I'll just mention that HarperDB, the company for whom I work, is an integrated machine learning platform. It does lots of things; machine learning is one of those things, and we built in all the pieces we thought would be necessary to simplify and reduce that complexity, so that we could be in 100 places and you still only had to worry about 100 things. So we built a database with an application tier and distribution logic, that is, replication between nodes of HarperDB. If you look at the database, that's the data store, right? And the application tier is where your training workflow and your distribution or replication of these models can be handled by simply leveraging HarperDB's existing solution. We are obviously not the only machine learning platform, but I'll use this one as an example. We look at all the sources of data and pull that into a platform like HarperDB. Then you have modules that you can use to train and build those models. And then you have lots of clients that can access this directly. Ultimately, lots of data going in, lots of processing to generate models in real time, and finally the clients that can call in and gain access to that data. So a system of iterative AI on HarperDB combines all of the pieces we had in the previous graphic. You can simply retrain models within the application tier and accept clients within that same application tier who are asking you questions. Those same models obviously can be deployed out to the actual client devices so that they can run at the edge as well.
But the interesting thing about a platform in iterative AI is that I can ask a question of a node that's very close to me. But perhaps that model is locally trained. It's been thinned down. It's optimized for a far edge platform or very low horsepower. Maybe it's running on the edge on a Raspberry Pi. If that node can't answer the question, I can forward it further up the chain.

6. Iterative AI and Model Training

Short description:

And I can flag questions as unknown and unresolved, and then ask a more powerful node with a more global dataset. If the question can be answered, the knowledge will come back and can be used as new training data. By training on top of previously unanswered questions, you can continue to improve the model.

And I can flag that question as unknown, unresolved. And I can ask the next most powerful thing with perhaps a more global data set that isn't as locally trained. And if it can answer it, great. That knowledge will come back through that original touch point. I can use that now answered but previously unanswered question as a new set of training data to say, if you see something that seems unanswerable, but it follows a paradigm like this, then perhaps this knowledge is worthwhile. And perhaps that might be the answer or an answer that's analogous to that. So you can train on top of that. And you can continue to go up the chain.
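A minimal sketch of that feedback loop, with lookup tables standing in for models and all names invented (this is not HarperDB's API):

```javascript
// A node answers from its local "model", flags what it can't answer,
// escalates up the chain, and folds answers back in as new training data.
function makeNode(name, model, parent = null) {
  return {
    name,
    model,           // simple lookup table standing in for a trained model
    unresolved: [],  // questions flagged unknown at this node
    ask(question) {
      if (question in this.model) return this.model[question];
      this.unresolved.push(question);          // flag as unknown, unresolved
      if (!parent) return null;
      const answer = parent.ask(question);     // escalate up the chain
      if (answer !== null) this.learn(question, answer);
      return answer;
    },
    learn(question, answer) {
      // "retraining": the previously unanswerable pair joins the local model
      this.model[question] = answer;
    },
  };
}

const cloud = makeNode('cloud', { 'shape:striped': 'tiger', 'shape:spotted': 'leopard' });
const edge = makeNode('edge', { 'shape:striped': 'tiger' }, cloud);

edge.ask('shape:spotted');                 // unknown locally, answered by the cloud
console.log(edge.model['shape:spotted']);  // 'leopard' — the edge has now learned it
```

The second time a client in that region asks about a spotted shape, the edge node answers locally; the escalation paid for itself as training data.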
