Processing Robot Data at Scale with R and Kubernetes


Most people would agree that R is a popular language for data analysis. Perhaps less well known is that R has good support for parallel execution on a single machine through packages like future. In this presentation we talk about our experience scaling R processes even further, running R in parallel in Docker containers using Kubernetes. Robots generate massive amounts of sensor and other data; extracting the right information and insights from it requires significantly more processing than can be tackled in a single execution environment. Faced with a preprocessing job over several hundred GB of compressed JSON Lines files, we used Pachyderm to write data pipelines that run the data prep in parallel, using multicore containers on a Kubernetes cluster.


By the end of the talk we will have dispelled the myth that R cannot be used in production at scale. Even if you do not use R, you will have seen a use case for scaling up analysis, regardless of your language of choice.

8 min
02 Jul, 2021

Video Summary and Transcription

The talk discusses the challenges of managing and analyzing the increasing volume of data gathered from robots. It highlights the importance of data extraction and feature engineering in analyzing what happens before a failure. The use of Kubernetes and Pachyderm for data management and automatic updates in the pipeline is mentioned. The parallelization of R scripts and the scalability of large clusters for data collection and processing are emphasized. The talk also mentions the use of AI at the robot fleet level for unlocking new opportunities.


1. Introduction to Robot Data Analysis

Short description:

Hello. My name is Frans van den Een, Chief Data Officer at Expantia. Together with Florian Pistoni, the CEO of InOrbit, we address the challenges of managing and analyzing the increasing volume of data gathered from robots. InOrbit has accumulated over 3.8 million hours of robot data in the last 12 months alone. One of the main challenges is the data extraction process, where we need to join observations based on the nearest timestamp and perform feature engineering. We analyze what happens before a failure by looking back over a certain time period for each observation.

Hello. Thank you very much for your interest in this talk. My name is Frans van den Een. I'm the Chief Data Officer at Expantia, and we help organisations to boost their data analytics capacity.

This talk was prepared together with Florian Pistoni. Florian is the co-founder and CEO of InOrbit, a cloud robot management platform for running robot operations at scale. With the increase in robot usage, especially during COVID, robots are working alongside humans and autonomously, resulting in a significant increase in the volume of data gathered from robots. InOrbit has accumulated over 3.8 million hours of robot data in the last 12 months alone, and they continue to grow rapidly, adding a year's worth of data every day.

One of the main challenges we encountered is that InOrbit offers its services to any fleet, and we have no control over how the data is gathered and sent to the central service. In one of the proof-of-concepts (POCs) we conducted, we faced the issue of many robots sending millions of files, with each file containing data from multiple sources. These sources, such as robot and agent operations, had their own timestamps and were not directly related. The first step we took was data extraction, which proved to be more complex than expected. We needed to join observations based on the nearest timestamp and perform feature engineering on top of that.

Nearest-time joining involved finding an interval where we could join different signals, about mission, localization, and speed, to create a single observation. Once we had one observation per time unit, we focused on feature engineering. We wanted to analyze what happened before a failure, looking back over a certain time period for each observation.
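To make the nearest-timestamp join concrete, here is a minimal sketch in R (not the project's actual code; the tables, column names, and values are hypothetical) using data.table's rolling joins, where each speed reading is matched to the localization reading with the closest timestamp:

# Minimal sketch of a nearest-timestamp join; table and column names are hypothetical.
library(data.table)

speed <- data.table(
  ts    = as.POSIXct(c("2021-07-02 10:00:00", "2021-07-02 10:00:07"), tz = "UTC"),
  speed = c(0.8, 1.1)
)
localization <- data.table(
  ts = as.POSIXct(c("2021-07-02 10:00:01", "2021-07-02 10:00:06"), tz = "UTC"),
  x  = c(12.1, 12.9),
  y  = c(3.4, 3.6)
)

# For every speed observation, take the localization row whose timestamp is closest.
joined <- localization[speed, on = .(ts), roll = "nearest"]

The same pattern extends to the other signals, so that every time unit ends up as a single joined observation.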

2. Data Extraction and Feature Engineering

Short description:

The first step was data extraction, which involved joining observations by nearest timestamp. We then performed feature engineering and analyzed what happened before a failure. Due to the large volume of data, we decided to use Kubernetes and Pachyderm, an open-source product that offers versioned data pipelines. This allows for easier data management and automatic updates in the pipeline.

They were not directly related. So the first step that we needed to do was data extraction. And this was a little bit more complex than we expected, especially because we needed to join observations by the nearest timestamp. I will highlight that a little more one slide further on.

And then do the feature engineering on top of that. So what I mean by nearest-time joining: we have different signals, about mission, localization, and speed, and we need to find an interval where we can join each and every one of those signals to have a single observation. We worked out how that could be done. And then we needed to start the feature engineering. So once we have one line, one observation per time unit, we wanted to look back. We wanted to look at what happened before a failure. Say there is a failure right here; if we go back, say, 42 seconds, then we need to do that for each and every observation. Doing that, and taking into account all the cases where we couldn't include the datum, for instance when there was a failure within the 42-second time frame, was absolutely possible. But then we were faced with an enormous volume where our local computer simply said, no, this is not going to be possible. So we immediately thought about farming this out to Kubernetes.
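As an illustration of that look-back (again not the actual pipeline code; the data, column names, and window handling are placeholders), one way to build such features in plain R is to compute, for every observation, a summary of the preceding 42 seconds plus a flag for windows that already contain a failure:

# Minimal sketch of the 42-second look-back; data and column names are made up.
set.seed(1)
obs <- data.frame(
  ts      = seq(as.POSIXct("2021-07-02 10:00:00", tz = "UTC"), by = 1, length.out = 120),
  speed   = runif(120, 0, 1.5),
  failure = FALSE
)
obs$failure[c(50, 110)] <- TRUE   # two made-up failure events
window_s <- 42

# For each observation: mean speed over the preceding 42 seconds, plus a flag
# marking windows that already contain a failure (those datums are dropped).
look_back <- function(k) {
  in_window <- obs$ts >= obs$ts[k] - window_s & obs$ts < obs$ts[k]
  c(mean_speed        = mean(obs$speed[in_window]),
    failure_in_window = as.numeric(any(obs$failure[in_window])))
}
features <- as.data.frame(t(vapply(seq_len(nrow(obs)), look_back, numeric(2))))
obs <- cbind(obs, features)
obs <- obs[obs$failure_in_window == 0, ]

Running this kind of per-observation loop over hundreds of gigabytes is exactly what a single machine could not handle, which is why the next step was to farm it out.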

We set up a bucket with the data that was going in. We packaged the data into one-day archives of gzipped JSON Lines files to make it a little more workable and to be able to transport the data with more ease. We then farmed out the full data extraction to Kubernetes, wrote the intermediate result to a second bucket, farmed out the feature engineering, and had the result ready for analysis.
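To give an idea of what the extraction side of that looks like in R (the file name is hypothetical, and the approach assumes the jsonlite package), each one-day archive of gzipped JSON Lines can be streamed straight into a data frame:

# Stream one day's worth of compressed JSON Lines records into R.
# The file name is a placeholder; each line of the file is one JSON record.
library(jsonlite)

day_file <- "robot-data-2021-07-01.jsonl.gz"
records  <- stream_in(gzfile(day_file), verbose = FALSE)

# `records` then feeds into the nearest-timestamp join and feature engineering above.

Each container in the cluster can then work through its own subset of these daily files, which is what makes the job easy to split up.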

What we found is that it is much easier with the help of something called Pachyderm. Pachyderm is a product, and a company; they have an open-source version of it, which is what we use. And what we have there is not a bucket, what we have is a filing cabinet: a repository where we version the data that is coming in and version the data that is coming out. Doing this kind of data pipeline with versioning means that if there is one change at any point in the pipeline, the rest of the pipeline will respond and update the data automatically. So that prepares us to have all the heavy lifting ready once we bring this into production. Just a quick look at what this looks like. We create pipelines that are very similar to your usual configuration files, and the key thing here is that we can connect the data in the Pachyderm repository (PFS is the Pachyderm file system) to what we're running in our R script.
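As a rough illustration of what such a pipeline spec can look like (the names, image, command, and parallelism values below are made up, and the exact fields should be checked against the Pachyderm documentation), the spec is a small JSON document that points a container and a command at a PFS input repository; here it is written out from R with jsonlite:

# Sketch of a Pachyderm pipeline spec generated from R; all values are hypothetical.
library(jsonlite)

spec <- list(
  pipeline  = list(name = "extract-robot-data"),
  transform = list(
    image = "ourregistry/r-extract:latest",                      # hypothetical image
    cmd   = list("Rscript", "/code/extract.R", "/pfs/raw-data", "/pfs/out")
  ),
  input            = list(pfs = list(repo = "raw-data", glob = "/*")),
  parallelism_spec = list(constant = 8)
)

write_json(spec, "extract-pipeline.json", auto_unbox = TRUE, pretty = TRUE)
# The spec would then be registered with: pachctl create pipeline -f extract-pipeline.json

Inside the container, the R script reads its datums from /pfs/<repo> and writes results to /pfs/out, and Pachyderm versions whatever lands there.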

3. Parallelization and Scalability

Short description:

Our parallelized R scripts were not enough, so we farmed out the connection and data preparation. Monitoring the data on a granular level was great. Parallelizing R code is easy with the 'future' package. Working with Kubernetes made the transition to a massively parallel pipeline easier. Large clusters are surprisingly affordable, allowing for scalable data collection and processing. Using AI at the robot fleet level unlocks new opportunities. Visit expanded.com or inorbit.ai for more information.

Our R scripts were already parallelized, but parallelizing alone was not enough. We were now able to farm this out: make that connection, set up whatever data preparation we were doing next, and, for each datum (that is the Pachyderm term for each unit of data in the pipeline), see whether it was successful or whether it failed. This is a screenshot from Patrick Santamaria, with whom we did most of this work.

So being able to monitor, on that level, what is happening with your data was, in practice, absolutely great. Parallelizing R code is easy: there is the future package, and packages built on it, and it's very easy to parallelize with them. Going from parallel R code to a massively parallel pipeline is doable, and for us it was much easier to do by working with Kubernetes.
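A minimal sketch of that kind of parallel R code (the processing function and paths are placeholders), using future together with future.apply, looks like this:

# Parallelize per-file processing across all available cores of the container.
# `process_one_file` and the input path are hypothetical placeholders.
library(future)
library(future.apply)

plan(multisession, workers = availableCores())

files   <- list.files("/pfs/raw-data", pattern = "\\.jsonl\\.gz$", full.names = TRUE)
results <- future_lapply(files, process_one_file)

The same code runs unchanged on a laptop or inside a multicore container, which is what made the jump to a massively parallel pipeline mostly a matter of infrastructure rather than of rewriting R.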

The other thing was that large clusters are surprisingly affordable. We worked on an 80-CPU, 520-gigabyte cluster for under $10 an hour, which was something we hadn't experienced before, and we are now using this more and more in our work. For the InOrbit team we were working with, this also had huge implications. As fleets grow, they know that collecting and processing data becomes critical for them. They already understand how to scale their full platform, and they now know that scaling up the analysis doesn't need to be hard either. And using AI at the robot fleet level unlocks many new opportunities that they are working hard on, to continue to offer to their customers.

Thank you very much. I hope that this was useful. If you want any further information, please visit our websites, expanded.com or inorbit.ai. We both have blogs running and we like to write about this stuff, so we hope to be in touch sometime in the future.
