Broadening AI Adoption with AutoML


Adoption of AI has been slowed by the challenges involved in obtaining performant models, which require significant expertise and effort, and by the limited number of practitioners with machine learning expertise. Automated machine learning (AutoML) eliminates the routine steps in the machine learning workflow, thus empowering domain experts without a machine learning background to build good initial models, and allowing experienced practitioners to focus on additional manual model optimization. This talk describes the extent of automation available for the various steps and demonstrates AutoML with a classifier for human activities based on accelerometer sensor data.


Hello everyone, my name is Bernhard Suhm, and I am product manager for machine learning at MathWorks. Let me motivate my topic with some questions for you. Where do you want to apply AI? Are you concerned about your lack of experience in AI, or about black-box models? The community widely recognizes these as challenges and barriers to broader adoption of AI across many industries. Today I will focus on AutoML: automation that takes the routine iterative effort, and much of the data science, out of building machine learning models.

So what exactly is AutoML? To understand that, you need to know a bit about the typical workflow for building machine learning models. Classical machine learning is the focus of this talk, but building deep neural networks isn't that different. First, you need to preprocess your raw data: deal with its messiness, such as missing data and outliers, and get it into a shape that's suitable for the later stages. Next, you need to engineer features, that is, extract a few variables from your data that serve as input to your model and capture the majority of the variability. That's fairly easy for numeric data, but a lot harder for signals. Next, you're faced with a choice of different machine learning models, and even to experts it's not clear which model performs best on any given problem. So you have to try multiple, which leads to the model tuning stage, where you assess the performance of some initial models, optimize their hyperparameters, and maybe select a subset of features to avoid overfitting. But that may not be enough to get really good performance: you may have to go back, replace some features with others, and do this all over again. If you're familiar with machine learning, you will know that the most difficult and time-consuming stages are feature engineering and optimization. If your head is spinning now, don't despair, because you don't have to master all this complexity. The point of AutoML is to simplify it.
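The feature engineering step described above can be sketched for accelerometer data. This is a minimal, hypothetical Python illustration (the talk itself works in MATLAB): it reduces one window of raw samples to a handful of summary features, and both the function name and the feature set are illustrative assumptions, not the talk's actual pipeline.

```python
import math

def extract_features(window):
    """Reduce one window of accelerometer samples to a few summary
    features (hypothetical feature set, for illustration only)."""
    n = len(window)
    mean = sum(window) / n
    variance = sum((x - mean) ** 2 for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)   # signal energy
    peak = max(abs(x) for x in window)                # largest excursion
    return {"mean": mean, "std": math.sqrt(variance), "rms": rms, "peak": peak}

# Example: a short synthetic accelerometer window (one period of motion)
window = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
features = extract_features(window)
```

In a real activity-recognition pipeline, each labeled window of sensor data would be mapped to such a feature vector before model training; manual feature engineering means choosing and refining these summaries by hand.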
Ideally, you would go directly from your initial data and your machine learning problem to a model you can deploy. Realistically, though, getting from data to a deployable model in a single step is not possible. What is realistic is freeing up engineers like yourself to focus on the hard machine learning problems and on your application. Otherwise, without AutoML, you have to find AI expertise either inside your team and organization or outside, and those data scientists are hard to find and expensive. So the first barrier that AutoML removes is the lack of machine learning expertise. But even if you have that expertise, your productivity increases, because AutoML takes away those time-consuming, iterative steps. Finally, AutoML allows you to solve problems that otherwise wouldn't be feasible, like use cases where you need to build many different models representing different variations or different environmental conditions.

So how do you apply AutoML in engineering? Most engineering applications are based on signal or image data, and that's where feature engineering becomes critical for good performance, and it is notoriously difficult. We at MathWorks brought our signal processing knowledge to bear and came up with the following three-step AutoML workflow. First, you apply wavelet scattering. Wavelets, with their time-bounded shape, are very well suited to representing spikes and irregularities in signals, so you get very good features. Many engineering applications, however, require deployment to memory- and power-limited embedded systems, where you cannot deploy large models. So second, we apply automated feature selection to reduce the maybe hundreds of wavelet features to just a few highly performant features, and thereby reduce the model size. Third, and key, is the model selection and hyperparameter tuning step.
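The automated feature selection step can be illustrated with a simple filter approach: score each candidate feature by how well it separates the classes, then keep only the top-scoring few. This Python sketch is a hypothetical stand-in for the talk's MATLAB tooling; the Fisher-style score and all names are assumptions for illustration.

```python
def fisher_score(values, labels):
    """Class-separability score for one feature (two classes, 0 and 1):
    squared difference of class means relative to within-class spread."""
    a = [v for v, y in zip(values, labels) if y == 0]
    b = [v for v, y in zip(values, labels) if y == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((x - mb) ** 2 for x in b) / len(b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

def select_top_k(feature_table, labels, k):
    """Rank features by score and keep the k most discriminative ones."""
    scores = {name: fisher_score(col, labels) for name, col in feature_table.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy data: 'f1' separates the two classes well, 'f2' is uninformative
features = {
    "f1": [0.1, 0.2, 0.1, 0.9, 1.0, 0.8],
    "f2": [0.5, 0.4, 0.6, 0.5, 0.6, 0.4],
}
labels = [0, 0, 0, 1, 1, 1]
selected = select_top_k(features, labels, k=1)  # → ['f1']
```

Reducing hundreds of wavelet scattering coefficients to a handful of features in this spirit is what makes the final model small enough for embedded targets.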
You have a choice of different models, and for a model to perform well, its hyperparameters need to be set just right. Let's look at that stage in a little more detail. How does that simultaneous optimization of model and hyperparameters work? You could perform a random search, but that's not efficient, because the search space is very large. Instead, we employ Bayesian optimization, which builds a model of the search space. Here you can see how Bayesian optimization switches between different types of models and reduces the error over the course of the iterations.

How do we know that AutoML works? We compared AutoML to the traditional manual process on two classification problems. First, we looked at human activity recognition, where you take accelerometer data from mobile phones. We have about 7K observations in a set we collected, and we manually engineered 66 features using various signal processing functions. Second, we looked at heart sound classification. Think about being in your doctor's office with a stethoscope, listening to your heartbeat. For those phonocardiograms, we had a publicly available data set of 10K observations and engineered fewer than 30 features. So what results did we get? With the manual process, we achieved accuracies in the high 90s, as you would want for such an important application. With AutoML, the accuracy was slightly lower for one application, but the point is that without all that expertise and without the time-consuming iterative process, you get very good models in a few steps. So AutoML empowers engineers without AI expertise to build optimized models, including for signal applications where the feature extraction is notoriously difficult.
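The joint search over model type and hyperparameters can be sketched with plain random search as a baseline; Bayesian optimization, as used in the talk, replaces this blind sampling with a probabilistic model of the error surface that steers later trials toward promising regions. Everything in this Python sketch is hypothetical: the two model types, their parameter ranges, and the `cv_error` surrogate standing in for a real cross-validation run.

```python
import random

def cv_error(model_type, params):
    """Stand-in for a cross-validation error estimate; a real AutoML
    system would train and validate an actual model here."""
    if model_type == "svm":
        return (params["c"] - 1.0) ** 2 + 0.10   # best near c = 1.0
    else:  # "tree"
        return (params["depth"] - 6) ** 2 / 50 + 0.15  # best near depth = 6

def random_search(n_iter, seed=0):
    """Jointly sample model type and hyperparameters, keeping the
    best configuration (lowest estimated error) seen so far."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_iter):
        model_type = rng.choice(["svm", "tree"])
        if model_type == "svm":
            params = {"c": rng.uniform(0.01, 10.0)}
        else:
            params = {"depth": rng.randint(1, 20)}
        err = cv_error(model_type, params)
        if best is None or err < best[2]:
            best = (model_type, params, err)
    return best

best_model, best_params, best_err = random_search(200)
```

The inefficiency the talk mentions is visible here: random search spends most trials in poor regions, whereas a Bayesian optimizer would concentrate evaluations where its surrogate model predicts low error.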
You can apply AutoML to signal applications in a few steps: automated feature generation with wavelets; automated feature selection to reduce model size and make the model fit on your hardware; and model selection along with hyperparameter tuning, done efficiently using Bayesian optimization. Finally, to deploy your AI model to the edge and to embedded systems, you need low-level code like C. MATLAB code can be translated automatically to C/C++ code that can be deployed directly, and thus another barrier to broader adoption of AI is removed. Thank you for your attention. If you want to know more: on Monday afternoon and evening, I'll have a longer session on AutoML and interpretability, a one-hour seminar on those two topics, and a two-hour hands-on workshop on machine and deep learning using MATLAB Online. And now I'll hand back to the moderator for questions.
02 Jul, 2021
