November 4 - 6, 2020
ML conf EU
Online
ML conf EU 2020

The Machine Learning conference for software developers

41 min
TensorFlow.js 101: ML in the Browser and Beyond
Discover how to embrace machine learning in JavaScript using TensorFlow.js in the browser and beyond in this speedy talk. Get inspired by a whole bunch of creative prototypes that push the boundaries of what is possible in the modern web browser (things have come a long way), and then take your own first steps with machine learning in minutes. By the end of the talk everyone will understand how to recognize an object of their choice, which could then be used in any creative way they can imagine. Familiarity with JavaScript is assumed, but no background in machine learning is required. Come take your first steps with TensorFlow.js!
32 min
An Introduction to Transfer Learning in NLP and HuggingFace
In this talk I'll start by introducing the recent breakthroughs in NLP that resulted from the combination of transfer learning schemes and Transformer architectures. The second part of the talk will be dedicated to an introduction of the open-source tools released by HuggingFace, in particular our Transformers, Tokenizers and Datasets libraries and our models.
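As a taste of what these libraries look like together, here is a minimal sketch of the Transformers pipeline API fed by the Datasets library; the IMDB dataset and the pipeline's default sentiment model are illustrative choices, not necessarily what the speaker demonstrates.

```python
# Minimal sketch: HuggingFace Datasets + Transformers pipeline (assumed example).
from transformers import pipeline
from datasets import load_dataset

# Load a small slice of a public dataset with the Datasets library.
dataset = load_dataset("imdb", split="test[:5]")

# The pipeline API bundles a pre-trained Transformer with its tokenizer.
classifier = pipeline("sentiment-analysis")

for example in dataset:
    result = classifier(example["text"][:512])[0]  # truncate long reviews
    print(result["label"], round(result["score"], 3))
```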
32 min
Computer Vision Using OpenCV
As an AI scientist and a developer, I have been engaged with AI applications for many years, focusing especially on object detection and recognition. I love the idea that we can get creative in designing neural networks: we can train them supervised, unsupervised, semi- or self-supervised, which gives us possibilities to mimic the human brain in a narrow domain. However, in vision applications there are still things where AI is lacking, and will keep lacking without computer vision knowledge. Computer vision has been solving detection and recognition problems for many years, yet in the last decade AI has come to be seen as a replacement for it. AI can find the optimal model for a specific type of data set, and it might achieve better generalization. AI can also be designed to learn life-long, opening the possibility of models that serve better the longer they are used. Still, an AI vision system will lack capabilities without computer vision knowledge. First of all, it requires a very big data set to train the model, which can be expensive or even impossible to obtain, whereas a computer vision system can be modeled from just a hand-drawn template image. Training AI models also requires GPUs. For these reasons, I do not want to encourage everyone to train AI models for every simple problem that could be solved easily with computer vision. Last but not least, knowing computer vision, machine learning, and especially feature engineering methods helps in designing hybrid models that might be more robust to adversarial attacks or changing conditions.
In this lecture, I will briefly introduce how computer vision (especially using the OpenCV library) and machine learning can be used for creating detection and recognition models. Some experience with Python and Jupyter notebooks and some machine learning background will help you get more out of this lecture.
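To make the template-image point concrete, here is a minimal classical-CV sketch using OpenCV's template matching; the file names and the 0.8 confidence threshold are assumptions for illustration.

```python
# Minimal sketch of classical template matching with OpenCV,
# assuming "scene.png" and a hand-drawn "template.png" exist locally.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Slide the template over the scene and score every location.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

# Draw a box around the best match if it is confident enough.
if max_val > 0.8:  # illustrative threshold
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    cv2.rectangle(scene, top_left, bottom_right, 255, 2)
    cv2.imwrite("match.png", scene)
```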
31 min
The Evolution Revolution
Elegant and graceful mathematics make a cool textbook cover, but the inside of those same books is usually dry, cold engineering. It's important to mix theory with the excitement of practicality, and through the composition of these elements we find innovation. In this talk, I'll show you, from an engineering perspective, how to explore, balance, and ultimately bottle machined success.
33 min
How to Machine Learn-ify any Product
This talk will be a walkthrough of using machine learning to replace a rule-based system for consumers. We will discuss when it is okay to use ML, how to build these models with intelligent data, how to evaluate them offline, and finally how to validate this evaluation to land these models in production systems. Furthermore, we will illustrate various self-learning/interactive-learning strategies that production systems can use to automate how models teach themselves to become better.
34 min
Teaching ML and AI to Coders
It's often thought that to succeed with Machine Learning and Deep Learning, as an onramp to Artificial Intelligence, you need a deep background in mathematics and calculus, as well as some form of PhD. But you don't. With modern APIs like TensorFlow, much of the complexity is abstracted away in pre-built libraries, so you can focus on learning. In this session, Laurence Moroney from Google will explain how he has used this to create courses with hundreds of thousands of students, and from there, how a certificate program was created.
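As a flavour of how little code such an onramp needs, here is the kind of "hello world" often used in introductory TensorFlow courses: a single neuron learning the line y = 2x - 1 from six points. This is a sketch in that spirit, not necessarily the session's exact code.

```python
# Minimal sketch: a one-neuron Keras model learning y = 2x - 1.
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# One dense unit is enough to fit a straight line.
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]])))  # close to 19
```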
8 min
Deep Transfer Learning for Computer Vision
Chip and equipment manufacturing and tracking is a tough task given the strict adherence to quality standards and processes like six-sigma control checks. In this session, we will look at key real-world problems from the semiconductor and manufacturing industry, and at methodologies where we leveraged traditional computer vision techniques coupled with the power of deep transfer learning and machine/deep learning. We will cover two main use cases from the industry:
- Automatic Defect Detection at Nanoscale
- Defect Clustering at Nanoscale
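As a rough illustration of the transfer learning side, here is a hedged sketch of a frozen ImageNet backbone with a small task-specific head; the ResNet50 choice and the binary defect/no-defect setup are assumptions, not the session's actual pipeline.

```python
# Hedged sketch of deep transfer learning for defect detection:
# a frozen ImageNet backbone plus a small task-specific head.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # defect vs. no defect (assumed)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # with your labeled images
```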
8 min
Processing Robot Data at Scale with R and Kubernetes
Most people would agree that R is a popular language for data analysis. Perhaps less well known is that R has good support for parallel execution on a single CPU through packages like future. In this presentation we will talk about our experience scaling up R processes even further, running R in parallel in Docker containers using Kubernetes. Robots generate massive amounts of sensor and other data; extracting the right information and insights from this requires significantly more processing than can be tackled in a single execution environment. Faced with a preprocessing job of several hundred GB of compressed JSON-lines files, we used Pachyderm to write data pipelines that run the data prep in parallel, using multicore containers on a Kubernetes cluster.
By the end of the talk we will have dispelled the myth that R cannot be used in production at scale. Even if you do not use R, you will have seen a use case for scaling up analysis regardless of your language of choice.
9 min
Broadening AI Adoption with AutoML
Adoption of AI has been slowed by the challenges involved in obtaining performant models, which require significant expertise and effort, and by the limited number of practitioners with machine learning expertise. Automated machine learning (AutoML) eliminates the routine steps in the machine learning workflow, empowering domain experts without a machine learning background to build good initial models, and allowing experienced practitioners to focus on additional manual model optimization. This talk describes the extent of automation available for the various steps and demonstrates AutoML with a classifier for human activities based on accelerometer sensor data.
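The talk demonstrates a full AutoML system; as a much smaller stand-in, the sketch below automates just one routine step (model and hyperparameter search) with plain scikit-learn, on synthetic data standing in for accelerometer features.

```python
# Hedged stand-in for one automated step of an AutoML workflow:
# searching over hyperparameters instead of tuning them by hand.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic features standing in for windowed accelerometer data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [5, None]},
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```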
30 min
Boost Productivity with Keras Ecosystem
TensorFlow has built a solid foundation for various machine learning applications, on top of which the Keras ecosystem can really boost the productivity of developers building machine learning solutions. Keras has a simple yet flexible API for building and training models. However, we still need a lot of manual work to tune the hyperparameters. Fortunately, with Keras Tuner, we can automate the hyperparameter tuning process with minor modifications to the code for building and training the models. To further boost productivity, we introduce AutoKeras, which fully automates the model building, training, and hyperparameter tuning process. It dramatically reduces the amount of prior knowledge needed to use machine learning for some common tasks. All you need to do is define the task and provide the training data.
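Here is a minimal AutoKeras sketch of that "define the task, provide the data" workflow, using MNIST as an illustrative dataset; the trial and epoch counts are kept tiny for brevity.

```python
# Minimal AutoKeras sketch: automated model building, training, and tuning.
import autokeras as ak
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

clf = ak.ImageClassifier(max_trials=3)  # try 3 candidate pipelines
clf.fit(x_train, y_train, epochs=2)
print(clf.evaluate(x_test, y_test))

model = clf.export_model()  # get back a plain Keras model
```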
27 min
DeepPavlov Agent: Open-source Framework for Multiskill Conversational AI
DeepPavlov Agent is a framework designed to facilitate the development of scalable and production-ready multi-skill virtual assistants, complex dialogue systems, and chatbots. Key features of DeepPavlov Agent include (1) scalability and reliability in high-load environments thanks to its micro-service architecture; (2) ease of adding and orchestrating conversational skills; (3) shared dialogue state memory and NLP annotations accessible to all skills.
DeepPavlov DREAM is a socialbot platform with a modular design in which the main components, such as annotators, skills, and selectors, run as independent services. These components are configured and deployed using Docker containers, which allows developers to focus on application development instead of the intrinsic details of manual low-level infrastructure configuration.
8 min
Machine Learning on the Edge Using TensorFlow Lite
What if you could perform machine learning on the edge, i.e. on your mobile device? You would no longer need the round trip to the server, no data would leave the device, and you wouldn't even need an internet connection. In this session you will get an introduction to TensorFlow Lite so that you can use it in your own projects.
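Here is a minimal sketch of the conversion step that makes this possible: training (or loading) a Keras model and converting it to a TensorFlow Lite flatbuffer for on-device use. The toy model is an assumption for illustration.

```python
# Minimal sketch: convert a Keras model to TensorFlow Lite for edge inference.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable e.g. quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # ship this file inside the mobile app
```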
7 min
Browser Session Analytics: The Key to Fraud Detection
This talk will show how a fraud detection model was developed based on data from the browsing sessions of different users. Tools such as PySpark and Spark ML were used in this initiative due to the large amount of data.
The resulting model was able to identify a grouping of characteristics covering 10% of all sessions, of which 88% were deemed fraudulent. This allows analysts to spend more of their time on higher-risk cases.
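For orientation, here is a hedged sketch of what such a Spark ML pipeline could look like; the session-level column names, the Parquet path, and the gradient-boosted-trees choice are illustrative assumptions, not the actual production model.

```python
# Hedged sketch of a Spark ML pipeline for session-level fraud detection.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier

spark = SparkSession.builder.appName("fraud").getOrCreate()
sessions = spark.read.parquet("sessions.parquet")  # illustrative path

# Assemble illustrative per-session features into a single vector column.
assembler = VectorAssembler(
    inputCols=["n_pages", "session_seconds"], outputCol="features")
gbt = GBTClassifier(labelCol="is_fraud", featuresCol="features")

train, test = sessions.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[assembler, gbt]).fit(train)
predictions = model.transform(test)
```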
8 min
Can You Sing with All the Voices of the Features?
After this talk, you will know how to write the perfect song for your favourite singer! This is not a songwriting retreat but a talk about some of the lyrical, structural, harmonic, and melodic features involved in song analysis. We will discuss extracting song structures using NLP tools and repetition analysis, computing musical features, and using all of those features to predict which songs fit which artist best. Attend this talk to discover what the music industry can achieve with machine learning.
35 min
Power of Transfer Learning in NLP: Build a Text Classification Model Using BERT
The domain of Natural Language Processing has seen a tremendous amount of research and innovation in the past couple of years to tackle the problem of implementing high-quality machine learning and AI solutions using natural text. Text classification is one such area that is extremely important in all sectors like finance, media, product development, etc. Building a text classification system from scratch for every use case can be challenging in terms of cost as well as resources, even when there is a good amount of data to begin training with.
Here comes the concept of transfer learning: taking models that have been pre-trained on terabytes of data and fine-tuning them for the problem at hand is the new way to efficiently implement machine learning solutions without spending months on a data-cleaning pipeline.
This talk will highlight ways of using BERT and fine-tuning the base model to build an efficient text classification model. A basic understanding of Python is desirable.
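Here is a hedged sketch of that fine-tuning workflow using the HuggingFace Transformers Trainer API; the IMDB dataset, the two-label setup, and the small training subset are illustrative assumptions.

```python
# Hedged sketch: fine-tune BERT for binary text classification.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")  # illustrative dataset
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

# Pre-trained BERT body with a fresh classification head on top.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```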
26 min
Never Have an Unmaintainable Jupyter Notebook Again!
Data visualisation is a fundamental part of Data Science. The talk will start with a practical demonstration (using pandas, scikit-learn, and matplotlib) of how relying on summary statistics and predictions alone can leave you blind to the true nature of your datasets. I will make the point that visualisations are crucial in every step of the Data Science process and therefore that Jupyter Notebooks definitely do belong in Data Science. We will then look at how maintainability is a real challenge for Jupyter Notebooks, especially when trying to keep them under version control with git. Although there exists a plethora of code quality tools for Python scripts (flake8, black, mypy, etc.), most of them don't work on Jupyter Notebooks. To this end I will present nbQA, which allows any standard Python code quality tool to be run on a Jupyter Notebook. Finally, I will demonstrate how to use it within a workflow which lets practitioners keep the interactivity of their Jupyter Notebooks without having to sacrifice their maintainability.
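For reference, nbQA is driven from the command line by prefixing the usual tool invocation with nbqa; a minimal sketch (exact flags may vary between nbQA versions):

```bash
pip install nbqa flake8 black
nbqa flake8 my_notebook.ipynb  # lint the notebook's code cells
nbqa black my_notebook.ipynb   # format them (older versions need --nbqa-mutate)
```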
35 min
Dabl: Automatic Machine Learning with a Human in the Loop
In many real-world applications, data quality, curation, and domain knowledge play a much larger role in building successful models than complex processing techniques and hyper-parameter tweaking. Therefore, a machine learning toolbox should enable users to understand both data and model, without burdening the practitioner with picking preprocessing steps and hyperparameters. The dabl library is a first step in this direction. It provides automatic visualization routines and model inspection capabilities while automating away model selection.
dabl contains plot types not yet available in standard Python libraries, as well as novel algorithms for picking interesting visualizations. Heuristics are used to select appropriate preprocessing for machine learning, while state-of-the-art portfolio selection algorithms are used for efficient model and hyperparameter search.
dabl also provides easy access to model evaluation and model inspection tools provided by scikit-learn.
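A minimal sketch of the dabl workflow described above, following the library's own titanic example: clean the data, plot it against the target, and fit an automatic baseline classifier.

```python
# Minimal dabl sketch: automatic visualization plus an automatic baseline model.
import pandas as pd
import dabl

# Load the demo dataset bundled with dabl.
titanic = pd.read_csv(dabl.datasets.data_path("titanic.csv"))

# Detect column types and fix obvious data quality issues.
titanic_clean = dabl.clean(titanic, verbose=0)

# Automatically pick and render informative plots against the target.
dabl.plot(titanic_clean, target_col="survived")

# Fit a quick baseline with automated preprocessing and model search.
model = dabl.SimpleClassifier(random_state=0).fit(
    titanic_clean, target_col="survived")
```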