Deep Transfer Learning for Computer Vision


Chip and equipment manufacturing and tracking is a tough task given the strict adherence to quality standards and processes like Six Sigma control checks. In this session, we look at key real-world problems from semiconductor manufacturing and the methodologies we used to address them, combining traditional computer vision techniques with the power of deep transfer learning and machine/deep learning. We will cover two main use cases from the industry:


Automatic Defect Detection at Nanoscale

Defect Clustering at Nanoscale

8 min
02 Jul, 2021

Video Summary and Transcription

Today's Talk focuses on deep transfer learning for Computer Vision in the semiconductor manufacturing industry, specifically defect classification. The speakers discuss using a hybrid classification system with pre-trained models and image augmentation for accurate defect detection. They also explore the use of unsupervised learning, leveraging clustering algorithms and pre-trained models like ResNet-50, for defect analysis without prior knowledge. The process is reproducible, user-friendly, and provides accurate cluster results, with potential for future supervised learning applications.


1. Introduction

Short description:

Today's topic is deep transfer learning for Computer Vision, with a focus on real-world applications in the semiconductor manufacturing industry. Dipanjan and Sachin will share their expertise in this field.

Hi, everyone. So, today we will be talking about deep transfer learning for Computer Vision, and we'll be covering a couple of real-world applications in the context of the semiconductor manufacturing industry at nanoscale.

A bit about ourselves, so I'm Dipanjan. I'm a data science lead at Applied Materials, also a Google Developer Expert in Machine Learning and an author. Over to you, Sachin.

Thanks, Dipanjan. Hi, everyone. My name is Sachin. I lead the data science competency from Bangalore at Applied Materials, and I have more than a decade of experience handling multiple enterprise-wide use cases. In this session, we'll be discussing AI use cases related to the semiconductor industry. So let's jump into it, Dipanjan.

2. Defect Classification

Short description:

Today's topic is defect classification in semiconductor manufacturing. We leverage multiple classifiers to identify defects and apply image processing techniques to analyze the defects. Dipanjan explains how we use a hybrid classification system with pre-trained models and image augmentation for accurate defect detection and classification.

Okay, so today we are going to talk about defect classification. I think all of us know about Moore's law, which states that the number of transistors in a chip doubles roughly every two years. How is that possible? It is possible through the atomic-level engineering that companies like Applied Materials do day in and day out. For that, they need to develop process recipes, and these recipes are not easy to develop.

The process engineers who work on these recipes have to deal with a lot of defects while refining them. They rely on high-end image capturing techniques from tools like laser-based inspection systems and AFM (Atomic Force Microscopy) tools. Some of these techniques are destructive, and the wafers get destroyed in the process. That's where AI can help: by figuring out these defects automatically, it brings a lot of cost savings and time savings.

So I'll give you a quick overview of how we are leveraging AI techniques for defect detection in semiconductor manufacturing. We deal with multiple types of defects; in this particular use case, we have defect types like particle (large and small), pit, and so on. As you can see, there is a lot of noise and variation, so it was not easy for one single algorithm or one single model to handle the whole classification. What we have done here is leverage multiple classifiers in a stacking technique. We input the image, apply various image processing techniques including denoising, and then feed the image into a first-level classifier, which figures out whether there is a defect or not. If there is a defect, a second level of classifiers decides whether it's an unknown, a large, or a small defect.

Once we find that, we apply different image processing techniques to bring the defect out of the background. Then we do further analysis, such as checking whether the noise levels are below a threshold, so that we can go directly to the fourth level of classifiers, which decide the ultimate class: whether it's a pit or a particle. In this way, we leverage transfer-learning-based techniques throughout the classification approach we have built.
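To make this staged flow concrete, here is a minimal sketch of such a classifier cascade in Python. The model names in the models dictionary, the Otsu thresholding step, and the noise threshold value are illustrative assumptions; the talk does not share the actual implementation.

```python
import cv2
import numpy as np

NOISE_THRESHOLD = 15.0  # illustrative value, not from the talk


def classify_defect(image: np.ndarray, models: dict) -> str:
    """Run one grayscale scan (uint8) through the staged cascade of classifiers."""
    # Stage 0: denoise the raw scan before any classification.
    denoised = cv2.fastNlMeansDenoising(image)

    # Stage 1: defect vs. no defect.
    if not models["defect_vs_clean"].predict([denoised.ravel()])[0]:
        return "no_defect"

    # Stage 2: coarse category: unknown, large, or small defect.
    coarse = models["coarse_type"].predict([denoised.ravel()])[0]
    if coarse == "unknown":
        return "unknown"

    # Stage 3: bring the defect out of the background (Otsu thresholding as a stand-in).
    _, foreground = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Stage 4: only when noise is low enough, decide the final class (pit vs. particle).
    noise = cv2.Laplacian(foreground, cv2.CV_64F).std()
    if noise < NOISE_THRESHOLD:
        return models["final_type"].predict([foreground.ravel()])[0]
    return "needs_review"
```

Each entry in the hypothetical models dictionary stands in for one fitted classifier, for example the transfer-learning models described next.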

So, I think Dipanjan will go deeper into it. Over to you, Dipanjan. Here, we apply a hybrid classification system where we fine-tune a pre-trained ResNet model to extract deep features. We also use traditional image-processing-based features like gray-level co-occurrence matrices and Zernike moment features. We combine these to form a fusion feature vector and pass it through a deep learning network to do the final level of classification. We also do some image augmentation, especially in cases where we have less data, just like here, because we can't afford to collect a lot of data for semiconductor-level electron microscope scans. And we also leverage pre-trained models like ResNet-50 as the backbone of an object-detection model like Faster R-CNN to detect and count the number of defects, because sometimes you also have to give the defect count besides the type of defect. With regard to our evaluation, the performance is 95 percent on a holistic overview across the four types of classifiers, and we also look at per-classifier performance to understand the level of performance we are getting at each specific hierarchy in our overall hierarchical classification system.
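As a rough illustration of that fusion feature vector, the sketch below combines pooled ResNet-50 activations with GLCM texture statistics and Zernike moments. The choice of Keras, scikit-image, and mahotas, and all parameter values, are our assumptions rather than the speakers' actual pipeline.

```python
import numpy as np
import mahotas
from skimage.feature import graycomatrix, graycoprops
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# Pre-trained backbone used purely as a feature extractor (global average pooling).
resnet = ResNet50(weights="imagenet", include_top=False, pooling="avg")


def fusion_features(gray_img: np.ndarray) -> np.ndarray:
    """gray_img: 2-D uint8 defect scan; returns one concatenated feature vector."""
    # Deep features: replicate the gray channel to RGB and pool ResNet-50 activations.
    rgb = np.repeat(gray_img[..., None], 3, axis=-1).astype("float32")
    rgb = np.expand_dims(preprocess_input(rgb), 0)            # shape (1, H, W, 3)
    deep = resnet.predict(rgb, verbose=0).ravel()             # 2048-d vector

    # Texture features: GLCM statistics at a few offsets and angles.
    glcm = graycomatrix(gray_img, distances=[1, 2], angles=[0, np.pi / 2], normed=True)
    texture = np.hstack([graycoprops(glcm, p).ravel()
                         for p in ("contrast", "homogeneity", "energy")])

    # Shape features: Zernike moments of the defect region.
    zernike = mahotas.features.zernike_moments(gray_img, radius=gray_img.shape[0] // 2)

    return np.concatenate([deep, texture, zernike])
```

The resulting vector would then feed a small dense network for the final classification step described above.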

3. Unsupervised Learning for Defect Analysis

Short description:

We explore unsupervised learning in the context of deep transfer learning for defect analysis. By leveraging clustering algorithms and pre-trained models such as ResNet-50, we can identify and analyze defects without prior knowledge. The process is reproducible and user-friendly, providing accurate cluster results and allowing for future supervised learning applications.

Now we will discuss an unsupervised learning case in the context of deep transfer learning. Over to you, Sachin. Thank you, Dipanjan. So, as we saw in the last case, we had prior knowledge of the types of defects, but that's not always the case. Many times we are not aware of how a defect is getting induced in the recipe, and there we need to do a lot of analysis: what is the defect count, what is the morphology or shape, what are the elements involved, like nitrogen or aluminum? At different parts or different steps of the recipe, we can encounter different foreign particles which could result in defects.

So because of the lack of prior knowledge, we go to an unsupervised technique. We have built an interface where a user can browse to the location where all these defect images are stored. Once the images are fed to the system, we use the ResNet-50 model to extract features. These feature vectors are then fed to the various clustering algorithms we have. We get a higher level of clustering, which is presented to the user so they can decide how many clusters they want, and then we go to a finer level of clustering like agglomerative clustering or affinity propagation clustering.
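Here is a minimal sketch of that two-stage clustering choice, assuming the per-image feature vectors come from the pre-trained ResNet-50 extractor described above; the parameters are placeholders that the user would effectively choose through the interface.

```python
from typing import Optional

import numpy as np
from sklearn.cluster import AffinityPropagation, AgglomerativeClustering


def cluster_defects(features: np.ndarray, n_clusters: Optional[int] = None) -> np.ndarray:
    """features: (n_images, n_features) matrix; returns one cluster label per image."""
    if n_clusters is None:
        # Coarse pass with no prior knowledge: affinity propagation picks the cluster count.
        model = AffinityPropagation(random_state=0)
    else:
        # Finer pass once the user has decided how many clusters they want.
        model = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward")
    return model.fit_predict(features)
```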

Based on these clustering results, we figure out the right clusters as well as the outliers, and that is presented to the user. This makes the whole process very reproducible and repeatable, which is otherwise very subjective between different manual users. So over to Dipanjan to get more into the details of this whole algorithm.

Yeah. So again, here we leverage a pre-trained ResNet model along with computer-vision-specific features like the Zernike orthogonal moment features, and we get those fusion vectors which we typically pass to our clustering models. That is where we use hierarchical clustering, in the sense of forming the dendrogram, getting the right level of cutoff, and forming the clusters from the data itself. We also isolate the singleton clusters, which are basically very different from the regular clusters in terms of similar kinds of defects. This is a view you can see of the interface, which shows the different clusters to the user; they can then save the results and leverage them in the future, even feeding them into a supervised learning problem. And that brings an end to our whirlwind tour of Deep Transfer Learning for Computer Vision. Thank you.
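To close, here is a minimal sketch, under our own assumptions, of the dendrogram cutoff and singleton isolation described above: hierarchical (Ward) clustering over the fusion vectors, a distance cutoff to form clusters, and single-member clusters flagged as outliers. The cutoff value is illustrative only.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage


def cut_dendrogram(features: np.ndarray, cutoff: float = 10.0):
    """Returns (labels, outlier_indices) for one batch of fusion feature vectors."""
    Z = linkage(features, method="ward")                    # build the dendrogram
    labels = fcluster(Z, t=cutoff, criterion="distance")    # cut it at the chosen height

    # Clusters with a single member are treated as outliers (very unusual defects).
    ids, counts = np.unique(labels, return_counts=True)
    singleton_ids = ids[counts == 1]
    outlier_indices = np.where(np.isin(labels, singleton_ids))[0]
    return labels, outlier_indices
```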
