And then let's see: can you export a newly trained model and just import it next time? Yes. That KNN classifier is just a regular TensorFlow model. I don't show how to save it, but I do show how you can import some models, and yes, you can save the results of the KNN classifier. Then next time you'd load MobileNet to get the activations and run them through your pre-trained KNN classifier, and that's how you'd do predictions. So no, you don't have to train it every time.
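For reference, here's a minimal sketch of what that save-and-restore could look like, assuming the @tensorflow-models/mobilenet and @tensorflow-models/knn-classifier packages for TensorFlow.js. The serialize/deserialize helpers and the JSON format are just one way to persist the classifier's examples, not the only one:

```typescript
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

// Turn the classifier's stored examples into a JSON string you can keep anywhere.
function serialize(classifier: knnClassifier.KNNClassifier): string {
  const dataset = classifier.getClassifierDataset();
  const entries = Object.entries(dataset).map(([label, t]) => ({
    label,
    shape: t.shape,
    values: Array.from(t.dataSync()),
  }));
  return JSON.stringify(entries);
}

// Rebuild a classifier from that JSON string in a later session.
function deserialize(json: string): knnClassifier.KNNClassifier {
  const classifier = knnClassifier.create();
  const dataset: { [label: string]: tf.Tensor2D } = {};
  for (const { label, shape, values } of JSON.parse(json)) {
    dataset[label] = tf.tensor2d(values, shape);
  }
  classifier.setClassifierDataset(dataset);
  return classifier;
}

// Next time: load MobileNet for the activations, restore the classifier, and predict.
async function predict(savedJson: string, img: HTMLImageElement) {
  const net = await mobilenet.load();
  const classifier = deserialize(savedJson);
  const activation = net.infer(img, true); // true = return the embedding
  return classifier.predictClass(activation);
}
```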
Okay, so we trained it on dogs and cats, gave it a dog and a cat, and got the cat and the dog right. What about carrots? I'm going to click on carrots, it's going to predict, and it predicts that these carrots are a dog with 66% probability. The prediction would be more accurate if we used more examples. Think about what we did: ImageNet has 1,000 classes, but our KNN classifier has just two classes, cat or dog. So when I give it carrots, it has to predict either cat or dog.

What we're really talking about here is out-of-band data. Networks aren't magic; they only predict based on what you passed in. Here we had exactly four cats and dogs, and the cats and dogs we predicted on look a lot like the images we trained with. If we used a cat that looked very different, say just the side of a cat, it might predict something strange. In this case we have carrots, but we could have a cat here that doesn't look like our input cats, and it might predict dog. So yes, the more input examples you can give it, the better it will be at predicting the output, and you especially want your input examples to be as close as possible to the test examples you're going to give it. If you have ten dogs and cats, it'll do better; if you have a hundred, it'll do better still.

And think about this carrot, because the classifier is going to predict that the carrots are a dog or a cat no matter what. If we know we might see carrots, it could be a good idea to add a bunch of carrots as a third class, or just a bunch of out-of-band examples in a class we could call "unknown": examples which are not dogs or cats but might show up in your input. I show this basically as a warning, because it's easy to get wrapped up in "we used MobileNet, that's great, we got these dogs and cats predicting, that's great, so we're done." Even if you're not doing the deep learning model creation itself, you're not actually creating MobileNet, you still have to think about what your data is going to look like and what you're going to do inference on. So hopefully that helps a little bit.
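To make that concrete, here's a minimal sketch of adding a third "unknown" class, assuming the same TensorFlow.js MobileNet and KNN classifier packages as before. The catImgs, dogImgs, and otherImgs lists are hypothetical placeholders for whatever example images you collect:

```typescript
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

// Hypothetical image elements gathered from the page.
declare const catImgs: HTMLImageElement[];
declare const dogImgs: HTMLImageElement[];
declare const otherImgs: HTMLImageElement[]; // carrots, desks, anything that isn't a cat or dog

async function buildClassifier() {
  const net = await mobilenet.load();
  const classifier = knnClassifier.create();

  const addAll = (imgs: HTMLImageElement[], label: string) => {
    for (const img of imgs) {
      // Use the MobileNet activation (embedding) as the KNN example.
      classifier.addExample(net.infer(img, true), label);
    }
  };

  addAll(catImgs, 'cat');
  addAll(dogImgs, 'dog');
  addAll(otherImgs, 'unknown'); // catch-all class for out-of-band inputs

  return { net, classifier };
}
```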
Are there any rules regarding photos which speed up the training part, like different shapes, different backgrounds, etc.? Which speed up training? No, not necessarily. When you're doing training, you have to process the entire image no matter what's in it, and when you send it to the graphics card it takes exactly the same time for an image that's all one color as for one with lots of different colors. If you're transmitting an image over a network, fewer colors compress better, but for training it's exactly the same. So no, there are no speed-ups based on different shapes. Well, except that larger images will take longer: either you have to resize them, which takes some time, or, if your network is one of the newer kinds that accepts them, it has to do more iterations to get them smaller. So larger images will generally take longer to train on and to do inference on.
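As a rough illustration of where that extra time goes, here's a small sketch of the resize step, assuming TensorFlow.js in the browser and MobileNet's usual 224x224 input size; the helper name is just a placeholder:

```typescript
import * as tf from '@tensorflow/tfjs';

// Shrink an arbitrary-size image down to MobileNet's expected 224x224 input.
// The bigger the source image, the more pixels fromPixels and resizeBilinear
// have to chew through before the network even sees it.
function toMobilenetInput(img: HTMLImageElement): tf.Tensor3D {
  return tf.tidy(() => {
    const pixels = tf.browser.fromPixels(img);           // e.g. 4000x3000x3 for a phone photo
    return tf.image.resizeBilinear(pixels, [224, 224]);  // 224x224x3
  });
}
```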
If you used the earlier layers to predict which application is running in a screenshot, would you want to attach another neural network at the end instead of a KNN classifier? Okay, so that's a good point. We added a KNN classifier to the end of this, but that's just because I knew we were going to have dog, cat, and carrot. You don't have to use a KNN classifier, though. You could use anything you want: a conventional algorithm, or a whole other neural network, which is what Dennis is suggesting here. It depends on the complexity of what you're getting out, the complexity of your final result. Here we just wanted three classes, so KNN is perfect. If you wanted something more specific, say we had these inputs and we wanted to predict the position of the eyes, that's a regression problem: actually getting the x, y coordinates of the eyes, for example.
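Here's a minimal sketch of what swapping the KNN classifier for a small trainable head might look like in TensorFlow.js, assuming you keep MobileNet's embedding as the input (1024 values for MobileNet v1) and label the eye positions yourself; the layer sizes and epoch count are just placeholders:

```typescript
import * as tf from '@tensorflow/tfjs';

// A small regression head that sits on top of MobileNet activations and
// predicts an (x, y) position instead of a class.
const head = tf.sequential({
  layers: [
    tf.layers.dense({ inputShape: [1024], units: 64, activation: 'relu' }),
    tf.layers.dense({ units: 2 }), // x and y coordinates
  ],
});

head.compile({ optimizer: 'adam', loss: 'meanSquaredError' });

// embeddings: [numExamples, 1024] activations from net.infer(img, true)
// targets:    [numExamples, 2] hand-labeled eye positions
// await head.fit(embeddings, targets, { epochs: 20 });
```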