1. Introduction to Brain-Controlled Interfaces
2. Brain Sensors and Data Analysis
Learn about the different brain sensors and their placement on the head. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore the raw data and available features in the Neurosity Notion UI, including focus and calm detection. Discover the process of training custom mental commands using the Notion headset.
On this diagram, the top represents the front of your head and the bottom the back. Each electrode has a reference number and letter. These are important because they give you an idea of the type of brain waves you can track, depending on the area of the brain the electrodes are closest to.
So the Notion has eight electrodes, four on the left side of the head and four on the right, mostly focused on the top and the front. This is important to know because, depending on the placement of the electrodes, you will get data from different parts of the brain, so what you can interpret from that data will vary. Here I made a small animation to explain what I'm talking about. Different parts of the brain have different purposes: at the front, you have the frontal lobe; the cerebellum is at the lower back; the parietal lobe is at the top; and so on. You don't have to know this by heart, and it might not mean much to you right now, but these areas are in charge of different physiological functions.
So, for example, the frontal lobe is in charge of voluntary movement, concentration, and problem solving. The parietal lobe at the top is more focused on sensations and body awareness. And the temporal lobe, on the side, receives sensory information from the ears and processes it into meaningful units such as speech and words. So depending on what you'd like to track or build, you will want to check different brain sensors' electrode positions to see whether they focus on the area of the brain that you're interested in. For example, one of the brain sensors from a previous slide, called NextMind, mostly focuses on the occipital lobe at the middle back of the head, because they claim to focus on the user's vision to try to predict what somebody is looking at.
3. Training, Detection, and Raw Data
Learn how brain data is recorded and used to predict mental activity. Personalized training sessions can be challenging but exciting. Explore examples of calm and focus detection, as well as the Kinesis API for mental commands. Trigger UI events based on probabilities and work with raw data.
While you're doing this, your brain data is recorded and their machine learning algorithm tries to detect patterns between the two types of mental activity, so it can then accurately predict whether you're focusing on the right foot thought or not. This training session lasts a few minutes, and you can test whether it worked in the UI; otherwise, you should retrain until the predictions become more accurate. It can be a bit difficult because it requires focus, and it feels weird at first to have to think about movements without actually doing them, but it's an interesting exercise, and when it works it's pretty exciting.
Being able to actually think about moving your right foot and trigger something in the UI is pretty cool. But this also means that the training sessions are personalized. If I train the right foot command, it will learn based on my brain data, so if I ask a friend to wear the brain sensor and try to execute a right foot command, it will likely not work because their brain data might be different. Once you've trained the platform to recognize patterns in your brain waves, you're able to use it in your code.
So let's look through a few examples. The first one we're going to look into is the calm and focus detection, because it's the fastest one to get started with. You start by importing the Notion NPM package, then you instantiate a Notion with new Notion, and then you're able to subscribe to a few different events. Here we subscribe to the calm event, and what it returns is an object with multiple properties, including the probability of a calm state. That probability is a number between zero and one, where zero means not calm and one means really calm. You can then trigger UI events when the probability of being calm is over or under a certain threshold. For example, if the probability is under 0.25, meaning I'm not really calm, we could play some classical music to try to calm me down and see if the probability increases. What is nice with the Notion headset and its API is that the syntax is pretty similar across different events.
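The threshold logic described above can be sketched as follows. This is a minimal, hedged example: the package name and event shape follow the talk (a calm event delivering an object with a probability field), but the subscription itself is stubbed with a plain array so the logic runs standalone; the action names are hypothetical.

```javascript
// Decide what to do with one calm sample.
// probability is between 0 (not calm) and 1 (really calm).
function handleCalm({ probability }) {
  if (probability < 0.25) {
    // Not really calm: try playing classical music to bring the probability up.
    return "play-classical-music";
  }
  return "no-action";
}

// Stub standing in for notion.calm().subscribe(handleCalm)
const samples = [{ probability: 0.1 }, { probability: 0.6 }];
const actions = samples.map((sample) => handleCalm(sample));
console.log(actions); // → ["play-classical-music", "no-action"]
```

In the real code, handleCalm would be passed as the callback to the calm subscription instead of being mapped over a fixed array.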
So here's the syntax to get the user's focused state. If you've implemented the calm state in your code, you don't have to change much to then subscribe to the focused state. In this code sample, if my focus is over 0.5, my notifications are turned off so I don't get distracted and lose focus.

Now, apart from calm and focus, there's also the Kinesis API that's available for the mental commands I talked about earlier. You subscribe to the Kinesis event you want to use, and it also returns an object with the probability of that command being executed in your mind. A machine learning algorithm looks at the live brain data, tries to detect whether the pattern is similar to the right arm command that you trained, and gives you the probability, meaning the confidence of the prediction. What you'd probably want to do is trigger something like a UI event if the probability is over 0.99, for example, if you need it to be very accurate. But if you want to be a bit more flexible, you could trigger an event if the probability is over 0.8. It's really up to you.

And finally, if you do want to work with raw data, you can subscribe to the brainwaves event. It returns an object containing the data from the eight channels of the brain sensor, as well as the frequency, the PSD (power spectral density), which is a representation of the raw data once processed, and a timestamp. The timestamp is important because the data that you're getting is lossless: you will get all the data from the brain sensor, but it means you might have a little delay in receiving it.
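The focus and Kinesis thresholds above can be sketched the same way as the calm example. This is a hedged sketch, not the actual demo code: each handler mirrors the talk's description (an object with a probability field), and the returned action names are hypothetical stand-ins for turning off notifications or triggering a UI event.

```javascript
// Focus: over 0.5, mute notifications so the user isn't distracted.
function onFocus({ probability }) {
  return probability > 0.5 ? "mute-notifications" : "keep-notifications";
}

// Kinesis: the threshold is up to you — 0.99 if you need it to be
// very accurate, 0.8 if you want to be a bit more flexible.
function onKinesis({ probability }, threshold = 0.99) {
  return probability > threshold ? "trigger-ui-event" : "ignore";
}

console.log(onFocus({ probability: 0.7 }));         // → "mute-notifications"
console.log(onKinesis({ probability: 0.995 }));     // → "trigger-ui-event"
console.log(onKinesis({ probability: 0.85 }, 0.8)); // → "trigger-ui-event"
```

In the real code, these handlers would be the callbacks passed to the focus and Kinesis subscriptions.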
4. Using Timestamps and Building Demos
Learn how to use timestamps to compare brain sensor data with UI time. See examples of building a Street Fighter game and controlling the Chrome Dino game with mental commands. Explore the code for these demos and the ease of integrating the brain sensor with the UI. Discover the latest demo using raw brainwaves.
The timestamp allows you to compare the current time in your UI to the time at which the data from the brain sensor was actually tracked. Depending on the type of application that you're building, this might actually be really important.
So the first demo I ever built with the Notion was a prototype of a Street Fighter game controlled with mental commands. For this, I actually repurposed a previous demo that I had built using gesture recognition. In that demo, I was using another piece of hardware that had an accelerometer and a gyroscope, so a device for tracking speed and rotation. I recorded live data while executing real punches and Hadokens in the air, used machine learning to find patterns in this gesture data, and was then able to predict punches and Hadokens again from live data and apply them to the game. So I could play Street Fighter with real movements instead of using the keyboard.
As this previous demo worked, I thought I would try to repurpose most of the code but use mental commands as triggers instead. And it worked: I trained two different mental commands and then triggered the game when the probability of each command was over a certain threshold. My second demo used a single mental command. You've probably played the Chrome Dino game before; it needs a single input, usually pressing the space bar, so I thought it was the perfect use case for the Notion. In this case, I didn't need to build a custom UI and I didn't modify anything in the original UI of the Chrome Dino game. I only trained the right foot mental command and then used a Node.js package called robotjs, which allows you to trigger events like clicks or keyboard events, to press the space bar for me. To me, this is a perfect example of a simple interface that doesn't need to be adapted to work with the brain sensor; I only had to add the connection between the brain sensor and the UI.
To show you how quick it can be to put this together, let's look at the code. This is basically the entire code for the demo on the previous slide, done in Node.js. You start by requiring the notion and robotjs packages and you instantiate a Notion. Then you log in with your username and password, which you can save in a .env file or write directly if you're prototyping locally. What I did not mention before is that you have to log in to be able to use your device: it's for privacy, so that only you have access to your own data. Then you subscribe to the Kinesis API with the mental command that you trained, you get the probability, and if it's over 0.99, or any threshold that you want to set, I use robotjs to trigger a key tap on the space bar. And that's it. Then I ran my Node.js script, visited the Chrome Dino game page, and voilà. You can see that there is not much difference between using the API in the front end and the back end, which is pretty cool. My latest demo is my most advanced, using raw brainwaves for the first time.
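The shape of that script can be sketched as below. This is a hedged reconstruction, not the demo's actual source: the robotjs dependency is stubbed so the control flow runs standalone, and the commented-out lines show the assumed real wiring (package names and the "rightFoot" command label come from the talk; the login fields are placeholders).

```javascript
// Stub standing in for require("robotjs"); the real package sends an
// OS-level key press via robot.keyTap("space").
const pressedKeys = [];
const robot = { keyTap: (key) => pressedKeys.push(key) };

// Callback for one Kinesis prediction sample.
function onKinesisSample({ probability }) {
  // Jump only when the prediction is very confident.
  if (probability > 0.99) {
    robot.keyTap("space");
  }
}

// In the real script, the wiring would look roughly like:
//   const notion = new Notion();
//   await notion.login({ email: process.env.EMAIL, password: process.env.PASSWORD });
//   notion.kinesis("rightFoot").subscribe(onKinesisSample);
[{ probability: 0.4 }, { probability: 0.995 }].forEach((s) => onKinesisSample(s));
console.log(pressedKeys); // → ["space"]
```

Only the second simulated sample clears the 0.99 threshold, so the space bar is tapped exactly once.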
5. Controlling Interfaces with Eye Blinks
Recorded brainwaves to predict intentional eye blinks for controlling interfaces or devices. Overcame limitations of computer vision by using brain data. Next goal is to train the model to recognize eye movements between left and right for more interaction possibilities.
In this demo, I recorded my brainwaves between a neutral state and intentional eye blinks. So I built my own machine learning model and then I was able to predict these eye blinks from live data. In this demo, it doesn't look like much because it's a very rough first prototype, but it means that I can control interfaces or devices using intentional eye blinks when I'm wearing the sensor.
6. Applications of Brain-Controlled Interfaces
Explore applications of brain-controlled interfaces, such as detecting focus, scrolling web pages, using brainwaves in music production, and password-less authentication. Discover how brainwaves can optimize workflow, enable hands-free scrolling, trigger effects in music, and provide secure authentication. Imagine a future where brainwaves replace fingerprints and facial recognition for authentication, offering a unique and secure biometric identifier. Early research is being conducted to integrate brainwaves with existing platform authenticators.
So now that we've talked about what we can track and how to implement it in code, and we've gone through a few demos, let's talk about more applications for technologies like this. One of them is around detecting focus while you're working, to optimize your workflow. There's already an existing VS Code extension that you can install that tracks your brainwaves as you're working. It could be interesting to know the times of the day when you're the most focused, or maybe the parts of your code base that you find the most difficult. You could also analyze drops in your focus or calm state and see what happened in your environment to provoke them.
Another application is scrolling web pages. You can use it for accessibility: if a user is unable to move, they could read through a page using a mental command. But it could also be useful for anyone. If you're used to doing multiple things at once and your hands are busy doing something else, like cooking or eating, you could scroll down a page or stop a YouTube video with your brainwaves while your hands are occupied.
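A hands-free scroll like this boils down to mapping a trained command's probability to a scroll step. This is a hypothetical sketch: the browser's window object is stubbed so it runs outside a page, and the 0.9 threshold and 200px step are arbitrary choices; in a real page you would call window.scrollBy directly.

```javascript
// Stub for the browser window; tracks the vertical scroll position.
const window = {
  scrollTop: 0,
  scrollBy: (dx, dy) => { window.scrollTop += dy; },
};

// Scroll one step down when the trained command is confidently detected.
function onScrollCommand({ probability }) {
  if (probability > 0.9) {
    window.scrollBy(0, 200);
  }
}

onScrollCommand({ probability: 0.95 });
console.log(window.scrollTop); // → 200
```

The same pattern (threshold check, then a DOM side effect) would cover pausing a video or any other single-action page control.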
Another application is the Neurosity macOS app. You can download the code from GitHub, launch this app made with Electron, connect your device, and trigger the Do Not Disturb mode when the device detects that you're in a state of high focus. Now, moving on from focus states, you can also build things that are a bit more creative. If you're a musician, you can use your brainwaves to help you produce your music. Here, one of the founders of Neurosity is triggering an effects pedal with his brainwaves while he's also playing the guitar. The quality of the GIF maybe doesn't show it well enough, but the pedal is pressed a few times to show that the sound effect is being triggered. If you play an instrument, I'm sure you can think of ways to apply certain effects directly with your brainwaves while your hands are playing.
Finally, something I've been thinking about for the past few days is password-less authentication: using the brain sensor as a biometric authentication device. We're already familiar with using Touch ID or Face ID to authenticate into devices or services. We could imagine the same type of system, but using brainwaves instead. Each person's brainwaves have a unique signature, so we could use this to identify someone trying to log in to a platform. And think about data leaks: with Touch ID or Face ID, if at any time someone forces you to log into a platform, or uses Face ID while you're sleeping, you can't really create a different fingerprint or modify your face. But with brainwaves, you can retrain the device with a different mental command that will still be unique to you but different from the one that was stolen. So that can be really interesting. Here is a really short demo that I did on Netlify, because as I work there I was able to modify the code myself. But what I would really be interested in is working with real platform authenticators, like the Touch ID on my Mac. That's very early research at the moment.
7. Limits and Opportunities of Brain Control
But most importantly, an application can be whatever you're thinking of. I hope that listening to this talk, you might be coming up with ideas of things that you'd like to build using a brain sensor. It's a field that's growing with more and more devices, advancements in hardware and software. And I'd love to see more people come up with ideas for this technology.
So I've talked a lot about what's possible, but before I finish, let's briefly talk about the limits of such technology. First of all, it's important to know that the quality of the data is different with external brain sensors than it is with implants. Implants have direct access to the brain, as the device is placed right on the brain, whereas an external brain sensor gets data that has to go through layers of skin, skull, and hair. So if you find a demo built by another company that uses implants, you have to be aware that the quality of the data that you will get with an external brain sensor is different, and that will impact what you can do.
Another limit is the fact that, while the data is lossless, which is great because you get all of it, working with delays will have an impact on the kind of applications that you can build. If you're building things with focus or calm, it doesn't really matter if your application changes state a second late, but if you're building applications that are really time-sensitive, working with timestamps might be a difficulty; it's something to think about. You also have to remember that in a lot of ways, we still don't understand everything about the brain, and we can't really detect things like "trigger something when I'm thinking about the beach." It doesn't work like that, so the type of thoughts that we can detect at the moment is more limited. That's something to keep in mind when you're getting into a space like this.
And finally, mental commands need training. If you build an interface, remember that calm and focus are the same for all users, so they don't need any training; the user will just have to wear the sensor. But if you want something a bit more personalized or more complex, the fact that the user will have to do some training before being able to use your application might be a limitation.
8. Q&A and Moving During Training
Thank you for listening. Machine learning emerged as the winner in the battle between machine learning, augmented reality, and virtual reality. Charlie discusses the possibility of moving while training the model and shares her preference for training without moving. She mentions Neuralink's project involving a monkey playing pong while recording data and expresses her interest in trying it in the future.
Thank you so much for listening, and if you have any questions, don't hesitate to ask in the Q&A or on Twitter, where I'm @devdevcharlie.
Now let's invite Charlie to our studio. She's here. Hello! Hi. Thanks for your session. We have lots of questions for you; let me pick some of them from my second screen. Let's start with a very technical one. You mentioned you have to think about the action without actually moving. Is it because moving creates signal noise? Would moving at the same time interfere with the model's training? And did you try it?

Oh, actually, you can move if you want to. It's just that, in general, when you want to reproduce that thought, you probably will not want to move, so it's easier to train the model without actually moving your body. But you can do both. I prefer to train the model without moving because I know that when I want to reproduce the thought, I want to do it without moving. But if you look at the project that Neuralink has been doing lately, with a monkey that was playing Pong, the monkey was moving at the same time as the data was recorded. It is something that I want to try next. I wouldn't know if moving creates noise in particular; I wouldn't think so, because what the brain sensor is getting is data about you wanting to move your body. But I would have to try; it's definitely something that I want to do next as well.
9. Machine Learning Poll and Medical Applications
Awesome. By the way, are you surprised by the poll results, that Machine Learning won? I'm not surprised, but I'm excited, because machine learning is something that I've been looking into a lot for the past few years. So I'm excited that other people want to do that too. I think there's a lot to learn and really exciting projects to build, so I'm actually excited by the poll response.
Yes, I think I remember reading some papers about people trying to detect states of potential depression using brain sensors, so that instead of only listening to what people are saying, we would get direct feedback on their state. If we know that depression involves a chemical imbalance in the brain, there might be things that we can actually detect from raw brain data about the state of a user, and then be able to diagnose certain things earlier, or have real feedback about somebody's emotional state. Sometimes the way we express ourselves is limited to the words we know or the experiences we've had in the past, whereas now we could do it directly with data from the brain. I think there is research being done on this at the moment, and it's definitely an interesting direction as well.
Awesome, awesome. Charlie, thanks for your session, thanks for your answers, and it was a great pleasure to have you at the JSNation conference. Thank you so much.