Neurotechnology is the use of technological tools to understand more about the brain and enable a direct connection with the nervous system. Research in this space is not new; however, its accessibility to JavaScript developers is.
Over the past few years, brain sensors have become available to the public, with tooling that makes it possible for web developers to experiment with building brain-controlled interfaces.
As this technology is evolving and unlocking new opportunities, let's look into one of the latest devices available, how it works, the possibilities it opens up, and how to get started building your first mind-controlled app using JavaScript.
Transcript
About Charlie
Hi everyone. Thanks for joining me today to learn more about how to build brain-controlled interfaces using JavaScript.
Before we dive into this topic, here's a little bit more about me. My name's Charlie Gerard, I'm a senior frontend developer at Netlify. I'm also part of the Google Developer Experts group in web technologies. It's a community group sponsored by Google for developers who like to give back to the community in different ways. I'm also the author of a book about TensorFlow.js for JavaScript developers. And most of all, I spend a lot of my personal time building and researching prototypes around human-computer interaction, also called HCI. That's the study of the design and use of computer technology, focused on the interfaces between people and computers. So it can involve a lot of things like AR, VR, interactive art, machine learning, et cetera.
[01:12] And I've been interested in this since I started learning to code. And throughout the years, my research has led me to the topic of today. So it has nothing to do with my day job at Netlify, but hopefully this talk will show you that you can use your JavaScript skills for a lot of different things.
Using the brain to interact with interfaces directly through JavaScript
[01:27] So the focus of today is our brain and how to use it to interact with interfaces directly using JavaScript. So how are we going to get data directly from our brain activity and write some JavaScript code to use it to interact with interfaces or devices? How do we even get this data from our brain? We do this with the help of brain sensors. These are devices that contain electrodes that you place on the scalp; in contact with the skin, they're able to transform the electrical signals coming from the brain into digital data that we can work with.
So on this slide, I put a few of the commercial brain sensors that you can buy currently, and you can see that they come in different shapes and have different numbers of electrodes, and that will impact what you're able to track and what kind of applications you're able to build with them. There are probably more brain sensors available out there, but these are the ones that I've mostly heard of or played with. The one this talk is going to focus on is the one on the bottom right, which is called the Neurosity Notion. They've recently released a new model called the Crown, so if you're ever interested in buying it, it might be called the Crown now, but I experimented with one of their very first versions, called the Notion. To understand how the number of electrodes impacts the use cases, let's briefly talk about how that works.
[02:45] So in the context of the Notion device, I highlighted in green the placement of the electrodes based on their reference numbers on the 10-20 EEG system. This is a system that's a reference in neurotechnology, and it's a kind of map representing the placement of electrodes on the user's head. At the top is the front of your head, and at the bottom is the back. Each electrode has a reference number and letter; these are important because they give you an idea of the type of brainwaves that you can track depending on the area of the brain that the electrodes are closest to. The Notion has eight electrodes, four on the left side of the brain and four on the right side, mostly focused on the top and the front of the head. This is important to know, because depending on the placement of the electrodes, you will get data from different parts of the brain, so what you can interpret from the data you're getting will vary.
[03:41] So here I made a small animation to explain what I'm talking about. Different parts of the brain have different purposes: at the front you have the frontal lobe, the cerebellum is at the lower back, the parietal lobe is at the top, et cetera. You don't have to know this by heart, and it might not mean too much to you right now, but they're in charge of different physiological functions.
So for example, the frontal lobe is in charge of voluntary movement, concentration, and problem solving. The parietal lobe at the top is more focused on sensations and body awareness. And the temporal lobe is the one on the side that receives sensory information from the ears and processes that information into meaningful units such as speech and words.
[04:22] So depending on what you'd like to track or build, you will want to check different brain sensors' electrode positions to see if they're more likely to be focusing on the area of the brain that you're interested in. For example, one of the brain sensors on one of the previous slides is called NextMind, and it mostly focuses on the occipital lobe at the middle back, because they claim to be focusing on the user's vision to try to predict what somebody is looking at.
So anyway, now that we've talked about brain sensors, what does it look like for us as JavaScript developers? With the Neurosity Notion, you have access to a UI in which you can see different graphs. Here is the part of the UI where you can see your raw brainwaves. You can see the different lines, there are eight of them, and each label corresponds to the name of an electrode position based on the 10-20 EEG system that I talked about a few slides ago. So this represents a graph of the raw data coming live from the brain sensor.
[05:18] But in general, when you get started in this space of neurotechnology, you don't start straight away experimenting with raw data. Most of the brain sensors out there have implemented things like focus detection or calm detection that you can use without having to build your own machine learning model. Focus and calm detection don't need any training because they rely on a pattern of brainwaves that is pretty common across almost everybody; however, custom mental commands have to be trained.
So what do I mean by that? I don't want to read the entire list, but for the Notion headset, the commands you can train are focused on imagining specific movements. So you can see biting a lemon, pinching your left finger, thinking about pushing something in space, and so on. For example, here's what training the right-foot mental command looks like. You can also do it with the API, but in general, to do it faster, you do it through their UI.
[06:12] So you have two animations playing every few seconds to guide you into what you're supposed to do. You have to alternate between states of focusing on that command, so thinking about tapping your right foot on the floor, and resting, where you're supposed to try to think about nothing at all. While you're doing this, your brain data is recorded and the machine learning algorithm tries to detect patterns between the two types of mental activity, so it can then accurately predict whether you're focusing on the right-foot thought or not.
So this training session lasts a few minutes, and you can test if it works in the UI; otherwise you can retrain it to get the predictions to be more accurate. It can be a bit difficult because it requires focus, and it feels weird at first to have to think about movements without actually doing them. But it's an interesting exercise, and when it works, when you're able to actually think about moving your right foot and trigger something in the UI, it is pretty cool.
[07:12] But this also means that the training sessions are personalized. If I train the right-foot command, it will learn based on my brain data. So if I ask a friend to wear the brain sensor and try to execute a right-foot command, it will likely not work because their brain data might be different. So then, once you've trained the platform to recognize patterns in your brainwaves, you're able to use it in your code. So let's look through a few examples.
[07:37] So the first one we're looking into is the calm and focus detection, because it's the fastest one to get started with. You start by importing the Notion npm package. Then you instantiate the Notion with new Notion, and then you're able to subscribe to a few different events. Here we subscribe to the calm event, and what it returns is an object with multiple properties, including the probability of a calm state. Then what you're able to do, for example, is trigger UI events when the probability of being calm is over or under a certain threshold. So here, if the probability is under 0.25, so if I'm not really calm, we could trigger some classical music to try to calm me down and see if the probability increases.
That probability number is between 0 and 1, where 0 means not calm and 1 means really calm. And what is nice with the Notion headset and their API is that the syntax is pretty similar across different events. So here's the syntax to get the user's focus state.
[08:42] So once you've implemented the calm state in your code, you don't have to change much to then move on to subscribing to the focus state. So here in this code sample, if my focus is over 0.5, turn off my notifications, so I don't get distracted and lose focus.
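As a rough sketch, here is roughly what those two snippets boil down to. The package and method names come from the Neurosity SDK as I remember it, and the helper functions are placeholders, so treat the details as assumptions rather than exact code from the slides:

```js
// Sketch only: package, option, and payload names are assumptions based on
// the Neurosity SDK around the time of the talk; the helpers are placeholders.
import { Notion } from "@neurosity/notion";

const notion = new Notion({ deviceId: "your-device-id" }); // hypothetical device ID

await notion.login({ email: "...", password: "..." }); // required before reading data

// Calm: if the probability drops under 0.25, try to calm the user down
notion.calm().subscribe((calm) => {
  if (calm.probability < 0.25) {
    playClassicalMusic(); // placeholder helper
  }
});

// Focus: if the probability goes over 0.5, mute notifications
notion.focus().subscribe((focus) => {
  if (focus.probability > 0.5) {
    turnOffNotifications(); // placeholder helper
  }
});
```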
So now, apart from calm and focus, there's also the Kinesis API that's available for the mental commands I talked about earlier. You subscribe to the Kinesis event you want to use, and it also returns an object with the probability of that command being executed in your mind.
[09:14] So a machine learning algorithm is looking at the live brain data and tries to detect if the pattern is similar to the right-arm command that you trained, and it gives you the probability, or the accuracy, of the prediction. What you'd probably want to do is trigger something like a UI event if the probability is over 0.99, for example, if you need it to be very accurate; but if you want to be a bit more flexible, you could trigger an event if the probability is over 0.8, for example. It's really up to you.
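A minimal sketch of that, reusing the notion instance from the earlier example and assuming the emitted object exposes a probability field as described here (the exact payload shape may differ between SDK versions):

```js
// Sketch: assumes a "rightArm" command was trained beforehand and that the
// emitted object exposes a probability, as described in the talk.
notion.kinesis("rightArm").subscribe((intent) => {
  // Pick the threshold based on how strict the prediction needs to be
  if (intent.probability > 0.99) {
    document.body.classList.add("arm-command-detected"); // any UI event you like
  }
});
```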
And then finally, if you do want to work with raw data, you can subscribe to the brainwaves event. It returns an object containing the data from the eight channels of the brain sensor, as well as the frequency and the PSD, for power spectral density, which is a representation of the raw data once processed, and also a timestamp.
[10:02] So the timestamp is important because the data that you're getting is lossless. You will get all the data from the brain sensor, but it means that you might have a little delay in receiving it. What the timestamp allows you to do is compare the current time in your UI to the time at which the data from the brain sensor was actually tracked. So depending on the type of application that you're building, this might actually be really important.
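Here's a sketch of that brainwaves subscription, again reusing the same notion instance; the metric name ("raw") and the payload fields are assumptions based on the SDK documentation, so double-check them before relying on this:

```js
// Sketch: metric name and field names are assumptions, not taken from the slides.
notion.brainwaves("raw").subscribe((brainwaves) => {
  const { data, info } = brainwaves;           // data: one array of samples per channel
  const delayMs = Date.now() - info.startTime; // how far behind the UI is running
  console.log(`${data.length} channels, ~${delayMs}ms behind real time`);
});
```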
DEMOs
[10:30] So now that we've talked about the different kinds of events or states that you can track using the Notion and how to use that in JavaScript, I'm going to show a few examples of things that I have built using this sensor.
So my first ever demo that I built with the Notion was a prototype of Street Fighter game controls using mental commands.
So for this, I actually repurposed a previous demo that I had built using gesture recognition. In my previous demo, I was using another piece of hardware that had an accelerometer and gyroscope, so a device that's all about tracking speed and rotation. I recorded live data while executing real punches and hurricane kicks in the air and used machine learning to find patterns in this gesture data, to then be able to predict hurricane kicks and punches again from live data and apply them to the game. So I could play Street Fighter in real life instead of using the keyboard. As this previous demo worked, I thought I'd try to repurpose most of the code but use mental commands as triggers instead, and it worked. I trained two different mental commands and then triggered the game when the probability of the commands was over a certain threshold.
[11:46] Okay, my second demo was using a single mental command. You've probably played the Chrome dino game before; it needs a single input, usually pressing the space bar. So I thought it was the perfect use case for the Notion. In this case, I didn't need to build a custom UI; I didn't modify anything in the original UI of the Chrome dino game. I only trained the right-foot mental command and then used a Node.js package called RobotJS, which allows you to trigger events like clicks or keyboard events, to trigger the space bar for me. To me, this is the perfect example of a simple interface that doesn't need to be adapted to work with the brain sensor. I only had to add the connection between the brain sensor and the UI.
[12:30] To show you how quick it can be to put this together, let's look at the code.
So this is basically the entire code for the demo on the previous slide. It's done in Node.js. You start by requiring the Notion and RobotJS packages, and you instantiate the Notion. Then you log in with your username and password, which you can either set in a .env file or write directly if you're prototyping locally. What I did not mention before is that you have to log in to be able to use your device, for privacy, so that you're the one able to use your own data and not everybody has access to it. Then you subscribe to the Kinesis API with the mental command that you trained, you get back a probability, and if it's over 0.99, or any number that you want to set, I use RobotJS to trigger a key tap on the space bar, and that's it. Then I ran my Node.js script, I visited the Chrome dino game page, and voila. So you can see that there's not much difference between using the API in the frontend and the backend, which is pretty cool.
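For reference, here is roughly what that script looks like, reconstructed from the description above; the package names (@neurosity/notion, robotjs) are real, but the option names, environment variables, and payload fields are assumptions:

```js
// Rough reconstruction of the Chrome dino demo described above.
// Option names, env variables, and payload fields are assumptions.
const { Notion } = require("@neurosity/notion");
const robot = require("robotjs");

const notion = new Notion({ deviceId: process.env.DEVICE_ID });

const main = async () => {
  // Credentials can live in a .env file, or be hardcoded while prototyping locally
  await notion.login({
    email: process.env.NEUROSITY_EMAIL,
    password: process.env.NEUROSITY_PASSWORD,
  });

  // "rightFoot" is the mental command trained earlier in the Neurosity UI
  notion.kinesis("rightFoot").subscribe((intent) => {
    if (intent.probability > 0.99) {
      robot.keyTap("space"); // make the dino jump
    }
  });
};

main();
```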
[13:43] And my latest demo is my most advanced, using raw brainwaves for the first time. In this demo, I recorded my brainwaves between a neutral state and intentional eye blinks, I built my own machine learning model, and then I was able to predict these eye blinks from live data. It doesn't look like much because it's a very rough first prototype, but it means that I can control interfaces or devices using intentional eye blinks when I'm wearing the sensor.
So there are many ways to detect eye blinks using computer vision in JavaScript. But the issue with computer vision is that you have to be placed in front of your computer, the lighting has to be good enough for the detection to work, and sometimes some tools don't work well with people with different skin tones, et cetera. So I was excited to get it to work with brain data, because now it means I can be anywhere in the room and control my interfaces or devices. It doesn't matter if it's daytime or not, the lighting doesn't affect anything, and it would work with people with different skin tones.
[14:48] So my next goal would be to gather data and train the model to recognize eye movements between left and right. It could give more flexibility to users, and it means more ways to interact with an interface. You could imagine an intentional blink playing or pausing a video or a song, and then looking left would play the previous one and looking right would play the next one.
Applications
[15:10] So now that we've talked about what we can track and how to implement it in code, and we went through a few demos, let's talk about more applications for technologies like this.
So one of them is around detecting focus while you're working to optimize your workflow. There's already an existing VS Code extension that you can install that tracks your brainwaves as you're working. So it could be interesting to know the times of the day when you're the most focused, or maybe the parts of your code base that you find the most difficult. Or you could analyze your drops in focus and calm states and see what happened in your environment to provoke that.
[15:53] Another application is in scrolling web pages. It can be useful for accessibility: if a user is unable to move, they could read a page using a mental command. But it could also be useful for anyone. If you're used to doing multiple things at once and your hands are busy doing something else, if you're cooking or eating, you could scroll down pages or stop a YouTube video with your brainwaves while your hands are doing something else.
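As an illustration of that idea, a sketch running in the browser could look something like this, reusing the same notion instance as before; the command label and threshold are made up for the example:

```js
// Sketch: scroll the page whenever a trained command is detected.
// The "rightFoot" label and 0.9 threshold are only examples.
notion.kinesis("rightFoot").subscribe((intent) => {
  if (intent.probability > 0.9) {
    window.scrollBy({ top: window.innerHeight * 0.8, behavior: "smooth" });
  }
});
```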
[16:24] Another application is an app from Neurosity made with Electron. You can download the code from GitHub and launch it, connect your device, and trigger the do-not-disturb mode when the device detects that you're in a state of high focus.
Now moving on from focus states, you can also build things that are a bit more creative. If you're a musician, you can use your brainwaves to help produce your music. Here, one of the founders of Neurosity is triggering an effects pedal with his brainwaves while he's also playing the guitar.
[16:59] The quality of the gif maybe doesn't show it well enough, but the pedal is pushed a few times to show that the sound effect is being triggered. If you play an instrument, I'm sure you can think about ways where, while your hands are playing, you could use your brainwaves directly to apply certain effects.
[17:19] And finally, something I've been thinking about for the past few days is around passwordless authentication, so using brainwaves, or a brain sensor, as a biometric authentication device. We're already familiar with using Touch ID or Face ID to authenticate into devices or services, so we could imagine the same type of system but using brainwaves instead. Each person's brainwaves have a unique signature, so we can use this to identify someone trying to log into our platform.
So if we're worried about data leaks when we're using Touch ID or Face ID, for example if someone forces you to log into a platform or uses your Face ID while you're sleeping, we can't really create a different fingerprint or modify your face. But using brainwaves, you can retrain the device with a different mental command that will be unique to you but different from the one that was stolen. So that can be really interesting.
[18:11] Here, this is a really short demo that I did on Netlify, because as I work there I was able to modify the code myself. But what I would really be interested to see is if this works with real platform authenticators, so I'd really like to try to add something like this to the Touch ID on my Mac.
So that's very early research at the moment, but most importantly, an application can be whatever you're thinking of. I hope that listening to this talk, you might be coming up with ideas of things that you'd like to build using a brain sensor. It's a field that's growing, with more and more devices that I've been seeing, in hardware and software, and I'd love to see more people come up with ideas for this technology.
[18:51] Remember that you can use JavaScript in a lot of different contexts, and that's what's awesome about it. It's a real advantage to be a JavaScript developer in this space: we build websites that run on desktops, laptops, iPads, and phones; you can do frontend and backend things; you can build Electron apps; you can build AR and VR prototypes; we have frameworks to make music in JS; we can control robots. It's really super exciting, and it's not something that a developer using a different language can necessarily do, because they might not have access to an ecosystem that's as great as the one JavaScript has.
Limits of the technology
[19:26] So I've talked a lot about what's possible, but before I finish, let's briefly talk about the limits of such technology. First of all, it's important to know that the quality of the data is different with external brain sensors than it is with implants. Implants have direct access to the brain, as the device is placed right on the brain, whereas an external brain sensor's signal has to go through layers of skin and skull and hair. So if you find a demo built by a company that uses implants, you have to be aware that the quality of the data that you will get with an external brain sensor is different, and that will impact what you can do.
Another limit is the fact that the data is lossless. That's great, you get all of it, but working with delays will have an impact on the kind of applications that you can build. If you're building things with focus or calm states, I guess it doesn't really matter much if your application changes state a second later, but if you're building applications that are really time-sensitive, working with timestamps might be a difficulty. It's something to think about.
[20:31] When you're getting into something like this, you have to remember that in a lot of ways, we still don't understand everything about the brain. And we can't really detect things like "I want to trigger something when I'm thinking about the beach", right? It doesn't really work like that. So the type of thoughts that we can detect at the moment is more limited, and that's something you have to keep in mind when you're getting into a space like this.
And finally, as you know, mental commands need training. If you build an interface, remember that calm and focus are the same for all users, so you don't need any training; the user will just have to wear the sensor. But if you want something a bit more personalized, a bit more complex, the fact that the user will have to do some training before being able to use the application might be a limitation.
[21:15] The biggest opportunity that I see in this industry at the moment is that, as JavaScript developers, we can help shape the future of brain-controlled interfaces. It's still in the early stages, but it's moving fast, and we're in a position where we can give feedback to the different companies about what we'd like to be able to do with their brain sensors. We can help shape what the web would look like if it was brain-controlled, and we can contribute to the different open-source packages, et cetera. To me, that's really a big opportunity.
On this, I'm going to stop here, but hopefully this gives you a better idea of what is possible when we talk about brain control on the web, and I hope you're as excited as I am about the possibilities that this technology opens up. Thank you so much for listening. And if you have any questions, don't hesitate to ask in the Q&A or on Twitter, I'm @devdevcharlie.
Questions
[22:05] Maxim Salnikov: Wow, what an inspiring session, thank you so much, Charlie, for delivering this at JS Nation. Before jumping into the Q&A section, let's check what you folks answered to Charlie's question on where you would like to use JavaScript for creative coding, and the winner is machine learning. Actually, it was a battle for the whole duration of the session between machine learning and augmented and virtual reality, and with a very slight difference, machine learning won. Now let's invite Charlie to our studio, she's here. Hello.
[22:50] Charlie Gerard: Hi.
[22:51] Maxim Salnikov: Thanks for your session. And we have lots of questions for you, let me pick some of them, they're on my second screen. So let's start with a very technical one. You mentioned you have to think about the action without actually moving. Is it because moving creates signal noise? Would moving at the same time interfere with the model's training, and did you try it?
[23:16] Charlie Gerard: Oh, actually you can move if you want to, but it's just that, in general, when you want to reproduce that thought, you probably will not want to move, so it's easier to train the model without actually moving your body. But you can do both. I prefer to train the model without moving, because I know that when I want to reproduce the thought, I want to do it without moving. But if you look at the project that Neuralink has been doing lately, with a monkey that was playing Pong, the monkey was moving at the same time as the data was recorded. It is something that I want to try next. But I think that you can do both. I don't know if it particularly creates noise; I wouldn't think so, because what the brain sensor is getting is data about you wanting to move your body, so I don't think it would actually create noise, but I would have to try. It's definitely something that I want to do next as well.
[24:22] Maxim Salnikov: Awesome. Awesome. By the way, are you surprised by the poll results, that machine learning won?
[24:30] Charlie Gerard: I'm not surprised, but I'm excited, because machine learning is something that I've been looking into a lot for the past few years. So I'm excited that other people want to do that too. I think there's a lot to learn and really exciting projects to build, so I'm actually excited about the poll responses.
[24:50] Maxim Salnikov: Yep. Yep. I bet that after your session more and more folks will use JavaScript creatively for machine learning. So, a couple of minutes left, let's take this question, and it's a sort of conceptual one. I bet many people's first thoughts are for games, but what kind of interest have you gotten in terms of medical uses? What about psychiatrists and psychologists using it during their appointments to get readings from people as the session happens? This question came up because a lot of counseling has moved online, so it could be a very interesting use case. What are your thoughts on that?
[25:35] Charlie Gerard: Yes. So I think I remember reading some papers about people trying to detect states of potential depression using brain sensors, so that instead of only listening to what people are saying, we would get direct feedback on people's state. If we know that depression is about a chemical imbalance in the brain, there might be things that we can actually detect from raw brain data about the state of a user, and then be able to diagnose certain things earlier on, or have real feedback about somebody's emotional state. Because sometimes the way we express ourselves, we only do it with the words that we know or the experience we've had in the past, whereas now we could do it directly with data from the brain. And I think there is research being done on this at the moment, and I think it's definitely an interesting one as well.
[26:27] Maxim Salnikov: Awesome. Awesome. So Charlie, thanks for your sessions, thanks for your answers. And it was a great pleasure to have you on JS Nation conference.
[26:37] Charlie Gerard: Thank you so much. Bye.