Building Brain-controlled Interfaces in JavaScript


Neurotechnology is the use of technological tools to understand more about the brain and enable a direct connection with the nervous system. Research in this space is not new; however, its accessibility to JavaScript developers is.

Over the past few years, brain sensors have become available to the public, with tooling that makes it possible for web developers to experiment building brain-controlled interfaces.

As this technology is evolving and unlocking new opportunities, let's look into one of the latest devices available, how it works, the possibilities it opens up, and how to get started building your first mind-controlled app using JavaScript.

27 min
09 Jun, 2021

Video Summary and Transcription

Learn how to build brain-controlled interfaces using JavaScript and brain sensors. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore examples of calm and focus detection, as well as the Kinesis API for mental commands. Discover the applications of brain-controlled interfaces, such as scrolling web pages and password-less authentication. Understand the limits and opportunities of brain control and the potential for using brain sensors in medical applications.


1. Introduction to Brain-Controlled Interfaces

Short description:

Learn how to build brain-controlled interfaces using JavaScript. Charlie Gerard, senior frontend developer at Netlify, shares insights on using brain sensors to transform brain activity into digital data. Discover the Neurosity Notion, a commercial brain sensor, and how the number of electrodes impacts its use cases.

Hi everyone, thanks for joining me today to learn more about how to build brain-controlled interfaces using JavaScript. Before we dive into this topic, here's a little bit more about me. My name is Charlie Gerard. I'm a senior frontend developer at Netlify. I'm also part of the Google Developer Experts group in Web Technologies. It's a community group sponsored by Google for developers who would like to give back to the community in different ways. I'm also the author of a book about TensorFlow.js for JavaScript developers.

Most of all, I spend a lot of my personal time building and researching prototypes around human-computer interaction, also called HCI. That's the study of the design and use of computer technology, focused on the interfaces between people and computers. It can involve a lot of things like AR, VR, interactive arts, machine learning, et cetera. I've been interested in this since I started learning to code. Throughout the years, my research has led me to the topic of today. It has nothing to do with my day job at Netlify, but hopefully this talk will show you that you can use your JavaScript skills for a lot of different things.

The focus of today is our brain and how to use it to interact with interfaces directly using JavaScript: how we can get data directly from our brain activity and write some JavaScript code to use it to interact with interfaces or devices. How do we even get this data from our brain? We do this with the help of brain sensors. These are devices that contain electrodes that you place on the scalp. In contact with the skin, they are able to transform the electrical signals coming from the brain into digital data that we can work with. On this slide, I put a few of the commercial brain sensors that you can currently buy. You can see that they come in different shapes and have different numbers of electrodes. That will impact what you're able to track and what kind of applications you're able to build. There are probably more brain sensors available out there, but these are the ones that I have mostly heard of or played with. The one that this talk is going to focus on is the one on the bottom right, called the Neurosity Notion. They recently released a new model called the Crown, so if you're ever interested in buying one, it might be called the Crown now, but I experimented with one of their very first versions, which was called the Notion. To understand how the number of electrodes impacts the use cases, let's talk briefly about how that works. In the context of the Notion device, I highlighted in green the placement of the electrodes based on their reference number on the 10-20 EEG system. This is a reference system in neurotechnology, a kind of map representing the placement of electrodes on a user's head.

2. Brain Sensors and Data Analysis

Short description:

Learn about the different brain sensors and their placement on the head. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore the raw data and available features in the Neurosity Notion UI, including focus and calm detection. Discover the process of training custom mental commands using the Notion headset.

At the top is the front of your head, and the bottom is further down towards the back. Each electrode has a reference number and letter. These are important because they give you an idea of the type of brain waves you can track, depending on the area of the brain that the electrodes are closest to.

So the Notion has eight electrodes, four on the left side of the brain and four on the right side, mostly focused on the top and the front of the head. This is important to know because, depending on the placement of the electrodes, you will get data from different parts of the brain, so what you can interpret from the data you're getting will vary. Here I made a small animation to explain what I'm talking about. Different parts of the brain have different purposes. At the front, you have the frontal lobe, then the cerebellum is at the lower back, the parietal lobe at the top, etc. You don't have to know this by heart, and it might not mean too much to you right now, but they are in charge of different physiological functions.

So, for example, the frontal lobe is in charge of voluntary movement, concentration and problem solving. The parietal lobe at the top is more focused on sensations and body awareness. And the temporal lobe is the one on the side that receives sensory information from the ears and processes that information into meaningful units such as speech and words. So depending on what you'd like to track or build, you will want to check different brain sensors' electrode positions to see if they focus on the area of the brain that you're interested in. For example, one of the brain sensors on a previous slide is called NextMind, and it mostly focuses on the occipital lobe at the back of the head, because they claim to be focusing on the user's vision to try to predict what somebody is looking at.

So anyway, now that we've talked about brain sensors, what does it look like for us as JavaScript developers? With the Neurosity Notion, you have access to a UI in which you can see different graphs. Here is the part of the UI where you can see your raw brain waves. You can see the different lines. There are eight of them, and each label corresponds to the name of an electrode position based on the 10-20 EEG system that I talked about a few slides ago. So this represents a graph of the raw data coming live from the brain sensor. But in general, when you get started in this space of neurotechnology, you don't start straight away experimenting with raw data. Most of the brain sensors out there have implemented things like focus detection or calm detection that you can use without having to build your own machine learning model. Focus and calm detection don't need any training because they rely on a pattern of brain waves that is pretty common amongst everybody. However, custom mental commands have to be trained, so what do I mean by that? I won't bother reading the entire list, but for the Notion headset the commands you can train are focused on imagining specific movements. You can see biting a lemon, or pinching your left fingers, or thinking about pushing something in space. For example, here's what training the right foot mental command looks like. You can also do it with their API, but in general, to do it faster, you do it through their UI. You have two animations playing every few seconds to guide you into what you're supposed to do. You have to alternate between states of focusing on that command, so thinking about tapping your right foot on the floor, and resting, where you're supposed to try to think about nothing at all.

3. Training, Detection, and Raw Data

Short description:

Learn how brain data is recorded and used to predict mental activity. Personalized training sessions can be challenging but exciting. Explore examples of calm and focus detection, as well as the Kinesis API for mental commands. Trigger UI events based on probabilities and work with raw data.

While you're doing this, your brain data is recorded, and their machine learning algorithm tries to detect patterns between the two types of mental activity, so it can then accurately predict whether you're focusing on the right foot thought or not. This training session lasts a few minutes, and you can test if it worked in the UI; otherwise you should retrain until the predictions are more accurate. It can be a bit difficult because it requires focus, and it feels weird at first to have to think about movements without actually doing them, but it's an interesting exercise, and when it works it's pretty exciting.

When you're able to actually think about moving your right foot and trigger something in the UI, it is pretty cool. But this also means that the training sessions are personalized. If I train the right foot command myself, it will learn based on my brain data, so if I ask a friend to wear the brain sensor and try to execute a right foot command, it will likely not work because their brain data might be different. So then, once you've trained the platform to recognize patterns in your brain waves, you're able to use it in your code.

So let's look through a few examples. The first one we're going to look into is the calm and focus detection, because it's the fastest one to get started with. You start by importing the Notion NPM package, then you instantiate a Notion with new Notion, and then you're able to subscribe to a few different events. Here we subscribe to the calm event, and what it returns is an object with multiple properties, including the probability of a calm state. Then, for example, you can trigger UI events when the probability of being calm is over or under a certain threshold. Here, if the probability is under 0.25, so if I'm not really calm, we could trigger some classical music to try to calm someone down and see if the probability increases. That probability number is between zero and one, meaning zero is not calm and one is really calm. And what is nice with the Notion headset and its API is that the syntax is pretty similar for different events.
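To make that concrete, here is a minimal sketch of what that calm subscription can look like in Node.js. It follows the API as described in this talk; the package was published as @neurosity/notion at the time and has since been renamed @neurosity/sdk, so exact names may differ, and playClassicalMusic is a hypothetical helper.

```js
// Minimal sketch of the calm detection described above.
const { Notion } = require("@neurosity/notion"); // now published as @neurosity/sdk

const notion = new Notion({ deviceId: process.env.DEVICE_ID });

const main = async () => {
  // Logging in is required before any data can be streamed.
  await notion.login({
    email: process.env.EMAIL,
    password: process.env.PASSWORD
  });

  notion.calm().subscribe((calm) => {
    // calm.probability is between 0 (not calm) and 1 (very calm).
    if (calm.probability < 0.25) {
      playClassicalMusic(); // hypothetical helper, not part of the SDK
    }
  });
};

main();
```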

So here's the syntax to get the user's focused state. If you implement the calm state in your code, you don't have to change much to then move on to subscribing to the focused state. In this code sample, if my focus is over 0.5, turn off my notifications so I don't get distracted and lose focus. Now, apart from calm and focus, there's also the Kinesis API that's available for the mental commands I talked about earlier. You subscribe to the Kinesis event you want to use, and it also returns an object with a probability of that command being executed in your mind. A machine learning algorithm looks at the live brain data and tries to detect whether the pattern is similar to the right arm command that you trained, and it gives you the probability of the accuracy of the prediction. So what you'd probably want to do is trigger something like a UI event if the probability is over 0.99, for example, if you need it to be very accurate. But if you want to be a bit more flexible, you could trigger an event if the probability is over 0.8, for example. It's really up to you. And then finally, if you do want to work with raw data, you can subscribe to the brainwaves event. It returns an object containing the data from the eight channels of the brain sensor, as well as frequency and PSD, for power spectral density, which is a representation of the raw data once processed, and also a timestamp. The timestamp is important because the data that you're getting is lossless. You will get all the data from the brain sensor, but it means you might have a little delay to receive it.
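The pattern is the same for all three subscriptions. Here is a sketch under the same assumptions as before; muteNotifications and punch are hypothetical helpers, the "rightArm" label must match a command you trained, and the probability check on the Kinesis payload follows the talk's description (newer SDK versions may expose probabilities through a separate stream).

```js
// `notion` is the Notion instance created and logged in, as in the previous sketch.
notion.focus().subscribe((focus) => {
  if (focus.probability > 0.5) {
    muteNotifications(); // hypothetical helper
  }
});

notion.kinesis("rightArm").subscribe((intent) => {
  // Following the talk's description, the payload carries a probability for the trained command.
  if (intent.probability > 0.99) {
    punch(); // hypothetical helper
  }
});

notion.brainwaves("raw").subscribe((brainwaves) => {
  // Raw samples from the eight channels, plus frequency info, PSD and a timestamp.
  console.log(brainwaves);
});
```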

4. Using Timestamps and Building Demos

Short description:

Learn how to use timestamps to compare brain sensor data with UI time. See examples of building a Street Fighter game and controlling the Chrome Dino game with mental commands. Explore the code for these demos and the ease of integrating the brain sensor with the UI. Discover the latest demo using raw brainwaves.

The timestamp allows you to compare the current time in your UI to the time at which the data from the brain sensor was actually tracked. So depending on the type of application that you're building, this might actually be really important.
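As an illustration, here is one way that timestamp could be used to estimate the delay and ignore stale samples. The exact shape of the payload is an assumption, since the talk only mentions that a timestamp is included alongside the raw data.

```js
// `notion` is the instance from the earlier sketches. The `timestamp` field name
// and the handleSample helper are assumptions for illustration.
notion.brainwaves("raw").subscribe((brainwaves) => {
  const delayMs = Date.now() - brainwaves.timestamp;

  // For very time-sensitive applications, you might decide to ignore stale data.
  if (delayMs > 500) return;
  handleSample(brainwaves); // hypothetical helper
});
```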

So now that we talked about the different kinds of events or states that you can track using the notion and how to use that in JavaScript. I'm going to show a few examples of things that I have built using this brain sensor.

So my first ever demo that I built with the Notion was a prototype of a Street Fighter game controlled with mental commands. For this, I actually repurposed a previous demo that I had built using gesture recognition. In that previous demo, I was using another piece of hardware that had an accelerometer and a gyroscope, so a device for tracking speed and rotation. I recorded live data while executing real punches and hadoukens in the air, and used machine learning to find patterns in this gesture data so I could then predict hadoukens and punches again from live data and apply them to the game. So I could play Street Fighter in real life instead of using the keyboard.

As this previous demo worked, I thought I would try to repurpose most of the code, but use mental commands as triggers instead. And it worked: I trained two different mental commands and then triggered the game when the probability of the commands was over a certain threshold. My second demo was using a single mental command. You've probably played the Chrome Dino game before; it needs a single input, usually pressing the space bar, so I thought it was the perfect use case for the Notion. In this case, I didn't need to build a custom UI. I didn't modify anything in the original UI of the Chrome Dino game. I only trained the right foot mental command and then used a Node.js package called robotjs, which allows you to trigger events like clicks or keyboard events, to press the space bar for me. To me, this is a perfect example of a simple interface that doesn't need to be adapted to work with the brain sensor. I only had to add the connection between the brain sensor and the UI.

To show you how quick it can be to put this together, let's look at the code. This is basically the entire code for the demo on the previous slide. It's done in Node.js. You start by requiring the notion and robotjs packages, and you instantiate a Notion. Then you log in with your username and password, which you can save in a .env file or write directly if you're prototyping locally, because what I did not mention before is that you have to log in to be able to use your device, for privacy, so that not everybody has access to your data. Then you subscribe to the Kinesis API with the mental command that you trained. You get the probability, and if it's over 0.99, or any number that you want to set, I use robotjs to trigger a key tap on the space bar, and that's it. Then I ran my Node.js script, I visited the Chrome Dino game page, and voilà. You can see that there is not much difference between using the API in the front end and the back end, which is pretty cool. My latest demo is my most advanced, using raw brainwaves for the first time.
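Before moving on to that last demo, here is a sketch of what the whole Chrome Dino script described above can look like. It is a sketch rather than her exact code: the @neurosity/notion package name (now published as @neurosity/sdk), the "rightFoot" label, and the environment variable names are assumptions.

```js
// Minimal sketch of the Chrome Dino demo described above, in Node.js.
require("dotenv").config();
const { Notion } = require("@neurosity/notion");
const robot = require("robotjs");

const notion = new Notion({ deviceId: process.env.DEVICE_ID });

const main = async () => {
  // Logging in is required before any data can be streamed.
  await notion.login({
    email: process.env.EMAIL,
    password: process.env.PASSWORD
  });

  // "rightFoot" must match the mental command trained in the Neurosity UI.
  notion.kinesis("rightFoot").subscribe((intent) => {
    if (intent.probability > 0.99) {
      robot.keyTap("space"); // makes the dino jump in the already-open game tab
    }
  });
};

main();
```

Running a script like this with the device connected, then opening the Chrome Dino page, is all the setup the demo needs.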

5. Controlling Interfaces with Eye Blinks

Short description:

Recorded brainwaves to predict intentional eye blinks for controlling interfaces or devices. Overcame limitations of computer vision by using brain data. Next goal is to train the model to recognize eye movements between left and right for more interaction possibilities.

In this demo, I recorded my brainwaves between a neutral state and intentional eye blinks. So I built my own machine learning model and then I was able to predict these eye blinks from live data. In this demo, it doesn't look like much because it's a very rough first prototype, but it means that I can control interfaces or devices using intentional eye blinks when I'm wearing the sensor.

There are many ways to detect eye blinks using computer vision in JavaScript. But the issue with computer vision is that you have to be placed in front of your computer, the lighting has to be good enough for the detection to work, and sometimes some tools don't work well for people with different skin tones, etc. So I was excited to get it to work with brain data, because now it means I can be anywhere in the room and control my interfaces or devices. It doesn't matter if it's day or night, the lighting doesn't affect anything, and it would work for people with different skin tones. My next goal would be to gather data and train the model to recognize eye movements between left and right. That would give more flexibility to users and more ways to interact with an interface. You could imagine the intentional blink playing or pausing a video or a song, then looking left would play the previous one and looking right would play the next one.
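To illustrate that idea, here is a purely hypothetical sketch of how such predictions could be wired to a media player, assuming a custom model that emits one of "blink", "left" or "right". None of these labels or helpers come from an existing API.

```js
// Hypothetical mapping of predictions to media controls. The labels "blink",
// "left" and "right" come from the custom model imagined above, not from any SDK.
const video = document.querySelector("video");

// handlePrediction would be called with each prediction emitted by the model.
function handlePrediction(label) {
  if (label === "blink") {
    if (video.paused) {
      video.play();
    } else {
      video.pause();
    }
  } else if (label === "left") {
    playPrevious(); // hypothetical helper
  } else if (label === "right") {
    playNext(); // hypothetical helper
  }
}
```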

6. Applications of Brain-Controlled Interfaces

Short description:

Explore applications of brain-controlled interfaces, such as detecting focus, scrolling web pages, using brainwaves in music production, and password-less authentication. Discover how brainwaves can optimize workflow, enable hands-free scrolling, trigger effects in music, and provide secure authentication. Imagine a future where brainwaves replace fingerprints and facial recognition for authentication, offering a unique and secure biometric identifier. Early research is being conducted to integrate brainwaves with existing platform authenticators.

So now that we've talked about what we can track and how to implement it in code, and we went through a few demos, let's talk about more applications for technologies like this. One of them is around detecting focus while you're working, to optimize your workflow. There's already an existing VS Code extension that you can install that tracks your brainwaves as you're working. It could be interesting to know the times of the day when you're the most focused, or maybe the parts of your code base that you find the most difficult. Or you could analyze your drops in focus and calm states and see what happened in your environment to provoke them.

Another application is scrolling web pages. You can use it for accessibility: if a user is unable to move, they could read a page by using a mental command. But it could also be useful for anyone. If you're used to doing multiple things at once and your hands are busy doing something else, like cooking or eating, and you want to scroll down a page or stop a YouTube video, you could do that with your brainwaves while your hands are busy.
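As a small sketch of what that could look like in the browser, hands-free scrolling could be wired to a trained command. The "rightArm" label, the threshold and the inline credentials are assumptions, and the package name follows the earlier examples.

```js
// Sketch: scroll the page when a trained mental command is detected.
import { Notion } from "@neurosity/notion"; // now published as @neurosity/sdk

const notion = new Notion({ deviceId: "your-device-id" });
await notion.login({ email: "you@example.com", password: "********" });

notion.kinesis("rightArm").subscribe((intent) => {
  if (intent.probability > 0.9) {
    // Scroll most of a viewport height each time the command is detected.
    window.scrollBy({ top: window.innerHeight * 0.8, behavior: "smooth" });
  }
});
```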

Another application is the Neurosity macOS app. You can download the code from GitHub, launch this app made with Electron, connect your device, and trigger the Do Not Disturb mode when the device detects that you're in a state of high focus. Now, moving on from focus states, you can also build things that are a bit more creative. If you're a musician, you can use your brainwaves to help you produce your music. Here, one of the founders of Neurosity is triggering an effect pedal with his brainwaves while he's also playing the guitar. The quality of the GIF maybe doesn't show it well enough, but the pedal is pushed a few times to show that the sound effect is being triggered. If you play an instrument, I'm sure you can think about ways where, while your hands are playing, you could use your brainwaves directly to apply certain effects.

Finally, something I've been thinking about for the past few days is password-less authentication: using the brain sensor as a biometric authentication device. We're already familiar with using Touch ID or Face ID to authenticate into devices or services. We could imagine the same type of system, but using brainwaves instead. Each person's brainwaves have a unique signature, so we could use this to identify someone trying to log in to a platform. If we're worried about data leaks with Touch ID or Face ID, for example if someone forces you to log into a platform or uses Face ID while you're sleeping, you can't really create a different fingerprint or modify your face. But using brainwaves, you can retrain the device with a different mental command that will be unique to you but different from the one that was stolen. So that can be really interesting. Here, this is a really short demo that I did on Netlify, because as I work there I was able to modify the code myself. But what I would really be interested in is working with real platform authenticators, so I tried to add something to the Touch ID on my Mac. That's very early research at the moment.

7. Limits and Opportunities of Brain Control

Short description:

Discover the limits of brain-controlled interfaces, including the difference in data quality between external sensors and implants. Learn about the impact of delays and the limitations of detecting specific thoughts. Understand the need for training and the opportunity for JavaScript developers to shape the future of brain-controlled interfaces.

But most importantly, an application can be whatever you're thinking of. I hope that listening to this talk, you might be coming up with ideas of things that you'd like to build using a brain sensor. It's a field that's growing with more and more devices, advancements in hardware and software. And I'd love to see more people come up with ideas for this technology.

Remember that you can use JavaScript in a lot of different contexts, and that's what's awesome about it. It's a real advantage to be a JavaScript developer in this space. We build websites that run on desktops, laptops, iPads, and phones. You can build front-end and back-end things. You can build Electron apps. You can build AR and VR prototypes. You have frameworks to make music in JS. You can control robots. I mean, it's really super exciting. And it's not something that developers using a different language can necessarily do, because they might not have access to an ecosystem as great as the one that JavaScript has.

So I've talked a lot about what's possible, but before I finish, let's briefly talk about the limits of such technology. First of all, it's important to know that the quality of the data is different with external brain sensors than it is with implants. Implants have direct access to the brain, as the device is placed directly on the brain, whereas a brain sensor gets data that has to go through layers of skin, skull and hair. So if you find a demo built by another company that uses implants, you have to be aware that the quality of the data you will get with an external brain sensor is different, and that will impact what you can do.

Another limit is the delay. The data is lossless, which is great, you get all of it, but working with delays will have an impact on the kind of applications that you can build. If you're building things with focus or calm, it doesn't really matter if your application changes state a second later, but if you're building applications that are really time-sensitive, working with timestamps might be a difficulty; it's something to think about. When you're getting into something like this, you also have to remember that in a lot of ways we still don't understand everything about the brain, and we can't really detect things like "I want to trigger something when I'm thinking about the beach". It doesn't really work like that, so the type of thoughts that we can detect at the moment is more limited. It's something you have to keep in mind when you're getting into a space like this.

And finally, as mental commands need training, if you build an interface, remember that calm and focus are the same for all users, so you don't need any training; the user will just have to wear the sensor. But if you want something a bit more personalized, a bit more complex, the fact that the user will have to do some training before being able to use your application might be a limitation.

The biggest opportunity that I see in this industry at the moment is that as JavaScript developers, we can help shape the future of brain-controlled interfaces. It's still in early stages, but it's moving fast and we're in a position where we can give feedback to different companies about what we'd like to be able to do with the brain sensor. We can help shape what the web would look like if it was brain-controlled and we can contribute to the different open source packages, et cetera. To me, I really think it's a big opportunity. On this, I'm going to stop here, but hopefully giving you a better idea of what is possible when we talk about brain control on the web. And I hope you're as excited as I am about the possibilities that this technology opens up.

Q&A

Q&A and Moving During Training

Short description:

Thank you for listening. Machine learning emerged as the winner in the battle between machine learning, augmented reality, and virtual reality. Charlie discusses the possibility of moving while training the model and shares her preference for training without moving. She mentions Neuralink's project involving a monkey playing pong while recording data and expresses her interest in trying it in the future.

Thank you so much for listening, and if you have any questions, don't hesitate to ask in the Q&A or on Twitter, I'm @devdevcharlie.

Wow, what an inspiring session. Thank you very much, Charlie, for delivering this at JS Nation. Before jumping into the Q&A section, let's check what you folks answered to Charlie's question on where you would like to use JavaScript for creative coding. And the winner is machine learning. Actually, it was a battle for the whole duration of the session between machine learning and augmented and virtual reality, and machine learning won by a very slight margin.

Now let's invite Charlie to our studio. She's here. Hello. Hi. Thanks for your session. We have lots of questions for you; let me pick some of them, they're on my second screen. Let's start with a very technical one. You mentioned you have to think about the action without actually moving. Is it because moving creates signal noise? Would moving at the same time interfere with the model's training? And did you try it? Oh, actually, you can move if you want to. It's just that, since in general when you want to reproduce that thought you probably won't want to move, it's easier to train the model without actually moving your body. But you can do both. I prefer to train the model without moving because I know that when I want to reproduce the thought, I want to do it without moving. But if you look at the project that Neuralink has been doing lately, with a monkey that was playing Pong, the monkey was moving at the same time as the data was recorded. It is something that I want to try next. But I think that you can do both. I wouldn't know if it creates noise in particular; I wouldn't think so, because what the brain sensor is getting is data about you wanting to move your body. So I don't think that it would actually create noise, but I would have to try. It's definitely something that I want to do next as well.

Machine Learning Poll and Medical Applications

Short description:

Machine Learning won the poll, which is exciting as it aligns with my interests. JavaScript can be used creatively for Machine Learning. There is interest in using brain sensors for medical purposes, such as during counseling sessions. Detecting states of potential depression using brain data can provide direct feedback on a user's emotional state, offering earlier diagnosis and real-time insights. It's an interesting area of research.

Awesome. By the way, are you surprised by the poll results, that machine learning won? I'm not surprised, but I'm excited, because machine learning is something that I've been looking into a lot for the past few years. So I'm excited that other people want to do that too. I think there's a lot to learn and really exciting projects to build. So I'm actually excited by the poll response.

Yeah, I bet that after your session, more and more folks will use JavaScript creatively for machine learning. We have a couple of minutes left, so let's take this question; it's sort of a conceptual one. I bet many people's first thoughts are for games, but what kind of interest have you gotten in terms of medical uses? What about psychiatrists and psychologists using it during their appointments to get readings from people as the session happens? This question came up because a lot of counseling has moved online, so it could be a very interesting use case. What are your thoughts on that?

Yes, I think I remember reading some papers about people trying to detect states of potential depression using brain sensors, so that instead of only listening to what people are saying, we would get direct feedback about their state. If we know that depression is about a chemical imbalance in the brain, there might be things that we can actually detect from raw brain data about the state of a user, and then be able to diagnose certain things earlier on, or have real feedback about somebody's emotional state. Sometimes the way we express ourselves is limited to the words that we know or the experiences we've had in the past, whereas now we could do it directly with data from the brain. I think there is research being done on this at the moment, and it's definitely an interesting one as well.

Awesome, awesome. Charlie, thanks for your session. Thanks for your answers. It was a great pleasure to have you at the JS Nation conference. Thank you so much.
