Building Brain-controlled Interfaces in JavaScript

Neurotechnology is the use of technological tools to understand more about the brain and enable a direct connection with the nervous system. Research in this space is not new; however, its accessibility to JavaScript developers is.

Over the past few years, brain sensors have become available to the public, with tooling that makes it possible for web developers to experiment building brain-controlled interfaces.

As this technology is evolving and unlocking new opportunities, let's look into one of the latest devices available, how it works, the possibilities it opens up, and how to get started building your first mind-controlled app using JavaScript.

Charlie Gerard
27 min
09 Jun, 2021

Video Summary and Transcription

Learn how to build brain-controlled interfaces using JavaScript and brain sensors. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore examples of calm and focus detection, as well as the Kinesis API for mental commands. Discover the applications of brain-controlled interfaces, such as scrolling web pages and password-less authentication. Understand the limits and opportunities of brain control and the potential for using brain sensors in medical applications.

1. Introduction to Brain-Controlled Interfaces

Short description:

Learn how to build brain-controlled interfaces using JavaScript. Charlie Gerard, senior frontend developer at Netlify, shares insights on using brain sensors to transform brain activity into digital data. Discover the Neurosity Notion, a commercial brain sensor, and how the number of electrodes impacts its use cases.

Hi everyone, thanks for joining me today to learn more about how to build brain-controlled interfaces using JavaScript. Before we dive into this topic, here's a little bit more about me. My name is Charlie Gerard. I'm a senior frontend developer at Netlify. I'm also part of the Google Developer Experts group in Web Technologies. It's a community group sponsored by Google for developers who would like to give back to the community in different ways. I'm also the author of a book about TensorFlow.js for JavaScript developers.

Most of all, I spend a lot of my personal time building and researching prototypes around human-computer interaction, also called HCI. That's the study of the design and use of computer technology, focused on the interfaces between people and computers. It can involve a lot of things like AR, VR, interactive art, machine learning, et cetera. I've been interested in this since I started learning to code. Throughout the years, my research has led me to the topic of today. It has nothing to do with my day job at Netlify, but hopefully this talk will show you that you can use your JavaScript skills for a lot of different things.

The focus of today is our brain and how to use it to interact with interfaces directly using JavaScript: how we can get data directly from our brain activity and write some JavaScript code to use it to interact with interfaces or devices. How do we even get this data from our brain? We do this with the help of brain sensors. These are devices that contain electrodes that you place on the scalp. In contact with the skin, they are able to transform the electrical signals coming from the brain into digital data that we can work with. On this slide, I put a few of the commercial brain sensors that you can buy currently. You can see that they come in different shapes and have different numbers of electrodes. That will impact what you're able to track and what kind of applications you're able to build. There are probably more brain sensors available out there, but these are the ones I've mostly heard of or played with. The one that this talk is going to focus on is the one on the bottom right, called the Neurosity Notion. They recently released a new model called the Crown, so if you're ever interested in buying it, it might be called the Crown now, but I experimented with one of their very first versions, which was called the Notion. To understand how the number of electrodes impacts the use cases, let's talk briefly about how that works. In the context of the Notion device, I highlighted in green the placement of the electrodes based on their reference number on the 10-20 EEG system. This is a reference system in neurotechnology: a kind of map representing the placement of electrodes on a user's head.
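
Before going further, here's roughly what talking to the device looks like from code. This is a minimal sketch, assuming Neurosity's JavaScript SDK (published as @neurosity/sdk; earlier versions shipped as @neurosity/notion) and credentials kept in environment variables — an illustration of the setup rather than a definitive recipe:

```js
// Minimal connection sketch, assuming the Neurosity JavaScript SDK
// (@neurosity/sdk) and a device ID + account credentials stored in
// environment variables — adjust the variable names to your own setup.
const { Neurosity } = require("@neurosity/sdk");

const neurosity = new Neurosity({
  deviceId: process.env.NEUROSITY_DEVICE_ID
});

async function main() {
  // Authenticates against your Neurosity account and pairs with the headset.
  await neurosity.login({
    email: process.env.NEUROSITY_EMAIL,
    password: process.env.NEUROSITY_PASSWORD
  });

  console.log("Logged in — ready to subscribe to brain data");
}

main().catch(console.error);
```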

2. Brain Sensors and Data Analysis

Short description:

Learn about the different brain sensors and their placement on the head. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore the raw data and available features in the neuro CT notion UI, including focus and calm detection. Discover the process of training custom mental commands using the Notion headset.

The top of the map is the front of your head, and the bottom is further down towards the back. Each electrode has a reference number and letter. These are important because they give you an idea of the type of brain waves you can track, depending on the area of the brain the electrodes are closest to.

So the Notion has eight electrodes, four on the left side of the brain and four on the right side, mostly focused on the top and the front of the head. This is important to know because, depending on the placement of the electrodes, you will get data from different parts of the brain, which means that what you can interpret from the data you're getting will vary. Here I made a small animation to explain what I'm talking about. Different parts of the brain have different purposes. At the front, you have the frontal lobe, then the cerebellum is at the lower back, the parietal lobe at the top, etc. You don't have to know this by heart, and it might not mean too much to you right now, but these areas are in charge of different physiological functions.

So, for example, the frontal lobe is in charge of voluntary movement, concentration, and problem solving. The parietal lobe at the top is more focused on sensations and body awareness. And the temporal lobe is the one on the side that receives sensory information from the ears and processes that information into meaningful units such as speech and words. So depending on what you'd like to track or build, you will want to check different brain sensors' electrode positions to see which are more likely to be focusing on the area of the brain that you're interested in. For example, one of the brain sensors on one of the previous slides is called NextMind, and it mostly focuses on the occipital lobe at the middle back, because they claim to be focusing on the user's vision to try to predict what somebody is looking at.
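
Tying this back to code for a moment: the electrode placement shows up directly in the device metadata. A small sketch, assuming the Neurosity SDK and the logged-in `neurosity` instance from the earlier example; the channel list in the comment is the Notion's advertised 10-20 placement and may differ on other models:

```js
// Sketch, reusing the logged-in `neurosity` instance from the earlier
// connection example. getInfo() reports device metadata, including the
// names of the electrode channels the headset has.
neurosity.getInfo().then((info) => {
  console.log(info.channelNames);
  // For the Notion, something like:
  // ["CP3", "C3", "F5", "PO3", "PO4", "F6", "C4", "CP4"]
});
```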

So anyway, now that we've talked about brain sensors, what does it look like for us as JavaScript developers? With the Neurosity Notion, you have access to a UI in which you can see different graphs. Here is the part of the UI where you can see your raw brain waves. You can see the different lines; there are eight of them, and each label corresponds to the name of an electrode position based on the 10-20 EEG system that I talked about a few slides ago. So this represents a graph of the raw data coming live from the brain sensor. But in general, when you get started in this space of neurotechnology, you don't start straight away experimenting with raw data. Most of the brain sensors out there have implemented things like focus detection or calm detection that you can use without having to build your own machine learning model. Focus and calm detection don't need any training because they rely on a pattern of brain waves that is pretty common amongst everybody.

However, custom mental commands have to be trained. So what do I mean by that? I won't bother reading the entire list, but for the Notion headset, the commands you can train are focused on imagining specific movements. So you can see biting a lemon, or pinching your left finger, or thinking about pushing something in space. For example, here's what training the right foot mental command looks like. You can also do it with their API, but in general, to do it faster, you do it through their UI. You have two animations playing every few seconds to guide you into what you're supposed to do. You have to alternate between states of focusing on that command, so thinking about tapping your right foot on the floor, and resting, where you're supposed to try to think about nothing at all.
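
To make this concrete, here's a hedged sketch of what subscribing to these metrics looks like with the Neurosity SDK, reusing the logged-in `neurosity` instance from earlier. Calm and focus work without any training, while kinesis only fires for a command you've trained; the "rightFoot" label mirrors the talk's example and must match whatever you actually trained:

```js
// Sketch, reusing the logged-in `neurosity` instance from the earlier
// connection example.

// Raw brainwaves: live samples streaming from all eight electrodes.
const raw = neurosity.brainwaves("raw").subscribe((brainwaves) => {
  console.log(brainwaves); // amplitude data per channel, plus metadata
});

// Calm and focus are pre-trained metrics: each emits a probability (0 to 1).
neurosity.calm().subscribe((calm) => {
  if (calm.probability > 0.3) {
    console.log("You seem calm");
  }
});

neurosity.focus().subscribe((focus) => {
  console.log(`Focus probability: ${focus.probability}`);
});

// Kinesis fires when a *trained* mental command is detected. "rightFoot"
// mirrors the right-foot example from the talk — the label must match a
// command you've actually trained through the UI or API.
neurosity.kinesis("rightFoot").subscribe(() => {
  console.log("Right foot command detected!");
});

// Streams can be stopped when you no longer need them:
// raw.unsubscribe();
```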

Most developers closely follow the framework wars. So busy with these games, that we forget to check what new features HTML, CSS, and JavaScript offer us. Native modals, dynamic viewport units, and optional chaining are just some of the features you should use already! If you stopped following Web Platform development in 2015, it's time to refresh your knowledge. I will teach you to build applications tailored to 2024 and prepare you for the new Web Platform features that will appear in the coming years.