Controlling Apps with Your Mind and AI


What is the future of user interactions? Will we continue using web and mobile, or will we switch to VR and AR completely? What is our UX today on web and mobile, and how will it change when the digital world bridges into new dimensions? Will we still use a keyboard and mouse, or gestures, or something else entirely? In this talk we will get a glimpse into a future where we control apps with our thoughts. Literally. It's not a thought experiment, but a journey into our brainwaves with a consumer EEG headset. We will explore how we can use it, together with AI, to create futuristic experiences that lay a foundation stone for our future interactions with the digital world.



Hi everyone. Really excited to be here at React Summit Remote Edition, streaming from outer space. Today I want to talk about controlling apps with your mind. My name is Vladimir Novik. I'm a software architect and consultant at Vladimir Novik Labs. I'm a Google Developer Expert, author, and engineer, and on a daily basis I work in the web, mobile, VR, AR, IoT, and AI fields. I'm also CTO and co-founder of Event Loop, where we are creating a robust, performant, feature-rich online conference experience. Basically, instead of having Zoom for your virtual conference, we bring you a set of conference tools, plugins, and widgets that help you organize and attend the conference. So if you're an organizer, speaker, or attendee, reach out to us and sign up for our alpha. It will be an open source product, so if you want to collaborate, you're welcome. You can find us on Twitter at @eventloopHQ. Today I want to talk about the future. I think we are in a kind of transition phase, technology-wise. We're exploring new horizons and new mediums: we have VR, AR, mixed reality, web, mobile, and everything is constantly changing. That's why we are the ones who break the rules to make new ones. So let's think about what medium we will use in the future, what dimensions we will use. Maybe VR, maybe AR, maybe something different. I'm just thinking out loud here, but quantum computing is getting traction, VR is getting traction, AR too, and everything keeps changing, right? So which dimensions will we use? Will the web exist as it is today, or mobile, or will we completely switch to a different medium? And how should we prepare for that transition? How should we adapt? Should we adapt at all, or should we shatter the foundations we have and invent new techniques to manipulate things, to interact differently? Maybe new UX patterns, maybe new best practices.
So we have this ability right now to change things, to adapt and transform them into something totally different. Let's talk about the very foundations of UI and UX. I think it's the 2D medium. If you think about it, we've come a long way from cave drawings to mobile apps. Color theory, lines, the whole history of art created the foundation of design, and everything on our screens is basically a 2D medium, because we fake the third dimension. It's not really a third dimension: we have 3D models in the browser or in a headset or wherever, but all the shapes and depth are based on shaders, which are basically functions of how light is reflected. So it's kind of fake, right? We have things on our phones, we have screens, and everything is 2D. XR is adding a new dimension to that. So how have we adapted? We took over familiar forms. Let's say you have a sign-up form: in VR, that form is hovering in thin air. Realistic? Not really. It's something we've adapted from the 2D medium. Or in AR, we have arrows pointing in different directions that completely break the notion of reality, but we have them because we've adapted and haven't invented something new. So I think it's more crucial to create reality-based interactions in VR and AR. Think about logging in inside VR: currently you have a login form and a hovering keyboard, you type your username and password on that keyboard, and you get inside. But is that something you would see in reality? Not really, right? It's more realistic to have some kind of passport or key that you put on a spot, or a key you turn inside a door, and it lets you through. These are reality-based interactions, so we need to understand our reality in order to create them.
The medium is completely different. In VR, there is another dimension: something can happen behind the viewer. I'm looking at the camera, but something is happening behind me, so I cannot rely on color theory to make amazing call-to-action buttons or animations. I need to use different things: haptics, sounds, maybe slowing down time, and so on. There is also adaptive UI. Adaptive UI is something used on the web, and the idea is that the UI learns from what you are doing with it, so forms learn and adapt. You can Google that; it's a new trend-ish thing. Another thing that I propose is actual mind reading. And yeah, obviously I cannot read your thoughts, but I can to some degree. I want to ask you: what is this? We are online, so you can answer in chat. I'll pause. So, it's the known universe, and all these dots are clusters of galaxies, and it looks amazing. But what is this? It looks pretty much the same, right? But it's actually neurons in our brains. So we are the universe; act accordingly. How do the neurons in our brain work? Neurons come in pairs: there is an excitatory neuron that releases glutamate and creates a dipole. Basically you have a plus and a minus, and it acts like a battery, so you get a potential change between different neurons. That potential change can be measured by electrodes on the skull. If you measure your brain, it looks like this: this is the awake state, this is the sleep state. You can see they're different, but it all still looks random-ish, so we need to analyze it to understand what it means. To do so, we will use EEG headsets. There are lots of consumer and research versions of EEG headsets, and the idea is to put electrodes on your skull and measure the potential change underneath. Research EEGs look like this, and they are quite costly.
But there are consumer ones, and I actually have one here. It's called Muse. It's a nice product that helps you with meditation: if you meditate, it helps you focus and so on. I will read my brainwaves now, and you will see what it looks like. It's fairly cheap and has only about five electrodes, but it's good for our example. So how do we connect this thing to our brain? I don't need to open my skull and plug it in; I can connect it using Bluetooth, and specifically, we will use Web Bluetooth. I access the navigator.bluetooth object and call requestDevice, filtering for the Muse service, and then I just connect and read some attributes over Bluetooth. Now, browser support is not quite there: it works in Chrome and, for some reason, Opera, but the rest don't support it. But we're talking about an experiment, right? A thought experiment, literally, but also an experiment in where technology will lead us. Let's look at the demo. I have this playground here where I can pair with my headset, and you will see the brainwaves coming through. These are my actual brainwaves. As you can see, when I talk, the waves change; when I blink, you see these tiny voltage spikes; and if I do something like this, you see higher spikes. So I can measure, to some degree, what's happening in my brain, which is pretty nice. But what do I do with this data? I created a little app here, and before connecting to Muse, I want to see all the readings from this headset. I will subscribe to the readings and just console.log them. I use the muse-js library, which exposes the readings as an RxJS stream that I can subscribe to, logging whatever Muse shows me. So let's connect and see what we have here.
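The Web Bluetooth pairing described above can be sketched in a few lines. This is a minimal sketch, not the demo's actual code: `MUSE_SERVICE` is assumed to be the GATT service UUID that muse-js filters on (0xfe8d at the time of writing, but verify against the muse-js source), and the `bluetooth` object is injected so the function can be exercised outside a browser; in a real page you would pass `navigator.bluetooth`.

```javascript
// Assumed Muse GATT service UUID (the value muse-js filters on);
// verify against the muse-js source for your headset firmware.
const MUSE_SERVICE = 0xfe8d;

// `bluetooth` is injected so this sketch can run outside the browser;
// in a real page you would call connectToMuse(navigator.bluetooth).
async function connectToMuse(bluetooth) {
  // Show the browser's device picker, listing only devices
  // advertising the Muse service.
  const device = await bluetooth.requestDevice({
    filters: [{ services: [MUSE_SERVICE] }],
  });
  // Open the GATT connection; EEG and telemetry characteristics are
  // then read from the returned server.
  return device.gatt.connect();
}
```

In practice muse-js wraps this handshake for you: roughly `const muse = new MuseClient(); await muse.connect(); await muse.start();` followed by subscribing to its readings stream.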
And as you can see, just like on the graph, the data is quite noisy, so we cannot do much with it as-is. The question is how we measure things more precisely. To do that, we use a bandpass filter: we cut out the frequencies outside the range we care about and keep only the spikes. Then we need to cut the signal into epochs, which are basically timeframes, because we want a time reference: if there is a spike within a specific window, that's probably a blink. After cutting into epochs, we pass the data through a Fast Fourier Transform. The raw data we get is in microvolts in the time domain, and we want to convert it to the frequency domain, so we use the Fast Fourier Transform. From the raw data we will now see distinct frequencies. We can recognize different brainwaves based on these frequencies, and the classification is gamma, beta, alpha, theta, and delta. Each one corresponds to a different state of mind. For instance, delta is sleep, loss of body awareness, repair, and so on, while gamma is heightened perception, learning, and problem-solving tasks. As you can see, they are not super distinctive; each covers a broad range. Beta is generally awake, and alpha is relaxed, so I can measure the alpha state and see if I'm relaxed. We can basically react to brainwave spikes, and we can also feed this data into machine learning. But before doing amazing stuff with machine learning, I want to show you something. I want to subscribe to focus, which will basically give me the alpha wave readings. And I also want to subscribe to blinking. In order to detect a blink, basically what I do is I get the readings.
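The epoch-then-FFT step above can be illustrated with a toy version in plain JavaScript. This is a sketch under simplifying assumptions: it uses a naive O(N²) DFT instead of a real FFT library, and the band edges are the commonly quoted approximate values, not numbers from the talk.

```javascript
// Commonly quoted EEG band edges in Hz (approximate; sources vary).
const BANDS = {
  delta: [0.5, 4],
  theta: [4, 8],
  alpha: [8, 12],
  beta: [12, 30],
  gamma: [30, 100],
};

// Naive DFT magnitude spectrum. Fine for a 256-sample epoch; a real
// app would use an FFT library instead of this O(N^2) loop.
function magnitudeSpectrum(samples) {
  const N = samples.length;
  const mags = [];
  for (let k = 0; k < N / 2; k++) {
    let re = 0;
    let im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(angle);
      im += samples[n] * Math.sin(angle);
    }
    mags.push(Math.sqrt(re * re + im * im));
  }
  return mags;
}

// Sum the spectrum bins whose frequency falls inside [loHz, hiHz).
function bandPower(mags, sampleRate, [loHz, hiHz]) {
  const N = mags.length * 2; // original epoch length
  const binHz = sampleRate / N; // frequency resolution per bin
  let power = 0;
  for (let k = 0; k < mags.length; k++) {
    const f = k * binHz;
    if (f >= loHz && f < hiHz) power += mags[k];
  }
  return power;
}
```

Feeding in a one-second epoch containing a pure 10 Hz sine, for example, produces a spectrum whose alpha-band power dwarfs the beta band, which is exactly the kind of comparison the "am I relaxed?" check relies on.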
I filter them, keeping only the reading from the electrode above my left eye, take the maximum of it, the spike, and then use the RxJS switchMap operator to isolate the spike, because I don't really care about the rest of the data. If there is a spike, that is what I return. So how does it look? Okay, I need to also remove this one. Let's connect it again and see. If I blink, you can see me blinking here. As you can see, sometimes it works and sometimes it doesn't, because it's threshold-based and maybe I haven't put it really close to my skull. So this is the blinking. Now we want to feed this data to machine learning. To do so, I will go to my app.js and add my prediction panel, and I will predict some stuff. Here I have three cards: one is Web and Mobile, another is VR/XR, and another is IoT/AI. What I will try to do is connect to my Muse headset, get all the data, pass it through all the filters I need, and then add it as samples to a KNNClassifier, a machine learning algorithm, and start classifying which card I'm thinking about. So let me record these waves really quick: I click on this button while looking at Web and Mobile, now VR/XR, and now IoT/AI. Now, if I click on classify, I can switch cards just by looking: my hands are here, and I can just look at different cards and switch between them. As you can see, the main topics I'm interested in are VR/XR and IoT/AI. This is quite cool, but if it works, I'll bring another level of coolness here. I have a MobX store with an enableDrone flag, and I have a drone here, this tiny fellow. It occasionally works, so let's see if it works this time. First I will connect to my Muse headset, trying not to blink, and then I will connect to my drone. Okay, so here we have the drone.
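The blink-detection step (filter to one electrode, take the peak of each epoch, compare against a threshold) boils down to a small pure function. The threshold value below is an illustrative assumption, not the demo's actual number; in practice you tune it per headset and per fit on the head.

```javascript
// Illustrative blink threshold in microvolts (an assumption, not the
// talk's actual value); blinks show up as large voltage spikes on the
// electrode above the eye.
const BLINK_THRESHOLD_UV = 100;

// One epoch = an array of samples from the electrode above the left
// eye. Report a blink when the epoch's peak crosses the threshold.
function detectBlink(epoch, threshold = BLINK_THRESHOLD_UV) {
  const peak = Math.max(...epoch.map(Math.abs));
  return { blink: peak >= threshold, peak };
}
```

In the demo this check sits inside an RxJS pipeline, roughly `readings.pipe(filter(...), map(...), switchMap(...))`, so only the spike events flow downstream and the rest of the data is discarded.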
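The card classification uses a KNN classifier (in the TensorFlow.js ecosystem that would typically be `@tensorflow-models/knn-classifier`, though the talk doesn't name the exact package). A toy k-nearest-neighbour over plain arrays shows the idea: record labelled brainwave feature vectors while looking at each card, then classify new vectors by the majority label among the k closest stored samples.

```javascript
// Toy k-nearest-neighbour classifier: a sketch of what KNNClassifier
// does, not the library itself.
function createKnn(k = 3) {
  const samples = [];
  // Euclidean distance between two equal-length feature vectors.
  const dist = (a, b) =>
    Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
  return {
    // Record one labelled feature vector (e.g. band powers captured
    // while looking at the "web and mobile" card).
    addExample(features, label) {
      samples.push({ features, label });
    },
    // Return the majority label among the k nearest stored samples.
    classify(features) {
      const votes = samples
        .map((s) => ({ label: s.label, d: dist(features, s.features) }))
        .sort((a, b) => a.d - b.d)
        .slice(0, k)
        .reduce((acc, { label }) => {
          acc[label] = (acc[label] || 0) + 1;
          return acc;
        }, {});
      return Object.entries(votes).sort((a, b) => b[1] - a[1])[0][0];
    },
  };
}
```

The "record these waves" buttons in the demo correspond to `addExample` calls with the card name as the label, and the classify phase repeatedly calls `classify` on fresh epochs.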
And hopefully you can see it. I will record brainwaves and start classifying. I hope it's in the camera view. Now I will try to move it just by looking at it. And let's land it. And it fell. I'm not sure it was in view, so let me put it in the view again. Now it's probably here, and again I'm moving it with the power of my thought. So that was quite cool. The main takeaway is: what is the purpose of all of this? Why are we flying drones with our minds? These devices are not that reliable, and we have Web Bluetooth support only in Chrome. The main reason is that we are the ones who made the rules, and we break the rules and invent new things. My main takeaway from this talk is that the future is already here, and you are the ones to build it. Thank you. All right. That was incredible. I mean, I didn't think we were going to see somebody fly a drone with their mind this early in the morning. Unfortunately, we don't have much time, but we're going to bring Vlad back to the stage for one quick question, and then we're going to move on to our next session. So Vlad, after seeing something like this, the mind control with the AR, where does somebody even start if they want to learn this stuff for themselves? I mean, the whole point of this talk was that you can change the world, basically. You're the one who can change everything. Then you need to decide what you want to learn, technology-wise. Do you want to be more experienced just in React, or do you want to broaden your horizons? For instance, if you want to combine React Native and AR, or React with Web Bluetooth and drones, you can learn from different resources. There's an amazing website where I will also be doing courses. Actually, I will be doing a workshop on React Native and AR pretty soon.
I actually created this for React Summit, so I will send the link in the community channel; if you want to get into this workshop, you can do so. VR-wise, I also gave a bunch of talks. I have a YouTube channel where I streamed about VR, and I'll be recording a bunch of stuff on this topic, because I like to teach these things; that's why I also started a Twitch channel. That's one of the places to learn, but obviously there are lots of places, and if you really want to get into this, I would say just DM me on Twitter. Let me know what you want to do as far as technology is concerned, and I will probably be able to direct you to learning materials and free resources and so on. Excellent. Well, Vlad, thank you so much. We really appreciate it. I wish we had more time for Q&A, but to get us back on track, we are going to jump now to a panel discussion.
25 min
02 Aug, 2021
