Controlling Apps with Your Mind and AI

What is the future of user interactions? Will we continue using web and mobile, or will we switch to VR and AR completely? What is our UX today on web and mobile, and how will it change when the digital world bridges into new dimensions? Will we still use the keyboard and mouse, or gestures, or something else entirely? In this talk we get a glimpse into a future where we control apps with our thoughts. Literally. It's not a thought experiment, but a journey into our brainwaves with a consumer EEG headset. We will explore how we can use it together with AI to create futuristic experiences, laying a foundation stone for our future interactions with the digital world.

Vladimir Novick
25 min
02 Aug, 2021

Video Summary and Transcription

This talk explores controlling apps with the mind and the future of UI and UX. It discusses the integration of VR and AR into UI and UX, understanding neurons and EEG headsets, connecting to Muse via Bluetooth, measuring brain waves and detecting blinks, feeding data to machine learning, and mind control with AR. The speaker emphasizes the importance of learning React Native, AR, React, Bluetooth, and drones for those interested in exploring these topics.

1. Introduction to Controlling Apps with Your Mind

Short description:

Hi, everyone. Today, I want to talk about controlling apps with your mind and the future of UI and UX. We are in a transition phase, exploring new horizons and dimensions. We have the ability to adapt and transform things into something totally different. The foundations of UI and UX are based on a 2D medium, even though we have fake 3D elements.

Hi, everyone. I'm really excited to be here at React Summit Remote Edition, streaming from outer space. And today, I want to talk about controlling apps with your mind. My name is Vladimir Novick, I'm a software architect and consultant at Vladimir Novick Labs. I'm a Google Developer Expert, author, and engineer, and on a daily basis I work in the web, mobile, VR, AR, IoT, and AI fields.

I'm also CTO and co-founder of EventLoop, where we are creating a robust, performant, feature-rich online conference experience. Basically, instead of having Zoom for your virtual conference, we bring you a set of conference tools, plugins, widgets, and so on that help you organize and attend the conference. So if you're an organizer, speaker, or attendee, reach out to us and sign up for our Alpha. It will be an open source product, so if you want to collaborate, you are welcome. You can find us at eventloop.ai or on Twitter at eventloopHQ.

Today, I want to talk about the future. And I think we are in a kind of transition phase, technology-wise. We are exploring new horizons and new mediums: we have VR, AR, mixed reality, web, mobile. Everything is constantly changing. And that's why we are the ones who break the rules to make the new ones. Let's think about what medium we will use in the future. What dimensions will we use? Maybe VR? Maybe AR? Maybe something different. I'm just thinking out loud here, but quantum computing is getting traction, VR is getting traction, AR is getting traction. Everything is changing. Which dimensions will we use? Will there be web as it is today, or mobile, or will we completely switch to a different medium? How should we prepare for that transition? How should we adapt? Should we adapt at all, or should we shatter the foundations that we have? Maybe invent new techniques to manipulate things, to interact differently? Maybe new UX patterns, maybe new best practices?

So, we have this ability right now to change things, to adapt and transform things into something totally different. So let's talk about what the very foundations of UI and UX are. And I think it's the 2D medium. If you think about it, we've gone a long way from cave drawings to mobile apps. But if you consider color theory, lines, the history of art, and how all of it created the foundation of design, everything on our screens is basically a 2D medium. Even our third dimension is fake. It's not really a third dimension: we have 3D models in the browser, in headsets, or wherever, but all the shapes and depth are based on shaders, which are basically functions of how light is reflected. So it's kind of fake, right? And we have things on our phones, we have screens, and everything is 2D.

2. The Future of UI and UX in VR and AR

Short description:

XR is adding a new dimension to UI and UX in VR and AR. We need to create reality-based interactions in these mediums. Understanding the different dimensions and limitations is crucial. Adaptive UI and mind reading are also emerging trends.

And XR is adding a new dimension to that. So how have we adapted? We took familiar forms. Let's say you have a sign-up form: we have this form hovering in thin air in VR. Realistic? Not really. It's something we've adopted from the 2D medium, right? Or in AR, we have arrows pointing in different directions that completely break immersion. But we have them because we adopted what we knew, and we haven't invented something new.

So I think it's more crucial to create some sort of reality-based interactions in VR and AR. Think about logging in inside VR: currently you have a login form with a hovering keyboard, you type your username and password on it, and you get inside. But is that something you would see in reality? Not really, right? It's more realistic to have some kind of password or key that you place on a spot, or a key you turn inside a door, and it lets you through. Those are reality-based interactions. So we need to understand our reality in order to create these interactions, and the medium is completely different.

Now, in VR there is another dimension: something can happen behind the viewer. I'm looking at the camera, but something is happening behind me. So I cannot use color theory to make an amazing call-to-action button animation; I need to use different things like haptics, sounds, maybe slowing time, and so on. There is also adaptive UI. Adaptive UI is something that is used on the web, and the idea is that the UI learns from what you are doing with it. So forms learn and adapt. You can google that; it's kind of a new trend. And another thing that I propose is actual mind reading. And, yeah, obviously I cannot literally read your thoughts, right? But to some degree I can. And I want to ask you: what is this on the slide? Obviously we're online, so you can answer in chat. I'll pause. It's the known universe.

3. Understanding Neurons and EEG Headsets

Short description:

These dots look like clusters of galaxies, but they actually represent neurons in our brains. Neurons come in pairs, with an excitatory neuron releasing glutamate and creating a dipole mechanism. The resulting potential change can be measured by electrodes on our skull. To analyze it further, we use EEG headsets, such as the Muse, which helps with meditation.

And all these dots are clusters of galaxies. And it looks amazing. But what is this other image? It looks pretty much the same, right? But it's actually neurons in our brains. So, we are the universe. And we should act accordingly.

So, how do neurons in our brain work? Neurons come in pairs. There is an excitatory neuron, which releases glutamate and creates a dipole mechanism. Basically, you have a plus and a minus, and it acts like a battery. So you have a potential change between different neurons.

This creates a potential change that can be measured by electrodes on our skull. And if you measure your brain, it looks like this. This is the wake state; this is the sleep state. You can see they're different, right? But it's sort of random-ish. So we need to analyze it to figure out what all of this means.

To do so, we will use EEG headsets. There are lots of consumer and research versions of EEG headsets, and the idea is to put electrodes on your skull and, based on that, measure the potential change under the skull. Research EEGs look like this, and they're quite costly. But there are consumer ones, and I actually have one here. It's called Muse. It's a nice product that helps you with meditation: if you're meditating, it helps you to focus and so on. And, yeah, I will read my brain waves, and you will see what they look like. It's fairly cheap; it has only about 5 electrodes and that's about it. But it's good for our example.

4. Connecting to Muse via Bluetooth

Short description:

Now, I don't need to open my skull and plug it in; I can connect using Bluetooth. We're talking about experiments, right? It's a thought experiment, literally, but also an experiment in where technology will lead us. I can measure, to some degree, what's happening in my brain. I created a little app here. Before connecting to Muse, I want to see all the readings from the headset. I use the muse-js library, which exposes the readings as an RxJS stream that I can subscribe to.

So how do we connect this thing to our brain? Now, I don't need to open my skull and plug it in. I can connect using Bluetooth, and specifically we will use Web Bluetooth. I can use it by accessing the navigator.bluetooth object and calling requestDevice, where I filter for the Muse service. Then I just connect and read some attributes over Bluetooth.
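
As a rough sketch, the pairing step described here might look like this in plain Web Bluetooth; the 0xfe8d service UUID is the one the muse-js library filters on, so treat the details as an approximation:

```js
// A minimal sketch of pairing with a Muse headset over Web Bluetooth.
// Assumption: 0xfe8d is the Muse GATT service UUID (it's what muse-js uses).
const MUSE_SERVICE = 0xfe8d;

async function pairWithMuse() {
  // requestDevice must be triggered by a user gesture, e.g. a click handler
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [MUSE_SERVICE] }],
  });
  // Connect to the GATT server to read services and characteristics
  return device.gatt.connect();
}
```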

Now, the support is not quite there yet, right? We see it in Chrome and, for some reason, Opera, but the rest don't support it. But we're talking about experiments, right? It's a thought experiment, literally, but also an experiment in where technology will lead us. So let's look at the demo. I have this playground here, and I can pair with my headset, and you will see the brainwaves coming through. These are my actual brainwaves. As you can see, when I talk, the waves change. When I blink, you see these tiny voltage spikes, right? If I do something like this, you see higher spikes. So I can measure, to some degree, what's happening in my brain. This is pretty nice.

But what do I do with this data? So I created a little app here. Before connecting to Muse, let me show you what I want to do: I want to see all the readings from this headset. I will subscribe to the readings and just console.log them. Now, I use the muse-js library, and this library exposes the readings as an RxJS stream that I can subscribe to, so I just log whatever muse-js shows me. So let's connect and see what we have here.
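
With muse-js, subscribing to the raw readings and logging them, as in the demo, might look roughly like this; each reading carries an electrode index and a batch of samples in microvolts:

```js
import { MuseClient } from 'muse-js';

const client = new MuseClient();

async function logReadings() {
  await client.connect(); // opens the Web Bluetooth pairing dialog
  await client.start();   // starts streaming EEG data
  // eegReadings is an RxJS stream; just log whatever comes through
  client.eegReadings.subscribe((reading) => {
    console.log(reading.electrode, reading.samples);
  });
}
```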

5. Measuring Brain Waves and Blink Detection

Short description:

To measure things more precisely, we use a bandpass filter to cut out frequencies and focus on the spikes. The signal is then divided into epochs to get a time reference. By applying a fast Fourier transform, we can convert the data to the frequency domain and recognize different brain waves. Each wave represents a different state of mind, such as sleep, heightened perception, or relaxation. Before exploring ML, I will demonstrate how to subscribe to alpha wave readings and detect blinks using filtering techniques.

And as you can see, similar to the graph before, the data is quite noisy, so we can't do much with it as it is. So the question is how we measure things more precisely. To measure more precisely, we will use a bandpass filter: on the graph, we cut out these frequencies and keep only the spikes.

Then we need to cut all of this into epochs, which are basically timeframes, because we want a time reference: if there is a spike within a specific amount of time, that's probably a blink, right? So we cut the signal into epochs, and we also need to pass it through a fast Fourier transform. The data we get is in microvolts, and we want to convert it to the frequency domain, so we apply the FFT; instead of the raw data, we then see the different frequencies.
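
Putting the whole pipeline together (bandpass filter, epochs, FFT), a sketch using muse-js with the @neurosity/pipes operators could look like this; the parameter values are illustrative, not necessarily the ones used in the talk:

```js
import { MuseClient, zipSamples } from 'muse-js';
import { bandpassFilter, epoch, fft, powerByBand } from '@neurosity/pipes';

const client = new MuseClient();

async function streamBandPowers() {
  await client.connect();
  await client.start();

  zipSamples(client.eegReadings)
    .pipe(
      // 1. Bandpass filter: keep roughly 2-50 Hz, cutting DC drift and mains noise
      bandpassFilter({ cutoffFrequencies: [2, 50], nbChannels: 4 }),
      // 2. Epochs: sliding windows of 1024 samples, emitted every 100 ms
      epoch({ duration: 1024, interval: 100, samplingRate: 256 }),
      // 3. FFT: microvolts over time -> power per frequency bin
      fft({ bits: 8 }),
      // 4. Sum the bins into delta/theta/alpha/beta/gamma bands
      powerByBand()
    )
    .subscribe((bands) => console.log('alpha power:', bands.alpha));
}
```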

Now we can recognize different brain waves based on these frequencies. They are divided into gamma, beta, alpha, theta, and delta, and each one corresponds to a different state of mind. For instance, delta is sleep, loss of body awareness, repair, and so on. Gamma is heightened perception, learning, problem solving, and cognitive processing. As you can see, they are not super distinctive; each covers a broad range. Beta is generally being awake, and alpha is being relaxed. So I can measure the alpha state and see if I'm relaxed. We can react to brainwave spikes, and we can also feed this data into ML.
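
For reference, these are the commonly cited approximate frequency ranges for the bands mentioned above (exact boundaries vary between sources), along with a small helper to map an FFT bin to a band:

```js
// Approximate frequency ranges (Hz) for the brain wave bands named above.
const BANDS = {
  delta: [0.5, 4],  // sleep, repair, loss of body awareness
  theta: [4, 8],    // deep relaxation, drowsiness
  alpha: [8, 12],   // relaxed, calm
  beta:  [12, 30],  // generally awake, active thinking
  gamma: [30, 100], // heightened perception, problem solving
};

// Map a single FFT bin frequency (in Hz) to a band name.
function bandOf(freqHz) {
  return Object.keys(BANDS).find(
    (name) => freqHz >= BANDS[name][0] && freqHz < BANDS[name][1]
  );
}

console.log(bandOf(10)); // 'alpha'
```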

But before doing some amazing stuff with ML, I want to show you something. I want to subscribe to focus, which basically gives me the alpha wave readings. I also want to subscribe to blinking. To detect a blink, I take the readings and filter them, keeping only the reading for the electrode above my left eye. I take its maximum, which is the spike. Then I use the RxJS switchMap operator to isolate the spike: I don't really care about the rest of the data, just the spike. If there is a spike, that's what I return. Here's how it looks; I just need to also remove this one. Let's connect again, and we'll see what happens. If I blink, you can see me blinking here.
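
A sketch of that blink detector, following the approach described; AF7 is the muse-js name for the electrode above the left eye, and the 500 threshold is a guess to tune per headset:

```js
import { MuseClient, channelNames } from 'muse-js';
import { merge, of, timer } from 'rxjs';
import { filter, map, switchMap } from 'rxjs/operators';

const client = new MuseClient();
const LEFT_EYE = channelNames.indexOf('AF7'); // electrode above the left eye
const BLINK_THRESHOLD = 500; // spike height in microvolts; tune to your fit

const blinks$ = client.eegReadings.pipe(
  // Keep only readings from the left-eye electrode
  filter((reading) => reading.electrode === LEFT_EYE),
  // Reduce each reading to its peak absolute voltage (the spike)
  map((reading) => Math.max(...reading.samples.map(Math.abs))),
  filter((max) => max > BLINK_THRESHOLD),
  // switchMap collapses each spike into a single on/off blink event
  switchMap(() => merge(of(true), timer(300).pipe(map(() => false))))
);

blinks$.subscribe((isBlinking) => isBlinking && console.log('blink!'));
```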

6. Feeding Data to Machine Learning

Short description:

We want to feed this brainwave data to machine learning. I will connect to my Muse headset, get all the data, pass it through the necessary filters, and add it as samples to a KNN classifier. By classifying the data, I can determine which card I'm thinking about. The main topics of interest are VR/XR and IoT/AI.

So, as you can see, sometimes it misses, and that's because there's a threshold, and maybe I didn't put it really close to my skull. So, yeah, that's the blinking. Now we want to feed this data to machine learning. To do so, I will go to my app.js and add my prediction panel to predict some stuff. Here I have three cards: one is web and mobile, another is VR/XR, and another is IoT/AI. What I will try to do is connect to my Muse headset, get all the data, pass it through all the filters that I need, and then add it as samples to a KNN classifier, which is a machine learning algorithm, and start classifying which card I'm thinking about. So, let me record these waves really quick. I'll click on this button while looking at web and mobile. Now VR/XR, and now IoT/AI. Now, if I click on classify, I can switch by just looking (my hands are here), so I can just look at different cards and switch between them. And, yeah, as you can see, the main topics I'm interested in are VR/XR and IoT/AI.
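
The classification step could be sketched with TensorFlow.js's KNN classifier; here `bandPowers` stands in for whatever feature vector the filtered pipeline produces, and the card labels are just illustrative strings:

```js
import * as tf from '@tensorflow/tfjs';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();

// While looking at a card, record the current readings as a labeled sample.
function addSample(bandPowers, card /* 'web' | 'vr-xr' | 'iot-ai' */) {
  classifier.addExample(tf.tensor(bandPowers), card);
}

// Afterwards, classify live readings to find which card is being looked at.
async function classify(bandPowers) {
  const result = await classifier.predictClass(tf.tensor(bandPowers));
  return result.label; // e.g. 'vr-xr'
}
```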

7. Mind Control with AR and Learning Resources

Short description:

So, this is quite cool. I have a bigger demo here behind an enable-drone flag, and I have a drone here, this tiny fellow. I will connect to my Muse headset, connect to my drone, record brainwaves, and start classifying. The main takeaway is that the future is already here and you are the ones to build it. Thank you. We're going to bring Vlad back to the stage for one quick question. After seeing something like this, where does somebody even start if they want to learn this stuff for themselves? You can broaden your horizons by learning React Native and AR, or React, Bluetooth, and drones. There are resources like Egghead.io and workshops available. Feel free to reach out to me on Twitter for guidance and learning materials. Thank you, Vlad.

So, this is quite cool, but if it works, I will bring another level of coolness here. I have a bigger demo here behind an enable-drone flag, and, yeah, I have a drone here, this tiny fellow. It only occasionally works, so let's see if it works this time. First I will connect to my Muse headset, trying not to blink, and then I will connect to my drone. Okay, so here we have the drone, and hopefully you can see it. I will record brainwaves and start classifying. Now, I hope it's in the camera view. Okay, now I will try to move it just by looking at it, and let's land it. And it fell. I'm not sure if it was in the view, so let me try to put it in the view again. Now it's probably here, and again, I'm just moving it with the power of my thought. So that was quite cool. And what is the purpose of all of this, right? Why are we flying drones with our minds? Why are we using these devices? They are not that reliable, and we have Web Bluetooth support only in Chrome. The main reason is that we are the ones who make the rules, and we break the rules and invent new things, right? My main takeaway from this talk is that the future is already here, and you are the ones to build it. Thank you.
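
The talk doesn't show the drone code itself, but the glue between the classifier and a drone could be as simple as this sketch; the drone object and its methods are purely hypothetical, not a real library:

```js
// Hypothetical drone interface: move/land here are illustrative only.
async function steerByThought(drone, bandPowers) {
  const label = await classify(bandPowers); // KNN classifier from above
  if (label === 'vr-xr') {
    await drone.move('left');
  } else if (label === 'iot-ai') {
    await drone.move('right');
  } else {
    await drone.land();
  }
}
```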

All right. That was incredible. I didn't think that we were going to see somebody fly a drone with their mind this early in the morning. Unfortunately, we don't have much time, but we're going to bring Vlad back to the stage for one quick question. And then we're going to move on to our next session.

So Vlad, after seeing something like this, the mind control, the AR, where does somebody even start if they want to learn this stuff for themselves? I mean, the whole point of this talk was that you can change the world, basically, right? You are the one who can change everything. So you need to decide what you want to learn, technology-wise. Do you want to get more experienced just in React, or do you want to broaden your horizons? For instance, if you want to do React Native and AR, or React with Bluetooth and drones, you can learn from different resources. There's Egghead.io, which is an amazing website where I will also be doing courses. Actually, I will be doing a workshop on React Native and AR pretty soon. I created one for React Summit, so I will send the link in the community channel; if you want to get into this workshop, you can do so. Otherwise, I also have a bunch of talks, and I have a YouTube channel where I streamed about VR, and I'll be recording a bunch of stuff on this topic, because I like to teach these things. That's also why I started a Twitch channel. And yeah, those are some of the places, but obviously there are lots of places to learn. If you really want to get into this, just DM me on Twitter and let me know what you want to do as far as technology is concerned, and I can probably direct you to learning materials and free resources and so on. Excellent. Well, Vlad, thank you so much. We really appreciate it. I wish we had more time for Q&A.

This SvelteKit workshop explores the integration of 3rd party services, such as Storyblok, in a SvelteKit project. Participants will learn how to create a SvelteKit project, leverage Svelte components, and connect to external APIs. The workshop covers important concepts including SSR, CSR, static site generation, and deploying the application using adapters. By the end of the workshop, attendees will have a solid understanding of building SvelteKit applications with API integrations and be prepared for deployment.