Build a 3D Solar System with Hand Recognition and Three.js

We live in exciting times. Frameworks like TensorFlowJS allow us to harness the power of AI models in the browser, while others like Three.js allow us to easily create 3D worlds. In this talk we will see how we can combine both, to build a full solar system, in our browser, using nothing but hand gestures!

36 min
16 Jun, 2022

Video Summary and Transcription

This Talk explores the use of TypeScript, Three.js, hand recognition, and TensorFlow.js to create 3D experiences on the web. It covers topics such as rendering 3D objects, adding lights and objects, hand tracking, and creating interactive gestures. The speaker demonstrates how to build a cube and a bouncy box, move objects with flick gestures, and create a solar system with stars and planets. The Talk also discusses the possibilities of using hand gestures for web navigation and controlling websites, as well as the performance limits of these technologies.


1. Introduction to TypeScript demo

Short description:

Let's make more sense of TypeScript by using TypeScript. Let's go to the demo. Hi, everyone, welcome back from lunch.

Let's make more sense of TypeScript by using TypeScript. Let's go to the demo. Hi, everyone, welcome back from lunch. Hi to everyone watching the demo. So our talk is going to be a little bit nicer, a little bit more fun. This is the slot after lunch. It's going to be lighter. I want to actually try to start with something. Let's see. Let me try and do that. Wait. Uh-oh. You know it worked when... No. Wait a second. It always works ten times before the show. Okay. Well, and I can just rotate it with my hands. So hello, JS Nation. I hope the other demos will work better.

2. Introduction to Space-themed Talk

Short description:

Let me introduce myself. I'm a front-end architect and analog astronaut. Let's switch to the main topic - space-themed. We'll discuss why we talk about Three.js and hand recognition. It's important to understand the technologies we know and explore new ones. I'll share a Netflix map of technologies to learn. I chose to focus on the metaverse and creating 3D experiences. Hand recognition is crucial for navigation and enables cool interactions like playing Beat Saber. Let's dive in!

So let me start a little bit about myself. I'm a chief front-end architect and web enthusiast. And I'm also an analog astronaut for the Austrian Space Forum. I do analog missions around the globe: a month of isolation in the desert in order to test space equipment and space missions.

So with that, let's switch the talk over to the main topic of the conversation, which is space-themed. And the first question that you need to ask when you see the topic of the talk is: why? Why talk about Three.js and hand recognition? Why combine everything together? Why not do another talk about ES modules, or another talk about performance in JavaScript? It's because of a great saying: if I had eight hours to chop down a tree, I'd spend six of them sharpening the axe. That's the usual point about technology: you need to understand the technologies you know in order to use them better. But I like this quote better: the universe is much more complicated than you think. So don't just stay with the technologies you know. Try to expand, try to browse other technologies and other aspects.

So, for that, let's do a quick 20-minute in and out of the topic. What happens if you want to build a side project? Like myself, okay? I work with JavaScript all the time, work with React all the time. I wanted to experiment a little bit with different technologies, so I go to the web and I look for libraries to use, I look for tools to use, and there's a plethora of things to learn. You can either learn how to write in Solidity or Unity or TensorFlow or whatever you want, and you just get confused. You don't really know what to choose. And then what do we do? We just turn on Netflix, right? Okay, I'm not going to learn anything new. But we can use Netflix as a method to map the technologies that we want to learn, right? So, I give you the Netflix map of the technologies that I wanted to dive into. In the top line you have the things that are recommended for me as a JavaScript developer, like performance and hooks and tricks and MobX and Redux. In the second line there's something that's a little bit deeper, a little bit more complex, like architecture, service workers, the metaverse. And there's a line of whole shows that we need to avoid, like NPM list and logger and missing reference kit. Yeah, so, I chose to learn about the metaverse because I heard about it, and I said okay, I want to get somewhat deeper into it. So, when I talk about the metaverse, I don't talk about those scam things, the NFTs, the things that nobody understands. I'm thinking about this: how to create 3D experiences, right? This is a demo of a virtual H&M store, and those are the things that I wanted to create. And in the metaverse, or those kinds of virtual reality or AR experiences, you know that a hand recognition interface is key, because this is what you use in order to navigate. And you can do really cool things with it, right? You can, like, play Beat Saber like that. Don't forget to clean up after yourself.

3. Building 3D Worlds with Hand Gestures on the Web

Short description:

You can create beautiful 3D worlds using hand gestures on the web. Technologies like Unity, Three.js, TensorFlow, and PyTorch enable this. However, building a rotating cube with raw WebGL code requires complex 3D theory and WebGL knowledge. Thankfully, Three.js abstracts this complexity and provides an easy interface for developers.

This is a demo by push metrics. And you can do super cool things once you have a hand recognition interface. So, I wanted to create beautiful 3D worlds using only hand gestures on the web. And if I isolate just the important parts: 3D worlds, hand gestures, on the web. If I look at the technologies that allow me to do that: for 3D worlds you can use Unity or Three.js; for hand gestures, TensorFlow and PyTorch in the backend; and on the web we have TFJS, which we're going to talk about soon.

So, say I want to build some 3D scene. You see this thing? It's rendered by Unreal Engine 5. This is a 3D rendered scene, not an actual movie; it's a demo for The Matrix. That's super impressive. And we as developers think, okay, we'll just drop some lines, right? Okay, create a 3D landscape, we have libraries for that, and we'll create this beautiful 3D scene. The problem is that there are a lot of tutorials out there that explain how to build a rotating cube. You write a lot of code, and then there's a rotating cube, and then the last paragraph of the tutorial is: okay, continue from there.

So, okay, it's not like in this talk I'm going to teach you much more than how to build a cube, but at least that's something we want to go deeper into. And if you want to code something which is a little bit more complex, then you need to learn 3D theory, work with vertices and pixels, and write some WebGL code, which looks like that. And the thing about WebGL is that you write all of this code, and in the end you get... a rotating cube. So, actually building a rotating cube with raw WebGL code is not easy, and we need something to abstract that. We need some way for us as developers to write this without having to have a Ph.D. in 3D math.

So, I didn't give up. Giving up is not something that I wanted to do. There's Three.js, Three.js to the rescue. For those of you who don't know Three.js, it's a library that abstracts all of the code that you've just seen and lets us build the same thing with an easy interface. On the Three.js demo page you can see a lot of examples. Most of them, or all of them, are really cool. I won't go into all of them. You can do really cool stuff, like this website where you take 3D models and then you render them. This is the web.

4. Rendering 3D Objects with Three.js

Short description:

Render 3D objects in the web and change the angle. Bruno Simon's portfolio page is a great example. He also offers a highly recommended Three.js course. GitHub Copilot makes learning easier but can be frustrating. Three.js requires a scene, camera, and renderer. Create a new scene, define a perspective camera, and set the degrees, near field, and far field. Render the scene on a canvas and project it onto a 2D screen.

Render them into the web and you can try to change the angle of things. We'll see it in a second. Another example is Bruno Simon's portfolio page, which is all rendered in 3D. That's super amazing. I saw it a few years ago and it blew my mind. It's playable. It's interactive. It's really cool. And Bruno Simon also has a Three.js course called Three.js Journey, super recommended, where he starts from the very beginning: how to play with cameras, with lights, with objects, with whatever you want, and then you go deeper into the technology.

When I started to write Three.js, I had GitHub Copilot activated. I wrote four words and Copilot completed the entire thing. It was super easy and super frustrating trying to learn a new technology when you have Copilot just completing everything. So Copilot is going to take over our jobs, that's pretty much certain.

So let's talk a little bit about Three.js. Okay? Let's try to break it into how it works. So in order to render something in 3D, you need to have three main components. You have to have the scene where things are happening, you have to have some sort of a camera, so, someone who is watching the scene, and you have to have a renderer because eventually you're taking a 3D scene and trying to project it into a 2D screen, so someone needs to know how to do that. Right?

So if we'll take those things and we'll try to say, okay, what do we need for every one of them? So 3js gives us a really nice interface for that. To create a scene, you create a new scene, that's it. Camera is a little bit more complicated but not as complicated. You create a perspective camera, that's for our purposes, and you need to define the degrees, right, of the camera. You need to define the near field, so everything that is nearer than the near field won't be rendered, and the far field, which is everything that is farther than the far field won't be rendered. And eventually, you need to have the renderer that renders, takes the scene, takes the camera, and says this is the camera, this is what I will display on the screen. So, again, super easy. You just define a place in your site or in your page that you want this 3D to be in, sorry, the canvas. Then you put the scene. Then this is how the camera looks and if we will try to see how it projects things, then you can see that you have the 3D scene on the right and you can see on the left how it is projected. This is the 2D projection. I am touching this point because we are going to get back into the other direction soon. This is how it looks and eventually you render the scene.

5. Adding Lights, Objects, and AI with TensorFlow.js

Short description:

In Three.js, you can easily add lights and objects to your 3D world. You can create cool results like a spherical object with the right map, illuminated and reflecting light. You can also load models from software like Blender. Using AI and machine learning in the web, we can interact with Three.js scenes. TensorFlow.js, a port of TensorFlow to the web, allows running AI models in the browser using WebGL.

This is the code and you will be amazed but it creates nothing. Nothing, right? Because we have a scene, we have a renderer, we have a camera, but we don't have the important things in our 3D world which are objects and lights.

Adding lights or objects is super easy in Three.js: you define them. You say, I want an ambient light, which is the light that illuminates the whole scene, or a point light that shines from a specific position. Objects, again: you define the geometry and the material of the object, and then you just place it somewhere in the scene.
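A hedged sketch of that step, continuing from the scene created above (the colors and intensities are illustrative, not the demo's values):

```js
// An ambient light illuminates the whole scene evenly...
const ambient = new THREE.AmbientLight(0xffffff, 0.4);
scene.add(ambient);

// ...while a point light shines from a specific position.
const pointLight = new THREE.PointLight(0xffffff, 1);
pointLight.position.set(5, 5, 5);
scene.add(pointLight);

// An object is geometry (the shape) plus material (the surface)...
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x44aa88 })
);
// ...placed somewhere in the scene.
cube.position.set(0, 0.5, 0);
scene.add(cube);
```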

If you do it correctly, you can have some really cool results from the beginning, right? You can have, for example, this kind of spherical object with the right map, and you can have it illuminated and reflecting the light, and that's pretty cool. Yeah. Okay. You can load models from any software like Blender into Three.js, and I think there's actually a talk about this tomorrow. So this is really cool and I'll show you how it looks.

So, again, this is our rotating cube, but you can see that it's fully 3D rendered and I can play with the lights here. I can play with the lights and you see how the spotlights affect the lighting of the cube. So I can show you here, I can show you the spotlight. You see, these are the spotlights, and when I play with them, you see how the shadows change and the scene looks different. So this is a really very simple, very easy thing to do. And you can do even more complex things. This is from the Three.js demo page. You can play with a material and have a reflective material that reflects the scene around it, and then you can get to those results, which are pretty nice. Okay. So this was Three.js, which is nice, interesting. We already knew how to do that.

But what interested me more was how we can use AI, or some sort of machine learning, in the web in order to interact with this Three.js scene. What triggered it was a tweet from Charlie Gerard, where she demonstrated a Figma plugin she created to control Figma with hand gestures, which I thought was pretty cool. So I wanted to dive a little bit into this technology.

So when we talk about AI or machine learning, we're talking about libraries like TensorFlow. TensorFlow is a framework for AI and machine learning in the backend, in Python. And there's a port of TensorFlow to the web called TensorFlow.js, by the good people at Google, which can run AI models in the web, in the browser, and run inference on them. How does it work? With WebGL. The villain from the first act now powers TensorFlow.js: it runs on your graphics card, in the web, and it's pretty amazing that it can do that.

6. Exploring TensorFlow.js and Hand Tracking

Short description:

There's a lot you can do with TensorFlow.js, like playing Web games and training models to identify objects. TensorFlow.js can also run on mobile and weaker devices using the Wasm (WebAssembly) backend. Hand tracking is made easy with the hand pose model, which returns an array of recognized points that can be used for various interactions.

There's a lot of things you can do with TensorFlow.js. I gave a talk on how you can use TensorFlow.js to play Web games for you. So it learns how to play and win the games for you, which is nice.

There are a lot of things you can do with TensorFlow.js. And there's also a project called Teachable Machine, from the people at Google, which you should definitely check out, where you can train a model in the web to identify all sorts of things. Remember hot dog, not hot dog? Now you can do it with dogs, which is pretty cool.

Okay. It's like the evolution of TensorFlow.js. Because running on the graphics card is not something you can do on mobile devices or on weaker devices, it also knows how to run in Wasm, with WebAssembly. And it gives us dedicated models for face recognition or full body recognition, which is pretty nice. Those we can run in the web. We just need to pass a camera feed into MediaPipe and then you have this output.
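Switching TensorFlow.js over to WebAssembly is a small change. A minimal sketch using the official WASM backend package, in an ES module with top-level await:

```js
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' backend

await tf.setBackend('wasm');
await tf.ready();
console.log(tf.getBackend()); // 'wasm'
```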

But we're interested in hand tracking, right? So they have a model called hand pose, where you just pass a video feed, it analyzes it frame by frame, and then it returns you something. It's super easy to do that: you create the detector, fire it up, and then you say, okay, estimate the hands from the video of the camera. Okay. Nice. What does it return? It returns an array of all the points it has recognized in the hand. For every finger you get four or five key points, and there's an array, and it looks like that. So the image from the camera turns into these key points, and then you can use these key points to draw something on the canvas, or interact with your app, or whatever you want, which is pretty nice. And you can superimpose those points on the video, or not. So let's see that in action. You see, I'm superimposing those points on the video. I don't have to do that. I can just have the video and have those points somewhere else. Okay? So this is what we can do.
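Here is roughly what that looks like with the TensorFlow.js hand-pose-detection package. A sketch, not necessarily the exact API version used in the demo; it assumes a `video` element already playing the webcam feed, and the keypoint values in the comment are made up:

```js
import * as handPoseDetection from '@tensorflow-models/hand-pose-detection';

// Fire up the detector once...
const detector = await handPoseDetection.createDetector(
  handPoseDetection.SupportedModels.MediaPipeHands,
  { runtime: 'tfjs' }
);

// ...then estimate hands from the camera's <video> element, frame by frame.
const hands = await detector.estimateHands(video);

// Each hand carries an array of named keypoints, roughly like:
// { x: 312, y: 164, name: 'index_finger_tip' }  (values made up)
console.log(hands[0]?.keypoints);
```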

7. Building a Cube with Hand Gestures

Short description:

Let's build a cube using hand gestures. The best gesture is a snap. We need to identify the snap pattern by checking the finger positions. We convert the 2D coordinates to 3D to create a scene. Creating a box is easy, just position it in the coordinates. Let's see it in action.

Magic. Okay. So we have this amazing power of controlling the web with our hands, so let's build a cube with this power. Okay, we can use the coordinates from the hand in order to create a new cube. Remember we have the coordinates of the fingers.

But what can be the best gesture to create something? If there are Marvel fans around here, they're going to be annoyed, but this is actually the best gesture to create something, right? Just a snap. And if we talk about a snap, let's look at what it means to have a snap. Because we only get the coordinates, remember, we only get the coordinates from MediaPipe, but we need to identify the snap pattern. If we take this snapping movement, we can divide it into two parts: one where the fingers are closed, one where the fingers are far away. If you snap right now, you'll see that first the thumb and middle finger are closed, and then the thumb tip ends up above the middle fingertip. Okay? That's important. If we write some sort of pseudocode for that, we first detect the distance between the thumb tip and the middle fingertip, and then we check that the thumb is above the middle tip. Again, in code it's super easy: you detect that the thumb and the middle finger are closed, then you check that they're far away, but you also need to check that the thumb is above the middle finger. Okay?
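A hedged sketch of that two-phase snap detector, operating on the keypoints returned by `estimateHands` above. The keypoint names are MediaPipe's; the 30-pixel threshold is an invented trial-and-error value:

```js
// Distance between two keypoints, in pixels.
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);

// Look up a keypoint by its MediaPipe name.
const getPoint = (hand, name) => hand.keypoints.find((k) => k.name === name);

let fingersWereClosed = false;

function detectSnap(hand) {
  const thumb = getPoint(hand, 'thumb_tip');
  const middle = getPoint(hand, 'middle_finger_tip');

  if (dist(thumb, middle) < 30) {
    // Phase 1: thumb and middle fingertips are closed together.
    fingersWereClosed = true;
  } else if (fingersWereClosed && thumb.y < middle.y) {
    // Phase 2: fingers apart, and the thumb is ABOVE the middle fingertip
    // (smaller y means higher up, since screen coordinates grow downward).
    fingersWereClosed = false;
    return true; // snap!
  }
  return false;
}
```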

And then you have to do something a little bit trickier, because you have those 2D coordinates, but we want to create something in a 3D scene. So we need to convert the vector from 2D to 3D. We'll see it in a second. It's not super easy, but it doesn't matter. So let's talk about how to move from 2D to 3D. We saw that our camera can project 3D into 2D, so to go the other way, we take the projection of the camera, we reverse it, we normalize the coordinates, and that's roughly it. It doesn't really matter exactly how to move from 2D to 3D, because you can do what I did and just take it step by step. 2D to 3D is not fun, but there are good people around the web that already did this math for you, and the only thing you need to do is copy and paste. So how do we create a box? Super easy, we have the coordinates, right? We just create the box like we saw earlier, geometry, basic material, but we position it at the coordinates that we have: position it in X, Y, and Z. Let's see it in action. And this time, it better work. So let's see if it will identify my snap now. Wait. (Applause) Now, I have to say, the reason it's not working smoothly is because of this guy, which confuses the camera.
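For reference, that copy-and-paste unprojection recipe plus the box creation might look like this. A sketch under assumptions: the box is placed an arbitrary 5 units in front of the camera, and `scene`/`camera` come from the earlier setup:

```js
// Screen pixel -> world position: normalize to NDC, run the camera's
// projection in reverse, then walk `depth` units along the resulting ray.
function screenToWorld(x, y, camera, depth = 5) {
  const ndc = new THREE.Vector3(
    (x / window.innerWidth) * 2 - 1,   // 0..width  ->  -1..1
    -(y / window.innerHeight) * 2 + 1, // 0..height ->   1..-1 (y flips)
    0.5
  );
  ndc.unproject(camera); // the camera projection, reversed
  const dir = ndc.sub(camera.position).normalize();
  return camera.position.clone().add(dir.multiplyScalar(depth));
}

// Create a box at the snap position, like the cube we saw earlier.
function createBoxAt(x, y) {
  const box = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshBasicMaterial({ color: 0xff8800 })
  );
  box.position.copy(screenToWorld(x, y, camera)); // position in X, Y and Z
  scene.add(box);
  return box;
}
```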

8. Creating a Bouncy Box with GreenSock

Short description:

Let's make the box creation more interesting by using GreenSock library for animations. We can use GreenSock's easing function to create a bouncy box effect. By scaling the box up to 110% and then reducing it to 90% with GreenSock, it will appear as if it was created from thin air.

So, yeah. Anyway, it creates the box, which is pretty nice. So, you did this. Which is nice. But you probably noticed it's a little bit flat, the creation of the box.

So let's do something a little more interesting. Let's make it pop with GSAP, the GreenSock animation library. You can do really amazing things with it, like this. But the most interesting thing we want from GreenSock is its easing functions. GreenSock gives you easing functions where you can say: I want some value to go elastically to another value, or bounce, or whatever you want. So, to make a bouncy box, when we create it, we just scale it up to 110% and then have GSAP elastically reduce it to 90%, so it looks like it was created from thin air. This is the bouncing box, remember that.
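A minimal sketch of that pop effect (the duration and easing parameters are assumed values, not the demo's exact tuning):

```js
import { gsap } from 'gsap';

// Pop: start at 110%, let GSAP elastically settle at 90%,
// so the box looks like it appeared out of thin air.
function popIn(mesh) {
  mesh.scale.set(1.1, 1.1, 1.1);
  gsap.to(mesh.scale, {
    x: 0.9,
    y: 0.9,
    z: 0.9,
    duration: 1,
    ease: 'elastic.out(1, 0.3)', // GreenSock's elastic easing
  });
}
```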

9. Moving Boxes with a Flick Gesture

Short description:

Now, let's explore how to move the boxes using a flick gesture. The anatomy of a flick involves closing the thumb and index fingertips and then moving them far apart. To identify a moveable object, we check the distance between the thumb and index fingers and iterate through all elements in the scene. Once we have the flick candidate, we save the thumb location and detect when the flick is released. With the direction vector, we can use Gsup to animate the object's movement.

And now we want some way to move it, to move the boxes around. So, again, we want to use a flick. What's the anatomy of a flick? Just do it and you'll see. First, the thumb and index fingertips are closed, then the thumb and index fingertips are far. That's a flick.

Again, pseudocode, super easy. Distance is small. Distance is far. Magic numbers all around, but it's just trial and error.

Another thing we need to do with a flick that we didn't need to do with a snap is identify if something is going to be moveable, because we need this object to move somewhere. So, in order to do that, first we check that the thumb and the index finger are close, and then we iterate over all the elements in the scene and check, for every one of them, their distance to the flick candidate. Again, get vector from X and Y, that's the same 2D-to-3D conversion as before. It's not that difficult.

And once we identify the flick candidate, we need to save the thumb location. Why? We'll see in a second. But that's the first thing we do with the flick. Then we need to identify that the flick is being released, so the distance is large again. And now we get the vector, the direction of the flick. So, we have the flick candidate, we have the saved thumb location, and we compute the vector, which is, again, simple Euclidean math. And we'll use our friend GSAP here to animate the move of the Earth, or whatever we use it on. Like that.
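A hedged sketch of that whole flick pipeline, reusing the `dist`, `getPoint`, and `screenToWorld` helpers from the earlier sketches. The pixel thresholds and the 0.02 scale factor are invented trial-and-error values:

```js
const PINCH = 25;   // "fingers are close" threshold, px (trial and error)
const RELEASE = 60; // "fingers are far" threshold, px

let flickCandidate = null;
let pinchStart = null;

function detectFlick(hand) {
  const thumb = getPoint(hand, 'thumb_tip');
  const index = getPoint(hand, 'index_finger_tip');
  const d = dist(thumb, index);

  if (d < PINCH && !flickCandidate) {
    // Pinch closed: find the scene object nearest the pinch point...
    const world = screenToWorld(thumb.x, thumb.y, camera);
    flickCandidate = scene.children
      .filter((obj) => obj.isMesh)
      .sort((a, b) => a.position.distanceTo(world) - b.position.distanceTo(world))[0];
    // ...and save the thumb location for later.
    pinchStart = { x: thumb.x, y: thumb.y };
  } else if (d > RELEASE && flickCandidate) {
    // Flick released: the direction is where the thumb travelled.
    const dx = thumb.x - pinchStart.x;
    const dy = thumb.y - pinchStart.y;
    gsap.to(flickCandidate.position, {
      x: flickCandidate.position.x + dx * 0.02,
      y: flickCandidate.position.y - dy * 0.02, // screen y is flipped vs world y
      duration: 1,
      ease: 'power2.out',
    });
    flickCandidate = null;
  }
}
```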

And let's see that in action. Ooh, a lot of boxes. Okay. I'm going to try. Let's see if it pops now. Is it? No. It works so much better on my machine.

10. Flickering Hands and Live Demos

Short description:

The flickering of the hands is caused by the background. Let me try something else. Now we try and flick it. That was a good example. Thank you.

So, you see, actually, it flickers because of this. So I'll just try. And maybe. Wait. Oh, you would have been so impressed right now. Okay. So you've seen the pop. And now when I flick it. The problem that you see here, the flickering of the hands, it's because of the background. And because it recognizes the hands that are here. So let me try and do something else. Maybe do like that. Okay. So now we try and flick it. Come on. Be impressed. Okay. That was a good example. Okay? Thank you. And live demos are hard.

11. Creating Spheres and Stars in the Solar System

Short description:

MediaPipe has modules to detect gestures, but we had to create our own continuous gesture detection. In the Solar System, we use spheres instead of cubes. Creating a sphere with Three.js involves defining its geometry, material, and texture. The texture is an image of the sphere, and additional textures can be added. By adding a topography and adjusting shininess, we can create realistic shadows and reflections. To create stars, we place points on a sphere using actual star data from our galaxy.

So you're probably thinking to yourself: okay, do I need to do all this math myself? MediaPipe does have modules, some built-in logic, to detect gestures. You can teach MediaPipe to detect this gesture or that gesture. But because we needed something continuous, we had to do it ourselves.

Okay, now let's go solar. We wanted to create a solar system, hopefully. And for a solar system, we can't use cubes anymore; we have to use spheres. So how do we create a sphere with Three.js? First of all, we create the sphere geometry, where we can define how many triangles the sphere will consist of. Then we have to have the material of the sphere, the texture. So how does it look? The texture is actually the image of the sphere, and you can also put additional textures on it.

So we have the texture, which is how Earth looks from afar. We can also have a black and white image that describes the topography of the Earth, so how the shadows will behave. So we have a white sphere. Put a texture on it, and we have this texture, which is a little bit flat. Put on the topography, and hopefully you notice that now the mountains have shadows. You can add shininess, because the oceans reflect light differently than the land, right? So I add shininess and then I have this glare going on. And I can add another layer of clouds, which is nice.
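A sketch of that layered Earth, assuming a Phong material (one plausible way to get bump shadows and specular glare in Three.js; the texture paths are placeholders):

```js
const loader = new THREE.TextureLoader();

// Texture + topography + shininess, layer by layer.
const earth = new THREE.Mesh(
  new THREE.SphereGeometry(1, 64, 64), // radius, and how finely it's triangulated
  new THREE.MeshPhongMaterial({
    map: loader.load('textures/earth_map.jpg'),      // how Earth looks from afar
    bumpMap: loader.load('textures/earth_topo.jpg'), // black & white topography
    bumpScale: 0.05,                                 // how strongly the mountains shade
    specular: new THREE.Color(0x333333),             // oceans reflect more than land
    shininess: 10,                                   // the glare on the water
  })
);
scene.add(earth);

// The extra cloud layer: a slightly larger, transparent sphere.
const clouds = new THREE.Mesh(
  new THREE.SphereGeometry(1.01, 64, 64),
  new THREE.MeshPhongMaterial({
    map: loader.load('textures/earth_clouds.png'),
    transparent: true,
  })
);
earth.add(clouds);
```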

Now we want to create the stars. In order to create the stars, we need to create a sphere and be inside the sphere. Imagine the stars are all around you: you need the camera to be inside the sphere. So we'll use actual star data from our galaxy, and the magnitude of each star will define the size of the point that we're going to place. So we're just going to create an array of points and put them on the sphere. Again, spherical geometry.
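A hedged sketch of that star field. The `starData` shape (right ascension, declination, magnitude, in radians) is an assumption about what a star catalog would provide, and the radius is arbitrary:

```js
// starData is assumed to look like [{ ra, dec, magnitude }, ...].
function createStarField(starData, radius = 500) {
  const positions = [];
  for (const star of starData) {
    // Spherical -> cartesian, on the inside of a big sphere around the camera.
    positions.push(
      radius * Math.cos(star.dec) * Math.cos(star.ra),
      radius * Math.sin(star.dec),
      radius * Math.cos(star.dec) * Math.sin(star.ra)
    );
  }

  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));

  // Uniform point size here; sizing each star by its magnitude, as the talk
  // does, needs a custom shader or one THREE.Points group per size bucket.
  const stars = new THREE.Points(
    geometry,
    new THREE.PointsMaterial({ color: 0xffffff, size: 1.5, sizeAttenuation: false })
  );
  scene.add(stars);
  return stars;
}
```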

12. Creating Stars and the Sun

Short description:

We use an event system to create stars, making it scalable. To create the sun, we use a state machine with snap gestures. We create a point light within the sun to illuminate everything. Scaling and flicking gestures are used to enter the sun. We introduce a spin gesture by detecting finger angles and direction. The sun rotates using an animate loop.

And then we have the star field. Now, it's interesting, because we used a snap to create the boxes before, and we want to use a snap to create the stars. So we could just replace create box with create stars, right? But that's not good. We want to use some sort of an event system, and now this can be much more scalable. We can have a snap detector that only detects snaps and just publishes a snap event, and somewhere else the stars simply subscribe to the snap and create themselves.
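One minimal way to sketch that pub/sub split, using the browser's built-in EventTarget (illustrative fragments, not a complete program):

```js
// A tiny event bus: the detector publishes, features subscribe.
const bus = new EventTarget();

// In the gesture loop: the snap detector only detects and publishes.
if (detectSnap(hand)) bus.dispatchEvent(new CustomEvent('snap'));

// Somewhere else entirely: the stars subscribe and create themselves.
bus.addEventListener('snap', () => createStarField(starData));
```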

Okay, this is going to be really handy later. Creating the sun is a bit more complicated. Now we need some sort of a state machine: first snap for the stars, second snap for the sun. So what is the most complex state machine you can think of? No, we just use a switch case, because again, weekend side project. So: create the stars, then move to sun, then end of time.
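That switch-case "state machine" could look like this, reusing the `bus` from above (`createSun` is an assumed helper, per the description in the next paragraph):

```js
// "The most complex state machine you can think of": a switch case.
let snapState = 'stars';

bus.addEventListener('snap', () => {
  switch (snapState) {
    case 'stars':
      createStarField(starData);
      snapState = 'sun';
      break;
    case 'sun':
      createSun(); // assumed helper: a sphere with a THREE.PointLight inside it
      snapState = 'done'; // end of time
      break;
  }
});
```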

In order to create the sun, we create a point light that sits within the sun and illuminates everything around it. And when the sun appears, again, we'll make it pop, right: scale it to 110%, then scale it back to 90%. Then we need to flick it to the center, which is super easy with the sun, because we have events, remember? We just subscribe to flick and move the sun to (0, 0, 0), the center of the scene. And the last gesture that I wanted to introduce is the spin, to make the sun rotate on itself. So, let's look at the anatomy of a spin. You need to detect a straight finger, then you need to find the overlapping element, which is, again, easy. And then you need to detect that the finger is straight and diagonal: it needs to be straight, but the angle shouldn't be 180 degrees. And then you trigger the spin event for that element. And how do we spin? We need to identify if it's the right hand or the left hand in order to know the direction of the spin. Again, events are awesome. And then we have the animate loop that just says: okay, the sun needs to spin with some sort of a speed, so every animation frame I'll just add the speed to the sun's rotation. It looks like that. It's really, really simple.
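A sketch of the event-driven spin plus the animate loop. The event payload shape (`mesh`, `handedness`) and the speed constant are assumptions; the handedness string values follow MediaPipe's convention:

```js
// Each spinning object carries a speed; handedness decides the direction.
const spinning = new Map(); // mesh -> radians per frame

// Assumed event shape: the spin detector publishes the target mesh and
// which hand ('Left' or 'Right') triggered it.
bus.addEventListener('spin', (e) => {
  const { mesh, handedness } = e.detail;
  spinning.set(mesh, handedness === 'Right' ? 0.01 : -0.01);
});

// The animate loop: every frame, add each object's speed to its rotation.
function animate() {
  requestAnimationFrame(animate);
  for (const [mesh, speed] of spinning) mesh.rotation.y += speed;
  renderer.render(scene, camera);
}
animate();
```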

13. Creating Stars, Sun, Earth, and Mars

Short description:

You just update the rotation, and then the animation loop makes it rotate. Let's try to create the stars. Something's not working. Now let's try to create our sun. Creating Earth is super easy. We need to add Mars as well. We'll add another gesture which is rotating the entire scene with our hands. Let's see it all. And now... Good energies. Okay. Come on.

You just update the rotation, and then the animation loop makes it rotate. And again, apologies in advance.

Okay, let's try to create the stars. No. Wait. Something's not working. Oh, maybe the lanyard is making it worse.

Now let's try to create our sun. Oh... No. No. Thank you.

Okay, creating Earth is super easy. You don't need seven days, you need seven seconds, because we have this switch case: we just add Earth, and then we need to somehow put it into orbit. To put it into orbit, we'll flick it, but we need to flick it only to Y equals zero, right? We don't need to flick it to the center, just to the same plane as the sun. And then we have to have some sort of a loop to make it orbit the sun, so, again, every animation frame we'll just add to the angle of Earth.
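A minimal orbit sketch, reusing the `earth` mesh from earlier (the radius and speed are assumed values, not to scale):

```js
// Earth's orbit: every animation frame, add to the angle and place Earth
// on a circle around the sun at the origin, staying in the y = 0 plane.
let earthAngle = 0;
const ORBIT_RADIUS = 10;   // assumed distance
const ORBIT_SPEED = 0.005; // radians per frame

function updateOrbit() {
  earthAngle += ORBIT_SPEED;
  earth.position.set(
    Math.cos(earthAngle) * ORBIT_RADIUS,
    0, // same plane as the sun
    Math.sin(earthAngle) * ORBIT_RADIUS
  );
}
// call updateOrbit() from inside the animate() loop
```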

Okay, no, we won't see because we need to add Mars as well, right? We forgot about Mars. In order to add Mars, we just add it to the switch case. Again, it's super easy to do that. And we'll add another gesture which is rotating the entire scene with our hands. So in order to rotate everything, we just need to detect straight fingers. So I know that I want to do like this, like in Minority Report. So everything is straight and vertical. And then I need to detect the movement of the fingers in order to change the camera. Either rotate it or go backward and forward.
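One way to sketch that Minority Report gesture, reusing the `getPoint` and `dist` helpers: a crude straightness test per finger, then mapping hand movement to camera motion. All thresholds and the orbit factor are invented values:

```js
const FINGERS = ['index_finger', 'middle_finger', 'ring_finger', 'pinky_finger'];

// Crude straightness test: the tip is farther from the wrist than the
// middle (PIP) joint. Good enough for a weekend side project.
function allFingersStraight(hand) {
  const wrist = getPoint(hand, 'wrist');
  return FINGERS.every(
    (f) => dist(wrist, getPoint(hand, `${f}_tip`)) > dist(wrist, getPoint(hand, `${f}_pip`))
  );
}

let lastPalmX = null;

function updateCameraFromHand(hand) {
  if (!allFingersStraight(hand)) {
    lastPalmX = null;
    return;
  }
  const palm = getPoint(hand, 'middle_finger_mcp');
  if (lastPalmX !== null) {
    // Horizontal hand movement orbits the camera around the scene center;
    // a second hand could drive the dolly (moving backward and forward).
    const delta = palm.x - lastPalmX;
    camera.position.applyAxisAngle(new THREE.Vector3(0, 1, 0), delta * 0.005);
    camera.lookAt(0, 0, 0);
  }
  lastPalmX = palm.x;
}
```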

Okay, let's see it all. And now... Good energies. Okay. Come on.

14. Creating Solar System Demo

Short description:

This can take a while. Go get the sandwiches. Sorry, I don't know why it's taking so long. You can snap with me. Sun first. Then flick. Spin. Earth. Flick. Come on Mars. Elon is counting on you. Spin.

This can take a while. Go get the sandwiches.

Wait. Sorry, I don't know why it's taking so long. Yeah, it shouldn't.

Wait, wait, wait. Yeah, you can snap with me. Okay. One. Snap with me. It will help. Nice one. Come on. Could have been a really cool demo. Well, okay.

Sun first. Then flick. Spin. Spin. Earth. Spin. Flick. Back, back, back, back, back. Wait. Wait, wait, wait. Come on Mars. Come on Mars. Elon is counting on you. Come on Mars. Spin.

15. Creating Hand Gestures and Exploring Possibilities

Short description:

And... Wait. Come on. It's less weird than it looks. One last chance, Mars. Come on. Mars won't spin today. But maybe we can make it pop. Now we can use our hand to rotate. We can use the second hand to dolly the camera back and forth. This is cool. There's a guy called Oz Ramos who uses hand gestures to navigate the web and control websites. We can also create an avatar that moves according to your webcam. If this was too much, we can go back to Netflix and relax with a familiar avatar.

And... Wait. Come on. I mean. It's less weird than it looks. Okay. One last chance, Mars. Come on. And I lost my hand. Okay, Mars won't spin today. But maybe we can make it pop. Where's my hand? Anyway.

Now we can use our hand to rotate. And we can use the second hand. When it will be recognized. To dolly the camera back and forth. Right? So, this is our scene. Yeah. It was pretty fun to create it.

And where do we go? Thank you. Applause. Where do we go from here? Amazing. Yes. Okay. This is cool. There are a lot of things we can do with it. There's a guy called Oz Ramos who uses hand gestures to navigate the web and control websites, which is pretty cool. And we can do really cool things with body recognition, like creating an avatar that moves according to your webcam. If this was too much, because it's a lot of things to soak in, we can go back to Netflix and just relax with a familiar and nice avatar. And that's it.

QnA

Q&A on Performance Limits and Switching Speakers

Short description:

The future is amazing. You're awesome. Thank you very much. Thank you. Thank you. Thank you very much. Let's give one more round of applause. Now, I kindly ask our AV team to put questions on the stage. I remember the main question from the audience: what are the limits for performance? Three.js uses GPUs to render scenes smoothly. Catch Liat in the hybrid chat area or ask remotely. Now, we quickly switch to the next speaker.

The future is amazing. You're awesome. And my code is over there. It works much better when not live-demoing it. Thank you very much. Thank you. Thank you. Thank you very much. You did such a nice live demo. Let's give one more round of applause. Yes.

Now, I kindly ask our AV team to put the questions on the stage. I think we have time for maybe one of them. Do we have? Okay. Technical issues, but I remember the question. I remember the main question from the audience, which is: what are the limits for performance? Is there a chance to render something really expensive in the browser? What's your experience with that? Yeah. So there are a lot of performance considerations when you try to render something with JavaScript. But Three.js uses GPUs. So if you are able to make the GPU render the scene, it's very smooth. Actually, when I built the demo, I didn't use GPUs and it was a little bit choppy, but with GPUs it's super smooth. Okay. Cool, cool, cool. Take a chance to catch Liat in the hybrid chat area, and folks who watch us remotely, you have a chance to ask Liat as well. Yeah, now we quickly switch to the next speaker.


Check out more articles and videos

We constantly think of articles and videos that might spark GitNation people's interest, skill us up, or help build a stellar career

6 min
Charlie Gerard's Career Advice: Be intentional about how you spend your time and effort
Featured Article
When it comes to career, Charlie has one trick: to focus. But that doesn’t mean that you shouldn’t try different things — currently a senior front-end developer at Netlify, she is also a sought-after speaker, mentor, and a machine learning trailblazer of the JavaScript universe. "Experiment with things, but build expertise in a specific area," she advises.

What led you to software engineering?
My background is in digital marketing, so I started my career as a project manager in advertising agencies. After a couple of years of doing that, I realized that I wasn't learning and growing as much as I wanted to. I was interested in learning more about building websites, so I quit my job and signed up for an intensive coding boot camp called General Assembly. I absolutely loved it and started my career in tech from there.

What is the most impactful thing you ever did to boost your career?
I think it might be public speaking. Going on stage to share knowledge about things I learned while building my side projects gave me the opportunity to meet a lot of people in the industry, learn a ton from watching other people's talks and, for lack of better words, build a personal brand.

What would be your three tips for engineers to level up their career?
Practice your communication skills. I can't stress enough how important it is to be able to explain things in a way anyone can understand, but also communicate in a way that's inclusive and creates an environment where team members feel safe and welcome to contribute ideas, ask questions, and give feedback. In addition, build some expertise in a specific area. I'm a huge fan of learning and experimenting with lots of technologies but as you grow in your career, there comes a time where you need to pick an area to focus on to build more profound knowledge. This could be in a specific language like JavaScript or Python or in a practice like accessibility or web performance. It doesn't mean you shouldn't keep in touch with anything else that's going on in the industry, but it means that you focus on an area you want to have more expertise in. If you could be the "go-to" person for something, what would you want it to be?
And lastly, be intentional about how you spend your time and effort. Saying yes to everything isn't always helpful if it doesn't serve your goals. No matter the job, there are always projects and tasks that will help you reach your goals and some that won't. If you can, try to focus on the tasks that will grow the skills you want to grow or help you get the next job you'd like to have.

What are you working on right now?
Recently I've taken a pretty big break from side projects, but the next one I'd like to work on is a prototype of a tool that would allow hands-free coding using gaze detection.

Do you have some rituals that keep you focused and goal-oriented?
Usually, when I come up with a side project idea I'm really excited about, that excitement is enough to keep me motivated. That's why I tend to avoid spending time on things I'm not genuinely interested in. Otherwise, breaking down projects into smaller chunks allows me to fit them better in my schedule. I make sure to take enough breaks, so I maintain a certain level of energy and motivation to finish what I have in mind.

You wrote a book called Practical Machine Learning in JavaScript. What got you so excited about the connection between JavaScript and ML?
The release of TensorFlow.js opened up the world of ML to frontend devs, and this is what really got me excited. I had machine learning on my list of things I wanted to learn for a few years, but I didn't start looking into it before because I knew I'd have to learn another language as well, like Python, for example. As soon as I realized it was now available in JS, that removed a big barrier and made it a lot more approachable. Considering that you can use JavaScript to build lots of different applications, including augmented reality, virtual reality, and IoT, and combine them with machine learning as well as some fun web APIs felt super exciting to me.


Where do you see the fields going together in the future, near or far?
I'd love to see more AI-powered web applications in the future, especially as machine learning models get smaller and more performant. However, it seems like the adoption of ML in JS is still rather low. Considering the amount of content we post online, there could be great opportunities to build tools that assist you in writing blog posts or that can automatically edit podcasts and videos. There are lots of tasks we do that feel cumbersome that could be made a bit easier with the help of machine learning.

You are a frequent conference speaker. You have your own blog and even a newsletter. What made you start with content creation?
I realized that I love learning new things because I love teaching. I think that if I kept what I know to myself, it would be pretty boring. If I'm excited about something, I want to share the knowledge I gained, and I'd like other people to feel the same excitement I feel. That's definitely what motivated me to start creating content.

How has content affected your career?
I don't track any metrics on my blog or likes and follows on Twitter, so I don't know what created different opportunities. Creating content to share something you built improves the chances of people stumbling upon it and learning more about you and what you like to do, but this is not something that's guaranteed. I think over time, I accumulated enough projects, blog posts, and conference talks that some conferences now invite me, so I don't always apply anymore. I sometimes get invited on podcasts and asked if I want to create video content and things like that. Having a backlog of content helps people better understand who you are and quickly decide if you're the right person for an opportunity.

What pieces of your work are you most proud of?
It is probably that I've managed to develop a mindset where I set myself hard challenges on my side project, and I'm not scared to fail and push the boundaries of what I think is possible. I don't prefer a particular project, it's more around the creative thinking I've developed over the years that I believe has become a big strength of mine.

Follow Charlie on Twitter
ML conf EU 2020
41 min
TensorFlow.js 101: ML in the Browser and Beyond
Discover how to embrace machine learning in JavaScript using TensorFlow.js in the browser and beyond in this speedy talk. Get inspired through a whole bunch of creative prototypes that push the boundaries of what is possible in the modern web browser (things have come a long way) and then take your own first steps with machine learning in minutes. By the end of the talk everyone will understand how to recognize an object of their choice which could then be used in any creative way you can imagine. Familiarity with JavaScript is assumed, but no background in machine learning is required. Come take your first steps with TensorFlow.js!
React Advanced Conference 2021
27 min
From Blender to the Web - the Journey of a 3D Model
Top Content
Creating 3D experiences in the web can be something that sounds very daunting. I'm here to remove this idea from your mind and show you that the 3D world is for everyone. For that we will get a model from the 3D software Blender into the web packed with animations, accessibility controls and optimised for web use so join me in this journey as we make the web more awesome.
JS GameDev Summit 2022
33 min
Making “Bite-Sized” Web Games with GameSnacks
Top Content
One of the great strengths of gaming on the web is how easily accessible it can be. However, this key advantage is often negated by large assets and long load times, especially on slow mobile connections. In this talk, Alex Hawker from Google's GameSnacks will illustrate how they are tackling this problem and some key learnings the team found while optimizing third party games and designing their own ultra-lightweight game engine.
JSNation Live 2021
39 min
TensorFlow.JS 101: ML in the Browser and Beyond
Discover how to embrace machine learning in JavaScript using TensorFlow.js in the browser and beyond in this speedy talk. Get inspired through a whole bunch of creative prototypes that push the boundaries of what is possible in the modern web browser (things have come a long way) and then take your own first steps with machine learning in minutes. By the end of the talk everyone will understand how to recognize an object of their choice which could then be used in any creative way you can imagine. Familiarity with JavaScript is assumed, but no background in machine learning is required. Come take your first steps with TensorFlow.js!

Workshops on related topic

JS GameDev Summit 2022
165 min
How to make amazing generative art with simple JavaScript code
Top Content
Workshop (Free)
Instead of manually drawing each image like traditional art, generative artists write programs that are capable of producing a variety of results. In this workshop you will learn how to create incredible generative art using only a web browser and text editor. Starting with basic concepts and building towards advanced theory, we will cover everything you need to know.
JS GameDev Summit 2022
86 min
Introduction to WebXR with Babylon.js
Workshop
In this workshop, we'll introduce you to the core concepts of building Mixed Reality experiences with WebXR and Babylon.js.
You'll learn the following:
- How to add 3D mesh objects and buttons to a scene
- How to use procedural textures
- How to add actions to objects
- How to take advantage of the default Cross Reality (XR) experience
- How to add physics to a scene
For the first project in this workshop, you'll create an interactive Mixed Reality experience that'll display basketball player stats to fans and coaches. For the second project in this workshop, you'll create a voice activated WebXR app using Babylon.js and Azure Speech-to-Text. You'll then deploy the web app using Static Website Hosting provided by Azure Blob Storage.
JSNation Live 2021
81 min
Intro to AI for JavaScript Developers with Tensorflow.js
Workshop
Have you wanted to explore AI, but didn't want to learn Python to do it? Tensorflow.js lets you use AI and deep learning in JavaScript – no Python required!
We'll take a look at the different tasks AI can help solve, and how to use Tensorflow.js to solve them. You don't need to know any AI to get started - we'll start with the basics, but we'll still be able to see some neat demos, because Tensorflow.js has a bunch of functionality and pre-built models that you can use on the server or in the browser.
After this workshop, you should be able to set up and run pre-built Tensorflow.js models, or begin to write and train your own models on your own data.
ML conf EU 2020
160 min
Hands on with TensorFlow.js
Workshop
Come check out our workshop which will walk you through 3 common journeys when using TensorFlow.js. We will start with demonstrating how to use one of our pre-made models - super easy to use JS classes to get you working with ML fast. We will then look into how to retrain one of these models in minutes using in browser transfer learning via Teachable Machine and how that can be then used on your own custom website, and finally end with a hello world of writing your own model code from scratch to make a simple linear regression to predict fictional house prices based on their square footage.