Build a 3D Solar System with Hand Recognition and Three.js


We live in exciting times. Frameworks like TensorFlow.js allow us to harness the power of AI models in the browser, while others like Three.js allow us to easily create 3D worlds. In this talk we will see how we can combine both to build a full solar system, in our browser, using nothing but hand gestures!

Transcription


Hi, everyone. Welcome back from lunch, and hi to everyone watching remotely. Our talk is going to be a little bit lighter, a little bit more fun; you know, this is the slot after lunch. And I actually want to try to start with something. Let's see. Let me try and do that. Wait. Uh-oh. You know, it worked... No. Wait a second. It always worked ten times before the show. Okay. Well, and I can just rotate it with my hands. So hello, JS Nation. I hope the other demos will work better.

Let me start with a little bit about myself. I'm the Chief Frontend Architect at Duda, I'm a web enthusiast, and as mentioned, I'm also an analog astronaut for the Austrian Space Forum. I do analog missions around the globe: about one month of isolation in the desert in order to test space equipment and space missions. So for that, we're going to switch the talk to dark mode, a little bit more space themed.

The first question you need to ask when you see the topic of this talk is why. Why talk about Three.js and hand recognition? Why combine everything together? Why not do another talk about ES modules, or another talk about performance in JavaScript? It's because there's a great saying: "If I had eight hours to chop down a tree, I'd spend six sharpening my axe," which means going deeper on the technologies you know in order to understand them better, understand them perfectly. I like this quote better: "The universe is much more complicated than you think." So don't just stay with the technologies you know; try to expand, try to browse other technologies and other aspects. So let's do a quick 20-minute in-and-out adventure. It's going to be a little bit longer than 20 minutes, but I hope that will be okay.

Okay. So what happens if you want to build a side project, like myself? I work with JavaScript all the time, I work with React all the time, and I wanted to experiment a little bit with different technologies. So I go to the web and look for libraries or tools to use, and there's a plethora of things to learn, right? You can learn how to write Solidity or Unity or TensorFlow or whatever you want, and you just get confused; you don't really know what to choose. And then what we do is just turn on Netflix, right? And say, okay, I'm not going to learn anything new. But we can use Netflix as a method to map the technologies we want to learn. So here is the Netflix map of the technologies I wanted to dive into. The top row is the things recommended for me as a JavaScript developer, like performance and hooks and tricks and MobX and Grid Game. The second row is a little bit deeper, a little bit more complex: architecture, server wars, metaverse. And there's a row of shows we need to avoid, like NPM list and logger and missing graph kit.

So I chose to learn about the metaverse, because I heard about it and said, okay, I want to get somewhat deeper into it. And when I talk about the metaverse, I'm not talking about the scammy NFT stuff that nobody understands. I'm thinking about this: how to create 3D experiences. This is a demo of a virtual H&M store, and those are the kinds of things I wanted to create.
And in the metaverse, or in these kinds of virtual reality or AR experiences, you know that a hand recognition interface is key, because this is what you use to navigate. And you can do really cool things with it, right? You can play Beat Saber like that; just don't forget to clean up after yourself. This is a demo by Pushmetrics. You can do super cool things once you have a hand recognition interface.

So I wanted to create beautiful 3D worlds using only hand gestures, on the web. If I isolate just the important parts: 3D worlds, hand gestures, on the web. And if I look at the technologies that would allow me to do that: for 3D worlds I can use Unity or Three.js; for hand gestures there are TensorFlow and PyTorch on the back end; and on the web we have TensorFlow.js and MediaPipe, which we're going to talk about soon.

Okay. So, say I want to build some 3D scene. You see this scene? It's rendered, I think, by Unreal Engine 5. This is actually a 3D rendered scene, not an actual movie; it's a demo for The Matrix. That's super impressive. And we as developers think, okay, we'll just drop in some lines, right? "Create a 3D landscape," we have libraries for that, and we'll get this beautiful 3D scene. The problem is that there are a lot of tutorials out there that explain how to build a rotating cube. You write a lot of code, then there's a rotating cube, and then the last paragraph of the tutorial is basically: okay, continue from here. Now, it's not like in this talk I'm going to teach you much more than how to build a cube, but at least it's something we want to go deeper on. And if you want to code something a little bit more complex, you need to learn 3D theory, work with vertices and pixels, and write WebGL code that looks like this. And the cool thing about WebGL is that you write all of this code and in the end you get... a rotating cube. So actually building a rotating cube with raw WebGL is not that easy, and we need something to abstract it. We need some way for us as developers to write this without having a PhD in 3D math.

So I didn't give up; giving up is not something I wanted to do. There's Three.js, Three.js to the rescue. For those of you who don't know Three.js, it's a library that abstracts all of the code we've just seen and lets us build the same thing with an easy interface. On the Three.js examples page you can see a lot of examples, and all of them are really cool; I won't go into all of them. You can do really cool stuff like this MavFarm website, where you take 3D models and render them into the web, and you can change the viewing angle. We'll see it in a second. Another example is Bruno Simon's portfolio page, which is entirely rendered in 3D. That's super amazing; I saw it a few years ago and it blew my mind. It's playable, it's interactive, it's really cool. Bruno Simon also has a Three.js course called Three.js Journey, super recommended, where he starts from the beginning of Three.js, how to play with cameras, with lights, with objects, with whatever you want, and then goes deeper into the technology. When I started to write Three.js code, I had GitHub Copilot activated. I wrote four words and then Copilot completed the entire thing.
It was both super easy and super frustrating trying to learn a new technology with Copilot just completing everything. Yeah, Copilot is going to take over our jobs, that's pretty sure.

So let's talk a little bit about Three.js and try to break down how it works. In order to render something in 3D, you need three main components. You have to have the scene where things happen. You have to have some sort of camera, someone who is watching the scene. And you have to have a renderer, because eventually you're taking a 3D scene and projecting it onto a 2D screen, so something needs to know how to do that. If we take those pieces and ask what we need for each of them, Three.js gives us a really nice interface. To create a scene, you just create a new scene. That's it. The camera is a little bit more complicated, but not much: you create a perspective camera (that's the one for our purposes) and you define its field of view in degrees, the near plane, so everything nearer than the near plane won't be rendered, and the far plane, so everything farther than the far plane won't be rendered. It just looks complicated; it's not. And finally you need the renderer, which takes the scene, takes the camera, and says: okay, this is what the camera sees, so this is what I'll display on the screen. Again, super easy: you define the place in your page where you want the 3D to live (the canvas), then you add the scene, and this is how the camera sees it. If we look at how it projects things, you can see the 3D scene on the right and its 2D projection on the left. I'm stressing this point because we're going to come back to it from the other direction soon. And eventually you render the scene.

So this is the code, and you'll be amazed, but it creates... nothing. Right? Because we have a scene, a renderer, and a camera, but we don't have the important things in our 3D world, which are objects and lights. Adding lights or objects is again super easy in Three.js: you just define them. You say, okay, I want an ambient light, which illuminates the entire scene, or a point light, which we can point somewhere. For objects, you define the geometry of the object and the material of the object, and then you place them somewhere in the scene. And if you do it correctly, you can get some really cool results from the start. You can have, for example, this kind of spherical object with the right map, illuminated and reflecting the light, and that's pretty cool. It's visible, right? You can also load models from any software, like Blender, into Three.js, and I think there's actually a talk about how to do that by Sara Vera tomorrow. So, I'll show you how it looks. Again, this is our rotating cube, but you can see that it's fully 3D rendered, and I can play with the lights here. You see how the spotlights affect the lighting of the cube. Maybe I'll show the spotlights themselves: these are the spotlights, and when I move them, you see how the shadows change and the scene looks different.
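Putting the pieces from this part together, here is a minimal sketch of that setup in plain Three.js: a scene, a perspective camera, a renderer, an ambient and a point light, and a cube mesh. The sizes, colors, and camera values are illustrative assumptions, not the exact code from the slides.

```js
import * as THREE from 'three';

// Scene, camera, renderer: the three core pieces described above.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75,                                     // field of view in degrees
  window.innerWidth / window.innerHeight, // aspect ratio
  0.1,                                    // near plane: anything closer is not rendered
  1000                                    // far plane: anything farther is not rendered
);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement); // the canvas

// Lights: an ambient light for the whole scene plus a point light.
scene.add(new THREE.AmbientLight(0xffffff, 0.3));
const pointLight = new THREE.PointLight(0xffffff, 1);
pointLight.position.set(5, 5, 5);
scene.add(pointLight);

// An object: geometry + material = mesh.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x44aa88 })
);
scene.add(cube);

// Render loop: rotate the cube and draw the scene through the camera.
function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```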
So this is a really simple, easy thing to do, and you can do even more complex things. This is from the Three.js examples page: you play with the material, you can have a reflective material that reflects the scene around it, and you get results like these, which are pretty nice. So that was Three.js, which is nice and interesting, and we already knew how to do that. But what interested me more was how we can use AI, or some sort of machine learning, in the web in order to interact with this Three.js scene. What triggered it was a tweet from Charlie Gerard (@devdevcharlie), where she demonstrated a Figma plugin she built to control Figma with hand gestures, which I thought was pretty cool. So I wanted to dive a little bit into this technology.

When we talk about AI or machine learning, we're talking about libraries like TensorFlow. TensorFlow is a framework for AI and machine learning on the back end, in Python. And there's a port of TensorFlow to the web called TensorFlow.js, by the good people at Google, which can run AI models in the browser, run inference on AI models. How does it work? With WebGL, which was the villain of the first act and is now the protagonist of this one: it runs on your graphics card, in the web, and it's pretty amazing that it can do that. There's a lot you can do with TensorFlow.js. If you search for the hashtag #MadeWithTFJS, you can find things like self-driving virtual cars. I gave a talk about how you can use TensorFlow.js to play web games for you; it learns iteratively how to play and win the games, which is nice. There's also a project called Teachable Machine from the people at Google, definitely check it out, where you can train a model in the web to identify all sorts of things. Remember "hot dog, not hot dog"? Now you can do it with actual dogs, which is pretty cool.

Okay. MediaPipe is like the evolution of TensorFlow.js, though it's something a little bit different, because running on a graphics card is not something you can count on for mobile devices. So MediaPipe knows how to run in WASM, with WebAssembly. MediaPipe gives us a few very dedicated models, for face recognition or full-body recognition, which is pretty nice, and we can run those in the web: we just need to pass a camera feed into MediaPipe and we get this output. But we're interested in hand tracking, right? So they have a model called HandPose, where you pass in a video feed, it analyzes it frame by frame, and it returns something. It's super easy to do: you fire up a detector, you just create a detector, and then you say, okay, estimate the hands from the video of the camera. Nice. What does it return? It returns an array of all the key points it recognized in the hand. For every finger there are four or five key points, usually around the knuckles, and it looks like that. So the image from the camera becomes these key points, and then you can use them to draw something on a canvas, or interact with your app, or whatever you want, which is pretty nice. And you can superimpose those points on the video, or not. Let's see that in action. Do you see? Here I'm superimposing those points on the video. I don't have to do that.
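A rough sketch of what firing up the detector can look like with the TensorFlow.js hand-pose-detection package running the MediaPipe Hands model. The video element, the config values, and the logging callback are assumptions for illustration, not the speaker's exact code.

```js
import * as handPoseDetection from '@tensorflow-models/hand-pose-detection';

// Assumed: a <video id="webcam"> element already streaming the camera feed.
const video = document.getElementById('webcam');

// Create a detector backed by the MediaPipe Hands model.
const detector = await handPoseDetection.createDetector(
  handPoseDetection.SupportedModels.MediaPipeHands,
  {
    runtime: 'mediapipe',
    solutionPath: 'https://cdn.jsdelivr.net/npm/@mediapipe/hands',
    maxHands: 2,
  }
);

// Estimate hands frame by frame; each hand comes back with named keypoints
// (wrist, thumb_tip, index_finger_tip, middle_finger_tip, and so on).
async function detect() {
  const hands = await detector.estimateHands(video);
  for (const hand of hands) {
    const thumbTip = hand.keypoints.find((k) => k.name === 'thumb_tip');
    console.log(hand.handedness, thumbTip.x, thumbTip.y);
  }
  requestAnimationFrame(detect);
}
detect();
```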
I can just have the video, and have those points rendered somewhere else. Okay? So this is what we can do. Magic.

Okay. So we have this amazing power of controlling the web with our hands; let's build a cube with this power. We can use the coordinates from the hand in order to create a new cube; remember, we have the coordinates of the fingers. But what is the best gesture to create something? If there are Marvel fans around here, they're going to be annoyed, but this is actually the best gesture to create something: a snap. And if we're talking about a snap, let's dive into what it means to have a snap, because remember, we only get coordinates from MediaPipe, and we need to somehow identify that it's a snap. If we take the snapping movement, we can divide it into two parts: one where the fingers are closed, and one where the fingers are far apart. If you snap right now, you'll see that first the thumb and the middle finger are closed, and then the thumb tip ends up above the middle fingertip. That's important. If we write some pseudo-code for that, we can detect that at first the distance between the thumb tip and the middle fingertip is very small, where the distance is just the Euclidean distance between 2D points, and then that the thumb is above the middle fingertip. In code, it's super easy: you detect that the thumb and the middle finger are closed, then you check that they're far apart, and you also check that the thumb is above the middle finger.

And then you have to do something a little bit trickier. You have those 2D coordinates, but we want to create something in a 3D scene, so we need to convert the vector from 2D to 3D. We'll see it in a second; it's not super easy, but it doesn't matter much. Once we identify the snap, we can do something with it. So let's talk a little bit about how to move from 2D to 3D. We saw that our camera and renderer can take 3D into 2D; to go from 2D to 3D, we take the camera's projection and reverse it: we normalize, we multiply, it doesn't really matter exactly how, because you can do what I did and just take it from Stack Overflow. 2D-to-3D math is not fun, but good people around the web have already done this math for you, and the only thing you need to do is copy and paste.

All right. So how do we create a box? Super easy. We have the coordinates, right? We just create the box like we saw earlier, geometry and basic material, but we position it at the coordinates we have: X, Y, and Z. Let's see it in action, and this time it better work. Okay. Let's see if it identifies my snap now. Wait. Now, I have to say, the reason it's not working smoothly is because of this guy, which confuses the camera. So, yeah. Okay. Anyway, it creates the box, which is pretty nice. So we did this, which is nice, but you probably noticed the creation of the box is a little bit flat. Let's do something a little more interesting: let's make it pop with GSAP, or GreenSock. It's a library for creating animations, and you can do really amazing things with it, like this. But the most interesting thing we want from GreenSock is its easing functions.
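Before moving on to the animations, here is a hedged sketch of the snap detection and the 2D-to-3D conversion described above. The keypoint names come from the hand-pose model, but the pixel thresholds, the helper names, and the fixed depth are assumptions; `scene`, `camera`, and `video` are the objects from the earlier setup sketch.

```js
import * as THREE from 'three';

// Plain 2D Euclidean distance between two keypoints.
const distance = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);

let fingersWereClosed = false;

// Called on every frame with the keypoints of one detected hand.
function detectSnap(keypoints) {
  const thumbTip = keypoints.find((k) => k.name === 'thumb_tip');
  const middleTip = keypoints.find((k) => k.name === 'middle_finger_tip');

  // Phase 1: thumb and middle finger pressed together.
  if (distance(thumbTip, middleTip) < 20) {            // threshold in pixels (assumed)
    fingersWereClosed = true;
    return;
  }

  // Phase 2: fingers far apart AND the thumb ended up above the middle fingertip.
  if (fingersWereClosed &&
      distance(thumbTip, middleTip) > 60 &&            // assumed threshold
      thumbTip.y < middleTip.y) {                      // screen y grows downward
    fingersWereClosed = false;
    createBoxAt(thumbTip.x, thumbTip.y);
  }
}

// Convert 2D video coordinates into a 3D point in front of the camera.
function getVectorFromXY(x, y, camera, video) {
  const ndc = new THREE.Vector3(
    (x / video.videoWidth) * 2 - 1,                    // normalize to [-1, 1]
    -(y / video.videoHeight) * 2 + 1,
    0.5
  );
  ndc.unproject(camera);                               // reverse the camera projection
  const dir = ndc.sub(camera.position).normalize();
  return camera.position.clone().add(dir.multiplyScalar(10)); // fixed depth of 10 (assumed)
}

function createBoxAt(x, y) {
  const box = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshStandardMaterial({ color: 0xff8800 })
  );
  box.position.copy(getVectorFromXY(x, y, camera, video));
  scene.add(box);
}
```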
So GreenSock can give you an easing function where you say: I want some value to go elastically to another value, or bounce, or whatever you want. To get a bouncy box when we create it, we just scale it up to 110% and then have GSAP elastically reduce it to 90%, so it looks like it was created out of thin air. Okay. This is the bouncing box; remember that.

Now we want some way to move the boxes around. For that, we'll use a flick. What's the anatomy of a flick? Just do it and you'll see: first the thumb and index fingertips are closed, then the thumb and index fingertips are far apart. That's a flick. Again, in pseudo-code it's super easy: the distance is small, then the distance is large. Magic numbers all around, but it's just trial and error. One thing we need to do with a flick that we didn't need with a snap is identify whether something is about to be flicked, whether an object is nearby, because we need that object to move somewhere. To do that, first we check that the thumb and the index finger are close, then we iterate over all the elements in the scene and check, for each of them, its distance to the flick candidate. Again, getVectorFromXY, that's the 2D-to-3D conversion, but it's the same idea all around; it's not that difficult. Once we identify the flick candidate, we save the thumb location. Why? We'll see in a second, but we need to say: okay, this is where the flick starts. Then we detect that the flick is being released, the distance is large, and now we get the direction of the flick: we have the flick candidate, we have the saved thumb location, we compute the unit vector (again, Euclidean math, super easy), and we use our friend GSAP to animate the movement of the Earth, or whatever we're flicking. Like that.

Let's see that in action. A lot of boxes. Okay, I'm going to try. Let's see if it pops now. Okay. Did you notice the small pops? Wait. Oh, it works so much better on my machine. So you see, it actually flickers because of this. I'll just try, and maybe... Wait. Oh, you would have been so impressed right now. Okay. So you've seen the pop. And now when I flick it... Okay. So you see? The flickering of the hands that you see here is because of the background; it's actually recognizing the hands that are over here. So let me try something else. Maybe like that. Okay. Now we try and flick it. Come on. Be impressed. Okay. That was a good example. Thank you. Live demos are hard.

So you're probably thinking to yourself: do I need to do all this math myself? MediaPipe does have some built-in logic for detecting gestures. You can teach MediaPipe to detect this gesture or that gesture, but because we needed something continuous, we had to do it ourselves.

Okay. Now let's go solar. We wanted to create a solar system, hopefully, and for that we can't use cubes anymore; we have to use spheres. How do we create a sphere with Three.js? First we create a sphere geometry, where we can define how many triangles the sphere will consist of. Then we need the material of the sphere; we need the texture. So how does it look? The texture is basically the image wrapped around the sphere, and you can also layer additional textures on top of it.
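Going back to the pop and flick for a moment, here is a small GSAP sketch of both animations. The durations, the elastic ease parameters, and the flick strength are assumed values, and the start and release points are taken to be Three.js vectors produced by the 2D-to-3D conversion above.

```js
import gsap from 'gsap';

// "Pop": overshoot the scale to 110%, then let GSAP settle it elastically at 90%,
// so the box looks like it appears out of thin air.
function popIn(mesh) {
  mesh.scale.set(0, 0, 0);
  gsap.timeline()
    .to(mesh.scale, { x: 1.1, y: 1.1, z: 1.1, duration: 0.2 })
    .to(mesh.scale, { x: 0.9, y: 0.9, z: 0.9, duration: 1, ease: 'elastic.out(1, 0.3)' });
}

// "Flick": push the candidate object along the unit vector from the saved
// thumb position (startPoint) toward the release position (releasePoint).
function flick(mesh, startPoint, releasePoint, strength = 5) {
  const direction = releasePoint.clone().sub(startPoint).normalize(); // unit vector
  const target = mesh.position.clone().add(direction.multiplyScalar(strength));
  gsap.to(mesh.position, {
    x: target.x, y: target.y, z: target.z,
    duration: 1.2,
    ease: 'power2.out',
  });
}
```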
So we have the texture, which is how Earth looks from afar, and we can also have a black-and-white image that describes the topography of the Earth, that is, how the shadows should behave. So we start with a white sphere; we put the texture on it, and it looks a little bit flat; we add the topography, and hopefully you notice that now the mountains have shadows. You can add shininess, because the oceans reflect light differently than the land, so I add shininess and get this glare going on. And I can add another layer of clouds, which is nice.

Now we want to create the stars. To create the stars, we need to create a sphere and be inside it, right? Imagine stars all around you: you need the camera to be inside the sphere. We'll use actual star data from our galaxy, and for each star's magnitude we'll set the size of the point we're going to draw. So we just create an array of points and place them on the sphere (again, spherical geometry), and then we have the star field.

And now it gets interesting, because we used the snap to create the boxes before, and we want to use the snap to create the stars. We could just replace createBox with createStars, right? But that's not great. We want to use some sort of event system, and then it becomes much more scalable: we can have a snap detector that only detects the snap and just publishes a snap event, and somewhere else the stars subscribe to that event and create the star field. This is going to be really handy later.

Creating the sun is a little bit more complicated, because now we need some sort of state machine: first snap for the stars, second snap for the sun. So what's the most complex state machine you can think of? No, we just use a switch-case, because, again, it's a side project. So: create the stars, then move to the sun, then the end of time. To create the sun, we create a point light that sits inside the sun and illuminates everything around it. And when the sun enters the scene, again, we make it pop: scale it to 110%, then back to 90%. We need to flick it to the center, which is super easy with the sun because we have events, remember? We just subscribe to the flick event and move the sun to (0, 0, 0), the center of the scene.

The last gesture I wanted to introduce is the spin, making the sun rotate on its own axis. Let's look at the anatomy of a spin. You need to detect a straight finger, then find the overlapping element, which is, again, easy, and then detect the finger going straight but diagonal: it needs to be straight, but the angle shouldn't be 180 degrees. Then you trigger the spin event for that element. And how do we spin? We identify whether it's the right hand or the left hand, to know the direction of the spin. Again, events are awesome. Then the animate loop just says: the sun needs to spin at some speed, so every animation frame I add that speed to the sun's rotation. It looks like that. It's really, really simple: you just set the rotation, and the animation loop makes it rotate.

Okay. Let's see. And again, apologies in advance. Let's try to create the stars. Come on. No. Wait. Something's not working. Okay. Maybe the layout is making it worse. Now let's try to create our sun. No. No. Thank you. Okay. Creating Earth is super easy. You don't need seven days; you need seven seconds.
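Here is a sketch of the textured Earth described at the start of this part: a Phong material combining a color map, a bump map for the topography, a specular map plus shininess for the ocean glare, and a slightly larger transparent sphere for the clouds. The texture paths are placeholders, not the talk's actual assets.

```js
import * as THREE from 'three';

const loader = new THREE.TextureLoader();

// Earth: sphere geometry (the segment counts control how many triangles it has)
// plus a material that layers the different textures.
const earth = new THREE.Mesh(
  new THREE.SphereGeometry(1, 64, 64),
  new THREE.MeshPhongMaterial({
    map: loader.load('textures/earth-map.jpg'),            // how Earth looks from afar
    bumpMap: loader.load('textures/earth-topography.jpg'), // black-and-white topography
    bumpScale: 0.05,
    specularMap: loader.load('textures/earth-water.png'),  // oceans reflect more light
    specular: new THREE.Color(0x333333),
    shininess: 15,
  })
);

// Clouds: a slightly larger, transparent sphere wrapped around the Earth.
const clouds = new THREE.Mesh(
  new THREE.SphereGeometry(1.02, 64, 64),
  new THREE.MeshPhongMaterial({
    map: loader.load('textures/earth-clouds.png'),
    transparent: true,
    opacity: 0.8,
  })
);
earth.add(clouds);
scene.add(earth); // scene from the earlier setup sketch
```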
Because we have this switch-case, we just add Earth to it. Then we need to somehow put it into orbit. To put it into orbit, we'll flick it, but we only need to flick it until Y equals zero: we don't need to flick it to the center, just onto the same plane as the sun. Then we need some sort of loop to make it orbit the sun, so again, every animation frame we just add to Earth's angle. Okay. No, we won't see it yet, because we need to add Mars as well; we forgot about Mars. To add Mars, we just add it to the switch-case. Again, it's super easy to do that. And we'll add another gesture, which is rotating the entire scene with our hands. To rotate everything, we just need to detect straight fingers. I want to do it like this, you know, like in Minority Report, so everything is straight and vertical, and then I detect the movement of the fingers in order to change the camera: either rotate it or move backward and forward.

Okay. Let's see. And now... good energies. Okay. Come on. This can take a while; go get sandwiches. Wait. Sorry, I don't know why it's taking so long. It shouldn't. Wait. Yeah. You can snap with me. Okay. One. Snap with me. Come on. Could have been a really cool demo. Well, okay. Sun first. Then flick. Spin. Spin. Earth. Spin. Flick. Woo! Back, back, back. Wait. Wait, wait, wait. Come on, Mars. Come on, Mars. Elon is counting on you. Come on, Mars.
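For completeness, a sketch of the animation loop that drives the spin and the orbit described above: every frame adds the spin speed to the sun's own rotation and advances Earth's orbital angle on the y = 0 plane. The radius, the speeds, and the `userData.spinSpeed` field are assumptions; `sun`, `earth`, `scene`, `camera`, and `renderer` come from the earlier sketches.

```js
let earthAngle = 0;
const EARTH_ORBIT_RADIUS = 10;   // assumed distance from the sun
const EARTH_ORBIT_SPEED = 0.005; // radians per frame (assumed)

function animate() {
  requestAnimationFrame(animate);

  // Spin: each frame, add the spin speed set by the spin gesture to the sun's rotation.
  sun.rotation.y += sun.userData.spinSpeed ?? 0;

  // Orbit: advance the angle and place Earth on a circle around the sun (y = 0 plane).
  earthAngle += EARTH_ORBIT_SPEED;
  earth.position.set(
    EARTH_ORBIT_RADIUS * Math.cos(earthAngle),
    0,
    EARTH_ORBIT_RADIUS * Math.sin(earthAngle)
  );

  renderer.render(scene, camera);
}
animate();
```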
36 min
16 Jun, 2022
