Games Are Smarter Than Us


JS awesomeness beyond webpages. First we'll write a cool 2D game (in JavaScript), and then write AI code (in JavaScript!) that will be able to win this game for us. Oh, what a time to be alive!

26 min
17 Jun, 2021

AI Generated Video Summary

Today's Talk explores game development using JavaScript, including building games in the browser, using game engines, and utilizing the BitMellow framework. It also delves into the concept of using AI to make computers play games, discussing reinforcement learning and implementing it in games like Flappy Bird. The Talk highlights the process of teaching the agent to learn, modifying rewards to improve performance, and the journey of game development from initial stages to advanced AI integration.

1. Introduction to Games and JavaScript

Short description:

Today we're going to talk about games and how we can use computers to play them. We'll divide this talk into two parts: building a game in the browser with JavaScript, and teaching the computer to play the game. In the first part, we'll explore how to build a game and the challenges that come with it, including why game graphics matter and how to use the Canvas API to draw them. In the second part, we'll see how to teach the computer to play the game for us.

Hi people, thank you for joining, and for those of you watching the stream afterwards, thank you for watching. Today we're going to do a little bit of a different talk: we're going to talk about games, how we can play them, and how we can use computers to play them.

So the talk title is Games Are Smarter Than Us. And a little bit about myself: I'm Liad, the Client Architect at Duda. The subtitle of this talk is pixels and robots, because we're going to divide this talk into two parts. In the first part, we're going to see how to build a game ourselves in the browser with JavaScript. In the second part, we'll see how to teach the computer to play the game for us. So the game will be smarter than us.

So the first chapter of our journey is the Village of Boredom. And why do I call it the Village of Boredom? Because you want to build a game, right? In your head, you see the game, and you think to yourself: okay, I'm going to build this game, and I'm going to add a chat so every player can talk to each other, and a video chat for all of my friends, and another box for one of my oldest friends. And then you ask: okay, how do you develop a game in JavaScript? You go online, read about JavaScript game development, and find a six-minute-read article. It's a good article, but it's only a six-minute read. Then you go to one of the tutorial sites and see an HTML game example with just a few boxes, and it kind of feels like the famous joke about how to draw an owl: just draw two circles, then draw the rest of the owl. So it doesn't really help. Then you go to the other end of the scale and see these very long tutorials, hours and hours and hours on how to code a game in JavaScript. And you sit in front of your VS Code, in front of your blank screen, and you say: okay, I'm not going to do that. I'm not going to invest 30, 40, 50 hours of coding just to understand how to write a JavaScript game. So you reach the pit of despair, and the pit of despair leads you straight back to Netflix. But it doesn't have to be like that.

So, let's start with chapter two, the Valley of Pixels, and see what we need to build a game in JavaScript. The first thing, and obviously the most important thing, is the game graphics. And since in JavaScript all the conferences are about React, you might assume you're going to use React components to build the game. It turns out that in JavaScript games you don't really use React components; they're not performant enough. You use an API called Canvas. What does that mean? You have in your head a drawing of how you want the game to look: what the map looks like, what the character looks like. But what you really need is to put a canvas element in the body, and then you have an API for that canvas that allows you to draw on it as if you were the browser. So instead of placing elements and animating them, you just draw on the canvas. You do it by getting the context, and then the Canvas API, which is very rich and very easy to search, lets you actually draw your game graphics.
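As a minimal sketch of that setup (the element id "game" and the colors here are made-up examples, not from the talk):

```javascript
// Grab the canvas from the page, get its 2D context, and paint a background.
// In the browser you would call initCanvas(document).
function initCanvas(doc) {
  const canvas = doc.getElementById('game');
  const ctx = canvas.getContext('2d'); // the 2D drawing context
  ctx.fillStyle = '#87ceeb';           // sky-blue background
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  return ctx;
}
```

From here on, every draw call in the game goes through this one context object.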

2. Building Game Elements and Using Game Engines

Short description:

To build a game, you need to put your game elements in the code and use the Canvas to draw them. You also need to listen for controls and implement the game logic. All of this happens in one frame of the game, which is controlled by the game loop. Drawing on the Canvas can be challenging, so sprites are often used to simplify the process. If building a game from scratch seems daunting, game engines like Phaser, Pixy.js, and GDevelop can provide a framework to focus on graphics and logic. Another option is the framework.

And then you have to put in your game elements. Since you don't use React and don't use state management, you just put objects in your code. For example, you have the first player with its coordinates, whether it's running, whether it's healthy; and then you have the enemy, which for some reason is John McClane, with its coordinates and velocity. These objects represent your elements, and then you use the canvas to draw from these objects.
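In code, that can be as simple as a couple of plain objects (the field names and values here are illustrative, not the talk's exact slide):

```javascript
// Plain objects standing in for the game's elements; no framework needed.
const player = { x: 50, y: 200, velocityX: 0, velocityY: 0, lives: 3, isRunning: false };
const enemy  = { x: 300, y: 200, velocityX: -2, velocityY: 0 };

// Each frame, the draw step reads these objects back onto the canvas, e.g.:
// ctx.drawImage(playerSprite, player.x, player.y);
```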

You have to listen for controls. So you have a keys object, and you attach event listeners on the window for keydown and keyup. When one of those events fires, you register the key: on keydown you mark it as pressed, on keyup you mark it as released. So if the key code is left, you set keys.left to pressed, and the keys object ends up holding the state of all the keys that are currently pressed.
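A sketch of that pattern (the key codes follow the standard KeyboardEvent.code values):

```javascript
// One object tracks which arrow keys are currently held down.
const keys = { left: false, right: false, up: false, down: false };

function setKey(code, pressed) {
  if (code === 'ArrowLeft')  keys.left = pressed;
  if (code === 'ArrowRight') keys.right = pressed;
  if (code === 'ArrowUp')    keys.up = pressed;
  if (code === 'ArrowDown')  keys.down = pressed;
}

// In the browser, wire it to the window events:
// window.addEventListener('keydown', e => setKey(e.code, true));
// window.addEventListener('keyup',   e => setKey(e.code, false));
```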

Then you have to implement the game logic, which is where the game actually takes place. So, for example: if keys.left was pressed, change Batman's velocity X; if down was pressed, change Batman's velocity Y. Then you detect edges: if batman.y is bigger than the screen height, clamp it to the screen height. And then: if Batman and Superman are at the same X, reduce Batman's lives; if he takes a heart, increase his lives; and so on. All of this is to make sure the code follows the logic of the game that you want to build.
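That logic step could look something like this sketch (the names mirror the talk's Batman example; the same-column collision check is deliberately naive):

```javascript
// Per-frame logic: apply input to velocity, move, clamp to the screen,
// and run a crude collision check against the enemy.
function updateLogic(batman, superman, keys, screenHeight) {
  if (keys.left) batman.velocityX -= 1;   // steer left
  if (keys.down) batman.velocityY += 1;   // accelerate downward
  batman.x += batman.velocityX;
  batman.y += batman.velocityY;
  if (batman.y > screenHeight) batman.y = screenHeight; // edge detection
  if (batman.x === superman.x) batman.lives -= 1;       // naive collision
  return batman;
}
```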

Now, bear in mind, all of those things happen in one frame of the game, and that brings us to the game loop. You run a loop using requestAnimationFrame, and all the magic of the game happens inside that loop. Every animation frame, you run all the logic of the game: you update everything according to the keys that were pressed, like we saw, then you run the actual game logic, detecting collisions, edges, everything. And in the end, when you have the updated objects, you draw them on the canvas, and then you run the next frame, and the next frame, and the next frame.
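The loop itself is a small skeleton (this is a sketch; state and draw are stand-ins for the real game objects and canvas calls):

```javascript
// The game loop: update state, draw, schedule the next frame.
const state = { x: 0, velocityX: 2 };

function update() { state.x += state.velocityX; }  // input + game logic
function draw()   { /* ctx.clearRect + ctx.drawImage calls go here */ }

function frame() {
  update();
  draw();
  // requestAnimationFrame(frame); // in the browser: run again next repaint (~60 fps)
}
// requestAnimationFrame(frame);   // uncomment in the browser to start the loop
```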

Now, we talked about drawing on the canvas, but it's not that easy. You can see here, for example, how to draw this pixel art character on the canvas, and it takes very, very long to do it in code. So we usually use sprites. Sprites, as you can see here, pack all the poses of the character into one image, so you can draw just the part of the image that you want, according to the position and the velocity of the character.
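Slicing a sprite out of a sheet is just rectangle math plus the nine-argument form of drawImage (the 16x16 tile size and single-row layout here are assumptions for illustration):

```javascript
// Pick one tile out of a sprite sheet where frames sit side by side in a row.
const TILE = 16;

function sourceRect(frameIndex) {
  return { sx: frameIndex * TILE, sy: 0, sw: TILE, sh: TILE };
}

// In the draw step you would blit just that slice of the sheet:
// const { sx, sy, sw, sh } = sourceRect(Math.floor(frameCount / 10) % 3);
// ctx.drawImage(spriteSheet, sx, sy, sw, sh, player.x, player.y, TILE, TILE);
```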

And when you see all that, do you still want to build a game? Do you still think it's easy? Probably not so much. Luckily, we have game engines: frameworks that encapsulate most of the things we don't need to worry about and let us focus on the graphics and the logic. That brings us to chapter three, the middle of efficiency. There are a lot of JavaScript game engines. Phaser is one of the best ones, and you have Pixi.js, which is also very good, and GDevelop. Most of them require some learning or previous knowledge in order to use them. I chose to show here a framework called BitMellow.

3. BitMellow: Exporting Logic and Using React

Short description:

BitMellow is a simple and easy JavaScript game framework that allows you to export your logic, HTML, and assets for use in your own application. It provides a tile editor, tile map editor, sound editor, and code editor. You can export your project as a single HTML file, but we'll use npx create-react-app to build a small React app instead. By importing the BitMellow engine, game logic, and project data JSON, we can initialize the game logic in the React game wrapper.

It's a very simple, very easy JavaScript game framework, and the nicest thing about it is that it lets you export your logic, your HTML, and your assets, and use them in your own application.

So let's see. When we go inside BitMellow, we see this kind of screen. On the left, we have the tile editor, which lets us edit the tiles that are going to be used in the game. We can edit the actor, or draw something else if we want to add another character; we'll see later in the talk how we're going to use this. You have the tile map editor, which helps you edit the map of the game. You have the sound editor, which is very nice: you can add sounds to the game. And the most important part is the code editor. Don't worry if it's too small here; we're going to go over it later in greater detail. And you can export your project, which is nice.

In the tile editor, you can see some tutorials on the right. If you want to plan animations, you can see how to plan the animation; if you want to draw clouds, there are explanations of the best way to draw them and how they should flow. So that's why I really like BitMellow.

It exports a single HTML file that includes all the data of the game, but we don't want to use it like that. And for those of you who wondered how any of this is connected to React: we'll use npx create-react-app, and that's the first and last appearance of React in this talk. So we use create-react-app to build ourselves a small React app, and we break the exported HTML into its parts. We import the BitMellow engine, we take all the game logic that we wrote and put it in game.js, and we import the project data JSON. In the React game wrapper, we just init the game logic from that JSON. And the nice thing about BitMellow is that it lets you export only the JSON.

4. Building Games with BitMellow

Short description:

You can edit and export your game in BitMellow, and easily build games using the BitMellow engine. The engine handles drawing the map, targets, and player, as well as updating the player's coordinates and checking for target grabs. It also handles animating the player's movements based on the frame count. With BitMellow, game development becomes simple and enjoyable.

So you can keep editing in BitMellow and exporting, overriding only the JSON. Okay, so let's see what we have. We have this cute game, which you can easily write, where you just go and collect mushrooms. You can animate the text, decide which mushroom appears where, and see the text that tells you how many mushrooms you've collected. Basically, the engine's onUpdate is the loop we discussed. It draws the map and then draws the targets: for each target, for each mushroom, if the target wasn't grabbed already, it draws the tile. And it's really easy, just screen.drawTile, that's all we need. Then it updates the player. What does it mean to update the player? You take the coordinates of the player, you check the four keys like we saw, and you update the coordinates. Then you calculate the distance to the target, and if the distance is small enough, you say: okay, the target was grabbed. The last thing is drawing the player. Like you saw in the tiles, you have three tiles for the player, because when it walks it lifts one leg and then the other, so you count the frames since it started walking and decide which tile to draw. So it's really nice. It's safe to say that we leveled up, and now we can easily build games using the BitMellow engine.
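The per-frame logic just described can be sketched engine-agnostically like this (drawTile here is a stand-in for the engine's screen.drawTile; the grab distance of 8 is an assumption):

```javascript
// One frame: draw the ungrabbed targets, move the player by the pressed
// keys, and mark a target as grabbed when the player is close enough.
function updateFrame(player, targets, keys, drawTile, grabDistance = 8) {
  for (const t of targets) {
    if (!t.grabbed) drawTile(t.tileId, t.x, t.y);
  }
  if (keys.left)  player.x -= 1;
  if (keys.right) player.x += 1;
  for (const t of targets) {
    const d = Math.hypot(player.x - t.x, player.y - t.y);
    if (!t.grabbed && d < grabDistance) t.grabbed = true;
  }
}
```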

5. Using AI to Make Computers Play Games

Short description:

Now, let's explore the concept of using AI to make the computer play games. AI can analyze parameters and make decisions on what actions to take. Unlike coding rules for every step and state, AI can infer rules from data. However, in computer games, we don't have all the inputs and results, so reinforcement learning is used. This involves teaching the computer to learn from rewards and punishments instead of labeled data.

And now it's time to do something a little bit more interesting: it's time to visit the Mountain of Wisdom. What does that mean? Well, building our own game wasn't the end goal for us, right? We wanted to make the computer play it. And in the last few years you can see a lot of papers, a lot of work, about AI playing games; you can see here StarCraft played by DeepMind, for example. It's not as complicated as it seems, but it requires a lot of training. We'll see in a second how it works behind the scenes.

So you can see here an AI playing Super Mario, and you can see what happens behind the scenes: the AI looks at all of the parameters and then decides what to do, what kind of action to take. The thing about using AI instead of coding the rules is this: if you want to play a game by coding the rules, you have to tell the computer what to do in every step, in every state. Meaning: if player.x is larger than enemy.x and the player has enough health, then run from the enemy. When you use AI, you instead try to infer the rules from the data. You tell the computer: this is the input, this is the result; this is the input, this is the result. Just train on that, and then I'll give you a new input and you'll tell me the new result. The thing is that with computer games, we don't have all the inputs and all the results. In StarCraft, for example, we can't say: okay, in this frame do that, in this frame do that, because we don't know that. So we have reinforcement learning, meaning that we teach the model, we teach the computer, to learn from rewards and punishments instead of from labeled results. What does that mean? Let's say we have an agent and we have the environment.

6. Observing State and Evaluating Actions

Short description:

The agent observes the state, runs processing, and selects the best action based on immediate and expected rewards. The environment returns a reward based on the action, which can be zero, positive, or negative. Actions are evaluated by comparing their outcomes and updating a table of state-action values. Reinforcement learning involves trying actions, receiving rewards, and learning their effectiveness.

So the agent wants to play in the environment. First of all, our agent needs to observe the state. The state usually means what happens right now: how can I quantify what I see right now in the game? For Super Mario, for example, it can be the horizontal and vertical position, how many enemies you see, how many lives, how many coins. That can be the state.

And the agent observes the state, it runs some sort of processing and then it invokes an action and it sends the action to the environment and it doesn't know anything about the environment. It just knows that it needs to send the action there. So the environment gets the action and in return it returns a reward. And the reward can be zero if nothing happens, and it can be positive, it can be negative. It depends on the outcomes of the action and the environment also changes the state. So it sets a new state.

So now the question is: how good is every action? When an agent sees a state, it needs to look at all the actions and select the best one. So how do you measure how good an action is? Well, you calculate the immediate reward that you get from the action, and since it brings you to a new state, you also add the expected reward from this new state. If it brings me to a better state, this action is better than an action that brings me to a worse state. So basically, we hold some sort of table, and in this table there is a value for every action in every state. For example, now we're in state two. We look at the actions and see that the best one here is up, because it has a value of 11, so we select up. We get some sort of reward, and it moves us to another state: we were in state two and selected up, so now we're in state three. In state three, we select the best action, which is right, but we also update the value of the up action from the previous state, because now we know it gets us to a state where the value is 10. So basically, reinforcement learning is just: try, fail, get a reward, try again, and learn how good the actions are as you go.
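The table walk-through above is tabular Q-learning, and it fits in a few lines. This is a minimal sketch (the learning rate 0.5 and discount 0.9 are assumed values chosen so the numbers match the example: up in state two starts at 11 and moves toward the 10 of state three):

```javascript
// Q[s][a] moves toward (reward + gamma * best value of the next state).
function qUpdate(Q, s, a, reward, sNext, alpha = 0.5, gamma = 0.9) {
  const bestNext = Math.max(...Object.values(Q[sNext]));
  Q[s][a] += alpha * (reward + gamma * bestNext - Q[s][a]);
}

// Pick the action with the highest value in the current state.
function bestAction(Q, s) {
  return Object.keys(Q[s]).reduce((best, a) => (Q[s][a] > Q[s][best] ? a : best));
}
```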

7. Implementing Flappy Bird and Reinforce.js

Short description:

Here's an example of a flappy bird game and how it learns to play through iteration. If the state is complex, a neural network can be used to output actions. For more complex games, frames are used instead of states. TensorFlow and TensorFlow.js are frameworks that enable machine learning in JavaScript. The MetaCar project and open AI provide environments for training game-playing code. Reinforce.js is an easy-to-use library for defining states, actions, and training agents.

Okay. And here's an example of flappy bird, something that tries to play flappy bird, if you know it. Basically, it saves in the table the state and whether to flap or to do nothing. And it takes a lot of iterations, something like 25 minutes, until it learns to play it completely.

And if you have a very complex state that you can't fit in a table, you use a neural network instead, and that's a whole bigger world to discuss: a neural network that gets the state and outputs the action. That's what we're going to use. For even more complex games, you don't use a state at all, you just feed in the frames: here is the frame of the game, the actual pixels, not the state; decide what to do. That's what autonomous cars are doing.

Okay, everything is cool, but how do we implement it? We have a nice framework called TensorFlow, which can help us, and we have TensorFlow.js, of course, which allows us to use this sort of machine learning in JavaScript. It can do really cool things; I recommend checking out TensorFlow.js and the MetaCar project, which doesn't use TensorFlow but runs autonomous driving in the browser, which is cool. And we have OpenAI, which gives us a gym, an environment for retro games. We can just send actions and get rewards, send actions and get rewards, and then train on how to play those games, which is kind of nice: we can just write code that plays the game.

Okay. But we're gonna use something that's called Reinforce.js. And it's really easy to use. You just define the states that you expect and the actions that you want. And then you initialize an agent. You act on the state. You tell the agent, act on the state. And then the agent returns the action that you need to do. You try to execute the action and get the reward.
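That act/learn cycle can be sketched like this. DQNAgent, act, and learn follow Reinforce.js's documented API, but the state and action sizes and hyperparameters here are made-up examples, and the functions are written to take the library object as a parameter rather than assume a global:

```javascript
// Build an agent for a hypothetical 4-number state (actor x/y, target x/y)
// with 4 possible actions (up, down, left, right).
function createAgent(RL) {
  const env = {
    getNumStates: () => 4,
    getMaxNumActions: () => 4,
  };
  return new RL.DQNAgent(env, { alpha: 0.01, gamma: 0.9, epsilon: 0.2 });
}

// One step of the loop: act on the state, apply the action, learn the reward.
function step(agent, state, executeAction, computeReward) {
  const action = agent.act(state);             // agent picks an action index
  const newState = executeAction(action);      // apply it to the game
  agent.learn(computeReward(state, newState)); // reinforce the last action
  return newState;
}
```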

8. Teaching the Agent to Learn

Short description:

To make the agent learn, we initialize it, act on the state, get the reward, and learn from it. The AI.js code is straightforward: it gets the actor and target, calculates the distance between them, acts on their coordinates, invokes the actions, gets the new coordinates, and determines the reward based on the distance to the target. We can now watch the computer play the game, earning positive rewards for getting closer to the mushroom and negative rewards for moving away from it.

And then you tell the agent to learn from the reward. And it's that easy. That's it. You just initialize the agent. You act on the state. You get the reward. You learn from the reward. And then in the loop, you do it all over again.

So that's what we're going to use. We're going to change game.js to expose our actors, the actor and the target, and to allow us to fire actions. And we have here the AI.js code; I'll share the code later. But basically, it's really easy. It says: okay, get the actor and the target, get the distance between them, act on their coordinates, invoke the action, get the new coordinates, and then get the reward. The reward is: did you get closer to the target? Then you get a positive reward. If you got further from the target, you get a negative reward. And we learn from this reward.
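The reward rule just described, positive when the step reduced the distance to the target and negative when it increased it, is a few lines of code (a sketch; the +1/-1 values are the simplest choice, not necessarily the talk's exact constants):

```javascript
// Euclidean distance between two points with x/y fields.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Compare the distance to the target before and after the action.
function computeReward(prevActor, newActor, target) {
  const before = distance(prevActor, target);
  const after = distance(newActor, target);
  return before > after ? 1 : -1; // closer: +1, otherwise: -1
}
```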

So now we can just let the computer play for us. You can see it on the left: our agent playing and exploring. It gets a positive reward if it gets closer to the mushroom, and a negative reward if it gets further from the mushroom.

9. Computer Learning and Model Optimization

Short description:

The computer learns to get closer to the mushroom and improves over time. The rewards are negative as it moves further from the next mushroom or cake. The model can be saved and loaded for better performance. When the agent reaches higher levels, it becomes very good.

You can see it gets stuck, but quickly it gets to the mushroom, and it learns to get closer to the mushroom. That's the computer playing, and it's learning and learning. You can see the cumulative rewards are negative, because every time it takes a mushroom or a cake, it gets further from the next one. And since everything is modeled in the network, we can actually save this model and then load it. So if I load a previously trained model, my agent should be better: it should use all the knowledge it already learned in previous iterations in order to play. I'll share this website later and you can just play with it, and you can see that when it gets to 30, 40, 50, it gets really, really good. So cool.

10. Exploring AI and Modifying Rewards

Short description:

Now we have AI, which is very good. We can add other tiles and change game logic without modifying the AI code. Rick can collect planets in a space-themed game. By changing the reward, Rick learns to get further from Morty. Rewards for getting closer to something and getting further from Morty can be added.

Now we have AI, which is very good. And now we can start a little bit to explore with that. Yeah, I think we deserve a level up for that.

Okay, so now we get to the final chapter, the River of Opportunities. Since we have AI, and the AI is not connected to the game at all, we can add other tiles and different game logic, and the AI will still be able to play. So let's say I draw a lot of other tiles now: I draw Rick and Morty and some stars, and I want my game to be in space, so I put my game in space. I don't change the AI code at all, and I just start, and I let my Rick try to collect the planets. You can see it collects the planets. I loaded the previously trained model from the mushroom game, but it doesn't matter, because all I changed was the graphics and some of the logic of the planets. You can see that Rick is still a little bit confused, but it won't take long until it takes Mars as well, and then the cake. So that's really nice, and if you let it run, it gets really, really good.

We can also add another player and change the reward. Instead of rewarding the agent for getting closer to something, we can reward it for getting further from something. So let's say we have Rick and Morty, with Morty running around here; we input Rick's coordinates and Morty's coordinates, and we change the reward: instead of rewarding for getting closer, we reward for getting further, so we add a minus here. Then we can see Rick trying to escape Morty. I can't load anything here, because the model I trained earlier was about getting closer, so now we need to let Rick learn on his own to get further from Morty. You can see that whenever Morty gets closer, Rick moves away, and eventually he gets really, really good at that, and the rewards get higher and higher. Well, I think Morty has Rick stuck in the corner here.
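The "add a minus" change is literally a sign flip on the same distance-delta rule (a sketch, reusing the +1/-1 convention from before):

```javascript
// Flipping the sign turns "chase" into "flee": Rick is now rewarded
// for increasing his distance from Morty instead of decreasing it.
function fleeReward(prevRick, newRick, morty) {
  const before = Math.hypot(prevRick.x - morty.x, prevRick.y - morty.y);
  const after = Math.hypot(newRick.x - morty.x, newRick.y - morty.y);
  return -(before > after ? 1 : -1); // the added minus: further away is now +1
}
```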

The last thing you can do is combine the rewards: a reward for getting closer to the target and a reward for getting further from Morty. Let's see how it works. You see the reward here is minus one for getting closer to Morty and plus one for getting closer to the target. Here, Rick gets a little more confused, because he needs to navigate the space with both constraints in mind. For example, in this situation, where Morty is very close to the target, he doesn't know what to do, and you can see the rewards go negative very fast.

11. The Journey of Game Development

Short description:

We went through different stages in our game development journey, from the village of boredom to the pit of despair, and then to the valley of pixels where we learned game building. We reached the middle of efficiency and the mountain of wisdom, where we added AI to our games. Finally, we arrived at the river of opportunities, realizing the endless possibilities with our model. So, don't just dream, take action and implement your ideas. You're awesome!

But again, if you let it run for a few minutes, it gets really, really good at that. So what did we have? We had the village of boredom, which led us to the pit of despair. Then we had the valley of pixels, where we learned how to build games, and the middle of efficiency. Then the mountain of wisdom, where we learned how to add AI to the games. And finally the river of opportunities, where we saw that once we have this model, we can do anything with it.

So don't let your dreams be dreams. Just do it. Just do it. If you think about something, just read about it and try to implement it. And that's it. You're awesome. I was Liad. Here you can see I have the repository and the website to play with, and I'll share the presentation.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

JS GameDev Summit 2022JS GameDev Summit 2022
33 min
Building Fun Experiments with WebXR & Babylon.js
During this session, we’ll see a couple of demos of what you can do using WebXR, with Babylon.js. From VR audio experiments, to casual gaming in VR on an arcade machine up to more serious usage to create new ways of collaboration using either AR or VR, you should have a pretty good understanding of what you can do today.
Check the article as well to see the full content including code samples: article. 
JSNation Live 2021JSNation Live 2021
27 min
Building Brain-controlled Interfaces in JavaScript
Neurotechnology is the use of technological tools to understand more about the brain and enable a direct connection with the nervous system. Research in this space is not new, however, its accessibility to JavaScript developers is.Over the past few years, brain sensors have become available to the public, with tooling that makes it possible for web developers to experiment building brain-controlled interfaces.As this technology is evolving and unlocking new opportunities, let's look into one of the latest devices available, how it works, the possibilities it opens up, and how to get started building your first mind-controlled app using JavaScript.
React Summit 2023React Summit 2023
32 min
How Not to Build a Video Game
In this talk we'll delve into the art of creating something meaningful and fulfilling. Through the lens of my own journey of rediscovering my passion for coding and building a video game from the ground up with JavaScript and React, we will explore the trade-offs between easy solutions and fast performance. You will gain valuable insights into rapid prototyping, test infrastructure, and a range of CSS tricks that can be applied to both game development and your day-to-day work.
6 min
Charlie Gerard's Career Advice: Be intentional about how you spend your time and effort
Featured Article
When it comes to career, Charlie has one trick: to focus. But that doesn’t mean that you shouldn’t try different things — currently a senior front-end developer at Netlify, she is also a sought-after speaker, mentor, and a machine learning trailblazer of the JavaScript universe. "Experiment with things, but build expertise in a specific area," she advises.

What led you to software engineering?My background is in digital marketing, so I started my career as a project manager in advertising agencies. After a couple of years of doing that, I realized that I wasn't learning and growing as much as I wanted to. I was interested in learning more about building websites, so I quit my job and signed up for an intensive coding boot camp called General Assembly. I absolutely loved it and started my career in tech from there.
 What is the most impactful thing you ever did to boost your career?I think it might be public speaking. Going on stage to share knowledge about things I learned while building my side projects gave me the opportunity to meet a lot of people in the industry, learn a ton from watching other people's talks and, for lack of better words, build a personal brand.
 What would be your three tips for engineers to level up their career?Practice your communication skills. I can't stress enough how important it is to be able to explain things in a way anyone can understand, but also communicate in a way that's inclusive and creates an environment where team members feel safe and welcome to contribute ideas, ask questions, and give feedback. In addition, build some expertise in a specific area. I'm a huge fan of learning and experimenting with lots of technologies but as you grow in your career, there comes a time where you need to pick an area to focus on to build more profound knowledge. This could be in a specific language like JavaScript or Python or in a practice like accessibility or web performance. It doesn't mean you shouldn't keep in touch with anything else that's going on in the industry, but it means that you focus on an area you want to have more expertise in. If you could be the "go-to" person for something, what would you want it to be? 
 And lastly, be intentional about how you spend your time and effort. Saying yes to everything isn't always helpful if it doesn't serve your goals. No matter the job, there are always projects and tasks that will help you reach your goals and some that won't. If you can, try to focus on the tasks that will grow the skills you want to grow or help you get the next job you'd like to have.
What are you working on right now?

Recently I've taken a pretty big break from side projects, but the next one I'd like to work on is a prototype of a tool that would allow hands-free coding using gaze detection.
Do you have some rituals that keep you focused and goal-oriented?

Usually, when I come up with a side project idea I'm really excited about, that excitement is enough to keep me motivated. That's why I tend to avoid spending time on things I'm not genuinely interested in. Otherwise, breaking down projects into smaller chunks allows me to fit them better in my schedule. I make sure to take enough breaks, so I maintain a certain level of energy and motivation to finish what I have in mind.
You wrote a book called Practical Machine Learning in JavaScript. What got you so excited about the connection between JavaScript and ML?

The release of TensorFlow.js opened up the world of ML to frontend devs, and this is what really got me excited. I had machine learning on my list of things I wanted to learn for a few years, but I didn't start looking into it before because I knew I'd have to learn another language as well, like Python, for example. As soon as I realized it was now available in JS, that removed a big barrier and made it a lot more approachable. Considering that you can use JavaScript to build lots of different applications, including augmented reality, virtual reality, and IoT, and combine them with machine learning as well as some fun web APIs felt super exciting to me.

Where do you see the fields going together in the future, near or far?

I'd love to see more AI-powered web applications in the future, especially as machine learning models get smaller and more performant. However, it seems like the adoption of ML in JS is still rather low. Considering the amount of content we post online, there could be great opportunities to build tools that assist you in writing blog posts or that can automatically edit podcasts and videos. There are lots of tasks we do that feel cumbersome that could be made a bit easier with the help of machine learning.
You are a frequent conference speaker. You have your own blog and even a newsletter. What made you start with content creation?

I realized that I love learning new things because I love teaching. I think that if I kept what I know to myself, it would be pretty boring. If I'm excited about something, I want to share the knowledge I gained, and I'd like other people to feel the same excitement I feel. That's definitely what motivated me to start creating content.
How has content affected your career?

I don't track any metrics on my blog or likes and follows on Twitter, so I don't know what created different opportunities. Creating content to share something you built improves the chances of people stumbling upon it and learning more about you and what you like to do, but this is not something that's guaranteed. I think over time, I accumulated enough projects, blog posts, and conference talks that some conferences now invite me, so I don't always apply anymore. I sometimes get invited on podcasts and asked if I want to create video content and things like that. Having a backlog of content helps people better understand who you are and quickly decide if you're the right person for an opportunity.

What pieces of your work are you most proud of?

It is probably that I've managed to develop a mindset where I set myself hard challenges on my side project, and I'm not scared to fail and push the boundaries of what I think is possible. I don't prefer a particular project, it's more around the creative thinking I've developed over the years that I believe has become a big strength of mine.

***

Follow Charlie on Twitter

Workshops on related topics

JSNation 2023
116 min
Make a Game With PlayCanvas in 2 Hours
Featured Workshop (Free)
In this workshop, we’ll build a game using the PlayCanvas WebGL engine from start to finish. From development to publishing, we’ll cover the most crucial features such as scripting, UI creation and much more.
Table of contents:
- Introduction
- Intro to PlayCanvas
- What we will be building
- Adding a character model and animation
- Making the character move with scripts
- 'Fake' running
- Adding obstacles
- Detecting collisions
- Adding a score counter
- Game over and restarting
- Wrap up!
- Questions
Workshop level: Familiarity with game engines and game development aspects is recommended, but not required.
JS GameDev Summit 2022
121 min
PlayCanvas End-to-End : the quick version
In this workshop, we’ll build a complete game using the PlayCanvas engine while learning the best practices for project management. From development to publishing, we’ll cover the most crucial features such as asset management, scripting, audio, debugging, and much more.
JS GameDev Summit 2022
86 min
Introduction to WebXR with Babylon.js
In this workshop, we'll introduce you to the core concepts of building Mixed Reality experiences with WebXR and Babylon.js.
You'll learn the following:
- How to add 3D mesh objects and buttons to a scene
- How to use procedural textures
- How to add actions to objects
- How to take advantage of the default Cross Reality (XR) experience
- How to add physics to a scene
For the first project in this workshop, you'll create an interactive Mixed Reality experience that'll display basketball player stats to fans and coaches. For the second project in this workshop, you'll create a voice-activated WebXR app using Babylon.js and Azure Speech-to-Text. You'll then deploy the web app using Static Website Hosting provided by Azure Blob Storage.
ML conf EU 2020
160 min
Hands on with TensorFlow.js
Come check out our workshop, which will walk you through 3 common journeys when using TensorFlow.js. We will start by demonstrating how to use one of our pre-made models - super easy to use JS classes to get you working with ML fast. We will then look into how to retrain one of these models in minutes using in-browser transfer learning via Teachable Machine, and how that can then be used on your own custom website. Finally, we'll end with a hello world of writing your own model code from scratch to make a simple linear regression to predict fictional house prices based on their square footage.
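To give a flavor of that final "hello world" exercise, here is a minimal sketch of the same idea - fitting a line to fictional square-footage/price data - written in plain JavaScript rather than TensorFlow.js, so the underlying math is visible without installing the library. The data and function names are invented for illustration only.

```javascript
// Fit price = w * sqft + b by ordinary least squares (closed form),
// the same relationship the TensorFlow.js exercise learns by gradient descent.
function fitLine(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, x) => a + x, 0) / n;
  const meanY = ys.reduce((a, y) => a + y, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY); // covariance term
    den += (xs[i] - meanX) ** 2;              // variance term
  }
  const w = num / den;          // slope: price per square foot
  const b = meanY - w * meanX;  // intercept
  return { w, b, predict: (x) => w * x + b };
}

// Fictional, perfectly linear data so the fit is exact.
const sqft = [500, 1000, 1500, 2000];
const price = [100000, 200000, 300000, 400000];
const model = fitLine(sqft, price);
console.log(model.predict(1200)); // 240000 for this exactly linear data
```

In the workshop itself this would be expressed as a one-unit dense model trained with an optimizer; the closed-form version above just shows what that model converges toward on linear data.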