Migration from WebGL to WebGPU


In this presentation, I'll explore the transition from WebGL to WebGPU, demonstrating how these changes affect game development. The talk will include practical examples and code snippets to illustrate key differences and their implications on performance and efficiency.

21 min
28 Sep, 2023

Video Summary and Transcription

This talk explores the differences between WebGL and WebGPU, with a focus on transitioning from WebGL to WebGPU. It discusses the initialization process and shader programs in both APIs, as well as the creation of pipelines in WebGPU. The comparison of uniforms highlights the use of uniform buffers for improved performance. The talk also covers the differences in conventions between WebGL and WebGPU, including textures, viewport and clip spaces. Lastly, it mentions the differences in depth range and projection matrix between the two APIs.


1. Introduction to WebGL and WebGPU

Short description:

In this talk, we will explore the differences between WebGL and the soon-to-be-released WebGPU and learn how to get a project ready for the transition. WebGL's lineage dates back to desktop OpenGL in 1993, and the first stable version, WebGL 1.0, was released in 2011. WebGL 2.0, released in 2017, brought several improvements and new features. WebGPU, built on Vulkan, Direct3D 12, and Metal, has been making significant progress and is supported by several engines.

Hello, everyone. I am Dmitry Vaschenko, a Lead Software Engineer at My.Games. In this talk, we will explore the differences between WebGL and the soon-to-be-released WebGPU and learn how to get a project ready for the transition.

Let's begin by exploring the timeline of WebGL and WebGPU, as well as their current state. WebGL, like many technologies, has roots that reach back decades. Its desktop ancestor, OpenGL, debuted way back in 1993. In 2011, WebGL 1.0 was released as the first stable version of WebGL. It was based on OpenGL ES 2.0, which was introduced in 2007, and this release allowed web developers to incorporate 3D graphics into browsers without requiring extra plugins. In 2017, a new version called WebGL 2.0 was introduced. Released six years after the initial version, it was based on OpenGL ES 3.0, which had been released in 2012. WebGL 2.0 came with several improvements and new features, making it even more capable of producing powerful 3D graphics on the web.

Lately, there has been growing interest in new graphics APIs that offer developers more control and flexibility. Three notable APIs here are Vulkan, Direct3D 12, and Metal, and together they form the foundation for WebGPU. Vulkan, developed by the Khronos Group, is a cross-platform API that provides developers with lower-level access to graphics hardware resources, enabling high-performance applications with better control of the graphics hardware. Direct3D 12, created by Microsoft, is exclusive to Windows and Xbox and offers developers deeper control over graphics resources. And Metal, an API exclusive to Apple devices, was designed by Apple with the maximum performance of their hardware in mind. WebGPU has been making significant progress lately. It has expanded to platforms like Mac, Windows and ChromeOS, and is now available in Chrome and Edge starting with version 113; Linux and Android support is expected to be added soon. There are several engines that either support or are experimenting with WebGPU. For example, Babylon.js fully supports WebGPU, while Three.js currently has experimental support. PlayCanvas support is still in development, but its future looks promising. Unity announced early, experimental WebGPU support in alpha version 2023.2. Cocos Creator 3.6.2 officially supports WebGPU. And finally, Construct currently supports it only in Chrome 113 or later on Windows, macOS and ChromeOS machines. Taking this into consideration, it seems like a wise move to start transitioning towards WebGPU, or at least preparing projects for a future transition. Now let's explore the main high-level differences.

2. Graphics API Initialization and Shader Programs

Short description:

When working with graphics APIs like WebGL and WebGPU, the first step is to initialize the main object for interaction. WebGL uses a context that represents an interface for drawing on a specific HTML5 canvas element, while WebGPU introduces the concept of a device that provides more flexibility. In WebGL, the shader program is the primary focus, and creating a program involves multiple steps. However, this process can be complicated and error-prone.

When beginning to work with graphics APIs, the first step is to initialize the main object for interaction. This process has some differences between WebGL and WebGPU, which can cause some issues in both systems. In WebGL this object is called a context, and the context represents an interface for drawing on an HTML5 canvas element. Obtaining this context is easy, but it's important to note that it's tied to a specific canvas. This means that if you need to render on multiple canvases, you will need multiple contexts.
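As a minimal sketch, obtaining that context is a single call on the canvas element (the WebGL2-first fallback chain below is an illustrative pattern, not something prescribed by the talk):

```javascript
// Obtain a WebGL rendering context, preferring WebGL2 when available.
// Each context is permanently tied to the canvas it was created from.
function getWebGLContext(canvas) {
  const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
  if (!gl) {
    throw new Error('WebGL is not supported in this browser');
  }
  return gl; // rendering to a second canvas requires a second context
}
```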

WebGPU introduces a new concept called a device. The device represents a GPU abstraction that you will interact with. The initialization process is a bit more complex than in WebGL, but it provides more flexibility. One advantage of this model is that one device can render on multiple canvases, or even none. This provides additional flexibility, allowing one device to control rendering in multiple windows or contexts.
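A rough sketch of that initialization, assuming a single optional canvas (the function name and structure are my own, not from the talk):

```javascript
// Minimal WebGPU initialization: one device, optionally attached to a canvas.
// Unlike a WebGL context, the device itself is not bound to any canvas.
async function initWebGPU(canvas) {
  if (!navigator.gpu) {
    throw new Error('WebGPU is not supported in this browser');
  }
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Configuring a canvas is optional; the same device can drive
  // several canvases, or render entirely offscreen.
  const context = canvas.getContext('webgpu');
  context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
  });
  return { device, context };
}
```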

WebGL and WebGPU take two distinct approaches to managing and organizing the graphics pipeline. In WebGL, the primary emphasis is on the shader program, which combines vertex and fragment shaders to determine how vertices are transformed and how each pixel is colored. To create a program in WebGL, you need to follow several steps. First, you write and compile the source code for the shaders. Next, you attach the compiled shaders to the program and then link it. Then you activate the program before rendering. And lastly, you transmit data to the activated program. This process provides flexible control over graphics but can be complicated and prone to errors, particularly for large and complex projects.
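The compile-attach-link steps above can be sketched as one helper (a common pattern, with my own error handling added):

```javascript
// The classic WebGL program setup: compile both shaders, attach, link.
// Every step can fail independently, which is part of why this flow
// is considered error-prone.
function createProgram(gl, vertexSource, fragmentSource) {
  const compile = (type, source) => {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
  };
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  return program; // activate later with gl.useProgram(program)
}
```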

3. WebGPU Pipeline Creation

Short description:

In WebGPU, a pipeline replaces separate programs and includes shaders and other rendering parameters. Creating a pipeline involves defining the shader, creating the pipeline, and activating it before rendering. This approach simplifies the process and allows for optimized and efficient graphics on the web.

When developing graphics for the web, it's essential to have a streamlined and efficient process. And in WebGPU, this is achieved through the use of a pipeline. The pipeline replaces the need for separate programs and includes not only shaders but also other critical information that is established as state in WebGL. Creating a pipeline in WebGPU may seem more complicated initially, but it offers greater flexibility and modularity. The process involves three key steps.

First, you must define the shader by writing and compiling the shader source code just as you would in WebGL. Second, you create the pipeline by combining the shaders and other rendering parameters into a cohesive unit. And finally, you must activate the pipeline before rendering. Compared to WebGL, WebGPU encapsulates more aspects of rendering into a single object. This approach creates a more predictable and error-resistant process, and instead of managing shaders and rendering states separately, everything is combined into one pipeline object. By following these steps, developers can create optimized and efficient graphics for the web with ease.
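Those three steps can be sketched as follows (the entry point names `vs_main` and `fs_main` and the overall structure are illustrative assumptions, not from the talk):

```javascript
// A minimal WebGPU render pipeline: shaders plus fixed-function state
// (vertex stage, fragment targets, topology) combined into one object.
function createPipeline(device, shaderCode, canvasFormat) {
  // Step 1: define the shader by compiling its source into a module.
  const module = device.createShaderModule({ code: shaderCode });

  // Step 2: combine the shaders and rendering parameters into a pipeline.
  return device.createRenderPipeline({
    layout: 'auto',
    vertex: { module, entryPoint: 'vs_main' },
    fragment: {
      module,
      entryPoint: 'fs_main',
      targets: [{ format: canvasFormat }],
    },
    primitive: { topology: 'triangle-list' },
  });
  // Step 3: activate it during rendering with passEncoder.setPipeline(...)
}
```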

4. Comparison of Uniforms in WebGL and WebGPU

Short description:

Uniform variables in WebGL and WebGPU can be consolidated into larger structures using uniform buffers, leading to fewer API calls and improved performance. WebGL2 allows subsets of a large uniform buffer to be bound through the bindBufferRange API call, while WebGPU uses dynamic uniform buffer offsets. These optimizations provide flexibility and efficiency for developers working on WebGL and WebGPU projects.

Now, let's compare uniforms in WebGL and WebGPU. Uniform variables offer constant data that can be accessed by all shader instances, and with basic WebGL, we can set uniform variables directly via API calls. This approach is straightforward but necessitates multiple API calls, one for each uniform variable. With the advent of WebGL2, developers are now able to group uniform variables into buffers, a highly efficient alternative to setting separate uniforms. By consolidating different uniforms into a larger structure using uniform buffers, all uniform data can be transmitted to the GPU at once, leading to fewer API calls and superior performance. In the case of WebGL2, subsets of a large uniform buffer can be bound through a special API call known as bindBufferRange. Similarly, in WebGPU, dynamic uniform buffer offsets are used for the same purpose, allowing a list of offsets to be passed when invoking the setBindGroup API. This level of flexibility and optimization has made uniform buffers a valuable tool for developers looking to optimize their WebGL and WebGPU projects.
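The WebGL2 side of this can be sketched as below (the helper name and binding point 0 are my own choices; the alignment note reflects the spec's UNIFORM_BUFFER_OFFSET_ALIGNMENT requirement):

```javascript
// WebGL2 uniform buffer sketch: upload a block of uniform data once,
// then expose just a slice of it to the shader with bindBufferRange.
// Offsets must be multiples of UNIFORM_BUFFER_OFFSET_ALIGNMENT
// (commonly 256 bytes); query it via gl.getParameter.
function bindUniformSlice(gl, program, blockName, data, offset, size) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.UNIFORM_BUFFER, buffer);
  gl.bufferData(gl.UNIFORM_BUFFER, data, gl.DYNAMIC_DRAW);

  // Associate the shader's uniform block with binding point 0,
  // then bind only the [offset, offset + size) range of the buffer.
  const blockIndex = gl.getUniformBlockIndex(program, blockName);
  gl.uniformBlockBinding(program, blockIndex, 0);
  gl.bindBufferRange(gl.UNIFORM_BUFFER, 0, buffer, offset, size);
  return buffer;
}
```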

5. Transitioning from WebGL to WebGPU

Short description:

Instead of supporting individual Uniform Variables, work is exclusively done through Uniform Buffers. Loading data in one large block is preferred by modern GPUs instead of many small ones. Transitioning from WebGL to WebGPU involves modifying both the API and shaders. The WGSL specification facilitates a seamless and intuitive transition while ensuring optimal efficiency and performance for contemporary GPUs. If you are working with WGSL, you will notice that some of the built-in GLSL functions have different names or have been replaced. There are tools available that can automate the process of converting GLSL to WGSL. Let's talk about some of the differences in conventions between WebGL and WebGPU. Specifically, we will go over disparities in textures, viewport and clip spaces. When you migrate, you may come across an unexpected issue where your images are flipped.

A better method is available through WebGPU. Instead of supporting individual Uniform Variables, work is exclusively done through Uniform Buffers. Loading data in one large block is preferred by modern GPUs instead of many small ones. Rather than recreating and rebinding small buffers each time, creating one large buffer and using different parts of it for different draw calls can significantly increase performance. And while WebGL is more imperative, resetting global state with each call and striving to be as simple as possible, WebGPU aims to be more object-oriented and focused on resource reuse, which leads to efficiency, of course.
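The one-large-buffer approach can be sketched like this in WebGPU (function names and the draw of 3 vertices are illustrative; the 256-byte default comes from the device's minUniformBufferOffsetAlignment limit):

```javascript
// One large uniform buffer shared by many draw calls. Each draw selects
// its slice via a dynamic offset passed to setBindGroup, instead of
// recreating and rebinding a small buffer every time.

// WebGPU requires dynamic offsets to be aligned (256 bytes by default).
function alignTo(offset, alignment = 256) {
  return Math.ceil(offset / alignment) * alignment;
}

function drawMany(passEncoder, pipeline, bindGroup, drawCount, sliceSize) {
  passEncoder.setPipeline(pipeline);
  const stride = alignTo(sliceSize);
  for (let i = 0; i < drawCount; i++) {
    // the bind group layout entry must declare hasDynamicOffset: true
    passEncoder.setBindGroup(0, bindGroup, [i * stride]);
    passEncoder.draw(3);
  }
}
```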

Although transitioning from WebGL to WebGPU may seem difficult due to the differences in methods, starting with a transition to WebGL2 as an intermediate step can simplify the work. Transitioning from WebGL to WebGPU involves modifying both the API and the shaders. The WGSL specification facilitates a seamless and intuitive transition while ensuring optimal efficiency and performance for contemporary GPUs. I have an example shader for a texture that uses GLSL and WGSL. WGSL serves as a bridge between WebGPU and native graphics APIs. Although WGSL appears more verbose than GLSL, the format is still recognizable. The following tables display a comparison between the basic and matrix datatypes found in GLSL and WGSL. Moving from GLSL to WGSL indicates a preference for stricter typing and clear specification of data sizes, resulting in better legibility and a lower chance of mistakes. The method of declaring structures has been altered with the addition of explicit syntax for declaring fields in WGSL structures, and this highlights the push for improved clarity and simplification of data structures in shaders. By altering the syntax of functions, WGSL promotes a unified approach to declarations and return values, which results in more consistent and predictable code.
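As a rough sketch of what such a texture shader looks like on the WGSL side (stored here as a JavaScript string; the binding numbers and entry point name are illustrative assumptions, not the talk's exact example):

```javascript
// WGSL fragment shader for sampling a texture. Note the explicit
// attributes (@group, @binding, @location) and typed declarations
// WGSL requires, compared with GLSL's sampler2D and texture() call.
const textureShaderWGSL = /* wgsl */ `
  @group(0) @binding(0) var texSampler: sampler;
  @group(0) @binding(1) var tex: texture_2d<f32>;

  @fragment
  fn fs_main(@location(0) uv: vec2f) -> @location(0) vec4f {
    return textureSample(tex, texSampler, uv);
  }
`;
```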

If you are working with WGSL, you will notice that some of the built-in GLSL functions have different names or have been replaced. This is actually helpful because it simplifies the function names and makes them more intuitive, which will make it easier for developers who are familiar with other graphics APIs to transition to WGSL. If you are planning to convert your WebGL projects to WebGPU, there are tools available that can automate the process of converting GLSL to WGSL. One such tool is Naga, a Rust library that can be used to convert GLSL to WGSL, and best of all, it can even be used right in your browser with the help of WebAssembly.

Let's talk about some of the differences in conventions between WebGL and WebGPU. Specifically, we will go over disparities in textures, viewport and clip spaces. When you migrate, you may come across an unexpected issue where your images are flipped. This is a common problem for those who have moved applications from OpenGL to Direct3D. In OpenGL and WebGL, images are usually loaded so that the first pixel is in the bottom left corner. However, many developers load images starting from the top left corner, which results in a flipped image. Direct3D and Metal use the upper left corner as the starting point for textures, and the developers of WebGPU have decided to follow this practice, since it appears to be the most straightforward approach for most developers. If your WebGL code reads pixels from the frame buffer, it's important to keep in mind that WebGPU uses a different coordinate system. To adjust for this, you may need to apply a straightforward y = 1 - y operation to correct the coordinates.
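That flip boils down to two tiny helpers (the names are my own, for illustration):

```javascript
// WebGL's frame buffer origin is bottom-left; WebGPU's is top-left.
// For normalized texture coordinates, the correction is y = 1 - y:
const flipV = (v) => 1 - v;

// For integer pixel reads, flip against the buffer height instead:
const flipPixelY = (y, height) => height - 1 - y;
```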

6. Differences in Depth Range and Projection Matrix

Short description:

WebGL and WebGPU have different definitions for the depth range of the clipping space. WebGL uses a range from minus one to one, while WebGPU uses a range from zero to one. The projection matrix is responsible for transforming the positions of your model into clip space. Adjustments can be made by ensuring the projection matrix generates outputs ranging from zero to one. Transitioning to WebGPU is a step towards the future of web graphics, combining successful features and practices from various graphics APIs.

If a developer encounters a problem where objects are disappearing or being clipped too soon, it may be due to differences in the depth domain. WebGL and WebGPU have different definitions for the depth range of the clipping space. While WebGL uses a range from minus one to one, WebGPU uses a range from zero to one, which is similar to other graphics APIs like Direct3D, Metal and Vulkan. This decision was made based on the advantages of using a range from zero to one that were discovered while working with other graphics APIs.

So the projection matrix is primarily responsible for transforming the positions of your model into clip space, and one useful way to adjust your code is to ensure that the projection matrix generates outputs ranging from zero to one. This can be achieved by using certain functions available in libraries like gl-matrix, such as the perspectiveZO function. Other matrix libraries offer comparable functions that you can use. And in the event that you are working with an existing projection matrix that cannot be modified, there is still a solution. You can transform the projection matrix to fit the 0 to 1 range by applying another matrix that modifies the depth range on top of the projection matrix. This pre-multiplication technique can be an effective way to adjust the range of your projection matrix to fit your needs.
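As a sketch of that pre-multiplication technique (assuming column-major Float32Array matrices as used by gl-matrix; the function name is my own), the fixed matrix remaps z' = 0.5 * z + 0.5, so only the depth row of the product changes:

```javascript
// Convert a WebGL-style projection matrix (clip-space depth -1..1) into
// a WebGPU-style one (depth 0..1) by pre-multiplying with a matrix that
// remaps z' = 0.5 * z + 0.5. Matrices are column-major, 16 elements.
function toZeroToOneDepth(proj) {
  const out = Float32Array.from(proj);
  for (let col = 0; col < 4; col++) {
    // row 2 (the depth row) of the product mixes the old z and w rows
    out[2 + 4 * col] = 0.5 * proj[2 + 4 * col] + 0.5 * proj[3 + 4 * col];
  }
  return out;
}
```

When you control how the matrix is built in the first place, gl-matrix's mat4.perspectiveZO produces the 0-to-1 depth range directly, without this extra step.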

So, as you see, transitioning to WebGPU is more than just switching graphic APIs. It's a step towards the future of web graphics, combining successful features and practices from various graphics APIs.
