A primitive 2D game engine in Rust + wgpu
Reinventing the wheel to do in four months what Unity could do in a couple of hours
Published on Feb 2, 2025
I spent a good chunk of my free time over the past four months working on a 2D game engine in Rust. This culminated in a very basic 2D platformer I made. The engine is built on top of the Entity Component System, or ECS for short.
I was very interested in how graphics worked under the hood. I took a class at university 3 years ago that taught me some basics, but I was itching to get back into it. The combination of technical and creative discipline, along with a really quick feedback loop, drew me in. I was always enamored with game engines and the idea of creating one, just because it always seemed like a huge technical challenge. I was never one to shy away from a challenge, so I finally got around to making one.
This is what the current game looks like:
As you can see, it is pretty basic on the gameplay front. But a lot of things have to happen in the background:
- request the GPU device for the current platform (macOS, Windows, Chrome, etc.)
- do the meticulous setup between the GPU and the CPU that allows for data transfer using wgpu (render pipelines, bind groups, uniform buffers, storage buffers, etc.)
- load and specify each texture in advance and sample it onto a quad so it appears on the screen
- create an orthographic projection camera to provide an API that lets humans reason about what goes on the screen (sketched below)
- do the matrix multiplications to transform from world coordinates to screen coordinates (and between multiple different GPU coordinate spaces)
- perform tons of physics calculations for collision detection, movement, and applying jump acceleration/velocity
- register the lights in the scene and adjust the shade/color of the scene appropriately
- adjust all elements on the screen properly when the window is resized
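To make the camera and coordinate-space items above concrete, here is a minimal sketch of an orthographic 2D camera. It assumes glam-style math types and a uniform buffer on the GPU side; the names are illustrative, not the engine's actual API.

```rust
use glam::{Mat4, Vec3};

// Hypothetical 2D camera: converts world units into wgpu clip space.
struct Camera2d {
    position: Vec3, // world-space camera position
    width: f32,     // visible world width
    height: f32,    // visible world height
}

impl Camera2d {
    /// Builds the view-projection matrix that would be uploaded to a uniform buffer.
    fn view_proj(&self) -> Mat4 {
        // glam's *_rh orthographic helper targets the 0..1 depth range wgpu expects.
        // Arguments: left, right, bottom, top, near, far.
        let proj = Mat4::orthographic_rh(0.0, self.width, 0.0, self.height, -1.0, 1.0);
        // The "view" part is just undoing the camera translation.
        let view = Mat4::from_translation(-self.position);
        proj * view
    }
}
```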
There’s a pretty good reason not to create a game engine if your only focus is developing the game. Game engines take a ton of work to set up properly, and involve diving deep into a lot of things that aren’t game-dev specific. If your main goal is to turn your idea for a game into reality as a solo developer, it is not worth the effort to create your own when Unity/Unreal/Godot have already done it, and probably better than you could on your own. That said, if you are the type of person who isn’t satisfied until they understand how all the parts work and wants to dive deep into the technical problems, a game engine is a gift that keeps on giving.
There are a lot of things that I could talk about after working on this for the past couple of months, but I will try to highlight some of the areas that stood out to me.
Challenges
Setting up a reasonable rendering pipeline
Using a low-level rendering API like WebGPU/Vulkan/Metal means you have a lot more fine-grained control over how you render things, at the cost of higher development complexity. There are a lot of things you need to set up just to draw a single triangle to the screen, let alone a more complicated scene with multiple objects, light sources, and textures. It took a while to get accustomed to how the pipeline works and how to coordinate different render passes. To minimize draw calls, I also implemented batched rendering, which means the vertex information for every object on the screen gets submitted in a single draw call. This adds more developer complexity, since you have to make sure all your vertex and index data stays aligned correctly.
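As a rough illustration of what that batching looks like (a sketch under my own assumptions, not the engine's actual code, and assuming bytemuck with its derive feature for the vertex type):

```rust
#[repr(C)]
#[derive(Clone, Copy, bytemuck::Pod, bytemuck::Zeroable)]
struct Vertex {
    position: [f32; 2],
    uv: [f32; 2],
    tex_index: u32, // which hardcoded texture binding to sample
}

/// Accumulates every sprite for the frame into one vertex/index list so the
/// whole scene can be drawn with a single indexed draw call.
#[derive(Default)]
struct SpriteBatch {
    vertices: Vec<Vertex>,
    indices: Vec<u16>,
}

impl SpriteBatch {
    fn push_quad(&mut self, quad: [Vertex; 4]) {
        // The index data has to stay aligned with the vertex data: each quad
        // contributes 4 vertices and 6 indices (two triangles).
        let base = self.vertices.len() as u16;
        self.vertices.extend_from_slice(&quad);
        self.indices
            .extend_from_slice(&[base, base + 1, base + 2, base + 2, base + 3, base]);
    }
}
```

At the end of the frame both lists would be written into GPU buffers (e.g. with queue.write_buffer) and rendered with one indexed draw covering every sprite.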
Having multiple textures was also a bit of a pain, because passing in arrays of textures is not currently supported on WebGPU. This means that to render multiple textures, you have to manually hardcode and enumerate the textures you want to use. This has been fine for now, but it could easily get hairy in a more complex project.
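Concretely, the Rust side ends up looking something like this sketch, where every texture slot is its own manually enumerated binding (TEXTURE_COUNT and the layout function here are illustrative, not my actual code):

```rust
const TEXTURE_COUNT: u32 = 4; // illustrative; the real list is hardcoded

// One bind group layout entry per texture slot, mirrored by an equally
// hardcoded list of bindings in the WGSL shader.
fn sprite_texture_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
    let entries: Vec<wgpu::BindGroupLayoutEntry> = (0..TEXTURE_COUNT)
        .map(|i| wgpu::BindGroupLayoutEntry {
            binding: i,
            visibility: wgpu::ShaderStages::FRAGMENT,
            ty: wgpu::BindingType::Texture {
                sample_type: wgpu::TextureSampleType::Float { filterable: true },
                view_dimension: wgpu::TextureViewDimension::D2,
                multisampled: false,
            },
            count: None,
        })
        .collect();
    device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
        label: Some("sprite textures"),
        entries: &entries,
    })
}
```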
One thing I did to make this manageable was write a WGSL preprocessor. I will expand on this further down in another section, because I think there’s more to talk about regarding WGSL.
Side adventures with compute shaders and stencil buffers
I wanted to learn more about how the stencil buffer works, so I created an outline mode: when toggled, the sprites get a blue outline. This was done using this technique, so I set out to implement it with the wgpu API. However, it was initially very tricky to reason about where things were going wrong, because I had a bunch of different render passes and could have been setting the stencil state up wrong in any of them.
In order to debug this, I wanted some way to display the stencil buffer values on the screen. This gave me the bright idea to use compute shaders for the task. Unfortunately, this led me down a road of 10+ hours of debugging through really tricky byte conversion and indexing math, only to figure out that everything went wrong because my screen width was not a multiple of 256. I didn’t actually fix this; I just resized my window to a width that worked and disabled this check at other widths.
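The 256 here most likely traces back to wgpu's copy alignment rule: when copying a texture to a buffer, bytes_per_row must be a multiple of wgpu::COPY_BYTES_PER_ROW_ALIGNMENT (which is 256), so for a 1-byte-per-pixel stencil readback the math only stays simple when the width itself is a multiple of 256. A sketch of the padding that would otherwise be needed:

```rust
// When copying a texture into a buffer, wgpu requires bytes_per_row to be a
// multiple of COPY_BYTES_PER_ROW_ALIGNMENT (256). Rows therefore get padded
// unless width * bytes_per_pixel already lands on that boundary.
fn padded_bytes_per_row(width: u32, bytes_per_pixel: u32) -> u32 {
    let unpadded = width * bytes_per_pixel;
    let align = wgpu::COPY_BYTES_PER_ROW_ALIGNMENT; // 256
    unpadded.div_ceil(align) * align
}

// Any CPU-side indexing then has to use the padded stride, e.g. for pixel (x, y):
// let offset = (y * padded_bytes_per_row(width, 1) + x) as usize;
```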
I guess the takeaway here is to be careful not to get too sidetracked, but given that this was a side project this doesn’t matter all that much.
Level design needs low friction
One way you could define the job of a game engine is as a system that allows for low-friction realization of game ideas. For most games, this would include an easy way to create levels. Having to change an x,y coordinate, wait a couple of seconds for the game to compile, and then see the result is too much friction for making slight adjustments to platform positioning. I discovered how painful this process was when actually trying to create a level myself.
That being said, the solution doesn’t have to be a full-blown editor, even though that is common in commercial engines. It could be something as simple as file hot-reloading, where you can update your values in real-time, or some kind of hook into a different tool for level design like Blender; I’ve seen the latter used here.
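Even the hot-reloading option can be pretty small. As a sketch (illustrative only, not something the engine currently does), a polling watcher that rereads the level file whenever its modification time changes would already remove most of the compile-and-wait loop; a real setup might use the notify crate instead of polling.

```rust
use std::fs;
use std::time::SystemTime;

/// Minimal polling-based hot reload: call once per frame and rebuild the level
/// whenever the file on disk changes.
struct LevelWatcher {
    path: &'static str,
    last_modified: Option<SystemTime>,
}

impl LevelWatcher {
    /// Returns the new file contents if the level file changed since the last poll.
    fn poll(&mut self) -> Option<String> {
        let modified = fs::metadata(self.path).ok()?.modified().ok()?;
        if self.last_modified != Some(modified) {
            self.last_modified = Some(modified);
            return fs::read_to_string(self.path).ok();
        }
        None
    }
}
```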
Porting to WASM
This was one of the final steps of the project and one that got frustrating at times.
Probably the worst part of debugging this was the developer experience compared with compiling to native. It required running a separate compile command that took 20 seconds, and then copying the output over to my web project to see the results every time a change was made. To make matters worse, the platform-specific code in Rust, denoted by #[cfg(target_arch = "wasm32")], has all LSP features disabled, so you get none of the Rust niceties.
There are a lot of things that work in native Rust but don’t work on wasm, such as accessing the file system and even the std implementations of time. Having to work around these was a bit of a pain, but not even the biggest hurdle.
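The time issue in particular tends to force cfg-gated shims. A minimal sketch of the kind of workaround involved (assuming web-sys with its Window and Performance features enabled; this is one of several possible approaches, not necessarily the one I used):

```rust
// std::time::Instant panics on wasm32-unknown-unknown, so timing code ends up
// split by target.
#[cfg(not(target_arch = "wasm32"))]
fn now_ms() -> f64 {
    use std::time::{SystemTime, UNIX_EPOCH};
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs_f64()
        * 1000.0
}

#[cfg(target_arch = "wasm32")]
fn now_ms() -> f64 {
    // Assumes the web-sys crate with the "Window" and "Performance" features.
    web_sys::window()
        .and_then(|w| w.performance())
        .map(|p| p.now())
        .unwrap_or(0.0)
}
```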
The first problem I ran into was trying to display things on the screen. The wasm-bindgen API was a bit tricky to get right, as both start and main appear in the docs as entry points for your application, and I picked the wrong one at first. There was also an issue with the canvas being size 0 at the very beginning, so nothing would render properly, EXCEPT 20% of the time it would. This weird non-deterministic behavior was because I would request the window to initialize to a certain size and then start initializing the state. However, the request is async and the API provides no way to wait on it, which meant that most of the time we would end up in a weird state where the size changed mid-initialization and corrupted the whole process. I eventually fixed this by adding a 100ms delay between the window size request and state initialization, but clearly a better solution is still needed.
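In miniature, the workaround looks roughly like the sketch below. It assumes a recent winit (where the method is request_inner_size; older versions call it set_inner_size) and the gloo-timers crate with its futures feature; the actual state setup is elided.

```rust
#[cfg(target_arch = "wasm32")]
async fn init_after_resize(window: &winit::window::Window) {
    use winit::dpi::PhysicalSize;

    // The browser applies this asynchronously and there is no way to await it...
    let _ = window.request_inner_size(PhysicalSize::new(800, 600));
    // ...so just wait a bit before touching the surface. Crude, but it unblocks init.
    gloo_timers::future::TimeoutFuture::new(100).await;
    // Only now is it (usually) safe to create the wgpu surface and the rest of the state.
}
```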
Another issue was managing the loading and unloading of wasm modules. My website is a SPA (Single-Page Application), which means that lots of things load once and never get dropped. This can be great for general performance, but not so great when you’re loading a huge wasm module that keeps running in the background once you leave the tab and unknowingly kills your performance. Since Javascript is garbage-collected, it’s very hard to explicitly deallocate the wasm module; even if the canvas is gone, there may be event listeners that prevent it from being collected. I eventually worked around this by following this thread and using an objectUrl to manually allocate/deallocate the wasm module.
Finally, there was adding resize functionality on wasm. Honestly, this was quite tricky and slightly hacky, in ways that are tightly coupled to my current website styling. I was reaching the end of my patience though, so it was probably the right tradeoff to make. I used a ResizeObserver to change the canvas size on the Rust side, and made sure it snapped to widths that are multiples of 256, since otherwise the stencil buffer wouldn’t show up. I then threw styling at the wall until something reasonable stuck.
This whole process of getting wasm to render correctly and somewhat reasonably took a lot more time than I had hoped, but it should come in handy for future Rust/wasm projects (at least that’s how I’m coping).
Takeaways
Current tooling around WGSL sucks
The TLDR is that the tooling around WGSL is basically nonexistent. There is a workable code-analysis plugin for VS Code, but that is about it. There is no equivalent to rust-analyzer for WGSL, or anything close to it. This means that a lot of the time you are writing code where it is hard to know if it will even compile until you run the executable. However, this isn’t the worst thing. That would be the lack of preprocessor capabilities. There is no concept of #include in WGSL, so if you don’t implement one yourself you will end up with either a really huge WGSL file or a bunch of copy-pasting. I got pretty tired of copy-pasting things around and eventually ended up writing my own crude preprocessor. It simply scanned the file for a statement like //#include uniform.wgsl and then pasted the contents of that file in. This has the downside of making the okay WGSL code analyzer a lot worse, since it doesn’t know about the directive, but it made my WGSL code so much more modular and easier to maintain that I was fine with the tradeoff. I occasionally pasted the included code in manually just to get better error analysis during development, but would remove it right after. I’m considering writing my own WGSL code analyzer that understands the preprocessor directives I wrote, but that might be a project for another day…
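The core of such a preprocessor is tiny. A rough sketch of the idea (not my exact implementation; it has no recursion or cycle detection):

```rust
use std::fs;
use std::path::Path;

/// Replaces any line of the form `//#include foo.wgsl` with the contents of
/// that file, resolved relative to the shader directory.
fn preprocess_wgsl(source: &str, shader_dir: &Path) -> String {
    source
        .lines()
        .map(|line| {
            if let Some(file) = line.trim().strip_prefix("//#include ") {
                fs::read_to_string(shader_dir.join(file.trim()))
                    .expect("included WGSL file should exist")
            } else {
                line.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```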
I learned the consequences of my own actions
Having to maintain an ever-growing codebase yourself basically guarantees that you will shoot yourself in the foot at some point. Maybe you cut some corners earlier to get things working, and now you have to expand the API in a way that supports a different use case. Maybe just copy-pasting pipeline descriptors for new pipelines starts adding tons of mental load and makes your code unsustainable. Maybe zipping through multiple parallel dictionaries at once is great for following the Structure-of-Arrays format, but slows your dev velocity because you have to write annoying zip/for-each chains (sketched below). Any choice you make will come back to you in both good and bad ways. Going through this process is great, though, for improving your decision-making mental model for next time.
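That last trade-off in miniature (using plain Vecs here for simplicity, an assumption rather than the engine's real storage): components live in parallel arrays, so every system becomes a zip.

```rust
// Structure-of-Arrays storage: one array per component, kept in lockstep.
struct World {
    positions: Vec<[f32; 2]>,
    velocities: Vec<[f32; 2]>,
}

impl World {
    // Every "system" turns into an iterator zip over the parallel arrays.
    fn integrate(&mut self, dt: f32) {
        for (pos, vel) in self.positions.iter_mut().zip(self.velocities.iter()) {
            pos[0] += vel[0] * dt;
            pos[1] += vel[1] * dt;
        }
    }
}
```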
Importing a library is not free
External libraries can be great in that they provide a lot of functionality upfront without you having to do much work. However, using a library effectively can often involve spending a good chunk of time learning how it works and massaging your code to play nicely with it.
I experienced this mainly while integrating egui into my engine. egui is a powerful immediate-mode GUI framework that lets you build rich GUIs very easily. That being said, there was definitely a learning curve to getting familiar with it. For example, in order to create the scroll popup, I had to do a ton of work upfront loading the texture into egui, removing all the default styling from the modal, and figuring out how to update the UV coordinates correctly to give the scrolling feel.
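Roughly the shape of that work, as a hedged sketch rather than my exact code (the precise egui calls vary by version, and scroll_t here is a made-up animation parameter):

```rust
// Draws an image texture in a frameless egui window, shifting the V range of
// the UVs over time to fake the "unrolling scroll" effect.
fn show_scroll(ctx: &egui::Context, texture: &egui::TextureHandle, scroll_t: f32) {
    egui::Window::new("scroll")
        .title_bar(false)
        .frame(egui::Frame::none()) // strip egui's default window styling
        .show(ctx, |ui| {
            let uv = egui::Rect::from_min_max(
                egui::pos2(0.0, scroll_t),
                egui::pos2(1.0, scroll_t + 0.5),
            );
            ui.add(egui::Image::new(texture).uv(uv));
        });
}
```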
I also feel a lot less knowledgeable about how UIs work in game engines. Admittedly, this was a part I cared less about than graphics, which is why I made this tradeoff in the first place, but at some point I might come back and create my own UI framework from scratch.
GitHub threads are a godsend when LLMs are clueless
A lot of WGSL and Rust wasm issues aren’t as well documented as topics like React and Python. For particularly complicated issues, I found that asking an LLM didn’t really get me very far. But as you could maybe tell from some of the links on this page, I used GitHub threads a lot to find solutions to my problems. I think the reason LLMs didn’t work as well here is that these were niche problems that not many people have run into, either because people don’t do Rust wasm all that much compared to, say, data analysis in Python, or because the problem itself was very specific to my setup. These GitHub threads were the only place I found anyone even discussing these topics, and without them I probably would’ve had to rethink my entire approach and do something different.
Closing thoughts
Reinventing the wheel can get a bad rap, but there’s something pretty special about understanding how every part of a large system like a game engine works. The only way to build real intuition about these things is to actually do the thing. It’s also really gratifying when something you’ve been debugging for hours finally works, and experiences like that build up the mental muscles required for problem-solving. Will I end up creating the most technologically impressive game with this engine? Maybe, but probably not. But that doesn’t mean the lessons learned are wasted, even if I don’t continue this project (which I probably will). There are so many more areas to explore: creating a SIMD math library, forward+ rendering for light sources, audio support, editor capabilities like map editing, 3D rendering, and adding multiplayer networking (from scratch), to name a few. Plus, any future projects I make that require visualizations can leverage this engine now. Overall, I think this project made me excited about programming again, and I don’t foresee this being the last technical blog post I ever make.