A Guide to Rust Graphics Libraries

As of May 2019.


So, people on the gamedev channel of the Unofficial Rust Discord were talking about graphics API’s and what goes where and what does what, people were contradicting and correcting each other, the rain of acronyms was falling hard and fast, and it was all getting a bit muddled. So I’m here to attempt to set the record straight. This is intended to provide context for people who want to get into writing graphics stuff (video games, animations, cool visualizations, etc) in Rust and don’t know where to start.

Why should you believe me? Because I need to know this stuff to choose what to write ggez with, so I’ve been following the state of things for the last few years. And because I’m way more interested in this stuff than is really good for me. And because I like writing. That said, I’m far from an expert in most of this stuff, more like an interested observer.

But before we get into Rust details, let’s look at the wonderful wide world of graphics APIs in general…

What’s actually going on

Nobody writes code directly for GPU’s. The hardware interfaces, instruction sets and details of how they actually work are closely guarded secrets of the manufacturers… well, except for two of the three major manufacturers, who have now open-sourced (most of) their drivers. So, really, it’s just NVidia who wants to keep you in vendor lock-in. This secrecy has also allowed GPU hardware to evolve a lot, quickly, without having to worry as much about backwards compatibility, though it looks like the fundamental designs and tradeoffs of how to make a GPU and CPU talk to each other have reached a more stable state over the last decade or so. Anyway, regardless of the reasons, the operating system’s GPU driver does all the talking to hardware, and provides an API for programs to talk to it. In fact, there’s a number of such API’s, some of which you’ve probably heard of before, so let’s take a look at the star players:


OpenGL

Basically the Javascript of graphics APIs. OpenGL started out as a reasonably good idea made by SGI in 1992, and since then it’s grown, amalgamated, mutated, sprouted weird appendages, and so on. The amount of historical baggage is even worse than JS, ’cause it goes all the way back to the early 90’s, when dedicated GPU’s were only the most high-tech of things, and GPU hardware has evolved a lot since then. Back then you fed the GPU one vertex per function call, and things like per-face shading were a big deal. The current version is 4.6, and nothing below 3.2 is worth using anymore.

OpenGL is an open standard developed by the industry group Khronos, and is supported on Windows, Linux, and Mac, plus there are variants of it that run on mobile (OpenGL ES) and in web browsers (WebGL). There’s a million different versions of it, a billion extensions, and it didn’t get a way to actually choose which version you were trying to use until version 3.2. Most of the time the version you are actually using is “whatever the GPU driver wants to give you”, and the GPU driver implementations are mostly awful (though things have gotten a bit better in the last few years). It’s been designed for complete backwards compatibility in the API for nearly three decades and is full of annoying edge cases and leaky abstractions that were perfectly valid in 1997 but have not aged well. There are programming techniques that get around them, but it’s still tricky. That said, again like Javascript, it’s the only API supported by all major platforms, it can be made to work well with a bit of care, and it’s not going anywhere any time soon.


DirectX

Similar to OpenGL, except owned by Microsoft. They design the API, it runs on Windows and Xbox only, and I think the GPU vendors write the drivers with cooperation from Microsoft. I frankly don’t know a whole lot about it, but what I do know suggests it’s a lot like OpenGL, except with the ability to break backwards compatibility. This means old features can actually get deprecated, new versions can have major redesigns that change fundamental bits of how things work, and in general it’s kept up with the times better. Also, since it’s backed by Microsoft, which has a vested interest in making games run well on their systems (and a lot of money), the drivers are generally less buggy and the tools are better. If a game is Windows-only, it runs on DirectX. DirectX versions before 9 or so probably aren’t worth using anymore, and versions 9-11 are broadly on par with OpenGL 4 in terms of functionality. However, DirectX 12, the most recent version, is part of a new generation of much lower-level API’s that give the programmer much more control over the details of how the GPU executes and orders things. Which brings us to…


Vulkan

The new hotness. Developed by the same industry group as OpenGL and initially released in 2016, the current version of Vulkan is 1.1. If OpenGL is GPU Javascript, Vulkan is GPU C. It is much lower level, much more general purpose, and (potentially) much easier to write fast code in than OpenGL. It is also probably not something you want to write in directly a lot of the time, as it is very specific and verbose. It is less a graphics API and more an interface for talking to a GPU; the actual graphics API is something you use Vulkan to create.

Why do we want this? Well, my understanding is not the best, but I’ll try to sum it up: a lot of stuff in OpenGL (and probably DirectX pre-DX12) involved mutating state in the GPU driver, such as telling it “switch to this blending mode” or “load this shader”. These are things that should only affect one part of the big long multi-step rendering process, but if you’re not careful the change ends up happening while the GPU is busy working on something else that depends on that part of the state. Whenever something major changes, the GPU has to wait for 1000+ threads to finish what they’re doing, synchronize, wait for input, and only then can it tell them what the new operating mode is and send them on their merry way again. This leads to a lot of dead time with most of the GPU sitting around not doing much. OpenGL makes it very easy to create these dead spots by accident, by changing a setting that shouldn’t have needed to affect other stuff but did. Also, because it’s so stateful, it’s difficult for the GPU driver to help much. The drivers try to merge state changes together into chunks, organize them efficiently and create big batches of commands for the GPU to do all at once, but the driver doesn’t know what you’re going to tell it to do next, so a lot of it is heuristics, guesswork and trade-offs. Like JIT-compiled languages, it makes it easy for a relatively innocuous change to throw you off the “fast path” for no apparent reason.

Vulkan doesn’t do this at all. With Vulkan you create a giant tree structure with lots of very explicit settings and associations between all its parts that say exactly how to run a series of processing steps and what to do with the results, load it up with giant arrays of vertex data and such, put it in a big crate, and ship it off to the GPU. Then the next day (as far as the computer is concerned) the GPU delivers a nice shiny new frame of rendered graphics to your front door, rings your doorbell, then goes back to working on the next frame ’cause it’s already gotten the specification for it. Since the GPU gets all the information it needs to draw a frame basically in one go, everything is explicitly specified, and if the programmer needs to ask “wait, this frame is fast and that frame is slow, what changed?” they can actually find out. Additionally, the GPU driver has an easier time because it needs to do less work guessing what you’re intending to do; it has more information to use for trying to schedule commands for the GPU to actually execute. Beyond making your program run faster, this leads to faster and more efficient (and hopefully less buggy) GPU drivers.
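To make the contrast concrete, here’s a toy sketch in plain Rust of “poke hidden mutable state” versus “bake everything into an immutable pipeline and record commands”. Every type and name here is invented for illustration; none of this is real OpenGL or Vulkan API.

```rust
// OpenGL style: one hidden, mutable state machine. Any setting you poke
// may force the driver to re-validate things or the GPU to synchronize.
struct GlStyleContext {
    blend_mode: &'static str,
    shader: &'static str,
}

impl GlStyleContext {
    fn set_blend_mode(&mut self, mode: &'static str) {
        // In a real driver, this innocuous-looking call might stall the GPU.
        self.blend_mode = mode;
    }
}

// Vulkan style: all state is baked into an immutable pipeline object up
// front, then commands referencing it are recorded and submitted in one
// batch, so nothing changes behind the driver's back mid-frame.
struct Pipeline {
    blend_mode: &'static str,
    shader: &'static str,
}

struct CommandBuffer {
    commands: Vec<String>,
}

impl CommandBuffer {
    fn new() -> Self {
        CommandBuffer { commands: Vec::new() }
    }
    fn bind_pipeline(&mut self, p: &Pipeline) {
        self.commands.push(format!("bind {} + {}", p.shader, p.blend_mode));
    }
    fn draw(&mut self, vertex_count: usize) {
        self.commands.push(format!("draw {}", vertex_count));
    }
}

fn main() {
    // The stateful way: mutate settings one at a time, hope for the best.
    let mut gl = GlStyleContext { blend_mode: "add", shader: "sprite" };
    gl.set_blend_mode("alpha");

    // The explicit way: describe everything, then submit it all at once.
    let pipeline = Pipeline { blend_mode: "alpha", shader: "sprite" };
    let mut cmd = CommandBuffer::new();
    cmd.bind_pipeline(&pipeline);
    cmd.draw(3);
    assert_eq!(cmd.commands, vec!["bind sprite + alpha", "draw 3"]);
}
```

The point is purely architectural: in the second style the driver sees the whole frame’s worth of work up front, instead of reacting to one mutation at a time.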


Metal

Basically DirectX 12, but made by and for Apple. It (along with AMD’s now-deprecated Mantle prototype) actually pre-dates Vulkan and DirectX 12, and basically kicked off this new generation of low-level graphics systems. Like DirectX 12, I don’t know much about it; it’s a low-level API similar to Vulkan, and those who have tried it say it’s quite nice to work with. However, Apple used to do all its graphics through OpenGL. Their work inspired Vulkan, and they certainly could have had a large hand in directing its development. But instead, after everyone else had already bought into it, they announced they would never use Vulkan, they made this cool new thing called Metal that everyone should use, and oh, by the way, they’re going to stop supporting OpenGL at some point down the road anyway and nobody but them is ever allowed to write graphics drivers for their hardware. So, I don’t care how good Metal is, its only purpose is to enhance Apple’s vendor lock-in.

Seriously. I’m willing to forgive DirectX for a little bit of these shenanigans, but we really don’t need more proprietary crap in the universe. I want to write games that run on anyone’s system, and so I’m never going to use Metal if I can avoid it. And I can. Which brings us to…

Actual Rust stuff!

Okay, despite all the differences listed above, all the above systems are painfully low-level, and do things along the lines of “take an array of vertices and a pile of config data, and produce a single rendered frame”. If you really just want to shove models in, choose some material settings and place some lights, and get sweet rendered graphics out, you’re going to need to use something higher-level than that. Probably the Amethyst game engine. Possibly kiss3d or three. Maybe write your own. Or just use Godot or Unity like a sane person. But that’s not how I roll so let’s keep on rolling.

All of these fundamental graphics API’s are written in, by, and for C. There’s probably some C++ in there somewhere, but they are generally exposed as C libraries. Using raw C API’s from Rust is pretty painful, so people write Rust wrappers for them that make life nicer and safer. That said, these are quite low-level systems; pursuing absolute safety when binding a C API to Rust is expensive and difficult, and it is not always worth the trouble. Even unsafe Rust is still way cooler than C, though, so we want to use a nicer Rust binding to write our graphics code anyway.
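As a sketch of what “nicer and safer” usually means in practice, the common pattern is to hide a raw handle behind an owning Rust type whose Drop impl does the cleanup. The “C API” below is entirely fake, invented just to show the shape of the pattern, not any real graphics library.

```rust
// The raw binding layer: free functions and raw handles, with nothing
// stopping a double-free or use-after-free. (Fake API for illustration.)
mod raw {
    pub type Handle = u64;
    pub unsafe fn create_buffer(size: usize) -> Handle {
        size as Handle
    }
    pub unsafe fn destroy_buffer(_h: Handle) {}
}

// The safe wrapper layer: ownership plus Drop means the handle is
// released exactly once, and can't be used after it's gone.
pub struct Buffer {
    handle: raw::Handle,
}

impl Buffer {
    pub fn new(size: usize) -> Buffer {
        // The `unsafe` is contained here, where we uphold the C API's
        // contract, so callers never have to think about it.
        let handle = unsafe { raw::create_buffer(size) };
        Buffer { handle }
    }

    pub fn handle(&self) -> raw::Handle {
        self.handle
    }
}

impl Drop for Buffer {
    fn drop(&mut self) {
        unsafe { raw::destroy_buffer(self.handle) }
    }
}

fn main() {
    let buf = Buffer::new(64);
    println!("created buffer with handle {}", buf.handle());
} // `buf` dropped here; destroy_buffer runs automatically, exactly once.
```

Getting this right for a whole graphics API is much harder than for one buffer type, which is exactly the “expensive and difficult” trade-off mentioned above.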

There’s a number of Rust libraries for graphics, with varying levels of safety and sanity. Wrapping OpenGL there are glium and luminance, and wrapping Vulkan there’s vulkano. There are also low-level wrappers that just present a nice Rust-y but unsafe version of the same system: winapi and d3d12-rs for DirectX, metal-rs for Metal, gl-rs for OpenGL, and ash for Vulkan. As you can see, the cross-platform API’s seem to have more people making higher-level crates on top of them, as well as cooler names. This is probably so we Rustaceans can play around with neat features and see just how safe we can make an API where you have to say “these entirely separate programs just so happen to share a chunk of memory, and if program A writes a float into it then program B will only ever try to read a float from that same location, honest”. Presumably the people who only write programs that run on a single platform are too busy actually getting useful stuff done to worry too much about fancy higher-level abstractions, so we’re going to look at the libraries that aren’t just bare wrappers.
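That “honest” memory-layout contract is, incidentally, why vertex types in these libraries end up as `#[repr(C)]` structs: the bytes you upload have to match, field for field, the layout you described to the API. A minimal sketch (the field names are made up; the mechanism is plain Rust):

```rust
use std::mem;

// A vertex type whose in-memory layout must match what the shader and
// the vertex-attribute description both expect. #[repr(C)] pins the
// field order and layout so the Rust compiler can't rearrange it.
#[repr(C)]
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2], // would match e.g. a vec2 attribute
    color: [f32; 4],    // would match e.g. a vec4 attribute
}

fn main() {
    // 6 floats at 4 bytes each = 24 bytes: this is the "stride" you
    // would hand to OpenGL/Vulkan when describing the vertex buffer.
    assert_eq!(mem::size_of::<Vertex>(), 24);
    println!("vertex stride: {} bytes", mem::size_of::<Vertex>());
}
```

If the struct and the declared layout disagree, the GPU happily reads garbage, which is exactly the class of bug the safe wrappers try to rule out.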

But what if writing a Rust program using just one low-level C API just isn’t good enough for you? Then you have gfx, which intends to be a graphics library that can use any graphics API as a drawing backend. You can take a program, compile it on Windows and it will use DirectX, then compile the same program on Mac and it will use Metal. And it will be fairly low level so it’s easy for people to write higher-level tools atop it, and it will do all this fast. Well, that is, the runtime performance will be fast; compiling it might take a while…

Regardless! gfx has a bit of a complicated history. It’s open source, but a lot of the push behind it comes from Mozilla. It started as a research project and worked pretty decently on most of the target backends, which is no small feat in itself, and was intended to be used as a backend for Mozilla’s experimental Servo browser engine. But the system kept evolving, and eventually it became clear to the developers that a choice had to be made: it had to become unsafe, or it had to become slower to enforce safety at runtime. The developers figured that it was far easier to make a safe-but-slower wrapper on top of an existing unsafe API than to go the other way around, and so decided to drop pure memory safety. In fact, they reorganized basically everything and renamed it gfx-hal, for “Graphics Hardware Abstraction Layer”, and adopted an API that is basically 97% identical to Vulkan. The remaining 3% or so is some extra wiggle room to handle whatever edge cases were needed to make it a bit more flexible around the bits of DirectX 12 and Metal and such that didn’t quite fit. But by and large, writing code for gfx-hal is basically the same as using Vulkan.

This process was a long time coming, but just after Christmas 2018, gfx-hal 0.1 was released. And… it works! Amazingly. It’s not finished, but it’s aiming for a known target and it pretty much hits it. The Vulkan backend works very well (as it should), and it functions reasonably on DirectX 12 and Metal also. Making it work well on OpenGL and DirectX pre-12 is still a work in progress though; that’s not easy, since those backends are basically implementing a low-level API atop a higher-level one, like compiling C to Javascript. There will probably always be at least some performance malus to using the backends for higher-level API’s, but I expect them to get pretty darn good eventually, and they have top men working on it right now.

So we now basically have a Rust Vulkan implementation that can run on any platform, with more backends possible in the future. In fact, there is also gfx-portability, which intends to be a thin layer over gfx-hal that presents pure Vulkan as a C API, similar to MoltenVK. Again, this isn’t fully complete, but it actually works pretty well for some quite non-trivial use cases. When it’s more finished, you will be able to take any program using Vulkan, slot gfx-portability into it, and run it on any platform.

Okay fine you’re a gfx fanboy but what about other stuff

Oh right, there are other crates available as well! They’re all really quite good, actually. This isn’t an exhaustive list; I’m just going to try to cover the most major things. I also haven’t put as much time and work into using them as I have into gfx, so my knowledge is not as deep, but here goes:


glium

The original Safe OpenGL library, written by the esteemed Tomaka. Rumors of its death are greatly exaggerated, as it is now maintained by its user community, though I admit it appears maintained with a pretty light touch. It gets its safety by being higher-level than pure OpenGL, introducing some higher-level abstractions and fitting them together for you automatically so you don’t have to do quite as much fiddling of state variables by hand. There are some known safety holes in it though, and with the main maintainer having all his time taken up by foolishly making money to be able to eat (how dare he!), nobody’s done the research and redesign needed to figure out how to close them. As far as I can tell, 98% of people probably won’t notice them though. As a baseline to compare everything else against, glium is still holding up well.


luminance

The Other Safe OpenGL Library, written by Phaazon as basically a one-person project. I confess I haven’t actually used it beyond playing around, but that playing around was quite nice. For me the main selling point is the author: I’ve used some other libraries Phaazon has written, and the docs are concise but complete, the feature set is extremely practical, and he’s always willing to help explain how things work and take suggestions for changes or additions. It’s also maintained more actively than glium, by someone who is using it for his own stuff. In level of abstraction and safety, it is about equivalent to glium. One difference is that glium intends to support all versions of OpenGL, while luminance only intends to support OpenGL 3.3+, which is a much saner approach.


ash

Did you ever work through a C++ Vulkan tutorial and say “this is just fine, but I want it in Rust, with nice structures and the Copy trait in the right places and enums and stuff”? Then this is what you want. It’s a very literal wrapper of the Vulkan API, Rustified. That’s what it does, that’s all it does, and it does it great. It’s very unsafe, but it also doesn’t get in your way at all; if you can write something in Vulkan using pure C, it’s quite straightforward to write the exact same thing in ash. I’ve literally gone through such a C++ tutorial and just written everything it does in Rust using ash instead. It’s maintained by one MaikKlein, whom I haven’t had the pleasure of interacting with, but various and sundry other notable names in the Rust graphics/gamedev community seem to have had occasion to chip in on it from time to time, so it’s basically the go-to Vulkan binding.

Again, if you want to use low-level Metal, OpenGL or DirectX11/DirectX12 with only the most basic of conveniences, metal-rs, gl-rs and winapi/d3d12-rs are where to go. Everything I said about ash more or less applies to those as well.


vulkano

Another Tomaka Special, similar in spirit to glium: vulkano is intended to be an Entirely Safe wrapper of Vulkan. Alas, it is similarly kinda-abandoned-by-its-original-creator-but-maybe-he’ll-come-back-to-it-when-he-wins-the-lottery, and is now maintained by Rukai. I haven’t used it myself, so all I can say is what’s in the documentation: “Vulkano is still in heavy development and doesn’t yet meet its goals of being very robust. However the general structure of the library is most likely definitive, and all future breaking changes will likely be straight-forward to fix in user code.”


rendy

I’m cheating; we’re back to gfx-hal. Writing raw Vulkan code is rather tedious and fiddly, and there’s a fair amount of stuff you have to write for yourself atop it. Like memory management. So, why not have a nice helper library that can handle a bunch of the common and rote bits? Having absolute control over what the GPU does is great, but when it takes 300 lines of code just to do basic setup that’s going to be basically identical for 95% of users, there’s obviously a bit of room for convenience. That convenience is what rendy intends to provide: a set of modular, fairly fundamental tools to make life better when working with gfx-hal. Instead of painstakingly wiring together stages in a render pass one array index at a time, you can create nodes in a graph and tell them what their dependencies are. Instead of cobbling together a swap-chain out of render targets, semaphores, and glue to make it please just display stuff on the screen already, rendy can manage it for you. But it’s very modular and unopinionated, and you can always get down into the guts with raw gfx-hal. It’s much more of a toolkit than a wrapper.
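The “nodes in a graph with dependencies” idea can be sketched in a few lines of plain Rust. This is a toy dependency-ordering demo invented for illustration, not rendy’s actual API, which also handles resources, synchronization, and much more:

```rust
// Toy render graph: each node names the nodes that must run before it,
// and the graph works out a valid execution order.
struct Node {
    name: &'static str,
    deps: Vec<usize>, // indices of nodes that must run first
}

fn execution_order(nodes: &[Node]) -> Vec<&'static str> {
    let mut done = vec![false; nodes.len()];
    let mut order = Vec::new();
    // Repeatedly sweep for runnable nodes; fine for a small acyclic graph.
    while order.len() < nodes.len() {
        for (i, node) in nodes.iter().enumerate() {
            if !done[i] && node.deps.iter().all(|&d| done[d]) {
                done[i] = true;
                order.push(node.name);
            }
        }
    }
    order
}

fn main() {
    let nodes = vec![
        Node { name: "shadow pass", deps: vec![] },
        Node { name: "main pass", deps: vec![0] },    // needs the shadow map
        Node { name: "post-process", deps: vec![1] }, // needs the main image
    ];
    assert_eq!(
        execution_order(&nodes),
        ["shadow pass", "main pass", "post-process"]
    );
}
```

The appeal is that you declare what each pass needs, and the library, not you, figures out the ordering and synchronization between them.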

It is being written by some of the devs behind the Amethyst game engine (Viral, Frizi, and Fusha), and I am thinking quite seriously about using it for a future version of ggez as well. I was honestly very skeptical of rendy at first, partially ’cause to me Amethyst feels a bit like vaporware – it’s super ambitious and innovative, and that makes it hard to have incremental progress where you learn from design mistakes. But then I actually tried rendy and I have to say, it’s exactly what I wanted.

The Future is Being Written (in Rust)

But wait, there’s more…

Particularly anal graphics programmers (ie me) were incredibly relieved after Vulkan was released and offered a way to write graphics code that wasn’t as Unnecessarily Bad as OpenGL. In fact, this relief was rivaled only by the relief felt by particularly anal programming language nerds (ie also me) after Webassembly was released and offered a way to write frontend code for the web that didn’t need Javascript. Apparently I was not alone in this, as fairly soon after Vulkan 1.0 was released, people started asking things like “can we please have something better than OpenGL for doing 3D graphics on teh interwebs?” And when people in Apple, Google, Microsoft, Mozilla and Intel start asking things like this, for better or for worse they get together and build it, and so we get WebGPU.

So, the first thing is, as of May 2019, WebGPU is not finished. It’s one of those big hairy efforts between multiple vendors, and so there’s a lot of technical stuff to be sorted out as everyone tries to find the best solution. Plus a lot of political stuff to be sorted out as everyone tries to screw over everyone else so they have a technical advantage and more vendor lock-in (surprise, it seems to be mainly Apple doing that). Second, WebGPU must be Completely Safe. Browsers execute untrusted code, and so it must be impossible for even a fairly low-level API to make bad things happen. Third, it’s very reminiscent of Vulkan, but not actually Vulkan, more like a reasonable subset of Vulkan with a bunch of the hairier features chopped out. And finally, there’s a functioning prototype in Rust made by the gfx-rs community and using gfx-hal for its drawing, called wgpu.

Note the distinction between the standard, WebGPU, and the implementation, wgpu. There are also other prototype implementations of WebGPU: Dawn, by Google, and another prototype by Apple that probably isn’t open source.

Being a simplified, safe, Vulkan-ish thing, wgpu is actually a reasonably attractive library for writing graphics code on the desktop as well. In fact, some of the groups involved are pushing for easy portability across different desktop platforms to be a goal of the standard itself. There’s already an experimental game framework using it – in fact, one with a real pretty particles demo that you should 100% download and try out, even though the framework itself is quite immature. I’m personally a little leery of using a literal experimental prototype for making anything real right now, though kvark (one of the main wgpu devs) assures me that no really, it’s a real thing that’s not going to die and it’s going to be awesome and it’ll be done any day now. He said the same thing about gfx-hal in late 2017, and honestly he was right, except “any day now” ended up taking about a year and a half. :-P So we will see.

Another intriguing possibility is going the other way around though: what’s to stop gfx-hal from running on WebGPU someday? Nothing, that’s what. It might not be particularly easy, but gfx-hal is already doing all sorts of things that aren’t easy, and it’s working out for them pretty well. In fact, the Amethyst project got a grant to work on making rendy work better on the web, and since rendy is a toolkit for gfx-hal their efforts will probably include more work to make gfx-hal work on the web as well. So the chances of gfx-hal working well on any eventual WebGPU API are preeeetty good.

So, despite the truism that all web standards are awful, between WebAssembly and WebGPU the near future of games on web is looking very, very interesting.

So what do I actually use to make triangles appear on the screen?

Like I said, this whole article is for crazy people. If you just want to model something in Blender, write a program that loads it, and display it on the screen, you should use an existing game engine like Godot or Unity. We don’t have anything like that for Rust yet (though Amethyst is getting there), just a big pile of parts lying around. Assembling the parts to make something nice isn’t too difficult, but it’s still a project that needs some elbow-grease and power tools instead of being an Ikea package you can simply piece together. Right now, the ecosystem looks more or less like this:

If you want to do your own triangle-slinging and want it to work on all platforms right now but don’t want to write 1200 lines of setup code, use glium, luminance, or one of the other OpenGL bindings.

If you’re not afraid of getting your hands deep into the guts of the rendering pipeline and want it to work on all platforms right now, use gfx-hal. Unless you’re really determined to do-it-yourself, you should also use rendy which reduces the 1200 lines of setup code to 400.

If the above applies but you don’t want all this weird gfx-hal stuff in the way, and you only ever want it to work on Linux and Windows, use ash or vulkano. Worst case, it can run on gfx-portability anyway. You won’t get rendy though; if there are other tools like rendy that make Vulkan a bit nicer, I don’t know about them (yet). Let me know if you find some!

If you want something cooler and more futuristic than OpenGL but not as hairy as Vulkan (and are willing to take a bet), and you want it to run on anything, including (someday) natively on the web, use wgpu.

Appendix: Game frameworks

Finally, here’s a list of the various Rust game engine/framework crates I know about and what they use/can use for graphics:

Quick, biased rundown:

  • ggez – My 2D game framework of choice, ’cause I’m the main maintainer. It’s what I made because trying to use Piston was too annoying. Designed basically to be like Love2D.
  • quicksilver, tetra – Other 2D game frameworks. Very similar to ggez in API, mainly different in what technology choices they make; they approach the same problems as ggez from different angles.
  • coffee – New and experimental 2D game framework, higher-level and more opinionated than ggez/quicksilver/tetra.
  • amethyst – The big gorilla 3D game framework. Apparently it actually works.
  • piston – The Original Rust 2D Game Engine. Too broad in scope, too uncertain in design, not well documented enough – though I might be a little biased about this. People still seem to use it though?