Learning Gfx
Disclaimer: This is believed up to date as of gfx version 0.18. 0.18 is intended to be the last version of the “old” or “pre-ll” gfx-rs API. It will be around forever and contributions/bugfixes will be accepted, but active development has shifted to a new crate called gfx-hal. gfx-hal is developed by the same people, has similar goals, and lives in the same gfx repo, but is not at all like the pre-ll gfx: it takes a different approach, intending to be an unsafe but fast and portable mostly-Vulkan implementation in Rust, similar to MoltenVK. If you want a tutorial on gfx-hal I recommend this one.
This is also all tested only with the OpenGL backend; by its nature gfx-rs should work the same on any backend, but I make no promises.
This is also not a tutorial on 3D programming. It assumes that the reader is familiar with OpenGL or a similar API.
Another good tutorial resides here, although it currently doesn’t work with Rust >= 1.20: https://suhr.github.io/gsgt/
Learning gfx-rs
So what is gfx-rs, and how does it work? The goal is to have a 3D graphics API that is portable across different 3D implementations, in a fairly low-level and efficient way. For instance, you could take the same code and run it on OpenGL, Vulkan and DirectX, and it will do the same thing on all of them.
How does it do this? Basically by providing an abstraction that is a common superset of the behavior of all these API’s. All of these systems are, in the end, a way of managing resources, shoving data into the graphics card, defining shader programs that run on this data with the given resources, and then telling it to actually do stuff. gfx-rs tries to wrap all that up in a common API that’s applicable to any of these drawing backends.
Data model
gfx-rs’s model consists of only a few core types: Factory, Encoder and Device.
These are the types you use to interact with the GPU. The Factory allocates resources, such as textures and vertex buffers. The Encoder takes commands that you give it, such as “draw this geometry” or “load this data into that shader variable”, and turns them into a list of instructions. Then the Device is what takes this list of instructions and actually sends them to the GPU, using OpenGL, DirectX or whatever API the backend uses.
There is a vast plethora of other types and somewhat confusing
generics associated with gfx-rs
, but these are the core
pieces where the rubber actually meets the road. Whatever else you’re
doing, you’re going to be getting resources from a Factory
,
telling an Encoder
to generate commands, and then calling
Encoder::flush(some_device)
to feed those commands to the
GPU.
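To make that concrete, here is a minimal sketch of that division of labor (the variable names are placeholders; full, working setup code follows later in this tutorial):

// Factory: allocates resources up front.
let (vertex_buffer, slice) = factory.create_vertex_buffer_with_slice(&vertices, ());
// Encoder: records commands; nothing actually executes yet.
encoder.clear(&render_target, [0.0, 0.0, 0.0, 1.0]);
encoder.draw(&slice, &pso, &data);
// Device: the recorded commands get executed on the GPU here.
encoder.flush(&mut device);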
This has some advantages beyond portability! This separation of concerns makes it easy to tell what functions allocate new stuff, what sends commands to the GPU, what changes shared state, and so on. Additionally, this model is fundamentally multithread-able: you can have multiple Encoders and queue up commands in each of them, then just have one place where those commands get sent to the GPU all at once. Depending on how the backend works this might result in performance improvements – Vulkan/Metal/DirectX12 probably do this much better than OpenGL or DirectX11, but the basic model still works with any backend. (Yes, there’s some overhead to this process as well, so gfx-rs is a bit slower than raw OpenGL, but from what little I know, if you’re going to be doing multithreaded rendering you’re probably going to end up creating a similar structure no matter what. And for me the safety and portability is worth it.) I personally also like this for ease of debugging: it’s conceptually easier to dig into a list of commands and make sure that they’re what you want them to be, compared to having different parts of the program emitting synchronous state-changing calls whenever they feel like it. It could even theoretically have an optimizer stage, doing things to the command buffer like removing redundant calls or reordering commands to make them more efficient, though I don’t think anyone’s been so bold as to actually do this yet. Still, the potential is there. :-D
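As a concrete sketch of the multi-encoder pattern (single-threaded here for brevity, and borrowing the resource names from the examples later in this tutorial; the encoders could just as well be filled from separate threads):

// Each Encoder wraps its own command buffer obtained from the Factory.
let mut encoder_a: gfx::Encoder<_, _> = factory.create_command_buffer().into();
let mut encoder_b: gfx::Encoder<_, _> = factory.create_command_buffer().into();
// Record commands into each one independently...
encoder_a.clear(&color_view, BLACK);
encoder_b.draw(&slice, &pso, &data);
// ...then submit them all in one place, in a well-defined order.
encoder_a.flush(&mut device);
encoder_b.flush(&mut device);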
There is one more component of this process: the pipeline. A “pipeline” is a collection of type definitions that define the inputs and outputs of a shader program. For instance, the inputs might be a set of vertices, uniform properties, and constants, and the output would be a render target… that is, a framebuffer that will get displayed on your screen. These type definitions and the shader programs that use them are wrapped together into a “pipeline state object” or PSO. So to actually draw something, you have to provide geometry to draw, data to give to the shader pipeline, and a PSO that represents the shaders that need to run.
Unfortunately, one of the problems that isn’t solved yet is that there is no unified shader solution. You have to write and use separate shaders for each backend individually. That’s not a huge problem, but it is something extra you have to do to make a program portable to different backends. The gfx_app crate already provides some framework for managing this, and hopefully as SPIR-V and other such technologies become more common it will get easier for gfx-rs to encompass this functionality.
Crate structure
This is where things get annoying, IMO. gfx-rs is made of a bunch of interrelated crates, and there are bunches of other crates that look like they’re a core part of it but actually aren’t, and maybe aren’t even maintained by the core gfx-rs team. When in doubt, check the https://github.com/gfx-rs/ github group. All these crates are bundled together into one git project, so there’s no reason they can’t be all one thing, but they’re not. Additionally, none of the version numbers are in sync, so gfx 0.14 relies on gfx_core 0.6 and works with gfx_device_gl 0.13, and so on.
That said, once you figure the structure out it’s a fairly simple tree with gfx_core at the root, gfx_device_* depending on it, and gfx and gfx_window_* forming the leaves. So there is reason behind it all. Usually gfx and gfx_window_whatever are all you need to use directly. But for the sake of completeness, the following list contains all the actual gfx-rs crates:
- gfx_core: Low-level types and operations; you usually won’t have to worry about it, but sometimes type definitions and such from it might creep through. If they do, it might be a warning sign that you’re doing something wrong, though. Irritatingly, some types defined in it are exposed in the crates that depend on them, but https://docs.rs/ doesn’t link properly between crates, making it hard to hunt down the right types in the docs. Building a local copy of the docs might make life easier.
- gfx: A nicer API built atop gfx_core; this is where the aforementioned Factory, Device and Encoder traits are defined, and generally what you need to be working with.
- gfx_device_*: These provide the backends. Comes in gl, vulkan, metal, dx11, dx12, and possibly other varieties. At the moment, it appears that only the gl and dx11 ones are fully functional, but lots of work is happening on them.
- gfx_window_*: These interface with various window providers. Generally the way things work is you will have your drawing API, whether OpenGL or Metal or whatever, which is only concerned with drawing. Alongside this you will have a “window provider” which handles interacting with the windowing system on whatever OS you’re using, sets window titles and icons, handles input events, and generally does everything besides drawing; these two API’s work together to let you actually do stuff. gfx-rs thus has interfaces to a number of popular window providers: best-supported is glutin, which is quite nice and written in pure Rust. However, there are other portable ones, namely glfw and sdl, along with platform-specific ones like metal and dxgi.
- gfx_app: A layer of abstraction that lets you select backend and window provider more flexibly and portably. Use this if you want; it’s nice when it works, but it isn’t necessary and afaict is currently under heavy development.
- gfx_gl: A customized OpenGL wrapper that gfx_device_gl uses. Probably not super useful to anyone else??
Along with these are many mostly-unrelated crates you will stumble across while trying to find gfx on <docs.rs> or <crates.io> just because they have gfx in their name, but which are often made by people completely unrelated to gfx-rs. Some of them are useful, some of them are booby-traps:
- sdl2_gfx: Wrapper for the SDL2_gfx C library. Has nothing to do with gfx-rs.
- gfx_window_sdl2: A booby-trap. Old, useless, no longer supported, and not made by the same people as gfx-rs in the first place. If you want to use the SDL2 window provider, the gfx_window_sdl crate is what you want. Don’t use gfx_window_sdl2.
- gfx_text: A freetype-based text renderer for gfx-rs. Looks generally fine, probably less work than rolling your own renderer with rusttype.
- gfx_glyph: An efficient font rendering library for programs using gfx-rs.
- gfx_graphics: A drawing backend for the piston2d-graphics drawing API that uses gfx-rs to do its drawing. Not interesting to anyone not using piston2d-graphics.
- gfx_phase and gfx_scene: Higher-level drawing abstractions of some kind or another. I know nothing about them, but it is said that they are obsolete.
- Probably lots of others. Sigh.
Actually doing things
So the actual processing flow for using gfx-rs seems quite conceptually similar to Vulkan/DX12/whatever. You create a bunch of resources (vertex buffers, textures, uniforms, etc.), bundle them together with a set of shaders, then stuff them into the encoder via various commands like draw(). However, the encoder batches them up for you, so nothing actually happens (except resource allocation) until you call encoder.flush(), at which point it does all the things at once. This is also how it targets different backends: the encoder translates its commands into OpenGL calls, Vulkan calls, whatever.
Setup
Create a project and add the following dependencies to your Cargo.toml:
gfx = "0.17"
glutin = "0.12"
gfx_window_glutin = "0.20"
If you are not using gfx_window_glutin you may have to add gfx_device_gl = "0.15" as well.
Creating a window
gfx-rs’s examples are quite nice here; the triangle one uses the raw gfx_window_glutin window provider, while the others use gfx_app.
The basic process is to use whatever window provider you actually want (glutin, glfw, whatever) to do the setup, then pass the WindowBuilder or equivalent into the gfx_window_*::init() method. It will do whatever it needs to, and pass you back a created window and all the gfx state objects in a big tuple.
Glutin example:
#[macro_use]
extern crate gfx;
extern crate gfx_window_glutin;
extern crate glutin;

use gfx::traits::FactoryExt;
use gfx::Device;
use gfx_window_glutin as gfx_glutin;
use glutin::{GlContext, GlRequest};
use glutin::Api::OpenGl;

pub type ColorFormat = gfx::format::Srgba8;
pub type DepthFormat = gfx::format::DepthStencil;

const BLACK: [f32; 4] = [0.0, 0.0, 0.0, 1.0];

pub fn main() {
    let mut events_loop = glutin::EventsLoop::new();
    let windowbuilder = glutin::WindowBuilder::new()
        .with_title("Triangle Example".to_string())
        .with_dimensions(512, 512);
    let contextbuilder = glutin::ContextBuilder::new()
        .with_gl(GlRequest::Specific(OpenGl, (3, 2)))
        .with_vsync(true);
    let (window, mut device, mut factory, color_view, mut depth_view) =
        gfx_glutin::init::<ColorFormat, DepthFormat>(windowbuilder, contextbuilder, &events_loop);

    let mut running = true;
    while running {
        events_loop.poll_events(|event| {
            if let glutin::Event::WindowEvent { event, .. } = event {
                match event {
                    glutin::WindowEvent::Closed |
                    glutin::WindowEvent::KeyboardInput {
                        input: glutin::KeyboardInput {
                            virtual_keycode: Some(glutin::VirtualKeyCode::Escape), ..
                        }, ..
                    } => running = false,
                    _ => {}
                }
            }
        });

        window.swap_buffers().unwrap();
        device.cleanup();
    }
}
Exact same thing using SDL:
#[macro_use]
extern crate gfx;
extern crate gfx_window_sdl;
extern crate sdl2;

use gfx::Device;

pub type ColorFormat = gfx::format::Srgba8;
pub type DepthFormat = gfx::format::DepthStencil;

const BLACK: [f32; 4] = [0.0, 0.0, 0.0, 1.0];

pub fn main() {
    let sdl_context = sdl2::init().unwrap();
    let video = sdl_context.video().unwrap();
    let mut builder = video.window("Example", 800, 600);
    let (mut window, mut gl_context, mut device, mut factory, color_view, depth_view) =
        gfx_window_sdl::init::<ColorFormat, DepthFormat>(&video, builder).unwrap();

    'main: loop {
        let mut event_pump = sdl_context.event_pump().unwrap();
        for event in event_pump.poll_iter() {
            match event {
                sdl2::event::Event::Quit { .. } => {
                    break 'main;
                }
                _ => {}
            }
        }

        window.gl_swap_window();
        device.cleanup();
    }
}
From creating the window, you get a:
- window: your window provider’s window type.
- gl_context (for SDL; Glutin and others have it basically as part of the window)
- device (for most graphics backends; apparently Vulkan doesn’t need this)
- factory
- render target view
- depth stencil view
The render/depth views are the things your shaders output to; I don’t know enough about how depth stencil stuff works to comment much, but the render target is essentially your “output screen”.
Sidebar: what window provider should I use?
Your options are glutin, glfw, sdl, and maybe one or two others I’m not familiar with. Glutin is the best supported; unless you have a compelling reason, you should probably use that. Most of the work, and thus most of the knowledge and support, goes into the glutin window provider. That doesn’t mean the glfw and sdl window providers are unsupported or bad, just that they’re second-tier. I’ve gotten glfw and sdl to work and they seem to do fine. Basic examples can be found at https://github.com/icefoxen/gfx/tree/master/examples/triangle-glfw and https://github.com/icefoxen/gfx/tree/master/examples/triangle-sdl. (Dec 2018: shoot, those links are broken; it appears that at some point I nuked those examples by accident…)
Defining a pipeline
Ok, now you have to define a pipeline to actually shove your drawing information through. The easiest and best way to do this is with the gfx_defines! macro, like so:
// Put this code above your main function
gfx_defines!{
    vertex Vertex {
        pos: [f32; 4] = "a_Pos",
        color: [f32; 3] = "a_Color",
    }

    constant Transform {
        transform: [[f32; 4]; 4] = "u_Transform",
    }

    pipeline pipe {
        vbuf: gfx::VertexBuffer<Vertex> = (),
        transform: gfx::ConstantBuffer<Transform> = "Transform",
        out: gfx::RenderTarget<ColorFormat> = "Target0",
    }
}
gfx_defines! is documented fairly well, but I’ll explain the bits we use here. The above definition defines three structs: Vertex, Transform, and pipe::Data. Vertex is your vertex type and is just a struct with two fields, pos and color. This is the vertex type your vertex buffers will get filled with, and that your vertex shaders will receive: they will get two inputs, a vec4 named a_Pos, and a vec3 named a_Color. The Transform struct has one field, transform, and will appear to your shaders as a uniform struct containing a single mat4 named u_Transform. You can define multiple types of vertices, constant buffers and pipelines, just by giving them different names.
So hopefully you see where this is going; I’m sorry I only know the way it appears in GLSL.
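If it helps to demystify things, the generated data types are nothing exotic; roughly (this is a sketch, not the exact macro expansion, which also derives various trait impls), they are just plain structs whose fields match what you wrote:

// Approximately what gfx_defines! generates for the two data types above;
// the strings like "a_Pos" are only used to match fields to shader variables.
struct Vertex {
    pos: [f32; 4],
    color: [f32; 3],
}

struct Transform {
    transform: [[f32; 4]; 4],
}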
Now let’s look at the pipeline, pipe. This says that to draw stuff you need a VertexBuffer<Vertex>, where Vertex is the vertex type you just defined. Similarly, it takes a ConstantBuffer<Transform> named “Transform”, and outputs to a RenderTarget<ColorFormat> named “Target0”. The ColorFormat is the color format type you defined earlier, and the RenderTarget is… hey, your WindowProvider::init() function returned a RenderTargetView that I said was the screen’s framebuffer, right? So your pipeline is saying that you need a buffer containing your Vertex and Transform types, and outputs to a RenderTarget. You can give it your window’s render target to draw to the screen, or create a different RenderTarget to render into memory for multi-pass rendering. It all makes sense, right?
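For instance, here’s a sketch of creating such an off-screen target, assuming Factory::create_render_target behaves as in the gfx examples (it also hands back a ShaderResourceView so a later pass can sample what you drew):

// Hypothetical off-screen target: a 512x512 texture we can render into,
// then sample from as a texture in a second pass.
let (_texture, texture_view, offscreen_target) =
    factory.create_render_target::<ColorFormat>(512, 512).unwrap();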
Ok, now let’s define shaders to use these values, and wrap everything together into a PSO. Our vertex shader looks like this:
#version 150 core

in vec4 a_Pos;
in vec3 a_Color;

uniform Transform {
    mat4 u_Transform;
};

out vec4 v_Color;

void main() {
    v_Color = vec4(a_Color, 1.0);
    gl_Position = a_Pos * u_Transform;
}
And our fragment shader is just:
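#version 150 core

in vec4 v_Color;
out vec4 Target0;

void main() {
    Target0 = v_Color;
}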
To load these shaders we suck them in and create a PSO, like so:
let pso = factory.create_pipeline_simple(
    include_bytes!("shader/myshader_150.glslv"),
    include_bytes!("shader/myshader_150.glslf"),
    pipe::new()
).unwrap();
The key part is the innocent little pipe::new() here; that is where the pipeline definition you created with gfx_defines! gets instantiated. If you have another pipeline type named my_pipeline that does something entirely different with vertex and uniform types, you would just create your pipeline with my_pipeline::new() instead of pipe::new().
Also, as far as I know this is the only part of gfx-rs that really doesn’t (and can’t) help you out in terms of making it Just Work with different backends. You have to know what backend you’re using, and load the right shaders yourself. gfx_app has some supporting framework for making this easier, but there’s not much that gfx-rs can do to make sure the right backends load the right shaders. It also can’t portably ensure you don’t have a typo in the uniform names of one of your shaders. It will at least check that sort of thing for you at runtime and give you a sensible error if you do, though.
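One low-tech way to handle this is to choose shader sources at compile time. Here’s a sketch; the feature names and file names are made up for illustration, not anything gfx-rs prescribes:

// Hypothetical: pick shader bytes for whichever backend this build targets.
#[cfg(feature = "gl")]
fn shader_sources() -> (&'static [u8], &'static [u8]) {
    (&include_bytes!("shader/myshader_150.glslv")[..],
     &include_bytes!("shader/myshader_150.glslf")[..])
}

#[cfg(feature = "dx11")]
fn shader_sources() -> (&'static [u8], &'static [u8]) {
    (&include_bytes!("shader/myshader_vertex.fx")[..],
     &include_bytes!("shader/myshader_pixel.fx")[..])
}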
For more in-depth explanation of how pipeline definitions work, see the excellent blog post at http://gfx-rs.github.io/2016/01/22/pso.html
Drawing stuff
Okay, so take a look at the documentation for the Encoder::draw() method. It takes a Slice, a PipelineState, and a PipelineData. The PipelineState we already created with Factory::create_pipeline_simple(), while the PipelineData is just the data type for your pipeline definition from the gfx_defines! macro. You can create a VBO and a Slice with Factory::create_vertex_buffer_with_slice().
So you end up with something like this:
// Put this before your main loop
let mut encoder: gfx::Encoder<_, _> = factory.create_command_buffer().into();

const TRIANGLE: [Vertex; 3] = [
    Vertex { pos: [ -0.5, -0.5, 0.0, 1.0 ], color: [1.0, 0.0, 0.0] },
    Vertex { pos: [  0.5, -0.5, 0.0, 1.0 ], color: [0.0, 1.0, 0.0] },
    Vertex { pos: [  0.0,  0.5, 0.0, 1.0 ], color: [0.0, 0.0, 1.0] },
];

// Identity matrix
const TRANSFORM: Transform = Transform {
    transform: [[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]]
};

let (vertex_buffer, slice) = factory.create_vertex_buffer_with_slice(&TRIANGLE, ());
let transform_buffer = factory.create_constant_buffer(1);
let data = pipe::Data {
    vbuf: vertex_buffer,
    transform: transform_buffer,
    out: color_view.clone(),
};

// Put in the main loop, before the swap-buffers and device-cleanup calls
encoder.clear(&color_view, BLACK); // clear the framebuffer with a color (an array of 4 f32s: RGBA)
encoder.update_buffer(&data.transform, &[TRANSFORM], 0).unwrap(); // update buffers
encoder.draw(&slice, &pso, &data); // draw commands with buffer data and attached pso
encoder.flush(&mut device); // execute draw commands
Notice that TRIANGLE uses the Vertex type defined in your gfx_defines! macro; the macro really isn’t doing anything too magical in most cases, all the data types it defines are just plain ol’ structs. We create a vertex buffer with the Factory, handing it a slice of our Vertex type and, optionally, a list of vertex indices. We aren’t bothering with vertex indices right now, so we just give it () instead; an indexed version is sketched below. (You could also just use Factory::create_vertex_buffer().) Creating a uniform buffer for our Transform is just as easy, then we load the data from our Transform into it. Finally we create our pipe::Data object which contains all the stuff our pipeline needs, and feed it all into encoder.draw().
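Here’s what the indexed version might look like, assuming a hypothetical QUAD array of four Vertex values; you just pass a slice of indices instead of ():

// Two triangles sharing vertices 0 and 2 of a hypothetical QUAD array.
const QUAD_INDICES: &[u16] = &[0, 1, 2, 2, 3, 0];
let (quad_vertex_buffer, quad_slice) =
    factory.create_vertex_buffer_with_slice(&QUAD, QUAD_INDICES);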
Note that most of the useful Factory methods are for some reason in the FactoryExt trait, so you must do use gfx::traits::FactoryExt; before you can use them. Basically just to annoy you.
You can also put multiple things into a buffer at once, but I’m not quite sure how that works yet. Digging into the gfx-rs examples would probably be enlightening.
Defining colors
Most times that you actually want color it goes into a shader variable, so you can handle those variables however you want. It just becomes another part of your vertex definition, as before.
The main exception (so far) is the Encoder::clear() method, which has this signature:
fn clear<T: RenderFormat>(&mut self,
                          view: &RenderTargetView<R, T>,
                          value: T::View) where T::View: Into<ClearColor>
This is a great example of trait-salad making things look complicated, when really it isn’t that complicated. So this method takes a RenderTargetView, which is the object that’s getting drawn to, and then it takes a T which has a complicated type signature that really comes down to “something that can be turned into a ClearColor that is valid for the given RenderTargetView”. This is just abstracted out because your RenderTargetView could theoretically be some weird device that only takes certain color formats. So what is a ClearColor? For that you have to descend into the gfx_core crate, sift through ColorInfo, ColorMask, PackedColor, ColorSlot, and so on until you discover that ClearColor is just a type that represents a color, with a bunch of From methods so that you can create one from a u32, a [f32; 4], and so on.
Making an API amazingly generic makes life complicated. But it also makes it possible to ensure, at compile time, that the color you provide can be turned into your render target’s color format in an intelligent way. Is it worth it? Hell yeah.
Creating a texture
Great, now how do we load and use a texture? With the Factory, naturally. Factory::create_texture_immutable() is usually what you want; it takes a Kind, which describes the data layout, and an array of data, the exact type of which is based on your ColorFormat but which is probably &[u8].
This will return the Texture, which is the actual data descriptor, and a ShaderResourceView, which seems to be, as the name implies, a texture handle that can be fed into a shader. How are these different? Dunno. There’s also Factory::create_texture_immutable_u8(), which appears to just slurp out of a flat array of u8, so that might be simpler to use. (It actually takes an &[&[u8]] where each sub-array is a single mipmap level for the texture!)
THEN you need a sampler as well, which defines how a texture is sampled into a shader. Factory::create_sampler() creates it, and it takes some straightforward parameters that specify things like interpolation mode, wrapping mode, etc. FactoryExt::create_sampler_linear() may be nice to use as well.
So you have your pipeline data object which defines all the stuff that gets shoved into a shader, as well as the stuff it produces at the end of it… your texture samplers are just one more thing that goes into this. First though, let’s load an image. The easy way to do the loading is to use the image crate, though anything that produces a &[u8] will work.
extern crate image;

fn gfx_load_texture<F, R>(factory: &mut F) -> gfx::handle::ShaderResourceView<R, [f32; 4]>
    where F: gfx::Factory<R>,
          R: gfx::Resources
{
    use gfx::format::Rgba8;
    let img = image::open("resources/player.png").unwrap().to_rgba();
    let (width, height) = img.dimensions();
    let kind = gfx::texture::Kind::D2(width as u16, height as u16, gfx::texture::AaMode::Single);
    let (_, view) = factory.create_texture_immutable_u8::<Rgba8>(kind, gfx::texture::Mipmap::Provided, &[&img]).unwrap();
    view
}
Then you make your pipeline contain a sampler and UV stuff in your vertex definitions:
gfx_defines!{
    vertex Vertex {
        pos: [f32; 2] = "a_Pos",
        uv: [f32; 2] = "a_Uv",
    }

    constant Transform {
        transform: [[f32; 4]; 4] = "u_Transform",
    }

    pipeline pipe {
        vbuf: gfx::VertexBuffer<Vertex> = (),
        tex: gfx::TextureSampler<[f32; 4]> = "t_Texture",
        transform: gfx::ConstantBuffer<Transform> = "Transform",
        out: gfx::RenderTarget<ColorFormat> = "Target0",
    }
}
For some reason, your tex item says it is just a TextureSampler but really takes a gfx::handle::ShaderResourceView AND a TextureSampler in a tuple. So to put it together:
let sampler = factory.create_sampler_linear();
let texture = gfx_load_texture(&mut factory);
let data = pipe::Data {
    vbuf: quad_vertex_buffer,
    tex: (texture, sampler),
    transform: transform_buffer, // the constant buffer from earlier; the pipeline above requires it
    out: color_view,
};
Then you just set up your shaders properly.
Vertex shader:
#version 150 core

in vec2 a_Pos;
in vec2 a_Uv;
out vec2 v_Uv;

void main() {
    v_Uv = a_Uv;
    gl_Position = vec4(a_Pos, 0.0, 1.0);
}
Fragment shader:
#version 150 core

uniform sampler2D t_Texture;
in vec2 v_Uv;
out vec4 Target0;

void main() {
    Target0 = texture(t_Texture, v_Uv);
}
Setting blend mode
This is where stuff gets magical. It’s just a type in your pipeline definition. So you switch:
gfx_defines!{
    ...
    pipeline pipe {
        ...
        out: gfx::RenderTarget<ColorFormat> = "Target0",
    }
}
to
gfx_defines!{
    ...
    pipeline pipe {
        ...
        out: gfx::BlendTarget<ColorFormat> = ("Target0", gfx::state::ColorMask::all(), gfx::preset::blend::ALPHA),
    }
}
And that’s it!
This has an advantage and a (small) disadvantage. The advantage is it’s totally rad. The disadvantage is that the blend mode is part of your pipeline definition, so if you want to change it you have to alter your pipeline object or create a new one. The pipeline portion of the gfx_defines! macro specifies a structure called pipe::Init, and the right-hand side of the assignment on each line is the default value that pipe::new() fills in for it. However, there’s no reason you can’t create your own pipe::Init structure from scratch and assign it whatever you want.
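As a sketch of what that might look like (assuming the blend-target pipe definition just above; ADD is simply another blend preset picked for illustration):

// Hand-built Init: the same right-hand-side values as in gfx_defines!,
// but chosen at runtime instead of baked in by pipe::new().
let init = pipe::Init {
    vbuf: (),
    transform: "Transform",
    out: ("Target0", gfx::state::ColorMask::all(), gfx::preset::blend::ADD),
};
let pso = factory.create_pipeline_simple(
    include_bytes!("shader/myshader_150.glslv"),
    include_bytes!("shader/myshader_150.glslf"),
    init,
).unwrap();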
As a game programmer who tends to make relatively simple games, I basically set the blend mode once at the beginning of the program and forget about it anyway.
Conclusion
gfx-rs is a pretty nice low-level portable graphics API. It’s conceptually similar to next-gen graphics API’s such as Vulkan, DX12 and Metal, while being easier to use, safe, cross-platform, and capable of using any of these API’s as a backend. That’s pretty slick. Once you get a handle on what the different parts are and how they fit together, it’s honestly pretty darn nice to use, and it compartmentalizes all the pieces it needs very elegantly. The people working on it are also generally pretty awesome, know what they’re doing, and are amazingly helpful if you pop into their Gitter chat and ask questions.
It has some downsides: Its documentation, while improving, is not the best, especially at the level of big overview stuff. Hopefully this helps that, a little. Its structure of a million little crates doesn’t really help, IMO, and makes it harder for the uninitiated to figure out what they should be doing. It’s young; I’ve discovered a couple bugs in the process of using it, but they’re generally fixed quickly. This also means that the API is in flux and might change in big ways in the future… but even if it does, version 0.17 is pretty solid for all the things I need it to be able to do, so for me at least there’s no reason not to keep using it even if new versions change everything in crazy ways.
I’m pretty sure this doesn’t really scratch the surface of what gfx-rs can do, and I’m no expert in graphics programming in general. But hopefully this is useful to people. If anyone has corrections or suggestions, contact me on Reddit as /u/icefoxen or in the #rust-gamedev channel on irc.mozilla.org.
Appendix
This is a random salad of stuff that I don’t have anywhere else to put yet:
- Resources are Arc-referenced, so cloning them is cheap. This is very nice because otherwise trying to have multiple things refer to a texture, for example, is a PITA. Everything in gfx::handle::* can be cloned easily as well.
- Encoder is really a convenience wrapped around the CommandBuffer type, which is lower-level; kvark cites this as an example of the difference between gfx and gfx_core, but I don’t really understand what the distinction is yet.
- Per kvark again: “there is PSO definition and PSO object. An object depends on the PSO definition + shader program + rasterizer state + PSO init data. So what you’d typically want is very few PSO definitions, and hence very few PSO Data types.” But you can create as many PSO data objects as you want, really, though each one might consume graphics card memory for the resources it needs. But again, since resources are Arc’ed, they’re easy to share.
Performance notes
There is a fairly basic performance example that gfx-rs includes which just draws 10,000 triangles, and has modes to do it using raw OpenGL or gfx-rs. Checking it out, the results were… well, surprising and disappointing: OpenGL was way faster than gfx-rs. Well, that didn’t seem right, and some profiling got me an answer: gfx-rs was emitting lots and lots of redundant calls to set OpenGL state. Some talk with the devs on Gitter and some digging around and hacking of the Encoder and lower-level CommandBuffer got the example down to pretty close to the OpenGL example, just by working a bit harder to issue fewer redundant OpenGL calls. Not bad work for one evening of slightly-intoxicated hacking. See https://github.com/gfx-rs/gfx/issues/1198 for details.
Conclusion: People have been much more focused on getting the API right and making it work well than on performance, which is pretty much the right decision. There’s no real reason for gfx-rs to be slow besides the effort not having been put in yet, and so there’s lots of low-hanging fruit to pluck when it becomes worth it. Building the command buffer with the Encoder does have some overhead in the OpenGL backend, so I’d expect that with some work the OpenGL backend would end up with performance about half that of raw OpenGL in the worst case. But from what little I know of optimizing OpenGL code, the answer is always “less redundancy” and “fewer draw calls doing more work” anyway, and that still applies for writing graphics code with gfx-rs. Plus the command-buffer model lets you spread most of the CPU cost of building it across multiple threads.
It will be very interesting to see how much overhead exists in the Vulkan-and-stuff backends, since they should be more amenable to this command-buffer-based model, but alas, as of Feb 2017 they’re not really done enough yet. Honestly gfx-rs is designed more with these next-gen API’s in mind than OpenGL anyway, and they all present CommandBuffer-like things that gfx-rs will be using directly, so ideally there should be next to no overhead for those backends. I’m excited!
Random type notes
- device: the OpenGL (or whatever) context. It’s what the Encoder interacts with to actually execute drawing commands.
- factory: An object that creates resources such as textures (or encoders or pipelines, for that matter), so it’s basically the root object, perhaps.
- encoder: A list of drawing commands
- command buffer: A lower-level list of drawing commands; the Encoder seems to generally be the higher-level device-independent interface, while a CommandBuffer is the lower-level device-dependent one. More or less.
- vertex: A structure containing all the data needed for, well, a single vertex.
- pipeline data: The collection containing the data needed for drawing a frame, such as vertex buffers, shader variables, textures, render targets, etc. Basically the set of variables contained in the shader pipeline. The pipeline state object seems to contain the shaders themselves.
- pipeline state object: The actual collection of information on a pipeline, consisting of shader programs and information on how to bind the pipeline data object to them. So the PSO is the actual pipeline, the pipeline data is just what’s input to it.
- resources: defines a bunch of associated types: shader, texture, buffer, etc. So by implementing this trait you can lock a bunch of types together into one type, it looks like. Interesting… What the concrete types are for a Resources trait appears to be defined by the backend.
Things that aren’t addressed yet in this tutorial
- Srgb
Random wisdom that hasn’t been incorporated yet.
10:54 < kvark> Icefoz: gfx-rs exists on 2 planes: the core and the render. Factory is from the core. Render
adds some new concepts, like the typed pipeline state objects, so it extends the factory to
work with them via FactoryExt.
10:55 < kvark> Icefoz: a better example of this core/render division is the CommandBuffer - Encoder concepts.
The former is core, the latter is render, more high-level, but conceptually the same thing.
@ebkalderon Thank you for writing this up. Minor nitpick: gfx_device_vulkan::Device does not exist.
@icefoxen 13:57 @ebkalderon Not a minor nitpick! How does Vulkan work then?
@msiglreith 14:00 Only exposing a graphics command queue atm in vulkan
@ebkalderon 14:03 @icefoxen Basically, it has a concept of starting and finishing frames. Once a frame is started, one or more graphics queues are generated from a device (mapped to a physical GPU on the system) and are passed around the application. These queues act kind of like gfx::Devices and can consume command buffers in a thread-safe manner. Finally, the frame is finished and displayed on-screen.
@icefoxen 14:04 Aha, interesting. I was assuming that those would basically be how the gfx::CommandBuffer was implemented for that backend.
@ebkalderon 14:06 I thought so too, but the differences are so great that it's difficult to ignore. Maybe check out the gfx_device_vulkan API on Docs.rs and see for yourself?
@msiglreith 14:07 Small nitpick, command queues are only created once at initialization.