20210607 Development R7 M2 Rendering

Rendering

GENERAL

For Orbital we want a rendering system with these features:

  1. Not Invasive. It must not try to seize control of the system architecture. Bevy3D is a good example of this anti-pattern, which seems to pollute many Rust libraries. Rust programmers especially have a conceit that they can make Rust friendly enough that people will want to write their apps in it; this is an artifact of Rust itself being so unfriendly, and of over-reaching aspirational programming. This unfortunately creates lock-in: once a person has built their app in the "Bevy Mindset" they are trapped there. Bevy3D is ultimately a strongly opinionated framework that pretends to be a rendering engine - it thinks of itself as the outermost framing concept. We DO NOT want our entire operating system to be controlled by some insignificant module - especially not the fricking rasterizer.

  2. Simple. Specifically, it must be driven by "throwing messages over a fence". Ideally we want to publish messages to the renderer and have it manufacture whatever we describe.

  3. Retained-mode scenegraph based. I want a retained-mode scenegraph that I can drive by sending it fragments that it will remember. I don't want to drive an immediate-mode renderer that has no state.

  4. Rich 2D semantics. I want a single engine that combines 2d and 3d thinking, but it especially has to have rich 2d semantics. There are a few 3d engines for Rust that are okay, but they have incomplete 2d semantics. This has to be as good as the web canvas.

MESSAGES

We send messages to the rasterizer describing parts of the scene for it to include. These are described in JSON and passed over a messaging bus to the renderer. A message might look like this:

{
  "id": 1234,
  "kind": "circle",
  "radius": 5,
  "border": "3px",
  "fillcolor": "blue",
  "bordercolor": "red",
  "xyz": [0, 0, 0],
  "ypr": [0, 0, 0],
  "hwd": [1, 1, 1]
}
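
On the receiving side, the renderer deserializes these blobs and upserts the corresponding retained node. A minimal sketch of that consuming side, assuming serde/serde_json and a hypothetical SceneMessage struct that mirrors the example above:

use serde::Deserialize;

// Hypothetical message shape mirroring the JSON example above; fields that do
// not apply to a given primitive are simply omitted from the message.
#[derive(Debug, Deserialize)]
struct SceneMessage {
    id: u64,
    kind: String,
    radius: Option<f32>,
    border: Option<String>,
    fillcolor: Option<String>,
    bordercolor: Option<String>,
    xyz: Option<[f32; 3]>,
    ypr: Option<[f32; 3]>,
    hwd: Option<[f32; 3]>,
}

fn handle_message(raw: &str) -> Result<(), serde_json::Error> {
    let msg: SceneMessage = serde_json::from_str(raw)?;
    // A real renderer would insert or update the retained node keyed by msg.id here.
    println!("upserting node {} of kind {}", msg.id, msg.kind);
    Ok(())
}

fn main() {
    let raw = r#"{ "id": 1234, "kind": "circle", "radius": 5, "fillcolor": "blue" }"#;
    handle_message(raw).unwrap();
}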

2D PRIMITIVES

We have demanding expectations of the 2d primitives:

  • text -> ttf, shadow, kerning, baselines, text wrapping
  • lines -> straight, bezier, capped, triangulated -> see https://mattdesl.svbtle.com/drawing-lines-is-hard
  • box -> curved corners or not
  • ellipse -> arcs, curves
  • sprites -> slicing, tiling, scaling, rotating, stencils, additive, masks
  • outline -> alpha, color, gradient, image and other style attributes
  • fill -> alpha, color, gradient, image and other style attributes
  • layers -> separated rendering 'windows' or targets
  • clipping

2D Widgets

We want to be able to instantiate high-level widgets that can return event state such as button presses (a message sketch follows this list):

  • buttons that can be styled
  • text input box that can be styled
  • draggable region
  • layout helper (grid, flow, etc)
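
A sketch of how such a widget might be requested, and how its event state could come back over the same bus. The message names and fields here are hypothetical, just extending the scheme from the MESSAGES section (serde_json's json! macro is used for brevity):

use serde_json::json;

fn main() {
    // Hypothetical outbound message: ask the renderer to manufacture a styled button.
    let create_button = json!({
        "id": 2001,
        "kind": "button",
        "label": "Save",
        "fillcolor": "blue",
        "xyz": [10, 10, 0]
    });

    // Hypothetical inbound message: the renderer reports a press event on that widget.
    let press_event = json!({
        "id": 2001,
        "kind": "event",
        "event": "pressed"
    });

    println!("{create_button}");
    println!("{press_event}");
}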

3D PRIMITIVES

In our case we actually do not have heavy 3d demands:

  • camera
  • lights
  • meshes and animations
  • physics
  • gltf support
  • clipping

DAG / Node Hierarchy / Grouping concepts

Nodes

  • grouping concepts
  • transform hierarchies
  • visibility
  • properties carried downwards (see the node sketch below)
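
A minimal sketch of what such a node might look like (the names here are assumptions, not the actual Orbital types), with visibility carried downwards through the hierarchy:

// A retained scene-graph node; transforms and visibility propagate to children.
#[derive(Clone, Copy)]
struct Transform {
    xyz: [f32; 3],
    ypr: [f32; 3],
    hwd: [f32; 3],
}

struct Node {
    id: u64,
    local: Transform,
    visible: bool,
    children: Vec<Node>,
}

// Walk the hierarchy; a real traversal would also compose the parent and local
// transforms and hand the world transform to the rasterizer.
fn walk(node: &Node, parent_visible: bool) {
    let visible = node.visible && parent_visible;
    if visible {
        println!("drawing node {} at {:?}", node.id, node.local.xyz);
    }
    for child in &node.children {
        walk(child, visible);
    }
}

fn main() {
    let identity = Transform { xyz: [0.0; 3], ypr: [0.0; 3], hwd: [1.0; 3] };
    let leaf = Node { id: 2, local: identity, visible: true, children: vec![] };
    let root = Node { id: 1, local: identity, visible: true, children: vec![leaf] };
    walk(&root, true);
}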

In more detail

I keep running into questions around how to display stuff to the user. This is something I need to solve in some reasonable way. Let's try to define what I need from display capabilities:

  1. API/Library pattern. I want an API-like interface, not a "system". I want to be able to say "paint a thing" and not have to fit my code inside some other harness.

  2. 2D. This is critical. I would like "canvas like" 2d primitives: svg, sprites, text (in different fonts, sizes and effects), line drawing, 2d rectangles and boxes, arcs, circles, ellipses, bezier paths. Brush effects such as stroke style, fill style, alpha, gradients, painting with an image, filled, unfilled, borders, caps on lines, shadows and so on. On top of this, common 2d widgets such as input boxes and buttons are nice to have.

  3. 3D. This is critical. Aside from fonts, most 2d rendering can be subsumed into the larger space of 3d rendering capabilities. This is more than simply setting a camera position; it means using transform hierarchies and retained state to better organize, orchestrate and manage the rendering of elements in a structured way. From a 3D API I want a scene-graph model with basic primitives: groups, transforms, camera, light, geometry. 3d tubes/mitre/bezier are nice to have but not critical. I need to be able to load gltfs. I expect Rapier physics. Directly specifying shaders is critical as well.

  4. NOT Rust focused. Many of the Rust 3d engines focus on "how easy it is" to program 3d games in Rust. The fact is, however, that there isn't a single person in the world who is going to build an industrial-strength game in Rust. Rust is the wrong language for describing thousands of small lightweight scenes and interactions. Engines like Bevy3d are going in the wrong direction: by focusing on native programming in Rust, they are failing to focus on scripting languages and visual grammars, which are the primary way that non-technical people build 3d games today.

Current Rust 2d/3d Engines

The candidates, compared informally on mainloop behavior, WGPU support, shaders, GLTF, fonts, widgets and available examples, are WebGPU (used directly), Bevy3D, Amethyst, Harmony, Piston3d, three-d and Rg3d:
  1. WebGPU. I could build my own API-like 3d engine directly on top of WebGPU. It's an excellent bridge to GPU hardware and not hard to use. It would however take me a few weeks to build a reasonable 3d engine. Here are a couple of good starting points -> https://github.com/Joey9801/wgpu-test/blob/master/src/main.rs , https://github.com/gfx-rs/wgpu-rs/tree/master/examples and https://sotrh.github.io/learn-wgpu/intermediate/tutorial12-camera . The challenges of a DIY approach are building a good camera and projection system with hierarchical transforms and a scene graph, as well as shaders to deal with lights and geometry (a sketch of the camera math follows this list). There may also be challenges dealing with fonts, and integrating physics takes some work too.

  2. Bevy3D. https://bevyengine.org/ . Uses WGPU/Rapier, which is nice. This engine focuses on making Rust itself a pleasant way to build a game. I'm not happy with how coercive it is however. If I were going to use this engine I'd have to work around the App::run() pattern, which is coercive (it attempts to control me and makes my kernel into a module of Bevy rather than vice versa). I'd also build up some ECS primitives that represent the typical kinds of concepts or objects that people routinely manage in 3d - 99% of developers just want objects with position, orientation and transform hierarchies, so the ECS pattern is kind of stupid: 99% of the time it will be used to re-invent that same pattern. It does do many of the things that I want otherwise. See https://caballerocoll.com/blog/bevy-chess-tutorial/

  3. Amethyst. https://github.com/amethyst/amethyst/blob/main/examples/gltf_scene/main.rs . Looks like it talks directly to APIs like Vulkan?

  4. Harmony. A newer smaller project, WGPU and GLTF support; does not coercively control the main loop. See https://github.com/StarArawn/harmony/blob/master/examples/hello-cube.rs

  5. Piston3d. https://github.com/PistonDevelopers/piston-examples/blob/master/examples/cube.rs . Doesn't seem to have a lot of support for things like GLTFs or physics. https://github.com/PistonDevelopers/piston/wiki/Piston-overview . https://kai.coding.blog/why-i-like-piston

  6. Three-D. OpenGL / GLTF . https://github.com/asny/three-d

  7. RG3d. OpenGL -> https://github.com/rg3dengine/rg3d-tutorials/blob/main/tutorial1-character-controller/src/main.rs . Takes control of the main loop which I hate.

  8. Pathfinder. https://github.com/servo/pathfinder -> It would be pleasant to support a modern high performance font engine as well... this only binds to OpenGL right now (not WebGPU).

  9. wgpu_glyph. Font support if going with a DIY approach. https://github.com/hecrj/wgpu_glyph/blob/master/examples/hello.rs

  10. WebXR. It is hard for me to leverage this at the level I am at (raw Rust). https://lib.rs/crates/openxr

  11. https://www.libsdl.org/

  12. https://www.glfw.org/

  13. Makepad (which is very appealing but needs more documentation). https://www.youtube.com/watch?v=CvWWcCvhV3w
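
Referring back to option 1: the camera and projection plumbing for a DIY engine is mostly matrix math. A rough sketch of the two core pieces - a view-projection matrix and hierarchical transform composition - using the glam crate (my assumption here; the linked examples use their own math setups):

use glam::{Mat4, Vec3};

// Build a view-projection matrix for a simple perspective camera.
fn view_projection(eye: Vec3, target: Vec3, aspect: f32) -> Mat4 {
    let view = Mat4::look_at_rh(eye, target, Vec3::Y);
    let proj = Mat4::perspective_rh(45_f32.to_radians(), aspect, 0.1, 100.0);
    proj * view
}

// The core of a hierarchical scene graph: world = parent_world * local.
fn world_transform(parent_world: Mat4, local: Mat4) -> Mat4 {
    parent_world * local
}

fn main() {
    let vp = view_projection(Vec3::new(0.0, 2.0, 5.0), Vec3::ZERO, 16.0 / 9.0);
    let world = world_transform(Mat4::IDENTITY, Mat4::from_translation(Vec3::X));
    // In a real engine this matrix would be uploaded as a uniform to the shaders.
    println!("{:?}", vp * world);
}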

In the same vein one can imagine a high-level UX library like Flutter being dynamically fetched and made available for clients that wish to use it. The main goal is to not "enforce" any one solution but to keep the kernel extremely small. Another high-level UX option here is https://github.com/hecrj/iced .

A somewhat lower level toolkit that some may want to use is https://github.com/RobLoach/node-raylib .

For my purposes I will not let users talk to lower level capabilities below my scene graph manager. High performance developers may want more power - but I don't exactly see how we provide that in a safe way.


Graphics Rendering tensions

The more I look at this, the messier these existing solutions appear to be. They are all trying to "teach" developers how to "build" games in Rust - rather than merely being APIs or appliances. They enforce all kinds of thinking and patterns that they want you to adopt and be invested in. But I don't really want to be hugely invested in their incredibly poor thinking - I don't want to have to master some weird system that somebody has decided is the future of Rust-based programming. And I don't actually even want to drive the system from Rust - I just want to expose it to javascript. It's not reasonable to ask novices to program in Rust!

Are there any WGPU solutions specifically that can be used?

WGPU seems like the most powerful - but the tools in Rust seem poor. Here are the options I see:

  1. This rendering example does a basic wgpu setup and render: https://github.com/gfx-rs/wgpu-rs/blob/master/examples/hello-triangle/main.rs
  2. This example also sets up wgpu and runs it - to render a 3d camera view: https://github.com/sotrh/learn-wgpu/blob/master/code/intermediate/tutorial12-camera/src/main.rs
  3. This example does a basic setup and then renders fonts also: https://github.com/hecrj/wgpu_glyph/blob/master/examples/hello.rs
  4. This example renders glbs but it steals the main loop … it is pretty darn nice as a framework though - https://github.com/BrassLion/rust-pbr/blob/master/src/main.rs
  5. Bevy does do it all - but it wants to wrap me. https://github.com/bevyengine/bevy/blob/latest/examples/3d/3d_scene.rs
  6. Or, I could build my own thing. Notably the BrassLion example works hard to set up an architecture, and so do the Joey9801 example and the tutorials. I could scavenge from all of these to devise some system, and then try to paint something myself. This would be weeks of work however; especially dealing with input, camera transforms, shaders, asset management and so on.


BEVY specifically as a solution?

I am trying out Bevy as a rendering layer. It's not really designed to be used the way I am using it. There are several tensions:

  • "system" -> bevy has a conceit that it itself should be your outermost scope or 'system'
  • mainloop -> as a result it grabs the run loop; preventing you from doing any other work
  • "resource" -> if you want to do work you have to carefully package up any state as a resource
  • "system" -> bevy introduces a pile of concepts, one is a system that can "do work"
  • "query" -> it's unclear exactly how a system gets arbitrary arguments but it just works

My approach for using Bevy:

  • messages -> I've managed to get my message channel visible to a bevy "system" at runtime
  • create -> I pass messages to my code to manufacture bevy objects as I wish (a sketch of this approach follows)
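
A rough sketch of that message-channel approach, assuming Bevy 0.5-era APIs (current as of this writing; newer versions rename App::build to App::new, drop .system(), and require #[derive(Resource)]) and crossbeam-channel for the bus:

use bevy::prelude::*;
use crossbeam_channel::{unbounded, Receiver, Sender};

// Hypothetical message type; the real messages are JSON blobs as described above.
struct SpawnRequest {
    kind: String,
}

fn main() {
    let (tx, rx): (Sender<SpawnRequest>, Receiver<SpawnRequest>) = unbounded();

    // The kernel keeps `tx` and pushes messages from its own threads.
    std::thread::spawn(move || {
        tx.send(SpawnRequest { kind: "circle".into() }).ok();
    });

    App::build()
        .add_plugins(DefaultPlugins)
        .insert_resource(rx) // the Receiver is Send + Sync, so it can live as a resource
        .add_system(consume_messages.system())
        .run(); // Bevy owns the main loop from here on

}

// A Bevy "system" that drains the channel each frame and manufactures entities.
fn consume_messages(rx: Res<Receiver<SpawnRequest>>, mut commands: Commands) {
    for msg in rx.try_iter() {
        // A real handler would attach mesh/material/transform components here.
        println!("spawning a {}", msg.kind);
        commands.spawn();
    }
}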


Conclusions: It doesn't seem to make sense to provide wholesale a given "full stack" such as the Bevy or Piston game engines provide. Developers may have their own stacks, and late-binding composability should in theory allow shared libraries or resources to be delivered to the kernel and re-used by other apps as needed. At the same time, most developers building apps in our system will effectively want to leverage tools such as Amethyst or Bevy.