Optimizations - jopo86/onyx GitHub Wiki

Onyx - Optimizations

Onyx's high-level features are convenient for simple games and applications, but they can hurt performance when rendering thousands of renderables. To mitigate this, we need to stop relying on the parts of Onyx that streamline the rendering process, because that streamlining also adds overhead.

Rendering Without Renderables

We can potentially gain drastic performance improvements by rendering at a somewhat lower level, closer to raw OpenGL (though still far from it). The Renderable and Renderer objects are convenient, but they add considerable overhead: every single renderable has its own mesh, shader, and texture, and every time one is rendered the Renderer uses its shader, binds its texture, and then renders its mesh, which is simply the standard OpenGL rendering workflow.

What we can do instead is create only one copy of each shader and texture we need. We can then bind a shader/texture combination once and repeatedly render every mesh that uses it. This reduces overhead because switching shaders and textures is an expensive call to the GPU, and eliminating many of these calls recovers measurable performance.

But how would we render with our translations, camera, lighting, and/or fog now? All of these things are just variables (uniforms) that the shaders read. So we can still create Camera, Lighting, and Fog objects; we just have to set the shader variables ourselves instead of having the Renderer do it for us (we'll discuss translations shortly). Here is how Onyx's shader presets have these variables laid out:

uniform mat4 u_view;
uniform mat4 u_projection;

uniform vec3 u_camPos;

struct Lighting
{
	bool enabled;
	vec3 color;
	float ambientStrength;
	vec3 direction;
};

struct Fog
{
	bool enabled;
	vec3 color;
	float start, end;
};

uniform Lighting u_lighting;
uniform Fog u_fog;

If you don't understand this GLSL code, that's fine. These are just the names of the variables in the shaders, so we know what to set. Assuming we have shader, camera, lighting, and fog objects created, this is how we can update the shader accordingly:

shader.use(); // shader needs to be active to set uniform variables!

shader.setMat4("u_view", cam.getViewMatrix());
shader.setMat4("u_projection", cam.getProjectionMatrix());
shader.setVec3("u_camPos", cam.getPosition());

shader.setBool("u_lighting.enabled", true); // whether we want lighting to be enabled
shader.setVec3("u_lighting.color", lighting.getColor());
shader.setFloat("u_lighting.ambientStrength", lighting.getAmbientStrength());
shader.setVec3("u_lighting.direction", lighting.getDirection());

shader.setBool("u_fog.enabled", true); // whether we want fog to be enabled
shader.setVec3("u_fog.color", fog.getColor());
shader.setFloat("u_fog.start", fog.getStart());
shader.setFloat("u_fog.end", fog.getEnd());

We probably wouldn't have to update the lighting or fog settings every frame, but the above covers updating everything.

Now, how do we handle translations? Translations use a matrix called the model matrix, which bundles translation, rotation, and scaling into one operation. All we do is create a Mat4 object, and then we can call translate, rotate, and scale on it just like we can with renderables (note, however, that there are no setPosition/setRotation/setScale functions; you would have to build that system yourself).

Now, assuming we have a Mat4 model variable created, to apply the model matrix, we set the shader's variables:

uniform mat4 u_model;
uniform mat4 u_inverseModel;

like this:

shader.setMat4("u_model", model);
shader.setMat4("u_inverseModel", Onyx::Math::Inverse(model)); // inverting a matrix is expensive; recompute only when the model matrix changes, not every frame

The inverse model matrix is needed to correctly transform normal vectors. If the shader you're using doesn't use normal vectors, it won't require an inverse model.