
Onyx - Advanced Tutorial

Welcome to the advanced tutorial! It is recommended to do the Basic Tutorial before coming here, but if you already have, great!

Let's get started with some boring boilerplate code - we can just use all the code we left off with from the basic tutorial.

#include <Onyx/Core.h>
#include <Onyx/Math.h>
#include <Onyx/Window.h>
#include <Onyx/InputHandler.h>
#include <Onyx/Renderable.h>
#include <Onyx/Renderer.h>
#include <Onyx/Camera.h>

using Onyx::Math::Vec2, Onyx::Math::Vec3, Onyx::Math::Vec4;

int main()
{
    Onyx::ErrorHandler errorHandler(true, true);
    Onyx::Init(errorHandler);

    Onyx::Window window(
        Onyx::WindowProperties{ 
            .title = "Onyx Tutorial", 
            .width = 1280, 
            .height = 720,
            .backgroundColor = Vec3(0.0f, 0.2f, 0.4f)
        }
    );
    window.init();

    Onyx::InputHandler input;
    window.linkInputHandler(input);

    Onyx::Texture container = Onyx::Texture::Load(Onyx::Resources("textures/container.jpg"));
    Onyx::Renderable cube = Onyx::Renderable::TexturedCube(1.0f, container);

    Onyx::Camera cam(Onyx::Projection::Perspective(60.0f, 1280, 720));
    window.linkCamera(cam);
    cam.translateFB(-2.0f);

    Onyx::Font roboto = Onyx::Font::Load(Onyx::Resources("fonts/Roboto/Roboto-Bold.ttf"), 48);
    Onyx::TextRenderable fpsCounter("FPS: 0", roboto, Vec4::Cyan());
    fpsCounter.setPosition(Vec2(20.0f, window.getBufferHeight() - roboto.getSize() - 20.0f));

    Onyx::Renderer renderer(cam);
    window.linkRenderer(renderer);
    renderer.add(cube);
    renderer.add(fpsCounter);

    input.setCursorLock(true);

    const float CAM_SPEED = 4.0f;
    const float CAM_SENS = 50.0f;

    while (window.isOpen())
    {
        input.update();

        if (input.isKeyDown(Onyx::Key::Escape)) window.close();

        if (input.isKeyDown(Onyx::Key::W))      cam.translateFB( CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::A))      cam.translateLR(-CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::S))      cam.translateFB(-CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::D))      cam.translateLR( CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::Space))  cam.translateUD( CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::C))      cam.translateUD(-CAM_SPEED * window.getDeltaTime());

        cam.rotate(input.getMouseDeltas().getX() / 200.0f * CAM_SENS, input.getMouseDeltas().getY() / 200.0f * CAM_SENS);

        cam.update();

        cube.rotate(Vec3(20.0f * window.getDeltaTime()));

        fpsCounter.setText("FPS: " + std::to_string(window.getFPS()));

        window.startRender();
        renderer.render();
        window.endRender();
    }

    window.dispose();
    renderer.dispose();
    Onyx::Terminate();

    return 0;
}

Running this should get you the same old spinning textured cube, with an FPS counter in the top left. Let's get into the big topics of the advanced tutorial!

Lighting

Before I say anything - change the textured cube to a colored one; it will make the lighting pop out more.

// replace cube creation
Onyx::Renderable cube = Onyx::Renderable::ColoredCube(1.0f, Vec3::Red());

Ok, now we can get started. See how it's all just one blob, and if it wasn't rotating it would just look like a weird 2D shape? That's because there's no lighting; no shading. But we can change that. Onyx supports ambient & directional lighting, explained below. (There are currently no specular highlights, reflections, or shadows - if you want all that, head over to Unreal Engine.)

We use the Lighting class to adjust lighting settings to our satisfaction, and then we can give the renderer the lighting settings. Lighting is created from three settings: color, ambient strength, and direction. The color is pretty self-explanatory. The ambient strength is the base amount of light objects receive, regardless of whether the light is facing them, ranging from 0 to 1 - 0 is complete darkness with no light, and 1 is full brightness everywhere (defeating the point of lighting). The last setting is the direction, which is simply a vector specifying which way the light is facing. You can fine-tune a direction for yourself, or you can just use what I use: (0.2, -1.0, -0.3).
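As a rough mental model, here's typical diffuse lighting math worked out by hand (a sketch - not necessarily Onyx's exact shader formula): the brightness a surface receives is the ambient strength plus a term based on how directly the light hits it.

// typical diffuse lighting, worked out by hand - not necessarily Onyx's exact formula
float nx = 0.0f, ny = 1.0f, nz = 0.0f;                          // surface normal, pointing straight up
float lx = 0.2f, ly = -1.0f, lz = -0.3f;                        // light direction (ideally normalized)
float diffuse = std::max(0.0f, -(nx * lx + ny * ly + nz * lz)); // 1 when the light hits head-on, 0 when facing away
float intensity = std::min(1.0f, 0.3f + diffuse);               // 0.3 is the ambient strength - the baseline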

Keep in mind that object colors are multiplied by the light color, so if one of the channels (channel meaning R, G, or B separately) is 0, the corresponding channel of the object's color will be multiplied by 0 and end up 0 as well. If you want to make a colored light, it's better to set the channels you want less of to something like 0.8, rather than full-on 0.
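Here's that multiplication worked out - a pure red light crushes the other channels to black, while a soft red tint keeps them visible (the object color here is just for illustration):

// object (0.8, 0.6, 0.4) * light (1.0, 0.0, 0.0) = (0.8, 0.0, 0.0) - green/blue crushed to black
// object (0.8, 0.6, 0.4) * light (1.0, 0.8, 0.8) = (0.8, 0.48, 0.32) - still reddish, detail preserved
Onyx::Lighting redTint(Vec3(1.0f, 0.8f, 0.8f), 0.3f, Vec3(0.2f, -1.0f, -0.3f));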

// replace renderer creation
Onyx::Lighting lighting(Vec3(1.0f, 1.0f, 1.0f), 0.3f, Vec3(0.2f, -1.0f, -0.3f));
Onyx::Renderer renderer(cam, lighting);

If you move a little to the right after starting the program, you should see this:
Image 8
Now it actually looks like a cube! It still doesn't look too special, but don't worry, you'll get HUGE results from lighting when we get into model loading.

Just for fun, let's put the texture back on the cube, and maybe make the light color change over time. Onyx::GetTime() gives us the time in seconds since the window was initialized, so we can use some trig on the time for some color changing results. To actually update the renderer's lighting, we need to call refreshLighting() on it.

// replace cube creation
Onyx::Texture container = Onyx::Texture::Load(Onyx::Resources("textures/container.jpg"));
Onyx::Renderable cube = Onyx::Renderable::TexturedCube(1.0f, container);

// in mainloop
lighting.setColor(Vec3(std::max((float)sin(Onyx::GetTime()) * 2.0f, 0.5f), std::max((float)cos(Onyx::GetTime()) * 0.7f, 0.5f), std::max((float)tan(Onyx::GetTime()) * 1.3f, 0.5f)));
renderer.refreshLighting();

Here's the reason we need to refresh the lighting: Onyx tries to save some performance by not updating the shader variables of renderables every frame - instead, it only updates them when it needs to. It needs to when setLighting() is called (which updates the lighting for all renderables) and when add() is called (which updates the added renderable's lighting). However, in cases like this, we need to override that behavior, and so the renderer provides the refreshLighting() function to do so.

You should see some wacky results from this:
Image 9

Custom Renderables

Let's get rid of that cube renderable and make our own from scratch! This is a hefty topic, so it'll be divided into some subsections.

What are Renderables?

Renderables consist of three parts: the mesh, the shader, and the texture. The mesh contains the position and shape of the object, the shader does a few different things that we'll talk about, and the texture is pretty self explanatory at this point (many renderables, as we've seen, do not use a texture).

The Mesh

The mesh specifies the individual vertices that make up the object, and the indices that tell OpenGL how to draw the object.

Vertices may contain positional information (the most important), normal information (used for lighting), texture coordinate information, and/or color information (probably the least common, not seen in models). The positional information consists of 3D points in space, hopefully you're familiar with that. The normal information consists of 3D directional vectors in space, specifically ones perpendicular to the surface of the object. The lighting on an object will be the strongest if the normal vector and the light direction vector are pointed straight at each other, that's how it's calculated. The texture coordinate information consists of what are called UV mappings, essentially just the XY mappings, ranging from 0 to 1, on a texture image. (0, 0) is the bottom left of a texture, (1, 1) is the top right. Lastly, the color information consists of the 3 RGB values we're used to, as well as a 4th 'alpha' value, AKA the opacity. All of this information is specified in one float array, on a per-vertex basis, and we tell Onyx how the vertex array is formatted using the VertexFormat enum class. This will become more clear when we make a mesh ourselves, by far the hardest part of making custom renderables.
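For example, here's what a single vertex carrying position, normal, and texture coordinate data might look like (a sketch - the assumption, based on the formats used later in this tutorial, is that the values are laid out back to back in that order):

// one vertex with position, normal, and texture coordinate data - 8 floats back to back
float vertex[] = {
    0.0f, 1.0f, 0.0f,    // position (x, y, z)
    0.0f, 1.0f, 0.0f,    // normal - points straight up, like a floor's would
    0.5f, 1.0f           // texture coordinates (u, v) - middle-top of the texture
};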

All meshes are created from triangles, connecting three vertices together and filling the space between. If we just used the order of vertices to draw triangles, three vertices at a time, we would need to repeat many vertices many times. That's where indices come in - they allow us to refer to a vertex by an index; the first vertex defined has index 0, the next 1, and so on. These indices are implicitly assigned to each vertex. So, with an index array, one triangle is defined by simply three unsigned integers referring to the index of a vertex, instead of three large collections of floating point values. With a cube, it doesn't matter that much, but with large models it most definitely saves memory space.

If we are just drawing one triangle, we would have 3 vertices for the 3 points of the triangle, and the indices would simply be (0, 1, 2). Or (1, 2, 0), or (2, 0, 1) - where you start within one triangle doesn't matter.

If we are drawing a square, however, we would have 4 vertices for the 4 corners of the square, but the indices would have to specify two triangles to make up the square. If our vertices are laid out clockwise or counter-clockwise (as long as they aren't skipping around), our indices would be: (0, 1, 2), (2, 3, 0). Hopefully you can visualize that and make sense of it.
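In code, that square's index array would look something like this (it's the same pattern we'll use for the textured quad later on):

// 4 vertices shared between 2 triangles - 6 indices instead of 6 full vertices
uint indices[] = {
    0, 1, 2,    // first triangle
    2, 3, 0     // second triangle
};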

Alright, that's the hardest part out of the way, let's move on.

The Shader

Shaders are programs that run on the GPU of your system. They are written in a language called GLSL, which is similar to C but with some added matrix math functions, and while we could write and create our own completely custom shaders, we're going to use some of the many shader presets Onyx already has for us. If you're curious, you can take a peek at some GLSL shader code in the resources/shaders folder.

Shaders, at least how Onyx uses them, consist of two parts: the vertex shader and the fragment shader. The vertex shader's main function is to take 3D positional data and project it as coordinates on our 2D screen. It uses a combination of the renderable's transformations, the camera's view matrix, and the projection's projection matrix - conceptually, screenPosition = projection * view * model * vertexPosition. Don't worry, you don't really have to know this, but it's cool. In our case, the vertex shader also transforms the normal vectors using the renderable's transformations, and calculates the lighting intensity for each vertex. The fragment shader is responsible for coloring each pixel, and it simply uses a combination of the renderable's color or texture, as well as the light intensity that was calculated and sent over by the vertex shader.

The Texture

The texture is pretty simple, it's an image applied to a renderable. The UV mappings in the vertices we talked about earlier correspond to coordinates in the texture image, and the fragment shader interpolates these coordinates from each vertex to generate a coordinate for each fragment/pixel, and takes the color of the texture at that coordinate.

A Nuance

Sorry, this might be confusing. We can use a shader that does not require as much data as the mesh vertices provide, but we cannot use a shader that requires more than the mesh vertices provide. So, our mesh could provide positional, normal, texture coord, and color information, and we could use a shader that only concerns positional information, or one that only concerns positional and texture information, etc.

Let's Make a Renderable!

Let's make a tetrahedron. It sounds fancy, but it's really just a shape made from 4 points - 3 points specifying the base triangle, and one point on the top. From this, we get a base triangle and 3 side triangles.
Image 10

Let's make the 3 points for our base. It will change on the X and Z axes, and stay constant on the Y axis, since it is flat. Our XZ coordinates, to center around the 0, 0 origin, should be: (-1, -1), (1, -1), (0, 1). Make sense? Then, our top coordinate will just be in the center of the XZ plane and our Y coordinate will be 1. So, our 3D XYZ positional coordinates are as follows: (-1, 0, -1), (1, 0, -1), (0, 0, 1), (0, 1, 0). Let's make an array of vertices to show this:

float vertices[] = {
    -1.0f, 0.0f, -1.0f,
     1.0f, 0.0f, -1.0f,
     0.0f, 0.0f,  1.0f,
     0.0f, 1.0f,  0.0f
};

Great, now what will our indices look like? Well, we'll start off by making a triangle out of the base, which would just be (0, 1, 2). Then, we need 3 triangles made out of 2 consecutive vertices of the base, plus the top vertex. So, for organization, our third index can always be the top vertex, and the first two can be the consecutive vertices of the base, starting at 0, 1. So here are those 3 sets of indices: (0, 1, 3), (1, 2, 3), (2, 0, 3). Note how we roll back to 0 for the second index of that last set. Let's code this:

uint indices[] = {
    0, 1, 2,
    0, 1, 3,
    1, 2, 3,
    2, 0, 3
};

Now we can make a mesh out of this! A mesh needs a VertexBuffer object and an IndexBuffer object - these objects are very simple, they just need the array data and the size of the arrays, and the vertex buffer also needs the format of our vertices. Based on what data the vertices may contain, we may have positional information (P), normals (N), texture coords (T), and/or colors (C). So the available formats include P, PN, PC, PT, PCT, PNT, PNC, and PNCT. We just have positional information, so our format will be P.

Onyx::Mesh mesh(
    Onyx::VertexBuffer(vertices, sizeof(vertices), Onyx::VertexFormat::P),
    Onyx::IndexBuffer(indices, sizeof(indices))
);

They are called buffers because the information they hold is temporary. If you aren't the one who created the vertex buffer (like when we use the presets), it may be deleted at any time after the mesh is created. Because of this, Onyx does not allow access to the data given to the buffers after they are created. However, if you are the one who created them, you are in control of the data they were given, and if the arrays you provide are allocated on the heap it is still your responsibility to free that memory.
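For instance, if you allocate the vertex data on the heap yourself, you'd free it once the mesh exists - a sketch, assuming (per the above) that the data is safe to delete once the mesh has been created:

// we own this heap allocation, not Onyx
float* heapVertices = new float[12] {
    -1.0f, 0.0f, -1.0f,
     1.0f, 0.0f, -1.0f,
     0.0f, 0.0f,  1.0f,
     0.0f, 1.0f,  0.0f
};

Onyx::Mesh mesh(
    // note: sizeof(heapVertices) would just be the pointer size, so we compute the byte count
    Onyx::VertexBuffer(heapVertices, 12 * sizeof(float), Onyx::VertexFormat::P),
    Onyx::IndexBuffer(indices, sizeof(indices))
);

delete[] heapVertices; // our responsibility - safe now that the mesh has been created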

Shader names consist of the vertex format they are designed for, as well as an underscore and any sort of 'extension' they may have. For us, we want a shader designed for the P vertex format, and we want to choose the color of the object. So, we will use the P_Color shader preset with the color we want, I'm gonna use green.

Onyx::Shader shader = Onyx::Shader::P_Color(Vec4::Green());

Great, now we can create a renderable by simply passing in this mesh and shader, and add it to the renderer!

Onyx::Renderable tetra(mesh, shader);
// ...
renderer.add(tetra);

Now run the program, let's see what we've got!
Image 11
Well, it's kinda just a blob, even worse than the cube. We could add some normal vectors so there is lighting, but that would be too mathy. Instead, we're going to render in wireframe mode, which will show us the outlines of our triangles. All we have to do is use the renderer's static SetWireframe function:

// can put this anywhere after Onyx initialization
Onyx::Renderer::SetWireframe(true);
// can set width of lines with Renderer::SetLineWidth(float)

Image 12
Now we can actually see the shape. Let's add a texture to it, which will help us see the shape without wireframe.

But, there's a problem - each face needs its own 3 texture coordinates, which means we now need 3 vertices per face. This kind of defeats the whole point of the vertex and index system, but it's important to understand that when loading huge models, it does save lots of data.

Anyways, we now need 12 vertices for our textured tetrahedron, and now our indices will just be 0-11 in order. Each face will just use the bottom-left of the texture, the bottom-right, and then the middle-top, just like how our positional information is defined for our base triangle. So our texture coordinates for every face will just be (0, 0), (1, 0), (0.5, 1). Here's how all that vertex and index data would look:

float vertices[] = {
    // positions            // texture coords
    -1.0f, 0.0f, -1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     0.5f, 1.0f,

    -1.0f, 0.0f, -1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.5f, 1.0f,

     1.0f, 0.0f, -1.0f,     0.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.5f, 1.0f,

     0.0f, 0.0f,  1.0f,     0.0f, 0.0f,
    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.5f, 1.0f
};

uint indices[] = {
    0, 1, 2,
    3, 4, 5,
    6, 7, 8,
    9, 10, 11
};

Now, we need an actual texture to apply to the renderable. We can just use the container texture from before, or feel free to grab a different one from the web. We also need to change our vertex format to PT, since that is how our vertices are laid out, and we need to change our shader to the PT shader. This shader doesn't have any extras, so there's no underscore extension - just the vertex format. Here's how all of that works out:

Onyx::Mesh mesh(
    Onyx::VertexBuffer(vertices, sizeof(vertices), Onyx::VertexFormat::PT),
    Onyx::IndexBuffer(indices, sizeof(indices))
);

Onyx::Shader shader = Onyx::Shader::PT();

Onyx::Texture container = Onyx::Texture::Load(Onyx::Resources("textures/container.jpg"));

Onyx::Renderable tetra(mesh, shader, container);

And, if you turned wireframe off, you should get this:
Image 13

Let's spice it up one last time - we can assign colors to each vertex along with these texture coordinates! Let's assign red to the bottom-left of the base, yellow to the bottom-right, green to the middle-top, and blue to the top of the whole thing (the 4th value will always be 1.0f, that's just the opacity):

float vertices[] = {
    // positions            // colors                   // texture coords
    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f, 0.0f, 1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 1.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     0.0f, 1.0f, 0.0f, 1.0f,     0.5f, 1.0f,

    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f, 0.0f, 1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 1.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.0f, 0.0f, 1.0f, 1.0f,     0.5f, 1.0f,

     1.0f, 0.0f, -1.0f,     1.0f, 1.0f, 0.0f, 1.0f,     0.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     0.0f, 1.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.0f, 0.0f, 1.0f, 1.0f,     0.5f, 1.0f,

     0.0f, 0.0f,  1.0f,     0.0f, 1.0f, 0.0f, 1.0f,     0.0f, 0.0f,
    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.0f, 0.0f, 1.0f, 1.0f,     0.5f, 1.0f
};

And once we change the vertex format and the shader to PCT, we get this cool result:
Image 14
Remember when I said: we can use a mesh that provides more data than the shader requires, but we can't use a shader that requires more data than the mesh provides? Well, our mesh provides position, color, and tex coord data, meaning we can still use the P_Color shader and assign a color to all vertices, or the PC shader to ignore the texture, and it will still work perfectly. Try it out!
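For example, replacing the renderable creation, either of these should work with the exact same PCT mesh (assuming the PC preset exists, per the naming scheme described earlier):

// option 1: flat green - only the positional data from the mesh is used
Onyx::Renderable tetra(mesh, Onyx::Shader::P_Color(Vec4::Green()));

// option 2: per-vertex colors, no texture - the tex coords are simply ignored
// Onyx::Renderable tetra(mesh, Onyx::Shader::PC());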

That's gonna be it for the renderable section. You may not need all of this knowledge once we get to model loading, but I think it's good to know.

Model Loading

It's time for the big guns. This section will be way shorter than the renderable section (don't worry, they all will), and way cooler.

We can load an OBJ file into a ModelRenderable, which behaves just like a normal Renderable, except in reality it is a collection of separate renderables. First, let's download a model to use. OBJ files, most of the time, come with complementary MTL (material) files to color the objects, and they sometimes come with textures as well. I've compiled this animal model pack into a nice zip folder that you can download here - extract the contents of that file into your resources/models folder. Make sure that you have Animals.obj and Animals.mtl in the resources/models folder, and a large collection of images in resources/models/AnimalTextures.

Alright, now to load a model, we use the Model class and its static LoadOBJ function with the filepath, which for us will be Onyx::Resources("models/Animals.obj"). We then create a ModelRenderable object and just give it the loaded Model object. We can do this all on one line:

Onyx::ModelRenderable animals(Onyx::Model::LoadOBJ(Onyx::Resources("models/Animals.obj")));
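Or, in two steps, if you'd rather keep the Model object around (equivalent to the one-liner above):

Onyx::Model model = Onyx::Model::LoadOBJ(Onyx::Resources("models/Animals.obj"));
Onyx::ModelRenderable animals(model);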

You can add animals to the renderer just like you would any other renderable. Now, make sure you still have the lighting for the renderer from the first section (without the wacky effects, preferably), and run the program.
Image 15
Voila!

To really see the effect of lighting, let's add a keybind that will toggle lighting. The renderer, as well as many other classes, can handle toggling settings for us, so this is all we have to do (I'm going to use the L key):

// in mainloop
if (input.isKeyDown(Onyx::Key::L)) renderer.toggleLightingEnabled();

If you run it now, though, you may notice a problem - this lighting toggle runs every frame the L key is held down, so unless we press it for just a frame or two, it will toggle multiple times and just flash for a second.

Key Cooldowns

Luckily, the InputHandler class has a solution: cooldowns. We can add cooldown durations (in seconds) to each and every key and/or mouse button. Here's all we have to do:

// before mainloop
input.setKeyCooldown(Onyx::Key::L, 0.5f);

Now, there is a 0.5 sec cooldown for the L key. If you hold it down, it perfectly illustrates the importance of lighting:
Image 16

A Quick Tip

When downloading OBJ and MTL files, be careful renaming them. The OBJ file references the MTL file by name, so if you change the name of the files, you need to change that reference as well - near the beginning of the OBJ file there will be a line reading mtllib <name>.mtl, and it must match the filename of the MTL file. Additionally, in the MTL file, make sure all texture paths are accurate. The texture that Onyx uses is the map_Kd entry in MTL files, so make sure the path after every map_Kd label is accurate (many model uploaders keep the filepaths absolute, so they are only valid on their machine - you will have to fix this often).
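As a hypothetical example, if you renamed the animal files to Zoo.obj and Zoo.mtl, the relevant lines would need to look like this (the texture filename is made up for illustration):

# near the top of Zoo.obj
mtllib Zoo.mtl

# inside Zoo.mtl - a relative path that actually exists on your machine
map_Kd AnimalTextures/cow.png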

UI (User Interface)

Apart from the text rendering, everything we've rendered has been affected by our projection and camera movement. But a lot of the time we don't want that - we want UI, like buttons, menus, etc. Lucky for us, there is a UiRenderable class for just that! There are no presets for this class, however, so that's why I've saved it for the advanced tutorial - specifically after the custom renderables section. That knowledge will help us a lot here, although it's not as complicated.

All coordinates in OpenGL are 3D. OpenGL doesn't know if we want to render something as UI or not, so all positional coords are 3D, period. So, for UI, we'll just set the Z coordinate to 0. The main differences with UI rendering are that 1) the shaders used don't factor in the camera POV, so the object stays static on the screen, and 2) the coordinates we enter are SCREEN coordinates, not world coordinates. That means the range for our window is (0-1280, 0-720). Let's make a little triangle UI object.

Similar to custom renderables, we define the vertices and indices first. Let's make a triangle, 300 pixels wide and tall, that will cover part of the bottom left of our screen, and make a Mesh out of it, hopefully you're good with this by now:

float vertices[] = {
    0.0f,   0.0f,   0.0f,
    300.0f, 0.0f,   0.0f,
    150.0f, 300.0f, 0.0f
};

uint indices[] = {
    0, 1, 2
};

Onyx::Mesh mesh(
    Onyx::VertexBuffer(vertices, sizeof(vertices), Onyx::VertexFormat::P),
    Onyx::IndexBuffer(indices, sizeof(indices))
);

Making a UiRenderable doesn't require explicitly defining a shader - that's handled internally. We can either make a colored UI renderable or a textured one. For now, we'll make it colored, and we'll use a slightly transparent orange, making the alpha value 0.5. Then, we can add it to the renderer, as usual (you can keep the animals model).

Onyx::UiRenderable triangle(mesh, Vec4::Orange(0.5f));
// ...
renderer.add(triangle);

You should get this, being able to see through the triangle:
Image 17

Alright, let's make a textured UI renderable now. I found some random transparent-background PNG of a Santa hat, so I'm gonna use that. Here's a direct download link - add the image to your resources/textures folder: Download
(I renamed it to just santaHat.png)

Alright, so we're going to redo our vertices and indices, just to make a square with some self-explanatory texture coordinates. Let's make it in the middle of the screen, and maybe 200 px wide and tall. We'll also have to change the vertex format of the mesh to PT.

float vertices[] = {
    // positions                        // tex coords
    1280/2 - 100, 720/2 - 100, 0,       0, 0,
    1280/2 + 100, 720/2 - 100, 0,       1, 0,
    1280/2 + 100, 720/2 + 100, 0,       1, 1,
    1280/2 - 100, 720/2 + 100, 0,       0, 1
};

uint indices[] = {
    0, 1, 2,
    2, 3, 0
};

Onyx::Mesh mesh(
    Onyx::VertexBuffer(vertices, sizeof(vertices), Onyx::VertexFormat::PT),
    Onyx::IndexBuffer(indices, sizeof(indices))
);

(1280/2, 720/2) gets the exact center of the screen, and we subtract and add 100 to get a width and height of 2(100) = 200.

Now we can just give the UiRenderable constructor a texture instead of a color:

Onyx::UiRenderable santaHat(mesh, Onyx::Texture::Load(Onyx::Resources("textures/santaHat.png")));
// ...
renderer.add(santaHat);

Image 18
I put the hat on the cow 😄

We can also transform UI renderables, and it's much more friendly since there's only (effectively) 2 axes. Let's rotate it and scale it down each frame.

// in mainloop
santaHat.rotate(10.0f * window.getDeltaTime());
santaHat.scale(1 - 0.2f * window.getDeltaTime());

Image 19
Uhh... that's weird. What's going on?

Well, all meshes scale and rotate around the point (0, 0), the origin - well, not exactly. (Everything I'm about to say applies to 3D space too, not just UI, the origin obviously being (0, 0, 0).) They scale and rotate around the origin relative to their original vertex coordinates, and transformations don't affect those original vertex coordinates. So, if we want the hat to rotate around its center, we need to define its vertex coordinates around (0, 0), and then set its position to the middle of the screen. But that's not too hard - it's arguably easier. Let's do it!

// redefine vertices to be centered around 0, 0
float vertices[] = {
    // positions         // tex coords
    -100, -100, 0,       0, 0,
     100, -100, 0,       1, 0,
     100,  100, 0,       1, 1,
    -100,  100, 0,       0, 1
};
// ...
santaHat.setPosition(Vec2(1280 / 2, 720 / 2));

Image 20
Much better.

Well, that's going to be it for the advanced tutorial. For more tutorial-like practice and experience with Onyx, check out Guides!
