1.3 Shaders
:link: Source Code
As mentioned in the [Hello Triangle](1.2 Hello Triangle) tutorial, shaders are little programs that rest on the GPU. These programs are run for each specific section of the graphics pipeline. In a basic sense, shaders are nothing more than programs transforming inputs to outputs. Shaders are also very isolated programs in that they're not allowed to communicate with each other; the only communication they have is via their inputs and outputs.
In the previous tutorial we briefly touched the surface of shaders and how to properly use them. We will now explain shaders, and specifically the OpenGL Shading Language, in a more general fashion.
But before we do that, we show how to simplify usage and optimize performance of Vertex Buffers and Element Buffers by grouping them into a Vertex Array Object.
Vertex Array Objects
In the previous tutorial, we mentioned that you need to run code like this every frame for every object you want to render:
{ 1. Bind the vertex buffer }
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
{ 2. Then set the vertex attributes pointers }
glVertexAttribPointer(AttributePosition, 3, GL_FLOAT, GL_FALSE, 3 * SizeOf(Single), Pointer(0));
glEnableVertexAttribArray(AttributePosition);
{ 3. Use our shader program when we want to render an object }
glUseProgram(ShaderProgram);
{ 4. Now draw the object }
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nil);
{ 5. Restore state }
glDisableVertexAttribArray(AttributePosition);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
It may not look like that much, but imagine if we have over 5 vertex attributes and perhaps 100s of different objects (which is not uncommon). Binding the appropriate buffer objects and configuring all vertex attributes for each of those objects quickly becomes a cumbersome process. What if there was some way we could store all these state configurations into an object and simply bind this object to restore its state?
A vertex array object (also known as VAO) can be bound just like a vertex buffer object or element buffer object and any subsequent vertex attribute calls from that point on will be stored inside the VAO. This has the advantage that when configuring vertex attribute pointers you only have to make those calls once and whenever we want to draw the object, we can just bind the corresponding VAO. This makes switching between different vertex data and attribute configurations as easy as binding a different VAO. All the state we just set is stored inside the VAO.
:warning: VAOs are not supported on all platforms, but are available on most platforms through extensions. With OpenGL ES 3.0, VAOs are part of the specification. But because we want to target OpenGL ES 2.0 for broader support, we have to check the OpenGL extensions and fall back to another solution if VAOs are not supported. More on this later.
A vertex array object stores the following:
- Calls to `glEnableVertexAttribArray` or `glDisableVertexAttribArray`.
- Vertex attribute configurations via `glVertexAttribPointer`.
- Vertex buffer objects bound using `glBindBuffer`.
- Element buffer objects bound using `glBindBuffer`.
The process to generate a VAO looks similar to that of a VBO and EBO:
var
VAO: GLuint;
begin
glGenVertexArrays(1, @VAO);
end;
To use a VAO all you have to do is bind the VAO using `glBindVertexArray`. From that point on we should bind/configure the corresponding VBO(s), EBO(s) and attribute pointer(s), and then unbind the VAO for later use. As soon as we want to draw an object, we simply bind the VAO with the preferred settings before drawing the object and that is it. In code this would look a bit like this:
{ Initialization code (done once, unless your object frequently changes): }
{ 1. Bind Vertex Array Object }
glBindVertexArray(VAO);
{ 2. Copy our vertices and indices in a buffer for OpenGL to use }
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, SizeOf(VERTICES), @VERTICES, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, SizeOf(INDICES), @INDICES, GL_STATIC_DRAW);
{ 3. Then set our vertex attributes pointers }
glVertexAttribPointer(AttributePosition, 3, GL_FLOAT, GL_FALSE, 3 * SizeOf(Single), Pointer(0));
glEnableVertexAttribArray(AttributePosition);
{ 4. Unbind the VAO }
glBindVertexArray(0);
[...]
{ Drawing code (in TApplication.Update): }
{ 5. Draw the object }
glUseProgram(ShaderProgram);
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nil);
glBindVertexArray(0);
:warning: It is common practice to unbind OpenGL objects when we're done configuring them so we don't mistakenly (mis)configure them elsewhere.
And that is it! Usually when you have multiple objects you want to draw, you first generate/configure all the VAOs (and thus the required VBO, EBO and attribute pointers) and store those for later use. The moment we want to draw one of our objects, we take the corresponding VAO, bind it, then draw the object and unbind the VAO again.
:warning: A VAO stores the `glBindBuffer` calls when the target is `GL_ELEMENT_ARRAY_BUFFER`. This also means it stores its unbind calls, so make sure you don't unbind the element array buffer before unbinding your VAO; otherwise it doesn't have an EBO configured.
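To make this pitfall concrete, here is a sketch of the wrong and the right unbind order (using the variable names from the snippets above):

```Delphi
{ Wrong: the VAO records the EBO unbind and ends up without an EBO }
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(0);

{ Right: unbind the VAO first; the EBO binding stays stored inside it }
glBindVertexArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
```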
Using VAOs through Extensions
As mentioned at the beginning, VAOs are not natively supported on all platforms. On Windows and macOS, we can check if the VAO APIs are available. On iOS and Android we need to check if OpenGL supports the `GL_OES_vertex_array_object` extension. Also, the APIs have slightly different names on different platforms. To provide a uniform interface though, we create our own function pointers to these APIs using the standard names:
var
glGenVertexArrays: procedure(n: GLsizei; arrays: PGLuint); {$IFDEF MSWINDOWS}stdcall{$ELSE}cdecl{$ENDIF};
glBindVertexArray: procedure(array_: GLuint); {$IFDEF MSWINDOWS}stdcall{$ELSE}cdecl{$ENDIF};
glDeleteVertexArrays: procedure(n: GLsizei; arrays: PGLuint); {$IFDEF MSWINDOWS}stdcall{$ELSE}cdecl{$ENDIF};
Windows
On Windows, the VAO APIs are declared as function pointers in the `Winapi.OpenGLExt` unit. So we just assign those function pointers to our own function pointers:
glGenVertexArrays := Winapi.OpenGLExt.glGenVertexArrays;
glBindVertexArray := Winapi.OpenGLExt.glBindVertexArray;
glDeleteVertexArrays := Winapi.OpenGLExt.glDeleteVertexArrays;
macOS
The situation on macOS is similar, but the VAO APIs have an `APPLE` suffix:
glGenVertexArrays := glGenVertexArraysAPPLE;
glBindVertexArray := glBindVertexArrayAPPLE;
glDeleteVertexArrays := glDeleteVertexArraysAPPLE;
iOS
On iOS, we need to check for the `GL_OES_vertex_array_object` extension, and the APIs have an `OES` suffix:
if glIsExtensionSupported('GL_OES_vertex_array_object') then
begin
glGenVertexArrays := glGenVertexArraysOES;
glBindVertexArray := glBindVertexArrayOES;
glDeleteVertexArrays := glDeleteVertexArraysOES;
end;
VAOs should be supported on all iOS devices that support OpenGL ES 2.0.
Android
On Android, the situation is similar, but we have to manually load the function pointers from the OpenGL library because they are not imported by Delphi:
if glIsExtensionSupported('GL_OES_vertex_array_object') then
begin
LibHandle := dlopen(AndroidGles2Lib, RTLD_LAZY);
if (LibHandle <> 0) then
begin
glGenVertexArrays := dlsym(LibHandle, 'glGenVertexArraysOES');
glBindVertexArray := dlsym(LibHandle, 'glBindVertexArrayOES');
glDeleteVertexArrays := dlsym(LibHandle, 'glDeleteVertexArraysOES');
end;
end;
VAOs should be available on most (recent) Android devices, but support may be missing on older devices.
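The `glIsExtensionSupported` function used above is a helper from the sample framework; its implementation is not shown in this tutorial. A minimal sketch could look like this (assuming `glGetString` is imported; note that a plain substring test can match a prefix of a longer extension name, so production code should match whole, space-delimited tokens):

```Delphi
function glIsExtensionSupported(const AName: RawByteString): Boolean;
var
  Extensions: RawByteString;
begin
  { glGetString(GL_EXTENSIONS) returns a space-separated list of all
    extension names supported by the current context. }
  Extensions := RawByteString(MarshaledAString(glGetString(GL_EXTENSIONS)));
  Result := (Pos(AName, Extensions) > 0);
end;
```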
TVertexArray class
Since VAOs may not be supported on certain devices, we need a fallback method. To shield the user from these issues, we encapsulate all VAO functionality into a `TVertexArray` class. This class will use VAOs if supported, or otherwise fall back to manually binding buffers and vertex attributes.
`TVertexArray` is declared in the sample unit `Sample.Classes`. It implements the `IVertexArray` interface, which you should use to store instances, so you can take advantage of automatic memory management.
The `TVertexArray` class goes a step further by automating the process of creating, binding and filling a VBO and EBO for you, as well as setting the vertex attributes. For this to work though, the class needs to know the layout of your vertices. In the previous tutorial, a vertex was only defined by a 3D position. But in most applications you will want to associate other data with vertices as well, such as colors or texture coordinates (which will be discussed in a later tutorial).
To define a vertex layout, you use the `TVertexLayout` record. Defining a layout is pretty simple:
var
VertexLayout: TVertexLayout;
begin
VertexLayout.Start(Shader)
.Add('position', 3)
.Add('color', 3);
end;
`TVertexLayout` uses a fluent API so you can string multiple `Add` calls together. The first call should be to the `Start` method, where you pass the shader that defines the vertex attributes. This shader is of type `IShader`, which we will define later in this tutorial.
Next, you call `Add` for each vertex attribute, passing the name of the attribute as it appears in the shader, and the number of `Single` values for the attribute. In this example, both attributes use 3 values each (the `position` attribute for the `x`, `y` and `z` coordinates, and the `color` attribute for the red, green and blue intensities).
Creating an `IVertexArray` is pretty simple now as well:
const
{ Each vertex consists of a 3-element position and 3-element color. }
VERTICES: array [0..17] of Single = (
// Positions // Colors
0.5, -0.5, 0.0, 1.0, 0.0, 0.0, // Bottom Right
-0.5, -0.5, 0.0, 0.0, 1.0, 0.0, // Bottom Left
0.0, 0.5, 0.0, 0.0, 0.0, 1.0); // Top
const
{ The indices define a single triangle }
INDICES: array [0..2] of UInt16 = (0, 1, 2);
var
VertexArray: IVertexArray;
begin
VertexArray := TVertexArray.Create(VertexLayout, VERTICES, SizeOf(VERTICES), INDICES);
end;
Just pass the vertex layout, vertices and indices to the constructor, and it will take care of:
- Generating a VAO (if supported)
- Generating a VBO and EBO
- Binding the VBO and EBO
- Copying the vertex data to the VBO
- Copying the index data to the EBO
- If VAOs are supported:
  - Calling `glVertexAttribPointer` for each vertex attribute defined in the `TVertexLayout`
  - Enabling each vertex attribute
- If VAOs are not supported:
  - Making a copy of the `TVertexLayout` for later use
Then, rendering an object is reduced to:
Shader.Use;
VertexArray.Render;
The `Render` method performs the following:
- If VAOs are supported:
  - Call `glBindVertexArray` to bind the VAO.
  - Call `glDrawElements` to render the object.
  - Call `glBindVertexArray(0)` to unbind the VAO.
- If VAOs are not supported:
  - Use `glBindBuffer` to bind the VBO.
  - Use `glBindBuffer` to bind the EBO.
  - Call `glVertexAttribPointer` for each vertex attribute defined in the `TVertexLayout`.
  - Enable each vertex attribute.
  - Call `glDrawElements` to render the object.
  - Disable each vertex attribute.
  - Use `glBindBuffer` to unbind the VBO.
  - Use `glBindBuffer` to unbind the EBO.
As you can probably tell from the amount of code, VAOs are not only easier to use but also more efficient.
All APIs mentioned above have already been discussed earlier. But feel free to review the code of `TVertexArray` for more details.
GLSL (ES)
After this long detour, we have finally arrived at the meat of this tutorial.
Shaders are written in the C-like language GLSL. GLSL is tailored for use with graphics and contains useful features specifically targeted at vector and matrix manipulation. OpenGL ES uses a dialect of GLSL called GLSL ES. All tutorials will use the GLSL ES dialect, since this dialect also (mostly) works with regular OpenGL, so it is available on all platforms.
Shaders can contain variables (attributes, varyings and uniforms) and functions. Each shader's entry point is a function called `main`. This is where we process any input variables and output the results in its output variables. Don't worry if you don't know what varyings and uniforms are; we'll get to those shortly.
Vertex Shader
A vertex shader typically has the following structure:
attribute type InputVariable;
uniform type UniformVariable;
varying type VaryingVariable;
void main()
{
// Use InputVariable and UniformVariable for your calculations
...
// Values you assigned to varyings are passed to the fragment shader:
VaryingVariable = SomeValue;
// Vertex shaders must set the gl_Position variable:
gl_Position = vec4(...);
}
The input to a vertex shader is in the form of vertex attributes. We already discussed those at length before. The value of an `attribute` can differ for each vertex. There is a maximum number of vertex attributes we're allowed to declare, limited by the hardware. OpenGL guarantees there are always at least 16 4-component vertex attributes available, but some hardware might allow for more, which you can retrieve by querying `GL_MAX_VERTEX_ATTRIBS`:
var
NumAttributes: GLint;
begin
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, @NumAttributes);
end;
Vertex shaders can also use `uniform` variables. These are set in the Delphi code, and their value is the same (uniform) for each vertex. Think of it as a constant.
Finally, vertex shaders use `varying` variables as output to pass data to the fragment shader. Why these are called `varying` will become clear later.
Fragment Shader
A fragment shader typically has the following structure:
uniform type UniformVariable;
varying precision type VaryingVariable;
void main()
{
// Uniforms can be shared with the vertex shader.
// Varying values are output from the vertex shader.
DoSomethingWith(UniformVariable, VaryingVariable);
gl_FragColor = vec4(...);
}
A fragment shader does not have `attribute`s, but it does support `uniform` variables. These can be shared with the vertex shader (if they have the same name as in the vertex shader), but usually vertex shaders and fragment shaders use different uniforms.
Fragment shaders also support `varying` variables. However, here they are used as input, since their values are output from the vertex shader. Varying variables must have the same names in the vertex and fragment shaders, so the compiler can match them together.
Types
Like any other programming language, GLSL has data types for specifying what kind of variable we want to work with. GLSL has most of the default basic types we know from languages like C: `int`, `float`, `double`, `uint` and `bool`. Note that GLSL ES only supports `int`, `float` and `bool` though. Furthermore, the GPUs in mobile devices are mostly optimized for floating-point operations, so using `float`s may actually be faster than using `int`s.
GLSL also features two container types that we'll be using a lot throughout the tutorials, namely vectors and matrices. We'll discuss matrices in a later tutorial.
Precision
When using GLSL ES, we must specify a precision for each `varying` variable in the fragment shader. This precision is used to optimize performance, by sacrificing accuracy for speed or the other way around. For example:
varying lowp vec3 ourColor;
The precision qualifier can have one of the following values:
- `highp`: highest precision.
- `mediump`: medium precision.
- `lowp`: lowest precision.
The lower the precision, the faster the shader may run, but the lower the accuracy. For some types of data, like colors, low accuracy does not matter, but for other types of data it does. Also, the accuracy may be different on different devices. So it is best to experiment and choose the lowest precision that still satisfies the required quality level. You can use these as general guidelines to start with:
- `highp`: for positional data.
- `mediump`: for normals, lighting vectors and texture coordinates.
- `lowp`: for colors.
Note that precision qualifiers are not supported in (desktop) OpenGL, only in OpenGL ES (and WebGL). To make sure the fragment shaders compile on all platforms we use a little trick:
When compiling a fragment shader for (desktop) OpenGL, we automatically add these defines to the beginning of the source code:
#define lowp
#define mediump
#define highp
This defines the symbols `lowp`, `mediump` and `highp` as empty, so they are discarded when encountered in the shader source code.
Vectors
A vector in GLSL is a 1, 2, 3 or 4 component container for any of the basic types just mentioned. They can take the following form (`n` represents the number of components):
- `vecn`: the default vector of `n` single-precision floats.
- `bvecn`: a vector of `n` booleans.
- `ivecn`: a vector of `n` integers.
- `uvecn`: a vector of `n` unsigned integers (not supported by GLSL ES).
- `dvecn`: a vector of `n` double-precision floats (not supported by GLSL ES).
Most of the time we will be using the basic `vecn` since floats are sufficient for most of our purposes, and most efficient on many devices.
Components of a vector can be accessed via `vec.x` where `x` is the first component of the vector. You can use `.x`, `.y`, `.z` and `.w` to access the first, second, third and fourth component respectively. GLSL also allows you to use `rgba` for colors or `stpq` for texture coordinates, accessing the same components.
The vector datatype allows for some interesting and flexible component selection called swizzling. Swizzling allows for the following syntax:
vec2 someVec;
vec4 differentVec = someVec.xyxx;
vec3 anotherVec = differentVec.zyw;
vec4 otherVec = someVec.xxxx + anotherVec.yxzy;
You can use any combination of up to 4 letters to create a new vector (of the same type) as long as the original vector has those components; it is not allowed to access the `.z` component of a `vec2` for example. We can also pass vectors as arguments to different vector constructor calls, reducing the number of arguments required:
vec2 vect = vec2(0.5, 0.7);
vec4 result = vec4(vect, 0.0, 0.0);
vec4 otherResult = vec4(result.xyz, 1.0);
Vectors are thus a flexible datatype that we can use for all kinds of input and output. Throughout the tutorials you'll see plenty of examples of how we can creatively manage vectors.
Input and Output
Shaders are nice little programs on their own, but they are part of a whole, and for that reason we want to have inputs and outputs on the individual shaders so that we can move stuff around. GLSL defines the `attribute` and `varying` keywords specifically for that purpose. Each shader can specify inputs and outputs using those keywords, and wherever a `varying` variable of the vertex shader matches a `varying` variable of the fragment shader, its value is passed along. The vertex and fragment shader differ a bit though.
The vertex shader should receive some form of input otherwise it would be pretty ineffective. The vertex shader differs in its input, in that it receives its input straight from the vertex data. To define how the vertex data is organized we specify the input variables using `attribute`s so we can configure the vertex attributes on the CPU. We've seen this in the previous tutorial as `attribute vec3 position;`.
If we want to send data from one shader to the other, we'd have to declare a `varying` variable in both shaders. When the types and the names are equal on both sides, OpenGL will link those variables together and then it is possible to send data between shaders (this is done when linking the program object). To show you how this works in practice, we're going to alter the shaders from the previous tutorial to let the vertex shader decide the color for the fragment shader.
Vertex shader
attribute vec3 position;
varying vec4 vertexColor;
void main()
{
// See how we directly give a vec3 to vec4's constructor
gl_Position = vec4(position, 1.0);
// Set the output variable to a dark-red color
vertexColor = vec4(0.5, 0.0, 0.0, 1.0);
}
Fragment shader
// The input variable from the vertex shader (same name and same type, but with precision)
varying lowp vec4 vertexColor;
void main()
{
gl_FragColor = vertexColor;
}
You can see we declared a `vertexColor` variable as a `vec4` output that we set in the vertex shader, and we declare a similar `vertexColor` input in the fragment shader. Since they both have the same type and name, the `vertexColor` in the fragment shader is linked to the `vertexColor` in the vertex shader. Because we set the color to dark-red in the vertex shader, the resulting fragments should be dark-red as well. The following image shows the output:
There we go! We just managed to send a value from the vertex shader to the fragment shader. Let's spice it up a bit and see if we can send a color from our application to the fragment shader!
Uniforms
Uniforms are another way to pass data from our application on the CPU to the shaders on the GPU, but uniforms are slightly different compared to vertex attributes. First of all, uniforms are global. Global, meaning that a uniform variable is unique per shader program object, and can be accessed from any shader at any stage in the shader program. Second, whatever you set the uniform value to, uniforms will keep their values until they're either reset or updated.
To declare a uniform in GLSL we simply add the uniform keyword to a shader with a type and a name. From that point on we can use the newly declared uniform in the shader. Let's see if this time we can set the color of the triangle in the fragment shader via a uniform:
uniform vec4 ourColor; // We set this variable in the OpenGL code.
void main()
{
gl_FragColor = ourColor;
}
We declared a uniform `vec4 ourColor` in the fragment shader and set the fragment's output color to the content of this uniform value. Since uniforms are global variables, we can define them in any shader we'd like, so there's no need to go through the vertex shader again to get something to the fragment shader. We're not using this uniform in the vertex shader, so there's no need to define it there.
:warning: If you declare a uniform that isn't used anywhere in your GLSL code the compiler will silently remove the variable from the compiled version which is the cause for several frustrating errors; keep this in mind!
The uniform is currently empty; we haven't added any data to the uniform yet so let's try that. We first need to find the index/location of the uniform in our shader. Once we have the index/location of the uniform, we can update its values. Instead of passing a single color to the fragment shader, let's spice things up by gradually changing color over time:
procedure TShaders.Update(const ADeltaTimeSec, ATotalTimeSec: Double);
var
GreenValue: Single;
VertexColorLocation: Integer;
begin
...
GreenValue := (Sin(ATotalTimeSec) * 0.5) + 0.5;
VertexColorLocation := glGetUniformLocation(ShaderProgram, 'ourColor');
glUniform4f(VertexColorLocation, 0.0, GreenValue, 0.0, 1.0);
...
end;
We use the running time in seconds (`ATotalTimeSec`, passed from our framework) to vary the green component of the color. Since `Sin` returns a value in the range -1.0 to 1.0, multiplying by 0.5 and adding 0.5 maps it to the range 0.0 - 1.0.
Then we query the location of the `ourColor` uniform using `glGetUniformLocation`. We supply the shader program and the name of the uniform (that we want to retrieve the location from) to the query function. If `glGetUniformLocation` returns -1, it could not find the location. Lastly we can set the uniform value using the `glUniform4f` function. Note that finding the uniform location does not require you to use the shader program first, but updating a uniform does require you to first use the program (by calling `glUseProgram`), because it sets the uniform on the currently active shader program.
:information_source: Because OpenGL is in its core a C library, it does not have native support for function overloading, so wherever a function can be called with different types, OpenGL defines new functions for each type required; `glUniform` is a perfect example of this. The function requires a specific postfix for the type of the uniform you want to set. A few of the possible postfixes are:
- `f`: the function expects a float as its value
- `i`: the function expects an int as its value
- `ui`: the function expects an unsigned int as its value
- `3f`: the function expects 3 floats as its value
- `fv`: the function expects a float vector/array as its value

Whenever you want to configure an option of OpenGL, simply pick the overloaded function that corresponds with your type. In our case we want to set 4 floats of the uniform individually, so we pass our data via `glUniform4f` (note that we also could've used the `fv` version, as the sketch below shows).
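For example, assuming the `VertexColorLocation` and `GreenValue` variables from the snippet above, the following two calls set the same uniform, first with four individual floats and then with the `fv` (vector) variant:

```Delphi
var
  Color: array [0..3] of GLfloat;
begin
  { Four individual float values: }
  glUniform4f(VertexColorLocation, 0.0, GreenValue, 0.0, 1.0);

  { The equivalent fv variant, which takes a pointer to an array of
    4 floats (the second argument is the number of vec4's to set): }
  Color[0] := 0.0;
  Color[1] := GreenValue;
  Color[2] := 0.0;
  Color[3] := 1.0;
  glUniform4fv(VertexColorLocation, 1, @Color[0]);
end;
```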
Now that we know how to set the values of uniform variables, we can use them for rendering. If we want the color to gradually change, we have to update this uniform every game loop iteration (so it changes per-frame), otherwise the triangle would maintain a single solid color if we only set it once. So we calculate the `GreenValue` and update the uniform each render iteration:
procedure TShaders.Update(const ADeltaTimeSec, ATotalTimeSec: Double);
var
GreenValue: Single;
VertexColorLocation: Integer;
begin
{ Define the viewport dimensions }
glViewport(0, 0, Width, Height);
{ Clear the color buffer }
glClearColor(0.2, 0.3, 0.3, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
{ Be sure to activate the shader }
ShaderProgram.Use;
{ Update the uniform color }
GreenValue := (Sin(ATotalTimeSec) * 0.5) + 0.5;
VertexColorLocation := glGetUniformLocation(ShaderProgram, 'ourColor');
glUniform4f(VertexColorLocation, 0.0, GreenValue, 0.0, 1.0);
{ Now draw the triangle }
VertexArray.Render;
end;
The code is a relatively straightforward adaptation of the previous code. This time, we update a uniform value each iteration before drawing the triangle. If you update the uniform correctly you should see the color of your triangle gradually change from green to black and back to green.
As you can see, uniforms are a useful tool for setting values that can change from frame to frame, or for interchanging data between your application and your shaders. But what if we want to set a color for each vertex? In that case we'd have to declare as many uniforms as we have vertices. A better solution is to include more data in the vertex attributes, which is what we're going to do now.
More attributes!
We saw in the previous tutorial how we can fill a VBO, configure vertex attribute pointers and store it all in a VAO. This time, we also want to add color data to the vertex data. We're going to add the color data as 3 floats to the `VERTICES` array. We assign a red, green and blue color to each of the corners of our triangle respectively:
const
{ Each vertex consists of a 3-element position and 3-element color. }
VERTICES: array [0..17] of Single = (
// Positions // Colors
0.5, -0.5, 0.0, 1.0, 0.0, 0.0, // Bottom Right
-0.5, -0.5, 0.0, 0.0, 1.0, 0.0, // Bottom Left
0.0, 0.5, 0.0, 0.0, 0.0, 1.0); // Top
Since we now have more data to send to the vertex shader, it is necessary to adjust the vertex shader to also receive our `color` value as a vertex attribute input.
attribute vec3 position;
attribute vec3 color;
varying vec3 ourColor;
void main()
{
gl_Position = vec4(position, 1.0);
ourColor = color;
}
Since we no longer use a uniform for the fragment's color, but now use the `ourColor` varying variable, we'll have to change the fragment shader as well:
varying lowp vec3 ourColor;
void main()
{
gl_FragColor = vec4(ourColor, 1.0);
}
Because we added another vertex attribute and updated the VBO's memory we have to re-configure the vertex attribute pointers. The updated data in the VBO's memory now looks a bit like this:
Knowing the current layout, we can update the vertex format with `glVertexAttribPointer`:
{ Position attribute }
AttributePosition := glGetAttribLocation(ShaderProgram, 'position');
glVertexAttribPointer(AttributePosition, 3, GL_FLOAT, GL_FALSE, 6 * SizeOf(Single), Pointer(0));
glEnableVertexAttribArray(AttributePosition);
{ Color attribute }
AttributeColor := glGetAttribLocation(ShaderProgram, 'color');
glVertexAttribPointer(AttributeColor, 3, GL_FLOAT, GL_FALSE, 6 * SizeOf(Single), Pointer(3 * SizeOf(Single)));
glEnableVertexAttribArray(AttributeColor);
The first few arguments of `glVertexAttribPointer` are relatively straightforward. We use `glGetAttribLocation` to get the location of the new `color` attribute. The color values have a size of 3 floats and we do not normalize the values.
Since we now have two vertex attributes, we have to re-calculate the stride value. To get the next attribute value (e.g. the next `x` component of the position vector) in the data array, we have to move 6 floats to the right: three for the position values and three for the color values. This gives us a stride value of 6 times the size of a `Single` in bytes (= 24 bytes).
Also, this time we have to specify an offset. For each vertex, the position attribute comes first, so we declare an offset of 0. The color attribute starts after the position data, so the offset is `3 * SizeOf(Single)` in bytes (= 12 bytes). As said before, this value must be typecast to a `Pointer`.
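To picture the stride and offsets, here is a hypothetical Delphi record that mirrors the interleaved layout (the tutorial actually uses a flat array of `Single`, but the byte layout is identical):

```Delphi
type
  TVertex = record
    Position: array [0..2] of Single; { offset 0,  12 bytes }
    Color: array [0..2] of Single;    { offset 12, 12 bytes }
  end;
{ SizeOf(TVertex) = 24 bytes: exactly the stride passed to
  glVertexAttribPointer for both attributes. }
```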
:information_source: Note that our `TVertexArray` class takes care of all of this for us now, so we don't need to write these lines of code ourselves.
Running the application should result in the following image:
The image might not be exactly what you would expect, since we only supplied 3 colors, not the huge color palette we're seeing right now. This is all the result of something called fragment interpolation in the fragment shader. When rendering a triangle the rasterization stage usually results in a lot more fragments than vertices originally specified. The rasterizer then determines the positions of each of those fragments based on where they reside on the triangle shape.
Based on these positions, it interpolates all the fragment shader's varying variables. That is why these are called `varying`: their values vary from fragment to fragment.
Say for example we have a line where the upper point has a green color and the lower point a blue color. If the fragment shader is run at a fragment that resides around a position at 70% of the line its resulting color input attribute would then be a linear combination of green and blue; to be more precise: 30% blue and 70% green.
This is exactly what happened at the triangle. We have 3 vertices and thus 3 colors and judging from the triangle's pixels it probably contains around 50000 fragments, where the fragment shader interpolated the colors among those pixels. If you take a good look at the colors you'll see it all makes sense: red to blue first gets to purple and then to blue. Fragment interpolation is applied to all the fragment shader's input attributes.
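In GLSL terms, the interpolation in the line example above behaves like the built-in `mix` function (an illustration only; this is not code from the tutorial shaders):

```glsl
vec4 green = vec4(0.0, 1.0, 0.0, 1.0);
vec4 blue  = vec4(0.0, 0.0, 1.0, 1.0);
// mix(x, y, a) computes x * (1.0 - a) + y * a:
vec4 result = mix(blue, green, 0.7); // 30% blue and 70% green
```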
Detour: Asset Management
Before we finish this tutorial by writing our own shader class, we are going to take a detour into asset management.
As your apps or games get larger, they will start to depend on a lot of files (or assets) such as shaders, textures, sounds, music, dialog, scripts etc. These files need to be deployed somehow with the application, and where you deploy these files to differs by platform. You could use Delphi's Deployment Manager to deploy these files to platform-specific target directories. However, as the number of files grow, managing deployed files this way becomes very cumbersome and error prone, and deploying an app becomes very slow.
In this tutorial series, we take a different approach: all assets are stored in a single ZIP file called `assets.zip`, and this ZIP file is linked to the executable as a resource. We open the ZIP file at application startup and extract files during the application's lifetime as needed.
We use a static `TAssets` class (in the unit `Sample.Classes`) that manages the ZIP file for us. Usage and implementation are very simple: at application startup (usually in the `TApplication.Initialize` method), you call `TAssets.Initialize` to open the ZIP file. After that, you can load assets into memory using `TAssets.Load`:
class procedure TAssets.Initialize;
begin
if (FStream = nil) then
FStream := TResourceStream.Create(HInstance, 'ASSETS', RT_RCDATA);
if (FZipFile = nil) then
begin
FZipFile := TZipFile.Create;
FZipFile.Open(FStream, TZipMode.zmRead);
end;
end;
class function TAssets.Load(const APath: String): TBytes;
begin
Assert(Assigned(FZipFile));
FZipFile.Read(APath, Result);
end;
There are some additional methods, like `LoadRawByteString`, which loads a file into a `RawByteString` instead of a `TBytes` array.
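For example, loading a shader source file at run-time could look like this (assuming a `shaders/shader.vs` entry exists in `assets.zip`):

```Delphi
var
  VertexShaderSource: RawByteString;
begin
  { Open the embedded ZIP file once at startup }
  TAssets.Initialize;
  { Paths are relative to the ZIP root and use forward slashes }
  VertexShaderSource := TAssets.LoadRawByteString('shaders/shader.vs');
end;
```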
To link the `assets.zip` file as a resource to your application, perform these steps in Delphi:
- Go to "Project | Resources and Images..."
- Click the "Add..." button and select your `assets.zip` file
- Set the "Resource identifier" to `ASSETS`
- Make sure the "Resource type" is set to `RCDATA`
Our own shader class
Writing, compiling and managing shaders can be quite cumbersome. As a final touch on the shader subject, we're going to make our life a bit easier by building a shader class that reads shaders from `assets.zip`, compiles and links them, checks for errors and is easy to use.
Again, we use an object interface called `IShader` to ease memory management. Its declaration is short and simple:
type
IShader = interface
['{6389D101-5FD2-4AEA-817A-A4AF21C7189D}']
{$REGION 'Internal Declarations'}
function _GetHandle: GLuint;
{$ENDREGION 'Internal Declarations'}
procedure Use;
function GetUniformLocation(const AName: RawByteString): Integer;
property Handle: GLuint read _GetHandle;
end;
The `Use` method is rather trivial and just calls `glUseProgram`, but it hides this OpenGL detail from us. Likewise, the `GetUniformLocation` method is a wrapper around `glGetUniformLocation`, but it does something extra: it raises an exception if the uniform is not found (since you usually expect the uniform to be there). Finally, the `Handle` property gives us access to the underlying OpenGL shader program ID, so we can use it directly with OpenGL APIs.
`IShader` is implemented in the `TShader` class. Its constructor requires the file paths of the source code of the vertex and fragment shader respectively. These should be relative paths in the `assets.zip` file, where forward slashes (/) are used to separate directories (forward slashes work on all platforms). An example of a valid path is `shaders/MyShader.vs`.
constructor TShader.Create(const AVertexShaderPath, AFragmentShaderPath: String);
var
Status, LogLength: GLint;
VertexShader, FragmentShader: GLuint;
Log: TBytes;
Msg: String;
begin
inherited Create;
FragmentShader := 0;
VertexShader := CreateShader(AVertexShaderPath, GL_VERTEX_SHADER);
try
FragmentShader := CreateShader(AFragmentShaderPath, GL_FRAGMENT_SHADER);
FProgram := glCreateProgram;
glAttachShader(FProgram, VertexShader);
glErrorCheck;
glAttachShader(FProgram, FragmentShader);
glErrorCheck;
glLinkProgram(FProgram);
glGetProgramiv(FProgram, GL_LINK_STATUS, @Status);
if (Status <> GL_TRUE) then
begin
glGetProgramiv(FProgram, GL_INFO_LOG_LENGTH, @LogLength);
if (LogLength > 0) then
begin
SetLength(Log, LogLength);
glGetProgramInfoLog(FProgram, LogLength, @LogLength, @Log[0]);
Msg := TEncoding.ANSI.GetString(Log);
raise Exception.Create(Msg);
end;
end;
glErrorCheck;
finally
if (FragmentShader <> 0) then
glDeleteShader(FragmentShader);
if (VertexShader <> 0) then
glDeleteShader(VertexShader);
end;
end;
class function TShader.CreateShader(const AShaderPath: String;
const AShaderType: GLenum): GLuint;
var
Source: RawByteString;
SourcePtr: MarshaledAString;
Status, LogLength: GLint;
Log: TBytes;
Msg: String;
begin
Result := glCreateShader(AShaderType);
Assert(Result <> 0);
glErrorCheck;
Source := TAssets.LoadRawByteString(AShaderPath);
{$IFNDEF MOBILE}
{ Desktop OpenGL doesn't recognize precision specifiers }
if (AShaderType = GL_FRAGMENT_SHADER) then
Source :=
'#define lowp'#10+
'#define mediump'#10+
'#define highp'#10+
Source;
{$ENDIF}
SourcePtr := MarshaledAString(Source);
glShaderSource(Result, 1, @SourcePtr, nil);
glErrorCheck;
glCompileShader(Result);
glErrorCheck;
Status := GL_FALSE;
glGetShaderiv(Result, GL_COMPILE_STATUS, @Status);
if (Status <> GL_TRUE) then
begin
glGetShaderiv(Result, GL_INFO_LOG_LENGTH, @LogLength);
if (LogLength > 0) then
begin
SetLength(Log, LogLength);
glGetShaderInfoLog(Result, LogLength, @LogLength, @Log[0]);
Msg := TEncoding.ANSI.GetString(Log);
raise Exception.Create(Msg);
end;
end;
end;
destructor TShader.Destroy;
begin
glUseProgram(0);
if (FProgram <> 0) then
glDeleteProgram(FProgram);
inherited;
end;
A few things to note:
- The constructor does the following:
  - It uses the helper method `CreateShader` to create a shader handle, load the source code from the assets, and compile the shader.
  - A utility function `glErrorCheck` is called at various locations (a minimal sketch of such a helper follows after this list). This is a custom function that only does something when compiled in `DEBUG` mode. In that case, it calls the `glGetError` API to check if the last OpenGL call had an error, and if so it raises an exception. In `RELEASE` mode, we do not perform this error checking because `glGetError` is known to have a negative impact on performance.
  - After the shader program is linked, we can release the original shader objects (using `glDeleteShader`).
- The `CreateShader` helper mostly contains code we have seen before, with a few exceptions:
  - Instead of using a constant string, the shader source code is now loaded from the assets using `TAssets.LoadRawByteString`.
  - On non-mobile platforms we prefix the source code of the fragment shader with some empty defines. As said earlier, these defines make the shader compiler ignore the OpenGL ES specific precision qualifiers.
- The destructor releases the program with `glDeleteProgram`. Before it can do that, it must make sure the program is not in use though, by calling `glUseProgram(0)`.
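The actual `glErrorCheck` in the sample framework may differ in detail, but a minimal sketch of the idea could look like this (assuming `System.SysUtils` for the `Exception` class):

```Delphi
procedure glErrorCheck;
{$IFDEF DEBUG}
var
  Error: GLenum;
begin
  Error := glGetError;
  if (Error <> GL_NO_ERROR) then
    raise Exception.CreateFmt('OpenGL error $%.4x', [Error]);
end;
{$ELSE}
begin
  { Intentionally empty in RELEASE builds: glGetError is known to hurt
    performance. }
end;
{$ENDIF}
```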
And there we have it, a completed shader class. Using the shader class is fairly easy; we create a shader object once and from that point on simply start using it:
procedure TMyApplication.Initialize;
begin
TAssets.Initialize;
FShader := TShader.Create('shaders/shader.vs', 'shaders/shader.fs');
...
end;
procedure TMyApplication.Update(const ADeltaTimeSec, ATotalTimeSec: Double);
begin
...
FShader.Use;
RenderSomething;
end;
Here we stored the vertex and fragment shader source code in two files called `shader.vs` and `shader.fs`. You're free to name your shader files any way you like; I personally find the extensions `.vs` and `.fs` quite intuitive.
Exercises
- Adjust the vertex shader so that the triangle is upside down.
- Specify a horizontal offset via a uniform and move the triangle to the right side of the screen in the vertex shader using this offset value.
- Output the vertex position to the fragment shader using the `varying` keyword and set the fragment's color equal to this vertex position (see how even the vertex position values are interpolated across the triangle). Once you manage to do this, try to answer the following question: why is the bottom-left side of our triangle black?
:arrow_left: [1.2 Hello Triangle](1.2 Hello Triangle) | Contents | [1.4 Textures](1.4 Textures) :arrow_right: