GUIDE: Working with textures (images) for games. - linuxgurugamer/HumanStuff_EasyBuilderWorkshop Wiki


This guide requires you to use at least one of the many flavors of image editing software out there.

Some image editors are free where others either cost money or require a subscription of some sort.

The guide assumes you are capable of filling in some blanks yourself, as explaining the inner operations of each and every image editor goes beyond the scope of this guide. This is where using GOOGLE really comes in handy.

Unless you already have an image editing application of choice, here are a few that I recommend, in order of preference:

  1. Paint.Net THE GREATEST 8 BIT PER CHANNEL EDITOR THERE IS (Note: I primarily learned with Paint.Net myself, so I know how well it compares.) Link: http://www.getpaint.net/index.html

    • Pros

      • FREE!

      • Lightweight.

      • Has support for many uncommon file formats.

      • Has many users, meaning there's a lot of guides and tutorials for it.

    • Cons


  1. Krita (KDE's painting and image editing program) THE GREATEST 16 BIT PER CHANNEL EDITOR THERE IS Link: https://krita.org/en/download/krita-desktop/

    • Only download from the developer's website. NOT the MS Store or Steam.

    • Pros

      • FREE!

      • Open source! (While this might not matter to you, it means that if something is missing or you need something, someone can make that happen through programming)

      • Has many users, meaning there's a lot of guides and tutorials for it.

      • Has support for a great deal of common file formats and then some. It even natively supports 16 bit per channel formats!

      • Lightweight, doesn't require a lot of computer resources.

      • Works on Windows, Linux and OS X natively.

    • Cons


  1. Nvidia Texture Tools Exporter - Stand Alone GUI THE GREATEST GUI TOOL FOR DDS ENCODING/COMPRESSING THERE IS Link: https://developer.nvidia.com/nvidia-texture-tools-exporter

    • Pros

      • FREE!

      • Has a GUI, with CMD options!

      • #1 in QUALITY, HANDS DOWN!

      • Has support for many common file formats.

      • Has many users, meaning there's a lot of guides and tutorials for it.

    • Cons


  1. Texconv THE GREATEST CMD LINE TOOL FOR DDS ENCODING/COMPRESSING THERE IS (Note: I primarily learned with Texconv myself, so I know how well it compares.) Link: https://github.com/Microsoft/DirectXTex/wiki/Texconv

    • Pros

      • FREE!

      • Has a third-party, open-source GUI option! Link: https://vvvv.org/contribution/texconvgui

      • Lightweight.

      • Has support for many common file formats. INCLUDING EXR!

      • Has many users, meaning there's a lot of guides and tutorials for it.

    • Cons


  1. Adobe Photoshop (Note: Adobe also has a number of other image editing related applications.) Link: http://www.adobe.com/products/photoshop.html

    • Pros

      • Comes with most of the professional grade tools you need.

      • Has a professionally developed user interface, making it easy to find what you are looking for.

      • Used by a lot of professionals, ensuring there's guides and tutorials for just about anything you want to do.

      • Has support for a great deal of common file formats as well as the ability to interoperate with all programs in the Adobe CC suite seamlessly.

    • Cons


  1. GIMP (GNU Image Manipulation Program) Link: https://www.gimp.org/ PURE TRASH

    • Pros

      • FREE!

      • Open source! (While this might not matter to you, it means that if something is missing or you need something, someone can make that happen through programming)

      • Has many users, meaning there's a lot of guides and tutorials for it.

      • Has support for a great deal of common file formats and then some.

      • Lightweight, doesn't require a lot of computer resources.

      • Works on Windows, Linux and OS X natively.

    • Cons


Note that there are countless other image editing applications out there as well as image format conversion tools.
And then there's ETS 2 Studio, which technically removes the need for DDS saving/conversion. We shall take a closer look at ETS 2 Studio later on, as far as textures go. Don't get your hopes up though, ETS 2 Studio is NOT an image editing program. (yet)

This guide concerns itself with KERBAL SPACE PROGRAM game version specifics, hence it can NOT be applied as-is to all versions of ETS 2, ATS or any other game that uses DDS files.

But: there are some details in the guide that should apply to any image formats for any foreseeable future.


------------------------------ END HEADER ------------------------------


What is a texture?

What is a texture? Is it something you can run your finger along to feel bumps and ridges? Or is it about surface material properties, such as road texture that affect grip?
Of course not!

We are talking about textures for video games, images that are displayed on your screen with various transformations such as scaling, rotation and positioning.
For example, here's a wall.

Image

That looks very dull, doesn't it? Imagine if we could liven it up a little.
Well, here's the internet's most favorite cat!

Image

Let's put this image onto the wall. Easy, just slap it on top and we are done, right?

Image

There! Purrfection!
... Well, it doesn't look right, does it? It's as if it's somehow lacking perspective.
So, with the magic of sloppy image editing, let's fix that!

Image

Now we're talking! Or nyanyanyanyanyanyanyanyanya or whatever...

Ok, what is the point of this exercise though? Why did I bother showing you that?
Textures (in a video game world) work much the same way: you can "slap them on" or you can utilize maths to transform them in whatever way, shape or form you'd like.
Here's where the distinction (if there ever was one, I might just have made it up) lies between your regular image (just the cat) and a texture. (the cat with perspective)
Images are rectangular in shape whereas textures, which originally are rectangular, are meant to be deformed in some way. Textures will map to surfaces in game, be it stationary surfaces in two dimensions or ever rotating, moving and morphing surfaces in three dimensions.

But before we go into those transformations (henceforth a collective term for rotation, scaling, positioning and warping) let's delve deeper into what makes up a texture / image.

Behind the scenes (pixels) of an image

Do note, this will get technical, but for the sake of this guide I want to do this bit to remove any doubt about what I will be saying later on.

It shouldn't come as a big surprise to you that in a digital world, all images shown to you are not some sort of photo album with physical paper and ink being shifted around. There is no army of tiny people running around inside your computer case and up/down your cables to present you with images of cats or whatnot.
In digital imagery, images are nothing more than data describing what each and every pixel should look like.

What is a pixel?

Well, to simplify things, your screen/tv/monitor has pixels. Quite a lot of them actually!
If you put a magnifying glass to your monitor right now, this is what you would likely see:

Image

Each pixel has a Red, Green and Blue shaded liquid crystal which lets through more or less light from the backlight that shines inside your monitor with the brightest white color possible. (Most "LED" monitors are still liquid crystal displays, just with an LED backlight instead of fluorescent tubes; only OLED panels have actual light emitting diodes per pixel.)
Red, Green and Blue will henceforth be referred to as "RGB" for the sake of simplicity.

Just as you may have played with colors as a kid, mixing two colors together to make a third color, your monitor mixes these three colors (RGB) together. Assuming you are sitting at a certain distance away from your screen, this fools your eyes into seeing one single color per pixel.
In other words, if you put the Red crystal to maximum power and leave the Green and Blue crystals off, the white light that shines behind those crystals will only show the red color spectrum, as the Green and Blue crystals block all light from coming through them.
Hence, if you activate all three colors, Red, Green and Blue, you will see white because you are now presented with the full spectrum of colors.
If you didn't know, the color white has all the visible colors in it. As shown if you shine white light through a prism. Or look at white light through prism glasses...

Image

To make a possibly long and technical story short, a pixel has varying strengths of Red, Green and Blue in it.

How does the monitor know what strength to use for RGB?

The process is pretty straightforward: through the cable going from your computer to your monitor there's a lot of data being transmitted every second.
To give you an idea, if you have a 60 Hz (Hertz) full HD monitor (1920x1080 pixels) there are 1920 times 1080 times 24 times 60 (1920*1080*24*60) bits, or electronic impulses if you will, of information going to your monitor EVERY second!
That's right, there's roughly 3 BILLION electrical signals being sent through that small cable every second!!!

Or to give you another example you might be able to relate to... Imagine if your internet was as fast as that...

You would have a 3 GBit/s connection. If you are currently on a 30 MBit connection your monitor's cable is one hundred times faster.
You could download (not that i condone such a thing) a DVD movie which is 4.7 Gigabytes in size in 12.5 seconds at those speeds.
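If you want to double-check that arithmetic yourself, here's a quick sketch in Python, using the monitor figures from above:

```python
# Rough bandwidth arithmetic for a 60 Hz, 24-bit, 1920x1080 monitor.
width, height = 1920, 1080
bits_per_pixel = 24
refresh_hz = 60

bits_per_second = width * height * bits_per_pixel * refresh_hz
print(bits_per_second)   # 2985984000, i.e. roughly 3 billion bits per second

# How long a 4.7 Gigabyte DVD image would take at that rate.
dvd_bits = 4.7e9 * 8
print(round(dvd_bits / bits_per_second, 1))   # about 12.6 seconds
```

So the "12.5 seconds" above is in the right ballpark; the exact figure depends on whether you count a Gigabyte as 10^9 bytes, as DVD vendors do.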

So, in simple terms, your computer sends information about each and every pixel's components in groups of 24 bits per pixel.
Each crystal, R, G and B respectively, makes use of 8 of those bits. This gives each crystal 256 possible settings where 0 means OFF and 255 means FULLY ON.

But how does the computer know what to send to the monitor?

Let's put certain specifics aside and just look at the stream of data that your computer's Graphics Processing Unit, or GPU, uses.
In the most simple terms, your computer stores ALL the data to be shown on screen in one long line of data. It starts with the pixel at the top left of your screen, going left to right and ending up at the bottom right corner of your screen.

If you had a 4x4 pixel screen and you supplied these numbers...

CODE:

255 255 255 255 0 0 0 255 0 0 0 255 255 255 255 255 0 0 0 255 0 0 0 255 255 255 255 255 0 0 0 255 0 0 0 255 255 255 255 255 0 0 0 255 0 0 0 255

Looking CLOSE at the crystals...

Image

Looking at the pixels from a distance...

Image

Subsequently, this is how images are stored on your computer's storage medium as well, albeit not as decimal numbers but as raw bytes. Just a long stream of data.
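To make the row-by-row layout concrete, here's a small Python sketch that turns a flat stream of values into rows of (R, G, B) pixels. The 4x4 screen data here is my own toy example (each row is white, red, green, blue), not a dump from any real device:

```python
# A flat RGB stream for a hypothetical 4x4 screen:
# every row is one white, one red, one green and one blue pixel.
stream = [255, 255, 255,  255, 0, 0,  0, 255, 0,  0, 0, 255] * 4

width = 4
# Group the stream into (R, G, B) triples, then into rows of `width` pixels.
pixels = [tuple(stream[i:i + 3]) for i in range(0, len(stream), 3)]
rows = [pixels[r * width:(r + 1) * width] for r in range(len(pixels) // width)]

print(rows[0])   # [(255, 255, 255), (255, 0, 0), (0, 255, 0), (0, 0, 255)]
```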

Though not all image formats are this straightforward; some formats will compress the pixel data to save space. Otherwise, to store an image that covers your whole screen (a full HD monitor, that is) it would be roughly 6 Megabytes in size per picture. Seeing as how you are likely to store thousands of images on your computer, those sizes would easily add up. Say you have a photo library with 10,000 photos in it. That would take up roughly 60 Gigabytes of space!

And don't even think about uncompressed full length movies! Movies are several images displayed in sequence after all...

But I digress; typical JPEG (format) images can cut those 6 megabytes down to a mere 200 kilobytes per image with little perceptible loss, at least as far as photos go! For other images... Well, I will get to that!
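The storage figures above are easy to verify, 3 bytes per pixel for a full-HD frame:

```python
# Uncompressed size of one full-HD frame at 24 bits (3 bytes) per pixel.
frame_bytes = 1920 * 1080 * 3
print(frame_bytes)                    # 6220800 bytes, roughly 6 Megabytes

# A 10,000-photo library stored uncompressed...
print(frame_bytes * 10_000 / 1e9)     # roughly 62 Gigabytes

# ...versus the ~200 kilobytes a typical JPEG of the same frame might use.
print(round(frame_bytes / 200_000))   # about 31 times smaller
```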

So now that you understand how images (textures) are stored digitally, we can move on to texture transformations!

What are image/texture transformations?

I really COULD go into a lot of in depth math and technicalities that would normally take a very math inclined person (which I am not) to even begin to comprehend.
But instead, here's a few links if you want to dig into the subject at more depth...

Don't read these unless you REALLY want to know more.

https://en.wikipedia.org/wiki/Texture_mapping
https://en.wikipedia.org/wiki/Transformation_matrix
https://en.wikipedia.org/wiki/3D_projection

But for the sake of simplicity, let's just look at what actually happens to the pixel data after it has been transformed. I will try to keep this as simple as possible so everyone can follow along.

Let's begin with a simple copy operation. Or if you will, positioning.

Pixel coordinates

To position something on a grid, you need a set of coordinates to work with.
For example, on a chessboard the squares are named A-H, 1-8. Like this:

Image

This means that if you want a knight to move from C1 to B3 you can say just that. "Knight from C1 to B3" (Ignoring any nomenclature that is normally used in chess.)

The same goes for images, you have a coordinate system like this:

Image

I can give any pair of numbers XY to refer to a specific pixel on the grid. If I say "put a red pixel on 12, 4" then this is where the pixel would end up on the image:

Image

Pixel regions, turn a stream of bytes into X & Y coordinates

Now, what if i have this image...

Image

And i want to place it on the grid at the same coordinates as the red pixel?

First, we must know the dimensions of the image we want to place. In this case it's 19 x 19 pixels. This is because we are not positioning a single pixel anymore, we are positioning a lot of pixels!

Image

But why really do we need to know the dimensions of the image we are positioning?

Without getting too much into technical details, remember how the pixel data is just a stream of bytes?

That is, the pixels look like this in data:

CODE:

XXXXXXXXXXXXXXXX

But we want the pixels presented like this:

CODE:

XXXX

XXXX

XXXX

XXXX

In other words, when we put the face image above on our grid we start with the first 19 pixels at coordinate [12, 4] and travel right along the X axis, one pixel at a time, until we've reached coordinate [30, 4]. (12 through 30 inclusive is 19 pixels.)

Then we go down to the next row (Y axis), thus starting at coordinate [12, 5] and doing the next 19 pixels from the face image's data stream. We keep on doing this until we have placed all of the 19*19 = 361 pixels.

So, if you have a stream of pixel data for an image that is 50 x 50 pixels and you want to extract the data for the pixel at coordinate [37, 26] you would have to get the entry at position 50*26+37 = 1337 in the data stream. (counting from zero)
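That formula is short enough to write down and check directly; `pixel_index` is just an illustrative name for it:

```python
def pixel_index(x, y, width):
    """Position of pixel (x, y) in a row-major stream, counting from zero."""
    return width * y + x

print(pixel_index(37, 26, 50))   # 1337, matching the example above
print(pixel_index(0, 0, 50))     # 0, the very first pixel (top left)
```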

With this newfound knowledge out of the way, we can look at another type of transformation, namely scaling.

Size does matter! Scaling textures.

Let's start with the simplest scale transform: halving the size of an image.
Well, you simply skip every other pixel to halve an image's size!

Here's a new image (20x20 pixels) that we will scale:

Image

And here we placed the original (unscaled) image at [0,0] and the same image, at half its size (10x10 pixels), at [0, 20].

Image

The red arrows show how the pixels shifted over in the scaling operation. As you can tell, a lot of the original image's details have gone missing because we are skipping pixels here.

This is down to the type of scaling operation being used, though. In this one we are simply skipping pixels, so you get dominant pixels that determine the end result.

If however we scaled using a bilinear method, the whole resulting image would be a dull gray.

Most video games use bilinear or trilinear filtering, where filtering is another word for "what pixel data to keep, merge or discard" in this context.

The end effect is that even scaled textures look OK to look at, you get a mix between various pixel colors all merged into one pixel or over the span of several pixels.
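Here's a toy Python sketch of both approaches on a tiny single-channel checkerboard. Note the box filter here is only a crude stand-in for bilinear filtering (real bilinear sampling interpolates between neighbours; this just averages 2x2 blocks), but it shows the same "dull gray" effect described above:

```python
# A 4x4 checkerboard of black (0) and white (255) single-channel pixels.
img = [[0, 255, 0, 255],
       [255, 0, 255, 0],
       [0, 255, 0, 255],
       [255, 0, 255, 0]]

# Halving by skipping: keep every other pixel, drop the rest.
skipped = [row[::2] for row in img[::2]]
print(skipped)    # [[0, 0], [0, 0]] -- one colour dominates, detail is lost

# Halving with a box filter: average each 2x2 block into one pixel.
averaged = [[sum(img[2*y][2*x:2*x+2] + img[2*y+1][2*x:2*x+2]) // 4
             for x in range(2)] for y in range(2)]
print(averaged)   # [[127, 127], [127, 127]] -- a uniform "dull gray"
```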

This is an expensive operation, but graphics processing units excel at these kinds of operations, hence all this magic takes place on the GPU in games and other graphics intensive applications. Even Windows uses the GPU for its UI these days, hence why everything looks so fancy... If your OS had to use the CPU to do these calculations it would run very slowly.

But let's move on... What happens to the above test pattern if we scale up in a bi/trilinear environment?
Well, this happens...

Image

Everything gets blurred!

Since other transformations are simply alterations of positioning and scaling, we won't go into any more details on this subject right now.

Different file formats, lossy compression and lossless compression

As I mentioned above, certain file formats (JPEG for instance) can compress images to reduce their footprint on your storage media. Some formats even have varying degrees of compression quality where you can set just how good the resulting image will look, at the expense of a larger file.

No compression

As I explained above, image data is ultimately a representation of each pixel's color values. Not compressing this data means it takes up however much space is necessary to describe each pixel.

One notable such file format is BMP (Bitmap) which, apart from a small header describing the size of the image, stores every pixel AS IS. For all intents and purposes, if you took the pixel data stream and copied it to your GPU's screen buffer (google that) it would just show up on your monitor as such. This is assuming the image is the same size as your screen's current resolution, of course. Otherwise you would get some very strange looking streaks of pixels on your screen.

Clearly, no compression is a bad idea then, right?

Not necessarily. If you have to pick between no compression and lossy compression then you want no compression for maximum quality. Back in the olden days before other lossless formats gained enough global spread, pure BMP files were the only format that everyone could work with that also didn't incur losses in quality. That is to say, even professional graphics artists used plain BMP files to share their work amongst each other.

Some were smart though, they used ZIP and RAR archives to achieve lossless compression.

In other words, they saved their image as BMP and then added the file to a ZIP, RAR, ARJ or other common compressed format before sending the file.

Lossy compression

What is "lossy" to begin with? Well, for an easy comparison imagine i suddenly suffered from the "lazy typing" syndrome...

Take this text:


CODE:

I have been writing this guide for quite a while now, it's actually starting to affect my mental state.


Now, apply lossy compression on it using quality level 0, then decompress it again:

CODE:

I haf ben writin tis guid for qute whil no, is actully strtin to afec my mntal stat.

And the same text, compressed using a higher quality level:

CODE:

I have been writin this guide for quite a while no, it's actualy starting to afect my mental stat.


The most commonly used lossy compression format is that of JPEG (Joint Photographic Experts Group) image files and its movie equivalent, MPEG. (Moving Picture Experts Group) I will not explain the format in depth here as it goes beyond the intents of this guide, but here are a number of examples of what JPEG compression does to an image at various quality levels. Note that this loss is irrecoverable.
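JPEG's actual machinery is far more involved, but the essence of "lossy" can be shown with a toy Python sketch of my own making: quantize 8-bit values down to 4 bits and back. The thrown-away bits are gone for good, just like JPEG's discarded detail:

```python
# A toy "lossy compressor": keep only the top 4 bits of each 8-bit value.
original = [12, 37, 200, 255, 131]

compressed = [v >> 4 for v in original]   # half the bits -> half the storage
restored = [v << 4 for v in compressed]   # best possible reconstruction

print(restored)   # [0, 32, 192, 240, 128] -- close, but the loss is permanent
```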

Original uncompressed image:

Image

JPEG compressed image, quality 60:

Image

JPEG compressed image, quality 0:

Image

Not that big of a deal really. Well, unless you would like a closeup!

Original:

Image

Compressed:

Image

And even if you use the best compression quality, there are still going to be ever so slight changes to the original image. These changes make the image harder to work with inside image editing programs.
Hence, my advice (and plea) to you is to never save source materials in JPEG or other lossy compression formats. The artifacts created by these formats mean one can no longer select large swaths of the same color, as the algorithm equalizes these colors and/or blockifies them, as evident in the last image.

But then again, if file size is all that matters to you: the original image at the top is 592 Kb in size, the one with compression quality at 60 is 101 Kb in size. That is, the compressed image is roughly one sixth the size of the original.
All I know is, I prefer lossless any day of the week. Which brings us to... (And don't worry, I will get to DDS files later)

Lossless compression

The main difference here is that when you decompress the data it's exactly the same as it was before you compressed it. To the last letter.
PNG, TIFF and some other file formats employ lossless compression. It's exactly the same as saving an image as BMP and then putting it into a ZIP or RAR archive.
These file formats have those compression methods built into them.
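Python's standard zlib module uses the very same DEFLATE algorithm found in ZIP archives and PNG files, so the lossless round trip is easy to demonstrate:

```python
import zlib

# 30,000 bytes of solid red pixels -- flat colour compresses extremely well.
data = bytes([255, 0, 0] * 10_000)

packed = zlib.compress(data)
print(len(data), len(packed))   # the packed stream is a tiny fraction of the input

# The defining property of lossless compression:
# what comes out is byte-for-byte what went in.
assert zlib.decompress(packed) == data
```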

Putting it all together, DDS files and games

DDS stands for DirectDraw Surface and is a relic of times past, back when DirectX came not only with Direct3D but also with the suitably named (not) 2D drawing library called DirectDraw. These days they have corrected that and are now calling it Direct2D, like it should have been called from the beginning.

DDS files are yet another image format amongst many, many others. What sets this image format apart is that it has gained MASSIVE support around the world. Even graphics libraries that want nothing to do with Microsoft's DirectX monopoly eventually come face to face with the fact: DDS is here to stay in video gaming country.
That, and the fact that all GPUs today have special support for DDS data. They can accept DDS surfaces "raw", decompress them (if they are compressed) and draw them on a mesh in real time with almost no overhead. This (assuming you are using compressed DDS textures) not only saves on the amount of memory you need to allocate, it also greatly simplifies the process of getting image data to the GPU. But best of all, since you can send compressed DDS streams to the GPU you don't use up as much bandwidth on your PCI-E bus.

Use DDS compression or not?

However, all is fine and dandy until you actually see what a compressed DDS file looks like. Especially when you scale it up, as happens whenever you get too close to a surface textured with a compressed DDS file.
Ok, enough small talk. Let's take a look at a typical, compressed, DDS file.

Note, all three images here were made in 256x256 pixels. I have scaled them up by a factor of three to show the differences more clearly.
Imagine you were looking at an object that is 1x1 meters in size at a distance of 3 meters. Then you moved closer until you were just one meter away from the object. The texture doesn't change, it's just enlarged / scaled up, this is what you would see with...

Original image:

Image

Using DDS compression:

Image

Using DDS compression that doesn't come from (professional grade) NVidia DDS tools:

Image

Clearly, even DDS files suffer (sometimes greatly) from blockiness and color space issues. These artifacts would show up in game clear as day, especially during daytime!
Furthermore, it really depends on what colors you are using in your image, due to how the DDS format works. Some colors will be blocky; even the professional algorithm can't deal with certain color variances.
Hence, if you are making truck skins for example, some will turn out fine with compression, but one day you will try to combine two colors that clearly don't work out in the DDS format no matter which DDS writing software you use.

So again, I must stress the power of not using compressed formats and instead compressing the file itself with an archiver. Which is very much possible, since any ETS 2 / ATS mod archive is just a ZIP archive and you can select which files to compress and by how much.

Uncompressed DDS files are not that different from BMP files by the way. Header followed by a large stream of data to be put right to the GPU's screen buffer, little to no processing needed.

OK, FINE... What options should I pick when I save DDS textures?

It depends on what you are working on. Here's a few options which in some way or another are common to any image editing or DDS conversion application.

DXT 1, DXT 3, DXT 5, BC1, BC2, BC3 etc

These are all compressed formats. It rarely matters which one you choose unless you are working with transparency.
If you are working with transparency or DDS files with an alpha channel you want to pick DXT 5.
If your texture doesn't have transparency or an alpha channel you can use DXT 1 and save a few bytes.

In any other case, if given the option to pick compression, select "none".
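The space savings are fixed by the format itself: BC1/DXT1 packs every 4x4 block of pixels into 8 bytes, while BC2/DXT3 and BC3/DXT5 use 16 bytes per block. A few lines of Python show what that buys you for a 256x256 texture:

```python
# Bytes needed for a block-compressed texture.
# BC1/DXT1: 8 bytes per 4x4 pixel block; BC2/BC3 (DXT3/DXT5): 16 bytes.
def dxt_size(width, height, bytes_per_block):
    return (width // 4) * (height // 4) * bytes_per_block

w = h = 256
uncompressed = w * h * 4        # RGBA8: 4 bytes per pixel
print(uncompressed)             # 262144 bytes
print(dxt_size(w, h, 8))        # DXT1: 32768 bytes, 1/8th of the original
print(dxt_size(w, h, 16))       # DXT5: 65536 bytes, 1/4th of the original
```

This is why DXT1 "saves a few bytes" over DXT5 when you don't need the alpha channel: it's literally half the size.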

8.8.8, 8.8.8.8, RGB8, RGBA8

Remember what I said about how the pixel's color components (RGB) have 8 bits of data each? Well, this is related to that!
These are the bit-depths used. Depending on the application, you select this either in the same drop-down list as the compression setting or in a separate one.

8.8.8 or RGB8, and 8.8.8.8 or RGBA8 respectively, are uncompressed formats. But pay attention, there is more than one format with similar names. You want to use the RGB formats here!
The first two are without an alpha channel / transparency and the latter two are with an alpha channel / transparency.

BGR / ABGR options...

Don't...
Unless you want Red to be Blue and Blue to be Red or...
Alpha channel / transparency to be Red, Blue to be Green and Green to be Alpha channel / transparency.

These formats are used in other games but not games from SCS Software.

16 Bits per channel formats. 16.16 etc.

Unless you work on Normal Maps (which also implies you don't need this guide really) you needn't bother with these special formats.

16 Bit formats, 5.6.5 or R5G6B5 etc.

While they would technically work fine in game, you really don't want 65 thousand colors when you can have 16.7 million of them.
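Those color counts come straight from the bit depths in the format names:

```python
# 5.6.5 / R5G6B5: 5 bits red, 6 bits green, 5 bits blue.
print(2 ** 5 * 2 ** 6 * 2 ** 5)   # 65536 colors -- the "65 thousand"

# 8.8.8 / RGB8: 8 bits per channel.
print(2 ** 8 * 2 ** 8 * 2 ** 8)   # 16777216 colors -- the "16.7 Million"
```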

Mipmaps

Will your texture be seen from a distance in game? That is, can you move away from or towards your texture or will it be viewed in a mirror?
If yes, then do generate mipmaps.
Do NOT use existing mipmaps... This is for advanced users only and goes beyond the scope of this guide.

But what are mipmaps really? Well, the short and simple version is: a smaller mipmap is used whenever the texture shrinks below half its previous size on screen.
Here's what a mipmapped image actually looks like, using the option to load mipmaps when opening a DDS file:

Image

As you can see, it's the same image scaled down to half the size of the one left of it over and over again.
Imagine that image was displayed on a cube. As you move further away from the cube in a 3D world, the smaller the texture becomes. Thus, you can save a lot of GPU performance by using the smaller mipmaps when the object is at a far enough distance.
The last mipmap is just one single pixel. If your cube is that far away, all the GPU really does is draw that one pixel and then it's finished. Rather than going over the entirety of the image needlessly.
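The halving chain described above is easy to sketch; for a 256x256 texture it looks like this:

```python
# Mip chain for a 256x256 texture: halve until a single pixel remains.
size = 256
chain = [size]
while size > 1:
    size //= 2
    chain.append(size)

print(chain)        # [256, 128, 64, 32, 16, 8, 4, 2, 1]
print(len(chain))   # 9 levels, the last one being that lone pixel
```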

There's a few neat tricks one can do with mipmaps but again, it goes beyond the scope of this guide. Quite simply though, if you want a texture to look different when far away, this is how you would change it.
One notable example is light flare mods, which changes how light flares look when further away from the viewer.

All other options...

Goes WAAAY beyond the scope of this guide. Seriously, don't touch!
Restore to defaults if you managed to change anything you shouldn't have changed.