Dithering & Color Depth - GithubPrankster/monroe GitHub Wiki

Art created by Mark Ferrari

A long time ago, computer graphics were quite rudimentary. The range of colors was limited, and you had to choose between high-resolution but usually greyscale output, or low-resolution color output. One of the more famous cards of the 80s was the Color Graphics Adapter (CGA), which let you get a whopping 640x200 of resolution, but only 2 colors to pick from its 16-color (4-bit) palette.

The Legend of Beeb I, a simple game of mine utilizing the 4-bit palette

This was most useful for businesses, which were editing all those spreadsheets and charts and didn't need much color. However, computer enthusiasts wondered whether there was a way to get more color than the mere 16 palette entries the hardware offered. They noticed that over composite video output, the signal's degraded separation of light (luminance) information and color (chrominance) information would produce strange color artifacts. By drawing dot patterns from the CGA palette, they managed to obtain unique color output that even a few popular games supported.

My rendition of this behavior. The dot pattern can blend with the black background under a lessened quality display.

Even as computer graphics gained more color depth, and the most advanced machines like SGI's offerings (best known for 90s CGI and for helping develop the Nintendo 64 hardware) gained more familiar blending effects, common computer displays could show at most 256 colors at a time, generated by internal graphics hardware like the Video Graphics Array. The Commodore Amiga, from around 1985, however had graphics hardware that allowed display of roughly 12-bit imagery with no fixed palette, though that mode's memory requirements restricted its usage to mostly static output.

Hold-And-Modify mode animation created by Ken Offer
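The trick behind Hold-And-Modify (HAM6) can be sketched in a few lines: each 6-bit pixel either picks one of 16 palette entries or holds the previous pixel's color while modifying a single 4-bit channel, which is how 6 bitplanes reach 4096 colors. This is a sketch, assuming `palette` is a list of 16 (r, g, b) tuples with 4-bit channels:

```python
# HAM6 decoding sketch: the top 2 bits of each pixel are a control code,
# the low 4 bits are either a palette index or a replacement channel value.
def decode_ham6_row(pixels, palette):
    row = []
    r, g, b = palette[0]          # each scanline starts from the background color
    for p in pixels:
        control, value = p >> 4, p & 0x0F
        if control == 0b00:       # set: look up one of the 16 palette entries
            r, g, b = palette[value]
        elif control == 0b01:     # hold red and green, modify blue
            b = value
        elif control == 0b10:     # hold green and blue, modify red
            r = value
        else:                     # 0b11: hold red and blue, modify green
            g = value
        row.append((r, g, b))
    return row
```

Since each pixel can only change one channel relative to its neighbor, sharp horizontal color transitions cause fringing, which is part of why the mode suited static images best.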

It's within the realm of 16-bit and early 32-bit consoles where I believe the knowledge of working with these limitations became the most powerful yet. In the famous war of the Sega Mega Drive versus the Nintendo SNES, the Mega Drive actually had weaker color capabilities than the SNES, managing only around 61 on-screen colors (from four 16-color palettes) against the SNES' 256 palette entries, and a 9-bit color depth that was questionable even for the time against the SNES' 15-bit color. If developers wanted to be colorful on the MD, they'd need to be clever too, doing tricks like changing the palette mid-scanline (a future wiki topic!).

Dynamite Headdy, a quite pretty and action packed Mega Drive game
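The 9-bit versus 15-bit gap is easiest to see in how each console packs a color word. A small sketch, following the commonly documented Mega Drive CRAM layout (3 bits per channel, each padded by one low bit) and SNES CGRAM layout (5 bits per channel); treat the exact bit positions as an assumption:

```python
# Mega Drive color word: 0000 BBB0 GGG0 RRR0 -- 512 possible colors.
def md_color(r, g, b):        # each channel 0-7
    return (b << 9) | (g << 5) | (r << 1)

# SNES color word: 0BBB BBGG GGGR RRRR -- 32768 possible colors.
def snes_color(r, g, b):      # each channel 0-31
    return (b << 10) | (g << 5) | r
```

With only 8 intensity levels per channel, the Mega Drive's smallest possible gradient step is four times coarser than the SNES', which is exactly the kind of gap dithering was used to paper over.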

I hinted at it a bit ago, but this is where dithering really strikes the picture! With fewer color options, the idea is to create the illusion of more colors from afar, by way of what I can only describe as a harsh gradient between two of them. With the television showing you a somewhat horizontally blurred output, this ends up smoothing the gradient into fairly convincing new colors. A good eye can catch the pattern, but it still reads as a much nicer result.

An example of applying dithering. The upper sprite could be meant for a metal barrel, but its limited colors keep it from that. By creating a gradient-like pattern between the dark part and light part of the barrel, it feels like it's smoother and has more color depth than it actually has. Pictured too is what that color "range" could look like.
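The simplest form of that gradient trick is a 50/50 checkerboard between two palette colors, which a blurry CRT averages into a color halfway between them. A minimal sketch:

```python
# Checkerboard dither: alternate two colors per pixel so the display's
# blur reads the region as roughly their average.
def checker_dither(width, height, dark, light):
    """Return a row-major pixel grid alternating dark and light."""
    return [[dark if (x + y) % 2 == 0 else light for x in range(width)]
            for y in range(height)]
```

Intermediate mix ratios (25/75, say) use sparser repeating patterns, which is how an artist stretches two palette entries into a whole perceived "range" like the barrel example above.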

Even on the SNES, these techniques were applied a lot. Sometimes artists had access to art-digitizing or sprite-creation tools with much higher color depth, and knowing they'd have less color depth on the system, used dithering to keep things like gradients more intact. Being clueless about color practices leads to the Paperboy 2s of life.

Perhaps bad color usage is the least of worries with this game, though.

When a SNES game applied it well, though, watch out. It just looks really nice. Catch some of the dithering in the machinery in this scene from Chrono Trigger, for instance.

Now, for something interesting... Going forward, we reach a time where, of its generation, only the N64 had proper alpha blending. Granted, somewhat limited in scope! Its tiny texture RAM (2 MERE KILOBYTES) forced developers to mostly stick to 15-bit color assets, lean on other texture formats like palette-based ones, and go nuts with them.

Sin and Punishment, an N64 game from around the end of its lifetime

On Sega's side, things weren't exactly the best. They messed up big time with the poor Sega Saturn and its lack of games interesting to the public. Developers spoke very harshly of the machine, due to it being hilariously complex (2 CPUS, 2 GRAPHICS UNITS, A MOTOROLA 68K SOUND PROCESSOR, there's also CDs yay), but one interesting thing was that it could do blending. Sort of. There's a pretty good article on this by Matt Greer, but the gist is that the first graphics unit could have pixels from sprites set for blending with the second unit's layers, but emphasis on set!

Drawing a sprite above this data overrides it, and drawing this data above a sprite pretty much eats up what was there. It was easier to instead set sprites as meshes, dithering the sprite output by half, which looked pretty OK on a television but nowadays looks a tad weird.

With some blurring, the blending is more akin to the Playstation version, though I'm not sure I'd like to play MM like that. Mesh is fine.
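The mesh effect itself is just dithering taken to its extreme: knock out every other pixel of the sprite in a checkerboard so half the background shows through, approximating 50% transparency on a blurry display. A sketch, using `None` as a stand-in for "don't draw this pixel" (an assumption of this sketch, not the Saturn's actual representation):

```python
# Saturn-style "mesh": keep a sprite pixel only on alternating screen
# positions, leaving the rest transparent.
def mesh_sprite(sprite):
    """Knock out half the pixels of a row-major sprite in a checkerboard."""
    return [[pix if (x + y) % 2 == 0 else None
             for x, pix in enumerate(row)]
            for y, row in enumerate(sprite)]
```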

Now, the Playstation is the console from this era I know the most about. There was no alpha blending, as mentioned, but a 16-bit color could have its highest bit, the semi-transparency bit, set to tell the GPU to blend that color with the current framebuffer using 1 of 4 different functions. These range from simply adding or subtracting the framebuffer and color to doing the same with either the color, or both the framebuffer and color, darkened first. Pretty strange, and in one instance tricky to emulate, but fairly neat.

PSX Harvest Moon screenshot with the second blend mode, subtracting color from the framebuffer, generating a pleasing fade to black
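Per channel, the four functions can be sketched like this, following the commonly documented forms (back = framebuffer pixel, front = incoming color); 5-bit channels in the 0-31 range are assumed:

```python
# The four PSX semi-transparency functions, applied per color channel.
def psx_blend(mode, back, front):
    if mode == 0:
        out = back // 2 + front // 2      # average: both halved
    elif mode == 1:
        out = back + front                # additive
    elif mode == 2:
        out = back - front                # subtractive (the Harvest Moon fade)
    else:
        out = back + front // 4           # additive, with the color quartered
    return max(0, min(31, out))           # clamp to the 5-bit channel range
```

Mode 2 is the "tricky to emulate" one in my experience, since the subtraction has to saturate at zero per channel rather than wrap.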

What's interesting about PSX dithering is that it's an option right in the GPU! Once enabled, everything you draw receives an overlay of a Bayer-matrix dither, softening things up nicely. While the GPU could actually generate 24-bit color, it took up too much space in the framebuffer (remember Hold-And-Modify on the Amiga?) and thus was used very sparingly.

PSX dithering as emulated by one of Monroe Engine's shaders. Above is the 15-bit output, below is that output with dithering.
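The GPU's dither works by nudging each 8-bit channel with a small position-dependent offset before truncating it to 5 bits, so the rounding error forms a pattern instead of banding. A sketch using the commonly documented PSX 4x4 offset table (treat the exact values as an assumption):

```python
# PSX-style ordered dither: offset, clamp, then truncate 8-bit to 5-bit.
DITHER = [
    [-4,  0, -3,  1],
    [ 2, -2,  3, -1],
    [-3,  1, -4,  0],
    [ 3, -1,  2, -2],
]

def dither_channel(value, x, y):
    """Dither one 8-bit channel value at screen position (x, y) down to 5 bits."""
    v = max(0, min(255, value + DITHER[y % 4][x % 4]))
    return v >> 3
```

Note how a flat input produces different 5-bit values at neighboring positions, which is exactly the speckle the Monroe Engine shader reproduces.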

I hope this document has piqued your interest in how developers made the most out of hardware to make games look very pretty. I recommend checking out websites such as Architecture of Consoles for technical information on whichever old console you liked! And let me know how helpful this was; I hope to improve it in the future. Have a great time!