GPU Antialiasing


Overview
This TekSpek explains multi- and super-sampling anti-aliasing.

The Technology
This TekSpek will assume you know the effects of applying a level of anti-aliasing (AA) on your 3D accelerator, be it via the driver control panel or via a control in your game. We assume you know the effect it has on image quality, so you can picture a before and after scenario. So this TekSpek isn't really about explaining what AA does (although it will), it's about explaining the how and why. If you've ever wondered how or why AA works, this guide is for thee. We assume a slight level of technical knowledge in places, but nothing that will blow your brain to bits.

So you know AA makes the image on your screen look better, most visibly by smoothing out the jaggies on the edges of on-screen objects. But how does it do so? The answers largely lie in understanding two key principles of 3D rendering that might not be immediately obvious.

Pixels aren't as atomic as you think
While the pixel is ultimately what ends up on your screen, each one made up of just one colour composed of a red, green and blue component, it isn't necessarily processed as a single colour before it gets to the display. During rendering on the GPU, a pixel can be covered by more than one piece of geometry (decomposed into triangles, as you know). It's that concept of more than one triangle per pixel, and thus possibly more than one colour per pixel before final output, that's the first key to understanding what AA is all about.

Filtering. It's all about the filtering.
So, given more than one colour per pixel during rendering, how do you resolve that into the single colour the display needs? The answer is filtering. Think about a pixel half covered by a white triangle, half covered by a black one. Like so, in fact.

That final screen pixel obviously can't be both black and white at the same time, since it can only be one colour. So what colour should it be? Grey, right, halfway between black and white? Correct. Given RGB (0,0,0) as black and RGB (255,255,255) as white on an 8-bit per channel display (2^8 is 256, remember), interpolating linearly between white and black gives you RGB (128,128,128), since the midpoint of the 0-255 range rounds to 128, and we want the colour halfway between the two. That colour is a flat mid-grey.
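
To make that resolve step concrete, here's a minimal sketch in C (the rgb struct and resolve2 function are just names made up for the example): average the two sub-pixel colours, rounding to the nearest integer, and you land on mid-grey.

    #include <stdio.h>

    /* One 8-bit-per-channel colour, as it would sit in the framebuffer. */
    typedef struct { unsigned char r, g, b; } rgb;

    /* Resolve two equally-weighted sub-pixel samples into one output colour,
       rounding to the nearest integer per channel. */
    static rgb resolve2(rgb a, rgb b)
    {
        rgb out = {
            (unsigned char)((a.r + b.r + 1) / 2),
            (unsigned char)((a.g + b.g + 1) / 2),
            (unsigned char)((a.b + b.b + 1) / 2)
        };
        return out;
    }

    int main(void)
    {
        rgb black = {0, 0, 0}, white = {255, 255, 255};
        rgb final = resolve2(black, white);
        printf("%d %d %d\n", final.r, final.g, final.b); /* 128 128 128 -- mid-grey */
        return 0;
    }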

But how do you get that colour from the pixel above? More specifically, how do you get the data to filter the colours in the pixel to get that grey? What the GPU wants to do, since there's a frequency to that data (from 0,0,0 to 255,255,255 in our example), is actually measure and filter a signal made up of colour data, much like you would in any DSP application such as audio processing or motion video decoding. Data presented needs to be filtered before it can be passed on to the next part of the algorithm, which in our case is pushing pixels to your display, and it's filtering that's the second key component to understanding what's going on. For the adventurous, Google for Nyquist's Theorem ;)

Therefore anti-aliasing is pretty much just sub-pixel filtering, right?
Yes, it pretty much is. You need to look at different points inside the pixel, before final colour output, and it's that sub-pixel analysis and the subsequent filtering of the data it provides that determines the final colour, and therefore how effective the anti-aliasing is. You want the grey because of the bordering pure black and white triangles, so let's show you how a modern GPU gets there.

Colour sampling
We've already discussed sampling and filtering the colour inside the pixel, so we'll quickly run over how that's done on almost all modern GPUs. The GPU has a fixed grid masked across the pixel which marks it out into areas. Then, depending on how many samples the GPU is able to take, it has a look inside some of those areas, collecting data. With colour sampling it collects colour, of course.
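
To picture that grid, here's an illustrative sketch in C. The exact sample positions vary between GPUs and AA modes (rotated grid patterns are common), so treat these particular offsets as made up purely for the example.

    #include <stdio.h>

    /* A sample point, as an offset inside a 1x1 pixel (0,0 = top-left corner). */
    typedef struct { float x, y; } sample_pos;

    /* An illustrative 4-sample pattern; real hardware uses its own layouts. */
    static const sample_pos kSamples4[4] = {
        {0.375f, 0.125f},
        {0.875f, 0.375f},
        {0.125f, 0.625f},
        {0.625f, 0.875f}
    };

    int main(void)
    {
        /* The GPU would read colour (and/or depth) at each of these points. */
        for (int i = 0; i < 4; i++)
            printf("sample %d at (%.3f, %.3f)\n", i, kSamples4[i].x, kSamples4[i].y);
        return 0;
    }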

Here's our pixel again, but this time with some different coloured, overlapping triangles and the grid we just talked about.

For the purposes of this TekSpek we've made our triangles translucent, mostly so you can see the grid but also to somewhat simulate the interpolation between colours in the pixel that generates the final one -- our filtering again. The large orange dots are the points inside the pixel that the GPU will sample colour from. In our case, the four colour samples give us decent coverage of the red, green and white polygons in the pixel (remember a triangle is just a polygon), letting us make a decent attempt at filtering for the final colour.

In this case, averaging two samples of full white (255,255,255), one sample of full red (255,0,0) and one sample of full green (0,255,0) gives us an RGB value of roughly (191,191,128), a pale, washed-out yellow.
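
If you want to check the sum, here's a minimal sketch in C of that box-filter resolve (the rgb struct and resolve function are, again, just names for the example):

    #include <stdio.h>

    typedef struct { unsigned char r, g, b; } rgb;

    /* Box-filter N sub-pixel colour samples into one output colour,
       rounding to the nearest integer per channel. */
    static rgb resolve(const rgb *samples, int n)
    {
        unsigned r = 0, g = 0, b = 0;
        for (int i = 0; i < n; i++) {
            r += samples[i].r;
            g += samples[i].g;
            b += samples[i].b;
        }
        rgb out = {
            (unsigned char)((r + n / 2) / n),
            (unsigned char)((g + n / 2) / n),
            (unsigned char)((b + n / 2) / n)
        };
        return out;
    }

    int main(void)
    {
        /* Two white samples, one red, one green, as in the diagram. */
        rgb samples[4] = {
            {255, 255, 255}, {255, 255, 255}, {255, 0, 0}, {0, 255, 0}
        };
        rgb final = resolve(samples, 4);
        printf("%d %d %d\n", final.r, final.g, final.b); /* 191 191 128 */
        return 0;
    }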

So that's our sub-pixel sampling with colour, more commonly called super-sampling. The super part of the name comes from the fact you're taking more than one colour sample per pixel, and it's NVIDIA graphics boards that most commonly offer user-selectable super-sampling modes. More on that later.

Depth sampling
There's another method of sub-pixel sampling and filtering that also anti-aliases pixels, one that doesn't rely on colour alone. Sub-pixel depth sampling takes into account the depth of the triangles providing the pixel's coverage. Depth is stored in a separate per-pixel sample buffer, seeded by the depth buffer (commonly called the Z-buffer), and when the GPU comes to anti-alias just before the final colour write, as before, it samples Z at multiple points in the pixel, works out how much of the pixel each triangle covers, and uses that coverage to weight the colours for the final pixel. And that's our filtering again, cool.

This technique only anti-aliases polygon edges and intersections, sampling colour just once per pixel, saving large amounts of bandwidth compared to super-sampling. Here's our pixel again, with the triangle depths swapped to hint at what goes on. The blue dots are the sub-pixel depth sample points.

Because the separate sample buffer is only updated when multiple triangles share a pixel, less bandwidth is consumed than with super-sampling and performance can stay high for a noticeable increase in image quality. This technique, called multi-sampling, is therefore the most popular AA algorithm implemented today, since it's cheap in terms of chip area and offers acceptable performance. Combined multi- and super-sampling can be done too, for the maximum image quality possible on your gaming card. Sweet!
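
As a very rough sketch of the idea, here's a toy CPU-side model in C (not how any particular GPU pipeline is actually wired up): each triangle fragment is shaded to a single colour, the depth test runs per sample point, the winning colour is recorded for each sample, and the final resolve just averages them.

    #include <stdio.h>

    #define NUM_SAMPLES 4

    typedef struct { unsigned char r, g, b; } rgb;

    /* Per-pixel multi-sample state: a depth and a resolved colour per sample
       point (a simplification; real hardware stores and compresses this in
       its own way). */
    typedef struct {
        rgb   colour[NUM_SAMPLES];
        float depth[NUM_SAMPLES];
    } msaa_pixel;

    /* A triangle fragment touching this pixel: shaded to one colour, but
       depth-tested at every sample point it covers. */
    static void shade_fragment(msaa_pixel *px, rgb colour,
                               const float depths[NUM_SAMPLES],
                               const int covered[NUM_SAMPLES])
    {
        for (int i = 0; i < NUM_SAMPLES; i++) {
            if (covered[i] && depths[i] < px->depth[i]) {
                px->depth[i]  = depths[i];
                px->colour[i] = colour;   /* the single shaded colour */
            }
        }
    }

    /* Resolve: box-filter the per-sample colours into one final pixel. */
    static rgb resolve(const msaa_pixel *px)
    {
        unsigned r = 0, g = 0, b = 0;
        for (int i = 0; i < NUM_SAMPLES; i++) {
            r += px->colour[i].r;
            g += px->colour[i].g;
            b += px->colour[i].b;
        }
        rgb out = {
            (unsigned char)(r / NUM_SAMPLES),
            (unsigned char)(g / NUM_SAMPLES),
            (unsigned char)(b / NUM_SAMPLES)
        };
        return out;
    }

    int main(void)
    {
        /* Start with a white background at maximum depth. */
        msaa_pixel px;
        for (int i = 0; i < NUM_SAMPLES; i++) {
            px.colour[i] = (rgb){255, 255, 255};
            px.depth[i]  = 1.0f;
        }

        /* A red triangle's edge crosses the pixel, covering two of the four
           sample points and passing the depth test there. */
        float depths[NUM_SAMPLES]  = {0.5f, 0.5f, 0.5f, 0.5f};
        int   covered[NUM_SAMPLES] = {1, 1, 0, 0};
        shade_fragment(&px, (rgb){255, 0, 0}, depths, covered);

        rgb final = resolve(&px);
        printf("%d %d %d\n", final.r, final.g, final.b); /* 255 127 127 */
        return 0;
    }

The saving versus super-sampling shows up in shade_fragment: the expensive colour shading happens once per triangle per pixel, while only the cheap per-sample depth test and the final averaging run at sub-pixel rate.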

Summary
It should be pretty simple to follow how those AA schemes work. The GPU looks inside pixels to see what's going on, as well as looking to see how far away parts of triangles inside the pixel are, in order to filter colour data and write your final pixel colour out to memory, for showing on your display. Really quite simple (at least as presented). So now when someone asks you how AA works, you know!