Category Archives: Image Enhancements

Description

Screen Space Ambient Occlusion

SSAO is the technique that most new games seem obliged to include, thanks to the hype around it since the computer game Crysis. It’s a technique for creating a crude approximation of ambient occlusion using only the depth of the rendered scene. It works by comparing the current fragment’s depth with a number of random sample depths around it to see whether the current fragment is occluded or not. The fragment counts as occluded by a sample if that sample is closer to the eye than the fragment. Although this sounds like a very rough shortcut, in practice it works beyond all expectations.

How to take the samples is a big concern, as it affects what will occlude and by how much. The best current implementations take random samples in a hemisphere oriented along the normal, which limits the amount of self-occlusion. Another problem is that if you only take the depth into consideration, a flat surface might occlude itself because of the perspective. By also comparing the normals when calculating the AO, this problem goes away.

One of the hard parts of implementing SSAO is choosing the right smoothing technique. Because taking occlusion samples is expensive you want as few samples as possible, but that makes the SSAO noisy, so the result needs smoothing. A plain Gaussian blur is not good enough, since it will make the SSAO bleed across edges. Instead you need a blur that takes the depth and/or the normals into account. One such filter is the bilateral filter, which is often used in combination with SSAO.
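To illustrate the idea behind such a depth-aware blur, here’s a small 1D sketch in Python (not shader code). The function name and the parameter values are just picked for the example; a real implementation runs in a fragment shader and usually uses the normals as well:

```python
import math

def bilateral_blur_1d(values, depths, radius=4, sigma_s=2.7, sigma_d=0.1):
    """Depth-aware 1D blur: each tap gets a spatial Gaussian weight
    multiplied by a depth-similarity weight, then the sum is renormalised."""
    out = []
    n = len(values)
    for i in range(n):
        total, weight_sum = 0.0, 0.0
        for o in range(-radius, radius + 1):
            j = min(max(i + o, 0), n - 1)  # clamp at the borders
            w_spatial = math.exp(-(o * o) / (2.0 * sigma_s * sigma_s))
            dd = depths[j] - depths[i]
            # taps on the far side of a depth discontinuity get ~zero weight
            w_depth = math.exp(-(dd * dd) / (2.0 * sigma_d * sigma_d))
            w = w_spatial * w_depth
            total += values[j] * w
            weight_sum += w
        out.append(total / weight_sum)
    return out
```

Across a depth edge the depth weight drops toward zero, so AO values on one side of the edge barely influence the other side, which is exactly the bleeding a plain Gaussian blur cannot avoid.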

The steps of a simple SSAO implementation:

  1. Render the scene. Save the linear depth in a texture. Save the normals in eye space in a texture.
  2. Render a full screen quad with the SSAO shader. Save the result to a texture.
  3. Blur the result in X.
  4. Blur the result in Y.
  5. Blend the blurred SSAO texture with the scene, or use it directly when rendering the scene.
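As an illustration of the occlusion test in step 2, here’s a minimal CPU-side sketch in Python (not shader code). The names and the ad hoc range cutoff are my own choices for the example; real implementations work with eye-space positions and random offsets, but the core comparison is the same:

```python
def ssao_fragment(depth_buffer, x, y, offsets, range_cutoff=0.05):
    """Return the fraction of sample points whose stored depth lies in
    front of the current fragment -> occlusion estimate in [0, 1]."""
    center_depth = depth_buffer[y][x]
    occluded = 0
    for dx, dy in offsets:
        sample_depth = depth_buffer[y + dy][x + dx]
        # the sample occludes us if it is closer to the eye, but very
        # large depth gaps are ignored so distant geometry cannot occlude
        if sample_depth < center_depth and center_depth - sample_depth < range_cutoff:
            occluded += 1
    return occluded / len(offsets)
```

The returned fraction is what gets written to the SSAO texture in step 2 and smoothed in steps 3 and 4.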

An optimization is to render the SSAO at a lower resolution than the screen and upsample it when blurring. Another optimization is to store both the normals and the depth in a single texture.

SSAO in the NVIDIA SDK

Probably one of the best implementations of SSAO is this one by NVIDIA (although it’s rather slow). SDK 10.5 has a paper about the technique and also source code!
http://developer.download.nvidia.com/SDK/10.5/direct3d/samples.html

And here are three papers/presentations from NVIDIA describing their SSAO in detail:
http://developer.download.nvidia.com/presentations/2008/GDC/GDC08_Ambient_Occlusion.pdf
http://developer.download.nvidia.com/presentations/2008/SIGGRAPH/HBAO_SIG08b.pdf
http://developer.download.nvidia.com/SDK/10.5/direct3d/Source/ScreenSpaceAO/doc/ScreenSpace AO.pdf

SSAO in Starcraft II

Some information about how Starcraft II will use SSAO is included in this paper (see chapter 5.5):
http://ati.amd.com/developer/SIGGRAPH08/Chapter05-Filion-StarCraftII.pdf

SSAO in Two Worlds

A link to a description of the SSAO implementation in the game Two Worlds:
http://www.drobot.org/pub/GCDC_SSAO_RP_29_08.pdf

SSAO in Crysis

The paper and game that started it all (see chapter 8.5.4.3):
http://delivery.acm.org/10.1145/1290000/1281671/p97-mittring.pdf?key1=1281671&key2=9942678811&coll=ACM&dl=ACM&CFID=15151515&CFTOKEN=6184618

Hardware Accelerated Ambient Occlusion

One of the papers that probably inspired the Crysis team:
http://perumaal.googlepages.com/

Kindernoiser SSAO

A simple but smart SSAO implementation, here with well-commented shader source code:
http://rgba.scenesp.org/iq/computer/articles/ssao/ssao.htm

A gamedev.net thread with lots of discussion about SSAO:
http://www.gamedev.net/community/forums/topic.asp?topic_id=463075

Gaussian Blur Filter Shader

There are different ways to perform a blur, and this is one of the most common ways to do it in a shader. It’s a two-pass method: first a horizontal blur, then a vertical blur. By splitting the work into two directions (two passes) you save a lot of computation.

The method can be divided into the following parts:

  1. Render the scene you want to blur to a texture (it can be downsampled).
  2. Render a screen-aligned quad with the horizontal blur shader to a texture.
  3. Render a screen-aligned quad with the vertical blur shader to the screen, or to a texture, depending on what you want to use the result for.
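The point of the split can be verified on the CPU. Here’s a small Python sketch (not shader code) of a single 1D blur pass that can be run once horizontally and once vertically; the clamping at the borders mimics GL_CLAMP_TO_EDGE texture addressing:

```python
def blur_1d(img, kernel, horizontal):
    """One separable blur pass over a 2D list of floats,
    clamping coordinates at the borders (like GL_CLAMP_TO_EDGE)."""
    h, w = len(img), len(img[0])
    r = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for k, weight in enumerate(kernel):
                o = k - r  # offset from the centre tap
                if horizontal:
                    s += weight * img[y][min(max(x + o, 0), w - 1)]
                else:
                    s += weight * img[min(max(y + o, 0), h - 1)][x]
            out[y][x] = s
    return out
```

Running it twice, `blur_1d(blur_1d(img, kernel, True), kernel, False)`, gives the same result as one 2D pass with the outer-product kernel, at a fraction of the samples.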

The following image shows how the blur works when split up in two directions.

Separable blur filter

Here’s the horizontal blur shader.

Vertex Shader (GLSL). This shader screen-aligns a quad of width 1. Any method of rendering a screen-aligned quad will work, so you’re free to use other shaders.

varying vec2 vTexCoord;
 
// remember that you should draw a screen aligned quad
void main(void)
{
   // Clean up inaccuracies: snap the quad corners to -1/+1
   vec2 Pos = sign(gl_Vertex.xy);
 
   gl_Position = vec4(Pos, 0.0, 1.0);
   // Image-space
   vTexCoord = Pos * 0.5 + 0.5;
}

Fragment Shader (GLSL) 

uniform sampler2D RTScene; // the texture with the scene you want to blur
varying vec2 vTexCoord;
 
const float blurSize = 1.0/512.0; // with a 512x512 RTScene texture, each step will be exactly one pixel wide
 
void main(void)
{
   vec4 sum = vec4(0.0);
 
   // blur in x (horizontal)
   // take nine samples, with the distance blurSize between them
   sum += texture2D(RTScene, vec2(vTexCoord.x - 4.0*blurSize, vTexCoord.y)) * 0.05;
   sum += texture2D(RTScene, vec2(vTexCoord.x - 3.0*blurSize, vTexCoord.y)) * 0.09;
   sum += texture2D(RTScene, vec2(vTexCoord.x - 2.0*blurSize, vTexCoord.y)) * 0.12;
   sum += texture2D(RTScene, vec2(vTexCoord.x - blurSize, vTexCoord.y)) * 0.15;
   sum += texture2D(RTScene, vec2(vTexCoord.x, vTexCoord.y)) * 0.16;
   sum += texture2D(RTScene, vec2(vTexCoord.x + blurSize, vTexCoord.y)) * 0.15;
   sum += texture2D(RTScene, vec2(vTexCoord.x + 2.0*blurSize, vTexCoord.y)) * 0.12;
   sum += texture2D(RTScene, vec2(vTexCoord.x + 3.0*blurSize, vTexCoord.y)) * 0.09;
   sum += texture2D(RTScene, vec2(vTexCoord.x + 4.0*blurSize, vTexCoord.y)) * 0.05;
 
   gl_FragColor = sum;
}

And here’s the vertical blur shader.

Vertex Shader (GLSL) (the same as for the horizontal blur)

varying vec2 vTexCoord;
 
// remember that you should draw a screen aligned quad
void main(void)
{
   // Clean up inaccuracies: snap the quad corners to -1/+1
   vec2 Pos = sign(gl_Vertex.xy);
 
   gl_Position = vec4(Pos, 0.0, 1.0);
   // Image-space
   vTexCoord = Pos * 0.5 + 0.5;
}

Fragment Shader (GLSL) 

uniform sampler2D RTBlurH; // this should hold the texture rendered by the horizontal blur pass
varying vec2 vTexCoord;
 
const float blurSize = 1.0/512.0;
 
void main(void)
{
   vec4 sum = vec4(0.0);
 
   // blur in y (vertical)
   // take nine samples, with the distance blurSize between them
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y - 4.0*blurSize)) * 0.05;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y - 3.0*blurSize)) * 0.09;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y - 2.0*blurSize)) * 0.12;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y - blurSize)) * 0.15;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y)) * 0.16;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y + blurSize)) * 0.15;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y + 2.0*blurSize)) * 0.12;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y + 3.0*blurSize)) * 0.09;
   sum += texture2D(RTBlurH, vec2(vTexCoord.x, vTexCoord.y + 4.0*blurSize)) * 0.05;
 
   gl_FragColor = sum;
}

And this is a scene without blur.

Scene before blurring

And this is the same scene with a Gaussian blur applied.

Blurred scene

You can tweak blurSize to change the size of the blur, and you can change the number of samples taken in each direction.

Cost of the separable blur shader: 9+9 = 18 texture samples.
Cost of the same blur done in one pass: 9*9 = 81 texture samples.
So splitting the blur into two directions saves a lot.

The Gaussian weights are calculated according to the Gaussian function with a standard deviation of 2.7. The calculations were done in the Excel document found in [2].
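If you’d rather compute the weights in code than in a spreadsheet, this small Python snippet evaluates the Gaussian at integer offsets and normalises; rounding its output to two decimals reproduces the nine weights hard-coded in the shaders above:

```python
import math

def gaussian_weights(radius=4, sigma=2.7):
    """Sample the Gaussian at integer offsets -radius..radius and
    normalise so the weights sum to 1 (the blur must not darken the image)."""
    raw = [math.exp(-(x * x) / (2.0 * sigma * sigma))
           for x in range(-radius, radius + 1)]
    total = sum(raw)
    return [w / total for w in raw]
```

Increasing sigma or the radius gives a wider blur; just keep the shader sample count in sync with the weight count.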

Here’s a description of blur shaders and other image processing shaders in DirectX:
[1] http://ati.amd.com/developer/shaderx/ShaderX2_AdvancedImageProcessing.pdf

More info about calculating weights for separable gaussian blur:
[2] http://theinstructionlimit.com/?p=40

Bilinear Interpolation

When sampling a texel from a texture that has been resized (which is nearly always the case in 3D rendering) you need some kind of filter to decide what the result should be. Bilinear interpolation uses the four nearest neighbors to interpolate an average texel value.

Bilinear Interpolation

This is a built-in filter in OpenGL, and to activate it you add the following lines when setting up a texture:

// set the minification filter
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST );
// set the magnification filter
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

If you for some reason want to do bilinear interpolation manually in a shader, the function to do so looks like the following in GLSL. Note that in vertex shaders you always have to do bilinear interpolation between texture samples manually.

const float textureSize = 512.0; //size of the texture
const float texelSize = 1.0 / textureSize; //size of one texel 
 
vec4 texture2DBilinear( sampler2D textureSampler, vec2 uv )
{
    // in vertex shaders you should use texture2DLod instead of texture2D
    vec4 tl = texture2D(textureSampler, uv);
    vec4 tr = texture2D(textureSampler, uv + vec2(texelSize, 0));
    vec4 bl = texture2D(textureSampler, uv + vec2(0, texelSize));
    vec4 br = texture2D(textureSampler, uv + vec2(texelSize , texelSize));
    vec2 f = fract( uv.xy * textureSize ); // get the decimal part
    vec4 tA = mix( tl, tr, f.x ); // will interpolate the red dot in the image
    vec4 tB = mix( bl, br, f.x ); // will interpolate the blue dot in the image
    return mix( tA, tB, f.y ); // will interpolate the green dot in the image
}
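To sanity-check the math, here’s the same mix() chain as plain Python (not shader code). For simplicity it assumes coordinates where integer (u, v) land exactly on texel centres, which sidesteps the half-texel offset a real texture fetch deals with:

```python
def bilinear_sample(texture, u, v):
    """Sample a 2D list of floats at continuous, non-negative (u, v),
    where integer coordinates hit texel centres exactly."""
    h, w = len(texture), len(texture[0])
    x0, y0 = int(u), int(v)                      # top-left texel
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the border
    fx, fy = u - x0, v - y0                      # the "decimal part"
    # lerp along x on both rows, then lerp the two results along y
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Each lerp here corresponds to one mix() call in the GLSL function above.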

Here’s a magnification of a texture using three different types of filters. The texture is mapped onto a sphere and the viewport has been zoomed in on a small part of it.

Nearest Neighbor (OpenGL fixed-function implementation)

Nearest Neighbor filter

Bilinear Interpolation (OpenGL fixed-function implementation)

Bilinear Interpolation OpenGL

Bilinear Interpolation (GLSL implementation, the code above)

Bilinear Interpolation GLSL

Some information about OpenGL texture mapping and how to set the filtering properties:
http://www.nullterminator.net/gltexture.html

Here’s some discussion of why you should not always use bilinear interpolation:
http://gregs-blog.com/2008/01/14/how-to-turn-off-bilinear-filtering-in-opengl/

Link to a bilinear interpolation function for DirectX:
http://www.catalinzima.com/?page_id=85

Article on GameDev.net about bilinear filtering:
http://www.gamedev.net/reference/articles/article669.asp

Improved Alpha-Testing

The full title of this paper is “Improved Alpha-Tested Magnification for Vector Textures and Special Effects”. It describes a technique for using vector textures for decals, giving improved precision under magnification. This kind of decal is useful for signs and other things that contain text or symbols, since these are easily represented as vector graphics. The left image below shows standard alpha-testing and the right image shows the improved version.

Improved Alpha-Testing for Decaling

Link to the paper:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf