Rendering Methods

VPOS

Starting with DirectX pixel shader model 3.0, there exists an input semantic called VPOS. It holds the current pixel's position on the screen and is generated automatically. This is useful when sampling from a previously rendered texture while rendering an arbitrarily shaped mesh to the screen. To do that, we need uv-coordinates that tell us where to sample the texture. These coordinates can be obtained by simply dividing VPOS by the screen dimensions.
When working with older hardware that doesn't support shader model 3.0, VPOS has to be created manually in the vertex shader and passed to the fragment shader as a TEXCOORD. The code below shows how to do so (including the scaling to uv-range, which has to be done manually for VPOS if you use that instead).

Vertex Shader:

float4x4 matWorldViewProjection;
float2 fInverseViewportDimensions; // 1 / viewport width and height

struct VS_INPUT
{
   float4 Position : POSITION0;
};

struct VS_OUTPUT
{
   float4 Position : POSITION0;
   float4 calculatedVPos : TEXCOORD0; // emulated VPOS, still in homogeneous form
};

// Converts a clip-space position to the value VPOS would have given,
// already scaled to uv-range. The result must be divided by w in the
// pixel shader. The p.w*fInverseViewportDimensions term is the
// half-pixel offset needed to hit texel centers in Direct3D 9.
float4 ConvertToVPos( float4 p )
{
   return float4( 0.5*( float2(p.x + p.w, p.w - p.y) + p.w*fInverseViewportDimensions.xy ), p.zw );
}

VS_OUTPUT vs_main( VS_INPUT Input )
{
   VS_OUTPUT Output;
   Output.Position = mul( Input.Position, matWorldViewProjection );
   Output.calculatedVPos = ConvertToVPos( Output.Position );
   return Output;
}

Pixel Shader:

float4 ps_main( VS_OUTPUT Input ) : COLOR0
{
   // Perspective divide; it must happen here, after interpolation.
   Input.calculatedVPos /= Input.calculatedVPos.w;
   return float4( Input.calculatedVPos.xy, 0, 1 ); // test: render the uv-coords to the screen
}

The image below shows an elephant model rendered with the shader above. As can be seen, the color (red and green channels) correctly represents the uv-coordinates of a fullscreen quad: (0,0,0) is black, (1,0,0) is red, (0,1,0) is green and (1,1,0) is yellow.

VPOS Elephant
This is how the pixel shader would have looked if VPOS had been used instead (note that no special vertex shader is needed in this case):

float2 fInverseViewportDimensions; // 1 / viewport width and height

struct PS_INPUT
{
   float2 vPos : VPOS; // the pixel's screen position, generated automatically
};

float4 ps_main( PS_INPUT Input ) : COLOR0
{
   // Scale to uv-range; the extra half texel centers the sample on the pixel.
   return float4( Input.vPos*fInverseViewportDimensions + fInverseViewportDimensions*0.5, 0, 1 ); // test: render the uv-coords to the screen
}

The original code, more info and proof can be found here:
http://www.gamedev.net/community/forums/topic.asp?topic_id=506573

Render Thickness

In [1] they describe a clever way of rendering the thickness of an object in a single pass. The method only works correctly for convex objects, but this limitation isn't that bad: it can often be used to get an approximate thickness for concave objects as well. For example, [1] uses it to fake light scattering in clouds rendered as billboards. The method works like this:

The object is rendered, and the distance from the near plane is written to the R color channel, while the distance to the far plane is written to the G channel. By rendering with the color blend mode MIN, R ends up holding the minimum distance from the near plane and G the minimum distance to the far plane. From these two distances, the thickness of the rendered object is simply (1 − G) − R (assuming distances are scaled so that 1 is the distance between the clip planes). For example, if the nearest surface lies at depth R = 0.2 and the farthest at depth 0.7 (so G = 0.3), the thickness is (1 − 0.3) − 0.2 = 0.5. Alpha can be saved in the same render pass by writing it to the A channel and selecting the blend mode ADD for alpha (color and alpha can use different blend modes), which accumulates the alpha values.

All this is done in a single pass. Just remember to clear the render target to white before rendering.
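
As a rough sketch, the pixel shader for such a pass could look like the code below (D3D9-style HLSL). The uniform names and the linear-depth calculation are assumptions made for illustration, not taken from [1]; the blend states needed are listed in the comments.

float2 fClipPlanes; // x = near plane, y = far plane distance (hypothetical names)

struct PS_INPUT
{
   float3 viewPos : TEXCOORD0; // view-space position passed from the vertex shader
};

// Required render states (Direct3D 9):
//   D3DRS_BLENDOP                  = D3DBLENDOP_MIN  (color: keep the minimum)
//   D3DRS_SEPARATEALPHABLENDENABLE = TRUE
//   D3DRS_BLENDOPALPHA             = D3DBLENDOP_ADD  (alpha: accumulate)
float4 ps_thickness( PS_INPUT Input ) : COLOR0
{
   // Distance from the near plane, scaled so that 1.0 equals the
   // distance between the clip planes.
   float d = (Input.viewPos.z - fClipPlanes.x) / (fClipPlanes.y - fClipPlanes.x);
   // R: distance from the near plane (MIN keeps the closest front surface)
   // G: distance to the far plane    (MIN keeps 1 minus the farthest surface)
   // A: a constant alpha (0.1 is an arbitrary example value), added up by ADD
   return float4( d, 1.0 - d, 0.0, 0.1 );
}

A later pass then reads R and G from this render target and computes the thickness as (1 − G) − R.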

The image below shows the thickness of the popular Hebe mesh rendered with this method. This model is not convex; problem areas include the arm holding the bowl. As one can see, the algorithm believes that the bowl and the shoulder are connected, and therefore reports that part of the model as the thickest.

Hebe

[1] The Art and Technology of Whiteout
http://ati.amd.com/developer/gdc/2007/ArtAndTechnologyOfWhiteout(Siggraph07).pdf

Basic Triangle

The triangle is the basic geometric primitive used in rendering. All other shapes you want to draw must be divided into triangles.

Basic Triangle

The triangle parts:

  1. Face: the triangle itself. Its area is what gets rasterized (with the normal fill mode, at least).
  2. Face normal: the normal of the plane in which the triangle lies. It is mostly used for calculating the vertex normals.
  3. Vertex: a triangle has three vertices with x, y, z coordinates, located at the triangle's corners. All transformations are applied to these.
  4. Edge: the lines between the vertices are called edges; a triangle has three of them. They are used, for example, when constructing shadow volumes.
  5. Vertex normal: each vertex has a normal, which determines how smooth the geometry appears when lit.

Other data often used per vertex:

  • Tangent and binormal, for per-pixel lighting
  • Texture coordinates (uvw-coords), sometimes more than one set per vertex
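
As a sketch, a vertex structure carrying the per-vertex data listed above could be declared like this in HLSL (the exact layout and semantic names vary between engines; this is only one common choice):

struct VS_INPUT
{
   float3 Position : POSITION0;  // corner position (x, y, z); transformations apply here
   float3 Normal   : NORMAL0;    // vertex normal, controls the perceived smoothness
   float3 Tangent  : TANGENT0;   // tangent-space basis vector for per-pixel lighting
   float3 Binormal : BINORMAL0;  // completes the tangent-space basis
   float2 TexCoord : TEXCOORD0;  // uv texture coordinates
};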

Tutorial to render a triangle in DirectX 10
http://msdn.microsoft.com/en-gb/library/bb172486(VS.85).aspx

Tutorial to render a triangle in OpenGL
http://60hz.csse.uwa.edu.au/workshop/workshop0/workshop1.html

Tutorial to render a triangle in XNA
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/The_first_triangle.php

Tutorial to render a triangle in OpenGL ES 2.0
http://www.webreference.com/programming/opengl_es/

Light Indexed Deferred Rendering

A different approach to deferred lighting, in which the lights are rendered before the actual geometry. Here is the abstract from the paper describing the technique:

“Current rasterization based renderers utilize one of two main techniques for lighting, forward rendering and deferred rendering. However, both of these techniques have disadvantages. Forward rendering does not scale well with complex lighting scenes and standard deferred rendering has high memory usage and trouble with transparency and MSAA. This paper aims to explore a middle ground between these two lighting techniques with the aim of keeping the key advantages of both. This is achieved with deferring lighting by storing a light index value where light volumes intersect the scene.”
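
As a very rough sketch of the core idea in HLSL (the actual paper packs up to four light indices per pixel using bit arithmetic; this simplified version stores a single index, and all names below are illustrative, not from the paper):

// Pass 1: render each light's volume, writing its index into the
// light-index buffer. fLightIndex is set per light by the application,
// e.g. 5.0/255.0 for light number 5 in an 8-bit render target.
float fLightIndex;

float4 ps_writeLightIndex() : COLOR0
{
   return float4( fLightIndex, 0, 0, 0 );
}

// Pass 2: while rendering the scene geometry, look up which light
// touches this pixel and fetch its properties from a lookup texture.
sampler LightIndexBuffer;  // result of pass 1
sampler LightDataTexture;  // one texel per light: position, color, ...

float4 ps_shadeGeometry( float4 calculatedVPos : TEXCOORD0 ) : COLOR0
{
   float2 uv = calculatedVPos.xy / calculatedVPos.w; // screen uv, as in the VPOS article above
   float index = tex2D( LightIndexBuffer, uv ).r;
   float4 lightData = tex2D( LightDataTexture, float2( index, 0 ) );
   // ... evaluate the lighting using lightData ...
   return lightData;
}

Since only small index values are stored instead of a full G-buffer, memory use stays low, which is one of the key advantages the abstract mentions.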

Light Indexed Deferred Rendering

The homepage of this technique, including the paper and a demo:
http://code.google.com/p/lightindexed-deferredrender/

An OpenGL.org discussion about this technique:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=232157&fpart=1