

Starting with DirectX Pixel Shader Model 3.0, there exists an input semantic called VPOS. It holds the current pixel's position on the screen and is generated automatically. This is useful when sampling from a previously rendered texture while rendering an arbitrarily shaped mesh to the screen: we need uv coordinates that tell us where to sample the texture, and these can be obtained by simply dividing VPOS by the screen dimensions.
When working with older hardware that doesn't support Shader Model 3.0, the VPOS value has to be created manually in the vertex shader and passed to the pixel shader as a TEXCOORD. The shaders below show how to do this (including the scaling to uv range, which also has to be done manually for VPOS if you use it).
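To make the mapping concrete, here is a CPU-side sketch (my own illustration, not from the original post) of the math the vertex-shader path performs, assuming a hypothetical 800x600 viewport. Clip-space x/w and y/w run from -1 to 1; after the divide by w (done later in the pixel shader), the result is a uv coordinate in [0, 1], including the D3D9-style half-pixel offset.

```python
WIDTH, HEIGHT = 800, 600
INV_DIMS = (1.0 / WIDTH, 1.0 / HEIGHT)

def convert_to_vpos(x, y, z, w):
    """Mirror of the HLSL ConvertToVPos: result is still scaled by w."""
    u = 0.5 * ((x + w) + w * INV_DIMS[0])
    v = 0.5 * ((w - y) + w * INV_DIMS[1])
    return (u, v, z, w)

def to_uv(x, y, z, w):
    """Perspective divide, as done in the pixel shader."""
    u, v, _, _ = convert_to_vpos(x, y, z, w)
    return (u / w, v / w)

# Top-left clip corner (-1, 1) maps to uv (0, 0) plus half a pixel;
# bottom-right (1, -1) maps to (1, 1) plus half a pixel.
print(to_uv(-1.0, 1.0, 0.0, 1.0))   # ~(0.000625, 0.000833)
print(to_uv(1.0, -1.0, 0.0, 1.0))   # ~(1.000625, 1.000833)
```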

Vertex Shader:

float4x4 matWorldViewProjection;
float2 fInverseViewportDimensions;

struct VS_INPUT
{
   float4 Position : POSITION0;
};

struct VS_OUTPUT
{
   float4 Position : POSITION0;
   float4 calculatedVPos : TEXCOORD0;
};

// Maps a clip-space position to screen-space uv coordinates.
// The result is still scaled by w; the pixel shader does the divide.
float4 ConvertToVPos( float4 p )
{
   return float4( 0.5 * ( float2( p.x + p.w, p.w - p.y ) + p.w * fInverseViewportDimensions.xy ), p.zw );
}

VS_OUTPUT vs_main( VS_INPUT Input )
{
   VS_OUTPUT Output;
   Output.Position = mul( Input.Position, matWorldViewProjection );
   Output.calculatedVPos = ConvertToVPos( Output.Position );
   return Output;
}

Pixel Shader:

float4 ps_main( VS_OUTPUT Input ) : COLOR0
{
   // Perspective divide; it must happen here rather than in the vertex
   // shader so that the interpolation across the triangle stays correct.
   Input.calculatedVPos /= Input.calculatedVPos.w;
   return float4( Input.calculatedVPos.xy, 0, 1 ); // test render it to the screen
}

The image below shows an elephant model rendered with the shader above. As can be seen, the color (red and green channels) correctly represents the uv coordinates of a fullscreen quad: 0,0,0 = black, 1,0,0 = red, 0,1,0 = green and 1,1,0 = yellow.

VPOS Elephant
This is how the pixel shader would look if VPOS were used instead (note: no special vertex shader is needed in this case).

float2 fInverseViewportDimensions;

struct PS_INPUT
{
   float2 vPos : VPOS;
};

float4 ps_main( PS_INPUT Input ) : COLOR0
{
   return float4( Input.vPos * fInverseViewportDimensions + fInverseViewportDimensions * 0.5, 0, 1 ); // test render it to the screen
}
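As a sanity check (my own sketch, not part of the original post): under D3D9-style rasterization, where pixel px corresponds to clip-space x/w = 2*px/width - 1, the vertex-shader path (ConvertToVPos plus the divide by w) and the VPOS path (vPos * invDims + 0.5 * invDims) should produce the same uv for every pixel.

```python
WIDTH, HEIGHT = 800, 600          # hypothetical viewport
INV = (1.0 / WIDTH, 1.0 / HEIGHT)

def uv_from_vertex_path(px, py):
    """uv via ConvertToVPos, for the clip-space position of pixel (px, py) with w = 1."""
    x = 2.0 * px / WIDTH - 1.0
    y = 1.0 - 2.0 * py / HEIGHT
    u = 0.5 * ((x + 1.0) + INV[0])
    v = 0.5 * ((1.0 - y) + INV[1])
    return (u, v)

def uv_from_vpos(px, py):
    """uv via the VPOS-based pixel shader."""
    return (px * INV[0] + 0.5 * INV[0], py * INV[1] + 0.5 * INV[1])

# Both paths agree at the corners and the center of the screen.
for px, py in [(0, 0), (400, 300), (799, 599)]:
    a, b = uv_from_vertex_path(px, py), uv_from_vpos(px, py)
    assert abs(a[0] - b[0]) < 1e-9 and abs(a[1] - b[1]) < 1e-9
```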

The original code, more info and proof can be found here:

Real-Time Cloud Rendering

A very fast technique that renders clouds as impostors which are only updated when the viewpoint has changed enough to cause a visible error. Here's the abstract from the paper:

“This paper presents a method for realistic real-time rendering of clouds for flight simulators and games. It describes a cloud illumination algorithm that approximates multiple forward scattering in a preprocess, and first order anisotropic scattering at runtime. Impostors are used to accelerate cloud rendering by exploiting frame-to-frame coherence in an interactive flight simulation. Impostors are particularly well suited to clouds, even in circumstances under which they cannot be applied to the rendering of polygonal geometry. The method allows hundreds of clouds with hundreds of thousands of particles to be rendered at high frame rates, and improves interaction with clouds by reducing artifacts introduced by direct particle rendering.”

Cloud Rendering
Link to a webpage with much more information about this technique:

Link to the paper:

Procedural Approach to Animate Interactive Natural Sceneries

This algorithm for rendering grass allows the user to interact with the grass, which most other methods do not. The authors call this interaction effect treading, and it is implemented as a scalar field that determines in which direction the individual grass straws should bend.
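A minimal sketch of how such a treading effect could work (my own illustration under simple assumptions, not the paper's implementation): a character at some position pushes nearby straws away from it, with a scalar bend strength that falls off with distance and vanishes outside a tread radius.

```python
import math

TREAD_RADIUS = 1.0  # hypothetical radius of influence around the character

def tread(straw_pos, character_pos):
    """Return a 2D bend vector for a grass straw: away from the
    character, scaled by a scalar falloff (0 outside the radius)."""
    dx = straw_pos[0] - character_pos[0]
    dy = straw_pos[1] - character_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= TREAD_RADIUS or dist == 0.0:
        return (0.0, 0.0)
    strength = 1.0 - dist / TREAD_RADIUS   # scalar falloff with distance
    return (dx / dist * strength, dy / dist * strength)

# A straw halfway inside the radius bends away at half strength;
# a straw outside the radius is unaffected.
print(tread((0.5, 0.0), (0.0, 0.0)))   # (0.5, 0.0)
print(tread((2.0, 0.0), (0.0, 0.0)))   # (0.0, 0.0)
```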

Interactive grass rendering

Link to the paper, more screenshots and movies

Rendering Countless Blades of Waving Grass

A full article that presents every aspect of implementing billboarded grass fields in games. It uses a vertex shader to animate the grass in the wind.
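The wind animation can be sketched like this (a generic sinusoidal sway of my own, not the article's exact shader): a vertex-shader-style function offsets each billboard vertex horizontally by a sine wave, weighted by height above the ground so the base of the straw stays fixed.

```python
import math

def wind_offset(x, z, height_weight, time,
                amplitude=0.1, frequency=2.0):
    """Horizontal sway for a grass billboard vertex.
    height_weight is 0 at the straw base and 1 at the tip, so the
    base never moves. Phase varies with world position (x, z) so
    neighboring straws do not all sway in lockstep."""
    phase = 0.5 * x + 0.5 * z
    sway = amplitude * math.sin(frequency * time + phase)
    return sway * height_weight

# The base vertex (weight 0) never moves; the tip sways within amplitude.
print(wind_offset(1.0, 2.0, 0.0, 3.7))   # 0.0
```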

A grass rendering

Here’s the full article for free (also the source of the image). Or you could buy the great book “GPU Gems 1”, which also contains the article.

An implementation of the grass rendering by Nvidia: