
VPOS

Starting with DirectX Pixel Shader Model 3.0, there exists an input semantic called VPOS. It is the current pixel's position on the screen, and it is generated automatically. This is useful when sampling from a previously rendered texture while rendering an arbitrarily shaped mesh to the screen: to do so, we need uv-coordinates that represent where to sample on that texture, and these can be obtained by simply dividing VPOS by the screen dimensions.
When working with older hardware that doesn't support Shader Model 3.0, VPOS has to be created manually in the vertex shader and passed to the pixel shader as a TEXCOORD. The code below shows how to do this, including the scaling to uv-range that has to be done manually for VPOS if you use the real thing.

Vertex Shader:

float4x4 matWorldViewProjection;
float2 fInverseViewportDimensions; // ( 1/width, 1/height ) of the viewport
struct VS_INPUT
{
   float4 Position : POSITION0;
};
struct VS_OUTPUT
{
   float4 Position : POSITION0;
   float4 calculatedVPos : TEXCOORD0;
};
// Maps a clip-space position to texture-space uv (still premultiplied by w;
// the perspective divide happens in the pixel shader). Includes the D3D9
// half-pixel offset via fInverseViewportDimensions.
float4 ConvertToVPos( float4 p )
{
   return float4( 0.5*( float2(p.x + p.w, p.w - p.y) + p.w*fInverseViewportDimensions.xy), p.zw);
}
 
VS_OUTPUT vs_main( VS_INPUT Input )
{
   VS_OUTPUT Output;
   Output.Position = mul( Input.Position, matWorldViewProjection );
   Output.calculatedVPos = ConvertToVPos(Output.Position);
   return( Output );
}

Pixel Shader:

float4 ps_main(VS_OUTPUT Input) : COLOR0
{
   Input.calculatedVPos /= Input.calculatedVPos.w; // perspective divide
   return float4(Input.calculatedVPos.xy,0,1); // test render it to the screen
}

The image below shows an elephant model rendered with the shader above. As can be seen, the color (red and green channels) correctly represents the uv-coordinates of a fullscreen quad: (0,0,0) = black, (1,0,0) = red, (0,1,0) = green, (1,1,0) = yellow.

VPOS Elephant
This is how the pixel shader would look if VPOS were used instead (note: no special vertex shader is needed in this case).
float2 fInverseViewportDimensions;
struct PS_INPUT
{
   float2 vPos : VPOS;
};
float4 ps_main(PS_INPUT Input) : COLOR0
{
   // scale to uv-range manually, including the half-pixel offset
   return float4(Input.vPos*fInverseViewportDimensions + fInverseViewportDimensions*0.5,0,1); // test render it to the screen
}

The original code, more info and proof can be found here:
http://www.gamedev.net/community/forums/topic.asp?topic_id=506573

Soft Particles

Normal particles on the left, soft particles on the right

The aim of soft particles is to remove the ugly artifact that appears where a particle quad intersects the scene. There are a lot of different approaches to solve this, some more complicated than others. The simplest one is to just fade the particle as it gets too close to the scene geometry. To do this, the scene is first rendered without particles and its depth saved to a texture. When drawing the particles, the depth of each particle pixel is compared to the scene depth, and the alpha is faded smoothly based on the depth difference. The formula below, in HLSL, is the simplest possible for soft particles and works very well. scene_depth is the sampled depth (in view space) of the scene in the direction of the current pixel, particle_depth is the depth (in view space) of the current particle pixel, and scale controls the "softness" of the intersection between particles and scene:

fade = saturate((scene_depth - particle_depth) * scale);
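
Put together, a minimal soft-particle pixel shader might look like the sketch below. The sampler and uniform names (SceneDepthSampler, fScale) are assumptions, as is storing the scene's view-space depth in the red channel of the texture; the VPOS trick from the first section is reused to find the matching scene depth texel.

float2 fInverseViewportDimensions;
float fScale; // controls the softness of the fade
sampler2D SceneDepthSampler; // assumed: view-space scene depth in the r channel

struct PS_INPUT
{
   float2 vPos : VPOS;
   float particleDepth : TEXCOORD1; // view-space depth, passed from the vertex shader
   float4 color : COLOR0;
};

float4 ps_soft_particle(PS_INPUT Input) : COLOR0
{
   // use VPOS to sample the scene depth at the current pixel
   float2 uv = Input.vPos*fInverseViewportDimensions + fInverseViewportDimensions*0.5;
   float scene_depth = tex2D(SceneDepthSampler, uv).r;

   // fade the particle out as it approaches the scene geometry
   float fade = saturate((scene_depth - Input.particleDepth) * fScale);
   Input.color.a *= fade;
   return Input.color;
}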

NVIDIA [1] proposes that the contrast function below be used instead of the linear fade described above, to make the transition even smoother.

// Input is the linear fade from above; ContrastPower controls the shape of the curve
float Output = 0.5*pow(saturate(2*(( Input > 0.5) ? 1-Input : Input)), ContrastPower);
Output = ( Input > 0.5) ? 1-Output : Output;

Umenhoffer [2] proposes a method called spherical billboards to deal with these problems, in which the particle volume is approximated by a sphere. The method also handles the near clip plane problem, where particles instantly disappear if they get too close to the camera.

There is also the idea [3] that the alpha channel can be used to represent the density of the particles, although this method has the drawback that the textures might need to be redone by the artists.

The method by Microsoft [4] uses a combination of spherical billboards and a texture representation of the volume. But instead of using the alpha channel, they ray march through the sphere and sample the density from a 3D noise texture. The result can be seen in the image below.

Volumetric Particles
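
As a rough illustration of the ray marching idea, the sketch below accumulates density along the view ray through the sphere. This is a hedged sketch, not the code from the Microsoft sample; NoiseSampler and the ray's entry/exit points are assumed to be provided.

sampler3D NoiseSampler; // tiled 3D noise used as the density volume

// March from where the view ray enters the particle sphere to where it
// exits, accumulating density from the noise texture along the way.
float RayMarchDensity(float3 entry, float3 exit)
{
   const int NUM_STEPS = 8;
   float3 stepVec = (exit - entry) / NUM_STEPS;
   float3 pos = entry + 0.5*stepVec; // sample at the center of each step
   float density = 0;
   for (int i = 0; i < NUM_STEPS; ++i)
   {
      density += tex3D(NoiseSampler, pos).r;
      pos += stepVec;
   }
   return density / NUM_STEPS;
}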

The video below shows how soft particles can increase realism in games that use large particles. It is originally an ad for the Torque 3D engine.

[1] Soft Particles by NVIDIA
http://developer.download.nvidia.com/whitepapers/2007/SDK10/SoftParticles_hi.pdf

[2] Spherical Billboards and their Application to Rendering Explosions
http://www.iit.bme.hu/~szirmay/firesmoke.pdf

[3] A Gamasutra article about soft particles
http://www.gamasutra.com/view/feature/3680/a_more_accurate_volumetric_.php

[4] A DirectX 10 implementation of soft particles by Microsoft, called Volumetric Particles
http://msdn.microsoft.com/en-us/library/bb172449(VS.85).aspx


Deferred Lighting

This is a lighting technique that has lately grown a lot in popularity. The normal way of shading is to perform the lighting calculations on a fragment when it is rasterized to the screen. This works well but requires a lot of calculations when there are many lights, and the fragment might later be overwritten by another fragment, in which case those calculations were wasted.

In deferred lighting (or deferred shading, or deferred rendering), instead of doing the actual lighting calculation, you save the per-fragment information necessary to perform the shading by rendering it to textures. When all geometry has been rendered, the lighting is calculated only once per pixel on the screen, so no calculations are wasted. You could say it is a sort of lazy evaluation.

The information saved per fragment is often:

  • position ( or just depth )
  • albedo ( the diffuse texture )
  • normal
  • specular

And these are sometimes also used (a sketch of a G-buffer pass follows the list):

  • shininess
  • material ID (for selecting material behaviour)
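
A minimal sketch of what such a G-buffer pass could look like in HLSL, using multiple render targets, is shown below. The packing (one attribute per render target, world-space position stored directly) is just one possible layout, and all names here are illustrative assumptions.

sampler2D DiffuseSampler;

struct PS_GBUFFER_INPUT
{
   float2 TexCoord : TEXCOORD0;
   float3 Normal   : TEXCOORD1; // world-space normal from the vertex shader
   float3 WorldPos : TEXCOORD2; // world-space position from the vertex shader
};

struct PS_GBUFFER_OUTPUT
{
   float4 Albedo   : COLOR0; // the diffuse texture
   float4 Normal   : COLOR1; // packed from [-1,1] to [0,1]
   float4 Position : COLOR2; // or just depth, to save bandwidth
   float4 Specular : COLOR3;
};

PS_GBUFFER_OUTPUT ps_gbuffer(PS_GBUFFER_INPUT Input)
{
   PS_GBUFFER_OUTPUT Output;
   Output.Albedo   = tex2D(DiffuseSampler, Input.TexCoord);
   Output.Normal   = float4(normalize(Input.Normal)*0.5 + 0.5, 1);
   Output.Position = float4(Input.WorldPos, 1);
   Output.Specular = float4(0.5, 0, 0, 1); // e.g. a per-material specular constant
   return Output;
}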

When all geometry has been rendered and it is time to perform the lighting, the lights need to be represented as geometry when sent to rasterization. Point lights can be drawn either as spheres or as square billboards, a directional light as a fullscreen rectangle, and spotlights as cones. Note that this shading technique allows for lights of any shape, not just the traditional ones.
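
As an example, a deferred point-light pass could look roughly like the sketch below, sampling the G-buffer layout from the previous sketch; the light parameters and the simple linear falloff are assumptions. The contributions of all lights are then accumulated with additive blending.

sampler2D AlbedoSampler;
sampler2D NormalSampler;
sampler2D PositionSampler;
float2 fInverseViewportDimensions;
float3 vLightPos;   // world-space light position
float3 vLightColor;
float fLightRadius;

// Rendered over the light's screen-space extent (sphere or billboard)
float4 ps_pointlight(float2 vPos : VPOS) : COLOR0
{
   // VPOS again gives the uv for sampling the G-buffer
   float2 uv = vPos*fInverseViewportDimensions + fInverseViewportDimensions*0.5;
   float3 albedo   = tex2D(AlbedoSampler, uv).rgb;
   float3 normal   = tex2D(NormalSampler, uv).rgb*2 - 1;
   float3 worldPos = tex2D(PositionSampler, uv).rgb;

   float3 toLight = vLightPos - worldPos;
   float dist = length(toLight);
   float atten = saturate(1 - dist/fLightRadius);     // simple linear falloff
   float ndotl = saturate(dot(normal, toLight/dist)); // diffuse term

   return float4(albedo * vLightColor * ndotl * atten, 1);
}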

The big reason for using deferred rendering is how well it scales with more lights. Another reason it has grown in popularity lately is how well it works with newer rendering methods like SSAO and depth of field. The problem areas of deferred lighting are transparent objects and multisampling (antialiasing). Also, if the original scene didn't use per-pixel lighting everywhere (but instead, say, vertex lighting), deferred rendering might be slower than traditional forward rendering.

Deferred Lighting example

Explanation of deferred lighting (and source code)
http://www.beyond3d.com/content/articles/19/

Deferred Rendering in S.T.A.L.K.E.R.

Explanation of how deferred lighting was used in the game S.T.A.L.K.E.R.
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html 

The result of the deferred rendering XNA tutorial

A very good tutorial on how to implement deferred lighting in XNA 2.0. This is good reading even if you are rendering with another API.
http://www.ziggyware.com/readarticle.php?article_id=155

A long discussion on the gamedev.net forum about the pros and cons of deferred rendering compared to traditional forward rendering.
http://www.gamedev.net/community/forums/topic.asp?topic_id=424979

A DirectX 9 implementation of deferred shading, and some optimization talk:
http://www.gamedev.net/reference/programming/features/shaderx2/Tips_and_Tricks_with_DirectX_9.pdf

Deferred Lighting in Leadwerks Engine

Info about the implementation of deferred shading in the Leadwerks Engine.
http://www.leadwerks.com/files/Deferred_Rendering_in_Leadwerks_Engine.pdf

Deferred Lighting in Killzone 2

A presentation about deferred lighting in the game Killzone 2:
http://www.guerrilla-games.com/publications/dr_kz2_rsx_dev07.pdf