There are many occasions when the fragment position in world space needs to be reconstructed from a texture holding the scene depth (a depth texture). One example is deferred rendering, where memory usage can be decreased by storing only the depth instead of the full position: one channel of data instead of the three needed for a complete position.

There are different ways to save the depth. The most popular are view space depth and screen space depth. Saving depth in view space instead of screen space has two advantages: it’s faster, and it gives better precision because the stored value is linear in view space.
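As a point of comparison, storing linear view space depth could look like the sketch below. This is not code from the original article: `matWorldView`, `matWorldViewProjection` and `fFarPlane` are assumed shader constants, and the depth is normalized by the far plane distance so it fits a 0..1 render target.

```hlsl
float4x4 matWorldViewProjection; // assumed constant: world * view * projection
float4x4 matWorldView;           // assumed constant: world * view
float    fFarPlane;              // assumed constant: distance to the far plane

struct VS_OUTPUT
{
    float4 Pos        : POSITION;
    float  viewSpaceZ : TEXCOORD0;
};

// vertex shader
VS_OUTPUT vs_main( float4 Pos : POSITION )
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    Out.Pos = mul(Pos, matWorldViewProjection);
    // z distance from the camera; linear in view space
    Out.viewSpaceZ = mul(Pos, matWorldView).z;
    return Out;
}

// pixel shader
float4 ps_main( VS_OUTPUT Input ) : COLOR
{
    // normalize to 0..1 for storage in a unorm render target
    return Input.viewSpaceZ / fFarPlane;
}
```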

This is how **screen space depth** can be rendered in HLSL:

```hlsl
float4x4 matWorldViewProjection;

struct VS_OUTPUT
{
    float4 Pos                 : POSITION;
    float4 posInProjectedSpace : TEXCOORD0;
};

// vertex shader
VS_OUTPUT vs_main( float4 Pos : POSITION )
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    Out.Pos = mul(Pos, matWorldViewProjection);
    Out.posInProjectedSpace = Out.Pos;
    return Out;
}

// pixel shader
float4 ps_main( VS_OUTPUT Input ) : COLOR
{
    // perspective divide gives the screen space depth
    float depth = Input.posInProjectedSpace.z / Input.posInProjectedSpace.w;
    return depth;
}
```

The HLSL pixel shader below shows how the position can be reconstructed from a depth map stored with the code above. Note that this is one of the slowest ways of reconstructing the position, since it requires a full matrix multiplication per pixel.

```hlsl
float4x4 matViewProjectionInverse;
float2   fInverseViewportDimensions;
sampler  depthTexture;

float4 ps_main( float2 vPos : VPOS ) : COLOR0
{
    // sample the stored depth at the center of this pixel
    float depth = tex2D(depthTexture,
        vPos * fInverseViewportDimensions + fInverseViewportDimensions * 0.5).r;

    // scale it to -1..1 (screen coordinates)
    float2 projectedXY = vPos * fInverseViewportDimensions * 2 - 1;
    projectedXY.y = -projectedXY.y;

    // create the position in screen space
    float4 pos = float4(projectedXY, depth, 1);

    // transform the position into world space by multiplication
    // with the inverse view projection matrix
    pos = mul(pos, matViewProjectionInverse);

    // make it homogeneous; the result will be (x,y,z,1) in world space
    pos /= pos.w;

    return pos; // for now, just render it out
}
```

To reconstruct **position from view space depth**, a ray from the camera position to the frustum far plane is needed. For a full screen quad, this ray can be precalculated for the four corners and passed to the shader; this is how the computer game Crysis did it [1]. But for arbitrary geometry, as needed in deferred rendering, the ray must be calculated in the shaders [2].
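For the arbitrary geometry case, a sketch following the approach described in [2] could look like this. The constants and the interpolated view space position are assumptions, not code from the original article, and the depth texture is assumed to hold the raw (unnormalized) linear view space z.

```hlsl
float4x4 matInverseView;             // assumed constant: inverse of the view matrix
float2   fInverseViewportDimensions; // assumed constant: 1 / viewport size
sampler  depthTexture;               // assumed to store linear view space z

struct VS_OUTPUT
{
    float4 Pos   : POSITION;
    float3 posVS : TEXCOORD0; // view space position of this vertex
};

// pixel shader
float4 ps_main( VS_OUTPUT Input, float2 vPos : VPOS ) : COLOR0
{
    float2 uv = vPos * fInverseViewportDimensions
              + fInverseViewportDimensions * 0.5;

    // linear view space z stored in the depth texture
    float viewSpaceZ = tex2D(depthTexture, uv).r;

    // ray from the camera through this pixel, scaled so that ray.z == 1
    float3 viewRay = float3(Input.posVS.xy / Input.posVS.z, 1.0);

    // view space position: walk along the ray to the stored depth
    float3 posVS = viewRay * viewSpaceZ;

    // back to world space
    return mul(float4(posVS, 1.0), matInverseView);
}
```

Because the ray is scaled so that its z component is 1, multiplying it by the stored view space z lands exactly on the original surface point, and no perspective divide is needed.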

[1] “Finding Next Gen: CryEngine 2”

http://ati.amd.com/developer/gdc/2007/mittring-finding_nextgen_cryengine2(siggraph07).pdf

[2] “Reconstructing Position From Depth, Continued”

http://mynameismjp.wordpress.com/2009/05/05/reconstructing-position-from-depth-continued/