
Render Thickness

In [1] they describe a clever way of rendering the thickness of an object in a single pass. The method only works correctly for convex objects, but this limitation isn’t that bad: it can often be used to get an approximate thickness of concave objects as well. For example, [1] uses it to fake the light scattering in clouds rendered as billboards. The method works like this:

The object is rendered, and the distance from the near plane is written to color channel R, while the distance to the far plane is written to channel G. By rendering with the color blend mode MIN, R will hold the minimum distance from the near plane and G the minimum distance to the far plane. From these two distances, the thickness of the rendered object is simply (1 − G) − R (assuming distances are normalized so that 1 is the distance between the clip planes). Alpha can be accumulated in the same render pass by outputting it to the A channel and selecting the blend alpha mode ADD (color and alpha can use different blend modes).

All this is done in only one pass. Just remember to clear to white before rendering.
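As an illustration, here is a minimal Direct3D 9 render-state sketch of the setup described above. It assumes a valid device pointer and a pixel shader that outputs (near distance, far distance, 0, alpha); the names are hypothetical and not taken from [1].

```cpp
// Minimal sketch of the one-pass thickness setup in Direct3D 9.
// Assumes `device` is a valid IDirect3DDevice9* and the pixel shader
// writes (nearDistance, farDistance, 0, alpha), all normalized to [0,1].
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

// MIN blending on the color channels keeps the smallest R and G.
// (For MIN/MAX the blend factors are ignored, but set them anyway.)
device->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_MIN);
device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);

// Alpha uses a separate ADD blend so coverage accumulates.
device->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_BLENDOPALPHA,   D3DBLENDOP_ADD);
device->SetRenderState(D3DRS_SRCBLENDALPHA,  D3DBLEND_ONE);
device->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_ONE);

// Clear color to white (so MIN starts from the maximum value)
// and alpha to zero (so ADD starts from nothing).
device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 255, 255, 255), 1.0f, 0);
```

After the pass, the per-pixel thickness can be read off as (1 − G) − R, exactly as described above.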

The image below shows the thickness of the popular Hebe mesh rendered with this method. The model is not convex; a problem area is, for example, the arm holding the bowl. As one can see, the algorithm treats the bowl and the shoulder as connected, and therefore reports that region as the thickest part of the object.

Hebe

[1] The Art and Technology of Whiteout
http://ati.amd.com/developer/gdc/2007/ArtAndTechnologyOfWhiteout(Siggraph07).pdf

Instanced Billboards

In DirectX9, one can use instanced billboards to render lots of particles with good performance. Since all particles are presumed to be billboards built from two triangles forming a quad, instancing lets us reuse this geometry data (only UV coordinates are needed) for every particle, saving bandwidth. When rendering, two streams with different frequencies should be used: the quad geometry makes up the first stream, and the second stream consists of the per-instance data that is unique to each particle, such as position, rotation, and color. The big drawback of this rendering approach is that it requires hardware instancing support, which means Shader Model 3.0 (or Shader Model 2.0 on ATI cards, using a trick described in the first source below).
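As a rough illustration, here is what the two-stream setup might look like with the Direct3D 9 API. The names `quadVB`, `quadIB`, `instanceVB`, `QuadVertex`, `InstanceData`, and `numParticles` are hypothetical placeholders, not taken from the sources below.

```cpp
// Hypothetical D3D9 instancing sketch: one shared quad, many particles.
// Assumes `device` is a valid IDirect3DDevice9*.

// Stream 0: the shared quad geometry, drawn numParticles times.
device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numParticles);
device->SetStreamSource(0, quadVB, 0, sizeof(QuadVertex));

// Stream 1: per-instance data (position, rotation, color, ...),
// advanced once per instance.
device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
device->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));

device->SetIndices(quadIB);

// A single draw call renders every particle: 4 vertices, 2 triangles each.
device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 4, 0, 2);

// Restore the default frequencies so later draws are unaffected.
device->SetStreamSourceFreq(0, 1);
device->SetStreamSourceFreq(1, 1);
```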

This particle rendering approach is described in detail here:
http://zeuxcg.blogspot.com/2007/09/particle-rendering-revisited.html

DirectX9 info about rendering multiple streams
http://msdn.microsoft.com/en-us/library/bb147299(VS.85).aspx

DirectX9 instancing info
http://msdn.microsoft.com/en-us/library/bb173349(VS.85).aspx

Normal Mapped Billboards

This technique doesn’t actually invent anything new; it’s just a combination of normal mapping and billboards used to realistically light particle systems. It has been used successfully in many games to fake volumetric smoke.

The movie below shows an example of a lot of particles, rendered as billboards that are normal mapped to look like spheres.
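To make the idea concrete, here is a small, hypothetical sketch of the lighting such a billboard shader performs, written as plain C++ for illustration; a real implementation would do this per pixel in a shader.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// `normalSample` is the RGB texel from the normal map in [0,1],
// authored so the flat quad shades like a sphere. `lightDir` is the
// normalized direction toward the light, in the same space as the map.
Vec3 LitBillboard(Vec3 normalSample, Vec3 lightDir, Vec3 particleColor)
{
    // Unpack the normal from [0,1] to [-1,1].
    Vec3 n = { normalSample.x * 2.0f - 1.0f,
               normalSample.y * 2.0f - 1.0f,
               normalSample.z * 2.0f - 1.0f };

    // A simple Lambert term is enough to make the flat billboard
    // appear volumetric, since the normals describe a sphere.
    float nDotL = std::max(0.0f, Dot(n, lightDir));

    return { particleColor.x * nDotL,
             particleColor.y * nDotL,
             particleColor.z * nDotL };
}
```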

Lit smoke and Post-process system design (also in the book ShaderX 5)
http://www.gamedev.net/community/forums/topic.asp?topic_id=432218&whichpage=1?

A thesis that tried to implement this method (some interesting info, but results aren’t good enough)
http://epubl.ltu.se/1404-5494/2008/011/LTU-HIP-EX-08011-SE.pdf

Some discussion about normal mapped billboards
http://www.drone.org/tutorials/lighting_flat_objects.html

Fluid Simulation and Rendering

For effects like smoke or water, a fluid simulation and rendering approach is needed. There are currently two popular methods for this:

  1. Simulate the fluid on the CPU and send the result as particles to the GPU for rendering as billboards. This is often called a particle system. The technique has been around since the dawn of computer graphics.

  2. Simulate the fluid on the GPU and render the result into textures. These are then rendered by doing volume ray casting (or ray marching) on the GPU. This technique is new and rather unexplored, and there are few real-life implementations. The results can be very realistic, but the technique is slow.

Technique one burdens the CPU, the bandwidth, and the GPU, although in modern solutions it’s the bandwidth that is the bottleneck. The GPU-based technique only burdens the GPU (but heavily).
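As an illustration of the ray marching in method two, here is a minimal, hypothetical C++ sketch; a real implementation would run this in a pixel shader, sampling the 3D texture produced by the GPU simulation.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder density field (a soft sphere); a real implementation
// would fetch this from the 3D texture written by the fluid simulation.
static float SampleDensity(Vec3 p)
{
    float r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return r < 1.0f ? 1.0f - r : 0.0f;
}

// March along a ray through the volume, integrating density.
// The accumulated value drives the opacity (and color) of the fluid.
float MarchDensity(Vec3 origin, Vec3 dir, float maxDist, int steps)
{
    float stepLen = maxDist / steps;
    float accum = 0.0f;
    Vec3 p = origin;
    for (int i = 0; i < steps; ++i) {
        accum += SampleDensity(p) * stepLen;
        p.x += dir.x * stepLen;
        p.y += dir.y * stepLen;
        p.z += dir.z * stepLen;
    }
    return accum;
}
```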

The movie shows the GPU method of fluid simulation and rendering. More info about this particular implementation can be found in the last two links.

Building an Advanced Particle System
http://www.mysticgd.com/misc/AdvancedParticleSystems.pdf
Building a Million Particle System
http://www.2ld.de/gdc2004/MegaParticlesPaper.pdf
Real-Time Simulation and Rendering of 3D Fluids
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch30.html
The previous chapter’s author’s homepage:
http://www.cs.caltech.edu/~keenan/project_fluid.html