Texture Atlas

A texture atlas [1][2] is a technique for grouping smaller textures into one larger texture. This decreases the number of state switches [3] a renderer needs to do and therefore often increases performance. Texture atlases have been used for a long time in the video game industry for sprite animations. When using texture atlases, the uv-coordinates of the models have to be remapped so that the original 0..1 range maps to the texture's tile in the atlas. Grouping of textures can be done manually by texture artists or with tools. The texture coordinates can be remapped in a tool, or in the shader at run-time.
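The run-time remapping amounts to one multiply-add per uv. A minimal pixel shader sketch (the uniform names and the tile layout are illustrative, not from the referenced articles):

```hlsl
// Remap a model's original 0..1 uv into its tile's sub-rectangle of the atlas.
// Example: a tile in the top-left quadrant of a 2x2 atlas would have
// tileOffset = (0, 0) and tileScale = (0.5, 0.5).
float2 tileOffset;  // uv of the tile's corner inside the atlas
float2 tileScale;   // tile size relative to the whole atlas
sampler2D atlas;

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float2 atlasUV = tileOffset + uv * tileScale;
    return tex2D(atlas, atlasUV);
}
```

The same multiply-add can of course be done once per vertex instead; as Ivanov notes below, it is worth measuring both variants.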

An example texture atlas
Image from article [2]

There are some limitations when using texture atlases compared to normal textures. First of all, all texture coordinates must initially be within the 0..1 range, so for example no “free” tiling (texture wrapping) can be used. The other problem is bleeding between neighboring tiles in the atlas when filtering, for example when using mipmaps.

Some additional information from Ivan-Assen Ivanov, author of article [2].

“– separate textures hurts not only batching (in facts, it hurts batching less than years ago), but also memory – as there is a certain per-texture overhead. This is especially painful on consoles – on PCs, the overhead is still there, I guess, but the driver hides it from you. The exact numbers are under NDA, of course, but on an old, unreleased project, we saved about 9 MB by atlas-sing a category of textures we didn’t atlas before.

- vertex interpolators are expensive! make sure you measure the remapping from 0..1 to the actual UVs in the atlas both in the vertex and in the pixel shader. Sounds counterintuitive, but on modern GPUs and with dense geometry, pixel shader is actually faster.” 

[1] “Improve Batching Using Texture Atlases” http://http.download.nvidia.com/developer/NVTextureSuite/Atlas_Tools/Texture_Atlas_Whitepaper.pdf

[2] “Practical Texture Atlases” (borrowed image from this page)
http://www.gamasutra.com/features/20060126/ivanov_01.shtml

[3] “Batch, Batch, Batch: What Does It Really Mean?”
http://developer.nvidia.com/docs/io/8230/batchbatchbatch.pdf

Soft Particles

Normal particles on the left, soft particles on the right

The aim of soft particles is to remove the ugly artifact that appears when a particle quad intersects the scene. There are a lot of different approaches to solve this, some more complicated than others. The simplest is to fade the particle out as it gets too close to the scene. To do this, the scene is first rendered without particles and its depth saved in a texture. When drawing the particles, the depth of each particle pixel is compared to the scene depth, and the alpha is scaled by a smooth fade based on this depth difference. The formula below, in HLSL, is the simplest possible for soft particles, and works very well. scene_depth is the sampled depth (in view space) of the scene at the current pixel. particle_depth is the depth (in view space) of the current particle pixel. scale controls the “softness” of the intersection between particles and the scene:

fade = saturate((scene_depth - particle_depth) * scale);
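In context, a hedged sketch of a complete soft-particle pixel shader; the sampler names and the way the scene depth and particle depth reach the shader are assumptions about the surrounding engine:

```hlsl
sampler2D diffuseTex;     // the particle texture
sampler2D sceneDepthTex;  // view-space scene depth, rendered without particles
float scale;              // controls the softness of the intersection

float4 main(float2 uv : TEXCOORD0,
            float2 screenUV : TEXCOORD1,       // pixel position in screen space,
                                               // assumed computed in the vertex shader
            float particle_depth : TEXCOORD2)  // view-space depth of this particle pixel
            : COLOR
{
    float scene_depth = tex2D(sceneDepthTex, screenUV).r;
    float fade = saturate((scene_depth - particle_depth) * scale);

    float4 color = tex2D(diffuseTex, uv);
    color.a *= fade;  // fade the particle out near the intersection
    return color;
}
```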

NVIDIA [1] proposes applying the following contrast function to the fade, instead of the linear one described above, to make the transition even smoother.

float Output = 0.5 * pow(saturate(2 * ((Input > 0.5) ? 1 - Input : Input)), ContrastPower);
Output = (Input > 0.5) ? 1 - Output : Output;
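Wrapped as a function and applied to the linear fade, this looks as follows; the value 2.0 for ContrastPower is just an illustrative choice:

```hlsl
// NVIDIA's contrast curve: steepens the transition around 0.5 while
// keeping Contrast(0) = 0 and Contrast(1) = 1.
float Contrast(float Input, float ContrastPower)
{
    float Output = 0.5 * pow(saturate(2 * ((Input > 0.5) ? 1 - Input : Input)), ContrastPower);
    return (Input > 0.5) ? 1 - Output : Output;
}

// Usage: replace the linear fade with the contrast-shaped one.
float fade = Contrast(saturate((scene_depth - particle_depth) * scale), 2.0);
```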

Umenhoffer [2] proposes a method called spherical billboards to deal with these problems. In this method, the particle volume is approximated by a sphere. It also handles the near clip plane problem, where particles instantly disappear if they get too close to the camera.
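The core of the spherical-billboard idea can be sketched as follows: attenuate each pixel by the length of the view ray inside the particle's sphere, clipped against both the scene depth and the near plane. All names below are illustrative, not the paper's actual code:

```hlsl
// centerDepth: view-space depth of the sphere (particle) center
// r:           sphere radius
// d:           distance of this pixel's view ray from the sphere center
// Returns the distance the ray travels inside the sphere, clipped at the
// near plane and at the scene depth.
float computeTravelDistance(float centerDepth, float r, float d,
                            float sceneDepth, float nearPlane)
{
    float h = sqrt(max(r * r - d * d, 0.0));        // half the chord length
    float entry = max(centerDepth - h, nearPlane);  // clip at the near plane
    float exit  = min(centerDepth + h, sceneDepth); // clip at the scene
    return max(exit - entry, 0.0);
}
```

The alpha is then derived from this travel distance (for example via an exponential absorption term), so it falls off smoothly both at scene intersections and at the near plane.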

There is also an idea [3] that the alpha channel can be used to represent the density of the particles. This method has the drawback that the textures might need to be redone by the artists.

The method by Microsoft [4] combines spherical billboards with a texture representation of the volume. But instead of using the alpha channel, they ray march through the sphere and sample the density from a 3D noise texture. The result can be seen in the image below.

Volumetric Particles
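The ray-marching step can be sketched like this; the step count, the noise texture and the coordinate mapping are assumptions for illustration, not the actual sample code:

```hlsl
sampler3D noiseTex;  // tiling 3D noise, sampled as the particle's density field
#define NUM_STEPS 16

// entry/exit: the points (in the noise texture's coordinate space) where the
// view ray enters and leaves the particle's sphere.
float marchDensity(float3 entry, float3 exit)
{
    float3 step = (exit - entry) / NUM_STEPS;
    float3 p = entry;
    float density = 0.0;
    for (int i = 0; i < NUM_STEPS; ++i)
    {
        density += tex3D(noiseTex, p).r;  // accumulate density along the ray
        p += step;
    }
    return density / NUM_STEPS;
}
```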

The video below shows how soft particles can increase realism in games using large particles. It’s originally an ad for the Torque 3D engine.

[1] Soft Particles by NVIDIA
http://developer.download.nvidia.com/whitepapers/2007/SDK10/SoftParticles_hi.pdf

[2] Spherical Billboards and their Application to Rendering Explosions
http://www.iit.bme.hu/~szirmay/firesmoke.pdf

[3] A Gamasutra article about soft particles
http://www.gamasutra.com/view/feature/3680/a_more_accurate_volumetric_.php

[4] A DirectX 10 implementation of soft particles by Microsoft, called Volumetric Particles
http://msdn.microsoft.com/en-us/library/bb172449(VS.85).aspx

Rendering Countless Blades of Waving Grass

A full article that presents every aspect of implementing billboarded grass fields in games. It uses a vertex shader to animate the grass in the wind.
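The wind animation can be sketched in a vertex shader like this, in the spirit of the chapter: offset only the upper vertices of each billboard by a time-varying sine, so the blades are rooted at the bottom. The uniform names and the use of uv.y as a “top of blade” weight are illustrative assumptions:

```hlsl
float time;
float2 windDir;      // horizontal wind direction in world space
float windStrength;

float4 main(float4 pos : POSITION, float2 uv : TEXCOORD0,
            uniform float4x4 worldViewProj) : POSITION
{
    // Assumed layout: uv.y = 0 at the top of the billboard, 1 at the rooted bottom.
    float weight = 1.0 - uv.y;

    // Phase-shift by position so neighboring billboards don't sway in lockstep.
    float sway = sin(time + pos.x * 0.5 + pos.z * 0.5);

    pos.xz += windDir * (sway * windStrength * weight);
    return mul(worldViewProj, pos);
}
```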

A grass rendering

Here’s the full article for free (also the source of the image). Or you could buy the great book “GPU Gems”, which also contains the article.
http://http.developer.nvidia.com/GPUGems/gpugems_ch07.html

An implementation of the grass rendering by Nvidia:
http://developer.nvidia.com/object/nature_scene.html

Real-Time Volumetric Smoke

This approach to rendering volumetric smoke uses a DirectX 10 feature that enables rendering to 3D textures. It voxelizes the geometry so that the smoke can flow around and react to it in a realistic way.

Volumetric Smoke Rendering

All details can be found in this paper by Nvidia.
http://developer.download.nvidia.com/presentations/2007/gdc/RealTimeFluids.pdf