May 01 2013

The next lighting technique I want to cover is environment mapping with cube maps.

Environment mapping

Environment mapping is another form of specular lighting. When I spoke about specular lighting here, I was talking about simulating the light reflected from one bright light source. You can repeat the calculations for multiple light sources, but it quickly gets expensive. In a real scene, an object reflects light from every other point in the scene that is visible from it. The standard specular approach is obviously not feasible for this effectively infinite number of light sources, so we can model it differently by using a texture. We call this environment mapping.

Cube maps

We need to use a texture to store information about all of the light hitting a point in the scene. However, textures are rectangular, and there is no obvious way to wrap one around a sphere (which is what we would need to represent all the light arriving from the front, back, sides, top, bottom and so on). What we do instead is use six textures, one for each side of a cube, and we call this a cube map. When arranged in a cross shape you can see how they would fold together into a cube:

Uffizi Gallery environment map

This is a famous cube map of the Uffizi Gallery in Florence, and is a bit like a panoramic photo with six images stitched together. These six textures are stored separately (they're not combined into one big texture), and they are labelled front, back, left, right, top and bottom as in the image.

Lighting with cube maps

Remember how the reflection vector is calculated by reflecting the view vector around the normal. This reflection vector points to what you would see if the surface were a mirror (and is therefore the direction that any specular lighting comes from). The problem is knowing what light arrives from that direction, and that is where the cube map comes in.

An environment map is another name for a cube map that contains a full panoramic view of the world (or environment) in all directions, such as the one above. The reflection vector can be used directly to look up into the environment map. This then gives you the colour of the reflection from that point.

In a pixel shader, it's as simple as doing a normal texture lookup, except that you use the reflection direction as the texture coordinate:

// reflectionTexture is a TextureCube; cubeSampler is a SamplerState
float4 reflection = reflectionTexture.Sample(cubeSampler, reflectionDirection);
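For context, here's a minimal sketch of a complete pixel shader built around that lookup (the names, like gCameraPos, worldPos and PSMain, are just placeholders for illustration):

TextureCube reflectionTexture;
SamplerState cubeSampler;

cbuffer PerFrame
{
    float3 gCameraPos;   // world-space camera position
};

float4 PSMain(float3 worldPos : TEXCOORD0, float3 normal : TEXCOORD1) : SV_Target
{
    // View vector from the camera towards the surface
    float3 view = normalize(worldPos - gCameraPos);
    // Mirror the view vector around the normal to get the reflection direction
    float3 reflectionDirection = reflect(view, normalize(normal));
    // The reflection direction is used directly as the texture coordinate
    return reflectionTexture.Sample(cubeSampler, reflectionDirection);
}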

Here is an example of my test scene where the only lighting on the objects is from the nebula cube map I used here:

Only lit using an environment map

How does the GPU actually look up into the texture, though? The first thing it needs to do is find which face of the cube the reflection ray will hit. To find this, we just need to find the component of the reflection vector with the largest magnitude (ignoring sign). If the X component is the largest then the vector is mainly pointing left or right (left if it's negative, right if it's positive), so we use the left or right face. Similarly, if the Y component is the largest then we use the top or bottom face, and if Z is the largest then we use the front or back face. Let's start with an example:

reflectionVector = { -0.84, 0.23, 0.49 }

The X component has the largest magnitude, so we want either the left or right face. It's negative, so we want the left face.

Now that the face has been determined, the vector can be used to find the actual texture coordinates within that face. In exactly the same way as a vertex is projected onto the screen, the vector is projected onto the face we've just found by dividing through by that largest component (X in this case):

{ -0.84, 0.23, 0.49 } / -0.84 = { 1.0, -0.27, -0.58 }

Take the two components that don't point towards the face (Y and Z in this case) and map them from the (-1, 1) range to the (0, 1) range, as this is the range that texture coordinates are specified in:

textureCoords = { -0.27, -0.58 } * 0.5 + { 0.5, 0.5 } = { 0.37, 0.21 }

And that is the texture coordinate that will be sampled from the left face in this example.
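To make the steps concrete, here's the whole lookup written out as a sketch in shader code (for illustration only; the GPU does this in hardware, and real cube map addressing also applies per-face sign conventions that this simplified version ignores):

// Illustrative only: choose a cube face and compute (u, v) for a direction vector.
// The face numbering is arbitrary, and per-face sign flips are ignored.
void CubeMapLookup(float3 r, out int face, out float2 uv)
{
    float3 a = abs(r);
    if (a.x >= a.y && a.x >= a.z)
    {
        // X has the largest magnitude: right (+X) or left (-X) face
        face = (r.x >= 0) ? 0 : 1;
        uv = r.yz / r.x;   // project onto the face by dividing by X
    }
    else if (a.y >= a.z)
    {
        // Y has the largest magnitude: top (+Y) or bottom (-Y) face
        face = (r.y >= 0) ? 2 : 3;
        uv = r.xz / r.y;
    }
    else
    {
        // Z has the largest magnitude: front or back face
        face = (r.z >= 0) ? 4 : 5;
        uv = r.xy / r.z;
    }
    uv = uv * 0.5 + 0.5;   // map from (-1, 1) to (0, 1)
}

Feeding in the example vector { -0.84, 0.23, 0.49 } picks face 1 (the left face) and produces roughly { 0.37, 0.21 }, matching the working above.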

Blurry reflections

Using a cube map like this will always give very sharp, mirror-like reflections, as sharp as the texture in the cube map. Surfaces that aren't completely smooth should have blurred reflections instead, due to the variety of surface orientations within a small area of rough material (like I talked about here). One way of doing this would be to sample the cube map multiple times in a cone around the reflection vector and average the results (sketched below), which would simulate the light reflected in from different parts of the world. However, this would be very slow. Instead we can make use of mipmapping.
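Before moving on, here's an illustrative sketch of that brute-force cone sampling (the offsets and sample count are hypothetical placeholders; a real implementation would distribute many more samples properly over the cone):

// Average several samples jittered around the reflection vector.
float4 BlurryReflection(float3 reflectionDirection, float coneRadius)
{
    static const int NUM_CONE_SAMPLES = 4;                // far too few in practice
    static const float3 coneOffsets[NUM_CONE_SAMPLES] =   // hypothetical offsets
    {
        float3(1, 1, 0), float3(-1, 1, 0), float3(1, -1, 0), float3(-1, -1, 0)
    };

    float4 sum = float4(0, 0, 0, 0);
    for (int i = 0; i < NUM_CONE_SAMPLES; ++i)
    {
        // Nudge the reflection direction and renormalize before sampling
        float3 dir = normalize(reflectionDirection + coneOffsets[i] * coneRadius);
        sum += reflectionTexture.Sample(cubeSampler, dir);
    }
    return sum / NUM_CONE_SAMPLES;
}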

Mipmapping

Mipmapping is an important technique when doing any kind of graphics using textures. It involves storing a texture at multiple different sizes, so that the most appropriate size can be used when rendering. Here are the mipmaps for the left face of the texture above:

Mipmap levels for one cube face

Each successive mipmap level is half the resolution of the previous one. To make the next mipmap level, you can just average each 2×2 block of pixels from the previous level, and that gives the colour of the single pixel in the lower resolution level. What this means is that each pixel in a smaller mipmap level contains all of the colour information of all of the pixels it represents in the original image. As shown in the picture, if you blow up a smaller mipmap you get a blurry version of the original image.
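As an aside, that 2×2 averaging is simple enough to write down; here's a sketch as a compute shader (SourceMip and DestMip are assumed names, and in practice the graphics API will usually generate mipmaps for you):

Texture2D<float4> SourceMip;      // mip level N
RWTexture2D<float4> DestMip;      // mip level N + 1, half the resolution

[numthreads(8, 8, 1)]
void DownsampleMip(uint3 id : SV_DispatchThreadID)
{
    // Each output pixel is the average of the corresponding 2x2 block of input pixels
    uint2 src = id.xy * 2;
    float4 sum = SourceMip[src + uint2(0, 0)]
               + SourceMip[src + uint2(1, 0)]
               + SourceMip[src + uint2(0, 1)]
               + SourceMip[src + uint2(1, 1)];
    DestMip[id.xy] = sum * 0.25;
}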

Funnily enough, this is exactly what we need for blurry reflections. By changing what resolution mipmap level we sample from (and we are free to choose this in the pixel shader), we can sample from a sharp or a blurry version of the environment map. We could change this level of blurriness per-object, or even per-pixel, to get a variety of reflective surface materials.
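In HLSL, choosing the mip level explicitly is done with SampleLevel. A sketch, assuming a hypothetical per-material roughness value between 0 and 1:

// Level 0 is the sharpest mip; higher levels are blurrier.
// 'roughness' and 'maxMipLevel' are assumed material parameters.
float mipLevel = roughness * maxMipLevel;
float4 reflection = reflectionTexture.SampleLevel(cubeSampler, reflectionDirection, mipLevel);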

Mipmaps in standard texturing

To finish off, I’ll give a quick explanation of why mipmapping is used with standard texturing, as I didn’t cover it earlier.

There are two main reasons why mipmaps are useful, and they are to do with aliasing and memory performance. Both problems only show up when you're drawing a textured polygon that appears very small on the screen. The problem in this case is that you're only drawing a few pixels to the screen, but you're sampling from a much larger texture.

The first problem, aliasing, occurs when a single texture sample for a pixel doesn't include all of the colour information that pixel should represent. For example, imagine a large black and white striped texture drawn only a few pixels wide. When sampling from the full-resolution texture, each pixel must come out either black or white, as those are the only colours in the texture. Each pixel should actually be drawn grey, because each pixel covers multiple texels (a texel is a pixel in a texture), and those texels would average out to grey. Using a smaller, blurrier mipmap level would give the correct grey colour. If mipmapping isn't used, the image will shimmer as the polygon moves (due to where in the texture each sample happens to land).

Aliasing when a highly detailed texture is scaled down and rotated clockwise a bit

The second problem is performance. Due to the way that texture memory works, it is much more efficient to use smaller textures. Using a texture that is far too big not only looks worse due to aliasing, it’s actually a lot slower because much more memory has to be accessed to draw the polygon. Using the correct mipmap level means less memory has to be accessed so it will draw faster.

Next time I’ll conclude this part of the series by talking about basic shadowing techniques.