Apr 02 2013

Last time I spoke about specular lighting, which, combined with the diffuse and ambient lighting from the previous article, means we now have a good enough representation of real-world lighting to get some nice images.

However, this lighting isn’t very detailed. Lighting calculations are based on the orientation of the surface, and the only surface orientation information we have is the normals specified at each vertex. This means that the lighting will always blend smoothly between the vertices, because the normals are interpolated. Drawing very detailed surfaces in this manner would require a huge number of vertices, which are slow to draw. It would be better to vary the lighting across a polygon, and for this we can use normal maps.

Normal maps

This is an example of what a normal map looks like:

Example of a normal map

A normal map looks like a bluey mess, but it makes sense when you understand what the colours mean. What we’re doing with a normal map is storing the orientation of the surface (the surface normal) at each pixel. Remember that the normal is a vector that points directly away from the surface. A 3D vector is made up of three coordinates (X, Y and Z), and conveniently our textures also have three channels (red, green and blue). What this means is that we can use each colour channel in the texture to store one of the components of the normal at each pixel.

We need the normal map to work no matter the orientation of the polygon that is using it. So if the normal map is mapped onto the floor, or a ceiling, or wrapped around a complex shape, it still has to provide useful information. Therefore it’s no use encoding the direction of the normal directly in world-space (otherwise you couldn’t reuse the map on a different bit of geometry). Instead, it is encoded in yet another ‘space’ called tangent space. This is a 3D coordinate system where two of the axes are the U and V axes that texture coordinates are specified in. The third axis is the surface normal.

How tangent space relates to UV coordinates

Encoding a normal in this space is straightforward. The red channel in the texture corresponds to the distance along the U axis, the green channel is the same for the V axis, and the blue channel is the distance along the normal. The distances along U and V can go from -1 to 1 (as we’re encoding a unit vector), so a texture value of 0 represents -1, and 255 (the maximum texture value if we’re using an 8-bit texture) represents +1. Because a surface normal can never face backwards from the surface, the blue channel only needs to encode distances from 0 to 1.
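As a sketch of this encoding in plain Python (not shader code; the function name is made up for illustration), decoding one 8-bit texel back into a tangent-space normal looks like this:

```python
def decode_normal(r, g, b):
    """Decode an 8-bit RGB texel into a tangent-space normal,
    following the encoding described above."""
    u = r / 255.0 * 2.0 - 1.0   # red:   -1..1 along the U axis
    v = g / 255.0 * 2.0 - 1.0   # green: -1..1 along the V axis
    n = b / 255.0               # blue:   0..1 along the surface normal
    return (u, v, n)

# A "flat" texel of (128, 128, 255) decodes to approximately (0, 0, 1),
# i.e. a normal pointing straight out of the surface.
flat = decode_normal(128, 128, 255)
```

This is also why flat areas of a normal map look pale blue: the colour (128, 128, 255) is exactly the unperturbed surface normal.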

Now we can understand what the colours in the normal map mean. A pixel with lots of red is facing right, and with little red is facing left. A pixel with lots of green is facing up, and with little green is facing down. Most pixels have a lot of blue, which means they’re mainly facing out along the normal (as you’d expect, as this is the average surface orientation).

Shading with normal maps

So now we have a normal map, and it’s mapped across our object in world space. We can read the texture at each pixel to give us a tangent space normal, but the lighting and view directions are specified in world space. We need to get all of these vectors into the same space, and for this we need a matrix that converts between tangent and world space. Luckily, that’s fairly easy to get.


First a quick diversion into rotation matrices. I’ve talked about 4×4 transform matrices for transforming from one 3D space to another, but the top left 3×3 part of the matrix is all you need to perform just the rotation. Because we only want to rotate the normal we don’t need to apply any translation, so we just need a rotation matrix.

Green is the translation part of a transform matrix. Red is the rotation part, made up of the X, Y and Z axes in the new space.

Rotation matrices between coordinate systems with three perpendicular axes (i.e. the usual ones we use in graphics) have a couple of nice properties. The first is that the columns are just the original axes but transformed by the rotation we’re trying to represent, i.e. the first column is where the X axis would be after the rotation, the second column where the Y axis would be, and the third column where the Z axis would be.

The second nice property is that the inverse of a rotation matrix is its transpose. This means that the rows represent the three axes with the inverse rotation applied. If you’re interested, there is a more in-depth explanation of rotation matrices here.
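Both properties can be checked numerically. Here’s a plain Python sketch (not shader code) using a 90° rotation about the Z axis:

```python
import math

def mat_mul_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

# A rotation of 90 degrees about the Z axis.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c,  -s,  0.0],
     [s,   c,  0.0],
     [0.0, 0.0, 1.0]]

# Property 1: the first column is where the X axis ends up after the
# rotation (likewise for the Y and Z columns).
x_rotated = mat_mul_vec(R, [1.0, 0.0, 0.0])
first_column = [R[i][0] for i in range(3)]
assert all(abs(a - b) < 1e-9 for a, b in zip(x_rotated, first_column))

# Property 2: the transpose undoes the rotation, i.e. it is the inverse.
back = mat_mul_vec(transpose(R), x_rotated)
assert all(abs(a - b) < 1e-9 for a, b in zip(back, [1.0, 0.0, 0.0]))
```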

Tangent to world space

So how does this help us? We need to build a rotation matrix to convert between tangent space and world space. The first thing to do is to add a couple more vertex attributes – these are the tangent and the binormal vectors. These are similar to the normal, but they define the other two axes in tangent space. Remember that these are defined by how the UV texture coordinates are mapped onto the geometry. Your modelling package should be able to export these vertex attributes for you.

Now, we need to use these to get the light, view and normal vectors into the same space. In this case we’ll transform the view and light directions into tangent space in the vertex shader (although you could instead transform the normals into world space in the pixel shader, if that makes your shader simpler).

As shown above, the tangent-to-world matrix is just the 3×3 matrix where the columns are the Tangent (X axis in the normal map), Binormal (Y axis) and Normal (Z axis), in that order. To get the world-to-tangent matrix, just transpose it so the rows are Tangent, Binormal and Normal instead:

World to tangent space matrix, made up of the tangent, binormal and normal vectors in world space
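Because the rows of the world-to-tangent matrix are just the tangent, binormal and normal in world space, applying it amounts to three dot products. Here’s a plain Python sketch (the axes are made up for illustration, assuming a floor polygon with tangent along world X, binormal along world Y and normal along world Z):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def world_to_tangent(tangent, binormal, normal, v_world):
    """Transform a world-space vector into tangent space. The rows of
    the matrix are T, B and N, so each tangent-space component is a
    dot product with the corresponding axis."""
    return [dot(tangent, v_world),
            dot(binormal, v_world),
            dot(normal, v_world)]

# A light direction pointing straight down in world space becomes
# (0, 0, -1) in the floor's tangent space.
light = world_to_tangent([1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [0.0, 0.0, -1.0])
```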

Then you can use this to transform your light and view vectors! In case it helps, here’s some vertex shader HLSL code to do all this:

// Transform the normal, tangent and binormal into world space. ModelViewMtx
// is a 4x4 matrix; setting w = 0 below ensures its translation part is ignored.
float4 normalWorld = float4(input.normal, 0.0f);
normalWorld.xyz = normalize(mul(normalWorld, ModelViewMtx).xyz);
float4 tangentWorld = float4(input.tangent, 0.0f);
tangentWorld.xyz = normalize(mul(tangentWorld, ModelViewMtx).xyz);
float4 binormalWorld = float4(input.binormal, 0.0f);
binormalWorld.xyz = normalize(mul(binormalWorld, ModelViewMtx).xyz);

// Build the world-to-tangent matrix (transpose of tangent-to-world).
float3x3 worldToTangentSpace =
    float3x3(tangentWorld.xyz, binormalWorld.xyz, normalWorld.xyz);

// Transform the light and view directions.
output.lightDirTangentSpace = mul(worldToTangentSpace, lightDirWorldSpace);
output.viewDirTangentSpace  = mul(worldToTangentSpace, viewDirWorldSpace);

In the pixel shader you read the normal map the same as any other texture. Remap the X and Y components from (0, 1) range to (-1, 1) range, and then perform lighting calculations as usual using this normal and the transformed view and light vectors.
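As a plain illustration of the pixel-shader maths (not HLSL; the values are made up for the sketch), remapping the sampled normal and computing a diffuse term might look like:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Sampled texel, as 0..1 values straight from the texture.
texel = (0.5, 0.5, 1.0)

# Remap X and Y from (0, 1) to (-1, 1); renormalise to be safe.
normal = normalize([texel[0] * 2.0 - 1.0,
                    texel[1] * 2.0 - 1.0,
                    texel[2]])

# Diffuse term: N dot L, clamped to zero, with the light direction
# already transformed into tangent space by the vertex shader.
light_dir_tangent = normalize([0.0, 0.0, 1.0])
diffuse = max(0.0, dot(normal, light_dir_tangent))
```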

Here’s my test scene with the normal map on the floor:

And here’s a screenshot with the nicer shading and shadows turned on:

A caveat

One last technical point – the properties of rotation matrices I talked about only hold for purely rotational matrices, between coordinate spaces where the axes are all at right angles. Due to unavoidable texture distortions when they’re mapped to models, this usually won’t be the case for your tangent space. However, it’ll be near enough that it shouldn’t cause a problem except in the most extreme cases. If you renormalise all your vectors in the vertex and pixel shaders then it should all work out alright…

Next time should be a bit simpler when I’ll be talking about environment mapping!
