Lighting is what makes a 3D scene look real. You used to hear talk of the number of polygons or the resolution of the textures in a game, but these days GPUs are powerful enough that we’re rarely limited by these things in any meaningful sense. The focus now is on the lighting. The most detailed 3D model will look terrible if it’s unlit, and the simplest scene can look photorealistic if rendered with good lighting.
I’ll first cover the most basic lighting solutions that have been used in games for years, and later talk about more advanced and realistic methods. This will cover simple ambient and diffuse lighting. But first of all I’ll explain a bit of the theory about lighting in general and how real objects are lit.
A bit of light physics
Eyes and cameras capture images by sensing photons of light. Photons originate from light sources, for example the sun or a light bulb. They then travel in a straight line until they are bounced, scattered, or absorbed and re-emitted by objects in the world. Finally some of these photons end up inside the eye or the camera, where they are sensed.
Not all photons are the same – they come in different wavelengths which correspond to different colours. Our eyes have three types of colour-sensitive cells which are sensitive to photons of different wavelengths. We can see many colours because our brains combine the responses from each of the cell types, for example if both the ‘green’ and ‘red’ cells are stimulated equally we’ll see yellow.
Televisions and monitors emit light at just three wavelengths, corresponding to the red, green and blue wavelengths that our eyes are sensitive to. By emitting different ratios of each colour, almost any colour can be reproduced in the eye. This is a simplification of the real world, because sunlight is made up of photons with wavelengths all across the visible spectrum, and each type of cone cell is sensitive across a range of wavelengths.
Objects look coloured because they absorb different amounts of different frequencies of light. For example, if a material absorbs most of the green and red light that hits it, but reflects or re-emits most of the blue light, it will look blue. Black materials absorb most of the light across all frequencies. White materials absorb very little.
The rendering equation
Feel free to skip this section if you don’t get it. The rendering equation is a big complicated equation that states exactly how much light will reach your eye from any point in the scene. Accurately solving the rendering equation will give a photorealistic image. Unfortunately it’s too complicated to solve exactly, so lighting in computer graphics is about finding better and better approximations to it. The full equation looks something like this:
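For reference, the usual form (as given by Kajiya) is:

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i$$

Here $L_o$ is the light leaving point $x$ in direction $\omega_o$, $L_e$ is the light emitted by the point itself, $L_i$ is the light arriving from direction $\omega_i$, $f_r$ describes how the material reflects light between the two directions, and the integral is taken over the hemisphere $\Omega$ above the surface.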
Don’t worry about understanding it. What it says is that the light leaving a point towards your eye is made up of the light directly emitted by that point, added to all of the other light that hits that point and is subsequently deflected towards your eye.
If you zoom in closer and closer to any material (right down to the microscopic level), at some point it will be effectively ‘flat’. Hence we only need to consider light over a hemisphere, because light coming from underneath a surface would never reach it. Here is a slightly easier-to-understand picture of what the rendering equation represents:
The reason that the rendering equation is so hard to solve is because you have to consider all of the incoming light onto each point. However, all of that light is being reflected from an infinite number of other points. And all the light reflected from those points is coming from an infinite number of other points again. So it’s basically impossible to solve the rendering equation except in very special cases.
Hopefully you’re still with me! Let’s leave the complicated equations behind and look at how this relates to simple lighting in games.
As the rendering equation suggests, there is loads of light bouncing all over the place in the world that doesn’t appear to come from any specific direction. We call this the ambient lighting. At its simplest, lighting can be represented as a single colour. Here is my simple test scene (a box and a sphere on a flat floor plane) lit with white ambient light:
To work out what the final colour of a surface on the screen is, we take the lighting colour for each channel (red, green, blue) and multiply it with the surface colour for that channel. So under white light, represented in (Red, Green, Blue) as (1, 1, 1), the final colour is just the original surface colour. So this is what my test world would look like if an equal amount of light hit every point. As you can see, it’s not very realistic. Constant ambient light on its own is a very bad approximation to world lighting, but we have to start somewhere.
As a quick aside, I mentioned earlier that we only use three wavelengths of light when simulating lighting, as a simplification of the real world where all wavelengths are present. In some cases this can cause problems. The colour of the floor plane in my world is (0, 0.5, 0), meaning the material is purely green and has no interaction with red or blue light. Now we can try lighting the scene with pure magenta light, which is (1, 0, 1), thus it contains no green light. This is what we get:
As you can see, the ground plane has gone completely black. This is because when multiplying the light by the surface colour, all components come out to zero. Usually this isn’t a problem as scenes are rarely lit with such lights, but it’s something to be aware of.
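The channel-wise multiply can be sketched in a few lines of Python (the function name is mine, just for illustration). It reproduces both cases above: white light leaves the floor colour unchanged, and magenta light sends it to black:

```python
def lit_colour(surface, light):
    """Multiply surface and light colours channel by channel (RGB in 0..1)."""
    return tuple(s * l for s, l in zip(surface, light))

green_floor = (0.0, 0.5, 0.0)

# White light (1, 1, 1): the floor keeps its original colour.
print(lit_colour(green_floor, (1.0, 1.0, 1.0)))  # (0.0, 0.5, 0.0)

# Magenta light (1, 0, 1): no green in the light, so every channel is zero.
print(lit_colour(green_floor, (1.0, 0.0, 1.0)))  # (0.0, 0.0, 0.0)
```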
Directional light and normals
The first thing we can do to improve the lighting in our scene is to add a directional light. If we are outside on a sunny day then the scene is directly lit by one very bright light – the sun. In this case the ambient light is made up of sunlight that has bounced off other objects, or the sky. We know how to apply basic ambient, so we can now look at adding the directional light.
It’s time to introduce another concept in rendering, which is the normal vector. The normal vector is a direction that is perpendicular to a surface, i.e. it points directly away from it. For example, the normal vector for the floor plane is directly upwards. For the sphere, the normal is different at every position on the surface and points directly away from the centre.
So where do we get this normal vector from? Last time, I introduced the concept of vertex attributes for polygon rendering, where each vertex contains information on the position, colour and texture coordinates. Well, the normal is just another vertex attribute, and is usually calculated by whichever tool you used to make your model. Normals consist of three numbers, (x, y, z), representing a direction in space (think of it as moving from the origin to that coordinate in space). The length of a normal should always be 1, and we call such vectors unit vectors. Hence the normal vector for the floor plane is (0, 1, 0), which is length 1 and pointing directly up. Normals can then be interpolated across polygons in the same way as texture coordinates or colours, and this is needed to get smooth lighting across surfaces made up of multiple polygons.
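One gotcha worth knowing: interpolating two unit vectors generally doesn’t give you another unit vector, so normals usually need renormalising before use. A minimal sketch of that in Python:

```python
import math

def normalise(v):
    """Scale a 3D vector so its length becomes exactly 1."""
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / length, v[1] / length, v[2] / length)

# A (3, 0, 4) vector has length 5, so normalising divides everything by 5.
print(normalise((3.0, 0.0, 4.0)))  # (0.6, 0.0, 0.8)
```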
We also need another vector, the light direction. This is another unit vector that points towards the light source. In the case of distant light sources such as the sun, the light vector is effectively constant across the whole scene. This keeps things simple.
There are two ways that light can interact with an object and reach your eye. We call these diffuse and specular lighting. Diffuse lighting is when a photon is absorbed by a material, and then re-emitted in a random direction. Specular lighting is when photons are reflected off the surface of a material like a mirror. A completely matt material would only have diffuse lighting, while shiny materials look shiny because they have specular lighting.
For now we will only use diffuse lighting because it is simpler. Because diffuse lighting is re-emitted in all directions, it doesn’t matter where you look at a surface from – it will always look the same. With specular lighting, materials look different when viewed from different angles.
The amount of light received by a surface is affected by the direction of the light relative to the surface – things that face the sun are lighter than things that face away from it. This is because when a surface is at an angle, the same amount of light is spread over a greater area (the same reason the Earth is cold at the poles). We now have the surface normal and the light direction, so we can use these to work out the light intensity. Intuitively, a surface will be brighter the closer together the normal and the light vectors are.
The exact relation is that the intensity of the light is proportional to the cosine of the angle between them. There is a really easy and quick way to work this out, which is to use the dot product. Conveniently enough, the dot product of two unit vectors happens to give you the cosine of the angle between them, which is exactly what we want. So, given two vectors A and B, the dot product is just this:
(A.x * B.x) + (A.y * B.y) + (A.z * B.z)
To get the diffuse lighting at a pixel, take the dot product of the normal vector and the light vector. This will give a negative value if the surface is pointing away from the light, but because you don’t get negative light we clamp it to zero. Then you just add on your ambient light and multiply with the surface colour, and you can now make out the shapes of the sphere and the box:
Better ambient light
At this point I’ll bring in a really simple improvement to the ambient light. The ambient represents the light that has bounced around the scene, but it’s not all the same colour. In my test scene, the sky is blue, and therefore the light coming from the sky is also blue. Similarly, the light bouncing off the green floor is going to be green. When you can make generalisations like this (and you often can, especially with outdoor scenes), we may as well make use of this information to improve the accuracy of the ambient light.
In this case we can simply have two ambient light colours – a light blue for the sky and a green for the ground. Then we can use the surface normal to blend between these colours. A normal pointing up will receive mostly blue light from the sky, and a normal pointing down will receive mostly green light from the floor. We can then apply a blend based on the Y component of the normal. This is the scene using just the coloured ambient:
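One simple way to do the blend (my choice here, not the only option) is to remap the Y component of the normal from the range [-1, 1] to a factor in [0, 1] and linearly interpolate between the two colours:

```python
def hemisphere_ambient(normal, sky_colour, ground_colour):
    """Blend two ambient colours based on which way the surface faces.

    `normal` must be a unit vector. Straight up (y = 1) gives pure sky
    colour, straight down (y = -1) gives pure ground colour.
    """
    t = normal[1] * 0.5 + 0.5  # remap y from [-1, 1] to [0, 1]
    return tuple(g + (s - g) * t for s, g in zip(sky_colour, ground_colour))
```

This is often called hemisphere lighting, and it’s cheap: one multiply-add per pixel on top of the flat ambient we already had.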
And then add the diffuse lighting back on:
Now just combine the lighting with some texture mapping and you’re up to the technology levels of the first generation of 3D polygonal games!
That’s more than enough for this post, so next time I’ll talk about specular lighting.