Andy Fenwick

Andy Fenwick is a graphics programmer, computer gamer, board gamer, science geek and reluctant DIY-er. He is often found in his natural habitat of Ashby-de-la-Zouch listening to gothic music and complaining about government incompetence to his long-suffering wife.

Jan 12 2014
 

Massive Open Online Courses (MOOCs) have really taken off in the last couple of years. I was aware they existed but it was only a few months ago that I took the plunge and signed up for one. I wasn’t really sure what to expect in terms of course content, time requirements and difficulty, so I thought I’d talk about the two I’ve taken so far.

A couple of friends had taken courses on Coursera so that’s where I signed up (although there are many other options). Coursera hosts courses run by many different universities around the world, on all manner of subjects. There are loads of courses starting throughout January so now is a good time to see if anything takes your fancy. Most run for between four and 12 weeks, and generally ask for 4-8 hours per week to watch the lectures and complete the homework (although this is quite variable depending on how easy you find the subject).

The courses aren’t ‘worth’ anything in the sense of traditional qualifications. An interesting development is that some of the more rigorous Coursera courses have a ‘Signature Track’ option where you can pay some money to have your identity verified and get some real university credit for your work. I’ve not looked too much into this though.

So why did I want to do a course in the first place? I’ve read quite a few popular science books on physics, quantum mechanics, string theory and the like but they always shy away from the actual maths (understandable if you want to sell any copies). Without the maths though it’s impossible to understand the subject beyond some vague hand-wavy concepts. I was looking for some way to delve a little deeper into the subject without doing a full physics degree, which is rather impractical when you have a job.

From the Big Bang to Dark Energy

The first course that caught my eye was From the Big Bang to Dark Energy. It’s a short four week course from the University of Tokyo giving an overview of the history of the universe and basic particle physics. The only recommended background knowledge was some simple high school maths, so I wasn’t expecting anything too difficult.

There were a couple of hours of lectures per week which were engaging and easy to understand. They concentrated on general concepts rather than equations (although there were a few equations scattered around), in particular focusing on why we know what we know from a range of recent experiments.

This course was aimed at the more casual learner. It was very light on the maths in the lecture videos, but used the homework questions to introduce a few calculations (mostly just cancelling units and multiplying a few numbers). You could quite happily ignore the maths and still get a pass, and you were allowed nearly unlimited attempts at the questions (hence my final score of 100%).

While a lot of the content was familiar to me I still learnt a few things, and I would recommend this course to anyone looking for a light introduction to the history and evolution of the universe (assuming the course runs again).

Analysis of a Complex Kind

The second course I took was Analysis of a Complex Kind on complex numbers and complex analysis, from Wesleyan University. This was a completely different experience. It was much more formal and rigorous, and felt a lot like a traditional university-level class. I spent probably 6-9 hours per week, which was sometimes a struggle. Even though the course was only six weeks long it felt like quite a commitment (although that may just be a comment on my general level of busyness).

I wasn’t overly familiar with complex analysis before the course outside of a few bits at school and university, but elements of it keep cropping up in my reading so I decided it would be interesting to learn more. You definitely need a strong interest and ability in maths before considering this course, and if you’re going into complex numbers cold then it’ll be a steep learning curve.

It made a change to go back to working with pen and paper, and I got through reams of the stuff by the end. Picking up a pen is something I rarely find myself needing to do these days. The assessments were mainly multiple choice questions, but there’s a deceptively large amount of work needed to find the answers.

One new feature for me in this course was peer-assessed assignments. These were questions that involved drawing graphs, or long-form answers that couldn’t be multiple choice. You can either scan in your work on paper or submit PDFs created directly on computer, and then you’re provided with marking guidelines and have a week to assess four other people’s work. The process isn’t perfect (I saw one or two marking errors) but that’s why everyone is marked four times and averaged. Doing a decent job of marking others’ work actually took a fair chunk of time, longer than I was expecting.

I was pleased with my final score of 94.8% (fractionally missing out on a distinction). It was a good workout for the brain, and even though I’m unlikely to use anything I learnt in this course day to day I suspect it’ll come in handy should I pursue any further maths or physics-based education.

Overall

These types of courses work really well if you can reliably dedicate a few hours per week. I won’t be taking any more for at least a couple of months as they monopolised my free time somewhat (and I have other projects I want to work on), but I’m sure I’ll be back for more.

A lot of the courses seem to be run for the first time by people who haven’t done this kind of thing before, but don’t let that put you off. These two were both well run and offered great learning potential. MOOCs are likely to only improve in the future as people get more used to what works and what doesn’t. Find something you’re interested in and give one a go!

Dec 14 2013
 

The final part I’m going to cover for high dynamic range rendering is an implementation of lens flare. Lens flare is an effect that raises a lot of hate from some people, and it can certainly be overdone. However, when used subtly I think it can add a lot to an image.

Theory of lens flare

Lens flare is another artifact of camera lenses. Not all of the light that hits a lens passes through – some small amount is always reflected back. Most camera systems also use many lenses in series rather than just the one. The result is that light can bounce around inside the system between any combination of the lenses and end up landing elsewhere on the sensor.

If you’re not a graphics professional you could take a look at this paper which aims to simulate actual lenses to give very realistic lens flares. If there’s any possibility that you might find it at all useful then stay away because it’s patent pending (but this is not the place to discuss the broken state of software patents).

We don’t need to simulate lens flare accurately, we can get a good approximation with some simple tricks.

Ghosts

The main component of lens flare is the ‘ghost’ images – the light from the bright bits of the scene bouncing around between the lenses and landing where it shouldn’t. With spherical lenses the light will always land somewhere along the line from the original point through the centre of the image.

The lens flare effect is applied by drawing an additive pass over the whole screen. To simulate ghosts, for every pixel we need to check at various points along the line through the centre of the screen to see if any bright parts of the image will cause ghosts here. The HLSL code looks something like this:

float distances[8] = {0.5f, 0.7f, 1.03f, 1.35f, 1.55f, 1.62f, 2.2f, 3.9f};
float rgbScalesGhost[3] = {1.01f, 1.00f, 0.99f};

// Vector to the centre of the screen.
float2 dir = 0.5f - input.uv;

for (int rgb = 0; rgb < 3; rgb++)
{
    for (int i = 0; i < 8; i++)
    {
        float2 uv = input.uv + dir*distances[i]*rgbScalesGhost[rgb];
        float colour = texture.Sample(sampler, uv)[rgb];
        ret[rgb] += saturate(colour - 0.5f) * 1.5f;
    }
}

The eight distance values control where the ghosts will appear along the line. A value of 1.0 will always sample from the centre of the screen, values greater than one will cause ghosts on the opposite side of the screen and values less than one will create ghosts on the same side as the original bright spot. Just pick a selection of values that give you the distribution you like. Real lens systems will give a certain pattern of ghosts (and lots more of them), but we’re not worrying about being accurate.

This is a simpler set of four ghosts from the sun, showing how they always lie along the line through the centre:

lensflare_ratios

Four ghosts from the sun, projected from the centre of the screen

The ghost at the bottom right had a distance value of 1.62. You can see this by measuring the ratio of distance to the centre of the screen in the image above.

This next image is using eight ghosts with the code above. You can’t see the ghost for value 1.03 as this is currently off-screen (values very near 1.0 will produce very large ghosts that cover the entire screen when looking directly at a bright light, and are very useful for enhancing the ‘glare’ effect).

You can see the non-circular ghosts as well in this image, as some of the sun is occluded:

lensflare_occluded

Full set of ghosts from an occluded sun

Chromatic aberration

Another property of lenses is that they don’t bend all wavelengths of light by the same amount. Chromatic aberration is the term used to describe this effect, and it leads to coloured “fringes” around bright parts of the image.

One reason that real camera systems have multiple lenses at all is to compensate for this, and refocus all the colours back onto the same point. The internal reflections that cause the ghosts will be affected by these fringes. To simulate this we can instead create a separate ghost for each of the red, green and blue channels, using a slight offset to the distance value for each channel. You’ll then end up with something like this:

lensflare_chromatic

Chromatic aberration on ghosts

Halos

Another type of lens flare is the ‘halo’ effect you get when pointing directly into a bright light. This code will sample a fixed distance towards the centre of the screen, which gives nice full and partial halos, including chromatic aberration again:

float rgbScalesHalo[3] = {0.98f, 1.00f, 1.02f};
float aspect = screenHeight/screenWidth;

// Vector to the centre of the screen.
float2 dir = 0.5f - input.uv;

for (int rgb = 0; rgb < 3; rgb++)
{
    float2 fixedDir = dir;
    fixedDir.y *= aspect;
    float2 normDir = normalize(fixedDir);
    normDir *= 0.4f * (rgbScalesHalo[rgb]);
    normDir.y /= aspect; // Compensate back again to texture coordinates.

    float colour = texture.Sample(sampler, input.uv + normDir)[rgb];
    halo[rgb] = saturate(colour - 0.5f) * 1.5f;
}
lensflare_halo

Full halo from a central sun

lensflare_halo2

Partial halo from an offset sun

Put together the ghosts and halos and you get something like this (which looks a mess, but will look good later):

lensflare_ghosthalo

Eight ghosts plus halo

Blurring

The lens flares we have so far don’t look very realistic – they are far too defined and hard-edged. Luckily this is easily fixed. Instead of sampling from the original image we can instead use one of the blurred versions that were used to draw the bloom. If we use the 1/16th resolution Gaussian blurred version we instead get something which is starting to look passable:

lensflare_blurred

Ghosts and halo sampling from a blurred texture

Lens dirt

It’s looking better but it still looks very CG and too “perfect”.  There is one more trick we can do to make it look more natural, and that is to simulate lens dirt.

Dirt and smears on the lens will reflect stray light, for example from flares, and become visible. Instead of adding the ghosts and halos directly onto the image, we can modulate them with a lens dirt texture first. This is the texture I’m currently using, which was part of the original article I read about this technique and which I can unfortunately no longer find. If this is yours please let me know!

lensflare_dirt

Lens dirt overlay

This texture is mostly black with some brighter features. This means that most of the flares will be removed, and just the brighter dirt patches will remain. You may recognise this effect from Battlefield 3, where it’s used all the time.
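As a rough sketch of the combine step (the texture and variable names here are placeholders rather than my actual shader), it’s just a multiply before the final additive pass:

// Hedged sketch of the combine: 'ghosts' and 'halo' are the flare colours computed above,
// 'lensDirtTexture' and 'linearSampler' are assumed to be bound elsewhere.
float3 dirt = lensDirtTexture.Sample(linearSampler, input.uv).rgb;
float3 flare = (ghosts + halo) * dirt;   // most of the flare vanishes into the dark parts of the dirt
flare += halo * 0.2f;                    // optionally add a little un-modulated halo back on top
return float4(flare, 1.0f);              // drawn additively over the scene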

You can’t really see the halo when modulating with this lens dirt texture, so we can add a bit more halo on top. This is the final result, as used in my demos:

lensflare_final

Final result

And that’s it for High Dynamic Range rendering, which I think is one of the most important new(-ish) techniques in game rendering in the last few years.

Nov 24 2013
 

Last time I covered HDR render targets, tone mapping and automatic exposure control. Now it’s time to simulate some camera imperfections that give the illusion that something in an image is brighter than it actually is on the screen.

Bloom

The first effect to simulate is bloom. This is when the light from a really bright object appears to “bleed” over the rest of the image. This is an image with no bloom – the sun is just a white circle, and doesn’t look particularly bright:

nobloomexample

No bloom

With a bloom effect the sun looks a lot brighter, even though the central pixels are actually the same white colour:

bloomexample

With bloom

Theory of bloom

Why does this happen? There is a good explanation on Wikipedia but this is the basic idea.

Camera lenses can never perfectly focus light from a point onto another point. My previous diagrams showed the path of the light as straight lines through the lens. What actually happens is that light (being a wave) diffracts through the aperture, creating diffraction patterns. This means the light from a single point lands on the sensor as a bright central spot surrounded by much fainter concentric rings, called the Airy pattern (the rings have been brightened in this picture so you can see them more easily):

airy_pattern

Airy disk

Usually this isn’t a problem – at normal light levels the central peak is the only thing bright enough to be picked up by the sensor, and it fits within one pixel. However, with very bright lights, the diffraction pattern is bright enough to be detected. For anything other than a really tiny light source the individual rings won’t be visible because they’ll all overlap and blur together, and what you get is the appearance of light leaking from bright areas to dark areas.

This effect is pretty useful for us. Because people are used to bright objects blooming, by doing the reverse and drawing the bloom we perceive the object as brighter than it really is on screen.

Implementation

The idea of rendering bloom is the same as for Bokeh depth of field. Recall from the depth of field post that each pixel is actually the shape of the aperture, drawn at varying sizes depending on how in focus it is. So to draw Bokeh ‘properly’ each pixel should be drawn as a larger texture. To draw bloom ‘properly’ you would instead draw each pixel with a texture of the Airy pattern. For dim pixels you would only see the bright centre spot, and for very bright pixels you would see the circles as well.

That’s not very practical though so we can take shortcuts which make it much quicker to draw at the expense of physical accuracy. The main optimisation is to do away with the Airy pattern completely and use a Gaussian blur instead. When you draw many Airy patterns in neighbouring pixels the rings average out and you are left with something very similar to a Gaussian blur:

gaussian

Gaussian blur

The effect we are trying to simulate is bright pixels bleeding over darker neighbours, so what we’ll do is find the bright pixels in the image, blur them and then add them back onto the original image.

To find the bright pixels in the image we take the frame buffer, subtract a threshold value based on the exposure and copy the result into a new texture:

bloom_extract1

The extracted bloom – the original image with a threshold value subtracted
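That extraction pass is only a couple of lines; a minimal sketch (the names are illustrative, and exactly how the threshold relates to the exposure depends on your pipeline):

// Hedged sketch of the bright-pass: keep only what's left after subtracting a threshold.
float3 colour = sceneTexture.Sample(linearSampler, input.uv).rgb;
float threshold = bloomThreshold;              // in practice derived from the current exposure
return float4(max(colour - threshold, 0.0f), 1.0f);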

Then create more textures, each half the size of the previous one, scaling down the brightness slightly with each one (depending on how far you want the bloom to spread). Here are two of the downsized textures from a total of eight:

bloom_extract2

The 1/8th size downsized extracted bloom

bloom_extract3

The 1/64th size downsized extracted bloom

Because we’re not simulating bloom completely accurately, there are a few magic numbers we can tweak (like the threshold and downscaling darkening) to control the overall size and brightness of the bloom effect. Ideally we would work it all out automatically from the Airy disk and camera properties, but this method looks good enough and is more controllable to give the type of image you want.

Now we have all the downsized textures we need to blur them all. I’m using an 11×11 Gaussian blur which is soft enough to give an almost completely smooth image when they’re all added up again. A larger blur would give smoother results but would take longer to draw. The reason for doing the downscaling into multiple textures is that it is much quicker to perform smaller blurs on multiple smaller textures than it is to perform a massive blur on the original sized image.
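For reference, one 1D pass of a small separable Gaussian looks something like this (a sketch – the weights shown are for a 5-tap kernel just to keep it short, the 11-tap version simply has more entries, and texelSize is assumed to be one over the texture dimensions):

// Hedged sketch of one 1D Gaussian pass; run once horizontally and once vertically per texture.
static const float weights[5] = { 0.0625f, 0.25f, 0.375f, 0.25f, 0.0625f };

float3 blurred = 0;
for (int i = 0; i < 5; i++)
{
    float2 offset = float2(texelSize.x * (i - 2), 0.0f);   // use the .y component for the vertical pass
    blurred += weights[i] * bloomTexture.Sample(linearSampler, input.uv + offset).rgb;
}
return float4(blurred, 1.0f);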

After blurring, the two textures above look like this (and similarly for all the others):

bloom_blur2

Blurred 1/8th size bloom

bloom_blur3

Blurred 1/64th size bloom

Then to get the final image we simply add up all of the blurred textures (simple bilinear filtering is enough to get rid of the blockiness), scale it by some overall brightness value and add it back on top of the tonemapped image from last time. The end result will then be something like this, with obvious bloom around the sun but also some subtle bleeding around other bright areas like around the bright floor:

bloom_final
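A hedged sketch of that final combine (the texture array, sampler, ‘tonemapped’ input and brightness constant are placeholders for whatever your setup uses):

// Hedged sketch of the final combine: bilinear filtering upscales each blurred level for free.
Texture2D bloomLevels[8];   // assumed: the eight blurred, downsized textures

float3 bloom = 0;
[unroll]
for (int i = 0; i < 8; i++)
    bloom += bloomLevels[i].Sample(linearSampler, input.uv).rgb;

return float4(tonemapped.rgb + bloom * bloomStrength, 1.0f);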

The great thing about this is that you don’t need to do anything special to make the sun or other bright lights bloom – it’s all just handled automatically, even for ‘accidental’ bright pixels like intense specular highlights.

That’s not quite everything that you can do when rendering bright things. Next time I’ll describe that scourge of late-90s games – lens flare. (It looks better these days…)

Nov 13 2013
 

Last week we went to Reykjavik in the hope of finally spotting the aurora borealis. It was my fourth time in Iceland with no luck on the previous visits (it was generally cloudy) so I was hoping for good weather. Luckily the day we arrived it was completely clear all night, so we were hopeful as we headed out in the car after dinner. As soon as we’d left the lights of the town it was apparent that there was something happening as there was a faint, slightly green, streak across the sky, a little way above the northern horizon.

There were still street lights on the road at this point, so we carried on out into the wilderness and found a layby to stop in (where there was already a coach tour parked up, but it was all we could find in the dark). Away from all the light pollution the aurora was much more visible, and started to grow and move around a bit. Success at last!

There was something in the sky for the whole three hours we were out there, sometimes getting stronger and sometimes fading away. It’s a lot fainter in real life than you see in photos, and the movement is a lot slower than you see in videos, but it’s still really impressive and beautiful.

I wasn’t planning on taking any photos as it’s really hard without decent equipment, but as the display was lasting so long I decided to have a go anyway. This was using a shutter time of 6 seconds, and maximum aperture and ISO. The photos are really quite bad, but they give an idea of what it looked like! The reflections are from the roof of the car, which made do as a poor tripod substitute…

aurora1

aurora2

aurora3

aurora4

aurora5

aurora6

Oct 21 2013
 

The screenshots in my graphics posts are from my raymarching renderer, and mainly consist of boxes and spheres. That’s a bit boring so I thought I’d have a look for something more interesting, and came across the Syntopia blog talking about raymarching 3D Mandelbulb fractals. I plugged the code into my renderer and it worked a treat. Here’s a video I rendered out from it:

It’s not exactly artistically shot (just a programmatically spinning camera and light) but it looks quite nice. I may set up some proper paths and things at some point and do something nicer. (Here is a much nicer animation I found, which was rendered with particles in Lightwave and apparently took nearly an hour per-frame to render. Mine took about 45 mins total.)

Running the code yourself

If you fancy having a play around (or just want to make your GPU overheat) you can download the program. I make no guarantee that the code runs on your system (I’ve only tried it on my Radeon HD7950, but it should work on most DX11 cards), and use this software at your own risk etc.

Download FractalRender.zip

This has the .exe, a couple of textures and two shader files you can edit. The shader files are compiled on load, so you can modify the shaders (in the .fx files, you can just use Notepad or something to edit them) and re-run the .exe. Compile errors should pop up in a box.

Controls

WASD to move, hold the left mouse button to turn and hold the right mouse button to move the sun. Press H to toggle the full help text with the rest of the controls (sun size, depth of field etc).

Shaders

This demo is using a deferred renderer (for no particular reason) so there are two separate shaders you can play with. The fractal generation code is in sdf.fx and calculates and writes out the depth, colour, normals, sun shadow and ambient occlusion into buffers for every pixel. deferredshader.fx applies the lighting. Depth of field, antialiasing and lens flare are applied afterwards.

deferredshader.fx is the simpler of the two. The lighting setup is pretty much copied from this Iñigo Quilez article for the sun, sky and indirect lighting. A physically-based specular term is added (more complex but better looking than the simple specular I wrote about before) and then some fog is applied. Light colours and specular gloss can easily be edited.

sdf.fx controls the geometry generation. The SDF() function is the signed distance function, and takes a world position and returns the distance to the nearest surface. There are a few alternative functions you can try (the others are commented out). SDFMandelbulbAnimated() generates the animated Mandelbulb in the video and is the code from the Syntopia blog. SDFMandelbulbStatic() is an optimised function for generating the static power-8 Mandelbulb, using the optimised code from here. As well as that there are a couple of other functions for fun – some infinite wibbly spheres and the box and two spheres from my earlier videos.
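If you want to experiment with your own shapes, remember that an SDF is just a function from a world position to the distance of the nearest surface. Here’s a minimal illustrative sketch of the kind of thing you could drop into SDF() – this isn’t the Mandelbulb code, just a sphere and a box combined with min():

// Hedged sketch: signed distance to a sphere and a box, combined with min() for the union.
float SDFSphere(float3 p, float radius)
{
    return length(p) - radius;
}

float SDFBox(float3 p, float3 halfSize)
{
    float3 d = abs(p) - halfSize;
    return length(max(d, 0.0f)) + min(max(d.x, max(d.y, d.z)), 0.0f);
}

float SDF(float3 worldPos)
{
    float sphere = SDFSphere(worldPos - float3(0, 1, 0), 1.0f);
    float box    = SDFBox(worldPos - float3(2, 0.5f, 0), float3(0.5f, 0.5f, 0.5f));
    return min(sphere, box);   // nearest of the two surfaces
}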

After using the SDF to find the surface intersection point it calculates the soft shadowing from the sun (which I described here). This can be really slow so there is an option to turn it off completely at the top of the file if your frame rate is too low, or you could tweak the iterations and max radius down a bit. It also calculates an ambient occlusion term using five rays at 45° from each other. This can also be slow, and the same applies.
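For the curious, the soft shadow is the usual raymarching trick of tracking how close the shadow ray passes to any surface; this is a sketch of the general idea rather than exactly what’s in sdf.fx, and the iteration count, epsilon and softness are the sort of numbers you’d tweak for speed:

// Hedged sketch of raymarched soft shadows: the closer the ray passes to a surface,
// the darker (and softer) the shadow.
float SoftShadow(float3 pos, float3 lightDir, float maxDist, float softness)
{
    float shadow = 1.0f;
    float t = 0.02f;                        // start a little off the surface
    for (int i = 0; i < 64; i++)
    {
        if (t >= maxDist)
            break;
        float d = SDF(pos + lightDir * t);
        if (d < 0.001f)
            return 0.0f;                    // hit something: fully in shadow
        shadow = min(shadow, softness * d / t);
        t += d;
    }
    return shadow;
}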

This is just a quick overview. Any specific queries or questions, leave a comment.

Finally, here are a few screenshots I took. Macro photography of fractals gives some quite nice results. Enjoy!

fractal

fractal3

fractal4

Oct 04 2013
 

High Dynamic Range (HDR) rendering is a way of coping with the large possible range of brightness in a scene, and how to render that to a screen. There are a few parts to this which are all required to get good results. In this part I’ll cover the initial rendering, tone mapping and dynamic exposure control. Next time I’ll cover bloom and throw in some lens flare for good measure.

What is HDR rendering?

Do an image search for HDR photography and you’ll see loads of weird and unnatural looking pictures. This completely isn’t what HDR rendering is, although it’s trying to tackle the same issue.

The human eye has a dynamic range of around 1,000,000,000 : 1. That means we can see in starlight, and we can see in light a billion times brighter such as a sunny day. At any given time we can see contrast of up to 10,000 : 1 or so, but our eyes adjust so that this range is useful for whatever we’re looking at. In a bright scene we can’t perceive much detail in dark shadow, and at night a bright light will blind us.

LCD displays have contrast ratios of around 1000 : 1, so a white pixel is only a thousand or so times brighter than a black pixel (depending on the screen). This means we can’t display the billion-to-one contrast of the real world directly on a screen, but it does map pretty well to the range of the eye at any given time.

Rendering in HDR – floating point render targets

The final image that you output to the screen has 24 bits per pixel – 8 bits of accuracy for each of red, green and blue. Renderers traditionally draw to a render target with the same accuracy, so that the result can be displayed directly. This means that only 256 levels of brightness can be stored for each colour channel at any pixel. 256 levels are enough that you can’t really see any banding between the colours when looking at a smooth gradient on the screen, but it’s not enough accuracy to capture a scene with a lot of dynamic range.

Time for a contrived example. Here is a photo of a lamp on my window sill:

The exposure on this photo was 1/20 seconds, which let in enough light that you can see the detail in the trees. However, the lamp itself is pure white and you can’t see any detail. There is no way to tell if the lamp is fairly bright, or really really bright. Let’s see another photo with the exposure reduced to 1/500 seconds.

You can start to see some detail in the lamp now, but you can only just see the outline of the trees. The bulb is still white, even with this short exposure, so you can tell that it’s really bright. One more with the exposure set to 1/1300 seconds:

Now we can see more of the detail in the bulb, and the trees have almost completely disappeared. The camera won’t go any quicker and the bulb is still white, so it must be really really bright.

So you can see that with only 256 intensity levels, you have to lose information somewhere. Either you have a long exposure so you can see darker objects but lose all detail in the bright areas, or a short exposure where you can see detail in bright objects at the expense of darker areas.

To get around this we need to use a higher precision render target with a lot more than 256 intensity levels. The ideal format currently is 16-bit floating point for each colour channel (which if you’re interested can represent values from 0.00006 to 65000 with 11 bits of accuracy in the mantissa). This is more than enough precision for accurately drawing both moonlit nights and blazing days at the same time.  The downsides are that 16-bit render targets require twice as much memory and they’re a bit slower to render into (more data to push around), but on modern hardware, and certainly the next-gen consoles, they’re completely viable to use.

Tone mapping – getting it on the screen

So you’ve rendered your scene into a 16-bit render target. Given that your monitor still wants 8-bit colour values, you need to do some conversion. While your HDR source image is an accurate representation of the world (including all possible light intensities), we need to attempt to simulate what your eyes would actually see so that we can draw it on a screen. This stage is called tone mapping.

Eyes and cameras can adjust to let in more or less light, or be more or less sensitive. On a camera this is controlled by the aperture size, shutter speed and ISO settings, and in the eye you have pupil size and chemical changes in the photoreceptors. This means you can see intensity variation in some small part of the entire dynamic range. Everything darker than this will just look black, and everything brighter will just look white. A tone mapping function is one that can map the entire infinite range of light into a zero-to-one range (and then you multiply by 255 to get values that can be displayed on a screen), while preserving the contrast in the part of the range you’re interested in.

One simple tone mapping operator is the Reinhard operator:

x_{out}=\frac{x}{x+n}

where n is a number that controls the exposure and x is your rendered value. You can see that zero will map to zero, and large values will converge on 1. A larger value chosen for n will make the final image darker (in fact n is the input value that will map to half brightness on your screen).
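In shader code the whole operator is a one-liner; here it is as a simple per-channel sketch (luminance-based variants exist, but this is the form that matches the equation above):

// Hedged sketch: per-channel Reinhard tone mapping, matching x / (x + n) above.
float3 Reinhard(float3 hdrColour, float n)
{
    return hdrColour / (hdrColour + n);
}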

Let’s try it on a rendered image. I’ll use one of my demo scenes from before with no extra post-processing so you can see what’s going on. I need to pick a value for n so I’ll try 0.2:

This is over-exposed, so now let’s try n = 1.0:

That’s pretty good, but the sun is still completely white so let’s see what happens with n = 10:

Like with the lamp photos, the ‘exposure’ is now short enough that the sun isn’t just a white blob (it’s still a blob, but you can now make out the slightly yellow colour).

The Reinhard operator isn’t the only option, and in fact it’s not that good. It desaturates your blacks, making them all look grey. Lots of people have tried to come up with something better, and a good one (which I’m using in my own code) is John Hable’s Filmic Tone Mapping which debuted in Uncharted 2 (and which you can read all about here if you want, including the issues with Reinhard).
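For reference, the commonly quoted form of Hable’s curve looks like this – the constants are the example values from his talk rather than anything I’m claiming is optimal, and the 11.2 white point is likewise just his example:

// Hedged sketch of Hable's filmic curve, using his published example constants.
float3 HableCurve(float3 x)
{
    const float A = 0.15f;  // shoulder strength
    const float B = 0.50f;  // linear strength
    const float C = 0.10f;  // linear angle
    const float D = 0.20f;  // toe strength
    const float E = 0.02f;  // toe numerator
    const float F = 0.30f;  // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float3 FilmicTonemap(float3 hdrColour, float exposure)
{
    const float W = 11.2f;                              // linear white point
    float3 curr = HableCurve(hdrColour * exposure);
    float3 whiteScale = 1.0f / HableCurve(float3(W, W, W));
    return curr * whiteScale;
}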

Swapping to Filmic Tone Mapping we get this, where you can see a lot more contrast and saturation in the colours:

Automatic Exposure Control

We can now render our HDR image to a screen, but we’re relying on this magic exposure value. It would be really nice if we could handle this automatically, in the same way as our eyes. In fact we can do something quite similar.

When your eyes see a really bright scene they automatically adjust by closing the pupil to let in less light. In a dark environment the pupil reopens, but more slowly. To simulate this we first need to know how bright our rendered scene is. This is easy to do – we can just add up all the pixels in the image and divide by the number of pixels, which will get you the average brightness.

[To be technical, you should actually use the log-average luminance – take the log of the luminance of each pixel, average those, and then exponentiate it again. When doing a straight average a few really bright pixels can skew the result noticeably, but this has much less effect on a log-average. Also, for performance reasons you don’t actually add up all the pixels directly – instead write the log-average values into a half-resolution texture and then generate mipmaps to get down to a single pixel, which you then exponentiate to give the same result.]
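The shader side of that might look something like this (a sketch – the texture names, samplers and the 1×1 mip lookup are placeholders for however your engine plumbs it together):

// Hedged sketch: write out log luminance, then let mipmap generation do the averaging.
float3 colour = sceneTexture.Sample(linearSampler, input.uv).rgb;
float luminance = dot(colour, float3(0.2126f, 0.7152f, 0.0722f));
return log(max(luminance, 0.0001f));   // max() avoids log(0) on black pixels

// Later, after generating mips down to 1x1:
//   averageLuminance = exp(logLumTexture.SampleLevel(pointSampler, float2(0.5f, 0.5f), lowestMip).r);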

Then you need to pick a target luminance, which is how bright you want your final image to be. Applying your tone mapping (with your current exposure value) to your average input luminance will give you your average output luminance. Now we can set up a feedback loop between frames – if your scene is coming out too bright, reduce the exposure for the next frame, scaling the reduction by how far off you are. Similarly if it’s too dark, increase the exposure for the next frame. You can go further and increase the exposure more slowly than you decrease it, to simulate the eye adjusting quicker to bright scenes.
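The feedback loop itself is only a few lines. This is a sketch of the idea rather than my exact code – the target value, the rates and the Tonemap() call are all placeholders for your own setup:

// Hedged sketch of the exposure feedback loop, evaluated once per frame.
float targetLuminance = 0.4f;                                   // how bright we want the output to average
float currentOutput   = Tonemap(averageLuminance, exposure);    // average output at the current exposure
float error           = targetLuminance / max(currentOutput, 0.0001f);

// Adapt faster when the scene gets brighter than when it gets darker, like the eye.
float rate        = (error < 1.0f) ? fastAdaptRate : slowAdaptRate;
float newExposure = exposure * lerp(1.0f, error, saturate(rate * deltaTime));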

And now you never need to worry about your image being too light or dark ever again – it’s all taken care of automatically! This is another major advantage of using HDR rendering – it doesn’t matter how bright any of the lights in your scene are (in absolute terms) as long as they’re correct relative to one another.

There’s more…

The image is starting to look good, but the sun is still just a white circle. Next time I’ll talk about using bloom to ‘go brighter than white’…

Sep 23 2013
 

I was pleased with how my miniatures photos came out, so this is my cheap, cheerful and minimal-effort method of photography.

What you’ll need

  • A camera –  any basic compact will do, as long as it has a manual settings mode.
  • Tripod – or at least some way of keeping the camera still.
  • A sheet of A1-sized thin white card – I found a sheet for £1.75 in Hobbycraft.
  • A desk lamp.
  • A daylight bulb – these shine white instead of the yellow of normal bulbs and cost about £10 at Hobbycraft (and if you’re painting using a standard yellow lamp, just buy one of these now, you won’t regret it!)
  • Some other form of light, either another lamp or a bright room.

The most important thing on this list (apart from the camera) is the daylight bulb. Balance the camera precariously, improvise a white sheet, whatever, but without a good bulb your photos will look horrible!

 

Room setup

Put the card on a flat surface and lean the back on something to make a nice curved surface. Put your miniatures on the card. Set up the lamp out front, shining more-or-less straight forwards. Put your camera on the tripod, maybe a foot or so from the miniature. Something like this (not ideal because some of my white card has been cut off and used for something…):

You want to minimise shadows, which is why you put the lamp in front and not off to the side. Multiple lamps from different angles help here, as does having a big window nearby and a bright day. But don’t use direct sunlight or it’ll cast shadows again. If you’re serious you can make a light box like this one which should give better results than I’ve managed to get (but requires some non-negligible amount of effort to make, plus storage space and more lamps than I have).

 

Camera setup

This style of photography is basically the ideal environment in every possible way – you have a completely static scene, full control of everything in it, controllable lighting and as long as you need.

First put your camera in macro mode (little flower icon) – this enables it to focus up close, and we’re very close here. Then put it in manual mode and use these settings:

  • F-Stop – this controls the aperture size. Set it as high as it will go. This makes the aperture as small as possible, which has the effect of making the in-focus depth range as large as possible (you may be able to see this from the diagram in my depth of field post). This will help keep both the front and the back of your miniature in focus in your photo.
  • ISO – this controls the sensitivity of the light sensor. Set this as small as possible. This means that a lot of light has to reach each pixel before it registers, which reduces the noise in your image (more light means that the relative random differences between neighbouring pixels are smaller).
  • Shutter speed/exposure – change this until your photos come out at the right brightness. With the tiny aperture and low ISO you’ll need a relatively long exposure, maybe 1/10th second.
  • Delay mode – you want to set a delay between pressing the shutter release button and it taking the photo. This is because you’ll move the camera slightly when pressing it which will blur your image. My camera has a two second delay option which is more convenient than the standard ten second delay.

Then just snap away, adjusting the shutter speed until you get the right exposure. I prefer to slightly over expose rather than have it too dark, to get a nice white background and bright colours. You can tell I’ve taken photos at different times without a light box because the background is whiter in some images than others, but I can live with that.

And this is a shot I just quickly took. The guy on the right is a little blurry because the aperture doesn’t go particularly small on my camera, at least in macro mode. That could be fixed by moving the camera backwards a bit to reduce the relative depths. There also isn’t much ambient light today so the background is quite blue, but again that would be fixed with a light box.

Sep 16 2013
 

I’ve got a reasonably well painted Warhammer 40K Chaos Marine army, so I thought I’d show off a few pictures (click through for larger images). I’ve not really found the time to do any painting for over a year now (too much blogging, programming, gaming etc) but some of the new Chaos Marine miniatures look really nice so I like to think I’ll get round to painting some eventually.

Starting off with my first HQ unit, an old Daemon Prince with wings. The skin was the first larger painted area that I managed to get a fairly smooth finish on.

This is my other large HQ unit, a Chaos Lord on a Manticore (which I ran as a counts-as Daemon Prince because it didn’t feel right to have two Princes in the same army). It’s definitely my favourite and best painted model. I don’t really like the pose of the original model with both arms lunging forwards, but rotating the right arm down gave it a really dynamic feel of pushing its way through the ruins. Trim the rocks off the back foot to leave it free standing, swap out the Lord’s arms for 40k arms and job done.

The skin of the Manticore turned out really well, and it’s the first (and possibly only) time that I managed to get some really smooth layered blending working. I’d like to work on my smooth layering more in the future (although I suspect this is just a function of time spent – I remember it taking a whole afternoon to just do the skin).

I’d wanted a set of Lightning Claw Terminators since about 1995, and these guys all look suitably menacing.

And they need a Land Raider to ride around in. It’s amazing how much difference a bit of paint chipping makes to the apparent realism of the model. I think the mud effect works well – that just involves getting some really watered down brown paint and sploshing it all over the bottom section with a big brush, not technical at all (and it’s slightly worrying when you do it, potentially ruining your careful painting underneath…).

The Thousand Sons give some opportunity to get a bit of colour into the army. Blue seems to be a really forgiving colour for blending and highlighting, I don’t know why. I try to theme all of my Rhinos so you know what squad they’re carrying, so I added some warp-flame and lanterns (some spare Fantasy bits I had lying around).

The Berserkers were some of the first models I assembled and painted since coming back to the hobby as an adult, so some of the poses are a bit weird. Never underestimate the value of good posing – more important than the paint job for getting good looking models.

Some normal marines. The Black Legion colour scheme is fairly quick to paint – the most time consuming bit is all the silver and gold edging (straight on top of a black spray undercoat).

I ‘borrowed’ this idea for the Vindicator from one I saw online a long time ago – the body pile from the Corpse Cart kit fits almost perfectly on the dozer blade, and then pack all the holes with rubble and sand.

The Daemonettes were a vaguely successful attempt at painting skin and brighter colours, but they look fairly messy close up (I’d like to try another batch one day). The purple claws came out really well, which I think was mainly just through drybrushing.

A couple of Obliterators. It was all the rage to have loads of these guys but the metal models are such a pain to stick together that I couldn’t face doing any more after the first two (plus they’re really expensive).

I accidentally ended up with a few too many Dreadnoughts (currently up to six I believe) so I thought I’d better paint one. Nothing special but it looks fine on the table.

I did a few Orks early on to make a change from painting black. I like to keep them nice and (overly) bright – I’ve seen darker green Orks on the tabletop and they become quite hard to distinguish any details.

More recently I felt like trying something a bit different, so I tried one of the old Inquisitor models (which are twice the size of the normal miniatures). I’m quite pleased by how it turned out, apart from the green on the back of the cloak which isn’t as smooth as I was intending (hence no pictures of the back!).

 

Sep 11 2013
 

In this previous post I talked about Bokeh depth of field, where it comes from and why it is different to the type of fake depth of field effects you get in some (usually older) games. In this slightly more technical post I’ll be outlining a nice technique for rendering efficient depth of field, which I use in my demo code, taken from this EA talk about the depth of field in Need For Speed: The Run.

The main difference is the shape of the blur – traditionally, a Gaussian blur is performed (a Gaussian blur is a bell-shaped blur curve), whereas real Bokeh requires a blur into the shape of the camera aperture:

Bokeh blur on the left, Gaussian on the right

The first question you might be asking is why are Gaussian blurs used instead of more realistic shapes? It comes down to rendering efficiency, and things called separable filters. But first you need to know what a normal filter is.

Filters

You’re probably familiar with image filters from Photoshop and similar – when you perform a blur, sharpen, edge detect or any of a number of others, you’re running a filter on the image. A filter consists of a grid of numbers. Here is a simple blur filter:

\left(\begin{array}{ccc}\frac{1}{16}&\frac{2}{16}&\frac{1}{16}\\\frac{2}{16}&\frac{4}{16}&\frac{2}{16}\\\frac{1}{16}&\frac{2}{16}&\frac{1}{16}\end{array}\right)

For every pixel in the image, this grid is overlaid so that the centre number is over the current pixel and the other numbers are over the neighbouring pixels. To get the filtered result for the current pixel, the colour under each of the grid elements is multiplied by the number over it and then they’re all added up. So for this particular filter you can see that the result for each pixel will be 4 times the original colour, plus twice each neighbouring pixel, plus one of each diagonally neighbouring pixel, all divided by 16 so it sums to one again. Or more simply, blend some of the surrounding eight pixels into the centre one.
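In shader terms, applying a small filter like this is just a pair of nested loops over the neighbourhood; a sketch with illustrative names, where texelSize is assumed to be one over the image dimensions:

// Hedged sketch: apply a 3x3 filter kernel to the image.
static const float kernel[3][3] =
{
    { 1.0f/16, 2.0f/16, 1.0f/16 },
    { 2.0f/16, 4.0f/16, 2.0f/16 },
    { 1.0f/16, 2.0f/16, 1.0f/16 },
};

float3 result = 0;
for (int y = -1; y <= 1; y++)
    for (int x = -1; x <= 1; x++)
        result += kernel[y + 1][x + 1] *
                  sourceTexture.Sample(pointSampler, input.uv + float2(x, y) * texelSize).rgb;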

As another example, here is a very basic edge detection filter:

\left(\begin{array}{ccc}-1&-1&-1\\-1&8&-1\\-1&-1&-1\end{array}\right)

On flat areas of the image the +8 of the centre pixel will cancel with the eight surrounding -1 values and give a black pixel. However, along the brighter side of an edge, the values won’t cancel and you’ll get bright output pixels in your filtered image.

You can find a bunch more examples, and pictures of what they do, over here.

Separable filters

These example filters are only 3×3 pixels in size, but they need to sample from the original image nine times for each pixel. A 3×3 filter can only be affected by the eight neighbouring pixels, so will only give a very small blur radius. To get a nice big blur you need a much larger filter, maybe 15×15 for a nice Gaussian. This would require 225 texture fetches for each pixel in the image, which is very slow!

Luckily some filters have the property that they are separable. That means that you can get the same end result by applying a one-dimensional filter twice, first horizontally and then vertically. So first a 15×1 filter is used to blur horizontally, and then the filter is rotated 90 degrees and the result is blurred vertically as well. This only requires 15 texture lookups per pass (as the filter only has 15 elements), giving a total of 30 texture lookups. This will give exactly the same result as performing the full 15×15 filter in one pass, except that one required 225 texture lookups.

Original image / horizontal pass / both passes

Unfortunately only a few special filters are separable – there is no way to produce the hard-edged circular filter at the top of the page with a separable filter, for example. A size n blur would require the full n-squared texture lookups, which is far too slow for large n (and you need a large blur to create a noticeable effect).

Bokeh filters

So what we need to do is find a way to use separable filters to create a plausible Bokeh shape (e.g. circle, pentagon, hexagon etc). Another type of separable filter is the box filter. Here is a 5×1 box filter:

\left(\begin{array}{ccccc}\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}\end{array}\right)

Apply this in both directions and you’ll see that this just turns a pixel into a 5×5 square (and we’ll actually use a lot bigger than 5×5 in the real thing). Unfortunately you don’t get square Bokeh (well you might, but it doesn’t look nice), so we’ll have to go further.
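The building block for everything that follows is a blur along a single direction. Here’s a hedged sketch of what one of these passes might look like – the helper name, sample count and globals are illustrative rather than taken from my shader:

// Hedged sketch: box blur along an arbitrary direction (in texels), the building block
// for the skewed and hexagonal filters below.
float3 BlurAlongDirection(Texture2D tex, float2 uv, float2 dirTexels, int numSamples)
{
    float3 sum = 0;
    for (int i = 0; i < numSamples; i++)
    {
        float2 offset = dirTexels * i * texelSize;
        sum += tex.Sample(linearSampler, uv + offset).rgb;
    }
    return sum / numSamples;
}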

One thing to note is that you can skew your square filter and keep it separable:

Then you could perhaps do this three times in different directions and add the results together:

And here we have a hexagonal blur, which is a much nicer Bokeh shape! Unfortunately doing all these individual blurs and adding them up is still pretty slow, but we can do some tricks to combine them together. Here is how it works.

First pass

Start with the unblurred image.

Original image

Perform a blur directly upwards, and another down and left (at 120°). You use two output textures – into one write just the upwards blur:

Output 1 – blurred upwards

Into the other write both blurs added together:

Output 2 – blurred upwards plus blurred down and left

Second pass

The second pass uses the two output images from above and combines them into the final hexagonal blur. Blur the first texture (the vertical blur) down and left at 120° to make a rhombus. This is the upper left third of the hexagon:

Intermediate 1 – first texture blurred down and left

At the same time, blur the second texture (vertical plus diagonal blur) down and right at 120° to make the other two thirds of the hexagon:

Intermediate 2 – second texture blurred down and right

Finally, add both of these blurs together and divide by three (each individual blur preserves the total brightness of the image, but the final stage adds together three lots of these – one in the first input texture and two in the second  input texture). This gives you your final hexagonal blur:

Final combined output

Controlling the blur size

So far in this example, every pixel has been blurred into the same sized large hexagon. However, depth of field effects require different sized blurs for each pixel. Ideally, each pixel would scatter colour onto surrounding pixels depending on how blurred it is (and this is how the draw-a-sprite-for-each-pixel techniques work). Unfortunately we can’t do that in this case – the shader is applied by drawing one large polygon over the whole screen so each pixel is only written to once, and can therefore only gather colour data from surrounding pixels in the input textures. Thus for each pixel the shader outputs, it has to know which surrounding pixels are going to blur into it. This requires a bit of extra work.

The alpha channel of the original image is unused so far. In a previous pass we can use the depth of that pixel to calculate the blur size, and write it into the alpha channel. The size of the blur (i.e. the size of the circle of confusion) for each pixel is determined by the physical properties of the camera: the focal distance, the aperture size and the distance from the camera to the object. You can work out the CoC size by using a bit of geometry which I won’t go into. The calculation looks like this if you’re interested (taken from the talk again):

CoCSize = z * CoCScale + CoCBias
CoCScale = (A * focalLength * focalPlane * (zFar - zNear)) / ((focalPlane - focalLength) * zNear * zFar)
CoCBias = (A * focalLength * (zNear - focalPlane)) / ((focalPlane - focalLength) * zNear)

[A is aperture size, focal length is a property of the lens, focal plane is the distance from the camera that is in focus. zFar and zNear are from the projection matrix, and all that stuff is required to convert post-projection Z values back into real-world units. CoCScale and CoCBias are constant across the whole frame, so the only calculation done per-pixel is a multiply and add, which is quick. Edit – thanks to Vincent for pointing out the previous error in CoCBias!]

In the images above, every pixel is blurred by the largest amount. Now we can have different blur sizes per-pixel. Because for any pixel there could be another pixel blurring over it, a full sized blur must always be performed. When sampling each pixel from the input texture, the CoCSize of that pixel is compared with how far it is from the pixel being shaded, and if it’s bigger then it’s added in. This means that in scenes with little blurring there are a lot of wasted texture lookups, but this is the only way to simulate pixel ‘scatter’ in a ‘gather’ shader.

Per-pixel blur size – near blur, in focus and far blur

Another little issue is that blur sizes can only grow by a whole pixel at a time, which introduces some ugly popping as the CoCSize changes (e.g. when the camera moves). To reduce this you can soften the edge – for example if sampling a pixel 5 pixels away, blend in the contribution as the CoCSize goes from 5 to 4 pixels.

Near and far depth of field

There are a couple of subtleties with near and far depth of field. Objects behind the focal plane don’t blur over things that are in focus, but objects in front do (do an image search for “depth of field” to see examples of this). Therefore when sampling to see if other pixels are going to blur over the one you’re currently shading, make sure it’s either in front of the focal plane (CoCSize is negative) or the currently shaded pixel and the sampled pixel are both behind the focal plane and the sampled pixel isn’t too far behind (in my implementation ‘too far’ is more than twice the CoCSize).
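Putting those rules together, the per-sample test inside the gather loop ends up looking something like this. It’s a sketch of the logic rather than my exact shader – in particular the ‘twice the CoCSize’ test is written here against the shaded pixel’s CoC, and all the names are illustrative (negative CoCSize marks a pixel in front of the focal plane):

// Hedged sketch of the per-sample test inside the gather loop.
// sampleCoC  - CoC size stored in the alpha channel of the sampled pixel
// centreCoC  - CoC size of the pixel being shaded
// dist       - distance in pixels from the shaded pixel to the sample
float weight  = saturate(abs(sampleCoC) - dist + 1.0f);  // soft edge: fades in over one pixel
bool nearBlur = sampleCoC < 0.0f;                        // in front of the focal plane: blurs over anything
bool farBlur  = centreCoC > 0.0f && sampleCoC > 0.0f     // both behind the focal plane...
             && sampleCoC < 2.0f * centreCoC;            // ...and the sample isn't too far behind

if ((nearBlur || farBlur) && weight > 0.0f)
{
    colourSum += sampleColour * weight;
    weightSum += weight;
}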

[Edit: tweaked the implementation of when to use the sampled pixel]

This isn’t perfect because objects at different depths don’t properly occlude each others’ blurs, but it still looks pretty good and catches the main cases.

And finally, here’s some shader code.

Aug 21 2013
 

Licenced board games tend to follow a similar pattern to licenced video games: most of them are a bit rubbish, but a few are great and really make the most of their source material. Here are two of those.

Spartacus: A Game of Blood & Treachery

The Spartacus board game is based on the TV series and sees up to four players take the roles of the heads of battling Roman houses, engaging in plots, backstabbing, market auctions and, of course, epic gladiator battles. I’ve not actually watched any of the TV series, but it’s one of the most thematic games I’ve played and I quickly fell into the role of a great Dominus fighting for supremacy.

First in each turn is the Intrigue phase, where you draw and play cards which represent the plots and schemes of your house. For example, slaves can be set to work to bring in money, opposing gladiators can be poisoned and guards can be posted to keep your own assets safe.

Then it’s on to the Market phase where there are blind auctions to buy more gladiators and slaves, or maybe some armour and weapons – everyone hides their gold, then holds however much they want to bid in their hand, before everyone reveals at once. The winner pays all the gold they bid, so how much do you want that shield? Bid low and hope for a bargain, or bid high to ensure you win and risk  massively overpaying? Finally everyone bids once more for the right to host the gladiator fight.

Collecting slaves seems to be the way to go. My treasury is overflowing from all the cash they bring in!

The Arena phase is the action-packed finale of each turn. The host of the games chooses two players to send gladiators into the arena, to battle for the glory of their house. Once combatants are declared, everyone gets involved again by placing bets on who’s going to win and whether the loser gets seriously injured or even decapitated. Then it’s on to the fight itself.

The combat mechanics are clever. Each gladiator starts with a number of attack, defence and speed dice. When attacking, both players roll their attack and defence dice respectively, and dice are compared highest to lowest (similar to Risk). For every undefended hit, the defender has to lose one of their dice for the rest of the combat, as they get injured and become weaker. Do you lose your speed and attack, and go for the long game while you wear down your opponent? Or throw caution to the wind and sacrifice your defence to maintain a hopefully lethal flurry of attacks (if you’re me: yes)?

When an attack leaves you with only two dice your poor gladiator has been beaten into submission. Be left with just one and it’ll be a while before he’ll recover from those wounds. And if you lose your last one he loses his head with it. Bad news for you, but payday for anyone who saw it coming. And if your losing gladiator survives – well, let’s hope you haven’t annoyed the host recently as he has to give the famous thumbs up or thumbs down.

So what are the best bits that make me recommend this game?

  1. Fairly simple rules. A few cards to play, easy market system and streamlined arena combat make it easy to teach new players.
  2. Auctions. These are great in any game, and makes it self-balancing. Does everyone think Crixus is overpowered? Then expect to pay a lot for him in your next game. Javelins are useless? Expect to pick up a bargain. And they give plenty of moments of tension and theatricality.
  3. Player interaction. Everyone is involved at all stages of the game, with joint plots in the Intrigue phase and simultaneous bidding on the market. Even in the Arena, cheering on the gladiator you’ve backed to win keeps it interesting for those not fighting.
  4. Controllable game length. By changing how many points you need to win you can start a game that’ll be done in an hour or an epic that goes on for three.
  5. Power-tripping as you decide whether a defeated gladiator was worthy enough to fight another day.

Also intriguing is the fact that it’s a ‘Mature’-rated game. Not because being ‘mature’ makes it good (if I play GTA it’s in spite of the (im)mature swearing and violence, not because of it), but because I never really considered that board games could have age ratings…

Rob with the best card in any board game ever

It’s not a perfect game, and my main complaint is that because it’s a race to get to 12 Influence points it all goes a bit Munchkin at the end. If you’ve not played Munchkin, what happens when someone is about to win is that everyone brings out the cards they’ve been hoarding all game that knock points off them or otherwise screw with their plans. Similarly with Spartacus, it’s probably the third person who tries to win who’ll actually succeed, once all the plan-foiling cards have been drawn out.

But apart from that it’s still great fun, especially if everyone gets into character a bit and plays along with the theme.

 

A Game of Thrones

I’ll be honest. A Game of Thrones is not a game you’ll want to play every day. Or every week. Or even every month. We play it once per year, at New Year’s. It’s long, challenging, exhausting and possibly friendship-ending. It’s also fantastic.

The concept is simple enough – take control of House Stark, Lannister, Baratheon, Greyjoy or Tyrell (and Martell with the expansion) and muster and send forth your armies to conquer Westeros. Hold enough cities at any time and you win.

On my way to a Stark victory

The bulk of the game involves placing order tokens on your territories, commanding your units to march, defend, raid, support and consolidate power. You have a limited number of each type of order, so it’s often a tough call to decide where to attack or where you expect your enemies to push.

So I mentioned friendship-ending? That would mainly be the fault of the Support order. When an army marches into an enemy, the total force strength on each side is compared. Support allows you to add your army’s strength to a fight in a neighbouring territory, even one between other players. Cue much negotiating and deal-brokering as everyone looks for help in return for promises of future favours. The only problem is that order tokens are placed secretly, and I’ve seen genuine horror when it’s been revealed that a once-trusted ally has decided not, in fact, to support you, but to come to the aid of the enemy. Or in fact ignore both of you and march into your now-undefended homelands.

This is another game with auctions, and remember, Auctions Are Good. Every few turns (at random) it’s all change in the royal court, and everyone uses their power tokens to bid for their place on three tracks – the Iron Throne track determines turn order, the Valyrian Sword track gives combat advantages and the Messenger Raven track allows better orders to be used. Any of these can be of critical importance at different points in the game, usually leading to cries of joy and despair as bids are revealed.

This is another game where it’s hard to avoid falling into character, especially if you’re familiar with the story. I once tainted a Stark victory while playing as Greyjoy by landing a force into undefended Winterfell right at the end of the game. I didn’t know the story at this point so I wasn’t aware of why this was bad, but despite her winning the game, a certain friend continued a vendetta against me in the rematch a whole year later. Tip of the day: don’t play A Game of Thrones with people who hold grudges…

This current edition has a nicer looking board

If you’ve got the right group of people, this is definitely a game worth playing. It’s long (our first game took six hours), it’s pretty complicated (the rule book is 30-odd pages), and it heavily encourages betrayals. But it does justice to the source material, there’s a real sense of achievement with every minor victory and it’ll leave you with stories of epic battles, unstable alliances and backstabbing that’ll be heard for years.

(Yes I remember Winterfell, let it go already!)

Aug 022013
 

[This is a bit more opinionated and political than I would normally post, but I think it's important. I don't care which party runs the country as long as it's run fairly and honestly, according to logic, reason and evidence. Unfortunately that sounds hopelessly naïve. The contents of this post are hardly controversial, but they're just a part of my small attempt at giving people a greater understanding of the world and enabling them to spot when others aren't acting according to honesty, reason and evidence.]

I really don’t understand the logic of the Government’s Help to Buy scheme.

In the first part of the scheme, buyers of new homes only need a 5% deposit and can borrow a further 20% of the value of the house from the Government. This portion of the loan is interest-free for five years, after which fees are applied. The homebuyer then only needs to find a 75% mortgage. This can only do damage in the long run. Let’s look at an example:

The normal situation

Alice has a good job. With her deposit and mortgage she has £160,000 to spend on a house. Bob has a reasonable job, and he can spend £140,000 on a house. Chris can only afford to buy a house for £120,000. Two identical new houses have been built – what happens?

In a scarce market where there isn't enough supply to go around, the price is set at the margin of availability. Bob is going to get a house, and Chris isn't. Therefore the houses will be sold to Alice and Bob for somewhere between £120,000 and £140,000. (With a larger example there would be more people willing to pay £121k, £122k etc, so the price would be more finely defined.)

The end result is that two of the three people own a home, and paid no more than £140,000. The Government sees this and thinks it can do better, because Chris still doesn't have a home. Seeing that Chris was only just priced out of the market (the houses go for slightly more than he can borrow), they decide to offer an extra loan to Chris so he can now afford the £140,000. Let's see what happens:

With help to buy

Chris gets the additional loan of £30,000 so he can now afford to pay £150,000. Alice can still afford a house. Bob, however, is now set to miss out, so he also takes an extra loan of just over £10,000. Both houses sell to Alice and Bob for just over £150,000.

The end result is that Alice and Bob still own houses and Chris still doesn't, but houses now cost at least £150,000. This is obvious from the beginning – if you have two houses and three buyers, only two people can successfully buy, no matter how the money is distributed.

Maybe the problem is actually that there aren’t enough houses? It’s a bold claim, I know. Let’s see what happens:

Building more houses

Assume that these houses cost £120,000 to build. By building one more house, Chris could pay £120,000 and the house builder would still make £20,000 profit. The end result is that Alice, Bob and Chris all have houses, and they paid at most £120,000 for them.

An aside: where does the money go?

House prices are much higher relative to wages than they were 10+ years ago. So what is the money paying for? House building is a competitive business and there is no shortage of labourers looking for work, so it's unlikely that house builders are making large profits. And indeed, the construction sector has been struggling recently.

The bottleneck in the housing supply is land for development, due to our strict planning laws. While building a house doesn’t cost a huge amount more than it used to, buying the land does. The increase in cheap mortgage lending by banks enabled this boom in land prices last decade, because people with more money are able to spend more to outbid others for scarce resources (i.e. land for housing).

When you sell a house you realise the increase in the value of your land, but you spend it again on your new house so you’re no better off. Follow this through and it’s apparent that any increase in the money supply to buy new houses goes straight into the pockets of the landowners who sold the land. None of it stays with homeowners or builders.

Part two of the scheme

The potential damage from part one of the Help to Buy scheme is limited by the fact that it only applies to new builds. The price can’t be pushed up too far because you can always buy a cheaper old house instead, for which the money supply hasn’t increased. However, this is set to change.

In January 2014 the mortgage guarantee part of the scheme will launch. In this scheme, buyers will only need a 5% deposit to get a mortgage. Banks like large deposits because in the event of a default, repossession and auction, the sale will bring in more than the outstanding debt. With a small deposit there is a danger that the bank may lose out, so with this scheme the Government guarantees 15% of the mortgage, thus enabling banks to offer ‘safe’ (from their point of view) loans to people with tiny deposits.

What this will do is increase the money supply available to buy all houses, not just new builds. There are still the same number of houses, and there will be even more buyers competing for them than currently. An exercise for the reader: repeat my first example in this case and work out what will happen.

If you said that prices are likely to increase across the board, and no more people will get a house than would have done before (except that they’ll pay more for them), well done.

The wider point

Most of the population have (understandably) very little knowledge of or interest in economic principles. That shouldn't be a problem because we should be able to trust that the people making economic policy do understand. Unfortunately it seems that governments exploit this lack of knowledge to pass policy that is actively damaging to the economy but serves political ends. If you can't afford a home and suddenly you're offered a cheap extra loan, you're likely to be pleased and support the policy, even though it makes no sense to economists.

In another example, austerity was heavily sold as being vital for maintaining the UK’s AAA credit rating. We were told that reducing the deficit makes lenders less likely to think that we’ll default, therefore maintaining the AAA rating, therefore keeping interest rates on UK bonds low. This is deliberate misinformation – nobody cares what the credit agencies think (after all they gave AAA ratings to all the subprime mortgage packages that caused the crisis in the first place), and a low risk of default isn’t the primary reason why bonds are cheap. It’s a small part, but the main reason is that our economy has been doing so badly, with such a slow recovery, that even getting a measly 2% for ten years is more attractive than investing the money elsewhere in the economy.

My hope is that we eventually get a critical mass of people, both in the general public and the media, who have a reasonable understanding of economic principles and statistics. Governments will no longer be able to get away with easily-refutable claims, dodgy statistical usage and illogical policies. Only then will we get governance based on what is best for the people of the country, and not damaging policies purely designed to win votes.

Jul 012013
 

So far I’ve covered the basics of getting objects on the screen with textures, lighting, reflections and shadows. The results look reasonable but to increase the realism you need to more accurately simulate the behaviour of light when a real image is seen. An important part of this is the characteristics of the camera itself.

One important property of cameras is depth of field. Depth of field effects have been in games for a good few years, but it’s only recently that we’re starting to do it ‘properly’ in real-time graphics.

What is depth of field?

Cameras can’t keep everything in focus at once – they have a focal depth which is the distance from the camera where objects are perfectly in focus. The further away from this distance you get, the more out of focus an object becomes. Depth of field is the size of the region in which the image looks sharp. Because every real image we see is seen through a lens (whether in a camera or in the eye), to make a believable image we need to simulate this effect.

The quick method

Until the last couple of years, most depth of field effects were done using a ‘hack it and hope’ approach – do something that looks vaguely right and is quick to render. In this case, we just need to make objects outside of a certain depth range look blurry.

So first you need a blurry version of the screen. To do this you draw everything in the scene as normal, and then create a blurred version as a separate texture. There are a few methods of blurring the screen, depending on how much processing time you want to spend. The quickest and simplest is to scale down the texture four times and then scale it back up again, where the texture filtering will fill in the extra pixels. Or, if you’re really posh, you can use a 5×5 Gaussian blur (or something similar) which gives a smoother blur (especially noticeable when the camera moves). You should be able to see that the upscaled version looks more pixelated:

Blurring using reduce and upscale, and a Gaussian blur

Then you make up four distance values: near blur minimum and maximum distances, and far blur minimum and maximum distances. The original image and the blurry version are then blended together to give the final image – further away than the ‘far minimum’ distance you blend in more and more of the blurry image, up until you’re showing the fully blurred image at the ‘far maximum’ distance (and the same for the near blur).
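As a rough sketch of how that blend might look in a pixel shader (the distance names here are my own, and I'm assuming a linear depth value is available per pixel):

Texture2D sceneTexture;      // the sharp scene
Texture2D blurredTexture;    // the blurred copy of the scene
Texture2D depthTexture;      // linear depth per pixel
SamplerState linearSampler;

// My own names: fully blurred at nearFullBlur, sharp between nearStart and
// farStart, fully blurred again beyond farFullBlur.
float nearFullBlur, nearStart, farStart, farFullBlur;

float4 DepthOfFieldPS(float2 uv : TEXCOORD0) : SV_Target
{
    float depth = depthTexture.Sample(linearSampler, uv).r;

    // 0 = fully sharp, 1 = fully blurred
    float nearBlur = saturate((nearStart - depth) / (nearStart - nearFullBlur));
    float farBlur  = saturate((depth - farStart) / (farFullBlur - farStart));
    float blur     = max(nearBlur, farBlur);

    return lerp(sceneTexture.Sample(linearSampler, uv),
                blurredTexture.Sample(linearSampler, uv),
                blur);
}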

In the end you get something that looks a bit like this (view the full size image to see more pronounced blurring in the distance):

Depth of field in Viva Piñata

This looks fairly OK, but it’s nothing like how a real camera works. To get better results we need to go back to the theory and understand the reasons you get depth of field in real cameras.

Understanding it properly

Light bounces off objects in the world in all directions. Cameras and eyes have fairly large openings to let in lots of light, which means that they will capture a cone of light bounced off from the object (light can bounce off the object anywhere within a cone of angles and still enter the camera). Therefore, cameras need lenses to focus all this light back onto a single point.

Light cones from objects at different distances to a lens

A lens bends all incoming light the same. This means that light bouncing off objects at different distances from the lens will converge at different points on the other side of it. In the diagram the central object is in focus, because the red lines converge on the projection plane. The green light converges too early because the object is too far away. The blue light converges too late because the object is too close.

What the focus control on your camera does is move the lens backwards and forwards. You can see that moving the lens away from the projection plane would mean that the blue lines converge on the plane, so closer objects would be in focus.

There is a technical term, circle of confusion (CoC), which is the circular area over which the light from an object is focussed on the projection plane. The red lines show a very tiny CoC, while the blue lines show a larger one. The green lines show the largest CoC of the three objects, as the light is spread out over a large area. This is what causes the blur on out of focus objects, as their light is spread over the image. This picture is a great example of this effect, where the light from each individual bulb on the Christmas tree is spread into a perfect circle:

Bokeh

The circle of confusion doesn’t always appear circular. It is circular in some cases because the aperture of the camera is circular, letting in light from a full cone. When the aperture is partly closed it becomes more pentagonal/hexagonal/octagonal, depending on how many blades make up the aperture. Light is blocked by the blades, so the CoC will actually take the shape of the aperture.

This lens has an aperture with six blades, so will give a hexagonal circle of confusion:

So why is simulating Bokeh important? It can be used for artistic effect because it gives a nice quality to the blur, and also it will give you a more believable image because it will simulate how a camera actually works. Applying a Gaussian blur to the Christmas tree picture would give an indistinct blurry mess, but the Bokeh makes the individual bright lights stand out even though they are out of focus.

Here is the difference between applying a Bokeh blur to a bright pixel, compared to a Gaussian blur. As you can see, the Gaussian smears out a pixel without giving those distinct edges:

Bokeh blur on the left, Gaussian on the right

Using Bokeh in real-time graphics

In principle, Bokeh depth of field isn’t complicated to implement in a game engine. For any pixel you can work out the size of the CoC from the depth, focal length and aperture size. If the CoC is smaller than one pixel then it’s completely in focus, otherwise the light from that pixel will be spread over a number of pixels, depending on the CoC size. The use of real camera controls such as aperture size and focal length means that your game camera now functions much more like a real camera with the same settings, and setting up cameras will be easier for anyone who is familiar with real cameras.
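For reference, the CoC calculation comes straight from the thin lens model. A sketch (the parameter names are mine, and I'm assuming you have a linear distance from the camera for each pixel):

// Diameter of the circle of confusion on the sensor, from the thin lens model.
// aperture    = aperture diameter (focal length divided by the f-number)
// focalLength = focal length of the lens
// focusDist   = distance the camera is focused at
// objectDist  = distance of this pixel from the camera
float CircleOfConfusion(float aperture, float focalLength, float focusDist, float objectDist)
{
    return aperture * (abs(objectDist - focusDist) / objectDist)
                    * (focalLength / (focusDist - focalLength));
}

The result is a physical size on the sensor (same units as the focal length), so you then scale by sensor size and screen resolution to get a blur size in pixels; anything under a pixel is effectively in focus.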

In practice, Bokeh depth of field isn’t trivial to implement in real-time. Gaussian blurs are relatively fast (and downsize/upscaling is even faster) which is why these types of blurs were used for years. There aren’t any similarly quick methods of blurring an image with an arbitrary shaped blur (i.e. to get a blur like the left image above, rather than the right).

However, GPUs are getting powerful enough to use a brute force approach, which was introduced in Unreal Engine 3. You draw a texture of your Bokeh shape (anything you like), and then for each pixel in your image you work out the CoC size (from the depth and camera settings). Then to make your final image, instead of drawing a single pixel for each pixel in the original image, you draw a sprite using the Bokeh texture. Draw the sprite the same colour as the original pixel, and the same size as the CoC. This will accurately simulate the light from a pixel being spread over a wide area. Here it is in action, courtesy of Unreal Engine:

Depth of field with different Bokeh shapes

The downside of this technique is that it’s very slow. If your maximum Bokeh sprite size is, say, 8 pixels wide, then in the worst case each pixel in the final image will be made up of 64 composited textures. Doubling the width of the blur increases the fill cost by four times. This approach looks really nice, but you need to use some tricks to get it performing well on anything but the most powerful hardware (for example, draw one sprite for every 2×2 block of pixels to reduce the fill cost).
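Here's a rough sketch of how the scatter part might look (this isn't Unreal's actual code; the names, layout and one-quad-per-pixel instancing are my own simplification). The idea is to draw one quad per source pixel and size it in the vertex shader using the CoC:

Texture2D<float4> sceneColour;   // scene colour per pixel
Texture2D<float>  cocTexture;    // CoC diameter per pixel, in pixels
Texture2D<float4> bokehShape;    // the Bokeh sprite texture
SamplerState      linearSampler;
float2            screenSize;

struct BokehVertex
{
    float4 pos    : SV_Position;
    float2 uv     : TEXCOORD0;
    float4 colour : COLOR0;
};

// Six vertices (two triangles) per instance, one instance per source pixel,
// e.g. drawn with DrawInstanced(6, width * height, 0, 0).
BokehVertex BokehVS(uint vertexId : SV_VertexID, uint instanceId : SV_InstanceID)
{
    const float2 corners[6] = { float2(-1, -1), float2(-1, 1), float2(1, -1),
                                float2( 1, -1), float2(-1, 1), float2(1,  1) };

    uint2 pixel   = uint2(instanceId % (uint)screenSize.x, instanceId / (uint)screenSize.x);
    float coc     = cocTexture.Load(int3(pixel, 0));
    float2 corner = corners[vertexId];

    BokehVertex output;
    // Centre of this pixel in normalised device coordinates (Y points up in NDC).
    float2 centre = ((float2)pixel + 0.5) / screenSize * float2(2, -2) + float2(-1, 1);
    // Expand the quad to the CoC diameter (one pixel is 2/screenSize in NDC).
    output.pos    = float4(centre + corner * (coc / screenSize) * float2(1, -1), 0, 1);
    output.uv     = corner * 0.5 + 0.5;
    output.colour = sceneColour.Load(int3(pixel, 0));
    return output;
}

float4 BokehPS(BokehVertex input) : SV_Target
{
    // Tint the Bokeh shape with the source pixel's colour; additive blending
    // set up in the pipeline accumulates the overlapping sprites.
    return bokehShape.Sample(linearSampler, input.uv) * input.colour;
}

A real implementation also needs to keep the result energy-conserving (spreading a pixel's light over a big sprite shouldn't make the image brighter), which is usually done by weighting each sprite by one over its area.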

An alternative method

There is a good alternative method that I like which is much quicker to draw, and I shall go through that soon in a more technical post. It was presented in this talk from EA at Siggraph 2011, but it takes a bit of thought to decipher the slides into a full implementation so I’ll try to make it clearer. This is actually the technique I use in my Purple Space demo.

Cheaper depth of field effect

Jun 132013
 

I hadn’t been to Cheltenham Science Festival before so last Friday I popped down for the day. I went to half a dozen presentations so here’s a summary of the interesting bits.

Eight Great Technologies

This was a talk by David Willetts, the Minister for Universities and Science, giving us reasons to be optimistic about the future of technologies, science and industry in the UK. Despite what the doom-mongers claim there are still loads of things that we’re good at in the country, and he talked about eight key areas where we shine:

  1. Computing and big data – we have excellent software engineers in many related industries (although I believe we need to do much more to encourage young people into programming), alongside smaller important projects such as the Raspberry Pi.
  2. Space and satellites – the UK is a world leader in making small satellites, and while we don’t have our own launch facilities, companies like Virgin Galactic are pushing further into the commercialisation of space.
  3. Robotics and autonomous systems – the new ESA Mars rover, Bruno, is being developed using British technology, and will have much more autonomy than the existing rovers. Also interesting is the Symbrion robot ‘swarm', developed in large part in this country.
  4. Synthetic biology – we have history in this area of developing gene sequencing and assembly technologies, and the existence of a unified NHS allows for more integrated ‘big data’ approaches to gene analysis and treatment evaluation.
  5. Regenerative medicine – from the original work on Dolly the sheep, we are now pioneering research into restoring lost body functions, for example the stem cell treatments for Jasper the paralysed dog.
  6. Agri-science – 80% of the world's breeding chickens are designed in the UK, and now give twice as much meat per unit of feed as 20 years ago. Similar advances are being developed (and increasingly required) to increase wheat yields and reduce chemical use.
  7. Advanced materials and nanotechnology – from graphene to 3D printing, materials are a massive UK export industry (more about this in the next talk).
  8. Energy and storage – I think this is the most important technological legacy we should be funding and building on in the near future, so that we maintain our past expertise in nuclear and renewables.

Overall I was impressed by the Minister. I didn't have the sense of frustration or brain-dissolving that I usually get when listening to politicians speak, and despite not having a science background himself, he seems very able and keen to listen and learn. While it seems obvious to everyone in the field that many wide areas of science should get a lot more funding, I think that the focus on and support of key areas is probably a good compromise between results and political will. In particular, the derogatory remarks he kept making towards the Daily Mail and their ilk regarding their scare stories and general stupidity (my words but the general gist) reassured me that he has his head screwed on and that science in this country isn't doomed yet.

A hare and a minotaur (I have no idea why)

Is The Age Of Silicon Over?

Next up was a talk on new materials, with three guys talking primarily about gallium nitride, graphene and metamaterials and photonics, respectively.

Gallium Nitride

Gallium nitride is a material with a wealth of possible applications. One common use at the moment is in LED lighting, where it is combined with indium to create high brightness LEDs, and the wavelength of light (therefore the colour) can be controlled by the amount of indium used. Unfortunately, these GaN LEDs can only currently be fabricated on sapphire and are therefore very expensive to produce (hence LED bulbs costing £15 a pop). They are very efficient though, and have the potential to save 50% of the energy used in lighting, which makes up around 20% of the total UK energy use. New manufacturing techniques using silicon substrates instead of sapphire are being developed, and this should lead to a large reduction in cost and a much wider uptake of LED lighting.

Another really interesting use is that by combining it with aluminium instead of indium, you can produce LEDs that emit in the deep ultraviolet part of the spectrum. This type of light is lethal to living cells, and the idea is that deep-UV lamps could be placed inside water pipes in developing countries where waterborne diseases are commonplace, killing all the microbes in the water at the point where it enters the home. A similar potential use is in air-con units to kill the recirculating bacteria.

A final use is that GaN is 40% more energy efficient than silicon when used for computing, thus leading to even more energy savings (another potential 5% off national consumption).

Graphene

Graphene was in the news a few years back when Kostya Novoselov and Andre Geim at the University of Manchester won the Nobel prize for its discovery. Since then it’s all gone a bit quiet in the mainstream press but this ‘miracle material’ will likely still cause a revolution in many areas.

It is 100 times stronger than steel, harder than diamond, very conductive, 98% optically transparent and impermeable. The possible uses talked about were in flexible electronics (e.g. bendable screens), wearable computing and supercapacitors, where the tiny thickness enables huge surface areas to be contained in a small space. Another use is in very accurate sensors, but I didn’t really understand the specifics (seems to be to do with the change in Hall voltage when a single sheet of graphene comes into contact with even single atoms of the thing being measured).

Photonics

Photonics is to do with processing and manipulating light. I have to admit I didn’t really take in anything that this guy was saying, save that optical computers are still a long way off and we’ll have to stick to electronic computers for the time being.

The festival venue

Quantum Biology

With a title like this I couldn’t not attend, even though I didn’t really know what to expect. Executive summary: this is a very new field looking at quantum mechanical effects on and used by biological systems. And nobody is really sure how important it is.

The general principle is that classical physics works when you have loads of atoms or molecules, because all the quantum variations cancel out. However, down at the level of DNA and some cell biology you’re dealing with individual molecules, and therefore you may have to take quantum effects into consideration.

The problem with quantum biology is that to get observable quantum effects, you need coherence. For example, the double-slit experiment gives an interference pattern, showing photons behaving as waves, only when you use a coherent light source such as a laser. In the warm and wet world of biology, being able to maintain coherence for long is challenging.

There are currently three main candidates for quantum effects – photosynthesis, sense of smell and magnetic sensing in birds. The most well-developed theory is with photosynthesis where it’s hypothesised that the electron transfer is made more efficient by quantum tunnelling – the tunnelling destinations are defined by the peaks in the interference pattern of a coherent source, similar to in the double-slit experiment.

There were some thoughts about consciousness being a result of quantum effects – the opinion being no, as thought is far too slow and the brain is too big and complex (all other observed quantum effects involve only one or two molecules).

One of the presenters, Paul Davies, finished up with some of his more ‘wacky' (in his own words) theories. Did quantum mechanics facilitate life to begin with? Quantum superposition can enable rapid exploration of a vast number of arrangements of matter, and if a self-replicating state is in some way selected for, this could vastly reduce the search space. There is more about this in chapter one of his book.

Camera phones don’t work well in the dark

Particle Physics and Energy

These two talks weren’t presenting anything new, but were entertaining and fast-paced introductions to particle physics and the Standard Model, and the concepts of energy respectively. The Particle Physics talk went through 100 years of scientific progress, from the initial discovery of the atom to the Higgs boson, and our understanding of what all matter is composed of. The Energy presentation started with all the different forms energy can come in (kinetic, potential, sound, chemical etc) and how most of these are different forms of the same thing, before bringing in Einstein and the energy of empty space, and finishing with some brief words on the Casimir effect, negative energy and wormholes.

These were good fun but I think next time I'll benefit more by heading to talks on subjects that I'm less familiar with, as they usually seem to be pitched at a scientifically-literate but definitely non-expert audience.

I found the Portal facility

Famelab Grand Final

This session was well worthwhile. Famelab is an international competition to find the next generation of young science communicators. Contestants have three minutes to do a talk on an interesting scientific topic, but in as entertaining and engaging way as possible. All of the presentations were really good, and the winner was Fergus McAuliffe from Ireland, who you can see here talking about freezing frogs:

Another entertaining one was Christopher See from Hong Kong, talking about probabilistic medicine:

The rest of the talks are all up here as well.

Overall it was a very enjoyable and worthwhile day out, and I’ll definitely be going back next year.

Jun 082013
 

Hardly anyone I know shares my taste in music. To potentially help alleviate this I present a selection of great female-fronted bands who aren’t as well known as they should be. (Looking back at my selection it would appear I mainly like musical duos and singers with red or black hair…)

Anyway, in no particular order:

1. Helalyn Flowers

Helalyn Flowers are an Italian duo who I randomly stumbled across a couple of years ago, and are now one of my favourite bands. They have a very catchy melodic electro/goth sound, and you can hear a bunch of their stuff here.

They have three albums out, A Voluntary Coincidence, Stitches Of Eden and their recently released White Me In / Black Me Out. The new one hasn't quite grown on me yet so I'd have to recommend Stitches Of Eden as the best one. Standout tracks for me are Hybrid Moments and Friendly Strangers.

Hoping they’ll make it to the UK one day, but I suspect that’s unlikely.

2. Indica

Indica are a Finnish all-female melodic rock band with some classical influences. They have one English album, A Way Away, which is well worth a listen. My favourite tracks are Precious Dark and Islands of Light:

 



Apparently they have a new album out soon, so I’m looking forward to that. I need to get hold of some of their earlier material too.

3. I:Scintilla

Something a bit more industrial this time. I:Scintilla come from Chicago and have been going for over a decade. They have a couple of great albums in Optics and Dying & Falling, for when you’re looking for something with a harsher sound and a bit more energy.

Swimmers Can Drown is from Dying & Falling:

My recommended tracks from their previous album Optics are The Bells and Melt.

4. L’Âme Immortelle

L’Âme Immortelle are an Austrian duo with a French name who sing in a mixture of German and English. Lyrically they can be a bit cheesy but on half the songs I don't understand the language so I let them off. They used to be predominantly synth-based, but then they moved into more traditional rock territory which I wasn't so keen on. Therefore my pick of their albums would be a couple of their older ones, Gezeiten and Als Die Liebe Starb.

They have a playlist here, of which I recommend Judgement, Tiefster Winter and 5 Jahre.

5. Collide

Slowing the tempo a bit, Collide are an American duo who mix stark electronica with ethereal vocals and Eastern influences. They have a few songs here, and I recommend Halo and their cover of White Rabbit.

For albums, I’d probably go for Chasing the Ghost or Some Kind of Strange.

6. Hungry Lucy

Winning the award for most bizarre band name is Hungry Lucy. They are another American duo and have been around for fifteen years now, writing haunting melodies alongside some more upbeat grooves. Their entire catalogue is available to stream on their website.

To Kill A King is my album of choice, although it was Alfred from Apparitions that first made me curious about the band.

7. Emilie Autumn

Emilie Autumn styles herself on a Victorian asylum aesthetic, mixed with elements of industrial music, so you can expect violins, harpsichords, synths, guitars and drum machines, presented with elements of cabaret, burlesque and some humour.

Recommended tracks are Opheliac and Fight Like A Girl, while Thank God I’m Pretty and Girls! Girls! Girls! have good use of comedy to make a point. Opheliac is my recommended album overall, and the live shows are good fun (she’s touring the UK later this year).

Jun 042013
 

The last basic rendering technique to talk about is shadowing. Shadows are really important for depth perception, and are vital for ‘anchoring’ your objects to your world (so they don’t look like they’re just floating in space).

Shadowing is a very popular topic for research, so there are loads of variations on how to do it. Years ago you could just draw a round dark patch at a character’s feet and call it done, but that doesn’t really cut it these days.

Blob shadows

Shadows are usually implemented using a technique called shadow mapping. To go right back to basics, you get shadows when the light from a light source (e.g. the sun) is blocked by something else. So if you were stood at the sun (and assuming no other light sources) you wouldn’t see any shadows, because you would only see the closest point to the sun in any direction. This fact is the basis of shadow mapping.

Shadow map texture

What we’re going to do is draw the scene from the point of view of the sun, into a texture. We don’t care about the colour of the pixels, but we do care about how far away from the sun they are. We already do this when drawing a depth map, as I spoke about here. Because we need to use the shadow map when rendering the final image, we need to render the shadow map first (at the beginning of the frame).

There are considerations when rendering your shadow map that I’ll get to later, but first I’ll talk about how we use it. Here is the final scene with shadows that we’re aiming for:

Final scene with shadows

And this is the shadow map, i.e. the scene as seen from the light (depth only, darker is closer to the camera):

Shadow map rendered from the light source

When drawing the final image we use ambient/diffuse/specular shading as normal. On top of that we need to use the shadow map to work out if each pixel is in shadow, and if so we remove the diffuse and specular part of the lighting (as this is the light coming directly from the sun). To work this out we need to go back into the world of transforms and matrices.

Rendering with the shadow map

When I spoke about view matrices I introduced this, which is how the basis, view and projection matrices are used to get the screen position of each vertex in a model:

FinalScreenPosition = BasisMatrix * ViewMatrix * ProjectionMatrix * Position

If you remember, the basis matrix will position an object in the world, and the view and projection matrices control how the camera ‘sees’ the world. In the case of shadowing, we have two cameras (the usual camera we’re rendering with, and the one positioned at the light source). The other thing we have is the shadow map, which is the scene as viewed with the second camera at the light.

To perform shadowing, we need to find exactly where each pixel we’re drawing would have been drawn in the shadow map. So, when transforming our vertices, we need to do two separate transforms:

FinalScreenPosition = BasisMatrix * MainViewMatrix * MainProjMatrix * Position
ShadowMapPosition = BasisMatrix * ShadowViewMatrix * ShadowProjMatrix * Position

The first tells us where on the screen the vertex will be drawn (X and Y positions, and depth), and the second tells us where in the shadow map it would be drawn (again X and Y positions, and depth). Then in the pixel shader, we can perform a texture lookup into the shadow map to get the depth of the closest pixel to the light. If our calculated depth for that pixel is further away then the pixel is in shadow!

One problem you will have is shadow acne. When you're rendering a surface that's not in shadow, you're effectively testing the depth of that pixel against itself (as it would have been rendered into both the shadow map and the final image). Due to unavoidable accuracy issues, sometimes the pixel will be very slightly closer and sometimes it'll be slightly further away, which leads to this kind of ugly effect:

Shadow acne

Because a surface should never shadow itself we use a depth bias, where a small offset is added to the shadow map depths to push them back a bit. Therefore a surface will always be slightly in front of its shadow, which cures this.
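Putting the lookup, the comparison and the bias together, the pixel shader side might look something like this (just a sketch: it assumes the vertex shader has passed through the ShadowMapPosition calculated above, and that the Y flip matches your API's texture coordinate convention):

Texture2D    shadowMap;
SamplerState pointSampler;
float        shadowBias;   // small offset to prevent shadow acne

// Returns 0 if the pixel is in shadow, 1 if it is lit. Multiply the diffuse
// and specular lighting by this.
float ShadowFactor(float4 shadowMapPos)
{
    // Perspective divide, then map x and y from (-1, 1) to (0, 1) texture coordinates.
    float3 proj = shadowMapPos.xyz / shadowMapPos.w;
    float2 uv   = proj.xy * float2(0.5, -0.5) + 0.5;

    // Depth of the closest thing to the light along this direction.
    float closestDepth = shadowMap.Sample(pointSampler, uv).r;

    // In shadow if something else is nearer to the light than this pixel.
    return (proj.z > closestDepth + shadowBias) ? 0.0 : 1.0;
}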

Rendering the shadow map

You need to take some care when deciding how to draw your shadow map – exactly as when you’re drawing your scene normally, you could point the camera towards any point in the world and your frustum could be any size. Also, you could use any sized texture to render into. All of these things affect the shadowing.

Let’s start with the easy one, the resolution of the texture. A small texture won’t have enough resolution to capture the small details in the shadow, but a large one will take a long time to draw, affecting your framerate. You might go for a 1024×1024 pixel shadow map, or double that if you want high quality.

The effective resolution of the shadow map is also affected by how wide a field of view you use when rendering it – if it’s very zoomed in then you’ll get a lot of pixels per area in the scene, but you won’t have any information at all outside of that area (so you won’t be able to draw shadows there). Therefore you need to pick a happy medium between high detail and a large shadowed area.

Cascaded Shadow Maps

There is a way to get around the problem of having either detailed shadows or a large shadowed area, and that is by using a technique called Cascaded Shadow Maps. This just means that you use multiple shadow maps of different sizes. Close to the camera you’ll want detailed shadows, but further away it doesn’t matter so much. Therefore you can draw a second, much bigger shadow map (that therefore covers a much larger area) and when rendering you check whether the pixel is within the high detailed map. If not, sample from the lower detailed map instead.

This scene is showing how a cascade of two shadow maps can be used. You can see the blockiness caused by the lower detail map on the blue part of the box, but it enables the shadows to be rendered right into the distance:

Cascaded Shadow Maps – red is in the high detail map, blue is in the low detail map, green is outside both maps and not shadowed

You don’t need to stop at two shadow maps – the more you have, the more detailed the shadows can be in the distance, but the more time you have to spend drawing the maps. Three maps is a common choice for games with a long draw distance.
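Picking which map to sample from is simple enough. Here's a sketch of a two-cascade version, along the lines of the shadow test above (again, the names are my own):

Texture2D    nearShadowMap;   // high detail, covers a small area
Texture2D    farShadowMap;    // low detail, covers a large area
SamplerState pointSampler;
float        shadowBias;

float ShadowFromMap(Texture2D map, float4 shadowPos)
{
    float3 proj = shadowPos.xyz / shadowPos.w;
    float2 uv   = proj.xy * float2(0.5, -0.5) + 0.5;
    return (proj.z > map.Sample(pointSampler, uv).r + shadowBias) ? 0.0 : 1.0;
}

// The vertex shader outputs one shadow-space position per cascade.
float CascadedShadowFactor(float4 nearShadowPos, float4 farShadowPos)
{
    float3 nearProj = nearShadowPos.xyz / nearShadowPos.w;

    // Use the high detail map if this pixel lands inside it...
    if (all(abs(nearProj.xy) < 1.0) && nearProj.z > 0.0 && nearProj.z < 1.0)
        return ShadowFromMap(nearShadowMap, nearShadowPos);

    // ...otherwise fall back to the low detail map.
    return ShadowFromMap(farShadowMap, farShadowPos);
}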

Filtering

One of the biggest problems with shadowing is filtering, or how to get soft shadows (which I talked a bit about here). The shadows we’re drawing here are hard shadows, in that each pixel is either fully in or out of shadow, with a hard edge in between. In the real world, all shadows have some amount of ‘softness’ (or penumbra) around the edge.

With standard texturing, you can avoid hard-edged pixels by blending between all the neighbouring pixels. This lets you scale up textures and keep everything looking smooth. Or you can not, and you get Minecraft:

With and without texture filtering

This doesn't work with shadow maps. Using a shadow map always gives a yes/no answer: is the pixel further from the light than the shadow caster? Using texture filtering on the depth map between two pixels at different depths gives a depth somewhere in between. It will still only give you a yes/no answer, but it'll be wrong because it's using some unrelated depth value. Instead, you have to use a more complicated method.

There are vast numbers of ways to do nice soft shadowing, so I won’t go into them here apart from to mention the simplest, which is called Percentage Closer Filtering (PCF) [one thing I find is that most graphics techniques have long and complicated sounding names, but they’re usually really simple]. With PCF, instead of doing a single shadow test, you do a few tests but offset the lookup into the shadow map slightly for each one. For example, you could do four tests – one slightly left, one right, one up and one down from where you would normally sample from the shadow map. If, for example, three of them were in shadow but one wasn’t, then the shadow would be 75% dark. This gives you some amount of soft shadowing.

Basic 4-sample Percentage Closer Filtering

As you can see it doesn’t look great, but more advanced sampling and filtering can be used to give decent results, and PCF is exactly what the soft shadowing in a lot of modern games is based on.
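As a concrete example, the basic four-sample version described above might look like this (a sketch following on from the earlier shadow test; shadowMapSize is the resolution of the shadow map texture):

Texture2D    shadowMap;
SamplerState pointSampler;
float        shadowBias;
float2       shadowMapSize;   // e.g. (1024, 1024)

float ShadowFactorPCF(float4 shadowMapPos)
{
    float3 proj  = shadowMapPos.xyz / shadowMapPos.w;
    float2 uv    = proj.xy * float2(0.5, -0.5) + 0.5;
    float2 texel = 1.0 / shadowMapSize;

    // Four samples, offset by one texel to the left, right, up and down.
    const float2 offsets[4] = { float2(-1, 0), float2(1, 0), float2(0, -1), float2(0, 1) };

    float lit = 0.0;
    for (int i = 0; i < 4; i++)
    {
        float closest = shadowMap.Sample(pointSampler, uv + offsets[i] * texel).r;
        lit += (proj.z > closest + shadowBias) ? 0.0 : 1.0;
    }

    // e.g. three samples in shadow and one lit gives a 25% lit (75% dark) result.
    return lit / 4.0;
}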

End of part 1

That sums up my introduction to basic graphics techniques. Hopefully it made sense and/or you learnt something… I will be carrying on with a bunch of more advanced techniques that I find interesting, and hopefully manage to present those in a way that makes sense too!

May 142013
 

I've been playing Eve Online on and off since 2006. It's a very complex game with 500,000 active accounts, all playing on the same server. What this means is that it has the most interesting economy of any game out there due to the huge number of interacting players, complex supply and manufacturing chains and a complete lack of market regulation.

This makes it a great sandbox for observing economic behaviours and principles from the real world, manifesting in its virtual economy. Here I’ve picked a few interesting or important things that I’ve come across.

Opportunity Cost

This is probably the most basic and misunderstood concept in Eve, so it’s likely misunderstood in real life as well. The opportunity cost of doing something is the price of not being able to do something else instead. For example, you’ll frequently come across advice in game like “mine your own minerals to increase your profit margin on manufacturing.”

Sounds sensible, but it's completely wrong. Mining minerals may be a worthwhile thing to do, but it doesn't increase manufacturing profits. They are two independent activities, and should be treated as such. Profit calculations for manufacturing should use the market value of the input materials, and not how much it personally cost you to acquire them. This is because you could have sold your mining output instead of using it. The opportunity cost of using your minerals is that you can't sell them (although there are other reasons why you might want to produce your own inputs, e.g. avoiding transaction taxes, or saving time selling and transporting, or just for fun).

Here’s a real world example. Suppose you need to clean your house, but you also have the option to work some paid overtime. You may think you can save money by doing the cleaning yourself, but if the overtime pays more than the cost of a cleaner then you’re better off working and paying someone else. The opportunity cost of doing your own cleaning is that you can’t be working at the same time.

Emergent market hubs

The design of Eve includes no central trade areas. Anyone is free to sell or buy anything anywhere in the galaxy. What is interesting is that a series of market ‘hubs' have spontaneously emerged, with one main super-hub in the solar system of Jita. Many years ago, when there were fewer players and no major markets, some systems with slightly better facilities and connections were more densely populated than others. This meant slightly more market activity in those systems which attracted new players looking for markets, which grew the market even more. The result is a positive feedback loop, with the end result being that if you now want to buy or sell something quickly you go to Jita (or one of the smaller ‘regional hubs' that grew along the same principles, but to a lesser extent).

There are a few downsides of basing your Eve industrial business in or near a popular hub. One is the rental cost of corporation offices (required for efficiently managing your industrial activities), which are much higher in popular systems. Another is availability of factories – these are limited and you can wait many days for a slot around trade hubs.

There are analogous factors at work in the real world, affecting where people and businesses base themselves. There are bigger markets and trade opportunities for London-based companies, but the rents are huge. Smaller cities have reduced costs at the expense of available markets and potential workforce. Different companies weigh up these factors and choose an appropriate location.

I found some statistics about the populations of the most popular systems in Eve, so I thought I’d compare it to the sizes of the largest UK cities. The results are quite striking:

Normalised populations of the 30 most populous Eve systems and UK cities

Comparing the normalised populations of the top 30 systems and UK cities, you can see a large degree of similarity in the distribution. There is one super-system/city (Jita/London), followed by a few large ones (Amarr/Birmingham, Rens/Manchester, Dodixie/Liverpool), followed by lots of much smaller ones.

The relative proportions are very similar. This suggests that similar price/opportunity trade-offs may be being made in Eve as in the real world, leading to a similar distribution. The Eve population seems to favour the larger hubs while the UK population is spread slightly more evenly, which could be explained by there being stronger non-economic reasons for living in a certain place in the real world than in Eve.

Edit:

Here’s a more technical version of the above graph – log of population against log of rank. You can see they both follow a power law (very high R² value means the linear trend line is a good fit), even though the absolute population numbers in solar systems are very small so are prone to noise (or large passing fleets). The drop off in population is indeed faster in Eve as I mentioned above. Further analysis is left as an exercise to the reader 🙂

Log(population) against Log(rank), with least-squares trend line

Price vs Convenience, and Time Is Money

There are many ways of making money in Eve, and the one thing they have in common is that they take time. Time is the limiting resource of both Eve players and real-world workers. Travel in Eve is also slow, and flying around for fifteen minutes to save a few ISK (the in-game currency) is inefficient, as you’re paying the opportunity cost of not doing something more profitable.

Similarly, coffee shops in busy stations charge a huge markup compared to a café you could find ten minutes' walk away. It's inefficient to use your time to save a pound if it takes a sizeable chunk out of your working day.

This inefficiency creates market opportunities. Ammo and other consumables are generally priced much higher outside of trade hubs, leading to higher profits, in the same way as station coffee shops. (Well, actually it’s the people renting the space to the coffee shop that make the profit, but as there is no shop rent in Eve the producer can keep it.)

Profit vs Volume

In an efficient free market it's possible to either make a large profit per item sold, or to move large volumes of items, but not both. Where large volumes are shifted (trade hubs, busy shopping streets) you will find other businesses in competition and high prices will be undercut. In the ‘convenience' market described above (backwater systems, small village shops) fewer people can be bothered to fight for the meagre sales so there is less competition and prices can be higher.

One example is manufacturers who sell their own goods. It is worth putting a few items on sale at the place of production, at inflated prices, to pick up passing trade, but manufacturing ability far outstrips these sales volumes. The bulk of the items will need to be sold at trade hubs where turnover can be huge – a smaller markup per item but higher overall profits.

You can see this with farmers markets and farm shops. Volume is small, but profits per item are much higher. The excess has to be sold to bulk buyers (e.g. supermarkets), but the additional profits from direct sales provide a nice additional income.

Fluid markets from professional traders

In the real world, if you want to buy or sell shares or a commodity then it is nearly always possible to conduct the trade immediately. It doesn’t require waiting for another person who wants to buy exactly the same amount of shares that you’re selling. This is good because it enables more trades to take place quicker, which means more benefits to those trading. The market is said to be fluid.

Fluid markets are enabled by professional traders – if you go to a foreign exchange they will buy currency at slightly below the average price, and sell it at slightly above. Therefore there is always a reasonably priced seller or buyer for your currency. The same principle applies to shares, commodities and other goods. The higher the potential profit (total traded volume multiplied by markup) the more competition will be attracted, and the buy and sell prices will converge.

Professional trading can be seen in action at trade hubs on almost all of the 6000-odd different items available in Eve. You will see buy and sell orders for huge volumes of items that could never be for personal use. High value items like ships attract a lot of competition and will have margins of 5-10% between the highest buy order and the lowest sell order. This means that whenever you want to buy or sell a ship, you can do so immediately with very little financial loss – you’re effectively paying the traders for convenience.

A weakness in the market can be seen by looking at some of the less popular items, where you’ll see 100% or more markup between the buy and sell prices. The volumes just aren’t there to support more competition – bringing the buy and sell prices closer together would mean it’s not worth the bother of trading, given that there are plenty of other things to do in game.

Barrier to entry, monopolies and cartels

An important principle in economics is the barrier to entry to a market. There are two reasons why the profit margin on an item may be high, but only one means you’re being ripped off:

  1. Sales volumes are low, so high margins are required to make it a worthwhile business.
  2. There are high barriers to entry, so it’s hard to compete with existing businesses.

To tell if you’re being taken advantage of, look at how hard it is to set up a competing business. If it’s easy then you’re probably not being ripped off and it’s just hard to make a profit. There are lots of examples of barriers to entry, from government restrictions (e.g. pre-privatisation utilities), to long professional training requirements (lawyers, doctors), to prohibitive startup costs (building a supermarket).

You can see these in operation in Eve. One interesting development was the creation of OTEC – the Organisation of Technetium Exporting Corporations – founded on the same principles as the more familiar OPEC. Technetium is a vital component in the manufacturing chain of many items in Eve, and is only found in one area of the galaxy under the control of a small number of corporations. The barrier to entry here is nearly absolute – getting into the market would require overwhelming military force and months of warfare. Hence it was practical to set up a price fixing cartel and reap the resulting massive profits.

Another historical example was the breaking of the monopoly on Tech2 goods (these are better ships and weapons than the standard ones). Originally, the ability to manufacture these goods was distributed via lottery to a few lucky individuals. The one I’m most familiar with is the Damage Control II component, where I made most of my ISK.

With no competition, the first of these things were selling at 200+ million ISK each. After a couple of years a game mechanic called Invention was introduced which allowed anyone to make them, but less efficiently and with a significant up-front investment required for tools, skills and materials (in the order of 500 million ISK, and several weeks of in game training time). When I jumped in, prices had dropped to around 2 million ISK/unit, with about 60% of that as profit.

Over the next few months the initial investment price dropped and now it’s possible to enter the market with a few tens of millions. Profits are right down to something like 300,000 ISK/unit, and it’s approaching the “worth it” line where I need to decide whether it’s worth carrying on production.

Thus enabling competition has successfully brought the cost of a good right down to the theoretical minimum of the cost of production plus the minimum profit required to make people bother. Economics in action!

May 012013
 

The next lighting technique I want to cover is environment mapping with cube maps.

Environment mapping

Environment mapping is another form of specular lighting. When I spoke about specular lighting here, I was talking about simulating the light reflected from one bright light source. You can repeat the calculations for multiple light sources but it quickly gets expensive. For a real scene, an object will be reflecting light from every other point in the scene that is visible from it. Using the standard specular approach is obviously not feasible for this infinite number of light sources, so we can take a different approach to modelling it by using a texture. We call this environment mapping.

Cube maps

We need to use a texture to store information about all of the light hitting a point in the scene. However, textures are rectangular and can’t obviously be mapped around a sphere (which is needed to represent all the light from the front, back, sides, top, bottom etc). What we do is use six textures instead, one for each side of a cube, and we call this a cube map. When arranged in a cross shape you can see how they would fold together into a cube:

Uffizi Gallery environment map

This is a famous cube map of the Uffizi Gallery in Florence, and is a bit like a panoramic photo with six images stitched together. These six textures are actually stored separately (they’re not actually combined into one big texture), and they are labelled front, back, left, right, top and bottom as in the image.

Lighting with cube maps

Remember how the reflection vector is calculated by reflecting the view vector around the normal. This reflection vector points to what you would see if the surface was a mirror (and is therefore the direction where any specular lighting comes from). The problem is to know what light would be reflected from that direction, and that is where the cube map comes in.

An environment map is another name for a cube map that contains a full panoramic view of the world (or environment) in all directions, such as the one above. The reflection vector can be used directly to look up into the environment map. This then gives you the colour of the reflection from that point.

In a pixel shader, it’s as simple as doing a normal texture lookup apart from you use the reflection direction as the texture coordinate:

float4 reflection = reflectionTexture.Sample(sampler, reflectionDirection);

Here is an example of my test scene where the only lighting on the objects is from the nebula cube map I used here:

Only lit using an environment map

How does the GPU actually look up into the texture though? The first thing it needs to do is find which face of the cube the reflection ray is going to hit. To find this, we just need to find the longest component in the reflection vector. If the X component is longest then the vector is mainly pointing left or right (depending if it’s positive or negative) so we will use the left or right face. Similarly, if the Y component is longest then we use the top or bottom face, and if Z is longest then we use the front or back face. Let’s start with an example:

reflectionVector = { -0.84, 0.23, 0.49 }

The X component is the largest, so we want either the left or right face. It’s negative, so we want the left face.

Now the face has been determined, the vector can be used to find the actual texture coordinates in that face. In exactly the same way as a vertex is projected onto the screen, the vector is projected onto the face we’ve just found by dividing by the longest component:

{ -0.84, 0.23, 0.49 } / -0.84 = { 1.0, -0.27, -0.58 }

Take the two components that don’t point towards the face (Y and Z in this case) and map them from (-1, 1) range to (0, 1) range, as this is the range that texture coordinates are specified in:

textureCoords = { -0.27, -0.58 } * 0.5 + { 0.5, 0.5 } = { 0.37, 0.21 }

And that is what texture coordinate will be sampled from the left face in this example.
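Written out as code, that logic looks something like this (a sketch of what the hardware does for you, using the same convention as the worked example; real APIs flip some of the axes on a per-face basis, but the principle is identical, and the face numbering is my own):

// Returns the face to sample from (0 = right, 1 = left, 2 = top, 3 = bottom,
// 4 = front, 5 = back) and the texture coordinates within that face.
int CubeMapFaceAndUV(float3 r, out float2 uv)
{
    float3 a = abs(r);

    if (a.x >= a.y && a.x >= a.z)
    {
        uv = (r.yz / r.x) * 0.5 + 0.5;   // project onto the face, then map (-1, 1) to (0, 1)
        return r.x > 0 ? 0 : 1;          // right or left face
    }
    if (a.y >= a.z)
    {
        uv = (r.xz / r.y) * 0.5 + 0.5;
        return r.y > 0 ? 2 : 3;          // top or bottom face
    }
    uv = (r.xy / r.z) * 0.5 + 0.5;
    return r.z > 0 ? 4 : 5;              // front or back face
}

Feeding in the reflection vector from the example gives face 1 (the left face) and coordinates of roughly { 0.37, 0.21 }, matching the result above (allowing for rounding).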

Blurry reflections

Using a cube map like this will always give very sharp, mirror-like reflections, as sharp as the texture in the cube map. Surfaces that aren’t completely smooth should have blurred reflections instead, due to the variety of surface orientations on a small area of rough material (like I talked about here). One way of doing this would be to sample the cube map multiple times in a cone around the reflection vector and average them, which would simulate the light reflected in from different parts of the world. However, this would be very slow. Instead we can make use of mipmapping.

Mipmapping

Mipmapping is an important technique when doing any kind of graphics using textures. It involves storing a texture at multiple different sizes, so that the most appropriate size can be used when rendering. Here are the mipmaps for the left face of the texture above:

Mipmap levels for one cube face

Each successive mipmap level is half the resolution of the previous one. To make the next mipmap level, you can just average each 2×2 block of pixels from the previous level, and that gives the colour of the single pixel in the lower resolution level. What this means is that each pixel in a smaller mipmap level contains all of the colour information of all of the pixels it represents in the original image. As shown in the picture, if you blow up a smaller mipmap you get a blurry version of the original image.

Funnily enough, this is exactly what we need for blurry reflections. By changing what resolution mipmap level we sample from (and we are free to choose this in the pixel shader), we can sample from a sharp or a blurry version of the environment map. We could change this level of blurriness per-object, or even per-pixel, to get a variety of reflective surface materials.
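In the shader this just means swapping the plain Sample call from earlier for one where you pick the mip level yourself. A sketch, where roughness is whatever per-material or per-pixel value you want to drive the blurriness with:

TextureCube  environmentMap;
SamplerState linearSampler;
float        numMipLevels;   // how many mipmap levels the environment map has

float4 BlurryReflection(float3 reflectionDirection, float roughness)
{
    // A roughness of 0 samples the sharp top mip, 1 samples the blurriest.
    float mipLevel = roughness * (numMipLevels - 1);
    return environmentMap.SampleLevel(linearSampler, reflectionDirection, mipLevel);
}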

Mipmaps in standard texturing

To finish off, I’ll give a quick explanation of why mipmapping is used with standard texturing, as I didn’t cover it earlier.

There are two main reasons why mipmaps are useful, and these are to do with aliasing and memory performance. Both of these problems are only seen if you're drawing a textured polygon on the screen, and it's really small. The problem in this case is that you're only drawing a few pixels to the screen, but you're sampling from a much larger texture.

The first problem, aliasing, occurs when a sample of a texture for a pixel doesn't include all of the colour information that should be included. For example, imagine a large black and white striped texture drawn only a few pixels wide. When sampling from the full-resolution texture, each pixel must come out either black or white, as those are the only colours in the texture. Each pixel should actually be drawn grey, because each pixel covers multiple texels (a texel is a pixel in a texture), which would average out to grey. Using a smaller, blurrier mipmap level would give the correct grey colour. If mipmapping isn't used, the image will shimmer as the polygon moves (due to where in the texture each sample happens to land).

Aliasing when a highly detailed texture is scaled down and rotated clockwise a bit

The second problem is performance. Due to the way that texture memory works, it is much more efficient to use smaller textures. Using a texture that is far too big not only looks worse due to aliasing, it’s actually a lot slower because much more memory has to be accessed to draw the polygon. Using the correct mipmap level means less memory has to be accessed so it will draw faster.

Next time I’ll conclude this part of the series by talking about basic shadowing techniques.

Apr 102013
 

Recently the wife and I have been playing through Tomb Raider and Bioshock Infinite, and there is one thing that has really stuck in my mind from those games: how much I hate scouring a world for crates of loot.

Tomb Raider

First things first – the new Tomb Raider is really good. It’s actually the first Tomb Raider game I’ve played (not sure how I managed that) so I don’t know how it compares to previous ones, but it looks great and the animation on Lara is top-notch. The storyline is pretty interesting and very dark, exploring her journey from gentle young woman to bloodthirsty killing machine. Well, the story elements explore her trauma and shock at having to kill to survive, but it obviously doesn’t affect her too much because by the end of the game you’ve murdered several hundred people. Such are the concessions to interesting gameplay I suppose, but it does create a bit of a disconnect.

There were a few moments of intense frustration and boredom for me. These weren't directly the fault of the game, but the result of years of conditioning. Most of the time my wife was playing the game and I was watching. Every so often we'd come into a large open area with loads of buildings and things to climb around. After ten minutes of viciously killing the local inhabitants, the next hour was spent meticulously exploring every single corner of every single part of the map, in search of crates of salvage, ancient relics or log books. It must be an obsessive compulsive thing learnt over her previous fifteen years of playing Final Fantasy games.

“OK, we must be done here, are you going to go rescue your friend now?”

“Hang on, there’s another thing over there I think I can get to…”

“You don’t need any more stuff.”

“But… need to get the things…”

“…I’m going to do the washing up, call me when you’re ready…”

The best part of the game in this regard is the Survival Instincts feature: press a button and everything you can see that can be interacted with glows yellow, which I feel should have sped up the exploration process a lot more than it actually seemed to.

Overall though, I highly recommend giving Tomb Raider a go. Interesting story, satisfying combat and very pretty.

Bioshock Infinite

I’m currently undecided about Bioshock Infinite, although it would appear that I’m in the minority. We both loved the first game – the art deco styling; the creepy atmosphere; the scary enemies; all came together to make a memorable experience. I don’t remember much about how it actually played, and as the general opinion is that Infinite is basically the same gameplay-wise, I think I may have a case of rose-tinted glasses. Because I’m just not having much fun actually playing the game.

We’re maybe three-quarters of the way through. It looks nice enough (we preferred the aesthetics of the first game, but that’s just personal preference), the storyline is getting quite interesting, and your companion character brings a lot of life to the game when you meet her. But far too often the game falls into that all-too-familiar and ludicrous situation of:

“I need to rescue Elizabeth, she’s been seized by the baddies again. Hmm, she can wait, I need to finish searching all of these bins for loose change and scraps of food. Oh, a few more random rooms. I really want to get on with the story, but if I don’t thoroughly search them I might miss a big pile of coins or an infusion or a good piece of gear…”

Nothing breaks the flow of a game more for me than having to take time out from saving the world to scavenge for junk, search for trinkets, or steal from random people’s houses. It’s not so bad if it’s just optional extras (e.g. relics and audio extracts in Tomb Raider), but in Bioshock it’s almost the entire progression mechanic. Upgrading your guns and powers requires money, which is primarily obtained from scavenging, and the health/shield/salt upgrade infusions are usually tucked out of the way behind closed doors. All of which means that you’re ill-advised to ignore the quest for loot.

Overall, it’s still a pretty good game I think. It’s very old-school in its mechanics (particularly the combat, which feels really dated), but it’s worth playing to see how a companion character can be done right.

So you should probably get it. But play Tomb Raider first.

Apr 022013
 

Last time I spoke about specular lighting, which combined with diffuse and ambient from the previous article means we now have a good enough representation of real-world lighting to get some nice images.

However, this lighting isn’t very detailed. Lighting calculations are based on the orientation of the surface, and the only surface orientation information we have is the normals specified at each vertex. This means that the lighting will always blend smoothly between the vertices, because the normals are interpolated. Drawing very detailed surfaces in this manner would need a huge number of vertices, which would be slow to draw. It would be better to vary the lighting across each polygon, and for this we can use normal maps.

Normal maps

This is an example of what a normal map looks like:

Example of a normal map

A normal map looks like a bluey mess, but it makes sense when you understand what the colours mean. What we’re doing with a normal map is storing the orientation of the surface (the surface normal) at each pixel. Remember that the normal is a vector that points directly away from the surface.  A 3D vector is made up of three coordinates (X, Y and Z), and coincidentally our textures also have three channels (red, green and blue). What this means is that we can use each colour channel in the texture to store one of the components of the normal at each pixel.

We need the normal map to work no matter the orientation of the polygon that is using it. So if the normal map is mapped onto the floor, or a ceiling, or wrapped around a complex shape, it still has to provide useful information. Therefore it’s no use encoding the direction of the normal directly in world-space (otherwise you couldn’t reuse the map on a different bit of geometry). Instead, it is encoded in yet another ‘space’ called tangent space. This is a 3D coordinate system where two of the axes are the U and V axes that texture coordinates are specified in. The third axis is the surface normal.

How tangent space relates to UV coordinates

Encoding a normal in this space is straightforward. The red channel in the texture corresponds to the distance along the U axis, the green channel is the same for the V axis, and the blue channel is the distance along the normal. The distances along U and V can go from -1 to 1 (as we’re encoding a unit vector), so a texture value of 0 represents -1, and 255 (the maximum texture value if we’re using an 8-bit texture) represents +1. Because a surface normal can never face backwards from the surface, the blue channel only needs to encode distances from 0 to 1.

Now we can understand what the colours in the normal map mean. A pixel with lots of red is facing right, and with little red is facing left. A pixel with lots of green is facing up, and with little green is facing down. Most pixels have a lot of blue, which means they’re mainly facing out along the normal (as you’d expect, as this is the average surface orientation).

Shading with normal maps

So now we have a normal map, and it’s mapped across our object in world space. We can read the texture at each pixel to give us a tangent space normal, but the lighting and view directions are specified in world space. We need to get all of these vectors into the same space, and for this we need a matrix that converts between tangent and world space. Luckily, that’s fairly easy to get.

Matrices

First a quick diversion into rotation matrices. I’ve talked about 4×4 transform matrices for transforming from one 3D space to another, but the top left 3×3 part of the matrix is all you need to perform just the rotation. Because we only want to rotate the normal we don’t need to apply any translation, so we just need a rotation matrix.

Green is the translation part of a transform matrix. Red is the rotation part, made up of the X, Y and Z axes of the new space.

Rotation matrices between coordinate systems with three perpendicular axes (i.e. the usual ones we use in graphics) have a couple of nice properties. The first is that the columns are just the original axes but transformed by the rotation we’re trying to represent, i.e. the first column is where the X axis would be after the rotation, the second column where the Y axis would be, and the third column where the Z axis would be.

The second nice property is that the inverse of a rotation matrix is its transpose. This means that the rows represent the three axes with the inverse rotation applied. If you're interested, there's a more in-depth explanation of rotation matrices here.

Tangent to world space

So how does this help us? We need to build a rotation matrix to convert between tangent space and world space. The first thing to do is to add a couple more vertex attributes – these are the tangent and the binormal vectors. These are similar to the normal, but they define the other two axes in tangent space. Remember that these are defined by how the UV texture coordinates are mapped onto the geometry. Your modelling package should be able to export these vertex attributes for you.

Now, we need to use these to get the light, view and normal vectors into the same space. In this case we’ll transform the view and light directions into tangent space in the vertex shader (although you could instead transform the normals into world space in the pixel shader, if that makes your shader simpler).

As shown above, the tangent-to-world matrix is just the 3×3 matrix where the columns are the Tangent (X axis in the normal map), Binormal (Y axis) and Normal (Z axis), in that order. To get the world-to-tangent matrix, just transpose it so the rows are Tangent, Binormal and Normal instead:

World to tangent space matrix, made up of the tangent, binormal and normal vectors in world space

Then you can use this to transform your light and view vectors! In case it helps, here’s some vertex shader HLSL code to do all this:

// Transform the normal, tangent and binormal into world space. ModelViewMtx
// is a 4x4 matrix, so set w to 0 so that the translation isn't applied.
float4 normalWorld = float4(input.normal, 0.0f);
normalWorld.xyz = normalize(mul(normalWorld, ModelViewMtx).xyz);
float4 tangentWorld = float4(input.tangent, 0.0f);
tangentWorld.xyz = normalize(mul(tangentWorld, ModelViewMtx).xyz);
float4 binormalWorld = float4(input.binormal, 0.0f);
binormalWorld.xyz = normalize(mul(binormalWorld, ModelViewMtx).xyz);

// Build the world-to-tangent matrix (transpose of tangent-to-world).
float3x3 worldToTangentSpace =
    float3x3(tangentWorld.xyz, binormalWorld.xyz, normalWorld.xyz);

// Transform the light and view directions.
output.lightDirTangentSpace = mul(worldToTangentSpace, lightDirWorldSpace);
output.viewDirTangentSpace  = mul(worldToTangentSpace, viewDirWorldSpace);

In the pixel shader you read the normal map just like any other texture. Remap the X and Y components from the (0, 1) range back to the (-1, 1) range, and then perform the lighting calculations as usual using this normal and the transformed view and light vectors.
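
To make that concrete, here's a minimal pixel shader sketch following on from the vertex shader above. The NormalMap sampler, the VertexOutput input structure and the diffuse-only lighting are assumptions for illustration — your real shader will use whatever lighting model you need:

// Read the normal map, decode the tangent space normal, and light using the
// tangent space light direction computed in the vertex shader.
sampler2D NormalMap;

float4 NormalMapPS(VertexOutput input) : COLOR0
{
    // Red and green are remapped from (0, 1) back to (-1, 1); blue already
    // stores 0 to 1 directly, as described above.
    float3 normal = tex2D(NormalMap, input.uv).rgb;
    normal.xy = normal.xy * 2.0f - 1.0f;
    normal = normalize(normal);

    // The light direction was transformed into tangent space in the vertex
    // shader, so renormalise it after interpolation and light as usual.
    float3 lightDir = normalize(input.lightDirTangentSpace);
    float diffuse = saturate(dot(normal, lightDir));

    return float4(diffuse, diffuse, diffuse, 1.0f);
}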

Here’s my test scene with the normal map on the floor:

And here’s a screenshot with the nicer shading and shadows turned on:

A caveat

One last technical point – the properties of rotation matrices I talked about only hold for purely rotational matrices, between coordinate spaces where the axes are all at right angles. Due to unavoidable texture distortions when they’re mapped to models, this usually won’t be the case for your tangent space. However, it’ll be near enough that it shouldn’t cause a problem except in the most extreme cases. If you renormalise all your vectors in the vertex and pixel shaders then it should all work out alright…

Next time should be a bit simpler when I’ll be talking about environment mapping!