Sep 23 2013
 

I was pleased with how the photos of my miniatures came out, so this is my cheap, cheerful and minimal-effort method of photography.

What you’ll need

  • A camera – any basic compact will do, as long as it has a manual settings mode.
  • Tripod – or at least some way of keeping the camera still.
  • A sheet of A1-sized thin white card – I found a sheet for £1.75 in Hobbycraft.
  • A desk lamp.
  • A daylight bulb – these shine white instead of the yellow of normal bulbs and cost about £10 at Hobbycraft. (And if you’re painting under a standard yellow lamp, just buy one of these now – you won’t regret it!)
  • Some other form of light, either another lamp or a bright room.

The most important thing on this list (apart from the camera) is the daylight bulb. Balance the camera precariously, improvise a white sheet, whatever, but without a good bulb your photos will look horrible!

 

Room setup

Put the card on a flat surface and lean the back on something to make a nice curved surface. Put your miniatures on the card. Set up the lamp out front, shining more-or-less straight forwards. Put your camera on the tripod, maybe a foot or so from the miniature. Something like this (not ideal because some of my white card has been cut off and used for something…):

You want to minimise shadows, which is why you put the lamp in front and not off to the side. Multiple lamps from different angles help here, as does having a big window nearby and a bright day. But don’t use direct sunlight or it’ll cast shadows again. If you’re serious you can make a light box like this one, which should give better results than I’ve managed to get (but it requires a non-negligible amount of effort to make, plus storage space and more lamps than I have).

 

Camera setup

This style of photography is basically the ideal environment in every possible way – you have a completely static scene, full control of everything in it, controllable lighting and as much time as you need.

First put your camera in macro mode (little flower icon) – this enables it to focus up close, and we’re very close here. Then put it in manual mode and use these settings:

  • F-Stop – this controls the aperture size. Set it as high as it will go. This makes the aperture as small as possible, which has the effect of making the in-focus depth range as large as possible (you can see why from the diagram in my depth of field post). This will help keep both the front and the back of your miniature in focus in your photo.
  • ISO – this controls the sensitivity of the light sensor. Set this as low as possible. A low ISO means that a lot of light has to reach each pixel before it registers, which reduces the noise in your image (more light means that the relative random differences between neighbouring pixels are smaller).
  • Shutter speed/exposure – change this until your photos come out at the right brightness. With the tiny aperture and low ISO you’ll need a relatively long exposure, maybe 1/10th second.
  • Delay mode – you want to set a delay between pressing the shutter release button and it taking the photo. This is because you’ll move the camera slightly when pressing it which will blur your image. My camera has a two second delay option which is more convenient than the standard ten second delay.

Then just snap away, adjusting the shutter speed until you get the right exposure. I prefer to slightly overexpose rather than have it too dark, to get a nice white background and bright colours. You can tell I’ve taken photos at different times without a light box because the background is whiter in some images than others, but I can live with that.

And this is a shot I just quickly took. The guy on the right is a little blurry because the aperture doesn’t go particularly small on my camera, at least in macro mode. That could be fixed by moving the camera backwards a bit to reduce the relative depths. There also isn’t much ambient light today so the background is quite blue, but again that would be fixed with a light box.

Sep 16 2013
 

I’ve got a reasonably well painted Warhammer 40K Chaos Marine army, so I thought I’d show off a few pictures (click through for larger images). I’ve not really found the time to do any painting for over a year now (too much blogging, programming, gaming etc.) but some of the new Chaos Marine miniatures look really nice, so I like to think I’ll get round to painting some eventually.

Starting off with my first HQ unit, an old Daemon Prince with wings. The skin was the first larger painted area that I managed to get a fairly smooth finish on.

This is my other large HQ unit, a Chaos Lord on a Manticore (which I ran as a counts-as Daemon Prince because it didn’t feel right to have two Princes in the same army). It’s definitely my favourite and best painted model. I don’t really like the pose of the original model with both arms lunging forwards, but rotating the right arm down gave it a really dynamic feel of pushing its way through the ruins. Trim the rocks off the back foot to leave it free standing, swap out the Lord’s arms for 40k arms and job done.

The skin of the Manticore turned out really well, and it’s the first (and possibly only) time that I managed to get some really smooth layered blending working. I’d like to work on my smooth layering more in the future (although I suspect this is just a function of time spent – I remember it taking a whole afternoon to just do the skin).

I’d wanted a set of Lightning Claw Terminators since about 1995, and these guys all look suitably menacing.

And they need a Land Raider to ride around in. It’s amazing how much difference a bit of paint chipping makes to the apparent realism of the model. I think the mud effect works well – that just involves getting some really watered down brown paint and sploshing it all over the bottom section with a big brush, not technical at all (and it’s slightly worrying when you do it, potentially ruining your careful painting underneath…).

The Thousand Sons give some opportunity to get a bit of colour into the army. Blue seems to be a really forgiving colour for blending and highlighting, I don’t know why. I try to theme all of my Rhinos so you know what squad they’re carrying, so I added some warp-flame and lanterns (some spare Fantasy bits I had lying around).

The Berserkers were some of the first models I assembled and painted since coming back to the hobby as an adult, so some of the poses are a bit weird. Never underestimate the value of good posing – more important than the paint job for getting good looking models.

Some normal marines. The Black Legion colour scheme is fairly quick to paint – the most time-consuming bit is all the silver and gold edging (straight on top of a black spray undercoat).

I ‘borrowed’ this idea for the Vindicator from one I saw online a long time ago – the body pile from the Corpse Cart kit fits almost perfectly on the dozer blade, and then pack all the holes with rubble and sand.

The Daemonettes were a vaguely successful attempt at painting skin and brighter colours, but they look fairly messy close up (I’d like to try another batch one day). The purple claws came out really well, which I think was mainly down to drybrushing.

A couple of Obliterators. It was all the rage to have loads of these guys, but the metal models are such a pain to stick together that I couldn’t face doing any more after the first two (plus they’re really expensive).

I accidentally ended up with a few too many Dreadnoughts (currently up to six, I believe) so I thought I’d better paint one. Nothing special, but it looks fine on the table.

I did a few Orks early on to make a change from painting black. I like to keep them nice and (overly) bright – I’ve seen darker green Orks on the tabletop and it becomes quite hard to make out any of the details.

More recently I felt like trying something a bit different, so I tried one of the old Inquisitor models (which are twice the size of the normal miniatures). I’m quite pleased by how it turned out, apart from the green on the back of the cloak which isn’t as smooth as I was intending (hence no pictures of the back!).

 

Sep 11 2013
 

In this previous post I talked about Bokeh depth of field, where it comes from and why it is different to the type of fake depth of field effects you get in some (usually older) games. In this slightly more technical post I’ll be outlining a nice technique for rendering efficient depth of field, which I use in my demo code, taken from this EA talk about the depth of field in Need For Speed: The Run.

The main difference is the shape of the blur – traditionally, a Gaussian blur is performed (a Gaussian blur is a bell-shaped blur curve), whereas real Bokeh requires a blur into the shape of the camera aperture:

Bokeh blur on the left, Gaussian on the right

The first question you might be asking is why Gaussian blurs are used instead of more realistic shapes. It comes down to rendering efficiency, and things called separable filters. But first you need to know what a normal filter is.

Filters

You’re probably familiar with image filters from Photoshop and similar – when you perform a blur, sharpen, edge detect or any of a number of others, you’re running a filter on the image. A filter consists of a grid of numbers. Here is a simple blur filter:

\left(\begin{array}{ccc}\frac{1}{16}&\frac{2}{16}&\frac{1}{16}\\\frac{2}{16}&\frac{4}{16}&\frac{2}{16}\\\frac{1}{16}&\frac{2}{16}&\frac{1}{16}\end{array}\right)

For every pixel in the image, this grid is overlaid so that the centre number is over the current pixel and the other numbers are over the neighbouring pixels. To get the filtered result for the current pixel, the colour under each grid element is multiplied by the number over it, and the results are all added up. So for this particular filter, the result for each pixel is 4 times the original colour, plus twice each neighbouring pixel, plus one of each diagonally neighbouring pixel, all divided by 16 so the weights sum to one. Or more simply: blend some of the surrounding eight pixels into the centre one.

As another example, here is a very basic edge detection filter:

\left(\begin{array}{ccc}-1&-1&-1\\-1&8&-1\\-1&-1&-1\end{array}\right)

On flat areas of the image the +8 of the centre pixel will cancel with the eight surrounding -1 values and give a black pixel. However, along the brighter side of an edge, the values won’t cancel and you’ll get bright output pixels in your filtered image.
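To make the mechanics concrete, here’s a minimal sketch of a filter being applied, written in Python with NumPy – this is my own illustration rather than code from any real renderer, and it does the convolution the slow, obvious way:

import numpy as np

def apply_filter(image, kernel):
    """Apply a filter the long way round: for every pixel, overlay the
    kernel grid, multiply each coefficient by the pixel beneath it,
    and sum the results."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Repeat the edge pixels so the kernel can overhang the borders.
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

# The blur filter from above...
blur = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]]) / 16.0

# ...and the edge detection filter.
edges = np.array([[-1, -1, -1],
                  [-1,  8, -1],
                  [-1, -1, -1]])

image = np.random.rand(64, 64)   # stand-in greyscale image
blurred = apply_filter(image, blur)
outlines = apply_filter(image, edges)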

You can find a bunch more examples, and pictures of what they do, over here.

Separable filters

These example filters are only 3×3 pixels in size, but they need to sample from the original image nine times for each pixel. A 3×3 filter can only be affected by the eight neighbouring pixels, so will only give a very small blur radius. To get a nice big blur you need a much larger filter, maybe 15×15 for a nice Gaussian. This would require 225 texture fetches for each pixel in the image, which is very slow!

Luckily some filters have the property that they are separable. That means you can get the same end result by applying a one-dimensional filter twice, first horizontally and then vertically. So first a 15×1 filter is used to blur horizontally, then the filter is rotated 90 degrees and the result is blurred vertically as well. Each pass only requires 15 texture lookups per pixel (as the filter only has 15 elements), giving a total of 30, yet it produces exactly the same result as performing the full 15×15 filter, with its 225 texture lookups, in one pass.

Original image / horizontal pass / both passes
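As a quick check of that claim, here’s a small NumPy sketch (again my own illustration, reusing apply_filter and image from the previous snippet): a 2D Gaussian-style kernel is the outer product of a 1D kernel with itself, and two 1D passes reproduce the single 2D pass exactly away from the image borders.

g = np.array([1, 4, 6, 4, 1]) / 16.0    # 5-tap 1D kernel
full_2d = np.outer(g, g)                # the equivalent 5x5 kernel: 25 taps

# Two 1D passes: horizontal first, then the same kernel rotated 90 degrees.
two_pass = apply_filter(apply_filter(image, g[np.newaxis, :]),
                        g[:, np.newaxis])
one_pass = apply_filter(image, full_2d)  # the full 5x5 filter in one pass

# Identical in the interior (only the edge padding differs at borders).
assert np.allclose(two_pass[2:-2, 2:-2], one_pass[2:-2, 2:-2])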

Unfortunately only a few special filters are separable – there is no way to produce the hard-edged circular filter at the top of the page with a separable filter, for example. A size n blur would require the full n-squared texture lookups, which is far too slow for large n (and you need a large blur to create a noticeable effect).

Bokeh filters

So what we need to do is find a way to use separable filters to create a plausible Bokeh shape (e.g. circle, pentagon, hexagon etc). Another type of separable filter is the box filter. Here is a 5×1 box filter:

\left(\begin{array}{ccccc}\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}\end{array}\right)

Apply this in both directions and you’ll see that it just turns a pixel into a 5×5 square (a quick check of this follows below; and in practice we’ll use something a lot bigger than 5×5). Unfortunately you don’t get square Bokeh (well, you might, but it doesn’t look nice), so we’ll have to go further.
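The quick check: the two box passes are equivalent to a single filter equal to the outer product of the box with itself – a uniform square (my NumPy illustration again):

box = np.full(5, 1 / 5)       # the 5x1 box filter
print(np.outer(box, box))     # a uniform 5x5 grid of 1/25 - a square blur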

One thing to note is that you can skew your square filter and keep it separable:

Then you could perhaps do this three times in different directions and add the results together:

And here we have a hexagonal blur, which is a much nicer Bokeh shape! Unfortunately doing all these individual blurs and adding them up is still pretty slow, but we can do some tricks to combine them together. Here is how it works.

First pass

Start with the unblurred image.

Original image

Perform a blur directly upwards, and another down and left (at 120°). You use two output textures – into one write just the upwards blur:

Output 1 – blurred upwards

Into the other write both blurs added together:

Output 2 – blurred upwards plus blurred down and left

Second pass

The second pass uses the two output images from above and combines them into the final hexagonal blur. Blur the first texture (the vertical blur) down and left at 120° to make a rhombus. This is the upper left third of the hexagon:

Intermediate 1 – first texture blurred down and left

At the same time, blur the second texture (vertical plus diagonal blur) down and right at 120° to make the other two thirds of the hexagon:

Intermediate 2 – second texture blurred down and right

Finally, add both of these blurs together and divide by three (each individual blur preserves the total brightness of the image, but the final stage adds together three lots of these – one in the first input texture and two in the second input texture). This gives you your final hexagonal blur:

Final combined output
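Here’s the whole two-pass structure in NumPy form – a sketch of my own with a fixed blur radius and nearest-pixel sampling, just to show the data flow (the real thing is a pixel shader with proper texture filtering, plus the per-pixel blur sizes described below):

import numpy as np

def directional_blur(image, direction, radius):
    """A 1D box blur along an arbitrary direction: average `radius`
    samples stepped along a unit vector (nearest-pixel sampling)."""
    h, w = image.shape
    dx, dy = direction
    out = np.zeros(image.shape)
    for i in range(radius):
        ox, oy = round(i * dx), round(i * dy)
        ys = np.clip(np.arange(h) + oy, 0, h - 1)
        xs = np.clip(np.arange(w) + ox, 0, w - 1)
        out += image[np.ix_(ys, xs)]   # shift the whole image and accumulate
    return out / radius

RADIUS = 16
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))  # for the 120 degree steps
image = np.random.rand(128, 128)

# First pass: blur straight up, and down-and-left at 120 degrees; two outputs.
up = directional_blur(image, (0, -1), RADIUS)
output1 = up                                             # upwards blur only
output2 = up + directional_blur(image, (-c, s), RADIUS)  # both added together

# Second pass: combine the two textures into the hexagon.
inter1 = directional_blur(output1, (-c, s), RADIUS)  # upper-left third
inter2 = directional_blur(output2, (c, s), RADIUS)   # other two thirds
hexagon = (inter1 + inter2) / 3.0  # three brightness-preserving blurs summed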

Controlling the blur size

So far in this example, every pixel has been blurred into the same sized large hexagon. However, depth of field effects require different sized blurs for each pixel. Ideally, each pixel would scatter colour onto surrounding pixels depending on how blurred it is (and this is how the draw-a-sprite-for-each-pixel techniques work). Unfortunately we can’t do that in this case – the shader is applied by drawing one large polygon over the whole screen so each pixel is only written to once, and can therefore only gather colour data from surrounding pixels in the input textures. Thus for each pixel the shader outputs, it has to know which surrounding pixels are going to blur into it. This requires a bit of extra work.

The alpha channel of the original image is unused so far. In a previous pass we can use the depth of that pixel to calculate the blur size, and write it into the alpha channel. The size of the blur (i.e. the size of the circle of confusion) for each pixel is determined by the physical properties of the camera: the focal distance, the aperture size and the distance from the camera to the object. You can work out the CoC size by using a bit of geometry which I won’t go into. The calculation looks like this if you’re interested (taken from the talk again):

CoCSize = z * CoCScale + CoCBias
CoCScale = (A * focalLength * focalPlane * (zFar - zNear)) / ((focalPlane - focalLength) * zNear * zFar)
CoCBias = (A * focalLength * (zNear - focalPlane)) / ((focalPlane - focalLength) * zNear)

[A is aperture size, focal length is a property of the lens, focal plane is the distance from the camera that is in focus. zFar and zNear are from the projection matrix, and all that stuff is required to convert post-projection Z values back into real-world units. CoCScale and CoCBias are constant across the whole frame, so the only calculation done per-pixel is a multiply and add, which is quick. Edit – thanks to Vincent for pointing out the previous error in CoCBias!]

In the images above, every pixel is blurred by the largest amount. Now we can have different blur sizes per-pixel. Because for any pixel there could be another pixel blurring over it, a full sized blur must always be performed. When sampling each pixel from the input texture, the CoCSize of that pixel is compared with how far it is from the pixel being shaded, and if it’s bigger then it’s added in. This means that in scenes with little blurring there are a lot of wasted texture lookups, but this is the only way to simulate pixel ‘scatter’ in a ‘gather’ shader.

Per-pixel blur size – near blur, in focus and far blur

Another little issue is that blur sizes can only grow by a whole pixel at a time, which introduces some ugly popping as the CoCSize changes (e.g. when the camera moves). To reduce this you can soften the edge – for example, if sampling a pixel 5 pixels away, blend in its contribution as the CoCSize goes from 5 to 4 pixels.
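Sketching the gather loop in the same NumPy style (a 1D slice for clarity – this is my paraphrase of the technique, with coc_size standing in for the per-pixel blur radius read back from the alpha channel, and a simple divide-by-total-weight normalisation):

import numpy as np

def gather_blur_1d(color, coc_size, max_radius):
    """Gather-style blur: each output pixel sums every neighbour whose
    own CoC is big enough to have scattered onto it, with a one-pixel
    soft edge so contributions fade in rather than popping."""
    n = len(color)
    out = np.zeros(n)
    for x in range(n):
        total, weight = 0.0, 0.0
        for offset in range(-max_radius, max_radius + 1):
            s = min(max(x + offset, 0), n - 1)   # clamp to the image
            dist = abs(offset)
            # Full weight while the sample's CoC exceeds the distance,
            # fading linearly over the final pixel of its radius.
            w = float(np.clip(coc_size[s] - dist + 1.0, 0.0, 1.0))
            total += color[s] * w
            weight += w
        out[x] = total / max(weight, 1e-6)   # normalise the contributions
    return out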

Near and far depth of field

There are a couple of subtleties with near and far depth of field. Objects behind the focal plane don’t blur over things that are in focus, but objects in front do (do an image search for “depth of field” to see examples of this). Therefore, when sampling to see if other pixels are going to blur over the one you’re currently shading, make sure the sampled pixel is either in front of the focal plane (its CoCSize is negative), or that both it and the currently shaded pixel are behind the focal plane and the sampled pixel isn’t too far behind (in my implementation ‘too far’ is more than twice the CoCSize).

[Edit: tweaked the implementation of when to use the sampled pixel]
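In code, the acceptance test might look something like this – my reading of the rule above, assuming the convention that a negative CoCSize means the pixel is in front of the focal plane:

def sample_contributes(sample_coc, shaded_coc, dist):
    """Decide whether a sampled pixel should blur over the shaded one."""
    if abs(sample_coc) < dist:
        return False   # its blur circle doesn't reach this pixel at all
    if sample_coc < 0:
        return True    # in front of the focal plane: blurs over everything
    # Otherwise both pixels must be behind the focal plane, and the
    # sample must not be too far behind ('too far' being twice the CoC).
    return shaded_coc > 0 and sample_coc < 2.0 * shaded_coc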

This isn’t perfect because objects at different depths don’t properly occlude each others’ blurs, but it still looks pretty good and catches the main cases.

And finally, here’s some shader code.