Oct 21 2013

The screenshots in my graphics posts are from my raymarching renderer, and mainly consist of boxes and spheres. That’s a bit boring so I thought I’d have a look for something more interesting, and came across the Syntopia blog talking about raymarching 3D Mandelbulb fractals. I plugged the code into my renderer and it worked a treat. Here’s a video I rendered out from it:

It’s not exactly artistically shot (just a programmatically spinning camera and light) but it looks quite nice. I may set up some proper paths and things at some point and do something nicer. (Here is a much nicer animation I found, which was rendered with particles in Lightwave and apparently took nearly an hour per frame to render. Mine took about 45 mins total.)

Running the code yourself

If you fancy having a play around (or just want to make your GPU overheat) you can download the program. I make no guarantee that the code runs on your system (I’ve only tried it on my Radeon HD7950, but it should work on most DX11 cards), and use this software at your own risk etc.

Download FractalRender.zip

This has the .exe, a couple of textures and two shader files you can edit. The shader files are compiled on load, so you can modify the shaders (in the .fx files, you can just use Notepad or something to edit them) and re-run the .exe. Compile errors should pop up in a box.

Controls

WASD to move, hold the left mouse button to turn and hold the right mouse button to move the sun. Press H to toggle the full help text with the rest of the controls (sun size, depth of field etc).

Shaders

This demo is using a deferred renderer (for no particular reason) so there are two separate shaders you can play with. The fractal generation code is in sdf.fx; for every pixel it calculates the depth, colour, normal, sun shadow and ambient occlusion and writes them out into buffers. deferredshader.fx then applies the lighting. Depth of field, antialiasing and lens flare are applied afterwards.
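For reference, a deferred pass like this writes its per-pixel results to several render targets at once. Here's a minimal sketch of what that output struct could look like in HLSL; the actual buffer layout in sdf.fx may well pack things differently.

```hlsl
// A sketch of a multiple-render-target output struct for a deferred pass.
// The real layout in sdf.fx may differ.
struct GBufferOutput
{
    float4 colourAO : SV_Target0; // rgb = surface colour, a = ambient occlusion
    float4 normalSh : SV_Target1; // xyz = world-space normal, w = sun shadow term
    float  depth    : SV_Target2; // distance along the view ray to the surface
};
```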

deferredshader.fx is the simpler of the two. The lighting setup is pretty much copied from this Iñigo Quilez article for the sun, sky and indirect lighting. A physically-based specular term is added (more complex but better looking than the simple specular I wrote about before) and then some fog is applied. Light colours and specular gloss can easily be edited.
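As a rough illustration of that sun/sky/indirect setup, here's a sketch along the lines of the article. The colours and weights below are illustrative rather than the demo's actual values, and the specular and fog steps are left out.

```hlsl
// A sketch of a sun/sky/indirect lighting rig in the spirit of the article.
// Colours and weights here are illustrative, not the values used in the demo.
float3 ApplyLighting(float3 albedo, float3 normal, float3 sunDir,
                     float sunShadow, float ambientOcclusion)
{
    float3 sunColour      = float3(1.64, 1.27, 0.99); // warm key light
    float3 skyColour      = float3(0.16, 0.20, 0.28); // cool fill from above
    float3 indirectColour = float3(0.40, 0.28, 0.20); // bounce light from the ground

    // Key: the sun, attenuated by the raymarched soft shadow term.
    float sun = saturate(dot(normal, sunDir));
    // Fill: the sky dome, strongest on upward-facing surfaces.
    float sky = saturate(0.5 + 0.5 * normal.y);
    // Bounce: a fake light coming back from the opposite horizontal direction.
    float ind = saturate(dot(normal, normalize(sunDir * float3(-1.0, 0.0, -1.0))));

    float3 lighting = sunColour * sun * sunShadow
                    + skyColour * sky * ambientOcclusion
                    + indirectColour * ind * ambientOcclusion;

    return albedo * lighting;
}
```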

sdf.fx controls the geometry generation. The SDF() function is the signed distance function, and takes a world position and returns the distance to the nearest surface. There are a few alternative functions you can try (the others are commented out). SDFMandelbulbAnimated() generates the animated Mandelbulb in the video and is the code from the Syntopia blog. SDFMandelbulbStatic() is an optimised function for generating the static power-8 Mandelbulb, using the optimised code from here. As well as that there are a couple of other functions for fun – some infinite wibbly spheres and the box and two spheres from my earlier videos.
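To give an idea of what a signed distance function looks like, here's a toy example in the same style. These are the standard sphere and box distance formulas, not the actual code from sdf.fx, and the function names are made up for the example.

```hlsl
// Toy signed distance functions, in the same spirit as the ones in sdf.fx.
float SDFSphere(float3 p, float3 centre, float radius)
{
    // Distance to the sphere surface: negative inside, positive outside.
    return length(p - centre) - radius;
}

float SDFBox(float3 p, float3 halfExtents)
{
    // Exact distance to an axis-aligned box centred at the origin.
    float3 d = abs(p) - halfExtents;
    return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0);
}

// A scene is built by combining objects; min() picks whichever surface is closest.
float SDFToyScene(float3 worldPos)
{
    float sphere = SDFSphere(worldPos, float3(0.0, 1.0, 0.0), 1.0);
    float box    = SDFBox(worldPos - float3(2.5, 0.5, 0.0), float3(0.5, 0.5, 0.5));
    return min(sphere, box);
}
```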

After using the SDF to find the surface intersection point it calculates the soft shadowing from the sun (which I described here). This can be really slow so there is an option to turn it off completely at the top of the file if your frame rate is too low, or you could tweak the iterations and max radius down a bit. It also calculates an ambient occlusion term using five rays at 45° from each other. This can also be slow, and the same applies.
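For reference, SDF-based ambient occlusion along a single direction usually looks something like the sketch below. This is a simplified version of the common technique, not the exact code in sdf.fx; the demo combines five such rays spread around the normal.

```hlsl
// A simplified sketch of SDF-based ambient occlusion along one direction.
// SDF() is the signed distance function defined earlier in sdf.fx.
float AmbientOcclusionRay(float3 surfacePos, float3 dir)
{
    float occlusion = 0.0;
    float weight = 0.5;
    for (int i = 1; i <= 5; i++)
    {
        // Step away from the surface and ask the SDF how much open space there is.
        float stepDist = 0.1 * i;
        float sdfDist  = SDF(surfacePos + dir * stepDist);
        // If the SDF distance is smaller than how far we stepped, something is nearby.
        occlusion += weight * (stepDist - sdfDist);
        // Nearby samples matter more than distant ones.
        weight *= 0.5;
    }
    return saturate(1.0 - occlusion);
}
```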

This is just a quick overview. If you have any specific questions, leave a comment.

Finally, here are a few screenshots I took. Macro photography of fractals gives some quite nice results. Enjoy!

[Screenshots: fractal, fractal3, fractal4]

Oct 04 2013

High Dynamic Range (HDR) rendering is a way of coping with the large possible range of brightness in a scene, and how to render that to a screen. There are a few parts to this which are all required to get good results. In this part I’ll cover the initial rendering, tone mapping and dynamic exposure control. Next time I’ll cover bloom and throw in some lens flare for good measure.

What is HDR rendering?

Do an image search for HDR photography and you’ll see loads of weird and unnatural looking pictures. That’s not what HDR rendering is at all, although it’s trying to tackle the same issue.

The human eye has a dynamic range of around 1,000,000,000 : 1. That means we can see in starlight, and we can see in light a billion times brighter such as a sunny day. At any given time we can see contrast of up to 10,000 : 1 or so, but our eyes adjust so that this range is useful for whatever we’re looking at. In a bright scene we can’t perceive much detail in dark shadow, and at night a bright light will blind us.

LCD displays have contrast ratios of around 1000 : 1, so a white pixel is only a thousand or so times brighter than a black pixel (depending on the screen). This means we can’t display the billion-to-one contrast of the real world directly on a screen, but it does map pretty well to the range of the eye at any given time.

Rendering in HDR – floating point render targets

The final image that you output to the screen has 24 bits per pixel – 8 bits of accuracy for each of red, green and blue. Renderers traditionally draw to a render target with the same accuracy, so that the result can be displayed directly. This means that only 256 levels of brightness can be stored for each colour channel at any pixel. 256 levels are enough that you can’t really see any banding between the colours when looking at a smooth gradient on the screen, but it’s not enough accuracy to capture a scene with a lot of dynamic range.

Time for a contrived example. Here is a photo of a lamp on my window sill:

The exposure on this photo was 1/20 seconds, which let in enough light that you can see the detail in the trees. However, the lamp itself is pure white and you can’t see any detail. There is no way to tell if the lamp is fairly bright, or really really bright. Let’s see another photo with the exposure reduced to 1/500 seconds.

You can start to see some detail in the lamp now, but you can only just see the outline of the trees. The bulb is still white, even with this short exposure, so you can tell that it’s really bright. One more with the exposure set to 1/1300 seconds:

Now we can see more of the detail in the bulb, and the trees have almost completely disappeared. The camera won’t go any quicker and the bulb is still white, so it must be really really bright.

So you can see that with only 256 intensity levels, you have to lose information somewhere. Either you have a long exposure so you can see darker objects but lose all detail in the bright areas, or a short exposure where you can see detail in bright objects at the expense of darker areas.

To get around this we need to use a higher precision render target with a lot more than 256 intensity levels. The ideal format currently is 16-bit floating point for each colour channel (which if you’re interested can represent values from 0.00006 to 65000 with 11 bits of accuracy in the mantissa). This is more than enough precision for accurately drawing both moonlit nights and blazing days at the same time.  The downsides are that 16-bit render targets require twice as much memory and they’re a bit slower to render into (more data to push around), but on modern hardware, and certainly the next-gen consoles, they’re completely viable to use.

Tone mapping – getting it on the screen

So you’ve rendered your scene into a 16-bit render target. Given that your monitor still wants 8-bit colour values, you need to do some conversion. While your HDR source image is an accurate representation of the world (including all possible light intensities), we need to attempt to simulate what your eyes would actually see so that we can draw it on a screen. This stage is called tone mapping.

Eyes and cameras can adjust to let in more or less light, or be more or less sensitive. On a camera this is controlled by the aperture size, shutter speed and ISO settings, and in the eye you have pupil size and chemical changes in the photoreceptors. This means you can see intensity variation in some small part of the entire dynamic range. Everything darker than this will just look black, and everything brighter will just look white. A tone mapping function is one that can map the entire infinite range of light into a zero-to-one range (and then you multiply by 255 to get values that can be displayed on a screen), while preserving the contrast in the part of the range you’re interested in.

One simple tone mapping operator is the Reinhard operator:

x_{out}=\frac{x}{x+n}

where n is a number that controls the exposure and x is your rendered value. You can see that zero will map to zero, and large values will converge on 1. A larger value chosen for n will make the final image darker (in fact n is the input value that will map to half brightness on your screen).
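In shader code that's a one-liner. A sketch, applied per colour channel (gamma correction ignored for simplicity):

```hlsl
// The Reinhard operator from the formula above, applied per colour channel.
// n is the exposure control value described in the text.
float3 TonemapReinhard(float3 hdrColour, float n)
{
    return hdrColour / (hdrColour + n);
}
```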

Let’s try it on a rendered image. I’ll use one of my demo scenes from before with no extra post-processing so you can see what’s going on. I need to pick a value for n so I’ll try 0.2:

This is over-exposed, so now let’s try n = 1.0:

That’s pretty good, but the sun is still completely white so let’s see what happens with n = 10:

Like with the lamp photos, the ‘exposure’ is now short enough that the sun isn’t just a white blob (it’s still a blob, but you can now make out the slightly yellow colour).

The Reinhard operator isn’t the only option, and in fact it’s not that good. It desaturates your blacks, making them all look grey. Lots of people have tried to come up with something better, and a good one (which I’m using in my own code) is John Hable’s Filmic Tone Mapping which debuted in Uncharted 2 (and which you can read all about here if you want, including the issues with Reinhard).
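For reference, the curve Hable published is usually written like this. The constants below are the commonly quoted values from his Uncharted 2 material; the values used in my demo may well be tweaked from these.

```hlsl
// John Hable's filmic curve, using the commonly quoted Uncharted 2 constants.
static const float A = 0.15;  // shoulder strength
static const float B = 0.50;  // linear strength
static const float C = 0.10;  // linear angle
static const float D = 0.20;  // toe strength
static const float E = 0.02;  // toe numerator
static const float F = 0.30;  // toe denominator
static const float W = 11.2;  // linear white point

float3 FilmicCurve(float3 x)
{
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float3 TonemapFilmic(float3 hdrColour, float exposure)
{
    float3 curved     = FilmicCurve(hdrColour * exposure);
    float3 whiteScale = 1.0 / FilmicCurve(float3(W, W, W)); // normalise so W maps to white
    return saturate(curved * whiteScale);
}
```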

Swapping to Filmic Tone Mapping we get this, where you can see a lot more contrast and saturation in the colours:

Automatic Exposure Control

We can now render our HDR image to a screen, but we’re relying on this magic exposure value. It would be really nice if we could handle this automatically, in the same way as our eyes. In fact we can do something quite similar.

When your eyes see a really bright scene they automatically adjust by closing the pupil to let in less light. In a dark environment the pupil reopens, but more slowly. To simulate this we first need to know how bright our rendered scene is. This is easy to do – we just add up all the pixels in the image and divide by the number of pixels, which gives us the average brightness.

[To be technical, you should actually use the log-average luminance – take the log of the luminance of each pixel, average those, and then exponentiate the result. With a straight average a few really bright pixels can skew the result noticeably, but they have much less effect on a log-average. Also, for performance reasons you don’t add up all the pixels directly – instead write the log-luminance values into a half-resolution texture and then generate mipmaps to get down to a single pixel, which you then exponentiate to give the same result.]
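As a sketch of that measurement pass (the texture, sampler and semantic names here are made up for the example):

```hlsl
// Write log(luminance) into a half-resolution texture, then let the mipmap
// chain average it down to a single pixel.
Texture2D HdrScene : register(t0);
SamplerState LinearClamp : register(s0);

float PSLogLuminance(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 hdr = HdrScene.Sample(LinearClamp, uv).rgb;
    float luminance = dot(hdr, float3(0.2126, 0.7152, 0.0722)); // Rec. 709 weights
    return log(max(luminance, 0.0001)); // avoid log(0) on pure black pixels
}

// After generating mips, sample the lowest (1x1) mip and undo the log:
//   float avgLuminance = exp(LogLumTexture.SampleLevel(LinearClamp, float2(0.5, 0.5), lowestMip).r);
```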

Then you need to pick a target luminance, which is how bright you want your final image to be. Applying your tone mapping (with your current exposure value) to your average input luminance will give you your average output luminance. Now we can set up a feedback loop between frames – if your scene is coming out too bright, reduce the exposure for the next frame, scaling the reduction by how far off you are. Similarly, if it’s too dark, increase the exposure for the next frame. You can go further and increase the exposure more slowly than you decrease it, to simulate the eye adjusting quicker to bright scenes.
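Here's a rough sketch of that feedback loop. It's simplified in that it compares the exposed average luminance against the target directly rather than running it through the full tone mapping curve, and all the names and adaptation speeds are illustrative.

```hlsl
// A rough sketch of the exposure feedback loop. 'exposure' is a multiplier
// applied to the HDR colour before tone mapping, kept in a tiny buffer or
// 1x1 texture between frames. Names and speeds here are illustrative.
float UpdateExposure(float exposure, float avgLuminance, float targetLuminance,
                     float deltaTime)
{
    // How bright the scene is currently coming out versus how bright we want it.
    float current = avgLuminance * exposure;
    float ratio   = targetLuminance / max(current, 0.0001);

    // Too bright (ratio < 1): drop the exposure quickly, like the eye adapting
    // to a bright scene. Too dark: raise it more slowly.
    float speed = (ratio < 1.0) ? 3.0 : 1.0;
    return lerp(exposure, exposure * ratio, saturate(speed * deltaTime));
}
```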

And now you never need to worry about your image being too light or dark ever again – it’s all taken care of automatically! This is another major advantage of using HDR rendering – it doesn’t matter how bright any of the lights in your scene are (in absolute terms) as long as they’re correct relative to one another.

There’s more…

The image is starting to look good, but the sun is still just a white circle. Next time I’ll talk about using bloom to ‘go brighter than white’…