Jan 30 2013
 

Lighting is what makes a 3D scene look real.  You used to hear talk of the number of polygons or the resolution of the textures in a game, but these days GPUs are powerful enough that we’re rarely limited by these things in any meaningful sense.  The focus now is on the lighting.  The most detailed 3D model will look terrible if it’s unlit, and the simplest scene can look photorealistic if rendered with good lighting.

I’ll first cover the most basic lighting solutions that have been used in games for years, and later talk about more advanced and realistic methods.  This post will cover simple ambient and diffuse lighting.  But first of all I’ll explain a bit of the theory about lighting in general and how real objects are lit.

A bit of light physics

Eyes and cameras capture images by sensing photons of light.  Photons originate from light sources, for example the sun or a light bulb.  They then travel in a straight line until they are bounced, scattered, or absorbed and re-emitted by objects in the world.  Finally some of these photons end up inside the eye or the camera, where they are sensed.

Not all photons are the same – they come in different wavelengths which correspond to different colours.  Our eyes have three types of colour-sensitive cells which are sensitive to photons of different wavelengths.  We can see many colours because our brains combine the responses from each of the cell types, for example if both the ‘green’ and ‘red’ cells are stimulated equally we’ll see yellow.

Responses of the three types of cones in the eye to different wavelengths of light

Televisions and monitors emit light at just three wavelengths, corresponding to the red, green and blue wavelengths that our eyes are sensitive to.  By emitting different ratios of each colour, almost any colour can be reproduced in the eye.  This is a simplification of the real world because sunlight is made up of photons with wavelengths all across the visible spectrum, and our cones are sensitive across a range of wavelengths.

Objects look coloured because they absorb different amounts of different frequencies of light.  For example, if a material absorbs most of the green and red light that hits it, but reflects or re-emits most of the blue light, it will look blue.  Black materials absorb most of the light across all frequencies.  White materials absorb very little.

The rendering equation

Feel free to skip this section if you don’t get it.  The rendering equation is a big complicated equation that states exactly how much light will reach your eye from any point in the scene.  Accurately solving the rendering equation will give a photorealistic image.  Unfortunately it’s too complicated to solve it exactly, so lighting in computer graphics is about finding better and better approximations to it.  The full equation looks something like this:
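In its standard textbook form (the exact notation varies a little between sources), it’s:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i

Here L_o is the light leaving the point towards your eye, L_e is the light emitted by the point itself, and the integral adds up every bit of incoming light L_i over the hemisphere Ω, scaled by the material’s reflectance f_r and by how directly the light hits the surface (the ω_i · n term).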

Don’t worry about understanding it.  What it says is that the light leaving a point towards your eye is made up of the light directly emitted by that point, added to all of the other light that hits that point and is subsequently deflected towards your eye.

If you zoom in closer and closer to any material (right down to the microscopic level), at some point it will be effectively ‘flat’.  Hence we only need to consider light over a hemisphere, because light coming from underneath a surface would never reach it.  Here is a slightly easier to understand picture of what the rendering equation represents:

The reason that the rendering equation is so hard to solve is that you have to consider all of the incoming light onto each point.  However, all of that light is being reflected from an infinite number of other points.  And all the light reflected from those points is coming from an infinite number of other points again.  So it’s basically impossible to solve the rendering equation except in very special cases.

Ambient light

Hopefully you’re still with me!  Let’s leave the complicated equations behind and look at how this relates to simple lighting in games.

As the rendering equation suggests, there is loads of light bouncing all over the place in the world that doesn’t appear to come from any specific direction.  We call this the ambient lighting.  At its simplest, ambient lighting can be represented as a single colour.  Here is my simple test scene (a box and a sphere on a flat floor plane) lit with white ambient light:

White ambient light only

To work out what the final colour of a surface on the screen is, we take the lighting colour for each channel (red, green, blue) and multiply it with the surface colour for that channel.  So under white light, represented in (Red, Green, Blue) as (1, 1, 1), the final colour is just the original surface colour.  So this is what my test world would look like if an equal amount of light hit every point.  As you can see, it’s not very realistic.  Constant ambient light on its own is a very bad approximation to world lighting, but we have to start somewhere.
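In code, the per-channel multiply might look something like this (a minimal sketch – the struct and function names are just illustrative, not from any particular engine):

struct Colour { float r, g, b; };

// Multiply the light colour with the surface colour, channel by channel.
Colour ApplyAmbient(const Colour& surface, const Colour& ambient)
{
    return { surface.r * ambient.r,
             surface.g * ambient.g,
             surface.b * ambient.b };
}

Under white light (1, 1, 1) this just returns the surface colour unchanged, and a green floor (0, 0.5, 0) under magenta light (1, 0, 1) comes out as (0, 0, 0) – which is exactly what happens in the example below.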

As a quick aside, I mentioned earlier that we only use three wavelengths of light when simulating lighting, as a simplification of the real world where all wavelengths are present.  In some cases this can cause problems.  The colour of the floor plane in my world is (0, 0.5, 0), meaning the material is purely green and has no interaction with red or blue light.  Now we can try lighting the scene with pure magenta light, which is (1, 0, 1), thus it contains no green light.  This is what we get:

Lit with pure magenta ambient light

As you can see, the ground plane has gone completely black.  This is because when multiplying the light by the surface colour, all components come out to zero.  Usually this isn’t a problem as scenes are rarely lit with such lights, but it’s something to be aware of.

Directional light and normals

The first thing we can do to improve the lighting in our scene is to add a directional light.  If we are outside on a sunny day then the scene is directly lit by one very bright light – the sun.  In this case the ambient light is made up of sunlight that has bounced off other objects, or the sky.  We know how to apply basic ambient, so we can now look at adding the directional light.

It’s time to introduce another concept in rendering, which is the normal vector.  The normal vector is a direction that is perpendicular to a surface, i.e. it points directly away from it.  For example, the normal vector for the floor plane is directly upwards.  For the sphere, the normal is different at every position on the surface and points directly away from the centre.

So where do we get this normal vector from?  Last time, I introduced the concept of vertex attributes for polygon rendering, where each vertex contains information on the position, colour and texture coordinates.  Well, the normal is just another vertex attribute, and is usually calculated by whichever tool you used to make your model.  Normals consist of three numbers, (x, y, z), representing a direction in space (think of it as moving from the origin to that coordinate in space).  The length of a normal should always be 1, and we call such vectors unit vectors.  Hence the normal vector for the floor plane is (0, 1, 0), which is length 1 and pointing directly up.  Normals can then be interpolated across polygons in the same way as texture coordinates or colours, and this is needed to get smooth lighting across a surface made up of multiple polygons.
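If you ever need to fix up a normal yourself, making it unit length just means dividing each component by the vector’s length – a quick sketch:

#include <cmath>

struct Vec3 { float x, y, z; };

// Scale a vector so that its length is exactly 1.
Vec3 Normalise(const Vec3& v)
{
    float length = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / length, v.y / length, v.z / length };
}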

We also need another vector, the light direction.  This is another unit vector that points towards the light source.  In the case of distant light sources such as the sun, the light vector is effectively constant across the whole scene.  This keeps things simple.

Diffuse lighting

There are two ways that light can interact with an object and reach your eye.  We call these diffuse and specular lighting.  Diffuse lighting is when a photon is absorbed by a material, and then re-emitted in a random direction.  Specular lighting is when photons are reflected off the surface of a material like a mirror.  A completely matt material would only have diffuse lighting, while shiny materials look shiny because they have specular lighting.

Diffuse and specular lighting

For now we will only use diffuse lighting because it is simpler.  Because diffuse lighting is re-emitted in all directions, it doesn’t matter where you look at a surface from – it will always look the same.  With specular lighting, materials look different when viewed from different angles.

The amount of light received by a surface is affected by where the light is relative to that surface – things that face the sun are brighter than things that face away from it.  This is because when an object is at an angle, the same amount of light is spread over a greater surface area (the same reason the earth is cold at the poles).  We now have the surface normal and the light angle, so we can use these to work out the light intensity.  Intuitively, a surface will be brighter the closer together the normal and the light vectors are.

The exact relation is that the intensity of the light is proportional to the cosine of the angle between them.  There is a really easy and quick way to work this out, which is to use the dot product.  Conveniently enough the dot product is a simple mathematical function of two vectors that, for unit vectors, gives you the cosine of the angle between them, which is exactly what we want.  So, given two vectors A and B, the dot product is just this:

(A.x * B.x) + (A.y * B.y) + (A.z * B.z)

To get the diffuse lighting at a pixel, take the dot product of the normal vector and the light vector.  This will give a negative value if the surface is pointing away from the light, but because you don’t get negative light we clamp it to zero.  Then you just add on your ambient light and multiply with the surface colour, and you can now make out the shapes of the sphere and the box:
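Putting the ambient and diffuse terms together, the per-pixel calculation looks roughly like this (just a sketch with made-up names, treating colours as (r, g, b) triples):

#include <algorithm>

struct Vec3 { float x, y, z; };

// Standard dot product of two vectors.
float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// normal and lightDir are unit vectors, with lightDir pointing towards the light.
Vec3 ShadePixel(const Vec3& surfaceColour, const Vec3& normal, const Vec3& lightDir,
                const Vec3& ambient, const Vec3& lightColour)
{
    float diffuse = std::max(0.0f, Dot(normal, lightDir));   // no negative light
    return { surfaceColour.x * (ambient.x + lightColour.x * diffuse),
             surfaceColour.y * (ambient.y + lightColour.y * diffuse),
             surfaceColour.z * (ambient.z + lightColour.z * diffuse) };
}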

Constant ambient and single directional light

Better ambient light

At this point I’ll bring in a really simple improvement to the ambient light.  The ambient represents the light that has bounced around the scene, but it’s not all the same colour.  In my test scene, the sky is blue, and therefore the light coming from the sky is also blue.  Similarly, the light bouncing off the green floor is going to be green.  When you can make generalisations like this (and you often can, especially with outdoor scenes), we may as well make use of this information to improve the accuracy of the ambient light.

In this case we can simply have two ambient light colours – a light blue for the sky and a green for the ground.  Then we can use the surface normal to blend between these colours.  A normal pointing up will receive mostly blue light from the sky, and a normal pointing down will receive mostly green light from the floor.  We can then apply a blend based on the Y component of the normal.  This is the scene using just the coloured ambient:
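A sketch of how that blend might be done, using the Y component of the unit normal (which runs from -1 pointing straight down to 1 pointing straight up):

struct Colour { float r, g, b; };

// Blend between the ground and sky ambient colours based on which way the normal points.
Colour HemisphereAmbient(const Colour& sky, const Colour& ground, float normalY)
{
    float blend = normalY * 0.5f + 0.5f;   // 0 = facing the ground, 1 = facing the sky
    return { ground.r + (sky.r - ground.r) * blend,
             ground.g + (sky.g - ground.g) * blend,
             ground.b + (sky.b - ground.b) * blend };
}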

Hemispherical ambient light

And then add the diffuse lighting back on:

Hemispherical ambient and diffuse

Now just combine the lighting with some texture mapping and you’re up to the technology levels of the first generation of 3D polygonal games!

That’s more than enough for this post, so next time I’ll talk about specular lighting.

Jan 25 2013
 

I recently found out that the awesome Spectrum game The Lords Of Midnight had been updated for iOS, so I promptly coughed up the £2.99 asking price and downloaded it.

I remember playing the original on my Spectrum when I was probably nine or ten years old.  I paid zero attention to the actual quest and had no idea how to even attempt to win the game, but I’d just spend hours exploring the landscape, admiring the views, recruiting a few allies and battling dragons and random passing armies.

Lords Of Midnight Spectrum

Spectrum version

Released back in 1984, this game was completely different to anything else around at the time.  The concept of an epic 3D war game just didn’t exist back then.  These were the days of the text adventure, of brief text descriptions of environments and guessing commands to type into a terminal window.  Actually seeing graphics of armies marching across the plains in front of you as you moved through the landscape was revolutionary.  Also new was the variety of gameplay, with two completely different ways to win the game, leading to two very different game styles.  The first was as a war game, gathering armies together for an assault on the enemy stronghold.  The second was to guide Frodo Morkin to the Tower of Doom, on his own, to destroy the Ice Crown.  It was all very atmospheric.

However, due to the very complex (at the time) nature of the game, it was quite demanding of the player – to have any chance of winning you really had to manually map the game world and keep track of where your characters were in relation to one another.  That was far too much like hard work for me, which is probably why I never bothered with it.

iPad version

So how good is the new version, and how does it stand up today?  In short, I was surprised at how well it’s stood the test of time.  The quality of the port is top-notch.  The style of the graphics has been preserved really nicely, just recreated in high resolution, so it’s unmistakably the same game.  A nice touch is that the landscape smoothly scrolls when moving or looking around, rather than just drawing discrete frames at each location.

The touch controls work well enough.  Some of it feels a little clunky, such as the Think screen and having to go to a whole options page to search and attack, but that’s just to keep it as faithful as possible to the original and you soon get used to it.  The vital addition, though, is that the game now has a built-in map.  This shows the area of the world that you’ve seen so far with the locations of all of your characters.  It’s so much easier to get a handle on what’s going on than I remember, and it makes the game much more enjoyable to play.

In actual gameplay terms it holds up well enough as a fairly simple strategy game, and I think it would pass as a newly released mobile game today.  My current game sees Morkin dodging whole armies as he sneaks ever closer to the Ice Crown, while further south Luxor is heading up a desperate defence as more and more armies from both sides pile into a massive meat-grinder battle that’s been going on for days.

The meat grinder

If you’ve got fond memories of the game first time around you’ll love this version.  Even if not I’d still recommend checking it out to see what the granddaddy of all modern wargames was all about.  The game is still fun and challenging, but it was never really about the deep strategy so much as the atmosphere of experiencing the world of Midnight, and it’s still got that in spades.

Jan 17 2013
 

We saw in part 3 how to move the camera around a wireframe world.  Now it’s time to move onto proper solid 3D rendering.  For this we need to introduce a few new concepts: basic shading, vertex attributes, textures and the depth buffer.

Solid rendering

Moving from wireframe to solid rendering can be as simple as filling in between the lines.  There are a load of algorithms for efficient scanline rasterisation of triangles, but these days you don’t have to worry about it because your graphics hardware deals with it – simply give it three points and tell it to draw a triangle.  This is a screenshot of a solid-rendered triangle, which is pretty much the most basic thing you can draw in D3D or OpenGL:

That’s not very exciting, but that’s about all you can draw with only positional data.  In case you wonder why I chose a triangle, it’s because a triangle is the simplest type of polygon and all rendering eventually breaks down to drawing triangles (for example a square will be cut in half diagonally to give two triangles).

Vertex attributes and interpolation

I’ll just clarify a bit of terminology.  A vertex is a single point in space, usually used to define the corners of a shape, so a triangle has three vertices.  An edge is a line between two vertices, so in wireframe drawing we just draw the edges.  A polygon is the whole filled in shape, such as the triangle above.

Vertices don’t just have to contain positional information, they can have other attributes.  One example of a simple attribute is a colour.  If all the vertices are the same colour then the whole polygon could just be drawn the same colour, but if the vertices are different colours then the values can be interpolated between the vertices.  Interpolation simply means to blend smoothly from one value to a different value – for example, exactly half way between the two vertices the colour will be an even mix of each.  Because triangles have three vertices, a slightly more complex interpolation is actually used that blends smoothly between three values.  Here is an example of the same triangle with coloured vertices:
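If you’re curious what that three-way blend looks like, each pixel gets a weight for each vertex (the weights add up to 1 and describe how close the pixel is to each corner), and the values are combined like this – a rough sketch with illustrative names:

struct Colour { float r, g, b; };

// w0, w1 and w2 are the pixel's weights for each vertex (they always sum to 1).
Colour InterpolateColour(const Colour& c0, const Colour& c1, const Colour& c2,
                         float w0, float w1, float w2)
{
    return { c0.r * w0 + c1.r * w1 + c2.r * w2,
             c0.g * w0 + c1.g * w1 + c2.g * w2,
             c0.b * w0 + c1.b * w1 + c2.b * w2 };
}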

Texture mapping

The other common vertex attributes are texture coordinates.  A texture is just a picture, usually stored as an image file (although they can be generated algorithmically at runtime).  Textures can be applied to polygons by ‘stretching the picture’ across the polygon.  You can think of a 2D picture as having coordinates like on a graph – an X coordinate running horizontally, and a Y coordinate running vertically.  These coordinates range from 0 to 1 across the image, and the X and Y coordinates are usually called U and V in the case of textures.

Textures are applied to polygons by specifying a U and V coordinate at each vertex.  These coordinates (together referred to as UVs) are interpolated across the polygon when it is drawn, and instead of directly drawing a colour for each pixel, the coordinates are used to specify a point in the texture to read from.  The colour of the texture at that point is drawn instead.  This has the effect of stretching some part of the texture across the polygon that is being drawn.
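As a very rough sketch of the lookup (ignoring filtering and wrapping, and assuming the texture is just an array of colours in memory):

#include <vector>

struct Colour { float r, g, b; };

struct Texture
{
    int width, height;
    std::vector<Colour> pixels;   // width * height texels, stored row by row
};

// Convert the interpolated UVs (each in the range 0 to 1) into a texel and read its colour.
Colour SampleTexture(const Texture& tex, float u, float v)
{
    int x = static_cast<int>(u * (tex.width - 1));
    int y = static_cast<int>(v * (tex.height - 1));
    return tex.pixels[y * tex.width + x];
}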

As an example, here is a texture and a screenshot of part of it mapped onto our triangle:

   

This is just a really quick introduction to the basics of polygon rendering.  There are a lot more clever and interesting things that can be done which I will talk about in later sections.

A question of depth

The problem with rendering solid geometry is that often two pieces of geometry will overlap on the screen, and one will be in front of the other.  The two options for dealing with this are either to make sure that you draw everything in back-to-front order, or to keep track of the depth of each pixel that you’ve rendered so that you don’t draw more distant objects in front of closer ones.

Rendering back to front will give the correct screen output but is more expensive – a lot of rendering work will be done to draw objects that will later be drawn over (called overdraw), and additional work is required to sort the geometry in the first place.  For this reason it is more efficient in almost all cases to use a depth buffer.

Screen buffers

First let’s talk about how a computer stores data during rendering.  It has a number of buffers (areas of memory) which store information for each pixel on the screen.  It renders into these buffers before copying the contents to the display.  The size of the buffers depends on the rendering resolution and the quality.  The resolution is how many pixels you want to draw – as an example, if you want to output a 720p picture you need to render 1280×720 pixels.

The colour buffer stores the colour of each pixel.  For a colour image you need at least one byte of storage (which can represent 256 intensity levels) for each of the three colour channels, red, green and blue.  This gives a total of 256x256x256 = 16.7 million colours, and so each pixel requires three bytes of storage (but due to the way memory is organised there is a fourth spare byte for each pixel, which can be used for other things).  These days a lot of rendering techniques require higher precision but I’ll be writing more about this later on.
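To put some numbers on that: a 1280×720 colour buffer at four bytes per pixel is 1280 × 720 × 4 ≈ 3.7 MB, and a 1920×1080 buffer is around 8.3 MB.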

The second type of buffer is the depth buffer.  This is the same resolution as the colour buffer but instead stores the distance of the pixel along the camera’s Z axis.  This is typically stored as a 24-bit fixed-point value or a 32-bit floating point number, giving plenty of precision across the range of depths in a scene.  Here is an example courtesy of Wikipedia of the colour and depth buffers for a scene, where you can see that pixels in the distance have a lighter colour in the depth buffer, meaning that they are a larger value:

Depth buffer rendering

Using a depth buffer to render is conceptually very simple.  At the beginning of the frame, every pixel in the buffer is reset to the most distant value.  For every pixel of every polygon that is about to be rendered to the screen, the depth of that pixel is compared with the depth value stored for that pixel in the depth buffer.  If it’s closer, the pixel is drawn and the depth value updated.  Otherwise, the pixel is skipped.  If a pixel is skipped then no calculations have to be done for that pixel, for example doing lighting calculations or reading textures from memory.  Therefore it’s more efficient to render objects from front to back so that objects behind closer ones don’t have to be drawn.
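In rough C++-flavoured pseudocode (not any particular API), the per-pixel test boils down to this:

#include <vector>

// depthBuffer holds one value per pixel and is reset to the far value at the start of each frame.
// Returns true if the pixel is closer than anything drawn there so far and should be shaded.
bool DepthTest(std::vector<float>& depthBuffer, int pixelIndex, float pixelDepth)
{
    if (pixelDepth < depthBuffer[pixelIndex])
    {
        depthBuffer[pixelIndex] = pixelDepth;   // remember the new closest depth
        return true;                            // draw (and shade) this pixel
    }
    return false;                               // hidden behind something closer - skip it
}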

That’s all that needs to be said about depth buffers for now.  Next time I’ll be talking about lighting.

Jan 09 2013
 

I’ve been playing a couple of space games lately, which is good because I like space games.

FTL

You’ve probably heard of FTL, if you’ve not already played it.  It’s a small game from a two-man team, and was funded by a successful Kickstarter campaign.  The game is basically about flying your ship through randomly generated sectors of space, facing random enemies and encounters and picking up random loot.  What makes it great compared to some of the overblown epics you might have played is that you have one life, there is only one autosave and a whole playthrough only takes an hour or two.

I’m not going to describe the game in detail because that’s been done to death elsewhere.  What’s interesting is that it plays very much like a board game.  Pick a location to move to, draw an event card, pick an action and roll a dice to see what the outcome is.  The combat is realtime and has enough nuances to make it interesting (especially when it comes to the harder enemies and the final boss), but you could definitely see something like this working as a solo tabletop game.

The problem I tend to have with random single player games is I don’t like losing by pure luck.  This is a problem I often find with computer adaptations of games like Catan and Carcassonne, where there is some element of skill but a lot comes down to pure luck of the draw.  When playing against real people this isn’t a problem because, I think, there is still a winner in the room.  But when losing to a computer there is no winner, only a loser, and nobody likes to be a loser…

Having said that, I don’t have that problem in FTL.  Games don’t come a lot more random than FTL, but if you’re getting unfairly screwed you generally find out in the first few minutes of the game so you can just restart.  Another reason is that you shouldn’t really expect to win.  The game is hard, really hard (and that’s just on the easy setting).  If you just go in with a goal of unlocking a new ship, getting one of the achievements or playing with a different play style (e.g. focus on boarding actions, drones, ion cannons etc) then dying isn’t really a failure, it’s the expected outcome.  I think I’ve just been spoilt for too long with spoon-fed “experience” games, but FTL is a perfect antidote.

So I very much recommend you take a punt if you’ve not played it.  It usually seems to be up for sale for under a fiver, so for that you really can’t go wrong.

 

Endless Space

I picked up Endless Space in the Steam sale and I’ve been playing around with it a bit.  My first comment is this – it is most definitely not wife-friendly.  The most common question in the house recently has been “What are you playing, that boring game again?”

It is not a boring game (well, if you’re into this kind of thing).  True, there will be a lot of staring at spreadsheets and production queues, but it induces in me exactly the same “one more turn” compulsion that has kept me up late into the night with the various Civilization games over the years.  There is good reason for that – it is basically just Civ In Space.

Instead of cities you have planetary systems, instead of city buildings you have system improvements.  The resource types are exactly the same: food to increase system population, production in each system to build improvements and units, “dust” (gold) to rush constructions, science generated by each system, strategic and luxury resources to unlock new constructions and keep your population happy.  It’ll all seem very familiar.  But that’s not necessarily a bad thing, and it’s all really slickly presented with a nice intuitive interface.

Combat is rather abstract.  Each fight is played out at three range bands – long where missiles dominate, medium where lasers work best, and close which favours projectile weapons.  Your only control is picking an “order” card for each phase, using a board/card game concept again.  Some are offensive, some defensive, and each card cancels out a different class of card.  So if you expect the opponent to play a defensive card in a phase you can for example use the “Sabotage” order which will cancel it out.  It’s a bit random and takes some getting used to but it adds some more interactivity over Civ’s battle system.

My main problem with the game, and it’s exactly the same problem as Civ has, is that it all gets a bit bogged down about half way through.  When you’ve got 20+ systems to manage, with half a dozen finishing constructions each turn, it takes an age to micromanage what they should all be doing next.  Turns start taking 5-10 minutes each and the game can start to drag a bit.

The tech tree is also huge, and you’ll need a couple of games under your belt to become familiar with all the upgrades.  A couple of times I’ve come across some amazing technology (e.g. the planetary exploitation upgrades) that would have provided some massive boost to my empire if only I’d noticed it 20 turns earlier.

Overall though, it’s a decent game.  If you like Civ games you’ll love it.  If you don’t then you probably won’t.

Jan 09 2013
 

There have been a lot of game developers coming out recently to complain about Windows 8, most notably Gabe Newell.  In the comments sections of news stories you get a lot of people dismissing their concerns and accusing them of overreacting.  I believe that some of these concerns are justified, so I’m going to explain why, but also how the problem could be averted.  There is an excellent article here which brings up many points that I agree with, but it’s quite long so I’ll summarise.

1. Closed development environment for Metro apps

This really just sounds like an inconvenience until I get to the real issues, but you can only develop Metro apps if you are a licensed developer (which requires paying an annual fee), in a closed development environment.  No longer can Average Joe whip out a text editor and download a compiler and start writing programs for free.

Closed development environments are a pain – if the Apple experience is anything to go by you’ll waste many hours trying to sort out development profiles and permissions on all of your devices, and face delays when a new person joins the team or a new external person wants to test the app.  You’ll long for the days of being able to build an executable and just run it anywhere.

2. Windows Store for Metro apps

This is the main issue with the closed development model – only approved apps can be put on the store.  Although Microsoft has recently relaxed its stance on 18-rated games, anything you publish will still need to be approved by an arbitrary approval committee within Microsoft.

Now, people whose job it is to approve software apps generally aren’t the same people who know what bizarre and innovative app will kick off a revolution in the computer industry.  I guarantee that if every piece of software already written had been subject to approval by some committee before it could be released, the world of computing today would look very different.  How many innovations wouldn’t have seen the light of day because the approver didn’t understand what it was or what the possibilities could be?

This is a real problem if app stores are the only way to get software in future.

3. The Windows Desktop mode will eventually go away

So this is the crux of the issue.  Most accusations of overreacting and scaremongering go something along the lines of: “but you can still run apps on the Desktop mode and you can ignore Metro”.  That is a flawed argument.

While you can currently still run all programs in the Desktop mode, these programs don’t have, and will likely never have, access to all the shiny new features and APIs of Windows 8.  Again, at the moment, this isn’t a problem as there isn’t that much new stuff yet.  However, think back to DOS…

20+ years ago everything was a DOS application, and then Windows came along.  All your old DOS programs still worked, as you could run them in DOS mode, but none of the new shiny features and APIs of Windows were accessible.  Sound familiar?  How many new DOS products are written today?

Here is a concrete example – DirectX 11.1 is only supported on Windows 8.  There is no deep architectural reason why it can’t be supported on Windows 7, it just isn’t.  Come Windows 9 or Windows 10, who knows what new APIs will be Metro-only?

What this means

Taking these points together, it is really not inconceivable that in ten years’ time the only way to get a modern app on Windows is through the Windows Store, and publishing one would require being a registered developer and having your software approved before release.  This is what everyone is getting so worked up about.

How it can be avoided

This can all be easily avoided however – Microsoft just needs to allow open development of Metro apps.  I would say that the main competitive advantage that Windows has enjoyed up to this point is that anyone can write software for it, and so the software library is huge.  Moving to the same model as Apple puts them in direct competition, on equal terms, with MacOS and the iOS mobile devices, and it’s an optimistic person who thinks that’s a favourable matchup.

I think Microsoft have a few years yet to change their mind.  People will still buy new versions of Windows, apps will get developed for the Windows Store, and a lot of people will be happy with that.  But a lot of people will be sad to see a platform that previously offered so many opportunities go to waste.

The rise of Linux?

There is one more possible outcome, and that is the rise of Linux as a viable mainstream OS.  Moves are already afoot with Valve’s upcoming Steam Box.  As I understand it, this may work by having Linux on a bootable pen drive with Steam pre-installed.  There is no reason that this won’t be the way OSes go in future – Windows doesn’t do what you want?  Just swap out the system drive for an OS that has the features you want, just by popping in a different pen drive.  Bootcamp on the Mac is basically doing this already, and I swap between MacOS and Windows on a fairly regular basis on the laptop, depending on what I want to do.

So maybe the future isn’t that bleak after all.  Just don’t expect it to be a Windows-only future.

Jan 08 2013
 

[Warning: this is a slightly more technical article containing talk of matrices, transforms and spaces.  If it doesn’t make sense just carry on to the next article, you don’t need to understand the details!]

We saw in the previous article how to draw a point in 3D when the camera is at the origin and looking down the Z axis.  Most of the time though this won’t be the case as you’ll want to move the camera around.  It’s really hard to directly work out the projection of a point onto an arbitrarily positioned camera, but we can solve this by using transforms to get us back to the easy case.

Transform matrices

A transform in computer graphics is represented by a 4×4 matrix.  If you’re not familiar with matrix maths don’t worry – the important things to know are that you can multiply a 4×4 matrix with a position to get a different position, and you can multiply two 4×4 matrices together to get a new 4×4 matrix.  A transform matrix can perform translations, rotations, scales, projections, shears, or any combination of those.

Here is a simple example of why this is useful.  This is the scene we want to draw:

The camera is at position Z = 2, X = 1, and is rotated 40 degrees (approximately, again exact numbers don’t matter for the example) around the Y axis compared to where it was in the last example.  What we can do here is find a transform that will move the camera from its original position (at the origin, facing down the Z axis), to where it is now.

First we look at the rotation, so we need a matrix describing a 40° rotation around the Y axis.  After that we need to apply a translation of (1, 0, 2) in (x, y, z) to move the camera from the origin to its final location.  Then we can multiply the matrices together to get the camera transform.

If you’re interested, this is what a Y rotation matrix and a translation matrix look like:
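Using the column-vector convention (the exact layout differs depending on your API, so take this as illustrative), a rotation of θ around the Y axis and a translation by (x, y, z) are:

R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad T(x, y, z) = \begin{pmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{pmatrix}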

This camera transform describes how the camera has been moved and rotated away from the origin, but on its own this doesn’t help us.  But think about if you were to move a camera ten metres to the right – this is exactly the same as if the camera had stayed where it was and the whole world was moved ten metres to the left.  The same goes for rotations, scales and any other transform.

We can take the inverse of the camera matrix – this gives us a transform that will do the opposite of the original matrix.  More precisely, if you transform a point by a matrix and then transform the result by the inverse transform the point will end up back where it started.  So while the camera transform can be used to move the camera from the origin to its current position, applying the inverse (called the view transform) to every position in the world will move those points to be relative to the camera.  The ‘view’ of the world through the camera will look the same, but the camera will still be at the origin, looking down the Z axis:

This diagram may look familiar, and in fact is the exact same setup that was used for rendering in the previous article.  So we can now use the same rendering projection method!

World, camera, object and screen spaces

Using transforms like this enables the concept of spaces.  Each ‘space’ is a frame of reference with a different thing at the origin.  We met world space last time, which is where the origin is some arbitrary position in the world, the Y axis is up and everything is positioned in the world relative to this.

We’ve just met camera space – this is the frame of reference where the origin is defined as the position of the camera, the Z axis is straight forwards from the camera and the X and Y axes are left/right and up/down from the camera’s point of view.  The view transform will convert a point from world space to camera space.

You may also come across object space.  Imagine you’re building a model of a tree in a modelling package – you’ll work in object space, probably with the origin at ground level at the bottom of the tree.  When you’ve built your model you’ll want to position it in the world, and for this we use the basis matrix, which is a transform (exactly the same as how the camera was positioned above) that will convert the points you’ve modelled in object space into world space, thus defining where the tree is in the world.  To render the tree, the points will then be further converted to camera space.  This means that the same object can be rendered multiple times in different places just by using a different basis matrix.

Finally, as we saw in the previous article, we can convert to screen space by applying the projection matrix.

Using this in a renderer

All of these transforms are represented exactly the same, as 4×4 matrices.  Because matrices can be multiplied together to make new matrices, all of these transforms can be combined into one.  Indeed, a rendering pipeline will often just use one transform to take each point in a model and project it directly to the final position on the screen.  In general, each point in each object is rendered using this sequence of transforms, applied from right to left:

Projection Matrix * View Matrix * Basis Matrix * Position = Final screen position
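As a sketch of how a renderer might set this up – assuming a Matrix4 and Vec4 type with multiplication and an Inverse() function, which most maths libraries provide in some form:

// Build the combined transform once per object...
Matrix4 viewMatrix = Inverse(cameraMatrix);                        // world space -> camera space
Matrix4 toScreen   = projectionMatrix * viewMatrix * basisMatrix;  // object space -> screen

// ...then use it for every vertex in the object.
Vec4 screenPosition = toScreen * objectSpacePosition;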

You now pretty much know enough to render a basic wireframe world.  Next time I’ll talk about rendering solid 3D – colours, textures and the depth buffer.

Jan 02 2013
 

The usual image of board games is of a sedentary pursuit, with players taking their time to decide their next move or roll some dice.  These games can be incredibly tactical and intense, but one thing that’s usually not associated with board games is adrenaline.

I’ve played two great games recently that are completely different from this, and I’m going to tell you what makes them awesome.  The games are Space Alert and Galaxy Trucker.  They are both designed by the same person, one Vlaada Chvátil, and they both include the element of real-time gameplay.

Space Alert

Space Alert is a co-operative game for four or five players.  You play the crew of a spaceship sent into hostile territory who must survive for ten minutes.  What makes this game different though is that those ten minutes are played out in real time, to an accompanying soundtrack CD.  Every minute or two the ship’s computer will call out a new threat that has to be dealt with, and as a team you have to work together to defeat each threat before it tears your ship apart.

During those ten minutes you put down cards to plan out what your little crewman is going to do.  With each action you can move to a different room, or you can press the A, B or C button in each room, which either fires the guns, powers up the shields, moves power to where it’s needed, or performs a few other special actions.  All players plan out their moves, but the move cards are placed face down so nobody can see what anyone else is doing – you have to talk to each other to make sure all threats are being dealt with.

Finally, after the ten minutes are up, there is the “action replay” phase.  Each player turns over their action cards and replays on the board what actually happened.  If all went well and the team played as a cohesive whole then a whole pile of alien spaceships will appear and be blown away by coordinated volleys of laser fire, and you can congratulate yourselves on a job well done.  More often than not though one player will be pressing the fire button on the main laser but nothing will happen because someone else drained the batteries to power up the shields and nobody remembered to throw more fuel into the reactor and the missiles were fired too early and the target was out of range and…  and it’s just funny to watch what you thought was a bulletproof plan fall to pieces as your ship does likewise!

Playing this game is completely unlike playing any other game.  You will experience ten minutes of pumping adrenaline, shouting and panic as five people all try to coordinate their actions against the clock.  Make a mistake, lose track of what room you thought you were in, press the fire button at the wrong time and the whole carefully laid plan can fall down around you.  Everyone is relying on you, and you are relying on everyone else.  But each game only takes around 20 minutes, so if you fail horribly (and you will), just get back up and try again.

Galaxy Trucker

I love Galaxy Trucker.  It’s very high on my list of “Games to try with people who don’t play board games”.  The premise is simple: build a spaceship out of junk and fly it to the other side of the galaxy, and hope it makes it in one piece.

This is another game of two halves, one frantic half against the clock and a more sedate resolution phase.  The best bit is building your spaceship.  Each player has a board with squares on it, in the rough shape of a spaceship.  In the middle of the table is a pile of face-down square tiles.  Each of these tiles is a ship component (such as an engine, laser or crew compartment), and each has different shaped connectors on each edge.  You build a ship by taking a tile, looking at it, and then either connecting it to an existing compatible connector on your ship, or placing it back on the pile face-up.  But, there is no turn-taking.  Everyone plays at once, grabbing pieces, looking at them, and deciding whether to attach them or put them back on the table.

What you’ll get is a mad dash as everyone grabs components looking for the best bits for their ship.  But as you’re taking new tiles you have to keep an eye on what’s being thrown back, in case it’s that battery with the awkward connectors you’ve been looking for.  As the tiles start to run out the panic can set in as you realise you still need to find shields or engines and you just can’t get them.

It’s all so tactile, and the few rules about how components fit together are pretty simple and all make sense.  It’ll take five minutes to explain the basics to new players before they’re building their first ships, and this is what makes it so approachable for novices.

After the building phase there is a simple adventure phase, where random event cards are used to throw asteroids, pirates and planets full of loot at the players.  Bits fall off the ships when they’re hit, and there is great comedy potential as you watch a lucky asteroid hit cleave your friend’s precarious construction in half before it limps to the destination with nothing but a crew compartment attached to an engine.  But nobody minds because the whole thing is over in 20 minutes again so you just move up to a bigger and better ship and try again.

So, Space Alert and Galaxy Trucker are two of the most fun games I’ve played recently, and I don’t think that it’s a coincidence that both involve real-time gameplay.  I appreciate a complex strategy game as much as the next geek, but playing these games scratches a different itch and comes much closer to what we think of as playing in the traditional sense.