GDCE 2004 - AI & Sound - archive
Cutting Edge AI in Console Games Development
Pierre Pontevia, Kynogon
This session was about AI on future console platforms and demos were from Renderware AI.
The AI paradigm is based on three steps: understand the situation, make a decision, and then apply it correctly.
When people think about the latest AI they tend to think about step 2, the decision making; however, steps 1 and 3 are really key and should not be underestimated.
The perception step is key; currently AI is like a blind man with a stick. Developers generally place hints on the maps, but with larger, more dynamic game worlds and many more entities this no longer works. Advanced dynamic analysis of 3D topology will become a must: the AI will need its own specific model of the world. There was a good demo here of a 3D cavern with lots of curves, holes in the ground and corridors leading off. When the AI algorithm was applied, a yellow colour was superimposed showing the areas of interest the AI had detected, so corridors and holes in the ground were marked as potential points of interest. This changed in real time as the entity moved around.
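To make the idea concrete, here is a minimal sketch of detecting "points of interest" on a flat 2D occupancy grid. This is a hypothetical toy, not the Renderware AI algorithm (which analysed full 3D topology in real time): it simply flags corridor mouths, the kind of cell the demo highlighted in yellow.

```python
def points_of_interest(grid):
    """Flag corridor cells: walkable, with exactly two walkable
    neighbours that sit opposite each other.
    grid[y][x] is truthy where the floor is walkable."""
    h, w = len(grid), len(grid[0])
    poi = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not grid[y][x]:
                continue
            n, s = grid[y - 1][x], grid[y + 1][x]
            wst, e = grid[y][x - 1], grid[y][x + 1]
            # Narrow passage: open along one axis, walled along the other.
            if (n and s and not wst and not e) or (wst and e and not n and not s):
                poi.add((x, y))
    return poi

# A small room (1 = walkable) with a corridor leading off to the east.
room = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]

print(points_of_interest(room))  # -> {(4, 2)}: the corridor mouth
```

Re-running this as entities move (or as the level changes) would give the real-time updating behaviour the demo showed, though at a far cruder level than a true 3D analysis.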
The next demo was a bodyguard demo. These AI entities analysed the area continuously and positioned themselves to cover areas of interest. It was cool to watch as they backed up against a wall and covered entrance points. Another demo showed a typical FPS-style game with a number of AI enemies; these enemies positioned themselves depending on where the player was, which looked really effective.
The key to a good action step is pathfinding, and the production process must match the quality of the AI. Pathfinding on large maps will require streaming mechanisms, dividing the maps into areas because the levels will be too large to hold in memory at once. There was a demo here that showed the automatic generation of pathfinding hint points on a complex ant's-nest type level.
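The "divide maps into areas" idea can be sketched as a two-level search: plan a coarse route over an area-adjacency graph first, so only the areas on that route need their detailed geometry streamed in for fine-grained pathfinding. This is an assumed illustration (the area names and graph are made up), not the demoed system.

```python
from collections import deque

def area_route(area_graph, start_area, goal_area):
    """Breadth-first search over the coarse area-adjacency graph."""
    frontier = deque([[start_area]])
    visited = {start_area}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal_area:
            return path
        for nxt in area_graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Areas of an ant's-nest style level and their connecting tunnels.
nest = {
    "entrance": ["hall"],
    "hall": ["entrance", "nursery", "storage"],
    "nursery": ["hall", "queen"],
    "storage": ["hall"],
    "queen": ["nursery"],
}

print(area_route(nest, "entrance", "queen"))
# -> ['entrance', 'hall', 'nursery', 'queen']
```

Only the four areas on the returned route would need their full navigation data streamed in; "storage" never loads, which is what makes the scheme viable on memory-starved consoles.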
Some interesting projects were mentioned, like ASPIC and Genesys.
The sound session was given by Richard Joseph and Andy Mucho from Elixir Studios, and Nick Laviers from EA.
The first case study was Evil Genius. The problem with sound and music in an RTS is that, unlike a film, there are no highs and lows; all it has is a certain atmosphere. They described how each room had its own mixer, including filtering, and where you can hear sounds is calculated in a similar way to portal-based graphics calculations. As you zoom in on a room, the sound from that room gets more defined. They wanted to have the NPCs talk, but found that real conversation was repetitive and confusing, so they ended up using a mumble that worked very well. Outside, they attached sound effects such as rain to the trees, creating a complete 3D jungle effect.
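The per-room mixer idea might look something like the sketch below. This is my own assumed illustration, not Elixir's code: a room that fails the portal-style audibility test is culled outright, and otherwise its gain rises and its low-pass filtering opens up as the camera zooms in, which is what makes the sound "more defined" up close.

```python
def room_mix(camera_dist, audible_through_portals, max_dist=50.0):
    """Return (gain, lowpass_cutoff_hz) for one room's mixer.
    Units and constants are illustrative, not from the talk."""
    if not audible_through_portals:
        return 0.0, 0.0                      # culled, like portal visibility
    closeness = max(0.0, 1.0 - camera_dist / max_dist)
    gain = closeness                         # louder as we zoom in
    cutoff = 500.0 + 19500.0 * closeness     # muffled at a distance
    return gain, cutoff

print(room_mix(0.0, True))    # -> (1.0, 20000.0)  fully zoomed in
print(room_mix(50.0, True))   # -> (0.0, 500.0)    fully zoomed out
print(room_mix(10.0, False))  # -> (0.0, 0.0)      not reachable by portals
```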
The second case study was given by a friend of mine from university, Nick Laviers, senior audio director at EA. He described the process that went into the creation of the sound and music for the Harry Potter games on the PS2 (he won a BAFTA for his work on the Chamber of Secrets). One issue he has found recently is that sound is being given less and less room on the PS2, as the art now takes up much more space. They devised a scheme to extract 3 streams out of two, and in the end used 17 MB of samples plus streaming. They wanted to make the sound and music dynamic, and one of the areas they focused on was level restarts: rather than reset the music, they varied it so the player did not simply get a repeat. He talked a bit about the problems of making the Dementors sound terrifying; the final demo sound was very impressive, with the music starting stealthy and then becoming more stressful. It comes down to casting emotion and drama outside of a normal narrative structure. The demo was really good: the way the music changed, and the mix of music and sound effects, created a real atmosphere - dead impressive :)
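The restart-variation trick is simple to sketch. This is a hypothetical reduction of the idea (variation names are invented, and EA's system surely chose among real arrangements): cycle through alternate cues of the level theme so a player who dies repeatedly never hears the exact same opening twice in a row.

```python
def restart_cue(theme_variations, restart_count):
    """Pick a music variation based on how many times the level
    has been restarted, instead of always resetting to the first."""
    return theme_variations[restart_count % len(theme_variations)]

variations = ["full_theme", "strings_only", "sparse_percussion"]

print(restart_cue(variations, 0))  # -> full_theme
print(restart_cue(variations, 1))  # -> strings_only
print(restart_cue(variations, 4))  # -> strings_only (wraps around)
```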