GameArchitect.net: Musings On Game Engine Structure and Design

 


GDC 2006

By Kyle Wilson
Friday, March 31, 2006

Introduction

I attended my first Game Developers Conference in 1998.  It was called the Computer Game Developers Conference then, and it was down in Long Beach.  I went out early with guys who were taking in the tutorials, and I toured Warner Brothers Studios where they were filming Lethal Weapon 4 and visited Venice Beach and drove up to Griffith Park Observatory.

GDC 2006, in San Jose, was my fifth GDC.  They're all starting to run together at this point, and this year's GDC was, as always, huge, frustrating, inspirational, boring, fascinating and exhausting in random measure.

First, huge.  GDC has become a massive beast.  It was supposed to attract more than twelve thousand developers this year.  Many of the talks that I went to were filled to capacity.  I didn't get into a couple of talks that I wanted to see because the rooms were packed and the Conference Assistants were turning people away at the door.  GDC has contracted to be in San Francisco for the next decade.  I hope they're able to handle the crowds better there than in San Jose.  I suspect, though, that the show might be better off fracturing into smaller subsidiary conferences more narrowly focused on technical, artistic or business issues.

The conference looks more and more corporate, and the addition of a "serious games" track this year only hastens that process.  There are, I joked to my friends, fewer shaved heads to be seen these days and more guys who've just naturally gone bald.  There are also more women, which is a good thing.  There are more suits, which isn't.  I saw more Asian game developers than ever before, which was a welcome change.  I saw nametags from Korea and China this time, as well as Japan.  Some talks were simultaneously translated into English, Korean and Japanese.

The industry is healthy and very, very hungry.  I've never seen as many recruiting booths as I did this year, nor have I seen them as packed with applicants.  I guess everybody is growing to handle the content-creation demands of the new generation of consoles. 

What's in:  Gameplay prototyping, outsourcing, automated testing, Guitar Hero, advergaming (gamevertising?), multithreaded execution

What's out:  Spherical harmonics, HDR, hardware chest-thumping (ATI vs. Nvidia, PS3 vs. Xbox 360), Peter Molyneux

Although by and large I enjoyed this GDC more than my last GDC two years ago, the conference administration has definitely gone downhill.  There are no longer any conference proceedings at all.  Instead, slides can be downloaded from www.gdconf.com for a small subset of the talks -- 63 out of 445 sessions as I write this.  The conference wifi network didn't reach into any of the conference rooms, making it impossible to download slides during a talk.  The space was insufficient to the number of attendees.  And the food's gotten worse.  Lunch was identical sour sandwiches every day, while the snacks I remember being served at the afternoon coffee break have disappeared entirely.  Meanwhile, GDC admission continues to get more expensive every year.

Wednesday

I feel moderately guilty about the fact that first thing Wednesday morning I skipped my friend Noel's talk on Test-Driven Development to go see the gaming glitterati panel discussion on "What's Next".  I feel doubly guilty because in doing so I penalized Noel for being conscientious and submitting an accompanying 16-page paper with his talk.  I figured that I could read Noel's paper, but that there wouldn't be any paper to read on whatever Louis Castle and Mark Cerny might say.

I still have no idea what Masaya Matsuura, also on the panel, might say.  He was trying to participate through two-way simultaneous translation, and the latency was enough to keep him from getting a word in edgewise.  What the rest of the panel--Louis Castle, Mark Cerny, Dave Perry (who sounds more American every time I hear him talk), and Cyrus Lum--had to say sounded a lot like what any of us who work professionally in the game industry might say:

  • The game industry is becoming mature, and therefore becoming increasingly specialized.  Companies like Harmonix are finding their niche and defining themselves as the owners of that niche.
  • Guitar Hero rocks.
  • We still haven't figured out what the best approach to middleware is--little gems, or all-encompassing frameworks.
  • Costs are increasing faster than sales, so we need to find new sources of revenue (advergaming, media tie-ins) and control costs (define game faster, re-do less, Internet distribution, outsourcing).
  • Dave Perry made the interesting point that there's a schism between hardware manufacturers, who want innovative products that distinguish their hardware, and publishers, who want to control risks through stable, established franchises.
  • Louis Castle says demos are more essential than ever for sales.  Nobody pays $60 for a game unless they know it's fun.
  • And that's what's next.  In six bullet points.

After the "What's Next" panel I skipped across the street to the Civic Auditorium for the keynote talk given by Phil Harrison of Sony Computer Entertainment.  The talk has been amply covered elsewhere, so I'll just transcribe verbatim the quick notes I jotted down after the talk ended and the lights came back up:

Much hype.  Claims all TCR-compliant PS2 games will be backward-compatible w/PS3.  Likely bullshit.  PSP will soon allow downloadable games, including complete PS1 catalog.  PS3's network platform will be very, very much like Xbox Live.  Much talk about how wonderful Blu-Ray is for large data sets.  Trying to justify cost, I think.  PSP sold more units in 14 months than PS1 or PS2.  Goal higher still for PS3.

I hung around the Civic Auditorium for the next keynote, Ron Moore's talk on creating Battlestar Galactica for the SciFi network.  The talk was opposite Tim Sweeney's talk, "Building a Flexible Game Engine:  Abstraction, Indirection and Orthogonality," which I kind of wanted to catch.  But I've heard Tim Sweeney pitch the Unreal Engine several times, and I'd never heard Ron Moore talk about Battlestar Galactica before.

I decided to check out the new Battlestar Galactica on DVD a couple of months ago after reading that Time's TV critic had named it the best show of 2005, writing, "Most of you probably think this entry has got to be a joke. The rest of you have actually watched the show."   I was skeptical, but that's a pretty strong endorsement.  So I watched the show, at least all of it that's out on DVD so far, and was greatly surprised to find something intelligent and authentic, featuring characters who were flawed and human coping with a world that felt like it almost could be real.  The show's not perfect:  it's still got a bit of a weakness for melodrama and for improbably tidy resolutions to conflicts.  But it's rare television that allows you to single out the things that are wrong with it instead of the things that are right with it.

Back to the Ron Moore talk.  Moore is the creator and producer of the new Battlestar Galactica.  His talk revolved around the reasons he had for the decisions he made about who the characters were going to be in the new Battlestar, what their universe would be like, and how the characters and the universe would differ from the original series.  What surprised me about his talk was the fact that he made decisions like an engineer.  In constructing the show, he tried to create a universe and characters who were accessible and real to the viewers.  He tried to connect the characters so that they had inherent reasons for conflict, to create drama and give viewers something to watch every episode.  Almost every change between the original series and the new Battlestar flows logically from those motivations.

And Googling, I see that Ron Moore has a blog, where he writes about what it was like speaking at GDC!  Reading that entry makes me feel like I've wandered into a hall of mirrors.

After lunch I went to "The Next Generation Animation Panel", hosted by Julien Merceron of Ubisoft and featuring Okan Arikan, Leslie Ikemoto, Lucas Kovar, Ken Perlin and Victor Zordan.  Each of the speakers talked for five or ten minutes on his or her own particular approach to procedural animation synthesis.  They showed some very, very cool technology.  Unfortunately, the speakers are coming from academia, and the problems they're solving mostly aren't the problems game developers have.  The academics seem to presuppose that we have untold hours of mo-cap data for a character and that we want to search through that data offline to find some sequence of individual poses or short clips that satisfy a set of constraints.  But we usually don't have mo-cap data, and even if we did have hours of mo-cap sequences, we couldn't afford to load them into console memory and we couldn't afford to search the motion database in real time.

The animation problem, as far as a game is concerned, is this:  We have a character driven by AI or player control.  The controller driving the character moves it through a dense virtual world, applying some velocity plus additional state annotations to indicate whether it's crouched, shooting, rolling, etc.  We want that character to behave in as believable a way as possible and, by implication, in as physically correct a way as possible.  We are given nothing for free.  If it's cheapest to hand-animate every permutation of character behavior, we'll do that.  If it's cheapest to mo-cap all possible character actions, we'll do that.  And if there's ever a way to just specify joint constraints and trust a fully procedural system to create whatever animations are necessary for a character, that would be wonderful.  Especially if it could be done in real time.
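
For concreteness, here's a tiny sketch (entirely my own invention, not anything shown at the panel) of the sort of per-frame input a game actually hands its animation system -- a desired velocity and a handful of state flags, from which believable motion has to be conjured by whatever means is cheapest:

    struct Vector3 { float x, y, z; };

    // Hypothetical state annotations supplied by AI or player control.
    enum class LocomotionState { Standing, Crouched, Rolling };

    struct AnimationRequest
    {
        Vector3         desiredVelocity;   // where the controller wants the character to go
        LocomotionState state;             // crouched, rolling, etc.
        bool            shooting;          // upper-body action flag
    };

    // The animation system's whole job: turn that request into a believable pose,
    // whether the data behind it is hand-keyed clips, mo-cap blends, or (someday)
    // fully procedural synthesis.
    class IAnimationSystem
    {
    public:
        virtual ~IAnimationSystem() {}
        virtual void Update(const AnimationRequest& request, float deltaTime) = 0;
    };

Everything to the right of that interface is negotiable; the request on the left is all we're guaranteed to have.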

On that front, Ken Perlin did actually have some pretty impressive demos to show.  He and Noise Machine (if Noise Machine is anyone but him) have assembled the beginnings of a "virtual actor" tool.  So far, it's a lower body.  A user can define character motion by placing virtual footsteps in sequence.  The tool will synthesize a walk animation to fit the footsteps.  Additional modifiers can be applied to add style to the animation and the character's proportions can be modified as desired.  It's a little stiff compared to real people, but it's an impressive start.  I hope that before too long we'll be able to generate natural-looking animations for entire characters the same way.  This is the future, defining animations not by tweaking individual bones and keyframes but by defining desired results and generating animation procedurally.

The last talk I went to on Wednesday was "Valve's Design Process for Creating Half-Life 2" by programmers Brian Jacobson and David Speyrer.  This was the first of the many gameplay prototyping talks at GDC this year, and for me the best talk of the show.

Valve has no pure designers.  Programmers, level designers and artists all participate in the design process.  Valve has three design teams of about six people each.  The make-up of a team is about two programmers, two level designers, and two artists/animators, all in one room.  Each team will spend about three months making a chapter of gameplay.  They spend two or three weeks getting something up and runnable:  about fifteen minutes of gameplay built in a level of ugly orange collision geometry.  (Why spend time making something pretty when you're going to have to change it?)  Then they bring in someone from outside to play the level.  Based on the results of that playtest, the design team decides what they're going to iterate on over the next week.  Then they bring in another playtester.

Playtesters are pulled from volunteers collected at local EBs and Gamestops.  Valve keeps a list of candidates on their internal wiki.  When a team wants a playtester, they call the next guy on the list.  Playtesters play in a "virtual living room" set-up with a big TV for the design team to watch.  The entire design team watches the play session.  No one gives hints or answers the player's questions, to avoid biasing the results.  After the session's over, the designers ask the player non-leading questions to test what is memorable and noticeable.  They try not to rely too much on questions, though.  Players lie, forget and overpraise to the designers' faces.  The observed player experience is the important thing.

This is very much an engineering approach to fun:

  • Define goals
  • Come up with hypothesis of how to meet goals
  • Experiment to test hypothesis
  • Evaluate quality of the experiment, of the hypothesis tested, and the goals themselves
  • Repeat

The Valve programmers got some questions from the audience about whether this process was responsible for the fact that Half-Life 2 took five years to develop.  They said that the reason the game took so long was that it was very technically aggressive, not that the design process was slow.  The Half-Life 2 engine made major state-of-the-art advances in a half-dozen different areas, and because of that, it was several years into the development process before the technology was even stable enough for them to start creating level content.  The lesson they've learned from this is that their technology needs to be developed in a more iterative fashion, just as their gameplay content is.  Now they're trying to finish and ship one new tech feature at a time, like the HDR support in the "Lost Coast" expansion.

Wednesday night I went to the Sony party at Parkside Hall with Alan Noon, Day 1's Lead Technical Artist.  Getting to attend a Sony party was a new experience for me, since the last time I was at GDC, Day 1 was signed to Microsoft and we weren't welcome at Sony's affairs.  The party began wonderfully, with good food, quiet music, attractive women, and battling robots in a cage.  As the evening wore on, though, it got more crowded, the music got much, much louder, and the robots wore one another out.  Eventually, the music won:  it got so loud, and I got so hoarse trying to shout over it, that I realized I couldn't even hear myself when I yelled.  I called it a night and went back to the hotel.

Thursday

I woke up on Thursday with a slight sore throat, which I attributed to an evening spent trying to yell over loud techno.

I wanted very much to go to a talk titled "Sim, Render, Repeat--An Analysis of Game Loop Architectures".  But when I got to the door, it was full, and the CAs weren't letting anybody in.  Slides for "Sim, Render, Repeat" are, of course, not available, so I have no idea what was said.  (Edit:  Slides are now up!  Look under GDC 06 Proceedings.  It's very interesting reading.)  I assume it was a fabulous, life-changing epiphany for anyone lucky enough to attend.  I ended up wandering next door to listen to Lord British talk about Tabula Rasa.

(Noel tells me that I should have been in Chaim Gingold and Chris Hecker's talk on "Advanced Prototyping" in Spore, anyway, which he says was the best talk of the show.  GDC always seems to have three must-see talks going on at once.)

Anyway.  Tabula Rasa has, apparently, been a total train wreck.  They used middleware tech which left them with last-gen visuals.  The game's overly alien art ended up being simply alienating.  It's a massively-multiplayer online game, but it originally had "private quest" gameplay that left players feeling lonely and isolated.  There were communication issues among the mixed Korean/American team.  And attempts to target Korean and American audiences ended up being costly; art for Korea ended up being scrapped well into production based on late feedback from the Koreans.

Supposedly this is all fixed now.  NCSoft Austin laid off twenty or so people who couldn't agree with the new design direction.  Art direction now emphasizes more approachable costumes and environments.  The game will target only an American audience initially, and will be ported if there's going to be a Korean version.  And graphics technology is receiving more emphasis.  That may all be true, and Tabula Rasa may be a fantastic game when it comes out, but the talk still left me very glad that I didn't spend the last four years at NCSoft Austin.

I went to the keynote talks given by Satoru Iwata of Nintendo and Will Wright of Maxis, but the auditorium was dark and crowded, and I figured they'd be well-covered on the web, so I didn't take notes.  The Nintendo talk gave away little or no real information about the Revolution, but made clear that Nintendo recognizes that they're best off targeting neglected niches in the market while Sony and Microsoft ram antlers over the hardcore console gaming market.  Iwata radiated an almost childlike wonder and good humor as he described Nintendo's business and game development philosophy.  As he talked about their plans to reach a broad audience through more approachable games, to become the PopCap Games of the console world, it was hard to recall that Nintendo used to be Sony, the unstoppable monopolistic colossus that brutally squelched all competition and innovation.  I think I like Nintendo better now that they've lost.

Will Wright's talk was an all-but-indescribable collage of ramblings on extraterrestrial life, game design and prototyping.  The two new bits of knowledge I took away from the talk were:

  1. There's a theory that just as the Earth is in a habitable zone about the sun, which keeps us "just right" for life like Baby Bear's primordial porridge, so our solar system is also in a Galactic habitable zone where we orbit at the right pace not to drift out to dead zones of hard radiation between the galaxy's spiral arms.
  2. During the seventies, the Russians mounted a 23-mm antiaircraft cannon on the Salyut 3 space station to defend against possible attack from high-altitude American fighters.  I know it's sad that we're exporting mankind's propensity for violence off our planet, but... that's just cool.  Apparently they tested it by shooting at nearby pieces of debris.

After lunch I went to see Soren Johnson and Dorian Newcomb from Firaxis talk about prototyping on Civilization IV.  I probably would have gotten more out of the talk if I'd ever played any of the Civ games, since it mostly involved demos of running Civ IV executables and data taken at snapshots throughout the course of the game's development.  I think Soren's a great guy (and one of The Hot 100 Game Developers!), and I loved his design talk two years ago, but this one impressed me less for having seen the Valve prototyping talk the day before.

I next went to David Wu's talk, "Threading Full Auto Physics".  Taking best advantage of multi-core hardware is a subject of great interest to me, and David Wu's a really smart guy who's had to deal with it earlier than most of us, since he's shipped a game so early in the Xbox 360 life-cycle.  (I wouldn't be surprised if his game is the first Xbox 360 title that actually uses multiple threads.)

The guys at Pseudo Interactive have made some interesting decisions.  They run their main game loop in one thread and rendering in another.  Renderer state is double-buffered.  Most of the main-thread cost is physics.  The main loop spends about 30% of its time in collision detection, 40% of its time executing game logic, and 30% of its time doing integration and constraint-solving.  Collision and integration steps are parallelizable.  The game distributes those tasks to worker threads, which process individual objects.  Callbacks from the collision system require mutexes to keep from interfering with one another.  This eats into parallelization speed-ups significantly.  The physics system's speed-up from parallelization is three to four times in a scene with few callbacks, two times in a scene with many callbacks.  That's running physics on five threads.  In response to a question, David Wu said that he estimated that parallelizing the Full Auto engine doubled the engine-code development effort.
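
Here's a rough sketch, in portable C++ rather than anything Pseudo Interactive actually showed, of the general pattern:  per-object physics work fanned out to a handful of worker threads, with callbacks into game code serialized behind a mutex -- which is exactly where the speed-up erodes.  The five-thread count is from the talk; every name below is mine:

    #include <atomic>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct RigidBody { float position; float velocity; };

    std::mutex g_callbackMutex;   // collision callbacks into game code must not run concurrently

    void OnCollision(std::size_t bodyIndex)
    {
        std::lock_guard<std::mutex> lock(g_callbackMutex);
        std::printf("collision callback for body %zu\n", bodyIndex);
    }

    void IntegrateBody(RigidBody& body, float dt, std::size_t index)
    {
        body.position += body.velocity * dt;
        if (body.position > 10.0f)          // stand-in for a real contact test
            OnCollision(index);
    }

    int main()
    {
        std::vector<RigidBody> bodies(1000, RigidBody{0.0f, 1.0f});
        bodies[42].position = 10.5f;        // make sure at least one callback fires

        const float dt = 1.0f / 60.0f;
        const unsigned workerCount = 5;     // Full Auto reportedly runs physics on five threads

        // Workers pull object indices off a shared atomic counter until none remain.
        std::atomic<std::size_t> nextIndex(0);
        std::vector<std::thread> workers;
        for (unsigned w = 0; w < workerCount; ++w)
        {
            workers.emplace_back([&]
            {
                for (std::size_t i = nextIndex++; i < bodies.size(); i = nextIndex++)
                    IntegrateBody(bodies[i], dt, i);
            });
        }
        for (std::thread& t : workers)
            t.join();
        return 0;
    }

In the real engine the render thread would be running concurrently against a double-buffered copy of renderer state; I've left that out to keep the sketch short.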

Finally, I went to the "Burn Baby, Burn: Game Developers Rant."  From what I've read about last year's rant session, it aimed most of its vitriol at the publishers who run our industry and decide what gets put on shelves and what never gets a contract.  This year, most of the vitriol was aimed at the audience:  for being complacent, for not being effective agents of change and, I guess, for not having run out and started a revolution after last year's rants.  I can only assume that next year's rants will be abject orgies of self-loathing by a panel overwhelmed by their failure to persuade their audience this year.

On my way to the Havok party that evening, Alan and I ran into Noel, who invited me along to the High Moon party first.  High Moon apparently sent something like thirty people to GDC this year and threw a party there just for their employees.  That's pretty remarkable in the game industry, and may have something to do with the fact that everyone there seemed genuinely happy and really seemed to love their jobs, which is also pretty remarkable in the game industry.  We continued to the Havok party next door, where a tipsy Mark Harris filled me in on Nvidia's contribution to HavokFX, Havok's GPU-accelerated physics library.  I used to work with Mark at iROCK, and he's a really smart guy; I'm sure HavokFX is great.

Increasingly suspecting that my throat was sore because I was sick, and not just from yelling over party noise, I called it a night early and went to bed.

Friday

Oh, yeah, definitely sick.

Feeling like a rotting cadaver wrested untimely from its grave by eldritch powers, I dragged myself to Jay Stelly's "Physical Gameplay in Half-Life 2" in time to catch most of the talk.  My notes are pretty incoherent, and my memory is a total blank.  Since you have about as much chance of making sense of them as I do, here are the most coherent scribblings from my notes:

  • Physics prototypes -- zombie basketball, watermelon skeet shooting, glue gun, danger Ted playset, toilet crossing (really wish I'd written a little more detail about that last).
  • When adding new technology, create your own referents for it by extensive prototyping.
  • Game design can be reduced to training and testing.  Teach the player how to do something, then give him a chance to demonstrate that skill.
  • Design economy
    • Skill must have value or be cut
    • Limit to number of skills you can train
    • In deciding what to keep, extra weight given to skills that interact

After a morning meeting and a stroll around the expo floor, I dropped in to catch "God of War:  How the Left and Right Brain Learned to Love One Another," by Tim Moss, the lead programmer on God of War.  The success of the game is astonishing considering how tortured the development process apparently was.  The team that made the game had already shipped one game together, Kinetica, which had "nice technology" but average sales, according to Moss.  For God of War, designer David Jaffe was brought in from outside the team to act as Game Director, handing down high-level design decisions from above.  Unsurprisingly, this resulted in friction between Jaffe and Moss.

Jaffe also had a talk at GDC, which I missed.  Alan says that Tim Moss walked out of Jaffe's talk.  (Edit:  Or not -- I'm assured by one of the SCEA programmers that Jaffe just joked that Moss was getting up to walk out when he came in late looking for a seat.  My apologies for spreading scurrilous rumors.)  That's not too surprising, since Gamespot quotes Jaffe as saying, "I would be [spin-cycling] and there's this song that Christina Aguilera sings, and I would be thinking of [lead programmer] Tim [Moss] the whole time, because it's like a '[expletive] you' song. . . . So I would sit in spin class and I got Tim on my mind and would think, '[expletive] you, man, I'm going to make this work.'"

That's swell, but even controlling for my own programmer bias and the fact that I've only heard one side of the story, Tim Moss's talk did a strong job of convincing me that he was the one who made God of War work.  He seemed like a really sharp, capable guy, and I suspect he's the only thing that kept the game from running off the rails into Duke Nukem Forever territory.  Moss pushed for the game to run at sixty Hertz, he controlled the designers' thirst for exotic special-cases, and he led the creation of a data-driven engine that allowed widely varied gameplay to be created entirely in tools by artists and designers.

Speaking of data-driven, God of War had only seven programmers.  In the end they shipped a tight executable that was only a meg and a half.  The rest was done with triggers and events.

Their streaming set-up was interesting.  It's stunningly basic and sensible.  They reserved sixteen megs for level data and kept two levels in memory at a time, either the current and next or current and last.  The sixteen megs could be allocated between the two in any way designers wanted, as long as max(current+last, current+next) < 16.  Designers would place trigger volumes to control streaming.  One trigger would kick off streaming of the next level.  Another, further on, would check that the next level was loaded and block while displaying a loading screen if it wasn't.  A level was about five minutes of gameplay.
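
The whole scheme fits on a napkin.  Here's my own reconstruction in C++-flavored pseudocode; the sixteen-meg budget and the max(current+last, current+next) rule are from the talk, and everything else is my guess at the plumbing:

    #include <algorithm>
    #include <cstddef>

    const std::size_t kLevelBudgetBytes = 16 * 1024 * 1024;

    struct Level
    {
        std::size_t sizeBytes;
        bool        loaded;
    };

    // Designers split the budget however they like, as long as the current level
    // plus either neighbor fits in sixteen megs.
    bool BudgetIsValid(const Level& previous, const Level& current, const Level& next)
    {
        return std::max(current.sizeBytes + previous.sizeBytes,
                        current.sizeBytes + next.sizeBytes) < kLevelBudgetBytes;
    }

    // First trigger volume: kick off streaming of the next level in the background.
    void OnStreamTrigger(Level& next)
    {
        // Start an asynchronous load here; the sketch just pretends it finished.
        next.loaded = true;
    }

    // Second trigger volume, placed further along: only block on a loading screen
    // if the stream hasn't finished by the time the player gets here.
    void OnBlockingTrigger(const Level& next)
    {
        if (!next.loaded)
        {
            // Show the loading screen and wait for the load to complete.
        }
    }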

There was more, but the slides are on-line.  It was a great talk.  Give them a read.

The after-lunch talks looked bleak and I felt miserable, so I went back to the hotel and slept for a couple of hours, thereby missing what I'm told was a really impressive presentation by Patrice Desilets and Jade Raymond of Ubisoft called "Defining the Assassin", about the design of Ubi Montreal's latest game.  There's a write-up on Gamasutra, but it basically says the same thing Alan said:  they showed really kick-ass demos that you should be really sorry you missed.  Sigh.

I dragged myself out of bed again and shambled back over to the convention center to see "Crowd Simulation on PS3" by Craig Reynolds, the creator of Boids.  I stopped at the GDC store to order Day 1 a copy of the audio proceedings, so by the time I made it to the talk I was ten minutes late.  This was actually for the best, since the talk was completely full, but I arrived just as the first person inside left.  So I got the newly-available seat and got to listen to Reynolds talk about how to simulate flocking on a Cell SPU.

I was a little disappointed that the talk was limited to flocking.  I'd hoped for a more complete crowd solution that handled animation and skinning for large numbers of characters.  Instead, the demos were simple fish (in 3D) and dots (in 2D).  The problem was  reduced to finding the N nearest neighbors for each entity, and adjusting behavior (mostly velocity) for each entity based on the properties of the nearest neighbors.  Reynolds partitions space into a regular 3D grid and tracks entities as they move from one grid cell to another.  Each frame he does a simulation pass, in which he DMAs all entities in each possible 3x3x3 block of cells to the SPUs and runs a simulation on all entities in the center cell.  Then, back on the PPU, he moves entities that have crossed cell boundaries to the appropriate cells.
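
For the curious, here's a single-core C++ sketch of that structure -- a regular grid, a simulation pass that updates each cell's entities against the 3x3x3 block of cells around it (the block that gets DMA'd to an SPU in the real system), and a cleanup pass that re-files entities that crossed a cell boundary.  The flocking rule itself is reduced to a trivial cohesion term; this is my own doodle, not Reynolds' code:

    #include <algorithm>
    #include <vector>

    struct Boid { float px, py, pz, vx, vy, vz; };

    struct BoidGrid
    {
        int dim;                                // cells per axis
        float cellSize;
        std::vector<Boid> boids;
        std::vector<std::vector<int>> cells;    // boid indices filed by cell

        BoidGrid(int d, float size) : dim(d), cellSize(size), cells(d * d * d) {}

        int Clamp(int v) const { return std::min(std::max(v, 0), dim - 1); }
        int Index(int x, int y, int z) const { return (z * dim + y) * dim + x; }
        int CellOf(const Boid& b) const
        {
            return Index(Clamp(int(b.px / cellSize)),
                         Clamp(int(b.py / cellSize)),
                         Clamp(int(b.pz / cellSize)));
        }

        // Cleanup pass: only a few boids change cells each frame, but every boid
        // gets touched -- which is where the real cost showed up in the talk.
        void Rebuild()
        {
            for (auto& c : cells)
                c.clear();
            for (int i = 0; i < int(boids.size()); ++i)
                cells[CellOf(boids[i])].push_back(i);
        }

        // Simulation pass: gather the 3x3x3 neighborhood around each cell and
        // update only the boids in the center cell against it.
        void Simulate(float dt)
        {
            for (int z = 0; z < dim; ++z)
            for (int y = 0; y < dim; ++y)
            for (int x = 0; x < dim; ++x)
            {
                std::vector<int> block;
                for (int dz = -1; dz <= 1; ++dz)
                for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int nx = x + dx, ny = y + dy, nz = z + dz;
                    if (nx < 0 || ny < 0 || nz < 0 || nx >= dim || ny >= dim || nz >= dim)
                        continue;
                    const std::vector<int>& c = cells[Index(nx, ny, nz)];
                    block.insert(block.end(), c.begin(), c.end());
                }
                for (int i : cells[Index(x, y, z)])
                {
                    Boid& b = boids[i];
                    float ax = 0, ay = 0, az = 0;
                    for (int j : block) { ax += boids[j].px; ay += boids[j].py; az += boids[j].pz; }
                    float inv = block.empty() ? 0.0f : 1.0f / float(block.size());
                    b.vx += (ax * inv - b.px) * dt;   // trivial cohesion steering
                    b.vy += (ay * inv - b.py) * dt;
                    b.vz += (az * inv - b.pz) * dt;
                    b.px += b.vx * dt;                // integrate
                    b.py += b.vy * dt;
                    b.pz += b.vz * dt;
                }
            }
            Rebuild();
        }
    };

The useful property is that the per-cell work is completely self-contained, which is what makes it such a natural fit for DMA-fed SPUs.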

I found the algorithm actually less interesting than the performance numbers presented.  The 3D fish simulation could simulate 5,000 fish at 30 Hertz.  The 2D point simulation could simulate 10,000, uh, dot-people at the same frame-rate.  The SPUs were only 30% busy with all the sorting and simulating they were tasked with; they spent the other 70% of their time idle.  DMAing every entity to an SPU 27 times took only 1% of the frame time.  The biggest performance hit, actually, was iterating over all entities on the CPU to transfer them to the appropriate cells after their positions had been updated.  Only a tiny fraction of entities actually move between cells each frame, though.  The real cost is just iterating over all that memory and checking all their positions.

And that was my GDC.  Returning to my room I rode up in the elevator with Satoru Iwata, the President of Nintendo, so he may even now have my cold.  I apologize to Nintendo fans everywhere.

Any opinions expressed herein are in no way representative of those of my employers.

