Sunday, March 6, 2011

GDC 2010 Trip Report

I found this report from last year while starting to write up my notes from this year. It’s interesting to see what my thought process was last year- some stuff I still agree with, some I don’t. Aside from removing some stuff that is likely under NDA, this is it in its entirety.

It should be noted that I wrote this during my tenure at Volition, and that the opinions and statements are my own, and not those of THQ, Volition, or my current employer, 5TH Cell.

--------------------------------------------------------------

I came away from GDC this year with fewer notes on talks than the last time I went, but more opinions and ideas based on social interactions with people at studios who are considered leaders in animation for games.

Tuesday, the Travel Day From Hell™

(Redacted due to lack of pertinent info) It took a while to get to SF, but we got there. ‘Nuff said.

Wednesday, the Meeting Day From Hell™

Due to the delay in travel on Tuesday, I had meetings (both work-related and social) that I pushed to Wednesday. Those, coupled with other meetings I had already scheduled for Wednesday, made my day unbelievably busy. To top it off, each vendor I was meeting with was in a different hotel, and getting a cab in SF sucks, so I did a lot of walking.

Prior to the meetings, I headed over to the Moscone Convention Center to grab my credentials and my free Droid phone (this year's gift to the speakers). I ran into Seth Gibson, a TA at 343 Industries (the MS studio rumored to be taking over Halo). We went over what our panel was going to be like on Friday and scoured the hall for some party invites. We split ways when he ran off to do some powerlifting training and I had to get to my meetings.

In the interest of keeping my report shorter, I’ll go over the meetings that I felt were the most relevant to what we are doing here.

Meeting with Natural Motion

Wow, these guys have come a long way. They haven’t addressed all of our concerns from when we evaluated them a year and a half ago, but they are doing some really cool stuff nonetheless:

· Morpheme:Connect 2.x

o The meeting started out with them showing some MC 2.0 stuff. We evaluated 1.x, so I wanted to see what they had improved.

o They have a timeline now that shows events (triggers) as they happen

§ Also have “event detection” that works with footsteps

o Right-clicking the control param node lets you create a new one via a modal dialog. We should steal this.

o New gun-aim node that controls the arms and spine. It’s basically a spinebending node but specifically for aiming firearms.

o Head look-at node is improved in that it can drive other bones for a more realistic feel

· Added new Physics Blendtrees and State Machines

o They have ragdoll nodes that actually simulate in the editor

o The networks can transition to/from physics and animation

o Editor has world objects that the characters can collide with

o They have a physics rig editor, where you can setup the skeleton and the joint types/limits

§ These rigs/limits work in game

o Have a “closest anim” setting that will choose the appropriate animation based on root orientation plus limb positions

§ E.g., for a getup, they’ll have 20 motions, and this feature will choose the right one to play based on the position/orientation of the body and limbs. (A rough sketch of that kind of pose matching follows these notes.)

o Physics objects can affect animation and vice versa

§ They do this via Hard Keying (Where the animation influences the environment) and Soft Keying (Where the environment influences the animation)

§ Both techniques can be combined on different body parts, and be given different strengths

o Zombie demo was cool

· Added some more organization stuff

o You can resize state nodes, which is nice if you’ve got a ton of transitions coming in or out

o Transitions can be broken out of

o Transitions have a group collapse/expand

o Added wild-card transitions, but they also contain limits to prevent them from coming in from certain states

· Working on speeding up network execution

o Version 3 is much faster, as it checks for errors as you work instead of at runtime

o Want to make simulation time less than 1 second

· Their live-link is now productized, so it is a bit more efficient to look at multiple characters

o But, multi-characters will never be in Connect. It relies too much on game code.

· Doing the same stuff as us with Choice Nodes (When a state is entered)

· Miscellaneous details

o They have an “uneven terrain” node that drives how the characters react to uneven terrain

o There is euphoria integration into Version 3

o Gun-Aim Node is all IK (Arms, head, spine). No posing. It looked really good.

o Animation sets are improved, but still not what we’d need

§ In a clip node, you set what clip is used for what animation set, which is backwards to how we do it

§ They also use file names, but they claim we can change the code to use tags

o Have a mirror node now. Can mirror whole network or just a clip, like we have planned

· Euphoria

o It now plugs into Morpheme:Connect

o Behaviors are created by the client via an editor. No more sending programmers to a studio for custom coding

§ Ships with a few stock behaviors as well

o They have behavior nodes, which execute Euphoria behaviors, blending to/from/with animation

o Arrow Demo was cool

§ The arrow was user-controlled, and depending on the speed of the arrow thrown at the character, it reacted differently: ducking, putting its hands up, pushing the arrow away, etc.

§ I expressed interest in having public videos to do this justice

o Natal stuff is coming down the pipe
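
The "closest anim" getup selection above is essentially a pose-distance search: score each candidate clip against the current ragdoll pose and pick the best match. Below is a minimal Python sketch of that idea, purely my own assumption about how such a selection could work; the data layout, bone names, and scoring are invented for illustration and are not NaturalMotion's implementation.

# Hypothetical sketch of a "closest anim" selection: pick the getup clip whose
# first-frame pose best matches the current ragdoll pose. The GetupClip layout,
# the weights, and the facing penalty are all assumptions for illustration.
import math
from collections import namedtuple

GetupClip = namedtuple("GetupClip", ["name", "start_pose", "root_facing"])

def pose_distance(current_pose, clip_start_pose, weights=None):
    """Weighted sum of squared distances between matching bone positions."""
    total = 0.0
    for bone, (cx, cy, cz) in current_pose.items():
        if bone not in clip_start_pose:
            continue
        kx, ky, kz = clip_start_pose[bone]
        w = (weights or {}).get(bone, 1.0)
        total += w * ((cx - kx) ** 2 + (cy - ky) ** 2 + (cz - kz) ** 2)
    return total

def choose_closest_getup(current_pose, root_facing, getup_clips):
    """Return the clip whose start pose (and root facing) best matches the ragdoll."""
    best_clip, best_score = None, math.inf
    for clip in getup_clips:
        # Penalize facing mismatch so a face-down getup isn't chosen while lying face-up.
        facing_penalty = 1.0 - sum(a * b for a, b in zip(root_facing, clip.root_facing))
        score = pose_distance(current_pose, clip.start_pose) + 10.0 * facing_penalty
        if score < best_score:
            best_clip, best_score = clip, score
    return best_clip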

Overall, I was impressed with what I saw. It was cool to see Euphoria running in their editor, and good to see that they are improving some processes. It was disappointing that some major flaws from when we evaluated it were still there, but at the same time, it affirmed that we made the right choice, and that we are on the right track with our tools development. A lot of their cool new features are ones we either already had or already planned for, and the ones we didn’t, we are definitely going to investigate.

Meeting with Moanima

Moanima is a motion capture cleanup outsourcer based in the Philippines. This meeting was to catch up with their management to discuss the potential to use them to clean up any motions we get from our internal studio, as well as external ones if we feel that we can save money by moving cleanup to them. The data from their POC was good, and their pricing is about half what we typically pay. They have 24-hour shifts, so the time difference is supposedly not an issue, and their client list is impressive if it’s accurate.

I personally feel that we should explore this avenue for cost savings, whether it be with Moanima or another outsourcer.

After all of my meetings, I wound up back at the hotel to wash up, check email, and relax for a bit before heading out to dinner.

Thursday, the Roundtable From Hell™

“So, why DO we need Technical Animators?”

We’ll get to the relevance of that question in a minute.

Thankfully, the first day of the actual convention had nothing scheduled too early for me, so I took the opportunity to walk the expo floor before lunch. I spent some time at the THQ booth talking to prospective recruits and other THQ studio and corporate folks. After that, I crawled around the floor looking for people I knew at other studios, seeing what the booths had to offer, and throwing rocks at the Autodesk booth. The expo floor was very tight, and there were tons of students looking to get interviews, so I didn’t stick around too long. Once the speaker lunch was served, I ran in, ate my craptastic sandwich, and headed off to my first talk- the Physics and Animation in Just Cause 2. Or so I thought. I got derailed by someone from corporate who wanted to talk about mocap-related topics, so I did that. I heard it was a good talk, and am looking forward to seeing the slides and video. I did make it to the rest of the talks I wanted to see, though.

Behind the Scenes: Uncharted 2’s Unique Cinematic Production Process

This talk was very informative, as it went over how Naughty Dog got their cinematics from start to finish. Some of it was “common knowledge,” in that they’ve disclosed some of this info before, but I’m going to include all of it in my notes below:

· Animators get 1 week to finalize 15 seconds of cinematic motion. This includes finger, facial, and prop animation, since none of those are mocapped.

· All props are built to spec. If they exist in game, they are built to those measurements. If the prop isn’t modeled yet, they build the real-world prop, take measurements, and the in-game prop is modeled to those specs.

· Josh showed a lot of video of the shoots.

· The mics for audio capture were lavaliered to their heads. When they did ADR in studio later (when it was needed), the mics were put in the same exact spot.

· They don’t mocap crazy stunts. They leave those up to the animators.

· Prepping for a shoot involves auditions, callbacks, rehearsal, read-throughs. Each of these steps brings changes to the script based on ad-libbing and what feels right.

· Story guys, designers, and cinematics guys are all in a room together during initial development. No one comes in from on high (or from 3000 miles away) and tells them to make sweeping changes.

· Their director, Gordon Hunt, is a film director.

· No storyboards, no animatics. On a shoot, they bring an overhead schematic that blocks out the motion of the actors, and placement on props. Camera blocking is known, but not final here.

o Josh did admit that there were some scene types that, in the future, they will create rough animatics for due to the sheer complexity of them

· No animators acting. They have had plenty of issues with animators and even actors making big, exaggerated motions, so they need to direct it out. They prefer people with real stage and film experience. The videos shown for this part were pretty entertaining.

· They have 4 camera operators shooting live footage for every scene. This footage is used for reference for finger/face animation.

· They still will mix and match takes when necessary, either by blending mid-shot or cutting on camera cuts.

· Mocap shoots occur every few weeks. This way they can shoot scenes as they are ready, keep the animators busy, and give the actors a real chance to invest in the characters. Since the work is steady, the actors don’t just come in, run through the motions, and leave after 2 shoots. A lot of ad-libbing and natural movement came out of this.

· Mocap data is heavily modified. He even went so far as to allude to how the “Avatar” way is a joke.

· Facial animation trumps body animation. If they feel a scene is missing something, or feels wrong, they attack the facial animation first. The reason, and I agree with this, is that we are more forgiving of weird body motion than of weird facial animation.

· If they have to ADR, or when they are doing in-game audio, they have the actors in the booth together. It makes for a more natural exchange.

· Since they record audio with the mocap, they had to custom-tailor mocap suits with no Velcro, so the actors had no chance of sticking to one another and causing Velcro ripping to be recorded. Amazing.

Overall, they do this way better than we do. Part of it is probably that they are first party, and as such are fairly independent of Sony. There is a lot of trust and collaboration that occurs within their team, and if you’ll believe what they say, no egos.

I think we have the talent here to do this type of development (not the headcount), but I think we need to change how we do things in order to be as successful as ND is with their cinematics.

Technical Animation Roundtable

Back to the question, “So, why DO we need Technical Animators?” This was the first question of the first Technical Animation roundtable in GDC history, asked of a person (me) who was running his first roundtable ever, in a room full of mostly students. And it was immediately followed up by the same person with “because at Bioware, I’ve automated everything a tech animator does, so I’ve made you obsolete.”

The air was sucked out of the room. The students looked shocked, and the industry vets looked confused. What’s worse was that Rob Galanakis, the asker of “The Question,” had made a declaration in the previous Tech Art roundtable (that had just concluded) that he was going to ask “The Question” in my roundtable. He was warned against it, but did not heed those warnings.

To be fair, it’s a valid question, and it makes for good debate when properly framed. But the intent behind the question was a bit questionable.

So that spurred a 20-minute conversation about what Tech Animators are needed for. After a while it got cyclical, and I knew I had let it go on too long, so I switched the topic to pipelines and processes that people use at their studios or in school. We shifted topics a few times, started discussing network editors, mocap, and eventually ended the day with some topic ideas for the next session.

One interesting comment, from Rob, was that they use Morpheme at Bioware and found that they had to separate their networks into smaller chunks to make them manageable for loading and running. Morpheme is really, really slow.

After that debacle, I decided I needed a drink. (redacted- insert generalities about going to Valve party, bars, seeing old friends, etc)

Friday, the Panel Time From Hell™

Technical Art Techniques: Character Rigging and Technical Animation

First talk of the day was at 9am, and it happened to be the Technical Animation Panel I was on with Ben Cloward of Bioware and Shawn McClelland of Autodesk, moderated by Seth Gibson. Given that Thursday night was heavy on the partying, we were surprised to have the session fill up to standing room only. I felt that the panel went OK, as we discussed the past, present and future of technical animation, but I think we all felt that it was hard to get into specifics without breaking NDA. I think that if we had spent more time preparing for the panel, it would have gone a lot better. We fielded very few questions from the audience, which was surprising considering the number of people there, but we did have quite a few people stay behind to talk with us after. One point of interest is that our use of Biped is a bit… ridiculed. I got in on it and poked fun at the situation, and while I maintained that if it was a broken tool, it wouldn’t be in our pipeline, the points made for switching are valid. It is worth further discussion, especially with (redacted) coming up and the chance to completely replace that pipeline with a much better one. I think our excuses to date for not switching have been pretty weak and short-sighted.

I had no more talks I wanted to attend till after lunch, so I spent a few hours talking with various people, walking the expo floor a bit more to catch up with some industry folks, and eating lunch.

Creating the Active Cinematic Experience of Uncharted 2: Among Thieves

This talk was pretty amazing, from the first line (“Narrative drives gameplay, and everything is done in service of the story”) to the last. This may contain spoilers. My notes:

· On Uncharted 2, narrative drove the gameplay. Everything in the game was done in service of the story.

o Story and Gameplay go hand in hand. When both are being developed, they are developed collaboratively. Story does not insist on gameplay for story’s sake, and gameplay does not insist on story to make a gameplay element work. They come to those conclusions together.

· They decide on Genre before anything else, so as to have pre-determined expectations. This helps them ground the world and the story.

· Next step is to ground your world. It aids in believability, and defines the limitations of what can happen.

o E.g., no jetpacks or Bayonetta boots just for the sake of having them or making obstacles easier to overcome. In order to get to the top of a building, they made it so you had to climb, not fly.

o In situations where they wanted to limit things, they did so. For example, they removed the Yaks during the village invasion so no one would want to shoot them.

o Characters themselves behaved differently. Drake boosted the women and Sully up, since in this world, the women and Sully were too weak to boost up Drake.

· Pacing was movie-like, in order to keep the player invested. They had little to no repeating of gameplay or sets.

o Cutscenes were meant to prop up player interest, but were not the only thing used to do so

o Core mechanics were varied. No stealth missions 3 times in a row.

o Cutscenes were used to setup the frame of the story

o They involved the whole team, plus focus testing, to get suggestions for change.

· “The Gap”

o When the hero had a goal to reach, they presented him with a first action. This was typically the easiest way to reach the goal.

o They would then add a “Gap,” which made the first action impossible and forced the player to use a second action to reach the goal. This is probably writing and design 101 :)

o The timing and intensity of the placement of these gaps was varied so as to keep the player from becoming too bored, or from expecting it.

§ They admit the Shaefer (sp?) rescue scene/mission was too long and did not contain enough of this.

· Along with gaps, they used contrast in gameplay and story in order to keep the player engaged

o Each moment in the game had a differing intensity

o Climaxes in the acts were contrasting as well.

o They would use calm moments, then exciting moments, but even the intensity of calm to exciting was varied so it wasn’t just a sine wave.

· Using Cutscenes

o They did everything they could to not use action set-pieces in cinematics. Those were for gameplay- it’s more fun to play the action than watch it.

o These are for telling the story, and for giving a performance. They don’t make a cutscene unless a story element is dramatic enough for one

o These are also used to manage physical, tonal, and player/character continuity

o Most scenes are started on an action, so as to not take the player out of the experience

§ E.g., if the player just escaped an exploding building, but the cutscene has the player clean and walking with no indication that he just jumped out of a building, it is bad. So they avoid this.

o Will also cut on an external action

§ In these cases, they’ll force the player to do a movement via gameplay (like stumbling on the collapsing bridge) and cut on that, so the cutscene still has Drake stumbling.

o Always made sure to do environmental and FOV check to make sure the player didn’t miss big moments in game (like the tank)

o Design drove the in-game cameras.

o Design and Animation need to collaborate

§ Design is the driving force. Design needs to communicate what it wants and animation has to deliver. There is collaboration and back and forth, but ultimately, design makes the call.

§ The designer of the current level is the one who makes the final call on everything

· Using “Scenes”

o Cutscenes were used to create a change in the player’s world. Whatever gameplay tone there is, is also the narrative tone.

o All scenes are used to build up future scenes. They aren’t there for the sake of filler.

o All of the small details are meant to move towards the larger goal of moving the story forward.

· Misc Notes from questions asked by the audience and by me

o In-game dialogue was done with necessary actors in the same booth. A lot of ad-libbing occurred.

o Contextual move sets were used a lot. For example, in the village, shooting was replaced by handshaking

o In game cameras determined by design

o No motion in the game was done without design direction. Animation did not have final say on any motion, even though there was collaboration with design. This is an interesting approach, but hard to argue with considering the success of the animation in Uncharted 2.

o The designers who had final say per level were all senior, with years of experience. They also had to have their approvals approved by the game director.

Overall, I was happy with this talk. In my experience, I’ve had a better time as an animator and animation lead when design was allowed to be heavily involved in the look and feel of motion in the game. The key is to not have an ego about it (on both sides), and to work together to get the best results possible. Even still, design should be empowered to make the final call. The most successful animation we have here, in my opinion, is when gameplay mechanics drive animation. Please note that I am not saying animation quality should be sacrificed or that animation quality should be determined by design, just that animation shouldn’t drive gameplay unless that is the overall design from the start.

Technical Animation Roundtable

Today, we had half returnees and half new people. Once again, no one left, and we even had some stragglers file in. Today’s topics trended towards motion capture, 3rd party rigging and other software, blendtree editors, and batch export questions.

The most interesting conversation, which I had to cut off when it became a 2-person event, was started by Travis McIntosh, Lead Programmer on Uncharted 2. He was in charge of the animation pipeline, and noted that Drake had 3000-4000 motions alone. They have a farm of machines at ND to handle batch exporting as well as many other things, which is good when they have to re-export every animation. When the farm is free, it takes 30 minutes. When only one machine is free (which is the case during the end of the dev cycle), it takes 27 hours. There was a lot of back and forth over spawning the batch event once per file, using ASCII files, etc. I took the topic offline and held it for after the session.
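
Since the "spawn the batch event once per file" idea came up here, this is a minimal sketch of what per-file export dispatch across a small worker pool could look like. The exporter name, file layout, and arguments are hypothetical stand-ins, not ND's (or anyone's) actual pipeline.

# Hypothetical sketch of per-file batch export: one exporter process per source
# file so a single bad file can't kill the whole batch. Tool name, paths, and
# arguments are invented for illustration.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

EXPORTER = "anim_exporter"          # stand-in for a headless exporter executable
SOURCE_ROOT = Path("anims/source")  # assumed source location
OUTPUT_ROOT = Path("anims/exported")

def export_one(anim_file: Path) -> int:
    """Spawn one exporter process for a single animation file; return its exit code."""
    out_file = OUTPUT_ROOT / anim_file.with_suffix(".bin").name
    return subprocess.call([EXPORTER, "--in", str(anim_file), "--out", str(out_file)])

def export_all(max_workers: int = 8) -> None:
    """Export every source file, fanning out across a small pool of workers."""
    OUTPUT_ROOT.mkdir(parents=True, exist_ok=True)
    anim_files = sorted(SOURCE_ROOT.glob("*.ma"))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(export_one, anim_files))
    failed = [f.name for f, code in zip(anim_files, results) if code != 0]
    print(f"Exported {len(anim_files) - len(failed)} files, {len(failed)} failed: {failed}")

if __name__ == "__main__":
    export_all()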

I ended the session by proposing a story time/rant session for the final session, to hear what people have had to deal with and how they overcame the problems. Everyone seemed amenable to that. I stuck around a bit to talk to a bunch of people with questions (people are always so much more talkative after the session; I need to figure out how to foster that DURING the session more), and then headed to the last talk of the day.

Animation Process of God of War III

This talk was pretty good, if only to see where the line between design and animation is drawn at another studio. Notes:

· Animation is involved as early as the concept phase for a creature

o Here, animation helps determine scale, identify rigging issues, and flag potential concerns over whether the creature is feasible to animate

o It sounds like (redacted) is doing this? Awesome.

· Once a creature concept is done, there is a Character Kick-Off meeting

o All characters go through this process, and the intent is to get all disciplines (art, programming, design) on the same page as to how it needs to work

o Personality of the character is developed here, and rules for its behavior and motion are setup

o The grounding of the character in the story is set here as well

o Out of this meeting comes a small strike team (animator, rigger, designer, gameplay programmer)

· In this strike team, a simple model and rig of the character is made and collaborated on between all disciplines in the strike team

· Once the character is rigged, an animator is paired with a combat designer

o The combat designer provides the motion list, many times down to exactly how the character should move

§ This process is collaborative, but the designer ultimately owns how the character should move and behave

§ He even went so far as to say that animators WANT designers to have that control. Crazy talk!

o With this small pairing, immediate implementation is possible. Once an animation or group of animations is done, they can be implemented

o Chimera character videos for GoW3 were shown here and looked amazing

o There was also the mention of using a silhouette pass on creature/boss intros. If the silhouette didn’t read, they’d change the motion or the cameras. This is a slick idea.

· Contact Sensitive Moves (AKA, QuickTime Events)

o These moves were designed by the combat designer, and were used to sell the brutality of Kratos

§ This is the only time the strike team animator animated Kratos.

o But, these are the times where the animators are given the most freedom to shine with the over-the-top animation.

§ This is also the only time animators are allowed to control the camera animation.

§ Even still, the designer had the final say on these

· Keeping Kratos Consistent

o They developed a set of rules for Kratos, so that all animators would animate him consistently.

§ Since all animators animate Kratos for their QTEs, this had to be put in place

§ Also made it easier for new hires, outsourcers and other studios using the Kratos character.

o Rules

§ He never falls on his back unless he dies

§ Never smiles

§ Always moves forward when initiating an action (movement, attacks, etc)

§ Always at the center of the action, so as to never appear weak

§ Fortunate to have the same voice actor for all instances that Kratos exists

o They showed a theme video that had been around since GoW1, that they still show to new hires to sell the Kratos vision

· Balance of Gameplay and Animation

o This was a common theme at GDC this year. This section wasn’t as informative as I’d hoped, but there were a few interesting tidbits

o Move cancelling was very important

§ Can jump, roll, cast magic, throw, guard from any other state

§ All moves were animated through and triggered for cancel (see the sketch after these notes for how a cancel window might look)

o They telegraphed an enemy’s moves

o On previous GoW titles, they used code blends for stand/walk/run, but on GoW3, they wanted to add in transitions

§ This caused issues with previous gameplay and control responsiveness

§ In order to fix that, they limited the frames of those transitions to the old code blend frame count

o There were rules for modifying playback speed (stockMAAAAANNN!!!)

§ There was a mutual respect between designers and animators

§ If the speed change was big (say, over 50%), it got sent back to animation for a real update
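
As referenced in the move cancelling notes above, here is a rough sketch of how a cancel window might be represented. This is my own guess at the shape of such a system; the move names, frame numbers, and action list are invented, not Santa Monica's actual implementation.

# Hypothetical sketch of animation-driven move cancelling: each move exposes a
# frame after which it may be cancelled into a small set of always-available
# actions (jump, roll, magic, throw, guard). All values are invented.
CANCEL_ACTIONS = {"jump", "roll", "magic", "throw", "guard"}

class Move:
    def __init__(self, name: str, length: int, cancel_frame: int):
        self.name = name
        self.length = length              # total frames in the animation
        self.cancel_frame = cancel_frame  # first frame at which a cancel is allowed

def can_cancel(current_move: Move, current_frame: int, requested: str) -> bool:
    """A cancel is allowed once the move's cancel window opens, for whitelisted actions only."""
    return requested in CANCEL_ACTIONS and current_frame >= current_move.cancel_frame

# Example: a heavy attack can be cancelled into a roll once frame 18 of 45 is reached.
heavy_attack = Move("heavy_attack", length=45, cancel_frame=18)
assert not can_cancel(heavy_attack, 10, "roll")
assert can_cancel(heavy_attack, 20, "roll")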

Overall, it was cool to see another take on how to balance design and animation. This was another talk, though, that emphasized that while it is a team effort, there is one driving force behind how characters move (again, not quality), and it’s typically design, not art.

After the GoW3 talk was done, I got food, went to the speaker party (and gave Rob some grief), met up with some corporate folks to discuss animation direction at THQ as a whole, and then called it a night.

Saturday, the Recovery Day From Hell™

The whole week finally caught up with me on Saturday morning. I was happy to not have my roundtable till 10:30am with no animation talks at 9 to attend, so I slept in. I think I was also finally converted to Pacific time, which I knew was going to hurt on Sunday when we traveled home. I grabbed some breakfast on the way to the convention center and got there with plenty of time to spare for my roundtable. The Tech Art roundtable was finishing up, so I caught up with a bunch of people there (my roundtable was in the same room, right after) before diving in to my final responsibility of GDC.

Technical Animation Roundtable

This was the final session of the first ever Technical Animation roundtable. I started out by getting everyone to give themselves a round of applause.

Today had some people leave, but they were quickly replaced with stragglers. I think the subject matter turned them off (those who left were all newcomers), so perhaps next year (if there is a next year) I’ll not have the final day be a rant session. I think the folks who stuck around appreciated it though, as after the rants simmered off, I asked everyone what they had learned from those situations in order to avoid repeating the same mistakes.

The session wrapped up with discussion about the importance of communication, as well as some questions from students. We ended a few minutes early, and we even had Rob lead a round of applause for the group. I think he was trying to keep me from sacrificing him to the tech animators in the room.

I enjoyed running this session and hope to do it again. There were plenty of roadbumps, lessons learned, and things I wish I had done better, but overall I felt like it went pretty well.

Off to the next talk.

Animation and Player Control in Uncharted 1 & 2

This talk was given by Travis McIntosh, the same guy from the Tech Animation roundtable. This one was the most informative from the animation side of things of any talk I attended. Some of it is a bit rehashed from previous knowledge, but I’m including all the notes:

· Control and animation conflict with each other

· There is a heavy use of animation layering in UC1 & 2

o Face, hand, hair, base, run randomizer, weapon, and breathing were all separate layers

o At any one time, 30 motions could be playing

· Animations on the same layer are all the same length and synced up

o Lengths and syncs are different across layers, though

o There is different code controlling each different layer

· There are partial bone sets for each body area

o There are 12 (arms, head, spine, etc)

· Used reference nodes, which are actual nodes placed in a level by the animator

o These were used for determining what points in the level would play what contextual animation set

o Had a huge box level for each type of scenario for testing. The video was good to see here.

· Used animation mirroring everywhere

· Sphere Man in Box World

o Had this idea that just creating a “sphere man” in a box world would be enough to determine controls. If it felt right they’d just make the animations work!

o This turned out to be a horrible idea, since the animations themselves contributed to the feel of the controls.

· Stand to run animations were created with 8 different directions

o No blending between the 2, which meant that if you weren’t pointing in a direction divisible by 45 degrees, you were going in the wrong direction for the duration of the “Stand to run” motion.

§ The code corrected you as it ended

· Turn in place motions were 60, 90, 120 and 180 degrees

o The code would pick an animation and counter rotate in order to avoid foot slide

o Used the same motions for when you were standing or moving

· Bone counts

o 30-40 body bones, 30-40 finger bones, and 97 facial bones

o Face and finger bones were on layers, so not all motions had them

· Additive Animation (My note here is “weapon layering video is insane”)

o Showed how they used layering for reloading, and had the same reload animation work for stand, walk, run, rolling, jumping, etc.

o In Maya, they animate to one pose, then use a “diff” tool to subtract that pose from the animation, for their additive layering (a sketch of that diff-and-apply idea follows these notes)

o All aiming is additive (using poses) in fine aim mode. In non-fine aim, they don’t care if it’s accurate.

o The way they do the idle/stand motion is clever. It’s a 300-frame motion of just random arm-swing, breathy-type motion. They then layer a single-frame pose on top. They use this same 300-frame motion for stand, ready, stealth, crouch, cover, etc., each with a different 1-frame pose on top. Awesome.

o Also use it for run randomness.

§ The run is 30 frames, but the randomizer layer is 302 frames, on purpose. This is so that after 10 cycles, it looks completely different and more “real.”

· IK

o No FBIK, but they are thinking of adding it on the next round of projects

o IK was 2-bone IK w/locators on weapons, much like we do

§ Also had foot IK to the ground, with a root offset. This fixes floaty feet from the additive pose over the 300-frame idle motion

§ IK was done to the renderable geometry!

· Move Set Remapping

o Much like our animation groups, but they did it via code

o Their normal pipeline is to have the designer and animator work together to create move sets, but there are times when a move set type has already been designed and coded (like a pistol move set), and there is no programmer/designer time needed.

o Animator goes in and creates a remap for a rifle, for example, and it just works.

§ This is done via a script, not via a table file or editor

o There are many parameters exposed to the animators and designers so they can be less reliant on code support

§ Rotation, move speed, anim playback speed, etc.

· Memory Use

o Animation gets 15-30MB per level, which is 20-40% of their level mempool

o 3000-4000 moves loaded at any time, 3000 of which are Drake

§ They load only what they need per level

§ They sample their anims at 10Hz at export, keeping 15 and 30Hz for faster moves. Apparently, this doesn’t look bad. Maybe we should try it?

§ Insist that their compression schemes are “as good as they can be.”

· Other tidbits that came from questions or post-talk conversation directly with Travis

o ND does not want animators prototyping gameplay on their own. A designer must be involved.

o No skeleton retargeting! Wow…
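
The "diff" tool mentioned in the additive animation notes above is the heart of additive layering: subtract a reference pose from every frame of a clip, then add the result on top of whatever base animation is playing. Here is a minimal sketch of that idea with an invented data layout (per-bone translations only; real systems would handle rotations as quaternion deltas). This is not Naughty Dog's tooling, just an illustration.

# Minimal sketch of the additive "diff" idea: subtract a reference pose from a
# clip to produce an additive layer, then apply it over any base pose (e.g. the
# same reload deltas layered over stand, walk, or run). Data layout is invented.
def make_additive(clip_frames, reference_pose):
    """clip_frames: list of {bone: (x, y, z)}. Returns per-frame deltas from the reference pose."""
    additive = []
    for frame in clip_frames:
        delta = {}
        for bone, (x, y, z) in frame.items():
            rx, ry, rz = reference_pose.get(bone, (0.0, 0.0, 0.0))
            delta[bone] = (x - rx, y - ry, z - rz)
        additive.append(delta)
    return additive

def apply_additive(base_pose, additive_frame, weight=1.0):
    """Layer one additive frame on top of a base pose, scaled by a blend weight."""
    result = {}
    for bone, (bx, by, bz) in base_pose.items():
        dx, dy, dz = additive_frame.get(bone, (0.0, 0.0, 0.0))
        result[bone] = (bx + weight * dx, by + weight * dy, bz + weight * dz)
    return result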

Overall, this was a great talk. It was interesting to see how the tech side of their animation pipeline works. The videos they showed were really cool, and illustrated the animation layers nicely. There is definitely stuff here we should look into doing.

Building a Better Halo With Python: Production Proven Techniques

I didn’t take notes for this session, as I figured I could just get them from Seth later. It sounded like he had grandiose plans for this talk, but the IP hammer was brought down by Microsoft, so he had to make it quite generic. The talk went very fast (30 minutes), but he had a lot of stragglers asking questions later. I can’t say that I learned a LOT from the talk, but it was interesting to see how he handles the use of Python. I just wish I could have seen real-world Halo examples, and he wishes he could have shown them. From what I’ve heard, he’s already made the subject matter of this session obsolete due to things he’s learned since GDC.

Since Seth’s talk went short, a bunch of the TAs and FX people hung around to form a plan of attack for the evening. Since there were no public parties to attend, and most people leave right after the conference ends, we organized a group dinner followed by some drinks and conversation at a nearby bar. The dinner was initially set for Jillian’s, but there was a private party there and we wound up at Buca De Beppo’s. We had to split off into 2 tables, which was initially a shame because I didn’t get to mingle with the people I’d wanted to, but it turned out really well because our table was full of students with a lot of questions. Thankfully Tranchida was with me to aid in any awkward silences. Service was pretty horrible, and it was packed, but the conversations were good. The industry vets at dinner decided to pay for the students’ dinners, and once that was all worked out, we headed out for drinks. A few of the people at dinner had flights to catch that night, so we bid them farewell and looked around for a bar. A bar was found, and a good time was had all around. The group was a mix of students and professionals, so there was a lot of talking about how to get into the industry, as well as experiences while in the industry, along with more social topics overall. It was a good time.

I guess it should be worth noting that ‘The Mittani’ (of EVE fame) was there, unbeknownst to me. I wound up talking to him about fitness-related stuff a lot, and didn’t know it was him till the night ended. If I were an EVE player, I hear I would have wet my pants. A few of us got his ‘business card’- I will sell it to the highest bidder. He had quite the ego, but was far from a jackass about it, which was refreshing.

Sunday, the Jetlag Day From Hell™

Travel back to cornlandia.

Final Thoughts

Overall, this was a great trip. I do hope I can continue the technical animation roundtable and attend GDC every year. I learned a lot from the talks I attended, but I learned more from talking with other people in the halls, out at dinner, at the hotel, and at parties.

The volume of animation-related talks can be directly attributed to Naughty Dog, but it’s hard to argue with their success. I don’t think that their way of doing things will work at every studio, simply because not all studios are structured the way they are- no producers, no PMs, and a great respect across disciplines that leads to trust and faith in leadership.

As an animator, my desire here has always been to create the best possible animation for the games we create. We’ve shipped plenty of good and bad animation with our games. We have extremely talented animators here who can produce high-quality motions. But, unless there is a driving force behind how our motions work in game, how they are meant to look and feel, and ultimately, how they fit into our game worlds, we’ll never be considered as successful as Uncharted or God of War. That vision needs to be championed by the same people who champion the overall vision of our games- the designers.

The common theme among all the animation talks, and among the animators I talked to at GDC who have been successful with animation in their games, was that design drove animation, but with mutual respect and collaboration with the animators. If animation had an idea, they collaborated with design, but if it didn’t fit into the vision of the game, it wasn’t worked on. There was no kicking and screaming until animation got its way, and design didn’t give in just because it was easier not to deal with it. At the same time, if design had an idea, but animation had concerns over how it would work, look, or feel, design would listen to animation and come to a solution.

I don’t think the “design rules all with an iron fist” approach is right, but I do think that design needs to have the final say and needs to be empowered to do so via strong project and studio leadership. How that final say is reached, however, needs to be through mutual respect, collaboration, and honesty between all the disciplines. I liken it to erecting a building: Design is the Architect, Programming is Engineering, and Art is Construction. The Architect comes up with the grand vision of the building and its surrounding space. Engineering works with the Architect and Construction to make changes based on what is possible and what isn’t, concluding where to work within current limitations and where to innovate. Construction works with Engineering and the Architect to ensure the materials being used will not only keep the building standing based on the innovations desired, but also provide the overall look and feel that the Architect was going for. Is this idealized? Yes, but if we don’t strive for ideal, we’re going to have a higher probability of mediocrity.

We need our projects to have someone in place that has the whole vision in mind, and can properly communicate that vision while encouraging collaboration among the team in order to succeed in bringing that vision to life.

4 comments:

  1. Great writeup, I think your last comment is one of the most important.

    "We need our projects to have someone in place that has the whole vision in mind, and can properly communicate that vision while encouraging collaboration among the team in order to succeed in bringing that vision to life."

    Animation and Design and Programming need to be collaborative instead of combative to produce great gameplay and games that people want to play.

  2. Awesome notes, thanks for posting!

  3. Thanks guys! Hopefully I can get this year's writeup posted soon.
