Q: How big is the audio production team in total, and how do they keep up with the two-week release cadence with all the voice-overs in four different languages?
James Ackley: The team has grown over the nearly six years I’ve been here. When I started in 2008 there was no internal audio team; it was all outsourced. For the first 2-3 years it was mainly just Drew Cady and me doing the sounds, Jeremy Soule writing the music, and Jim Boer programming the audio engine and tools, with a couple of contract workers here and there. As we got closer to ship the team grew and grew, and now, with our two-week cadence, we have quite a lot of people. In no particular order, we have Aaron McLeran working on the audio engine and some tools, Robert Gay also working on some tools and sound design/implementation, Cody Crichton as a Sound Designer/Implementer, Brecken Hipp on VO processing and import/export, Jerry Schroeder as a Sound Designer, Drew Cady as a Sound Designer, Keenan Sieg as our Audio QA, Maclaine Diemer on music, Amy Liu in audio production, and myself as the Audio Director. We all do a lot, and even though some may have specialties, we share the work and most of us have a few other roles as well.
Q: Who currently composes the music for GW2?
Maclaine Diemer: My name is Maclaine Diemer, and I am the in-house composer at ArenaNet. After Guild Wars 2 shipped in 2012, we contracted a composer named Stan LePard to write some music for our first few Living World releases (Halloween, Lost Shores, and Wintersday). I contributed some music to those releases as well. Since then, I have handled the majority of new music in support of the Living World releases, and I’ve had some additional help from one of our game designers, Leif Chappelle. Leif is a composer in his own right, and has helped me with music for Wintersday, Super Adventure Box, Bazaar of the Four Winds, and some future Living World content. We have lots of big music plans for 2014, so thanks for listening!
Q: Where do you look to hire voice actors?
Bobby Stein: We work almost exclusively with L.A. talent, though we have a few actors from other parts of the U.S. in our cast. Much of the casting organization is handled via Blindlight, our VO partner, and Eve on the Editing Team.
Q: How do you record all your audio files? Do you do this locally or do you travel a lot to find the perfect piece?
James: We still go off-site for a lot of our sounds, and the location usually depends on what we are recording. Almost all of us take recorders with us any time we go on vacation, or just leave one in the car; you never know when you’ll get something cool. We are also fortunate enough to have a nice quiet Foley room at ArenaNet that we use a lot as well.
Q: I loved some of the videos from way back that showed how you recorded certain sounds. The swinging fireball comes to mind. Any chance you’ll do more of those in the future or give us a tour of a VO session?
James: Yeah, I loved doing those videos, and yes, we plan on doing more. It’s just that time sure is hard to find lately. We did a VO video a while ago, before we shipped. I’m not sure if we’ll do another one of those, but you never know.
Q: If you were introducing a new monster and wanted it to sound very new and very exciting, would you take archived assets and remix existing sound pieces, or would you record something new that fits your expectations?
James: There have been times in the past where we’ve looked at old assets and maybe used those as guides, but I’d like to think each new creature needs a new sound set. Jerry has been hitting a lot of our big bosses lately, and I think he’s done an amazing job working within and expanding the technical limitations while creating new sounds. The Marionette is a perfect example. She was definitely a full new set of sounds, and even a new way to play those sounds. Sometimes we record animals, people, and machines with the intent to use them on a creature. Sometimes they work, and sometimes they don’t. It’s hard to know until it’s in the game. I may be jinxing myself here, but I’ve been dreaming of recording a “Jake Brake” from a big-rig truck and using that as a creature sound for a long time. In my mind, they could sound like a huge creature with very little processing, but I won’t know until I get out and record one and see if it works. Maybe that will be our next video…
Q: Living World has a uniquely high cadence for the video game industry. How does this work for sound production? How do you manage to make the update release pace work with the character voice-over (since the actors giving their voices to the characters may not be available at all times)?
Bobby: We sometimes write and record ahead of schedule, though sometimes those scenes don’t make it into the game where/when we originally intended. So far we’ve been pretty lucky regarding actor availability.
James: The two-week cadence has been a learning process for us all. We definitely couldn’t do it without people like Amy Liu making sure all the disciplines that affect audio (which is all of them) give us enough time to do our job. Communication is key. Talking with the teams early and continuing to track work while development is going on is what seems to be working for us. If we can meet early to get an idea of what’s coming, and keep up to date as work is being completed, then we have a better chance of getting our job done.
Q: What would you say is the most complicated sound effect/track in the game, and why?
Robert Gay: Amazingly, one of our most complex tracks in the game is the one most would consider sonically our simplest: our ambience. But it’s just a few birds and wind, right? Well, what kind of wind? Cavernous wind, city wind, interior wind, forest wind, alpine wind… and birds only occupy a subset of regions in Tyria, not to mention the variety therein! Jokes aside, considering the vast variety of source material and the sounds themselves, ambience can be vastly underestimated when designing an audible world.
It has to match the physical world it embodies, feeling mechanical where fitting and organically semi-random where nature takes its course. Tyria has over 8,500 ambient audio zones and sources, all intermingling and layered in different ways. The world utilizes a plethora of game-state information to breathe life into this crucial segment of sound design, including time of day, combatant activity, event state, boss health, and player location, to name just a few. From the creaks underfoot when mastering that jumping puzzle to the screams and moans of distant ghosts while exploring ancient catacombs, inhabiting a living world only works if the bed of sound it provides is believable. And if you aren’t actively listening to it by default, that’s usually a good sign we’re doing our job right!
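To make Robert’s description a bit more concrete, here is a minimal sketch of state-driven ambience. It assumes a toy model: the layer names, state fields, and thresholds are invented for illustration, and this is not ArenaNet’s actual engine code.

```python
# Toy model of game state driving ambient layer volumes.
# Layer names, state fields, and thresholds are all hypothetical.

AMBIENT_LAYERS = {
    "wind_alpine":   lambda s: 1.0 if s["region"] == "alpine" else 0.0,
    "wind_interior": lambda s: 1.0 if s["indoors"] else 0.0,
    "birds_forest":  lambda s: 0.8 if s["region"] == "forest" and 5 <= s["hour"] < 20 else 0.0,
    "ghost_moans":   lambda s: 0.6 if s["region"] == "catacombs" else 0.0,
    "battle_rumble": lambda s: min(1.0, s["nearby_combatants"] / 10.0),
}

def mix_ambience(state):
    """Return a target volume (0.0-1.0) for each ambient layer, given game state."""
    return {name: rule(state) for name, rule in AMBIENT_LAYERS.items()}

# Example: early evening in a forest with a small skirmish nearby.
state = {"region": "forest", "indoors": False, "hour": 19, "nearby_combatants": 3}
print(mix_ambience(state))
```

In a real engine these target volumes would be cross-faded over time rather than set instantly, and the state would include things like event progress and boss health, as Robert mentions.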
Q: How do you go about deciding how many variations of a sound to have for an event or action?
James: We try to think about how much the player will hear the sounds and whether they are something that could get fatiguing over long hours of play. Some sounds may be able to have only a few variations and be fine, but some are very unique sounding or get heard a lot, and those are the ones we work on more to give variation to, either through more sound files, random runtime DSP, or both.
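As an illustration of the two approaches James mentions, here is a small sketch of runtime variation, assuming a hypothetical sound container: the file names and spread values are made up, and a real audio engine would apply the pitch and volume offsets on its own voices.

```python
import random

class VariedSound:
    """Plays one of several recorded variations, avoiding immediate repeats,
    and applies a small random pitch/volume offset (the runtime-DSP part)."""

    def __init__(self, files, pitch_spread=0.05, volume_spread=0.1):
        self.files = files                  # e.g. ["axe_hit_01.wav", "axe_hit_02.wav", ...]
        self.pitch_spread = pitch_spread    # +/- 5% playback rate by default
        self.volume_spread = volume_spread  # +/- 10% gain by default
        self.last_index = None

    def next(self):
        # Pick any variation except the one we just played (if there is more than one).
        choices = [i for i in range(len(self.files)) if i != self.last_index]
        index = random.choice(choices or list(range(len(self.files))))
        self.last_index = index
        pitch = 1.0 + random.uniform(-self.pitch_spread, self.pitch_spread)
        volume = 1.0 + random.uniform(-self.volume_spread, self.volume_spread)
        return self.files[index], pitch, volume

hit = VariedSound(["axe_hit_01.wav", "axe_hit_02.wav", "axe_hit_03.wav"])
for _ in range(5):
    print(hit.next())
```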
Q: I have noticed some pretty special and cool sound effects during the Living Story. How does this work? For example, the sound of a Twisted Nightmare. Are the sound effects recorded after the creature and animations are done, or is recording actually part of the development process?
James: This was a perfect example of us getting a heads-up before the creature was ready for sounds. We met with the designers early and talked about what the plan was and what the lore of the creature would be. Then, as the model, FX, and animation were being created, we were able to see them and adapt our sounds. We set up a few Foley sessions, worked with the animation and FX teams to build the implementation strategies, and tried our best to line up our workflow with theirs.
Q: Have you ever considered expanding the dialogue for player characters, or adding different voice options to be purchased for further customization (giving your character a totally new voice, both for yourself and for other players to hear)?
Bobby: We’re getting a programmer to look at PC voice outside cinematics. If that works, we’ll also update/expand current PC chatter.
James: We have a lot of dreams and ideas for this and I really hope we get time to do half of what we want.
Q: How much post-processing goes into the charr voices? It’s clear that many of them are pitch-shifted and have some snarl sound effects layered in – how much work and how many effects go into this process?
James: The VO process is a lot more complex than you might think. This one could get really long, so I’ll try to keep it short enough that you still want to read it. We started out using the standard batch processing in Sound Forge and even built some fairly complex plug-in chains, but we were never really happy. Fortunately, we are lucky enough to have Drew Cady, and he took it upon himself to create a custom Reaktor plug-in so we could use Nuendo for all our VO processing. Nuendo had just added the ability to batch export from markers, so we wanted to take advantage of that. So, with the help of Rob, we now have this process for all the processed voice in the game, which is a lot. Here’s an abbreviated list of what happens.
- We get the files back from the studio already named and cropped.
- We use an internal tool to import the files, which also creates a concatenated file for each set of sounds that needs processing. For example, the charr get huge concatenated file(s) exported at the same time they’re imported, and then we import those into Nuendo.
- Along with the concatenated wav files, we also get and import a CSV file that holds all the file names, actors, file lengths, etc.
- The custom Reaktor plug-in then uses the wave file(s) as a guide to modify any number of other plug-ins for pitch, volume, reverb, etc. Those values get sent out as MIDI controls. We can, for example, have pitch that modulates along with or against the pitch or volume of the stock sound, or whatever.
We can also set the volume on tracks to come on or off based on any of these values as well. An easy example of this is the slurps at the end of some of the charr lines. Sometimes that was the actor, and sometimes it’s us. We have a track full of snarly-slurpy-slobber sounds that get turned up for a moment if the last part of the stock sound is loud enough and quick enough to make room for our SFX. We can also do things like set the pitch to climb or drop based on volume, so if someone yells maybe the pitch goes up, or if they whisper we bring in some buzzy robot sounds, like on the Golem.

Basically, we track the live performance to modify a lot of the post-processing, so it’s not just a blanket batch process that always sounds the same. This gives us a lot of creative freedom, but it comes with a bit of a time price. Since the MIDI Yoke we use to route internal MIDI doesn’t run on export, we have to record a real-time pass of all the VO we are going to export, then use the CSV to export the files. Most of the time this is fine, it’s just a double real-time pass, but for characters like the charr that have a lot of lines, it can take a while. Let’s just say I’ve run more than my share of 14-hour recording sessions.
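For a rough feel of what “tracking the live performance” could look like, here is a toy sketch that derives control values from the amplitude envelope of a recorded line. It is only an approximation of the idea, not Drew’s Reaktor plug-in or the Nuendo/MIDI setup James describes; the window size, mapping, and thresholds are invented.

```python
import numpy as np
from scipy.io import wavfile  # any WAV reader would work here

def control_curves(path, window=1024):
    """Derive per-window control values from a recorded line: a loudness-driven
    pitch offset, and a flag for whether to bring in an extra 'snarl' layer
    when the tail of the line is loud."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                         # fold stereo down to mono
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float64)
    audio /= np.max(np.abs(audio)) or 1.0      # normalize to -1..1

    # RMS envelope, one value per analysis window
    frames = len(audio) // window
    env = np.sqrt(np.mean(
        audio[:frames * window].reshape(frames, window) ** 2, axis=1))

    pitch_offset = env * 2.0                   # louder -> pitch climbs (invented mapping, in semitones)
    tail_windows = max(1, int(0.25 * rate / window))
    snarl_layer_on = env[-tail_windows:].mean() > 0.3  # loud tail -> turn up the slurp/snarl track
    return pitch_offset, snarl_layer_on
```

In the pipeline James describes, control values like these go out as MIDI to modulate the plug-in chain in real time, which is why the export has to happen as a second real-time pass.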