Eric Fernandes has worked at many studios, including Industrial Light & Magic, Weta Digital, DreamWorks Animation and Digital Domain. He has worked on films such as STAR WARS EPISODE II: ATTACK OF THE CLONES, THE LORD OF THE RINGS: THE TWO TOWERS and THE RETURN OF THE KING, as well as KUNG FU PANDA and AVATAR.

What’s your background and what was your position on this show?
My name is Eric Fernandes and I was the CG Supervisor on THOR for Digital Domain, working out of the Vancouver office. Since the VFX Supervisors on THOR were primarily working out of Los Angeles, it was my job to oversee the pipeline and maintain the aesthetic and technical direction for the show from Vancouver. We would present our work to the VFX Supervisor at DD, Kelly Port, on a daily basis for feedback and approval, and then that would be shown to the overall VFX Supervisor, Wesley Sewell, usually once a week in a Cinesync session. Cinesync allows multiple people in different locations to view streaming video simultaneously, as well as draw or annotate on the image in real-time.

What’s your software and hardware pipeline?
We are almost entirely Linux based at DD, with HP workstations, primarily HP 8600s with Nvidia Quadro graphics cards running the CentOS operating system. For modeling and texturing we do occasionally use Windows machines for specific non-Linux applications like ZBrush or Photoshop. Our software pipeline consists of a wide variety of off-the-shelf tools, supplemented by in-house tools and custom development efforts. For THOR, modeling was done in ZBrush, Mudbox, and Maya. Texturing was done primarily using Mari from The Foundry. We also used Mudbox for displacements, as well as Photoshop for specific additional painting work. Animation and rigging were done almost entirely in Maya with custom DD plug-ins for the rigging system, along with tools written by the animators and TDs themselves. FX elements were generated 90% in Houdini, using DD’s proprietary rigid body dynamics system called « Drop », as well as its volumetric renderer, « Storm », for all the snow and breath effects. Some particle fx work and miscellaneous elements were generated straight out of Maya and rendered through mental ray as well. Our lighting pipeline on this film was entirely RenderMan, using the latest RenderMan Studio software, including Slim and RenderMan for Maya. Our render farm is managed through Qube in Vancouver, and as with all DD productions, Nuke was used for compositing.

What did Digital Domain do on this project?
DD was responsible for the Jotunheim sequence, set on the home world of the Frost Giants. We also did the work involving the Frost Giants in other locations in the film, such as in the prologue on Earth, or when the Frost Giants raid Asgard to try to steal the Casket of Winters. DD’s work consisted of approximately 350 shots, ranging from relatively simple set extensions all the way to 90 fully CG, stereo-rendered creature shots with hundreds of Frost Giants, hero creatures, and ridiculous amounts of rigid body dynamics. Internally we referred to the Jotunheim sequence as « AVATAR » meets « 2012 », based on the combination of full-CG creature work with huge destruction elements.

Can you tell us how many assets you have built for the movie? What was the size of the team and how did you organize such a big production?
We built the entire world of Jotunheim, from the wide-scale shots of the planet, to each building and rock that you see on screen that wasn’t practically shot on set. The actual set was pretty minimal, so I would say 80-90% of what you see in the environment on Jotunheim is entirely CG. We built about 30 to 40 buildings or structures, a dozen or so mountains and cliffs, piles of rubble and debris, and countless other small architectural elements like bridges and walkways. On the creature side we modeled 12 variants of the hero Frost Giants with different markings, scar patterns, and costumes, along with a Frost Beast creature, which Marvel asked us to help them design about halfway through production. We also created 7 digi-doubles, one for each of the main characters in the film who appeared in Jotunheim. Ultimately we reached a crew size of about 200 people on this film, with about 80% of the crew in Vancouver and 20% in Venice, California. DD has a long history of working on large projects, so there is a legacy of strong support systems and procedures for everything from staffing and hiring artists, to asset tracking and scheduling. A wide variety of tools are used to organize and manage a film like this, from commercial products such as Shotgun and Qube, to in-house tools like our asset management and dailies viewing systems.

Were there new challenges on this project? Did you adapt any asset or tool from your previous works?
We did not necessarily face new challenges on this film, but the large amount of work combined with the wide variety of skills required – photorealistic environments, creature animation, destruction fx, stereo 3D, etc. – proved to be the biggest challenge. We basically had to create hero creatures that could stand in for their live-action counterparts, and do some very convincing and challenging environment and fx development. From a hero creature standpoint, we had to create creatures that could hold up full screen, match their on-set versions, and populate scenes ranging from a single Frost Giant to hundreds of them. And the Frost Beast was an entirely new creature we created from scratch, spearheaded by Miguel Ortega and Chris Nichols, which popped up as a new character halfway through production on the film. For fx development we had to come up with so many different and unique looks and systems that it was a big challenge in and of itself. We had to do CG snow and breath in almost every shot, lightning for Thor’s hammer, as well as fx for the hammer spinning, huge cloud vortexes that the Aesir travel through, growing ice weapons, blood and gore effects, and huge dynamic simulations of collapsing buildings and terrain. In short, we had just about every fx challenge on this film except fire. So it was a daunting logistical, technical, and artistic challenge to do all of that work in a 9-month period and maintain the quality that both Marvel and DD were expecting. Our FX Supervisor Ryo Sakaguchi did a great job with a team of approximately 12 fx artists in Vancouver, along with some initial development and support from Venice-based Houdini developers.

The other challenge we had on this film was doing it from a new facility in Vancouver, which was set up in January 2010 to work on TRON: LEGACY. So one of our big tasks was integrating the pipeline from the main studio in Venice, training the artists, and building a cohesive group of people to work on this film from scratch. It was also the first film at DD Vancouver where we dealt with the front end of the pipeline, including asset development such as modeling, texturing, and look development. So it was a great learning experience for us to get up to speed quickly and experience a “trial by fire,” and thankfully we were given the freedom to recruit and hire a very talented group of experienced artists from around the world to make that happen. And of course we leaned heavily on the long history of development and software at DD, from its in-house Houdini tools to a rich and deep history with Nuke for compositing.

Did you use previs for your shots? Did you change shots from the storyboard to the final result?
Previs was done by « The Third Floor », who did a great job and worked directly with Marvel on that element. But we certainly did a lot of all-CG shots and helped Marvel conceptualize some of the more complex camera moves, such as the shot where the Frost Beast runs underneath the ice and the camera goes upside down. But yes, in general there were ongoing changes between the camera department, animation, and layout to refine the cameras and the staging of the shots throughout the film.

What’s the most challenging shot for you?
There were probably two or three shots that were the most challenging, for different reasons. Late in the film we were asked to do a shot for the prologue, which had the Frost Giant army lined up against Odin’s guards. The previs showed thousands of characters lining up for battle, and our pipeline up to that point was primarily hero-character driven, so we hadn’t implemented any crowd systems like Massive, and it was honestly too late to go that route. So our Pipeline Supervisor, Tim Belsher, and our lead shader writer, Mark Davies, came up with a custom, instance-driven delayed geometry approach for this one shot that they worked on for the final month of the film. Animation baked out predetermined movements of about 10-20 characters in simple cycles with cape/cloth sims. These were then combined into a delayed geometry RIB archive with the shaders and pre-rendered occlusion maps, which were called procedurally through RenderMan.
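To give a rough sense of what that kind of instance-driven, delayed-geometry setup can look like (a generic sketch, not DD’s actual tool; the archive names, bounds, and layout below are invented), a script can place one RenderMan DelayedReadArchive procedural per giant, so each pre-baked cycle archive is only read when its bounding box is actually hit at render time:

```python
# Minimal sketch of an instance-driven, delayed-geometry crowd layout.
# Assumes each animation cycle has already been baked to per-frame RIB
# archives; all paths, counts and bounds here are purely illustrative.
frame = 1023
num_cycles = 12                       # pre-baked Frost Giant cycles
bound = (-3, 3, 0, 8, -3, 3)          # conservative object-space bound (xmin xmax ymin ymax zmin zmax)

with open("crowd_line.%04d.rib" % frame, "w") as rib:
    for i in range(500):              # 500 giants lined up in ranks
        x = (i % 50) * 7.0            # column spacing
        z = -(i // 50) * 9.0          # row spacing
        archive = "frost_giant_cycle%02d.%04d.rib" % (i % num_cycles, frame)
        rib.write("AttributeBegin\n")
        rib.write("  Translate %.3f 0.0 %.3f\n" % (x, z))
        # RenderMan defers loading the archive until this bound is reached
        rib.write('  Procedural "DelayedReadArchive" ["%s"] [%s]\n'
                  % (archive, " ".join(str(b) for b in bound)))
        rib.write("AttributeEnd\n")
```

The pre-rendered occlusion maps mentioned above would simply be textures referenced by the shaders bound inside each archive.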

Also challenging were any of the shots where the Frost Beast is chasing the Asgardians out of the city and you see buildings and ground collapsing beneath them. This involved a huge amount of back and forth between layout, fx, anim, and lighting right down to the final days of us working on the film. Shots like these had complex rigid body dynamics simulations for the ground and buildings, volumetric smoke, particles, mocap animation, custom animation, and soft-body, rag-doll dynamics on characters flying through the air after interacting with the RBD simulation. Integrating all of the different data coming from multiple packages like Maya and Houdini, along with the myriad of internal tools used at DD like our volumetric renderer, Storm, and doing it for both the left and right eye in stereo 3D, was quite a challenge, to say the least.

Can you explain to us how you designed and created Jotunheim?
Jotunheim went through a number of design changes along the way, and ultimately we designed the planet with large chunks taken out of it to imply decay, along with giant icy canyons and crevasses. Some of the shots we spent a long time trying to get right were the establishing shots at the landing area where the Asgardians first arrive, and where they flee back to when chased by the Frost Beast. Think of the Grand Canyon times 100 for size.

And as we worked on that, the primary ways to express the distance and scale of a canyon are the horizon line and whatever fog/atmosphere you add in between the viewer and the farthest object. It turned out to be very difficult for us to get the sense of scale that Marvel was looking for with this super-sized canyon. If we added a lot of fog or atmosphere it looked like clouds or a low-lying fog layer, because over that large a distance it accumulates to full opacity very quickly. You can’t visually discern the difference between clouds that are 100 miles away or 10,000 miles away if they stack up horizontally on each other. And no matter how far we pushed back the far wall of the canyon, there is still a vanishing point on the screen that doesn’t change a whole lot in proportion to the distance you’re moving it in world space. Again, it’s really hard to represent the difference between a cliff wall that is 100 miles away and one that is 10,000 miles away if you don’t have something in between to establish distance and scale cues. After a couple of months of iterations we ended up with something everyone was relatively happy with, although it probably works better in the 3D space than anything else we did on the film.
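To put rough numbers on why the fog stops giving distance cues: with simple exponential (Beer-Lambert) attenuation the transmittance is exp(-sigma * d), and once the optical depth passes a few units it is effectively zero, so a wall at 100 miles and one at 10,000 miles read the same. A quick check, using an arbitrary illustrative extinction value rather than any production number:

```python
import math

# Homogeneous fog with an arbitrary extinction coefficient (per mile);
# purely illustrative, not a value used on the show.
sigma = 0.05
for distance_miles in (10, 100, 10000):
    transmittance = math.exp(-sigma * distance_miles)
    print("%6d miles -> transmittance %.4f" % (distance_miles, transmittance))
# 10 miles    -> 0.6065  (cliff still clearly readable through the haze)
# 100 miles   -> 0.0067  (already almost fully occluded)
# 10000 miles -> 0.0000  (visually identical to 100 miles away)
```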

As for what Jotunheim is: at the time that Thor arrives early in the film, the planet is a decayed, crumbling world. Buildings have aged or toppled over, ice shelves have moved around, and the world is dying and inhospitable. A lot of this was driven initially by what they shot on set and what the art department conceptualized, which were these crystalline, snowflake patterns and hexagonal tubes made from granite and ice. The buildings are mostly vertical, simplistic construction designs meant to imply that the Frost Giants would have created them with the materials available to them organically. Marvel asked us to get away from buildings with a lot of architectural details or ornate construction motifs. At one time we had concepts for statues, busts, and very Gothic-looking, burned-out structures, which we eventually moved away from. So as we expanded the world and built it out beyond the small area of the set, we tried to keep in mind that these enormous buildings and palaces were made of a similar combination of granite and ice. Even though the buildings are large, they’re all composed of the same, fairly basic building blocks and materials. At the same time we tried very hard to avoid anything resembling the « Fortress of Solitude » look, since that’s a completely different superhero altogether. So we avoided diagonally crossing columns, or anything that implied a cleaner, very crystalline look. We went with grungy, dirty mixtures of granite and ice.

What were your references for Jotunheim?
As with any film we looked at a ton of reference, from Mayan/Inca pyramid designs to Gothic architecture reminiscent of bombed out churches and other structures in World War II. We looked at anything that evoked primitive yet powerful societies, as well as reference of cities that have decayed over time. Additionally we had some very talented concept artists on the film, and our matte painters David Woodland and Mat Gilson did a lot of concept art to establish the look of the world along with Claas Henke. It was a very collaborative process with director Kenneth Branagh and all of the folks at Marvel, which was a lot of fun for us. They gave us a lot of leeway to pitch ideas and concepts to them and really treated us as a partner in helping to design the world and figure out what worked and what didn’t in the context of the story.

How did you create the right look for the ice effect?
I would say the big difference in this film with the ice look is that Jotunheim is a decayed, crumbling world, and the Jotuns built their structures out of a combination of what you might call granite and ice. So the good part for us is that very few of the typical real-world properties of ice are evident, things like refraction or reflection, which are expensive to compute. Believe it or not, we did an entire film on an ice planet without a single reflection, which saved us a great deal of time and look development, as we didn’t need to rely on ray-tracing or faking in reflections. We achieved the look of the ice and ground with a combination of fairly simple diffuse shaders, good specular maps, and subsurface scattering. It was really driven by the look needed in the shots and the story Marvel wanted to tell, rather than any real-world simulation of ice properties. We also benefited from the fact that Marvel was trying to avoid comparisons to another superhero property that has a very ice-like, crystalline look.

Can you explain in detail the collapse effects?
The collapsing effects were some of the most difficult shots we did on THOR. They were all done in Houdini, using a number of proprietary rigid body dynamics tools that DD has written over the years, like « Drop ». As you can probably tell from films like THE DAY AFTER TOMORROW and 2012, DD has always had a very robust in-house set of tools for this type of work. The workflow itself is fairly straightforward. Modelers create a surface, whether it’s a building or the ground plane, and make sure not to build it with any overlapping vertices or intersecting geometry. This is vital for the RBD simulation to work properly. The fx department then pre-scores the model into chunks where they would like it to break. After that they run the simulation through the RBD system and it breaks the geometry, creates any needed interior faces, and spits out a per-frame geometry sequence in the bgeo format. This is then brought into lighting, either as live geometry or through a proprietary DD tool called geoTor, which is a bgeo-to-RenderMan delayed geometry workflow. Once that’s rendered out, it’s up to comp to integrate it with the actors, add camera shake, as well as any additional elements which help integrate the CG into the live-action plate, like dust or lens flares.
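For readers unfamiliar with that hand-off, the "write a per-frame bgeo sequence" step can be reduced to a few lines of Houdini Python like the sketch below (the node path and output location are placeholders, and geoTor itself is proprietary, so this only stands in for the generic export step):

```python
import hou  # Houdini's Python module

# Bare-bones per-frame bgeo export of a simulated RBD result.
# The node path and output pattern are placeholders, not a real show setup.
sim_out = hou.node("/obj/building_collapse/OUT_rbd_chunks")
out_pattern = "/tmp/jtn_collapse/chunks.%04d.bgeo"
start, end = 1001, 1120

for frame in range(start, end + 1):
    hou.setFrame(frame)                               # cook the sim up to this frame
    sim_out.geometry().saveToFile(out_pattern % frame)
```

Lighting would then pick up each frame’s file either as live geometry or wrapped in a delayed-read procedural, so only the chunks that are actually rendered get loaded.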

How did you make the frozen effects on Heimdall?
Actually DD did not do the frozen effects on Heimdall, so I can’t answer this. I believe either BUF or Luma did the Heimdall shots. We only added the Frost Giants to those shots.

Can you tell us about the features of the Frost Giants and the Frost Beast?
Eric Petey, animation supervisor // The Frost Giants are large humanoids, standing approximately 3 to 3.5 meters in height. They are very strong – they have a lot of mass but unlike large humans on Earth they are not slow as a result. Blue, battle-scarred skin and piercing red eyes complete the look of these fierce warriors. The Frost Beast measures around 8 meters at the shoulder – enough to dwarf even the Frost Giants. It’s a very solid monster, built to bash and slash, with a battering ram-like head complete with large blade-shaped tusks and gnarled teeth. Every part of it is dangerous: powerful muscular arms and paws are armed with huge claws, and a very long tail carries sharp spikes on its tip like a medieval mace.

How did you animate the Frost Giants and the Beast?
Eric Petey, animation supervisor // For the Frost Giants we used a combination of motion capture and key-frame animation, and the motion capture was often modified with key-frame action on top. There were some live-action Frost Giants in the sequence as well; a number of times it’s a mix even within the same shot. CG Frost Giants were used from deep background right up to close foreground. There was no use of Massive or other crowd simulators. The Frost Beast is purely key-frame animation.

How did you create their skeletons and their rigs?
Eric Petey, animation supervisor // The Frost Giant skeletons are quite complex, using the same physically accurate skeleton system developed by Walt Hyneman and his rigging team for another one of our recent films. The animation puppets were complex but not unwieldy – it was possible to reduce the level of control for blocking, and ramp it up for more detailed work. The deformation rig used a combination of pre-simulated muscle and cloth deformations, with additional dynamic muscle jiggle tools. It was complex but allowed us to save a lot of time on the back end by not having to run dynamics passes on every shot.

Can you tell us how you made them interact with the real actors?
Eric Petey, animation supervisor // Sometimes actions that involved interaction with actors were motion captured, using the shot footage as reference. Of course some modification is necessary to make contact points and eye lines work. But often it’s just a matter of the animators and supervisory staff keeping a close watch on all the subtle things that make interaction with performers believable. In that way it was not unlike most films that combine live action and CG characters.

A big thanks for your time.

// WANT TO KNOW MORE?

Digital Domain: Official website of Digital Domain.

© Vincent Frei – The Art of VFX – 2011
