Kelly Port began his career in 1995 at Digital Domain on APOLLO 13. He has worked on many of the studio's projects, such as TITANIC, RED PLANET and THE LORD OF THE RINGS: THE FELLOWSHIP OF THE RING. He went on to oversee the effects of movies like WE OWN THE NIGHT, STAR TREK and THE A-TEAM.

Can you tell us at what point Digital Domain entered the project?
We certainly had a presence on the set supporting Wes Sewell, the overall visual effects supervisor, especially in regards to gathering on-set reference photography and camera data. We set up a texture photography booth and shot all the Frost Giants in costume – as well as the villagers for the prologue sequence – and we covered a lot of the sets. These are all critical elements that come into play later in post-production, when we are developing assets and need the ability to match camera moves. Scott Edelstein was critically involved on set, and he eventually went on to lead our digital environments team.

How did you design the environment and the character?
Many of the original concepts and designs came from Marvel’s art department and production design team. Of course, we had to then take these concepts and run with them, and develop them in 3D and with much higher detail. In terms of character work, we had Nick Lloyd and Miguel Ortega finalize the Frost Giant designs. When the Frost Beast was being designed, we presented dozens of concepts. Marvel combined some of those ideas with their own concepts, and ended up incorporating everything into a hybrid creature that became the final result. So, as with everything, it was a very collaborative approach.

What was the real size of the set?
The only set that was actually built was the lower area of Laufey’s palace, which is where they have a conversation with Laufey prior to the battle itself. It’s open on one end, like a horseshoe. So anything looking out into that open area was a digital environment, and anything above the set also required extension. We also added more refined architectural elements to the set here and there as needed.

Were there some full CG shots?
In the Jotunheim sequence, we had almost 90 shots that were all-CG. The stereo on those shots, of course, didn’t need to be ‘dimensionalized’ since we provided both “eyes” on all of those.

What were the lighting challenges in these sequences?
The biggest challenge for lighting lead Betsy Mueller was to create not only the mood and tone of the sequence, but also to create the scope of the world our heroes found themselves in. By using depth perspective, layers of light and shadow, and of course atmospheric snow and fog, we were able to address all of those challenges.
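That layered approach is essentially aerial perspective: the farther an element sits from camera, the more it is washed toward the atmosphere, which is what sells the scale of the environment. As a purely illustrative sketch (not Digital Domain's actual shaders or values), exponential depth fog captures the idea:

```python
import math

def depth_fog(element_rgb, distance, fog_rgb=(0.62, 0.68, 0.75), density=0.0015):
    """Blend an element toward the fog colour using exponential falloff.

    The farther the element sits from camera, the more it reads as
    background, which helps convey the scope of a huge environment.
    All values here are made-up illustrations, not production settings.
    """
    visibility = math.exp(-density * distance)  # 1.0 near camera, approaches 0.0 far away
    return tuple(c * visibility + f * (1.0 - visibility)
                 for c, f in zip(element_rgb, fog_rgb))

# A mid-ground structure 800 units away keeps much of its own colour;
# a distant peak at 4000 units almost dissolves into the atmosphere.
print(depth_fog((0.8, 0.7, 0.6), 800))
print(depth_fog((0.8, 0.7, 0.6), 4000))
```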

The battle includes huge amounts of shattering and breaking up of the ground and structures. Can you tell us your approach and tools and the challenges in producing these effects?
Ryo Sakaguchi supervised the fx animation on the Jotunheim sequence. The really big shots for the fx team were those that involved complex rigid body dynamics (RBD) simulations. All of the structures and the terrain had to be modeled according to some very precise specifications in order for our fracturing algorithms to work correctly. For example, the models couldn’t have any overlapping vertices. Once the models were fractured, we’d run the sim, and after adjusting the parameters over the course of many iterations, we’d get something that looked good. One of the biggest challenges, though, was getting this now broken-up geometry back into our lighting pipeline with proper UVs, textures and displacements. While the initial modeling and UV mapping were all done out of Maya, all of the fracturing and simulations were done out of Houdini with our proprietary tools, then back to Maya for lighting and rendering out of PRMan.
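The "no overlapping vertices" rule is the kind of constraint worth validating before geometry ever reaches the fracturing tools. The actual pipeline was proprietary Houdini tooling, so the following is only a hypothetical pre-flight check in plain Python, with made-up names and tolerances:

```python
def find_coincident_vertices(points, tolerance=1e-4):
    """Return groups of vertex indices that share (almost) the same position.

    Fracture tools typically assume clean input; coincident vertices are a
    common reason a shatter setup misbehaves. Note this bucketing approach is
    approximate: a pair straddling a bucket boundary can be missed.
    """
    buckets = {}
    for index, (x, y, z) in enumerate(points):
        # Quantise the position so nearby points land in the same bucket.
        key = (round(x / tolerance), round(y / tolerance), round(z / tolerance))
        buckets.setdefault(key, []).append(index)
    return [group for group in buckets.values() if len(group) > 1]

# Two of these four vertices sit on top of each other and would need a cleanup pass.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.00001, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(find_coincident_vertices(verts))
```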

What role did matte paintings play in the environments and how did you create them in 3D space?
Our lead matte painters and conceptual artists, like Matt Gilson and Minoru Sasaki, played a critical role in finessing the final look of many of the shots. They not only came up with amazing concepts, they took those ideas into the final picture by using techniques that utilized multiple projection cameras within Nuke and Maya.
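At its core, camera projection means reprojecting a painting from a chosen "projection camera" onto scene geometry so it holds up as the shot camera moves. A stripped-down pinhole-camera sketch of that mapping, not the actual Nuke/Maya node setup, might look like this:

```python
def project_to_uv(point, cam_pos, focal=35.0, aperture=36.0, aspect=36.0 / 24.0):
    """Project a world-space point through a pinhole camera looking down -Z.

    Returns (u, v) in 0..1, i.e. where on the matte painting this piece of
    geometry should sample its colour from. Camera values are illustrative.
    """
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    if z >= 0:
        return None  # behind the projection camera
    # Perspective divide, then map the film back to normalised coordinates.
    u = (focal * x / -z) / aperture + 0.5
    v = (focal * y / -z) / (aperture / aspect) + 0.5
    return u, v

# A point 50 units in front of the camera, slightly left of and above centre.
print(project_to_uv((-3.0, 2.0, -50.0), (0.0, 0.0, 0.0)))
```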

There are a lot of atmospheric effects like fog, snow and other particles. What were your techniques to create them?
All of our snow and atmospheric effects were done out of Houdini. We simulated a variety of turbulent wind conditions and densities and saved them out as different libraries. Ideally, we could load a particular library for a given shot and it would work out great; but often we’d have to adjust the simulation slightly. For any given shot, at a minimum we would generate a background, mid-ground and foreground layer of snow and fog. This proved to be enormously helpful to Stereo D in converting the 2D final composite to stereo 3D.
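Splitting the atmosphere into depth layers is conceptually just binning the simulated particles by their distance from camera and rendering each bin as its own element. A hypothetical illustration (thresholds and names are assumptions, not the actual Houdini setup):

```python
def split_into_depth_layers(particles, cam_pos, near=200.0, far=1000.0):
    """Sort snow/fog particles into foreground, mid-ground and background bins.

    Each bin is rendered as a separate element, which later gives the stereo
    conversion team clean pieces to place at different depths.
    """
    layers = {"foreground": [], "midground": [], "background": []}
    for p in particles:
        dist = sum((a - b) ** 2 for a, b in zip(p, cam_pos)) ** 0.5
        if dist < near:
            layers["foreground"].append(p)
        elif dist < far:
            layers["midground"].append(p)
        else:
            layers["background"].append(p)
    return layers

snow = [(0.0, 10.0, -50.0), (5.0, 40.0, -600.0), (-20.0, 80.0, -3000.0)]
print({name: len(pts) for name, pts in split_into_depth_layers(snow, (0.0, 0.0, 0.0)).items()})
```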

What software and pipeline did you use? Did you create specific tools for this project? I’m especially interested in the compositing tools, and any particular compositing challenges on the project.
In terms of what software we used on the project, it was primarily Maya, Houdini and Nuke. For developing assets, we also used ZBrush, Mudbox, Mari and Photoshop.

How did you bring the Frost Giants to life? Did you use motion capture?
Eric Petey supervised animation on the project, and yes, many of the Frost Giants were fully animated CG characters. We used motion capture extensively for the Frost Giants, and those motion clips were captured at Giant Studios in Los Angeles. Giant has a great virtual production setup where you could have two 6’ tall performers fighting each other in the mocap volume, for example, but then on the monitor you would see a 6’4” Thor fighting a 10’ tall Frost Giant composited and animated in real time. This was very useful when it came down to physical contact. For example, if Fandral stabbed a Frost Giant in the chest, we needed the motion capture performer to aim higher, like around the neck or head, because the CG character would be much taller than its human counterpart performing in the mocap volume. Ultimately the cleaned-up motion capture worked quite well for many of the shots, but in cases where the director wanted to change the animation and we didn’t have the proper clip, we would have to animate those by hand. And, of course, the Frost Beast was always animated by hand.
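To illustrate why contact points need that kind of adjustment: if a clip is retargeted with a simple uniform scale, every landmark on the performer moves up by the height ratio, while the human-scale attacker does not. A toy calculation with made-up numbers (not the studio's actual retargeting):

```python
def retargeted_height(height_on_performer, performer_height=6.0, character_height=10.0):
    """World height a body landmark ends up at once a mocap clip is retargeted
    to a taller CG character, assuming a simple uniform scale from the ground."""
    return height_on_performer * (character_height / performer_height)

# A stab delivered at a 6 ft performer's chest (about 4.3 ft) corresponds to
# roughly 7.2 ft on a 10 ft Frost Giant -- but the attacker stays human scale,
# so the contact no longer lines up unless the performer aims higher on set.
print(retargeted_height(4.3))
```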

Can you explain how you create the Frost Beast? What were the main challenges with him?
The Frost Beast was a true “monster mash” of various ideas. Creature designer Nick Lloyd came up with dozens of concepts that we presented to Marvel, and ultimately they came up with a final look utilizing various elements of our designs (as well as their own creative touches). Miguel Ortega built a digital model that was textured by Christopher Nichols using elephant skin as a major reference for its exterior look. The face resembles that of a turtle crossed with the rancor from Star Wars, something Marvel specifically referenced in their design notes. Obviously nothing like the Frost Beast exists in nature, so Eric Petey and his team animated this creature without the benefit of mocap – it was all done by hand to mimic the motion of a large cat, but of course it’s powerful like a rhinoceros and can burst through anything in its path. That created challenges for computer graphics supervisor Eric Fernandes and the fx animation team, which had to shatter ice and rock as the Frost Beast chases after our heroes. It was a real team effort to put such a menacing creature on screen for this film.

Did you enhance some camera moves?
Almost all the camera moves with a live-action component were match-moved from what the production camera was doing, although sometimes we would extend the move out higher or alter it to some degree, in order to fit the shot better. For the all-CG shots, they typically started out as previs from Third Floor or Digital Domain’s internal previs team. We would take that and incorporate it into our world scale and refine it if necessary.

Can you explain how the collaboration with the stereo team worked?
Working together with production and post-conversion company Stereo D, we devised four types of stereo shots. Type 1 would be where we didn’t have to do anything, which was equivalent to what would happen on all the non-visual effects shots. Type 2 was an all-CG shot, where we would render a left and right eye, resulting in a “true stereo” shot; we did about 90 of these. Type 3 was where we would provide Stereo D with any layers that would help make their job easier – mattes, z-depth information from the renders, snow and atmospheric layers, etc. Type 4 was where Stereo D would post-convert the plate and provide us with a camera, and then we would render out both eyes for “true stereo” CG elements.
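For the "true stereo" cases (Types 2 and 4), generating the second eye is conceptually just offsetting a duplicate camera along the hero camera's right vector by the interocular distance. A minimal sketch of that idea, with illustrative numbers rather than the show's actual stereo settings:

```python
def stereo_camera_pair(cam_pos, right_vector, interocular=0.065):
    """Derive left/right eye positions from a single 'hero' camera.

    Offsetting each eye by half the interocular distance along the camera's
    right vector is the simplest way to get a true-stereo render of CG
    elements; convergence is typically handled separately (for example by a
    horizontal image shift in comp).
    """
    half = interocular / 2.0
    left = tuple(p - half * r for p, r in zip(cam_pos, right_vector))
    right = tuple(p + half * r for p, r in zip(cam_pos, right_vector))
    return left, right

# A camera at eye height looking down -Z, with +X as its right vector.
print(stereo_camera_pair((0.0, 1.7, 0.0), (1.0, 0.0, 0.0)))
```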

A big thanks for your time.

// WANT TO KNOW MORE?

Digital Domain: Official website of Digital Domain.

© Vincent Frei – The Art of VFX – 2011
