How does it feel to be back in the Tony Stark universe?
This is our third visit to the Stark universe. It’s always exciting to help expand upon the creation of Tony Stark’s world.
Can you tell us more about your work with Production VFX Supervisor Christopher Townsend?
We had worked with Chris on CAPTAIN AMERICA: THE FIRST AVENGER and enjoyed that experience, so it was good to be working with him again. Chris is very collaborative and encouraging and it was good to have that support with what were tricky sequences.
What have you done on this show?
Fuel designed and created the ‘War Room’ sequence where Tony Stark holographically recreates the Chinese Theatre bomb site to search for clues about the bombing, and learn what he can about The Mandarin and his terrorist activities. We also created the brain hologram that Killian uses to try and woo Pepper into investing in the Extremis technology.
What was your approach with these various holograms?
While the idea and CG technique of projecting light was consistent across both sequences, each hologram called for its own approach in many ways. Stark’s ‘War Room’ holograms live within the world of Jarvis, so there was a visual language already established by the previous films. That said, the representation of the pre- and post-destruction Chinese Theatre did need to take on a new, previously undefined look. The hardest thing to find was how the transition between the two looked and worked.
Killian’s brain hologram was something new in the Marvel world. We started with real 3D mapping data we were given from university studies that defined the fibre pathways of the human brain. We used these curves to create moving light within the brain. The outer surface or brain ‘shell’ was another model with different shaders. Depth of field played a large part in the look of this sequence, which we set in comp using deep image renders.
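The moving light described above can be thought of as a pulse travelling along each fibre-pathway curve at constant speed. As a minimal sketch, assuming the fibre curves are stored as 3D polylines (the function names and data here are illustrative, not Fuel’s actual pipeline):

```python
import math

def arc_lengths(pts):
    """Cumulative arc length at each vertex of a polyline."""
    lengths = [0.0]
    for a, b in zip(pts, pts[1:]):
        lengths.append(lengths[-1] + math.dist(a, b))
    return lengths

def point_at(pts, t):
    """Position at parametric fraction t in [0, 1] along the polyline,
    measured by arc length, so a pulse animated over t moves at
    constant speed regardless of vertex spacing."""
    lengths = arc_lengths(pts)
    target = t * lengths[-1]
    for i in range(1, len(lengths)):
        if lengths[i] >= target:
            seg = lengths[i] - lengths[i - 1]
            f = 0.0 if seg == 0 else (target - lengths[i - 1]) / seg
            a, b = pts[i - 1], pts[i]
            return tuple(a[k] + f * (b[k] - a[k]) for k in range(3))
    return pts[-1]

# A pulse travelling along a simple right-angle fibre path
fibre = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(point_at(fibre, 0.5))  # midpoint of a path of total length 2.0
```

Animating `t` from 0 to 1 per curve, with staggered offsets between curves, gives the kind of flowing light the interview describes.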
Jarvis recreates a crime scene as a hologram. Can you tell us in detail about its creation?
We received Lidar scans of the post-destruction set from the production, and a 3D model of the Chinese Theatre from another vendor (who had built it for the explosion sequence). We then turned the pre-destruction Chinese Theatre into geometry capable of being rendered as lines, which was combined with an ambient-occlusion-type render to get the final look. The post-destruction set was converted into a series of 3D points based on the data captured by the Lidar. Each of these points was rendered as a tiny cube, the exact size of which was defined by its distance to camera. They were all rendered as deep images out of RenderMan, which allowed us to do all of the interaction between Stark and the hologram within Nuke.
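The point-to-cube step above can be sketched very simply. This is a hedged illustration, not Fuel’s actual code: it assumes the sizing rule was to scale each cube with distance so that points keep roughly constant screen-space coverage (the `base_size` and `reference_dist` values are invented for the example):

```python
import math

def cube_size(point, camera, base_size=0.02, reference_dist=10.0):
    """Edge length for a point's cube, scaled linearly with distance
    to camera so farther points get larger cubes (roughly constant
    apparent size on screen). Values here are illustrative."""
    dist = math.dist(point, camera)
    return base_size * (dist / reference_dist)

def points_to_cubes(points, camera):
    """Turn each Lidar point into a (center, edge_length) pair,
    ready to be instanced as cube geometry at render time."""
    return [(p, cube_size(p, camera)) for p in points]

# Three points at increasing distance from a camera at the origin
camera = (0.0, 0.0, 0.0)
points = [(0.0, 0.0, 5.0), (0.0, 0.0, 10.0), (0.0, 0.0, 20.0)]
for center, size in points_to_cubes(points, camera):
    print(center, round(size, 3))
```

Rendering those cubes with deep output (per-sample depth retained in the image) is what lets the hologram be merged and relit against the live-action plate downstream in Nuke.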
How did you manage the matchmoving challenge?
Matchmoving was particularly challenging in this instance. We had to track the camera and rotomate the actors in some cases – especially when Stark was moving his hands to control Jarvis. We built all of Stark’s garage in 3D because we needed our holograms to intersect and collide with those objects (there is a LOT of CG lighting interaction with the live-action set). This model had to be exact because we knew it would all have to track – and stick – across all of the various angles. The fact that we were re-comping the shots in stereo meant there was no room for error. The tracking of the camera and set was all done in 3DE, which was helped by having a Lidar scan of the garage. The rotomation was started with a one-point track from 3DE and finished by hand, frame by frame.
Which software did you use for their creation and animation?
We used Photoshop and After Effects for all of the design elements, Maya for most of the 3D elements and Houdini for a handful. The shots were tracked in 3DE and composited in Nuke.
What was the biggest challenge on this project and how did you achieve it?
As with almost all VFX, our job was to help tell a story and make stuff look cool at the same time. The War Room was especially challenging, though, as it was a key scene for progressing the plot, involved a full holographic environment, and had few real-world references to base the visual effects on.
What we tried to do was narrow in on a particular look and then explore it further in a couple of indicative angles. However, what looked great in one instance might not convey the right story point in another, so we had to work very hard at getting the balance right across all the shots – harder than you might expect. We would often present something that looked great in one shot, only to find that two shots later, from a very different angle, it was confusing or even distracting. When a look wasn’t right for the whole sequence, we would return to a couple of shots to workshop solutions, then propagate those out across the sequence for further assessment.
How many shots have you done and what was the size of your team?
We worked on 70 shots and had about 30 people on the team over 5 months.
What is your next project?
Fuel VFX is currently working on THE HUNGER GAMES: CATCHING FIRE.
A big thanks for your time.
// WANT TO KNOW MORE?
– Fuel VFX: Official website of Fuel VFX.
© Vincent Frei – The Art of VFX – 2013