HEREAFTER: Bryan Grill – VFX Supervisor – Scanline VFX

Bryan Grill has worked in visual effects for over 20 years. After 14 years at Digital Domain, where he started as a Flame operator and finished as a VFX supervisor, he oversaw projects such as THE GOLDEN COMPASS, PIRATES OF THE CARIBBEAN 3 and G.I. JOE. He joined the teams of Scanline VFX in 2010.

Scanline VFX received the Visual Effects Society Award for Outstanding Supporting Visual Effects in a Feature Motion Picture for this movie!

What is your background?
I started in visual effects in 1986 as a nighttime receptionist at The Post Group in Hollywood. No formal film school, just on-the-job training on my own time, learning to be a tape operator and then an editor. I left The Post Group after 6 years to go work at Digital Magic, where I worked on STAR TREK GENERATIONS and DEEP SPACE NINE. I was then fortunate enough to gain employment at Digital Domain, where I worked on my first film, APOLLO 13, as a Flame compositor, working my way up to visual effects supervisor by the time I left. After a very enjoyable 14-year career at DD, I found my new home at the LA offices of Scanline.

How was your collaboration with Clint Eastwood?
Michael Owens, a long-time collaborator with Clint on over 7 movies, was there to guide us through the creative process between him and Clint. Our team at Scanline worked very closely with Michael on achieving the desired look and feel of the visual effects to tell the story the way Clint wanted it told. Clint was very impressed with the work we were creating. He is not one to get into all the details of how it is done, but he is collaborative nonetheless when it comes to what he wants to see.

What is Clint Eastwood’s approach to visual effects?
Clint’s approach to visual effects is realism: they are a tool to tell the story of the film. Clint’s shooting style doesn’t change just because there are visual effects in the movie. The visual effects team must have a very robust plan for how to achieve the desired effect without overstating our presence while on set. He is aware when there is something crucial we need for any particular effect, but he likes to keep the film rolling in the cameras, knowing he has confidence in us to make it work in the end.

How did you recreate the tsunami?
Looking at as much reference material as possible was the first thing. Once we saw the horror and devastation of what a tsunami could do, we then had to art direct those moments to best fit the storytelling of the film. Once we knew what our environments were going to be, we started the simulation of the waves speeding down the streets. From there we established the amount of destruction that was going to occur. Plotting out when the buildings would collapse or when people would get eaten up by the massive wave was essential to the storytelling.

Can you explain to us the creation of the shots where Cecile de France is carried away by the wave? How did you shoot her?
Originally there was a tank shoot at Pinewood Studios. They shot Cecile both under and on top of the water in a controlled environment. These dailies became extremely useful when postvising the actual shots. But because it was a controlled environment, Clint felt it could use a little more urgency and helplessness. In search of additional dramatic opportunities, plates were shot of Marie and the little island girl struggling to survive in the open ocean water off of Lahaina. While the actors’ performances were a wonderful addition, the look and feel of the ocean water added a great deal of realism.

How did you rebuild the city and create the wave passing through it, causing so much damage?
Comprehensive 3D LIDAR scans were taken of the principal photography location sets. With the help of our on-set crew, we were able to get HDR photography for recreating the lighting for our CG environments. Over 50 separate buildings had to be photographed with bracketed textures to build not only the one-block set we shot in, but also the rest of the mile-long stretch of road that would ultimately need to be created for the sequence. For interaction between the CG environment and the wave, we ran rigid body dynamics simulations in conjunction with water simulation. We built buildings in CG much as a construction crew would build a physical structure: first erecting the skeletal understructure and then building out from there. By building it that way, the simulation would break the structure in the weaker areas when it was subjected to the pressure of the wave.
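The "build it like a construction crew" idea above can be sketched in miniature. This is not Scanline's code; it is a hypothetical Python illustration of the principle that a structure modeled as joints with individual strengths will fail first at its weakest points when a load is applied.

```python
# Hypothetical sketch: a structure as joints with individual strengths;
# the wave's pressure breaks whichever joints are weaker than the load.
from dataclasses import dataclass
from typing import List

@dataclass
class Joint:
    name: str
    strength: float  # maximum force the joint can withstand

def apply_wave_pressure(joints: List[Joint], pressure: float) -> List[str]:
    """Return the names of joints that fail under the given pressure."""
    return [j.name for j in joints if pressure > j.strength]

frame = [Joint("facade_panel", 2.0),   # weak cladding fails first
         Joint("floor_beam", 6.0),
         Joint("steel_column", 10.0)]  # understructure survives
print(apply_wave_pressure(frame, pressure=5.0))  # ['facade_panel']
```

A real rigid-body solver resolves forces per contact and per frame, but the modeling decision is the same: authoring the skeletal understructure separately from the cladding gives the simulation natural break points.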

How did you create the shot in which Cecile de France is sinking into the water?
I think the shot you are talking about is when Marie is just coming from under the balcony and her foot gets stuck on a cart. As the cart starts to sink, so does Marie with it, and then we cut to her underwater, struggling to untie her foot from the cart material. The full CG shot when she first starts to get sucked under was a combination of everything we had been doing for most of the work in the tsunami sequence. We mo-capped and animated Marie trying to stay afloat. Once the CG water flow was bought off on, we showed the animators water flow markers so that they could match her movement to the flows of the water correctly. Then it was additional work to add all the debris floating and interacting with and destroying buildings.
The following shot was a plate of Marie and the cart underwater in the tank at Pinewood Studios. We tracked the plate and rotoed her out. Then we did a full water simulation with all the debris and environment floating under the water. After integrating Marie back into the water, we added more bubbles and debris on top to completely immerse her under the flowing torrents of the tsunami wave. Then there was the shot of Marie sinking into the water after being hit by the car bumper. This was a full CG shot, including a full CG Marie. It ended up being a hybrid of motion capture and hand animation. The hair and cloth simulations were painstakingly iterated to match the surrounding live action underwater tank footage of Marie, which directly cut with the shot.

Have you used digital doubles?
Yes, digital doubles played a very large role in the film. All people caught up in the tsunami water were motion-captured digital doubles animated in Massive and MotionBuilder. Capturing motions of people being swept away by a river of water proved challenging. Our motion capture shoot relied heavily on actors equipped with zero-gravity traveling wire rigs attached to a gyro waist rig enabling 360-degree movement. This enabled them to perform the floating, swimming and thrashing movements of characters in rushing water.

About the sequences of the afterlife: what were your references and how did you create those shots?
Joe Farrell, our compositing supervisor, worked very closely with Michael on creating the look of the afterlife. There were lines in the script which outlined the perception from hundreds of life-after-death accounts, so those parameters of a bright light, a feeling of weightlessness and being able to see a 360-degree view were the basis of the effect. We then started to find reference material to help plot out the length and visuals of the shots. CLOSE ENCOUNTERS OF THE THIRD KIND became our visual inspiration. The end scene, when the ship lands and we see all the people leaving the ship who had been abducted, struck a chord. The bright light and changing silhouetted images gave a sense of the unknown without being too scary. A green screen shoot allowed Michael to art direct the amount and type of people that were needed for the scene. Using the same technique of a very bright 20K light backlighting everyone became the new base. Joe, using Nuke, created a 3D environment and added some very traditional optical effects, giving a very surreal and haunting moment in the film.

What other kind of effects have you made?
The visual effects requirements in the remainder of HEREAFTER covered a spectrum. The team recreated the 2005 London Underground bombing, and we also added tears and facial performance enhancements to increase the dramatic effect of the character Jason, as well as a host of other visual effects techniques that served HEREAFTER as a drama not oriented around visual effects.

Is there a particular shot that prevented you from sleeping?
There were a few shots that kept me awake at night: the shot of Marie and the little girl being swallowed up by the oncoming wave in the middle of the marketplace, and the first time the tsunami crashes onto shore, overtaking and destroying the hotel pool area and lower level. Both of these shots had some minor art direction tweaks very close to the end of production, so it was a race to the finish to get all the elements to integrate and work with each other harmoniously.

What was the size of your team?
Our team size ramped up to between 50 and 60 people at its largest.

How long have you worked on this project and how many shots have you made?
The project was awarded to us in November of 2009 and we finished in August of 2010. We were the sole visual effects vendor on the film, responsible for 169 shots, 86 of them in the tsunami sequence.

What did you keep from this experience?
My experience on HEREAFTER continues today: our team, along with overall VFX supervisor Michael Owens, is nominated for an Academy Award for Best Visual Effects. So the whole experience of creating, finishing and then ultimately being recognized by your peers is something very, very special to me.

What is your next project?
Scanline is happy to be working on Tarsem Singh’s IMMORTALS. And I will be supervising a few sequences on New Line’s JOURNEY 2: MYSTERIOUS ISLAND.

What are the 4 movies that gave you the passion for cinema?
THE WIZARD OF OZ, JAWS, THE GODFATHER and THE FIFTH ELEMENT.

A big thanks for your time.

// WANT TO KNOW MORE ?

Scanline VFX: Dedicated HEREAFTER page on Scanline VFX website.
fxguide: Article about HEREAFTER on fxguide.

// HEREAFTER – SCANLINE VFX – VFX BREAKDOWN

© Vincent Frei – The Art of VFX – 2011

BLACK SWAN: Dan Schrecker – VFX Supervisor – Look Effects

Dan Schrecker has worked on all of Darren Aronofsky’s movies, from PI to BLACK SWAN. He also worked on films such as THE DARJEELING LIMITED, ACROSS THE UNIVERSE and LAW ABIDING CITIZEN, and the TV series THE SOPRANOS. In 2008, he joined the staff of Look Effects.

What is your background?
I started out doing non-digital animation. From there I went to graduate school to study interactive telecommunications. This got me into the digital world and I started my own business doing interactive design. After a few years of this, my old college roommate made a film and needed some help with some graphics, so I teamed up with another old college friend and did the work. That film was PI. From there I started to do title design and got into visual effects that way, starting a new company called Amoeba Proteus with my friend Jeremy Dawson.

How did Look Effects get involved in this project?
I had worked with Darren Aronofsky on all of his films and, since I had taken a staff position at LOOK in 2008, it made sense for us to do the work on BLACK SWAN. In addition, LOOK had done work for Darren on THE FOUNTAIN and THE WRESTLER, so he was familiar with the company as a whole.

What have you made on this show?
We completed over 200 visual effects shots for the film. This included complex CG work, such as the swan transformation, as well as production fixes, such as lots of crew removal in the mirrored rehearsal rooms.

How was your collaboration with Darren Aronofsky?
It was good. Like I said, we have been friends since college, so we have a very good working relationship and a shared history which allows us to communicate fairly easily.

How did you create the wings in the dance sequence?
The swan transformation
With Nina’s triumphant performance as the Black Swan, her transformation reaches completion in this sequence. The practical chicken skin on her back spreads across her chest as a 3D effect. It travels down her arms and CG feathers begin to emerge. The Black Swan makeup travels up her arms as a 2D effect. When she begins the final coda, the feathers begin to form full swan wings and torso.

Dance Motion Capture
On set, we shot the professional dancer using a motion-capture setup provided by Curious Pictures. There were 18 cameras which captured the dancer’s motion. Because the dance double could only do this most difficult ballet sequence from stage left and Ms. Portman could only do it from the right side, we were forced to flop the shot. Throughout the film, we were very impressed with Natalie’s dancing skills, as she performed much more of the ballet than initially anticipated. In this case, she performed the final coda, providing us with a high-quality element for the face replacement in the few shots where we used the dancer.

Wing Layout and Rig:
The wings were built based on a compromise between the concept drawings and the dancer rig (3D set-up). The dancer rig was built with the arms divided into multiple joints to allow for greater tracking flexibility. To match the rig, the wing model has the same number of joints and is constrained to follow the dancer’s arm movement and twist at each extra joint location. The rig also contains extra controls to allow for additional twists and offsets. This was critical because what worked best for the track did not necessarily appear natural for the wing. Twisting was especially an issue that resulted in a lot of additional animation on the wing rig.

The joints of the wing rig were skinned to a NURBS foil shape that was more bird-like in proportion than arm-like. The larger feathers, the primaries and secondaries, were hand-positioned on this NURBS model surface. The smaller feathers that fill in the wings were placed by MEL scripts (specially-written programs) which LOOK wrote to instance (multiply) based on texture maps. Additional MEL scripts constrained the feathers to the NURBS geometry (3D model) using Maya’s follicle nodes. The body feathers were also entirely positioned and scaled based on texture maps and the MEL instancing scripts. The total feather count was around 11,000 and LOOK’s technical director wrote around 1500 lines of MEL code for rigging and scene management.
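The texture-map-driven instancing described above can be sketched outside of Maya. This is not LOOK's MEL code; it is a hypothetical Python illustration of the core idea: sample a grayscale density map across the UV space of a surface, and instance a feather wherever the map value exceeds a threshold, using the value itself to drive scale.

```python
# Hypothetical sketch of texture-map-driven feather instancing:
# a grayscale map sampled over UV space decides where a feather
# goes and how large it is. Names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class FeatherInstance:
    u: float
    v: float
    scale: float

def instance_feathers(density_map: List[List[float]],
                      threshold: float = 0.2) -> List[FeatherInstance]:
    """Walk the UV grid of a grayscale map; emit an instance wherever
    the map value exceeds the threshold, scaled by that value."""
    rows, cols = len(density_map), len(density_map[0])
    instances = []
    for i in range(rows):
        for j in range(cols):
            value = density_map[i][j]
            if value > threshold:
                instances.append(FeatherInstance(u=j / (cols - 1),
                                                 v=i / (rows - 1),
                                                 scale=value))
    return instances

# Tiny 3x3 map: only two samples are dense enough to place a feather.
demo_map = [[0.0, 0.1, 0.9],
            [0.0, 0.8, 0.1],
            [0.0, 0.0, 0.0]]
feathers = instance_feathers(demo_map)
print(len(feathers))  # 2 instances placed
```

In the production setup the same lookup would feed Maya's instancing and follicle constraints, but painting a map is the artist-facing control in either case.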

All of this coding allowed the wing rig, when fully-built and attached to the dancer rig, to use the dancer in the master file as reference and simply swap with the most recent wing version.

Lighting/Rendering/Passes:
Lighting was done entirely with conventional Autodesk Maya area and spot lights. We had enough images to generate HDRIs (high dynamic range images), but the lighting changes and large camera movement made it impossible for one or two HDRIs to cover the wild shifts in luminance. Computer graphics lights were “built” based on the film footage: numerous stage lights above and to the sides of Natalie, six large chandeliers that were approximated, several bright footlights near the orchestra, and three very bright spot lights casting rim light that were critical. Even with all these lights, the black levels in the plate and the smoke in the theater made matching the actual lighting tricky. As a result, a lot of color correction had to be done in compositing to make the wing feel integrated.

The shot was rendered in mental ray using the Rasterizer because of the huge amount of motion blur. Most of the wing is captured in one big beauty pass with an additional shadow pass and numerous mattes for the compositor.

Can you talk more in details about the feathers creation?
Feathers:
For the transformation, we needed to create, animate and composite in the black swan feathers. Great care was taken to make the look and behavior of the feathers as realistic as possible, thus helping to make the transformation believable.

The wing feathers (primaries, secondaries) are simple 3D models: curved planes for the barbs and cylindrical geometry from extruded curves for the rachises. Each feather has a deformer rig (animation control structure) to add bend in two directions and also to allow growth from the rachis outward. The body feathers were simplified and usually did not contain rigs or separate rachis geometry. Feather silhouettes were created with a cutout map from scans of actual Tundra Swan feathers (a white swan). Those same scans were darkened in Photoshop and painted over to produce a more plausible black swan feather. Normal maps (textures) were generated through ShaderMap to add barb roughness. There were ten primaries, eight secondaries, and five generic body feathers in total. Each set was mirrored, producing 46 feather textures.

The look of the feathers comes from an anisotropic shader, which gives the smooth geometry the sort of directional sheen one would expect from a real feather composed of thousands of individual barbs. The shader was slightly translucent to allow light through when the wings crossed in front of stage lights. Special care was given to reflection falloff, as there is essentially no diffuse lighting on the feathers due to the dark plate and the dark texture maps being multiplied against the diffuse lighting values.

Feather Growth:
Feather growth was one of the more challenging aspects of the visual effects in Black Swan. LOOK animated black and white maps in Adobe After Effects and generated some low-resolution image sequences. Those growth images were read into the 3D software, Autodesk Maya, and determined the feather scale and other properties, such as rotation for feather ruffling. Each feather had extra data associated with it, such as UV position, which had been stored previously when rigged by the instancing MEL scripts. It was a crude, but effective, feather system.
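The growth-map system above can be sketched as follows. This is a hypothetical Python illustration, not LOOK's pipeline code: each feather stores the UV position it was instanced at, and an animated grayscale sequence is sampled there every frame to drive feather scale (0 = not yet grown, 1 = full size).

```python
# Hypothetical sketch of growth maps driving feather scale: an animated
# black-and-white image sequence, sampled at each feather's stored UV,
# determines how grown that feather is on each frame.
def sample_map(grow_map, u, v):
    """Nearest-neighbour lookup of a grayscale map at normalised UV."""
    rows, cols = len(grow_map), len(grow_map[0])
    i = min(int(v * rows), rows - 1)
    j = min(int(u * cols), cols - 1)
    return grow_map[i][j]

def feather_scales_over_time(map_sequence, feather_uvs):
    """Return, per frame, the scale of every feather."""
    return [[sample_map(m, u, v) for (u, v) in feather_uvs]
            for m in map_sequence]

# Two frames of a 2x2 map: growth spreads from the left column outward.
frames = [
    [[1.0, 0.0], [1.0, 0.0]],   # frame 1: left half grown
    [[1.0, 1.0], [1.0, 1.0]],   # frame 2: fully grown
]
uvs = [(0.1, 0.1), (0.9, 0.9)]  # one feather on each side
print(feather_scales_over_time(frames, uvs))
# frame 1: the right-side feather has not yet grown; frame 2: both full size
```

Crude, as the interview says, but effective: the animators only had to paint and retime 2D maps in After Effects rather than keyframe thousands of feathers.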

Animation:
The primary and secondary feathers were grown “by hand”: hand-animating scale attributes and keyframing the deformers of each individual feather’s rig that allow outward growth (the barbs pop out of the rachises). Additionally, keyframes were set by hand every couple of frames to keep the wing from twisting (often around the elbows) and to minimize feathers penetrating each other. As the shot progressed, the matchmove often had to be tightened up with additional keyframes.

How did you create the tattoo on the back of Natalie Portman?
During the sex scene, the lily tattoo on Lily’s back, which we’ve seen throughout the earlier part of the film, transforms, from Nina’s point of view, into swan wings. Is this a hallucination brought on by Nina’s drug intake or a manifestation of Nina’s paranoia that Lily will take over the role of the Swan Queen?

LOOK started the transformation effect with an image of the practical tattoo. We had our concept illustrator design a final swan wing element in the same color and style as the tattoo. He then drew in-between frames, “mapping” each petal and leaf of the lilies to the wing feathers. An animator then used these elements to create the final 2D animation. Simultaneously, the painstaking 3D matchmove of Lily’s back and movement was ongoing. This was a particularly difficult track due to subtle muscle movements in the actress’ back. The 2D animation was then applied to Lily’s back, creating the transformation.

Did the handheld style of Darren Aronofsky give you trouble?
It did make things more difficult to track, but we got it done. In a few cases we insisted that Darren shoot with a locked off camera and we added digital camera shake in post.

Have you done any face replacements?
A few, but not many. Natalie did almost all of her own dancing, though there were shots where we mapped Natalie’s face on to drive the storytelling, such as Lily’s face in the dressing room confrontation and the corps de ballet backstage.

How did you design and create the paintings that speak to Natalie Portman?
The art department created the actual paintings. As the scene progressed during editing, Darren wanted it to be more extreme, so we did a number of variations of the faces until we found the right balance. Because they were so childish in style, we had to be careful not to make it too goofy.

How many shots have you made and what was the size of your team?
I believe it was 210 shots and about 25 artists.

Is there a shot or sequence that prevented you from sleeping?
Many.

What did you keep from this movie?
It was a very satisfying project to work on, but very difficult.

What is your next project?
Right now we are finishing work on LIMITLESS for Relativity Media, directed by Neil Burger, with Bradley Cooper and Robert De Niro. We are also starting work on THE SITTER for Fox, starring Jonah Hill and directed by David Gordon Green.

What are the 4 movies that gave you the passion for cinema?
JAWS, DR. STRANGELOVE, SUPERMAN 2 and APOCALYPSE NOW

A big thanks for your time.

// WANT TO KNOW MORE ?

Look Effects: Official website of Look Effects.
fxguide: Article about BLACK SWAN on fxguide.

// BLACK SWAN – LOOK EFFECTS – VFX BREAKDOWN

© Vincent Frei – The Art of VFX – 2011

127 HOURS: Adam Gascoyne – VFX Supervisor & Co-founder – Union VFX

Adam Gascoyne has been working in visual effects for over 15 years. He worked at Cinesite, MPC and Rainmaker London. His filmography includes films such as DIE ANOTHER DAY and DA VINCI CODE, and the TV series ROME. He oversaw a large number of films including SLUMDOG MILLIONAIRE, INKHEART and LA MOME. He founded Union VFX with Tim Caplan.

What is your background at Union VFX?
Adam Gascoyne is a leading VFX supervisor with over 15 years of experience in the film industry, covering all technical aspects of the visual effects and postproduction process for Film and Television. Prior to setting up Union VFX, Adam worked as a visual effects supervisor at Cinesite, MPC and Rainmaker in London.

Tim Caplan has been in the visual effects industry for 17 years. He has played a key role in the start-up of two large facilities: Cinesite (where he first met Adam) and Mill Film.

How did you get involved on 127 HOURS?
Danny approached me in October 2009 with the script and asked how to tackle the arm issues. After working on SLUMDOG MILLIONAIRE I think he was keen to have the same team working with him on 127 HOURS.

How was your collaboration with Danny Boyle?
It’s always an adventure working with Danny; he is an inspiration and knows what he wants from the start. He leads from the front and is a great motivator. The whole team worked closely with him throughout the project.

What kind of effects have you made on this show?
127 HOURS is, on the face of it, not a big VFX movie, but once you start looking into the locations and the subject matter you realise that it is a really tricky movie to make; just getting to the locations took a herculean effort. So that’s where the effects were used: to solve difficult production issues and help maintain the reality which was key to the story.

Can you explain the creation of the shot which starts near James Franco and ends far above the canyon?
The first part of that shot was filmed on the canyon set in Salt Lake City. Then we shot plates from a helicopter at various heights above the actual accident site in Blue John Canyon. These plates were then projected onto geometry generated from height field data of the area. This allowed us to soar above and through the canyonlands, which was impossible to film with such freedom.
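The height-field step above can be sketched in miniature. This is not Union VFX's code; it is a hypothetical Python illustration of turning elevation data into simple projectable geometry: a regular grid of vertices whose z comes from the height samples, with each grid cell split into two triangles.

```python
# Hypothetical sketch: convert height-field data into a simple grid mesh,
# the kind of geometry the aerial plates could then be projected onto.
def heightfield_to_mesh(heights, spacing=1.0):
    """heights: 2D list of elevations. Returns (vertices, triangles)."""
    rows, cols = len(heights), len(heights[0])
    vertices = [(x * spacing, y * spacing, heights[y][x])
                for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            a = y * cols + x            # top-left of this grid cell
            b, c, d = a + 1, a + cols, a + cols + 1
            triangles.append((a, b, c))  # split each quad into
            triangles.append((b, d, c))  # two triangles
    return vertices, triangles

verts, tris = heightfield_to_mesh([[0.0, 1.0],
                                   [0.5, 2.0]])
print(len(verts), len(tris))  # 4 vertices, 2 triangles
```

With the terrain as geometry, a camera projection of the helicopter plates gives the compositing camera the freedom of movement the interview describes.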

Tell us about the shots showing the storm approaching. How did you create them? Are they full CG?
The clouds, terrain and rain were all fully CG. We referenced a QuickTime movie that Danny gave us.

How did you recreate the horses in CG? And why not shoot them in-camera?
The horses had to be filmed from such a low angle and depth that it proved tricky to set up a run for them to get enough speed and jump far enough. We decided to go with the CG horses pretty early on, as they also needed to be ready for the shoot and time was tight.

What was the size of the canyon set? Where was the line between the real set and the CG extension?
The set was 30 ft tall and 60 ft long. The join varies from shot to shot and lens to lens, but it was above the point that Aron falls from.

What have you done on the amputation shots?
We started with a great prosthetic arm made by Altarian. We needed to work on the connective tissue under the skin and added a little blood through the sequence for continuity.

How did you remove the forearm when James Franco swims?
We tracked a 3D shape over the arm and cleaned out James Franco’s arm. They were very difficult shots due to the bubbles, caustic lights and skin movement. We textured the CG arm with photos of Aron Ralston’s arm shot at the same time and location, so that helped with the lighting.

Were you involved in the shots of Ralston climbing a snowy mountain?
Yes, these shots were originally shot to be used before the accident, so we cleaned out James’ arm again and replaced it with the ice pick.

What was the biggest challenge on this project?
There were many challenges. There was a 3-month schedule for post, which went pretty quickly. The shots where James turns to dust were the most time-consuming: lots of fun with Maya particles and cloth simulations.

How many shots have you made and what was the size of your team?
There are over 350 shots in the movie and we had a team of 20.

What software do you use at Union?
Mainly Nuke with Maya, we use various tracking software.

What did you keep from this experience?
It was an amazing experience from start to finish. We were involved for just under a year, from the first draft of the script to the final VFX delivery, and I spent 9 weeks in Utah shooting in some amazing locations. So many experiences to keep.

What is your next project?
We have just finished a movie called WILL and we are working with Danny Boyle on the opening ceremony for the 2012 Olympics.

What are the 4 movies that gave you the passion for cinema?
CLOSE ENCOUNTERS OF THE THIRD KIND
TIME BANDITS
STAR WARS

A big thanks for your time.

// WANT TO KNOW MORE ?

Union VFX: Official website of Union VFX.
fxguide: 127 HOURS article on fxguide.

© Vincent Frei – The Art of VFX – 2011

THE GREEN HORNET: Jamie Dixon – VFX Supervisor & Co-founder – Hammerhead Productions

Jamie Dixon is a true pioneer in the world of visual effects. He was one of the first employees at Pacific Data Images in 1985. His wire removal on TERMINATOR 2 was among the first of its kind, as was the face morphing for Michael Jackson’s BLACK OR WHITE music video. He founded Hammerhead in 1995 with Dan Chuba, Thad Beier and Rebecca Marie, and has worked on many films such as TRUE LIES, TITANIC, X-MEN, THE CHRONICLES OF RIDDICK and WANTED.

What is your background?
I started studying computer graphics in the early ’80s at UC Berkeley and started doing national broadcast graphics at Pacific Data Images in 1985. In 1988 I had the opportunity to work on SCROOGED for Paramount Pictures and was bitten by feature films. In 1990, still with PDI, I worked on TERMINATOR 2, and through 1995 I worked on around 80 other films including TRUE LIES and TOYS. In 1995, three other partners and I started Hammerhead Productions, and since then we have contributed visual effects to over 100 major studio motion pictures including TITANIC, DEEP BLUE SEA, X-MEN, FAST AND FURIOUS and UP IN THE AIR.

How was your collaboration with director Michel Gondry?
Michel was very interested in using minimal visual effects in THE GREEN HORNET. He wanted everything to be absolutely realistic and not to have the audience ever question that. The effects in the film eventually fell into two categories: those driven by production realities and the more whimsically creative. In consultation with him and the producers, I mainly handled the standard effects so he could concentrate on the creative.

What was his approach with the visual effects?
Michel’s experience with visual effects on this scale was pretty limited and he basically allowed me the opportunity to recommend techniques to handle the standard types of shots. For the creative moments, he was pretty loose as far as planning went. I believe he had a clear idea on the results he wanted but we were never quite sure how he wanted to get there. For those sections, we shot many takes and angles and he took that footage into a Flame and started trying to “find it”. After mocking up examples of what he wanted, we could finally focus on completing the shots with the fidelity expected.

What are the sequences made by Hammerhead?
Hammerhead mainly worked on the driving sequences where the shots with the actors were filmed against bluescreen as well as the “gas gun”. Hammerhead did around 300 shots for the film and in addition to those two sections filled in many of the little one-off elements.

How did you get the idea of slow motion in which certain objects or vehicles lengthen and multiply?
Michel had done a “fight test” before he was hired to direct the film and he used a Phantom camera that shot 1000 frames per second to film it. He then selectively sped up and slowed down different parts of the frame to achieve a fluid and warped sense of time. For the film, we basically did the same, shooting the fight on that camera then manipulating in post.

Did you receive any particular references from Michel Gondry for the Kato slow motion and Kato-Vision?
The test he had done.

How were these scenes filmed?
The Phantom camera is a digital, high-definition camera that can shoot up to 1000 frames per second, but our scenes were shot at either 300 fps (closeups) or 150 fps (wider shots). Because of the high speed and low ASA of the sensor, lots of additional light was needed, but the scene was pretty much run in a regular fashion. The magic all came in post.
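The "selectively sped up and slowed down" retiming described above can be sketched as resampling high-frame-rate footage along a time curve. This is a hypothetical Python illustration, not production code: nearest-frame sampling stands in for the optical-flow blending a real retime would use.

```python
# Hypothetical sketch of retiming: high-fps source frames are resampled
# along a time curve, so parts of the shot play slower or faster.
def retime(source_frames, time_curve):
    """time_curve maps each output frame to a source time (in frames);
    nearest-frame sampling keeps the sketch simple (no blending)."""
    last = len(source_frames) - 1
    return [source_frames[min(int(t + 0.5), last)] for t in time_curve]

# 10 source frames; the output lingers on the middle of the action
# (slow motion), then races through the rest.
src = list(range(10))
curve = [0, 2, 4, 4.5, 5, 5.5, 6, 8, 9]
print(retime(src, curve))  # [0, 2, 4, 5, 5, 6, 6, 8, 9]
```

Shooting at 150-300 fps gives the time curve plenty of source frames to draw from, which is why the warped-time look could be dialed in entirely in post.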

Can you tell us about the creation of green gas? How did you design and create it?
Early in production, I had realized that the green gas was going to be an important part of the film, even before the filmmakers had wanted to start thinking about it. Hammerhead began testing and creating examples of what it could look like and that gave us the confidence that we could shoot the scenes without any physical smoke and add it later. Luckily, this made us ready and when the producers started asking for example shots that they could use in early marketing, we could deliver.

What is a typical day like for you, on set and during post-production?
During production, I was typically there early and stayed late. In addition to the main unit, we had an extensive second unit shooting action scenes. There were many overlapping days where main unit would shoot during the day and second unit would be on nights. Even though I had a crew handling all of the mechanical aspects of visual effects production, I needed to be many places at the same time and basically spent long hours bouncing between units. In post, in addition to Hammerhead, we had Luma Pictures, CIS Hollywood and Pixomondo contributing. My typical day would be to start at Hammerhead and then head over to my office at the studio (on the Sony lot), where we would review dailies, consult with editorial and then maybe visit one or more of the vendors.

How did you manage to reproduce the interactive lighting of the city during the driving scenes with the Black Beauty?
The backgrounds for those scenes were shot at the same time as the exterior action scenes. Once a particular stunt was filmed, we would run our camera car down the same route and capture the needed angles. On the bluescreen stage, we would study the footage, and there was an extensive lighting rig that we would adjust to roughly match the setting. Sometimes there would be very few lights and other times many. There were only a few differing environments that needed matching and frankly, at least when shooting the closer angles, a direct match is not really necessary.

Did you use digital doubles?
There were only a couple of shots in the film that had classic digital doubles: a shot of a guy wearing green being tossed from an overpass, and then some during the parachute ending. There were around a dozen face replacement shots where you could clearly see that a stunt player had performed and we needed to replace that with the actor. During production, we scanned the main players to have that option.

How did you handle the stereo conversion?
Luckily and wisely, the stereo conversion of the film was handled by a different department that could focus solely on that task. Of course the VFX department was fully supportive and the vendors would provide many additional layers to the conversion houses in addition to completed shots. For instance, clean backgrounds that matched the bluescreen shots gave them the opportunity to create better separation.

As a production visual effects supervisor on the film, how did you choose the sequences that would be made by other studios?
We looked at the scope of work and had around a dozen different houses bid on executing the work. During that process it became pretty clear who would be best suited for the different needs of the film. We made the selections based on that and also by selecting vendors that would work well sharing assets between each other. Since I was overseeing the whole process, I could help ensure that there was a consistent result and because I come from a vendor background it put me in a good position to understand the inner workings of each of the players.

Can you explain the sequence distribution in other studios?
Luma Pictures in Venice handled most of the 3D scenes including the car flipping, the printing press set extensions and the parachute scene.
CIS Hollywood did the Kato-Vision scenes including the cemetery fight, the South Central fight and the shot in the bullpen at the end when Seth gains that ability. They also did the 16-frame fluid split-screen shot.
Pixomondo in Santa Monica handled the Britt flashback which was a dreamy recollection of previous moments.
Finally CIS Vancouver filled in with a lot of the clean up and fixit shots.

What was the biggest challenge on this project?
I think it had to be understanding Michel’s vision and reconciling that with his process. He had a relatively unorthodox way of describing and executing those creative scenes, and getting in sync with him took a lot of effort. In the end, I am very happy with how it all turned out, and in hindsight it seems obvious, but at the time it was very challenging to make sure that I was giving him what he wanted.

Was there a shot or a sequence that prevented you from sleeping?
All 650 shots! Actually there were never any that I doubted. We had plenty of time and resources and we had the best companies in the business working on the film. Everyone was very enthusiastic about this film and contributed far above the minimum. The atmosphere with editorial and the filmmakers was very collaborative and we really had the chance to design the best solutions to every problem and execute that.

What is your pipeline and your software at Hammerhead?
Hammerhead mainly uses Nuke for compositing, Maya for 3D and FumeFX for smoke, all running on Linux boxes. We have a render farm of a couple hundred processors.

How long have you worked on this film?
I started in June of 2009, photography was completed in December 2009 and we finally viewed the finished 2D print in November of 2010.

How many shots have you made and what was the size of your team?
There were around 650 shots in the final film though we produced roughly 900 to varying degrees of completion. I had a VFX producer (Lori Nelson during production and Camille Cellucci in post), 2 coordinators and a PA. During photography, we had a VFX camera unit that included an AD, DP and various assistants.

What do you keep from this experience?
The lasting revelation is that it’s very difficult to judge the final quality of a film during production. We had many times on this one where it was impossible to believe that it would turn out to be something we would be proud of. In the end, and because of very focused and intense work by Seth Rogen, it actually turned out to be great. Who knew?

What is your next project?
Hammerhead is very busy for the foreseeable future and we are currently working on projects for Paramount, Warner Brothers and Dreamworks.

What are the 4 movies that gave you the passion of cinema?
I have always loved the “creature feature” films of the 50’s and 60’s. The early visual effects treats that I continue to enjoy were 2001: A SPACE ODYSSEY, INDIANA JONES and GHOSTBUSTERS. My current addiction started back in 1988 with SCROOGED and the realization that I could actually contribute to such a grand art as film.

A big thanks for your time.

// WANT TO KNOW MORE ?

Hammerhead Productions: Dedicated page about THE GREEN HORNET on Hammerhead website.
fxguide: THE GREEN HORNET article on fxguide.

© Vincent Frei – The Art of VFX – 2011

TRON LEGACY: Charlie Iturriaga – VFX Supervisor – Ollin Studio

After working several years in Mexico and Latin America, Charlie Iturriaga and Ollin Studio began a long collaboration with David Fincher, working on ZODIAC, THE CURIOUS CASE OF BENJAMIN BUTTON and THE SOCIAL NETWORK. Their credits also include the visual effects of THE SPIRIT, LET ME IN and THE LAST AIRBENDER.

What is your background?
I studied Electronics Engineering, but dedicated most of my time to studying film and working on productions for commercials and features. After a couple of years working in the local (Mexico and Latin America) industry, I started working on Hollywood features, beginning with David Fincher’s ZODIAC.

How was the collaboration with Joseph Kosinski and Eric Barba?
For months we worked together in reviews almost every day. I had dailies reviews of the work I was supervising with Eric first thing in the morning (9am) and around noon. Those reviews decided the changes and look of our work, and helped give an idea of the overall design of the movie, as Eric had the whole film in his head from beginning to end. In those reviews the material was picked to show to Joe, whether to be discussed or as a final proposal before continuing with the work. We had meetings with Joe twice a week at the beginning of the production, and at some point we started having meetings almost every day, including weekends.

How did Ollin Studio get involved on this movie?
Ollin created around 260 shots in the movie, including both Stereo shots and Flat shots. We were the second biggest vendor outside Digital Domain.

What sequences did you do?
Armory (CG environments and suit effect), Sam’s apartment (digital matte paint), Black Guard fight (CG environment/set extension, suit/disc enhancements and fight effects), End of Line Club (set extension and environment design, privacy screen and Zeus’s stairs effect), Ending LA (digital matte paintings and flare effects), Flynn’s Arcade (set extensions and environments), Private room (suit enhancements, privacy screen and stereoscopic fixes).

What assets have you received from production?
Depending on the sequence, different kinds of assets were delivered. The first ones were obviously the plates, which included both eyes and a QT reference of the editorial timing.
-Concept art was delivered as high-res paintings by Production’s art department.
-Shared geometry was delivered if it had been created by another vendor (in cases where a sequence was split between two studios).
-Look-up tables and color references were sent back and forth between Ollin and Production/Digital Domain to make sure we were in the same color space and look.
-Scripts for specific effects were shared throughout the movie, like glows or flares that had previously been approved by the director.

Did the director give you any references in particular?
During our numerous hours of screenings and meetings checking the shots, several references were called out. One that I especially remember is when we were discussing and designing the “transition” to the Tron world, which had not been specifically designed or visualized. Joe created an effect in After Effects involving some points flying toward the screen, and he handed that directly to me as a starting reference for the shot. He wanted to step away from the original effect, where Flynn traveled into the computer, and create something a little more first-person and a little like THE WIZARD OF OZ, in the sense of a change that is aimed more at the audience than at the character. After going back and forth, the effect ended up pretty close to his original concept, just with a stereo version and proper finishing.

What was the actual size of the set?
There were different sizes…
Armory: around 15 meters by 38 meters
End of Line Club: approximately 30 meters by 30 meters
Really huge.

How did you create the shots of the Flynn’s Arcade building?
The shot was re-created using a combination of 2D matte paintings and CG elements. The 2D matte paintings were the furthest-back elements; they were based on a couple of stills from primary photography and assembled in Adobe Photoshop, then mapped onto a plane and attached to a scene that was 3D tracked to the original plate. Cameras were tracked with PFTrack and shared between Maya/RenderMan and Nuke’s 3D space depending on the element.

The high-res (foreground) and mid-res (mid-distance) elements were built in Maya as 3D geometry, using photographic references of places that Production scouted and picked as the proper surroundings of Flynn’s Arcade. Once the geometry was created, textures were applied using the same reference photographs and mapped using Deep Paint. The original photographs were taken with a Canon 5D Mark II in RAW and stitched at different resolutions depending on the distance from camera.

Once the geometry was modeled, shading and lighting started using RenderMan and a combination of image-based lighting and scene-specific lighting. We added subtleties such as grass, garbage and other elements on the floor using procedural particles, replacing them with geometry to add randomness to the look of the street.

One funny fact is that in the wide shot you can see an original TRON poster on the right side of the street, as well as a graffiti by the comp artist “Bto” on the wall.

Did you do something on the light suits?
Most of the light suits needed some enhancement, mainly in the falloff of the glow and in consistency across the movie. It was great to have a real light source throughout the movie, as the characters interacted with the light, which gave a much more natural and aesthetic look. The downside was that to get the suits to actually expose, the camera apertures were mostly wide open, giving a soft defocus on backgrounds and on the characters’ edges against blue screen, which made final compositing difficult.

At some point during post-production, Joe wanted to give a subtle flicker to all the suits, which meant every suit needed a post-process filter in Nuke to give it that noisy look. Also, some parts of the suits turned off during takes and needed to be rotoscoped and lit up again.

Sometimes suits were turned off for other reasons. One funny example is during the Black Guard fight: Sam did some of the fight choreography himself, and his sweat caused light shorts against his skin, so the suit had to be off. Roto and lighting interaction needed to be added for those shots.

Can you explain to us how you created the animation on the suit when the Sirens put it on Sam?
Several geometry animations were created in Houdini to present to Eric and Joe as proposals for the “surface” build-up. During the same time, tracking was being done for both the cameras and the character. The character used witness cams so we could have different points of view for the 3D track, adding deformation and soft-body animation on top of the tracking to keep the tracked mesh attached as closely as possible in stereo.


The final render was done in Mantra using image-based lighting of the set, and several passes were sent to comp to enhance bump, occlusion, speculars and highlights during the transformations.

How was the assets sharing between your studio and Digital Domain?
Geometry was sent through Maya files; it was mainly a couple of sets and elements. DD used V-Ray as their primary renderer for the project, while Ollin used RenderMan, so shaders could not be shared.
Textures were sent as EXR files.
Versioning was tracked by the Ollin and DD production teams.

Did the stereo aspect cause you any trouble?
More than trouble, it’s a completely different pipeline for several processes in VFX production:
– Tracking is done completely differently.
– Compositing needs to take care of vertical disparity, lens aberrations between eyes and polarization differences, and needs to maintain information about convergence and interocular distance so artists can check the depth of their shots accurately.
– Reviewing between facilities and even with the director happened on different stereo projection systems. Ollin uses shutter glasses with a Barco projector, DD uses a RealD polarized projector, and Joe sometimes reviewed at Skywalker Sound with a Dolby system.
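As an illustration of the convergence and interocular bookkeeping mentioned above, here is a minimal sketch of the standard stereo relationship between those quantities and on-screen parallax. This is textbook stereography math, not Ollin's actual tooling, and the function name and sample values are hypothetical:

```python
def screen_parallax(interocular, convergence_dist, object_dist):
    """Horizontal screen parallax (in the same units as interocular) of a
    point at object_dist, seen by a stereo rig converged at convergence_dist.
    Zero at the convergence plane, negative in front of it (the point
    appears to pop out of the screen), positive behind it."""
    return interocular * (1.0 - convergence_dist / object_dist)

# A point sitting exactly on the convergence plane has zero parallax:
on_screen = screen_parallax(0.065, 5.0, 5.0)   # 0.0
# A point closer than the convergence plane has negative parallax,
# which is what places it in front of the screen for the audience:
pops_out = screen_parallax(0.065, 5.0, 2.5)
```

Tracking this value per element is what lets an artist verify that a composited layer sits at the intended depth rather than judging it by eye alone.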

What was the biggest challenge on this project?
Definitely the Stereo issue was the biggest challenge, as we had to re-design our pipeline.

Was there a shot or a sequence that prevented you from sleeping?
Hahaha… Several! There were two reasons why we had trouble sleeping during the last year. One was the trailer/teaser/Comic-Con shots, which had very strict deliveries. This movie was extremely publicized and needed a lot of quick turnarounds on specific shots. Not all of them were complicated, but presenting them at the required quality in a very short time was a bit painful. The other big problem was tracking some shots that had an issue in one of the stereo lenses. Specifically, the 18mm lenses used throughout the movie had a center misalignment, which led to weeks of work to find a proper solution.

What is your pipeline and your software at Ollin Studio?
We have several proprietary tools for pipeline management, most of them written in Python to plug into Nuke and Maya. The pipeline is based on a SQL database that tracks each shot version, its metadata, the artist who worked on the shot and dates.
Previewing is done with one of our internal tools, called “Jefecheck”.
Houdini, RenderMan, Maya and Mantra were used for CG.
PFTrack and 3DEqualizer were used for tracking.
We worked closely with The Foundry on Ocula development, and that tool really saved the day!
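The core of that kind of SQL-backed version tracking can be sketched in a few lines. This is a hypothetical schema for illustration, assuming nothing about Ollin's actual database beyond what is described above (shot, version, artist, date):

```python
import sqlite3

# Hypothetical shot-tracking schema: each row records one published
# version of a shot, with the artist and date attached.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shot_versions (
        shot    TEXT NOT NULL,
        version INTEGER NOT NULL,
        artist  TEXT,
        date    TEXT,
        PRIMARY KEY (shot, version)
    )
""")

def publish(shot, artist, date):
    """Record the next version number for a shot and return it."""
    cur = conn.execute(
        "SELECT COALESCE(MAX(version), 0) + 1 FROM shot_versions WHERE shot = ?",
        (shot,))
    next_version = cur.fetchone()[0]
    conn.execute("INSERT INTO shot_versions VALUES (?, ?, ?, ?)",
                 (shot, next_version, artist, date))
    return next_version

publish("EOL_010", "artistA", "2010-03-01")   # -> 1
publish("EOL_010", "artistB", "2010-03-04")   # -> 2
```

With versions keyed this way, a tool like a previewer can always ask the database which take of a shot is newest and who published it.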

How long have you worked on this film?
About 14 months.

How many shots have you done and what was the size of your team?
260 shots, and the team was up to 45 at some point of the production.

What did you keep from this experience?
An amazing movie with a great, solid pipeline for stereo production. Working with such a visual director as Joe Kosinski and a professional team at DD led by Eric Barba was also a great experience for understanding the problems and solutions of such a complicated movie with a big studio behind it.

What is your next project?
We are working with Disney on a new big movie for this year… but we can’t talk about it yet 😉

What are the 4 movies that gave you the passion of cinema?
2001: A SPACE ODYSSEY
THE GODFATHER
A CLOCKWORK ORANGE
STAR WARS.

A big thanks for your time.

// WANT TO KNOW MORE ?

Ollin Studio: Official website of Ollin Studio.
fxguide: TRON LEGACY article on fxguide.

© Vincent Frei – The Art of VFX – 2011

TRON LEGACY: Danny Yount – Creative Director – Prologue Films

Danny Yount, Creative Director at Prologue Films, accompanied by Daniel Klöhn and Miles Lauridsen, talks about their work on TRON LEGACY. Prologue is a studio specializing in the design of film titles and motion design. They have worked on films such as IRON MAN 2, THE LOSERS or SHERLOCK HOLMES, and TV shows like THE WALKING DEAD.

How was the collaboration with Joseph Kosinski?
It was a wonderful collaboration. He’s a director with a great design sensibility. He knew I was a fan of the first film and had designed things that referenced it (the IRON MAN 1 end credits), so as you can imagine I was very excited to talk about TRON LEGACY with him. We met shortly after we completed the SHERLOCK HOLMES titles and during our last push to finish the hologram VFX for IRON MAN 2.

What did you do on this movie?
We designed the end credits, the Disney castle logo, the memory sequences and the TV news sequence that reported Flynn’s disappearance.

The Disney logo revisited for this movie is very dark and techno. Can you explain how you design it?
We were initially given the task of merely updating the castle to suit the film, using the existing logo that flies down from the sky. We went through several ideas to test the waters of how far we would be able to push the look of the traditional Disney vibe into a Tron space. I was afraid that if we went too far we would be forever cast out of the Magic Kingdom, but I underestimated the bravery of the filmmakers, who wanted to arrive at a very Tron-like castle, which Daniel Klöhn designed.

What indications and references have you received from the director for the main and end titles?
They had a very utilitarian view of the end titles – that they were to be type only. I think that knowing what we usually do with end credits, they were a little apprehensive about making something with “too much design”, as the film already had plenty of great CG. But once we started pushing the type-only sequence to incorporate more elements and really take advantage of stereoscopic 3D space, they began to warm up to what we brought to the table. I think we all understood from the very first CG test that Joseph Kosinski had made to sell the idea of the film that this was something very special and celebratory in terms of design. I liked the very detailed type animations that designer Daniel Klöhn was making and thought it would be good if somehow it all felt like a graphic representation of the Grid. We also spent a lot of time adding detail to the climax of the sequence – the TRON logo. I added a lot of detail and digital branchings to it so it feels as though it is the center of the entire piece – as though you are in the core of the Grid. It took several months to incorporate everything, but I thought the results turned out great.

How did you create the beautiful opening title where we go from a CG wireframe to a city?
That was made by the director and his team at DD.

What were the challenges to the TVs sequence?
The assignment was to give the viewer a brief retrospective of what had happened to Kevin Flynn since the last film. His idea was to do this using 80’s television sets, which gave us the ability to communicate a barrage of worldwide reporting simultaneously. They had an edit made of storyboard frames. The challenge was to have the news broadcasts work without real newscasters, so I thought it might be good to have them feel like special reporting segments that used in-house promo material from Encom.

The problem was that it still looked too digital and not like real 80’s video, so we turned off all the video plug-in effects we had and decided to shoot everything running on monitors with old video cameras from that time period. Compositor and Technical lead Miles Lauridsen and animator JD Burditt spent many late nights getting the look down using that technique.

How was the collaboration with Eric Barba and the team of Digital Domain?
Terrific – great people with detailed notes on everything. It helps a lot to work with people who know what they want and know what they are doing.

Can you explain the creation of the shot with the ePad from scratch to the final compositing?
It really came to us as an “oh, by the way, we also need this…”. So I designed something quickly that animator Takayuki Sato executed in a very short time. All it needed to do was communicate that Encom’s OS was being hijacked virally.

Did you have to show some specific information on the ePad interface?
Yeah – we just needed to communicate in an interesting way that the Encom OS was being downloaded as a bit-torrent.

How did you design the look of the flashback Flynn shots?
We were given sequences and asked to make them look like they were being viewed on a sort of futuristic monitor. They needed to feel like memories, so we distorted the edges a bit and warped the image to give it a curved-surface feeling. We developed several looks for this – everything from a retro film look to analog video to digital. What seemed to work best was a sort of analog/digital hybrid, so that they felt like archival material. For the fight scene I thought it would be interesting if impacts caused the video to glitch a little. We also did a lot of work adding camera shake, sparks and flares to the scenes to help amp them up a little. For the final touch, compositor/technical lead Miles Lauridsen came up with a nice hex-pattern look for the surface of the display.

Can you expound on any technical tricks or requirements the flashback sequences required?
Miles Lauridsen: The flashback sequence’s technical brief was to take 2D footage (single view) and convert it into something that felt like viewing a TV set in stereo. After receiving a detailed creative roadmap of the project created by Danny in After Effects, we set about placing footage, lens flares, light reflections and dust textures on geometry in Maya. Footage was mapped to a card at the back of the scene, with additional textures of dirt, dust and glitches placed on various curved planes closer to camera to give the feeling of a space between the TV tube and the surface glass. We rendered out stereo views of this and added a distinctive ‘hex’ effect in the final comp to give the feeling of a refractive or pixelated grid covering the television tube.

Here are some concepts below:

Did the stereo aspect cause you problems on technical and artistic levels?
Miles Lauridsen: Stereo certainly adds another level or two of complexity and challenge to any project. Technically, you’re dealing with twice as much data and any small change upstream requires a solid workflow to propagate those changes out to all artists and software packages working on a particular shot. Artistically it allowed us to add an extra element of emotion and life into shots that were already effective without being stereoscopic. One of the challenges was making sure that the design and creative dictated the stereo and not the other way around. In this way we were able to use the stereo aspect to focus the eye on certain elements of design and use it as a tool in a similar manner to a filmmaker using depth of field or blocking to tell their story.

What was the biggest challenge on this project?
Miles Lauridsen: The biggest technical challenge was probably sharing and managing stereo camera data between 5 different software packages: Maya, Houdini, C4D, Nuke and After Effects. For the end credit sequence a custom stereo camera rig was built to allow attachment to Daniel’s animated camera in AE, and then data was baked out for Maya and converted to FBX for portions of the main title done in Houdini.
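The geometry underneath such a rig is simple to state: each eye camera is the animated center camera offset by half the interocular along the camera's local right axis. Here is a minimal sketch of that derivation with plain tuple math; the function name and sample values are hypothetical, and this assumes nothing about Prologue's actual rig beyond that offset:

```python
def eye_positions(center, right_axis, interocular):
    """Derive left/right eye positions from a center camera.

    center and right_axis are (x, y, z) tuples; right_axis should be a
    unit vector pointing along the camera's local right. Each eye is
    offset by half the interocular distance along that axis."""
    half = interocular / 2.0
    left = tuple(c - half * r for c, r in zip(center, right_axis))
    right = tuple(c + half * r for c, r in zip(center, right_axis))
    return left, right

# Camera at eye height looking down -Z, so its right axis is world +X:
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0), 0.065)
```

Baking this per frame for an animated camera, then exporting the two resulting transforms, is the kind of data that can travel between packages as FBX the way the answer above describes.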

Daniel Klöhn: For the end credit sequence we started working with two cameras, one starting from the beginning and one from the middle, and later on we had to move the whole world space to join them together. Another challenge was that the sequence contained over 30,000 layers in After Effects, which pushed the software to its ultimate limit, so we had to create several precomps to keep the layers working and visible inside the software interface.

Was there a shot or a sequence that prevented you from sleeping?
As with any project there are several, but that is mostly due to our own obsession with detail. The 3D was a challenge for us as this process is still relatively new, but we eventually worked out the kinks as time went on. The main problem really was the amount of data and render times since everything had to be rendered with 2 cameras. All in all though everyone did a tremendous job of delivering on time.

How long have you worked on this show?
Starting with early concepts around 8 months total with most of the work during the last 4 months.

How many shots have you done?
About 14 shots in stereo – 20,000 final frames total. The frames required for the end credits were about 80,000 before final comping.

What did you keep from this experience?
The thing that impressed me the most was how much fun it was, and how surprisingly fresh a new translation of an old idea can become. I thought the filmmakers did an amazing job of translating that, so we were very happy to be a part of it. And when you have that kind of enthusiasm in a team of great people like we had, the excitement just becomes infectious.

What is your next project?
We’re pitching on some things that I’d love to share but cannot right now. I can tell you it’s another superhero film though, which looks promising so far.

A big thanks for your time.

// WANT TO KNOW MORE ?

Prologue: TRON LEGACY page on Prologue website.
fxguide: TRON LEGACY article on fxguide.

© Vincent Frei – The Art of VFX – 2011

TRON LEGACY: Aaron Weintraub – VFX Supervisor – Mr. X

Co-founder of the studio Mr. X in 2001, Aaron Weintraub has overseen many films, such as A HISTORY OF VIOLENCE, SCOTT PILGRIM VS. THE WORLD or REPO MEN. He recently completed the VFX supervision of THE FACTORY.

What is your background?
6 years of 3D and compositing for commercials, music videos and broadcast design, then 11 years in feature films; I co-founded Mr. X in 2001.

How was the collaboration with Joseph Kosinski and Eric Barba?
We flew down to Digital Domain (DD) as we were first being considered for the project to meet with Eric, who briefed us on the sequences, the design and style of the show, and the work we would be undertaking. Everything we worked on went through Eric first to present to Joe. Dailies sessions were done first with Eric to review the work and when a shot was approved, it went into Joe’s dailies to be reviewed. As most of the dailies sessions were done while we were in Toronto, we were video-conferenced into DD’s theatre so we could see what they were seeing, and discuss the work.

Can you tell us how Mr. X got involved on this show?
Originally DD was to be the only vendor, but as shooting progressed, the size and scope of the VFX work grew beyond anyone’s expectations. At that point, Disney and DD began looking for other vendors they could partner with who were capable of handling the work. They visited companies all around the world, looking not only at the quality of their past work, but also at the pipeline, infrastructure and talent pools that were available. After being vetted technically and artistically, we began a back-and-forth bidding process while DD determined the most suitable sequences and shots for us, and we were ultimately awarded the work.

What sequences have you made?
We worked on the “Rectifier Interior” and “Rectifier Bridge” sequences, which take place in the film just as the Solar Sailer docks in Clu’s mountainside base (the “Rectifier”), and ends just before Clu and the Blackguards jump out of the throneship to begin the Lightjet Battle chase sequence. The sequences include the Quorra/Rinzler fight, the creation of Clu’s army, Clu’s speech, Sam reclaiming the disc and fighting with Rinzler, Quorra’s rescue, and the Lightjet escape.

Have you received any assets from Digital Domain?
Yes, many. The Rectifier is populated by a lot of vehicles that came to us basically finished from DD, since they appear in other parts of the film (tanks, recognizers, military vehicles, throne ship, sentries, 3-man lightjet, character digidoubles). For assets specific to these sequences that hadn’t been fully completed by DD, we received the previz models, along with all the designs and concept art to allow us to clean up the assets and detail them out. Most of our asset design time was spent on the actual rectifier itself, both the exterior, as well as the detailed interior with its complex ceiling catwalk and crane gantry systems. We did a lot of work on the Solar Sailer as well.

How was the collaboration with the teams of Digital Domain?
There was very tight collaboration with DD. Shots went back and forth all the time, with both companies working on different pieces of the same shot simultaneously. We tried to work at maximum efficiency so that there was as little duplication of work as possible. From the beginning, we knew that a portion of our shots would be shared with them, especially the Clu head shots. For those shots, DD would send us the initial tracked camera, which we would use to generate the environment; we would composite everything except for the head, then send the Nuke script and elements back to them along with the precomped shot minus the head. The Jarvis de-rezz was another example of this, where DD completed the effect since they already had the de-rezz tool up and running. There were also a number of shots where we completed modeling, layout and animation, and handed geo back over to DD for final lighting, rendering and compositing. Our pipelines needed to be very compatible, using all the same technical specifications for scene scale, world coordinates, camera specifications, naming conventions, etc.
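A shared naming convention like the one described above is usually enforced with a simple validator at publish time. Here is a minimal sketch; the convention itself (sequence code, shot number, task, version, extension) is hypothetical, not the show's actual spec:

```python
import re

# Hypothetical convention: SEQ_SHOT_task_vNNN.ext, e.g. REC_0120_anim_v003.ma
NAME_RE = re.compile(
    r"^(?P<seq>[A-Z]{3})_(?P<shot>\d{4})_(?P<task>[a-z]+)"
    r"_v(?P<version>\d{3})\.(?P<ext>ma|exr|nk)$"
)

def parse_name(filename):
    """Return the parsed fields of a conforming filename, or None if it
    does not match the shared convention."""
    m = NAME_RE.match(filename)
    return m.groupdict() if m else None

parse_name("REC_0120_anim_v003.ma")   # conforming: returns a dict of fields
parse_name("badname.ma")              # non-conforming: returns None
```

Rejecting non-conforming files before they cross facilities is much cheaper than untangling mismatched assets after an exchange.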

What was the actual size of the set?
In the Solar Sailer and docking area inside the Rectifier, one and a half cargo pod lengths of Solar Sailer were built, and these were only built three cars wide (the fourth one was added digitally). There was a single exit staircase built that led down to a piece of floor just large enough to accommodate the Quorra/Rinzler fight. There was a small section of catwalk rails built, again just large enough to contain the action of Sam and Kevin Flynn. For Clu’s speech, basically only the floor and podium existed practically. The throne ship was the most complete set, with the throne room itself being mostly all practical except for the ceiling and the cones of silence. The bridge area only had a floor created, and there was a single elevator platform extending out the back side.

Did you create digital doubles?
We received them from DD for Sam, Flynn, Quorra, Jarvis, and the sentries.

How did you create the army of Clu?
Starting with the basic single sentry model that we received from DD, we created several deformed models and alternate textures to distribute variations in body type, height, and facial characteristics. We created a library of animation cycles for the actions that were required in the sequence (at rest, chanting, cheering, banging staffs), and wrote a tool to pseudo-randomly distribute the cycles, models, and textures within the scene.
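The distribution tool described above can be sketched as a seeded pseudo-random assignment, so that every sentry gets a stable combination of model, texture and cycle across re-runs. All names here are hypothetical placeholders, not Mr. X's actual asset names or tool:

```python
import random

# Hypothetical variation libraries standing in for the deformed models,
# alternate textures and animation cycles described in the interview.
MODELS = ["sentry_base", "sentry_tall", "sentry_broad"]
TEXTURES = ["tex_a", "tex_b", "tex_c", "tex_d"]
CYCLES = ["at_rest", "chanting", "cheering", "banging_staff"]

def assign_variation(sentry_id, seed=1982):
    """Deterministically pick a (model, texture, cycle) combination for
    one sentry. Seeding from the sentry's id means re-running layout
    produces the same crowd every time."""
    rng = random.Random(seed * 1_000_003 + sentry_id)
    return rng.choice(MODELS), rng.choice(TEXTURES), rng.choice(CYCLES)

# Lay out a 10,000-sentry army with stable per-sentry variations:
crowd = [assign_variation(i) for i in range(10000)]
```

The per-id seeding is the important design choice: a crowd that reshuffles between renders would pop from shot to shot, whereas this stays locked while still looking random.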

What references have you received for the Rectifier?
There was a lot of amazingly detailed concept art created for the film early in production that showed the views inside the rectifier. It was always a struggle to come up with a real-world physical material counterpart, since ultimately, it was supposed to be something that existed only inside Tron World. There were discussions about the amount of cleanliness, the amount of scuffing on the floor, and what levels would be appropriate for a surface that has armies marching and tanks driving on it for years, but also, conceptually, being a perfect digital creation. That being said, we did have to match into the physical sets, though often these were replaced with wholly digital environments. The closest real-world reference we received for the Rectifier surface were photographs of black submarines.

How did you create this huge environment?
Using DD’s previz model as a size template, we remodeled the rectifier using the concept paintings and blueprints generated from the art department as reference, making sure all the detail was apparent, and that where no exact reference was available, that the style was consistent with the rest of the environment.

Did you create set extensions for the sequence where Sam Flynn gets the disc back from his father and fights?
Yes, the ceiling in the throne room was completely replaced, as well as the glass doors. The glass windshield was created in the bridge environment. All the exteriors, including the sea of simulation, and the portal and monoliths, were created digitally as well.

What references have you received for the creation of the LightJet?
DD had sent us their model for the 3-man LightJet, since it was used extensively in the following sequence. We had to do some extra work on it for our sequences, since for example, the cockpit wasn’t modeled or rigged to open up in the version we received. We also had shots which showed a clearer view of the interior of the canopy, so that had to be detailed out as well.

Can you explain how you created the escape sequence of Sam Flynn and Quorra? Are all the shots 100% CG?
Some were 100% CG, like the shot where they burst through the throneship glass and their chute deploys. The shot where they rush towards camera used a photographed element of Sam and Quorra suspended on a rig while the camera flew past them, and the following shot where they fly away from camera down to the floor is all-CG. When they finally crash to the floor, the actors were on a safety wire and landed on a piece of practical set (which was eventually replaced), and had a CG chute attached to them in a completely CG environment.

Did the stereo aspect cause you any trouble?
There’s the obvious addition of work that comes from having to render and composite everything twice, but a lot of the trouble comes from dealing with the imperfections in the practical stereo photography. A lot of time was spent correcting vertical disparity and polarization artifacts, and making sure that the camera tracks were absolutely perfect. A lot of the usual 2D tricks that you have on a traditional monoscopic film just don’t work in stereo. Even something as simple as scaling an element up or down to change the apparent distance to camera now comes with a host of other issues, such as ensuring that the IO is correct for where you want to place the element, as well as the proper convergence within the scene. Because the images were so high contrast, with bright, sharp glowlines on dark suits, we also had some serious ghosting issues, where the picture from one eye would bleed slightly into the image for the other eye. Fundamentally this is a problem with the projection system and the glasses used, and future technologies – for example, beaming the image directly onto the viewer’s retina – may eliminate ghosting altogether, even when viewing stereo films made today. For right now though, it was definitely a consideration which often caused us to reduce the stereo effect so the stripes wouldn’t double up as severely.
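The coupling between interaxial (IO), convergence, and apparent depth that makes naive 2D scaling fail can be illustrated with the textbook disparity formula for an off-axis (shifted-sensor) stereo rig — a sketch of the general principle, not the studio's actual tooling:

```python
def screen_disparity(focal_mm, interaxial_mm, convergence_mm, depth_mm):
    """Horizontal disparity on the sensor for an off-axis stereo rig.

    Disparity is zero for objects at the convergence distance; objects
    nearer than convergence get negative disparity (they appear in front
    of the screen), farther objects get positive disparity (behind it).
    """
    return focal_mm * interaxial_mm * (1.0 / convergence_mm - 1.0 / depth_mm)
```

Scaling an element in 2D changes its apparent size but not its disparity, so its stereo depth no longer matches its monocular size cue; the disparity has to be re-derived for the depth at which the element is meant to sit.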

Did Mr. X Montreal work on this show?
No.

What was the biggest challenge on this project?
There were many challenges on the project, but one of the major ones was the amount of pipeline retooling needed to get in sync with what DD was doing and what they had already built. We dove head-first into V-Ray, relying on some of DD’s knowledge, but ultimately needed to figure it out and R&D our own pipeline tools for it while we were in production. We also increased our renderfarm and server storage to keep up with the demands of the project.

Was there a shot or a sequence that prevented you from sleeping?
Ha… yes, all of them.

What are your pipeline and software at Mr. X?
We added V-Ray to our pipeline since that’s what DD was using. Our traditional CG pipeline uses Maya for modeling and animation, Houdini for lighting, and RenderMan to render the images; effects are done in Houdini and rendered in Mantra, and compositing is done in Nuke. For TRON, we replaced the Houdini-RenderMan portion of the pipeline with Maya and V-Ray. There was definitely a learning curve, but we continue to use V-Ray on other projects where it’s appropriate, now that we have a bit of a tool base written to make it workable.

How long have you worked on this film?
9 months from first turnovers to final delivery.

How many shots have you done and what was the size of your team?
211 shots were worked on by our team of around 80 people.

What did you take away from this experience?
It was a great experience working on a film where the anticipation of the fans and world-at-large was so high. There’s an enormous amount of satisfaction knowing that your work is definitely going to be seen and appreciated by fans for what it is. On a lot of the films we typically work on, our presence is invisible, and the goal is to not be noticed or get in the way of telling the story. On TRON, it was very much the opposite of that, and we had to make sure that everything was absolutely perfect to withstand the scrutiny of the most discerning of audiences.

What is your next project?
Currently in production at Mr. X: THREE MUSKETEERS 3D, THE THING, HANNA, THE VOW and SILENT HILL REVELATION 3D.

What are the four movies that gave you a passion for cinema?
Hard to narrow it down, but: THE GODFATHER, A CLOCKWORK ORANGE, STAR WARS, 2001: A SPACE ODYSSEY.

A big thanks for your time.

// WANT TO KNOW MORE?

Mr. X: Official website of Mr. X Inc.
fxguide: Tron Legacy article on fxguide.

© Vincent Frei – The Art of VFX – 2011

The Art of VFX celebrates its 1st birthday!

Hello everyone,

This Tuesday marks the first anniversary of The Art of VFX!

I want to thank you all for reading every week and for following my interviews in such numbers.
Several thousand of you visit each month, from more than 140 countries.
The site has received over 48,000 hits since its creation.

I am truly touched that so many of you appreciate my work!

I am pleased to share the messages I’ve received for this event:

Vincent, Congratulations on your 1st Anniversary of The Art of VFX. Thank you so much for providing such insightful details from the front line creative leaders of our industry. You provide respect and acknowledgement to the true artists who are responsible for the execution of stunning visuals in today’s feature films. Keep up the great work.

Jeff Campbell
Visual Effects Supervisor
Spin VFX

//

Happy 1st Birthday The Art of VFX. I have really enjoyed reading what other supervisors around the globe have been up to this past year. Keep it coming!

Mattias Lindhal
Visual Effects Supervisor
Fido

//

The Art of VFX is one of the best sites on the web for detailed interviews with the best VFX artists. Vincent goes beyond the work to find out why we get into this business in the first place.

Paul Franklin
Visual Effects Supervisor & Co-founder
Double Negative

//

Congratulations on your 1st birthday, you’ve posted some very interesting interviews over the past year.

Michael Ellis
Visual Effects Supervisor
Double Negative

//

The Art of VFX continues to be a valuable resource to the visual effects community. Since its inception, one can rely upon your articles for valuable insights and information not previously available. Keep up the great work!

Josh Comen
Principal
Comen VFX

//

Happy Anniversary “Art of VFX”! It was a great honor to be a part of your first year’s golden collection of inspiration, love for big cinema and like-mindedness! May the heroes continue to annihilate evil with lots of blood splatter, fireballs, light sabers and other joy-bringing butcher utensils in the coming years!

Simon Otto
Head of Character Animation
Dreamworks Animation

//

Happy first year of your site. I like the way you include artists of different levels, for consistently honest and insightful interviews. Best of luck in the years to come.

Adam Valdez
Visual Effects Supervisor
The Moving Picture Company

//

Congratulations on your 1st anniversary! Here’s to many more years of interesting interviews.

Daniel Leduc
VP & Visual Effects Supervisor
Hybride

//

Happy Birthday Art of VFX! Keep up the good work covering the VFX scene! All the best for the future!

Kevin Mack
Visual Effects Supervisor
Mack Art Productions

//

The Art of VFX’s website is an excellent resource for us. The articles are deep and insightful and packed with everything we would want to know about the production of so many high-end studios. Happy 1st birthday Vincent!

Danny Yount
Creative Director
Prologue Films

//

The Art of VFX, a great site for VFX enthusiasts, professional and amateur. The interviews are interesting and highlight the real issues facing the post-production industry. Good luck and happy birthday!

Mathilde Tollec
Lighting TD
The Moving Picture Company

//

The Art Of VFX contacted me for an interview on the work I did at MPC on Prince Of Persia. I was really impressed by the quality of the questions and the way that they translated it in a very faithful way.
Since then, I am a constant follower of the blog and I am very pleased to see how it has been developing in the last year. The interviews are always interesting to read and the fact that they cover both supervisors and sequence or discipline leads makes it even more exciting, giving us an in-depth review of the work achieved on big studio movies as well as smaller independent projects.
I wish a long life to The Art of VFX blog, and looking forward to reading more and more interviews about the great films coming out this year.

Stéphane Ceretti
Visual Effects Supervisor
Method Studios

//

I have really enjoyed reading The Art of VFX over the past year.  Not only does it have great breakdowns and interviews on first rate projects from around the globe, it also doesn’t localize the discussions to specific roles on projects.  The articles speak with Supervisors, Artists, and Designers alike and provide insight into all different parts of post production.

Justin Ball
Visual Effects Supervisor
Justin Ball VFX

//

Happy 1st year to Art of VFX and thanks to Vincent for his constant enthusiasm and energy to put together all those great interviews.

Guillaume Rocheron
Visual Effects Supervisor
The Moving Picture Company

//

Happy birthday Art of VFX. Having access to so many people and information in such a short time is something rare. I hope this will continue for many years.

Nicolas Aithadi
Visual Effects Supervisor
The Moving Picture Company

//

I am pleased to share this new year with you!

A very big thanks to:

All my readers.
All the interviewees.
BUF, Brainstorm Digital, Cinesite, Comen VFX, Double Negative, Dreamworks Animation, Filmgate, Framestore, Hatch FX, Hybride, L’E.S.T., Look Effects, Mikros Image, The Moving Picture Company, Pixar Animation, Prime Focus, Prologue Films, Postmodern Sydney, Rhythm & Hues, Rodeo FX, Spin VFX, Trixter, Worldwide FX.
20th Century Fox, Red Lorry Yellow Lorry, Universal, Warner, Walt Disney Pictures.

BOARDWALK EMPIRE: Justin Ball – VFX Supervisor – Brainstorm Digital

After several years at Zoic Studios as a TD and engineer, Justin Ball joined the team at Brainstorm Digital. He has contributed to many projects, such as BURN AFTER READING, DUPLICITY and THE ROAD, and has also supervised the effects on several films, including BROOKLYN’S FINEST, LETTERS TO JULIET and THE ADJUSTMENT BUREAU.

What is your background?
I studied sculpture, animation, and programming at Pratt Institute in New York. I had my first real exposure to CG and VFX while working at Curious Pictures in New York, in the model shop, building the last of the handmade puppets for their stop-motion kids’ show on HBO, A LITTLE CURIOUS. It was there that I experienced firsthand the industry’s real push to go digital. After teaching animation at another NYC university while finishing my undergrad at Pratt, I moved to L.A. and started working for the then-start-up, Zoic Studios. I functioned mainly as an engineer and TD there, working side by side with amazing artists, supervisors and creative directors who really gave me a passion for the industry and the work that we do. Eventually, I changed roles at Zoic and got more into effects. Soon after that, I moved back to New York to help build Brainstorm Digital, a new all-film VFX house in Brooklyn.

Over my years at Brainstorm, I rose from being an engineer/TD to an effects artist to my current position as VFX Supervisor. I come from a very technical background, but have also always had the passion for the creative. VFX Supervisor is a great mix of disciplines. You get to work and design really creative shots, and then figure out how in the world you will make them work, both on set and back in the office. It really requires many skill sets.

How was your collaboration with the various directors of the series and especially with Martin Scorsese?
It was very interesting to work with all the directors. It was a first for me to have that many different creative ideas and approaches centered around a single project, giving every episode a totally unique feel. The biggest challenge was the informational gap in dealing with the complex filming location of the boardwalk set. So there was a lot of work and discussion every episode with the new directors to bring them up to speed as quickly as possible on the difficulties of the location. One of the nice things was that we had alternating DPs for each episode, and over the course of the season they became well versed in the shortcomings of filling in the big blue box. After the initial ramp-up period at the beginning of the season, it became a well-oiled machine.

Working with Scorsese was fascinating. We weren’t able to spend much time with him due to how busy and in demand he is. But I must say it was a huge pleasure to both watch him work and to work with him. He was a tremendous asset for us, as we would approach him with options and he could be extremely decisive about the approach and direction he wanted to go. So the lack of one on one interaction was more than compensated by the precise amount of direction and information we were provided.

Can you explain how Brainstorm Digital got involved in this project?
In late 2008, we were approached by a producer friend who was attached to the project. Back then, it was just a loose idea and everyone was trying to figure out how to pull it off. One of the key elements up for discussion was how and where to film a 1920’s Atlantic City. Initially, there was an overwhelming push to try to film the project on a sound stage, which was a big “no-no” for us. The feeling was that even though we could manage to do the filming inside, and certain issues would be lessened by filming in a controlled environment, the show would never feel as if it were taking place on an actual, outdoor, seaside boardwalk. So for the first meeting with Executive Producer Eugene Kelly, we put together a rough projection test with some sky and water plates we filmed, to help illustrate how VFX could help the show and more importantly, how much set would have to be built.

We started by going out to Brighton Beach in Brooklyn to film some open coastline with recessed buildings. We took our HD camera and played with some camera moves on the beach. We then took that back to the office and built a 3D camera from the data we’d collected. Then, using archival photographs provided by the production historical researcher, we started to formulate a plan. One of the difficult issues was at that point in time we did not have an overview image of what the whole Atlantic City boardwalk looked like back then. I spent a long time studying antique photographs to find images that overlapped and structures that were geographically located on the boardwalk. We centered our test around the Blenheim Hotel because it was the most visually interesting structure located on the boardwalk in 1920. The original Atlantic City boardwalk was a constantly evolving attraction; from year to year the entire place changed dramatically. Buildings were torn down, and others were erected in their place, with even more floors and levels added to already existing buildings. Finding a way to recreate a realistic period set from the old, dynamic boardwalk was an interesting challenge.

After deciding on a geographic location on which to base the test, we built a loose projection system in Nuke. Using black-and-white photographs, we built a portion of the boardwalk set to make the test. With our initial test, we were forecasting that production would only need to build about 160 feet of set before VFX could take over. The issue with that was that it limited the filming options for the series. So eventually we all decided to extend the set another 100 feet to around 270 feet.

What references did you have for the streets and the pier of Atlantic City?
Production had hired a researcher, Edward McGinty, to help research all aspects of the show. Either the production VFX Supervisor, David Taritero, or Ed would track down any imagery or ref we would need if we could not find it ourselves. This was such an amazing asset for us to have in helping develop the realistic vintage look for the show. We also worked very closely with Robert Stromberg, who was a matte painter and VFX Designer for the series.

Were you involved in pre-production in order to help the shooting? Did you create some previsualizations for this?
Yes. We were heavily involved with every aspect of the design and building of the set. From helping to pick the location, to previzing the set, to designing the backlot, we were there all the way through filming. It started early on from that first meeting with the Art Department. We hired an art director from the HBO team to build us a scale 3D model of the set as the Art Department was designing it. Even though we were usually about a week behind the Art Department, we always had a scale 3D model in Google Sketch-Up. And while the Art Department was working on the details, we would work on the world beyond the set, both the placement of period buildings and also the practical elements surrounding the present day set. We used this model to help us visualize the layout of the bluescreen and how to design the space of the backlot.

The bluescreen in itself was a difficult challenge to take on, because we were not sure how best to rig something that large. We went through many different ideas about how to make it work. We had thought about cloth draping systems all rigged on wires, or systems with traveling 40×40 bluescreens, but knew that those could not cover the scope of the set that we were building. A bad bluescreen setup could lead to problems while shooting with lengthy re-set times, and on an episodic shoot, this was not really an option. We also needed something that could stand up to a New York winter, a huge challenge in itself. We were in constant contact with David Taritero, who was going to be the production VFX Supervisor for the show, but was still in L.A. finishing up post on HBO’s THE PACIFIC while all of this was going on. Fortunately, we were able to share ideas with him while going through the design stage. Dave had used a similar curtain system on THE PACIFIC and was able to give us a firsthand account of the pluses and minuses of that approach.

While researching this issue, I came across an article about the movie CHANGELING, where they mentioned that they used a shipping container as a green backstop for their backlot because it was a cheap but solid structure. When I read this I thought it was the perfect solution to our problem. We proposed this to production, and after we convinced them that the containers would also provide storage on a backlot that had very limited working space, they agreed. A secondary benefit of building a huge wall out of shipping containers was that it provided visual protection of the set from onlookers, along with weather protection. The set was built in an empty lot in Greenpoint, Brooklyn, right at the edge of the East River. Without that protection, the winter would have been much harder on both the filming schedules and the actors, not to mention the complications that can arise from curious crowds.

Using the 3D model, we were able to lay out the placement of the containers within the physical space of the backlot. We were also able to map out the number of containers to use along with seeing what they would look like in that environment. Since we built the set in real-world scale, I could plant a camera on the elevated production set and I could see what we would see once the set was built. This let us map out how high the wall needed to be to cover certain elements that we did not have control over, such as buildings across the street and even the Chrysler Building across the river. We were also able to test camera heights and lenses to know ahead of time when an actor’s height would break the coverage of the blue wall, revealing the real sky behind them.
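The coverage check described here reduces to similar triangles: the sightline from the camera over an actor's head lands on the wall plane at a predictable height, and if that height exceeds the top of the container wall, real sky shows behind the actor. A minimal sketch under simplifying assumptions (flat ground, pinhole camera, illustrative numbers; not the production's actual previs tool):

```python
def background_height_at_wall(cam_h, actor_h, actor_dist, wall_dist):
    """Height (in consistent units, e.g. meters) at which the sightline
    from the camera over the actor's head hits the wall plane."""
    # Similar triangles: the vertical offset grows linearly with distance.
    return cam_h + (actor_h - cam_h) * (wall_dist / actor_dist)

def head_breaks_coverage(cam_h, actor_h, actor_dist, wall_dist, wall_h):
    """True if real sky would appear behind the actor's head instead of bluescreen."""
    return background_height_at_wall(cam_h, actor_h, actor_dist, wall_dist) > wall_h
```

For example, with a low camera at 0.5 m looking up at a 1.9 m actor only 5 m away, the sightline reaches 11.7 m on a wall 40 m back — well above a stack of shipping containers — so that framing would break coverage, while an eye-level camera on the same actor would not.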

This setup was extremely useful for the DPs to get an understanding of the working conditions of the boardwalk even before it was built. We spent multiple days with Stuart Dryburgh, the DP for the pilot, working on angles and shots before the set was even complete.

About the shooting, were you able to shoot all you need in front of a bluescreen or did you need some extensive roto?
We were able to have quite a few production days in front of the bluescreen to film actors and background to get different elements. One of the issues we had was that the show takes place over the course of a year, so the people elements that we filmed early in the season would only work for a portion of the episodes. There were also many different types of attire and day and night scenes where we needed to fill people in. For the most part we were able to get all of this covered to a usable extent. But for some of the beach scenes, we weren’t able to get everything we needed due to camera moves in the original plate and weather problems at the beach. So in a few shots we did have to get into roto extractions to help cover the camera moves.

Can you explain to us in detail how you recreate entire streets and the pier?
The approach we took for these shots was that we would use matte paintings and projection setups in Nuke wherever we could get away with them. Initially, David and the production team and I discussed building the entire boardwalk area in 3D, as it was a finite space and we could build it as an easily reusable asset for years to come. But when first embarking on this project, none of us could know how much or how little we would see beyond the practical set, so we opted to do more paintings in the beginning. And when you have access to the talent of Rob Stromberg for your paintings, that’s a very easy decision. But over the course of the season we started to pick out more and more “featured” structures that became 3D models and independent assets.

Knowing the limitations of time and resources for this project, I wanted to use Nuke whenever we could to get the most out of our 2D team as possible. We relied heavily on 3D camera tracks and projection geometry to build a large part of the world seen in BOARDWALK EMPIRE.

In Episode One, we see a beautiful panning shot on Sewell Avenue, this street with old 1920’s beach bungalows that still exists out in Rockaway, Brooklyn. The problem was that only one side of the street actually had the appropriate houses. On the day of filming, I shot images of every house that was dressed for filming from every angle I could get. We used these images as well as still frames from the plate footage to re-project the surface back onto 3D-modeled houses. The houses were built very loosely in Maya and then brought into Nuke’s working space. We surfaced all the buildings in Nuke except when we needed some shadow casting passes. We would then kick that back to 3D for a good shadow pass. We also rebuilt the electric lines and poles in Nuke along with the ground plane and full sky replacement.

As for the boardwalk shots, depending on the camera move we used different techniques, but it was along the same lines as what we did for the street. We had the pre-viz model we built to scale in pre-production as a starting point, which gave us a lot to work with. That model was converted to Maya from Sketch-Up into a few different resolution levels. Then I would use the low res version on-set with my laptop to build camera moves on the fly for the directors, and back in the shop we used it for projection or, depending on the detail, for 3D renders. We also used this model with our 3D tracking team. So every shot on the boardwalk that went into 3D tracking lived exactly where it was filmed on the real set. We could turn over grey shaded renders to the matte painting team, placed with the proper perspective and vanishing points already mapped out. It really helped our build time in terms of the painting portion of the process. We would also pre-light the grey shaded models so the renders could almost serve as an underpainting. The water was just filmed water plates that were tiled out and placed on cards in Nuke.

In both types of shots, the real feat was to use the strengths and size of our 2D team and to not overwhelm our 3D crew. Using projection setups the way that we did really let our 3D guys operate in more of a supporting role and let the 2D shoulder the mass of shots, so we would not be stuck behind rendering bottlenecks from the 3D side.

Were you involved on the sequences on the ocean including the shots in which a character is thrown into the water?
Yes, and this is a bit of a funny story. All the shots with the smuggler boat were set to be a VFX split day, to shoot both the “dumping the body” shots and the opening shot of the show, where the boat is moving away towards Atlantic City at night. When we were riding to the set that day, Richard Friedlander, Brainstorm’s VFX producer, got a call that the hero boat was taking on water on its way to the set. When we all arrived at the set and assessed the situation, there were really only two options: either we would have to shut down and postpone the filming day until the boat could be properly fixed, or we could fix it in post. Robert Stromberg, the production VFX Supervisor for the pilot, pulled me aside and asked how I felt about doing a boat replacement. This was something that neither Brainstorm nor I had done before, but I was very confident that we could do it. So we had the boat team pull up all the boats that were similar in length to the hero boat until we found one that could work for our purposes. It was the approximate length (just a few feet shorter), and it had low enough rails on the side that we would not have any obscuring issues with the actors.

As this was an all-VFX shoot day, we had a lot of opportunities to shoot safety plates to help us with the final assembly of the shots. We began by shooting the actors performing the actions in the stand-in boat out on the water, as we had planned to do with the hero boat. When we heard the news about our hero boat, we requested that it be brought over on a trailer so that we could get the measurements and reference images for the CG replacement. When it arrived, we hatched another plan: we would also reenact the water scene on the drydocked hero boat.

Now keep in mind that we had planned to do all of our shooting that day from a 50-foot Techno-crane. So we set the camera up on the crane and were able to swing it around into many different positions and angles without ever having to move the base. That saved us a huge amount of time between set-ups on a day that was quickly becoming a mad scramble, all because of one leaky boat. Filming from the crane-mounted camera, we performed the body toss on the drydock boat from many different angles with stunt actors and landing pads. While we were waiting between setups, we would swing the crane over the water and film matching water plates for the angle. We used a tugboat in the water hitting the throttle to create churning water for the plates.

In the end it was up to the director to put the scene together the way he liked. When we received the working edit, we saw the issues we were up against. Some of our shots were re-envisioned in the edit, but luckily with all the coverage we shot that day, we had the pieces to put it together.

The boat replacement shots did have some interesting complications to them. We modeled the boat from measurements and photographic reference. Once we had the boat modeled and matchmoved to the production plate, we found that the difference in scale between the two boats was much more apparent than we had expected. In certain instances we had to do scaling adjustments to help it sit better into the plate. We also found that the position of the cockpit and some other key features on the boat were not in exactly the same locations, so it took some clever work on the part of our 2D and 3D teams to make everything feel right. In certain instances we did have to do some rebuilding of the actors to help sell the effect.

The other issue we had with the boat was the water interaction. I had hoped that we could steal more interaction from the plate, but since the hero boat had a white hull and was shorter than the replacement boat, we had to resort to some help from 3D. We used Houdini both to enhance the interaction along the boat’s leading edge and to enhance the splash when the man was thrown overboard. The end results are great, but it took some quick thinking to make it happen.

Can you explain how you created the impressive war wound on the face of one of the characters?
When this topic came up in pre-production, we were worried that the effect would be too big and complicated for us to pull off on an episodic show. The scarring on the face of Richard Harrow was not to be a superficial wound, but rather something that had taken away part of his face. Dave Taritero and I came up with an initial plan for the face shots: make everything as simple as possible. No camera moves, no talking, just a straight reveal. This is how we approached the shots in Episode Seven when we first meet Richard. The idea was to cut down on any of the complex roto work or carving into his face to make the deformation look real.

The shots we did for Episode Ten, where Richard is in the living room on the couch, featured rapid movement, a moving camera, and lots of talking—not simple at all! But since we had already revealed Richard, there was no turning back. For the Episode Seven shot we had resorted to a bit of trickery, with a loose face model, matchmove, and re-projecting the face in Nuke. We pushed that shot about as far as we could go with our sort of mishmash 2.5D approach, and it worked nicely. But for the Episode Ten shots, we knew we would have to be very creative and really get our hands dirty.

When looking at how to complete this series of shots, I came up with a different approach. Matchmoving a head is a pretty simple thing to do, especially for just a handful of shots, and all we really needed was registration of the wound, as we would be taking over all of that portion of the face. We enlisted the help of an old modeling buddy of mine, Brian Freisinger, and a character animator, Anton Dawson, to rig and deform the face in sync with the actor’s performance. I took photos of the actor’s face from every angle I could and sent that info to Brian, who built us a pretty perfect face match.

Once again, the production team provided us with all the research we needed, and what we learned was fairly horrifying. We had great examples of the scarring and skin grafts, burns, and all sorts of other trauma that these men had to live with after World War I, and how they used actual tin masks to cover their deformities. We went through a rather long design cycle for the look of the face in these shots. It took us a while to figure out the type of deformity he had and how it would react with his face. The placement of the wound did provide a challenge, as the actor still had the front and rear of his jawbone intact, but a gaping hole in his face and a missing eye. Making the wound look ghastly enough without overdoing it was a bit of a balancing act.

Ultimately, the Episode Ten shots were rendered out of Maya using mental ray and comped in Nuke. One way we helped our compositors was to provide them with a UV map of the face so that they could apply grades and corrections to specific spots, to keep the shot from going back to 3D once the animation and lighting were locked.

What was the biggest challenge on this project?
I think the biggest challenge on this project was twofold: dealing with the sheer amount of data constantly coming in, and managing the shooting schedule at the same time. We were in post on the show while a large portion of the series was still filming. So we were working on shots as they were filmed, planning new shots, reading scripts, filming elements, and then also working on set with David and the directors to help visualize and realize the world we were creating. It was intense.

Was there a shot that prevented you from sleeping?
Yes–the Episode Two Times Square shot, definitely. It was a total unknown. It was Brainstorm’s first fully CG shot, and it had to be a photorealistic recreation of 1920s Times Square. You might be amazed by the lack of imagery available of Times Square in 1920, especially from a high-angle camera position. It really just doesn’t exist.
We added a lot of little details to help sell the feeling of Times Square, with endless revisions and much tinkering to make it work.

What is your pipeline and software at Brainstorm Digital?
We primarily use Nuke as our compositing package (though Shake still makes an appearance every now and again) and Maya for 3D. We also use Houdini for the snow and splash effects.

How long have you worked on this series?
We have been on the project for close to 22 months, from the original pitch all the way to the completion of Season One.

What did you keep from this experience?
This project was an amazing opportunity to work with some of the best talents in the field of film and television today. Working hand-in-hand with the writers and directors from THE SOPRANOS and THE PACIFIC, along with multiple Oscar winning directors and artists, was an amazing experience for me. Few other projects will pull that many different talents from all over our industry into one place.

What is your next project?
The project we just finished together is THE DILEMMA for Ron Howard, and we’re in prep mode for Season Two of BOARDWALK EMPIRE.

What are the four movies that gave you your passion for cinema?
This is tough–there are so many.
JURASSIC PARK (The film that really pushed me down the path of VFX)
THE SHINING
2001
BRAZIL

A big thanks for your time.

// WANT TO KNOW MORE?

Brainstorm Digital: Official website of Brainstorm Digital.
fxguide: Article about BOARDWALK EMPIRE on fxguide.

// BOARDWALK EMPIRE Season One – BRAINSTORM DIGITAL – VFX BREAKDOWN

© Vincent Frei – The Art of VFX – 2011

SCOTT PILGRIM VS THE WORLD: Frazer Churchill – VFX Supervisor – Double Negative

After several years of freelance work, Frazer Churchill joined Double Negative, of which he is one of the founders. He participated in many projects such as PITCH BLACK, BELOW and ENEMY AT THE GATES. In 2001, he became a supervisor and handled the visual effects of DOOM, SAHARA and CHILDREN OF MEN.

What is your background?
I was a freelance graphic designer before working in video post production in the mid-nineties. I then moved into title design and film fx and went on to be a founding member of Double Negative. I developed a career as a digital artist and then VFX supervisor. I supervised DOOM, SAHARA & CHILDREN OF MEN.

How was your collaboration with Edgar Wright?
Edgar is an auteur; he has a unique style of film-making that is unmistakably his, yet he is still very open to input. The way the film looks is due to the collaboration of Edgar, Oscar Wright (Edgar’s brother, the film’s concept designer), Bill Pope, Marcus Rowland, myself and Andrew Whitehurst (CG supervisor).

Have you used some motion designers for some sequences?
We had 150 digital artists working on the show from all backgrounds, some of them have motion graphics experience.

Can you tell us about the shooting of the first fight? And what have you done on it?
The first fight in the film is the Patel fight. This was a complicated fight to plan and shoot. We had already shot a short piece of the sequence during the test shoot and this helped us establish some key techniques but there was still a lot of work to do.

The Patel fight introduces the viewer to the hyper-real manga-esque world of Scott Pilgrim. The film up until this point has played relatively straight; here the story shows its true colours and takes a sharp left turn. Matthew Patel bursts through the ceiling of the Rockit nightclub and attacks Scott while he’s playing on stage. Matthew flies through the air, lands on stage, faces off with Scott, then charges across the stage and gets kicked up into the ceiling of the club, whereupon Scott leaps from the stage, flies up, catches Patel with a mid-air uppercut, lands a repeated volley of punches on him while ascending to the ceiling of the nightclub, and then smacks him down from the ceiling to the floor.

A kung fu style battle then takes place on the nightclub floor until Matthew Patel levitates and begins an aerial Bollywood-style song and dance routine, during which he summons four winged demonic hipster girls who bombard Sex Bob-omb with a deluge of fireballs.

The whole sequence was shot using a combination of bluescreen photography, in-situ on-set photography, parallelogram stunt rigs, wire work and jogging treadmills.

Matthew Patel’s flying pose was shot on a parallelogram rig against blue, with an interactive light sequence rolling over him and a wind machine to create movement. The interactive light was created by a series of programmed PAR cans and a rolling mirror.

For the punching and spinning, we used the Phantom camera and shot at 288fps. Cera was shot on set punching two lighting triggers that set off four photoflash bulbs. Cera’s punching action was shot twice: one tight medium shot of him blocking and punching, and one wide shot of the follow-through KO punch. We then morph-transitioned between the two shots to create a crash zoom out. This was augmented with camera shake and “colour shake”, in which the image would cycle through frames of block colour, adding a very stylised feeling of impact. The element of Patel being punched out was shot on a P-rig: Patel was manipulated by Brad Allen’s stunt team in blue suits while Bill Pope provided a 70kW “lightning strike” interactive light to create a flashing KO light on Patel as he spun out of frame in digital slow-mo.
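As a rough illustration of the “colour shake” idea, here is how one might cycle block-colour frames into a sequence after an impact. This is a sketch of the concept only; the palette, frame counts and function name are guesses, not the film’s actual setup:

```python
def colour_shake(frames, impact, n_shake=3,
                 palette=((255, 0, 0), (255, 255, 0), (0, 0, 255))):
    """Swap a few frames after an impact for flat blocks of colour.

    `frames` is any list of frame images; for `n_shake` frames starting
    at `impact`, the image is replaced by a solid colour card cycling
    through `palette`, giving a stylised strobing sense of impact.
    """
    out = list(frames)
    for i in range(n_shake):
        idx = impact + i
        if idx < len(out):
            out[idx] = palette[i % len(palette)]
    return out

# An 8-frame sequence with an impact on frame 3
seq = [f"frame{i}" for i in range(8)]
shaken = colour_shake(seq, impact=3)
# frames 3, 4 and 5 are now solid colour cards; the rest are untouched
```

In a real comp the colour cards would be flashed in for only a frame or two at the moment of contact, which is what sells the comic-book snap of the hit.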

The whole shot was composed around the “Krowww” and “kpok” sound-effect graphics, which we took from the comics and overlaid live on set to get good composition and line-up on the day.

In the final composite, put together by Ian Copeland, there are hand-drawn flash frames during the impact flashes; these were provided by Oscar Wright, the film’s concept designer.

In another action shot, Patel runs across the stage at Scott, during which the proportions of the room stretch and distort as they do in manga animation; in the finished shot the camera appears locked to Patel’s legs as he runs. To achieve this we shot Patel on a blue jogging treadmill to capture a “camera locked” aspect on his running legs, then shot Scott, Kim, Stephen and Johnny holding still on the Rockit set with a dolly move.

A CG floor was created and animated to match Patel’s treadmill legs, and a CG set wall was added later. Anime zoom lines and lens flares were comped in to create the final look.

A dynamic low angle shot of Scott kicking Patel was accomplished using stunt performers on wire rigs who later had their faces replaced. The impossible focal length change during the crash zoom was created using close and distant camera positions morphed together.

The final anime backgrounds for the sequence were created using a combination of plate photography and digital stills. Working with 2nd unit DP David Franco, I shot travelling plates of the Rockit set using a 50ft Technocrane. Compositors worked through the material to come up with a photographic version of a NARUTO-style speedline background.

Patel’s song and dance sequence was choreographed to music written by Dan Nakamura. Patel was shot in situ on the Rockit set on a fork rig performing his dance routine; photoflash bulbs were triggered at the appropriate times during the live playback to sync with the appearance of the demon hipster chicks and fireballs.

The idea for the four demon girls is that they all look identical but have slight differences in their performance. The way we achieved this was to shoot the same girl on a fork rig with a four-camera array on bluescreen, which gave us a slightly different angle on each girl relative to her position in the air. We shot numerous takes of her running through her routine, which enabled us to use different takes for the different girls, giving us a synchronised but not identical performance from each of them. One girl pass was shot with a spotlight on, to create a pass that we could place in the spotlit area of the hero plate. Flashbulbs were fired in time with the song playback to synchronise with the hero Matthew Patel performance.

The hipster chicks were given CG wings, an ethereal glow and a ghostly transparency in compositing. Fireballs, flames, debris and magic dust were also added. The fireball effects were created by Aline Sudbreck using the in-house software Squirt. The CG wings were built and animated to complement the dance routine, while Kate Porter oversaw compositing of the sequence.

Did you create digital doubles for the fights?
The actors trained for a long time with Stunt Co-ordinators Brad Allen & Jackie Chan’s stunt team so they were able to do a lot of the fighting and physical stuff for real. We did create quite a few digital doubles for the more extreme moments in the fights. We would always start the effect with an actor on a rig, shot against bluescreen and then take over with a digital double. I think you’d be surprised by how much of the fighting is actually performed by the actors.

Did you create previz for the different battles?
We previs’d the Katanyagi scene and the Lucas Lee skateboard scene.

How did you create the impressive downhill skateboarding?
Scott challenges Lucas Lee to show off his skateboarding skills by getting him to grind down the big staircase leading up to Casa Loma. This sequence sees Chris Evans ollying up onto the railings and grinding his way down to the street below. In the world of Lucas Lee, “There are like 200 steps and the rails are garbage”.

The sequence sees Lucas hopping from rail to rail, pulling an aerial 360-degree rail grab, a backside toeslide and other radical moves, all of it punctuated with on-screen sound effects, flurries of snow and plumes of sparks.

A previz model of the Casa Loma staircase environment was built using location photography as reference and the sequence was animated using a digital Lucas Lee. The sequence underwent numerous revisions in the edit suite before it was locked.

The previz models of portions of the steps and their associated CG camera positions were placed into a Maya scene of the bluescreen studio, which enabled us to create the physical layout for each of the shots. The art department provided the practical railings and platforms that doubled for the CG set. All the stunts were shot entirely against blue using wire work and gimbals. The CG environment was created based on extensive location photography and lit using HDRI-derived CG lights. Once again, the Casa Loma staircase is an exaggerated version of reality: we played very fast and loose with the spatial relationships, increased all the distances and changed the layout to create the breakneck anime-flavoured action set-piece. The final sequence has matte paintings of the Toronto skyline, CG steps, CG trees, CG snow flurries, CG sparks, lens flare elements, 2D graphics, bluescreen stunt performers, bluescreen actors, CG coin explosions, and CG and photographic smoke elements. Chantelle Williams created the CG environment using Maya and RenderMan. Steve Tizzard oversaw the creation and compositing of the action sequence.

Only the first shot in the sequence was filmed on location at Casa Loma, with Chris Evans ollying up onto the railings with a wire assist. He pulled it off in two takes; that shot is completely real!

What have you done for the scenes with the Twins?
In one of the more spectacular fight sequences, Sex Bob-omb have to face off with the Katanyagi twins in a battle of the bands at a huge warehouse party. Sex Bob-omb start to play a track written by Beck called “Threshold”, whereupon the Katanyagi twins fire up their synths and blast Sex Bob-omb off the stage with a devastating sound wave from their huge speaker stack; the wave also blows a hole in the venue’s roof.

Sex Bob-omb recompose themselves and start to play again. Snow is falling through the hole in the roof, and as they start to play, the snow dances in time to the music. The Katanyagi twins summon two huge snow dragons from their speaker stack, which coil through the air and breathe snow fire on Sex Bob-omb, blowing them off the stage.

The fight resumes with Sex Bob-omb summoning a sound yeti from their amplifiers. The dragons and yeti battle for supremacy in a huge aerial struggle while Sex Bob-omb and the Katanyagis play. The fight intensifies, and finally the yeti bashes the snow dragons’ heads together; they fall onto the Katanyagi stage, destroying the twins, their synths and their speaker stack in a huge explosion of coins and broken speakers.

The scene was previz’d at Double Negative, so the choreography and basic look of the creatures was to some extent designed prior to shooting. The scene took two weeks to shoot and was filmed with on-set playback to sync all the elements of the performance. Using the previz, we marked the position of the creature in each shot with a weather balloon, which also acted as a CG lighting reference. Bill Pope positioned long sequences of lights which were programmed to follow the creatures’ movement, providing interactive lighting at the correct spatial positions.

Colin McEvoy animated the creature fight, while Markus Drayss and Lucy Salter designed the complex particle system that defines the snow dragons. The dancing snow system and the Katanyagis’ soundwave effects were created by Alexis Hall using Houdini and Maya.

One of the features of the sound yeti is that it is covered in “sound fur” which reacts to the music. CG supervisor Andrew Whitehurst wrote a piece of software called the waveform generator, which converts data from the audio files into animation data driving the amplitude and frequency of each spike of the sound fur. The end result is a relentless bristling movement on the yeti, driven by the music.
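In spirit, such a tool could look like the following sketch: per video frame, it windows the audio, takes an RMS amplitude and a dominant FFT frequency, and emits those as animation curves. The function name and the exact mapping are illustrative assumptions, not Andrew Whitehurst’s actual waveform generator:

```python
import numpy as np

def waveform_to_fur_params(samples, sample_rate, fps=24):
    """Convert an audio waveform into per-frame animation parameters.

    Returns, for each video frame, an amplitude (RMS of the frame's audio
    window) and a dominant frequency (loudest FFT bin), which could drive
    the length and bristle rate of each fur spike.
    """
    window = sample_rate // fps              # audio samples per frame
    n_frames = len(samples) // window
    params = []
    for f in range(n_frames):
        chunk = samples[f * window:(f + 1) * window]
        amplitude = float(np.sqrt(np.mean(chunk ** 2)))   # RMS level
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(len(chunk), d=1.0 / sample_rate)
        dominant = float(freqs[np.argmax(spectrum)])      # loudest bin
        params.append((amplitude, dominant))
    return params

# One second of a 440 Hz tone at 48 kHz yields 24 frames of parameters
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
frames = waveform_to_fur_params(tone, sr)
```

Driving amplitude and frequency from the audio like this is what keeps the fur locked to the music without any hand animation per spike.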

Ultimately, the look of the yeti harks back to hand-drawn animation; one of the first briefs for the creature was to make it like the “monster from the Id” from the film FORBIDDEN PLANET. The final yeti look is a combination of hand animation, complex particle-system dynamics and audio-driven animation.

In the fight’s big finale, the dragons fall onto the Katanyagi twins, demolishing their stage in a spectacular explosion enhanced with a Scott Pilgrim signature shower of coins. The basis of the explosion was a practical gag rigged by Laird McMurry, with pyrotechnics provided by Arthur Langevin. The explosion was enhanced with rigid body dynamics using the in-house software Dynamite and coin particle simulations provided by Chris Thomas and Federico Fasselini, and composited by Keith Herft. The final shot is a seamless blend of live-action and CG destruction mayhem.

As production VFX supervisor, can you tell us which sequences were assigned to Mr X and why?
Mr X created the stylised exteriors of Toronto, with lots of CG falling snow and snowy backgrounds. Mr X are based in Toronto, where the film is set and shot. They were very keen to work on the film, their previous work was great, and Dennis Berardi and Aaron Weintraub (the VFX supervisors) are great guys.

Were there any shots that prevented you from sleeping?
The birth of my daughter halfway through the shoot!

How many shots did you make?
1200.

What was the size of your team?
About 200 people.

What did you keep from this experience?
The best projects involve a director with a clear vision and extensive collaboration between department heads.

What is your next project?
I’m talking to Edgar Wright about his next film.

What are the four films that gave you the passion for cinema?
Can I have seven?….
STAR WARS: I was 7 when it came out and it blew my mind.
A MAN CALLED HORSE: I was 10 when I saw it and knew it was good but I didn’t know why.
TIME BANDITS: I was 11 when I saw this and it changed my life.
HIGH PLAINS DRIFTER: Revenge from beyond the grave, perfect cinema.
LA HAINE: Made me want to be a film maker.
ELECTRA GLIDE IN BLUE: Best last shot in any film, ever.
ONCE UPON A TIME IN THE WEST: Will anyone ever make a better film than this?

A big thanks for your time.

// WANT TO KNOW MORE?

Double Negative: Dedicated page about SCOTT PILGRIM VS THE WORLD on Double Negative website.
fxguide: Article about SCOTT PILGRIM VS THE WORLD on fxguide.

© Vincent Frei – The Art of VFX – 2011