AVENGERS – ENDGAME: Russell Earl (VFX Supervisor) with Dan Snape (VFX Supervisor), Kevin Martel (Animation Supervisor) and Abs Jahromi (Facial Technology Supervisor) – Industrial Light & Magic

Last year Russell Earl explained to us the work of Industrial Light & Magic on AVENGERS: INFINITY WAR. He is back today with Dan Snape (VFX Supervisor), Kevin Martel (Animation Supervisor) and Abs Jahromi (Facial Technology Supervisor) to talk about AVENGERS: ENDGAME.

How was this new collaboration with directors Russo Brothers and VFX Supervisor Dan DeLeeuw?
ENDGAME was the culmination of a great collaboration between us, Dan DeLeeuw, and the entire team at Marvel over the course of the last four Russo Brothers films.

How did you use your experience of INFINITY WAR on ENDGAME?
Having worked with the same team at Marvel, we knew what the expectations would be, and had a good idea of the aesthetic as well. All of the prior films built on each other in terms of complexity and tool sets. CAPTAIN AMERICA: WINTER SOLDIER with its massive Helicarrier destruction and environments led to CIVIL WAR, and the fight at the airport prepared us for INFINITY WAR’s battle in Wakanda. These films had us primed for the end battle in ENDGAME. We’d really honed our tools and mindset for handling big environments with thousands of characters, all with specific actions, and then being able to destroy it all. The artists had the tool set needed to handle close-ups and wide shots, both in environments and hero action beats.

How did you organize the work with your VFX Producer?
Katherine Farrar Bluff was producing here in San Francisco, and Jeanie King executive produced. We looked at the work and the resources across our facilities and cast the show according to the strengths and talents at each facility. Fortunately, the teams at each location can handle all types of work. When we can, we try to keep characters and beats together; at the very least we try to keep shots in one place. That said, we try to work as one team so that we can share and send assets and shots back and forth when needed.

How did you split the work amongst the Industrial Light & Magic offices?
For ENDGAME, we split the work across three of our studios. San Francisco was our Hub location, with London and Singapore pulling some serious weight in shots and assets. We always try to work in a way that gives us optimum flexibility to share and shift things as the shows go on. Each location had specific beats and shots to focus on. We have great teams in all locations. Dan Snape was VFX supervisor, Michael Lum was animation supervisor, and Danielle Legovich produced in London. Dave Dally was VFX supervisor, Eddy Zhou animation supervisor, and Kacie McDonald produced in Singapore. Each office’s teams worked on individual beats.

We start our day in San Francisco with reviews with London, and I would speak with Dan and the team of artists there to go over the work. Then it was back to San Francisco for our dailies with our team here. Jay Cooper was my associate VFX supervisor and led the compositing team; he would help wrangle things and keep all the pieces moving forward.

We would have nightlies in San Francisco in the late afternoon, and then follow that with reviews with Dave Dally and the team in Singapore. We were literally going full steam ahead 24 hours a day spanning the globe. Sometimes we needed to get a quick turnaround on things, so we would rely on each other to tag team the work and do handoffs at the start and end of the day.

What are the sequences made by Industrial Light & Magic?
ILM was responsible for a good variety of different work throughout the film. We created hundreds of assets, including the latest updates to Iron Man, War Machine, and Rescue, and one of our bigger tasks, developing Smart Hulk. We handled his introduction shots in the diner and his visit to New Asgard, which included Rocket, Korg, and Miek as well as the environment itself.

Also Smart Hulk’s time-travel suit introduction and discussion with Rhodey, Hawkeye, and Scott Lang. We revisited the Quantum Realm and created all the shots of our heroes traveling through it: Ant-Man in microscopic mode after the compound destruction, as well as Smart Hulk and Rhodey trapped below. We also revisited NYC and the battle action that we had created for the first Avengers film in 2012.

We also played a part in the end battle, specifically the beat just after Hawkeye hands off the gauntlet to Black Panther. This included Black Panther’s run, getting picked up by Ebony Maw’s trash worm, and tossing the gauntlet to Spidey. We did the Spidey catch and land, the transition to Iron Spider mode, and the few shots where he is fighting on the ground. We created Doctor Strange’s Winds of Watoomb spell as well as the water tornado/waterfall shots. We’re also responsible for the first shot where Rescue shows up, as well as the shots of Captain Marvel and the H ship destruction and fall into the bay.

Lastly, we worked on Smart Hulk at the (spoiler) funeral and on the beat where Cap goes to return the stones.

How did you work with the art department to design Smart Hulk?
We received initial artwork from the Marvel art department and used that as the basis for our sculpt. From there we worked to capture the spirit of the artwork while making sure to retain just the right amount of Mark Ruffalo’s features; things that would really affect the performance once he started talking and moving.

Can you explain in detail about the creation of Smart Hulk?
Creature supervisor Lana Lan and our model lead, Sven Jensen, did a really amazing job of analyzing all the prior Hulk models, taking the initial artwork, studying Ruffalo, comparing them all, and sculpting the perfect blend: “The brains and the brawn.”

After we had made our initial sculpts and found a balance of Hulk and Banner that we all liked, the next step was to see him talk and emote. We wanted to see how the mixture of our big hulking green character would play with Mark Ruffalo’s natural speaking voice. For this test, we found some interview footage of Mark Ruffalo and very carefully studied and animated our Hulk to match the reference we had selected. Each and every nuance was heavily scrutinized and replicated to make our new Smart Hulk deliver the lines as sincerely as Mark had. The mixture worked! We could now see him as an authentic character that could hopefully live up to the performance Ruffalo would give later on set when his scenes were shot. We also spent a good deal of time on his texturing and lookdev, led by Gareth Jensen and Anthony Rispoli, our paint lead and lookdev lead respectively, on the show.

Did you develop new tools for this CG character?
Yes. We rebuilt the entire facial pipeline for Smart Hulk. We were doing this as the show was in full production.

How did you handle his rigging and animation?
The rigging for Smart Hulk was developed using our Block Party 2 rigging system with close attention paid towards skeletal and muscular anatomical detail. Additionally, muscle and flesh simulations were run using our in-house solver to achieve further anatomical correctness. All costumes were simulated with our in-house cloth solver. Much attention was paid to managing the complexity of the various costume rigs alongside their respective simulations.

Can you tell us more about his animation and especially his face?
Animators had the ability to control over 200 face controls and over 100 controls for the muscles. Because we were utilizing our new Blink retargeting, animators could tweak the smallest details, like adding tiny wrinkles under Hulk’s eye, all the way to putting severe anguish onto his face with every vein popping out of his head.

While this Hulk likely weighed several hundred pounds less than the original Avengers Hulk, animators were still very careful to make sure his actions always took his enormous weight into account, often slowing down some of Ruffalo’s quicker and more nimble movements.

How did you use the Medusa technology?
Medusa is essentially the machine-vision camera setup that we use to capture Ruffalo and his face shapes. We use this as a basis for our actor mesh, as it solves a mesh per frame from which we extract the hero select poses for all the key expressions. The hero shapes are also used as a basis for the Smart Hulk face shapes. The facial capture was achieved using Disney Research’s Anyma facial capture system. The process would begin with the capture of Ruffalo’s performance via a head-mounted camera setup. Anyma takes this recording of the performance as input and performs per-pixel image analysis to produce an initial deforming actor geometry. This process leverages the Medusa-captured data of Ruffalo alongside machine-learning-powered lip contour rotomation to refine the shape of the geometry so that a more desirable result is achieved. The end product of Anyma’s solve is per-frame deforming geometry and displacements, which we referred to as the actor deforming mesh.

Retargeting was handled with our in-house retargeting system, developed for this show by our facial tech supervisor, Abs Jahromi. The actor mesh would strategically deform our Hulk facial mesh while staying true to the actor’s performance. Next, the retarget would be decomposed into a series of animator-friendly weights that would drive the Hulk rig controls. The final retarget handed to animation would be the sum of the residuals and rig control keyframes. This gave our animators the ability to non-destructively modify and enhance any region of the face on the fly while still preserving Ruffalo’s original high-resolution performance.
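The "residuals plus rig control keyframes" idea can be sketched roughly as follows. This is a hypothetical toy, not ILM's system: a simple blendshape rig stands in for the Hulk face rig, the "mesh" is a short list of scalars, and all names are illustrative.

```python
# Toy sketch of a residual-based retarget (hypothetical, not ILM's code):
# the solved performance splits into rig-control keyframes plus a
# per-vertex residual layer that preserves fine detail on top.

def rig_eval(weights, shapes, neutral):
    """Evaluate a simple blendshape rig: neutral + sum(w_i * shape_i)."""
    out = list(neutral)
    for w, shape in zip(weights, shapes):
        for i, delta in enumerate(shape):
            out[i] += w * delta
    return out

def decompose(performance, weights, shapes, neutral):
    """Given weights solved by some fitting step (assumed here), keep
    whatever the rig cannot reproduce as a residual layer."""
    rig_part = rig_eval(weights, shapes, neutral)
    return [p - r for p, r in zip(performance, rig_part)]

def recompose(weights, residual, shapes, neutral):
    """Animators may edit `weights` non-destructively; adding the
    residual back restores the original high-resolution performance."""
    rig_part = rig_eval(weights, shapes, neutral)
    return [r + d for r, d in zip(rig_part, residual)]

neutral = [0.0, 0.0, 0.0]
shapes = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # two toy face shapes
performance = [0.55, 0.3, 0.02]              # solved actor mesh (toy)
weights = [0.5, 0.3]                         # assumed solved weights

residual = decompose(performance, weights, shapes, neutral)
```

Editing a weight only changes the rig layer; the residual rides on top, which is what makes the workflow non-destructive.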

Smart Hulk also incorporated new facial deformer technology to help animators enhance performance. Skin deformers were developed to allow for sliding, wrinkling, and delaying of the facial skin. New sticky lips/eyes deformers were built to give animators greater control of Hulk’s lips and eyes. A live lips collision deformer was implemented to collide the upper and lower lips to give a ‘smooshing’ lip effect.

What was the main challenge about Smart Hulk?
The biggest challenge with Smart Hulk was making sure he could hold his own with the other characters on screen while keeping true to Ruffalo’s performance. Mark delivered a pretty diverse performance for Hulk. Sometimes we’d see Mark acting on film and he didn’t look like himself. When we were animating Smart Hulk’s performance to match, we would look at some retargets that were true to the data, and it wouldn’t look like Smart Hulk. The problem is you can’t just move on; it’s a bit of a catch-22. This is where the facial, model, and animation teams would get together in “facelies” and look at the shots to come up with a plan of action on how to adjust the performance to sit better on Hulk and be more on model.

How did you enhance the Iron Man, War Machine and Iron Patriot models?
The suits evolve with each and every film. No one knows these suits as well as Bruce Holcomb, who has built or touched just about every Iron Man and War Machine suit over the last ten years. For the Iron Man Mark 85, based on the great art we got from the team at Marvel, we tried to combine the bleeding-edge nanotech feel of the suit from INFINITY WAR with a little more of the traditional rigid suit feel from CIVIL WAR. We wanted to keep the nanotech that brings the suit on and off, but do it in a way that made it feel a bit more rigid and mechanical. This was true for the Gauntlet as well. It happens quickly, but if you watch carefully, you will see the nano/liquid metal forming the framework and individual panel pieces that then lock into place. This was done to help sell a more physical, less magical feel for the suits. We also introduced some subtle patterning to the materials on the suit to give it some visual interest.

How did you handle the matchmove and lighting challenges?
Anamorphic shows always make for a bit more of a challenge on the layout/matchmove front, but we have great tools for that. The majority of Smart Hulk shots had Ruffalo on set either on a platform or with an eye line pole where his head would be. On shots that had motion capture data, we would just add a few keys to place him in space. For some shots, we ended up doing rotomation as a starting place for the character.

Our lighting challenges occurred in differing ways. On Smart Hulk, the challenge was always to light him so that he would sit in the shots. For most of his shots, we were compositing him into existing environments with live-action people. If Smart Hulk were there “on the day,” the DP could have made adjustments and added hero lighting to him as with any featured actor; every shot gets a specific setup. This is true for CG characters as well. You can capture an HDRI and build an environment to light with as a starting place, but artists needed to hero out each shot, adding bounce cards, eye lights, backlight, all the tricks that would be done on set. The goal was always to show him in his best light, no pun intended, without making it feel like he was overlit and out of place among the other characters.

How did you create and animate the various FX for the time travel and Quantum Realm sequences?
In between the Russo Brothers films, we were cast by Marvel to develop the Quantum Realm for both ANT-MAN and ANT-MAN AND THE WASP, so we were very familiar with it. The biggest challenge with the Realm on this film was trying to tell the story of our heroes being guided to different locations in such a short time. Florian Witzel, our FX lead, led the charge.

Can you tell us more about your work on the impressive end battle?
We focused on a few distinct beats. The end battle beats included Black Panther’s run with the Gauntlet. We had done the complex all CG battle work on INFINITY WAR, so that set us up well for ENDGAME. Weta Digital did a large portion of the end battle, so we were sharing looks and assets in as much as we could.

Weta built the initial crater asset and handed it off to us. Once we had the handoff, we ingested it, textured it, and set dressed it with hundreds of logs, rocks, concrete slabs, and lots of damaged bits. Our environment lead, Johan Thorngren, worked with our teams across studios to keep the look consistent and at the level we needed it to be. We were sending paintovers with sky choices and mood to Dan DeLeeuw, and he would share looks back and forth between us and Weta, so that we could home in on a look together.

The process began with the generalist team doing quick 3D exploratory mock-ups of the environment for mood/lighting/scale. As the mock-ups developed, they informed how best to approach the build. This process allowed us to get a reaction from the client quickly, before getting too far into the build itself. The more successful elements were retained and gave us an initial basic recipe of dressing/shading/ lighting, as well as auditioning suitable comp elements.

The important element was getting the sky right: The right level of exposure, temperature and cloud coverage. The broader environment layout was modeled and published for use in lighting and animation blocking passes. The environment team created a large collection of assets and shaders that could all be ported into the lighting pipeline. We made use of library/generic assets and procedural textures as much as possible, and only committed to custom model/texture time after seeing the first pass of shots. Once lighting had basic environments up and running, with established animation beats, we revisited the set to add a final pass of dressing detail.

Groundwork was a custom shader network built with tile-able, tri-planar textures. This was also built up to act as a layer of dirt that could be applied to assets placed in the environment. A variety of colors and frequencies of maps were blended together to give variation at different scales. The ‘height’ (y position) of the geo was also used to blend darker, muddier textures into the pits of the crater, compared to the drier, stonier dirt of the crater walls.

For efficiency, most assets were built using a material library with tri-planar textures to establish material looks. By projecting in world position, we didn’t have to worry about UV layout or UV scaling issues between one asset and another, and texture scales stayed consistent. Some assets needed to be up-rezed, so they went through a UV layout, texturing, and lookdev phase. For these assets, the environment material would automatically preserve the established shading and just mix the dirt shading across the top.
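To illustrate the world-position projection idea, here is a minimal tri-planar sketch. The document contains no code, so this is purely illustrative; `sample_2d` is a stand-in for a real texture lookup, and the checker pattern is an assumption for demonstration.

```python
# Illustrative tri-planar sampling: blend three world-axis projections
# by the surface normal, so no per-asset UV layout is needed and
# texture scale stays consistent across assets.

def sample_2d(u, v):
    # Placeholder procedural "texture": a simple checker pattern.
    return (int(u) + int(v)) % 2

def triplanar(p, n, scale=1.0):
    """Sample the texture three times along the world axes and blend
    by the absolute normal components (normalized to sum to 1)."""
    x, y, z = (abs(c) for c in n)
    total = x + y + z
    wx, wy, wz = x / total, y / total, z / total
    tx = sample_2d(p[1] * scale, p[2] * scale)  # projection along X
    ty = sample_2d(p[0] * scale, p[2] * scale)  # projection along Y
    tz = sample_2d(p[0] * scale, p[1] * scale)  # projection along Z
    return wx * tx + wy * ty + wz * tz
```

Because the lookup keys off world position rather than UVs, two neighboring assets shaded this way automatically receive matching texture scale.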

Dirt mixing was achieved by a combination of procedural masks: normal orientation (up = more dirt), self-occlusion (cracks and crevices = more dirt), directional occlusion (closer to the ground = more dirt), and tile-able textures (dirt pattern break-up).

Most of the detail in the ground is achieved through a coarse displacement (to save memory on large environment geo) for broad profile changes like rocks, plus a fine-detail bump. “Puddles” were achieved in shading by mixing in a basic water material (with some absorption for a muddy effect), using the depth of the displacement as a mask. Normals for the water material were replaced with a constant y-up vector to simulate a flat surface, counteracting the unevenness of the geometry.
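The puddle trick can be sketched as below. The water-level threshold and function names are assumptions; the point is that displacement depth acts as the mask and the water normal is forced to world-up.

```python
# Sketch of the puddle shading described above (hypothetical names).

WATER_LEVEL = -0.05  # assumed displacement depth where water pools

def shade_ground(displacement, ground_normal):
    """Return (material_name, shading_normal) for one shading point."""
    if displacement < WATER_LEVEL:
        # Displacement depth masks in the water material; the constant
        # y-up normal counteracts the unevenness of the geometry so
        # the puddle reads as a flat surface.
        return "water", (0.0, 1.0, 0.0)
    return "dirt", ground_normal
```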

Once we had the crater asset, we filled it with thousands of fighters: a mix of Outriders, Wakandans, Chitauri, Ravagers, and sorcerers, all with their own unique fighting styles.

Which sequence or shot was the most complicated to create and why?
While all the sequences had their challenges, I think keeping Smart Hulk consistent and believable was the biggest one.

Is there something specific that gives you some really short nights?
Having done so much environment/crowd/destruction/end battle work on the previous films, I was pretty confident we could handle that. I will say that Smart Hulk kept me up at night. I knew we would get there with the great team we had, but it was definitely nerve-wracking when we were in the middle of a complete rewrite of the pipeline and trying to get shots to final and out the door.

What is your favorite shot or sequence?
The diner sequence is a fun one, as it was really the first sequence where we had the system fully built and got to see the results. I’m proud of the work we did across the board: the Black Panther run, the Iron Spider beat, the H ship destruction. How can I pick just one?

What is your best memory on this show?
I have a lot of great memories from the show, but a standout might be when we got word back that the Russo Brothers and the studio loved our initial Smart Hulk shots.

How long have you worked on this show?
Official production began in May of 2018, with first turnovers mid-June of that year, and we finished on April 11th of 2019, just two weeks before the release!

What’s the VFX shots count?
We landed on close to 550 shots, with 100 or so additional omits.

What was the size of your team?
We had about 420 artists globally between San Francisco, London, and Singapore.

A big thanks for your time.

WANT TO KNOW MORE?
Industrial Light & Magic: Dedicated page about AVENGERS: ENDGAME on ILM website.

© Vincent Frei – The Art of VFX – 2019
