Two years ago, Pablo Helman told us about ILM's work on TEENAGE MUTANT NINJA TURTLES. He returns today to talk about this new episode and its technical evolutions.

How did it feel to be back with the Turtles?
Had a great time! I got a “familiar” feeling out of working on OUT OF THE SHADOWS. We spent a lot of time discovering the turtles and getting to know the characters on the first film. So it was natural for us to develop the characters further, making them more appealing and giving the turtles more personal traits that made each of them uniquely identifiable to the audience.

Can you tell us in detail about the creation of the Turtles and Splinter and how you enhanced their models?
After the first movie it was very clear to all of us that we needed to make the turtles and Splinter a bit more visually appealing. First, we softened their features by making slight changes to some of the geometry on their faces, but also by changing their rest pose to be more “friendly” and less “frowny”, if that makes sense. Secondly, we revisited their particular movement style in animation to soften their reactions and hopefully elicit sympathy.

How did you improve your Muse Facial Capture?
We’ve continued to develop ILM Muse into a 2.0 version that gave us a more robust foundation while still allowing for editing of the data. The new system withstands the intense pressure of having hundreds of shots concurrently in production, elegantly and efficiently. The solves also got substantially more accurate, and we started seeing a lot more fidelity coming through directly from the raw performances. All of these changes made for a much more animator-friendly interaction with the system and yielded results that ended up being more appealing.
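
To give a rough idea of what a facial “solve” involves: performance capture systems generally fit a face rig’s controls (often blendshape weights) to tracked marker data, frame by frame. Muse itself is proprietary, so the following is only a minimal regularized least-squares sketch of that idea, with hypothetical names and shapes:

```python
import numpy as np

def solve_blendshape_weights(markers, neutral, blendshape_deltas, reg=1e-3):
    """Fit per-frame blendshape weights to captured facial markers.

    markers:           (M, 3) captured marker positions for one frame
    neutral:           (M, 3) marker positions on the neutral (rest) face
    blendshape_deltas: (B, M, 3) per-shape marker offsets from neutral
    reg:               Tikhonov regularization to keep the solve stable
    """
    B = blendshape_deltas.shape[0]
    # Flatten to a linear system: markers - neutral ~ A @ weights
    A = blendshape_deltas.reshape(B, -1).T          # (3M, B)
    b = (markers - neutral).ravel()                 # (3M,)
    # Regularized least squares keeps weights small and animator-editable
    lhs = A.T @ A + reg * np.eye(B)
    rhs = A.T @ b
    weights = np.linalg.solve(lhs, rhs)
    return np.clip(weights, 0.0, 1.0)               # blendshape weights live in [0, 1]
```

Solving for rig weights rather than raw vertex positions is what keeps the captured data editable afterwards: an animator can dial individual shapes up or down without breaking the performance.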

This movie brings new characters like Rocksteady, Bebop and Krang. Can you tell us more about them?
The new characters’ facial performances were keyframed with the help of the actors’ reference videos of their delivery. We also benefited from stunt performances for the fighting sequences and from animator performance capture gathered on our stage here in San Francisco.

Can you explain in detail about the creation of their transformations?
The transformations were “mapped out” in our art department first, while at the same time we were developing two digital assets for each: “A” (the digital double of the actor) and “B” (the new digital character asset). The animators would map out the various regions to be transformed around the face, hands, arms and feet… and we went for it! There was also a development effort to lose the hair from the human version of Rocksteady in favor of a rhino skin texture.
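
ILM has not published how these transformation setups work; a common way to drive this kind of region-by-region change is with per-vertex masks that ramp each region from the “A” asset to the “B” asset on its own schedule. A minimal sketch, with hypothetical inputs:

```python
import numpy as np

def blend_transformation(verts_a, verts_b, region_masks, region_times, t, ramp=0.15):
    """Blend the digital double ("A") into the creature asset ("B")
    region by region, each region transforming at its own time.

    verts_a, verts_b: (V, 3) corresponding vertex positions on both assets
    region_masks:     dict of region name -> (V,) per-vertex weights in [0, 1]
    region_times:     dict of region name -> normalized start time in [0, 1]
    t:                current normalized time of the transformation
    ramp:             how long each region takes to fully transform
    """
    blend = np.zeros(len(verts_a))
    for region, mask in region_masks.items():
        # Smoothstep ramp so each region eases from A to B instead of popping
        s = np.clip((t - region_times[region]) / ramp, 0.0, 1.0)
        s = s * s * (3.0 - 2.0 * s)
        blend = np.maximum(blend, mask * s)
    return verts_a + blend[:, None] * (verts_b - verts_a)
```

This assumes the two assets share vertex correspondence; staggering the start times per region (hands first, face last, say) is what gives the transformation its readable, wave-like progression.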

Which character was the most complicated to create and why?
Well, this is a movie in which the digital characters interact with the humans and their interactions have to be seamless, so all the digital characters were given an incredible amount of care, not only in their performance but also in texturing, lighting and compositing, so that the audience could believe they existed in the same world as their human co-stars.

On the animation side, what were the main challenges with so many different characters?
I think this was the highest number of animators on a show that I have ever been involved with. I’d say hundreds across all the ILM facilities around the world (London, Singapore, Vancouver and San Francisco). So it was important that we standardize the software and prepare it for hundreds of users, hundreds of shots and eight digital characters. The methodologies used in animation also varied and included facial performance capture, Imocap (capturing performance on the set), traditional studio motion capture and keyframe animation.

How did you work with your animation supervisors about that?
Kevin Martel (Animation Supervisor), Shawn Kelly (Associate Animation Supervisor), Robert Weaver (Co-Visual Effects Supervisor) and I would meet every morning for ‘pre-dailies’ to share not only animation but also rendered work. It was all up for grabs! We’d go through everything shot by shot to make sure we were all on the same page.

A major sequence happens in the sky and ends on the ground. Can you tell us more about its filming and how you created it, including all the destruction?
I went to Brazil for a couple of weeks to shoot helicopter, boat and ground-based plates at Iguazú Falls. The sequence had been previsualized, but when we got to the location we realized that we would have to change it based on the availability of resources and on safety. We started with helicopter plates to capture the fall into the river and continued down from there! We tried to use as much of the actual plate as possible in each shot and match the water with our CG fluid simulations, which were generated for interaction wherever needed. As much as water is getting easier to generate, this was the first time we took on huge whitewater river rapids that had to match real plates. It was great to have so many real water plates for reference… it made the work a lot more efficient and the targets much clearer. Our proprietary fluid sim system was used for the water and for the plane destruction.
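
ILM’s fluid system is proprietary, but whitewater (foam and spray) in the simulation literature is commonly seeded where the liquid surface is both fast-moving and strongly curved, as at breaking rapids. A minimal sketch of that emission criterion, with made-up thresholds:

```python
import numpy as np

def whitewater_emission(velocity, curvature, v_min=4.0, c_min=0.5):
    """Score where a liquid surface should spawn whitewater particles.

    A common criterion from the literature (not ILM's proprietary one):
    emit where the surface moves fast AND is strongly curved, i.e. at
    breaking wave crests and churning rapids.

    velocity:  (N, 3) surface-point velocities in m/s
    curvature: (N,)   surface curvature magnitudes
    """
    speed = np.linalg.norm(velocity, axis=1)
    # Normalize each criterion into a [0, 1] potential
    v_pot = np.clip((speed - v_min) / v_min, 0.0, 1.0)
    c_pot = np.clip((curvature - c_min) / c_min, 0.0, 1.0)
    # Emission probability is the product: both conditions must hold
    return v_pot * c_pot
```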

The final battle involved a massive number of CG elements. How did you manage this aspect?
The final battle was also based on real NYC plates that were gathered with a very systematic approach. Plates were captured from the air by helicopter, film cameras rolled on the ground, and we augmented that material with a stills unit that went around the city buildings at different altitudes and at different times of the day. All those plates, together with the data we captured, were organized into an environment that was combined with digital assets, such as the forming Technodrome ship in the sky.

Can you tell us in detail about the creation of this sequence?
The final sequence required a lengthy back and forth between ILM and the editorial department to make sure that narratively we touched on all the right moments. Same thing with camera movement and lens coverage; it took a long time to get all the pieces working together. I think it was a shining example of digital moviemaking, in which the client and the VFX artists collaborate to tell a story.

On the technical side, how did you handle the huge render times of this final sequence?
We employed Pixar’s RenderMan for the final renders of the sequence. Hundreds of artists contributed to the shots, which were incredibly complex. There was a lot of math involved in calculating how long and what resources were needed to complete the work. Lots of coffee and chocolate were also involved; never underestimate the power of caffeine and sugar!
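
For a sense of the “math” involved, render planning usually comes down to core-hour estimates against the capacity of the farm. The numbers below are entirely hypothetical, just to show the shape of the calculation:

```python
# Back-of-the-envelope render-farm estimate for a sequence.
# Every number here is made up; real planning is far more detailed.
shots = 300                     # shots in the sequence
frames_per_shot = 120           # ~5 seconds at 24 fps
layers_per_frame = 6            # characters, environment, FX passes, etc.
core_hours_per_layer = 8        # average RenderMan cost per render layer

total_core_hours = shots * frames_per_shot * layers_per_frame * core_hours_per_layer
farm_cores = 40_000             # hypothetical farm size

days_of_rendering = total_core_hours / farm_cores / 24
print(f"{total_core_hours:,} core-hours ~= {days_of_rendering:.1f} days on the farm")
# 1,728,000 core-hours ~= 1.8 days on the farm
```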

Was there a shot or a sequence that prevented you from sleeping?
I think the Brazil sequence because of the water and the end sequence because of its complexity were two sequences that got in the way of my getting a good night’s sleep on more than one occasion. Also, I know the car chase at the beginning of the film kept Robert Weaver up late as well.

What do you keep from this experience?
This was one of the most rewarding experiences I have ever had. The client was awesome and we all contributed immensely to the content. I can’t ask for more than that!

How long have you worked on this film?
I worked on this project for over two years.

How many shots have you done?
The final count stands at 1300 shots, 900 of which are creature shots.

What was the size of your team?
Globally we had 550 ILM artists working on the film, and we also had small crews at BASE FX, Virtuos and Whiskytree.

What is your next project?
Currently, I’m working with Martin Scorsese on a movie called SILENCE.

A big thanks for your time.

// WANT TO KNOW MORE?

ILM: Dedicated page about TEENAGE MUTANT NINJA TURTLES: OUT OF THE SHADOWS on ILM website.

© Vincent Frei – The Art of VFX – 2016
