Daniele Bigi began his career in visual effects in the early 2000s and worked at studios such as Framestore and MPC before joining ILM in 2015. He has worked on many films, including PROMETHEUS, GUARDIANS OF THE GALAXY, DOCTOR STRANGE and READY PLAYER ONE.
Steve Aplin started working at ILM in 1997, moved to DNEG in 2009, and returned to ILM in 2016. He has worked on films such as STAR WARS: EPISODE III – REVENGE OF THE SITH, TRANSFORMERS: REVENGE OF THE FALLEN, CAPTAIN AMERICA: CIVIL WAR and A QUIET PLACE.
How was the collaboration with director Guy Ritchie and VFX Supervisor Chas Jarrett?
Steve Aplin // Guy and Chas had worked together on a few projects prior to ALADDIN, so they definitely had a shorthand when it came to their communication. It was clear that Guy placed a lot of trust in Chas’ guidance in crafting the look and style of the VFX in the film, and gave him the freedom to explore creative ideas with us and his editor, James Herbert. This collaboration worked well, giving Chas the space to come up with ideas in post which really injected extra visual flair and life into the sequences. Of particular note was the Broadway-esque theatrical lighting exploration in the Friend Like Me sequence, which came quite late in the post-production schedule but ended up taking it to a whole new level visually.
How did you split the work amongst you and at the ILM offices?
Daniele Bigi // ILM’s London studio was what we call the “Hub” for the show. We developed Genie from modelling through look dev up to the complex tail simulation. Many variations of Genie were created in the London office, and the team also worked on Rajah the tiger, the massive set used in the “Friend Like Me” sequence and the environment used for the “Cave of Wonders” sequence. Not only did the cave set contain hundreds of millions of instances to create the massive treasure trove, but it was also modelled to be dynamically destroyed by the FX team. We also converted the city of Agrabah so we could manage and render it using Clarisse.
Meanwhile, ILM’s Vancouver team developed the vast palace of the Sultan, including the exterior environment and several interior set extensions. The Vancouver team also created the largest environment in the movie: the entire city of Agrabah. The city is about 1.5 kilometers wide, and the team additionally created a section of desert around it that extends for 5 kilometers. Vancouver also developed the character assets for Iago and the Jafar Genie and worked on all of those shots.
Lastly, our Singapore team contributed several sequences and many one-off shots as well.
Can you tell us more about the previs and postvis process?
Steve Aplin // Key sequences were extensively previs-ed early on by Proof and ILM. It was especially important to focus in on finding the right feel for the Whole New World carpet flight, as the motion would end up being used to drive the base for the motion control rig the actors would be performing on, consequently locking us into the gross motion and camera choreography. Postvis for this sequence involved rough camera solves and rigs in Maya to allow animators to drive the resulting plate elements and their local camera motion through broader sweeping paths, pieced together with rough sketches of the 3D environments to give editorial a greater understanding of how the sequence would play out (a similar approach was taken for the Carpet Chase over Agrabah).
Postvis for the Genie sequences and a few of the sequences featuring Abu was taken on by a small team of 2D animators overseen by Chas, who blocked in rough ideas for those shots to help drive creative discussions and give editorial material to begin piecing scenes together, usually on 2s or 4s.
Can you explain in detail about the creation of the Genie?
Steve Aplin // Built by our London team, Genie’s design went through multiple iterations. It was decided early in the production that we would base Genie’s look closely on Will Smith, but originally the character was plumper with exaggerated features. Eventually a muscular look was chosen and the facial adjustments were stripped back, so that in the final version the Genie’s face is an exact 1:1 match of Smith’s face, to maintain the fidelity and detail of his performance.
Daniele Bigi // Our journey began as usual with many concepts, initially exploring stylized versions of Genie that vaguely resembled the original version. Although some of these concepts looked very appealing and charming, everyone was concerned that this type of style wouldn’t fit with the live-action world. This brought us to explore versions of Genie with more features from Will Smith’s body and face. We also played with the idea of having Genie’s body as real as a normal human’s but very chubby, to allude to the style of the original beloved character. We created around 250 shots with Genie. Every time you see him in the movie, he is a fully CG character, although we used Will’s performance and applied it to the CG version. There are no shots of the blue Genie that are live-action.
Eventually, we decided on a more muscular look, which gave Genie a stronger and more powerful body.
How did you handle his rigging and animation?
Steve Aplin // The rigging for the Genie really fell into three categories: body, face and tail. The body rig needed the flexibility to venture into squash-and-stretch territory, due to the nature of the direction for some of the more physically challenging shots. We knew we would never need to go as far as the original 2D animated Genie, as this new version was always intended to be its own thing, and pushing the physiology too far into a cartoon world jarred against what would be a photoreal character. So, if the Genie needed to move somewhere with incredible speed, we flavoured in more cartoony mechanics, but only for very brief moments, and would quickly return him to something which felt solid and weighty, so the audience would never be pulled out of believing he was real.
The tail rig was incredibly important to the character of the Genie. It needed to be very versatile, as its posing lent strongly to the overall silhouette and would act as the base shape to drive the FX particle sims. It also helped describe the Genie’s travel in fast-moving moments, almost like a motion-blurred speedline, so it needed controls to twist, bend, and squash and stretch. There was a constant back and forth between the animators and the FX artists to make the particle sims more controllable and open to art direction.
The Genie’s face rig solution was a combination of sliders on the face controlling blend shapes derived from scan data and broken down into a manageable library, facial motion capture data deltas, and on-mesh controllers driving deformers.
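That layered evaluation (neutral mesh, slider-weighted blend shapes, capture deltas on top) can be sketched roughly as follows; the function and variable names are illustrative, not ILM’s actual rig code:

```python
def evaluate_face(neutral, shapes, slider_weights, mocap_deltas):
    """Layered facial evaluation: neutral pose plus weighted blend-shape
    offsets plus per-frame motion-capture deltas.

    neutral:        list of (x, y, z) neutral-pose vertex positions
    shapes:         dict name -> list of (x, y, z) sculpted target positions
    slider_weights: dict name -> float in [0, 1], driven by the face sliders
    mocap_deltas:   list of (x, y, z) per-vertex offsets solved from capture
    """
    result = [list(v) for v in neutral]
    for name, weight in slider_weights.items():
        target = shapes[name]
        for i, vertex in enumerate(result):
            for axis in range(3):
                # Each slider adds its shape's offset from neutral,
                # scaled by the slider weight.
                vertex[axis] += weight * (target[i][axis] - neutral[i][axis])
    for i, vertex in enumerate(result):
        for axis in range(3):
            # Capture deltas layer on top of the slider-driven pose.
            vertex[axis] += mocap_deltas[i][axis]
    return [tuple(v) for v in result]
```

The on-mesh deformers mentioned above would then run as a further layer after this additive stage.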
For the animation, upper body motion was largely derived either from motion capture sessions with Will Smith or from keyframes referencing footage of Will performing on set interactively with the cast during principal photography. This had to gel perfectly with the facial performance, as we know that when body mechanics don’t support the face performance 100%, we lose the essence of the performance. Combining body animation from reference/motion capture of different takes with the face, though challenging, became a necessity for clearer storytelling in certain cases. In addition, the animators needed to incorporate flight mechanics into a large number of the shots, balancing a natural-feeling float for more static shots against zippy acrobatic motion for others, all the while maintaining a believable sense of weight and mass to the Genie. You could say that the tail was a character in itself, often needing to be keyed on 1s to maintain the precise nature of its posing/silhouette and motion path.
Can you tell us more about the facial animation and the Medusa technology?
Steve Aplin // Medusa technology, created by Disney Research Studios and implemented by ILM, was used for capturing an accurate static mesh and textures, combined with an ICT scan, to create the base Will Smith model. In addition, key facial expressions were broken down into animatable shapes for solving the motion capture using Disney Research Studios’ new proprietary Anyma technology. There were three specific Anyma shoots to capture Smith’s face performance – the first was a proof of concept for the “Friend Like Me” sequence, the second was during main photography at ILM’s capture stage in Chiswick, and a third during reshoots provided most of the data used, as by this point dialogue and sequences were close to final. The Anyma rig consisted of a number of cameras capturing 4K images and a suite of proprietary software to process the footage. The system is designed so that there aren’t any restrictions on performance other than the need to stay in view of the cameras, which meant we were able to capture full body performance at the same time as the face. After receiving the shoot data, the facial capture team at ILM would first solve the motion and then retarget it to the Genie face mesh. This would then be handed off to the animators to be combined with the body performance in progress, and reworked depending on how much the facial performance needed adjusting (dialogue reordered, expressions exaggerated or reduced, additional dialogue added with no source Anyma footage, eye direction changes, etc.). On top of this, animators would keyframe in additional neck performance, including the platysma, Adam’s apple and sternocleidomastoid, plus subtle sticky lips and eyelids.
Daniele Bigi // Will’s facial capture was essential for Genie’s performance, even more so than his motion capture. The technologies used for facial and motion capture are completely different. To capture the body performance, actors wear special suits covered with tracking markers. These markers allow us to record every body movement and export it onto a digital version of the actor (a digital rig), which is then used to animate the digital character. Capturing facial data is a lot more complex, due to the many subtle movements the human face can generate. Think about how many expressions we are capable of: a slightly different squint of the eyes or mouth movement can generate completely different emotions and feelings. One of the most important aspects we had to consider when creating Genie was getting a consistent, predictable result matching Will’s expressions in every single shot. In order to get the best possible facial performance, we collaborated with our colleagues at Disney Research in Zurich and combined two technologies: Medusa and Anyma.
Medusa’s technology was used to capture Will’s expressions and extract incredibly detailed geometry. This face-shape library was then fed into a new piece of software named Anyma. Anyma, in a nutshell, is a markerless, ADR-style facial performance capture technology. The results are extraordinary: the output was an animated face geometry with consistent topology, without any temporal geometric noise, and with very precise lip and eye contours. This was achieved thanks to a plug-in called Sensei that uses machine learning to help artists trace specific face landmarks such as the eyelid shapes and lip profiles. I think this new tech played a massive role in ILM’s new facial performance pipeline, and ALADDIN is the first project ever to utilize it.
How were the shots of Will Smith playing the Genie filmed?
Steve Aplin // Will was shot on location with the other actors for much of the shoot, which gave a very natural chemistry between them all. Clean plates were also shot so the live-action Will could be easily removed and replaced with his blue digital self. He wore an ILM Imocap fractal suit for this shoot to enable easier tracking of joint and key landmark positions, allowing animators to emulate his actions as options for the digital Genie. When capturing his facial performance during the Anyma shoot, Will was able to replicate a lot of his body performance, which at this time was motion captured, so there was a wealth of motion for Guy to play with to better pinpoint what could work best in the context of a particular scene.
The Genie has a big musical moment. How did you approach and create this sequence?
Daniele Bigi // “Friend Like Me” is, without a doubt, one of the most complex sequences in the movie. With dozens of characters, fireworks and pyrotechnic effects, Genie wasn’t necessarily the hardest challenge in this sequence.
Here are some of the most complex aspects we had to tackle:
- The live-action set used during the shoot was very small compared to what was needed to accommodate all the action, so we had to recreate an enormous “Cave of Wonders” entirely in CG.
- The choreography was extremely complex: Genie not only changes his scale several times (from human size to gigantic) but also changes his shape and clones himself several times. This meant we had to build a complex rig and setup to be able to squash and stretch him into different characters and proportions.
- The set lighting was very complex and needed to change almost on a shot by shot basis. We had hundreds of lights that needed to move, following the rhythm of the music. This required custom lighting setups for pretty much each shot.
- The lighting team at ILM did an amazing job; we approached this sequence by designing our CG light rig similarly to what is done in massive musical theatre shows, but without the constraints of real physics. The result is spectacular.
Let’s just say, recreating this sequence in CGI wasn’t easy. I spent most of my time during post-production on it, so I am probably a bit biased, but I think “Friend Like Me” is the most exciting and funny scene in the movie.
How did you create and animate the magical FX used by the Genie?
Daniele Bigi // I think Genie’s tail development was at least as complex as the character’s face and body setup. The tail was supposed to be fully controllable in animation, but to get the look right we had to run a volumetric and particle simulation on every single shot. We set up a simple piece of geometry that animation initially used to control the tail, and we used the motion established by animation to drive certain aspects of the tail simulation.
On a typical show, FX usually begins to work on a shot once the animation is fully approved, but in this case we couldn’t follow the standard workflow. In order to show rendered WIP versions of any Genie shot, we had to run simulations for Genie’s tail. The problem with any simulation is the unpredictability of the results and the indirect nature of controlling its shape and motion.
When we run simulations for explosions or water, no one wants to art direct the exact shape and motion of the ocean waves, or establish the exact speed of the flames propagating from a big explosion. But for Genie’s tail, we needed that level of control. The tail was an essential part of Genie’s look and behavior, and delivering emotion was an integral part of Genie’s performance, so everyone was keen to art direct every aspect of the tail. We encountered numerous challenges, all of them smartly overcome by the amazing ILM FX team, led by Jamie Haydock and Tom Raynor.
The tail was made up of several components:
- A layer of blue smoke that was supposed to match and blend with Genie’s skin colour. We developed a method to add a specular component to the volume shader to get a perfect match with the skin shader response. From there, we simply had to tweak the colour of the scatter component to get interesting subtle colour variations into the shading.
- A layer made of complex, evolving internal lights emanating from a small, self-illuminated, nebula-like structure, with golden and red flakes generated with a particle instancing approach, constantly emitted by an ever-evolving, cloth-simulated atomizing sash and red ribbons. The tip of the tail was designed to charge up and emit light every time it interacted with any part of the set. In dozens of shots with Genie zipping at high speed in different directions, the smoky blue tail was left behind, leaving Genie with only his torso. The initial solution to that problem was to run the simulation in local space, then constrain the cache to Genie’s pelvis and transform the cache into world space, but the trick was very visible and didn’t look convincing or believable.
We decided to always run simulations in world space to get more accurate physics, but the initial problem was still there. In order to solve it, we created a very dense emitter, constructed with intertwined volumetric ribbons that evolved over time. The goal was to create an emitter that looked as good as the simulation coming out of it.
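The rejected first pass described above – simulate the tail once in pelvis-local space, then rigidly transform the cached points by the pelvis transform each frame – can be sketched like this; names are illustrative, not ILM’s pipeline code:

```python
def cache_to_world(local_points, pelvis_rotation, pelvis_translation):
    """Rigidly transform a local-space simulation cache into world space
    by the character's pelvis transform for the current frame. Because
    every cached point follows the pelvis exactly, fast moves read as
    'stuck on' rather than trailing behind, which is why this approach
    was abandoned in favour of true world-space simulation.

    local_points:       list of (x, y, z) cached sim points in pelvis space
    pelvis_rotation:    3x3 row-major rotation matrix for this frame
    pelvis_translation: (x, y, z) pelvis world position for this frame
    """
    world = []
    for p in local_points:
        # Rotate the point into the pelvis orientation...
        rotated = [
            sum(pelvis_rotation[row][col] * p[col] for col in range(3))
            for row in range(3)
        ]
        # ...then offset it by the pelvis world position.
        world.append(tuple(r + t for r, t in zip(rotated, pelvis_translation)))
    return world
```

A world-space simulation, by contrast, lets the smoke inherit velocity and lag naturally behind the character instead of being carried rigidly.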
Other ad hoc solutions were put in place to keep the tail evolving in shots where Genie was scaling up dynamically. In this case, we couldn’t simply scale the cache; we had to emit more density into the system to fill the tail at the right rate for the volume it was supposed to occupy. Not an easy task when you have to maintain a specific look driven by several components. The speed of the turbulence and the frequency of the light emitted by the nebula component were all controlled and adjusted per shot. We played with these visual components to enhance Genie’s performance. For example, in the “Friend Like Me” sequence, we played a lot with Genie’s tail and used most of these effects to create as much magic inside the tail as possible. In many other shots, where Genie was expressing sadness or disappointment, we toned down all the tail FX. We changed the dynamism of the tail to avoid any distraction and help deliver the correct emotional message.
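The density bookkeeping during a dynamic scale-up follows simple volume reasoning: if Genie’s linear scale doubles, the tail’s volume grows eightfold, so the emitter has to inject roughly the cube of the scale factor in extra density to keep the look consistent. A minimal sketch of that reasoning, not ILM’s actual solver parameters:

```python
def scaled_emission_rate(base_rate, linear_scale):
    """Emission rate needed to keep a volumetric tail at roughly constant
    density while the character scales up: volume grows with the cube of
    the linear scale, so the density emitted per frame must grow the
    same way."""
    return base_rate * linear_scale ** 3
```

For example, a character scaled up 2x needs roughly 8x the base emission to keep the tail from thinning out.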
What were the main challenges with the Genie?
Steve Aplin // Speaking to the animation side of the challenges, we principally needed to find a way to faithfully reproduce Will Smith’s performance, but have his gross body motion be both ethereal and physically grounded in reality, even though we constantly had to play with breaking the laws of physics, as well as have him move freely around scenes whilst maintaining eyelines with the live-action characters.
Chas Jarrett had the great idea early on of getting a small 2D animation team together to postvis most of the Genie scenes and some of the key Abu shots. This gave editorial material to work with early on to pull scenes together which might not have otherwise been possible had it been approached in 3D. In tandem this gave the ILM animation team a clearer steer towards the staging of what the Genie might be required to do across the film. We would then do our best to implement these ideas whilst always ensuring nothing was pushed so far as to break the feeling that the genie was a photoreal and believable character against the other live action performers.
We also had the additional challenge of sequences such as the musical number ‘Friend Like Me’ or the cave breakout, which required either multiples of the Genie across a number of shots or costume and body variations, including a ninja, a cowboy, waiters, a maître d’, an air steward and a mummy, amongst others. For the dance number, a troupe of expertly choreographed performers was motion captured at our Chiswick stage, and that data was then retargeted to Genie rigs and pulled together to create a chorus line of Genies, each with their own keyframed expression changes in the wider shots, and various Anyma takes from Will’s face capture or keyframed performances from reference for the medium and close-up group shots.
Can you tell us more about crowd creation and animation?
Steve Aplin // There were a number of sequences requiring large crowds in the film. One of the most challenging was the Prince Ali parade, as it took place in a very exposed part of Agrabah and consequently needed around 4,000 crowd agents to fill out the space seen in the 70 or so shots the sequence called for. ILM’s crowd system has been developed to work on top of Houdini, giving artists the ability to load in pre-canned motions, create state machines or simply play clips, re-adapt the skeleton to the terrain procedurally and avoid foot sliding. To preserve and QA the visual continuity across multiple shots, we developed a way to see texture variations in the viewport and to produce precomped dailies with the plate projected onto the lidar scan and roto.
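A crowd agent’s clip-playing state machine of the kind described can be sketched minimally like this; the states, events and cue names are illustrative, not ILM’s Houdini implementation:

```python
class CrowdAgent:
    """Minimal crowd-agent state machine: each state corresponds to a
    motion clip being played, and events (a clip finishing, a music cue
    being reached) drive transitions between states."""

    def __init__(self, transitions, start_state):
        # transitions: {state: {event: next_state}}
        self.transitions = transitions
        self.state = start_state

    def fire(self, event):
        """Advance to the next state if this event has a transition from
        the current state; otherwise stay put (unknown events are ignored)."""
        self.state = self.transitions.get(self.state, {}).get(event, self.state)
        return self.state

# Hypothetical parade agent: idle until the music cue, dance until the
# dance clip ends, then return to idle.
parade_transitions = {
    "idle": {"music_cue": "dance"},
    "dance": {"clip_end": "idle"},
}
```

In a production system each state would also carry the clip to play, its blend times, and the terrain re-projection step mentioned above.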
The crowd artists worked with our proprietary motion database, which gave them instant access to over a thousand different clips, each tagged to make it easy to track down specific actions and vignettes. A large proportion of the clips for the digital human extras were captured at our Chiswick stage, with a heavy emphasis on rhythm to sync with the sequence’s signature song. These motions could be dispersed among the crowd assets, of which there were eight unique character builds for the males and four for the females. To add greater variation, a library of facial expressions was created to be parsed out in addition to costume and colour variants. The crowd team ran cloth simulations on these as well as on miscellaneous background props requiring those details, such as flags and billowing sheets, and they made extensive use of vignettes to create believable interaction between agents, allowing them to keep these performances really close to camera and still feel like unique moments.
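A tag-indexed clip library like the one described can be sketched as follows; the class, clip and tag names are illustrative, not ILM’s proprietary API:

```python
class MotionDatabase:
    """Minimal sketch of a tagged motion-clip library: artists query by
    tags instead of scrubbing through thousands of files by name."""

    def __init__(self):
        self._clips = []  # list of (name, frame_count, tag set)

    def add(self, name, frames, tags):
        self._clips.append((name, frames, set(tags)))

    def find(self, *tags):
        """Return the names of clips carrying every requested tag."""
        wanted = set(tags)
        return [name for name, _, clip_tags in self._clips
                if wanted <= clip_tags]

# Hypothetical usage: register clips, then query by action tags.
db = MotionDatabase()
db.add("cheer_loop_01", 120, {"cheer", "rhythm", "standing"})
db.add("haggle_vignette", 240, {"vignette", "standing"})
```

Here `db.find("rhythm", "cheer")` would return only the clips tagged with both actions, which is the kind of lookup that makes a thousand-clip library browsable.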
A menagerie of animals was also needed for the parade, including elephants, giraffes, monkeys, peacocks and camels. Animators created keyframed actions and cycles to be used by the crowd team, who choreographed the large numbers alongside their human crowd counterparts.
Can you tell us more about the creatures animation?
Steve Aplin // Of the three hero creatures – Abu, Iago and Rajah – Abu was the hardest nut to crack. At the beginning of post-production, the idea was to have him be as close to a real capuchin in terms of motion and attitude as possible, but this led to a lack of personality; we were all aware of how beloved a character Abu was from the original film and wanted to maintain some of that character. However, the more we had him behave in an anthropomorphic way, or be more cognisant of what was being said in a scene, the less appealing he became. He no longer felt like a real animal, but instead a caricature of one. So, for any given shot, the animation team would scour our reference library (composed largely of Fidget, the trained capuchin brought in for reference, and whatever could be found on YouTube of other trained or wild capuchins) and edit pieces of footage together to give the backbone of the motion intended. They could then flavour in specific moments called out for storytelling or personality building, safe in the knowledge that a basis of very natural motion dictated the shot. Abu also needed to transform into a donkey, a camel and finally an elephant, leading into the Prince Ali Parade sequence. This threw up a new host of challenges for rigging and for one of our London animation leads, Kiel Figgins, who was tasked with the transformations.
Iago was more of a technical challenge, requiring extensive research into the flight mechanics and especially the wing behaviour of parrots. Animators came up with reference options for their shots as a guide for the naturalistic motion required, and when the dialogue choices arrived from editorial, quite late in post, these would be massaged in with supporting head and body motion. Iago also had to transform, this time into a monstrous version of himself, nicknamed ‘Jafargo’, and this fell to Dave Crispino, one of our Vancouver Animation Supervisors, to tackle with his team. This darker version of Iago had to feel like a terrifying threat to Aladdin, Jasmine and the carpet in one of the film’s most dynamic set pieces, so a great emphasis was placed on managing his sense of weight, speed and flying mechanics, and on his composition in frame, to dwarf his targets yet remain physically plausible.
Rajah, from an animation perspective at least, was probably the most straightforward character to understand. Again, animators found reference footage and pieced together key slices of action or behaviour to create a basis for their blocking. When the general structure and timings for a given shot or sequence were in place, they would refine the motion, adding emphasis on weight and physics to have him feel as natural as possible.
Is there something specific that gives you some really short nights?
Daniele Bigi // When we first started working on ALADDIN, I thought our biggest challenge would be matching Will Smith’s facial expressions. Obtaining a perfect result for many shots, with little time and a limited budget, wasn’t easy. But after the first few test shots, the results were promising to the point that in many shots Genie’s facial animation was approved on the first pass. This is a huge achievement, thanks to the incredible work by the ILM team and the staggering precision obtained with the new ILM facial performance capture pipeline based on Medusa and the new Anyma software.
Thinking about it, there was one shot that left me sleepless at times! It was the shot used to develop and test every aspect of Genie’s look, from his skin shading to his colour, up to the dynamic movement of his tail. For this shot alone, we created hundreds of versions. This means that, between all departments, we reviewed and improved the final result hundreds of times! Once this shot was approved both internally and by the director, we used it as a reference and target for many other shots in the movie.
Steve Aplin // The size and complexity of the show in general kept me up for a fair few nights! There were some very steep mountains to climb, each presenting its own unique challenges and obstacles. The Whole New World flying carpet sequence was one in particular which had huge challenges from a creative standpoint through to the technical aspects of translating the plates from the motion control shoot into a believable, beautiful and entertaining final sequence.
What is your favorite shot or sequence?
Daniele Bigi // Hard to pick one from over 1,600 shots and dozens of sequences. I am probably a bit biased, but I think the most exciting and funny sequence is “Friend Like Me”, to which I dedicated most of my time during the post-production of the movie.
What is your best memory on this show?
Daniele Bigi // The animated version of ALADDIN is such an iconic Disney movie. I watched it several times as a kid, and I always had so much fun every time Genie made a joke and transformed. One of the best memories I’ll always carry with me is the challenge of creating a new adaptation of Genie that was a tribute to and celebration of the old cartoon, but that also introduced changes and innovations to modernize it.
After the first trailer with Genie, there were mixed reactions from the audience. I think this was partially because they could see so little of the new Genie, and partially because of nostalgia for Robin Williams’ incredible performance, which made them skeptical about a new take on it. We were all apprehensive and waited with trepidation for the movie’s release date. You can never be sure that the audience will like what you did, no matter how hard you worked on it. Once the movie came out, we could finally breathe again: the audience loved the new Genie! Yes, it’s different from the animated version, but it had to be. Thanks to Will’s performance and ILM’s outstanding work, we created a Genie that we can be proud of.
Steve Aplin // I loved watching what Will Smith brought to this version of the Genie. It stands on its own merits, not trying to emulate or compete with the amazing work Robin Williams accomplished on the original animated film; it is purely Will bringing his larger-than-life personality and experience to the performance, clearly enjoying every moment of it with relish. When I saw the first successful animation and lighting tests come together, and felt that we were able to faithfully recreate this digitally and that people would laugh and enjoy our blue Genie as they would a live-action Will – that was a great moment.
I’ll also add in the ‘Eureka!’ moment of finally figuring out the recipe for Abu’s animation after much exploration as a huge highlight. Finding that perfect balance of utterly realistic capuchin monkey motion with the comic timing of a mime artist was an incredible challenge for the animation team and incredibly satisfying to watch evolve into its final state.
How long have you worked on this show?
Daniele Bigi // I remember finishing READY PLAYER ONE around January 2018 and starting work on ALADDIN at the beginning of February; we delivered everything by March 2019.
Steve Aplin // I started to get involved with the show around November 2017, as previs was in progress, through to delivery in March 2019.
What’s the VFX shots count?
Daniele Bigi // Globally, ILM contributed 1,663 shots to the film, and we divided the sequences between three ILM facilities. ILM’s London studio worked on 510 shots, Vancouver on 478 and Singapore on 192.
We also collaborated with third-party vendors, who helped us deliver an additional 483 shots.
Hybride worked on 346 shots, One Of Us on 80 and Important Looking Pirates on 57.
What is your next project?
Daniele Bigi // I’m currently working on STAR WARS: THE RISE OF SKYWALKER.
Steve Aplin // As am I!
What are the four movies that gave you the passion for cinema?
Daniele Bigi // There are so many movies that inspired my passion for cinema! After all, I decided to work in this industry because of how much I loved watching films as a kid. It’s hard to pick only four, but if I have to, these are the films that made a big impact on my future as a VFX artist.
1. STAR WARS: THE EMPIRE STRIKES BACK: this film is revolutionary for so many reasons. Not sure there is any need to explain why I loved it so much.
2. JURASSIC PARK: the first scene, when you look up to see the Brachiosaurus, sent chills down my spine. And so did the first time you see the T-Rex.
3. TERMINATOR 2: especially the T-1000 effect. I loved the scene in which the T-1000 emerges from the flames and the scene where his face passes through the metal bars. It’s incredible how convincing the latter is. You can still watch it frame by frame today and study the details, and it holds up even after 28 years!
4. TOY STORY: in 1995, when most companies were just starting to explore 3D, Pixar managed to create a fully CG movie. It was like watching a little window open onto the future of visual effects and CG.
Steve Aplin // 1. THE EMPIRE STRIKES BACK – adventure escapism and mystery bound together in the classic hero’s journey.
2. THE SHAWSHANK REDEMPTION – pure emotional cinematic pleasure!
3. JURASSIC PARK – the T-Rex sequence was what pushed me to pursue a career in VFX and animation. It’s still as exciting and successful a sequence to watch today as it was when it was first released.
4. RAIDERS OF THE LOST ARK – exceptional characters, adventure and action set pieces.
A big thanks for your time.
WANT TO KNOW MORE?
Industrial Light & Magic: Dedicated page about ALADDIN on ILM website.
Chas Jarrett: My interview with Overall VFX Supervisor Chas Jarrett about ALADDIN.
© Vincent Frei – The Art of VFX – 2019