In 2022, Jabbar Raisani discussed his visual effects work on the final season of Lost in Space. Following that, he worked on the fourth season of Stranger Things.

In 2021, Marion Spates detailed the work of Digital Domain on WandaVision. Afterwards, he contributed to Stranger Things.

Adam Chazen launched his career in visual effects more than 16 years ago. Throughout his career, he has contributed to projects like 2012, Game of Thrones, The Matrix Resurrections, and the fourth season of Stranger Things.

How did you get involved on this series?

Jabbar Raisani // I was initially hired to direct episodes 3 & 4.

Adam Chazen // I had worked with Jabbar in the past on Game of Thrones and Stranger Things, but never had the opportunity to work with him in a producer/supervisor capacity, so when he called I was excited and of course said yes.

What is your role on this series, and how do you work with other departments?

Jabbar Raisani // As an EP, director, and VFX supervisor on the show, I had tons of creative freedom. Overall, the experience was most similar to directing my own feature, since there was so much creative control and freedom.

Adam Chazen // I am the VFX Producer. I was brought on during post production, so the main department that I worked with outside VFX was the Post team. Luckily I had worked with Chris Zampas and Jake Chusid previously on Game of Thrones so we already had a shorthand between us and that made it easy to hit the ground running.

(L to R) Sebastian Amoruso as Jet, Director Jabbar Raisani in season 1 of Avatar: The Last Airbender. Cr. Robert Falconer/Netflix © 2024

How did you organize the work between you?

Adam Chazen // I sat with Jabbar and Marion and we first decided what vendors we wanted to have on the show. As Jabbar mentioned, he and Marion had worked with most of the vendors previously and I had separately had similar experiences with the same vendors so we were on the same page with that. We then laid out their strengths, whether it be characters, environments, effects, etc.

As cuts became ready, we had spotting sessions with the team and Netflix to call out the work needed. Once I had all that information, I was able to break down the shots into my database and tag what kind of work was involved in each shot. I also had all of the shot work separated out by location tags (Southern Air Temple, Agna Qel’a, Omashu, Pohuai, etc.). We tried to give full sequences in a given environment to one vendor where possible, but when it came to certain characters/creatures or unique effects, we needed to go the shared-shot or shared-asset route. When it came to bending, since it happens so much across the season, we had to establish hero looks for each of the four elements early on, which we turned over to each vendor and always referred back to.
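As a rough illustration of the kind of breakdown described above, here is a minimal sketch of tagging shots by location and work type and then querying them per vendor. The shot IDs, location names, and tags are hypothetical examples, not the actual production database:

```python
# Hypothetical shot records: each shot carries a location tag (so full
# sequences in one environment can go to one vendor) and a set of work tags.
shots = [
    {"id": "101_010", "location": "Southern Air Temple", "work": {"airbending", "environment"}},
    {"id": "101_020", "location": "Southern Air Temple", "work": {"firebending"}},
    {"id": "103_015", "location": "Omashu", "work": {"environment"}},
    {"id": "105_040", "location": "Agna Qel'a", "work": {"waterbending", "creature"}},
]

def shots_by_location(shots):
    """Group shot IDs by location tag."""
    groups = {}
    for shot in shots:
        groups.setdefault(shot["location"], []).append(shot["id"])
    return groups

def shots_with_work(shots, tag):
    """List shots carrying a given work type (e.g. shared creature assets)."""
    return [s["id"] for s in shots if tag in s["work"]]

print(shots_by_location(shots))
print(shots_with_work(shots, "environment"))
```

Grouping by location supports the "one environment, one vendor" goal, while the work-type query surfaces the shots that need shared assets across vendors.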

How did you choose the various vendors and split the work amongst them?

Jabbar Raisani // Marion and I had worked with almost all of the vendors previously, and we really tried to divide the work based on what we knew from that experience with them.

Could you elaborate on the process of preparing the visual effects for the « Avatar: The Last Airbender » adaptation?

Marion Spates // I can only speak to the reshoots and post production. I was asked by Jabbar to join him on the production reshoots and help finalize post production. For the reshoots, it was primarily like any other production Jabbar and I have worked on: it’s about working with all the department HODs to understand what we are trying to achieve and what each department will do on the day of principal photography.

The one area that was different was what stunt coordination needed for multiple scenes, anywhere from a single bending shot to the full, complex coordination of the Southern Air Temple attack. We were fortunate to have an amazing stunt team who spent a lot of time understanding and developing the methodology for our bending techniques, not to mention coordinating some amazing fight scenes between our airbenders and firebenders. Our stunt team created stuntviz for the fight sequences; it was a quick method to block out an entire scene based on a pre-production meeting where Jabbar would walk them through his shoot plan for the sequence. Stuntviz lends itself to a great workflow for iterating variations and revisions on the beats needed in extremely heightened fight sequences that contain multiple fights/vignettes.

As far as post production, it’s a myriad of things, ranging from on-set photography, actor scans, photo booth shoots, texture photography, on-set reference, and plate photography via helicopter for the Appa flying scenes, to LiDAR. For the reshoots, we had a team that worked diligently to get the data we needed for our work in post without holding up principal photography. It’s always a dance, but a dance that can be accomplished if everyone is on the same page from the beginning.

During the filming of the series, what were some of the key challenges you faced in integrating live-action sequences with visual effects seamlessly?

Marion Spates // I would say the biggest challenge for the reshoots was filming the bending, primarily the firebending. To sell the gag, you need interactive lighting from the fire onto the set and the characters, so we came up with some methodologies to try and convey the interactivity. For the hands, we added lights to sell the fire close to them; however, the lights had to be triggered by a board operator who was cued by a number representing the bender bending. In a scene with multiple benders, for instance, it becomes very tricky to hit the timing correctly based on the action in the sequence, and unfortunately we ended up with lights on when they weren’t needed, so we had to deal with some additional paint-out of lights. In addition to the hand lighting, we tried to hit the characters and set with production lights, which was not the greatest approach either, because the angle didn’t match the source; for the final visual effects, we had to do all the interactive lighting in comp, or ideally in 3D, so we could get the correct source directionality. Those are definitely two areas that will need to be perfected for seasons two and three. We think we’ll need to bite the bullet and 3D matchmove every character lit by fire so it always matches and integrates correctly; even the 2D solution is not the greatest.

Avatar: The Last Airbender has a dedicated fanbase with high expectations for the adaptation. How did you ensure that the visual effects honored the beloved source material while also offering fresh and innovative interpretations?

Jabbar Raisani // We really tried to let the animated series be our guide when it came to the look and feel of everything, particularly the bending. We would always go back to the series to study how they did it there, and we even went as far as to overcut the animated series into our sequences so we had a picture-in-picture reference of our cut overlaid with the animation.

Could you elaborate on the creative process behind designing and animating the various elemental powers?

Marion Spates // First and foremost, it’s the story that drives the creation process. One of the most challenging and fun was the arc of Katara’s waterbending. Water is already one of the trickier bending powers, because it generally has to disobey gravity to tell the story. On top of that, the bending needs to be driven by Katara’s actions, so the waterbending doesn’t look like it has a mind of its own. In the beginning, Katara’s bending is messy and uncontrolled, and over the course of four episodes she starts to master it. Personally, I found it trickier in the beginning to get the simulations right, especially when she’s just trying to control an orb or a water whip out in front of her without dropping it. The water had to move a bit slower, yet be unstable enough for the audience to believe she wasn’t in control, while still being connected to her motions. And we had to get the barbershop motion right, with the proper amount of drippage falling off: too much drippage started to read as very campy and silly, why isn’t all the water falling, for instance.

What were some of the key challenges you faced in bringing to life the bending abilities of water, earth, fire, and air in a live-action format?

Jabbar Raisani // Each element had its own challenges. For water, it was the fact that humans are very aware of how water behaves in relation to the forces of gravity, friction, etc. For our show, we had to break all of the laws of physics but still make it feel grounded. That was definitely a challenge.

Fire had another big challenge, which is that fire illuminates everything it comes near. That meant we had to do an insane amount of tracking and matchmoving of actors’ performances, as well as sets and props, in order to properly illuminate anything in close proximity to firebending. It was a massive undertaking for a lot of shots, but doing that legwork is also one of the reasons I think the firebending feels real and grounded.

Can you discuss any specific techniques or technologies employed in the VFX production to enhance the realism and impact of the elemental powers?

Marion Spates // We tried to use as much real world reference as possible to help ground the elemental powers in the real world. Jabbar and I have worked together for a long time and we really lean into as much photography as possible.

Firebending – We used a flamethrower to help drive the detail and realism of the fire and smoke. In addition, we used a spark element to help give us texture and detail, but really to help sell the force of an impact. The last real world element was heat distortion which was driven by the fire. With these four elements we quickly got fire that read as bending but was still grounded in the real world.

In addition to the real-world elements, we tried to give the firebending a more distinctive source/emitter (a core) to help drive the three-dimensional shape of the source from which the fire was controlled. The source could be at the head of the blast or at the base of the hand, but never touch the hand. Firebending is not birthed from the limbs of the bender; as noted on the Avatar wiki, it’s the pyrokinetic ability to control and produce fire.

An additional note regarding the firebending: we came up with a methodology for turning on the fire. We quickly realized that when a bender needed to activate the fire, early attempts looked like fire simply scaling up to the size of the bending effect. We toyed around with some ideas, but we didn’t get it right until we turned to our NASA reference of a jet engine starting up. When you step through it frame by frame, there’s a lot of variation in the activation of the engine, and we used that reference to drive the distortion, sparks, and licks of fire that build up over the course of a few frames to establish the final size of the firebending. So cool!

Waterbending – To be honest, it’s hard to find water references that don’t obey gravity, although we found some reference of water in zero-G that really helped. We also looked at references of water being thrown out of a bucket past the camera, to help with the realism of the water as it got particularly close to the camera in one shot. One of our vendors we’ve worked with for years (Accenture Song) helped with the references and took that shot to final. Beyond adding real-world elements to the water (water, splash, spray, and foam), we added what we call the barbershop motion: one of our additional VFX supervisors (Jared Higgins) suggested we add a flow/current to the water so it isn’t just static within the volume. The barbershop name comes from the classic barbershop sign with its red-and-white spiral motion. This really helped break up the water and give it some life of its own. Depending on the activity of the water, the barbershop would be dialed accordingly.
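For readers curious what a barbershop-style current might look like in practice, here is a minimal sketch assuming a simple helical flow around a vertical axis: a tangential swirl plus an axial drift, like the stripes on a barber pole. The function name and parameters are illustrative, not the production simulation setup:

```python
import math

def barbershop_velocity(p, axis_origin=(0.0, 0.0, 0.0), swirl=1.0, drift=0.5):
    """Helical 'barbershop' flow around the y-axis: a tangential swirl in
    the xz-plane plus an axial drift along y, so the water circulates
    instead of sitting still inside its volume."""
    x = p[0] - axis_origin[0]
    z = p[2] - axis_origin[2]
    r = math.hypot(x, z)
    if r < 1e-9:
        return (0.0, drift, 0.0)  # on the axis: pure axial motion
    # Tangential direction: perpendicular to the radial direction (x, z) / r
    tx, tz = -z / r, x / r
    return (swirl * tx, drift, swirl * tz)
```

In this sketch, `swirl` and `drift` would be the dials mentioned above, turned up or down with the activity of the water.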

Airbending – Airbending is tricky because it’s air; we don’t see air, it’s invisible. However, we wanted to stay away from just adding an arbitrary color to the effect. In the anime, they often use a neutral white, which works well there but very quickly starts to read as white smoke in our world. We decided to lean into more refractive elements, plus the natural texture in the world where the bender is bending, to sell the effect. Distortion elements quickly became the major ingredient, but the effect starts to look two-dimensional if it isn’t run through smoke simulations with different frequencies to help sell its depth. Our hero finer distortion frequencies were driven by the classic heat distortion that comes from the exhaust of an F-22. In addition to the distortion, we pulled elements from the surrounding area, such as sand, snow, dirt, and even water droplets from the puddles on the decks of the ships.

Earthbending – Earthbending was definitely the easiest, because it is typically a physical element extracted from the ground within the production plate, making it a tangible element to manipulate. Our main focus was to not make the boulders feel pre-made for bending, so we decided to construct them out of material within the ground: pulling rocks, sand, dirt, mud, leaves, roots, and smaller pebbles from the surrounding real-world elements at the time of extraction. When we see the boulder, it is therefore compacted from different materials, which lends variation within the boulder. This methodology gave us different roughness values, which ultimately produced specularity differences in the rock.

Which elemental power was the most complicated to create, and how did you overcome this challenge?

Marion Spates // For me, it was the airbending, because air is invisible. How do you visualize something you cannot see without just giving it some arbitrary color? To solve that problem, we found that the distortion and the real-world elements gave it structure, depth, and complexity, and the distortion helps sell the air as a refractive element through which you read/see the background.

Can you share insights into the creative process behind bringing Appa and Momo to life?

Jabbar Raisani // Bringing Appa and Momo to life presented two of our biggest challenges. We really tried to embody the animated characters as much as we could, given that we had to create them in three dimensions. We realized that, because they were hand-animated, they are not always drawn exactly the same. We came up with a board for Appa and Momo comprised of images from the animated series that we felt best represented what we wanted to achieve. From there, we spent a ton of time and versions refining the designs until it felt like we had really nailed the spirit of the animated characters. Once the broad strokes were dialed in, it really became a matter of tiny adjustments, like 3% change wedges on the sizes of the eyes, nose, mouth, etc., until we felt we had really nailed it.

What were some of the key challenges faced for Appa and Momo?

Jabbar Raisani // Once the design was finished for Appa and Momo, the biggest challenge was nailing the performances. If we didn’t hit the emotional performance of the characters, it wouldn’t really matter whether the design was spot on. We really spent a lot of time in animation ensuring they weren’t digital characters; they were just characters, like any other performer in the show.

Could you discuss any innovative techniques or technologies utilized for Appa and Momo, particularly in scenes requiring complex movements or interactions with other characters?

Marion Spates // While on reshoots, we built a small maquette out of blue foam that represented a portion of Appa’s head, because Aang hugs Appa, so we needed a portion of his head for Aang to grab onto for interactivity purposes.

In addition to Appa’s head, special effects built a large portion of Appa, including his head, horns and saddle for our cast to ride while in flight with Appa. The movement of Appa was controlled by three Special Effects crew members, two in the back and one on the side.

(L to R) Ian Ousley as Sokka, Gordon Cormier as Aang, Kiawentiio as Katara in season 1 of Avatar: The Last Airbender. Cr. Robert Falconer/Netflix © 2024

In adapting Appa and Momo for live-action, what aspects of their designs or behaviors did you find most important to preserve in order to maintain the emotional connection fans have with these characters?

Jabbar Raisani // We really wanted to get both Appa’s and Momo’s personalities right. We took a lot of time doing animation studies that weren’t shot-specific, just to ensure we had the feeling right. For Appa, it was about getting his scale right and ensuring he was the loveable gentle giant from the animated series. For Momo, we definitely wanted to make sure he brought the cute and the comic relief. And for both, we wanted to ensure they really felt emotionally intelligent and were connecting with the actors in the scenes.

Can you elaborate on the creative process behind bringing the spiritual world and its creatures to life?

Jabbar Raisani // For the Spirit World, we relied heavily on our colorist Siggy Ferstl as well as one of our vendors, Nexodus. We started with concepts of where we could go creatively and then brought those to Siggy as an initial pitch. From there, Siggy looked at all of the scenes in the Spirit World and came up with a look that would progress throughout episode 105.

For the Spirit World creatures, we relied heavily on the animated series for inspiration and really did our best to capture the essence of those 2-Dimensional drawings in a 3D live-action world.

The Avatar in Avatar: The Last Airbender is depicted as a character of immense significance and power. How did your team approach capturing the essence of such a pivotal figure through visual effects?

Jabbar Raisani // In order to bring a sense of power to the Avatar State, we leaned heavily on scale and destruction. We did some heavy destruction simulations and lots of FX whipping around to sell that power and scale. And we worked closely with the sound team, since so much of that power is driven by the sound effects.

Given the Avatar’s immense size in the final battle, what specific challenges did you encounter when translating this portrayal into the final design?

Marion Spates // We wanted to sell the weight and size of Koizilla; the intent was for him to be as big and heavy as Godzilla, but because he’s made of water, we needed water to flow through him. Early on, we did a lot of simulation tests to get the amount, speed, and directionality of the waves moving through his body. It’s amazing how quickly the movement of the waves can break the scale of a creature this size. Another fun factor was how much bioluminescence to have within his body and in the crashing waves; this was a bit of a dance, because we did not want him to become over-illuminated, so we went for a subtle internal vein structure that cascaded out from the Aang orb, along with an additional bioluminescence element in the crashing waves. The crashing waves added scale and complexity and grounded him in the real world.

How did his massive size affect your work, especially the lighting and animation?

Marion Spates // Our team spent a lot of time developing Koizilla’s animation in an effort to sell the size of a massive creature destroying everything in his path. We referred back to some of our favorite Godzilla movies to help incorporate his heaviness, all the way down to the higher-frequency quivers of his head when he shakes it and roars. This was fun to dial in, and it was very rewarding when we saw the water reacting to those higher-frequency movements.

As far as lighting, the trickiest thing was the story point of the moon being gone. The scene turns black and white, but the bending is all in color. For Koizilla, we needed some directional push to help shape him and the ships. We took the initiative to light our elements the way the DP did on set, in order to create interesting frames. Also, with a creature this size, it’s always tricky to dial how much interactive lighting he casts onto the world, or how much interactive light falls onto him from the nearby fires or fireballs.

The last aspect of our lighting was our black and white look. We had the idea for VFX to composite everything in color, and then do a composite that represented the black and white concept once the color composite was approved. We decided for final delivery, VFX would deliver fully saturated composites and our colorist, Siggy Ferstl, would then dial the black and white looks with colored bending.

The Scanline VFX reel features the Koizilla sequence

Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?

Marion Spates // The Koizilla beat was my most memorable moment. The complexity of this scene was insane. One area we haven’t discussed is the fireball trebuchets impacting Koizilla, where we tried to convey enough fireball impacts to turn the environment into steam, so we could then reveal Koizilla at a later time. The complexity of the fireballs, getting the speeds, sizes, and details dialed in, plus the impacts when they hit the water, and how much bioluminescence activates and how it cascades through his body: all of the detail put into this effect took a lot of time, but it was a thrill to be a part of something so awesome!

Looking back on the project, what aspects of the visual effects are you most proud of?

Jabbar Raisani // We worked very hard to embody the spirit of the animated series, and that was our biggest challenge and accomplishment. When I look at the fan feedback, they really responded positively to the visuals and could tell we were paying attention. So I’m glad all of that effort was worth it.

Tricky question, what is your favorite shot or sequence?

Adam Chazen // I’m going to choose two favorite shots. The first is in 101, where we are top-down on Aang airbending over the Southern Air Temple, and the second is in 103, when Aang and Teo are flying through Omashu.

What’s the VFX shots count?

Adam Chazen // 3,381. Though it should be noted that we had a lot of shared shots due to our creature work with Appa, Momo, and the ostrich horses… those should count as more than one shot, since they double the workload!

How long have you worked on this show?

Jabbar Raisani // I’ve had three birthdays while on this show, so that should give you some sense of just how long I’ve been on it.

A big thanks for your time.

Cadence Effects: Dedicated page about Avatar: The Last Airbender on Cadence Effects website.
FABLEfx: Dedicated page about Avatar: The Last Airbender on FABLEfx website.
Framestore: Dedicated page about Avatar: The Last Airbender on Framestore website.
Image Engine: Dedicated page about Avatar: The Last Airbender on Image Engine website.
Nexodus: Dedicated page about Avatar: The Last Airbender on Nexodus website.
Outpost VFX: Dedicated page about Avatar: The Last Airbender on Outpost VFX website.
Rodeo FX: Dedicated page about Avatar: The Last Airbender on Rodeo FX website.
Untold Studios: Dedicated page about Avatar: The Last Airbender on Untold Studios website.
Netflix: You can now watch Avatar: The Last Airbender on Netflix.

© Vincent Frei – The Art of VFX – 2024

