Adrien Saint Girons started his career in visual effects in 2006. He worked at studios including The Mill, MPC and ILM before joining Framestore in 2015. He has worked on films such as Pacific Rim, Warcraft, Blade Runner 2049 and The Suicide Squad.
What is your background?
I’ve been working in VFX for some time now. I started as an ATD writing tools, followed by a career in lighting and lookdev. This eventually led to the role of CG supervisor and then VFX supervisor. I love problem solving and leading teams of talented artists to create iconic images.
How did you and Framestore get involved on this show?
How did it feel to be back in the Spider-Man universe?
The work Framestore produced on those films was so iconic that it was very exciting to jump back in. Almost everyone wanted to be involved in the project, and the motivation across the company was very strong.
How was this new collaboration with Director Jon Watts and Production VFX Supervisor Kelly Port?
It was a very smooth collaboration. They had a lot of confidence in Framestore so it was an extremely collaborative experience from start to finish.
What were their expectations and approach about the visual effects?
They came to us to do two things. One was to look back at what we had done in the past and recreate it. This includes eldritch magic, Doctor Strange’s cloak, the Doctor Strange and Spidey digidoubles, etc.
The other was to imagine and create completely new effects, pushing the envelope further and creating new iconic content. This includes the ancient artefact, the Spidey sense, the prison forcefield, the spell of forgetfulness, the Mirror Dimension, etc.
How did you organize the work with your VFX Producer?
We got hard at work straight away on anything our VFX team could start immediately, while also working closely with our Vis dev team to figure out what new effects and ideas we were creating. As ideas evolved and changed, we needed to work hand in hand to plan and prioritise in order to ensure the successful delivery of the show.
How was the work split between the Framestore offices?
This was a Montreal only show with some help from some key creatives in the London office.
What are the sequences made by Framestore?
We worked on two main sequences. In the first, Spell Gone Wrong, Doctor Strange attempts to perform a spell so that everyone will forget that Peter Parker is Spider-Man. As the name of the sequence suggests, the spell goes wrong, which triggers a chain of events key to the narrative of the film.
In the second, we find Doctor Strange in the Sanctum basement attempting to solve the ancient artefact and send the villains back to their dimensions. Spider-Man grabs the artefact to try to protect them, leading to an epic chase between Doctor Strange and Spider-Man through the streets of New York in the Mirror Dimension.
Can you elaborate on the Spell Gone Wrong sequence?
We were involved early and ended up also doing the postvis for the sequence. This allowed us to drive the creative aspects of the spell and establish a visual language. It was really important for Jon that the spell, and the way it goes wrong, be visually very clear to the audience. Early versions of the spell were way too complex, with runes and sigils filling up the room. We landed on a concept that was much clearer visually, with the lines of the spell being written and cast aside. The runes pulsate and distort more and more as the spell progressively gets worse, eventually leading to the room exploding and the characters floating in a nebulous space.
Our asset team created a fully digital version of the room, allowing the FX team to explode it and create the amazing visuals. The nebula itself was also designed by our team, ensuring it was nothing too recognisable while letting you make out bright patches that look like characters.
What kind of references did you receive for the spell?
We received a range of references, one of which was Framestore’s work on Jingle Jangle. The runes themselves were derived from the scripture visible on the spell basin in the center of the room, but were tweaked into their own unique look by our Comp Lead, Wakako Sekine. The spell was then ‘eldritchified’ by our FX and lookdev teams to keep it in line with Doctor Strange’s iconic magic.
How did you work with the other vendors for the assets sharing?
As our sequences were relatively self-contained, there was minimal asset sharing. The main asset we shared with other vendors was the ancient relic.
Can you tell us more about the creation of the ancient relic?
Even though the ancient relic may seem like a simple prop asset, it was one of the assets we ended up spending the most time on. Nothing had been built on set apart from a green cube for the actors to hold, which determined its size. The creative process of making it involved concept art, asset creation to a very high level of craftsmanship, and rigging. It was important that the relic feel ancient and complex, so we spent a lot of time refining all aspects to push it in that direction, ensuring the materials and designs all felt appropriate and believable. The box also has complex puzzle mechanics, as that was an important part of the brief. We referred to a variety of puzzle mechanisms, ranging from ancient puzzles to more recent ones such as those featured in Chris Ramsay’s videos.
How did you use the Framestore experience on Avengers: Infinity War for this new movie?
We constantly build on our experience from previous shows. In this case there were many that we were able to reference but we always push things further.
Can you explain in detail about the creation of New York and the crazy bending environment?
To create New York, the environment team lead, Scott Coates, opted for a procedural approach to provide a very strong base for the city. This was done using OSM data to figure out the footprint and height of the buildings, allowing us to create any street in New York. This data, along with a huge library of set dressing, facades, store fronts and more, was used to generate the city. From there, procedural textures and shaders were developed and tweaked to make the procedural base as photoreal as possible. Bespoke iconic buildings were made by hand and swapped in. Central Park was created using online data to figure out the topography as well as tree species and density. In parallel, the data was passed to the rigging team, who rigged chunks of the city for the animation team to bend and move to taste. The animated data was then passed to the FX team so they could add the iconic kaleidoscoping effects on top of the buildings.
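To illustrate the first step of this kind of procedural city build, here is a minimal sketch in Python of extruding OSM-style building footprints (a 2D polygon plus a height) into simple block meshes. The record format and the `extrude_footprint` helper are illustrative assumptions, not Framestore's actual pipeline, which the interview describes only at a high level.

```python
# Hedged sketch: turning OSM-style footprint + height records into
# simple extruded building blocks, the kind of procedural base a
# city-generation setup would refine with facades and shaders.

def extrude_footprint(footprint, height):
    """Extrude a 2D footprint (list of (x, y) points) into a building:
    a bottom ring, a top ring, and one side quad per footprint edge."""
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    n = len(footprint)
    # each side quad connects edge i of the bottom ring to edge i of the top
    sides = [(bottom[i], bottom[(i + 1) % n],
              top[(i + 1) % n], top[i]) for i in range(n)]
    return {"bottom": bottom, "top": top, "sides": sides}

# mock OSM-like records: footprint polygon + building height in metres
buildings = [
    {"footprint": [(0, 0), (20, 0), (20, 30), (0, 30)], "height": 45.0},
    {"footprint": [(25, 0), (40, 0), (40, 12), (25, 12)], "height": 8.0},
]

city = [extrude_footprint(b["footprint"], b["height"]) for b in buildings]
print(len(city), len(city[0]["sides"]))  # 2 buildings, 4 side quads each
```

In a real pipeline this block stage would only be the skeleton; the interview notes that set-dressing libraries, procedural shaders, and hand-built hero buildings are what take it to photoreal.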
We also had bespoke shots that were managed slightly differently, such as the donut shots and the department store one. These required their own specific techniques to achieve the unique effects.
The Grand Canyon hybrid section was achieved by having our layout team work closely with our environment team. Using the same procedural city base, the layout team was able to place islands of city blended with sections of the Grand Canyon, made by procedurally upresing and shading Google data. The top and bottom were separated, allowing them to be lit individually.
The final kaleidoscope section was achieved by sending the assets created for New York and the Grand Canyon to the FX team. They then created kaleidoscoped versions of the assets, building a library that the layout team could choose from to compose each shot. We also had a spiral rig allowing them to create the spiralling objects seen in that section.
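The core of a kaleidoscope treatment is replicating geometry around an axis, mirroring alternate wedges. As a rough illustration, here is a small Python sketch of that symmetry applied to 2D points; the `kaleidoscope` function and its parameters are hypothetical stand-ins for what, in production, would be an FX setup operating on full assets.

```python
import math

def kaleidoscope(points, segments):
    """Replicate a 2D point set around the origin the way a kaleidoscope
    does: each wedge is a rotated copy, with every other wedge mirrored."""
    copies = []
    step = 2 * math.pi / segments
    for k in range(segments):
        a = k * step
        c, s = math.cos(a), math.sin(a)
        for x, y in points:
            if k % 2 == 1:      # mirror alternate wedges across the x-axis
                y = -y
            # rotate the (possibly mirrored) point into wedge k
            copies.append((x * c - y * s, x * s + y * c))
    return copies

source = [(1.0, 0.2), (0.8, 0.4)]
result = kaleidoscope(source, 6)
print(len(result))  # 6 wedges x 2 points = 12 copies
```

Building such copies once per asset and banking them in a library, as described above, lets layout dress shots from pre-kaleidoscoped pieces instead of re-simulating per shot.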
Did you use procedural tools to help you?
A lot of the techniques were developed in Houdini, along with internal proprietary procedural tools, to create the unique workflow needed to complete the task.
What is the main challenge to create such a massive and well known environment?
Making sure that it looks real. The only way to do that is to keep looking back at reference and improving procedural methods and manual ones when necessary.
How did you create the digital doubles for Doctor Strange and Spider-Man?
They were photoscanned and matched by our asset team. Spider-Man in particular went through a rigorous rigging and dynamics process, ensuring his muscle and cloth movement was as believable as possible. We were often complimented on how great he looks, as the team did an amazing job.
With so many moving and changing environments, how did you manage the lighting challenges?
Lighting in the Mirror Dimension was a big challenge. For bending New York, it was really hard to settle on a keylight direction, as the bending city would cast huge unwanted shadows. The team spent a lot of time finding the best lighting positions to create the most attractive images while keeping some consistency. The other big lighting challenge was the Grand Canyon hybrid section, as the world exists both on top and bottom. Lighting it with one main key source created a very dark environment, so we had to build separate top and bottom lighting setups that worked well together, still giving the impression that one is shadowing the other and that they could both coexist.
The cloak is an important character in the sequence. Can you elaborate on its creation and animation?
Doctor Strange’s cloak is indeed a character in itself. Since it had been well defined in the first film, we were able to dust off the old setup and modernize it for our current pipeline. To keep it as flexible as possible, the team developed a rig that gave a lot of control to the animation team, enabling them to make sure the cape could emote and perform complex actions. Once animation made a pass, the creature dynamics team simulated the cape and blended areas with the animated version, ensuring we kept the right balance of performance versus simulation.
Your sequences have a lot of FX elements. How did you keep your render farm from burning up?
There was indeed a lot of data and many renders being created throughout the lifespan of the show. Our CG supervisor, Eric Noel, did a great job of prioritizing and managing it all so it would come through.
Which shot or sequence was the most challenging?
The Mirror Dimension environment involved both a huge photoreal environment, which in itself is hard to pull off, plus magic, which is extremely subjective and ever changing. Both combined made this sequence quite the creative and technical challenge. The team did an amazing job on both fronts to bring it to the level it is.
Is there something specific that gave you some really short nights?
The department store shot was a beast. When it was first discussed that it would be cool if Spider-Man swung through a kaleidoscoping department store, I had a few sleepless nights trying to figure it out. The result in the end is very cool as the team really pulled it off.
What is your favorite shot or sequence?
There are so many iconic shots in the work the team did that it’s really hard to choose. There’s a shot in Spell Gone Wrong where the camera pulls back as the fragments of the basement are swirling around in nebulous space. We sent the pull-back as a camera position wedge, as the camera was meant to be static. When Jon saw it, he liked the movement so much that we ended up using it. I think the shot looks awesome and the camera move adds a lot of depth.
What is your best memory on this show?
Reaching the finish line and, shortly after, seeing the very positive reviews of the film.
How long have you worked on this show?
Close to a year and a half.
What’s the VFX shots count?
What was the size of your team?
What is your next project?
I’m on paternity leave for a little while taking care of our third baby boy 🙂
What are the four movies that gave you the passion for cinema?
I wouldn’t be able to call out four specifically. My passion for cinema and for VFX is constantly growing and is driven by the energy of the teams I work with. The work they produce and continue to create is incredible and inspiring.
// Spider-Man: No Way Home – Mirror Dimension Clip
A big thanks for your time.
WANT TO KNOW MORE?
Framestore: Dedicated page about Spider-Man: No Way Home on Framestore website.
© Vincent Frei – The Art of VFX – 2021