Last year, Daniel Kramer talked to us about the work of Sony Pictures Imageworks on EDGE OF TOMORROW. This time, he explains in detail his work on PIXELS.

How did you and Sony Pictures Imageworks get involved on this show?
Sony Pictures asked us to bid on the work and meet with Matthew Butler, the production side VFX Supervisor, and Denise Davis, the VFX Producer. I was really interested in the film; I found the short really charming and had spent my youth in the local arcade. Matt and I had a really good first meeting where I learned some of the creative goals and we discussed many of the challenges, unknowns, and the types of tests we’d want to try to realize these characters. It was clear we had many of the same sensibilities and questions and we quickly started riffing off each other’s theories. Soon after that we were awarded a good portion of the project.

How was the collaboration with director Chris Columbus?
On set Chris mostly interacted with Matthew for VFX. He seemed to really trust Matthew’s advice and suggestions to make the most of the VFX work. During post production I had a lot more interaction with Chris during our daily shot reviews. Chris worked out of his office in San Francisco, and Matthew and the rest of the VFX production team were based at DD in Playa Del Rey. They had a great system where 2K media could be streamed directly from Playa to Chris for daily reviews. Whenever we had shots to review I would travel 15 minutes to Digital Domain, along with our VFX Producer Christian Hejnal, to present our work to Chris and Matthew. During those reviews I was able to present our work directly to Chris and hear his direct feedback. He was always very complimentary and seemed to embrace new ideas coming from the VFX teams. Matthew was very inclusive of my input and helped to facilitate my direct communication with Chris, which was really great. He made us really feel part of the team.


What was his approach about the visual effects?
Chris, like most directors I’ve worked with, is really focused on the story, and in this case the comedy as well. On set his focus is really about the actors. He of course was concerned that the VFX looked great, but the actors and the comedic timing were the most important thing… if the VFX were great but overshadowed the actors or the comedy then we’d have to scale back. The cut was very fluid due to finding the right comedic timing, which was a challenge to keep up with on our end; it certainly kept us on our toes.

How was the collaboration with Production VFX Supervisor Matthew Butler?
Matthew was very inviting. He’s got a great eye and a very technical background in CG. He really treated all the VFX facilities as one big team. He wasn’t precious about his ideas if a better one came along, he’s got a pretty pragmatic approach to ultimately make Chris happy.

What was the work done by Sony Pictures Imageworks?
In rough chronological order:

– The Guam sequence, when the Mothership and Galaga creatures first arrive. This included seamless set extensions and CG vehicles to fill out the airbase, Galaga characters, voxelated destruction of the airbase and high res hero voxels. For those who don’t know the term, a “voxel” is a 3D pixel, or a “volume pixel”. It essentially looks like a cube and is the building block of everything we created for the film.

– We handled the Q*bert character and shots throughout the film. Some of the shots in Donkey Kong were shared with Digital Domain where we lit and temp comped Q*bert into Digital Domain’s backgrounds and delivered Q*bert elements for their final composite.

– We did all the White House shots, both the front and the back with the White House lawn. This included an almost 100% digital establishing shot of the front of the White House plus several shots at the end of the film on the back lawn. These lawn shots were filmed in a park in Toronto, and we added the White House plus several surrounding trees to dress the park for the iconic lawn.

– We did a few hero cube shots in the DARPA lab with a captured cube. This was a much more detailed cube than the ones we used to build our characters, with lots of internal geometry to show depth and parallax as we inspect the form. These “super cubes” were also used throughout the film whenever we’d get really close to a voxel, but they were too heavy to render everywhere.

– The bulk of our work took place during the DC Chaos sequences where all the creatures pour out of the mothership and wreak havoc on the streets of Washington DC. This starts with an all CG shot of Joust characters attacking the Washington Monument and continues with many shots of game characters attacking the city. We developed about 27 game characters in all for this sequence including Galaga, Burger Time, Duck Hunt, Lady Lisa (an invented game for the film), Biddy Bandits (for the Lady Lisa game), Frogger, Robotron, Centipede, Dig Dug, Defender, Joust, Tetris, Space Invaders, Q*bert, and a Smurf! We also developed quite a few destruction techniques to ‘voxelate’ cars and buildings and destroy characters.

Can you describe a typical day during the shooting and the post?
I wasn’t on set for the full shoot, only for the Sony portion of the work, and I shared that responsibility with John Haley, our Digital Effects Supervisor. Sometimes we were there together but often we switched off. For the DC chaos in particular there were generally a couple of units shooting in different areas of Toronto, so we’d strategize with Matthew and figure out who would cover each unit. Days were hot and long but all in all it was a fairly painless shoot for the portions I was involved with. The shoot was fairly typical VFX-wise for me; we had a Q*bert stuffy on set for lighting and composition reference, and on occasion we’d have an interactive light so our characters could cast light onto the scene.

For post we’d have a production meeting in the morning with the department heads and move into rounds shortly after. Our team is spread across LA and Vancouver, with most artists in Vancouver. I’m based in LA so I tend to spend my day in a screening room connected to a shared session with artists from both locations. We have a really slick system where hundreds of artists are able to join a shared playback system; we can all talk or annotate the submissions to help communication. Everything is viewed at full 2K. It’s a system we’ve been working on for years, since our first remote site in Albuquerque.

Generally during the day I’ll call out shots to send Matthew for feedback and I’ll head over to DD in the afternoon to review with him alone or with Chris as well if we feel it’s ready to show. Then it’s back to SPI to download the notes and further review any new work. Really toward the end of the project I spend all day in a dark theater talking to the walls.

What was your pipeline to create so many different characters?
Generally we’d start with the game art, screenshots, video, cabinet art and even sprite sheets. We were usually able to find the original sprite sheets online… the internet! For some of the characters we also received production artwork from Peter Wenham’s team of what they might look like voxelated. We’d study videos of the game play to find the right behaviors to mimic in animation. Often we’d have to expand what the characters were able to do in the games to accommodate the story.

Can you explain step by step the creation of these characters?
Using the collected reference our modelers would build a very simple low res version of the game character in 3D. At that stage there are a lot of very subjective calls to make, as you often only see these characters from very limited views, but we needed to create a fully 3D character that could be seen from any angle. So some creative license was needed, where we’d try to understand the intent of a simple shape in the original sprite and expand on that. Sometimes the cabinet art would help, as I see that as the idealized high res version of the character the game makers wanted to make but couldn’t due to their low res limitations.

We wanted to start with a smooth character and add the voxels procedurally later. Had we built our characters out of voxels in modeling it would have been really difficult to rig and animate. We also wanted our characters to reconfigure, allowing voxels to turn on and off, which gave them a more digital appearance, and we wanted to avoid voxels deforming, so traditional skin binding wouldn’t have worked. So our animation department generally only worked with this smooth representation.

To handle the procedural voxelization we’d pass the model to the FX department led by Charles-Felix Chabert. We tried several strategies to voxelate our characters and track the voxels to the animation using Houdini. Matthew liked the notion that voxels could turn on and off as the character moves, much like pixels turn on and off as a sprite character moves across the screen. We wanted to do the 3D equivalent of how pixels work on a CRT monitor. So instead of a 2D array of pixels on a screen, we created a 3D volume of voxels for our smooth characters to move through. As a character intersects with a voxel in space it’s turned on, and once the character has passed through it’s turned off again. In this way the voxels never move, they are simply revealed by the character. We found this approach had some problems in practice though. First, the re-voxelization was too frenetic and became distracting; even very small motions of a character could cause big changes in voxelization. Second, it was difficult to keep our characters on model, as the voxel orientation could change relative to the character’s orientation as the character rotates through the field. And finally, the setup was difficult to light because the voxels never actually move relative to the scene lighting; again, they are just turning on and off as they are not bound at all to the character.
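To make that mechanic concrete, here is a minimal Python sketch of the static-field idea, under the assumption that the character is sampled as a point cloud each frame; the grid resolution, names and sampling are illustrative, not SPI’s actual Houdini network.

```python
# Illustrative sketch of a static voxel field: the cubes never move,
# a cell simply turns "on" while the animated smooth character occupies it.
import numpy as np

VOXEL_SIZE = 0.1                            # edge length of one voxel (world units)
GRID_MIN = np.array([-2.0, 0.0, -2.0])      # world-space corner of the field
GRID_RES = (40, 40, 40)                     # cells per axis

def active_voxels(character_points):
    """Return the set of grid cells occupied by the character this frame.

    character_points: (N, 3) array of points scattered on the smooth,
    animated character surface.
    """
    occupied = set()
    for p in character_points:
        idx = np.floor((p - GRID_MIN) / VOXEL_SIZE).astype(int)
        if np.all(idx >= 0) and np.all(idx < GRID_RES):
            occupied.add(tuple(idx))        # this voxel is "on" this frame
    return occupied                         # every other voxel stays "off"

def voxel_center(idx):
    """World-space centre of a cell; the cube itself never moves."""
    return GRID_MIN + (np.array(idx) + 0.5) * VOXEL_SIZE
```

Comparing the occupied sets of consecutive frames is what produces the on/off flicker described above, and why even tiny motions can change many cells at once.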

We ended up modifying this approach by parenting several voxel fields to our character’s skeleton. So one field might be attached to the head, one to the chest, two for the arm, etc. In this way the local field would translate and rotate with the bone, but any skin deformations or scaling beyond that would cause re-voxelization. This seemed to be the best bet for most of the characters, keeping them on-model, still allowing for some re-voxelization, and allowing some rotation to help pick up the scene lighting.
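The per-bone variant can be sketched the same way; the key difference is that quantization happens in each joint’s local space, so a rigid bone rotation keeps the same voxels on and only the residual skin deformation re-voxelates. Again this is an illustrative assumption of the setup, not the production network.

```python
# Illustrative per-bone voxel field: quantize in the joint's local space so
# the field rides along with the bone and only residual deformation flickers.
import numpy as np

def voxelize_in_bone_space(points, bone_xform, voxel_size=0.1):
    """points: (N, 3) world-space points belonging to this body part.
    bone_xform: 4x4 world transform of the joint on this frame.
    Returns the set of bone-local voxel indices that are "on"."""
    inv = np.linalg.inv(bone_xform)
    homog = np.hstack([points, np.ones((len(points), 1))])
    local = (homog @ inv.T)[:, :3]          # bring points into bone space
    return {tuple(i) for i in np.floor(local / voxel_size).astype(int)}
```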

Characters were then published out of Houdini to Katana for rendering. Early tests with hundreds of characters took days to translate with this pipeline and we knew we needed a more efficient method. In the end, rather than translating the fully voxelated character from Houdini to Katana, we simply output a single point per cube and loaded the point up with attributes to define the look of the cube. Then at render time we’d instance a single voxel shape onto each point and modify its appearance based on the point attributes, including things like color, size, orientation, noise space, etc. We were able to translate and render shots in a matter of hours rather than the days the brute force method took.
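Conceptually the hand-off looked something like the sketch below: one point per cube carrying the attributes the shader needs, with the heavy cube geometry instanced only at render time. The attribute names and layout are assumptions for illustration, not the actual published data.

```python
# Illustrative "one point per cube" export: heavy voxel geometry collapses to
# a light point cloud, and the renderer instances a single master cube per point.
from dataclasses import dataclass
import numpy as np

@dataclass
class VoxelPoint:
    position: np.ndarray    # cube centre
    scale: float            # cube edge length
    orient: np.ndarray      # orientation quaternion (x, y, z, w)
    color: np.ndarray       # per-voxel tint
    rest_pos: np.ndarray    # rest-space coordinate used to map noise

def export_points(voxels):
    """Flatten a list of per-voxel dicts into the point cloud sent to lighting."""
    return [VoxelPoint(position=v["center"],
                       scale=v["size"],
                       orient=v.get("orient", np.array([0.0, 0.0, 0.0, 1.0])),
                       color=v["color"],
                       rest_pos=v["rest"])
            for v in voxels]

# At render time (conceptually): place one instance of the master cube per
# VoxelPoint, scaled and oriented by its attributes, and let the shader read
# color / rest_pos to vary the look of each instance.
```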


Can you tell us more about their rigging and animation?
Generally the rigging was pretty straightforward as the smooth representation was pretty simple. Q*bert had the most work as he was one of the only characters that really needed to show a big range of emotion. We did bake out voxelated versions of our characters and bind them to the skeleton as well to give the animators an idea of what the silhouette might be. It was a crude representation, but the full voxelization process was too time consuming to calculate for animation to be efficient. We did explore using Houdini Engine to voxelate the animation live in the Maya scene but generally this proved to be too slow for interactive work (not because Houdini Engine is inherently slow, but our node networks were pretty heavy).

Steven Nichols, our Animation Supervisor, had a lot of fun translating the game motions into 3D. In most cases we tried to obey physics to help the characters feel like they were really in our world but at times we’d move to cartoon physics for particular gags. I think the biggest challenge he faced was that we’d see a lot more detail at the animation stage which got lost once a character is voxelated. We tried to voxelate our characters with as few cubes as possible while still allowing us to see all of the animation detail and that was a trial and error process. We found that our characters were more charming and appealing when they were lower resolution but at times we’d have to go back and up our voxel resolution to accommodate certain animations.

How did you handle the shaders and textures?
Matthew and Chris wanted the voxels to feel like they were lit from within, but a straight illuminated cube wasn’t quite detailed enough. We painted textures to mask the internal energy so it looked like the energy was burning through the texture, sort of like a candle in a paper lantern where we scratch away some of the paper to allow more light to come through in areas. We used circuit boards as inspiration for the various patterns we’d paint on the voxel faces. These textures would animate so that each face felt a bit alive. We’d also run noise through the overall character to drive the internal light, allowing the voxels to blink on and off.

For shading, each voxel contained several attributes to define its look. Each voxel’s topology was exactly the same which allowed us to instance a single voxel over the entire character. This kept rendering light but meant we needed to rely on these attributes to define the look of each instance. Attributes would define local color, a rest space where we could map noise, some geometric normals and what we called “hybrid” normals.

We found that because the characters lacked any curvature it made them difficult to light and that’s where our “hybrid” normals come in. Hybrid normals are a combination of the geometric normals of the voxels and the rounded normals of the underlying smooth character. For each voxel face we’d compare the geometric normal to the smooth normal of the animation model and if they were pretty close, say within 10 degrees, we’d transfer the smooth normal to the voxel face. If they diverged too much we’d just stick with the geometric normal. This gave us some curvature on the large flat faces on the side of Q*bert’s head, for example, allowing us to light him a bit rounder than he would otherwise, catching rims and varied reflections.
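A minimal sketch of that blend, assuming we already have each voxel face’s geometric normal and the smooth animation model’s normal sampled at the same spot; the 10-degree threshold is the figure quoted above, the rest is illustrative.

```python
# Illustrative "hybrid normal": borrow the smooth character's normal when it
# nearly agrees with the voxel face, otherwise keep the hard geometric normal.
import numpy as np

def hybrid_normal(face_normal, smooth_normal, max_angle_deg=10.0):
    f = np.asarray(face_normal, dtype=float)
    s = np.asarray(smooth_normal, dtype=float)
    f /= np.linalg.norm(f)
    s /= np.linalg.norm(s)
    angle = np.degrees(np.arccos(np.clip(np.dot(f, s), -1.0, 1.0)))
    return s if angle <= max_angle_deg else f
```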


What were the most complicated characters to create and why?
Q*bert was certainly the most challenging as he was a very important character to Chris. He’s one of the only characters that needed to have a wide range of emotions and he’s got a lot of screen time. Chris wanted him to be a really appealing sidekick to our heroes, and early voxelization tests seemed too unfriendly, hard edged, just not cute. Finding a way to soften him up took a lot of work. Many of the techniques we used to create Q*bert were used on our other characters; he was really the test bed for our look. Bret St Clair helped us out in lookdev and generated hundreds of versions of Q*bert to help find the look. Some tests were all in 3D with custom shaders and some were crazy mixtures of various render passes in Nuke to come up with a concept.

The hybrid normal technique came out of our effort to bring some of Q*bert’s shape and roundness into the voxel array. This softened his lighting up a lot and allowed us to rim and shape Q*bert in an appealing way. We also spent a lot of time working on the eye detail and skin around the eyes to give enough voxel detail to show off his emotions.

A lot of his cuteness comes from his animation, but the voxelization was clobbering the animation subtleties. We tried very small voxels on his body, which showed off the animation clearly but didn’t look as appealing as larger, chunkier voxels. With the larger voxels there wasn’t enough resolution to show all of his expression; we wanted to take parts of both. So we ended up using both, building a high res version with small voxels and encasing that in a layer of larger refractive voxels, giving him a multi-layer look. If you look closely at Q*bert you can see the high res body inside his chunky exterior. This really brought back the brow and eye animation, gave him a level of complexity, and the transparent layers softened him up a bit, which Chris and Matthew really responded to.


How did you manage the interactions on-set with the cast?
Typical techniques like thin rods on c-stands for eye lines. We might do a reference pass where we have someone stand in for a character and then run it clean once everyone was familiar. Additionally we often had some sort of interactive light for many of the characters. In the kitchen scene with Q*bert for example we had a small light stand on the stool where Q*bert stands to both give the actors an eyeline and cast some light onto the surrounding scene.

There were some cases where an actor had to hold Q*bert. We’d have the actor hold a full scale model of Q*bert and we’d cover that up with our CG model, painting out anything that we couldn’t cover. In one case, Josh Gad’s character holds Q*bert under a blanket when we first meet him. In that case you could really tell that there was a still, inanimate object under the blanket, so we fit our CG character between his arms and replaced the blanket completely to allow Q*bert to struggle a bit with some independent motion.

How did it feel to recreate those legendary game characters?
It was really fun to revisit some of the games I loved as a kid. Especially when we could really leverage an iconic action from the game and work it into the movie, that was quite satisfying.

How did you share assets with the other vendors?
We shared a few characters with DD and other vendors. In Donkey Kong we handled Q*bert for all those shots while DD was doing the lion’s share of the rest of the shot. In that case we would get temp backgrounds from DD and light and comp Q*bert in for approval, then pass our Nuke files with all the various layers to DD for the final composite. Sometimes they’d also use Alembic data of our character but generally they worked with our render passes in the comp.

For the end battle we needed to have Centipede, which DD developed for their Centipede sequence. In that case they handed us their Maya file with their Centipede rig plus their Houdini file to voxelate the result. It took quite a bit of work to convert their workflow into ours, as we used different conventions to build and translate our characters to lighting, and we were using a different renderer as well: DD uses VRay and we use Arnold.

How did you design and create the voxelization effect?
Well I’ve said a lot about voxels already. I’ll talk about how hard surface objects were voxelated a bit. One of the first and most successful shots was the voxelization of the Guam sign at the beginning of the movie. I believe this is also in some of the early trailers. One of our FX leads, Pav Grochola, developed a great system for progressively subdividing an object into voxels which is used in most of the destruction shots of both earthbound objects and our characters when they are destroyed. The system was developed on this first Guam shot.

When an object is impacted by a light bomb the area is converted into voxels, with higher res, smaller voxels at the impact point and larger, chunkier voxels as we radiate away from that point. From there a light energy pulse radiates outward causing further subdivision; in this way you’ll see waves of light moving across objects or characters, breaking bigger voxels into smaller ones. Each voxel subdivides into 4, and if hit again each of those is further subdivided. In addition, those voxels are converted into RBD objects and glued together. If the forces are strong enough the glue is broken and chunks of the voxels break off and simulate freely. This gave a great physical look to the cubes and really made it look like objects and characters were eroding and crumbling with light energy. You can see this clearly on that Guam sign as well as character destruction shots like when Josh Gad’s character shoots the Robotron Grunt with his light cannon.
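As a rough sketch of those two ingredients (distance-based voxel sizing and pulse-driven subdivision), here is some illustrative Python; the exact split pattern, the RBD conversion and the glue network are omitted, and the numbers are made up.

```python
# Illustrative impact voxelization: small cubes near the hit, chunky cubes
# further out, and an energy pulse that splits a cube into 4 children.
import numpy as np

def impact_voxel_size(center, impact_point, min_size=0.05, max_size=0.8, falloff=4.0):
    """Voxel edge length as a function of distance from the impact point."""
    d = np.linalg.norm(np.asarray(center) - np.asarray(impact_point))
    t = min(d / falloff, 1.0)
    return min_size + t * (max_size - min_size)

def subdivide(center, size):
    """Split one voxel into 4 half-size children; one possible reading of the
    "subdivides into 4" above is a 2x2 split across the face hit by the pulse."""
    q = size / 4.0
    offsets = [(-q, -q), (-q, q), (q, -q), (q, q)]
    return [(np.asarray(center) + np.array([ox, oy, 0.0]), size / 2.0)
            for ox, oy in offsets]
```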

The movie has a lot of destruction in well-known locations. Can you tell us more about that?
We wanted our destruction to be photo-realistic so you believed there was real danger, but also unreal by introducing the pixelated effect to the destruction. We ended up doing a mixture of real dust and debris and large chunks of voxels for hard surfaces, and advected voxels through volumes for smoke.

In one of the major destruction sequences we attack the Washington Monument with several Joust characters. This had to be an all CG shot because it’s impossible to film this location given the high security of the area. The shot required a helicopter to fly 360 degrees around the Monument, revealing all of DC in the background including the Capitol Building, the White House, and the Lincoln Memorial. We were granted the ability to fly around the perimeter of the National Mall and hoped to collect enough reference to re-create the area in a 2.5D matte painting. We mapped out 4 key locations on the flight path where we could hover and collect 360 tiles, but sadly we were further restricted once in the air and couldn’t get as close to the perimeter as we’d hoped. With less than ideal tiles, John Haley, our Digital Effects Supervisor, also collected more reference on the ground by walking up and down the Mall collecting tiles, as well as shooting some images from within the Monument itself from the observation windows at the top. Using all this data, a simple 3D model of the area, and many, many weeks of work by our Matte Painter Jeremy Hoey, we were able to create a seamless 360 painting of DC as seen from the Monument. All of the near buildings were modeled with simple geometry, allowing for some parallax as we moved the camera over the area.

With our matte painting in place we were able to fly our virtual camera around the CG Monument in an all CG shot. For destruction our FX department developed a system to voxelate areas of a model on impact. At the epicentre of the impact they would generate very small voxels for high detail and enlarge the voxels as we moved away from that epicentre. The system was pretty cool as additional impacts could subdivide larger voxels into smaller ones if needed. You’ll see on the Monument very large chunky voxels radiating on the outskirts of the impacts; we found that keeping the voxels really big and cartoony was a much more appealing look. If they were too small the destruction looked too similar to traditional destruction, and we wanted the voxelization to be really obvious. In addition, using light energy to highlight particular voxels on first impact really helped to sell the digital look of the destruction. Other than Jeremy there were a few other key players in the finishing of this shot: Ruben Mayor handled the FX, James Park the lighting, and Christian Schermerhorn brought it all together in Nuke.


Can you explain in detail the Massive FX elements work?
Massive was mainly used to fill out the backgrounds of our DC chaos shots. We ended up building static voxelated characters in their rest pose and binding them to the Massive skeletons. In this way our Massive characters didn’t re-voxelate, but they were so small that it wasn’t necessary. They also had a simplified shading model to keep the renders light. For behaviors it was a combination of Massive brain work and sourcing animation clips provided by the animation department to show off some of the iconic game motions.

Can you tell us more about the Light Energy and glowing effects?
Chris was really concerned that our characters would look too much like Lego and wanted to distinguish our look. We also wanted to add a level of complexity to our characters without destroying the charm of the 8-bit chunky look. I think this is where light energy came from: it’s a way to illuminate our characters much like pixels are illuminated on a CRT monitor.

This worked really well for the night shots; it’s pretty straightforward to imagine how this can look pretty cool at night. The additional challenge we had is that most of our shots were in broad daylight, and it’s difficult to sell light energy under those conditions. We found it was really important to allow most of the cubes to be unlit and receive the natural key and fill of the scene lighting to sell that our characters were really there. There was a lot of work finding the right mixture of lit and unlit cubes and the speed they would blink on and off. We found if too many cubes were lit the character lacked shape, as all the shadows were filled in, and if we didn’t have enough we started to look too much like plastic Legos.
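One simple way to picture that balance, purely as an illustrative sketch rather than the production shader, is a per-voxel phase cycled over time, where one parameter controls how many cubes are internally lit at once and another controls how quickly they blink.

```python
# Illustrative lit/unlit mixture: roughly `lit_fraction` of the cubes glow
# at any moment, and `speed` controls how fast individual cubes blink.
import numpy as np

def lit_mask(num_voxels, frame, lit_fraction=0.3, speed=0.02, seed=7):
    """Return a boolean per voxel: True means the cube is internally lit."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 1.0, size=num_voxels)   # fixed per-voxel offset
    return (phase + frame * speed) % 1.0 < lit_fraction
```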

For hero shots where we needed to see a cube close up we used a high res “super cube” to show a lot more depth to the texture and energy. These cubes contained a lot of internal geometry to both emit light and block light giving an interesting 3D volume to the cube.

How did you manage the transformation of Q*bert into Lady Lisa?
Charles-Felix Chabert handled the FX on this shot. We voxelated both characters and simulated the voxels pulling off Q*bert, targeting voxels on Lady Lisa as their destinations. For Lady Lisa we rotomated the plate element with a digi-double and voxelated that for our target model. We went through several iterations on the forces and paths the voxels would take. We’d transfer the color of the voxels over time, and once they landed in place on Lady Lisa we progressively subdivided each voxel into 4 smaller voxels and so on, until the voxels were about the size of a pixel. At this stage they each received the color of the photographed Lady Lisa, transitioning into the plate element.
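As a rough illustration of that hand-off (a sketch only; the simulation, forces and path shaping are not shown), each voxel leaving Q*bert can be matched to a destination voxel on the rotomated Lady Lisa target and its color blended toward the target over the transition.

```python
# Illustrative voxel retargeting: greedily pair each source voxel with the
# nearest free target voxel, then lerp its color over the transition.
# Assumes there are at least as many target voxels as source voxels.
import numpy as np

def assign_targets(source_centers, target_centers):
    free = list(range(len(target_centers)))
    assignment = []
    for s in source_centers:
        j = min(free, key=lambda t: np.linalg.norm(np.asarray(s) - np.asarray(target_centers[t])))
        free.remove(j)
        assignment.append(j)
    return assignment

def blend_color(source_color, target_color, t):
    """t goes from 0 (still the Q*bert voxel color) to 1 (the target color)."""
    return (1.0 - t) * np.asarray(source_color) + t * np.asarray(target_color)
```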

Was there a shot or a sequence that prevented you from sleeping?
I wouldn’t say there was a particular shot or sequence, though there were times when I did worry. We had a massive number of characters to build, render, and destroy in a fairly short amount of time… just worrying about getting all the work done at the quality level we expected was always a concern. I’m proud of what the team pulled off.

How long have you worked on this show?
I was on the show for about a year, from early concepts and on-set work through shot production. Shot production lasted about 5 months once we had plates turned over.

How many shots have you done?
About 246 shots.

What was the size of your team?
I think we had about 70 artists in all, not all at once. Artists would come and go over the course of production as our needs changed.

What is your next project?
I can’t say quite yet, we’re bidding on a few projects that look really interesting… but we’re not quite there yet to make an announcement.

A big thanks for your time.

// WANT TO KNOW MORE?

Sony Pictures Imageworks: Official website of Sony Pictures Imageworks.





© Vincent Frei – The Art of VFX – 2015
