Disney's Hyperion Renderer
A renderer is the software that takes all of the models, animations, textures, lights, and other scene objects and produces the final images that make up an animated movie, calculating how light bounces around a virtual scene and shading the objects accordingly. Hyperion is our in-house renderer and is a physically-based path tracer.
The Science Behind the Renderer
What is path tracing?
Path tracing is a method for generating digital images by simulating how light would interact with objects in a virtual world. The path of light is traced by shooting rays (line segments) into the scene and tracking them as they bounce between objects.
Path tracing gets its name from calculating the full path of light from a light source to the camera. Light can potentially bounce between many objects inside the virtual scene. As a ray of light hits a surface, it bounces and creates new rays of light. A path can therefore consist of a number of rays. By collecting all of the rays along a path together, the contributions of a light source and the surfaces along the path can be calculated. These calculations are used to produce a final image.
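The bounce-by-bounce accumulation described above can be sketched in a few lines of Python. This is an illustrative toy, not Hyperion's actual code; the hit-record fields and function names are hypothetical. Each bounce adds the light emitted at the hit point, weighted by how much energy the path still carries.

```python
def trace_path(ray, intersect, max_bounces=5):
    """Follow one light path, summing the contribution of each bounce.

    `intersect` returns a hit record (or None if the ray escapes the
    scene); the names here are chosen only for this sketch.
    """
    radiance = 0.0
    throughput = 1.0  # fraction of energy the path still carries
    for _ in range(max_bounces):
        hit = intersect(ray)
        if hit is None:
            break  # the ray left the scene
        radiance += throughput * hit["emitted"]
        throughput *= hit["reflectance"]
        if throughput == 0.0:
            break  # no energy left to carry forward
        ray = hit["scatter"]()  # spawn the next ray from the surface
    return radiance
```

A real renderer averages many such paths per pixel and works with full 3-D rays and RGB or spectral radiance, but the structure of the loop is the same.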
In many versions of path tracing (including the approach Hyperion takes), paths are started from the camera and shot into the scene to find connections to light sources. This is the opposite of how light behaves in the real world, but by doing this backwards, it is much easier to find light paths that will actually hit the camera.
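Starting at the camera means the outer loop of the renderer is simply a loop over pixels: build a ray through each pixel, trace it into the scene, and record the light it brings back. A minimal sketch, with illustrative names:

```python
def render(width, height, camera_ray, trace):
    """Trace one path per pixel, starting from the camera.

    `camera_ray(x, y)` builds the ray through pixel (x, y); `trace`
    returns the light carried back along that ray.  Both are supplied
    by the caller in this sketch.
    """
    return [[trace(camera_ray(x, y)) for x in range(width)]
            for y in range(height)]
```

In practice many rays are traced per pixel and averaged, which smooths out the noise inherent in the random sampling of light paths.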
Why path tracing?
The technique is capable of producing a high degree of realism. Because all of the interactions between lights and objects in the virtual scene are simulated, we can capture effects such as refraction and glossy reflections. Most importantly, we can produce images with indirect illumination where light reflecting off virtual objects is accounted for. Even light that reflects several times (called multi-bounce light) has an impact on the scene, incorporating subtle and not-so-subtle lighting effects.
The following comparisons pair real photographs of objects with very different characteristics against re-creations rendered in Hyperion.
One comparison shows the influence of multi-bounce light on the appearance of Baymax in the staircase: note the soft indirect light passing through Baymax, which makes him appear translucent, similar to the table-tennis ball in the earlier images.
Another shows a close-up of the Big Hero 6 team's lab rendered with and without Honey Lemon rolling in on her explosive ball; note the subtle influence she has on the look of the lab environment, thanks to the accurate simulation of light.
How does Hyperion produce images?
In our movies we encounter very large and complex settings, such as San Fransokyo, the city in Big Hero 6. Settings like these present a challenge to any renderer striving to simulate realistic lighting.
Using typical path tracing techniques, light bounces around randomly, encountering objects in an unpredictable order. This can lead to a massive amount of wasted time, especially in complex scenes. To tackle this difficult problem we use our own unique variation of path tracing.
Hyperion handles several million light rays at a time by sorting and bundling them together according to their directions. When the rays are grouped in this way, many of the rays in a bundle hit the same object in the same region of space. This similarity of ray hits then allows us – and the computer – to optimize the calculations for the objects hit.
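One way to picture this grouping, as a simplified sketch rather than Hyperion's actual scheme, is to quantize each ray's direction into a coarse grid cell and collect rays that share a cell:

```python
from collections import defaultdict

def bundle_rays(rays, cells=4):
    """Group rays whose directions fall into the same coarse cell.

    Each ray is a dict with a unit "dir" vector; the grid resolution
    and keying scheme here are illustrative only.
    """
    def cell(v):
        # Map a direction component in [-1, 1] to a cell index in [0, cells - 1].
        return min(cells - 1, int((v + 1.0) * cells / 2.0))

    buckets = defaultdict(list)
    for ray in rays:
        dx, dy, dz = ray["dir"]
        buckets[(cell(dx), cell(dy), cell(dz))].append(ray)
    return list(buckets.values())
```

Rays in the same bundle point roughly the same way, so they tend to traverse the same part of the scene and hit nearby geometry, which is exactly the coherence the sorting is after.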
Thanks to our novel method, we can render the entirety of San Fransokyo without resorting to tricks like creating paintings to imitate distant scenery.
This lets us put the audience on Baymax's shoulders as he rockets through San Fransokyo without any constraints.
A more technical overview
Hyperion is a streaming ray-tracer, capable of performing multi-bounce global illumination on production-scale scenes without resorting to shading caches or instancing. To achieve this, we introduced a novel two-stage ray-sorting framework. First, we sort large, potentially out-of-core ray batches to ensure coherence. Working with large batches is essential to extract coherent ray groups from complex scenes. Second, we sort ray hits for deferred shading with out-of-core textures. For each batch we achieve perfectly coherent shading with sequential texture reads, eliminating the need for a texture cache. Our approach is simple to implement and compatible with most scene traversal strategies.
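The second stage, sorted deferred shading, can be sketched as follows: collect the hits of a batch, sort them by the texture they need, and shade in that order so each texture is read once, sequentially. This is an illustrative sketch with hypothetical names, not Hyperion's implementation.

```python
def shade_deferred(hits, load_texture):
    """Shade a batch of ray hits with coherent, sequential texture reads.

    Sorting the hits by texture name means each texture is loaded
    exactly once per batch, with no texture cache needed.
    """
    hits = sorted(hits, key=lambda h: h["texture"])
    current_name, current_tex = None, None
    results = {}
    for h in hits:
        if h["texture"] != current_name:
            current_name = h["texture"]
            current_tex = load_texture(current_name)  # one sequential read
        results[h["ray_id"]] = current_tex[h["texel"]]
    return results
```

Because the hits arrive grouped by texture, out-of-core texture data can be streamed from disk in order instead of being fetched at random, which is what makes the approach practical for production-scale scenes.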
Reference: Eisenacher, C., Nichols, G., Selle, A. and Burley, B. (2013), Sorted Deferred Shading for Production Path Tracing. Computer Graphics Forum, 32: 125–132. doi: 10.1111/cgf.12158
The Hyperion team
The Hyperion team: Andrew Fisher, Patrick Kelly, David Adler, Dan Teece, Christian Eisenacher, Doug Lesan, Greg Nichols, Chuck Tappan, Brent Burley, Sean Jenkins, Andy Selle, Noah Kagan, Lisa Young, Matt Chiang, Ralf Habel, Darren Robinson, Peter Kutz, (not pictured: Karl Li)