We present an unbiased Monte Carlo technique for sampling high frequency illumination through a geometrically complex refractive interface enclosing a scattering medium. This situation is difficult to handle efficiently and is a common source of image noise. Based on the incoming light at a chosen surface point, we place an intermediate path vertex within the medium to form a two-bounce connection to the light that satisfies the law of refraction.
We present an energy-conserving fiber shading model for hair and fur that is efficient enough for path tracing. Our model adopts a near-field formulation to avoid the expensive integral across the fiber, accounts for all higher-order internal reflection events with a single lobe, and introduces a novel, closed-form distribution for azimuthal roughness based on the logistic distribution. Additionally, we derive, through simulation, a parameterization that relates intuitive user controls such as multiple-scattering albedo and isotropic cylinder roughness to the underlying physical parameters.
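The appeal of the logistic distribution here is that its CDF is closed-form, so restricting ("trimming") it to the azimuthal interval [-π, π] still normalizes analytically. A minimal sketch of such a trimmed logistic density (function names and the trimming interval are our own illustration, not the paper's API):

```python
import math

def logistic_pdf(x, s):
    # standard logistic density with scale s, centered at 0;
    # using |x| keeps the exponential from overflowing
    e = math.exp(-abs(x) / s)
    return e / (s * (1.0 + e) ** 2)

def logistic_cdf(x, s):
    # closed-form CDF, which is what makes trimming cheap
    return 1.0 / (1.0 + math.exp(-x / s))

def trimmed_logistic_pdf(x, s, a=-math.pi, b=math.pi):
    # logistic restricted to [a, b] and renormalized analytically
    return logistic_pdf(x, s) / (logistic_cdf(b, s) - logistic_cdf(a, s))
```

Because the normalization constant is a difference of two CDF evaluations, both sampling and evaluation stay closed-form, which is the property the abstract highlights.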
Existing multigrid methods for cloth simulation are based on geometric multigrid. While good results have been reported, geometric methods are problematic for unstructured grids, widely varying material properties, and varying anisotropies, and they often have difficulty handling constraints arising from collisions. This paper applies the algebraic multigrid method known as smoothed aggregation to cloth simulation. This method is agnostic to the underlying tessellation, which can even vary over time, and it only requires the user to provide a fine-level mesh. To handle contact constraints efficiently, a prefiltered preconditioned conjugate gradient method is introduced. For highly efficient preconditioners, like the ones proposed here, prefiltering is essential, but, even for simple preconditioners, prefiltering provides significant benefits in the presence of many constraints. Numerical tests of the new approach on a range of examples confirm 6–8× speedups on a fully dressed character with 371k vertices, and even larger speedups on synthetic examples.
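The idea of filtering constraints inside conjugate gradients can be illustrated in the spirit of Baraff-Witkin-style filtered CG: a diagonal filter pins constrained degrees of freedom to zero while the solver iterates on the free ones. This sketch is illustrative of constraint filtering in general, not the paper's prefiltered method:

```python
import numpy as np

def filtered_pcg(A, b, free, M_inv=None, tol=1e-10, max_iter=500):
    """Preconditioned CG with a constraint filter: components where
    `free` is False are pinned to zero. M_inv is an optional diagonal
    preconditioner. An illustrative sketch, not the paper's algorithm."""
    n = len(b)
    S = free.astype(float)            # diagonal filter: 1 on free DOFs
    if M_inv is None:
        M_inv = np.ones(n)
    x = np.zeros(n)
    r = S * (b - A @ x)
    z = S * (M_inv * r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = S * (A @ p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = S * (M_inv * r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

With all DOFs free this reduces to ordinary PCG; with some DOFs pinned it solves the free sub-system while leaving the pinned entries exactly zero.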
Physical simulation and rendering of cloth is widely used in 3D graphics applications to create realistic and compelling scenes. However, cloth animation can be slow to compute and difficult to specify. In this paper, we present a set of experiments in which we explore some factors that contribute to the perception of cloth, to determine how efficiency could be improved without sacrificing realism. Using real video footage of several fabrics covering a wide range of visual appearances and dynamic behaviors, and their simulated counterparts, we explore the interplay of visual appearance and dynamics in cloth animation.
Hybrid Lagrangian/Eulerian simulation is commonplace in computer graphics for fluids and other materials undergoing large deformation. In these methods, particles are used to resolve transport and topological change, while a background Eulerian grid is used for computing mechanical forces and collision responses. Particle-in-Cell (PIC) techniques, particularly the Fluid Implicit Particle (FLIP) variants, have become the norm in computer graphics calculations. While these approaches have proven very powerful, they do suffer from some well-known limitations. The original PIC is stable but highly dissipative, while FLIP, designed to remove this dissipation, is noisier and, at times, unstable. We present a novel technique designed to retain the stability of the original PIC without suffering from the noise and instability of FLIP. Our primary observation is that the dissipation in the original PIC results from a loss of information when transferring between grid and particle representations. We prevent this loss of information by augmenting each particle with a locally affine, rather than locally constant, description of the velocity. We show that this not only stably removes the dissipation of PIC, but also allows for exact conservation of angular momentum across the transfers between particles and grid.
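The locally affine augmentation can be illustrated with a one-dimensional particle-to-grid momentum transfer: each particle carries a velocity vp plus an affine coefficient cp, and contributes vp + cp·(xi − xp) to node i, so a linear velocity field is transferred exactly instead of being smeared out. A sketch with hat (linear) weights; the 1D setting and names are ours, not the paper's:

```python
import numpy as np

def p2g_apic_1d(xp, vp, cp, mp, grid_x, h):
    """Affine particle-to-grid momentum transfer in 1D with linear
    (hat) weights of support h. Illustrative sketch only."""
    m = np.zeros_like(grid_x)
    mom = np.zeros_like(grid_x)
    for p in range(len(xp)):
        for i, xi in enumerate(grid_x):
            w = max(0.0, 1.0 - abs(xi - xp[p]) / h)  # hat weight
            if w > 0.0:
                m[i] += w * mp[p]
                # affine term cp*(xi - xp) is what plain PIC drops
                mom[i] += w * mp[p] * (vp[p] + cp[p] * (xi - xp[p]))
    v = np.where(m > 0, mom / np.maximum(m, 1e-12), 0.0)
    return m, v
```

For a linear field v(x) = c·x, setting vp = c·xp and cp = c reproduces the grid velocities c·xi exactly at every node that receives mass, which is the information-preservation property the abstract describes.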
This paper presents a scalable implementation of the Asynchronous Contact Mechanics (ACM) algorithm, a reliable method to simulate flexible material subject to complex collisions and contact geometries. As an example, we apply ACM to cloth simulation for animation. The parallelization of ACM is challenging due to its highly irregular communication pattern, its need for dynamic load balancing, and its extremely fine-grained computations. We utilize CHARM++, an adaptive parallel runtime system, to address these challenges and show good strong scaling of ACM to 384 cores for problems with fewer than 100k vertices. By comparison, the previously published shared memory implementation only scales well to about 30 cores for the same examples. We demonstrate the scalability of our implementation through a number of examples which, to the best of our knowledge, are only feasible with the ACM algorithm. In particular, for a simulation of 3 seconds of a cylindrical rod twisting within a cloth sheet, the simulation time is reduced by 12× from 9 hours on 30 cores to 46 minutes using our implementation on 384 cores of a Cray XC30.
In this paper, we introduce a novel material point method for heat transport, melting and solidifying materials. This brings a wider range of material behaviors into reach of the already versatile material point method. This is in contrast to best-of-breed fluid, solid or rigid body solvers that are difficult to adapt to a wide range of materials. Extending the material point method requires several contributions. We introduce a dilational/deviatoric splitting of the constitutive model and show that an implicit treatment of the Eulerian evolution of the dilational part can be used to simulate arbitrarily incompressible materials. Furthermore, we show that this treatment reduces to a parabolic equation for moderate compressibility and an elliptic, Chorin-style projection at the incompressible limit. Since projections are naturally done on marker-and-cell (MAC) grids, we devise a staggered-grid MPM method. Lastly, to generate varying material parameters, we adapt a heat-equation solver to a material point framework.
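A dilational/deviatoric split of the kind referenced here separates a deformation gradient into a pure volume-change factor and a volume-preserving remainder, F = J^(1/d)·F_dev with det(F_dev) = 1. A minimal sketch of this decomposition (illustrative of the split in general, not the paper's exact formulation):

```python
import numpy as np

def dilational_deviatoric_split(F):
    """Split a d x d deformation gradient into its volume change J
    and a volume-preserving (deviatoric) part with unit determinant."""
    d = F.shape[0]
    J = np.linalg.det(F)          # dilational part: total volume change
    F_dev = J ** (-1.0 / d) * F   # rescaled so det(F_dev) == 1
    return J, F_dev
```

Treating the scalar J implicitly, independently of F_dev, is what lets the pressure response be stiffened toward incompressibility without stiffening the shear response.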
We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side-views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look-and-feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that even in comparison to ground-truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.
We develop an algorithm for the efficient and stable simulation of large-scale elastic rod assemblies. We observe that the time-integration step is severely restricted by a strong nonlinearity in the response of stretching modes to transversal impact, the degree of this nonlinearity varying greatly with the shape of the rod. Building on these observations, we propose a collision response algorithm that adapts its degree of nonlinearity. We illustrate the advantages of the resulting algorithm by analyzing simulations involving elastic rod assemblies of varying density and scale, with up to 1.7 million individual contacts per time step.
Snow is a challenging natural phenomenon to visually simulate. While the graphics community has previously considered accumulation and rendering of snow, animation of snow dynamics has not been fully addressed. Additionally, existing techniques for solids and fluids have difficulty producing convincing snow results. Specifically, wet or dense snow that has both solid- and fluid-like properties is difficult to handle. Consequently, this paper presents a novel snow simulation method utilizing a user-controllable elasto-plastic constitutive model integrated with a hybrid Eulerian/Lagrangian Material Point Method. The method is continuum-based and its hybrid nature allows us to use a regular Cartesian grid to automate treatment of self-collision and fracture. It also naturally allows us to derive a grid-based semi-implicit integration scheme that has conditioning independent of the number of Lagrangian particles. We demonstrate the power of our method with a variety of snow phenomena including complex character interactions.
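The elasto-plastic core of such snow models is often expressed as a yield step that clamps the singular values of the elastic deformation gradient into a user-controlled interval, pushing the excess deformation into the plastic part. A sketch of that plasticity step (the clamp interval defaults are illustrative values, not prescriptions from this abstract):

```python
import numpy as np

def snow_plasticity(F_e, F_p, theta_c=2.5e-2, theta_s=7.5e-3):
    """Clamp singular values of the elastic deformation gradient F_e
    into [1 - theta_c, 1 + theta_s]; deformation beyond the yield
    surface moves into the plastic part F_p so the total F = F_e @ F_p
    is unchanged. Illustrative sketch of the elasto-plastic update."""
    U, sig, Vt = np.linalg.svd(F_e)
    sig_clamped = np.clip(sig, 1.0 - theta_c, 1.0 + theta_s)
    F_e_new = U @ np.diag(sig_clamped) @ Vt
    F = F_e @ F_p                              # total deformation
    F_p_new = np.linalg.inv(F_e_new) @ F       # absorb the excess
    return F_e_new, F_p_new
```

The two thresholds are exactly the kind of user controls the abstract calls user-controllable: raising them makes the snow behave more elastically, lowering them makes it crumble and flow.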
Computation of bending forces on triangle meshes is required for numerous simulation and geometry processing applications. A common quantity in many bending models is the hinge angle between two adjacent triangles. This angle is straightforward to compute, and its gradient with respect to vertex positions (required for the forces) is easily found in the literature. However, its Hessian, which is required for efficient numerics (e.g., implicit time stepping, Newton-based energy minimization), is not documented in the literature. Readily available computations of the Hessian, such as those produced by symbolic algebra systems or by autodifferentiation codes, are expensive to compute. We present compact, easily reproducible, closed-form expressions for the Hessian. Compared to automatic differentiation, we measure up to 7× speedup for the evaluation of the bending forces and their gradients.
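For reference, the hinge angle itself is indeed straightforward: it is the signed dihedral angle across the shared edge of two triangles. A sketch of that quantity (a plain reference implementation of what the paper differentiates, not the paper's closed-form derivative expressions):

```python
import numpy as np

def hinge_angle(x0, x1, x2, x3):
    """Signed dihedral angle across the shared edge (x0, x1) between
    triangles (x0, x1, x2) and (x1, x0, x3). Zero when the two
    triangles are coplanar; sign follows the edge orientation."""
    e = x1 - x0
    n1 = np.cross(e, x2 - x0)          # normal of the first triangle
    n2 = np.cross(x3 - x0, e)          # normal of the second triangle
    sin_t = np.dot(np.cross(n1, n2), e / np.linalg.norm(e))
    cos_t = np.dot(n1, n2)
    return np.arctan2(sin_t, cos_t)    # robust for all fold angles
```

Using atan2 of the sine and cosine terms (rather than acos of normalized normals) keeps the angle well-behaved near flat and near fully folded configurations.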
Force-deformation measurements of cloth exhibit significant hysteresis, and many researchers have identified internal friction as the source of this effect. However, it has not been incorporated into computer animation models of cloth. In this paper, we propose a model of internal friction based on an augmented reparameterization of Dahl’s model, and we show that this model provides a good match to several important features of cloth hysteresis even with a minimal set of parameters. We also propose novel parameter estimation procedures that are based on simple and inexpensive setups and need only sparse data, as opposed to the complex hardware and dense data acquisition of previous methods. Finally, we provide an algorithm for the efficient simulation of internal friction, and we demonstrate it on simulation examples that show disparate behavior with and without internal friction.
We present a new stereoscopic compositing technique that combines volumetric output from several stereo camera rigs. Unlike previous multi-rigging techniques, our approach does not require objects rendered with different stereo parameters to be clearly separable to prevent visual discontinuities. We accomplish this by casting not straight rays (aligned with a single viewing direction) but curved rays, which results in a smooth blend between the viewing parameters of the stereo rigs in a user-defined transition area. Our technique offers two alternative methods for defining the shapes of the cast rays. The first method avoids depth distortion in the transition area by guaranteeing monotonic behavior of the stereoscopic disparity function, while the second gives the user artistic control over the influence of each rig in the transition area. To ensure practical usability, we efficiently solve key performance issues in the ray casting (e.g., locating cell-ray intersections and traversing rays within a cell) with a highly parallelizable quadtree-based spatial data structure, constructed in the parameterized curvilinear space to match the shape definition of the cast rays.
Ray-traced global illumination is becoming widespread. However, incoherent ray traversal and shading have traditionally limited ray tracing to scenes that fit in memory. To combat these issues, we introduce a sorting strategy for large, potentially out-of-core ray batches, and we sort and defer shading of ray hits. As a result, we achieve perfectly coherent shading and texture access, removing the need for a shading cache.
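The deferred-shading idea can be sketched simply: instead of shading each hit as it arrives, buffer the hits and sort them by a shading key (e.g., material, then texture), so all hits that touch the same shader and texture are processed together and texture access becomes coherent. The tuple layout and key choice below are our own illustration, not the paper's data structures:

```python
def deferred_shade(hits, shade):
    """Sort a batch of ray hits by (material_id, texture_id) before
    shading, then scatter results back by ray id. `hits` are
    (ray_id, material_id, texture_id, payload) tuples."""
    order = sorted(range(len(hits)), key=lambda i: (hits[i][1], hits[i][2]))
    results = [None] * len(hits)
    for i in order:
        ray_id, mat, tex, payload = hits[i]
        results[ray_id] = shade(mat, tex, payload)
    return results
```

Because every texture is streamed through exactly once per batch, in key order, there is no need to keep a shading cache resident.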
We extend the Asynchronous Contact Mechanics algorithm [Harmon et al. 2009] and improve its performance by two orders of magnitude, using only optimizations that do not compromise ACM’s three guarantees of safety, progress, and correctness. The key to this speedup is replacing ACM’s timid, forward-looking mechanism for detecting collisions—locating and rescheduling separating plane kinetic data structures—with an optimistic speculative method inspired by Mirtich’s rigid body Time Warp algorithm.
We present Smart Scribbles—a new scribble-based interface for user-guided segmentation of digital sketchy drawings. In contrast to previous approaches based on simple selection strategies, Smart Scribbles exploits richer geometric and temporal information, resulting in a more intuitive segmentation interface. We introduce a novel energy minimization formulation in which both geometric and temporal information from digital input devices is used to define stroke-to-stroke and scribble-to-stroke relationships. Although the minimization of this energy is, in general, an NP-hard problem, we use a simple heuristic that leads to a good approximation and permits an interactive system able to produce accurate labelings even for cluttered sketchy drawings. We demonstrate the power of our technique in several practical scenarios such as sketch editing, as-rigid-as-possible deformation and registration, and on-the-fly labeling based on pre-classified guidelines.
We present a simple generalized impact model motivated by both the successes and pitfalls of two popular approaches: pair-wise propagation and linear complementarity models. Our algorithm is the first to satisfy all identified desiderata, including simultaneously guaranteeing symmetry preservation, kinetic energy conservation, and allowing break-away.
Vectorization provides a link between raster scans of pencil-and-paper drawings and modern digital processing algorithms that require accurate vector representations. We propose a vectorization algorithm specialized for clean line drawings that analyzes the drawing's topology in order to overcome junction ambiguities.
The principled, faithful simulation of complex collisions for deformable objects, such as cloth and other flexible materials, remains an open, challenging, and important problem. We propose to place safety and correctness on an equal footing with progress. To overcome the fundamental opposition between these requirements, we turn to asynchronous integration, which integrates each geometric element of a discrete shape (e.g., the stretching resistance of cloth defined across a triangle) at its own pace, not in lockstep with the entire object.
We propose a technique to control the temporal noise present in sketchy animations. Given an input animation drawn digitally, our approach works by combining motion extraction and inbetweening techniques to generate a reduced-noise sketchy animation registered to the input animation. The amount of noise is then controlled by a continuous parameter value. Our method can be applied to effectively reduce the temporal noise present in sequences of sketches to a desired rate, while preserving the geometric richness of the sketchy style in each frame. This provides the manipulation of temporal noise as an additional artistic parameter to emphasize character emotions and scene atmosphere, and enables the display of sketchy content to broader audiences by producing animations with comfortable noise levels. We demonstrate the effectiveness of our approach on a series of rough hand-drawn animations.
We present a novel and practical texture mapping algorithm for hand-drawn cartoons that allows the production of visually rich animations with minimal user effort. Unlike previous techniques, our approach works entirely in the 2D domain and does not require the knowledge or creation of a 3D proxy model. Inspired by the fact that the human visual system tends to focus on the most salient features of a scene (which, for hand-drawn cartoons, we observe to be the contours rather than the interiors of regions), we can create the illusion of temporally coherent animation using only rough 2D image registration. This key observation allows us to design a simple yet effective algorithm that significantly reduces the amount of manual labor required to add visually complex detail to an animation, thus enabling efficient cartoon texturing for computer-assisted animation production pipelines. We demonstrate our technique on a variety of input animations as well as provide examples of post-processing operations that can be applied to simulate 3D-like effects entirely in the 2D domain.
We present a new algorithm for near-interactive simulation of skeleton driven, high resolution elasticity models. Our methodology is used for soft tissue deformation in character animation. The algorithm targets performance through parallelism using a fully vectorized and branch-free SVD algorithm as well as a stable one-point quadrature scheme on a hexahedral grid.
This paper addresses the problem of unintended light contributions due to physical properties of display systems. We propose an automatic, perceptually-based computational compensation framework, which formulates pollution elimination as a minimization problem. Our method aims to distribute the error introduced by the pollution in a perceptually optimal manner.
We present a method for generating art-directable volumetric effects, ranging from physically-accurate to non-physical results. Our system mimics the way experienced artists think about volumetric effects by using an intuitive lighting primitive, and decoupling the modeling and shading of this primitive. We integrate our approach into a real-world production pipeline and couple our volumetric effects to surface shading.
Ball-morphs use the automatic ball-map correspondence proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves at the same angle, which is a right angle for the circular and parabolic variants. We provide simple constructions for these ball-morphs and compare them to each other and to other simple morphs.
We present a novel parallel algorithm to animate the deformation of a soft body in response to collision. The algorithm incorporates elements of physically-based methods while, at the same time, allowing artistic control of general deformation behavior. The proposed solver has important benefits for practical use, such as evaluation of animation frames in an arbitrary order and effective approximation of volume preservation.
The generation of inbetween frames that interpolate a given set of key frames is a major component in the production of a 2D feature animation. Our objective is to considerably reduce the cost of the inbetweening phase by offering an intuitive and effective interactive environment that automates inbetweening when possible while allowing the artist to guide, complement, or override the results. Tight inbetweens, which interpolate similar key frames, are particularly time-consuming and tedious to draw. Therefore, we focus on automating these high-precision and expensive portions of the process. We have designed a set of user-guided semi-automatic techniques that fit well with current practice and minimize the number of required artist-gestures. We present a novel technique for stroke interpolation from only two keys which combines a stroke motion constructed from logarithmic spiral vertex trajectories with a stroke deformation based on curvature averaging and twisting warps. We discuss our system in the context of a feature animation production environment and evaluate our approach with real production data.
This paper introduces a novel approach for creating an art-directable hair shading model from existing physically based models. Through an informal user study we show that this system is easier to use than existing systems. In practice, the new approach has been integrated into our production pipeline and is being used in the production of the upcoming feature film Tangled.
We present a novel algorithm for deforming a locally smooth polygonal mesh by sliding its vertices over the surface. This sliding deformation creates the visual appearance of texture animation without requiring an explicit global surface parameterization or the overhead of storing texture coordinates.
While designing the stereoscopic conversion process for Beauty and the Beast 3D, the engineering team at Walt Disney Animation Studios quickly recognized the benefit of desk-side 3D viewing for 2D-to-3D conversion artists. This paper outlines the technical and creative requirements of the project that supported that opinion, along with the criteria established to analyze available solutions. The evolution of internal prototypes as well as third-party devices is then explored, along with a description of the final choices made and wish lists for future development.
We build upon work from classical fluid mechanics to design an algorithm that allows us to accurately precompute the turbulence being generated around an object immersed in a flow. This is made possible by modeling turbulence formation based on an averaged flow field, and relying on universal laws describing the flow near a wall.
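One classical "universal law" of near-wall flow alluded to here is the logarithmic law of the wall, which gives the mean streamwise velocity as a function of distance from the wall. A sketch using standard fluid-mechanics notation (the function name and parameters are the textbook formulation, not this paper's API):

```python
import math

def log_law_velocity(y, u_tau, y0, kappa=0.41):
    """Logarithmic law of the wall: mean streamwise velocity at wall
    distance y, given friction velocity u_tau, roughness length y0,
    and the von Karman constant kappa (~0.41)."""
    return (u_tau / kappa) * math.log(y / y0)
```

Relying on such laws is what allows the turbulence around the object to be characterized from an averaged flow field rather than a fully resolved one.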
We develop a method for reliable simulation of elastica in complex contact scenarios. Our focus is on firmly establishing three parameter-independent guarantees: that simulations of well-posed problems (a) have no interpenetrations, (b) obey causality, momentum and energy conservation laws, and (c) complete in finite time.
This paper presents a hybrid Eulerian/Lagrangian approach to handling both self and body collisions with hair efficiently while still maintaining detail.
We present an algorithm for robust and efficient contact handling of deformable objects. By being aware of the internal dynamics of the colliding objects, our algorithm provides smooth rolling and sliding, stable stacking, robust impact handling, and seamless coupling of heterogeneous objects, all in a unified manner.
This paper presents a novel mesh fairing method to remove unwanted geometric artifacts such as dents. The key element of the proposed method is our unique algorithm for the assignment of weights in the discrete Laplacian.
We prefilter occlusion of aggregate geometry, e.g., foliage or hair, storing local occlusion as a directional opacity in each node of a bounding volume hierarchy (BVH). During intersection, we terminate rays early at BVH nodes based on ray differential, and composite the stored opacities.
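Early termination against stored opacities amounts to front-to-back alpha compositing along the ray, stopping once the accumulated opacity crosses a cutoff. A sketch of that accumulation loop (the iterator of per-node opacities and the cutoff value are our own illustration):

```python
def trace_with_opacity(nodes_along_ray, cutoff=0.99):
    """Front-to-back compositing of prefiltered opacities stored at BVH
    nodes; terminates the ray early once accumulated opacity reaches
    the cutoff. `nodes_along_ray` yields opacities in [0, 1]."""
    alpha = 0.0
    for node_opacity in nodes_along_ray:
        alpha += (1.0 - alpha) * node_opacity  # over-compositing
        if alpha >= cutoff:
            break                              # ray effectively blocked
    return min(alpha, 1.0)
```

Because aggregate geometry like foliage saturates quickly, most occlusion rays terminate after visiting only a few coarse nodes instead of intersecting individual leaves or hairs.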
Robust treatment of complex collisions is a challenging problem in cloth simulation. We present a fail-safe that cancels impact but not sliding motion. This reduces artificial dissipation considerably. We equip the proposed fail-safe with an approximation of Coulomb friction, allowing finer control of sliding dissipation.
Old cinema dead, digital cinema now, hybrid theatre future.
We describe algorithms for canonically partitioning semi-regular quadrilateral meshes into structured submeshes, using an adaptation of the geometric motorcycle graph of Eppstein and Erickson to quad meshes.
We study approximate topological matching of quadrilateral meshes; that is, the problem of finding as large a set as possible of matching portions of two quadrilateral meshes.
We propose a new texture mapping method for Catmull-Clark subdivision surfaces that requires no explicit parameterization. Our method, Ptex, stores a separate texture per quad face of the subdivision control mesh, along with a novel per-face adjacency map, in a single texture file per surface.
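The core data layout can be sketched as a list of small per-face texel grids plus, for each face, the ids of its four edge neighbors. This is a toy illustration of the storage idea only; the class, fields, and nearest-texel lookup below are not the actual Ptex file format or API:

```python
class FaceTexture:
    """Minimal sketch of per-face texture storage: one texel grid per
    quad face, plus a per-face adjacency map naming the neighboring
    face across each of the four edges (-1 on a boundary)."""
    def __init__(self):
        self.faces = []        # per-face texel grids (rows of texels)
        self.adjacency = []    # per-face: 4 neighbor face ids

    def add_face(self, texels, neighbors):
        assert len(neighbors) == 4
        self.faces.append(texels)
        self.adjacency.append(list(neighbors))
        return len(self.faces) - 1

    def sample(self, face_id, u, v):
        # nearest-texel lookup within a single face's grid;
        # real filtering would use adjacency to reach across edges
        tex = self.faces[face_id]
        i = min(int(u * len(tex)), len(tex) - 1)
        j = min(int(v * len(tex[0])), len(tex[0]) - 1)
        return tex[i][j]
```

The adjacency map is what enables seamless filtering: a filter kernel that crosses a face edge can fetch texels from the neighboring face without any global UV parameterization.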
We present a new approach to accelerate collision detection for deformable models. Our formulation applies to all triangulated models and significantly reduces the number of elementary tests between features of the mesh, i.e., vertices, edges and faces. We introduce the notion of Representative-Triangles, and use this representation to achieve better collision query performance.
We extend the eigenbasis method of Jos Stam to evaluate Catmull-Clark subdivision surfaces near extraordinary vertices on B-spline boundaries.
We present a novel algorithm for accurately detecting all contacts, including self-collisions, between deformable models. We precompute a chromatic decomposition of a mesh into non-adjacent primitives using graph coloring algorithms. This enables us to check for collisions between non-adjacent primitives using a linear-time culling algorithm.
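A chromatic decomposition of this kind can be computed with ordinary greedy graph coloring over the primitive-adjacency graph: primitives assigned the same color are guaranteed mutually non-adjacent, so collision checks within one color class never need adjacency filtering. A generic sketch of the coloring step (not the paper's specific algorithm):

```python
def greedy_coloring(adjacency):
    """Greedy graph coloring, highest-degree-first. adjacency[i] lists
    the primitives sharing a vertex or edge with primitive i; the
    returned dict maps each primitive to a color such that adjacent
    primitives never share a color."""
    colors = {}
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        used = {colors[nb] for nb in adjacency[node] if nb in colors}
        c = 0
        while c in used:          # smallest color not used by neighbors
            c += 1
        colors[node] = c
    return colors
```

Since the mesh topology is fixed, the coloring is computed once as a preprocess and reused every frame of the simulation.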
We present a method for applying complex textures to hand-drawn characters in cel animation. The method correlates features in a simple, textured, 3-D model with features on a hand-drawn figure, and then distorts the model to conform to the hand-drawn artwork.
The approach is motivated by a traditional technique used in 2D cel animation, in which a single background image, which we call a multiperspective panorama, is used to incorporate multiple views of a 3D environment as seen from along a given camera path. In this paper, we explore how such backdrops can be created from 3D models and camera paths.
We present a technique for rendering animations in a painterly style. The difficulty in using existing still-frame methods for animation is getting the paint to “stick” to surfaces rather than randomly change with each frame, while still retaining a hand-crafted look.
Subsurface scattering rendered using brute-force volumetric path tracing looks more natural and is also more robust compared to popular diffusion-based approximations. This talk proposes a reparameterization formula that allows artists to use the production-proven intuitive controls, and it introduces a sampling scheme to make the formula practical for production rendering.
We present a novel hair reflectance model that is energy-conserving, efficient, and easy to control, and that reproduces a wide range of hair and fur with just a few intuitive parameters. The model is implemented in a production path tracer and has achieved unprecedented character richness.
When we began designing the new queueing system for Walt Disney Animation Studios, we had a simple goal in mind: take some shell commands, run them on some remote machines, and return the results. What we thought would be a six-month project evolved into an eight-year journey that helped revolutionize the culture at our studio and built a world-class queueing system capable of executing millions of tasks per day. Three key features of Coda (automated render wrangling, localized job priority, and advanced desktop rendering) have helped to encourage a culture of trust and collaboration, both amongst the different shows competing for queue resources and between the production and technology organizations in the studio.
To realize the lush world of Zootopia, we extended our existing toolset for procedural vegetation to support a wide array of plant species, implemented various instancing schemes to handle the increasing geometric complexity, and improved our tools and workflows around animation to bring to life a vibrant, living world.
For the ice cream effects of Zootopia, we developed non-simulation-based workflows, using 2D drawovers, deformers, and textures to make delicious-looking ice cream effects. The 2D drawovers gave us a solid vision of the effects we needed to create and helped us design and optimize our toolset. Our customized deformers let us manipulate the geometry directly, producing the ice cream rolling and dropping effects as designed. Final detailed features were accomplished with procedural and painted textures. These non-simulation approaches allowed us not only to deliver strong performances but also to accomplish our creative goals for the ice cream effects.
The geometric complexity required to create a city filled with furry animals in Walt Disney Animation Studios' Zootopia necessitated a new approach to level-of-detail for our in-house primitive generator, XGen. The characters spanned the breadth of the animal kingdom, from a mouse to an elephant, and each presented challenges with scale and fur quality. Early test scenes proved unmanageable to render with even a few characters and we knew some sequences called for thousands of them. To address this, we updated XGen's underlying pruning algorithm, refreshed the user interface, and developed a new wedge rendering tool designed to help streamline the fine-tuning of level-of-detail settings for optimal efficiency.
The world of "Zootopia" is a vast, multi-scale, multi-climate city that represents the biomes of the tundra, desert, rainforest, and grasslands. It is also a place where humans never existed and animals have evolved to be anthropomorphic, clothed city-dwellers. Walt Disney Animation Studios pushed the artistic boundaries of all aspects of the production process to create vastly different environments filled with hundreds of unique and detailed characters. The scale difference of the animals, from a mouse to a giraffe (95:1), required us to design a city that could be inhabited by characters of all shapes and sizes, while depicting thousands of furred and clothed animals moving through landscapes filled with grass and trees required innovative solutions to efficiently represent the scope and detail needed to bring this fully realized world to life.
We present the latest character simulation techniques developed for Disney’s Zootopia. In this film, we created herds of anthropomorphic mammals whose art direction called for the subtle, detailed motion distinctive to the real animal world coupled with the stylized, non-physical aesthetics characteristic of animated feature films. This required a strong partnership between technology and production to productize our flesh simulation research to meet the unique challenges of this show. Our techniques scaled from several hero characters to many secondary and tertiary characters, and also accommodated two characters with special requirements.
On Disney's Big Hero 6, we needed to create the city of San Fransokyo with unparalleled levels of visual complexity. The cityscape has more buildings and more geometry than any prior Disney film. Inhabiting this city are hundreds of unique characters, each performing high-caliber animation individually and as a group. These challenges prompted a major upgrade to our existing crowd pipeline and the development of several new technologies for authoring crowd characters, generating crowd animation cycles, and instancing crowds for rendering.
This talk presents a very efficient and directable procedural technique for explosive dynamic effects that can both substitute for and interface with traditional dynamic-calculation techniques.
In this talk we present our process of modeling, lighting, and rendering an environment constructed entirely of animated, three-dimensional volumetric fractals for the climactic sequence of “Big Hero 6”.
Models are fundamental to any digital production. Ranging from main characters to secondary props, they must always satisfy aesthetic and technical requirements. Consequently, tools for creating production models are essential in digital movie making, and Disney Animation has invested heavily in proprietary modeling software. Here we give an overview of our main modeling tools.
The winter wonderland of Disney’s feature animation Frozen presented several unique technical and creative challenges for the Character Simulation team. One of the most prominent was capturing the interaction of wind with the cloth and hair simulations. Meeting this challenge involved low-level changes to the simulation software, the creation of custom fields and visualizers, and the integration of "windicator" and wind gust rigs into the shot production pipeline. These three primary components gave the artists a tool set that allowed them to hit the desired art direction.
In this talk we will describe the hair pipeline utilized on Disney’s most recent full-length animated feature, Frozen. Producing intricate hairstyles is a challenging problem, spanning many departments. We focus on the generation of the hair groom and motion. This process starts by producing the groom, guided by 2D artwork from visual development and 3D proxies from modeling. We have developed a new intuitive interactive grooming tool, Tonic, which uses geometric volumes to procedurally groom the hairstyle. Once the hair volumes are sculpted, Tonic generates a set of guide curves within each Tonic hair tube. These tubes and guide curves are then passed to simulation, which produces motion for a subset of the guide curves. The motion is controlled using an animation rig and a two-level simulation rig, with the underlying dynamics calculated using our in-house solver. This motion is then mapped onto the full set of guide curves. In technical animation, cleanup and fine-tuning of the motion is done on a per-shot basis. Finally, the guide curves are interpolated and extra detail added using XGen, to produce the final set of curves sent to rendering. With this new workflow and toolset, the artists were able to create the nearly 50 unique hairstyles on Frozen.
Facial rigging is the process of adding controls to a face for animating facial expressions. These controls are commonly bound to either deformers or blendshapes, both of which modify the face’s shape, scale, or orientation. Facial rigs created with complex deformers are intuitive, yet commonly slow and less tunable. Facial rigs created with blendshapes are fast, yet memory intensive and sensitive to model changes. This paper introduces a rapid, deformer-based approach to facial rigging using only skin clusters, (in-house) wire deformers, and pose space deformation (PSD).
We present the innovative techniques that brought to life an audio-animatronic Lumiere for Disney World’s newest attraction, Enchanted Tales With Belle. Walt Disney Imagineering and Walt Disney Animation Studios collaborated to create a pipeline that was flexible and intuitive for feature film animators. First, motion was digitally choreographed on the computer with a virtual rig. Next, the software detected constraints and motion limitations that would exist on the physical audio-animatronic. This allowed the artist to resolve them and animate physically plausible motions. Finally, the performance was transferred to the audio-animatronic, faithfully recreating the artist-produced motion using motors and machinery.
Dual Quaternion Skinning (DQS) is an advanced rigging technique that binds a mesh to skeletal joints. Unlike the popular alternative, Linear Blend Skinning (LBS), DQS avoids the undesirable “candy-wrapper effect” and effectively simulates volume preservation. DQS is a powerful technique, but to get desirable results it must be extended to meet the needs of production environments, and it is therefore not a simple drop-in replacement for LBS. This paper presents an extension to DQS that successfully met the rigging requirements of Disney’s feature Frozen. In particular, DQS is combined with LBS to handle non-rigid transformations, hierarchies of differing joints, and arbitrary support joints.
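The core mechanism behind DQS can be illustrated with a minimal, self-contained sketch in pure Python (an assumption for illustration, not Disney's implementation): each joint transform becomes a unit dual quaternion, weighted contributions are summed with a hemisphere sign correction, and the normalized result transforms the vertex as a screw motion. This is what avoids the candy-wrapper collapse that linear matrix blending produces.

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions (w, x, y, z)
    aw, ax, ay, az = a; bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def dq_from_rt(q, t):
    # Build a unit dual quaternion from rotation q and translation t.
    # Dual part encodes the translation: d = 0.5 * (0, t) * q
    return q, tuple(0.5 * c for c in qmul((0.0,) + tuple(t), q))

def dq_blend(dqs, weights):
    # Weighted sum with hemisphere correction, then joint normalization.
    ref = dqs[0][0]
    real, dual = [0.0]*4, [0.0]*4
    for (q, d), w in zip(dqs, weights):
        if sum(a*b for a, b in zip(q, ref)) < 0.0:
            w = -w  # flip sign so all quaternions lie in the same hemisphere
        for i in range(4):
            real[i] += w*q[i]; dual[i] += w*d[i]
    n = math.sqrt(sum(c*c for c in real))
    return tuple(c/n for c in real), tuple(c/n for c in dual)

def dq_transform(dq, p):
    # Apply the dual quaternion to a point: rotate, then translate.
    q, d = dq
    rp = qmul(qmul(q, (0.0,) + tuple(p)), qconj(q))[1:]
    t = tuple(2.0*c for c in qmul(d, qconj(q))[1:])  # t = 2 * d * conj(q)
    return tuple(a + b for a, b in zip(rp, t))
```

Blending the identity with a 90-degree twist at equal weights yields a clean 45-degree rotation of the vertex, rather than the scaled-down result matrix averaging would give.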
We present dRig, Disney Animation's novel approach to rigging that allows for efficient reuse and extension of assets, fast authoring of per-element variations, and accessibility of rig code through a proprietary language and user interface.
In these course notes we describe the development of a new BRDF model used on Wreck-It Ralph and subsequent productions at Walt Disney Animation Studios. We begin with observations from studying measured materials along with insights we've gleaned about which models fit the measured data and where they fall short. We then present our new model, describe our experience of adopting this new model in production, and discuss how we were able to add the right level of artistic control while preserving simplicity and robustness.
We present a system which allows animators to combine CG animation's strengths -- temporal coherence, spatial stability, and precise control -- with traditional animation's expressive and pleasing line-based aesthetic. Our process begins as an ordinary 3D CG animation, but later steps occur in a light-weight and responsive 2D environment, where an artist can draw lines which the system can then automatically move through time using vector fields derived from the 3D animation, thereby maximizing the benefits of both environments. Unlike with an automated "toon-shader", the final look was directly in the hands of the artists in a familiar workflow, allowing their artistry and creative power to be fully utilized. This process was developed during production of the short film Paperman at Walt Disney Animation Studios, but its application is extensible to other styles of animation as well.
We implemented the recent Photon Beams algorithm in Photorealistic RenderMan to efficiently render artistically-directed volumetric lighting effects for the feature-length animated movie Tangled. With the knowledge that most fall-off functions defined by our artists would be polynomial-smooth, we use Gaussian Quadrature to accurately and efficiently estimate the lighting contribution of these camera-containing beams.
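The quadrature idea exploited above is that Gauss-Legendre rules integrate low-degree polynomials exactly, so a smooth polynomial fall-off along a beam needs only a handful of samples. Here is a minimal sketch (the specific `falloff` function and names are hypothetical, not the production code); a 3-point rule is exact for polynomials up to degree five.

```python
import math

# 3-point Gauss-Legendre nodes/weights on [-1, 1]; exact for degree <= 5
NODES = (-math.sqrt(3.0/5.0), 0.0, math.sqrt(3.0/5.0))
WEIGHTS = (5.0/9.0, 8.0/9.0, 5.0/9.0)

def gauss_legendre(f, a, b):
    """Estimate the integral of f over [a, b] with 3-point Gauss-Legendre."""
    mid, half = 0.5*(a + b), 0.5*(b - a)
    return half * sum(w * f(mid + half*x) for x, w in zip(NODES, WEIGHTS))

# Hypothetical smooth polynomial fall-off along a beam segment
falloff = lambda t: (1.0 - t)**2
```

With only three evaluations, `gauss_legendre(falloff, 0.0, 1.0)` recovers the exact integral 1/3, which is why polynomial-smooth artist fall-offs make this estimator both accurate and cheap.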
We describe a hybrid approach leveraging the power of custom hair dynamics with the artistic control of key-framed animation that was key to the success of directing hair motion on Tangled.
Much like the character animation in Tangled, the goal of the dam break sequence was to bring classic Disney 2D sensibilities to CG effects. Hand-drawn effects animation in films such as Pinocchio and Fantasia served as inspiration. The water shapes drawn in these films were very stylized yet conveyed recognizable forms of nature. The concept was to emulate these shapes and then enhance them with the modern benefits of CG rendering such as ray traced reflections and ambient occlusion.
A point-based representation was used extensively on "Tangled" to generate occlusion and indirect illumination involving the characters' hair.
This paper presents a hybrid approach to facial rigging that uses Pose Space Deformation (PSD) to seamlessly combine both geometric deformations and blendshapes.
This paper presents an automated means to select targets for DrivenShape. These targets enable DrivenShape to produce the lowest error in common contexts.
Look development on Walt Disney's Tangled called for artists to paint hundreds of organic elements with high resolution textures on a tight schedule. We found that example-based texture synthesis, in which an artist paints a small exemplar of the desired pattern that the system then synthesizes over arbitrary surfaces, would alleviate some of the burden. In this talk we describe how we adapted existing synthesis methods to our Ptex-based workflow, scaled them to production size, and exposed them to artists through a one-click interface.
We designed a system of authoring trees based around a language of hierarchical curves. Our system lets artists interactively sketch out a base skeleton representation of a tree and grow procedural twigs and leaves out to a canopy shell by tweaking a limited number of parameters.
Prep and Landing had multiple snow variants in a large number of shots, ranging from gentle falling snow outside windows to near blizzard-like conditions. Snowfall was necessary to help the world the characters inhabited feel believable. Managing the workflow and complexity involved in creating snow variety was the challenge.
For the creation of brilliant light displays, flickering control tower buttons, and vibrant computer monitors, the Effects Department's goal was to build a motion graphics pipeline capable of running nearly unattended, all while maintaining the flexibility of downstream artist input should problems arise.
We present several key techniques used for simulating Rapunzel's 70 feet of hair for the animated feature “Tangled”; these techniques range from methods to improve the run-time efficiency of the simulations to achieving the desired art direction of the hair.
Pose Space Deformation (PSD) [Lewis2000] is a shape interpolation technique for animation. This paper presents some practical experience with PSD acquired while creating the film "BOLT."
We suggest a method for artists to better understand RBF behavior through visualization and to evaluate RBFs against the requirements of a production environment.
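As background for the kind of RBF behavior such a tool would visualize, here is a minimal Gaussian-RBF interpolator in pure Python; the kernel choice, the `eps` shape parameter, and all names are assumptions for this sketch, not the studio's tooling. Fitting solves a small dense linear system so that the weighted kernel sum passes exactly through each training pose.

```python
import math

def rbf_fit(centers, values, eps=1.0):
    """Solve for weights so the Gaussian RBF interpolates (center, value) pairs."""
    n = len(centers)
    phi = lambda r: math.exp(-(eps*r)**2)
    A = [[phi(abs(centers[i] - centers[j])) for j in range(n)] for i in range(n)]
    b = list(values)
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= f * A[col][k]
            b[row] -= f * b[col]
    # Back substitution
    w = [0.0] * n
    for row in range(n - 1, -1, -1):
        w[row] = (b[row] - sum(A[row][k] * w[k] for k in range(row + 1, n))) / A[row][row]
    return w

def rbf_eval(x, centers, w, eps=1.0):
    """Evaluate the fitted RBF at x."""
    return sum(wi * math.exp(-(eps*abs(x - c))**2) for wi, c in zip(w, centers))
```

Sweeping `x` between the centers and plotting `rbf_eval` is exactly the kind of visualization that reveals overshoot or flat spots, the behaviors an artist needs to see before trusting an RBF-driven rig.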
Rendering hair and simulating refraction, when performed separately, are both time and memory intensive. To overcome these problems, a process was developed for "Bolt" which involved exporting 3D data from the render stage so that the calculation of the refraction could be delayed until the composite stage.
To perform the stereoscopic conversion of Disney’s Beauty and the Beast, we developed novel extensions to standard medial axis techniques.
We focused on fundamental ideas such as massing, a term in painting which refers to the process of editing detail into bigger shapes, and also edge quality, the use of the painter's brush to vary edges of shapes which can bring emphasis to the image and/or direct the eye. This led to the development of new algorithms and tools for the film "Bolt".
In Disney’s Bolt, the character of Rhino poses many technical challenges. He spends a majority of the movie inside a plastic ball, frequently contacts the ground surface, and presents a complicated skinning problem. Innovative tools and technology were developed to solve these issues for the production.
Our anatomically motivated method for producing realistic, convincing deformations of the skin and flesh surrounding the eye is unique, not only for the novel approaches employed, but also because it is entirely procedural.
iBind smoothly deforms vertices using a control cage by uniquely leveraging heat diffusion on closed, thin layers across a structured set of mean value coordinates. Dynamic rebinding remains useful and is supported, but for most cases of character articulation a static binding option is used.
We designed a system to facilitate the modeling of cracked and shattered objects, enabling the automatic generation of a large number of fragments while retaining the flexibility to artistically control the density and complexity of the crack formation.