SCA: Eurographics/SIGGRAPH Symposium on Computer Animation
Now showing 1 - 20 of 558
Item: Dynapack: Space-Time compression of the 3D animations of triangle meshes with fixed connectivity (The Eurographics Association, 2003). Ibarria, Lawrence; Rossignac, Jarek. Editors: D. Breen and M. Lin.
Dynapack exploits space-time coherence to compress the consecutive frames of 3D animations of triangle meshes with constant connectivity. Instead of compressing each frame independently (space-only compression) or compressing the trajectory of each vertex independently (time-only compression), we predict the position of each vertex v of frame f from three of its neighbors in frame f and from the positions of v and of these neighbors in the previous frame (space-time compression). We introduce two extrapolating space-time predictors: the ELP extension of the Lorenzo predictor, developed originally for compressing regularly sampled 4D data sets, and the Replica predictor. ELP can be computed using only additions and subtractions of points and is a perfect predictor for portions of the animation undergoing pure translation. The Replica predictor is slightly more expensive to compute, but is a perfect predictor for arbitrary combinations of translations, rotations, and uniform scaling. For the typical 3D animations we have compressed, the corrections between the actual and predicted vertex coordinates can be compressed with entropy coding down to an average of between 1.37 and 2.91 bits per coordinate, when the quantization used ranges between 7 and 13 bits. In comparison, space-only compression yields a range of 1.90 to 7.19 bits per coordinate, and time-only compression yields a range of 1.77 to 6.91 bits per coordinate. The implementation of Dynapack compression and decompression is trivial and extremely fast. It performs a sweep through the animation, accessing only two consecutive frames at a time.
Therefore, it is particularly well suited for real-time and out-of-core compression, and for streaming decompression.

Item: Vision-based Control of 3D Facial Animation (The Eurographics Association, 2003). Chai, Jin-xiang; Xiao, Jing; Hodgins, Jessica. Editors: D. Breen and M. Lin.
Controlling and animating the facial expression of a computer-generated 3D character is a difficult problem because the face has many degrees of freedom while most available input devices have few. In this paper, we show that a rich set of lifelike facial actions can be created from a preprocessed motion capture database and that a user can control these actions by acting out the desired motions in front of a video camera. We develop a real-time facial tracking system to extract a small set of animation control parameters from video. Because of the nature of video data, these parameters may be noisy, low-resolution, and contain errors. The system uses the knowledge embedded in motion capture data to translate these low-quality 2D animation control signals into high-quality 3D facial expressions. To adapt the synthesized motion to a new character model, we introduce an efficient expression retargeting technique whose run-time computation is constant, independent of the complexity of the character model. We demonstrate the power of this approach through two users who control and animate a wide range of 3D facial expressions of different avatars.

Item: A Sketching Interface for Articulated Figure Animation (The Eurographics Association, 2003). Davis, James; Agrawala, Maneesh; Chuang, Erika; Popovic, Zoran; Salesin, David. Editors: D. Breen and M. Lin.
We introduce a new interface for rapidly creating 3D articulated figure animation from 2D sketches of the character in the desired key frame poses.
Since the exact 3D animation corresponding to a set of 2D drawings is ambiguous, we first reconstruct the possible 3D configurations and then apply a set of constraints and assumptions to present the user with the most likely 3D pose. The user can refine this candidate pose by choosing among alternate poses proposed by the system. This interface is supported by pose reconstruction and optimization methods specifically designed to work with imprecise hand-drawn figures. Our system provides a simple, intuitive, and fast interface for creating rough animations that leverages our users' existing ability to draw. The resulting key-framed sequence can be exported to commercial animation packages for interpolation and additional refinement.

Item: FootSee: an Interactive Animation System (The Eurographics Association, 2003). Yin, KangKang; Pai, Dinesh K. Editors: D. Breen and M. Lin.
We present an intuitive animation interface that uses a foot pressure sensor pad to interactively control avatars for video games, virtual reality, and low-cost performance-driven animation. During an offline training phase, we capture full body motions with a motion capture system, as well as the corresponding foot-ground pressure distributions with a pressure sensor pad, into a database. At run time, the user acts out the desired animation on the pressure sensor pad. The system then tries to "see" the motion only through the measured foot-ground interactions; the most appropriate motions from the database are selected and edited online to drive the avatar. We describe our motion recognition, motion blending, and inverse kinematics algorithms in detail. They are easy to implement and cheap to compute. FootSee can control a virtual avatar with a fixed latency of 1 second and reasonable accuracy.
Our system thus makes it possible to create interactive animations without the cost or inconvenience of a full body motion capture system.

Item: Estimating Cloth Simulation Parameters from Video (The Eurographics Association, 2003). Bhat, Kiran S.; Twigg, Christopher D.; Hodgins, Jessica K.; Khosla, Pradeep K.; Popovic, Zoran; Seitz, Steven M. Editors: D. Breen and M. Lin.
Cloth simulations are notoriously difficult to tune because of the many parameters that must be adjusted to achieve the look of a particular fabric. In this paper, we present an algorithm for estimating the parameters of a cloth simulation from video data of real fabric. A perceptually motivated metric based on matching between folds is used to compare video of real cloth with simulation. This metric compares two video sequences of cloth and returns a number that measures the differences in their folds. Simulated annealing is used to minimize the frame-by-frame error between the metric for a given simulation and the real-world footage. To estimate all the cloth parameters, we identify simple static and dynamic calibration experiments that use small swatches of the fabric. To demonstrate the power of this approach, we use our algorithm to find the parameters for four different fabrics. We show the match between the video footage and simulated motion on the calibration experiments, on new video sequences for the swatches, and on a simulation of a full skirt.

Item: Sound-by-Numbers: Motion-Driven Sound Synthesis (The Eurographics Association, 2003). Cardle, M.; Brooks, S.; Bar-Joseph, Z.; Robinson, P. Editors: D. Breen and M. Lin.
We present the first algorithm for automatically generating soundtracks for input animation based on other animations' soundtracks. This technique can greatly simplify the production of soundtracks in computer animation and video by re-targeting existing soundtracks.
A segment of source audio is used to train a statistical model, which is then used to generate variants of the original audio that fit particular constraints. These constraints can either be specified explicitly by the user in the form of large-scale properties of the sound texture, or determined automatically or semi-automatically by matching similar motion events in a source animation to those in the target animation.

Item: Particle-Based Fluid Simulation for Interactive Applications (The Eurographics Association, 2003). Müller, Matthias; Charypar, David; Gross, Markus. Editors: D. Breen and M. Lin.
Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun for animating highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable, which reduces the complexity of the simulation. In addition, the particles can be used directly to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.

Item: Finite Volume Methods for the Simulation of Skeletal Muscle (The Eurographics Association, 2003). Teran, J.; Blemker, S.; Ng-Thow-Hing, V.; Fedkiw, R. Editors: D. Breen and M. Lin.
Since it relies on a geometrical rather than a variational framework, many find the finite volume method (FVM) more intuitive than the finite element method (FEM). We show that the FVM allows one to interpret the stress inside a tetrahedron as a simple 'multidimensional force' pushing on each face. Moreover, this interpretation leads to a heuristic method for calculating the force on each node, which is as simple to implement and comprehend as masses and springs. In the finite volume spirit, we also present a geometric rather than interpolating-function definition of strain. We use the FVM and a quasi-incompressible, transversely isotropic, hyperelastic constitutive model to simulate contracting muscle tissue. B-spline solids are used to model fiber directions, and the muscle activation levels are derived from key frame animations.

Item: A 2-Stages Locomotion Planner for Digital Actors (The Eurographics Association, 2003). Pettré, Julien; Laumond, Jean-Paul; Siméon, Thierry. Editors: D. Breen and M. Lin.
This paper presents a solution to the locomotion planning problem for digital actors. The solution is based both on probabilistic motion planning and on motion capture blending and warping. The paper describes the various components of our solution, from the initial path planning to the final animation step. An example illustrates the progression of the animation construction throughout the presentation.

Item: Generating Flying Creatures using Body-Brain Co-Evolution (The Eurographics Association, 2003). Shim, Yoon-Sik; Kim, Chang-Hun. Editors: D. Breen and M. Lin.
This paper describes a system that produces double-winged flying creatures using body-brain co-evolution, without the need for complex flapping-flight aerodynamics. While artificial life techniques have been used to create a variety of virtual creatures, little work has explored flapping-winged creatures, owing to the difficulty of genetically encoding wings with limited geometric primitives as well as of flapping-wing aerodynamics.
Despite the simplicity of the system, our results show aesthetically pleasing, organic flapping-flight locomotion. A restricted list structure is used in the genotype encoding to enforce the morphological symmetry of the creatures, and it is more easily handled than other data structures. The creatures evolved by this system have two symmetric flapping wings consisting of continuous triangular patches and exhibit varied appearances and locomotion, such as the wings of birds, butterflies, and bats, or even the imaginary wings of dragons and pterosaurs.

Item: A Real-Time Cloud Modeling, Rendering, and Animation System (The Eurographics Association, 2003). Schpok, Joshua; Simons, Joseph; Ebert, David S.; Hansen, Charles. Editors: D. Breen and M. Lin.
Modeling and animating complex volumetric natural phenomena, such as clouds, is a difficult task. Most systems are difficult to use, require the adjustment of numerous complex parameters, and are non-interactive. We have therefore developed an intuitive, interactive system to artistically model, animate, and render visually convincing volumetric clouds using modern consumer graphics hardware. Our natural, high-level interface models volumetric clouds through qualitative cloud attributes. The animation of implicit skeletal structures and the independent transformation of octaves of noise emulate various environmental conditions. The resulting interactive design, rendering, and animation system produces perceptually convincing volumetric cloud models that can be used in interactive systems or exported for higher-quality offline rendering.

Item: Geometry-Driven Photorealistic Facial Expression Synthesis (The Eurographics Association, 2003). Zhang, Qingshan; Liu, Zicheng; Guo, Baining; Shum, Harry. Editors: D. Breen and M. Lin.
Expression mapping (also called performance-driven animation) has been a popular method for generating facial animations. One shortcoming of this method is that it does not generate expression details such as the wrinkles caused by skin deformation.
In this paper, we provide a solution to this problem. We have developed a geometry-driven facial expression synthesis system. Given the feature point positions (geometry) of a facial expression, our system automatically synthesizes the corresponding expression image with photorealistic and natural-looking expression details. Since the number of feature points required by the synthesis system is in general larger than what is available from the performer, due to the difficulty of tracking, we have developed a technique to infer the feature point motions from a subset using an example-based approach. Another application of our system is expression editing, where the user drags feature points while the system interactively generates facial expressions with skin deformation details.

Item: Stylizing Motion with Drawings (The Eurographics Association, 2003). Li, Yin; Gleicher, Michael; Xu, Ying-Qing; Shum, Heung-Yeung. Editors: D. Breen and M. Lin.
In this paper, we provide a method that injects the expressive shape deformations common in traditional 2D animation into an otherwise rigid 3D motion-captured animation. We allow a traditional animator to modify frames in the rendered animation by redrawing key features such as silhouette curves. These changes are then integrated into the animation. To perform this integration, we divide the changes into those that can be made by altering the skeletal animation and those that must be made by altering the character's mesh geometry. To propagate mesh changes into other frames, we introduce a new image warping technique that takes into account the character's 3D structure. The resulting technique provides a system in which an animator can inject stylization into a 3D animation.

Item: Construction and Animation of Anatomically Based Human Hand Models (The Eurographics Association, 2003). Albrecht, Irene; Haber, Jörg; Seidel, Hans-Peter. Editors: D. Breen and M. Lin.
The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo-muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, the resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.

Item: Feel the 'Fabric': An Audio-Haptic Interface (The Eurographics Association, 2003). Huang, G.; Metaxas, D.; Govindaraj, M. Editors: D. Breen and M. Lin.
An objective fabric modeling system should convey not only visual but also haptic and audio sensory feedback to remote/internet users via an audio-haptic interface. In this paper we develop a fabric surface property modeling system consisting of stylus-based modeling of a fabric's characteristic sound, and an audio-haptic interface. Using a stylus, people can perceive a fabric's surface roughness, friction, and softness, though not as precisely as with their bare fingers. The audio-haptic interface is intended to simulate the case of "feeling a virtually fixed fabric via a rigid stylus" using the PHANToM haptic interface.
We develop a DFFT-based correlation-restoration method to model the surface roughness and friction coefficient of a fabric, and a physically based method to model the sound of a fabric when rubbed by a stylus. The audio-haptic interface, which renders synchronized auditory and haptic stimuli when the virtual stylus rubs the surface of a virtual fabric, is developed in VC++ 6.0 using OpenGL and the PHANToM GHOST SDK. We asked subjects to test our audio-haptic interface, and they were able to rank the surface properties of the virtual fabrics in the correct order. We show that the virtual fabric is a good model of its real counterpart.

Item: Learning Controls for Blend Shape Based Realistic Facial Animation (The Eurographics Association, 2003). Joshi, Pushkar; Tien, Wen C.; Desbrun, Mathieu; Pighin, Frédéric. Editors: D. Breen and M. Lin.
Blend shape animation is the method of choice for keyframe facial animation: a set of blend shapes (key facial expressions) is used to define a linear space of facial expressions. However, in order to capture a significant range of the complexity of human expressions, blend shapes need to be segmented into smaller regions where the key idiosyncrasies of the face being animated are present. Performing this segmentation by hand requires skill and a lot of time. In this paper, we propose an automatic, physically motivated segmentation that learns the controls and parameters directly from the set of blend shapes. We show the usefulness and efficiency of this technique for both motion-capture animation and keyframing. We also provide a rendering algorithm to enhance the visual realism of a blend shape model.

Item: Flexible Automatic Motion Blending with Registration Curves (The Eurographics Association, 2003). Kovar, Lucas; Gleicher, Michael. Editors: D. Breen and M. Lin.
Many motion editing algorithms, including transitioning and multi-target interpolation, can be represented as instances of a more general operation called motion blending.
We introduce a novel data structure called a registration curve that expands the class of motions that can be successfully blended without manual input. Registration curves achieve this by automatically determining relationships involving the timing, local coordinate frames, and constraints of the input motions. We show how registration curves improve upon existing automatic blending methods and demonstrate their use in common blending operations.

Item: Interactive Physically Based Solid Dynamics (The Eurographics Association, 2003). Hauth, M.; Groß, J.; Straßer, W. Editors: D. Breen and M. Lin.
The interactive simulation of deformable solids has become a major working area in computer graphics. We present a sophisticated material law, better suited to dynamical computations than the standard approaches. As an important example, it is employed to reproduce measured material data from biological soft tissue. We embed it into a state-of-the-art finite element setting employing an adaptive basis. For time integration, the use of an explicit stabilized Runge-Kutta method is proposed.

Item: Blowing in the Wind (The Eurographics Association, 2003). Wei, Xiaoming; Zhao, Ye; Fan, Zhe; Li, Wei; Yoakum-Stover, Suzanne; Kaufman, Arie. Editors: D. Breen and M. Lin.
We present an approach for simulating the natural dynamics that emerge from the coupling of a flow field to lightweight, mildly deformable objects immersed within it. We model the flow field using a Lattice Boltzmann Model (LBM) extended with a subgrid model, and we accelerate the computation on commodity graphics hardware to achieve real-time simulations. We demonstrate our approach using soap bubbles and a feather blown by wind fields, yet our approach is general enough to apply to other lightweight objects. The soap bubbles illustrate Fresnel reflection, reveal the dynamics of the unseen flow field in which they travel, and display spherical harmonics in their undulations. The free feather floats and flutters in response to lift and drag forces.
Our single-bubble simulation allows the user to directly interact with the wind field and thereby influence the dynamics in real time.

Item: Trackable Surfaces (The Eurographics Association, 2003). Guskov, Igor; Klibanov, Sergey; Bryant, Benjamin. Editors: D. Breen and M. Lin.
We introduce a novel approach for real-time non-rigid surface acquisition based on tracking quad-marked surfaces. The color-identified quad arrangement allows for automatic feature correspondence and tracking initialization, and it simplifies 3D reconstruction. We present a prototype implementation of our approach together with several examples of acquired surface motions.
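The Dynapack entry above describes the ELP predictor as using only additions and subtractions of points and being exact for portions of the animation undergoing pure translation. A minimal sketch of such a parallelogram-style space-time predictor is shown below; the exact neighbor stencil used in the paper is an assumption here, but the translation-invariance property can be checked directly.

```python
import numpy as np

def elp_predict(a_f, b_f, c_f, a_p, b_p, c_p, v_p):
    """Space-time extrapolating prediction for vertex v in frame f, using
    three neighbors (a, b, c) in the current frame f and the same stencil
    plus v itself in the previous frame p. Only additions and subtractions
    of points are used: the current-frame parallelogram estimate is
    corrected by the residual observed at the same stencil one frame back."""
    return a_f + b_f - c_f + (v_p - (a_p + b_p - c_p))

# If the whole mesh translates by t between frames, the prediction is exact:
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
t = np.array([0.5, -2.0, 3.0])
pred = elp_predict(a + t, b + t, c + t, a, b, c, v)
```

In a compressor, only the (entropy-coded) difference between the true quantized position and this prediction would be stored.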
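The particle-based fluid entry above derives force density fields from SPH sums over neighboring particles. As an illustration of the basic SPH machinery, the sketch below estimates per-particle density with a poly6-style smoothing kernel; the kernel choice and constants are an assumption for this sketch, not taken from the listing.

```python
import numpy as np

def w_poly6(r2, h):
    """Poly6-style smoothing kernel evaluated on squared distances r2,
    with compact support of radius h (zero outside the support)."""
    coef = 315.0 / (64.0 * np.pi * h**9)
    return np.where(r2 < h * h, coef * (h * h - r2) ** 3, 0.0)

def densities(pos, mass, h):
    """SPH density estimate rho_i = sum_j m_j * W(|x_i - x_j|, h)
    for an (N, 3) array of particle positions."""
    diff = pos[:, None, :] - pos[None, :, :]   # pairwise offsets, (N, N, 3)
    r2 = np.sum(diff * diff, axis=-1)          # squared distances, (N, N)
    return np.sum(mass * w_poly6(r2, h), axis=1)

# Two particles farther apart than the support radius: each only "sees" itself.
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
rho = densities(pos, 1.0, 1.0)
```

Pressure and viscosity force densities in the paper's approach are built from analogous kernel-weighted sums over the same neighborhoods.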
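Two of the entries above (vision-based facial control and learned blend shape controls) rest on the same model: a set of blend shapes spanning a linear space of facial expressions. A minimal sketch of evaluating such a delta blend-shape model follows; the function name and delta formulation are illustrative assumptions.

```python
import numpy as np

def blend_expression(neutral, shapes, weights):
    """Linear blend-shape evaluation: the neutral face plus a weighted sum
    of per-shape displacement deltas (shape_i - neutral). `neutral` and each
    entry of `shapes` are (V, 3) vertex arrays; `weights` is one scalar per shape."""
    out = neutral.astype(float).copy()
    for w, s in zip(weights, shapes):
        out += w * (s - neutral)
    return out

# Weight 1.0 on a single shape reproduces that shape exactly;
# all-zero weights leave the neutral face unchanged.
neutral = np.zeros((4, 3))
smile = np.ones((4, 3))
face = blend_expression(neutral, [smile], [1.0])
```

Segmenting the face into regions, as the learning-controls entry proposes, amounts to running this evaluation per region with independent weight sets.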