SCA: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing SCA: Eurographics/SIGGRAPH Symposium on Computer Animation by Issue Date
Now showing 1 - 20 of 558
Item: Handrix: Animating the Human Hand (The Eurographics Association, 2003)
ElKoura, George; Singh, Karan. Editors: D. Breen and M. Lin.
The human hand is a complex organ capable of both gross grasp and fine motor skills. Despite many successful high-level skeletal control techniques, animating realistic hand motion remains tedious and challenging. This paper presents research motivated by the complex finger positioning required to play musical instruments, such as the guitar. We first describe a data-driven algorithm to add sympathetic finger motion to arbitrarily animated hands. We then present a procedural algorithm to generate the motion of the fretting hand playing a given musical passage on a guitar. The work is intended as a tool for music education and analysis. The contributions of this paper are a general architecture for the skeletal control of interdependent articulations performing multiple concurrent reaching tasks, and a procedural tool for musicians and animators that captures the motion complexity of guitar fingering.

Item: A Practical Dynamics System (The Eurographics Association, 2003)
Kacic-Alesic, Zoran; Nordenstam, Marcus; Bullock, David. Editors: D. Breen and M. Lin.
We present an effective, production-proven dynamics system. It uses an explicit time differencing method that is efficient, reasonably accurate, conditionally stable, and above all simple to implement. We describe issues related to integrating physically based simulation techniques into an interactive animation system, present a high-level description of the architecture of the system, report on techniques that work, and provide observations that may seem obvious, but only in retrospect. Applications include rigid and deformable body dynamics, particle dynamics, and, at a basic level, hair and cloth simulation.

Item: Trackable Surfaces (The Eurographics Association, 2003)
Guskov, Igor; Klibanov, Sergey; Bryant, Benjamin. Editors: D. Breen and M. Lin.
We introduce a novel approach for real-time non-rigid surface acquisition based on tracking quad-marked surfaces. The color-identified quad arrangement allows for automatic feature correspondence and tracking initialization, and simplifies 3D reconstruction. We present a prototype implementation of our approach together with several examples of acquired surface motions.

Item: Aesthetic Edits For Character Animation (The Eurographics Association, 2003)
Neff, Michael; Fiume, Eugene. Editors: D. Breen and M. Lin.
The utility of an interactive tool can be measured by how pervasively it is embedded into a user's workflow. Tools for artists must additionally provide an appropriate level of control over expressive aspects of their work while suppressing unwanted intrusions due to details that are, for the moment, unnecessary. Our focus is on tools that target editing the expressive aspects of character motion. These tools allow animators to work in a way that is more expedient than modifying low-level details, and offer finer control than high-level, directorial approaches. To illustrate this approach, we present three such tools: one for varying timing (succession), and two for varying motion shape (amplitude and extent). Succession editing allows the animator to vary the activation times of the joints in the motion. Amplitude editing allows the animator to vary the joint ranges covered during a motion. Extent editing allows an animator to vary how fully a character occupies space during a movement, using space freely or keeping the movement close to the body. We argue that such editing tools can be fully embedded in the workflow of character animators. We present a general animation system in which these and other edits can be defined programmatically. Working in a general pose or keyframe framework, either kinematic or dynamic motion can be generated. This system is extensible to include an arbitrary set of movement edits.
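The amplitude edit described in the entry above varies the joint ranges covered during a motion. As a rough, hypothetical illustration of that kind of edit (not the authors' actual formulation), the sketch below scales each joint-angle curve about its mean value; the function name, array layout, and gain parameter are assumptions made for illustration.

```python
import numpy as np

def edit_amplitude(joint_angles, gain):
    """Scale how far each joint travels about its mean pose.

    joint_angles: array of shape (num_frames, num_joints), in radians.
    gain: 1.0 leaves the motion unchanged; >1 exaggerates joint ranges,
          <1 keeps the movement closer to the mean pose.
    Illustrative sketch only, not the paper's actual edit.
    """
    mean_pose = joint_angles.mean(axis=0, keepdims=True)
    return mean_pose + gain * (joint_angles - mean_pose)

# Example: exaggerate a two-joint motion by 30%.
motion = np.array([[0.0, 0.1], [0.2, 0.3], [0.4, 0.2]])
exaggerated = edit_amplitude(motion, gain=1.3)
```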
Item: Flexible Automatic Motion Blending with Registration Curves (The Eurographics Association, 2003)
Kovar, Lucas; Gleicher, Michael. Editors: D. Breen and M. Lin.
Many motion editing algorithms, including transitioning and multitarget interpolation, can be represented as instances of a more general operation called motion blending. We introduce a novel data structure called a registration curve that expands the class of motions that can be successfully blended without manual input. Registration curves achieve this by automatically determining relationships involving the timing, local coordinate frame, and constraints of the input motions. We show how registration curves improve upon existing automatic blending methods and demonstrate their use in common blending operations.

Item: Interactive Physically Based Solid Dynamics (The Eurographics Association, 2003)
Hauth, M.; Groß, J.; Straßer, W. Editors: D. Breen and M. Lin.
The interactive simulation of deformable solids has become a major research area in Computer Graphics. We present a sophisticated material law, better suited for dynamical computations than the standard approaches. As an important example, it is employed to reproduce measured material data from biological soft tissue. We embed it into a state-of-the-art finite element setting employing an adaptive basis. For time integration, the use of an explicit stabilized Runge-Kutta method is proposed.

Item: Construction and Animation of Anatomically Based Human Hand Models (The Eurographics Association, 2003)
Albrecht, Irene; Haber, Jörg; Seidel, Hans-Peter. Editors: D. Breen and M. Lin.
The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, the resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.
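The hand-model entry above derives individual hand models by warping a reference model with a radial basis function built from feature-point correspondences. The sketch below shows the general flavor of such an RBF warp; the kernel choice, the missing polynomial term, and all names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def rbf_warp(reference_points, target_points, kernel=lambda r: r):
    """Build a radial basis warp from landmark correspondences.

    reference_points, target_points: (n, 3) arrays of matching feature points.
    Returns a function mapping arbitrary 3D points of the reference model
    toward the target shape. The linear kernel and absence of an affine term
    are simplifications for this sketch.
    """
    n = len(reference_points)
    # Pairwise distances between reference landmarks.
    d = np.linalg.norm(reference_points[:, None, :] - reference_points[None, :, :], axis=-1)
    A = kernel(d)
    # One weight vector per landmark, one column per output coordinate.
    weights = np.linalg.solve(A + 1e-9 * np.eye(n), target_points - reference_points)

    def warp(points):
        d = np.linalg.norm(points[:, None, :] - reference_points[None, :, :], axis=-1)
        return points + kernel(d) @ weights
    return warp

# Example with three hypothetical landmarks on a reference and an individual hand.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tgt = ref + np.array([0.1, 0.0, 0.0])
deform = rbf_warp(ref, tgt)
print(deform(np.array([[0.5, 0.5, 0.0]])))
```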
Item: Feel the 'Fabric': An Audio-Haptic Interface (The Eurographics Association, 2003)
Huang, G.; Metaxas, D.; Govindaraj, M. Editors: D. Breen and M. Lin.
An objective fabric modeling system should convey not only the visual but also the haptic and audio sensory feedback to remote/internet users via an audio-haptic interface. In this paper we develop a fabric surface property modeling system consisting of stylus-based modeling of a fabric's characteristic sound and an audio-haptic interface. Using a stylus, people can perceive a fabric's surface roughness, friction, and softness, though not as precisely as with their bare fingers. The audio-haptic interface is intended to simulate the case of "feeling a virtually fixed fabric via a rigid stylus" by using the PHANToM haptic interface. We develop a DFFT-based correlation-restoration method to model the surface roughness and friction coefficient of a fabric, and a physically based method to model the sound of a fabric when rubbed by a stylus. The audio-haptic interface, which renders synchronized auditory and haptic stimuli when the virtual stylus rubs the surface of a virtual fabric, is developed in VC++ 6.0 using OpenGL and the PHANToM GHOST SDK. We asked subjects to test our audio-haptic interface, and they were able to differentiate the surface properties of the virtual fabrics in the correct order. We show that the virtual fabric is a good model of its real counterpart.

Item: Discrete Shells (The Eurographics Association, 2003)
Grinspun, Eitan; Hirani, Anil N.; Desbrun, Mathieu; Schröder, Peter. Editors: D. Breen and M. Lin.
In this paper we introduce a discrete shell model describing the behavior of thin flexible structures, such as hats, leaves, and aluminum cans, which are characterized by a curved undeformed configuration. Previously such models required complex continuum mechanics formulations and correspondingly complex algorithms. We show that a simple shell model can be derived geometrically for triangle meshes and implemented quickly by modifying a standard cloth simulator. Our technique convincingly simulates a variety of curved objects with materials ranging from paper to metal, as we demonstrate with several examples, including a comparison of a real and a simulated falling hat.

Item: FootSee: an Interactive Animation System (The Eurographics Association, 2003)
Yin, KangKang; Pai, Dinesh K. Editors: D. Breen and M. Lin.
We present an intuitive animation interface that uses a foot pressure sensor pad to interactively control avatars for video games, virtual reality, and low-cost performance-driven animation. During an offline training phase, we capture full-body motions with a motion capture system, as well as the corresponding foot-ground pressure distributions with a pressure sensor pad, into a database. At run time, the user acts out the desired animation on the pressure sensor pad. The system then tries to "see" the motion through the measured foot-ground interactions alone; the most appropriate motions from the database are selected and edited online to drive the avatar. We describe our motion recognition, motion blending, and inverse kinematics algorithms in detail. They are easy to implement and cheap to compute. FootSee can control a virtual avatar with a fixed latency of 1 second and reasonable accuracy. Our system thus makes it possible to create interactive animations without the cost or inconvenience of a full-body motion capture system.

Item: Blowing in the Wind (The Eurographics Association, 2003)
Wei, Xiaoming; Zhao, Ye; Fan, Zhe; Li, Wei; Yoakum-Stover, Suzanne; Kaufman, Arie. Editors: D. Breen and M. Lin.
We present an approach for simulating the natural dynamics that emerge from the coupling of a flow field to lightweight, mildly deformable objects immersed within it. We model the flow field using a Lattice Boltzmann Model (LBM) extended with a subgrid model, and accelerate the computation on commodity graphics hardware to achieve real-time simulations. We demonstrate our approach using soap bubbles and a feather blown by wind fields, yet our approach is general enough to apply to other lightweight objects. The soap bubbles illustrate Fresnel reflection, reveal the dynamics of the unseen flow field in which they travel, and display spherical harmonics in their undulations. The free feather floats and flutters in response to lift and drag forces. Our single-bubble simulation allows the user to directly interact with the wind field and thereby influence the dynamics in real time.
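The flow solver in the entry above is a Lattice Boltzmann Model, which the paper extends with a subgrid model and runs on graphics hardware. Purely as a reminder of what a basic LBM update looks like (a plain D2Q9 BGK stream-and-collide step on a periodic grid, not the paper's GPU or subgrid formulation), a sketch:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Equilibrium distributions for density rho (nx, ny) and velocity u (nx, ny, 2)."""
    cu = np.einsum('qd,xyd->xyq', c, u)                 # c_i . u per cell and direction
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return w * rho[..., None] * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau=0.6):
    """One BGK collide-and-stream step; f has shape (nx, ny, 9), periodic boundaries."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau             # BGK relaxation toward equilibrium
    for i, (cx, cy) in enumerate(c):                    # stream along each lattice direction
        f[..., i] = np.roll(f[..., i], shift=(cx, cy), axis=(0, 1))
    return f

# Start from a fluid at rest and advance a few steps.
f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))
for _ in range(10):
    f = lbm_step(f)
```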
Item: Geometry-Driven Photorealistic Facial Expression Synthesis (The Eurographics Association, 2003)
Zhang, Qingshan; Liu, Zicheng; Guo, Baining; Shum, Harry. Editors: D. Breen and M. Lin.
Expression mapping (also called performance-driven animation) has been a popular method for generating facial animations. One shortcoming of this method is that it does not generate expression details such as the wrinkles due to skin deformation. In this paper, we provide a solution to this problem. We have developed a geometry-driven facial expression synthesis system. Given the feature point positions (geometry) of a facial expression, our system automatically synthesizes the corresponding expression image with photorealistic and natural-looking expression details. Since the number of feature points required by the synthesis system is in general larger than what is available from the performer, due to the difficulty of tracking, we have developed a technique to infer the feature point motions from a subset using an example-based approach. Another application of our system is expression editing, where the user drags feature points while the system interactively generates facial expressions with skin deformation details.

Item: A 2-Stages Locomotion Planner for Digital Actors (The Eurographics Association, 2003)
Pettré, Julien; Laumond, Jean-Paul; Siméon, Thierry. Editors: D. Breen and M. Lin.
This paper presents a solution to the locomotion planning problem for digital actors. The solution is based both on probabilistic motion planning and on motion capture blending and warping. The paper describes the various components of our solution, from the initial path planning to the final animation step. An example illustrates the progression of the animation construction throughout the presentation.

Item: Adaptive Wisp Tree - a multiresolution control structure for simulating dynamic clustering in hair motion (The Eurographics Association, 2003)
Bertails, F.; Kim, T-Y.; Cani, M-P.; Neumann, U. Editors: D. Breen and M. Lin.
Realistic animation of long human hair is difficult due to the number of hair strands and to the complexity of their interactions. Existing methods remain limited to smooth, uniform, and relatively simple hair motion. We present a powerful adaptive approach to modeling the dynamic clustering behavior that characterizes complex long-hair motion. The Adaptive Wisp Tree (AWT) is a novel control structure that approximates both the large-scale coherent motion of hair clusters and the small-scale variation of individual hair strands. The AWT also aids computational efficiency by identifying regions where visible hair motions are likely to occur. The AWT is coupled with a multiresolution geometry used to define the initial hair model.
This combined system produces stable animations that exhibit the natural effects of clustering and mutual hair interaction. Our results show that the method is applicable to a wide variety of hair styles.

Item: Finite Volume Methods for the Simulation of Skeletal Muscle (The Eurographics Association, 2003)
Teran, J.; Blemker, S.; Ng Thow Hing, V.; Fedkiw, R. Editors: D. Breen and M. Lin.
Since it relies on a geometrical rather than a variational framework, many find the finite volume method (FVM) more intuitive than the finite element method (FEM). We show that the FVM allows one to interpret the stress inside a tetrahedron as a simple 'multidimensional force' pushing on each face. Moreover, this interpretation leads to a heuristic method for calculating the force on each node, which is as simple to implement and comprehend as masses and springs. In the finite volume spirit, we also present a geometric rather than interpolating-function definition of strain. We use the FVM and a quasi-incompressible, transversely isotropic, hyperelastic constitutive model to simulate contracting muscle tissue. B-spline solids are used to model fiber directions, and the muscle activation levels are derived from keyframe animations.

Item: Constrained Animation of Flocks (The Eurographics Association, 2003)
Anderson, Matt; McDaniel, Eric; Chenney, Stephen. Editors: D. Breen and M. Lin.
Group behaviors are widely used in animation, yet it is difficult to impose hard constraints on their behavior. We describe a new technique for the generation of constrained group animations that improves on existing approaches in two ways: the agents in our simulations meet exact constraints at specific times, and our simulations retain the global properties present in unconstrained motion. Users can place constraints on agents' positions at any time in the animation, or constrain the entire group to meet center-of-mass or shape constraints. Animations are generated in a two-stage process. The first stage finds an initial set of trajectories that exactly meet the constraints, but which may violate the behavior rules. The second stage samples new animations that maintain the constraints while improving the motion with respect to the underlying behavioral model. We present a range of animations created with our system.

Item: Synthesizing Animatable Body Models with Parameterized Shape Modifications (The Eurographics Association, 2003)
Seo, Hyewon; Cordier, Frederic; Magnenat-Thalmann, Nadia. Editors: D. Breen and M. Lin.
Based on an existing modeller that can generate realistic and controllable whole-body models, we introduce a modifier synthesizer for higher-level manipulation of body models through parameters such as fat percentage and hip-to-waist ratio. Users are assisted in automatically modifying an existing model by controlling the parameters provided. On any synthesized model, the underlying bone and skin structure is properly adjusted, so that the model remains completely animatable using the underlying skeleton. Based on statistical analysis of data models, we demonstrate the use of body attributes as parameters for controlling the shape modification of body models while maintaining the distinctiveness of the individual as much as possible.

Item: Learning Controls for Blend Shape Based Realistic Facial Animation (The Eurographics Association, 2003)
Joshi, Pushkar; Tien, Wen C.; Desbrun, Mathieu; Pighin, Frédéric. Editors: D. Breen and M. Lin.
Blend shape animation is the method of choice for keyframe facial animation: a set of blend shapes (key facial expressions) is used to define a linear space of facial expressions. However, in order to capture a significant range of complexity of human expressions, blend shapes need to be segmented into smaller regions where key idiosyncrasies of the face being animated are present. Performing this segmentation by hand requires skill and a lot of time. In this paper, we propose an automatic, physically motivated segmentation that learns the controls and parameters directly from the set of blend shapes. We show the usefulness and efficiency of this technique for both motion-capture animation and keyframing. We also provide a rendering algorithm to enhance the visual realism of a blend shape model.
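The blend shape entry above treats a set of key facial expressions as the basis of a linear space of expressions. A minimal sketch of that underlying representation (plain global blending, before the paper's learned regional segmentation; the function and variable names are illustrative assumptions):

```python
import numpy as np

def blend_face(neutral, blend_shapes, weights):
    """Combine blend shapes linearly in delta form.

    neutral:      (num_vertices, 3) rest-pose mesh vertices.
    blend_shapes: (num_shapes, num_vertices, 3) key expressions.
    weights:      (num_shapes,) blending weights, typically in [0, 1].
    Returns the blended expression mesh.
    """
    deltas = blend_shapes - neutral             # offset of each key expression from rest
    return neutral + np.tensordot(weights, deltas, axes=1)

# Example: 60% of expression 0 plus 20% of expression 1 on a toy 4-vertex face.
neutral = np.zeros((4, 3))
shapes = np.random.default_rng(0).normal(size=(2, 4, 3))
face = blend_face(neutral, shapes, np.array([0.6, 0.2]))
```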
Item: Dynapack: Space-Time compression of the 3D animations of triangle meshes with fixed connectivity (The Eurographics Association, 2003)
Ibarria, Lawrence; Rossignac, Jarek. Editors: D. Breen and M. Lin.
Dynapack exploits space-time coherence to compress the consecutive frames of 3D animations of triangle meshes with constant connectivity. Instead of compressing each frame independently (space-only compression) or compressing the trajectory of each vertex independently (time-only compression), we predict the position of each vertex v of frame f from three of its neighbors in frame f and from the positions of v and of these neighbors in the previous frame (space-time compression). We introduce here two extrapolating space-time predictors: the ELP extension of the Lorenzo predictor, developed originally for compressing regularly sampled 4D data sets, and the Replica predictor. ELP may be computed using only additions and subtractions of points and is a perfect predictor for portions of the animation undergoing pure translations. The Replica predictor is slightly more expensive to compute, but is a perfect predictor for arbitrary combinations of translations, rotations, and uniform scaling. For the typical 3D animations that we have compressed, the corrections between the actual and predicted values of the vertex coordinates may be compressed using entropy coding down to an average ranging between 1.37 and 2.91 bits, when the quantization used ranges between 7 and 13 bits. In comparison, space-only compression yields a range of 1.90 to 7.19 bits per coordinate, and time-only compression yields a range of 1.77 to 6.91 bits per coordinate. The implementation of the Dynapack compression and decompression is trivial and extremely fast. It performs a sweep through the animation, accessing only two consecutive frames at a time. Therefore, it is particularly well suited for real-time and out-of-core compression, and for streaming decompression.
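The Dynapack entry above describes predicting each vertex of frame f from three neighbors in frame f and from the same four vertices in frame f-1, using only additions and subtractions, with exact prediction under pure translation. The sketch below follows that description; the exact ELP formula in the paper may differ in detail, and the helper names are assumptions.

```python
import numpy as np

def elp_predict(a_f, b_f, c_f, v_prev, a_prev, b_prev, c_prev):
    """Space-time extrapolation of vertex v in frame f from three neighbors
    (a, b, c) in frame f and from v and those neighbors in frame f-1.

    Uses only additions and subtractions, and reproduces v exactly when the
    whole neighborhood undergoes a pure translation between the two frames,
    as the abstract describes for the ELP predictor (illustrative sketch).
    """
    return (a_f + b_f - c_f) + v_prev - (a_prev + b_prev - c_prev)

def residual(v_f, *context):
    """Correction that would actually be quantized and entropy coded."""
    return v_f - elp_predict(*context)

# Toy check: a pure translation of the neighborhood is predicted exactly.
rng = np.random.default_rng(1)
a, b, c, v = rng.normal(size=(4, 3))
t = np.array([1.0, 2.0, 0.5])
assert np.allclose(residual(v + t, a + t, b + t, c + t, v, a, b, c), 0.0)
```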
Item: On Creating Animated Presentations (The Eurographics Association, 2003)
Zongker, Douglas E.; Salesin, David H. Editors: D. Breen and M. Lin.
Computers are used to display visuals for millions of live presentations each day, yet only the tiniest fraction of these make any real use of the powerful graphics hardware available on virtually all of today's machines. In this paper, we describe our efforts toward harnessing this power to create better types of presentations: presentations that include meaningful animation as well as at least a limited degree of interactivity. Our approach has been iterative, alternating between creating animated talks using the available tools and then improving the tools to better support the kinds of talks we wanted to make. Through this cyclic design process, we have identified a set of common authoring paradigms that we believe a system for building animated presentations should support. We describe these paradigms and present the latest version of our script-based system for creating animated presentations, called SLITHY. We show several examples of actual animated talks that were created and given with versions of SLITHY, including one talk presented at SIGGRAPH 2000 and four talks presented at SIGGRAPH 2002. Finally, we describe a set of design principles that we have found useful for making good use of animation in presentations.