VVG: Vision, Video, and Graphics
Browsing VVG: Vision, Video, and Graphics by Subject "Animation"
Now showing 1 - 4 of 4
Item: Cartoon-Style Rendering of Motion from Video (The Eurographics Association, 2003)
Collomosse, J.P.; Hall, P.M.; Peter Hall and Philip Willis (eds.)
The contribution of this paper is a novel non-photorealistic rendering (NPR) system capable of rendering motion within a video sequence in artistic styles. A variety of cartoon-style motion cues may be inserted into a video sequence, including augmentation cues (such as streak lines, ghosting, or blurring) and deformation cues (such as squash and stretch or drag effects). Users may select from the gamut of available styles by setting parameters which influence the placement and appearance of motion cues. Our system draws upon techniques from both the vision and the graphics communities to analyse and render motion and is entirely automatic, aside from minimal user interaction to bootstrap a feature tracker. We demonstrate successful application of our system to a variety of subjects with complexities ranging from simple oscillatory to articulated motion, under both static and moving camera conditions with occlusion present. We conclude with a critical appraisal of the system and discuss directions for future work.
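To make the ghosting cue concrete, here is a minimal illustrative sketch, not the paper's system: faded copies of earlier frames are composited behind the current frame, so fast-moving content leaves a visible trail. The function name, decay schedule, and in-memory frame format are assumptions made purely for illustration; the actual system places cues along tracked feature trajectories.

```python
# Toy "ghosting" augmentation cue: blend attenuated copies of
# earlier frames under the current frame. Illustrative only; the
# paper drives cue placement from a feature tracker.
import numpy as np

def ghost_composite(frames, decay=0.5, trail=3):
    """frames: list of float images in [0, 1], shape (H, W, 3).
    Returns a new list where each frame carries up to `trail`
    progressively fainter ghosts of its predecessors."""
    out = []
    for i, frame in enumerate(frames):
        acc = frame.copy()
        weight = decay
        for j in range(1, trail + 1):
            if i - j < 0:
                break
            # Mix in an older frame, fainter the further back it is.
            acc = (1.0 - weight) * acc + weight * frames[i - j]
            weight *= decay
        out.append(acc)
    return out

# A moving bright pixel on a tiny black canvas gains a fading trail.
frames = [np.zeros((4, 8, 3)) for _ in range(4)]
for t in range(4):
    frames[t][1, t] = 1.0
trailed = ghost_composite(frames)
```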
Item: Realistic Real-Time Hair Simulation and Rendering (The Eurographics Association, 2005)
Jung, Yvonne; Rettig, Alexander; Klar, Oliver; Lehr, Timo; Mike Chantler (ed.)
We present a method for realistic rendering and simulation of human hair in real-time, which is suitable for use in complex virtual reality applications. Neighbouring hairs are combined into wisps and animated with our cantilever-beam-based simulation system, which is numerically stable and runs at interactive update rates. The rendering algorithm utilizes the latest graphics hardware features and can even handle light-coloured hair by including anisotropic reflection and internal transmission.
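As a rough sketch of what a cantilever-beam-style wisp simulation involves, the 2-D toy below treats a wisp as a chain of joint angles bent by gravity and pulled back toward a rest pose by a stiffness term. All constants, names, and the explicit damped-Euler integrator are assumptions for illustration; the paper's 3-D formulation and its stability measures are not reproduced here.

```python
# Toy 2-D wisp: a chain of joints anchored at the scalp. Each joint
# angle (radians from straight down) feels a spring back toward its
# rest angle plus a gravity torque, integrated with damped Euler steps.
import math

def step_wisp(angles, rest_angles, velocities,
              stiffness=8.0, gravity=1.5, damping=0.9, dt=0.02):
    for i, (a, r) in enumerate(zip(angles, rest_angles)):
        torque = -stiffness * (a - r) - gravity * math.sin(a)
        velocities[i] = damping * (velocities[i] + torque * dt)
        angles[i] += velocities[i] * dt
    return angles, velocities

# A wisp of 5 segments, swept sideways, relaxes back toward rest.
angles = [0.6] * 5
rest = [0.0] * 5
vel = [0.0] * 5
for _ in range(200):
    angles, vel = step_wisp(angles, rest, vel)
```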
Item: A Rigid Transform Basis for Animation Compression and Level of Detail (The Eurographics Association, 2005)
Collins, G.; Hilton, A.; Mike Chantler (ed.)
We present a scheme for achieving level of detail and compression for animation sequences with known constant connectivity. We suggest compression is useful for automatically creating low levels of detail, which may be more compact than the original animation parameters, and for high levels of detail, where the original animation is expensive to compute. Our scheme is based on spatial segmentation of a base mesh into rigidly transforming segments, followed by temporal aggregation of these transformations. The result approximates the given animation within a user-specified tolerance, which can be adjusted to give the required level of detail. A spatio-temporal smoothing algorithm is used on decoding to give acceptable animations. We show that the rigid transformation basis spans the space of all animations, and that the algorithm converges to the specified tolerance. The algorithm is applied to several examples of synthetic animation, and rate-distortion curves show that in some cases the scheme outperforms current compressors.
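The core encoding step, fitting one rigid transform per segment per frame, can be illustrated with the standard Kabsch/Procrustes least-squares solution below. This is a generic sketch: the paper's mesh segmentation, temporal aggregation, and spatio-temporal smoothing are omitted, and the function names and the simple max-error tolerance check are assumptions for illustration.

```python
# Encode a rigidly moving segment as one (R, t) per frame instead of
# per-vertex positions; the Kabsch/Procrustes fit is the standard
# least-squares solution for the best rigid transform.
import numpy as np

def fit_rigid(rest_pts, frame_pts):
    """Best rigid (R, t) with frame_pts ~= rest_pts @ R.T + t."""
    c0, c1 = rest_pts.mean(axis=0), frame_pts.mean(axis=0)
    H = (rest_pts - c0).T @ (frame_pts - c1)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c1 - R @ c0

def encode_segment(rest_pts, frames):
    return [fit_rigid(rest_pts, f) for f in frames]

def decode_segment(rest_pts, transforms):
    return [rest_pts @ R.T + t for R, t in transforms]

def within_tolerance(frames, decoded, tol):
    # Accept this level of detail only if every vertex in every frame
    # is reconstructed within the user-specified tolerance.
    return all(np.abs(f - d).max() <= tol for f, d in zip(frames, decoded))
```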
Item: Use and Re-use of Facial Motion Capture Data (The Eurographics Association, 2003)
Lorenzo, M.S.; Edge, J.D.; King, S.A.; Maddock, S.; Peter Hall and Philip Willis (eds.)
Motion capture (mocap) data is commonly used to recreate complex human motions in computer graphics. Markers are placed on an actor, and the captured movement of these markers allows us to animate computer-generated characters. Technologies have been introduced which allow this technique to be used not only to retrieve rigid body transformations, but also soft body motion such as the facial movement of an actor. The inherent difficulty of working with facial mocap lies in the application of a discrete sampling of surface points to animate a fine, discontinuous mesh. Furthermore, in the general case, where the morphology of the actor's face does not coincide with that of the model we wish to animate, some form of retargeting must be applied. In this paper we discuss methods to animate face meshes from mocap data with minimal user intervention using a surface-oriented deformation paradigm.
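One common surface-oriented way to spread sparse marker displacements over a dense face mesh is radial basis function (RBF) scattered-data interpolation, sketched below. This is offered only as a plausible stand-in for the deformation paradigm the abstract names; the Gaussian kernel, its width, and the function names are assumptions, and the paper's handling of mesh discontinuities (lips, eyelids) and cross-morphology retargeting is not covered here.

```python
# RBF scattered-data interpolation: solve for weights that exactly
# reproduce the marker displacements, then evaluate the same kernel
# at every mesh vertex to deform the whole face.
import numpy as np

def gaussian_kernel(a, b, sigma=2.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_weights(markers, displacements, sigma=2.0):
    """markers: (M, 3) rest positions; displacements: (M, 3)."""
    K = gaussian_kernel(markers, markers, sigma)
    return np.linalg.solve(K, displacements)    # (M, 3) weights

def deform_mesh(vertices, markers, weights, sigma=2.0):
    K = gaussian_kernel(vertices, markers, sigma)   # (V, M)
    return vertices + K @ weights

# Usage: for each captured frame, weights are re-solved from that
# frame's marker displacements and applied to the neutral mesh.
```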