Volume 35 (2016)
Browsing Volume 35 (2016) by Subject "animation"
Now showing 1 - 2 of 2
Item: Line‐Drawing Video Stylization
(Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Ben‐Zvi, N.; Bento, J.; Mahler, M.; Hodgins, J.; Shamir, A.; Chen, Min and Zhang, Hao (Richard)
We present a method to automatically convert videos and CG animations to stylized animated line drawings. Using a data‐driven approach, the animated drawings can follow the sketching style of a specific artist. Given an input video, we first extract edges from the video frames and vectorize them to curves. The curves are matched to strokes from an artist's library, while following the artist's stroke distribution and characteristics. The key challenge in this process is to match the large number of curves in the frames over time, despite topological and geometric changes, so that temporal coherence is maintained in the output animation. We solve this problem using constrained optimization to build correspondences between tracked points and create smooth sheets over time. These sheets are then replaced with strokes from the artist's database to render the final animation. We evaluate our tracking algorithm on various examples and show stylized animation results in the styles of several artists.
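As a rough illustration of the first stage described in the abstract (extracting edges from video frames and vectorizing them into curves), the sketch below uses OpenCV's Canny detector and contour simplification. It is a minimal stand-in under stated assumptions, not the authors' pipeline: the thresholds, the simplification tolerance, and the file name input.mp4 are placeholders, and the stroke-matching and temporal-tracking stages are not shown.

```python
# Sketch of the edge-extraction and vectorization step only: each frame is
# turned into a set of polyline curves.  Illustrative OpenCV-based stand-in,
# not the authors' implementation; all thresholds are placeholder values.
import cv2


def frame_to_curves(frame, canny_lo=50, canny_hi=150, tol=2.0):
    """Return a list of polyline curves (N x 2 arrays) for one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    # Trace connected edge pixels into contours, then simplify each contour
    # into a sparse polyline that could later be matched to artist strokes.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    curves = []
    for c in contours:
        if len(c) < 10:                      # drop tiny edge fragments
            continue
        poly = cv2.approxPolyDP(c, tol, False)
        curves.append(poly.reshape(-1, 2))
    return curves


if __name__ == "__main__":
    cap = cv2.VideoCapture("input.mp4")      # placeholder path
    ok, frame = cap.read()
    if ok:
        print(len(frame_to_curves(frame)), "curves in the first frame")
    cap.release()
```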
Item: A Virtual Director Using Hidden Markov Models
(© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Merabti, B.; Christie, M.; Bouatouch, K.; Chen, Min and Zhang, Hao (Richard)
Automatically computing a cinematographically consistent sequence of shots over a set of actions occurring in a 3D world is a complex task. It requires not only the computation of appropriate shots (viewpoints) and appropriate transitions between shots (cuts), but also the ability to encode and reproduce elements of cinematographic style. Models proposed in the literature, generally based on finite state machines or idiom‐based representations, provide limited functionality for building sequences of shots. These approaches are not designed to easily learn elements of cinematographic style, nor do they allow significant variations in style over the same sequence of actions. In this paper, we propose a model for automated cinematography that can compute significant variations in cinematographic style, with the ability to control the duration of shots and the possibility to add specific constraints to the desired sequence. The model is parametrized in a way that facilitates the application of learning techniques. By using a Hidden Markov Model representation of the editing process, we demonstrate the possibility of easily reproducing elements of style extracted from real movies. Results comparing our model with state‐of‐the‐art first‐order Markovian representations illustrate these features, and the robustness of the learning technique is demonstrated through cross‐validation.
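To make the Hidden Markov Model framing concrete, the sketch below treats shot types as hidden states and scene actions as observations, and decodes a shot sequence with the standard Viterbi algorithm. The shot labels, action labels, and all probabilities are invented for illustration; the paper's actual state space, parametrization, and learned style parameters are not reproduced here.

```python
# Toy HMM "virtual director": hidden states are shot types, observations are
# scene actions, and Viterbi decoding yields a shot sequence.  All labels and
# probabilities are illustrative placeholders, not values from the paper.
import numpy as np

shots = ["close-up", "medium", "long"]       # hidden states
actions = ["speak", "move", "react"]         # observations

start = np.array([0.2, 0.5, 0.3])            # initial shot distribution
trans = np.array([[0.5, 0.4, 0.1],           # P(next shot | current shot);
                  [0.3, 0.4, 0.3],           # this matrix is where an editing
                  [0.2, 0.4, 0.4]])          # "style" would be encoded/learned
emit = np.array([[0.7, 0.1, 0.2],            # P(action | shot)
                 [0.3, 0.4, 0.3],
                 [0.1, 0.6, 0.3]])


def viterbi(obs):
    """Most likely shot sequence for a list of observed action indices."""
    T, N = len(obs), len(shots)
    logp = np.zeros((T, N))
    back = np.zeros((T, N), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        scores = logp[t - 1][:, None] + np.log(trans)   # scores[i, j] = prev i -> next j
        back[t] = scores.argmax(axis=0)
        logp[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [shots[s] for s in reversed(path)]


scene = [actions.index(a) for a in ["speak", "speak", "move", "react"]]
print(viterbi(scene))                        # prints the decoded shot sequence
```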