WICED 2017
Browsing WICED 2017 by Subject "Animation H.5.2 [Information Interfaces and Presentation]"
Item: Making Movies from Make-Believe Games
The Eurographics Association, 2017
Authors: Barbulescu, Adela; Garcia, Maxime; Vaufreydaz, Dominique; Cani, Marie-Paule; Ronfard, Rémi
Editors: William Bares, Vineet Gandhi, Quentin Galvane, Rémi Ronfard
Abstract: Pretend play is a storytelling technique, naturally used from a very young age, which relies on object substitution to represent the characters of an imagined story. We propose "Make-believe", a system for making movies from pretend play using 3D-printed figurines as props. We capture the rigid motions of the figurines and the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors, and transfer them to the virtual story-world. As a proof of concept, we demonstrate our system on an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.

Item: Zooming On All Actors: Automatic Focus+Context Split Screen Video Generation
The Eurographics Association, 2017
Authors: Kumar, Moneish; Gandhi, Vineet; Ronfard, Rémi; Gleicher, Michael
Editors: William Bares, Vineet Gandhi, Quentin Galvane, Rémi Ronfard
Abstract: Stage performances can easily be captured with a single high-resolution camera, but the resulting recordings are often difficult to watch because actor faces are too small. We present a novel approach for creating a split-screen video that incorporates both the context of the full stage and close-up details of the actors. Our system takes as input the static recording of a stage performance together with tracking information about actor positions, and generates a video that combines a wide master shot with close-ups of all identified actors, yielding a focus+context view that shows both the overall action and the details of actor faces. The key to our approach is to compute camera motions that form cinematically valid close-ups and to ensure that the views of the different actors are properly coordinated. The close-up views are created as virtual camera movements, applying panning, cropping, and zooming to the source video. We pose the computation of these camera motions as a convex optimization that produces detailed views and smooth movements, subject to cinematic constraints such as not cutting faces at the edge of the frame. Additional constraints allow the close-up views of the actors to interact, merging seamlessly when actors come close together. The generated views are then placed in a layout that preserves the spatial relationships between the actors. We demonstrate our results on a variety of video sequences from theatre and dance performances.
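The capture-and-transfer step in the "Make-believe" item above amounts to mapping a tracked rigid pose onto a virtual character. A minimal sketch of that idea, assuming poses arrive as a position plus unit quaternion (e.g. fused from the Kinect and IMU data) and that a known calibration matrix aligns the sensor and story-world frames; all names here are illustrative, not the authors' code:

    # A minimal sketch, not the authors' pipeline: applying a captured
    # rigid pose to a virtual character's root transform.
    import numpy as np

    def quat_to_matrix(q):
        # Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix.
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def figurine_to_world(position, quaternion, calibration=np.eye(4)):
        # Build the 4x4 world transform of the virtual character from a
        # tracked figurine pose; `calibration` (assumed known) aligns the
        # sensor frame with the story-world frame.
        T = np.eye(4)
        T[:3, :3] = quat_to_matrix(quaternion)
        T[:3, 3] = position
        return calibration @ T

    # Example with a made-up pose: 45 degrees about the y axis, half a
    # metre in front of the sensor.
    pose = figurine_to_world(np.array([0.0, 0.0, 0.5]),
                             np.array([0.9239, 0.0, 0.3827, 0.0]))
    print(pose)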
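The split-screen item poses virtual pan/crop/zoom camera motion as a convex optimization. The sketch below illustrates one such formulation in cvxpy, assuming per-frame face extents are available from tracking; the weights, margin, and zoom limits are invented for illustration and this is not the paper's exact formulation:

    # A minimal sketch, assuming cvxpy and synthetic 1-D face tracks.
    import numpy as np
    import cvxpy as cp

    T = 200                                        # number of frames (assumed)
    rng = np.random.default_rng(0)
    face_c = 960 + np.cumsum(rng.normal(0, 2, T))  # synthetic face centres (px)
    face_l, face_r = face_c - 60, face_c + 60      # synthetic face extents (px)
    MARGIN = 30                                    # keep faces off the frame edge

    x = cp.Variable(T)    # horizontal centre of the crop window, per frame
    w = cp.Variable(T)    # width of the crop window, per frame

    objective = cp.Minimize(
        cp.sum_squares(cp.diff(x, 2))        # smooth panning (small acceleration)
        + cp.sum_squares(cp.diff(w, 2))      # smooth zooming
        + 0.01 * cp.sum_squares(x - face_c)  # keep the actor near the centre
    )
    constraints = [
        x - w / 2 <= face_l - MARGIN,   # face not cut at the left frame edge
        x + w / 2 >= face_r + MARGIN,   # face not cut at the right frame edge
        w >= 200, w <= 1920,            # plausible zoom range (assumed)
    ]
    cp.Problem(objective, constraints).solve()
    print(x.value[:3], w.value[:3])

Quadratic penalties on second differences minimize camera acceleration, which is what makes the virtual pans and zooms look smooth, while the linear edge constraints keep the problem convex.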