WICED 2017
Browsing WICED 2017 by Subject "I.3.7 [Computer Graphics]"
Item: CaMor: Screw Interpolation between Perspective Projections of Partial Views of Rectangular Images (The Eurographics Association, 2017)
Authors: Raghuraman, Gokul; Barrash, Nicholas; Rossignac, Jarek
Editors: William Bares; Vineet Gandhi; Quentin Galvane; Rémi Ronfard
Abstract: CaMor is a tool for generating an animation from a single drawing or photograph that represents a partial view of a perspective projection of a planar shape or image containing portions of only three edges of an unknown rectangle. The user identifies these portions and indicates where the corresponding lines should be at the end of the animation. CaMor produces a non-affine animation of the entire plane by combining (1) a new rectification procedure that identifies the 3D orientation of a rectangle from the partial image of its perspective projection, (2) a depth adjustment that ensures that the two rectified rectangles are congruent in 3D, (3) a screw motion that interpolates in 3D between the two congruent shapes, and (4) at each frame, a perspective projection of a user-selected portion of the original image. The animation may be modified interactively by adjusting the final positions of the lines or the focal length. We suggest applications to the animation of hand-drawn scenes, to morphing between two photographs, and to the intuitive design of camera motions for indoor and street scenes.

Item: La Caméra Enchantée (The Eurographics Association, 2017)
Authors: Rossignac, Jarek
Editors: William Bares; Vineet Gandhi; Quentin Galvane; Rémi Ronfard
Abstract: A rich set of tools has been developed for designing and animating camera motions. Most of them optimize some geometric measure while satisfying a set of geometric constraints. Others strive to provide an intuitive graphical user interface for manipulating the camera motion or the key poses that control it. We will start by reviewing examples of such tools developed by the speaker and his collaborators and students. These include a 6-DoF GUI for moving a MiniCam over a floor plan of the set, arguing the benefits of screw motions for interpolating key poses, using HelBender to smooth piecewise helical interpolating motions, controlling the camera by moving on-screen the locations of feature points tracked by the camera, and scene-graph extensions that support smooth transitions between tracked objects. Then, we will ask harder questions: What is the best way for the user to specify the objectives, the constraints, and the camera motion style? How do we define and program such a style? Is the objective to make the motion so natural that it is not noticed by the viewer, or should we strive to support aesthetic qualities and artistic camera actions? And finally, how do we define and program responsive camera behaviors for interactive environments? The author's prior publications referenced in the talk include: [SBM 95], [RK01], [KR03], [PR05], [RKS 07], [PR08], [RS08], [RV11], [RK12], [RLV12].

Item: Making Movies from Make-Believe Games (The Eurographics Association, 2017)
Authors: Barbulescu, Adela; Garcia, Maxime; Vaufreydaz, Dominique; Cani, Marie-Paule; Ronfard, Rémi
Editors: William Bares; Vineet Gandhi; Quentin Galvane; Rémi Ronfard
Abstract: Pretend play is a storytelling technique, naturally used from a very young age, which relies on object substitution to represent the characters of an imagined story. We propose "Make-believe", a system for making movies from pretend play using 3D-printed figurines as props. We capture the rigid motions of the figurines and the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors, and transfer them to the virtual story-world. As a proof of concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into a 3D animation.

Item: Using ECPs for Interactive Applications in Virtual Cinematography (The Eurographics Association, 2017)
Authors: Wu, Hui-Yin; Li, Tsai-Yen; Christie, Marc
Editors: William Bares; Vineet Gandhi; Quentin Galvane; Rémi Ronfard
Abstract: This paper introduces an interactive application of our previous work on the Patterns language as a creative assistant for editing cameras in 3D virtual environments. Patterns is a vocabulary inspired by professional film practice and textbook terminology. The vocabulary allows one to define recurrent stylistic constraints on a sequence of shots, which we term "embedded constraint patterns" (ECPs). In our previous work, we proposed a solver that searches for occurrences of ECPs in annotated data, and showed its use in the automated analysis of story and emotional elements of film. This work implements a new solver that interactively proposes framing compositions, drawn from an annotated database of framings, that conform to the user-applied ECPs. We envision this work being incorporated into tools and interfaces for 3D environments in the context of film pre-visualisation, film or digital arts education, video games, and other related applications in film and multimedia.

Item: Zooming On All Actors: Automatic Focus+Context Split Screen Video Generation (The Eurographics Association, 2017)
Authors: Kumar, Moneish; Gandhi, Vineet; Ronfard, Rémi; Gleicher, Michael
Editors: William Bares; Vineet Gandhi; Quentin Galvane; Rémi Ronfard
Abstract: Stage performances can be easily captured with a high-resolution camera, but the result is often difficult to watch because actor faces are too small. We present a novel approach for creating a split-screen video that incorporates both the context and the close-up details of the actors. Our system takes as input the static recording of a stage performance and tracking information about the actor positions, and generates a video with a wide master shot and a set of close-ups of all identified actors, yielding a focus+context view that shows both the overall action and the details of actor faces. The key to our approach is to compute camera motions that yield cinematically valid close-ups and to ensure that the views of the different actors are properly coordinated and presented. The close-up views are created as virtual camera movements by applying panning, cropping, and zooming to the source video. We pose the computation of camera motions as a convex optimization that creates detailed views and smooth movements, subject to cinematic constraints such as not cutting faces with the edge of the frame. Additional constraints allow the close-up views of each actor to interact, causing them to merge seamlessly when actors are close. Generated views are then placed in a layout that preserves the spatial relationships between actors. We demonstrate our results on a variety of video sequences from theatre and dance performances.
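Screw-motion interpolation, which appears in both the CaMor and La Caméra Enchantée entries above, can be illustrated in the planar case: any rigid motion with nonzero rotation is a pure rotation about a fixed pole, and the screw interpolant simply rotates by a fraction of the angle about that pole. The papers work with full 3D screws; the sketch below is only this planar analogue, and the function name is hypothetical:

```python
import math

def screw_interp_2d(theta, tx, ty, s):
    """Pose at fraction s of the planar 'screw' whose full motion maps
    p -> R(theta) p + (tx, ty), with theta != 0.
    Returns (angle, dx, dy) of the intermediate rigid motion."""
    # The pole c is the fixed point of the motion: (I - R(theta)) c = t.
    a, b = 1.0 - math.cos(theta), math.sin(theta)
    det = a * a + b * b                 # = 2 - 2 cos(theta), nonzero for theta != 0
    cx = (a * tx - b * ty) / det       # closed-form 2x2 solve for the pole
    cy = (b * tx + a * ty) / det
    # Intermediate motion: rotate by s * theta about the pole.
    cs, ss = math.cos(s * theta), math.sin(s * theta)
    dx = cx - (cs * cx - ss * cy)
    dy = cy - (ss * cx + cs * cy)
    return s * theta, dx, dy
```

By construction the interpolant reproduces the identity at s = 0 and the full input motion at s = 1, with the intermediate poses sweeping a circular arc about the pole.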
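The final entry poses virtual camera motion as a convex optimization trading fidelity to the tracked actor positions against smoothness of the virtual pan. The paper's actual formulation includes cinematic constraints and coordinated multi-actor views; a minimal unconstrained 1-D analogue (hypothetical function name) minimizes sum((x[i] - a[i])^2) + lam * sum((x[i+1] - x[i])^2) by coordinate descent:

```python
def smooth_track(a, lam=4.0, iters=2000):
    """Smooth a 1-D track `a` of raw crop-window centers.
    Minimizes sum (x[i] - a[i])^2 + lam * sum (x[i+1] - x[i])^2
    by coordinate descent; each update is the exact minimizer in x[i]."""
    x = list(a)
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            num, den = a[i], 1.0      # data term pulls toward the raw track
            if i > 0:                 # smoothness term pulls toward neighbors
                num += lam * x[i - 1]
                den += lam
            if i < n - 1:
                num += lam * x[i + 1]
                den += lam
            x[i] = num / den
    return x
```

Feeding in a jittery sequence of actor x-positions returns a smoother path suitable for driving a virtual pan; larger `lam` favors steadier camera movement over tight framing.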