WICED 2022
Browsing WICED 2022 by Issue Date
Now showing 1 - 8 of 8
Item
(Re-)Framing Virtual Reality (The Eurographics Association, 2022)
Sagot-Duvauroux, Rémi; Garnier, François; Ronfard, Rémi; Ronfard, Rémi; Wu, Hui-Yin

We address the problem of translating the rich vocabulary of cinematographic shots elaborated in classic films for use in virtual reality. Using a classic scene from Alfred Hitchcock's "North by Northwest", we describe a series of artistic experiments attempting to enter "inside the movie" under various conditions, and report on the challenges facing the film director in this task. For the case of room-scale VR, we suggest that the absence of the visual frame of the screen can be usefully replaced by the spatial frame of the physical room where the experience takes place. This "re-framing" opens new directions for creative film directing in virtual reality.

Item
Evaluation of Deep Pose Detectors for Automatic Analysis of Film Style (The Eurographics Association, 2022)
Wu, Hui-Yin; Nguyen, Luan; Tabei, Yoldoz; Sassatelli, Lucile; Ronfard, Rémi; Wu, Hui-Yin

Identifying human characters and how they are portrayed on-screen is inherently linked to how we perceive and interpret the story and artistic value of visual media. Building computational models that are sensitive to story will thus require a formal representation of the character. Yet this kind of data is complex and tedious to annotate on a large scale. Human pose estimation (HPE) can facilitate this task by identifying features such as position, size, and movement that can be transformed into input for machine learning models, enabling higher-level artistic and storytelling interpretation. However, current HPE methods operate mainly on non-professional image content, with no comprehensive evaluation of their performance on artistic film. Our goal in this paper is thus to evaluate the performance of HPE methods on artistic film content.
We first propose a formal representation of the character based on cinematography theory, then sample and annotate 2700 images from three datasets with this representation, one of which we introduce to the community. An in-depth analysis is then conducted to measure the general performance of two recent HPE methods on metrics of precision and recall for character detection, and to examine the impact of cinematographic style. From these findings, we highlight the advantages of HPE for automated film analysis, and propose future directions to improve its performance on artistic film content.

Item
Real-Time Music-Driven Movie Design Framework (The Eurographics Association, 2022)
Hofmann, Sarah; Seeger, Maximilian; Rogge-Pott, Henning; Mammen, Sebastian von; Ronfard, Rémi; Wu, Hui-Yin

Cutting to music is a widely used stylistic device in filmmaking. The usual process involves an editor manually adjusting the movie's sequences contingent upon the beat or other musical features. But with today's movie productions starting to leverage real-time systems, this manual effort can be reduced. Automatic cameras can make decisions on their own according to pre-defined rules, even in real time. In this paper, we present an approach to automatically create a music video. We have realised its implementation as a coding framework integrating with the FMOD API and Unreal Engine 4. The framework provides the means to analyze a music stream at runtime and to translate the extracted features into an animation story line, supported by cinematic cutting.
We demonstrate its workings by means of an artistic, music-driven movie.

Item
WICED 2022: Frontmatter (The Eurographics Association, 2022)
Ronfard, Rémi; Wu, Hui-Yin; Ronfard, Rémi; Wu, Hui-Yin

Item
Framework to Computationally Analyze Kathakali Videos (The Eurographics Association, 2022)
Bulani, Pratikkumar; S, Jayachandran; Sivaprasad, Sarath; Gandhi, Vineet; Ronfard, Rémi; Wu, Hui-Yin

Kathakali is one of the major forms of classical Indian dance. The dance form is distinguished by its elaborately colourful makeup, costumes and face masks. In this work, we present (a) a framework to analyze the facial expressions of the actors and (b) novel visualization techniques for the same. Due to the extensive makeup, costumes and masks, general face analysis techniques fail on Kathakali videos. We present a dataset with manually annotated Kathakali sequences for four downstream tasks: face detection, background subtraction, landmark detection and face segmentation. We rely on transfer learning to fine-tune deep learning models, and present qualitative and quantitative results for these tasks. Finally, we present a novel application of style transfer from Kathakali video onto a cartoonized face. The comprehensive framework presented in the paper paves the way for better understanding, analysis, pedagogy and visualization of Kathakali videos.

Item
The Prose Storyboard Language: A Tool for Annotating and Directing Movies (The Eurographics Association, 2022)
Ronfard, Rémi; Gandhi, Vineet; Boiron, Laurent; Murukutla, Vaishnavi Ameya; Ronfard, Rémi; Wu, Hui-Yin

The prose storyboard language is a formal language for describing movies shot by shot, where each shot is described with a unique sentence. The language uses a simple syntax and a limited vocabulary borrowed from working practices in traditional movie-making, and is intended to be readable both by machines and humans.
The language has been designed over the last ten years to serve as a high-level user interface for intelligent cinematography and editing systems. In this new paper, we present the latest evolution of the language and the results of an extensive annotation exercise showing the benefits of the language in the task of annotating the sophisticated cinematography and film editing of classic movies.

Item
Consistent Multi- and Single-View HDR-Image Reconstruction from Single Exposures (The Eurographics Association, 2022)
Mohan, Aditya; Zhang, Jing; Cozot, Remi; Loscos, Celine; Ronfard, Rémi; Wu, Hui-Yin

Recently, there have been attempts to obtain high-dynamic-range (HDR) images from single exposures, and efforts to reconstruct multi-view HDR images using multiple input exposures. However, to the best of our knowledge, there have been no attempts to reconstruct multi-view HDR images from multi-view single exposures. We present a two-step methodology to obtain color-consistent multi-view HDR reconstructions from single-exposure multi-view low-dynamic-range (LDR) images. We define a new combination of the Mean Absolute Error and Multi-Scale Structural Similarity Index loss functions to train a network to reconstruct an HDR image from an LDR one. Once trained, we apply this network to multi-view input. When tested on single images, the outputs achieve competitive results with the state of the art. Quantitative and qualitative metrics applied to our results and to the state of the art show that our HDR expansion outperforms others while maintaining similar qualitative reconstruction results.
We also demonstrate that applying this network to multi-view images ensures coherence throughout the generated grid of HDR images.

Item
Using Advene to Bridge the Gap Between Users and Ontologies in Movie Annotation (The Eurographics Association, 2022)
Aubert, Olivier; Ronfard, Rémi; Wu, Hui-Yin

The analysis of feature movies and documentaries has always relied on the available access tools: movie theaters required memorizing whole sequences, while home video (VHS, DVD) brought new possibilities for analysis. Digital video tools now provide additional capabilities, such as video annotation, which is sometimes used in research contexts, from simple synchronized note-taking to more structured approaches. The AdA project of the Cinepoietics team at Freie Universität Berlin investigates the audiovisual rhetorics of affect in audiovisual media on the financial crisis. The analyses are framed by theoretical assumptions about the process of film viewing, and one goal of the project is to study to what extent a systematic approach based on semantic annotations of the audiovisual corpus can shed new light on these reflections. Such an approach requires appropriate tooling for humanities researchers. In this contribution, we describe how the Advene video annotation platform has been extended and used to produce and apply semantic annotations, to validate the underlying ontology, and to accompany the practices of humanities researchers in the AdA project.
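The HDR reconstruction item above trains with a combination of a pixel-wise term (Mean Absolute Error) and a structural term (Multi-Scale SSIM). A minimal sketch of such a combined loss is given below; note that the single-scale, global-statistics SSIM, the 0.85 weighting, and all function names here are illustrative assumptions, not the authors' actual implementation, which uses the full multi-scale index.

```python
import numpy as np

def mae(x, y):
    # pixel-wise Mean Absolute Error
    return np.mean(np.abs(x - y))

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # simplified SSIM using global image statistics
    # (the real MS-SSIM uses sliding windows at several scales)
    mx, my = x.mean(), y.mean()
    dx, dy = x - mx, y - my
    vx, vy = (dx * dx).mean(), (dy * dy).mean()
    cov = (dx * dy).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def combined_loss(pred, target, alpha=0.85):
    # blend a structural term (1 - SSIM) with MAE;
    # alpha is an assumed weighting, not taken from the paper
    return alpha * (1.0 - ssim_global(pred, target)) + (1.0 - alpha) * mae(pred, target)
```

For identical images the loss is zero, and it grows as either the pixel values or the local structure of the prediction diverge from the target, which is the intent of pairing an absolute-error term with a perceptual similarity index.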