WICED 2022
Browsing WICED 2022 by Subject "Applied computing"
Item: Evaluation of Deep Pose Detectors for Automatic Analysis of Film Style (The Eurographics Association, 2022)
Authors: Wu, Hui-Yin; Nguyen, Luan; Tabei, Yoldoz; Sassatelli, Lucile
Editors: Ronfard, Rémi; Wu, Hui-Yin
Abstract: Identifying human characters and how they are portrayed on-screen is inherently linked to how we perceive and interpret the story and artistic value of visual media. Building computational models sensitive to story will thus require a formal representation of the character. Yet this kind of data is complex and tedious to annotate on a large scale. Human pose estimation (HPE) can facilitate this task by identifying features such as position, size, and movement that can be transformed into input for machine learning models, enabling higher-level artistic and storytelling interpretation. However, current HPE methods operate mainly on non-professional image content, with no comprehensive evaluation of their performance on artistic film. Our goal in this paper is thus to evaluate the performance of HPE methods on artistic film content. We first propose a formal representation of the character based on cinematography theory, then sample and annotate 2700 images from three datasets with this representation, one of which we introduce to the community. An in-depth analysis is then conducted to measure the general performance of two recent HPE methods on metrics of precision and recall for character detection, and to examine the impact of cinematographic style. From these findings, we highlight the advantages of HPE for automated film analysis and propose future directions to improve their performance on artistic film content.
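The abstract above reports precision and recall for character detection but gives no implementation details; the Python sketch below shows one common way such metrics are computed, by greedily matching predicted character boxes to ground-truth boxes at an IoU threshold. The box format, threshold value, and function names are illustrative assumptions, not taken from the paper.

# Illustrative sketch only: scoring character detection with precision and
# recall via IoU matching. The (x1, y1, x2, y2) box format and the 0.5
# threshold are assumptions, not details from the paper.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(predicted, ground_truth, iou_threshold=0.5):
    """Greedily match predicted boxes one-to-one against ground truth."""
    matched_gt = set()
    true_positives = 0
    for pred in predicted:
        best_j, best_iou = None, 0.0
        for j, gt in enumerate(ground_truth):
            if j in matched_gt:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None and best_iou >= iou_threshold:
            matched_gt.add(best_j)
            true_positives += 1
    precision = true_positives / len(predicted) if predicted else 1.0
    recall = true_positives / len(ground_truth) if ground_truth else 1.0
    return precision, recall

# Example: two detections, one of which overlaps a ground-truth character.
print(precision_recall([(10, 10, 60, 120), (200, 40, 260, 180)],
                       [(12, 8, 58, 118)]))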
Item: Framework to Computationally Analyze Kathakali Videos (The Eurographics Association, 2022)
Authors: Bulani, Pratikkumar; S, Jayachandran; Sivaprasad, Sarath; Gandhi, Vineet
Editors: Ronfard, Rémi; Wu, Hui-Yin
Abstract: Kathakali is one of the major forms of Classical Indian Dance. The dance form is distinguished by its elaborately colourful makeup, costumes, and face masks. In this work, we present (a) a framework to analyze the facial expressions of the actors and (b) novel visualization techniques for the same. Because of the extensive makeup, costumes, and masks, general face analysis techniques fail on Kathakali videos. We present a dataset with manually annotated Kathakali sequences for four downstream tasks: face detection, background subtraction, landmark detection, and face segmentation. We rely on transfer learning to fine-tune deep learning models and present qualitative and quantitative results for these tasks. Finally, we present a novel application of style transfer from Kathakali video onto a cartoonized face. The comprehensive framework presented in the paper paves the way for better understanding, analysis, pedagogy, and visualization of Kathakali videos.

Item: The Prose Storyboard Language: A Tool for Annotating and Directing Movies (The Eurographics Association, 2022)
Authors: Ronfard, Rémi; Gandhi, Vineet; Boiron, Laurent; Murukutla, Vaishnavi Ameya
Editors: Ronfard, Rémi; Wu, Hui-Yin
Abstract: The prose storyboard language is a formal language for describing movies shot by shot, where each shot is described with a unique sentence. The language uses a simple syntax and a limited vocabulary borrowed from working practices in traditional movie-making, and is intended to be readable both by machines and humans. The language has been designed over the last ten years to serve as a high-level user interface for intelligent cinematography and editing systems. In this paper, we present the latest evolution of the language and the results of an extensive annotation exercise showing the benefits of the language in the task of annotating the sophisticated cinematography and film editing of classic movies.

Item: (Re-)Framing Virtual Reality (The Eurographics Association, 2022)
Authors: Sagot-Duvauroux, Rémi; Garnier, François; Ronfard, Rémi
Editors: Ronfard, Rémi; Wu, Hui-Yin
Abstract: We address the problem of translating the rich vocabulary of cinematographic shots elaborated in classic films for use in virtual reality. Using a classic scene from Alfred Hitchcock's "North by Northwest", we describe a series of artistic experiments that attempt to enter "inside the movie" under various conditions, and report on the challenges the film director faces in this task. For the case of room-scale VR, we suggest that the absence of the visual frame of the screen can usefully be replaced by the spatial frame of the physical room where the experience takes place. This "re-framing" opens new directions for creative film directing in virtual reality.

Item: Real-Time Music-Driven Movie Design Framework (The Eurographics Association, 2022)
Authors: Hofmann, Sarah; Seeger, Maximilian; Rogge-Pott, Henning; Mammen, Sebastian von
Editors: Ronfard, Rémi; Wu, Hui-Yin
Abstract: Cutting to music is a widely used stylistic device in filmmaking. The usual process involves an editor manually adjusting the movie's sequences to the beat or to other musical features. But with today's movie productions starting to leverage real-time systems, this manual effort can be reduced: automatic cameras can make decisions on their own according to pre-defined rules, even in real time. In this paper, we present an approach to automatically create a music video. We have implemented it as a coding framework that integrates the FMOD API and Unreal Engine 4. The framework provides the means to analyze a music stream at runtime and to translate the extracted features into an animation storyline, supported by cinematic cutting. We demonstrate it with an artistic, music-driven movie.
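The music-driven framework above is built on the FMOD API and Unreal Engine 4; the Python sketch below only illustrates the general idea of translating extracted musical features into cut decisions under a minimum-shot-length rule and is not the authors' implementation. The beat times, per-beat energies, and parameter values are hypothetical inputs that a real system would obtain from runtime audio analysis.

# Illustrative sketch only: schedule cuts on musical beats, skipping beats that
# would make a shot shorter than a minimum duration and preferring stronger
# beats. Inputs (beat times in seconds, per-beat energy in [0, 1]) are
# hypothetical placeholders for features extracted from the music stream.

def schedule_cuts(beat_times, beat_energies, min_shot_length=1.5,
                  energy_threshold=0.6):
    """Return the times at which the movie should cut to a new shot."""
    cuts = []
    last_cut = 0.0
    for t, energy in zip(beat_times, beat_energies):
        if t - last_cut < min_shot_length:
            continue  # shot would be too short, keep the current shot rolling
        if energy < energy_threshold:
            continue  # weak beat, wait for a stronger one
        cuts.append(t)
        last_cut = t
    return cuts

# Example: beats every 0.5 s with varying strength.
beats = [i * 0.5 for i in range(1, 17)]
energies = [0.3, 0.8, 0.4, 0.9, 0.2, 0.7, 0.5, 0.95,
            0.3, 0.8, 0.4, 0.9, 0.2, 0.7, 0.5, 0.95]
print(schedule_cuts(beats, energies))  # -> [2.0, 4.0, 6.0, 8.0]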