Expressive 2018
Browsing Expressive 2018 by Issue Date
Now showing 1 - 20 of 21
Item: Implicit Representation of Inscribed Volumes (ACM, 2018)
Sahbaei, Parto; Mould, David; Wyvill, Brian; Aydın, Tunç and Sýkora, Daniel
We present an implicit approach for constructing smooth isolated or interconnected 3-D inscribed volumes, which can be employed for volumetric modeling of various kinds of spongy or porous structures, such as volcanic rocks, pumice stones, cancellous bones, liquid or dry foam, radiolarians, cheese, and other similar materials. The inscribed volumes can be represented in their normal (positive) forms to model natural pebbles or pearls, or in their inverted (negative) forms to be used in porous structures; regardless of type, their smoothness and sizes are controlled by the user without losing the consistency of the shapes. We introduce two techniques for blending and creating interconnections between these inscribed volumes, giving great flexibility to adapt our approach to different types of porous structures, whether regular or irregular. We begin with a set of convex polytopes, such as 3-D Voronoi diagram cells, and compute inscribed volumes bounded by the cells. The cells can be irregular in shape, scale, and topology, and this irregularity transfers to the inscribed volumes, producing natural-looking spongy structures. Describing the inscribed volumes with implicit functions gives us the freedom to exploit volumetric surface combinations and deformation operations effortlessly.

Item: Motion-coherent stylization with screen-space image filters (ACM, 2018)
Bléron, Alexandre; Vergne, Romain; Hurtut, Thomas; Thollot, Joëlle; Aydın, Tunç and Sýkora, Daniel
One of the qualities sought in expressive rendering is the 2D impression of the resulting style, called flatness. In the context of 3D scenes, screen-space stylization techniques are good candidates for flatness, as they operate in the 2D image plane after the scene has been rendered into so-called G-buffers.
Various stylization filters can be applied in screen space while making use of the geometric information contained in G-buffers to ensure motion coherence. However, this means that filtering can only be done inside the rasterized surface of the object, which can be detrimental to styles that require irregular silhouettes to be convincing. In this paper, we describe a post-processing pipeline that allows stylization filters to extend outside the rasterized footprint of the object by locally "inflating" the data contained in G-buffers. This pipeline is fully implemented on the GPU and can be evaluated at interactive rates. We show how common image filtering techniques, when integrated in our pipeline in combination with G-buffer data, can reproduce a wide range of "digitally painted" appearances, such as directed brush strokes with irregular silhouettes, while keeping a degree of motion coherence.

Item: 2D Shading for Cel Animation (ACM, 2018)
Hudon, Matis; Pagés, Rafael; Grogan, Mairéad; Ondřej, Jan; Smolić, Aljoša; Aydın, Tunç and Sýkora, Daniel
We present a semi-automatic method for creating shades and self-shadows in cel animation. Besides producing attractive images, shades and shadows provide important visual cues about the depth, shapes, movement, and lighting of the scene. In conventional cel animation, shades and shadows are drawn by hand. As opposed to previous approaches, our method does not rely on a complex 3D reconstruction of the scene: its key advantages are simplicity and ease of use. The tool was designed to stay as close as possible to the natural 2D creative environment and therefore provides an intuitive and user-friendly interface. Our system creates shading based on hand-drawn objects or characters, given very limited guidance from the user. The method employs simple yet very efficient algorithms to create shading directly from drawn strokes.
We evaluate our system through a subjective user study and provide a qualitative comparison of our method against existing professional tools and the state of the art.

Item: Context-based Sketch Classification (ACM, 2018)
Zhang, Jianhui; Chen, Yilan; Li, Lei; Fu, Hongbo; Tai, Chiew-Lan; Aydın, Tunç and Sýkora, Daniel
We present a novel context-based sketch classification framework using relations extracted from scene images. Most existing methods perform sketch classification by considering sketched objects individually and often fail to identify their correct categories, due to the highly abstract nature of sketches. For a sketched scene containing multiple objects, we propose to classify a sketched object by considering its surrounding context in the scene, which provides vital cues for resolving its recognition ambiguity. We learn such context knowledge from a database of scene images by summarizing the inter-object relations therein, such as co-occurrence and relative positions and sizes. We show that the context information can be used for both incremental sketch classification and sketch co-classification. Our method outperforms the state-of-the-art single-object classification method, evaluated on a new dataset of sketched scenes.

Item: 3D Sketching for Interactive Model Retrieval in Virtual Reality (ACM, 2018)
Giunchi, Daniele; James, Stuart; Steed, Anthony; Aydın, Tunç and Sýkora, Daniel
We describe a novel method for searching 3D model collections using free-form sketches drawn within a virtual environment as queries. As opposed to traditional sketch retrieval, our queries are drawn directly onto an example model. Using immersive virtual reality, the user can express their query through a sketch that demonstrates the desired structure, color, and texture. Unlike previous sketch-based retrieval methods, users remain immersed within the environment without relying on textual queries or 2D projections, which can disconnect the user from the environment.
We perform a test using queries over several descriptors, evaluating the precision of each in order to select the most accurate one. We show how a convolutional neural network (CNN) can create multi-view representations of colored 3D sketches. Using such a descriptor representation, our system is able to rapidly retrieve models, providing the user with an interactive method of navigating large object datasets. Through a user study we demonstrate that, using our VR 3D model retrieval system, users can search more quickly and intuitively than with a naive linear browsing method. With our system, users can rapidly populate a virtual environment with specific models from a very large database, so the technique has the potential to be broadly applicable in immersive editing systems.

Item: Expressive 2018: frontmatter (ACM, 2018)
Aydın, Tunç and Sýkora, Daniel

Item: Fluid Brush (ACM, 2018)
Abraham, Sarah; Vouga, Etienne; Fussell, Donald; Aydın, Tunç and Sýkora, Daniel
Digital media allow artists to create a wealth of visually interesting effects that are impossible in traditional media, including temporal effects such as cinemagraph animations and expressive fluid effects. Yet these flexible and novel media often require highly technical expertise, which is outside a traditional artist's skill with paintbrush or pen. Fluid Brush acts as a form of novel digital media that retains the brush-based interactions of traditional media while expressing the movement of turbulent and laminar flow. As a digital medium controlled through a non-technical interface, Fluid Brush allows for a novel form of painting that makes fluid effects accessible to novice users and traditional artists. To provide an informal demonstration of the medium's effects, applications, and accessibility, we asked designers, traditional artists, and digital artists to experiment with Fluid Brush.
They produced a variety of works reflective of their artistic interests and backgrounds.

Item: The Role of Grouping in Sketched Diagram Recognition (ACM, 2018)
Ghodrati, Amirhossein; Blagojevic, Rachel; Guesgen, Hans W.; Marsland, Stephen; Plimmer, Beryl; Aydın, Tunç and Sýkora, Daniel
An early phase of sketched diagram recognition systems consists of grouping digital ink into possible shapes. This survey presents the key literature on automatic grouping techniques in sketch recognition. In addition, we identify the major challenges in grouping ink into identifiable shapes, discuss the common solutions to these challenges based on current research, and highlight areas for future work.

Item: Automatic Generation of Geological Stories from a Single Sketch (ACM, 2018)
Garcia, Maxime; Cani, Marie-Paule; Ronfard, Rémi; Gout, Claude; Perrenoud, Christian; Aydın, Tunç and Sýkora, Daniel
Describing the history of a terrain from a vertical geological cross-section is an important problem in geology, called geological restoration. Designing the sequential evolution of the geometry is usually done manually, involving much trial and error. In this work, we recast this task as a storyboarding problem, where the different stages of the restoration are automatically generated as storyboard panels and displayed as geological stories. Our system allows geologists to interactively explore multiple scenarios by selecting plausible geological event sequences and simulating them backward at interactive rates, causing the terrain layers to be progressively un-deposited, un-eroded, un-compacted, un-folded, and un-faulted. Storyboard sketches are generated along the way. When a restoration is complete, the storyboard panels can be used to automatically generate a forward animation of the terrain history, enabling quick visualization and validation of hypotheses.
As a proof of concept, we describe how our system was used by geologists to restore and animate cross-sections of real examples at various spatial and temporal scales and with different levels of complexity, including the Chartreuse region in the French Alps.

Item: Computational Light Painting and Kinetic Photography (ACM, 2018)
Huang, Yaozhun; Tsang, Sze-Chun; Wong, Hei-Ting Tamar; Lam, Miu-Ling; Aydın, Tunç and Sýkora, Daniel
We present a computational framework for creating swept-volume light painting and kinetic photography. Unlike conventional light painting techniques that use a hand-held point light source or LED arrays, we move a flat-panel display with a robot along a curved path. The display shows real-time rendered contours of a 3D object being sliced by the display plane along the path. All light contours are captured in a long exposure and constitute a virtual 3D object augmented in real space. To ensure geometric accuracy, we use a hand-eye calibration method to precisely obtain the transformation between the display and the robot. A path-generation algorithm automatically yields the robot path that best accommodates the 3D shape of the target model. To further avoid shape distortion due to desynchronization between the display's pose and the image content, we propose a real-time slicing method for arbitrary slicing directions. By organizing the triangular mesh into an octree data structure, the approach significantly reduces computation time and improves real-time rendering performance. We study the optimal tree level for different ranges of triangle counts so as to attain competitive computation times. Texture mapping is also implemented to produce colored light paintings. We extend our methodology to computational kinetic photography, which is dual to light painting: instead of keeping the camera stationary, we move the camera with a robot and capture long exposures of a stationary display showing light contours.
We transform the display path used for light painting into the camera path for kinetic photography. A variety of 3D models are used to verify that the proposed techniques can produce stunning long exposures with high-fidelity volumetric imagery. The techniques have great potential for innovative applications including animation, visible light communication, invisible information visualization, and creative art.

Item: Stylized Stereoscopic 3D Line Drawings from 3D Images (ACM, 2018)
Istead, Lesley; Kaplan, Craig S.; Aydın, Tunç and Sýkora, Daniel
Stereoscopic 3D (S3D) line drawings were introduced by Sir Charles Wheatstone in 1838 and persist today in various art forms, such as comic books. They may be hand-drawn or generated from 3D meshes using a variety of algorithms. When creating these drawings, emphasis is placed on consistency: ensuring that the object or scene visible in both views matches exactly, for a comfortable viewing experience and an accurate depiction of depth [Northam et al. 2013]. While producing S3D line drawings from S3D photos has not been studied in depth, several methods do exist. Kim et al. describe a method for producing stylized stereoscopic 3D line drawings from S3D photographs [Kim et al. 2012]. Their method applies Canny edge detection to the edge tangent field [Kang et al. 2007] of the left stereo image and warps the discovered edges to the right image using the disparity map. However, the rendered lines come from all edges found in the actual image, including object contours as well as texture or lighting contours. By contrast, a hand-drawn stereoscopic 3D line drawing would likely include only object contours and creases. In previous work, we explored the stylization of S3D images by decomposing an image into a set of disparity layers [Northam et al. 2013].
However, that approach would be ineffective here: while applying the Canny edge detector to the disparity map would isolate object contours from texture or lighting contours, each layer would contain only pixels of a single disparity, so there would be no edges to find in any layer. We present a method to produce stylized stereoscopic 3D line drawings from 3D photos that depicts only object contours, similar to traditional line drawings. Since contours alone can be insufficient to communicate 3D shape, we also provide the option of adding shading to our drawings to clarify shape and enhance the perception of depth.

Item: Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization (ACM, 2018)
Besançon, Lonni; Semmo, Amir; Biau, David; Frachet, Bruno; Pineau, Virginie; Sariali, El Hadi; Taouachi, Rabah; Isenberg, Tobias; Dragicevic, Pierre; Aydın, Tunç and Sýkora, Daniel
We present the first empirical study on using color manipulation and stylization to make surgery images more palatable. While aversion to such images is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques and tested them on both surgeons and lay people. While many artistic methods were found unusable by surgeons, edge-preserving image smoothing gave good results both in terms of preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). Color manipulation turned out to be less effective.

Item: Seamless Reconstruction of Part-Based High-Relief Models from Hand-Drawn Images (ACM, 2018)
Dvorožnák, Marek; Nejad, Saman Sepehri; Jamriška, Ondřej; Jacobson, Alec; Kavan, Ladislav; Sýkora, Daniel; Aydın, Tunç and Sýkora, Daniel
We present a new approach to the reconstruction of high-relief models from hand-made drawings.
Our method is tailored to an interactive modeling scenario where the input drawing can be separated into a set of semantically meaningful parts whose relative depth order is known beforehand. For this kind of input, our technique inflates individual components to have semi-elliptical profiles, positions them to satisfy the prescribed depth order, and interconnects them seamlessly. Compared to previous similar frameworks, our approach is the first to formulate this reconstruction process as a joint non-linear optimization problem. Although direct optimization is computationally demanding, we propose an approximate solution that delivers comparable results orders of magnitude faster, enabling an interactive response. We evaluate our approach on various hand-made drawings and demonstrate that it provides state-of-the-art quality compared with previous methods requiring comparable user intervention.

Item: ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters (ACM, 2018)
Fan, Xinyi; Bermano, Amit H.; Kim, Vladimir G.; Popović, Jovan; Rusinkiewicz, Szymon; Aydın, Tunç and Sýkora, Daniel
Characters in traditional artwork such as children's books or cartoon animations are typically drawn once, in fixed poses, with little opportunity to change the characters' appearance or re-use them in a different animation. To enable such applications, one can fit a consistent parametric deformable model (a puppet) to different images of a character, thus establishing consistent segmentation, dense semantic correspondence, and deformation parameters across poses. In this work we argue that a layered deformable puppet is a natural representation for hand-drawn characters, providing an effective way to deal with the articulation, expressive deformation, and occlusion that are common to this style of artwork.
Our main contribution is an automatic pipeline for fitting these models to unlabeled images depicting the same character in various poses. We demonstrate that the output of our pipeline can be used directly for editing and re-targeting animations.

Item: Sculpture Paintings (ACM, 2018)
Arpa, Sami; Süsstrunk, Sabine; Hersch, Roger D.; Aydın, Tunç and Sýkora, Daniel
We present a framework for automatically creating a type of artwork in which 2D and 3D contents are mixed within the same composition. These artworks create plausible effects for viewers by showing a different relationship between 2D and 3D at each viewing angle: as the viewing angle changes, 3D elements can clearly be seen emerging from the scene. Creating such artwork poses several challenges, the main one being to ensure continuity between the 2D and 3D parts in terms of geometry and colors. We provide a 3D synthetic environment in which the user selects a region of interest (ROI) from a given scene to be shown in 3D. We then create a flat rendering grid that matches the topology of the ROI, attach the ROI to the rendering grid, and create textures for the flat part and the ROI. To enhance the continuity between the 2D and 3D scene elements, we include bas-relief profiles around the ROI. Our framework can be used as a tool to assist artists in designing such sculpture paintings. Furthermore, it can be applied by amateur users to create decorative objects for exhibitions, souvenirs, and homes.

Item: An ego-altruist society (ACM, 2018)
Cruz, Pedro M.; Cunha, André B.; Aydın, Tunç and Sýkora, Daniel
This artwork is an artificial-life simulation that shows how a society of agents flourishes through symbiotic interactions between the egotist and altruist extremes. Egotist agents seek and absorb energy; altruist agents seek other agents, share energy, and reproduce.
They group into multi-agent organisms that adapt to the energy present in the system.

Item: Abstract Depiction of Human and Animal Figures: Examples from Two Centuries of Art and Craft (ACM, 2018)
Dodgson, Neil A.; Aydın, Tunç and Sýkora, Daniel
The human figure is important in art. I discuss examples of the abstract depiction of the human figure and the challenge faced in attempting to mimic algorithmically what human artists can achieve. The challenge lies in the workings of the human brain: we have enormous knowledge about the world and a particular ability to make fine distinctions about other humans from posture, clothing, and expression. This allows a human to make assumptions about human figures from a tiny amount of data, and allows a human artist to take advantage of this when creating art. We look at examples from impressionist and post-impressionist painting, from cross-stitch and knitting, from pixelated renderings in early video games, and from the stylisation used by the artists of children's books.

Item: Approaches for Local Artistic Control of Mobile Neural Style Transfer (ACM, 2018)
Reimann, Max; Klingbeil, Mandy; Pasewaldt, Sebastian; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias; Aydın, Tunç and Sýkora, Daniel
This work presents enhancements to state-of-the-art adaptive neural style transfer techniques, providing a generalized user interface with creativity-tool support for lower-level local control to facilitate demanding interactive editing on mobile devices. The approaches are implemented in a mobile app designed to orchestrate three neural style transfer techniques (iterative, multi-style generative, and adaptive neural networks) that can be locally controlled through on-screen painting metaphors to perform location-based filtering and direct the composition.
Based on first user tests, we conclude with insights showing different levels of satisfaction with the implemented techniques and the user interaction design, and point out directions for future research.

Item: MNPR: A Framework for Real-Time Expressive Non-Photorealistic Rendering of 3D Computer Graphics (ACM, 2018)
Montesdeoca, Santiago E.; Seah, Hock Soon; Semmo, Amir; Bénard, Pierre; Vergne, Romain; Thollot, Joëlle; Benvenuti, Davide; Aydın, Tunç and Sýkora, Daniel
We propose MNPR, a framework for expressive non-photorealistic rendering of 3D computer graphics. Our work focuses on enabling stylization pipelines with a wide range of control, thereby covering the interaction spectrum with real-time feedback. In addition, we introduce control semantics that allow cross-stylistic art direction, demonstrated through our implemented watercolor, oil, and charcoal stylizations. Our generalized control semantics and their style-specific mappings are designed to be extrapolated to other styles by adhering to the same control scheme. We then share our implementation details by breaking down our framework and elaborating on its inner workings. Finally, we evaluate the usefulness of each level of control through a user study involving 20 experienced artists and engineers in industry, who have collectively spent over 245 hours using our system. MNPR is implemented in Autodesk Maya and open-sourced through this publication, to facilitate adoption by artists and further development by the expressive research and development community.

Item: Brush Stroke Synthesis with a Generative Adversarial Network Driven by Physically Based Simulation (ACM, 2018)
Wu, Rundong; Chen, Zhili; Wang, Zhaowen; Yang, Jimei; Marschner, Steve; Aydın, Tunç and Sýkora, Daniel
We introduce a novel approach that uses a generative adversarial network (GAN) to synthesize realistic oil-painting brush strokes, where the network is trained with data generated by a high-fidelity simulator.
Among approaches to digitally synthesizing natural-media painting strokes, methods using physically based simulation produce by far the most realistic visual results and allow the most intuitive control of stroke variations. However, accurate physics simulations are computationally expensive and often cannot meet the performance requirements of painting applications. A few existing simulation-based methods have reached real-time performance, at the cost of lower visual quality resulting from simplified models or lower resolution. In our work, we propose to replace the expensive fluid simulation with a neural network generator. The network takes the existing canvas and new brush-trajectory information as input and produces the height and color of the paint surface as output. We build a large training dataset of painting samples by feeding random strokes from artists' recordings into a high-quality offline simulator. The network produces visual quality comparable to the offline simulator, with better performance than the existing real-time oil painting simulator. Finally, we implement a real-time painting system using the trained network with stroke splitting and patch blending, and show artworks created with the system by artists. Our neural network approach opens up new opportunities for real-time applications of sophisticated and expensive physically based simulation.
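The last abstract above describes a generator that maps (current canvas, brush trajectory) to (paint height, paint color). As a rough, self-contained illustration of that interface only, the sketch below replaces the trained GAN with a trivial Gaussian deposition rule; the function name, the deposition model, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import math

def apply_stroke(height, color, trajectory, radius=2.0, paint=(0.2, 0.3, 0.8)):
    """Toy stand-in for a stroke generator: deposit paint along a brush
    trajectory, updating per-pixel height (paint thickness) and RGB color."""
    rows, cols = len(height), len(height[0])
    for tx, ty in trajectory:
        for y in range(rows):
            for x in range(cols):
                d2 = (x - tx) ** 2 + (y - ty) ** 2
                w = math.exp(-d2 / (2.0 * radius ** 2))  # Gaussian brush footprint
                if w > 1e-3:
                    height[y][x] += w                    # paint accumulates
                    r, g, b = color[y][x]                # blend toward brush color
                    color[y][x] = (r + w * (paint[0] - r),
                                   g + w * (paint[1] - g),
                                   b + w * (paint[2] - b))
    return height, color
```

In the paper's system, a learned network evaluated per canvas patch would take the place of the inner loops, with long trajectories split into stroke segments and the resulting patches blended back into the canvas.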