Expressive 2019
Browsing Expressive 2019 by Subject "Computing methodologies"
Now showing 1 - 12 of 12
Edited by Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen

Item: Abstract Shape Synthesis From Linear Combinations of Clelia Curves (The Eurographics Association, 2019)
Authors: Putnam, Lance; Todd, Stephen; Latham, William
This article outlines several families of shapes that can be produced from a linear combination of Clelia curves. We present parameters required to generate a single curve that traces out a large variety of shapes with controllable axial symmetries. Several families of shapes emerge from the equation that provide a productive means by which to explore the parameter space. The mathematics involves only arithmetic and trigonometry, making it accessible to those with only the most basic mathematical background. We outline formulas for producing basic shapes, such as cones, cylinders, and tori, as well as more complex families of shapes having non-trivial symmetries. This work is of interest to computational artists and designers, as the curves can be constrained to exhibit specific types of shape motifs while still permitting a liberal amount of room for exploring variations on those shapes.

Item: Aesthetically-Oriented Atmospheric Scattering (The Eurographics Association, 2019)
Authors: Shen, Yang; Mallett, Ian; Shkurko, Konstantin
We present Aesthetically-Oriented Atmospheric Scattering (AOAS): an experiment into the feasibility of using real-time rendering as a tool to explore sky styles. AOAS provides an interactive design environment which enables rapid iteration cycles from concept to implementation to preview. Existing real-time rendering techniques for atmospheric scattering struggle to produce non-photorealistic sky styles within any 3D scene. To solve this problem, we first simplify the geometric representation of atmospheric scattering to a single skydome, leveraging the flexibility and simplicity of skydomes in compositing with 3D scenes. Second, we classify the essential and non-essential visual characteristics of the sky and allow AOAS to vary the latter, thus producing meaningful, non-photorealistic sky styles with real-time atmospheric scattering that are still recognizable as skies but contain artistic stylization. We use AOAS to generate a wide variety of sky examples ranging from physical to highly stylized in appearance. The algorithm can be easily implemented on the GPU and performs at interactive frame rates with low memory consumption and CPU usage.

Item: Defining Hatching in Art (The Eurographics Association, 2019)
Authors: Philbrick, Greg; Kaplan, Craig S.
We define hatching, a drawing technique, as rigorously as possible. A pure mathematical formulation or even a binary this-or-that definition is unreachable, but useful insights come from driving as close as we can. First we explain hatching's purposes. Then we define hatching as the use of patches: groups of roughly parallel curves that form flexible, simple patterns. After elaborating on this definition's parts, we briefly treat considerations for research in expressive rendering.

Item: Enhancing Neural Style Transfer using Patch-Based Synthesis (The Eurographics Association, 2019)
Authors: Texler, Ondřej; Fišer, Jakub; Lukáč, Mike; Lu, Jingwan; Shechtman, Eli; Sýkora, Daniel
We present a new approach to example-based style transfer which combines neural methods with patch-based synthesis to achieve compelling stylization quality even for high-resolution imagery. We take advantage of neural techniques to provide adequate stylization at the global level and use their output as a prior for subsequent patch-based synthesis at the detail level. Thanks to this combination, our method better preserves the high frequencies of the original artistic media, thereby dramatically increasing the fidelity of the resulting stylized imagery. We also show how to stylize extremely large images (e.g., 340 Mpix) without the need to run the synthesis at the pixel level, while still retaining the original high-frequency details.

Item: Generating Playful Palettes from Images (The Eurographics Association, 2019)
Authors: DiVerdi, Stephen; Lu, Jingwan; Echevarria, Jose; Shugrina, Maria
Playful Palettes are a recent innovation in how artists can mix, explore, and choose colors in a user interface that combines the benefits of a traditional media painter's palette with the non-destructive capabilities of digital tools. We present a technique to generate a Playful Palette that best represents the colors found in an input image, allowing the artist to select colors from the image's gamut while maintaining full editability of the palette. We show that our approach outperforms recent work in terms of how accurately the image gamut is reproduced, and we present an approximation algorithm that is an order of magnitude faster with an acceptable loss in quality.

Item: Irregular Pebble Mosaics with Sub-Pebble Detail (The Eurographics Association, 2019)
Authors: Javid, Ali Sattari; Doyle, Lars; Mould, David
Pebble mosaics convey images through an irregular tiling of rounded pebbles. Past work used relatively uniform tile sizes. We show how to create detailed representations of input photographs in a pebble mosaic style: we first create pebble shapes through a variant of k-means, then compute sub-pebble detail with textured, two-tone pebbles. We use a custom distance function to ensure that pebble sizes adapt to local detail and orient to local feature directions, for an overall effect of high fidelity to the input photograph despite the constraints of the pebble style.

Item: Learning from Multi-domain Artistic Images for Arbitrary Style Transfer (The Eurographics Association, 2019)
Authors: Xu, Zheng; Wilber, Michael; Fang, Chen; Hertzmann, Aaron; Jin, Hailin
We propose a fast feed-forward network for arbitrary style transfer, which can generate stylized images for previously unseen content and style image pairs. Besides the traditional content and style representation based on deep features and statistics for textures, we use adversarial networks to regularize the generation of stylized images. Our adversarial network learns the intrinsic property of image styles from large-scale multi-domain artistic images. The adversarial training is challenging because both the input and output of our generator are diverse multi-domain images. We use a conditional generator that stylizes content by shifting the statistics of deep features, and a conditional discriminator based on the coarse category of styles. Moreover, we propose a mask module to spatially decide the stylization level and stabilize adversarial training by avoiding mode collapse. As a side effect, our trained discriminator can be applied to rank and select representative stylized images. We qualitatively and quantitatively evaluate the proposed method and compare it with recent style transfer methods. We release our code and model at https://github.com/nightldj/behance_release.

Item: Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network (The Eurographics Association, 2019)
Authors: Futschik, David; Chai, Menglei; Cao, Chen; Ma, Chongyang; Stoliar, Aleksei; Korolev, Sergey; Tulyakov, Sergey; Kučera, Michal; Sýkora, Daniel
We present a learning-based style transfer algorithm for human portraits which significantly outperforms the current state of the art in computational overhead while still maintaining comparable visual quality. We show how to design a conditional generative adversarial network capable of reproducing the output of the patch-based method of Fišer et al. [FJS*17], which is slow to compute but can deliver state-of-the-art visual quality. Since the resulting end-to-end network can be evaluated quickly on current consumer GPUs, our solution enables the first real-time high-quality style transfer for facial videos that runs at interactive frame rates. Moreover, in cases where the original algorithmic approach of Fišer et al. fails, our network can provide a more visually pleasing result thanks to generalization. We demonstrate the practical utility of our approach on a variety of different styles and target subjects.

Item: Single Stroke Aerial Robot Light Painting (The Eurographics Association, 2019)
Authors: Ren, Kejia; Kry, Paul G.
This paper investigates trajectory generation alternatives for creating single-stroke light paintings with a small quadrotor robot. We propose to reduce the cost of a minimum-snap piecewise-polynomial quadrotor trajectory passing through a set of waypoints by displacing those waypoints towards or away from the camera while preserving their projected positions. It is in regions of high curvature, where waypoints are close together, that we make modifications to reduce snap, and we evaluate two different strategies: one that uses a full range of depths to increase the distance between close waypoints, and another that tries to keep the final set of waypoints as close to the original plane as possible. Using a variety of one-stroke animal illustrations as targets, we evaluate and compare the cost of different optimized trajectories, and discuss the qualitative and quantitative quality of flights captured in long-exposure photographs.

Item: Sketching and Layering Graffiti Primitives (The Eurographics Association, 2019)
Authors: Berio, Daniel; Asente, Paul; Echevarria, Jose; Leymarie, Frederic Fol
We present a variant of the skeletal strokes algorithm aimed at mimicking the appearance of hand-made graffiti art. It includes a unique fold-culling process that stylizes folds rather than eliminating them. We demonstrate how the stroke structure can be exploited to generate non-global layering and self-overlap effects like those typically seen in graffiti art and other related art forms such as traditional calligraphy. The method produces vector output with no artificial artwork splits, patches, or masks to render the non-global layering; each path of the vector output is part of the desired outline. The method lets users interactively generate a wide variety of stylised outputs.

Item: Stipple Removal in Extreme-tone Regions (The Eurographics Association, 2019)
Authors: Azami, Rosa; Doyle, Lars; Mould, David
Conventional tone-preserving stippling struggles with extreme-tone regions. Dark regions require immense quantities of stipples, while light regions become littered with stipples that are distracting and, because of their low density, cannot communicate any image features that may be present. We propose a method to address these problems, augmenting existing stippling methods. We cover dark regions with solid polygons rather than stipples; in light areas, we both preprocess the image to prevent stipple placement in the very lightest areas and postprocess the stipple distribution to remove stipples that contribute little to the image structure. Our modified stipple images have better visual quality than the originals despite using fewer stipples.

Item: Video Motion Stylization by 2D Rigidification (The Eurographics Association, 2019)
Authors: Delanoy, Johanna; Bousseau, Adrien; Hertzmann, Aaron
This paper introduces a video stylization method that increases the apparent rigidity of motion. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. In contrast, traditional hand-drawn animations often exhibit simplified in-plane motion, such as in the case of cut-out animations where the animator moves pieces of paper from frame to frame. Inspired by this technique, we propose to modify a video such that its content undergoes 2D rigid transforms. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. The output of our method is a new video and its optical flow, which can be fed to any existing video stylization algorithm.
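The Clelia-curve abstract above notes that the mathematics involves only arithmetic and trigonometry. As an illustration of that claim (a minimal sketch, not the authors' code; the function names, the parametrization, and the example weights are assumptions), a linear combination of Clelia curves can be sampled like this:

```python
import math

def clelia(theta, m):
    """Point on a unit Clelia curve: longitude theta, colatitude m * theta."""
    return (math.sin(m * theta) * math.cos(theta),
            math.sin(m * theta) * math.sin(theta),
            math.cos(m * theta))

def combined(theta, terms):
    """Linear combination of Clelia curves; terms is a list of (weight, m) pairs."""
    x = y = z = 0.0
    for w, m in terms:
        px, py, pz = clelia(theta, m)
        x += w * px
        y += w * py
        z += w * pz
    return (x, y, z)

# Sample one curve; with integer m values the curve closes after theta = 2*pi,
# and the choice of m values controls the axial symmetry of the traced shape.
terms = [(1.0, 3), (0.25, 7)]
pts = [combined(2 * math.pi * i / 2000, terms) for i in range(2000)]
```

Varying the weights and integer frequencies in `terms` is one way to explore families of shapes in the spirit the abstract describes, though the paper's actual parameterization may differ.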
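The pebble-mosaic abstract above builds pebble shapes from a variant of k-means with a custom distance. As a generic starting point (a minimal sketch of plain Lloyd's k-means over 2D points, with deterministic seeding; it is not the paper's detail-aware variant, whose custom distance would replace the squared-Euclidean term here):

```python
def kmeans_regions(points, k, iters=10):
    """Plain Lloyd's k-means over 2D points. A pebble-mosaic pipeline could
    swap the squared-Euclidean distance below for a detail-aware distance
    so that cluster (pebble) sizes adapt to local image detail."""
    centers = list(points[:k])  # deterministic seeding for illustration
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            clusters[j].append(p)
        # Update step: move each center to its cluster's centroid.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters
```

Each resulting cluster would correspond to one pebble region, to be rounded and textured in later stages of such a pipeline.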
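The 2D-rigidification abstract above approximates optical flow with piecewise-rigid transforms. One ingredient such an approach needs is fitting a single 2D rigid transform (rotation plus translation) to point correspondences; a minimal sketch using the standard least-squares Procrustes solution follows (an illustration of that building block, not the paper's implementation, and the function name is invented):

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid transform (theta, tx, ty) mapping src points onto dst:
    dst ~ R(theta) * src + t, the closed-form 2D Procrustes solution."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate dot and cross products of centered points; the optimal
    # rotation angle is atan2 of their sums.
    a = b = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        us, vs = xs - cx_s, ys - cy_s
        ud, vd = xd - cx_d, yd - cy_d
        a += us * ud + vs * vd
        b += us * vd - vs * ud
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    # Translation aligns the rotated source centroid with the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty
```

In a piecewise-rigid setting, one such fit would be computed per motion segment against the flow-advected positions of its points.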