Browsing by Author "Hertzmann, Aaron"
Now showing 1 - 3 of 3
Item: Contact and Human Dynamics from Monocular Video (The Eurographics Association, 2020)
Rempe, Davis; Guibas, Leonidas J.; Hertzmann, Aaron; Russell, Bryan; Villegas, Ruben; Yang, Jimei; Holden, Daniel
Existing methods for human motion from video predict 2D and 3D poses that are approximately accurate but contain visible errors that violate physical constraints, such as feet penetrating the ground and bodies leaning at extreme angles. We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input. We first estimate ground contact timings with a neural network that is trained without hand-labeled data. A physics-based trajectory optimization then solves for a physically plausible motion based on the inputs. We show this process produces motions that are more realistic than those from purely kinematic methods for character animation from dynamic videos. A detailed report that fully describes our method is available at geometry.stanford.edu/projects/human-dynamics-eccv-2020.

Item: Learning A Stroke‐Based Representation for Fonts (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Balashova, Elena; Bermano, Amit H.; Kim, Vladimir G.; DiVerdi, Stephen; Hertzmann, Aaron; Funkhouser, Thomas; Chen, Min and Benes, Bedrich (editors)
Designing fonts and typefaces is a difficult process for both beginner and expert typographers. Existing workflows require the designer to create every glyph while adhering to many loosely defined design suggestions to achieve an aesthetically appealing and coherent character set. This process can be significantly simplified by exploiting the similar structure that character glyphs share across different fonts and the shared stylistic elements within the same font. To capture these correlations, we propose learning a stroke‐based font representation from a collection of existing typefaces.
To enable this, we develop a stroke‐based geometric model for glyphs and a fitting procedure to reparametrize arbitrary fonts into our representation. We demonstrate the effectiveness of our model through a manifold learning technique that estimates a low‐dimensional font space. Our representation captures a wide range of everyday fonts with topological variations and naturally handles discrete and continuous variations, such as the presence and absence of stylistic elements as well as slants and weights. We show that our learned representation can be used for iteratively improving fit quality, as well as for exploratory style applications such as completing a font from a subset of observed glyphs, interpolating, or adding and removing stylistic elements in existing fonts.

Item: Learning from Multi-domain Artistic Images for Arbitrary Style Transfer (The Eurographics Association, 2019)
Xu, Zheng; Wilber, Michael; Fang, Chen; Hertzmann, Aaron; Jin, Hailin; Kaplan, Craig S. and Forbes, Angus and DiVerdi, Stephen (editors)
We propose a fast feed-forward network for arbitrary style transfer, which can generate stylized images for previously unseen content and style image pairs. Besides the traditional content and style representations based on deep features and statistics for textures, we use adversarial networks to regularize the generation of stylized images. Our adversarial network learns the intrinsic properties of image styles from large-scale multi-domain artistic images. The adversarial training is challenging because both the input and output of our generator are diverse multi-domain images. We use a conditional generator that stylizes content by shifting the statistics of deep features, and a conditional discriminator based on the coarse category of styles. Moreover, we propose a mask module to spatially decide the stylization level and stabilize adversarial training by avoiding mode collapse.
As a side effect, our trained discriminator can be applied to rank and select representative stylized images. We qualitatively and quantitatively evaluate the proposed method and compare it with recent style transfer methods. We release our code and model at https://github.com/nightldj/behance_release.
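The "shifting the statistics of deep features" step mentioned in the last abstract is the idea behind adaptive-instance-normalization-style stylization: renormalize each channel of the content feature map so its mean and standard deviation match those of the style feature map. The following NumPy sketch illustrates only that statistic-matching operation on toy arrays; it is not the authors' conditional generator, and the function name `adain` and the `(C, H, W)` layout are illustrative assumptions.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Shift the per-channel mean/std of content features to match style features.

    Both inputs are (C, H, W) feature maps, as an encoder network might produce.
    """
    # Per-channel statistics, computed over the spatial dimensions.
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    # Normalize the content channels, then re-scale/shift with style statistics.
    normalized = (content_feat - c_mean) / c_std
    return normalized * s_std + s_mean

# Toy check: after the shift, per-channel statistics follow the "style" input.
rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(4, 8, 8))
style = rng.normal(2.0, 3.0, size=(4, 8, 8))
out = adain(content, style)
```

In a full style-transfer pipeline this operation would be applied to encoder features and followed by a learned decoder; here it simply demonstrates why matching channel statistics transfers style-like properties.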