38-Issue 7


Pacific Graphics 2019 - Symposium Proceedings
Korea University, Seoul, Korea
October 14 – 17, 2019
(for Short Papers see PG 2019 - Short Papers)
Color and Image
Succinct Palette and Color Model Generation and Manipulation Using Hierarchical Representation
Taehong Jeong, Myunghyun Yang, and Hyun Joon Shin
An Improved Geometric Approach for Palette-based Image Decomposition and Recoloring
Yili Wang, Yifan Liu, and Kun Xu
Generic Interactive Pixel-level Image Editing
Yun Liang, Yibo Gan, Mingqin Chen, Diego Gutierrez, and Adolfo Muñoz
Natural Phenomena
Procedural Riverscapes
Adrien Peytavie, Thibault Dupont, Eric Guérin, Yann Cortial, Bedrich Benes, James Gain, and Eric Galin
Desertscapes Simulation
Axel Paris, Adrien Peytavie, Eric Guérin, Oscar Argudo, and Eric Galin
Parallel Generation and Visualization of Bacterial Genome Structures
Tobias Klein, Peter Mindek, Ludovic Autin, David S. Goodsell, Arthur J. Olson, Eduard Gröller, and Ivan Viola
Lines and Sketches
Learning to Trace: Expressive Line Drawing Generation from Photographs
Naoto Inoue, Daichi Ito, Ning Xu, Jimei Yang, Brian Price, and Toshihiko Yamasaki
Deep Line Drawing Vectorization via Line Subdivision and Topology Reconstruction
Yi Guo, Zhuming Zhang, Chu Han, Wenbo Hu, Chengze Li, and Tien-Tsin Wong
Pencil Drawing Video Rendering Using Convolutional Networks
Dingkun Yan, Yun Sheng, and Xiaoyang Mao
Geometric Modeling
Active Scene Understanding via Online Semantic Reconstruction
Lintao Zheng, Chenyang Zhu, Jiazhao Zhang, Hang Zhao, Hui Huang, Matthias Niessner, and Kai Xu
Surface Fairing towards Regular Principal Curvature Line Networks
Lei Chu, Pengbo Bo, Yang Liu, and Wenping Wang
Subdivision Schemes for Quadrilateral Meshes with the Least Polar Artifact in Extraordinary Regions
Yue Ma and Weiyin Ma
Imitating Popular Photos to Select Views for an Indoor Scene
Rung-De Su, Zhe-Yo Liao, Li-Chi Chen, Ai-Ling Tung, and Yu-Shuen Wang
Image Processing
Scale-adaptive Structure-preserving Texture Filtering
Chengfang Song, Chunxia Xiao, Ling Lei, and Haigang Sui
Rain Wiper: An Incremental Randomly Wired Network for Single Image Deraining
Xiwen Liang, Bin Qiu, Zhuo Su, Chengying Gao, Xiaohong Shi, and Ruomei Wang
Field-aligned Quadrangulation for Image Vectorization
Guangshun Wei, Yuanfeng Zhou, Xifeng Gao, Qian Ma, Shiqing Xin, and Ying He
Learning Explicit Smoothing Kernels for Joint Image Filtering
Xiaonan Fang, Miao Wang, Ariel Shamir, and Shi-Min Hu
Perception and Visualization
ManyLands: A Journey Across 4D Phase Space of Trajectories
Aleksandr Amirkhanov, Ilona Kosiuk, Peter Szmolyan, Artem Amirkhanov, Gabriel Mistelbauer, Eduard Gröller, and Renata Georgia Raidou
Inertia-based Fast Vectorization of Line Drawings
Patryk Najgebauer and Rafal Scherer
Animation
Generating 3D Faces using Multi-column Graph Convolutional Networks
Kun Li, Jingying Liu, Yu-Kun Lai, and Jingyu Yang
Figure Skating Simulation from Video
Ri Yu, Hwangpil Park, and Jehee Lee
Towards Robust Direction Invariance in Character Animation
Li-Ke Ma, Zeshi Yang, Baining Guo, and KangKang Yin
Computational Photography
Dual Illumination Estimation for Robust Exposure Correction
Qing Zhang, Yongwei Nie, and Wei-Shi Zheng
Specular Highlight Removal for Real-world Images
Gang Fu, Qing Zhang, Chengfang Song, Qifeng Lin, and Chunxia Xiao
Light Field Video Compression and Real Time Rendering
Saghi Hajisharif, Ehsan Miandji, Per Larsson, Kiet Tran, and Jonas Unger
Naturalness-Preserving Image Tone Enhancement Using Generative Adversarial Networks
Hyeongseok Son, Gunhee Lee, Sunghyun Cho, and Seungyong Lee
Voxels and Polycubes
Practical Foldover-Free Volumetric Mapping Construction
Jian-Ping Su, Xiao-Ming Fu, and Ligang Liu
Computing Surface PolyCube-Maps by Constrained Voxelization
Yang Yang, Xiao-Ming Fu, and Ligang Liu
Polycube Shape Space
Hui Zhao, Xuan Li, Wencheng Wang, Xiaoling Wang, Shaodong Wang, Na Lei, and Xianfeng Gu
Compacting Voxelized Polyhedra via Tree Stacking
Yue Hao and Jyh-Ming Lien
Multi-View and VR
Pyramid Multi-View Stereo with Local Consistency
Jie Liao, Yanping Fu, Qingan Yan, and Chunxia Xiao
Automatic Modeling of Cluttered Multi-room Floor Plans From Panoramic Images
Giovanni Pintore, Fabio Ganovelli, Alberto Jaspe Villanueva, and Enrico Gobbetti
A Generalized Cubemap for Encoding 360° VR Videos using Polynomial Approximation
Jianye Xiao, Jingtao Tang, and Xinyu Zhang
Generative Models
Interactive Curation of Datasets for Training and Refining Generative Models
Wenjie Ye, Yue Dong, and Pieter Peers
Shadow Inpainting and Removal Using Generative Adversarial Networks with Slice Convolutions
Jinjiang Wei, Chengjiang Long, Hua Zou, and Chunxia Xiao
HidingGAN: High Capacity Information Hiding with Generative Adversarial Network
Zihan Wang, Neng Gao, Xin Wang, Ji Xiang, Daren Zha, and Linghui Li
Two-phase Hair Image Synthesis by Self-Enhancing Generative Model
Haonan Qiu, Chuan Wang, Hang Zhu, Xiangyu Zhu, Jinjin Gu, and Xiaoguang Han
Rendering and Sampling
Visibility-Aware Progressive Farthest Point Sampling on the GPU
Sascha Brandt, Claudius Jähn, Matthias Fischer, and Friedhelm Meyer auf der Heide
Unsupervised Dense Light Field Reconstruction with Occlusion Awareness
Lixia Ni, Haiyong Jiang, Jianfei Cai, Jianmin Zheng, Haifeng Li, and Xu Liu
Seamless Mipmap Filtering for Dual Paraboloid Maps
Zhenni Wang, Tze Yui Ho, Chi-Sing Leung, and Eric Wing Ming Wong
Real-time Indirect Illumination of Emissive Inhomogeneous Volumes using Layered Polygonal Area Lights
Takahiro Kuge, Tatsuya Yatagawa, and Shigeo Morishima
Images and Learning
A Unified Neural Network for Panoptic Segmentation
Li Yao and Ang Chyau
Style Mixer: Semantic-aware Multi-Style Transfer Network
Zixuan Huang, Jinghuai Zhang, and Jing Liao
A Color-Pair Based Approach for Accurate Color Harmony Estimation
Bailin Yang, Tianxiang Wei, Xianyong Fang, Zhigang Deng, Frederick W. B. Li, Yun Ling, and Xun Wang
Cloth and Fluid
Distribution Update of Deformable Patches for Texture Synthesis on the Free Surface of Fluids
Jonathan Gagnon, Julián E. Guzmán, Valentin Vervondel, François Dagenais, David Mould, and Eric Paquette
A Rigging-Skinning Scheme to Control Fluid Simulation
Jia-Ming Lu, Xiao-Song Chen, Xiao Yan, Chen-Feng Li, Ming Lin, and Shi-Min Hu
Global Illumination
High Dynamic Range Point Clouds for Real-Time Relighting
Manuele Sabbadin, Gianpaolo Palma, Francesco Banterle, Tamy Boubekeur, and Paolo Cignoni
Offline Deep Importance Sampling for Monte Carlo Path Tracing
Steve Bako, Mark Meyer, Tony DeRose, and Pradeep Sen
Image Based Rendering
Deep Video-Based Performance Synthesis from Sparse Multi-View Capture
Mingjia Chen, Changbo Wang, and Ligang Liu
Appearance Flow Completion for Novel View Synthesis
Hoang Le and Feng Liu
FontRNN: Generating Large-scale Chinese Fonts via Recurrent Neural Network
Shusen Tang, Zeqing Xia, Zhouhui Lian, Yingmin Tang, and Jianguo Xiao
Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image
Mojtaba Bemana, Joachim Keinert, Karol Myszkowski, Michel Bätz, Matthias Ziegler, Hans-Peter Seidel, and Tobias Ritschel
Shape Analysis
Mesh Defiltering via Cascaded Geometry Recovery
Mingqiang Wei, Xianglin Guo, Jin Huang, Haoran Xie, Hua Zong, Reggie Kwan, Fu Lee Wang, and Jing Qin
Topology Preserving Simplification of Medial Axes in 3D Models
Yiyao Chu, Fei Hou, Wencheng Wang, and Lei Li
Intrinsic Symmetry Detection on 3D Models with Skeleton-guided Combination of Extrinsic Symmetries
Wencheng Wang, Junhui Ma, Panpan Xu, and Yiyao Chu
Single-View Modeling of Layered Origami with Plausible Outer Shape
Yuya Kato, Shinichi Tanaka, Yoshihiro Kanamori, and Jun Mitani
Image and Video Editing
Image Composition of Partially Occluded Objects
Xuehan Tan, Panpan Xu, Shihui Guo, and Wencheng Wang
A PatchMatch-based Approach for Matte Propagation in Videos
Marcos H. Backes and Manuel M. Oliveira
Wavelet Flow: Optical Flow Guided Wavelet Facial Image Fusion
Hong Ding, Qingan Yan, Gang Fu, and Chunxia Xiao
ShutterApp: Spatio-temporal Exposure Control for Videos
Nestor Z. Salamon, Markus Billeter, and Elmar Eisemann
Surface and Texture
Selecting Texture Resolution Using a Task-specific Visibility Metric
Krzysztof Wolski, Daniele Giunchi, Shinichi Kinuwaki, Piotr Didyk, Karol Myszkowski, Anthony Steed, and Rafal K. Mantiuk
Global Texture Mapping for Dynamic Objects
Jungeon Kim, Hyomin Kim, Jaesik Park, and Seungyong Lee
Discrete Calabi Flow: A Unified Conformal Parameterization Method
Ke Hua Su, Chen Chen Li, Yu Ming Zhou, Xu Xu, and X. F. Gu
Reliable Rolling-guided Point Normal Filtering for Surface Texture Removal
Yangxing Sun, Honghua Chen, Jing Qin, Hongwei Li, Mingqiang Wei, and Hua Zong
Rendering and Lighting
Lighting Layout Optimization for 3D Indoor Scenes
Sam Jin and Sung-Hee Lee
A Stationary SVBRDF Material Modeling Method Based on Discrete Microsurface
Junqiu Zhu, Yanning Xu, and Lu Wang
Surfaces
Anisotropic Surface Remeshing without Obtuse Angles
Qun-Ce Xu, Dong-Ming Yan, Wenbin Li, and Yong-Liang Yang
Modeling Interfaces
RodSteward: A Design-to-Assembly System for Fabrication using 3D-Printed Joints and Precision-Cut Rods
Alec Jacobson
Learning Style Compatibility Between Objects in a Real-World 3D Asset Database
Yifan Liu, Ruolan Tang, and Daniel Ritchie

BibTeX (38-Issue 7)

@article{10.1111:cgf.13811,
  journal = {Computer Graphics Forum},
  title = {{Succinct Palette and Color Model Generation and Manipulation Using Hierarchical Representation}},
  author = {Jeong, Taehong and Yang, Myunghyun and Shin, Hyun Joon},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13811}
}

@article{10.1111:cgf.13812,
  journal = {Computer Graphics Forum},
  title = {{An Improved Geometric Approach for Palette-based Image Decomposition and Recoloring}},
  author = {Wang, Yili and Liu, Yifan and Xu, Kun},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13812}
}

@article{10.1111:cgf.13813,
  journal = {Computer Graphics Forum},
  title = {{Generic Interactive Pixel-level Image Editing}},
  author = {Liang, Yun and Gan, Yibo and Chen, Mingqin and Gutierrez, Diego and Muñoz Orbañanos, Adolfo},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13813}
}

@article{10.1111:cgf.13814,
  journal = {Computer Graphics Forum},
  title = {{Procedural Riverscapes}},
  author = {Peytavie, Adrien and Dupont, Thibault and Guérin, Eric and Cortial, Yann and Benes, Bedrich and Gain, James and Galin, Eric},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13814}
}

@article{10.1111:cgf.13815,
  journal = {Computer Graphics Forum},
  title = {{Desertscapes Simulation}},
  author = {Paris, Axel and Peytavie, Adrien and Guérin, Eric and Argudo, Oscar and Galin, Eric},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13815}
}

@article{10.1111:cgf.13816,
  journal = {Computer Graphics Forum},
  title = {{Parallel Generation and Visualization of Bacterial Genome Structures}},
  author = {Klein, Tobias and Mindek, Peter and Autin, Ludovic and Goodsell, David and Olson, Arthur and Gröller, Eduard and Viola, Ivan},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13816}
}

@article{10.1111:cgf.13817,
  journal = {Computer Graphics Forum},
  title = {{Learning to Trace: Expressive Line Drawing Generation from Photographs}},
  author = {Inoue, Naoto and Ito, Daichi and Xu, Ning and Yang, Jimei and Price, Brian and Yamasaki, Toshihiko},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13817}
}

@article{10.1111:cgf.13818,
  journal = {Computer Graphics Forum},
  title = {{Deep Line Drawing Vectorization via Line Subdivision and Topology Reconstruction}},
  author = {Guo, Yi and Zhang, Zhuming and Han, Chu and Hu, Wenbo and Li, Chengze and Wong, Tien-Tsin},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13818}
}

@article{10.1111:cgf.13819,
  journal = {Computer Graphics Forum},
  title = {{Pencil Drawing Video Rendering Using Convolutional Networks}},
  author = {Yan, Dingkun and Sheng, Yun and Mao, Xiaoyang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13819}
}

@article{10.1111:cgf.13820,
  journal = {Computer Graphics Forum},
  title = {{Active Scene Understanding via Online Semantic Reconstruction}},
  author = {Zheng, Lintao and Zhu, Chenyang and Zhang, Jiazhao and Zhao, Hang and Huang, Hui and Niessner, Matthias and Xu, Kai},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13820}
}

@article{10.1111:cgf.13821,
  journal = {Computer Graphics Forum},
  title = {{Surface Fairing towards Regular Principal Curvature Line Networks}},
  author = {Chu, Lei and Bo, Pengbo and Liu, Yang and Wang, Wenping},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13821}
}

@article{10.1111:cgf.13822,
  journal = {Computer Graphics Forum},
  title = {{Subdivision Schemes for Quadrilateral Meshes with the Least Polar Artifact in Extraordinary Regions}},
  author = {Ma, Yue and Ma, Weiyin},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13822}
}

@article{10.1111:cgf.13823,
  journal = {Computer Graphics Forum},
  title = {{Imitating Popular Photos to Select Views for an Indoor Scene}},
  author = {Su, Rung-De and Liao, Zhe-Yo and Chen, Li-Chi and Tung, Ai-Ling and Wang, Yu-Shuen},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13823}
}

@article{10.1111:cgf.13824,
  journal = {Computer Graphics Forum},
  title = {{Scale-adaptive Structure-preserving Texture Filtering}},
  author = {Song, Chengfang and Xiao, Chunxia and Lei, Ling and Sui, Haigang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13824}
}

@article{10.1111:cgf.13825,
  journal = {Computer Graphics Forum},
  title = {{Rain Wiper: An Incremental Randomly Wired Network for Single Image Deraining}},
  author = {Liang, Xiwen and Qiu, Bin and Su, Zhuo and Gao, Chengying and Shi, Xiaohong and Wang, Ruomei},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13825}
}

@article{10.1111:cgf.13826,
  journal = {Computer Graphics Forum},
  title = {{Field-aligned Quadrangulation for Image Vectorization}},
  author = {Wei, Guangshun and Zhou, Yuanfeng and Gao, Xifeng and Ma, Qian and Xin, Shiqing and He, Ying},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13826}
}

@article{10.1111:cgf.13827,
  journal = {Computer Graphics Forum},
  title = {{Learning Explicit Smoothing Kernels for Joint Image Filtering}},
  author = {Fang, Xiaonan and Wang, Miao and Shamir, Ariel and Hu, Shi-Min},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13827}
}

@article{10.1111:cgf.13828,
  journal = {Computer Graphics Forum},
  title = {{ManyLands: A Journey Across 4D Phase Space of Trajectories}},
  author = {Amirkhanov, Aleksandr and Kosiuk, Ilona and Szmolyan, Peter and Amirkhanov, Artem and Mistelbauer, Gabriel and Gröller, Eduard and Raidou, Renata Georgia},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13828}
}

@article{10.1111:cgf.13829,
  journal = {Computer Graphics Forum},
  title = {{Inertia-based Fast Vectorization of Line Drawings}},
  author = {Najgebauer, Patryk and Scherer, Rafal},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13829}
}

@article{10.1111:cgf.13830,
  journal = {Computer Graphics Forum},
  title = {{Generating 3D Faces using Multi-column Graph Convolutional Networks}},
  author = {Li, Kun and Liu, Jingying and Lai, Yu-Kun and Yang, Jingyu},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13830}
}

@article{10.1111:cgf.13831,
  journal = {Computer Graphics Forum},
  title = {{Figure Skating Simulation from Video}},
  author = {Yu, Ri and Park, Hwangpil and Lee, Jehee},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13831}
}

@article{10.1111:cgf.13832,
  journal = {Computer Graphics Forum},
  title = {{Towards Robust Direction Invariance in Character Animation}},
  author = {Ma, Li-Ke and Yang, Zeshi and Guo, Baining and Yin, KangKang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13832}
}

@article{10.1111:cgf.13833,
  journal = {Computer Graphics Forum},
  title = {{Dual Illumination Estimation for Robust Exposure Correction}},
  author = {Zhang, Qing and Nie, Yongwei and Zheng, Wei-Shi},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13833}
}

@article{10.1111:cgf.13834,
  journal = {Computer Graphics Forum},
  title = {{Specular Highlight Removal for Real-world Images}},
  author = {Fu, Gang and Zhang, Qing and Song, Chengfang and Lin, Qifeng and Xiao, Chunxia},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13834}
}

@article{10.1111:cgf.13835,
  journal = {Computer Graphics Forum},
  title = {{Light Field Video Compression and Real Time Rendering}},
  author = {Hajisharif, Saghi and Miandji, Ehsan and Larsson, Per and Tran, Kiet and Unger, Jonas},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13835}
}

@article{10.1111:cgf.13836,
  journal = {Computer Graphics Forum},
  title = {{Naturalness-Preserving Image Tone Enhancement Using Generative Adversarial Networks}},
  author = {Son, Hyeongseok and Lee, Gunhee and Cho, Sunghyun and Lee, Seungyong},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13836}
}

@article{10.1111:cgf.13837,
  journal = {Computer Graphics Forum},
  title = {{Practical Foldover-Free Volumetric Mapping Construction}},
  author = {Su, Jian-Ping and Fu, Xiao-Ming and Liu, Ligang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13837}
}

@article{10.1111:cgf.13838,
  journal = {Computer Graphics Forum},
  title = {{Computing Surface PolyCube-Maps by Constrained Voxelization}},
  author = {Yang, Yang and Fu, Xiao-Ming and Liu, Ligang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13838}
}

@article{10.1111:cgf.13839,
  journal = {Computer Graphics Forum},
  title = {{Polycube Shape Space}},
  author = {Zhao, Hui and Li, Xuan and Wang, Wencheng and Wang, Xiaoling and Wang, Shaodong and Lei, Na and Gu, Xianfeng},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13839}
}

@article{10.1111:cgf.13840,
  journal = {Computer Graphics Forum},
  title = {{Compacting Voxelized Polyhedra via Tree Stacking}},
  author = {Hao, Yue and Lien, Jyh-Ming},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13840}
}

@article{10.1111:cgf.13841,
  journal = {Computer Graphics Forum},
  title = {{Pyramid Multi-View Stereo with Local Consistency}},
  author = {Liao, Jie and Fu, Yanping and Yan, Qingan and Xiao, Chunxia},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13841}
}

@article{10.1111:cgf.13842,
  journal = {Computer Graphics Forum},
  title = {{Automatic Modeling of Cluttered Multi-room Floor Plans From Panoramic Images}},
  author = {Pintore, Giovanni and Ganovelli, Fabio and Villanueva, Alberto Jaspe and Gobbetti, Enrico},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13842}
}

@article{10.1111:cgf.13843,
  journal = {Computer Graphics Forum},
  title = {{A Generalized Cubemap for Encoding 360° VR Videos using Polynomial Approximation}},
  author = {Xiao, Jianye and Tang, Jingtao and Zhang, Xinyu},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13843}
}

@article{10.1111:cgf.13844,
  journal = {Computer Graphics Forum},
  title = {{Interactive Curation of Datasets for Training and Refining Generative Models}},
  author = {Ye, Wenjie and Dong, Yue and Peers, Pieter},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13844}
}

@article{10.1111:cgf.13846,
  journal = {Computer Graphics Forum},
  title = {{HidingGAN: High Capacity Information Hiding with Generative Adversarial Network}},
  author = {Wang, Zihan and Gao, Neng and Wang, Xin and Xiang, Ji and Zha, Daren and Li, Linghui},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13846}
}

@article{10.1111:cgf.13845,
  journal = {Computer Graphics Forum},
  title = {{Shadow Inpainting and Removal Using Generative Adversarial Networks with Slice Convolutions}},
  author = {Wei, Jinjiang and Long, Chengjiang and Zou, Hua and Xiao, Chunxia},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13845}
}

@article{10.1111:cgf.13848,
  journal = {Computer Graphics Forum},
  title = {{Visibility-Aware Progressive Farthest Point Sampling on the GPU}},
  author = {Brandt, Sascha and Jähn, Claudius and Fischer, Matthias and Meyer auf der Heide, Friedhelm},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13848}
}

@article{10.1111:cgf.13847,
  journal = {Computer Graphics Forum},
  title = {{Two-phase Hair Image Synthesis by Self-Enhancing Generative Model}},
  author = {Qiu, Haonan and Wang, Chuan and Zhu, Hang and Zhu, Xiangyu and Gu, Jinjin and Han, Xiaoguang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13847}
}

@article{10.1111:cgf.13849,
  journal = {Computer Graphics Forum},
  title = {{Unsupervised Dense Light Field Reconstruction with Occlusion Awareness}},
  author = {Ni, Lixia and Jiang, Haiyong and Cai, Jianfei and Zheng, Jianmin and Li, Haifeng and Liu, Xu},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13849}
}

@article{10.1111:cgf.13850,
  journal = {Computer Graphics Forum},
  title = {{Seamless Mipmap Filtering for Dual Paraboloid Maps}},
  author = {Wang, Zhenni and Ho, Tze Yui and Leung, Chi-Sing and Wong, Eric Wing Ming},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13850}
}

@article{10.1111:cgf.13851,
  journal = {Computer Graphics Forum},
  title = {{Real-time Indirect Illumination of Emissive Inhomogeneous Volumes using Layered Polygonal Area Lights}},
  author = {Kuge, Takahiro and Yatagawa, Tatsuya and Morishima, Shigeo},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13851}
}

@article{10.1111:cgf.13852,
  journal = {Computer Graphics Forum},
  title = {{A Unified Neural Network for Panoptic Segmentation}},
  author = {Yao, Li and Chyau, Ang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13852}
}

@article{10.1111:cgf.13853,
  journal = {Computer Graphics Forum},
  title = {{Style Mixer: Semantic-aware Multi-Style Transfer Network}},
  author = {Huang, Zixuan and Zhang, Jinghuai and Liao, Jing},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13853}
}

@article{10.1111:cgf.13854,
  journal = {Computer Graphics Forum},
  title = {{A Color-Pair Based Approach for Accurate Color Harmony Estimation}},
  author = {Yang, Bailin and Wei, Tianxiang and Fang, Xianyong and Deng, Zhigang and Li, Frederick W. B. and Ling, Yun and Wang, Xun},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13854}
}

@article{10.1111:cgf.13856,
  journal = {Computer Graphics Forum},
  title = {{A Rigging-Skinning Scheme to Control Fluid Simulation}},
  author = {Lu, Jia-Ming and Chen, Xiao-Song and Yan, Xiao and Li, Chen-Feng and Lin, Ming and Hu, Shi-Min},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13856}
}

@article{10.1111:cgf.13855,
  journal = {Computer Graphics Forum},
  title = {{Distribution Update of Deformable Patches for Texture Synthesis on the Free Surface of Fluids}},
  author = {Gagnon, Jonathan and Guzmán, Julián E. and Vervondel, Valentin and Dagenais, François and Mould, David and Paquette, Eric},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13855}
}

@article{10.1111:cgf.13857,
  journal = {Computer Graphics Forum},
  title = {{High Dynamic Range Point Clouds for Real-Time Relighting}},
  author = {Sabbadin, Manuele and Palma, Gianpaolo and Banterle, Francesco and Boubekeur, Tamy and Cignoni, Paolo},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13857}
}

@article{10.1111:cgf.13858,
  journal = {Computer Graphics Forum},
  title = {{Offline Deep Importance Sampling for Monte Carlo Path Tracing}},
  author = {Bako, Steve and Meyer, Mark and DeRose, Tony and Sen, Pradeep},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13858}
}

@article{10.1111:cgf.13859,
  journal = {Computer Graphics Forum},
  title = {{Deep Video-Based Performance Synthesis from Sparse Multi-View Capture}},
  author = {Chen, Mingjia and Wang, Changbo and Liu, Ligang},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13859}
}

@article{10.1111:cgf.13860,
  journal = {Computer Graphics Forum},
  title = {{Appearance Flow Completion for Novel View Synthesis}},
  author = {Le, Hoang and Liu, Feng},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13860}
}

@article{10.1111:cgf.13861,
  journal = {Computer Graphics Forum},
  title = {{FontRNN: Generating Large-scale Chinese Fonts via Recurrent Neural Network}},
  author = {Tang, Shusen and Xia, Zeqing and Lian, Zhouhui and Tang, Yingmin and Xiao, Jianguo},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13861}
}

@article{10.1111:cgf.13863,
  journal = {Computer Graphics Forum},
  title = {{Mesh Defiltering via Cascaded Geometry Recovery}},
  author = {Wei, Mingqiang and Guo, Xianglin and Huang, Jin and Xie, Haoran and Zong, Hua and Kwan, Reggie and Wang, Fu Lee and Qin, Jing},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13863}
}

@article{10.1111:cgf.13862,
  journal = {Computer Graphics Forum},
  title = {{Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image}},
  author = {Bemana, Mojtaba and Keinert, Joachim and Myszkowski, Karol and Bätz, Michel and Ziegler, Matthias and Seidel, Hans-Peter and Ritschel, Tobias},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13862}
}

@article{10.1111:cgf.13865,
  journal = {Computer Graphics Forum},
  title = {{Intrinsic Symmetry Detection on 3D Models with Skeleton-guided Combination of Extrinsic Symmetries}},
  author = {Wang, Wencheng and Ma, Junhui and Xu, Panpan and Chu, Yiyao},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13865}
}

@article{10.1111:cgf.13864,
  journal = {Computer Graphics Forum},
  title = {{Topology Preserving Simplification of Medial Axes in 3D Models}},
  author = {Chu, Yiyao and Hou, Fei and Wang, Wencheng and Li, Lei},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13864}
}

@article{10.1111:cgf.13866,
  journal = {Computer Graphics Forum},
  title = {{Single-View Modeling of Layered Origami with Plausible Outer Shape}},
  author = {Kato, Yuya and Tanaka, Shinichi and Kanamori, Yoshihiro and Mitani, Jun},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13866}
}

@article{10.1111:cgf.13867,
  journal = {Computer Graphics Forum},
  title = {{Image Composition of Partially Occluded Objects}},
  author = {Tan, Xuehan and Xu, Panpan and Guo, Shihui and Wang, Wencheng},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13867}
}

@article{10.1111:cgf.13868,
  journal = {Computer Graphics Forum},
  title = {{A PatchMatch-based Approach for Matte Propagation in Videos}},
  author = {Backes, Marcos and Menezes de Oliveira Neto, Manuel},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13868}
}

@article{10.1111:cgf.13869,
  journal = {Computer Graphics Forum},
  title = {{Wavelet Flow: Optical Flow Guided Wavelet Facial Image Fusion}},
  author = {Ding, Hong and Yan, Qingan and Fu, Gang and Xiao, Chunxia},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13869}
}


Recent Submissions

  • Item
    Pacific Conference on Computer Graphics and Applications 2019 - CGF38-7: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
  • Item
    Succinct Palette and Color Model Generation and Manipulation Using Hierarchical Representation
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Jeong, Taehong; Yang, Myunghyun; Shin, Hyun Joon; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We propose a new method to obtain the representative colors of an image and their distributions. Our intuition is that it is possible to derive the global color model from local distributions. Beginning by sampling pure colors, we build a hierarchical representation of the colors in the image via a bottom-up approach. From the resulting hierarchy, we can obtain satisfactory palettes/color models automatically, without a predefined palette size. Furthermore, we provide interactive operations for manipulating the results, which allow users to reflect their intentions directly. Our experiments show that the proposed method produces more succinct results that faithfully represent all the colors in the image with an appropriate number of components. We also show that the proposed interactive approach can improve the results of applications such as recoloring and soft segmentation.
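    A minimal sketch of the bottom-up idea described above, in plain numpy: colors are sampled, then the closest clusters are merged until the remaining ones are sufficiently distinct, so the palette size falls out of the data rather than being fixed in advance. The sampling and merge cost below are simplified stand-ins, not the authors' exact formulation.

        # Hedged sketch: bottom-up hierarchical palette construction.
        import numpy as np

        def hierarchical_palette(pixels, merge_threshold=30.0, seed=0):
            """pixels: (N, 3) float RGB in [0, 255]; returns a (K, 3) palette."""
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(pixels), size=min(256, len(pixels)), replace=False)
            clusters = [(pixels[i].astype(float), 1.0) for i in idx]  # (mean, weight)
            while len(clusters) > 1:
                means = np.array([m for m, _ in clusters])
                d = np.linalg.norm(means[:, None] - means[None, :], axis=-1)
                np.fill_diagonal(d, np.inf)
                i, j = np.unravel_index(np.argmin(d), d.shape)
                if d[i, j] > merge_threshold:      # remaining clusters are distinct
                    break
                (mi, wi), (mj, wj) = clusters[i], clusters[j]
                merged = ((mi * wi + mj * wj) / (wi + wj), wi + wj)
                clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
                clusters.append(merged)
            return np.array([m for m, _ in clusters])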
  • Item
    An Improved Geometric Approach for Palette-based Image Decomposition and Recoloring
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wang, Yili; Liu, Yifan; Xu, Kun; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Palette-based image decomposition has attracted increasing attention in recent years. A specific class of approaches has been proposed based on RGB-space geometry; these construct convex hulls whose vertices act as palette colors. However, such palettes are not guaranteed to contain the representative colors that actually appear in the image, making recoloring by editing palette colors less intuitive and less predictable. We therefore propose an improved geometric approach to address this issue. We use a polyhedron, not necessarily a convex hull, in RGB space to represent the color palette. We then formulate palette extraction as an optimization problem that can be solved in a few seconds. Our palette has a higher degree of representativeness and maintains a similar level of accuracy compared with previous methods. For layer decomposition, we compute layer opacities via simple mean value coordinates, which provides instant feedback without precomputation. We demonstrate our method for image recoloring on a variety of examples. In comparison with state-of-the-art works, our approach is generally more intuitive and efficient, with fewer artifacts.
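    For context, the geometric baseline this improves on can be sketched in a few lines: take the convex hull of the image's RGB point cloud and read the palette off its vertices. The paper's actual contribution (a general polyhedron fitted by optimization, with mean-value-coordinate layer opacities) is not reproduced here.

        # Hedged sketch of the convex-hull baseline, using scipy.
        import numpy as np
        from scipy.spatial import ConvexHull

        def hull_palette(pixels):
            """pixels: (N, 3) RGB array; returns hull-vertex colors as the palette."""
            hull = ConvexHull(pixels.astype(float))   # needs >= 4 non-coplanar points
            return pixels[hull.vertices]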
  • Item
    Generic Interactive Pixel-level Image Editing
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Liang, Yun; Gan, Yibo; Chen, Mingqin; Gutierrez, Diego; Muñoz Orbañanos, Adolfo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Several image editing methods have been proposed in the past decades, achieving brilliant results. The most sophisticated of them, however, require additional information per pixel. For instance, dehazing requires a specific transmittance value per pixel, and depth-of-field blurring requires depth or disparity values per pixel. This additional per-pixel value is obtained either through elaborate heuristics or through additional control over the capture hardware, which is very often tailored to the specific editing application. In contrast, we propose a generic editing paradigm that can become the base of several different applications. This paradigm generates both the needed per-pixel values and the resulting edit at interactive rates, with minimal user input that can be iteratively refined. Our key insight for obtaining per-pixel values at such speed is to cluster them into superpixels, but, instead of a constant value per superpixel (which yields accuracy problems), we use a mathematical expression for the pixel values at each superpixel: in our case, an order-two multinomial per superpixel. This leads to a linear least-squares system, effectively enabling specific per-pixel values at fast speeds. We illustrate this approach in three applications: depth-of-field blurring (from depth values), dehazing (from transmittance values) and tone mapping (from local brightness and contrast values), and our approach proves both interactive and accurate in all three. Our technique is also evaluated on a common dataset, where it compares favorably.
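    The per-superpixel model is concrete enough to sketch: within one superpixel, fit an order-two polynomial in pixel coordinates to a handful of known samples by linear least squares, then evaluate it at every pixel. The paper solves one coupled system across superpixels with user constraints; this standalone per-superpixel version only illustrates the idea.

        # Hedged sketch: order-two fit of per-pixel values inside one superpixel.
        import numpy as np

        def quad_basis(x, y):
            # [1, x, y, x^2, x*y, y^2] -> six coefficients per superpixel
            return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=-1)

        def fit_superpixel(sample_xy, sample_vals):
            """Needs >= 6 samples; returns the least-squares coefficients."""
            A = quad_basis(sample_xy[:, 0], sample_xy[:, 1])
            coef, *_ = np.linalg.lstsq(A, sample_vals, rcond=None)
            return coef

        def evaluate(coef, xy):
            return quad_basis(xy[:, 0], xy[:, 1]) @ coef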
  • Item
    Procedural Riverscapes
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Peytavie, Adrien; Dupont, Thibault; Guérin, Eric; Cortial, Yann; Benes, Bedrich; Gain, James; Galin, Eric; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    This paper addresses the problem of creating animated riverscapes through a novel procedural framework that generates the inscribing geometry of a river network and then synthesizes matching real-time water movement animation. Our approach takes bare-earth heightfields as input, derives hydrologically-inspired river network trajectories, carves riverbeds into the terrain, and then automatically generates a corresponding blend-flow tree for the water surface. Characteristics, such as the riverbed width, depth and shape, as well as elevation and flow of the fluid surface, are procedurally derived from the terrain and river type. The riverbed is inscribed by combining compactly supported elevation modifiers over the river course. Subsequently, the water surface is defined as a time-varying continuous function encoded as a blend-flow tree with leaves that are parameterized procedural flow primitives and internal nodes that are blend operators. While river generation is fully automated, we also incorporate intuitive interactive editing of both river trajectories and individual riverbed and flow primitives. The resulting framework enables the generation of a wide range of river forms, ranging from slow meandering rivers to rapids with churning water, including surface effects, such as foam and leaves carried downstream.
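    The blend-flow tree is a small data structure that can be sketched directly from the description: leaves are compactly supported, parameterized flow primitives and internal nodes are blend operators. The primitive and blend forms below are illustrative placeholders (and the time dependence is omitted), not the paper's actual operators.

        # Hedged sketch of a blend-flow tree for a water velocity field.
        import numpy as np

        class FlowPrimitive:                  # leaf: compactly supported flow patch
            def __init__(self, center, velocity, radius):
                self.center, self.velocity, self.radius = center, velocity, radius
            def eval(self, p):
                w = max(0.0, 1.0 - np.linalg.norm(p - self.center) / self.radius)
                return w * self.velocity

        class Blend:                          # internal node: combines its children
            def __init__(self, *children):
                self.children = children
            def eval(self, p):
                return sum(c.eval(p) for c in self.children)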
  • Item
    Desertscapes Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Paris, Axel; Peytavie, Adrien; Guérin, Eric; Argudo, Oscar; Galin, Eric; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We present an interactive aeolian simulation to author hot desert scenery. Wind is an important erosion agent in deserts that has so far been neglected in computer graphics. Our framework overcomes this and allows generating a variety of sand dunes, including barchans, longitudinal and anchored dunes, and simulates abrasion, which erodes bedrock and sculpts complex landforms. Given an input time-varying high-altitude wind field, we compute the wind field at the surface of the terrain according to the relief, and simulate the transport of sand blown by the wind. The user can interactively model complex desert landscapes and control their evolution over time, either by using a variety of interactive brushes or by prescribing events along a user-defined timeline.
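    A toy version of one wind-driven transport step conveys the flavor of the simulation (the real model warps the wind field by the relief, distinguishes dune types, and abrades bedrock; none of that is attempted here):

        # Hedged sketch: one stochastic saltation step on a sand heightfield.
        import numpy as np

        def saltation_step(sand, wind=(0, 1), hop=2, carry=0.1,
                           rng=np.random.default_rng(0)):
            """sand: 2D array of sand heights; moves a fraction of sand downwind."""
            lifted = carry * sand * rng.random(sand.shape)  # stochastic entrainment
            sand = sand - lifted
            # deposit the lifted sand `hop` cells downwind (toroidal domain, for brevity)
            sand = sand + np.roll(lifted, (wind[0] * hop, wind[1] * hop), axis=(0, 1))
            return sand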
  • Item
    Parallel Generation and Visualization of Bacterial Genome Structures
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Klein, Tobias; Mindek, Peter; Autin, Ludovic; Goodsell, David; Olson, Arthur; Groeller, Eduard; Viola, Ivan; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Visualization of biological mesoscale models provides a glimpse at the inner workings of living cells. One of the most complex components of these models is DNA, which is of fundamental importance for all forms of life. Modeling the 3D structure of genomes has previously only been attempted by sequential approaches. We present the first parallel approach for the instant construction of DNA structures. Traditionally, such structures are generated with algorithms like random walk, which have inherent sequential constraints. These algorithms result in the desired structure, are easy to control, and simple to formulate. Their execution, however, is very time-consuming, as they are not designed to exploit parallelism. We propose an approach to parallelize the process, facilitating an implementation on the GPU.
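    The parallelism argument can be illustrated with a sketch: a random walk grows a chain one step at a time, whereas midpoint subdivision refines every segment of the chain simultaneously, which is what makes a GPU mapping natural. This contrast is illustrative only and is not the authors' construction algorithm.

        # Hedged sketch: chain construction by parallel midpoint refinement.
        import numpy as np

        def midpoint_chain(start, end, levels, sigma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            pts = np.array([start, end], dtype=float)
            for level in range(levels):
                mids = 0.5 * (pts[:-1] + pts[1:])
                mids += rng.normal(scale=sigma * 0.5 ** level, size=mids.shape)
                out = np.empty((len(pts) + len(mids), pts.shape[1]))
                out[0::2], out[1::2] = pts, mids   # all midpoints inserted at once
                pts = out
            return pts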
  • Item
    Learning to Trace: Expressive Line Drawing Generation from Photographs
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Inoue, Naoto; Ito, Daichi; Xu, Ning; Yang, Jimei; Price, Brian; Yamasaki, Toshihiko; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In this paper, we present a new computational method for automatically tracing high-resolution photographs to create expressive line drawings. We define expressive lines as those that convey important edges, shape contours, and large-scale texture lines that are necessary to accurately depict the overall structure of objects (similar to those found in technical drawings) while still being sparse and artistically pleasing. Given a photograph, our algorithm extracts expressive edges and creates a clean line drawing using a convolutional neural network (CNN). We employ an end-to-end trainable fully convolutional CNN to learn the model in a data-driven manner. The model consists of two networks that address two sub-tasks: extracting coarse lines and refining them to be cleaner and more expressive. To build a model that is optimal for each domain, we construct two new datasets for face/body and manga background. The experimental results qualitatively and quantitatively demonstrate the effectiveness of our model. We further illustrate two practical applications.
  • Item
    Deep Line Drawing Vectorization via Line Subdivision and Topology Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Guo, Yi; Zhang, Zhuming; Han, Chu; Hu, Wenbo; Li, Chengze; Wong, Tien-Tsin; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Vectorizing line drawings is necessary for the digital workflows of 2D animation and engineering design, but it is challenging due to the ambiguity of topology, especially at junctions. Existing vectorization methods either suffer from low accuracy or cannot deal with high-resolution images. To handle a variety of challenging cases containing different kinds of complex junctions, we propose a two-phase line drawing vectorization method that analyzes the global and local topology. In the first phase, we subdivide the lines into partial curves, and in the second phase, we reconstruct the topology at junctions. With the overall topology estimated in the two phases, we can trace and vectorize the curves. To qualitatively and quantitatively evaluate our method and compare it with existing methods, we conduct extensive experiments not only on existing datasets but also on our newly synthesized dataset, which contains different types of complex and ambiguous junctions. Experimental statistics show that our method greatly outperforms existing methods in terms of computational speed and achieves visually better topology reconstruction accuracy.
  • Item
    Pencil Drawing Video Rendering Using Convolutional Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Yan, Dingkun; Sheng, Yun; Mao, Xiaoyang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Traditional pencil drawing rendering algorithms, when applied to video, may suffer from temporal inconsistency and the shower-door effect due to the stochastic noise models employed. This paper attempts to resolve these problems with deep learning. Recently, many research endeavors have demonstrated that feed-forward Convolutional Neural Networks (CNNs) are capable of using a reference image to stylize a whole video sequence while removing the shower-door effect in video style transfer applications. Compared with video style transfer, pencil drawing video is more sensitive to the inconsistency of texture and requires a stronger expression of pencil hatching. Thus, in this paper we develop an approach that combines a recent Line Integral Convolution (LIC) based method, specialized in realistically simulating pencil drawing images, with a new feed-forward CNN that can eliminate the shower-door effect successfully. Taking advantage of optical flow, we adopt a feature-map-level temporal loss function and propose a new framework to avoid temporal inconsistency between consecutive frames, enhancing the visual impression of pencil strokes and tone. Experimental comparisons with existing feed-forward CNNs have demonstrated that our method can generate temporally more stable and visually more pleasant pencil drawing video results in a faster manner.
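    The feature-map-level temporal loss is the most transferable piece and can be sketched as follows: warp the previous frame's feature maps to the current frame with optical flow, then penalize the difference. Nearest-neighbour warping and an L1 penalty are simplifications, and the flow is assumed to come from an external estimator.

        # Hedged sketch of a feature-map-level temporal consistency loss.
        import numpy as np

        def warp_nearest(feat, flow):
            """feat: (C, H, W); flow: (2, H, W) backward flow in pixels (x, y)."""
            C, H, W = feat.shape
            ys, xs = np.mgrid[0:H, 0:W]
            sy = np.clip(np.round(ys + flow[1]).astype(int), 0, H - 1)
            sx = np.clip(np.round(xs + flow[0]).astype(int), 0, W - 1)
            return feat[:, sy, sx]

        def temporal_loss(feat_prev, feat_cur, flow):
            return np.abs(feat_cur - warp_nearest(feat_prev, flow)).mean()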
  • Item
    Active Scene Understanding via Online Semantic Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Zheng, Lintao; Zhu, Chenyang; Zhang, Jiazhao; Zhao, Hang; Huang, Hui; Niessner, Matthias; Xu, Kai; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. For each grid cell, the VSF stores the score of the corresponding view, which measures how much it reduces the uncertainty (entropy) of both geometric reconstruction and semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
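    The scoring idea behind the VSF can be sketched compactly: a candidate view is worth the entropy it can remove, summed over the voxels it sees, mixing geometric (occupancy) and semantic (label) uncertainty. Visibility determination and the weighting are placeholders for the paper's volumetric formulation.

        # Hedged sketch: entropy-based score for one candidate view.
        import numpy as np

        def entropy(p, eps=1e-9):
            return -(p * np.log(p + eps)).sum(axis=-1)

        def view_score(occ, label_probs, w_geo=0.5):
            """occ: (V,) occupancy probabilities of the voxels visible in the view;
            label_probs: (V, K) per-voxel semantic label distributions."""
            geo = entropy(np.stack([occ, 1.0 - occ], axis=-1))
            sem = entropy(label_probs)
            return (w_geo * geo + (1.0 - w_geo) * sem).sum()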
  • Item
    Surface Fairing towards Regular Principal Curvature Line Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Chu, Lei; Bo, Pengbo; Liu, Yang; Wang, Wenping; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Freeform surfaces whose principal curvature line network is regularly distributed are essential to many real applications like CAD modeling, architecture design, and industrial fabrication. However, most designed surfaces do not hold this nice property because it is hard to enforce such constraints in the design process. In this paper, we present a novel method for surface fairing which takes a regular distribution of the principal curvature line network on a surface as an objective. Our method first removes the high-frequency signals from the curvature tensor field of an input freeform surface by a novel rolling guidance tensor filter, which results in a more regular and smooth curvature tensor field, then deforms the input surface to match the smoothed field as much as possible. As an application, we solve the problem of approximating freeform surfaces with regular principal curvature line networks, discretized by quadrilateral meshes. By introducing circular or conical conditions on the quadrilateral mesh to guarantee the existence of discrete principal curvature line networks, and by minimizing the approximation error to the original surface while improving the fairness of the quad mesh, we obtain a regular discrete principal curvature line network that approximates the original surface. We evaluate the efficacy of our method on various freeform surfaces and demonstrate the superiority of the rolling guidance tensor filter over other tensor smoothing techniques. We also utilize our method to generate high-quality circular/conical meshes for architecture design and cyclide spline surfaces for CAD modeling.
  • Item
    Subdivision Schemes for Quadrilateral Meshes with the Least Polar Artifact in Extraordinary Regions
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Ma, Yue; Ma, Weiyin; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    This paper presents subdivision schemes whose subdivision stencils near an extraordinary vertex are free from, or substantially reduce, the polar artifact in extraordinary regions while maintaining the best possible bounded curvature at extraordinary positions. The subdivision stencils are first constructed to meet tangent plane continuity with bounded curvature at extraordinary positions. They are further optimized towards curvature continuity at an extraordinary position, with additional measures for removing or minimizing the polar artifact in extraordinary regions. The polar artifact for subdivision stencils of lower valences is removed by constraining the subdominant eigenvalue to be the same as that of subdivision at regular vertices, while the polar artifact for subdivision stencils of higher valences is substantially reduced by introducing an additional thin-plate energy function and a penalty function for maintaining the uniformity and regularity of the characteristic map. A new tuned subdivision scheme is introduced by replacing the subdivision stencils of Catmull-Clark subdivision with those from this paper for extraordinary vertices of valences up to nine. We also compare the refined meshes and limit surface quality of the resulting subdivision scheme with those of Catmull-Clark subdivision and other tuned subdivision schemes. The results show that subdivision stencils from our method produce well-behaved subdivision meshes with the least polar artifact while maintaining satisfactory limit surface quality.
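    For readers outside subdivision analysis, the eigenvalue conditions referred to above are the standard ones (textbook background of subdivision spectral analysis, not the paper's new stencils): with the subdivision matrix eigenvalues near a valence-n vertex ordered by magnitude,

        \[
        1 = \lambda_0 > \lambda_1 = \lambda_2 = \lambda > |\mu|,
        \qquad \mu = \lambda^2 \;\text{(bounded curvature)},
        \qquad \lambda = \tfrac{1}{2} \;\text{(the regular Catmull--Clark value, pinned here to suppress the polar artifact at low valences)}.
        \]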
  • Item
    Imitating Popular Photos to Select Views for an Indoor Scene
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Su, Rung-De; Liao, Zhe-Yo; Chen, Li-Chi; Tung, Ai-Ling; Wang, Yu-Shuen; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Selecting informative and visually appealing views for 3D indoor scenes is beneficial for the housing, decoration, and entertainment industries. A set of views that exhibit comfort, aesthetics, and functionality of a particular scene can attract customers and facilitate business transactions. However, selecting views for an indoor scene is challenging because the system has to consider not only the need to reveal as much information as possible, but also object arrangements, occlusions, and characteristics. Since there can be many principles utilized to guide the view selection, and various principles to follow under different circumstances, we achieve the goal by imitating popular photos on the Internet. Specifically, we select the view that can optimize the contour similarity of corresponding objects to the photo. Because the selected view can be inadequate if object arrangements in the 3D scene and the photo are different, our system imitates many popular photos and selects a certain number of views. After that, it clusters the selected views and determines the view/cluster centers by the weighted average to finally exhibit the scene. Experimental results demonstrate that the views selected by our method are visually appealing.
  • Item
    Scale-adaptive Structure-preserving Texture Filtering
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Song, Chengfang; Xiao, Chunxia; Lei, Ling; Sui, Haigang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    This paper proposes a scale-adaptive filtering method to improve the performance of structure-preserving texture filtering for image smoothing. With classical texture filters, it is usually challenging to smooth texture at multiple scales while preserving salient structures in an image. We address this issue within the concept of adaptive bilateral filtering, where the scales of the Gaussian range kernels are allowed to vary from pixel to pixel. Based on direction-wise statistics, our method distinguishes texture from structure effectively, identifies the appropriate scope around a pixel to be smoothed, and thus infers an optimal smoothing scale for it. Filtering the image with varying-scale kernels smooths it adaptively according to the distribution of texture. With commendable experimental results, we show that, requiring fewer iterations, our proposed scheme boosts texture filtering performance in terms of preserving geometric structures at multiple scales, even after aggressive smoothing of the original image.
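    The adaptive bilateral step is easy to make concrete: the spatial kernel stays fixed while the range-kernel scale varies per pixel. In the sketch below, the per-pixel sigma map, which the paper infers from direction-wise statistics, is simply taken as an input.

        # Hedged sketch: bilateral filtering with a per-pixel range scale.
        import numpy as np

        def adaptive_bilateral(img, sigma_r_map, radius=3, sigma_s=2.0):
            """img: 2D grayscale array; sigma_r_map: per-pixel range sigmas."""
            out = np.zeros_like(img, dtype=float)
            norm = np.zeros_like(out)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    shifted = np.roll(img, (dy, dx), axis=(0, 1))  # wraps at borders
                    w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                               - (shifted - img) ** 2 / (2 * sigma_r_map ** 2))
                    out += w * shifted
                    norm += w
            return out / norm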
  • Item
    Rain Wiper: An Incremental Randomly Wired Network for Single Image Deraining
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Liang, Xiwen; Qiu, Bin; Su, Zhuo; Gao, Chengying; Shi, Xiaohong; Wang, Ruomei; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Single image rain removal is a challenging ill-posed problem due to the various shapes and densities of rain streaks. We present a novel incremental randomly wired network (IRWN) for single image deraining. Different from previous methods, most structures of the modules in IRWN are generated by a stochastic network generator based on random graph theory, which eases the burden of manual design and further helps to characterize more complex rain streaks. To decrease network parameters and extract more details efficiently, the image pyramid is fused via the multi-scale network structure. An incremental rectified loss is proposed to better remove rain streaks in different rain conditions and recover the texture information of target objects. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method outperforms state-of-the-art methods significantly. In addition, an ablation study is conducted to illustrate the improvements obtained by different modules and loss terms in IRWN.
  • Item
    Field-aligned Quadrangulation for Image Vectorization
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wei, Guangshun; Zhou, Yuanfeng; Gao, Xifeng; Ma, Qian; Xin, Shiqing; He, Ying; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Image vectorization is an important yet challenging problem, especially when the input image has rich content. In this paper, we develop a novel method for automatically vectorizing natural images with feature-aligned quad-dominant meshes. Inspired by the quadrangulation methods in 3D geometry processing, we propose a new directional field optimization technique that encodes the color gradients, sidestepping the explicit computation of salient image features. We further compute the anisotropic scales of the directional field by accommodating the distance among image features. Our method is fully automatic and efficient: it takes only a few seconds for a 400×400 image on a normal laptop. We demonstrate the effectiveness of the proposed method on various image editing applications.
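    The starting cue for the directional field can be sketched with the classic structure-tensor orientation estimate (the paper's actual field is obtained by optimization with anisotropic scales; this only shows where the gradient encoding begins). The tensor entries would normally be Gaussian-smoothed before the angle is taken.

        # Hedged sketch: per-pixel orientation from intensity gradients.
        import numpy as np

        def orientation_field(gray):
            gy, gx = np.gradient(gray.astype(float))
            Jxx, Jyy, Jxy = gx * gx, gy * gy, gx * gy
            # dominant gradient orientation; the edge tangent is this plus 90 degrees
            return 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)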
  • Item
    Learning Explicit Smoothing Kernels for Joint Image Filtering
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Fang, Xiaonan; Wang, Miao; Shamir, Ariel; Hu, Shi-Min; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Smoothing noise while preserving strong edges in images is an important problem in image processing. Image smoothing filters can be either explicit (based on local weighted averages) or implicit (based on global optimization). Implicit methods are usually time-consuming and cannot be applied to joint image filtering tasks, i.e., leveraging the structural information of a guidance image to filter a target image. Previous deep-learning-based image smoothing filters are all implicit and unavailable for joint filtering. In this paper, we propose to learn explicit guidance feature maps as well as offset maps from the guidance image and a smoothing parameter, which can be utilized to smooth the input itself or to filter images in other target domains. We design a deep convolutional neural network consisting of a fully convolutional block for guidance and offset map extraction, together with a stacked spatially varying deformable convolution block for joint image filtering. Our models can approximate several representative image smoothing filters with high accuracy comparable to state-of-the-art methods, and serve as general tools for other joint image filtering tasks, such as color interpolation, depth map upsampling, saliency map upsampling, flash/non-flash image denoising and RGB/NIR image denoising.
  • Item
    ManyLands: A Journey Across 4D Phase Space of Trajectories
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Amirkhanov, Aleksandr; Kosiuk, Ilona; Szmolyan, Peter; Amirkhanov, Artem; Mistelbauer, Gabriel; Gröller, Eduard; Raidou, Renata Georgia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Mathematical models of ordinary differential equations are used to describe and understand biological phenomena. These models are dynamical systems that often describe the time evolution of more than three variables, i.e., their dynamics take place in a multi-dimensional space, called the phase space. Currently, mathematical domain scientists use plots of typical trajectories in the phase space to analyze the qualitative behavior of dynamical systems. These plots are called phase portraits and they perform well for 2D and 3D dynamical systems. However, for 4D, the visual exploration of trajectories becomes challenging, as simple subspace juxtaposition is not sufficient. We propose ManyLands to support mathematical domain scientists in analyzing 4D models of biological systems. By describing the subspaces as Lands, we accompany domain scientists along a continuous journey through 4D HyperLand, 3D SpaceLand, and 2D FlatLand, using seamless transitions. The Lands are also linked to 1D TimeLines. We offer an additional dissected view of trajectories that relies on small-multiple compass-like pictograms for easy navigation across subspaces and trajectory segments of interest. We show three use cases of 4D dynamical systems from cell biology and biochemistry. An informal evaluation with mathematical experts confirmed that ManyLands helps them to visualize and analyze complex 4D dynamics, while facilitating mathematical experiments and simulations.
  • Item
    Inertia-based Fast Vectorization of Line Drawings
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Najgebauer, Patryk; Scherer, Rafal; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Image vectorisation is a fundamental method in graphic design and one of the tools that allow artists' work to be transferred into computer graphics. The existing methods are based mainly on segmentation, or they analyse every image pixel; thus, they are relatively slow. We introduce a novel method for fast line drawing image vectorisation, based on a multi-scale second derivative detector accelerated by a summed-area table and an auxiliary grid. The image is initially scanned along the grid lines, and nodes are added to improve accuracy. Applying inertia in the line tracing allows for better junction mapping in a single pass. Our method is dedicated to grey-scale sketches and line drawings. It works efficiently regardless of the thickness of the line or its shading. Experiments show it is more than two orders of magnitude faster than the existing methods, without sacrificing accuracy.
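    The summed-area table acceleration is standard and easy to sketch: after one cumulative-sum pass, any axis-aligned box sum (and hence a box-filtered derivative response at any scale) costs four lookups. A minimal sketch:

    ```python
    # Sketch: summed-area table; a box sum of any size becomes four lookups.
    import numpy as np

    def summed_area_table(img):
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(sat, y0, x0, y1, x1):
        # Sum over the inclusive box [y0..y1] x [x0..x1].
        total = sat[y1, x1]
        if y0 > 0:
            total -= sat[y0 - 1, x1]
        if x0 > 0:
            total -= sat[y1, x0 - 1]
        if y0 > 0 and x0 > 0:
            total += sat[y0 - 1, x0 - 1]
        return total

    img = np.random.rand(32, 32)
    sat = summed_area_table(img)
    assert np.isclose(box_sum(sat, 4, 4, 11, 11), img[4:12, 4:12].sum())
    ```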
  • Item
    Generating 3D Faces using Multi-column Graph Convolutional Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Li, Kun; Liu, Jingying; Lai, Yu-Kun; Yang, Jingyu; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In this work, we introduce multi-column graph convolutional networks (MGCNs), a deep generative model for 3D mesh surfaces that effectively learns a non-linear facial representation. We perform spectral decomposition of meshes and apply convolutions directly in the frequency domain. Our network architecture involves multiple columns of graph convolutional networks (GCNs), namely a large GCN (L-GCN), a medium GCN (M-GCN) and a small GCN (S-GCN), with different filter sizes to extract features at different scales. L-GCN is better suited to extracting large-scale features, whereas S-GCN is effective for extracting subtle and fine-grained features, and M-GCN captures information in between. Therefore, to obtain a high-quality representation, we propose a selective fusion method that adaptively integrates these three kinds of information. Spatially non-local relationships are also exploited through a self-attention mechanism to further improve the representation ability in the latent vector space. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction. Moreover, with the help of variational inference, our model has excellent generative ability.
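    The spectral side of such a pipeline can be sketched in a few lines: build a graph Laplacian from mesh connectivity and project vertex signals onto its eigenvectors, i.e., into the frequency domain. The toy cycle graph and dense eigensolver below are assumptions made for brevity.

    ```python
    # Sketch: graph-Laplacian eigenbasis as a mesh "frequency domain" (toy setup).
    import numpy as np
    import scipy.sparse as sp

    def laplacian_basis(num_vertices, edges, k=5):
        rows, cols = zip(*edges)
        a = sp.coo_matrix((np.ones(len(edges)), (rows, cols)),
                          shape=(num_vertices, num_vertices))
        a = ((a + a.T) > 0).astype(float)               # symmetric adjacency
        lap = sp.diags(np.asarray(a.sum(axis=1)).ravel()) - a
        vals, vecs = np.linalg.eigh(lap.toarray())      # dense solve: toy sizes
        return vals[:k], vecs[:, :k]                    # low-frequency basis

    edges = [(i, (i + 1) % 20) for i in range(20)]      # a toy cycle "mesh"
    freqs, basis = laplacian_basis(20, edges)
    coeffs = basis.T @ np.random.rand(20)               # spectral coefficients
    ```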
  • Item
    Figure Skating Simulation from Video
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Yu, Ri; Park, Hwangpil; Lee, Jehee; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Figure skating is one of the most popular ice sports at the Winter Olympic Games. The skaters perform several skating skills to express the beauty of the art on ice. Skating involves moving on ice while wearing skate shoes with thin blades; thus, it requires much practice to skate without losing balance. Moreover, figure skating presents dynamic moves, such as jumping, artistically. Therefore, demonstrating figure skating skills is even more difficult to achieve than basic skating, and professional skaters often fall during Winter Olympic performances. We propose a system to demonstrate figure skating motions with a physically simulated human-like character. We simulate skating motions with non-holonomic constraints, which make the skate blade glide on the ice surface. It is difficult to obtain reference motions from figure skaters because figure skating motions are very fast and dynamic. Instead of using motion capture data, we use key poses extracted from videos on YouTube and complete reference motions using trajectory optimization. We demonstrate figure skating skills, such as crossover, three-turn, and even jump. Finally, we use deep reinforcement learning to generate a robust controller for figure skating skills.
  • Item
    Towards Robust Direction Invariance in Character Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Ma, Li-Ke; Yang, Zeshi; Guo, Baining; Yin, KangKang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In character animation, direction invariance is a desirable property. That is, a pose facing north and the same pose facing south are considered the same; a character that can walk to the north is expected to be able to walk to the south in a similar style. To achieve such direction invariance, the current practice is to remove the facing direction's rotation around the vertical axis before further processing. Such a scheme, however, is not robust for rotational behaviors in the sagittal plane. In search of a smooth scheme to achieve direction invariance, we prove that in general a singularity free scheme does not exist. We further connect the problem with the hairy ball theorem, which is better-known to the graphics community. Due to the nonexistence of a singularity free scheme, a general solution does not exist and we propose a remedy by using a properly-chosen motion direction that can avoid singularities for specific motions at hand. We perform comparative studies using two deep-learning based methods, one builds kinematic motion representations and the other learns physics-based controls. The results show that with our robust direction invariant features, both methods can achieve better results in terms of learning speed and/or final quality. We hope this paper can not only boost performance for character animation methods, but also help related communities currently not fully aware of the direction invariance problem to achieve more robust results.
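    The scheme under analysis, and its singularity, can be sketched directly. The choice of z as the forward axis and y as the vertical axis is an assumption made for illustration.

    ```python
    # Sketch: the common "remove heading" step and the singularity it hides.
    import numpy as np

    def remove_heading(rotation):
        # Factor out the yaw: rotate so the projected forward axis points to +z.
        fwd = rotation @ np.array([0.0, 0.0, 1.0])
        norm = np.hypot(fwd[0], fwd[2])
        if norm < 1e-8:
            # The facing direction is vertical: the heading is undefined.
            raise ValueError("singularity: heading undefined for vertical facing")
        c, s = fwd[2] / norm, fwd[0] / norm
        yaw_inv = np.array([[c, 0.0, -s],
                            [0.0, 1.0, 0.0],
                            [s, 0.0, c]])
        return yaw_inv @ rotation

    # A 90-degree pitch makes the forward axis vertical and trips the singularity,
    # the sagittal-plane failure mode the paper analyzes.
    pitch90 = np.array([[1.0, 0.0, 0.0],
                        [0.0, 0.0, -1.0],
                        [0.0, 1.0, 0.0]])
    try:
        remove_heading(pitch90)
    except ValueError as e:
        print(e)
    ```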
  • Item
    Dual Illumination Estimation for Robust Exposure Correction
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Zhang, Qing; Nie, Yongwei; Zheng, Wei-Shi; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Exposure correction is one of the fundamental tasks in image processing and computational photography. While various methods have been proposed, they either fail to produce visually pleasing results, or only work well for limited types of images (e.g., underexposed images). In this paper, we present a novel automatic exposure correction method, which is able to robustly produce high-quality results for images of various exposure conditions (e.g., underexposed, overexposed, and partially under- and over-exposed). At the core of our approach is the proposed dual illumination estimation, where we separately cast the under- and over-exposure correction as trivial illumination estimation of the input image and the inverted input image. By performing dual illumination estimation, we obtain two intermediate exposure correction results for the input image, one fixing the underexposed regions and the other restoring the overexposed regions. A multi-exposure image fusion technique is then employed to adaptively blend the visually best exposed parts of the two intermediate exposure correction images and the input image into a globally well-exposed image. Experiments on a number of challenging images demonstrate the effectiveness of the proposed approach and its superiority over the state-of-the-art methods and popular automatic exposure correction tools.
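    The dual estimation trick itself is easy to sketch. In the code below, a Gaussian-smoothed max-RGB illumination is a deliberately crude placeholder for the paper's estimator, and the final mean stands in for multi-exposure fusion.

    ```python
    # Sketch: correct under-exposure on the image, over-exposure on its inverse.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def crude_enhance(img, sigma=15.0, eps=1e-3):
        # Retinex-style: illumination ~ smoothed luminance; reflectance = I / L.
        lum = img.max(axis=2)
        illum = np.clip(gaussian_filter(lum, sigma), eps, 1.0)
        return np.clip(img / illum[..., None], 0.0, 1.0)

    def dual_illumination_correction(img):
        fixed_under = crude_enhance(img)              # brightens dark regions
        fixed_over = 1.0 - crude_enhance(1.0 - img)   # recovers bright regions
        # Placeholder fusion; the paper blends the best-exposed parts instead.
        return (fixed_under + fixed_over + img) / 3.0

    out = dual_illumination_correction(np.random.rand(64, 64, 3))
    ```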
  • Item
    Specular Highlight Removal for Real-world Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Fu, Gang; Zhang, Qing; Song, Chengfang; Lin, Qifeng; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Removing specular highlights in an image is a fundamental research problem in computer vision and computer graphics. While various methods have been proposed, they typically do not work well for real-world images due to the presence of rich textures, complex materials, hard shadows, occlusions, color illumination, etc. In this paper, we present a novel specular highlight removal method for real-world images. Our approach is based on two observations of real-world images: (i) the specular highlight is often small in size and sparse in distribution; (ii) the remaining diffuse image can be represented by a linear combination of a small number of basis colors with sparse encoding coefficients. Based on these two observations, we design an optimization framework for simultaneously estimating the diffuse and specular highlight images from a single image. Specifically, we recover the diffuse components of regions with specular highlights by encouraging sparsity of the encoding coefficients using the L0 norm. Moreover, the encoding coefficients and the specular highlight are subject to non-negativity constraints, according to the additive color mixing theory and the definition of illumination, respectively. Extensive experiments have been performed on a variety of images to validate the effectiveness of the proposed method and its superiority over previous methods.
  • Item
    Light Field Video Compression and Real Time Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Hajisharif, Saghi; Miandji, Ehsan; Larsson, Per; Tran, Kiet; Unger, Jonas; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Light field imaging is rapidly becoming an established method for generating flexible, image-based descriptions of scene appearance. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post-capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real-time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline we present several technical contributions, including a denoising scheme enhancing the sparsity in the dataset which enables higher compression ratios, and a novel pruning strategy which reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.
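    The dictionary/sparse-coding machinery at the core of such a codec can be sketched with off-the-shelf tools; here scikit-learn stands in for the paper's trained dictionary ensemble, and the data and sizes are illustrative.

    ```python
    # Sketch: sparse-code flattened patches against a learned dictionary.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    patches = np.random.rand(200, 64)             # stand-in light-field patches
    dl = DictionaryLearning(n_components=32, transform_algorithm="omp",
                            transform_n_nonzero_coefs=4)
    codes = dl.fit(patches).transform(patches)    # highly sparse coefficients
    recon = codes @ dl.components_                # "decoder": dictionary lookup
    print("avg nonzeros per patch:", (codes != 0).sum(axis=1).mean())
    ```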
  • Item
    Naturalness-Preserving Image Tone Enhancement Using Generative Adversarial Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Son, Hyeongseok; Lee, Gunhee; Cho, Sunghyun; Lee, Seungyong; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    This paper proposes a deep learning-based image tone enhancement approach that can maximally enhance the tone of an image while preserving the naturalness. Our approach does not require carefully generated ground-truth images by human experts for training. Instead, we train a deep neural network to mimic the behavior of a previous classical filtering method that produces drastic but possibly unnatural-looking tone enhancement results. To preserve the naturalness, we adopt the generative adversarial network (GAN) framework as a regularizer for the naturalness. To suppress artifacts caused by the generative nature of the GAN framework, we also propose an imbalanced cycle-consistency loss. Experimental results show that our approach can effectively enhance the tone and contrast of an image while preserving the naturalness compared to previous state-of-the-art approaches.
  • Item
    Practical Foldover-Free Volumetric Mapping Construction
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Su, Jian-Ping; Fu, Xiao-Ming; Liu, Ligang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In this paper, we present a practically robust method for computing foldover-free volumetric mappings with hard linear constraints. Central to this approach is a projection algorithm that monotonically and efficiently decreases the distance from the mapping to the bounded conformal distortion mapping space. After projection, the conformal distortion of the updated mapping tends to be below the given bound, thereby significantly reducing foldovers. Since it is non-trivial to define an optimal bound, we introduce a practical conformal distortion bound generation scheme to facilitate subsequent projections. By iteratively generating conformal distortion bounds and trying to project mappings into bounded conformal distortion spaces monotonically, our algorithm achieves high-quality foldover-free volumetric mappings with strong practical robustness and high efficiency. Compared with existing methods, our method computes mesh-based and meshless volumetric mappings with no prescribed conformal distortion bounds. We demonstrate the efficacy and efficiency of our method through a variety of geometric processing tasks.
  • Item
    Computing Surface PolyCube-Maps by Constrained Voxelization
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Yang, Yang; Fu, Xiao-Ming; Liu, Ligang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We present a novel method to compute bijective PolyCube-maps with low isometric distortion. Given a surface and its pre-axis-aligned shape that is not an exact PolyCube shape, the algorithm contains two steps: (i) construct a PolyCube shape to approximate the pre-axis-aligned shape; and (ii) generate a bijective, low isometric distortion mapping between the constructed PolyCube shape and the input surface. The PolyCube construction is formulated as a constrained optimization problem, where the objective is the number of corners in the constructed PolyCube, and the constraint is to bound the approximation error between the constructed PolyCube and the input pre-axis-aligned shape while ensuring topological validity. A novel erasing-and-filling solver is proposed to solve this challenging problem. Central to the algorithm for computing bijective PolyCube-maps is a quad mesh optimization process that projects the constructed PolyCube onto the input surface with high-quality quads. We demonstrate the efficacy of our algorithm on a data set containing 300 closed meshes. Compared to state-of-the-art methods, our method achieves higher practical robustness and lower mapping distortion.
  • Item
    Polycube Shape Space
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Zhao, Hui; Li, Xuan; Wang, Wencheng; Wang, Xiaoling; Wang, Shaodong; Lei, Na; Gu, Xianfeng; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Many methods have been proposed for generating polycube polyhedra, but the question of which polycube polyhedra can be generated at all has received little study. In this paper, we prove a theorem characterizing the necessary condition for the skeleton graph of a polycube polyhedron, by which Steinitz's theorem for convex polyhedra and Eppstein's theorem for simple orthogonal polyhedra are generalized to polycube polyhedra of any genus and with non-simply connected faces. Based on our theorem, we present a faster linear algorithm to determine the dimensions of the polycube shape space for a valid graph, covering all its possible polycube polyhedra. We also propose a quadratic optimization method to generate embedded polycube polyhedra with interactive assistance. Finally, we provide a graph-based framework for polycube mesh generation, quadrangulation, and all-hex meshing to demonstrate the utility and applicability of our approach.
  • Item
    Compacting Voxelized Polyhedra via Tree Stacking
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Hao, Yue; Lien, Jyh-Ming; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Volume compaction is a geometric problem that aims to reduce the volume of a polyhedron via shape transformation. Compactable structures are easier to transport and in some cases easier to manufacture; therefore, they are commonly found in our daily life (e.g., collapsible containers) and in advanced technology industries (e.g., the recent launch of 60 Starlink satellites compacted in a single rocket by SpaceX). It is known in the literature that finding a universal solution to compact an arbitrary 3D shape is computationally challenging. Previous approaches showed that stripifying the mesh surface can lead to optimal compaction, but the resulting structures were often impractical. In this paper, we propose an algorithm that cuts a 3D orthogonal polyhedron, tessellated by thick square panels, into a tree structure that can be transformed into compact piles by folding and stacking. We call this process tree stacking. Our research found that it is possible to decompose the problem into a pipeline of several solvable local optimizations. We also provide an efficient algorithm to check whether a solution exists while avoiding the computational bottleneck of the pipeline. Our results show that tree stacking can efficiently generate stackable structures that have better folding accuracy and similar compactness compared to the most compact stacking using strips.
  • Item
    Pyramid Multi-View Stereo with Local Consistency
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Liao, Jie; Fu, Yanping; Yan, Qingan; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In this paper, we propose a PatchMatch-based Multi-View Stereo (MVS) algorithm which can efficiently estimate geometry for textureless areas. Conventional PatchMatch-based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric consistency metrics between a patch in the reference image and its projection onto other images. Photometric consistency works well in textured regions but cannot discriminate between textureless regions, which makes geometry estimation for textureless regions difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share approximate depth-normal values, local consistency guides the depth and normal estimation with geometry from neighboring pixels with similar colors. To speed up the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture similar to previous work, which also provides coarse estimation at upper levels. We validate the effectiveness of our method on the ETH3D benchmark and the Tanks and Temples benchmark. Results show that our method outperforms the state-of-the-art.
  • Item
    Automatic Modeling of Cluttered Multi-room Floor Plans From Panoramic Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Pintore, Giovanni; Ganovelli, Fabio; Villanueva, Alberto Jaspe; Gobbetti, Enrico; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We present a novel and light-weight approach to capture and reconstruct structured 3D models of multi-room floor plans. Starting from a small set of registered panoramic images, we automatically generate a 3D layout of the rooms and of all the main objects inside. Such a 3D layout is directly suitable for use in a number of real-world applications, such as guidance, location, routing, or content creation for security and energy management. Our novel pipeline introduces several contributions to indoor reconstruction from purely visual data. In particular, we automatically partition panoramic images into a connectivity graph, according to the visual layout of the rooms, and exploit this graph to support object recovery and room boundary extraction. Moreover, we introduce a plane-sweeping approach to jointly reason about the content of multiple images and solve the problem of object inference in a top-down 2D domain. Finally, we combine these methods in a fully automated pipeline for creating a structured 3D model of a multi-room floor plan and of the location and extent of clutter objects. These contributions make our pipeline able to handle cluttered scenes with complex geometry that are challenging to existing techniques. The effectiveness and performance of our approach are evaluated on both real-world and synthetic models.
  • Item
    A Generalized Cubemap for Encoding 360° VR Videos using Polynomial Approximation
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Xiao, Jianye; Tang, Jingtao; Zhang, Xinyu; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    360° VR videos provide users with an immersive visual experience. To encode 360° VR videos, spherical pixels must be mapped onto a two-dimensional domain to take advantage of existing video encoding and storage standards. In the VR industry, standard cubemap projection is the most widely used projection method for encoding 360° VR videos. However, it exhibits pixel density variation across different regions due to projection distortion. We present a generalized algorithm to improve the efficiency of cubemap projection using polynomial approximation. In our algorithm, standard cubemap projection can be regarded as a special form with a 1st-order polynomial. Our experiments show that the generalized cubemap projection can significantly reduce the projection distortion using higher-order polynomials. As a result, pixel distribution is well balanced in the resulting 360° VR videos. We use PSNR, S-PSNR and CPP-PSNR to evaluate the visual quality, and the experimental results demonstrate promising performance improvement against standard cubemap projection and Google's equi-angular cubemap.
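    A worked sketch of the face-coordinate remapping: the standard cubemap keeps the coordinate unchanged (the 1st-order special case), the equi-angular cubemap applies an arctangent, and an odd polynomial can approximate the latter. The polynomial coefficients below are illustrative, hand-fitted values, not the paper's.

    ```python
    # Sketch: remapping a cube-face coordinate u in [-1, 1] before sampling.
    import numpy as np

    def standard(u):                 # standard cubemap: identity remap
        return u

    def equi_angular(u):             # equi-angular cubemap (EAC) remap
        return np.arctan(u) * (4.0 / np.pi)

    def polynomial(u, a=1.2732, b=-0.2732):   # odd cubic ~ EAC, with p(1) = 1
        return u * (a + b * u * u)

    u = np.linspace(-1.0, 1.0, 5)
    print(np.max(np.abs(polynomial(u) - equi_angular(u))))   # small error
    ```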
  • Item
    Interactive Curation of Datasets for Training and Refining Generative Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Ye, Wenjie; Dong, Yue; Peers, Pieter; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We present a novel interactive learning-based method for curating datasets using user-defined criteria for training and refining Generative Adversarial Networks. We employ a novel batch-mode active learning strategy to progressively select small batches of candidate exemplars for which the user is asked to indicate whether they match the, possibly subjective, selection criteria. After each batch, a classifier that models the user's intent is refined and subsequently used to select the next batch of candidates. After the selection process ends, the final classifier, trained with limited but adaptively selected training data, is used to sift through the large collection of input exemplars to extract a sufficiently large subset for training or refining the generative model that matches the user's selection criteria. A key distinguishing feature of our system is that we do not assume that the user can always make a firm binary decision (i.e., ''meets'' or ''does not meet'' the selection criteria) for each candidate exemplar, and we allow the user to label an exemplar as ''undecided''. We rely on a non-binary query-by-committee strategy to distinguish between the user's uncertainty and the trained classifier's uncertainty, and develop a novel disagreement distance metric to encourage a diverse candidate set. In addition, a number of optimization strategies are employed to achieve an interactive experience. We demonstrate our interactive curation system on several applications related to training or refining generative models: training a Generative Adversarial Network that meets a user-defined criteria, adjusting the output distribution of an existing generative model, and removing unwanted samples from a generative model.
  • Item
    HidingGAN: High Capacity Information Hiding with Generative Adversarial Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wang, Zihan; Gao, Neng; Wang, Xin; Xiang, Ji; Zha, Daren; Li, Linghui; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Image steganography is the technique of hiding secret information within images. It is an important research direction in the security field. Benefiting from the rapid development of deep neural networks, many steganographic algorithms based on deep learning have been proposed. However, most existing methods remain limited by two unsolved problems: small image size and low information capacity. In this paper, to address these problems, we propose a high-capacity image steganographic model named HidingGAN. The proposed model utilizes a new secret-information preprocessing method and an Inception-ResNet block to promote better integration of secret information and image features. Meanwhile, we introduce generative adversarial networks and a perceptual loss to maintain the same statistical characteristics of cover images and stego images in the high-dimensional feature space, thereby improving undetectability. Through these means, our model reaches higher imperceptibility, security, and capacity. Experimental results show that our HidingGAN achieves a capacity of 4 bits per pixel (bpp) at 256x256 pixels, improving over the previous best result of 0.4 bpp at 32x32 pixels.
  • Item
    Shadow Inpainting and Removal Using Generative Adversarial Networks with Slice Convolutions
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wei, Jinjiang; Long, Chengjiang; Zou, Hua; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In this paper, we propose two-stage top-down and bottom-up Generative Adversarial Networks (TBGANs) for shadow inpainting and removal, which use a novel top-down encoder and a bottom-up decoder with slice convolutions. These slice convolutions can effectively extract and restore long-range spatial information for either down-sampling or up-sampling. Different from previous deep-learning-based shadow removal methods, we propose to inpaint the shadow to handle possibly dark shadows, achieving a coarse shadow-removal image at the first stage, and then to recover the details and enhance the color and texture with a non-local block that explores both local and global inter-dependencies of pixels at the second stage. With such two-stage coarse-to-fine processing, the overall effect of shadow removal is greatly improved, and color retention in non-shaded areas is significantly better. By comparing with a variety of mainstream shadow removal methods, we demonstrate that our proposed method outperforms the state-of-the-art methods.
  • Item
    Visibility-Aware Progressive Farthest Point Sampling on the GPU
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Brandt, Sascha; Jähn, Claudius; Fischer, Matthias; Heide, Friedhelm Meyer auf der; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In this paper, we present the first algorithm for progressive sampling of 3D surfaces with blue noise characteristics that runs entirely on the GPU. The performance of our algorithm is comparable to state-of-the-art GPU Poisson-disk sampling methods, while additionally producing ordered sequences of samples where every prefix exhibits good blue noise properties. The basic idea is to reduce the 3D sampling domain to a set of 2.5D images which we sample in parallel utilizing the rasterization hardware of current GPUs. This allows for simple visibility-aware sampling that only captures the surface as seen from outside the sampled object, which is especially useful for point-based level-of-detail rendering methods. However, our method can be easily extended to sample the entire surface without changing the basic algorithm. We provide a statistical analysis of our algorithm, show that it produces good blue noise characteristics for every prefix of the resulting sample sequence, and analyze the performance of our method compared to related state-of-the-art sampling methods.
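    For reference, the sequential CPU form of farthest point sampling, whose prefixes are exactly the kind of ordered, well-spread sequences the paper produces on the GPU, fits in a few lines. This is the textbook algorithm, not the paper's parallel, visibility-aware variant.

    ```python
    # Sketch: classic farthest point sampling; every prefix is well spread.
    import numpy as np

    def farthest_point_sampling(points, k, seed=0):
        rng = np.random.default_rng(seed)
        order = [int(rng.integers(len(points)))]
        dist = np.linalg.norm(points - points[order[0]], axis=1)
        for _ in range(k - 1):
            nxt = int(dist.argmax())             # farthest from chosen set
            order.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
        return np.array(order)

    pts = np.random.rand(1000, 3)
    sample = farthest_point_sampling(pts, 32)
    ```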
  • Item
    Two-phase Hair Image Synthesis by Self-Enhancing Generative Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Qiu, Haonan; Wang, Chuan; Zhu, Hang; zhu, xiangyu; Gu, Jinjin; Han, Xiaoguang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Generating a plausible hair image given limited guidance, such as sparse sketches or a low-resolution image, has been made possible with the rise of Generative Adversarial Networks (GANs). Traditional image-to-image translation networks can generate recognizable results, but finer textures are usually lost and blur artifacts commonly exist. In this paper, we propose a two-phase generative model for high-quality hair image synthesis. The two-phase pipeline first generates a coarse image using an existing image translation model, then applies a re-generating network with self-enhancing capability to the coarse image. The self-enhancing capability is achieved by a proposed differentiable layer, which extracts the structural texture and orientation maps from a hair image. Extensive experiments on two tasks, Sketch2Hair and Hair Super-Resolution, demonstrate that our approach is able to synthesize plausible hair images with finer details, and reaches the state-of-the-art.
  • Item
    Unsupervised Dense Light Field Reconstruction with Occlusion Awareness
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Ni, Lixia; Jiang, Haiyong; Cai, Jianfei; Zheng, Jianmin; Li, Haifeng; Liu, Xu; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Light field (LF) reconstruction is a fundamental technique in light field imaging and has applications in both software and hardware. This paper presents an unsupervised learning method for LF-oriented view synthesis, which provides a simple solution for generating quality light fields from a sparse set of views. The method is built on disparity estimation and image warping. Specifically, we first use per-view disparity as a geometry proxy to warp input views to novel views. Then we compensate for occlusion with a network via a forward-backward warping process. Cycle-consistency between different views is exploited to enable unsupervised learning and accurate synthesis. The method overcomes the drawbacks of fully supervised learning methods, which require large labeled training datasets, and of epipolar-plane-image-based interpolation methods, which do not make full use of the geometry consistency in LFs. Experimental results demonstrate that the proposed method can generate high-quality views for LFs, outperforming unsupervised approaches and remaining comparable to fully supervised approaches.
  • Item
    Seamless Mipmap Filtering for Dual Paraboloid Maps
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wang, Zhenni; Ho, Tze Yui; Leung, Chi-Sing; Wong, Eric Wing Ming; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Dual paraboloid mapping is an approach for environment mapping. Its major advantage is its fast map generation speed. For graphics applications, when filtering is needed, the natural filtering tool is mipmapping. However, directly applying mipmapping to dual paraboloid mapping introduces three problems: discontinuity across the dual paraboloid map boundary, non-uniform sampling, and a depth testing issue. We propose three approaches to solve these problems. Our approaches are based on closed-form equations derived via theoretical analysis. Using these equations, we modify the coordinates involved during the rendering process. In other words, these problems are handled just by using dual paraboloid maps and mipmaps differently, instead of fundamentally altering their data structures. Consequently, we fix the problems without damaging the map generation speed advantage. Applying all three approaches, we improve the rendering quality of dual paraboloid map mipmaps to a level equivalent to that of cubemap mipmaps, while preserving the fast map generation speed advantage. This gives dual paraboloid map mipmaps the potential to be a better choice than cubemap mipmaps for devices with less computational power. The effectiveness and efficiency of the proposed approaches are demonstrated using a glossy reflection application and an omnidirectional soft shadow generation application.
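    For context, the basic forward mapping of a dual paraboloid map is the standard formulation the paper builds on; none of the proposed boundary or mipmap fixes are reproduced in this sketch.

    ```python
    # Sketch: standard dual paraboloid mapping of a direction to (uv, face).
    import numpy as np

    def dual_paraboloid_uv(d):
        # d: 3D direction; front map covers d.z >= 0, back map the rest.
        d = d / np.linalg.norm(d)
        front = d[2] >= 0.0
        z = d[2] if front else -d[2]
        # Paraboloid reflector: uv = d.xy / (1 + |d.z|), uv in the unit disk.
        uv = d[:2] / (1.0 + z)
        return uv, front

    uv, is_front = dual_paraboloid_uv(np.array([0.3, 0.4, -0.86]))
    ```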
  • Item
    Real-time Indirect Illumination of Emissive Inhomogeneous Volumes using Layered Polygonal Area Lights
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Kuge, Takahiro; Yatagawa, Tatsuya; Morishima, Shigeo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Indirect illumination involving visually rich participating media such as turbulent smoke and loud explosions contributes significantly to the appearances of other objects in a rendered scene. However, previous real-time techniques have focused only on the appearances of the media directly visible from the viewer. Specifically, appearances that can be indirectly seen over reflective surfaces have not attracted much attention. In this paper, we present a real-time rendering technique for such indirect views that involve the participating media. To achieve real-time performance for computing indirect views, we leverage layered polygonal area lights (LPALs) that can be obtained by slicing the media into multiple flat layers. Using this representation, the radiance entering each surface point from each slice of the volume is analytically evaluated to achieve instant calculation. The analytic solution can be derived for standard bidirectional reflectance distribution functions (BRDFs) based on microfacet theory. Accordingly, our method is sufficiently robust to work on surfaces with arbitrary shapes and roughness values. In addition, we propose a quadrature method for more accurate rendering of scenes with dense volumes, and a transformation of the domain of volumes to simplify the calculation and implementation of the proposed method. By taking advantage of these computation techniques, the proposed method achieves real-time rendering of indirect illumination for emissive volumes.
  • Item
    A Unified Neural Network for Panoptic Segmentation
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Yao, Li; Chyau, Ang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In this paper, we propose a unified neural network for panoptic segmentation, a task aiming to achieve more fine-grained segmentation. Following existing methods that combine semantic and instance segmentation, our method relies on a triple-branch neural network to tackle the unified task. In the first stage, we adopt a ResNet50 with a feature pyramid network (FPN) as a shared backbone to extract features. Each branch then leverages the shared feature maps and serves as the stuff, things, or mask branch. Lastly, the outputs are fused following a well-designed strategy. Extensive experimental results on the MS-COCO dataset demonstrate that our approach achieves a Panoptic Quality (PQ) score competitive with the state of the art.
  • Item
    Style Mixer: Semantic-aware Multi-Style Transfer Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) HUANG, Zixuan; ZHANG, Jinghuai; LIAO, Jing; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Recent neural style transfer frameworks have obtained astonishing visual quality and flexibility in Single-style Transfer (SST), but little attention has been paid to Multi-style Transfer (MST), which refers to simultaneously transferring multiple styles to the same image. Compared to SST, MST has the potential to create more diverse and visually pleasing stylization results. In this paper, we propose the first MST framework to automatically incorporate multiple styles into one result based on regional semantics. We first improve the existing SST backbone network by introducing a novel multi-level feature fusion module and a patch attention module to achieve better semantic correspondences and preserve richer style details. For MST, we design a conceptually simple yet effective region-based style fusion module that inserts into the backbone. It assigns corresponding styles to content regions based on semantic matching, and then seamlessly combines multiple styles together. Comprehensive evaluations demonstrate that our framework outperforms existing SST and MST methods.
  • Item
    A Color-Pair Based Approach for Accurate Color Harmony Estimation
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Yang, Bailin; Wei, Tianxiang; Fang, Xianyong; Deng, Zhigang; Li, Frederick W. B.; Ling, Yun; Wang, Xun; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Harmonious color combinations can stimulate positive user emotional responses. However, a wide-open research question remains: how can we establish a robust and accurate color harmony measure that allows the public and professional designers to identify the harmony level of a color theme or color set? Building upon the key discovery that color pairs play an important role in harmony estimation, in this paper we present a novel color-pair based estimation model to accurately measure color harmony. It first uses a two-layer maximum likelihood estimation (MLE) based method to compute an initial prediction of color harmony by statistically modeling the pair-wise color preferences from existing datasets. Then, the initial scores are refined through a back-propagation neural network (BPNN) with a variety of color features extracted in different color spaces, so that an accurate harmony estimation is obtained at the end. Our extensive experiments, including performance comparisons of harmony estimation applications, show the advantages of our method in comparison with the state-of-the-art methods.
  • Item
    A Rigging-Skinning Scheme to Control Fluid Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Lu, Jia-Ming; Chen, Xiao-Song; Yan, Xiao; Li, Chen-Feng; Lin, Ming; Hu, Shi-Min; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Inspired by skeletal animation, a novel rigging-skinning flow control scheme is proposed to animate fluids intuitively and efficiently. The new animation pipeline creates fluid animation via two steps: fluid rigging and fluid skinning. The fluid rig is defined by a point cloud with rigid-body movement and incompressible deformation, whose time series can be intuitively specified by a rigid body motion and a constrained free-form deformation, respectively. The fluid skin generates plausible fluid flows by virtually fluidizing the point-cloud fluid rig with adjustable zero- and first-order flow features, at fixed computational cost. Fluid rigging allows the animator to conveniently specify the desired low-frequency flow motion through intuitive manipulations of a point cloud, while fluid skinning truthfully and efficiently converts the motion specified on the fluid rig into plausible flows of the animation fluid, with adjustable fine-scale effects. Besides being intuitive, the rigging-skinning scheme for fluid animation is robust and highly efficient, completely avoiding iterative trials or time-consuming nonlinear optimization. It is also versatile, supporting both particle- and grid-based fluid solvers. A series of examples including liquid, gas and mixed scenes are presented to demonstrate the performance of the new animation pipeline.
  • Item
    Distribution Update of Deformable Patches for Texture Synthesis on the Free Surface of Fluids
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Gagnon, jonathan; Guzmán, Julián E.; Vervondel, Valentin; Dagenais, François; Mould, David; Paquette, Eric; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We propose an approach for temporally coherent patch-based texture synthesis on the free surface of fluids. Our approach is applied as a post-process, using the surface and velocity field from any fluid simulator. We apply the texture from the exemplar through multiple local mesh patches fitted to the surface and mapped to the exemplar. Our patches are constructed from the fluid free surface by taking a subsection of the free surface mesh. As such, they are initially very well adapted to the fluid's surface, and can later deform according to the free surface velocity field, allowing a greater ability to represent surface motion than rigid or 2D grid-based patches. From one frame to the next, the patch centers and surrounding patch vertices are advected according to the velocity field. We seek to maintain a Poisson disk distribution of patches, and following advection, the Poisson disk criterion determines where to add new patches and which patches should be flagged for removal. The removal considers the local number of patches: in regions containing too many patches, we accelerate the temporal removal. This reduces the number of patches while still meeting the Poisson disk criterion. Reducing areas with too many patches speeds up the computation and avoids patch-blending artifacts. The final step of our approach creates the overall texture in an atlas where each texel is computed from the patches using a contrast-preserving blending function. Our tests show that the approach works well on free surfaces undergoing significant deformation and topological changes. Furthermore, we show that our approach provides good results for many fluid simulation scenarios, and with many texture exemplars. We also confirm that the optical flow from the resulting texture matches the fluid velocity field. Overall, our approach compares favorably against recent work in this area.
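    The distribution-maintenance step can be sketched as follows. The uniform-grid acceleration and the greedy keep/flag pass are assumptions of this sketch; the paper additionally modulates removal speed in crowded regions.

    ```python
    # Sketch: keep patch centers that satisfy a Poisson-disk radius r in 2D.
    import numpy as np

    def poisson_disk_filter(centers, r):
        cell = r / np.sqrt(2.0)          # at most one kept center per cell
        grid, kept = {}, []
        for i, p in enumerate(centers):
            cx, cy = int(p[0] // cell), int(p[1] // cell)
            ok = True
            for gx in range(cx - 2, cx + 3):       # 5x5 cells cover radius r
                for gy in range(cy - 2, cy + 3):
                    for j in grid.get((gx, gy), []):
                        if np.linalg.norm(p - centers[j]) < r:
                            ok = False
            if ok:
                grid.setdefault((cx, cy), []).append(i)
                kept.append(i)
        return kept                      # the rest would be flagged for removal

    pts = np.random.rand(200, 2)
    survivors = poisson_disk_filter(pts, 0.08)
    ```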
  • Item
    High Dynamic Range Point Clouds for Real-Time Relighting
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Sabbadin, Manuele; Palma, Gianpaolo; BANTERLE, FRANCESCO; Boubekeur, Tamy; Cignoni, Paolo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings on the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step relative to the perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
  • Item
    Offline Deep Importance Sampling for Monte Carlo Path Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Bako, Steve; Meyer, Mark; DeRose, Tony; Sen, Pradeep; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Although modern path tracers are successfully being applied to many rendering applications, there is considerable interest to push them towards ever-decreasing sampling rates. As the sampling rate is substantially reduced, however, even Monte Carlo (MC) denoisers, which have been very successful at removing large amounts of noise, typically do not produce acceptable final results. As an orthogonal approach to this, we believe that good importance sampling of paths is critical for producing better-converged, path-traced images at low sample counts that can then, for example, be more effectively denoised. However, most recent importance-sampling techniques for guiding path tracing (an area known as ''path guiding'') involve expensive online (per-scene) training and offer benefits only at high sample counts. In this paper, we propose an offline, scene-independent deep-learning approach that can importance sample first-bounce light paths for general scenes without the need for costly online training, and can start guiding path sampling with as little as 1 sample per pixel. Instead of learning to ''overfit'' to the sampling distribution of a specific scene like most previous work, our data-driven approach is trained a priori on a set of training scenes on how to use a local neighborhood of samples with additional feature information to reconstruct the full incident radiance at a point in the scene, which enables first-bounce importance sampling for new test scenes. Our solution is easy to integrate into existing rendering pipelines without the need for retraining, as we demonstrate by incorporating it into both the Blender/Cycles and Mitsuba path tracers. Finally, we show how our offline, deep importance sampler (ODIS) increases convergence at low sample counts and improves the results of an off-the-shelf denoiser relative to other state-of-the-art sampling techniques.
  • Item
    Deep Video-Based Performance Synthesis from Sparse Multi-View Capture
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Chen, Mingjia; Wang, Changbo; Liu, Ligang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We present a deep learning based technique that enables novel-view videos of human performances to be synthesized from sparse multi-view captures. While performance capture from a sparse set of videos has received significant attention, there has been relatively little progress on non-rigid objects (e.g., human bodies). The rich articulation modes of the human body make it rather challenging to synthesize and interpolate the model well. To address this problem, we propose a novel deep learning based framework that directly predicts novel-view videos of human performances without explicit 3D reconstruction. Our method is a composition of two steps: novel-view prediction and detail enhancement. We first learn a novel deep generative query network for view prediction, synthesizing novel-view performances from a sparse set of just five or fewer camera videos. Then, we use a new generative adversarial network to enhance the fine-scale details of the first step's results. This opens up the possibility of high-quality, low-cost video-based performance synthesis, which is gaining popularity for VR and AR applications. We demonstrate a variety of promising results, where our method is able to synthesize more robust and accurate performances than existing state-of-the-art approaches when only sparse views are available.
  • Item
    Appearance Flow Completion for Novel View Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Le, Hoang; Liu, Feng; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Novel view synthesis from sparse and unstructured input views faces challenges like the difficulty of dense 3D reconstruction and large occlusions. This paper addresses these problems by estimating proper appearance flows from the target to the input views to warp and blend the input views. Our method first estimates a sparse set of 3D scene points using an off-the-shelf 3D reconstruction method and calculates sparse flows from the target to the input views. Our method then performs appearance flow completion to estimate the dense flows from the corresponding sparse ones. Specifically, we design a deep fully convolutional neural network that takes sparse flows and input views as input and outputs the dense flows. Furthermore, we estimate the optical flows between input views as references to guide the estimation of dense flows between the target view and the input views. Besides the dense flows, our network also estimates the masks used to blend multiple warped inputs to render the target view. Experiments on the KITTI benchmark show that our method can generate high-quality novel views from sparse and unstructured input views.
  • Item
    FontRNN: Generating Large-scale Chinese Fonts via Recurrent Neural Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Tang, Shusen; Xia, Zeqing; Lian, Zhouhui; Tang, Yingmin; Xiao, Jianguo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Despite the recent impressive development of deep neural networks, using deep learning based methods to generate large-scale Chinese fonts is still a rather challenging task due to the huge number of intricate Chinese glyphs; e.g., the official standard Chinese charset GB18030-2000 consists of 27,533 Chinese characters. Until now, most existing models for this task adopt Convolutional Neural Networks (CNNs) to generate bitmap images of Chinese characters due to CNN-based models' remarkable success in various applications. However, CNN-based models focus more on image-level features while usually ignoring stroke-order information when writing characters. Instead, we treat Chinese characters as sequences of points (i.e., writing trajectories) and propose to handle this task via an effective Recurrent Neural Network (RNN) model with a monotonic attention mechanism, which can learn from as few as hundreds of training samples and then synthesize glyphs for the remaining thousands of characters in the same style. Experimental results show that our proposed FontRNN can be used for synthesizing large-scale Chinese fonts as well as generating realistic Chinese handwriting efficiently.
  • Item
    Mesh Defiltering via Cascaded Geometry Recovery
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wei, Mingqiang; Guo, Xianglin; Huang, Jin; Xie, Haoran; Zong, Hua; Kwan, Reggie; Wang, Fu Lee; Qin, Jing; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    This paper addresses the nontraditional but practically meaningful reversibility problem of mesh filtering. This reverse-filtering approach (termed a DeFilter) seeks to recover the geometry of a set of filtered meshes to their artifact-free status. To this end, we adapt cascaded normal regression (CNR) to understand the commonly used mesh filters and automatically recover the mesh geometry that was lost through various geometric operations. We formulate mesh defiltering with an extreme learning machine (ELM) on the mesh normals at an offline training stage and perform it automatically at a runtime defiltering stage. Specifically, (1) to measure the local geometry of a filtered mesh, we develop a generalized reverse Filtered Facet Normal Descriptor (grFND) over consistent neighbors; (2) to map the grFNDs to the normals of the ground-truth meshes, we learn a regression function from a set of filtered meshes and their ground-truth counterparts; and (3) at runtime, we reversely filter the normals of a filtered mesh, using the learned regression function to recover the lost geometry. We evaluate multiple quantitative and qualitative results on synthetic and real data to verify our DeFilter's performance thoroughly. From a practical point of view, our method can recover the lost geometry of denoised meshes without needing to know the exact filter used previously, and can act as a geometry-recovery plugin for most of the state-of-the-art methods of mesh denoising.
  • Item
    Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Bemana, Mojtaba; Keinert, Joachim; Myszkowski, Karol; Bätz, Michel; Ziegler, Matthias; Seidel, Hans-Peter; Ritschel, Tobias; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications, the reference image is not available and image metrics cannot be applied. We devise a neural network architecture and training procedure that allows predicting the MSE, SSIM or VGG16 image difference from the distorted image alone while the reference is not observed. This is enabled by two insights: The first is to inject sufficiently many un-distorted natural image patches, which can be found in arbitrary amounts and are known to have no perceivable difference to themselves. This avoids false positives. The second is to balance the learning, where it is carefully made sure that all image errors are equally likely, avoiding false negatives. Surprisingly, we observe that the resulting no-reference metric, subjectively, can even perform better than the reference-based one, as it had to become robust against mis-alignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively. Finally, we demonstrate two applications which reduce light field capture time and provide guidance for interactive depth adjustment.
  • Item
    Intrinsic Symmetry Detection on 3D Models with Skeleton-guided Combination of Extrinsic Symmetries
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wang, Wencheng; Ma, Junhui; Xu, Panpan; Chu, Yiyao; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Existing methods for intrinsic symmetry detection on 3D models rely on complex measures, such as geodesic distances, to describe intrinsic geometry, and on statistical computation to find the non-rigid transformations that associate symmetric shapes. They are expensive, may miss symmetries, and cannot guarantee high-quality symmetric parts. We observe that only extrinsic symmetries exist between convex shapes, and that two shapes are intrinsically symmetric if their constituent convex sub-shapes are correspondingly extrinsically symmetric and are connected in a similar topological structure. We therefore propose to decompose the model into convex parts and to use similar structures in the model's skeleton to guide the combination of extrinsic symmetries between convex parts into intrinsic symmetries. In this way, we dispense with statistical computation for intrinsic symmetry detection and avoid complex measures of intrinsic geometry. By growing the similar structures from small to large, we can quickly detect multi-scale partial intrinsic symmetries in a bottom-up manner. Benefiting from the well-segmented convex parts, the symmetric parts we obtain are of high quality. Experimental results show that our method finds many more symmetries and runs much faster than existing methods, in some cases by several orders of magnitude.
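    The bottom-up combination can be pictured with a small sketch (Python/NumPy). Below, convex parts are paired by a crude rotation/reflection-invariant signature (sorted PCA eigenvalues, a stand-in assumption for the real extrinsic-symmetry test), and matched pairs are then grown along the skeleton so that both sides stay connected in similar structures.

    import numpy as np
    from itertools import combinations

    def part_signature(points, tol=3):
        # Invariant under rotations/reflections: sorted covariance eigenvalues.
        return tuple(np.round(np.sort(np.linalg.eigvalsh(np.cov(points.T))), tol))

    def seed_pairs(parts):
        # Pairs of convex parts that could be extrinsically symmetric.
        sig = {i: part_signature(p) for i, p in parts.items()}
        return {(i, j) for i, j in combinations(parts, 2) if sig[i] == sig[j]}

    def grow(pairs, skeleton):
        # Merge part pairs whose members are skeleton-adjacent on both sides,
        # yielding progressively larger intrinsically symmetric regions.
        regions = [({i}, {j}) for i, j in pairs]
        touch = lambda s, t: any((u, v) in skeleton or (v, u) in skeleton
                                 for u in s for v in t)
        merged = True
        while merged:
            merged = False
            for a, b in combinations(range(len(regions)), 2):
                (l1, r1), (l2, r2) = regions[a], regions[b]
                if touch(l1, l2) and touch(r1, r2):
                    regions[a] = (l1 | l2, r1 | r2)
                    del regions[b]
                    merged = True
                    break
        return regions

    # Hypothetical input: four convex parts; 1 and 3 mirror 0 and 2.
    parts = {i: np.random.randn(50, 3) for i in range(4)}
    parts[1] = parts[0] * np.array([-1, 1, 1])
    parts[3] = parts[2] * np.array([-1, 1, 1])
    skeleton = {(0, 2), (1, 3)}                 # skeleton adjacency of parts
    print(grow(seed_pairs(parts), skeleton))    # one multi-part symmetry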
  • Item
    Topology Preserving Simplification of Medial Axes in 3D Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Chu, Yiyao; Hou, Fei; Wang, Wencheng; Li, Lei; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    We propose an efficient method for topology-preserving simplification of the medial axes of 3D models. Existing methods either cannot preserve topology during medial axis simplification, or are geometrically inaccurate or computationally expensive. To tackle these issues, we restrict topology checking to the areas around topological holes, avoiding unnecessary checks elsewhere. Our algorithm maintains high precision even when the medial axis is simplified down to very few vertices. Furthermore, we parallelize the simplification procedure to boost performance significantly. Experimental results show that our method preserves topology with high efficiency and is much superior to existing methods in topology preservation, accuracy, and speed.
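    The core optimization, running the expensive topology check only near the holes, can be sketched as follows (Python/NumPy). Here the medial axis is modeled as a plain triangle soup, the collapse priority is simple edge length, and hole_verts is an assumed precomputed set; the paper's actual data structures, error metric, and parallelization are omitted.

    import heapq
    import numpy as np

    def link_condition(v1, v2, tris):
        # Standard check: collapsing (v1, v2) preserves topology only if the
        # common neighbors of v1 and v2 are exactly the vertices opposite the
        # edge in its incident triangles.
        ring = lambda v: {u for t in tris if v in t for u in t} - {v1, v2}
        opposite = {u for t in tris if v1 in t and v2 in t for u in t} - {v1, v2}
        return ring(v1) & ring(v2) == opposite

    def simplify(verts, tris, hole_verts, target):
        tris = [tuple(t) for t in tris]
        heap = [(np.linalg.norm(verts[a] - verts[b]), a, b)
                for t in tris for a, b in [(t[0], t[1]), (t[1], t[2]), (t[0], t[2])]]
        heapq.heapify(heap)
        alive = set(range(len(verts)))
        while heap and len(alive) > target:
            _, a, b = heapq.heappop(heap)
            if a not in alive or b not in alive:
                continue
            if not any(a in t and b in t for t in tris):
                continue                        # stale edge, already collapsed
            # Expensive check only near holes; elsewhere collapse immediately.
            if (a in hole_verts or b in hole_verts) and not link_condition(a, b, tris):
                continue
            tris = [tuple(a if u == b else u for u in t) for t in tris]
            tris = [t for t in tris if len(set(t)) == 3]   # drop degenerates
            alive.discard(b)
        return alive, tris

    # Hypothetical toy input: a fan of four triangles, no holes.
    verts = np.random.rand(6, 3)
    tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
    print(simplify(verts, tris, hole_verts=set(), target=4))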
  • Item
    Single-View Modeling of Layered Origami with Plausible Outer Shape
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Kato, Yuya; Tanaka, Shinichi; Kanamori, Yoshihiro; Mitani, Jun; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Modeling 3D origami pieces with conventional software is laborious due to the geometric constraints imposed by their complicated layered structure. Targeting origami models used in visual content such as CG illustrations and movies, we propose an interactive system that dramatically simplifies the modeling of 3D origami pieces with plausible outer shapes, while omitting accurate inner structures. By focusing on flat origami models with the front-and-back symmetry commonly found in traditional artworks, our system enables easy and quick modeling via a single-view interface: given a reference image of the target origami piece, the user draws polygons of planar faces onto the image and assigns annotations indicating the types of folding operations. Our system automatically rectifies the manually specified polygons, infers folded structures that would yield the user-specified polygons given the depth order of the layered polygons, and generates a plausible 3D model that accounts for gaps between layers. Our system is versatile enough to model pseudo-origami that cannot be realized by folding a single sheet of paper. A user study demonstrates that even novice users without specialized knowledge of, or experience in, origami and 3D modeling can quickly create plausible origami models.
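    As one plausible illustration of the rectification step, the sketch below (Python/NumPy) snaps nearly coincident vertices of the user-drawn polygons together so that adjacent faces share exact edges. The clustering and tolerance are assumptions; the paper's rectification additionally enforces fold-specific constraints.

    import numpy as np

    def rectify(polygons, tol=5.0):
        # polygons: list of (k_i, 2) arrays of image-space vertices.
        pts = np.vstack(polygons)
        snapped, used = pts.copy(), np.zeros(len(pts), dtype=bool)
        for i in range(len(pts)):
            if used[i]:
                continue
            close = np.linalg.norm(pts - pts[i], axis=1) < tol
            snapped[close] = pts[close].mean(axis=0)    # merge the cluster
            used |= close
        out, k = [], 0
        for poly in polygons:                           # split back per face
            out.append(snapped[k:k + len(poly)])
            k += len(poly)
        return out

    # Two hand-drawn quads that should share an edge after snapping.
    quads = [np.array([[0., 0], [100, 2], [101, 99], [1, 100]]),
             np.array([[99., 1], [200, 0], [199, 101], [102, 98]])]
    for p in rectify(quads):
        print(np.round(p, 1))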
  • Item
    Image Composition of Partially Occluded Objects
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Tan, Xuehan; Xu, Panpan; Guo, Shihui; Wang, Wencheng; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Image composition extracts the content of interest (COI) from a source image and blends it into a target image to generate a new image. In most existing works, the COI is extracted manually and then overlaid on top of the target image. In practice, however, the COI is often partially occluded by content in the target image. Both extracting the COI and cropping its occluded part then require intensive user interaction, which is laborious and severely reduces composition efficiency. This paper addresses these challenges with an efficient image composition method. First, we extract the semantic contents of the images using state-of-the-art deep learning methods, so that the COI can be selected with only a few clicks, greatly reducing the required user interaction. Second, from the user's operations on the COI (such as translation or scaling), we effectively infer the occlusion relationships between the COI and the contents of the target image. The COI can thus be embedded adaptively into the target image without manually cropping its occluded part. The procedures of content extraction and occlusion handling are thereby greatly simplified, and work efficiency is markedly improved. Experimental results show that, compared to existing works, our method reduces the number of user interactions to roughly one-tenth and speeds up image composition by more than ten times.
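    The occlusion handling amounts to a painter's-algorithm composite over the inferred depth order, sketched below (Python/NumPy). The masks, layers, and the depth assigned to the COI are illustrative assumptions; in the paper the depth relationships are inferred from the user's operations rather than given explicitly.

    import numpy as np

    def composite(background, layers, coi_rgba, coi_depth):
        # layers: list of (rgba, depth) for segmented target objects.
        stack = sorted(layers + [(coi_rgba, coi_depth)], key=lambda l: -l[1])
        out = background.astype(float).copy()
        for rgba, _ in stack:                    # far-to-near over-compositing
            alpha = rgba[..., 3:4]
            out = alpha * rgba[..., :3] + (1 - alpha) * out
        return out

    H = W = 64
    bg = np.zeros((H, W, 3))
    obj = np.zeros((H, W, 4)); obj[20:50, 20:50] = (1, 0, 0, 1)  # red object
    coi = np.zeros((H, W, 4)); coi[10:40, 30:60] = (0, 1, 0, 1)  # pasted COI
    # Placing the COI at a larger depth puts it behind the red object, which
    # then occludes it automatically -- no manual cropping of the COI.
    img = composite(bg, [(obj, 1.0)], coi, coi_depth=2.0)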
  • Item
    A PatchMatch-based Approach for Matte Propagation in Videos
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Backes, Marcos; Menezes de Oliveira Neto, Manuel; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Despite considerable advances in natural image matting over the last decades, video matting remains a difficult problem. The main challenges faced by existing methods are the large amount of user input required and temporal inconsistencies between the mattes of adjacent frames. We present a temporally coherent matte-propagation method for videos based on PatchMatch and edge-aware filtering. Given an input video and trimaps for a few frames, including the first and last, our approach generates alpha mattes for all frames of the video sequence. We also present a scribble-based interface for video matting that takes advantage of the efficiency of our method to interactively refine the matte results. We demonstrate the effectiveness of our approach by using it to generate temporally coherent mattes for several natural video sequences. We perform quantitative comparisons against state-of-the-art sparse-input video matting techniques and show that our method produces significantly better results according to three different metrics. We also perform qualitative comparisons against state-of-the-art dense-input video matting techniques and show that our approach produces results of similar quality while requiring only about 7% of the user input those techniques demand. These results show that our method is both effective and user-friendly, outperforming state-of-the-art solutions.
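    The propagation idea, copying alpha from the best-matching key-frame patch for every patch of the next frame, can be illustrated with the toy sketch below (Python/NumPy). Real PatchMatch finds this nearest-neighbor field efficiently via random search and propagation; the small brute-force window here is a stand-in kept runnable, and the edge-aware filtering stage is omitted.

    import numpy as np

    def propagate_alpha(key_img, key_alpha, next_img, patch=5, search=3):
        h, w = next_img.shape[:2]
        r = patch // 2
        pad = lambda a: np.pad(a, [(r, r), (r, r), (0, 0)], mode='edge')
        ki, ni = pad(key_img), pad(next_img)
        out = np.zeros((h, w))
        for y in range(h):
            for x in range(w):
                q = ni[y:y + patch, x:x + patch]
                best, arg = np.inf, (y, x)
                for yy in range(max(0, y - search), min(h, y + search + 1)):
                    for xx in range(max(0, x - search), min(w, x + search + 1)):
                        d = ((ki[yy:yy + patch, xx:xx + patch] - q) ** 2).sum()
                        if d < best:
                            best, arg = d, (yy, xx)
                out[y, x] = key_alpha[arg]      # copy alpha of the best match
        return out

    key = np.random.rand(32, 32, 3)
    nxt = np.roll(key, 2, axis=1)               # next frame: shifted 2 px
    alpha = np.tile((np.arange(32) > 16).astype(float), (32, 1))
    print(propagate_alpha(key, alpha, nxt).mean())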
  • Item
    Wavelet Flow: Optical Flow Guided Wavelet Facial Image Fusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Ding, Hong; Yan, Qingan; Fu, Gang; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Estimating correspondences between images using optical flow is a key component of image fusion. However, computing optical flow between a pair of facial images that include backgrounds is challenging due to large differences in illumination, texture, color, and background. To improve optical flow results for image fusion, we propose a novel flow estimation method, wavelet flow, which handles both the face and the background in the input images. The key idea is that, instead of computing flow directly between the input image pair, we estimate it by combining multi-scale image transfer with optical-flow-guided wavelet fusion. Multi-scale image transfer helps preserve the background and lighting detail of the input, while optical-flow-guided wavelet fusion produces a series of intermediate images for further optimization of the fusion quality. Our approach significantly improves the performance of the optical flow algorithm and yields more natural fusion results for both faces and backgrounds. We evaluate our method on a variety of datasets to demonstrate its superior performance.
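    The wavelet-fusion stage can be sketched compactly (Python with NumPy and the PyWavelets package, both assumptions): keep the low-frequency band of the base image to preserve background and lighting, and select the stronger detail coefficient per band from the flow-warped second image. The flow warping itself is assumed to have already been applied.

    import numpy as np
    import pywt

    def wavelet_fuse(base, warped, wavelet='db2', level=3):
        cb = pywt.wavedec2(base, wavelet, level=level)
        cw = pywt.wavedec2(warped, wavelet, level=level)
        fused = [cb[0]]                          # keep base's approximation
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
        for (hb, vb, db), (hw, vw, dw) in zip(cb[1:], cw[1:]):
            fused.append((pick(hb, hw), pick(vb, vw), pick(db, dw)))
        return pywt.waverec2(fused, wavelet)

    a = np.random.rand(128, 128)   # stand-in grayscale face image
    b = np.random.rand(128, 128)   # stand-in flow-warped second image
    print(wavelet_fuse(a, b).shape)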