39-Issue 7

Pacific Graphics 2020 - Symposium Proceedings
Proceedings published in 2020; articles to be presented in 2021 in Wellington, New Zealand

(for Short Papers, Posters, and Work-in-Progress Papers see PG 2020 - Short Papers, Posters, and Work-in-Progress Papers)
Geometry and Modeling
Memory-Efficient Bijective Parameterizations of Very-Large-Scale Models
Chunyang Ye, Jian-Ping Su, Ligang Liu, and Xiao-Ming Fu
Practical Fabrication of Discrete Chebyshev Nets
Hao-Yu Liu, Zhong-Yuan Liu, Zheng-Yu Zhao, Ligang Liu, and Xiao-Ming Fu
A Deep Residual Network for Geometric Decontouring
Zhongping Ji, Chengqin Zhou, Qiankan Zhang, Yu-Wei Zhang, and Wenping Wang
Robust Computation of 3D Apollonius Diagrams
Peihui Wang, Na Yuan, Yuewen Ma, Shiqing Xin, Ying He, Shuangmin Chen, Jian Xu, and Wenping Wang
Image-Driven Furniture Style for Interactive 3D Scene Modeling
Tomer Weiss, Ilkay Yildiz, Nitin Agarwal, Esra Ataer-Cansizoglu, and Jae-Woo Choi
Physics-based Material Animation
Adjustable Constrained Soft-Tissue Dynamics
Bohan Wang, Mianlun Zheng, and Jernej Barbic
Learning Elastic Constitutive Material and Damping Models
Bin Wang, Yuanmin Deng, Paul Kry, Uri Ascher, Hui Huang, and Baoquan Chen
Fracture Patterns Design for Anisotropic Models with the Material Point Method
Wei Cao, Luan Lyu, Xiaohua Ren, Bob Zhang, Zhixin Yang, and Enhua Wu
A Novel Plastic Phase-Field Method for Ductile Fracture with GPU Optimization
Zipeng Zhao, Kemeng Huang, Chen Li, Changbo Wang, and Hong Qin
Physics and Graphics
Simulation of Arbitrarily-shaped Magnetic Objects
Seung-wook Kim and JungHyun Han
Semi-analytical Solid Boundary Conditions for Free Surface Flows
Yue Chang, Shusen Liu, Xiaowei He, Sheng Li, and Guoping Wang
Cosserat Rod with rh-Adaptive Discretization
Jiahao Wen, Jiong Chen, Nobuyuki Umetani, Hujun Bao, and Jin Huang
Rendering
Fast Out-of-Core Octree Generation for Massive Point Clouds
Markus Schütz, Stefan Ohrhallinger, and Michael Wimmer
Real Time Multiscale Rendering of Dense Dynamic Stackings
Élie Michel and Tamy Boubekeur
Automatic Band-Limited Approximation of Shaders Using Mean-Variance Statistics in Clamped Domain
Shi Li, Rui Wang, Yuchi Huo, Wenting Zheng, Wei Hua, and Hujun Bao
Lights and Ray Tracing
Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering
Zilin Xu, Qiang Sun, Lu Wang, Yanning Xu, and Beibei Wang
Next Event Estimation++: Visibility Mapping for Efficient Light Transport Simulation
Jerry Jinfeng Guo, Martin Eisemann, and Elmar Eisemann
Two-stage Resampling for Bidirectional Path Tracing with Multiple Light Sub-paths
Kosuke Nabata, Kei Iwasaki, and Yoshinori Dobashi
Materials and Shading Models
Computing the Bidirectional Scattering of a Microstructure Using Scalar Diffraction Theory and Path Tracing
Viggo Falster, Adrián Jarabo, and Jeppe Revall Frisvad
Procedural Physically based BRDF for Real-Time Rendering of Glints
Xavier Chermain, Basile Sauvage, Jean-Michel Dischler, and Carsten Dachsbacher
A Bayesian Inference Framework for Procedural Material Parameter Estimation
Yu Guo, Milos Hasan, Lingqi Yan, and Shuang Zhao
Recognition
SRF-Net: Spatial Relationship Feature Network for Tooth Point Cloud Classification
Qian Ma, Guangshun Wei, Yuanfeng Zhou, Xiao Pan, Shiqing Xin, and Wenping Wang
Semi-Supervised 3D Shape Recognition via Multimodal Deep Co-training
Mofei Song, Yu Liu, and Xiao Fan Liu
The Layerizing VoxPoint Annular Convolutional Network for 3D Shape Classification
Tong Wang, Wenyuan Tao, Chung-Ming Own, Xiantuo Lou, and Yuehua Zhao
SRNet: A 3D Scene Recognition Network using Static Graph and Dense Semantic Fusion
Zhaoxin Fan, Hongyan Liu, Jun He, Qi Sun, and Xiaoyong Du
A Graph-based One-Shot Learning Method for Point Cloud Recognition
Zhaoxin Fan, Hongyan Liu, Jun He, Qi Sun, and Xiaoyong Du
Human Pose
Human Pose Transfer by Adaptive Hierarchical Deformation
Jinsong Zhang, Xingzi Liu, and Kun Li
Personalized Hand Modeling from Multiple Postures with Multi-View Color Images
Yangang Wang, Ruting Rao, and Changqing Zou
Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering
Min Wang, Feng Qiu, Wentao Liu, Chen Qian, Xiaowei Zhou, and Lizhuang Ma
PointSkelCNN: Deep Learning-Based 3D Human Skeleton Extraction from Point Clouds
Hongxing Qin, Songshan Zhang, Qihuang Liu, Li Chen, and Baoquan Chen
FAKIR: An Algorithm for Revealing the Anatomy and Pose of Statues from Raw Point Sets
Tong Fu, Raphaelle Chaine, and Julie Digne
Tracking and Saliency
Learning Target-Adaptive Correlation Filters for Visual Tracking
Ying She, Yang Yi, and Jialiang Gu
An Occlusion-aware Edge-Based Method for Monocular 3D Object Tracking using Edge Confidence
Hong Huang, Fan Zhong, Yuqing Sun, and Xueying Qin
Coarse to Fine: Weak Feature Boosting Network for Salient Object Detection
Chenhao Zhang, Shanshan Gao, Xiao Pan, Yuting Wang, and Yuanfeng Zhou
Vision Meets Graphics
Generating High-quality Superpixels in Textured Images
Zhe Zhang, Panpan Xu, Jian Chang, Wencheng Wang, Chong Zhao, and Jian Jun Zhang
InstanceFusion: Real-time Instance-level 3D Reconstruction Using a Single RGBD Camera
Feixiang Lu, Haotian Peng, Hongyu Wu, Jun Yang, Xinhang Yang, Ruizhi Cao, Liangjun Zhang, Ruigang Yang, and Bin Zhou
Weakly Supervised Part-wise 3D Shape Reconstruction from Single-View RGB Images
Chengjie Niu, Yang Yu, Zhenwei Bian, Jun Li, and Kai Xu
Deep Separation of Direct and Global Components from a Single Photograph under Structured Lighting
Zhaoliang Duan, James Bieron, and Pieter Peers
Image Restoration
Pixel-wise Dense Detector for Image Inpainting
Ruisong Zhang, Weize Quan, Baoyuan Wu, Zhifeng Li, and Dong-Ming Yan
CLA-GAN: A Context and Lightness Aware Generative Adversarial Network for Shadow Removal
Ling Zhang, Chengjiang Long, Qingan Yan, Xiaolong Zhang, and Chunxia Xiao
Not All Areas Are Equal: A Novel Separation-Restoration-Fusion Network for Image Raindrop Removal
Dongdong Ren, Jinbao Li, Meng Han, and Minglei Shu
SCGA-Net: Skip Connections Global Attention Network for Image Restoration
Dongdong Ren, Jinbao Li, Meng Han, and Minglei Shu
Image Manipulation
Diversifying Semantic Image Synthesis and Editing via Class- and Layer-wise VAEs
Yuki Endo and Yoshihiro Kanamori
Simultaneous Multi-Attribute Image-to-Image Translation Using Parallel Latent Transform Networks
Sen-Zhe Xu and Yu-Kun Lai
Interactive Design and Preview of Colored Snapshots of Indoor Scenes
Qiang Fu, Hai Yan, Hongbo Fu, and Xueming Li
A Multi-Person Selfie System via Augmented Reality
Jie Lin and Chuan-Kai Yang
Multi-scale Information Assembly for Image Matting
Yu Qiao, Yuhao Liu, Qiang Zhu, Xin Yang, Yuxin Wang, Qiang Zhang, and Xiaopeng Wei
Stylized Graphics
StyleProp: Real-time Example-based Stylization of 3D Models
Filip Hauptfleisch, Ondrej Texler, Aneta Texler, Jaroslav Krivánek, and Daniel Sýkora
Two-stage Photograph Cartoonization via Line Tracing
Simin Li, Qiang Wen, Shuang Zhao, Zixun Sun, and Shengfeng He
Colorization of Line Drawings with Empty Pupils
Kenta Akita, Yuki Morimoto, and Reiji Tsuruno
Visualization and Interaction
RadEx: Integrated Visual Exploration of Multiparametric Studies for Radiomic Tumor Profiling
Eric Mörth, Kari Wagner-Larsen, Erlend Hodneland, Camilla Krakstad, Ingfrid S. Haldorsen, Stefan Bruckner, and Noeska N. Smit
Slice and Dice: A Physicalization Workflow for Anatomical Edutainment
Renata Georgia Raidou, Eduard Gröller, and Hsiang-Yun Wu
Visual Analytics in Dental Aesthetics
Aleksandr Amirkhanov, Matthias Bernhard, Alexey Karimov, Sabine Stiller, Andreas Geier, Eduard Gröller, and Gabriel Mistelbauer

BibTeX (39-Issue 7)
                
@article{10.1111:cgf.14122,
  journal = {Computer Graphics Forum},
  title = {{Memory-Efficient Bijective Parameterizations of Very-Large-Scale Models}},
  author = {Ye, Chunyang and Su, Jian-Ping and Liu, Ligang and Fu, Xiao-Ming},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14122}
}

@article{10.1111:cgf.14123,
  journal = {Computer Graphics Forum},
  title = {{Practical Fabrication of Discrete Chebyshev Nets}},
  author = {Liu, Hao-Yu and Liu, Zhong-Yuan and Zhao, Zheng-Yu and Liu, Ligang and Fu, Xiao-Ming},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14123}
}

@article{10.1111:cgf.14124,
  journal = {Computer Graphics Forum},
  title = {{A Deep Residual Network for Geometric Decontouring}},
  author = {Ji, Zhongping and Zhou, Chengqin and Zhang, Qiankan and Zhang, Yu-Wei and Wang, Wenping},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14124}
}

@article{10.1111:cgf.14125,
  journal = {Computer Graphics Forum},
  title = {{Robust Computation of 3D Apollonius Diagrams}},
  author = {Wang, Peihui and Yuan, Na and Ma, Yuewen and Xin, Shiqing and He, Ying and Chen, Shuangmin and Xu, Jian and Wang, Wenping},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14125}
}

@article{10.1111:cgf.14126,
  journal = {Computer Graphics Forum},
  title = {{Image-Driven Furniture Style for Interactive 3D Scene Modeling}},
  author = {Weiss, Tomer and Yildiz, Ilkay and Agarwal, Nitin and Ataer-Cansizoglu, Esra and Choi, Jae-Woo},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14126}
}

@article{10.1111:cgf.14127,
  journal = {Computer Graphics Forum},
  title = {{Adjustable Constrained Soft-Tissue Dynamics}},
  author = {Wang, Bohan and Zheng, Mianlun and Barbic, Jernej},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14127}
}

@article{10.1111:cgf.14128,
  journal = {Computer Graphics Forum},
  title = {{Learning Elastic Constitutive Material and Damping Models}},
  author = {Wang, Bin and Deng, Yuanmin and Kry, Paul and Ascher, Uri and Huang, Hui and Chen, Baoquan},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14128}
}

@article{10.1111:cgf.14129,
  journal = {Computer Graphics Forum},
  title = {{Fracture Patterns Design for Anisotropic Models with the Material Point Method}},
  author = {Cao, Wei and Lyu, Luan and Ren, Xiaohua and Zhang, Bob and Yang, Zhixin and Wu, Enhua},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14129}
}

@article{10.1111:cgf.14130,
  journal = {Computer Graphics Forum},
  title = {{A Novel Plastic Phase-Field Method for Ductile Fracture with GPU Optimization}},
  author = {Zhao, Zipeng and Huang, Kemeng and Li, Chen and Wang, Changbo and Qin, Hong},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14130}
}

@article{10.1111:cgf.14131,
  journal = {Computer Graphics Forum},
  title = {{Simulation of Arbitrarily-shaped Magnetic Objects}},
  author = {Kim, Seung-wook and Han, JungHyun},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14131}
}

@article{10.1111:cgf.14133,
  journal = {Computer Graphics Forum},
  title = {{Cosserat Rod with rh-Adaptive Discretization}},
  author = {Wen, Jiahao and Chen, Jiong and Umetani, Nobuyuki and Bao, Hujun and Huang, Jin},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14133}
}

@article{10.1111:cgf.14132,
  journal = {Computer Graphics Forum},
  title = {{Semi-analytical Solid Boundary Conditions for Free Surface Flows}},
  author = {Chang, Yue and Liu, Shusen and He, Xiaowei and Li, Sheng and Wang, Guoping},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14132}
}

@article{10.1111:cgf.14134,
  journal = {Computer Graphics Forum},
  title = {{Fast Out-of-Core Octree Generation for Massive Point Clouds}},
  author = {Schütz, Markus and Ohrhallinger, Stefan and Wimmer, Michael},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14134}
}

@article{10.1111:cgf.14135,
  journal = {Computer Graphics Forum},
  title = {{Real Time Multiscale Rendering of Dense Dynamic Stackings}},
  author = {Michel, Élie and Boubekeur, Tamy},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14135}
}

@article{10.1111:cgf.14136,
  journal = {Computer Graphics Forum},
  title = {{Automatic Band-Limited Approximation of Shaders Using Mean-Variance Statistics in Clamped Domain}},
  author = {Li, Shi and Wang, Rui and Huo, Yuchi and Zheng, Wenting and Hua, Wei and Bao, Hujun},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14136}
}

@article{10.1111:cgf.14137,
  journal = {Computer Graphics Forum},
  title = {{Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering}},
  author = {Xu, Zilin and Sun, Qiang and Wang, Lu and Xu, Yanning and Wang, Beibei},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14137}
}

@article{10.1111:cgf.14138,
  journal = {Computer Graphics Forum},
  title = {{Next Event Estimation++: Visibility Mapping for Efficient Light Transport Simulation}},
  author = {Guo, Jerry Jinfeng and Eisemann, Martin and Eisemann, Elmar},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14138}
}

@article{10.1111:cgf.14139,
  journal = {Computer Graphics Forum},
  title = {{Two-stage Resampling for Bidirectional Path Tracing with Multiple Light Sub-paths}},
  author = {Nabata, Kosuke and Iwasaki, Kei and Dobashi, Yoshinori},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14139}
}

@article{10.1111:cgf.14140,
  journal = {Computer Graphics Forum},
  title = {{Computing the Bidirectional Scattering of a Microstructure Using Scalar Diffraction Theory and Path Tracing}},
  author = {Falster, Viggo and Jarabo, Adrián and Frisvad, Jeppe Revall},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14140}
}

@article{10.1111:cgf.14141,
  journal = {Computer Graphics Forum},
  title = {{Procedural Physically based BRDF for Real-Time Rendering of Glints}},
  author = {Chermain, Xavier and Sauvage, Basile and Dischler, Jean-Michel and Dachsbacher, Carsten},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14141}
}

@article{10.1111:cgf.14142,
  journal = {Computer Graphics Forum},
  title = {{A Bayesian Inference Framework for Procedural Material Parameter Estimation}},
  author = {Guo, Yu and Hasan, Milos and Yan, Lingqi and Zhao, Shuang},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14142}
}

@article{10.1111:cgf.14143,
  journal = {Computer Graphics Forum},
  title = {{SRF-Net: Spatial Relationship Feature Network for Tooth Point Cloud Classification}},
  author = {Ma, Qian and Wei, Guangshun and Zhou, Yuanfeng and Pan, Xiao and Xin, Shiqing and Wang, Wenping},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14143}
}

@article{10.1111:cgf.14145,
  journal = {Computer Graphics Forum},
  title = {{The Layerizing VoxPoint Annular Convolutional Network for 3D Shape Classification}},
  author = {Wang, Tong and Tao, Wenyuan and Own, Chung-Ming and Lou, Xiantuo and Zhao, Yuehua},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14145}
}

@article{10.1111:cgf.14146,
  journal = {Computer Graphics Forum},
  title = {{SRNet: A 3D Scene Recognition Network using Static Graph and Dense Semantic Fusion}},
  author = {Fan, Zhaoxin and Liu, Hongyan and He, Jun and Sun, Qi and Du, Xiaoyong},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14146}
}

@article{10.1111:cgf.14144,
  journal = {Computer Graphics Forum},
  title = {{Semi-Supervised 3D Shape Recognition via Multimodal Deep Co-training}},
  author = {Song, Mofei and Liu, Yu and Liu, Xiao Fan},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14144}
}

@article{10.1111:cgf.14147,
  journal = {Computer Graphics Forum},
  title = {{A Graph-based One-Shot Learning Method for Point Cloud Recognition}},
  author = {Fan, Zhaoxin and Liu, Hongyan and He, Jun and Sun, Qi and Du, Xiaoyong},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14147}
}

@article{10.1111:cgf.14149,
  journal = {Computer Graphics Forum},
  title = {{Personalized Hand Modeling from Multiple Postures with Multi-View Color Images}},
  author = {Wang, Yangang and Rao, Ruting and Zou, Changqing},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14149}
}

@article{10.1111:cgf.14148,
  journal = {Computer Graphics Forum},
  title = {{Human Pose Transfer by Adaptive Hierarchical Deformation}},
  author = {Zhang, Jinsong and Liu, Xingzi and Li, Kun},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14148}
}

@article{10.1111:cgf.14151,
  journal = {Computer Graphics Forum},
  title = {{PointSkelCNN: Deep Learning-Based 3D Human Skeleton Extraction from Point Clouds}},
  author = {Qin, Hongxing and Zhang, Songshan and Liu, Qihuang and Chen, Li and Chen, Baoquan},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14151}
}

@article{10.1111:cgf.14150,
  journal = {Computer Graphics Forum},
  title = {{Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering}},
  author = {Wang, Min and Qiu, Feng and Liu, Wentao and Qian, Chen and Zhou, Xiaowei and Ma, Lizhuang},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14150}
}

@article{10.1111:cgf.14153,
  journal = {Computer Graphics Forum},
  title = {{Learning Target-Adaptive Correlation Filters for Visual Tracking}},
  author = {She, Ying and Yi, Yang and Gu, Jialiang},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14153}
}

@article{10.1111:cgf.14152,
  journal = {Computer Graphics Forum},
  title = {{FAKIR: An Algorithm for Revealing the Anatomy and Pose of Statues from Raw Point Sets}},
  author = {Fu, Tong and Chaine, Raphaelle and Digne, Julie},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14152}
}

@article{10.1111:cgf.14155,
  journal = {Computer Graphics Forum},
  title = {{Coarse to Fine: Weak Feature Boosting Network for Salient Object Detection}},
  author = {Zhang, Chenhao and Gao, Shanshan and Pan, Xiao and Wang, Yuting and Zhou, Yuanfeng},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14155}
}

@article{10.1111:cgf.14154,
  journal = {Computer Graphics Forum},
  title = {{An Occlusion-aware Edge-Based Method for Monocular 3D Object Tracking using Edge Confidence}},
  author = {Huang, Hong and Zhong, Fan and Sun, Yuqing and Qin, Xueying},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14154}
}

@article{10.1111:cgf.14156,
  journal = {Computer Graphics Forum},
  title = {{Generating High-quality Superpixels in Textured Images}},
  author = {Zhang, Zhe and Xu, Panpan and Chang, Jian and Wang, Wencheng and Zhao, Chong and Zhang, Jian Jun},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14156}
}

@article{10.1111:cgf.14157,
  journal = {Computer Graphics Forum},
  title = {{InstanceFusion: Real-time Instance-level 3D Reconstruction Using a Single RGBD Camera}},
  author = {Lu, Feixiang and Peng, Haotian and Wu, Hongyu and Yang, Jun and Yang, Xinhang and Cao, Ruizhi and Zhang, Liangjun and Yang, Ruigang and Zhou, Bin},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14157}
}

@article{10.1111:cgf.14158,
  journal = {Computer Graphics Forum},
  title = {{Weakly Supervised Part-wise 3D Shape Reconstruction from Single-View RGB Images}},
  author = {Niu, Chengjie and Yu, Yang and Bian, Zhenwei and Li, Jun and Xu, Kai},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14158}
}

@article{10.1111:cgf.14159,
  journal = {Computer Graphics Forum},
  title = {{Deep Separation of Direct and Global Components from a Single Photograph under Structured Lighting}},
  author = {Duan, Zhaoliang and Bieron, James and Peers, Pieter},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14159}
}

@article{10.1111:cgf.14160,
  journal = {Computer Graphics Forum},
  title = {{Pixel-wise Dense Detector for Image Inpainting}},
  author = {Zhang, Ruisong and Quan, Weize and Wu, Baoyuan and Li, Zhifeng and Yan, Dong-Ming},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14160}
}

@article{10.1111:cgf.14162,
  journal = {Computer Graphics Forum},
  title = {{Not All Areas Are Equal: A Novel Separation-Restoration-Fusion Network for Image Raindrop Removal}},
  author = {Ren, Dongdong and Li, Jinbao and Han, Meng and Shu, Minglei},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14162}
}

@article{10.1111:cgf.14161,
  journal = {Computer Graphics Forum},
  title = {{CLA-GAN: A Context and Lightness Aware Generative Adversarial Network for Shadow Removal}},
  author = {Zhang, Ling and Long, Chengjiang and Yan, Qingan and Zhang, Xiaolong and Xiao, Chunxia},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14161}
}

@article{10.1111:cgf.14163,
  journal = {Computer Graphics Forum},
  title = {{SCGA-Net: Skip Connections Global Attention Network for Image Restoration}},
  author = {Ren, Dongdong and Li, Jinbao and Han, Meng and Shu, Minglei},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14163}
}

@article{10.1111:cgf.14164,
  journal = {Computer Graphics Forum},
  title = {{Diversifying Semantic Image Synthesis and Editing via Class- and Layer-wise VAEs}},
  author = {Endo, Yuki and Kanamori, Yoshihiro},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14164}
}

@article{10.1111:cgf.14165,
  journal = {Computer Graphics Forum},
  title = {{Simultaneous Multi-Attribute Image-to-Image Translation Using Parallel Latent Transform Networks}},
  author = {Xu, Sen-Zhe and Lai, Yu-Kun},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14165}
}

@article{10.1111:cgf.14166,
  journal = {Computer Graphics Forum},
  title = {{Interactive Design and Preview of Colored Snapshots of Indoor Scenes}},
  author = {Fu, Qiang and Yan, Hai and Fu, Hongbo and Li, Xueming},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14166}
}

@article{10.1111:cgf.14167,
  journal = {Computer Graphics Forum},
  title = {{A Multi-Person Selfie System via Augmented Reality}},
  author = {Lin, Jie and Yang, Chuan-Kai},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14167}
}

@article{10.1111:cgf.14168,
  journal = {Computer Graphics Forum},
  title = {{Multi-scale Information Assembly for Image Matting}},
  author = {Qiao, Yu and Liu, Yuhao and Zhu, Qiang and Yang, Xin and Wang, Yuxin and Zhang, Qiang and Wei, Xiaopeng},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14168}
}

@article{10.1111:cgf.14169,
  journal = {Computer Graphics Forum},
  title = {{StyleProp: Real-time Example-based Stylization of 3D Models}},
  author = {Hauptfleisch, Filip and Texler, Ondrej and Texler, Aneta and Krivánek, Jaroslav and Sýkora, Daniel},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14169}
}

@article{10.1111:cgf.14170,
  journal = {Computer Graphics Forum},
  title = {{Two-stage Photograph Cartoonization via Line Tracing}},
  author = {Li, Simin and Wen, Qiang and Zhao, Shuang and Sun, Zixun and He, Shengfeng},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14170}
}

@article{10.1111:cgf.14171,
  journal = {Computer Graphics Forum},
  title = {{Colorization of Line Drawings with Empty Pupils}},
  author = {Akita, Kenta and Morimoto, Yuki and Tsuruno, Reiji},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14171}
}

@article{10.1111:cgf.14172,
  journal = {Computer Graphics Forum},
  title = {{RadEx: Integrated Visual Exploration of Multiparametric Studies for Radiomic Tumor Profiling}},
  author = {Mörth, Eric and Wagner-Larsen, Kari and Hodneland, Erlend and Krakstad, Camilla and Haldorsen, Ingfrid S. and Bruckner, Stefan and Smit, Noeska N.},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14172}
}

@article{10.1111:cgf.14173,
  journal = {Computer Graphics Forum},
  title = {{Slice and Dice: A Physicalization Workflow for Anatomical Edutainment}},
  author = {Raidou, Renata Georgia and Gröller, Eduard and Wu, Hsiang-Yun},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14173}
}

@article{10.1111:cgf.14174,
  journal = {Computer Graphics Forum},
  title = {{Visual Analytics in Dental Aesthetics}},
  author = {Amirkhanov, Aleksandr and Bernhard, Matthias and Karimov, Alexey and Stiller, Sabine and Geier, Andreas and Gröller, Eduard and Mistelbauer, Gabriel},
  year = {2020},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14174}
}

Recent Submissions

  • Item
    Pacific Graphics 2020 - CGF 39-7: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
  • Item
    Memory-Efficient Bijective Parameterizations of Very-Large-Scale Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Ye, Chunyang; Su, Jian-Ping; Liu, Ligang; Fu, Xiao-Ming
    As high-precision 3D scanners become more and more widespread, it is easy to obtain very-large-scale meshes containing at least millions of vertices. However, processing these very-large-scale meshes is still a very challenging task due to memory limitations. This paper focuses on a fundamental geometric processing task, i.e., bijective parameterization construction. To this end, we present a spline-enhanced method to compute bijective and low-distortion parameterizations for very-large-scale disk topology meshes. Instead of computing descent directions using the mesh vertices as variables, we estimate descent directions for each vertex by optimizing a proxy energy defined in spline spaces. Since the spline functions are determined by a small set of control points, this significantly decreases the memory requirement. In addition, a divide-and-conquer method is proposed to obtain bijective initializations, and a submesh-based optimization strategy is developed to reduce distortion further. The capability and feasibility of our method are demonstrated over various complex models. Compared to the existing methods for bijective parameterizations of very-large-scale meshes, our method exhibits better scalability and requires much less memory.
  • Item
    Practical Fabrication of Discrete Chebyshev Nets
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Liu, Hao-Yu; Liu, Zhong-Yuan; Zhao, Zheng-Yu; Liu, Ligang; Fu, Xiao-Ming
    We propose a computational and practical technique to allow home users to fabricate discrete Chebyshev nets for various 3D models. The success of our method relies on two key components. The first one is a novel and simple method to approximate discrete integrable, unit-length, and angle-bounded frame fields, used to model discrete Chebyshev nets. Central to our field generation process is an alternating algorithm that takes turns executing one pass to enforce integrability and another pass to approach unit length while bounding angles. The second is a practical fabrication specification. The discrete Chebyshev net is first partitioned into a set of patches to facilitate manufacturing. Then, each patch is assigned a specification on pulling, bending, and folding to fit the nets. We demonstrate the capability and feasibility of our method on various complex models.
  • Item
    A Deep Residual Network for Geometric Decontouring
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Ji, Zhongping; Zhou, Chengqin; Zhang, Qiankan; Zhang, Yu-Wei; Wang, Wenping
    Grayscale images are intensively used to construct or represent geometric details in the field of computer graphics. In practice, the displacement mapping technique often allows an 8-bit grayscale image input to manipulate the position of vertices. Human eyes are insensitive to the change of intensity between consecutive gray levels, so a grayscale image only provides 256 levels of luminance. However, when the luminances are converted into geometric elements, certain artifacts such as false contours become obvious. In this paper, we formulate geometric decontouring as a constrained optimization problem from a geometric perspective. Instead of directly solving this optimization problem, we propose a data-driven method to learn a residual mapping function. We design a Geometric DeContouring Network (GDCNet) to eliminate the false contours effectively. To this end, we adopt a ResNet-based network structure and a normal-based loss function. Extensive experimental results demonstrate that accurate reconstructions can be achieved effectively. Our method can be used as a compressed relief representation and enhances the traditional displacement mapping technique to efficiently augment 3D models with high-quality geometric details using grayscale images.
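    A minimal numpy sketch (synthetic 1D height field, not from the paper) of the false-contour problem: quantizing a smooth displacement to 8 bits leaves a stair-step residual, and this residual is exactly what a geometric decontouring network must recover.

        import numpy as np

        # A smooth synthetic height field in [0, 1].
        x = np.linspace(0.0, 1.0, 512)
        height = 0.5 + 0.5 * np.sin(2.0 * np.pi * x)

        # Stored as an 8-bit grayscale displacement map: only 256 levels survive.
        gray8 = np.round(height * 255.0).astype(np.uint8)
        reconstructed = gray8.astype(np.float64) / 255.0

        # The quantization residual is what a decontouring network must predict.
        residual = height - reconstructed
        print("max quantization error:", np.abs(residual).max())  # about 1/510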
  • Item
    Robust Computation of 3D Apollonius Diagrams
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Peihui; Yuan, Na; Ma, Yuewen; Xin, Shiqing; He, Ying; Chen, Shuangmin; Xu, Jian; Wang, Wenping
    Apollonius diagrams, also known as additively weighted Voronoi diagrams, are an extension of Voronoi diagrams, where the weighted distance is defined by the Euclidean distance minus the weight. The bisectors of Apollonius diagrams have a hyperbolic form, which is fundamentally different from traditional Voronoi diagrams and power diagrams. Though robust solvers are available for computing 2D Apollonius diagrams, there is no practical approach for the 3D counterpart. In this paper, we systematically analyze the structural features of 3D Apollonius diagrams, and then develop a fast algorithm for robustly computing Apollonius diagrams in 3D. Our algorithm consists of vertex location, edge tracing and face extraction, among which the key step is to adaptively subdivide the initial large box into a set of sufficiently small boxes such that each box contains at most one Apollonius vertex. Finally, we use centroidal Voronoi tessellation (CVT) to discretize the curved bisectors with well-tessellated triangle meshes. We validate the effectiveness and robustness of our algorithm through extensive evaluation and experiments. We also demonstrate an application to computing the centroidal Apollonius diagram.
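    The additively weighted distance that defines these diagrams is easy to state in code. A small sketch with hypothetical sites follows; it only illustrates the distance definition, not the paper's vertex location, edge tracing and face extraction pipeline.

        import math

        # Each site is (center, weight); the Apollonius distance is the
        # Euclidean distance minus the weight, as defined in the abstract.
        sites = [((0.0, 0.0, 0.0), 1.0), ((3.0, 0.0, 0.0), 0.5)]

        def apollonius_distance(x, center, weight):
            return math.dist(x, center) - weight

        def owning_site(x):
            # The Apollonius cell of a site contains the points for which
            # that site minimizes the weighted distance.
            return min(range(len(sites)),
                       key=lambda i: apollonius_distance(x, *sites[i]))

        print(owning_site((1.0, 0.5, 0.0)))  # 0: the heavier site wins here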
  • Item
    Image-Driven Furniture Style for Interactive 3D Scene Modeling
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Weiss, Tomer; Yildiz, Ilkay; Agarwal, Nitin; Ataer-Cansizoglu, Esra; Choi, Jae-Woo
    Creating realistic styled spaces is a complex task, which involves design know-how for what furniture pieces go well together. Interior style follows abstract rules involving color, geometry and other visual elements. Following such rules, users manually select similar-style items from large repositories of 3D furniture models, a process which is both laborious and time-consuming. We propose a method for fast-tracking style-similarity tasks, by learning furniture style-compatibility from interior scene images. Such images contain more style information than images depicting a single piece of furniture. To understand style, we train a deep learning network on a classification task. Based on image embeddings extracted from our network, we measure the stylistic compatibility of furniture. We demonstrate our method with several 3D model style-compatibility results, and with an interactive system for modeling style-consistent scenes.
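    As an illustration of scoring style compatibility from image embeddings, a small sketch with placeholder vectors; the embeddings would come from the paper's trained classification network, and cosine similarity is an assumed choice of metric, not necessarily the paper's.

        import numpy as np

        def style_compatibility(embedding_a, embedding_b):
            # Cosine similarity between two furniture image embeddings.
            a = embedding_a / np.linalg.norm(embedding_a)
            b = embedding_b / np.linalg.norm(embedding_b)
            return float(a @ b)

        chair = np.random.rand(128)  # hypothetical 128-D embeddings
        sofa = np.random.rand(128)
        print(style_compatibility(chair, sofa))  # closer to 1 = more compatible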
  • Item
    Adjustable Constrained Soft-Tissue Dynamics
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Bohan; Zheng, Mianlun; Barbic, Jernej
    Physically based simulation is often combined with geometric mesh animation to add realistic soft-body dynamics to virtual characters. This is commonly done using constraint-based simulation whereby a soft-tissue simulation is constrained to geometric animation of a subpart (or otherwise proxy representation) of the character. We observe that standard constraint-based simulation suffers from an important flaw that limits the expressiveness of soft-body dynamics. Namely, under correct physics, the frequency and amplitude of soft-tissue dynamics arising from constraints ("inertial amplitude") are coupled, and cannot be adjusted independently merely by adjusting the material properties of the model. This means that the space of physically based simulations is inherently limited and cannot capture all effects typically expected by computer animators. For example, animators need the ability to adjust the frequency, inertial amplitude, gravity sag and damping properties of the virtual character, independently from each other, as these are the primary visual characteristics of the soft-tissue dynamics. We demonstrate that independence can be achieved by transforming the equations of motion into a non-inertial reference coordinate frame, then scaling the resulting inertial forces, and then converting the equations of motion back to the inertial frame. Such scaling of inertia makes it possible for the animator to set the character's inertial amplitude independently from frequency. We also provide exact controls for the amount of the character's gravity sag, and the damping properties. In our examples, we use linear blend skinning and pose-space deformation for geometric mesh animation, and the Finite Element Method for soft-body constrained simulation; but our idea of scaling inertial forces is general and applicable to other animation and simulation methods. We demonstrate our technique on several character examples.
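    A schematic one-degree-of-freedom sketch of scaling inertial forces, under strong simplifications (scalar state, explicit Euler, hypothetical parameter names). It only illustrates why scaling the fictitious force changes the inertial amplitude while leaving the stiffness, and hence the oscillation frequency, untouched.

        def step_in_animated_frame(x, v, a_frame, inertia_scale,
                                   stiffness, mass, dt):
            # x, v: tissue offset and velocity relative to the animation.
            # a_frame: acceleration of the animated reference frame.
            elastic = -stiffness * x                      # restoring force
            fictitious = -inertia_scale * mass * a_frame  # scaled inertial force
            a = (elastic + fictitious) / mass
            return x + dt * v, v + dt * a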
  • Item
    Learning Elastic Constitutive Material and Damping Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Bin; Deng, Yuanmin; Kry, Paul; Ascher, Uri; Huang, Hui; Chen, Baoquan
    Commonly used linear and nonlinear constitutive material models in deformation simulation contain many simplifications and only cover a tiny part of possible material behavior. In this work we propose a framework for learning customized models of deformable materials from example surface trajectories. The key idea is to iteratively improve a correction to a nominal model of the elastic and damping properties of the object, which allows new forward simulations with the learned correction to more accurately predict the behavior of a given soft object. Space-time optimization is employed to identify gentle control forces with which we extract necessary data for model inference and to finally encapsulate the material correction into a compact parametric form. Furthermore, a patch-based position constraint is proposed to tackle the challenge of handling incomplete and noisy observations arising in real-world examples. We demonstrate the effectiveness of our method with a set of synthetic examples, as well as with data captured from real-world homogeneous elastic objects.
  • Item
    Fracture Patterns Design for Anisotropic Models with the Material Point Method
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Cao, Wei; Lyu, Luan; Ren, Xiaohua; Zhang, Bob; Yang, Zhixin; Wu, Enhua
    Physically plausible fracture animation is a challenging topic in computer graphics. Most of the existing approaches focus on the fracture of isotropic materials. We propose a frame-field method for the design of anisotropic brittle fracture patterns. In this case, the material anisotropy is determined by two parts: anisotropic elastic deformation and anisotropic damage mechanics. For the elastic deformation, we reformulate the constitutive model of hyperelastic materials to achieve anisotropy by adding additional energy density functions in particular directions. For the damage evolution, we propose an improved phase-field fracture method to simulate the anisotropy by designing a deformation-aware second-order structural tensor. These two parts can present elastic anisotropy and fracture anisotropy independently, or they can be well coupled together to exhibit rich crack effects. To ensure the flexibility of simulation, we further introduce a frame-field concept to assist in setting local anisotropy, similar to the fiber orientation of textiles. For the discretization of the deformable object, we adopt a novel Material Point Method (MPM) according to its fracture-friendly nature. We also give some design criteria for anisotropic models through comparative analysis. Experiments show that our anisotropic method integrates well with the MPM scheme for simulating the dynamic fracture behavior of anisotropic materials.
  • Item
    A Novel Plastic Phase-Field Method for Ductile Fracture with GPU Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhao, Zipeng; Huang, Kemeng; Li, Chen; Wang, Changbo; Qin, Hong
    We propose a novel plastic phase-field (PPF) method to efficiently simulate ductile fracture with GPU optimization. At the theoretical level of physically-based modeling and simulation, our PPF approach assumes the fracture sensitivity of the material increases with the plastic strain accumulation. As a result, we first develop a hardening-related fracture toughness function towards phase-field evolution. Second, we follow the associative flow rule and adopt a novel degraded von Mises yield criterion. In this way, we establish the tight coupling of the phase-field and plastic treatment, with which our PPF method can present distinct elastoplasticity, necking, and fracture characteristics during ductile fracture simulation. At the numerical level towards GPU optimization, we further devise an advanced parallel framework, which takes full advantage of the hierarchical architecture. Our strategy dramatically enhances the computational efficiency of preprocessing and phase-field evolution for our PPF with the material point method (MPM). Based on our extensive experiments on a variety of benchmarks, our novel method reaches a 1.56x speedup over the primary GPU MPM. Finally, our comprehensive simulation results have confirmed that this new PPF method can efficiently and realistically simulate complex ductile fracture phenomena in 3D interactive graphics and animation.
  • Item
    Simulation of Arbitrarily-shaped Magnetic Objects
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Kim, Seung-wook; Han, JungHyun
    We propose a novel method for simulating rigid magnets in a stable way. It is based on analytic solutions of the magnetic vector potential and flux density, which make the magnetic forces and torques calculated using them seldom diverge. Therefore, our magnet simulations remain stable even when magnets are in close proximity or penetrate each other. Thanks to this stability, our method can simulate magnets of any shape. Another strength of our method is that the time complexities for computing the magnetic forces and torques are significantly reduced compared to previous methods. Our method is easily integrated with classic rigid-body simulators. The experimental results presented in this paper demonstrate the stability and efficiency of our method.
  • Item
    Cosserat Rod with rh-Adaptive Discretization
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wen, Jiahao; Chen, Jiong; Umetani, Nobuyuki; Bao, Hujun; Huang, Jin
    Rod-like one-dimensional elastic objects often exhibit complex behaviors that pose great challenges to discretization methods pursuing a faithful simulation. By only moving a small portion of material points, the Eulerian-on-Lagrangian (EoL) method already shows great adaptivity to handle sharp contact, but it is still far from enough to reproduce the rich and complex geometry details arising in simulations. In this paper, we extend the discrete configuration space by unifying all Lagrangian and EoL nodes in one representation for even more adaptivity, with every sample assigned a dynamic material coordinate. However, this extension immediately introduces much more redundancy into the dynamic system. Therefore, we propose an additional energy to control the spatial distribution of all material points, seeking to space them equally with respect to a curvature-based density field used as a monitor. This flexible approach can effectively constrain the motion of material points to resolve numerical degeneracy, while simultaneously enabling them to slide notably inside the parametric domain to account for the shape parameterization. In addition, to respond accurately to sharp contact, our method can also insert or remove nodes online and adjust the energy stiffness to suppress possible jittering artifacts that could be excited in a stiff system. As a result of this hybrid rh-adaption, our proposed method is capable of reproducing many realistic rod dynamics, such as excessive bending, twisting and knotting, while only using a limited number of elements.
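    The monitor-function idea, spacing samples equally with respect to a curvature-based density, can be sketched as inverting a cumulative density. The helper below is a hypothetical 1D illustration, not the paper's energy formulation.

        import numpy as np

        def equidistribute(n, density, quadrature=1024):
            # Place n parameters in [0, 1] so that equal amounts of the
            # density (e.g. derived from curvature) fall between samples.
            s = np.linspace(0.0, 1.0, quadrature)
            cdf = np.cumsum(density(s))
            cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
            return np.interp(np.linspace(0.0, 1.0, n), cdf, s)

        # Samples concentrate near s = 0.5, where this toy density peaks.
        pts = equidistribute(9, lambda s: 1.0 + 10.0 * np.exp(-100.0 * (s - 0.5) ** 2))
        print(np.round(pts, 3))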
  • Item
    Semi-analytical Solid Boundary Conditions for Free Surface Flows
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Chang, Yue; Liu, Shusen; He, Xiaowei; Li, Sheng; Wang, Guoping
    The treatment of solid boundary conditions remains one of the most challenging parts in the SPH method. We present a semi-analytical approach to handle complex solid boundaries of arbitrary shape. Instead of calculating a renormalizing factor for particles near the boundary, we propose to calculate the volume integral inside the solid boundary under the local spherical frame of a particle. By converting the volume integral into a surface integral, a computer-aided design (CAD) mesh file representing the boundary can be naturally integrated for particle simulations. To accelerate the search for a particle's neighboring triangles, a uniform grid is applied to store indices of intersecting triangles. The new semi-analytical solid boundary handling approach is integrated into a position-based method [MM13] as well as a projection-based method [HWW*20] to demonstrate its effectiveness in handling complex boundaries. Experiments show that our method is able to achieve comparable results with those simulated using ghost particles. In addition, since our method requires no boundary particles for deforming surfaces, our method is flexible enough to handle complex solid boundaries, including sharp corners and shells.
  • Item
    Fast Out-of-Core Octree Generation for Massive Point Clouds
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Schütz, Markus; Ohrhallinger, Stefan; Wimmer, Michael
    We propose an efficient out-of-core octree generation method for arbitrarily large point clouds. It utilizes a hierarchical counting sort to quickly split the point cloud into small chunks, which are then processed in parallel. Levels of detail are generated by subsampling the full data set bottom up using one of multiple exchangeable sampling strategies. We introduce a fast hierarchical approximate blue-noise strategy and compare it to a uniform random sampling strategy. The throughput, including out-of-core access to disk, generating the octree, and writing the final result to disk, is about an order of magnitude faster than the state of the art, and reaches up to around 6 million points per second for the blue-noise approach and up to around 9 million points per second for the uniform random approach on modern SSDs.
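    A toy, in-core sketch of the chunking stage, assuming numpy and a single grid level (the paper's counting sort is hierarchical and out-of-core): points are keyed by grid cell and grouped so that each non-empty cell becomes a chunk for parallel processing.

        import numpy as np

        def split_into_chunks(points, grid_res):
            # Key every point by its cell in a grid over the bounding box.
            lo, hi = points.min(axis=0), points.max(axis=0)
            cell = ((points - lo) / (hi - lo + 1e-9) * grid_res).astype(int)
            cell = np.minimum(cell, grid_res - 1)
            keys = (cell[:, 0] * grid_res + cell[:, 1]) * grid_res + cell[:, 2]
            # Group points by key (a counting sort in spirit; argsort here).
            order = np.argsort(keys, kind="stable")
            counts = np.bincount(keys, minlength=grid_res ** 3)
            offsets = np.concatenate(([0], np.cumsum(counts)))
            # Chunk i occupies sorted_points[offsets[i]:offsets[i + 1]].
            return points[order], offsets

        pts = np.random.rand(100_000, 3)
        sorted_pts, offsets = split_into_chunks(pts, grid_res=8)
        print("non-empty cells:", int((np.diff(offsets) > 0).sum()))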
  • Item
    Real Time Multiscale Rendering of Dense Dynamic Stackings
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Michel, Élie; Boubekeur, Tamy
    Dense dynamic aggregates of similar elements are frequent in natural phenomena and challenging to render under full real-time constraints. The optimal representation to render them changes drastically depending on the distance at which they are observed, ranging from sets of detailed textured meshes for near views to point clouds for distant ones. Our multiscale representation uses impostors to achieve the mid-range transition from mesh-based to point-based scales. To ensure a visual continuum, the impostor model should match as closely as possible the mesh on one side, and reduce to a single pixel response that equals point rendering on the other. In this paper, we propose a model based on rich spherical impostors, able to combine precomputed as well as dynamic procedural data, and offering seamless transitions from close instanced meshes to distant points. Our approach is architected around an on-the-fly discrimination mechanism and intensively exploits the rough spherical geometry of the impostor proxy. In particular, we propose a new sampling mechanism to reconstruct novel views from the precomputed ones, together with a new conservative occlusion culling method, coupled with a two-pass rendering pipeline leveraging early-Z rejection. As a result, our system scales well and is even able to render sand, while supporting completely dynamic stackings.
  • Item
    Automatic Band-Limited Approximation of Shaders Using Mean-Variance Statistics in Clamped Domain
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Li, Shi; Wang, Rui; Huo, Yuchi; Zheng, Wenting; Hua, Wei; Bao, Hujun
    In this paper, we present a new shader smoothing method to improve the quality and generality of band-limiting shader programs. Previous work [YB18] treats intermediate values in the program as random variables, and utilizes mean and variance statistics to smooth shader programs. In this work, we extend such a band-limiting framework by exploring the observation that one intermediate value in the program is usually computed by a complex composition of functions, where the domain and range of the composited functions heavily impact the statistics of smoothed programs. Accordingly, we propose three new shader smoothing rules for specific compositions of functions that take the domain and range into account, enabling better mean and variance statistics of approximations. Aside from continuous functions, textures, such as color textures or normal maps, are treated as discrete functions with limited domain and range, and can thereby be processed similarly in the newly proposed framework. Experiments show that, compared with previous work, our method is capable of producing smoother shader programs as well as handling a broader set of shader programs.
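    The flavor of tracking mean and variance through a shader can be seen in a first-order (delta-method) sketch. Note that the paper derives per-composition rules that also respect each function's domain and range, which this crude approximation ignores.

        import math

        def propagate(f, dfdx, mean, var):
            # First-order approximation for X ~ (mean, var):
            # E[f(X)] ~= f(mu), Var[f(X)] ~= f'(mu)^2 * var.
            return f(mean), (dfdx(mean) ** 2) * var

        # Band-limit sin(x) over a pixel footprint with std. dev. 0.1.
        mu, var = propagate(math.sin, math.cos, 1.0, 0.1 ** 2)
        print(mu, var)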
  • Item
    Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Xu, Zilin; Sun, Qiang; Wang, Lu; Xu, Yanning; Wang, Beibei
    Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting the smoothness in image space. These methods generate image gradients and solve an image reconstruction problem with the rendered image and the gradient images. Recently, gradient-domain volumetric photon density estimation was proposed for homogeneous participating media. However, its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been exploited for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction of gradient-domain volumetric photon density estimation, more specifically for volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separated auxiliary feature branch, which includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the images on a global scale and preserves the high-frequency details on a small scale. We demonstrate that our network produces a higher-quality result, compared to previous work. Although we only consider volumetric photon mapping, it is straightforward to extend our method to other forms, such as beam radiance estimation.
  • Item
    Next Event Estimation++: Visibility Mapping for Efficient Light Transport Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Guo, Jerry Jinfeng; Eisemann, Martin; Eisemann, Elmar
    Monte-Carlo rendering requires determining the visibility between scene points as the most common and compute-intensive operation to establish paths between camera and light source. Unfortunately, many tests reveal occlusions and the corresponding paths do not contribute to the final image. In this work, we present next event estimation++ (NEE++): a visibility mapping technique to perform visibility tests in a more informed way by caching voxel-to-voxel visibility probabilities. We show two scenarios: Russian-roulette-style rejection of visibility tests and direct importance sampling of the visibility. We show applications to next event estimation and light sampling in a uni-directional path tracer, and light-subpath sampling in bi-directional path tracing. The technique is simple to implement, easy to add to existing rendering systems, and comes at almost no cost, as the required information can be directly extracted from the rendering process itself. It discards up to 80% of visibility tests on average, while reducing variance by ~20% compared to other state-of-the-art light sampling techniques with the same number of samples. It gracefully handles complex scenes with efficiency similar to Metropolis light transport techniques but with a more uniform convergence.
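    A compact sketch of voxel-pair visibility caching with Russian-roulette rejection, using hypothetical interfaces; it omits the reweighting of contributions by the test probability that an unbiased estimator needs, and the real system extracts the probabilities from the rendering process itself.

        import random
        from collections import defaultdict

        class VisibilityCache:
            # Running estimate of how often two voxels see each other.
            def __init__(self):
                self.visible = defaultdict(int)
                self.total = defaultdict(int)

            def record(self, va, vb, was_visible):
                key = (min(va, vb), max(va, vb))
                self.total[key] += 1
                self.visible[key] += int(was_visible)

            def probability(self, va, vb):
                key = (min(va, vb), max(va, vb))
                if self.total[key] == 0:
                    return 1.0  # unknown pairs are always tested
                return self.visible[key] / self.total[key]

        def should_trace_shadow_ray(cache, va, vb):
            # Russian roulette: mostly skip pairs that proved to be occluded.
            return random.random() < max(cache.probability(va, vb), 0.05)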
  • Item
    Two-stage Resampling for Bidirectional Path Tracing with Multiple Light Sub-paths
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Nabata, Kosuke; Iwasaki, Kei; Dobashi, Yoshinori
    Recent advances in bidirectional path tracing (BPT) reveal that the use of multiple light sub-paths and the resampling of a small number of these can improve the efficiency of BPT. By increasing the number of pre-sampled light sub-paths, the possibility of generating light paths that provide large contributions can be better explored, and this can alleviate the correlation of light paths due to the reuse of pre-sampled light sub-paths by all eye sub-paths. The increased number of pre-sampled light sub-paths, however, also incurs a high computational cost. In this paper, we propose a two-stage resampling method for BPT to efficiently handle a large number of pre-sampled light sub-paths. We also derive a weighting function that can treat the changes in path probability due to the two-stage resampling. Our method can handle a two orders of magnitude larger number of pre-sampled light sub-paths than previous methods in equal-time rendering, resulting in stable and better noise reduction than state-of-the-art methods.
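    The resampling itself can be sketched with hypothetical weight functions; the paper's contribution also includes the derived weighting function that accounts for the changed path probabilities, which this sketch omits.

        import random

        def two_stage_resample(candidates, coarse_weight, refined_weight, m):
            # Stage 1: narrow a large pool to m candidates via a cheap weight.
            stage1 = random.choices(candidates,
                                    weights=[coarse_weight(c) for c in candidates],
                                    k=m)
            # Stage 2: pick the final candidate via a more accurate weight.
            return random.choices(stage1,
                                  weights=[refined_weight(c) for c in stage1],
                                  k=1)[0]

        pool = list(range(10_000))  # stand-ins for pre-sampled light sub-paths
        pick = two_stage_resample(pool,
                                  lambda c: 1.0 + (c % 3),
                                  lambda c: 1.0 + (c % 7), m=64)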
  • Item
    Computing the Bidirectional Scattering of a Microstructure Using Scalar Diffraction Theory and Path Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Falster, Viggo; Jarabo, Adrián; Frisvad, Jeppe Revall
    Most models for bidirectional surface scattering by arbitrary explicitly defined microgeometry are either based on geometric optics and include multiple scattering but no diffraction effects, or based on wave optics and include diffraction but no multiple scattering effects. The few exceptions to this tendency are based on rigorous solution of Maxwell's equations and are computationally intractable for surface microgeometries that are tens or hundreds of microns wide. We set up a measurement equation for combining results from single-scattering scalar diffraction theory with multiple-scattering geometric optics using Monte Carlo integration. Since we consider an arbitrary surface microgeometry, our method enables us to compute the expected bidirectional scattering of the metasurfaces with increasingly small details that are seen more and more often in production. In addition, we can take a measured microstructure as input and, for example, compute the difference in bidirectional scattering between a desired surface and a produced surface. In effect, our model can account for both diffraction colors due to wavelength-sized features in the microgeometry and brightening due to multiple scattering. We include scalar diffraction for refraction, and we verify that our model is reasonable by comparing with the rigorous solution for a microsurface with half ellipsoids.
  • Item
    Procedural Physically based BRDF for Real-Time Rendering of Glints
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Chermain, Xavier; Sauvage, Basile; Dischler, Jean-Michel; Dachsbacher, Carsten; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Physically based rendering of glittering surfaces is a challenging problem in computer graphics. Several methods have proposed off-line solutions, but none is dedicated to high-performance graphics. In this work, we propose a novel physically based BRDF for real-time rendering of glints. Our model can reproduce the appearance of sparkling materials (rocks, rough plastics, glitter fabrics, etc.). Compared to the previous real-time method [ZK16], which is not physically based, our BRDF uses normalized NDFs and converges to the standard microfacet BRDF [CT82] for a large number of microfacets. Our method procedurally computes NDFs with hundreds of sharp lobes. It relies on a dictionary of 1D marginal distributions: at each location, two of them are randomly picked and multiplied (to obtain an NDF), rotated (to increase variety), and scaled (to control the standard deviation/roughness). The dictionary is multiscale, does not depend on roughness, and has a low memory footprint (less than 1 MiB).
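    As an illustration of the dictionary idea, the following Python sketch builds a toy dictionary of sharp 1D marginals and evaluates a per-cell product NDF with a random rotation and roughness scaling. The published dictionary is multiscale and more carefully normalized, so treat this as a schematic of the construction, not the published model.

    ```python
    import math
    import numpy as np

    K, N = 64, 256  # dictionary size, 1D table resolution
    rng = np.random.default_rng(7)
    xs = np.linspace(-1.0, 1.0, N)

    # Dictionary of 1D marginals: random mixtures of sharp Gaussian lobes,
    # each normalized to integrate to 1 over [-1, 1].
    dictionary = []
    for _ in range(K):
        d = np.zeros(N)
        for _ in range(int(rng.integers(8, 16))):
            mu, sigma = rng.uniform(-1, 1), rng.uniform(0.01, 0.03)
            d += np.exp(-0.5 * ((xs - mu) / sigma) ** 2)
        dictionary.append(d / np.trapz(d, xs))

    def marginal(d, t):
        """Linearly interpolated lookup of a 1D marginal at t in [-1, 1]."""
        u = float(np.clip((t + 1) * 0.5 * (N - 1), 0, N - 1))
        i, f = int(u), u - int(u)
        return d[i] * (1 - f) + d[min(i + 1, N - 1)] * f

    def glint_ndf(cell, sx, sy, roughness=0.3):
        """Per-texel multi-lobe NDF: hash the cell to pick two marginals,
        rotate the slope frame, scale by roughness, multiply the marginals."""
        h = hash(cell)
        d1, d2 = dictionary[h % K], dictionary[(h // K) % K]
        theta = (h % 360) * math.pi / 180.0   # per-cell random rotation
        c, s = math.cos(theta), math.sin(theta)
        rx = (c * sx - s * sy) / roughness
        ry = (s * sx + c * sy) / roughness
        # product of marginals, with the Jacobian of the roughness scaling
        return marginal(d1, rx) * marginal(d2, ry) / (roughness * roughness)
    ```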
  • Item
    A Bayesian Inference Framework for Procedural Material Parameter Estimation
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Guo, Yu; Hasan, Milos; Yan, Lingqi; Zhao, Shuang; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of procedural material parameter estimation from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov Chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials (wall plaster, leather, wood, anisotropic brushed metals, and layered metallic paints) to both synthetic and real target images.
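    A minimal random-walk Metropolis-Hastings sampler of the kind such a framework could build on is sketched below, assuming a Gaussian pixel-residual likelihood and continuous parameters only (the paper additionally handles discrete parameters); `render` and `log_prior` are user-supplied callables, not an API from the paper.

    ```python
    import numpy as np

    def metropolis_material_fit(render, target, log_prior, theta0,
                                n_steps=5000, step=0.05, sigma2=0.01):
        """Random-walk Metropolis-Hastings over continuous material
        parameters; render(theta) produces a synthetic image and the
        likelihood is an isotropic Gaussian on the pixel residual."""
        rng = np.random.default_rng(0)

        def log_post(theta):
            resid = render(theta) - target
            return log_prior(theta) - 0.5 * np.sum(resid ** 2) / sigma2

        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        samples = []
        for _ in range(n_steps):
            prop = theta + rng.normal(0.0, step, size=theta.shape)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
                theta, lp = prop, lp_prop
            samples.append(theta.copy())
        return np.array(samples)   # a collection of plausible matches
    ```

    The chain's samples directly realize the "collection of plausible matches" mentioned above: each retained `theta` is a parameter vector whose rendering is consistent with the photograph under the assumed likelihood.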
  • Item
    SRF-Net: Spatial Relationship Feature Network for Tooth Point Cloud Classification
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Ma, Qian; Wei, Guangshun; Zhou, Yuanfeng; Pan, Xiao; Xin, Shiqing; Wang, Wenping; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    3D scanned point cloud data of teeth is widely used in digital orthodontics. The classification and semantic labelling of each tooth's point cloud is a key and challenging task for planning dental treatment. Utilizing the prior, ordered positional information of the tooth arrangement, we propose an effective network for tooth model classification in this paper. The relative position and adjacency similarity feature vectors are calculated for each tooth 3D model, and the geometric features are combined into the fully connected layers of the classification training task. For the classification of dental anomalies, we present a dental anomaly processing method that improves classification accuracy. We also use FocalLoss as the loss function to address the sample imbalance of wisdom teeth. Extensive evaluations, ablation studies, and comparisons demonstrate that the proposed network classifies tooth models accurately and automatically and outperforms state-of-the-art point cloud classification methods.
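    For reference, a minimal NumPy version of the focal loss used to counter such class imbalance might look as follows; this is the standard formulation of Lin et al., and the hyperparameter values shown are common defaults, not values taken from the paper.

    ```python
    import numpy as np

    def focal_loss(probs, labels, gamma=2.0, alpha=0.25, eps=1e-7):
        """Multi-class focal loss on predicted class probabilities:
        the (1 - p_t)^gamma factor down-weights easy examples so rare
        classes (e.g., wisdom teeth) contribute more to the gradient."""
        p_t = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
        return np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t))

    # toy usage: 3 samples, 4 classes
    probs = np.array([[0.70, 0.10, 0.10, 0.10],
                      [0.20, 0.50, 0.20, 0.10],
                      [0.05, 0.05, 0.05, 0.85]])
    labels = np.array([0, 1, 3])
    print(focal_loss(probs, labels))
    ```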
  • Item
    The Layerizing VoxPoint Annular Convolutional Network for 3D Shape Classification
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Tong; Tao, Wenyuan; Own, Chung-Ming; Lou, Xiantuo; Zhao, Yuehua; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Analyzing the geometric and semantic properties of 3D point cloud data via deep learning networks remains challenging due to the irregularity and sparsity with which their geometric structures are sampled. In our study, we combine the advantages of voxels and point clouds by presenting a new data form for voxel models, called Layer-Ring data. This data type retains a fine description of the 3D data while keeping feature extraction efficient. Based on the Layer-Ring data, a new network architecture, called the VoxPoint Annular Network (VAN), performs feature extraction and object category prediction. The design is based on edge extraction and the coordinate representation of each voxel on its separated layer. With this flexible design, our proposed VAN can adapt to the layers' geometric variability and scalability. Finally, extensive experiments and comparisons demonstrate that our approach achieves notable results against state-of-the-art methods on a variety of standard benchmark datasets (e.g., ModelNet10, ModelNet40). Moreover, the tests show that 3D shape features can be learned efficiently and robustly. All relevant code will be available at https://github.com/helloFionaQ/Vox-PointNet.
  • Item
    SRNet: A 3D Scene Recognition Network using Static Graph and Dense Semantic Fusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Fan, Zhaoxin; Liu, Hongyan; He, Jun; Sun, Qi; Du, Xiaoyong; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Point cloud based 3D scene recognition is fundamental to many real-world applications such as Simultaneous Localization and Mapping (SLAM). However, most existing methods do not take full advantage of the contextual semantic features of scenes, and their recognition abilities are severely affected by dynamic noise such as points of cars and pedestrians in the scene. To tackle these issues, we propose a new Scene Recognition Network, namely SRNet. In this model, to learn local features without being affected by dynamic noise, we propose a Static Graph Convolution (SGC) module, which is then stacked to form our backbone. Next, to further suppress dynamic noise, we introduce a Spatial Attention Module (SAM) that makes the feature descriptor pay more attention to immovable local areas that are more relevant to our task. Finally, to build a deeper understanding of the scene, we design a Dense Semantic Fusion (DSF) strategy that integrates multi-level features during feature propagation, helping the model deepen its understanding of the contextual semantics of scenes. With these designs, SRNet maps scenes to discriminative and generalizable feature vectors, which are then used to find matching pairs. Experimental studies demonstrate that SRNet achieves a new state of the art on scene recognition and shows good generalization to other point cloud based tasks.
  • Item
    Semi-Supervised 3D Shape Recognition via Multimodal Deep Co-training
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Song, Mofei; Liu, Yu; Liu, Xiao Fan; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    3D shape recognition has been actively investigated in the field of computer graphics. With the rapid development of deep learning, various deep models have been introduced and have achieved remarkable results. Most 3D shape recognition methods are supervised and learn only from large amounts of labeled shapes. However, it is expensive and time-consuming to obtain such a large training set. In contrast to these methods, this paper studies a semi-supervised learning framework that trains a deep model for 3D shape recognition from both labeled and unlabeled shapes. Inspired by the co-training algorithm, our method iterates between model training and pseudo-label generation phases. In the model training phase, we train two deep networks based on the point cloud and multi-view representations simultaneously. In the pseudo-label generation phase, we generate pseudo-labels for the unlabeled shapes using the joint prediction of the two networks, which augments the labeled set for the next iteration. To extract more reliable consensus information from the multiple representations, we propose an uncertainty-aware consistency loss function that combines the two networks into a multimodal network. This not only encourages the two networks to give similar predictions on the unlabeled set, but also eliminates the negative influence of a large performance gap between them. Experiments on the ModelNet40 benchmark demonstrate that, with only 10% labeled training data, our approach achieves performance competitive with reported supervised methods.
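    A minimal two-view co-training loop of the kind described, with off-the-shelf classifiers standing in for the point-cloud and multi-view networks, could look like the following sketch. It assumes class labels 0..C-1 that all appear in the initial labeled set, uses a simple confidence threshold for pseudo-labeling, and omits the paper's uncertainty-aware consistency loss.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(Xa, Xb, y, labeled_mask, rounds=5, thresh=0.9):
        """Two-view co-training: two classifiers are retrained on a
        growing pseudo-labeled set built from their joint prediction."""
        labeled = labeled_mask.copy()
        labels = y.copy()
        for _ in range(rounds):
            ca = LogisticRegression(max_iter=500).fit(Xa[labeled], labels[labeled])
            cb = LogisticRegression(max_iter=500).fit(Xb[labeled], labels[labeled])
            # joint prediction: average the two views' class posteriors
            p = 0.5 * (ca.predict_proba(Xa) + cb.predict_proba(Xb))
            conf, pred = p.max(axis=1), p.argmax(axis=1)
            newly = (~labeled) & (conf > thresh)   # confident pseudo-labels
            labels[newly] = pred[newly]            # augment the labeled set
            labeled |= newly
        return ca, cb
    ```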
  • Item
    A Graph-based One-Shot Learning Method for Point Cloud Recognition
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Fan, Zhaoxin; Liu, Hongyan; He, Jun; Sun, Qi; Du, Xiaoyong; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Point cloud based 3D vision tasks, such as 3D object recognition, are critical to many real-world applications such as autonomous driving. Many deep learning based point cloud processing models have been proposed recently. However, they are all large-sample dependent: a large amount of manually labelled training data is needed to train the model, resulting in a huge labeling cost. In this paper, to tackle this problem, we propose a one-shot learning model for point cloud recognition, namely OS-PCR. Different from previous methods, our method formulates a new setting in which the model only needs to see one sample per class once, memorized at inference time, when new classes need to be recognized. To fulfill this task, we design three modules in the model: an Encoder Module, an Edge-conditioned Graph Convolutional Network Module, and a Query Module. To evaluate the performance of the proposed model, we build a one-shot learning benchmark dataset for 3D point cloud analysis and conduct comprehensive experiments on it to demonstrate the effectiveness of our proposed model.
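    The generic metric-learning recipe behind such one-shot recognition can be summarized in a few lines; in this sketch, `embed` is a hypothetical stand-in for the paper's encoder and graph-convolution modules, and nearest-prototype matching by cosine similarity replaces the Query Module.

    ```python
    import numpy as np

    def one_shot_classify(support, query, embed):
        """One-shot recognition, metric-learning style: each new class is
        memorized from a single support sample; a query is assigned to the
        class whose embedding is most similar.
        support: dict mapping class id -> one example point cloud."""
        protos = {c: embed(pc) for c, pc in support.items()}  # one per class

        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

        q = embed(query)
        return max(protos, key=lambda c: cos(protos[c], q))
    ```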
  • Item
    Personalized Hand Modeling from Multiple Postures with Multi-View Color Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Yangang; Rao, Ruting; Zou, Changqing; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Personalized hand models can be used to synthesize high-quality hand datasets, provide more training data for deep learning, and improve the accuracy of hand pose estimation. In recent years, parameterized hand models, e.g., MANO, have been widely used to obtain personalized hand models. However, due to the low resolution of existing parameterized hand models, it is still hard to obtain high-fidelity personalized hand models. In this paper, we propose a new method to estimate personalized hand models from multiple hand postures captured with multi-view color images. The personalized hand model is represented by a personalized neutral hand and multiple hand postures. We propose a novel optimization strategy to estimate the neutral hand from the multiple hand postures. To demonstrate the performance of our method, we built a multi-view system and captured more than 35 people, each with 30 hand postures. We hope the estimated hand models can boost research on high-fidelity parameterized hand modeling in the future. All the hand models are publicly available on www.yangangwang.com.
  • Item
    Human Pose Transfer by Adaptive Hierarchical Deformation
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Jinsong; Liu, Xingzi; Li, Kun; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Human pose transfer, as a misaligned image generation task, is very challenging. Existing methods cannot effectively utilize the input information and often fail to preserve the style and shape of hair and clothes. In this paper, we propose an adaptive human pose transfer network with two hierarchical deformation levels. The first level generates human semantic parsing aligned with the target pose, and the second level generates the final textured person image in the target pose under this semantic guidance. To avoid the drawback of vanilla convolution, which treats all pixels as valid information, we use gated convolution at both levels to dynamically select the important features and adaptively deform the image layer by layer. Our model has very few parameters and converges quickly. Experimental results demonstrate that our model achieves better performance, with more consistent hair, face, and clothes, using fewer parameters than state-of-the-art methods. Furthermore, our method can be applied to clothing texture transfer. The code is available for research purposes at https://github.com/Zhangjinso/PINet_PG.
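    A gated convolution layer, the building block referenced above, can be written compactly in PyTorch; this is the generic formulation (a feature branch modulated by a learned sigmoid gate), not the authors' exact architecture.

    ```python
    import torch
    import torch.nn as nn

    class GatedConv2d(nn.Module):
        """Gated convolution: the feature branch is modulated by a learned
        soft mask, so the layer can dynamically select which spatial
        features are valid instead of treating all pixels equally."""
        def __init__(self, c_in, c_out, k=3, stride=1):
            super().__init__()
            pad = k // 2
            self.feature = nn.Conv2d(c_in, c_out, k, stride, pad)
            self.gate = nn.Conv2d(c_in, c_out, k, stride, pad)

        def forward(self, x):
            return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))

    # toy usage on a random feature map
    x = torch.randn(1, 16, 64, 64)
    y = GatedConv2d(16, 32)(x)
    print(y.shape)  # torch.Size([1, 32, 64, 64])
    ```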
  • Item
    PointSkelCNN: Deep Learning-Based 3D Human Skeleton Extraction from Point Clouds
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Qin, Hongxing; Zhang, Songshan; Liu, Qihuang; Chen, Li; Chen, Baoquan; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    A 3D human skeleton plays an important role in human shape reconstruction and human animation. Remarkable advances have recently been achieved in 3D human skeleton estimation from color and depth images via powerful deep convolutional neural networks. However, applying deep learning frameworks to 3D human skeleton extraction from point clouds remains challenging because of the sparsity of point clouds and the high nonlinearity of human skeleton regression. In this study, we develop a deep learning based approach for 3D human skeleton extraction from point clouds. We cast 3D human skeleton extraction as offset vector regression and human body segmentation via deep learning based point cloud contraction. Furthermore, a disambiguation strategy is adopted to improve the robustness of joint point regression. Experiments on the public human pose dataset UBC3V and the human point cloud skeleton dataset 3DHumanSkeleton compiled by the authors show that the proposed approach outperforms state-of-the-art methods.
  • Item
    Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Min; Qiu, Feng; Liu, Wentao; Qian, Chen; Zhou, Xiaowei; Ma, Lizhuang; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Superior human pose and shape reconstruction from monocular images depends on resolving the ambiguities caused by occlusion and shape variance. Recent work has succeeded with regression-based methods, which estimate parametric models directly through a deep neural network supervised by 3D ground truth. However, 3D ground truth is neither abundant nor efficient to obtain. In this paper, we introduce body part segmentation as a critical form of supervision. Part segmentation not only indicates the shape of each body part but also helps infer the occlusions among parts. To improve reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation in neural networks or optimization loops. We also introduce a general parametric model, engaged in the rendering pipeline as an intermediate representation between skeletons and detailed shapes, which consists of primitive geometries for better interpretability. The proposed approach combines parameter regression, body model optimization, and detailed model registration. Experimental results demonstrate that the proposed method achieves balanced evaluation on pose and shape and outperforms state-of-the-art approaches on the Human3.6M, UP-3D, and LSP datasets.
  • Item
    Learning Target-Adaptive Correlation Filters for Visual Tracking
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) She, Ying; Yi, Yang; Gu, Jialiang; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Correlation filters (CF) achieve excellent performance in visual tracking but suffer from undesired boundary effects. A significant number of approaches focus on enlarging the search region to compensate for this shortcoming. However, this introduces excessive background noise and misleads the filter into learning from ambiguous information. In this paper, we propose a novel target-adaptive correlation filter (TACF) that incorporates context and spatio-temporal regularization into the CF framework, thus learning a more robust appearance model under large appearance variations. Moreover, it can be effectively optimized via the alternating direction method of multipliers (ADMM), yielding a globally optimal solution. Finally, an adaptive updating strategy is presented to identify unreliable samples and alleviate the contamination they introduce into training. Extensive evaluations on the OTB-2013, OTB-2015, VOT-2016, VOT-2017, and TC-128 datasets demonstrate that our TACF is very promising in various challenging scenarios compared with several state-of-the-art trackers, with real-time performance of 20 frames per second (fps).
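    For background, a classical closed-form correlation filter (a MOSSE-style baseline) is sketched below; TACF extends this kind of baseline with context and spatio-temporal regularization solved via ADMM, which is not reproduced here.

    ```python
    import numpy as np

    def train_filter(patches, target, lam=1e-2):
        """Closed-form correlation filter in the Fourier domain.
        patches: list of HxW grayscale training windows centered on the
        target; target: desired HxW response (typically a Gaussian peak)."""
        G = np.fft.fft2(target)
        A = np.zeros_like(G)
        B = np.zeros_like(G)
        for p in patches:
            F = np.fft.fft2(p)
            A += G * np.conj(F)          # cross-correlation numerator
            B += F * np.conj(F)          # auto-correlation denominator
        return A / (B + lam)             # lam regularizes boundary noise

    def locate(H, patch):
        """Correlate a search window with the learned filter; the peak of
        the response map is the predicted target location."""
        r = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
        return np.unravel_index(np.argmax(r), r.shape)
    ```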
  • Item
    FAKIR: An Algorithm for Revealing the Anatomy and Pose of Statues from Raw Point Sets
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Fu, Tong; Chaine, Raphaelle; Digne, Julie; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    3D acquisition of archaeological artefacts has become an essential part of cultural heritage research for preservation and restoration purposes. Statues, in particular, have been at the center of many projects. In this paper, we introduce a way to improve the understanding of acquired statues representing real or imaginary creatures by registering a simple and pliable articulated model to the raw point set data. Our approach performs a Forward And bacKward Iterative Registration (FAKIR), which proceeds joint by joint and needs only a few iterations to converge. We are thus able to detect the pose and elementary anatomy of sculptures, even those with unrealistic body proportions. By adapting our simple skeleton, our method can work on animals and imaginary creatures.
  • Item
    Coarse to Fine: Weak Feature Boosting Network for Salient Object Detection
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Chenhao; Gao, Shanshan; Pan, Xiao; Wang, Yuting; Zhou, Yuanfeng; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Salient object detection identifies the most visually conspicuous objects or regions in an image, which brings significant help and improvement to many computer vision tasks. Although many methods have been proposed for salient object detection, the problem is still not perfectly solved, especially when the background scene is complex or the salient object is small. In this paper, we propose a novel Weak Feature Boosting Network (WFBNet) for the salient object detection task. In the WFBNet, we extract the unpredictable (low-confidence) regions of the image via a polynomial function and enhance the features of these regions through a well-designed weak feature boosting module (WFBM). Starting from a coarse saliency map, we gradually refine it according to the boosted features to obtain the final saliency map, and our network does not need any post-processing step. We conduct extensive experiments on five benchmark datasets using comprehensive evaluation metrics. The results show that our algorithm has considerable advantages over existing state-of-the-art methods.
  • Item
    An Occlusion-aware Edge-Based Method for Monocular 3D Object Tracking using Edge Confidence
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Huang, Hong; Zhong, Fan; Sun, Yuqing; Qin, Xueying; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    We propose an edge-based method for 6DOF pose tracking of rigid objects using a monocular RGB camera. One critical problem for edge-based methods is searching for the object contour points in the image that correspond to known 3D model points. Previous methods often produce false object contour points in the presence of cluttered backgrounds and partial occlusions. In this paper, we propose a novel edge-based 3D object tracking method to tackle this problem. To find the object contour points, foreground and background clutter points are first filtered out using an edge color cue; object contour points are then located by maximizing their edge confidence, which combines edge color and distance cues. Furthermore, the edge confidence is integrated into the edge-based energy function to reduce the influence of false contour points caused by cluttered backgrounds and partial occlusions. We also extend our method to multi-object tracking, which can handle mutual occlusions. We compare our method with recent state-of-the-art methods on challenging public datasets. Experiments demonstrate that our method improves robustness and accuracy against cluttered backgrounds and partial occlusions.
  • Item
    Generating High-quality Superpixels in Textured Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Zhe; Xu, Panpan; Chang, Jian; Wang, Wencheng; Zhao, Chong; Zhang, Jian Jun; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Superpixel segmentation is important for promoting various image processing tasks. However, existing methods still have difficulty generating high-quality superpixels in textured images because they cannot separate textures from structures well. Though texture filtering can be adopted to smooth textures before superpixel segmentation, the filtering also smooths object boundaries and thus weakens the quality of the generated superpixels. In this paper, we propose using adaptive-scale box smoothing instead of texture filtering to obtain higher-quality texture and boundary information. Based on this, we design a novel distance metric between pixels that considers boundary, color, and Euclidean distance simultaneously. As a result, our method achieves high-quality superpixel segmentation in textured images without texture filtering. Experimental results demonstrate the superiority of our method over existing methods, including learning-based ones. Benefiting from using boundaries to guide superpixel segmentation, our method can also suppress noise to generate high-quality superpixels in non-textured images.
  • Item
    InstanceFusion: Real-time Instance-level 3D Reconstruction Using a Single RGBD Camera
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Lu, Feixiang; Peng, Haotian; Wu, Hongyu; Yang, Jun; Yang, Xinhang; Cao, Ruizhi; Zhang, Liangjun; Yang, Ruigang; Zhou, Bin; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    We present InstanceFusion, a robust real-time system to detect, segment, and reconstruct instance-level 3D objects of indoor scenes with a hand-held RGBD camera. It combines the strengths of deep learning and traditional SLAM techniques to produce visually compelling 3D semantic models. The key to its success is our novel segmentation scheme and efficient instance-level data fusion, both implemented on the GPU. Specifically, for each incoming RGBD frame, we take advantage of the RGBD features, the 3D point cloud, and the reconstructed model to perform instance-level segmentation. The corresponding RGBD data, along with the instance ID, are then fused into the surfel-based models. To store and update these data efficiently, we design and implement a new data structure using the OpenGL Shading Language. Experimental results show that our method advances state-of-the-art (SOTA) methods in instance segmentation and data fusion by a large margin. In addition, our instance segmentation improves the precision of 3D reconstruction, especially at loop closures. The InstanceFusion system runs at 20.5 Hz on a consumer-level GPU, which supports a number of augmented reality (AR) applications (e.g., 3D model registration, virtual interaction, AR maps) and robot applications (e.g., navigation, manipulation, grasping). To facilitate future research and make our system easier to reproduce, the source code, data, and trained model are released on GitHub: https://github.com/Fancomi2017/InstanceFusion.
  • Item
    Weakly Supervised Part-wise 3D Shape Reconstruction from Single-View RGB Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Niu, Chengjie; Yu, Yang; Bian, Zhenwei; Li, Jun; Xu, Kai; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    For deep learning models to truly understand 2D images for 3D geometry recovery, we argue that single-view reconstruction should be learned in a part-aware and weakly supervised manner. Such models lead to a more profound interpretation of 2D images in which part-based parsing and assembling are involved. To this end, we learn a deep neural network that takes a single-view RGB image as input and outputs a 3D shape in parts, represented by 3D point clouds, using an array of 3D part generators. In particular, we devise two levels of generative adversarial networks (GANs) to generate shapes with both correct part geometry and reasonable overall structure. To enable self-taught network training, we devise a differentiable projection module along with a self-projection loss that measures the error between the shape projection and the input image. The training data in our method are unpaired between the 2D images and the 3D shapes with part decomposition. Through qualitative and quantitative evaluations on public datasets, we show that our method achieves good performance in part-wise single-view reconstruction.
  • Item
    Deep Separation of Direct and Global Components from a Single Photograph under Structured Lighting
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Duan, Zhaoliang; Bieron, James; Peers, Pieter; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    We present a deep learning based solution for separating the direct and global light transport components from a single photograph captured under high frequency structured lighting with a co-axial projector-camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between both decoders to ensure a consistent decomposition between both light transport components. Furthermore, our deep learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network is able to achieve high fidelity decompositions for lighting frequency sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes.
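    The classical max/min separation under shifted high-frequency illumination (Nayar et al.), the structured-light principle that such learning-based approaches build on, can be stated in a few lines; the sketch assumes N registered images of the scene under shifted patterns with roughly 50% duty cycle, so that every pixel is fully lit in some image and fully unlit in another.

    ```python
    import numpy as np

    def separate_direct_global(stack):
        """Classical direct/global separation from a (N, H, W) stack of
        images under N shifted high-frequency patterns. When a pixel is
        lit it sees direct + ~half the global transport; when unlit it
        sees only ~half the global transport."""
        lmax = stack.max(axis=0)     # pixel lit: direct + global / 2
        lmin = stack.min(axis=0)     # pixel unlit: global / 2
        direct = lmax - lmin
        global_ = 2.0 * lmin
        return direct, global_
    ```

    The deep method above replaces this multi-image recipe with a single photograph and, by dropping the binary-pattern requirement, preserves decomposition fidelity for lighting-frequency-sensitive effects such as subsurface scattering.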
  • Item
    Pixel-wise Dense Detector for Image Inpainting
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Ruisong; Quan, Weize; Wu, Baoyuan; Li, Zhifeng; Yan, Dong-Ming; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Recent GAN-based image inpainting approaches adopt an averaging strategy to discriminate the generated image and output a scalar, which inevitably loses the position information of visual artifacts. Moreover, the adversarial loss and reconstruction loss (e.g., ℓ1 loss) are combined with trade-off weights, which are also difficult to tune. In this paper, we propose a novel detection-based generative framework for image inpainting, which adopts the min-max strategy in an adversarial process. The generator follows an encoder-decoder architecture to fill the missing regions, and the detector, using weakly supervised learning, localizes the position of artifacts in a pixel-wise manner. Such position information makes the generator pay attention to artifacts and further refine these regions. More importantly, we explicitly insert the output of the detector into the reconstruction loss with a weighting criterion, which balances the weights of the adversarial loss and reconstruction loss automatically rather than by manual tuning. Experiments on multiple public datasets show the superior performance of the proposed framework. The source code is available at https://github.com/Evergrow/GDN_Inpainting.
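    The idea of folding the detector output into the reconstruction loss can be illustrated with a short PyTorch sketch; the particular weighting criterion below (1 + k * artifact_map) is illustrative only, not the paper's exact formula.

    ```python
    import torch

    def weighted_reconstruction_loss(pred, target, artifact_map, k=2.0):
        """l1 reconstruction loss reweighted by a pixel-wise artifact
        detector: pixels the detector flags as artifacts receive a larger
        weight, steering the generator toward fixing them."""
        w = 1.0 + k * artifact_map                # emphasize detected artifacts
        return torch.mean(w * torch.abs(pred - target))

    # toy usage with a random detector response in [0, 1]
    pred = torch.rand(1, 3, 64, 64)
    target = torch.rand(1, 3, 64, 64)
    amap = torch.rand(1, 1, 64, 64)               # broadcast over channels
    print(weighted_reconstruction_loss(pred, target, amap))
    ```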
  • Item
    Not All Areas Are Equal: A Novel Separation-Restoration-Fusion Network for Image Raindrop Removal
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Ren, Dongdong; Li, Jinbao; Han, Meng; Shu, Minglei; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Detecting and removing raindrops from an image while preserving high-quality image details has attracted tremendous study but remains challenging, owing to the inhomogeneity of the degraded regions and the complexity of the degradation intensity. In this paper, we move away from deep learning's dependence on direct image-to-image translation and propose a separation-restoration-fusion network for raindrop removal. Our key idea is to recover regions of different damage levels individually, so that each region achieves its optimal recovery result, and finally to fuse the recovered areas. In the region restoration module, to restore a specific area, we propose a multi-scale feature fusion global information aggregation attention network that achieves global-to-local information aggregation. Besides, we also design an inside-and-outside dense connection dilated network to ensure the fusion of the separated regions and the fine restoration of the image. Qualitative and quantitative evaluations compare our method with the latest existing methods. The results demonstrate that our method outperforms state-of-the-art methods by a large margin on the benchmark datasets in extensive experiments.
  • Item
    CLA-GAN: A Context and Lightness Aware Generative Adversarial Network for Shadow Removal
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Ling; Long, Chengjiang; Yan, Qingan; Zhang, Xiaolong; Xiao, Chunxia; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    In this paper, we propose a novel context and lightness aware Generative Adversarial Network (CLA-GAN) framework for shadow removal, which refines a coarse result to a final shadow removal result in a coarse-to-fine fashion. At the refinement stage, we first obtain a lightness map using an encoder-decoder structure. With the lightness map and the coarse result as inputs, the following encoder-decoder refines the final result. Specifically, different from current methods restricted to pixel-based features from shadow images, we embed a context-aware module into the refinement stage, which exploits patch-based features. The embedded module transfers features from non-shadow regions to shadow regions to ensure consistency of appearance in the recovered shadow-free images. Since we consider patches, the module can additionally enhance the spatial association and continuity around neighboring pixels. To make the model pay more attention to shadow regions during training, we use dynamic weights in the loss function. Moreover, we augment the inputs of the discriminator by rotating images through different angles and use a rotation adversarial loss during training, which makes the discriminator more stable and robust. Extensive experiments demonstrate the validity of the components in our CLA-GAN framework. Quantitative evaluation on different shadow datasets clearly shows the advantages of our CLA-GAN over state-of-the-art methods.
  • Item
    SCGA-Net: Skip Connections Global Attention Network for Image Restoration
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Ren, Dongdong; Li, Jinbao; Han, Meng; Shu, Minglei; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Deep convolutional neural networks (DCNNs) have shown their advantages in image restoration tasks, but most existing DCNN-based methods still suffer from residual corruptions and coarse textures. In this paper, we propose a general framework, the "Skip Connections Global Attention Network", which focuses on semantics delivery from shallow layers to deep layers for low-level vision tasks, including image dehazing, image denoising, and low-light image enhancement. First, by applying dense dilated convolution and a multi-scale feature fusion mechanism, we establish a novel encoder-decoder network framework that aggregates large-scale spatial context and enhances feature reuse. Second, our skip-connection solution uses an attention mechanism to constrain information flow, thereby enhancing the high-frequency details of feature maps and suppressing the output of corruptions. Finally, we also present a novel attention module, dubbed global constraint attention, which effectively captures the relationships between pixels across the entire feature map to obtain the subtle differences among pixels and produce an overall optimal 3D attention map. Extensive experiments demonstrate that the proposed method achieves significant improvements over state-of-the-art methods in image dehazing, image denoising, and low-light image enhancement.
  • Item
    Diversifying Semantic Image Synthesis and Editing via Class- and Layer-wise VAEs
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Endo, Yuki; Kanamori, Yoshihiro; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Semantic image synthesis is a process for generating photorealistic images from a single semantic mask. To enrich the diversity of multimodal image synthesis, previous methods have controlled the global appearance of an output image by learning a single latent space. However, a single latent code is often insufficient for capturing various object styles because object appearance depends on multiple factors. To handle individual factors that determine object styles, we propose a class- and layer-wise extension to the variational autoencoder (VAE) framework that allows flexible control over each object class at the local to global levels by learning multiple latent spaces. Furthermore, we demonstrate that our method generates images that are both plausible and more diverse compared to state-of-the-art methods via extensive experiments with real and synthetic datasets in three different domains. We also show that our method enables a wide range of applications in image synthesis and editing tasks.
  • Item
    Simultaneous Multi-Attribute Image-to-Image Translation Using Parallel Latent Transform Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Xu, Sen-Zhe; Lai, Yu-Kun; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Image-to-image translation has been widely studied. Since real-world images can often be described by multiple attributes, it is useful to manipulate them at the same time. However, most methods focus on transforming between two domains, and when multiple single-attribute transform networks are chained together, the results are affected by the order of chaining, and performance drops as intermediate results fall out of domain. Existing multi-domain transfer methods mostly manipulate multiple attributes by appending a list of attribute labels to the network features, but they also suffer from interference between different attributes and perform worse when multiple attributes are manipulated. We propose a novel approach to multi-attribute image-to-image translation using several parallel latent transform networks, in which multiple attributes are manipulated in parallel and simultaneously, eliminating both issues. To avoid interference between different attributes, we introduce a novel soft independence constraint on the changes caused by different attributes. Extensive experiments show that our method outperforms state-of-the-art methods.
  • Item
    Interactive Design and Preview of Colored Snapshots of Indoor Scenes
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Fu, Qiang; Yan, Hai; Fu, Hongbo; Li, Xueming; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    This paper presents an interactive system for quickly designing and previewing colored snapshots of indoor scenes. Different from high-quality 3D indoor scene rendering, which often takes several minutes to render a moderately complicated scene under a specific color theme with high-performance computing devices, our system aims at improving the effectiveness of color theme design of indoor scenes and employs an image colorization approach to efficiently obtain high-resolution snapshots with editable colors. Given several pre-rendered, multi-layer, gray images of the same indoor scene snapshot, our system is designed to colorize and merge them into a single colored snapshot. Our system also assists users in assigning colors to certain objects/components and infers more harmonious colors for the unassigned objects based on pre-collected priors to guide the colorization. The quickly generated snapshots of indoor scenes provide previews of interior design schemes with different color themes, making it easy to determine the personalized design of indoor scenes. To demonstrate the usability and effectiveness of this system, we present a series of experimental results on indoor scenes of different types, and compare our method with a state-of-the-art method for indoor scene material and color suggestion and offline/online rendering software packages.
  • Item
    A Multi-Person Selfie System via Augmented Reality
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Lin, Jie; Yang, Chuan-Kai; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Despite the prevalence of selfie sticks in recent years, their limited length always poses the problem of distortion in a selfie. We propose a technique, based on modifying existing augmented reality technology, that supports selfies of multiple persons by properly aligning the different photographing processes. We show that our technique helps avoid the common distortion drawback of using a selfie stick and facilitates the composition of a group photo. It can also be used to create special effects, including the illusion of multiple appearances of one person.
  • Item
    Multi-scale Information Assembly for Image Matting
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Qiao, Yu; Liu, Yuhao; Zhu, Qiang; Yang, Xin; Wang, Yuxin; Zhang, Qiang; Wei, Xiaopeng; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Image matting is a long-standing problem in computer graphics and vision, mostly identified with the accurate estimation of the foreground in input images. We argue that foreground objects can be represented by different levels of information, including the central bodies, large-grained boundaries, refined details, etc. Based on this observation, we propose a multi-scale information assembly framework (MSIA-matte) to pull high-quality alpha mattes out of single RGB images. Technically speaking, given an input image, we extract advanced semantics as our subject content and retain the initial CNN features to encode the different levels of foreground expression, then combine them with our well-designed information assembly strategy. Extensive experiments prove the effectiveness of the proposed MSIA-matte, which achieves state-of-the-art performance compared to most existing matting networks.
  • Item
    StyleProp: Real-time Example-based Stylization of 3D Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Hauptfleisch, Filip; Texler, Ondrej; Texler, Aneta; Krivánek, Jaroslav; Sýkora, Daniel; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    We present a novel approach to real-time non-photorealistic rendering of 3D models in which a single hand-drawn exemplar specifies the appearance. We employ guided patch-based synthesis to achieve high visual quality as well as temporal coherence. However, unlike previous techniques, which maintain consistency in one dimension (the temporal domain), our approach takes multiple dimensions into account to cover all degrees of freedom given by the available space of interactions (e.g., camera rotations). To enable an interactive experience, we precalculate a sparse latent representation of the entire interaction space, which allows a stylized image to be rendered in real time, even on a mobile device. To the best of our knowledge, the proposed system is the first to enable interactive example-based stylization of 3D models with full temporal coherence in a predefined interaction space.
  • Item
    Two-stage Photograph Cartoonization via Line Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Li, Simin; Wen, Qiang; Zhao, Shuang; Sun, Zixun; He, Shengfeng; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Cartoons are highly abstracted with clear edges, which makes them unique among art forms. In this paper, we focus on the essential cartoon factors of abstraction and edges, aiming to cartoonize real-world photographs like an artist. To this end, we propose a two-stage network in which each stage explicitly targets producing abstracted shading and crisp edges, respectively. In the first, abstraction stage, we propose a novel unsupervised bilateral flattening loss, which allows generating high-quality smoothing results in a label-free manner. Together with two other semantic-aware losses, the abstraction stage imposes different forms of regularization for creating cartoon-like flattened images. In the second stage, we draw lines on the structural edges of the flattened cartoon using a fully supervised line-drawing objective and an unsupervised edge-augmenting loss. We collect a cartoon-line dataset with line tracing, which serves as the starting point for preparing abstraction and line-drawing data. We have evaluated the proposed method on a large number of photographs, converting them to three different cartoon styles. Our method substantially outperforms state-of-the-art methods in visual quality, both quantitatively and qualitatively.
  • Item
    Colorization of Line Drawings with Empty Pupils
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Akita, Kenta; Morimoto, Yuki; Tsuruno, Reiji; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods because the convolutional neural networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically colorizing eyes, in which eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our pupil position estimation network.
  • Item
    RadEx: Integrated Visual Exploration of Multiparametric Studies for Radiomic Tumor Profiling
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Mörth, Eric; Wagner-Larsen, Kari; Hodneland, Erlend; Krakstad, Camilla; Haldorsen, Ingfrid S.; Bruckner, Stefan; Smit, Noeska N.; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Better understanding of the complex processes driving tumor growth and metastases is critical for developing targeted treatment strategies in cancer. Radiomics extracts large amounts of features from medical images which enables radiomic tumor profiling in combination with clinical markers. However, analyzing complex imaging data in combination with clinical data is not trivial and supporting tools aiding in these exploratory analyses are presently missing. In this paper, we present an approach that aims to enable the analysis of multiparametric medical imaging data in combination with numerical, ordinal, and categorical clinical parameters to validate established and unravel novel biomarkers. We propose a hybrid approach where dimensionality reduction to a single axis is combined with multiple linked views allowing clinical experts to formulate hypotheses based on all available imaging data and clinical parameters. This may help to reveal novel tumor characteristics in relation to molecular targets for treatment, thus providing better tools for enabling more personalized targeted treatment strategies. To confirm the utility of our approach, we closely collaborate with experts from the field of gynecological cancer imaging and conducted an evaluation with six experts in this field.
  • Item
    Slice and Dice: A Physicalization Workflow for Anatomical Edutainment
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Raidou, Renata Georgia; Gröller, Eduard; Wu, Hsiang-Yun; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    During the last decades, anatomy has become an interesting topic in education, even for laymen or schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, physical anatomical models are often preferred, as they facilitate 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging, sometimes even more so than their virtual counterparts. So far, medical data physicalizations have mainly involved 3D printing, which is still expensive and cumbersome. We investigate alternative forms of physicalization that use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume, and an optimization step simplifies the slice configuration, proposing the optimal order for easy assembly. A packing algorithm places the resulting slices, with their labels, annotations, and assembly instructions, on paper or transparent film of user-selected size, to be printed, assembled into a sliceform, and explored. We conducted two user studies to assess our approach, demonstrating that it is an initial positive step toward the successful creation of interactive and engaging anatomical physicalizations.
  • Item
    Visual Analytics in Dental Aesthetics
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Amirkhanov, Aleksandr; Bernhard, Matthias; Karimov, Alexey; Stiller, Sabine; Geier, Andreas; Gröller, Eduard; Mistelbauer, Gabriel; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Dental healthcare increasingly employs computer-aided design software to provide patients with high-quality dental prosthetic devices. In modern dental reconstruction, dental technicians address the unique anatomy of each patient individually, by capturing the dental impression and measuring the mandibular movements. Subsequently, dental technicians design a custom denture that fits the patient from a functional point of view. The current workflow does not include a systematic analysis of aesthetics, and dental technicians rely only on an aesthetically pleasing mock-up that they discuss with the patient, and on their experience. Therefore, the final denture aesthetics remain unknown until the dental technicians fit the denture onto the patient. In this work, we present a solution that integrates aesthetics analysis into the functional workflow of dental technicians. Our solution uses a video recording of the patient to preview the denture design at any stage of the denture design process. We present a teeth pose estimation technique that enables denture preview, and a set of linked visualizations that support dental technicians in the aesthetic design of dentures. These visualizations assist dental technicians in choosing the most aesthetically fitting preset from a library of dentures, in identifying the suitable denture size, and in adjusting the denture position. We demonstrate the utility of our system with four use cases, explored by a dental technician. We also performed a quantitative evaluation of the teeth pose estimation and an informal usability evaluation, with positive outcomes concerning the integration of aesthetics analysis into the functional workflow.