42-Issue 7


Pacific Graphics 2023 - Symposium Proceedings
Daejeon, South Korea | October 10–13, 2023

(for Short Papers and Posters see PG 2023 - Short Papers and Posters)
Neural Rendering
IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields
Changwoon Choi, Juhyeon Kim, and Young Min Kim
Learning to Generate and Manipulate 3D Radiance Field by a Hierarchical Diffusion Framework with CLIP Latent
Jiaxu Wang, Ziyi Zhang, and Renjing Xu
Robust Novel View Synthesis with Color Transform Module
Sang Min Kim, Changwoon Choi, Hyeongjun Heo, and Young Min Kim
Geometry
Meso-Skeleton Guided Hexahedral Mesh Design
Paul Viville, Pierre Kraemer, and Dominique Bechmann
A Surface Subdivision Scheme Based on Four-Directional S^1_3 Non-Box Splines
Zhangjin Huang
Groupwise Shape Correspondence Refinement with a Region of Interest Focus
Pierre Galmiche and Hyewon Seo
Procedural Modeling and Model Extraction
Data-guided Authoring of Procedural Models of Shapes
Ishtiaque Hossain, I-Chao Shen, Takeo Igarashi, and Oliver van Kaick
Authoring Terrains with Spatialised Style
Simon Perche, Adrien Peytavie, Bedrich Benes, Eric Galin, and Eric Guérin
Cloth Simulation
D-Cloth: Skinning-based Cloth Dynamic Prediction with a Three-stage Network
Yu Di Li, Min Tang, Xiao Rui Chen, Yun Yang, Ruo Feng Tong, Bai Lin An, Shuang Cai Yang, Yao Li, and Qi Long Kou
Controllable Garment Image Synthesis Integrated with Frequency Domain Features
Xinru Liang, Haoran Mo, and Chengying Gao
Combating Spurious Correlations in Loose-fitting Garment Animation Through Joint-Specific Feature Learning
Junqi Diao, Jun Xiao, Yihong He, and Haiyong Jiang
Modeling by Learning
CP-NeRF: Conditionally Parameterized Neural Radiance Fields for Cross-scene Novel View Synthesis
Hao He, Yixun Liang, Shishi Xiao, Jierun Chen, and Yingcong Chen
Interactive Authoring of Terrain using Diffusion Models
Joshua Lochner, James Gain, Simon Perche, Adrien Peytavie, Eric Galin, and Eric Guérin
Structure Learning for 3D Point Cloud Generation from Single RGB Images
Tarek Ben Charrada, Hamid Laga, and Hedi Tabia
Face Reconstruction
Neural Shading Fields for Efficient Facial Inverse Rendering
Gilles Rainer, Lewis Bridgeman, and Abhijeet Ghosh
Facial Image Shadow Removal via Graph-based Feature Fusion
Ling Zhang, Ben Chen, Zheng Liu, and Chunxia Xiao
A Perceptual Shape Loss for Monocular 3D Face Reconstruction
Christopher Otto, Prashanth Chandran, Gaspard Zoss, Markus Gross, Paulo Gotardo, and Derek Bradley
Sketch-based Modeling
Efficient Interpolation of Rough Line Drawings
Jiazhou Chen, Xinding Zhu, Melvin Even, Jean Basset, Pierre Bénard, and Pascal Barla
Sharing Model Framework for Zero-Shot Sketch-Based Image Retrieval
Yi-Hsuan Ho, Der-Lor Way, and Zen-Chung Shih
GA-Sketching: Shape Modeling from Multi-View Sketching with Geometry-Aligned Deep Implicit Functions
Jie Zhou, Zhongjin Luo, Qian Yu, Xiaoguang Han, and Hongbo Fu
Virtual Humans
Semantics-guided Generative Diffusion Model with a 3DMM Model Condition for Face Swapping
Xiyao Liu, Yang Liu, Yuhao Zheng, Ting Yang, Jian Zhang, Victoria Wang, and Hui Fang
Palette-Based and Harmony-Guided Colorization for Vector Icons
Miao Lin, I-Chao Shen, Hsiao-Yuan Chin, Ruo-Xi Chen, and Bing-Yu Chen
Multi-Level Implicit Function for Detailed Human Reconstruction by Relaxing SMPL Constraints
Xikai Ma, Jieyu Zhao, Yiqing Teng, and Li Yao
Multi-Modal Face Stylization with a Generative Prior
Mengtian Li, Yi Dong, Minxuan Lin, Haibin Huang, Pengfei Wan, and Chongyang Ma
Computational Fabrication
An Efficient Self-supporting Infill Structure for Computational Fabrication
Shengfa Wang, Zheng Liu, Jiangbei Hu, Na Lei, and Zhongxuan Luo
Fabricatable 90° Pop-ups: Interactive Transformation of a 3D Model into a Pop-up Structure
Junpei Fujikawa and Takashi Ijiri
Volumetric Reconstruction
Efficient Neural Representation of Volumetric Data using Coordinate-Based Networks
Sudarshan Devkota and Sumant Pattanaik
A Differential Diffusion Theory for Participating Media
Yunchi Cen, Chen Li, Frederick W. B. Li, Bailin Yang, and Xiaohui Liang
Precomputed Radiative Heat Transport for Efficient Thermal Simulation
Christian Freude, David Hahn, Florian Rist, Lukas Lipp, and Michael Wimmer
Imaging
Multi-scale Iterative Model-guided Unfolding Network for NLOS Reconstruction
Xiongfei Su, Yu Hong, Juntian Ye, Feihu Xu, and Xin Yuan
Robust Distribution-aware Color Correction for Single-shot Images
Daljit Singh J. Dhillon, Parisha Joshi, Jessica Baron, and Eric K. Patterson
Enhancing Low-Light Images: A Variation-based Retinex with Modified Bilateral Total Variation and Tensor Sparse Coding
Weipeng Yang, Hongxia Gao, Wenbin Zou, Shasha Huang, Hongsheng Chen, and Jianliang Ma
Motion Capture and Generation
MOVIN: Real-time Motion Capture using a Single LiDAR
Deok-Kyeong Jang, Dongseok Yang, Deok-Yun Jang, Byeoli Choi, Taeil Jin, and Sung-Hee Lee
DAFNet: Generating Diverse Actions for Furniture Interaction by Learning Conditional Pose Distribution
Taeil Jin and Sung-Hee Lee
OptCtrlPoints: Finding the Optimal Control Points for Biharmonic 3D Shape Deformation
Kunho Kim, Mikaela Angelina Uy, Despoina Paschalidou, Alec Jacobson, Leonidas J. Guibas, and Minhyuk Sung
Image Editing and Color
Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring
Danna Xue, Javier Vazquez Corral, Luis Herranz, Yanning Zhang, and Michael S. Brown
Data-Driven Ink Painting Brushstroke Rendering
Koki Madono and Edgar Simo-Serra
Continuous Layout Editing of Single Images with Diffusion Models
Zhiyuan Zhang, Zhitong Huang, and Jing Liao
Images, Vectorization, and Layouts
Error-bounded Image Triangulation
Zhi-Duo Fang, Jia-Peng Guo, Yanyang Xiao, and Xiao-Ming Fu
Dissection Puzzles Composed of Multicolor Polyominoes
Naoki Kita
H-ETC2: Design of a CPU-GPU Hybrid ETC2 Encoder
Hyeon-ki Lee and Jae-Ho Nah
Details and Styles on 3D Models
Refinement of Hair Geometry by Strand Integration
Ryota Maeda, Kenshi Takayama, and Takafumi Taketomi
Fine Back Surfaces Oriented Human Reconstruction for Single RGB-D Images
Xianyong Fang, Yu Qian, Jinshen He, Linbo Wang, and Zhengyi Liu
Learning-based Reflectance
Deep Shape and SVBRDF Estimation using Smartphone Multi-lens Imaging
Chongrui Fan, Yiming Lin, and Abhijeet Ghosh
SVBRDF Reconstruction by Transferring Lighting Knowledge
Pengfei Zhu, Shuichang Lai, Mufan Chen, Jie Guo, Yifan Liu, and Yanwen Guo
Dynamic Scenes
World-Space Spatiotemporal Path Resampling for Path Tracing
Hangyu Zhang and Beibei Wang
Efficient Caustics Rendering via Spatial and Temporal Path Reuse
Xiaofeng Xu, Lu Wang, and Beibei Wang
3D Object Tracking for Rough Models
Xiuqiang Song, Weijian Xie, Jiachen Li, Nan Wang, Fan Zhong, Guofeng Zhang, and Xueying Qin
Learning and Image Processing
A Post Processing Technique to Automatically Remove Floater Artifacts in Neural Radiance Fields
Tristan Wirth, Arne Rak, Volker Knauthe, and Dieter W. Fellner
MAPMaN: Multi-Stage U-Shaped Adaptive Pattern Matching Network for Semantic Segmentation of Remote Sensing Images
Tingfeng Hong, Xiaowen Ma, Xinyu Wang, Rui Che, Chenlu Hu, Tian Feng, and Wei Zhang
Balancing Rotation Minimizing Frames with Additional Objectives
Christopher Mossman, Richard H. Bartels, and Faramarz F. Samavati
Radiance and Appearance
Generating Parametric BRDFs from Natural Language Descriptions
Sean Memery, Osmar Cedron, and Kartic Subr
Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation
Ruiyang Liu, Jinxu Xiang, Bowen Zhao, Ran Zhang, Jingyi Yu, and Changxi Zheng
Reconstructing 3D Human Pose from RGB-D Data with Occlusions
Bowen Dang, Xi Zhao, Bowen Zhang, and He Wang
Color Harmonization on Images
Fast Grayscale Morphology for Circular Window
Yuji Moroto and Nobuyuki Umetani
BubbleFormer: Bubble Diagram Generation via Dual Transformer Models
Jiahui Sun, Liping Zheng, Gaofeng Zhang, and Wenming Wu

BibTeX (42-Issue 7)
                
@article{10.1111:cgf.14985,
  journal = {Computer Graphics Forum}, title = {{Pacific Graphics 2023 - CGF 42-7: Frontmatter}},
  author = {Chaine, Raphaëlle and Deng, Zhigang and Kim, Min H.}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14985}
}

@article{10.1111:cgf.14929,
  journal = {Computer Graphics Forum}, title = {{IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields}},
  author = {Choi, Changwoon and Kim, Juhyeon and Kim, Young Min}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14929}
}

@article{10.1111:cgf.14931,
  journal = {Computer Graphics Forum}, title = {{Robust Novel View Synthesis with Color Transform Module}},
  author = {Kim, Sang Min and Choi, Changwoon and Heo, Hyeongjun and Kim, Young Min}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14931}
}

@article{10.1111:cgf.14930,
  journal = {Computer Graphics Forum}, title = {{Learning to Generate and Manipulate 3D Radiance Field by a Hierarchical Diffusion Framework with CLIP Latent}},
  author = {Wang, Jiaxu and Zhang, Ziyi and Xu, Renjing}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14930}
}

@article{10.1111:cgf.14932,
  journal = {Computer Graphics Forum}, title = {{Meso-Skeleton Guided Hexahedral Mesh Design}},
  author = {Viville, Paul and Kraemer, Pierre and Bechmann, Dominique}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14932}
}

@article{10.1111:cgf.14933,
  journal = {Computer Graphics Forum}, title = {{A Surface Subdivision Scheme Based on Four-Directional S^1_3 Non-Box Splines}},
  author = {Huang, Zhangjin}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14933}
}

@article{10.1111:cgf.14935,
  journal = {Computer Graphics Forum}, title = {{Data-guided Authoring of Procedural Models of Shapes}},
  author = {Hossain, Ishtiaque and Shen, I-Chao and Igarashi, Takeo and Kaick, Oliver van}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14935}
}

@article{10.1111:cgf.14934,
  journal = {Computer Graphics Forum}, title = {{Groupwise Shape Correspondence Refinement with a Region of Interest Focus}},
  author = {Galmiche, Pierre and Seo, Hyewon}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14934}
}

@article{10.1111:cgf.14937,
  journal = {Computer Graphics Forum}, title = {{D-Cloth: Skinning-based Cloth Dynamic Prediction with a Three-stage Network}},
  author = {Li, Yu Di and Tang, Min and Chen, Xiao Rui and Yang, Yun and Tong, Ruo Feng and An, Bai Lin and Yang, Shuang Cai and Li, Yao and Kou, Qi Long}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14937}
}

@article{10.1111:cgf.14936,
  journal = {Computer Graphics Forum}, title = {{Authoring Terrains with Spatialised Style}},
  author = {Perche, Simon and Peytavie, Adrien and Benes, Bedrich and Galin, Eric and Guérin, Eric}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14936}
}

@article{10.1111:cgf.14938,
  journal = {Computer Graphics Forum}, title = {{Controllable Garment Image Synthesis Integrated with Frequency Domain Features}},
  author = {Liang, Xinru and Mo, Haoran and Gao, Chengying}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14938}
}

@article{10.1111:cgf.14939,
  journal = {Computer Graphics Forum}, title = {{Combating Spurious Correlations in Loose-fitting Garment Animation Through Joint-Specific Feature Learning}},
  author = {Diao, Junqi and Xiao, Jun and He, Yihong and Jiang, Haiyong}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14939}
}

@article{10.1111:cgf.14940,
  journal = {Computer Graphics Forum}, title = {{CP-NeRF: Conditionally Parameterized Neural Radiance Fields for Cross-scene Novel View Synthesis}},
  author = {He, Hao and Liang, Yixun and Xiao, Shishi and Chen, Jierun and Chen, Yingcong}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14940}
}

@article{10.1111:cgf.14941,
  journal = {Computer Graphics Forum}, title = {{Interactive Authoring of Terrain using Diffusion Models}},
  author = {Lochner, Joshua and Gain, James and Perche, Simon and Peytavie, Adrien and Galin, Eric and Guérin, Eric}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14941}
}

@article{10.1111:cgf.14942,
  journal = {Computer Graphics Forum}, title = {{Structure Learning for 3D Point Cloud Generation from Single RGB Images}},
  author = {Charrada, Tarek Ben and Laga, Hamid and Tabia, Hedi}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14942}
}

@article{10.1111:cgf.14943,
  journal = {Computer Graphics Forum}, title = {{Neural Shading Fields for Efficient Facial Inverse Rendering}},
  author = {Rainer, Gilles and Bridgeman, Lewis and Ghosh, Abhijeet}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14943}
}

@article{10.1111:cgf.14944,
  journal = {Computer Graphics Forum}, title = {{Facial Image Shadow Removal via Graph-based Feature Fusion}},
  author = {Zhang, Ling and Chen, Ben and Liu, Zheng and Xiao, Chunxia}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14944}
}

@article{10.1111:cgf.14945,
  journal = {Computer Graphics Forum}, title = {{A Perceptual Shape Loss for Monocular 3D Face Reconstruction}},
  author = {Otto, Christopher and Chandran, Prashanth and Zoss, Gaspard and Gross, Markus and Gotardo, Paulo and Bradley, Derek}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14945}
}

@article{10.1111:cgf.14946,
  journal = {Computer Graphics Forum}, title = {{Efficient Interpolation of Rough Line Drawings}},
  author = {Chen, Jiazhou and Zhu, Xinding and Even, Melvin and Basset, Jean and Bénard, Pierre and Barla, Pascal}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14946}
}

@article{10.1111:cgf.14947,
  journal = {Computer Graphics Forum}, title = {{Sharing Model Framework for Zero-Shot Sketch-Based Image Retrieval}},
  author = {Ho, Yi-Hsuan and Way, Der-Lor and Shih, Zen-Chung}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14947}
}

@article{10.1111:cgf.14948,
  journal = {Computer Graphics Forum}, title = {{GA-Sketching: Shape Modeling from Multi-View Sketching with Geometry-Aligned Deep Implicit Functions}},
  author = {Zhou, Jie and Luo, Zhongjin and Yu, Qian and Han, Xiaoguang and Fu, Hongbo}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14948}
}

@article{10.1111:cgf.14949,
  journal = {Computer Graphics Forum}, title = {{Semantics-guided Generative Diffusion Model with a 3DMM Model Condition for Face Swapping}},
  author = {Liu, Xiyao and Liu, Yang and Zheng, Yuhao and Yang, Ting and Zhang, Jian and Wang, Victoria and Fang, Hui}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14949}
}

@article{10.1111:cgf.14950,
  journal = {Computer Graphics Forum}, title = {{Palette-Based and Harmony-Guided Colorization for Vector Icons}},
  author = {Lin, Miao and Shen, I-Chao and Chin, Hsiao-Yuan and Chen, Ruo-Xi and Chen, Bing-Yu}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14950}
}

@article{10.1111:cgf.14951,
  journal = {Computer Graphics Forum}, title = {{Multi-Level Implicit Function for Detailed Human Reconstruction by Relaxing SMPL Constraints}},
  author = {Ma, Xikai and Zhao, Jieyu and Teng, Yiqing and Yao, Li}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14951}
}

@article{10.1111:cgf.14952,
  journal = {Computer Graphics Forum}, title = {{Multi-Modal Face Stylization with a Generative Prior}},
  author = {Li, Mengtian and Dong, Yi and Lin, Minxuan and Huang, Haibin and Wan, Pengfei and Ma, Chongyang}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14952}
}

@article{10.1111:cgf.14953,
  journal = {Computer Graphics Forum}, title = {{An Efficient Self-supporting Infill Structure for Computational Fabrication}},
  author = {Wang, Shengfa and Liu, Zheng and Hu, Jiangbei and Lei, Na and Luo, Zhongxuan}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14953}
}

@article{10.1111:cgf.14954,
  journal = {Computer Graphics Forum}, title = {{Fabricatable 90° Pop-ups: Interactive Transformation of a 3D Model into a Pop-up Structure}},
  author = {Fujikawa, Junpei and Ijiri, Takashi}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14954}
}

@article{10.1111:cgf.14955,
  journal = {Computer Graphics Forum}, title = {{Efficient Neural Representation of Volumetric Data using Coordinate-Based Networks}},
  author = {Devkota, Sudarshan and Pattanaik, Sumant}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14955}
}

@article{10.1111:cgf.14956,
  journal = {Computer Graphics Forum}, title = {{A Differential Diffusion Theory for Participating Media}},
  author = {Cen, Yunchi and Li, Chen and Li, Frederick W. B. and Yang, Bailin and Liang, Xiaohui}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14956}
}

@article{10.1111:cgf.14958,
  journal = {Computer Graphics Forum}, title = {{Multi-scale Iterative Model-guided Unfolding Network for NLOS Reconstruction}},
  author = {Su, Xiongfei and Hong, Yu and Ye, Juntian and Xu, Feihu and Yuan, Xin}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14958}
}

@article{10.1111:cgf.14957,
  journal = {Computer Graphics Forum}, title = {{Precomputed Radiative Heat Transport for Efficient Thermal Simulation}},
  author = {Freude, Christian and Hahn, David and Rist, Florian and Lipp, Lukas and Wimmer, Michael}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14957}
}

@article{10.1111:cgf.14959,
  journal = {Computer Graphics Forum}, title = {{Robust Distribution-aware Color Correction for Single-shot Images}},
  author = {Dhillon, Daljit Singh J. and Joshi, Parisha and Baron, Jessica and Patterson, Eric K.}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14959}
}

@article{10.1111:cgf.14960,
  journal = {Computer Graphics Forum}, title = {{Enhancing Low-Light Images: A Variation-based Retinex with Modified Bilateral Total Variation and Tensor Sparse Coding}},
  author = {Yang, Weipeng and Gao, Hongxia and Zou, Wenbin and Huang, Shasha and Chen, Hongsheng and Ma, Jianliang}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14960}
}

@article{10.1111:cgf.14961,
  journal = {Computer Graphics Forum}, title = {{MOVIN: Real-time Motion Capture using a Single LiDAR}},
  author = {Jang, Deok-Kyeong and Yang, Dongseok and Jang, Deok-Yun and Choi, Byeoli and Jin, Taeil and Lee, Sung-Hee}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14961}
}

@article{10.1111:cgf.14962,
  journal = {Computer Graphics Forum}, title = {{DAFNet: Generating Diverse Actions for Furniture Interaction by Learning Conditional Pose Distribution}},
  author = {Jin, Taeil and Lee, Sung-Hee}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14962}
}

@article{10.1111:cgf.14963,
  journal = {Computer Graphics Forum}, title = {{OptCtrlPoints: Finding the Optimal Control Points for Biharmonic 3D Shape Deformation}},
  author = {Kim, Kunho and Uy, Mikaela Angelina and Paschalidou, Despoina and Jacobson, Alec and Guibas, Leonidas J. and Sung, Minhyuk}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14963}
}

@article{10.1111:cgf.14964,
  journal = {Computer Graphics Forum}, title = {{Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring}},
  author = {Xue, Danna and Corral, Javier Vazquez and Herranz, Luis and Zhang, Yanning and Brown, Michael S.}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14964}
}

@article{10.1111:cgf.14965,
  journal = {Computer Graphics Forum}, title = {{Data-Driven Ink Painting Brushstroke Rendering}},
  author = {Madono, Koki and Simo-Serra, Edgar}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14965}
}

@article{10.1111:cgf.14966,
  journal = {Computer Graphics Forum}, title = {{Continuous Layout Editing of Single Images with Diffusion Models}},
  author = {Zhang, Zhiyuan and Huang, Zhitong and Liao, Jing}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14966}
}

@article{10.1111:cgf.14967,
  journal = {Computer Graphics Forum}, title = {{Error-bounded Image Triangulation}},
  author = {Fang, Zhi-Duo and Guo, Jia-Peng and Xiao, Yanyang and Fu, Xiao-Ming}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14967}
}

@article{10.1111:cgf.14968,
  journal = {Computer Graphics Forum}, title = {{Dissection Puzzles Composed of Multicolor Polyominoes}},
  author = {Kita, Naoki}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14968}
}

@article{10.1111:cgf.14969,
  journal = {Computer Graphics Forum}, title = {{H-ETC2: Design of a CPU-GPU Hybrid ETC2 Encoder}},
  author = {Lee, Hyeon-ki and Nah, Jae-Ho}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14969}
}

@article{10.1111:cgf.14970,
  journal = {Computer Graphics Forum}, title = {{Refinement of Hair Geometry by Strand Integration}},
  author = {Maeda, Ryota and Takayama, Kenshi and Taketomi, Takafumi}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14970}
}

@article{10.1111:cgf.14971,
  journal = {Computer Graphics Forum}, title = {{Fine Back Surfaces Oriented Human Reconstruction for Single RGB-D Images}},
  author = {Fang, Xianyong and Qian, Yu and He, Jinshen and Wang, Linbo and Liu, Zhengyi}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14971}
}

@article{10.1111:cgf.14972,
  journal = {Computer Graphics Forum}, title = {{Deep Shape and SVBRDF Estimation using Smartphone Multi-lens Imaging}},
  author = {Fan, Chongrui and Lin, Yiming and Ghosh, Abhijeet}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14972}
}

@article{10.1111:cgf.14973,
  journal = {Computer Graphics Forum}, title = {{SVBRDF Reconstruction by Transferring Lighting Knowledge}},
  author = {Zhu, Pengfei and Lai, Shuichang and Chen, Mufan and Guo, Jie and Liu, Yifan and Guo, Yanwen}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14973}
}

@article{10.1111:cgf.14974,
  journal = {Computer Graphics Forum}, title = {{World-Space Spatiotemporal Path Resampling for Path Tracing}},
  author = {Zhang, Hangyu and Wang, Beibei}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14974}
}

@article{10.1111:cgf.14975,
  journal = {Computer Graphics Forum}, title = {{Efficient Caustics Rendering via Spatial and Temporal Path Reuse}},
  author = {Xu, Xiaofeng and Wang, Lu and Wang, Beibei}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14975}
}

@article{10.1111:cgf.14976,
  journal = {Computer Graphics Forum}, title = {{3D Object Tracking for Rough Models}},
  author = {Song, Xiuqiang and Xie, Weijian and Li, Jiachen and Wang, Nan and Zhong, Fan and Zhang, Guofeng and Qin, Xueying}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14976}
}

@article{10.1111:cgf.14977,
  journal = {Computer Graphics Forum}, title = {{A Post Processing Technique to Automatically Remove Floater Artifacts in Neural Radiance Fields}},
  author = {Wirth, Tristan and Rak, Arne and Knauthe, Volker and Fellner, Dieter W.}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14977}
}

@article{10.1111:cgf.14978,
  journal = {Computer Graphics Forum}, title = {{MAPMaN: Multi-Stage U-Shaped Adaptive Pattern Matching Network for Semantic Segmentation of Remote Sensing Images}},
  author = {Hong, Tingfeng and Ma, Xiaowen and Wang, Xinyu and Che, Rui and Hu, Chenlu and Feng, Tian and Zhang, Wei}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14978}
}

@article{10.1111:cgf.14979,
  journal = {Computer Graphics Forum}, title = {{Balancing Rotation Minimizing Frames with Additional Objectives}},
  author = {Mossman, Christopher and Bartels, Richard H. and Samavati, Faramarz F.}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14979}
}

@article{10.1111:cgf.14980,
  journal = {Computer Graphics Forum}, title = {{Generating Parametric BRDFs from Natural Language Descriptions}},
  author = {Memery, Sean and Cedron, Osmar and Subr, Kartic}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14980}
}

@article{10.1111:cgf.14981,
  journal = {Computer Graphics Forum}, title = {{Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation}},
  author = {Liu, Ruiyang and Xiang, Jinxu and Zhao, Bowen and Zhang, Ran and Yu, Jingyi and Zheng, Changxi}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14981}
}

@article{10.1111:cgf.14982,
  journal = {Computer Graphics Forum}, title = {{Reconstructing 3D Human Pose from RGB-D Data with Occlusions}},
  author = {Dang, Bowen and Zhao, Xi and Zhang, Bowen and Wang, He}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14982}
}

@article{10.1111:cgf.14983,
  journal = {Computer Graphics Forum}, title = {{Fast Grayscale Morphology for Circular Window}},
  author = {Moroto, Yuji and Umetani, Nobuyuki}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14983}
}

@article{10.1111:cgf.14984,
  journal = {Computer Graphics Forum}, title = {{BubbleFormer: Bubble Diagram Generation via Dual Transformer Models}},
  author = {Sun, Jiahui and Zheng, Liping and Zhang, Gaofeng and Wu, Wenming}, year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.}, ISSN = {1467-8659}, DOI = {10.1111/cgf.14984}
}


Recent Submissions

  • Pacific Graphics 2023 - CGF 42-7: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
  • IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Choi, Changwoon; Kim, Juhyeon; Kim, Young Min; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We propose IBL-NeRF, which decomposes the neural radiance fields (NeRF) of large-scale indoor scenes into intrinsic components. Recent approaches further decompose the baked radiance of the implicit volume into intrinsic components such that one can partially approximate the rendering equation. However, they are limited to representing isolated objects with a shared environment lighting, and suffer from computational burden to aggregate rays with Monte Carlo integration. In contrast, our prefiltered radiance field extends the original NeRF formulation to capture the spatial variation of lighting within the scene volume, in addition to surface properties. Specifically, the scenes of diverse materials are decomposed into intrinsic components for rendering, namely, albedo, roughness, surface normal, irradiance, and prefiltered radiance. All of the components are inferred as neural images from MLP, which can model large-scale general scenes. Especially the prefiltered radiance effectively models the volumetric light field, and captures spatial variation beyond a single environment light. The prefiltering aggregates rays in a set of predefined neighborhood sizes such that we can replace the costly Monte Carlo integration of global illumination with a simple query from a neural image. By adopting NeRF, our approach inherits superior visual quality and multi-view consistency for synthesized images as well as the intrinsic components. We demonstrate the performance on scenes with complex object layouts and light configurations, which could not be processed in any of the previous works.
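    The shading query described above can be pictured with a short sketch. The following is a minimal illustration, not the authors' code: a stand-in MLP plays the role of the prefiltered radiance field, and a roughness-dependent prefilter size replaces Monte Carlo integration with a single query. All names and dimensions are assumptions.
    ```python
    import torch
    import torch.nn as nn

    # Stand-in for the prefiltered radiance field: input is (position, reflected
    # direction, prefilter size), output is RGB radiance.
    prefiltered_radiance = nn.Sequential(nn.Linear(7, 64), nn.ReLU(), nn.Linear(64, 3))

    def specular_term(position, reflect_dir, roughness):
        # position, reflect_dir: (N, 3); roughness: (N, 1). A rougher surface reads
        # a wider prefilter neighborhood, so one query approximates the reflection
        # integral that Monte Carlo sampling would otherwise estimate.
        query = torch.cat([position, reflect_dir, roughness], dim=-1)
        return prefiltered_radiance(query)
    ```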
  • Robust Novel View Synthesis with Color Transform Module
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Kim, Sang Min; Choi, Changwoon; Heo, Hyeongjun; Kim, Young Min; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    The advancements of the Neural Radiance Field (NeRF) and its variants have demonstrated remarkable capabilities in generating photo-realistic novel views from a small set of input images. While recent works suggest various techniques and model architectures that enhance speed or reconstruction quality, little attention is paid to exploring the RGB color space of input images. In this paper, we propose a universal color transform module that can maximally harness the captured evidence for the neural networks at hand. The color transform module utilizes an encoder-decoder framework that maps the RGB color space into a new latent space, enhancing the expressiveness of the input domain. We attach the encoder and the decoder at the input and output of a NeRF model of choice, respectively, and jointly optimize them to maintain the cycle consistency of the proposed transform, in addition to minimizing the reconstruction errors in the feature domain. Our comprehensive experiments demonstrate that the learned color space can significantly improve the quality of reconstructions compared to the conventional RGB representation. Its benefits are particularly pronounced in challenging scenarios characterized by low-light environments and scenes with low-textured regions. The proposed color transform pushes the boundaries of limitations in the input domain and offers a promising avenue for advancing the reconstruction capabilities of various neural representations. Source code is available at https://github.com/sangminkim-99/ColorTransformModule.
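    As a rough illustration of the module's structure (the layer sizes and latent dimension are assumptions, not the released code), an encoder maps ground-truth RGB into a latent color space where the NeRF is supervised, while a decoder and a cycle-consistency loss keep the transform invertible:
    ```python
    import torch
    import torch.nn as nn

    def mlp(dims):
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(nn.ReLU())
        return nn.Sequential(*layers)

    encoder = mlp([3, 32, 8])  # RGB -> latent color feature (latent size assumed)
    decoder = mlp([8, 32, 3])  # latent -> RGB

    def color_transform_losses(rgb_gt, nerf_pred_latent):
        z = encoder(rgb_gt)                           # pixels in latent color space
        recon = ((nerf_pred_latent - z) ** 2).mean()  # NeRF loss in the latent domain
        cycle = ((decoder(z) - rgb_gt) ** 2).mean()   # cycle consistency of the transform
        return recon + cycle
    ```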
  • Learning to Generate and Manipulate 3D Radiance Field by a Hierarchical Diffusion Framework with CLIP Latent
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Wang, Jiaxu; Zhang, Ziyi; Xu, Renjing; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    3D-aware generative adversarial networks (GANs) are widely adopted for generating and editing neural radiance fields (NeRF). However, these methods still suffer from GAN-related issues, including degraded diversity and training instability. Moreover, 3D-aware GANs treat the NeRF pipeline as a regularizer and do not operate directly on 3D assets, leading to imperfect 3D consistency. In addition, independent changes during disentangled editing cannot be ensured, because generators share some shallow hidden features. To address these challenges, we propose the first purely diffusion-based three-stage framework for generative and editing tasks, with a series of well-designed loss functions that can directly handle 3D models. In addition, we present a generalizable neural point field as our 3D representation, which explicitly disentangles geometry and appearance in feature spaces. For 3D data conversion, it simplifies the preparation pipeline of datasets. Assisted by the representation, our diffusion model can separately manipulate the shape and appearance in a hierarchical manner by image/text prompts that are provided by the CLIP encoder. Moreover, it can generate new samples by adding a simple generative head. Experiments show that our approach outperforms the SOTA work in the generative tasks of direct generation of 3D representations and novel image synthesis, and completely disentangles the manipulation of shape and appearance with correct semantic correspondence in the editing tasks.
  • Meso-Skeleton Guided Hexahedral Mesh Design
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Viville, Paul; Kraemer, Pierre; Bechmann, Dominique; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We present a novel approach for the generation of hexahedral meshes in a volume domain given its meso-skeleton. This compact representation of the topology and geometry, composed of both curve and surface parts, is used to produce a raw decomposition of the domain into hexahedral blocks. Analysis of the different local configurations of the skeleton leads to the construction of a set of connection surfaces that are used as a scaffold onto which the hexahedral blocks are assembled. These local configurations of the skeleton completely determine the singularities of the final mesh, and by following the skeleton, the geometry of the produced mesh naturally follows the geometry of the domain. Depending on the end user needs, the obtained mesh can be further adapted, refined or optimized, for example to better fit the boundary of the domain. Our algorithm does not involve the resolution of any global problem, most decisions are taken locally and it is thus highly suitable for parallel processing. This efficiency allows the user to stay in the loop for the correction or edition of the meso-skeleton for which a first sketch can be given by an existing automatic extraction algorithm.
  • A Surface Subdivision Scheme Based on Four-Directional S^1_3 Non-Box Splines
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Huang, Zhangjin; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    In this paper, we propose a novel surface subdivision scheme called non-box subdivision, which is generalized from four-directional S^1_3 non-box splines. The resulting subdivision surfaces achieve C^1 continuity with the convex hull property. This scheme can be regarded as either a four-directional subdivision or a special quadrilateral subdivision. When used as a quadrilateral subdivision, the proposed scheme can control the shape of the limit surface more flexibly than traditional schemes due to the natural introduction of auxiliary face control vertices.
  • Data-guided Authoring of Procedural Models of Shapes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Hossain, Ishtiaque; Shen, I-Chao; Igarashi, Takeo; Kaick, Oliver van; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Procedural models enable the generation of a large amount of diverse shapes by varying the parameters of the model. However, writing a procedural model for replicating a collection of reference shapes is difficult, requiring much inspection of the original and replicated shapes during the development of the model. In this paper, we introduce a data-guided method for aiding a programmer in creating a procedural model to replicate a collection of reference shapes. The user starts by writing an initial procedural model, and the system automatically predicts the model parameters for reference shapes, also grouping shapes by how well they are approximated by the current procedural model. The user can then update the procedural model based on the given feedback and iterate the process. Our system thus automates the tedious process of discovering the parameters that replicate reference shapes, allowing the programmer to focus on designing the high-level rules that generate the shapes. We demonstrate through qualitative examples and a user study that our method is able to speed up the development time for creating procedural models of 2D and 3D man-made shapes.
  • Groupwise Shape Correspondence Refinement with a Region of Interest Focus
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Galmiche, Pierre; Seo, Hyewon; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    While collections of scan shapes are becoming more prevalent in many real-world applications, finding accurate and dense correspondences across multiple shapes remains a challenging task. In this work, we introduce a new approach for refining non-rigid correspondences among a collection of 3D shapes undergoing non-rigid deformation. Our approach incorporates a Region Of Interest (ROI) into the refinement process, which is specified by the user on one shape within the collection. Based on the functional map framework and more specifically on the notion of cycle-consistency, our formulation improves the overall matching consistency while prioritizing that of the region of interest. Specifically, the initial pairwise correspondences are refined by first defining the localized harmonics that are confined within the transferred ROI on each shape, and subsequently applying the CCLB (Canonical Consistent Latent Basis) framework both on the global and the localized harmonics. This leads to an enhanced matching accuracy for both the ROIs and the overall shapes across the collection. We evaluate our method on various synthetic and real scan datasets, in comparison with the state-of-the-art techniques.
  • D-Cloth: Skinning-based Cloth Dynamic Prediction with a Three-stage Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Li, Yu Di; Tang, Min; Chen, Xiao Rui; Yang, Yun; Tong, Ruo Feng; An, Bai Lin; Yang, Shuang Cai; Li, Yao; Kou, Qi Long; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We propose a three-stage network that utilizes a skinning-based model to accurately predict dynamic cloth deformation. Our approach decomposes cloth deformation into three distinct components: static, coarse dynamic, and wrinkle dynamic components. To capture these components, we train our three-stage network accordingly. In the first stage, the static component is predicted by constructing a static skinning model that incorporates learned joint increments and skinning weight increments. Then, in the second stage, the coarse dynamic component is added to the static skinning model by incorporating serialized skeleton information. Finally, in the third stage, the mesh sequence stage refines the prediction by incorporating the wrinkle dynamic component using serialized mesh information. We have implemented our network and used it in a Unity game scene, enabling real-time prediction of cloth dynamics. Our implementation achieves impressive prediction speeds of approximately 3.65ms using an NVIDIA GeForce RTX 3090 GPU and 9.66ms on an Intel i7-7700 CPU. Compared to SOTA methods, our network excels in accurately capturing fine dynamic cloth deformations.
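    The decomposition reads naturally as a sum of three predicted components. A schematic sketch follows; the three networks and their signatures are placeholders for illustration, not the paper's architecture:
    ```python
    def predict_cloth_vertices(static_net, coarse_net, wrinkle_net,
                               pose, skeleton_seq, mesh_seq):
        static = static_net(pose)             # stage 1: static skinning component
        coarse = coarse_net(skeleton_seq)     # stage 2: coarse dynamics from serialized skeleton
        wrinkle = wrinkle_net(mesh_seq)       # stage 3: wrinkle dynamics from serialized meshes
        return static + coarse + wrinkle      # final per-vertex cloth deformation
    ```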
  • Authoring Terrains with Spatialised Style
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Perche, Simon; Peytavie, Adrien; Benes, Bedrich; Galin, Eric; Guérin, Eric; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Various terrain modelling methods have been proposed over the past decades, providing efficient and often interactive authoring tools. However, they seldom include any notion of style, which is critical for designers in the entertainment industry. We introduce a new generative network method that bridges the gap between automatic terrain synthesis and authoring, providing a versatile set of authoring tools allowing spatialised style. We build upon the StyleGAN2 architecture and extend it with authoring tools. Given an input sketch or existing elevation map, our method generates a terrain with features that can be authored, enhanced, and augmented using interactive brushes and style manipulation tools. The strength of our approach lies in the versatility and interoperability of the different tools. We validate our method quantitatively, with drainage calculations compared against previous techniques, and qualitatively, by asking users to follow a prompt or freely create a terrain.
  • Controllable Garment Image Synthesis Integrated with Frequency Domain Features
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Liang, Xinru; Mo, Haoran; Gao, Chengying; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Using sketches and textures to synthesize garment images is able to conveniently display the realistic visual effect in the design phase, which greatly increases the efficiency of fashion design. Existing garment image synthesis methods from a sketch and a texture tend to fail in working on complex textures, especially those with periodic patterns. We propose a controllable garment image synthesis framework that takes as inputs an outline sketch and a texture patch and generates garment images with complicated and diverse texture patterns. To improve the performance of global texture expansion, we exploit the frequency domain features in the generative process, which are from a Fast Fourier Transform (FFT) and able to represent the periodic information of the patterns. We also introduce a perceptual loss in the frequency domain to measure the similarity of two texture pattern patches in terms of their intrinsic periodicity and regularity. Comparisons with existing approaches and sufficient ablation studies demonstrate the effectiveness of our method that is capable of synthesizing impressive garment images with diverse texture patterns while guaranteeing proper texture expansion and pattern consistency.
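    A frequency-domain similarity of the kind described can be sketched with a standard FFT: comparing amplitude spectra is invariant to phase shifts, which suits periodic patterns. This is an illustrative stand-in, not the paper's exact loss:
    ```python
    import torch
    import torch.nn.functional as F

    def fft_amplitude(x):
        # x: (B, C, H, W) texture patch; the amplitude spectrum captures the
        # periodicity and regularity of the pattern independent of translation.
        return torch.abs(torch.fft.fft2(x, norm="ortho"))

    def frequency_perceptual_loss(patch_a, patch_b):
        return F.l1_loss(fft_amplitude(patch_a), fft_amplitude(patch_b))
    ```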
  • Combating Spurious Correlations in Loose-fitting Garment Animation Through Joint-Specific Feature Learning
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Diao, Junqi; Xiao, Jun; He, Yihong; Jiang, Haiyong; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We address the 3D animation of loose-fitting garments from a sequence of body motions. State-of-the-art approaches treat all body joints as a whole to encode motion features, which usually gives rise to learned spurious correlations between garment vertices and irrelevant joints as shown in Fig. 1. To cope with the issue, we encode temporal motion features in a joint-wise manner and learn an association matrix to map human joints only to most related garment regions by encouraging its sparsity. In this way, spurious correlations are mitigated and better performance is achieved. Furthermore, we devise the joint-specific pose space deformation (PSD) to decompose the high-dimensional displacements as the combination of dynamic details caused by individual joint poses. Extensive experiments show that our method outperforms previous works in most indicators. Moreover, garment animations are not interfered with by artifacts caused by spurious correlations, which further validates the effectiveness of our approach. The code is available at https://github.com/qiji77/JointNet.
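    The joint-to-region mapping can be pictured as a learned gating matrix under a sparsity penalty; the dimensions and the sigmoid gating below are illustrative assumptions, not the paper's implementation:
    ```python
    import torch
    import torch.nn as nn

    num_joints, num_regions, feat_dim = 24, 64, 32
    assoc = nn.Parameter(torch.randn(num_regions, num_joints) * 0.01)

    def region_features(joint_feats):
        # joint_feats: (B, num_joints, feat_dim) temporal motion features per joint.
        mask = torch.sigmoid(assoc)                        # soft joint-region association
        return torch.einsum("rj,bjf->brf", mask, joint_feats)

    def sparsity_penalty():
        # Entries are non-negative, so the mean is an L1 penalty: it drives most
        # gates toward zero so each region keeps only its most related joints.
        return torch.sigmoid(assoc).mean()
    ```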
  • CP-NeRF: Conditionally Parameterized Neural Radiance Fields for Cross-scene Novel View Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) He, Hao; Liang, Yixun; Xiao, Shishi; Chen, Jierun; Chen, Yingcong; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Neural radiance fields (NeRF) have demonstrated a promising research direction for novel view synthesis. However, the existing approaches either require per-scene optimization that takes significant computation time or condition on local features which overlook the global context of images. To tackle this shortcoming, we propose the Conditionally Parameterized Neural Radiance Fields (CP-NeRF), a plug-in module that enables NeRF to leverage contextual information from different scales. Instead of optimizing the model parameters of NeRFs directly, we train a Feature Pyramid hyperNetwork (FPN) that extracts view-dependent global and local information from images within or across scenes to produce the model parameters. Our model can be trained end-to-end with standard photometric loss from NeRF. Extensive experiments demonstrate that our method can significantly boost the performance of NeRF, achieving state-of-the-art results in various benchmark datasets.
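    A toy hypernetwork conveys the conditioning idea: image-derived features produce the weights of the radiance MLP rather than optimizing them per scene. The feature extractor and all sizes below are assumptions:
    ```python
    import torch
    import torch.nn as nn

    feat_dim, in_dim, hidden = 128, 3, 64

    class ConditionedLayer(nn.Module):
        def __init__(self):
            super().__init__()
            self.w_head = nn.Linear(feat_dim, hidden * in_dim)  # emits layer weights
            self.b_head = nn.Linear(feat_dim, hidden)           # emits layer biases

        def forward(self, scene_feat, x):
            # scene_feat: (feat_dim,) context from the feature pyramid; x: (N, in_dim).
            W = self.w_head(scene_feat).view(hidden, in_dim)
            b = self.b_head(scene_feat)
            return torch.relu(x @ W.t() + b)  # first NeRF layer, parameterized by context
    ```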
  • Interactive Authoring of Terrain using Diffusion Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Lochner, Joshua; Gain, James; Perche, Simon; Peytavie, Adrien; Galin, Eric; Guérin, Eric; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Generating heightfield terrains is a necessary precursor to the depiction of computer-generated natural scenes in a variety of applications. Authoring such terrains is made challenging by the need for interactive feedback, effective user control, and perceptually realistic output encompassing a range of landforms. We address these challenges by developing a terrain-authoring framework underpinned by an adaptation of diffusion models for conditional image synthesis, trained on real-world elevation data. This framework supports automated cleaning of the training set; authoring control through style selection and feature sketches; the ability to import and freely edit pre-existing terrains; and resolution amplification up to the limits of the source data. Our framework improves on previous machine-learning approaches by expanding landform variety beyond mountainous terrain to encompass cliffs, canyons, and plains; providing a better balance between terseness and specificity in user control; and improving the fidelity of global terrain structure and perceptual realism. This is demonstrated through drainage simulations and a user study testing the perceived realism for different classes of terrain. The full source code, Blender add-on, and pretrained models are available.
  • Structure Learning for 3D Point Cloud Generation from Single RGB Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Charrada, Tarek Ben; Laga, Hamid; Tabia, Hedi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    3D point clouds can represent complex 3D objects of arbitrary topologies and with fine-grained details. They are, however, hard to regress from images using convolutional neural networks, making tasks such as 3D reconstruction from monocular RGB images challenging. In fact, unlike images and volumetric grids, point clouds are unstructured and thus lack proper parameterization, which makes them difficult to process using convolutional operations. Existing point-based 3D reconstruction methods that tried to address this problem rely on complex end-to-end architectures with high computational costs. Instead, we propose in this paper a novel mechanism that decouples the 3D reconstruction problem from the structure (or parameterization) learning task, making the 3D reconstruction of objects of arbitrary topologies tractable and thus easier to learn. We achieve this using a novel Teacher-Student network where the Teacher learns to structure the point clouds. The Student then harnesses the knowledge learned by the Teacher to efficiently regress accurate 3D point clouds. We train the Teacher network using 3D ground-truth supervision and the Student network using the Teacher's annotations. Finally, we employ a novel refinement network to overcome the upper-bound performance that is set by the Teacher network. Our extensive experiments on ShapeNet and Pix3D benchmarks, and on in-the-wild images demonstrate that the proposed approach outperforms previous methods in terms of reconstruction accuracy and visual quality.
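    The Teacher-Student handoff can be sketched with a Chamfer distance, a standard choice for point-cloud regression (the actual losses and networks in the paper may differ):
    ```python
    import torch

    def chamfer(a, b):
        # a: (B, N, 3), b: (B, M, 3) point clouds.
        d = torch.cdist(a, b)  # (B, N, M) pairwise distances
        return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

    def student_step(student, teacher, images, optimizer):
        with torch.no_grad():
            target = teacher(images)       # Teacher's structured point-cloud annotation
        loss = chamfer(student(images), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```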
  • Neural Shading Fields for Efficient Facial Inverse Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Rainer, Gilles; Bridgeman, Lewis; Ghosh, Abhijeet; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Given a set of unstructured photographs of a subject under unknown lighting, 3D geometry reconstruction is relatively easy, but reflectance estimation remains a challenge. This is because it requires disentangling lighting from reflectance in the ambiguous observations. Solutions exist leveraging statistical, data-driven priors to output plausible reflectance maps even in the underconstrained single-view, unknown lighting setting. We propose a very low-cost inverse optimization method that does not rely on data-driven priors, to obtain high-quality diffuse and specular, albedo and normal maps in the setting of multi-view unknown lighting. We introduce compact neural networks that learn the shading of a given scene by efficiently finding correlations in the appearance across the face. We jointly optimize the implicit global illumination of the scene in the networks with explicit diffuse and specular reflectance maps that can subsequently be used for physically-based rendering. We analyze the veracity of results on ground truth data, and demonstrate that our reflectance maps maintain more detail and greater personal identity than state-of-the-art deep learning and differentiable rendering methods.
  • Facial Image Shadow Removal via Graph-based Feature Fusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhang, Ling; Chen, Ben; Liu, Zheng; Xiao, Chunxia; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Although natural image shadow removal methods have made significant progress, they often perform poorly on facial images due to the unique features of the face. Moreover, most learning-based methods are designed using pixel-level strategies, ignoring the global contextual relationships in the image. In this paper, we propose a graph-based feature fusion network (GraphFFNet) for facial image shadow removal. We apply a graph-based convolution encoder (GCEncoder) to extract global contextual relationships between regions in the coarse shadow-less image produced by an image flipper. Then, we introduce a feature modulation module to fuse the global topological relations onto the image features, enhancing the feature representation of the network. Finally, the fusion decoder integrates all the effective features to reconstruct the image features, producing a satisfactory shadow-removal result. Experimental results demonstrate the superiority of the proposed GraphFFNet over the state-of-the-art methods and validate its effectiveness for facial image shadow removal.
  • A Perceptual Shape Loss for Monocular 3D Face Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Otto, Christopher; Chandran, Prashanth; Zoss, Gaspard; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Monocular 3D face reconstruction is a wide-spread topic, and existing approaches tackle the problem either through fast neural network inference or offline iterative reconstruction of face geometry. In either case carefully-designed energy functions are minimized, commonly including loss terms like a photometric loss, a landmark reprojection loss, and others. In this work we propose a new loss function for monocular face capture, inspired by how humans would perceive the quality of a 3D face reconstruction given a particular image. It is widely known that shading provides a strong indicator for 3D shape in the human visual system. As such, our new 'perceptual' shape loss aims to judge the quality of a 3D face estimate using only shading cues. Our loss is implemented as a discriminator-style neural network that takes an input face image and a shaded render of the geometry estimate, and then predicts a score that perceptually evaluates how well the shaded render matches the given image. This 'critic' network operates on the RGB image and geometry render alone, without requiring an estimate of the albedo or illumination in the scene. Furthermore, our loss operates entirely in image space and is thus agnostic to mesh topology. We show how our new perceptual shape loss can be combined with traditional energy terms for monocular 3D face optimization and deep neural network regression, improving upon current state-of-the-art results.
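    The critic idea can be illustrated with a small CNN that scores the photo and the shaded render jointly; this architecture is an assumption for illustration, not the paper's network:
    ```python
    import torch
    import torch.nn as nn

    class ShapeCritic(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(  # 6 input channels: RGB photo + RGB shaded render
                nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 1, 4, stride=2, padding=1),
            )

        def forward(self, image, shaded_render):
            # Concatenate channel-wise and average the score map: a perceptual
            # judgment of how well the shaded geometry explains the photo.
            x = torch.cat([image, shaded_render], dim=1)
            return self.net(x).mean(dim=(1, 2, 3))
    ```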
  • Efficient Interpolation of Rough Line Drawings
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Chen, Jiazhou; Zhu, Xinding; Even, Melvin; Basset, Jean; Bénard, Pierre; Barla, Pascal; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    In traditional 2D animation, sketches drawn at distant keyframes are used to design motion, yet it would be far too labor-intensive to draw all the inbetween frames to fully visualize that motion. We propose a novel efficient interpolation algorithm that generates these intermediate frames in the artist's drawing style. Starting from a set of registered rough vector drawings, we first generate a large number of candidate strokes during a pre-process, and then, at each intermediate frame, we select the subset of those that appropriately conveys the underlying interpolated motion, interpolates the stroke distributions of the key drawings, and introduces a minimum amount of temporal artifacts. In addition, we propose quantitative error metrics to objectively evaluate different stroke selection strategies. We demonstrate the potential of our method on various animations and drawing styles, and show its superiority over competing raster- and vector-based methods.
  • Sharing Model Framework for Zero-Shot Sketch-Based Image Retrieval
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Ho, Yi-Hsuan; Way, Der-Lor; Shih, Zen-Chung; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Sketch-based image retrieval (SBIR) is an emerging task in computer vision. Research interest has arisen in solving this problem under the realistic and challenging setting of zero-shot learning. Given a sketch as a query, the search goal is to retrieve the corresponding photographs in a zero-shot scenario. In this paper, we divide this challenging problem into three tasks and propose a sharing model framework that addresses them. First, the weights of the proposed sharing model effectively reduce the modality gap between sketches and photographs. Second, semantic information is used to handle the different label spaces of the training and testing stages; the sketch and photograph domains share this semantic information. Finally, a memory mechanism is used to reduce the intrinsic variety of sketches, even when they all belong to the same class. Sketches and photographs dominate the embeddings in turn. Because sketches are not limited by language, our ultimate goal is to find a method that can replace text searches. We also designed a demonstration program to show the use of the proposed method in real-world applications. Our results indicate that the proposed method exhibits considerably higher zero-shot SBIR performance than other state-of-the-art methods on the challenging Sketchy, TU-Berlin, and QuickDraw datasets.
  • Item
    GA-Sketching: Shape Modeling from Multi-View Sketching with Geometry-Aligned Deep Implicit Functions
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhou, Jie; Luo, Zhongjin; Yu, Qian; Han, Xiaoguang; Fu, Hongbo; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Sketch-based shape modeling aims to bridge the gap between 2D drawing and 3D modeling by providing an intuitive and accessible approach to create 3D shapes from 2D sketches. However, existing methods still suffer from limitations in reconstruction quality and multi-view interaction friendliness, hindering their practical application. This paper proposes a faithful and user-friendly iterative solution to tackle these limitations by learning geometry-aligned deep implicit functions from one or multiple sketches. Our method lifts 2D sketches to volume-based feature tensors, which align strongly with the output 3D shape, enabling accurate reconstruction and faithful editing. Such a geometry-aligned feature encoding technique is well-suited to iterative modeling since features from different viewpoints can be easily memorized or aggregated. Based on these advantages, we design a unified interactive system for sketch-based shape modeling. It enables users to generate the desired geometry iteratively by drawing sketches from any number of viewpoints. In addition, it allows users to edit the generated surface by making a few local modifications. We demonstrate the effectiveness and practicality of our method with extensive experiments and user studies, where we found that our method outperformed existing methods in terms of accuracy, efficiency, and user satisfaction. The source code of this project is available at https://github.com/LordLiang/GA-Sketching.
  • Item
    Semantics-guided Generative Diffusion Model with a 3DMM Model Condition for Face Swapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Liu, Xiyao; Liu, Yang; Zheng, Yuhao; Yang, Ting; Zhang, Jian; Wang, Victoria; Fang, Hui; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Face swapping is a technique that replaces a face in a target media with another face of a different identity from a source face image. Despite the impressive synthesis quality achieved by recent generative models, research on the effective utilisation of prior knowledge and semantic guidance for photo-realistic face swapping remains limited. In this paper, we propose a novel conditional Denoising Diffusion Probabilistic Model (DDPM) enforced by a two-level face prior guidance. Specifically, it includes (i) an image-level condition generated by a 3D Morphable Model (3DMM), and (ii) a high-semantic-level guidance driven by information extracted from several pre-trained attribute classifiers, for high-quality face image synthesis. Although the face image swapped by the 3DMM does not achieve photo-realistic quality on its own, it provides a strong image-level prior, in parallel with high-level face semantics, to guide the DDPM towards high-fidelity image generation. The experimental results demonstrate that our method outperforms state-of-the-art face swapping methods on benchmark datasets in terms of synthesis quality and the capability to preserve the target face attributes while swapping in the source face identity.
  • Item
    Palette-Based and Harmony-Guided Colorization for Vector Icons
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Lin, Miao; Shen, I-Chao; Chin, Hsiao-Yuan; Chen, Ruo-Xi; Chen, Bing-Yu; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Colorizing icons is a challenging task, even for skillful artists, as it involves balancing aesthetics and practical considerations. Prior works have primarily focused on colorizing pixel-based icons, which do not seamlessly integrate into the current vector-based icon design workflow. In this paper, we propose a palette-based colorization algorithm for vector icons without the need for rasterization. Our algorithm takes a vector icon and a five-color palette as input and generates various colorized results for designers to choose from. Inspired by the common icon design workflow, we developed our algorithm to consist of two steps: generating a colorization template and performing the palette-based color transfer. To generate the colorization templates, we introduce a novel vector icon colorization model that employs an MRF-based loss and a color harmony loss. The color harmony loss encourages the alignment of the resulting color template with widely used harmony templates. We then map the predicted colorization template to chroma-like palette colors to obtain diverse colorization results. We compare our results with those generated by previous pixel-based icon colorization methods and validate the effectiveness of our algorithm through both qualitative and quantitative evaluations. Our method enables icon designers to explore diverse colorization results for a single icon using different color palettes while also efficiently evaluating the suitability of a color palette for a set of icons.
  • Item
    Multi-Level Implicit Function for Detailed Human Reconstruction by Relaxing SMPL Constraints
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Ma, Xikai; Zhao, Jieyu; Teng, Yiqing; Yao, Li; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Aiming at enhancing the rationality and robustness of the results of single-view image-based human reconstruction and acquiring richer surface details, we propose a multi-level reconstruction framework based on implicit functions. This framework first utilizes the predicted SMPL model (Skinned Multi-Person Linear Model) as a prior to further predict consistent 2.5D sketches (depth map and normal map), and then obtains a coarse reconstruction result through an Implicit Function fitting network (IF-Net). Subsequently, with a pixel-aligned feature extraction module and a fine IF-Net, the strong constraints imposed by SMPL are relaxed to add more surface details to the reconstruction result and remove noise. Finally, to address the trade-off between surface details and rationality under complex poses, we propose a novel fusion repair algorithm that reuses existing information. This algorithm compensates for the missing parts of the fine reconstruction results with the coarse reconstruction results, leading to a robust, rational, and richly detailed reconstruction. Our experiments prove the effectiveness of our method and demonstrate that it achieves the richest surface details while ensuring rationality. The project website can be found at https://github.com/MXKKK/2.5D-MLIF.
  • Item
    Multi-Modal Face Stylization with a Generative Prior
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Li, Mengtian; Dong, Yi; Lin, Minxuan; Huang, Haibin; Wan, Pengfei; Ma, Chongyang; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    In this work, we introduce a new approach for face stylization. Despite existing methods achieving impressive results in this task, there is still room for improvement in generating high-quality artistic faces with diverse styles and accurate facial reconstruction. Our proposed framework, MMFS, supports multi-modal face stylization by leveraging the strengths of StyleGAN, which it integrates into an encoder-decoder architecture. Specifically, we use the mid-resolution and high-resolution layers of StyleGAN as the decoder to generate high-quality faces, while aligning its low-resolution layer with the encoder to extract and preserve input facial details. We also introduce a two-stage training strategy, where we train the encoder in the first stage to align the feature maps with StyleGAN and enable a faithful reconstruction of input faces. In the second stage, the entire network is fine-tuned with artistic data for stylized face generation. To enable the fine-tuned model to be applied in zero-shot and one-shot stylization tasks, we train an additional mapping network from the large-scale Contrastive-Language-Image-Pre-training (CLIP) space to the latent w+ space of the fine-tuned StyleGAN. Qualitative and quantitative experiments show that our framework achieves superior performance in both one-shot and zero-shot face stylization tasks, outperforming state-of-the-art methods by a large margin.
  • Item
    An Efficient Self-supporting Infill Structure for Computational Fabrication
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Wang, Shengfa; Liu, Zheng; Hu, Jiangbei; Lei, Na; Luo, Zhongxuan; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Efficiently optimizing the internal structure of 3D printing models is a critical focus in the field of industrial manufacturing, particularly when designing self-supporting structures that offer high stiffness and lightweight characteristics. To tackle this challenge, this research introduces a novel approach featuring a self-supporting polyhedral structure and an efficient optimization algorithm. Specifically, the internal space of the model is filled with a combination of self-supporting octahedrons and tetrahedrons, strategically arranged to maximize structural integrity. Our algorithm optimizes the wall thickness of the polyhedron elements to satisfy specific stiffness requirements, while ensuring efficient alignment of the filled structures in finite element calculations. Our approach results in a considerable decrease in optimization time. The optimization process is stable, converges rapidly, and consistently delivers effective results. Through a series of experiments, we have demonstrated the effectiveness and efficiency of our method in achieving the desired design objectives.
  • Item
    Fabricatable 90° Pop-ups: Interactive Transformation of a 3D Model into a Pop-up Structure
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Fujikawa, Junpei; Ijiri, Takashi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Ninety-degree pop-ups are a type of papercraft in which a three-dimensional (3D) structure pops up when the angle of the base fold is 90°. They are fabricated by cutting and creasing a single sheet of paper. Traditional 90° pop-ups are limited to 3D shapes composed only of planar pieces because they are made of paper. In this paper, we present fabricatable 90° pop-ups: novel pop-ups that employ the 90° pop-up mechanism, consist of components with curved shapes, and can be fabricated using a 3D printer. We propose a method for converting a 3D model into a fabricatable 90° pop-up. The user first interactively designs a layout of pop-up components, and the system automatically deforms the components using the 3D model. Because the generated pop-ups contain the necessary cuts and folds, no additional assembly process is required. To demonstrate the feasibility of the proposed method, we designed and fabricated various 90° pop-ups using a 3D printer.
  • Item
    Efficient Neural Representation of Volumetric Data using Coordinate-Based Networks.
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Devkota, Sudarshan; Pattanaik, Sumant; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    In this paper, we propose an efficient approach for the compression and representation of volumetric data utilizing coordinate-based networks and multi-resolution hash encoding. Efficient compression of volumetric data is crucial for various applications, such as medical imaging and scientific simulations. Our approach enables effective compression by learning a mapping between spatial coordinates and intensity values. We compare different encoding schemes and demonstrate the superiority of multi-resolution hash encoding in terms of compression quality and training efficiency. Furthermore, we leverage optimization-based meta-learning, specifically using the Reptile algorithm, to learn weight initialization for neural representations tailored to volumetric data, enabling faster convergence during optimization. Additionally, we compare our approach with state-of-the-art methods to showcase improved image quality and compression ratios. These findings highlight the potential of coordinate-based networks and multi-resolution hash encoding for an efficient and accurate representation of volumetric data, paving the way for advancements in large-scale data visualization and other applications.
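    The multi-resolution hash encoding referenced here follows Instant-NGP's scheme; below is a minimal sketch of the per-level spatial hash that maps integer grid coordinates to hash-table slots (prime constants from the Instant-NGP paper; the table size is an assumption):

    ```python
    import numpy as np

    PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

    def hash_encode(coords, table_size_log2=19):
        """Map integer 3D grid coordinates to hash-table slots for one level;
        repeating this over grid resolutions gives the multi-resolution encoding."""
        coords = np.asarray(coords, dtype=np.uint64)
        h = coords[..., 0] * PRIMES[0]
        h ^= coords[..., 1] * PRIMES[1]
        h ^= coords[..., 2] * PRIMES[2]
        return h & np.uint64((1 << table_size_log2) - 1)
    ```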
  • Item
    A Differential Diffusion Theory for Participating Media
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Cen, Yunchi; Li, Chen; Li, Frederick W. B.; Yang, Bailin; Liang, Xiaohui; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We present a novel approach to differentiable rendering for participating media, addressing the challenge of computing scene parameter derivatives. While existing methods focus on derivative computation within volumetric path tracing, they fail to significantly improve computational performance due to the expensive computation of multiply-scattered light. To overcome this limitation, we propose a differential diffusion theory inspired by the classical diffusion equation. Our theory enables real-time computation of arbitrary derivatives such as optical absorption, scattering coefficients, and anisotropic parameters of phase functions. By solving derivatives through the differential form of the diffusion equation, our approach achieves remarkable speed gains compared to Monte Carlo methods. This marks the first differentiable rendering framework to compute scene parameter derivatives based on diffusion approximation. Additionally, we derive the discrete form of diffusion equation derivatives, facilitating efficient numerical solutions. Our experimental results using synthetic and realistic images demonstrate the accurate and efficient estimation of arbitrary scene parameter derivatives. Our work represents a significant advancement in differentiable rendering for participating media, offering a practical and efficient solution to compute derivatives while addressing the limitations of existing approaches.
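    As a hedged illustration of the core idea (a reconstruction from the classical diffusion approximation, not the paper's exact formulation): differentiating the diffusion equation with respect to a scene parameter by the product rule yields another diffusion-type equation for the derivative of the fluence, with source terms built from the already-solved fluence, so the same discretization and solver can be reused.

    ```latex
    % Classical diffusion approximation with fluence \phi and source Q:
    %   \nabla\cdot(D\,\nabla\phi) - \sigma_a\,\phi + Q = 0,
    %   D = 1 / \bigl(3(\sigma_a + \sigma_s(1-g))\bigr).
    % Differentiating in a scene parameter \theta (product rule):
    \nabla\cdot\bigl(D\,\nabla(\partial_\theta\phi)\bigr) - \sigma_a\,(\partial_\theta\phi)
      = (\partial_\theta\sigma_a)\,\phi
        - \nabla\cdot\bigl((\partial_\theta D)\,\nabla\phi\bigr)
        - \partial_\theta Q
    ```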
  • Item
    Multi-scale Iterative Model-guided Unfolding Network for NLOS Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Su, Xiongfei; Hong, Yu; Ye, Juntian; Xu, Feihu; Yuan, Xin; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Non-line-of-sight (NLOS) imaging can reconstruct hidden objects by analyzing the diffuse reflection off relay surfaces, and is potentially useful in autonomous driving, medical imaging, and national defense. Despite the challenges of a low signal-to-noise ratio (SNR) and an ill-conditioned inverse problem, NLOS imaging has developed rapidly in recent years. While deep neural networks have achieved impressive success in NLOS imaging, most of them lack flexibility when dealing with multiple spatial-temporal resolutions and multi-scene images in practical applications. To bridge the gap between learning methods and physical priors, we present a novel end-to-end Multi-scale Iterative Model-guided Unfolding (MIMU) framework, with superior performance and strong flexibility. Furthermore, we overcome the lack of real training data with a general architecture that can be trained in simulation. Unlike existing encoder-decoder architectures and generative adversarial networks, the proposed method requires only one trained model that adapts to various dimensions, such as different sampling time resolutions, different spatial resolutions, and multiple channels for colorful scenes. Simulation and real-data experiments verify that the proposed method achieves better reconstruction results, both qualitatively and quantitatively, than existing methods.
  • Item
    Precomputed Radiative Heat Transport for Efficient Thermal Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Freude, Christian; Hahn, David; Rist, Florian; Lipp, Lukas; Wimmer, Michael; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Architectural design and urban planning are complex design tasks. Predicting the thermal impact of design choices at interactive rates enhances the ability of designers to improve energy efficiency and avoid problematic heat islands while maintaining design quality. We show how to use and adapt methods from computer graphics to efficiently simulate heat transfer via thermal radiation, thereby improving user guidance in the early design phase of large-scale construction projects and helping to increase energy efficiency and outdoor comfort. Our method combines a hardware-accelerated photon tracing approach with a carefully selected finite element discretization, inspired by precomputed radiance transfer. This combination allows us to precompute a radiative transport operator, which we then use to rapidly solve either steady-state or transient heat transport throughout the entire scene. Our formulation integrates time-dependent solar irradiation data without requiring changes in the transport operator, allowing us to quickly analyze many different scenarios such as common weather patterns, monthly or yearly averages, or transient simulations spanning multiple days or weeks. We show how our approach can be used for interactive design workflows such as city planning via fast feedback in the early design phase.
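    The computational pattern here, factoring a precomputed transport operator once and reusing it across many irradiation scenarios, can be sketched with a radiosity-style linear system (a toy stand-in for the paper's finite element formulation; F, rho, and emission are assumed inputs, not the paper's interfaces):

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def precompute_transport(F, rho):
        """Factor (I - diag(rho) @ F) once; F is the precomputed radiative
        transport matrix, rho the per-element reflectances."""
        n = F.shape[0]
        return lu_factor(np.eye(n) - rho[:, None] * F)

    def solve_scenario(factored, emission):
        """Reuse the factorization for one irradiation scenario (fast)."""
        return lu_solve(factored, emission)

    # Many scenarios (e.g. hourly sun positions) amortize one factorization:
    #   factored = precompute_transport(F, rho)
    #   results = [solve_scenario(factored, e) for e in emissions]
    ```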
  • Item
    Robust Distribution-aware Color Correction for Single-shot Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Dhillon, Daljit Singh J.; Joshi, Parisha; Baron, Jessica; Patterson, Eric K.; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Color correction for photographed images is an ill-posed problem. It is also a crucial initial step towards material acquisition for inverse rendering methods or pipelines. Several state-of-the-art methods rely on reducing color differences for imaged reference color chart blocks of known color values to devise or optimize their solution. In this paper, we first establish through simulations the limitation of this minimality criterion, which in principle results in overfitting. Next, we study and propose a few spatial distribution measures to augment the evaluation criteria. Thereafter, we propose a novel patch-based, white-point-centric approach that processes luminance and chrominance information separately to improve on the color matching task. We compare our method qualitatively with several state-of-the-art methods using our augmented evaluation criteria, along with quantitative examinations. Finally, we perform rigorous experiments and demonstrate results that clearly establish the benefits of our proposed method.
  • Item
    Enhancing Low-Light Images: A Variation-based Retinex with Modified Bilateral Total Variation and Tensor Sparse Coding
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Yang, Weipeng; Gao, Hongxia; Zou, Wenbin; Huang, Shasha; Chen, Hongsheng; Ma, Jianliang; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Low-light conditions often result in the presence of significant noise and artifacts in captured images, which can be further exacerbated during the image enhancement process, leading to a decrease in visual quality. This paper aims to present an effective low-light image enhancement model based on the variation Retinex model that successfully suppresses noise and artifacts while preserving image details. To achieve this, we propose a modified Bilateral Total Variation to better smooth out fine textures in the illuminance component while maintaining weak structures. Additionally, tensor sparse coding is employed as a regularization term to remove noise and artifacts from the reflectance component. Experimental results on extensive and challenging datasets demonstrate the effectiveness of the proposed method, exhibiting superior or comparable performance compared to state-of-the-art approaches. Code, dataset and experimental results are available at https://github.com/YangWeipengscut/BTRetinex.
  • Item
    MOVIN: Real-time Motion Capture using a Single LiDAR
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Jang, Deok-Kyeong; Yang, Dongseok; Jang, Deok-Yun; Choi, Byeoli; Jin, Taeil; Lee, Sung-Hee; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Recent advancements in technology have brought forth new forms of interactive applications, such as the social metaverse, where end users interact with each other through their virtual avatars. In such applications, precise full-body tracking is essential for an immersive experience and a sense of embodiment with the virtual avatar. However, current motion capture systems are not easily accessible to end users due to their high cost, the requirement for special skills to operate them, or the discomfort associated with wearable devices. In this paper, we present MOVIN, a data-driven generative method for real-time motion capture with global tracking, using a single LiDAR sensor. Our autoregressive conditional variational autoencoder (CVAE) model learns the distribution of pose variations conditioned on the given 3D point cloud from LiDAR. As a central factor for high-accuracy motion capture, we propose a novel feature encoder to learn the correlation between the historical 3D point cloud data and global and local pose features, resulting in effective learning of the pose prior. Global pose features include root translation, rotation, and foot contacts, while local features comprise joint positions and rotations. Subsequently, a pose generator takes into account the sampled latent variable along with the features from the previous frame to generate a plausible current pose. Our framework accurately predicts the performer's 3D global information and local joint details while effectively considering temporally coherent movements across frames. We demonstrate the effectiveness of our architecture through quantitative and qualitative evaluations, comparing it against state-of-the-art methods. Additionally, we implement a real-time application to showcase our method in real-world scenarios. The MOVIN dataset is available at https://movin3d.github.io/movin_pg2023/.
  • Item
    DAFNet: Generating Diverse Actions for Furniture Interaction by Learning Conditional Pose Distribution
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Jin, Taeil; Lee, Sung-Hee; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We present DAFNet, a novel data-driven framework capable of generating various actions for indoor environment interactions. By taking desired root and upper-body poses as control inputs, DAFNet generates whole-body poses suitable for furniture of various shapes and combinations. To enable the generation of diverse actions, we introduce an action predictor that automatically infers the probabilities of individual action types based on the control input and environment. The action predictor is learned in an unsupervised manner by training a Gaussian Mixture Variational Autoencoder (GMVAE). Additionally, we propose a two-part normalizing-flow-based pose generator that sequentially generates upper and lower body poses. This two-part model improves motion quality and the accuracy of satisfying conditions over a single model generating the whole body. Our experiments show that DAFNet can create continuous character motion for indoor scene scenarios, and both qualitative and quantitative evaluations demonstrate the effectiveness of our framework.
  • Item
    OptCtrlPoints: Finding the Optimal Control Points for Biharmonic 3D Shape Deformation
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Kim, Kunho; Uy, Mikaela Angelina; Paschalidou, Despoina; Jacobson, Alec; Guibas, Leonidas J.; Sung, Minhyuk; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We propose OptCtrlPoints, a data-driven framework designed to identify the optimal sparse set of control points for reproducing target shapes using biharmonic 3D shape deformation. Control-point-based 3D deformation methods are widely utilized for interactive shape editing, and their usability is enhanced when the control points are sparse yet strategically distributed across the shape. With this objective in mind, we introduce a data-driven approach that can determine the most suitable set of control points, assuming that we have a given set of possible shape variations. The challenges associated with this task primarily stem from the computationally demanding nature of the problem. Two main factors contribute to this complexity: solving a large linear system for the biharmonic weight computation and addressing the combinatorial problem of finding the optimal subset of mesh vertices. To overcome these challenges, we propose a reformulation of the biharmonic computation that reduces the matrix size, making it dependent on the number of control points rather than the number of vertices. Additionally, we present an efficient search algorithm that significantly reduces the time complexity while still delivering a nearly optimal solution. Experiments on the SMPL, SMAL, and DeformingThings4D datasets demonstrate the efficacy of our method. Our control points achieve a better template-to-target fit than FPS, random search, and neural-network-based prediction. We also highlight the significant reduction in computation time from days to approximately 3 minutes.
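    The combinatorial subset search can be illustrated with a simple greedy stand-in (not the paper's search algorithm; fit_error is an assumed callback that solves the deformation fit for a candidate control point set over the given shape variations):

    ```python
    def greedy_control_points(candidates, fit_error, k):
        """Greedily add the candidate vertex whose inclusion most reduces
        the template-to-target fit error, until k control points are chosen."""
        chosen, remaining = [], list(candidates)
        for _ in range(k):
            best = min(remaining, key=lambda c: fit_error(chosen + [c]))
            chosen.append(best)
            remaining.remove(best)
        return chosen
    ```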
  • Item
    Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Xue, Danna; Corral, Javier Vazquez; Herranz, Luis; Zhang, Yanning; Brown, Michael S.; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Achieving visually consistent colors across multiple images is important when images are used in photo albums, websites, and brochures. Unfortunately, only a handful of methods address multi-image color consistency compared to one-to-one color transfer techniques. Furthermore, existing methods do not incorporate high-level features that can assist graphic designers in their work. To address these limitations, we introduce a framework that builds upon a previous palette-based color consistency method and incorporates three high-level features: white balance, saliency, and color naming. We show how these features overcome the limitations of the prior multi-image consistency workflow and showcase the user-friendly nature of our framework.
  • Item
    Data-Driven Ink Painting Brushstroke Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Madono, Koki; Simo-Serra, Edgar; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Although digital painting has advanced much in recent years, there is still a significant divide between physically drawn paintings and purely digitally drawn paintings. These differences arise due to the physical interactions between the brush, ink, and paper, which are hard to emulate in the digital domain. Most ink painting approaches have focused on either heuristics or physical simulation to attempt to bridge the gap between digital and analog; however, these approaches are still unable to capture the diversity of painting effects, such as ink fading or blotting, found in the real world. In this work, we propose a data-driven approach to generate ink paintings based on a semi-automatically collected high-quality real-world ink painting dataset. We use a multi-camera robot-based setup to automatically create a diversity of ink paintings, which allows for capturing the entire process in high resolution, including capturing detailed brush motions and drawing results. To ensure high-quality capture of the painting process, we calibrate the setup and perform occlusion-aware blending to capture all the strokes in high resolution in a robust and efficient way. Using our new dataset, we propose a recursive deep learning-based model to reproduce the ink paintings stroke by stroke while capturing complex ink painting effects such as bleeding and mixing. Our results corroborate the fidelity of the proposed approach to real hand-drawn ink paintings in comparison with existing approaches. We hope the availability of our dataset will encourage new research on digital realistic ink painting techniques.
  • Item
    Continuous Layout Editing of Single Images with Diffusion Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhang, Zhiyuan; Huang, Zhitong; Liao, Jing; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Recent advancements in large-scale text-to-image diffusion models have enabled many applications in image editing. However, none of these methods have been able to edit the layout of single existing images. To address this gap, we propose the first framework for layout editing of a single image while preserving its visual properties, thus allowing for continuous editing on a single image. Our approach is achieved through two key modules. First, to preserve the characteristics of multiple objects within an image, we disentangle the concepts of different objects and embed them into separate textual tokens using a novel method called masked textual inversion. Next, we propose a training-free optimization method to perform layout control for a pre-trained diffusion model, which allows us to regenerate images with learned concepts and align them with user-specified layouts. As the first framework to edit the layout of existing images, we demonstrate that our method is effective and outperforms other baselines that were modified to support this task. Code is available at our project page.
  • Item
    Error-bounded Image Triangulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Fang, Zhi-Duo; Guo, Jia-Peng; Xiao, Yanyang; Fu, Xiao-Ming; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We propose a novel image triangulation method to reduce the complexity of image triangulation under the color error-bounded constraint and the triangle quality constraint. Meanwhile, we realize a variety of visual effects by supporting different types of triangles (e.g., linear or curved) and color approximation functions (e.g., constant, linear, or quadratic). To adapt to these discontinuous and combinatorial objectives and constraints, we formulate it as a constrained optimization problem that is solved by a series of tailored local remeshing operations. The feasibility and practicability of our method are demonstrated over various types of images, such as organisms, landscapes, portraits and cartoons. Compared to state-of-the-art methods, our method generates far fewer triangles for the same color error or much smaller color errors using the same number of triangles.
  • Item
    Dissection Puzzles Composed of Multicolor Polyominoes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Kita, Naoki; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Dissection puzzles leverage geometric dissections, wherein a set of puzzle pieces can be reassembled in various configurations to yield unique geometric figures. Mathematically, a dissection between two 2D polygons can always be established. Consequently, researchers and puzzle enthusiasts strive to design unique dissection puzzles using the fewest pieces feasible. In this study, we introduce novel dissection puzzles crafted with multi-colored polyominoes. Diverging from the traditional aim of establishing geometric dissection between two 2D polygons with the minimal piece count, we seek to identify a common pool of polyomino pieces with colored faces that can be configured into multiple distinct shapes and appearances. Moreover, we offer a method to identify an optimized sequence for rearranging pieces from one form to another, thus minimizing the total relocation distance. This approach can guide users in puzzle assembly and lessen their physical exertion when manually reconfiguring pieces. It could potentially also decrease power consumption when pieces are reorganized using robotic assistance. We showcase the efficacy of our proposed approach through a wide range of shapes and appearances.
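    When pieces are interchangeable, minimizing the total relocation distance reduces to an assignment problem; below is a minimal sketch using the Hungarian algorithm (this ignores the constraint, present in the actual puzzles, that only pieces of identical shape and coloring may swap roles):

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def min_total_relocation(src, dst):
        """Match source piece positions (n, 2) to target slots (n, 2) so that
        the summed Euclidean relocation distance is minimal."""
        cost = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
        rows, cols = linear_sum_assignment(cost)
        return cols, cost[rows, cols].sum()
    ```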
  • Item
    H-ETC2: Design of a CPU-GPU Hybrid ETC2 Encoder
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Lee, Hyeon-ki; Nah, Jae-Ho; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    This paper proposes a novel CPU-GPU hybrid encoding method based on the ETC2 format, commonly used on mobile platforms. Traditional texture compression techniques often face a trade-off between encoding speed and quality. For a better trade-off, our approach utilizes both the CPU and GPU. In a pipeline we designed, the CPU encoder identifies problematic pixel blocks during the encoding process, and the GPU encoder re-encodes them. Additionally, we carefully improve the base CPU and GPU encoders regarding encoding speed and quality. As a result, our encoder minimizes compression artifacts, increases encoding speed, or achieves both of these goals compared to previous high-quality offline ETC2 encoders.
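    The CPU-flags-then-GPU-re-encodes pipeline can be sketched as follows (a toy outline; cpu_encode, gpu_encode, and the quality threshold are assumptions, not the paper's actual interfaces):

    ```python
    def hybrid_encode(blocks, cpu_encode, gpu_encode, psnr_threshold=38.0):
        """CPU-encode every 4x4 block, flag low-quality ones, and hand only
        the flagged blocks to the batched GPU re-encoder."""
        results, retry = [], []
        for i, block in enumerate(blocks):
            encoded, psnr = cpu_encode(block)
            results.append(encoded)
            if psnr < psnr_threshold:
                retry.append(i)
        for i, encoded in zip(retry, gpu_encode([blocks[i] for i in retry])):
            results[i] = encoded
        return results
    ```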
  • Item
    Refinement of Hair Geometry by Strand Integration
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Maeda, Ryota; Takayama, Kenshi; Taketomi, Takafumi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Reconstructing 3D hair is challenging due to its complex micro-scale geometry, and is of essential importance for the efficient creation of high-fidelity virtual humans. Existing hair capture methods based on multi-view stereo tend to generate results that are noisy and inaccurate. In this study, we propose a refinement method for hair geometry that incorporates the gradient of strands into the computation of their positions. We formulate a gradient integration strategy for hair strands. We evaluate the performance of our method using a synthetic multi-view dataset containing four hairstyles, and show that our refinement produces more accurate hair geometry. Furthermore, we tested our method on real image input, where it also produces plausible results. Our source code is publicly available at https://github.com/elerac/strand_integration.
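    A minimal sketch of the underlying idea, rebuilding strand vertex positions by forward-integrating per-segment direction vectors from the root (the uniform segment length and the simple forward scheme are assumptions; the paper formulates a tailored integration strategy):

    ```python
    import numpy as np

    def integrate_strand(root, directions, segment_length):
        """Rebuild positions of a strand from its root: directions is (N, 3),
        the result is (N + 1, 3) including the root itself."""
        d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
        steps = np.cumsum(d * segment_length, axis=0)
        return root + np.vstack([np.zeros(3), steps])
    ```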
  • Item
    Fine Back Surfaces Oriented Human Reconstruction for Single RGB-D Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Fang, Xianyong; Qian, Yu; He, Jinshen; Wang, Linbo; Liu, Zhengyi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Current single RGB-D image based human surface reconstruction methods generally take the RGB image and the captured frontal depth map together, so that the 3D cues from the frontal surface can help infer the full surface geometry. However, we observe that the back surface can often be quite different from the frontal surface; therefore, current methods can derail the recovery process by adopting such 3D cues, especially for the unseen back surface, and back surface inference should be performed without the frontal depth map. Consequently, we propose a novel human reconstruction framework that obtains human models with fine geometric details, especially on the back surfaces. In this approach, a progressive estimation method is introduced to effectively recover the unseen back depth maps: coarse back depth maps are recovered from the parametric models of the subjects, and fine ones are further obtained by a normal-map-conditioned GAN. This framework also includes a cross-attention based denoising method for the frontal depth maps. This method applies cross attention between the features of the last two layers encoded from the frontal depth maps, and thus suppresses noise in the fine depth maps using the attention of features from the low-noise, globally structured highest layer. Experimental results show the efficacy of the proposed ideas.
  • Item
    Deep Shape and SVBRDF Estimation using Smartphone Multi-lens Imaging
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Fan, Chongrui; Lin, Yiming; Ghosh, Abhijeet; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We present a deep neural network-based method that acquires high-quality shape and spatially varying reflectance of 3D objects using smartphone multi-lens imaging. Our method acquires two images simultaneously using a zoom lens and a wide-angle lens of a smartphone under either natural illumination or phone flash conditions, effectively functioning like a single-shot method. Unlike traditional multi-view stereo methods, which require sufficient differences in viewpoint and only estimate depth at a certain coarse scale, our method estimates fine-scale depth by utilising an optical-flow field extracted from the subtle baseline and perspective differences between the two simultaneously captured images. We further guide the SVBRDF estimation using the estimated depth, resulting in superior results compared to existing single-shot methods.
  • Item
    SVBRDF Reconstruction by Transferring Lighting Knowledge
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhu, Pengfei; Lai, Shuichang; Chen, Mufan; Guo, Jie; Liu, Yifan; Guo, Yanwen; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    The problem of reconstructing spatially-varying BRDFs from RGB images has been studied for decades. Researchers have found themselves in a dilemma: opting for either higher quality with the inconvenience of camera and light calibration, or greater convenience at the expense of compromised quality without complex setups. We address this challenge by introducing a two-branch network to learn the lighting effects in images. The two branches, referred to as Light-known and Light-aware, diverge in their need for light information. The Light-aware branch is guided by the Light-known branch to acquire the knowledge of discerning light effects and surface reflectance properties, but without relying on light positions. Both branches are trained using the synthetic dataset, but during testing on real-world cases without calibration, only the Light-aware branch is activated. To facilitate a more effective utilization of various light conditions, we employ gated recurrent units (GRUs) to fuse the features extracted from different images. The two modules mutually benefit when multiple inputs are provided. We present our reconstructed results on both synthetic and real-world examples, demonstrating high quality while maintaining a lightweight characteristic in comparison to previous methods.
  • Item
    World-Space Spatiotemporal Path Resampling for Path Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhang, Hangyu; Wang, Beibei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    With the advent of hardware-accelerated ray tracing, more and more real-time rendering applications tend to render images with ray-traced global illumination (GI). However, the low sample counts at real-time framerates bring enormous challenges to existing path sampling methods. Recent work (ReSTIR GI) samples indirect illumination effectively with a dramatic bias reduction. However, as a screen-space path resampling approach, it can only reuse paths at the first bounce and brings only subtle benefits for complex scenes. To this end, we propose a world-space spatiotemporal path resampling approach. Our approach caches more path samples into a world-space grid, which allows reusing sub-paths starting from non-primary path vertices. Furthermore, we introduce a practical normal-aware hash grid construction approach, providing more efficient candidate samples for path resampling. Eventually, our method achieves improvements ranging from 16.6% to 41.9% in terms of mean squared error (MSE) compared against the previous method, with only 4.4%–8.4% extra time cost.
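    One way to realize a normal-aware world-space grid key (a guess at the general idea, not the paper's construction) is to quantize the position to a cell and the normal to a dominant axis, so that samples on opposing surfaces, such as the two sides of a thin wall, do not share a cache cell:

    ```python
    def cell_key(position, normal, cell_size=0.5):
        """Hashable grid key combining a quantized position with a normal
        quantized to one of six dominant axis directions."""
        ix, iy, iz = (int(c // cell_size) for c in position)
        axis = max(range(3), key=lambda a: abs(normal[a]))
        return (ix, iy, iz, axis, normal[axis] >= 0.0)
    ```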
  • Item
    Efficient Caustics Rendering via Spatial and Temporal Path Reuse
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Xu, Xiaofeng; Wang, Lu; Wang, Beibei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Caustics are complex optical effects caused by light being concentrated in a small area due to reflection or refraction on surfaces with low roughness, typically under a sharp light source. Rendering caustic effects is challenging for Monte Carlo-based approaches due to the difficulty of sampling the specular paths. One effective solution is using the specular manifold to locate these valid specular paths. Unfortunately, it needs many iterations to find these paths, leading to long rendering times. To address this issue, our key insight is that the specular paths tend to be similar for neighboring shading points. To this end, we propose to reuse the specular paths spatially. More specifically, we generate some specular path samples at a low sample rate and then reuse these samples as the initialization for specular manifold walks among neighboring shading points. In this way, far fewer specular path-searching iterations are performed, thanks to the efficient initialization close to the final solution. Furthermore, this reuse strategy can be extended to dynamic scenes in a temporal manner, such as moving lights or deforming specular geometry. Our method outperforms current state-of-the-art methods and can handle multiple bounces of light and various scenes.
  • Item
    3D Object Tracking for Rough Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Song, Xiuqiang; Xie, Weijian; Li, Jiachen; Wang, Nan; Zhong, Fan; Zhang, Guofeng; Qin, Xueying; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Visual monocular 6D pose tracking methods for textureless or weakly-textured objects heavily rely on contour constraints established by a precise 3D model. However, precise models are not always available in reality, and rough models can potentially degrade tracking performance and impede the widespread usage of 3D object tracking. To address this new problem, we propose a novel tracking method that handles rough models. We reshape the rough contour through a probability map, which avoids explicitly processing the 3D rough model itself. We further emphasize the inner region information of the object, where points are sampled to provide color constraints. To sufficiently satisfy the assumption of small displacement between frames, the 2D translation of the object is pre-searched for a better initial pose. Finally, we combine constraints from both the contour and the inner region to optimize the object pose. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on both roughly and precisely modeled objects. Particularly for the highly rough model, the accuracy is significantly improved (40.4% vs. 16.9%).
  • Item
    A Post Processing Technique to Automatically Remove Floater Artifacts in Neural Radiance Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Wirth, Tristan; Rak, Arne; Knauthe, Volker; Fellner, Dieter W.; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Neural Radiance Fields have revolutionized Novel View Synthesis by providing impressive levels of realism. However, in most in-the-wild scenes they suffer from floater artifacts that occur due to sparse input images or strong view-dependent effects. We propose an approach that uses neighborhood-based clustering and a consistency metric on NeRF models trained at different scene scales to identify regions that contain floater artifacts, based on Instant-NGP's multiscale occupancy grids. These occupancy grids contain the positions of the relevant optical densities in the scene. By pruning the regions that we identified as containing floater artifacts, they are omitted during the rendering process, leading to higher-quality resulting images. Our approach has no negative runtime implications for the rendering process and does not require retraining of the underlying Multi Layer Perceptron. We show on a qualitative basis that our approach is suited to removing floater artifacts while preserving most of the scene's relevant geometry. Furthermore, we conduct a comparison to state-of-the-art techniques on the Nerfbusters dataset, which was created with measuring the implications of floater artifacts in mind. This comparison shows that our method outperforms currently available techniques. Our approach does not require additional user input and can be used in an interactive manner. In general, the presented approach is applicable to every architecture that uses an explicit representation of a scene's occupancy distribution to accelerate the rendering process.
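    The pruning step can be illustrated with a toy consistency test between occupancy grids obtained at two scene scales (a stand-in for the paper's neighborhood-based clustering and consistency metric; the 2x resolution ratio is an assumption):

    ```python
    import numpy as np

    def consistency_prune(occ_fine, occ_coarse):
        """Keep a fine-grid cell only if the model trained at the coarser
        scene scale also marks the enclosing cell as occupied; occ_fine has
        twice the resolution of occ_coarse along each axis (boolean arrays)."""
        confirmed = occ_coarse.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)
        return occ_fine & confirmed
    ```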
  • Item
    MAPMaN: Multi-Stage U-Shaped Adaptive Pattern Matching Network for Semantic Segmentation of Remote Sensing Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Hong, Tingfeng; Ma, Xiaowen; Wang, Xinyu; Che, Rui; Hu, Chenlu; Feng, Tian; Zhang, Wei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Remote sensing images (RSIs) often possess obvious background noise, exhibit a multi-scale phenomenon, and are characterized by complex scenes with ground objects in diverse spatial distribution patterns, bringing challenges to the corresponding semantic segmentation. CNN-based methods can hardly address the diverse spatial distributions of ground objects, especially their compositional relationships, while Vision Transformers (ViTs) introduce background noise and have a quadratic time complexity due to dense global matrix multiplications. In this paper, we introduce Adaptive Pattern Matching (APM), a lightweight method for long-range adaptive weight aggregation. Our APM obtains a set of pixels belonging to the same spatial distribution pattern as each pixel, and calculates the adaptive weights according to their compositional relationships. In addition, we design a tiny U-shaped network using the APM as a module to address the large variance of scales of ground objects in RSIs. This network is embedded after each stage of a backbone network to establish a Multi-stage U-shaped Adaptive Pattern Matching Network (MAPMaN) for nested multi-scale modeling of ground objects towards semantic segmentation of RSIs. Experiments on three datasets demonstrate that our MAPMaN can outperform the state-of-the-art methods in common metrics. The code is available at https://github.com/INiid/MAPMaN.
  • Item
    Balancing Rotation Minimizing Frames with Additional Objectives
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Mossman, Christopher; Bartels, Richard H.; Samavati, Faramarz F.; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    When moving along 3D curves, one may require local coordinate frames for visited points, such as for animating virtual cameras, controlling robotic motion, or constructing sweep surfaces. Often, consecutive coordinate frames should be similar, avoiding sharp twists. Previous work achieved this goal by using various methods to approximate rotation minimizing frames (RMFs) with respect to a curve's tangent. In this work, we use Householder transformations to construct preliminary tangent-aligned coordinate frames and then optimize these initial frames under the constraint that they remain tangent-aligned. This optimization minimizes the weighted sum of squared distances between selected vectors within the new frames and fixed vectors outside them (such as the axes of previous frames). By selecting different vectors for this objective function, we reproduce existing RMF approximation methods and modify them to consider additional objectives beyond rotation minimization. We also provide some example computer graphics use cases for this new frame tracking.
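    The Householder construction of a tangent-aligned frame can be sketched directly (standard numerical linear algebra; the constrained optimization that follows is the paper's contribution and is omitted here):

    ```python
    import numpy as np

    def householder_frame(tangent):
        """Orthonormal frame whose first axis is the unit tangent, built from
        one Householder reflection (sign chosen to avoid cancellation)."""
        t = tangent / np.linalg.norm(tangent)
        sign = 1.0 if t[0] >= 0.0 else -1.0
        v = t + sign * np.array([1.0, 0.0, 0.0])
        H = np.eye(3) - 2.0 * np.outer(v, v) / np.dot(v, v)
        # H is orthogonal and symmetric; its first column equals -sign * t.
        return -sign * H[:, 0], H[:, 1], H[:, 2]
    ```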
  • Item
    Generating Parametric BRDFs from Natural Language Descriptions
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Memery, Sean; Cedron, Osmar; Subr, Kartic; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Artistic authoring of 3D environments is a laborious enterprise that also requires skilled content creators. There have been impressive improvements in using machine learning to address different aspects of generating 3D content, such as generating meshes, arranging geometry, synthesizing textures, etc. In this paper we develop a model to generate Bidirectional Reflectance Distribution Functions (BRDFs) from descriptive textual prompts. BRDFs are four-dimensional probability distributions that characterize the interaction of light with surface materials. They are either represented parametrically, or by tabulating the probability density associated with every pair of incident and outgoing angles. The former lends itself to artistic editing while the latter is used when measuring the appearance of real materials. Numerous works have focused on hypothesizing BRDF models from images of materials. We learn a mapping from textual descriptions of materials to parametric BRDFs. Our model is first trained using a semi-supervised approach before being tuned via an unsupervised scheme. Although our model is general, in this paper we specifically generate parameters for MDL materials, conditioned on natural language descriptions, within NVIDIA's Omniverse platform. This enables use cases such as real-time text prompts to change materials of objects in 3D environments, such as ''dull plastic'' or ''shiny iron''. Since the output of our model is a parametric BRDF, rather than an image of the material, it may be used to render materials using any shape under arbitrarily specified viewing and lighting conditions.
  • Item
    Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Liu, Ruiyang; Xiang, Jinxu; Zhao, Bowen; Zhang, Ran; Yu, Jingyi; Zheng, Changxi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Neural Radiance Fields (NeRF) have significantly advanced the generation of highly realistic and expressive 3D scenes. However, the task of editing NeRF, particularly in terms of geometry modification, poses a significant challenge. This issue has obstructed NeRF's wider adoption across various applications. To tackle the problem of efficiently editing neural implicit fields, we introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field designated for each tetrahedron within the explicit mesh. Our framework bridges the explicit shape manipulation and the geometric editing of implicit fields by utilizing multigrid barycentric coordinate encoding, thus offering a pragmatic solution to deform, composite, and generate neural implicit fields while maintaining a complex volumetric appearance. Furthermore, we propose a comprehensive pipeline for editing neural implicit fields based on a set of explicit geometric editing operations. We show the robustness and adaptability of our system through diverse examples and experiments, including the editing of both synthetic objects and real captured data. Finally, we demonstrate the authoring process of a hybrid synthetic-captured object utilizing a variety of editing operations, underlining the transformative potential of Neural Impostor in the field of 3D content creation and manipulation.
  • Item
    Reconstructing 3D Human Pose from RGB-D Data with Occlusions
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Dang, Bowen; Zhao, Xi; Zhang, Bowen; Wang, He; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We propose a new method to reconstruct the 3D human body from RGB-D images with occlusions. The foremost challenge is the incompleteness of the RGB-D data due to occlusions between the body and the environment, leading to implausible reconstructions that suffer from severe human-scene penetration. To reconstruct a semantically and physically plausible human body, we propose to reduce the solution space based on scene information and prior knowledge. Our key idea is to constrain the solution space of the human body by considering the occluded body parts and visible body parts separately: modeling all plausible poses where the occluded body parts do not penetrate the scene, and constraining the visible body parts using depth data. Specifically, the first component is realized by a neural network that estimates the candidate region named the "free zone", a region carved out of the open space within which it is safe to search for poses of the invisible body parts without concern for penetration. The second component constrains the visible body parts using the "truncated shadow volume" of the scanned body point cloud. Furthermore, we propose to use a volume matching strategy, which yields better performance than surface matching, to match the human body with the confined region. We conducted experiments on the PROX dataset, and the results demonstrate that our method produces more accurate and plausible results compared with other methods.
  • Item
    Fast Grayscale Morphology for Circular Window
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Moroto, Yuji; Umetani, Nobuyuki; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Morphological operations are among the most popular classic image filters. The filter takes the maximum or minimum value within a window and is often used for thickening and thinning operations on light objects, which are important components of various workflows, such as object recognition and stylization. Circular windows are preferred over rectangular windows for obtaining isotropic filter results. However, existing efficient algorithms focus on rectangular windows or binary input images. Efficient morphological operations with circular windows for grayscale images remain challenging. In this study, we present a fast heuristic algorithm for grayscale morphology that decomposes circular windows using the convex hull of circles. We significantly accelerate traditional methods based on Minkowski addition by introducing new decomposition rules specialized for circular windows. As our morphological operation using a convex hull can be computed independently for each pixel, the algorithm is efficient on modern multithreaded hardware.
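    For reference, the operation being accelerated, grayscale dilation with a circular window, has the following naive O(r^2)-per-pixel form (a baseline sketch, not the paper's convex-hull decomposition):

    ```python
    import numpy as np

    def dilate_circular(image, radius):
        """Naive grayscale dilation: maximum over a circular window."""
        r = int(radius)
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
        offsets = np.argwhere(ys**2 + xs**2 <= radius**2) - r
        padded = np.pad(image, r, mode='edge')
        h, w = image.shape
        out = np.full(image.shape, -np.inf)
        for dy, dx in offsets:
            out = np.maximum(out, padded[r + dy:r + dy + h, r + dx:r + dx + w])
        return out
    ```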
  • Item
    BubbleFormer: Bubble Diagram Generation via Dual Transformer Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Sun, Jiahui; Zheng, Liping; Zhang, Gaofeng; Wu, Wenming; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Bubble diagrams serve as a crucial tool in the fields of architectural planning and graphic design. With the surge of Artificial Intelligence Generated Content (AIGC), there has been a continuous emergence of research and development efforts focused on utilizing bubble diagrams for layout design and generation; however, the generation of the bubble diagrams themselves remains largely unexplored. In this paper, we propose a novel generative model, BubbleFormer, for generating diverse and plausible bubble diagrams. BubbleFormer consists of two improved Transformer networks: NodeFormer and EdgeFormer. These networks generate the nodes and edges of the bubble diagram, respectively. To enhance the generation diversity, a VAE module is incorporated into BubbleFormer, allowing for the sampling and generation of numerous high-quality bubble diagrams. BubbleFormer is trained end-to-end and evaluated through qualitative and quantitative experiments. The results demonstrate that BubbleFormer can generate convincing and diverse bubble diagrams, which in turn drive downstream tasks to produce high-quality layout plans. The model also shows generalization capabilities in other layout generation tasks and outperforms state-of-the-art techniques in terms of quality and diversity. In previous work, input bubble diagrams had to be provided by users; our generative model therefore fills a significant gap in automated layout generation driven by bubble diagrams, enabling end-to-end layout design and generation. Code for this paper is at https://github.com/cgjiahui/BubbleFormer.