40-Issue 6


Issue Information

Articles

Visualizing and Interacting with Geospatial Networks: A Survey and Design Space

Schöttler, Sarah
Yang, Yalong
Pfister, Hanspeter
Bach, Benjamin

Parametric Skeletons with Reduced Soft‐Tissue Deformations

Tapia, Javier
Romero, Cristian
Pérez, Jesús
Otaduy, Miguel A.

Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN

Zhao, Yong
Yang, Le
Pei, Ercheng
Oveneke, Meshia Cédric
Alioscha‐Perez, Mitchel
Li, Longfei
Jiang, Dongmei
Sahli, Hichem

Fluid Reconstruction and Editing from a Monocular Video based on the SPH Model with External Force Guidance

Nie, Xiaoying
Hu, Yong
Su, Zhiyuan
Shen, Xukun

Linear Polarization Demosaicking for Monochrome and Colour Polarization Focal Plane Arrays

Qiu, Simeng
Fu, Qiang
Wang, Congli
Heidrich, Wolfgang

Self‐Supervised Learning of Part Mobility from Point Cloud Sequence

Shi, Yahao
Cao, Xinyu
Zhou, Bin

Transfer Deep Learning for Reconfigurable Snapshot HDR Imaging Using Coded Masks

Alghamdi, Masheal
Fu, Qiang
Thabet, Ali
Heidrich, Wolfgang

Fast Ray Tracing of Scale‐Invariant Integral Surfaces

Aydinlilar, Melike
Zanni, Cedric

Visualization of Tensor Fields in Mechanics

Hergl, Chiara
Blecha, Christian
Kretzschmar, Vanessa
Raith, Felix
Günther, Fabian
Stommel, Markus
Jankowai, Jochen
Hotz, Ingrid
Nagel, Thomas
Scheuermann, Gerik

IMAT: The Iterative Medial Axis Transform

Lee, Yonghyeon
Baek, Jonghyuk
Kim, Young Min
Park, Frank Chongwoo

An Efficient Hybrid Optimization Strategy for Surface Reconstruction

Bertolino, Giulia
Montemurro, Marco
Perry, Nicolas
Pourroy, Franck

SREC‐RT: A Structure for Ray Tracing Rounded Edges and Corners

Courtin, Simon
Ribardière, Mickael
Horna, Sebastien
Poulin, Pierre
Meneveaux, Daniel

Efficient Rendering of Ocular Wavefront Aberrations using Tiled Point‐Spread Function Splatting

Csoba, István
Kunkli, Roland

A Rapid, End‐to‐end, Generative Model for Gaseous Phenomena from Limited Views

Qiu, Sheng
Li, Chen
Wang, Changbo
Qin, Hong

NOVA: Rendering Virtual Worlds with Humans for Computer Vision Tasks

Kerim, Abdulrahman
Aslan, Cem
Celikcan, Ufuk
Erdem, Erkut
Erdem, Aykut

Inverse Dynamics Filtering for Sampling‐based Motion Control

Xie, Kaixiang
Kry, Paul G.

Deep Neural Models for Illumination Estimation and Relighting: A Survey

Einabadi, Farshad
Guillemaut, Jean‐Yves
Hilton, Adrian

Neural Modelling of Flower Bas‐relief from 2D Line Drawing

Zhang, Yu‐Wei
Wang, Jinlei
Wang, Wenping
Chen, Yanzhao
Liu, Hui
Ji, Zhongping
Zhang, Caiming

Estimating Garment Patterns from Static Scan Data

Bang, Seungbae
Korosteleva, Maria
Lee, Sung‐Hee

Customized Summarizations of Visual Data Collections

Yuan, Mengke
Ghanem, Bernard
Yan, Dong‐Ming
Wu, Baoyuan
Zhang, Xiaopeng
Wonka, Peter

Neural BRDF Representation and Importance Sampling

Sztrajman, Alejandro
Rainer, Gilles
Ritschel, Tobias
Weyrich, Tim

Half‐body Portrait Relighting with Overcomplete Lighting Representation

Song, Guoxian
Cham, Tat‐Jen
Cai, Jianfei
Zheng, Jianmin

Visual Analysis of Large‐Scale Protein‐Ligand Interaction Data

Schatz, Karsten
Franco‐Moreno, Juan José
Schäfer, Marco
Rose, Alexander S.
Ferrario, Valerio
Pleiss, Jürgen
Vázquez, Pere‐Pau
Ertl, Thomas
Krone, Michael

Optimized Processing of Localized Collisions in Projective Dynamics

Wang, Qisi
Tao, Yutian
Brandt, Eric
Cutting, Court
Sifakis, Eftychios

Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence

Ye, Wenjie
Dong, Yue
Peers, Pieter
Guo, Baining

Example‐Based Colour Transfer for 3D Point Clouds

Goudé, Ific
Cozot, Rémi
Le Meur, Olivier
Bouatouch, Kadi

Design and Evaluation of Visualization Techniques to Facilitate Argument Exploration

Khartabil, D.
Collins, C.
Wells, S.
Bach, B.
Kennedy, J.

Fashion Transfer: Dressing 3D Characters from Stylized Fashion Sketches

Fondevilla, Amelie
Rohmer, Damien
Hahmann, Stefanie
Bousseau, Adrien
Cani, Marie‐Paule

From Noon to Sunset: Interactive Rendering, Relighting, and Recolouring of Landscape Photographs by Modifying Solar Position

Türe, Murat
Çıklabakkal, Mustafa Ege
Erdem, Aykut
Erdem, Erkut
Satılmış, Pinar
Akyüz, Ahmet Oguz

Visual Analytics of Text Conversation Sentiment and Semantics

Healey, Christopher G.
Dinakaran, Gowtham
Padia, Kalpesh
Nie, Shaoliang
Benson, J. Riley
Caira, Dave
Shaw, Dean
Catalfu, Gary
Devarajan, Ravi


BibTeX (40-Issue 6)
                
@article{10.1111:cgf.14040,
journal = {Computer Graphics Forum},
title = {{Issue Information}},
author = {},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14040}
}

@article{10.1111:cgf.14198,
journal = {Computer Graphics Forum},
title = {{Visualizing and Interacting with Geospatial Networks: A Survey and Design Space}},
author = {Schöttler, Sarah and Yang, Yalong and Pfister, Hanspeter and Bach, Benjamin},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14198}
}

@article{10.1111:cgf.14199,
journal = {Computer Graphics Forum},
title = {{Parametric Skeletons with Reduced Soft‐Tissue Deformations}},
author = {Tapia, Javier and Romero, Cristian and Pérez, Jesús and Otaduy, Miguel A.},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14199}
}

@article{10.1111:cgf.14202,
journal = {Computer Graphics Forum},
title = {{Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN}},
author = {Zhao, Yong and Yang, Le and Pei, Ercheng and Oveneke, Meshia Cédric and Alioscha‐Perez, Mitchel and Li, Longfei and Jiang, Dongmei and Sahli, Hichem},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14202}
}

@article{10.1111:cgf.14203,
journal = {Computer Graphics Forum},
title = {{Fluid Reconstruction and Editing from a Monocular Video based on the SPH Model with External Force Guidance}},
author = {Nie, Xiaoying and Hu, Yong and Su, Zhiyuan and Shen, Xukun},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14203}
}

@article{10.1111:cgf.14204,
journal = {Computer Graphics Forum},
title = {{Linear Polarization Demosaicking for Monochrome and Colour Polarization Focal Plane Arrays}},
author = {Qiu, Simeng and Fu, Qiang and Wang, Congli and Heidrich, Wolfgang},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14204}
}

@article{10.1111:cgf.14207,
journal = {Computer Graphics Forum},
title = {{Self‐Supervised Learning of Part Mobility from Point Cloud Sequence}},
author = {Shi, Yahao and Cao, Xinyu and Zhou, Bin},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14207}
}

@article{10.1111:cgf.14205,
journal = {Computer Graphics Forum},
title = {{Transfer Deep Learning for Reconfigurable Snapshot HDR Imaging Using Coded Masks}},
author = {Alghamdi, Masheal and Fu, Qiang and Thabet, Ali and Heidrich, Wolfgang},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14205}
}

@article{10.1111:cgf.14208,
journal = {Computer Graphics Forum},
title = {{Fast Ray Tracing of Scale‐Invariant Integral Surfaces}},
author = {Aydinlilar, Melike and Zanni, Cedric},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14208}
}

@article{10.1111:cgf.14209,
journal = {Computer Graphics Forum},
title = {{Visualization of Tensor Fields in Mechanics}},
author = {Hergl, Chiara and Blecha, Christian and Kretzschmar, Vanessa and Raith, Felix and Günther, Fabian and Stommel, Markus and Jankowai, Jochen and Hotz, Ingrid and Nagel, Thomas and Scheuermann, Gerik},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14209}
}

@article{10.1111:cgf.14266,
journal = {Computer Graphics Forum},
title = {{IMAT: The Iterative Medial Axis Transform}},
author = {Lee, Yonghyeon and Baek, Jonghyuk and Kim, Young Min and Park, Frank Chongwoo},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14266}
}

@article{10.1111:cgf.14269,
journal = {Computer Graphics Forum},
title = {{An Efficient Hybrid Optimization Strategy for Surface Reconstruction}},
author = {Bertolino, Giulia and Montemurro, Marco and Perry, Nicolas and Pourroy, Franck},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14269}
}

@article{10.1111:cgf.14268,
journal = {Computer Graphics Forum},
title = {{SREC‐RT: A Structure for Ray Tracing Rounded Edges and Corners}},
author = {Courtin, Simon and Ribardière, Mickael and Horna, Sebastien and Poulin, Pierre and Meneveaux, Daniel},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14268}
}

@article{10.1111:cgf.14267,
journal = {Computer Graphics Forum},
title = {{Efficient Rendering of Ocular Wavefront Aberrations using Tiled Point‐Spread Function Splatting}},
author = {Csoba, István and Kunkli, Roland},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14267}
}

@article{10.1111:cgf.14270,
journal = {Computer Graphics Forum},
title = {{A Rapid, End‐to‐end, Generative Model for Gaseous Phenomena from Limited Views}},
author = {Qiu, Sheng and Li, Chen and Wang, Changbo and Qin, Hong},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14270}
}

@article{10.1111:cgf.14271,
journal = {Computer Graphics Forum},
title = {{NOVA: Rendering Virtual Worlds with Humans for Computer Vision Tasks}},
author = {Kerim, Abdulrahman and Aslan, Cem and Celikcan, Ufuk and Erdem, Erkut and Erdem, Aykut},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14271}
}

@article{10.1111:cgf.14274,
journal = {Computer Graphics Forum},
title = {{Inverse Dynamics Filtering for Sampling‐based Motion Control}},
author = {Xie, Kaixiang and Kry, Paul G.},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14274}
}

@article{10.1111:cgf.14283,
journal = {Computer Graphics Forum},
title = {{Deep Neural Models for Illumination Estimation and Relighting: A Survey}},
author = {Einabadi, Farshad and Guillemaut, Jean‐Yves and Hilton, Adrian},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14283}
}

@article{10.1111:cgf.14273,
journal = {Computer Graphics Forum},
title = {{Neural Modelling of Flower Bas‐relief from 2D Line Drawing}},
author = {Zhang, Yu‐Wei and Wang, Jinlei and Wang, Wenping and Chen, Yanzhao and Liu, Hui and Ji, Zhongping and Zhang, Caiming},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14273}
}

@article{10.1111:cgf.14272,
journal = {Computer Graphics Forum},
title = {{Estimating Garment Patterns from Static Scan Data}},
author = {Bang, Seungbae and Korosteleva, Maria and Lee, Sung‐Hee},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14272}
}

@article{10.1111:cgf.14336,
journal = {Computer Graphics Forum},
title = {{Customized Summarizations of Visual Data Collections}},
author = {Yuan, Mengke and Ghanem, Bernard and Yan, Dong‐Ming and Wu, Baoyuan and Zhang, Xiaopeng and Wonka, Peter},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14336}
}

@article{10.1111:cgf.14335,
journal = {Computer Graphics Forum},
title = {{Neural BRDF Representation and Importance Sampling}},
author = {Sztrajman, Alejandro and Rainer, Gilles and Ritschel, Tobias and Weyrich, Tim},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14335}
}

@article{10.1111:cgf.14384,
journal = {Computer Graphics Forum},
title = {{Half‐body Portrait Relighting with Overcomplete Lighting Representation}},
author = {Song, Guoxian and Cham, Tat‐Jen and Cai, Jianfei and Zheng, Jianmin},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14384}
}

@article{10.1111:cgf.14386,
journal = {Computer Graphics Forum},
title = {{Visual Analysis of Large‐Scale Protein‐Ligand Interaction Data}},
author = {Schatz, Karsten and Franco‐Moreno, Juan José and Schäfer, Marco and Rose, Alexander S. and Ferrario, Valerio and Pleiss, Jürgen and Vázquez, Pere‐Pau and Ertl, Thomas and Krone, Michael},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14386}
}

@article{10.1111:cgf.14385,
journal = {Computer Graphics Forum},
title = {{Optimized Processing of Localized Collisions in Projective Dynamics}},
author = {Wang, Qisi and Tao, Yutian and Brandt, Eric and Cutting, Court and Sifakis, Eftychios},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14385}
}

@article{10.1111:cgf.14387,
journal = {Computer Graphics Forum},
title = {{Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence}},
author = {Ye, Wenjie and Dong, Yue and Peers, Pieter and Guo, Baining},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14387}
}

@article{10.1111:cgf.14388,
journal = {Computer Graphics Forum},
title = {{Example‐Based Colour Transfer for 3D Point Clouds}},
author = {Goudé, Ific and Cozot, Rémi and Le Meur, Olivier and Bouatouch, Kadi},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14388}
}

@article{10.1111:cgf.14389,
journal = {Computer Graphics Forum},
title = {{Design and Evaluation of Visualization Techniques to Facilitate Argument Exploration}},
author = {Khartabil, D. and Collins, C. and Wells, S. and Bach, B. and Kennedy, J.},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14389}
}

@article{10.1111:cgf.14390,
journal = {Computer Graphics Forum},
title = {{Fashion Transfer: Dressing 3D Characters from Stylized Fashion Sketches}},
author = {Fondevilla, Amelie and Rohmer, Damien and Hahmann, Stefanie and Bousseau, Adrien and Cani, Marie‐Paule},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14390}
}

@article{10.1111:cgf.14392,
journal = {Computer Graphics Forum},
title = {{From Noon to Sunset: Interactive Rendering, Relighting, and Recolouring of Landscape Photographs by Modifying Solar Position}},
author = {Türe, Murat and Çıklabakkal, Mustafa Ege and Erdem, Aykut and Erdem, Erkut and Satılmış, Pinar and Akyüz, Ahmet Oguz},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14392}
}

@article{10.1111:cgf.14391,
journal = {Computer Graphics Forum},
title = {{Visual Analytics of Text Conversation Sentiment and Semantics}},
author = {Healey, Christopher G. and Dinakaran, Gowtham and Padia, Kalpesh and Nie, Shaoliang and Benson, J. Riley and Caira, Dave and Shaw, Dean and Catalfu, Gary and Devarajan, Ravi},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14391}
}

Recent Submissions

  • Item
    Issue Information
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Benes, Bedrich and Hauser, Helwig
  • Item
    Visualizing and Interacting with Geospatial Networks: A Survey and Design Space
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Schöttler, Sarah; Yang, Yalong; Pfister, Hanspeter; Bach, Benjamin; Benes, Bedrich and Hauser, Helwig
    This paper surveys visualization and interaction techniques for geospatial networks from a total of 95 papers. Geospatial networks are graphs where nodes and links can be associated with geographic locations. Examples can include social networks, trade and migration, as well as traffic and transport networks. Visualizing geospatial networks poses numerous challenges around the integration of both network and geographical information as well as additional information such as node and link attributes, time and uncertainty. Our overview analyses existing techniques along four dimensions: (i) the representation of geographical information, (ii) the representation of network information, (iii) the visual integration of both and (iv) the use of interaction. These four dimensions allow us to discuss techniques with respect to the trade‐offs they make between showing information across all these dimensions and how they solve the problem of showing as much information as necessary while maintaining readability of the visualization.
  • Item
    Parametric Skeletons with Reduced Soft‐Tissue Deformations
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Tapia, Javier; Romero, Cristian; Pérez, Jesús; Otaduy, Miguel A.; Benes, Bedrich and Hauser, Helwig
    We present a method to augment parametric skeletal models with subspace soft‐tissue deformations. We combine the benefits of data‐driven skeletal models, i.e. accurate replication of contact‐free static deformations, with the benefits of pure physics‐based models, i.e. skin and skeletal reaction to contact and inertial motion with two‐way coupling. We achieve this in a highly efficient manner, thanks to a careful choice of reduced model for the subspace deformation. With our method, it is easy to design expressive reduced models with efficient yet accurate force computations, without the need for training deformation examples. We demonstrate the application of our method to parametric models of human bodies, SMPL, and hands, MANO, with interactive simulations of contact with nonlinear soft‐tissue deformation and skeletal response.
  • Item
    Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhao, Yong; Yang, Le; Pei, Ercheng; Oveneke, Meshia Cédric; Alioscha‐Perez, Mitchel; Li, Longfei; Jiang, Dongmei; Sahli, Hichem; Benes, Bedrich and Hauser, Helwig
    Recent advances in generative adversarial networks (GANs) have shown tremendous success for facial expression generation tasks. However, generating vivid and expressive facial expressions at Action Units (AUs) level is still challenging, due to the fact that automatic facial expression analysis for AU intensity itself is an unsolved difficult task. In this paper, we propose a novel synthesis‐by‐analysis approach by leveraging the power of GAN framework and state‐of‐the‐art AU detection model to achieve better results for AU‐driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch‐attentive AU detection network for AU intensity estimation and combine it with a global image encoder for adversarial learning to force the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem for AU synthesis. Extensive experimental results on DISFA and DISFA+ show that our approach outperforms the state‐of‐the‐art in terms of photo‐realism and expressiveness of the facial expression quantitatively and qualitatively.
  • Item
    Fluid Reconstruction and Editing from a Monocular Video based on the SPH Model with External Force Guidance
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Nie, Xiaoying; Hu, Yong; Su, Zhiyuan; Shen, Xukun; Benes, Bedrich and Hauser, Helwig
    We present a general method for reconstructing and editing a 3D fluid volume from a monocular fluid video. Although researchers have developed many monocular video‐based methods, the reconstructed results are merely a single geometric surface layer, lack accurate physical attributes of the fluid, and are difficult to edit. We obtain a high‐quality 3D fluid volume by extending the smoothed particle hydrodynamics (SPH) model with external force guidance. For reconstructing fluid, we design target particles that are recovered from the shape from shading (SFS) method and initialize fluid particles that are spatially consistent with target particles. For editing fluid, we translate the deformation of target particles into the 3D fluid volume by merging user‐specified features of interest. Separating the low‐ and high‐frequency height field allows us to efficiently solve the motion equations for a liquid while retaining enough details to obtain realistic‐looking behaviours. Our experimental results compare favourably to the state‐of‐the‐art in terms of global fluid volume motion features and fluid surface details and demonstrate that our model can achieve desirable and pleasing effects.
  • Item
    Linear Polarization Demosaicking for Monochrome and Colour Polarization Focal Plane Arrays
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Qiu, Simeng; Fu, Qiang; Wang, Congli; Heidrich, Wolfgang; Benes, Bedrich and Hauser, Helwig
    Division‐of‐focal‐plane (DoFP) polarization image sensors allow for snapshot imaging of linear polarization effects with inexpensive and straightforward setups. However, conventional interpolation based image reconstruction methods for such sensors produce unreliable and noisy estimates of quantities such as Degree of Linear Polarization (DoLP) or Angle of Linear Polarization (AoLP). In this paper, we propose a polarization demosaicking algorithm by inverting the polarization image formation model for both monochrome and colour DoFP cameras. Compared to previous interpolation methods, our approach can significantly reduce noise induced artefacts and drastically increase the accuracy in estimating polarization states. We evaluate and demonstrate the performance of the methods on a new high‐resolution colour polarization dataset. Simulation and experimental results show that the proposed reconstruction and analysis tools offer an effective solution to polarization imaging.
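
    As a generic illustration of the polarization quantities mentioned in the abstract above (the standard Stokes-parameter relations, not the paper's model-inversion demosaicking), the following NumPy sketch computes DoLP and AoLP from four already-demosaicked linear-polarizer channels; the channel names and layout are assumptions for the example.

        import numpy as np

        def linear_stokes(i0, i45, i90, i135, eps=1e-8):
            # Stokes parameters from intensities behind 0/45/90/135 degree linear polarizers.
            s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
            s1 = i0 - i90                               # horizontal vs. vertical component
            s2 = i45 - i135                             # diagonal component
            dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # Degree of Linear Polarization
            aolp = 0.5 * np.arctan2(s2, s1)             # Angle of Linear Polarization (radians)
            return s0, s1, s2, dolp, aolp

        # Toy usage with random stand-in channels of equal shape.
        i0, i45, i90, i135 = (np.random.rand(32, 32) for _ in range(4))
        s0, s1, s2, dolp, aolp = linear_stokes(i0, i45, i90, i135)
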
  • Item
    Self‐Supervised Learning of Part Mobility from Point Cloud Sequence
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Shi, Yahao; Cao, Xinyu; Zhou, Bin; Benes, Bedrich and Hauser, Helwig
    Part mobility analysis is a significant aspect required to achieve a functional understanding of 3D objects. It would be natural to obtain part mobility from the continuous part motion of 3D objects. In this study, we introduce a self‐supervised method for segmenting motion parts and predicting their motion attributes from a point cloud sequence representing a dynamic object. To sufficiently utilize spatiotemporal information from the point cloud sequence, we generate trajectories by using correlations among successive frames of the sequence instead of directly processing the point clouds. We propose a novel neural network architecture called PointRNN to learn feature representations of trajectories along with their part rigid motions. We evaluate our method on various tasks including motion part segmentation, motion axis prediction and motion range estimation. The results demonstrate that our method outperforms previous techniques on both synthetic and real datasets. Moreover, our method has the ability to generalize to new and unseen objects. It is important to emphasize that it is not required to know any prior shape structure, prior shape category information or shape orientation. To the best of our knowledge, this is the first study on deep learning to extract part mobility from a point cloud sequence of a dynamic object.
  • Item
    Transfer Deep Learning for Reconfigurable Snapshot HDR Imaging Using Coded Masks
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Alghamdi, Masheal; Fu, Qiang; Thabet, Ali; Heidrich, Wolfgang; Benes, Bedrich and Hauser, Helwig
    High dynamic range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this paper we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware and building a deep learning algorithm to reconstruct the HDR image. We leverage transfer learning to overcome the lack of sufficiently large HDR datasets available. We show how transferring from a different large‐scale task (image classification on ImageNet) leads to considerable improvements in HDR reconstruction. We achieve a reconfigurable HDR camera design that does not require custom sensors, and instead can be reconfigured between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware–software solution offers a flexible yet robust way to modulate per‐pixel exposures, and the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparison results show that our method outperforms the state of the art in terms of visual perception quality.
  • Item
    Fast Ray Tracing of Scale‐Invariant Integral Surfaces
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Aydinlilar, Melike; Zanni, Cedric; Benes, Bedrich and Hauser, Helwig
    Scale‐invariant integral surfaces, which are implicit representations of surfaces, provide a way to define smooth surfaces from skeletons with prescribed radii defined at their vertices. We introduce a new rendering pipeline that allows such surfaces to be visualized in real time. We rely on the distance to the skeleton to define a sampling strategy along the camera rays, dividing each ray into sub‐intervals. The proposed strategy is chosen to capture the main field variations. Resulting intervals are processed iteratively, relying on two main ingredients: quadratic interpolation and field mapping to an approximate squared homothetic distance. The first provides efficient root finding while the second increases the precision of the interpolation, and the combination of both results in an efficient processing routine. Finally, we present a GPU implementation that relies on a dynamic data‐structure in order to efficiently generate the intervals along the ray. This data‐structure also serves as an acceleration structure that allows constant time access to the primitives of interest during the processing of a given ray.
  • Item
    Visualization of Tensor Fields in Mechanics
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Hergl, Chiara; Blecha, Christian; Kretzschmar, Vanessa; Raith, Felix; Günther, Fabian; Stommel, Markus; Jankowai, Jochen; Hotz, Ingrid; Nagel, Thomas; Scheuermann, Gerik; Benes, Bedrich and Hauser, Helwig
    Tensors are used to describe complex physical processes in many applications. Examples include the distribution of stresses in technical materials, acting forces during seismic events, or remodeling of biological tissues. While tensors encode such complex information mathematically precisely, the semantic interpretation of a tensor is challenging. Visualization can be beneficial here and is frequently used by domain experts. Typical strategies include the use of glyphs, color plots, lines, and isosurfaces. However, data complexity is nowadays accompanied by the sheer amount of data produced by large‐scale simulations and adds another level of obstruction between user and data. Given the limitations of traditional methods, and the extra cognitive effort of simple methods, more advanced tensor field visualization approaches have been the focus of this work. This survey aims to provide an overview of recent research results with a strong application‐oriented focus, targeting applications based on continuum mechanics, namely the fields of structural, bio‐, and geomechanics. As such, the survey is complementing and extending previously published surveys. Its utility is twofold: (i) It serves as basis for the visualization community to get an overview of recent visualization techniques. (ii) It emphasizes and explains the necessity for further research for visualizations in this context.
  • Item
    IMAT: The Iterative Medial Axis Transform
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Lee, Yonghyeon; Baek, Jonghyuk; Kim, Young Min; Park, Frank Chongwoo; Benes, Bedrich and Hauser, Helwig
    We present the iterative medial axis transform (IMAT), an iterative descent method that constructs a medial axis transform (MAT) for a sparse, noisy, oriented point cloud sampled from an object's boundary. We first establish the equivalence between the traditional definition of the MAT of an object, i.e., the set of centres and corresponding radii of all balls maximally inscribed inside the object, with an alternative characterization matching the boundary enclosing the union of the balls with the object boundary. Based on this boundary equivalence characterization, a new MAT algorithm is proposed, in which an error function that reflects the difference between the two boundaries is minimized while restricting the number of balls to within some a priori specified upper limit. An iterative descent method with guaranteed local convergence is developed for the minimization that is also amenable to parallelization. Both quantitative and qualitative analyses of diverse 2D and 3D objects demonstrate the noise robustness, shape fidelity, and representation efficiency of the resulting MAT.
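
    The classical MAT that the entry above builds on can be illustrated with a distance transform on a binary shape: every interior pixel's distance to the boundary is the radius of the largest ball inscribed there, and ridges of that field approximate the medial axis. The SciPy sketch below is a toy illustration of this definition with made-up geometry; it is not the paper's IMAT optimization.

        import numpy as np
        from scipy import ndimage

        # Toy binary shape: a filled disc with a smaller disc cut out.
        yy, xx = np.mgrid[0:128, 0:128]
        shape = ((yy - 64)**2 + (xx - 64)**2 < 50**2) & ~((yy - 64)**2 + (xx - 30)**2 < 15**2)

        # Distance to the boundary = radius of the maximal ball inscribed at each pixel.
        radius = ndimage.distance_transform_edt(shape)

        # Crude medial-axis estimate: interior pixels where the inscribed radius is a local maximum.
        is_ridge = radius >= ndimage.maximum_filter(radius, size=3)
        medial = shape & is_ridge
        centres = np.argwhere(medial)   # candidate medial ball centres (row, col)
        radii = radius[medial]          # corresponding maximal radii
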
  • Item
    An Efficient Hybrid Optimization Strategy for Surface Reconstruction
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bertolino, Giulia; Montemurro, Marco; Perry, Nicolas; Pourroy, Franck; Benes, Bedrich and Hauser, Helwig
    An efficient surface reconstruction strategy is presented in this study, which is able to approximate non‐convex sets of target points (TPs). The approach is split in two phases: (a) the mapping phase, making use of the shape preserving method (SPM) to get a proper parametrization of each sub‐domain composing the TPs set; (b) the fitting phase, where each patch is fitted by means of a suitable non‐uniform rational basis spline (NURBS) surface by considering, as design variables, all parameters involved in its definition. To this purpose, the surface fitting problem is formulated as a constrained non‐linear programming problem (CNLPP) defined over a domain having changing dimension, wherein both the number and the value of the design variables are optimized. To deal with this CNLPP, the optimization process is split in two steps. Firstly, a special genetic algorithm (GA) optimizes both the value and the number of design variables by means of a two‐level evolution strategy (species and individuals). Secondly, the solution provided by the GA constitutes the initial guess for the deterministic optimization, which aims at improving the accuracy of the fitting surfaces. The effectiveness of the proposed methodology is proven through some meaningful benchmarks taken from the literature.
  • Item
    SREC‐RT: A Structure for Ray Tracing Rounded Edges and Corners
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Courtin, Simon; Ribardière, Mickael; Horna, Sebastien; Poulin, Pierre; Meneveaux, Daniel; Benes, Bedrich and Hauser, Helwig
    Man‐made objects commonly exhibit rounded edges and corners generated through their manufacturing processes. The variation of surface normals at these confined locations produces shading details that are visually essential to the realism of synthetic scenes. The more specular the surface, the finer and more prominent its highlights. However, most geometric modellers represent rounded edges and corners with dense polygonal meshes that are limited in terms of smoothness, while tremendously increasing scene complexity. This paper proposes a non‐invasive method (i.e. that does not modify the original geometry) for the modelling and rendering of smooth edges and corners from any input polygonal geometry defined with infinitely sharp edges. At the heart of our contribution is a geometric structure that automatically and accurately defines the geometry of edge and corner rounded areas, as well as the topological relationships at edges and vertices. This structure, called SREC‐RT, is integrated in a ray‐tracing‐based acceleration structure in order to determine the region of interest of each rounded edge and corner. It allows systematic rounding of all edges and vertices without increasing the 3D scene geometric complexity. While the underlying rounded geometry can be of any type, we propose a practical ray‐edge and ray‐corner intersection based on parametric surfaces. We analyse comparisons generated with existing methods. Our results present the advantages of our method, including extreme close‐up views of surfaces with a much higher quality for very little additional memory, and reasonable computation time overhead.
  • Item
    Efficient Rendering of Ocular Wavefront Aberrations using Tiled Point‐Spread Function Splatting
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Csoba, István; Kunkli, Roland; Benes, Bedrich and Hauser, Helwig
    Visual aberrations are the imperfections in human vision, which play an important role in our everyday lives. Existing algorithms to simulate such conditions are either not suited for low‐latency workloads or limit the kinds of supported aberrations. In this paper, we present a new simulation method that supports arbitrary visual aberrations and runs at interactive, near real‐time performance on commodity hardware. Furthermore, our method only requires a single set of on‐axis phase aberration coefficients as input and handles the dynamic change of pupil size and focus distance at runtime. We first describe a custom parametric eye model and parameter estimation method to find the physical properties of the simulated eye. Next, we talk about our parameter sampling strategy which we use with the estimated eye model to establish a coarse point‐spread function (PSF) grid. We also propose a GPU‐based interpolation scheme for the kernel grid which we use at runtime to obtain the final vision simulation by extending an existing tile‐based convolution approach. We showcase the capabilities of our eye estimation and rendering processes using several different eye conditions and provide the corresponding performance metrics to demonstrate the applicability of our method for interactive environments.
  • Item
    A Rapid, End‐to‐end, Generative Model for Gaseous Phenomena from Limited Views
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Qiu, Sheng; Li, Chen; Wang, Changbo; Qin, Hong; Benes, Bedrich and Hauser, Helwig
    Despite the rapid development and proliferation of computer graphics hardware devices for scene capture in the most recent decade, the high‐resolution 3D/4D acquisition of gaseous scenes (e.g., smoke) in real time remains technically challenging in graphics research nowadays. In this paper, we explore a hybrid approach that simultaneously takes advantage of both the model‐centric method and the data‐driven method. Specifically, this paper develops a novel conditional generative model to rapidly reconstruct the temporal density and velocity fields of gaseous phenomena based on the sequence of two projection views. With the data‐driven method, we can achieve the strong coupling of density update and the estimation of flow motion; as a result, we can greatly improve the reconstruction performance for smoke scenes. First, we employ a conditional generative network to generate the initial density field from input projection views and estimate the flow motion based on the adjacent frames. Second, we utilize the differentiable advection layer and design a velocity estimation network with a long‐term mechanism to help achieve end‐to‐end training and more stable graphics effects. Third, we can re‐simulate the input scene with flexible coupling effects based on the estimated velocity field, subject to artists' guidance or user interaction. Moreover, our generative model can accommodate a single projection view as input. In practice, additional projection views enable higher‐fidelity reconstruction with more realistic and finer details. We have conducted extensive experiments to confirm the effectiveness, efficiency, and robustness of our new method compared with the previous state‐of‐the‐art techniques.
  • Item
    NOVA: Rendering Virtual Worlds with Humans for Computer Vision Tasks
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Kerim, Abdulrahman; Aslan, Cem; Celikcan, Ufuk; Erdem, Erkut; Erdem, Aykut; Benes, Bedrich and Hauser, Helwig
    Today, the cutting edge of computer vision research greatly depends on the availability of large datasets, which are critical for effectively training and testing new methods. Manually annotating visual data, however, is not only a labor‐intensive process but also prone to errors. In this study, we present NOVA, a versatile framework to create realistic‐looking 3D rendered worlds containing procedurally generated humans with rich pixel‐level ground truth annotations. NOVA can simulate various environmental factors such as weather conditions or different times of day, and bring an exceptionally diverse set of humans to life, each having a distinct body shape, gender and age. To demonstrate NOVA's capabilities, we generate two synthetic datasets for person tracking. The first one includes 108 sequences, each with different levels of difficulty like tracking in crowded scenes or at nighttime and aims for testing the limits of current state‐of‐the‐art trackers. A second dataset of 97 sequences with normal weather conditions is used to show how our synthetic sequences can be utilized to train and boost the performance of deep‐learning based trackers. Our results indicate that the synthetic data generated by NOVA represents a good proxy of the real‐world and can be exploited for computer vision tasks.
  • Item
    Inverse Dynamics Filtering for Sampling‐based Motion Control
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Xie, Kaixiang; Kry, Paul G.; Benes, Bedrich and Hauser, Helwig
    We improve the sampling‐based motion control method proposed by Liu et al. using inverse dynamics. To deal with noise in the motion capture data, we filter the motion with a Butterworth filter, choosing the cutoff frequency such that the zero‐moment point falls within the support polygon for the greatest number of frames. We discuss how to detect foot contact for foot and ground optimization and inverse dynamics, and we optimize to increase the area of the support polygon. Sample simulations receive filtered inverse dynamics torques at frames where the ZMP is sufficiently close to the support polygon, which simplifies the problem of finding the PD targets that produce physically valid control matching the target motion. We test our method on different motions and demonstrate that it has lower error, higher success rates, and generally produces smoother results.
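
    A minimal sketch of the kind of zero-phase Butterworth filtering described above, using SciPy; the cutoff-selection callback is a hypothetical stand-in for the ZMP-in-support-polygon test, which the sketch does not implement.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def lowpass_motion(q, fs, cutoff_hz, order=4):
            # Zero-phase Butterworth low-pass over joint trajectories q of shape (frames, dofs).
            b, a = butter(order, cutoff_hz / (0.5 * fs))
            return filtfilt(b, a, q, axis=0)

        def pick_cutoff(q, fs, candidate_cutoffs, count_valid_zmp_frames):
            # Choose the cutoff whose filtered motion keeps the zero-moment point inside
            # the support polygon for the most frames (count_valid_zmp_frames is user-supplied).
            return max(candidate_cutoffs,
                       key=lambda c: count_valid_zmp_frames(lowpass_motion(q, fs, c)))

        # Toy usage: 120 Hz mocap, 300 frames, 30 DoFs; the dummy callback simply prefers
        # smoother motion and is NOT a real ZMP test.
        q = np.random.randn(300, 30).cumsum(axis=0)
        best = pick_cutoff(q, fs=120.0, candidate_cutoffs=[2.0, 4.0, 6.0, 8.0],
                           count_valid_zmp_frames=lambda m: -float(np.abs(np.diff(m, axis=0)).sum()))
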
  • Item
    Deep Neural Models for Illumination Estimation and Relighting: A Survey
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Einabadi, Farshad; Guillemaut, Jean‐Yves; Hilton, Adrian; Benes, Bedrich and Hauser, Helwig
    Scene relighting and estimating illumination of a real scene for insertion of virtual objects in a mixed‐reality scenario are well‐studied challenges in the computer vision and graphics fields. Classical inverse rendering approaches aim to decompose a scene into its orthogonal constituting elements, namely scene geometry, illumination and surface materials, which can later be used for augmented reality or to render new images under novel lighting or viewpoints. Recently, the application of deep neural computing to illumination estimation, relighting and inverse rendering has shown promising results. This contribution aims to bring together in a coherent manner current advances in this conjunction. We examine in detail the attributes of the proposed approaches, presented in three categories: scene illumination estimation, relighting with reflectance‐aware scene‐specific representations and finally relighting as image‐to‐image transformations. Each category is concluded with a discussion on the main characteristics of the current methods and possible future trends. We also provide an overview of current publicly available datasets for neural lighting applications.
  • Item
    Neural Modelling of Flower Bas‐relief from 2D Line Drawing
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhang, Yu‐Wei; Wang, Jinlei; Wang, Wenping; Chen, Yanzhao; Liu, Hui; Ji, Zhongping; Zhang, Caiming; Benes, Bedrich and Hauser, Helwig
    Different from other types of bas‐reliefs, a flower bas‐relief contains a large number of depth‐discontinuity edges. Most existing line‐based methods reconstruct free‐form surfaces by ignoring the depth‐discontinuities, and thus are less efficient in modelling flower bas‐reliefs. This paper presents a neural‐based solution which benefits from the recent advances in CNNs. Specifically, we use line gradients to encode the depth orderings at leaf edges. Given a line drawing, a heuristic method is first proposed to compute 2D gradients at lines. Line gradients and dense curvatures interpolated from sparse user inputs are then fed into a neural network, which outputs depths and normals of the final bas‐relief. In addition, we introduce an object‐based method to generate flower bas‐reliefs and line drawings for network training. Extensive experiments show that our method is effective in modelling bas‐reliefs with depth‐discontinuity edges. User evaluation also shows that our method is intuitive and accessible to common users.
  • Item
    Estimating Garment Patterns from Static Scan Data
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bang, Seungbae; Korosteleva, Maria; Lee, Sung‐Hee; Benes, Bedrich and Hauser, Helwig
    The acquisition of highly detailed static 3D scan data for people in clothing is becoming widely available. Since 3D scan data is given as a single mesh without semantic separation, in order to animate the data, it is necessary to model shape and deformation behaviour of individual body and garment parts. This paper presents a new method for generating simulation‐ready garment models from 3D static scan data of clothed humans. A key contribution of our method is a novel approach to segmenting garments by finding optimal boundaries between the skin and garment. Our boundary‐based garment segmentation method allows for stable and smooth separation of garments by using an implicit representation of the boundary and its optimization strategy. In addition, we present a novel framework to construct a 2D pattern from the segmented garment and place it around the body for a draping simulation. The effectiveness of our method is validated by generating garment patterns for a number of scan data.
  • Item
    Customized Summarizations of Visual Data Collections
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Yuan, Mengke; Ghanem, Bernard; Yan, Dong‐Ming; Wu, Baoyuan; Zhang, Xiaopeng; Wonka, Peter; Benes, Bedrich and Hauser, Helwig
    We propose a framework to generate customized summarizations of visual data collections, such as collections of images, materials, 3D shapes, and 3D scenes. We assume that the elements in the visual data collections can be mapped to a set of vectors in a feature space, in which a fitness score for each element can be defined, and we pose the problem of customized summarizations as selecting a subset of these elements. We first describe the design choices a user should be able to specify for modeling customized summarizations and propose a corresponding user interface. We then formulate the problem as a constrained optimization problem with binary variables and propose a practical and fast algorithm based on the alternating direction method of multipliers (ADMM). Our results show that our problem formulation enables a wide variety of customized summarizations, and that our solver is both significantly faster than state‐of‐the‐art commercial integer programming solvers and produces better solutions than fast relaxation‐based solvers.
  • Item
    Neural BRDF Representation and Importance Sampling
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Sztrajman, Alejandro; Rainer, Gilles; Ritschel, Tobias; Weyrich, Tim; Benes, Bedrich and Hauser, Helwig
    Controlled capture of real‐world material appearance yields tabulated sets of highly realistic reflectance data. In practice, however, its high memory footprint requires compressing into a representation that can be used efficiently in rendering while remaining faithful to the original. Previous works in appearance encoding often prioritized one of these requirements at the expense of the other, by either applying high‐fidelity array compression strategies not suited for efficient queries during rendering, or by fitting a compact analytic model that lacks expressiveness. We present a compact neural network‐based representation of BRDF data that combines high‐accuracy reconstruction with efficient practical rendering via built‐in interpolation of reflectance. We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling, critical for the accurate reconstruction of specular highlights. Additionally, we propose a novel approach to make our representation amenable to importance sampling: rather than inverting the trained networks, we learn to encode them in a more compact embedding that can be mapped to parameters of an analytic BRDF for which importance sampling is known. We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real‐world datasets, and importance sampling performance for isotropic BRDFs mapped to two different analytic models.
  • Item
    Half‐body Portrait Relighting with Overcomplete Lighting Representation
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Song, Guoxian; Cham, Tat‐Jen; Cai, Jianfei; Zheng, Jianmin; Benes, Bedrich and Hauser, Helwig
    We present a neural‐based model for relighting a half‐body portrait image by simply referring to another portrait image with the desired lighting condition. Rather than following classical inverse rendering methodology that involves estimating normals, albedo and environment maps, we implicitly encode the subject and lighting in a latent space, and use these latent codes to generate relighted images by neural rendering. A key technical innovation is the use of a novel overcomplete lighting representation, which facilitates lighting interpolation in the latent space, as well as helping regularize the self‐organization of the lighting latent space during training. In addition, we propose a novel multiplicative neural renderer that more effectively combines the subject and lighting latent codes for rendering. We also created a large‐scale photorealistic rendered relighting dataset for training, which allows our model to generalize well to real images. Extensive experiments demonstrate that our system not only outperforms existing methods for referral‐based portrait relighting, but also has the capability to generate sequences of relighted images via lighting rotations.
  • Item
    Visual Analysis of Large‐Scale Protein‐Ligand Interaction Data
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Schatz, Karsten; Franco‐Moreno, Juan José; Schäfer, Marco; Rose, Alexander S.; Ferrario, Valerio; Pleiss, Jürgen; Vázquez, Pere‐Pau; Ertl, Thomas; Krone, Michael; Benes, Bedrich and Hauser, Helwig
    When studying protein‐ligand interactions, many different factors can influence the behaviour of the protein as well as the ligands. Molecular visualisation tools typically concentrate on the movement of single ligand molecules; however, viewing only one molecule can merely provide a hint of the overall behaviour of the system. To tackle this issue, we do not focus on the visualisation of the local actions of individual ligand molecules but on the influence of a protein and their overall movement. Since the simulations required to study these problems can have millions of time steps, our presented system decouples visualisation and data preprocessing: our preprocessing pipeline aggregates the movement of ligand molecules relative to a receptor protein. For data analysis, we present a web‐based visualisation application that combines multiple linked 2D and 3D views that display the previously calculated data. The central view, a novel enhanced sequence diagram that shows the calculated values, is linked to a traditional surface visualisation of the protein. This results in an interactive visualisation that is independent of the size of the underlying data, since the memory footprint of the aggregated data for visualisation is constant and very low, even if the raw input consisted of several terabytes.
  • Item
    Optimized Processing of Localized Collisions in Projective Dynamics
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Wang, Qisi; Tao, Yutian; Brandt, Eric; Cutting, Court; Sifakis, Eftychios; Benes, Bedrich and Hauser, Helwig
    We present a method for the efficient processing of contact and collision in volumetric elastic models simulated using the Projective Dynamics paradigm. Our approach enables interactive simulation of tetrahedral meshes with more than half a million elements, provided that the model satisfies two fundamental properties: the region of the model's surface that is susceptible to collision events needs to be known in advance, and the simulation degrees of freedom associated with that surface region should be limited to a small fraction (e.g. 5%) of the total simulation nodes. In such scenarios, a partial Cholesky factorization can abstract away the behaviour of the collision‐safe subset of the face model into the Schur Complement matrix with respect to the collision‐prone region. We demonstrate how fast and accurate updates of bilateral penalty‐based collision terms can be incorporated into this representation, and solved with high efficiency on the GPU. We also demonstrate iterating a partial update of the element rotations, akin to a selective application of the local step, specifically on the smaller collision‐prone region without explicitly paying the cost associated with the rest of the simulation mesh. We demonstrate efficient and robust interactive simulation in detailed models from animation and medical applications.
  • Item
    Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining; Benes, Bedrich and Hauser, Helwig
    In this paper we present a novel method for recovering high‐resolution spatially‐varying isotropic surface reflectance of a planar exemplar from a flash‐lit close‐up video sequence captured with a regular hand‐held mobile phone. We do not require careful calibration of the camera and lighting parameters, but instead compute a per‐pixel flow map using a deep neural network to align the input video frames. For each video frame, we also extract the reflectance parameters, and warp the neural reflectance features directly using the per‐pixel flow, and subsequently pool the warped features. Our method facilitates convenient hand‐held acquisition of spatially‐varying surface reflectance with commodity hardware by non‐expert users. Furthermore, our method enables aggregation of reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high‐resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real‐world spatially‐varying materials.
  • Item
    Example‐Based Colour Transfer for 3D Point Clouds
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Goudé, Ific; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi; Benes, Bedrich and Hauser, Helwig
    Example‐based colour transfer between images, which has raised a lot of interest in the past decades, consists of transferring the colour of an image to another one. Many methods based on colour distributions have been proposed, and more recently, the efficiency of neural networks has been demonstrated again for colour transfer problems. In this paper, we propose a new pipeline with methods adapted from the image domain to automatically transfer the colour from a target point cloud to an input point cloud. These colour transfer methods are based on colour distributions and account for the geometry of the point clouds to produce a coherent result. The proposed methods rely on simple statistical analysis, are effective, and succeed in transferring the colour style from one point cloud to another. The qualitative results of the colour transfers are evaluated and compared with existing methods.
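
    For context, the classical global colour-statistics transfer that such distribution-based methods extend can be sketched in a few lines of NumPy: match the per-channel mean and standard deviation of the input point colours to those of the target. This is a generic baseline shown directly in RGB for brevity (the classical formulation uses a decorrelated colour space), not the geometry-aware method of the paper.

        import numpy as np

        def colour_stats_transfer(src_rgb, tgt_rgb, eps=1e-8):
            # src_rgb: (N, 3) colours of the input point cloud, tgt_rgb: (M, 3) target colours, in [0, 1].
            src = np.asarray(src_rgb, dtype=np.float64)
            tgt = np.asarray(tgt_rgb, dtype=np.float64)
            mu_s, sd_s = src.mean(axis=0), src.std(axis=0) + eps
            mu_t, sd_t = tgt.mean(axis=0), tgt.std(axis=0) + eps
            out = (src - mu_s) / sd_s * sd_t + mu_t   # match first- and second-order statistics
            return np.clip(out, 0.0, 1.0)

        # Toy usage with random point-cloud colours.
        recoloured = colour_stats_transfer(np.random.rand(1000, 3), np.random.rand(500, 3))
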
  • Item
    Design and Evaluation of Visualization Techniques to Facilitate Argument Exploration
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Khartabil, D.; Collins, C.; Wells, S.; Bach, B.; Kennedy, J.; Benes, Bedrich and Hauser, Helwig
    This paper reports the design and comparison of three visualizations to represent the structure and content within arguments. Arguments are artifacts of reasoning widely used across domains such as education, policy making, and science. An argument is made up of sequences of statements (premises) which can support or contradict each other, individually or in groups through Boolean operators. Understanding the resulting hierarchical structure of arguments while being able to read the arguments' text poses problems related to overview, detail, and navigation. Based on interviews with argument analysts we iteratively designed three techniques, each using combinations of tree visualizations (sunburst, icicle), content display (in‐situ, tooltip) and interactive navigation. Structured discussions with the analysts show benefits of each of these techniques; for example, the sunburst is good at presenting an overview, but showing arguments in‐situ is better than pop‐ups. A controlled user study with 21 participants and three tasks shows complementary evidence suggesting that a sunburst with pop‐up for the content is the best trade‐off solution. Our results can inform visualizations within existing argument visualization tools and increase the visibility of ‘novel‐and‐effective’ visualizations in the argument visualization community.
  • Item
    Fashion Transfer: Dressing 3D Characters from Stylized Fashion Sketches
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Fondevilla, Amelie; Rohmer, Damien; Hahmann, Stefanie; Bousseau, Adrien; Cani, Marie‐Paule; Benes, Bedrich and Hauser, Helwig
    Fashion design often starts with hand‐drawn, expressive sketches that communicate the essence of a garment over idealized human bodies. We propose an approach to automatically dress virtual characters from such input, previously complemented with user annotations. In contrast to prior work requiring users to draw garments with accurate proportions over each virtual character to be dressed, our method follows a style transfer strategy: the information extracted from a single, annotated fashion sketch can be used to inform the synthesis of one to many new garment(s) with similar style, yet different proportions. In particular, we define the style of a loose garment from its silhouette and folds, which we extract from the drawing. Key to our method is our strategy to extract both shape and repetitive patterns of folds from the 2D input. As our results show, each input sketch can be used to dress a variety of characters of different morphologies, from virtual humans to cartoon‐style characters.
  • Item
    From Noon to Sunset: Interactive Rendering, Relighting, and Recolouring of Landscape Photographs by Modifying Solar Position
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Türe, Murat; Çıklabakkal, Mustafa Ege; Erdem, Aykut; Erdem, Erkut; Satılmış, Pinar; Akyüz, Ahmet Oguz; Benes, Bedrich and Hauser, Helwig
    Image editing is a commonly studied problem in computer graphics. Despite the presence of many advanced editing tools, there is no satisfactory solution to controllably update the position of the sun using a single image. This problem is made complicated by the presence of clouds, complex landscapes, and the atmospheric effects that must be accounted for. In this paper, we tackle this problem starting with only a single photograph. With the user clicking on the initial position of the sun, our algorithm performs several estimation and segmentation processes for finding the horizon, scene depth, clouds, and the sky line. After this initial process, the user can make both fine‐ and large‐scale changes on the position of the sun: it can be set beneath the mountains or moved behind the clouds practically turning a midday photograph into a sunset (or vice versa). We leverage a precomputed atmospheric scattering algorithm to make all of these changes not only realistic but also in real‐time. We demonstrate our results using both clear and cloudy skies, showing how to add, remove, and relight clouds, all the while allowing for advanced effects such as scattering, shadows, light shafts, and lens flares.
  • Item
    Visual Analytics of Text Conversation Sentiment and Semantics
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Healey, Christopher G.; Dinakaran, Gowtham; Padia, Kalpesh; Nie, Shaoliang; Benson, J. Riley; Caira, Dave; Shaw, Dean; Catalfu, Gary; Devarajan, Ravi; Benes, Bedrich and Hauser, Helwig
    This paper describes the design and implementation of a web‐based system to visualize large collections of text conversations integrated into a hierarchical four‐level‐of‐detail design. Viewers can visualize conversations: (1) in a streamgraph topic overview for a user‐specified time period; (2) as emotion patterns for a topic chosen from the streamgraph; (3) as semantic sequences for a user‐selected emotion pattern; and (4) as an emotion‐driven conversation graph for a single conversation. We collaborated with the Live Chat customer service group at SAS Institute to design and evaluate our system's strengths and limitations.