38-Issue 1


Report

Ballet

Lawonn, Kai
Günther, Tobias
Editorial

Editorial 2019 CGF 38-1

Chen, Min
Benes, Bedrich
Issue Information

Issue Information CGF38-1

Articles

Robust Structure‐Based Shape Correspondence

Kleiman, Yanir
Ovsjanikov, Maks

VisFM: Visual Analysis of Image Feature Matchings

Li, Chenhui
Baciu, George

A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments

Mendes, D.
Caputo, F. M.
Giachetti, A.
Ferreira, A.
Jorge, J.

Optimal Sample Weights for Hemispherical Integral Quadratures

Marques, Ricardo
Bouville, Christian
Bouatouch, Kadi

Turning a Digital Camera into an Absolute 2D Tele‐Colorimeter

Guarnera, G. C.
Bianco, S.
Schettini, R.

FitConnect: Connecting Noisy 2D Samples by Fitted Neighbourhoods

Ohrhallinger, S.
Wimmer, M.

Generation and Visual Exploration of Medical Flow Data: Survey, Research Trends and Future Challenges

Oeltze‐Jafra, S.
Meuschke, M.
Neugebauer, M.
Saalfeld, S.
Lawonn, K.
Janiga, G.
Hege, H.‐C.
Zachow, S.
Preim, B.

An Adaptive Multi‐Grid Solver for Applications in Computer Graphics

Kazhdan, Misha
Hoppe, Hugues

Realtime Performance‐Driven Physical Simulation for Facial Animation

Barrielle, V.
Stoiber, N.

A Survey of Simple Geometric Primitives Detection Methods for Captured 3D Data

Kaiser, Adrien
Ybanez Zepeda, Jose Alonso
Boubekeur, Tamy

Applying Visual Analytics to Physically Based Rendering

Simons, G.
Herholz, S.
Petitjean, V.
Rapp, T.
Ament, M.
Lensch, H.
Dachsbacher, C.
Eisemann, M.
Eisemann, E.

Visualization of Neural Network Predictions for Weather Forecasting

Roesch, Isabelle
Günther, Tobias

MegaViews: Scalable Many‐View Rendering With Concurrent Scene‐View Hierarchy Traversal

Kol, Timothy R.
Bauszat, Pablo
Lee, Sungkil
Eisemann, Elmar

Stylized Image Triangulation

Lawonn, Kai
Günther, Tobias

Autonomous Particles for Interactive Flow Visualization

Engelke, Wito
Lawonn, Kai
Preim, Bernhard
Hotz, Ingrid

Flexible Use of Temporal and Spatial Reasoning for Fast and Scalable CPU Broad‐Phase Collision Detection Using KD‐Trees

Serpa, Ygor Rebouças
Rodrigues, Maria Andréia Formico

Controllable Image‐Based Transfer of Flow Phenomena

Bosch, Carles
Patow, Gustavo

On Visualizing Continuous Turbulence Scales

Liu, Xiaopei
Mishra, Maneesh
Skote, Martin
Fu, Chi‐Wing

Projected Field Similarity for Comparative Visualization of Multi‐Run Multi‐Field Time‐Varying Spatial Data

Fofonov, A.
Linsen, L.

TexNN: Fast Texture Encoding Using Neural Networks

Pratapa, S.
Olson, T.
Chalfin, A.
Manocha, D.

Denoising Deep Monte Carlo Renderings

Vicini, D.
Adler, D.
Novák, J.
Rousselle, F.
Burley, B.

Privacy Preserving Visualization: A Study on Event Sequence Data

Chou, Jia‐Kai
Wang, Yang
Ma, Kwan‐Liu

A Survey on Data‐Driven 3D Shape Descriptors

Rostami, R.
Bashiri, F. S.
Rostami, B.
Yu, Z.

Gradient‐Guided Local Disparity Editing

Scandolo, Leonardo
Bauszat, Pablo
Eisemann, Elmar

Superpixel Generation by Agglomerative Clustering With Quadratic Error Minimization

Dong, Xiao
Chen, Zhonggui
Yao, Junfeng
Guo, Xiaohu

Shading‐Based Surface Recovery Using Subdivision‐Based Representation

Deng, Teng
Zheng, Jianmin
Cai, Jianfei
Cham, Tat‐Jen

Learning A Stroke‐Based Representation for Fonts

Balashova, Elena
Bermano, Amit H.
Kim, Vladimir G.
DiVerdi, Stephen
Hertzmann, Aaron
Funkhouser, Thomas

Real‐Time Facial Expression Transformation for Monocular RGB Video

Ma, L.
Deng, Z.

Real‐Time Human Shadow Removal in a Front Projection System

Kim, Jaedong
Seo, Hyunggoog
Cha, Seunghoon
Noh, Junyong

Urban Walkability Design Using Virtual Population Simulation

Mathew, C. D. Tharindu
Knob, Paulo R.
Musse, Soraia Raupp
Aliaga, Daniel G.

A Variational Approach to Registration with Local Exponential Coordinates

Paman, Ashish
Rangarajan, Ramsharan

Incremental Labelling of Voronoi Vertices for Shape Reconstruction

Peethambaran, J.
Parakkat, A.D.
Tagliasacchi, A.
Wang, R.
Muthuganapathy, R.

Visual Exploration of Dynamic Multichannel EEG Coherence Networks

Ji, C.
Gronde, J. J.
Maurits, N. M.
Roerdink, J. B. T. M.

Style Invariant Locomotion Classification for Character Control

Boehs, G.E.
Vieira, M.L.H.

A Probabilistic Steering Parameter Model for Deterministic Motion Planning Algorithms

Agethen, Philipp
Gaisbauer, Felix
Rukzio, Enrico

Solid Geometry Processing on Deconstructed Domains

Sellán, Silvia
Cheng, Herng Yi
Ma, Yuming
Dembowski, Mitchell
Jacobson, Alec

Selective Padding for Polycube‐Based Hexahedral Meshing

Cherchi, G.
Alliez, P.
Scateni, R.
Lyon, M.
Bommes, D.

A Survey of Information Visualization Books

Rees, D.
Laramee, R. S.

Increasing the Spatial Resolution of BTF Measurement with Scheimpflug Imaging

Havran, V.
Hošek, J.
Němcová, Š.
Čáp, J.

MyEvents: A Personal Visual Analytics Approach for Mining Key Events and Knowledge Discovery in Support of Personal Reminiscence

Parvinzamir, F.
Zhao, Y.
Deng, Z.
Dong, F.

Filtered Quadrics for High‐Speed Geometry Smoothing and Clustering

Legrand, Hélène
Thiery, Jean‐Marc
Boubekeur, Tamy

Functional Maps Representation On Product Manifolds

Rodolà, E.
Lähner, Z.
Bronstein, A. M.
Bronstein, M. M.
Solomon, J.


BibTeX (38-Issue 1)

@article{10.1111:cgf.13602,
  journal = {Computer Graphics Forum},
  title = {{Ballet}},
  author = {Lawonn, Kai and Günther, Tobias},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13602}
}

@article{10.1111:cgf.13605,
  journal = {Computer Graphics Forum},
  title = {{Editorial 2019 CGF 38-1}},
  author = {Chen, Min and Benes, Bedrich},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13605}
}

@article{10.1111:cgf.13455,
  journal = {Computer Graphics Forum},
  title = {{Issue Information CGF38-1}},
  author = {},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13455}
}

@article{10.1111:cgf.13389,
  journal = {Computer Graphics Forum},
  title = {{Robust Structure‐Based Shape Correspondence}},
  author = {Kleiman, Yanir and Ovsjanikov, Maks},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13389}
}

@article{10.1111:cgf.13391,
  journal = {Computer Graphics Forum},
  title = {{VisFM: Visual Analysis of Image Feature Matchings}},
  author = {Li, Chenhui and Baciu, George},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13391}
}

@article{10.1111:cgf.13390,
  journal = {Computer Graphics Forum},
  title = {{A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments}},
  author = {Mendes, D. and Caputo, F. M. and Giachetti, A. and Ferreira, A. and Jorge, J.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13390}
}

@article{10.1111:cgf.13392,
  journal = {Computer Graphics Forum},
  title = {{Optimal Sample Weights for Hemispherical Integral Quadratures}},
  author = {Marques, Ricardo and Bouville, Christian and Bouatouch, Kadi},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13392}
}

@article{10.1111:cgf.13393,
  journal = {Computer Graphics Forum},
  title = {{Turning a Digital Camera into an Absolute 2D Tele‐Colorimeter}},
  author = {Guarnera, G. C. and Bianco, S. and Schettini, R.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13393}
}

@article{10.1111:cgf.13395,
  journal = {Computer Graphics Forum},
  title = {{FitConnect: Connecting Noisy 2D Samples by Fitted Neighbourhoods}},
  author = {Ohrhallinger, S. and Wimmer, M.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13395}
}

@article{10.1111:cgf.13394,
  journal = {Computer Graphics Forum},
  title = {{Generation and Visual Exploration of Medical Flow Data: Survey, Research Trends and Future Challenges}},
  author = {Oeltze‐Jafra, S. and Meuschke, M. and Neugebauer, M. and Saalfeld, S. and Lawonn, K. and Janiga, G. and Hege, H.‐C. and Zachow, S. and Preim, B.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13394}
}

@article{10.1111:cgf.13449,
  journal = {Computer Graphics Forum},
  title = {{An Adaptive Multi‐Grid Solver for Applications in Computer Graphics}},
  author = {Kazhdan, Misha and Hoppe, Hugues},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13449}
}

@article{10.1111:cgf.13450,
  journal = {Computer Graphics Forum},
  title = {{Realtime Performance‐Driven Physical Simulation for Facial Animation}},
  author = {Barrielle, V. and Stoiber, N.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13450}
}

@article{10.1111:cgf.13451,
  journal = {Computer Graphics Forum},
  title = {{A Survey of Simple Geometric Primitives Detection Methods for Captured 3D Data}},
  author = {Kaiser, Adrien and Ybanez Zepeda, Jose Alonso and Boubekeur, Tamy},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13451}
}

@article{10.1111:cgf.13452,
  journal = {Computer Graphics Forum},
  title = {{Applying Visual Analytics to Physically Based Rendering}},
  author = {Simons, G. and Herholz, S. and Petitjean, V. and Rapp, T. and Ament, M. and Lensch, H. and Dachsbacher, C. and Eisemann, M. and Eisemann, E.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13452}
}

@article{10.1111:cgf.13453,
  journal = {Computer Graphics Forum},
  title = {{Visualization of Neural Network Predictions for Weather Forecasting}},
  author = {Roesch, Isabelle and Günther, Tobias},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13453}
}

@article{10.1111:cgf.13527,
  journal = {Computer Graphics Forum},
  title = {{MegaViews: Scalable Many‐View Rendering With Concurrent Scene‐View Hierarchy Traversal}},
  author = {Kol, Timothy R. and Bauszat, Pablo and Lee, Sungkil and Eisemann, Elmar},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13527}
}

@article{10.1111:cgf.13526,
  journal = {Computer Graphics Forum},
  title = {{Stylized Image Triangulation}},
  author = {Lawonn, Kai and Günther, Tobias},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13526}
}

@article{10.1111:cgf.13528,
  journal = {Computer Graphics Forum},
  title = {{Autonomous Particles for Interactive Flow Visualization}},
  author = {Engelke, Wito and Lawonn, Kai and Preim, Bernhard and Hotz, Ingrid},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13528}
}

@article{10.1111:cgf.13529,
  journal = {Computer Graphics Forum},
  title = {{Flexible Use of Temporal and Spatial Reasoning for Fast and Scalable CPU Broad‐Phase Collision Detection Using KD‐Trees}},
  author = {Serpa, Ygor Rebouças and Rodrigues, Maria Andréia Formico},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13529}
}

@article{10.1111:cgf.13530,
  journal = {Computer Graphics Forum},
  title = {{Controllable Image‐Based Transfer of Flow Phenomena}},
  author = {Bosch, Carles and Patow, Gustavo},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13530}
}

@article{10.1111:cgf.13532,
  journal = {Computer Graphics Forum},
  title = {{On Visualizing Continuous Turbulence Scales}},
  author = {Liu, Xiaopei and Mishra, Maneesh and Skote, Martin and Fu, Chi‐Wing},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13532}
}

@article{10.1111:cgf.13531,
  journal = {Computer Graphics Forum},
  title = {{Projected Field Similarity for Comparative Visualization of Multi‐Run Multi‐Field Time‐Varying Spatial Data}},
  author = {Fofonov, A. and Linsen, L.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13531}
}

@article{10.1111:cgf.13534,
  journal = {Computer Graphics Forum},
  title = {{TexNN: Fast Texture Encoding Using Neural Networks}},
  author = {Pratapa, S. and Olson, T. and Chalfin, A. and Manocha, D.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13534}
}

@article{10.1111:cgf.13533,
  journal = {Computer Graphics Forum},
  title = {{Denoising Deep Monte Carlo Renderings}},
  author = {Vicini, D. and Adler, D. and Novák, J. and Rousselle, F. and Burley, B.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13533}
}

@article{10.1111:cgf.13535,
  journal = {Computer Graphics Forum},
  title = {{Privacy Preserving Visualization: A Study on Event Sequence Data}},
  author = {Chou, Jia‐Kai and Wang, Yang and Ma, Kwan‐Liu},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13535}
}

@article{10.1111:cgf.13536,
  journal = {Computer Graphics Forum},
  title = {{A Survey on Data‐Driven 3D Shape Descriptors}},
  author = {Rostami, R. and Bashiri, F. S. and Rostami, B. and Yu, Z.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13536}
}

@article{10.1111:cgf.13537,
  journal = {Computer Graphics Forum},
  title = {{Gradient‐Guided Local Disparity Editing}},
  author = {Scandolo, Leonardo and Bauszat, Pablo and Eisemann, Elmar},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13537}
}

@article{10.1111:cgf.13538,
  journal = {Computer Graphics Forum},
  title = {{Superpixel Generation by Agglomerative Clustering With Quadratic Error Minimization}},
  author = {Dong, Xiao and Chen, Zhonggui and Yao, Junfeng and Guo, Xiaohu},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13538}
}

@article{10.1111:cgf.13539,
  journal = {Computer Graphics Forum},
  title = {{Shading‐Based Surface Recovery Using Subdivision‐Based Representation}},
  author = {Deng, Teng and Zheng, Jianmin and Cai, Jianfei and Cham, Tat‐Jen},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13539}
}

@article{10.1111:cgf.13540,
  journal = {Computer Graphics Forum},
  title = {{Learning A Stroke‐Based Representation for Fonts}},
  author = {Balashova, Elena and Bermano, Amit H. and Kim, Vladimir G. and DiVerdi, Stephen and Hertzmann, Aaron and Funkhouser, Thomas},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13540}
}

@article{10.1111:cgf.13586,
  journal = {Computer Graphics Forum},
  title = {{Real‐Time Facial Expression Transformation for Monocular RGB Video}},
  author = {Ma, L. and Deng, Z.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13586}
}

@article{10.1111:cgf.13541,
  journal = {Computer Graphics Forum},
  title = {{Real‐Time Human Shadow Removal in a Front Projection System}},
  author = {Kim, Jaedong and Seo, Hyunggoog and Cha, Seunghoon and Noh, Junyong},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13541}
}

@article{10.1111:cgf.13585,
  journal = {Computer Graphics Forum},
  title = {{Urban Walkability Design Using Virtual Population Simulation}},
  author = {Mathew, C. D. Tharindu and Knob, Paulo R. and Musse, Soraia Raupp and Aliaga, Daniel G.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13585}
}

@article{10.1111:cgf.13587,
  journal = {Computer Graphics Forum},
  title = {{A Variational Approach to Registration with Local Exponential Coordinates}},
  author = {Paman, Ashish and Rangarajan, Ramsharan},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13587}
}

@article{10.1111:cgf.13589,
  journal = {Computer Graphics Forum},
  title = {{Incremental Labelling of Voronoi Vertices for Shape Reconstruction}},
  author = {Peethambaran, J. and Parakkat, A.D. and Tagliasacchi, A. and Wang, R. and Muthuganapathy, R.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13589}
}

@article{10.1111:cgf.13588,
  journal = {Computer Graphics Forum},
  title = {{Visual Exploration of Dynamic Multichannel EEG Coherence Networks}},
  author = {Ji, C. and Gronde, J. J. and Maurits, N. M. and Roerdink, J. B. T. M.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13588}
}

@article{10.1111:cgf.13590,
  journal = {Computer Graphics Forum},
  title = {{Style Invariant Locomotion Classification for Character Control}},
  author = {Boehs, G.E. and Vieira, M.L.H.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13590}
}

@article{10.1111:cgf.13591,
  journal = {Computer Graphics Forum},
  title = {{A Probabilistic Steering Parameter Model for Deterministic Motion Planning Algorithms}},
  author = {Agethen, Philipp and Gaisbauer, Felix and Rukzio, Enrico},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13591}
}

@article{10.1111:cgf.13592,
  journal = {Computer Graphics Forum},
  title = {{Solid Geometry Processing on Deconstructed Domains}},
  author = {Sellán, Silvia and Cheng, Herng Yi and Ma, Yuming and Dembowski, Mitchell and Jacobson, Alec},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13592}
}

@article{10.1111:cgf.13593,
  journal = {Computer Graphics Forum},
  title = {{Selective Padding for Polycube‐Based Hexahedral Meshing}},
  author = {Cherchi, G. and Alliez, P. and Scateni, R. and Lyon, M. and Bommes, D.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13593}
}

@article{10.1111:cgf.13595,
  journal = {Computer Graphics Forum},
  title = {{A Survey of Information Visualization Books}},
  author = {Rees, D. and Laramee, R. S.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13595}
}

@article{10.1111:cgf.13594,
  journal = {Computer Graphics Forum},
  title = {{Increasing the Spatial Resolution of BTF Measurement with Scheimpflug Imaging}},
  author = {Havran, V. and Hošek, J. and Němcová, Š. and Čáp, J.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13594}
}

@article{10.1111:cgf.13596,
  journal = {Computer Graphics Forum},
  title = {{MyEvents: A Personal Visual Analytics Approach for Mining Key Events and Knowledge Discovery in Support of Personal Reminiscence}},
  author = {Parvinzamir, F. and Zhao, Y. and Deng, Z. and Dong, F.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13596}
}

@article{10.1111:cgf.13597,
  journal = {Computer Graphics Forum},
  title = {{Filtered Quadrics for High‐Speed Geometry Smoothing and Clustering}},
  author = {Legrand, Hélène and Thiery, Jean‐Marc and Boubekeur, Tamy},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13597}
}

@article{10.1111:cgf.13598,
  journal = {Computer Graphics Forum},
  title = {{Functional Maps Representation On Product Manifolds}},
  author = {Rodolà, E. and Lähner, Z. and Bronstein, A. M. and Bronstein, M. M. and Solomon, J.},
  year = {2019},
  publisher = {© 2019 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13598}
}
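The DOI field in each entry above resolves through the standard doi.org resolver; note that the entry keys use a colon where the DOI proper uses a slash (e.g. `10.1111:cgf.13602` vs. `10.1111/cgf.13602`). A minimal sketch (the helper name `doi_to_url` is ours) that turns either form into a resolvable URL:

```python
def doi_to_url(doi: str) -> str:
    """Build a doi.org resolver URL from a DOI, accepting either the
    canonical slash form or the colon-separated key form used above."""
    # Only the first ':' (between registrant prefix and suffix) needs fixing.
    return "https://doi.org/" + doi.replace(":", "/", 1)

print(doi_to_url("10.1111:cgf.13602"))  # https://doi.org/10.1111/cgf.13602
print(doi_to_url("10.1111/cgf.13526"))  # https://doi.org/10.1111/cgf.13526
```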


Recent Submissions

Now showing 1 - 45 of 45
  • Item
    Ballet
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Lawonn, Kai; Günther, Tobias; Chen, Min and Benes, Bedrich
  • Item
    Editorial 2019 CGF 38-1
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Chen, Min; Benes, Bedrich; Chen, Min and Benes, Bedrich
  • Item
    Issue Information CGF38-1
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Chen, Min and Benes, Bedrich
  • Item
    Robust Structure‐Based Shape Correspondence
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Kleiman, Yanir; Ovsjanikov, Maks; Chen, Min and Benes, Bedrich
    We present a robust method to find region‐level correspondences between shapes, which are invariant to changes in geometry and applicable across multiple shape representations. We generate simplified shape graphs by jointly decomposing the shapes, and devise an adapted graph‐matching technique, from which we infer correspondences between shape regions. The simplified shape graphs are designed to primarily capture the overall structure of the shapes, without reflecting precise information about the geometry of each region, which enables us to find correspondences between shapes that might have significant geometric differences. Moreover, due to the special care we take to ensure the robustness of each part of our pipeline, our method can find correspondences between shapes with different representations, such as triangular meshes and point clouds. We demonstrate that the region‐wise matching that we obtain can be used to find correspondences between feature points, reveal the intrinsic self‐similarities of each shape and even construct point‐to‐point maps across shapes. Our method is both time and space efficient, leading to a pipeline that is significantly faster than comparable approaches. We demonstrate the performance of our approach through an extensive quantitative and qualitative evaluation on several benchmarks where we achieve comparable or superior performance to existing methods.
  • Item
    VisFM: Visual Analysis of Image Feature Matchings
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Li, Chenhui; Baciu, George; Chen, Min and Benes, Bedrich
    Feature matching is the most basic and pervasive problem in computer vision and it has become a primary component in big data analytics. Many tools have been developed for extracting and matching features in video streams and image frames. However, one of the most basic tools, that is, a tool for simply visualizing matched features for the comparison and evaluation of computer vision algorithms, is not generally available, especially when dealing with a large number of matching lines. We introduce VisFM, an integrated visual analysis system for comprehending and exploring image feature matchings. VisFM presents a matching view with intuitive line bundling to provide useful insights regarding the quality of matched features. VisFM is capable of showing a summarization of the features and matchings through a group view to assist domain experts in observing the feature matching patterns from multiple perspectives. VisFM incorporates a series of interactions for exploring the feature data. We demonstrate the visual efficacy of VisFM by applying it to three scenarios. Informal expert feedback from our collaborator in computer vision demonstrates how VisFM can be used for comparing and analysing feature matchings when the goal is to improve an image retrieval algorithm.
  • Item
    A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Mendes, D.; Caputo, F. M.; Giachetti, A.; Ferreira, A.; Jorge, J.; Chen, Min and Benes, Bedrich
    Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or different input modalities, such as touch and mid‐air gestures. Different virtual environments and diverse input modalities present specific issues for controlling object position, orientation and scaling: traditional mouse input, for example, presents non‐trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid‐air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state of the art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid‐air interfaces, for interaction in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.
  • Item
    Optimal Sample Weights for Hemispherical Integral Quadratures
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Marques, Ricardo; Bouville, Christian; Bouatouch, Kadi; Chen, Min and Benes, Bedrich
    This paper proposes optimal quadrature rules over the hemisphere for the shading integral. We leverage recent work on the theory of quadrature rules over the sphere to derive a new theoretical framework for the general case of hemispherical quadrature error analysis. We then apply our framework to the case of the shading integral. We show that our quadrature error theory can be used to derive optimal sample weights (OSW), which account for both the features of the sampling pattern and the bidirectional reflectance distribution function (BRDF). Our method significantly outperforms familiar Quasi Monte Carlo (QMC) and stochastic Monte Carlo techniques. Our results show that the OSW are very effective in compensating for possible irregularities in the sample distribution. This allows us, for example, to significantly exceed the regular convergence rate of stochastic Monte Carlo while keeping the exact same sample sets. Another important benefit of our method is that OSW can be applied whatever the distribution of the sampling points: the sample distribution need not follow a probability density function, which makes our technique much more flexible than QMC or stochastic Monte Carlo solutions. In particular, our theoretical framework makes it easy to combine point sets derived from different sampling strategies (e.g. targeted to diffuse and glossy BRDFs). In this context, our rendering results show that our approach outperforms multiple importance sampling (MIS) techniques.
  • Item
    Turning a Digital Camera into an Absolute 2D Tele‐Colorimeter
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Guarnera, G. C.; Bianco, S.; Schettini, R.; Chen, Min and Benes, Bedrich
    We present a simple and effective technique for absolute colorimetric camera characterization, invariant to changes in exposure/aperture and scene irradiance, and suitable for a wide range of applications including image‐based reflectance measurements, spectral pre‐filtering and spectral upsampling for rendering, and improving colour accuracy in high dynamic range imaging. Our method requires only a limited number of acquisitions, an off‐the‐shelf target, a commonly available projector used as a controllable light source, and the reflected radiance to be known. The characterized camera can be effectively used as a 2D tele‐colorimeter, providing the user with an accurate estimate of the distribution of luminance and chromaticity in a scene, without requiring explicit knowledge of the incident lighting power spectra. We validate the approach by comparing our estimated absolute tristimulus values (XYZ data) with the measurements of a professional 2D tele‐colorimeter, for a set of scenes with complex geometry, spatially varying reflectance and light sources with very different spectral power distributions.
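The simplest form of such a colorimetric characterization is a linear fit from camera responses to reference tristimulus values. The sketch below is purely illustrative (the matrix, patch count and noise level are synthetic assumptions, not the paper's pipeline): given RGB responses and measured XYZ values for a set of target patches, a 3×3 matrix is recovered by least squares.

```python
import numpy as np

# Synthetic stand-in for measured data: a "ground-truth" RGB-to-XYZ
# matrix and 24 colour-checker-like patches.
rng = np.random.default_rng(0)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.random((24, 3))                              # camera responses
xyz = rgb @ M_true.T + rng.normal(0.0, 1e-3, (24, 3))  # reference values

# Least-squares fit of M in XYZ ~= RGB @ M^T.
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T
max_err = float(np.abs(M_fit - M_true).max())
```

A real characterization must additionally handle non-linear sensor response and exposure normalization, which is where the paper's contribution lies.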
  • Item
    FitConnect: Connecting Noisy 2D Samples by Fitted Neighbourhoods
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Ohrhallinger, S.; Wimmer, M.; Chen, Min and Benes, Bedrich
    We propose a parameter‐free method to recover manifold connectivity in unstructured 2D point clouds with high noise in terms of the local feature size. This enables us to capture the features which emerge out of the noise. To achieve this, we extend the reconstruction algorithm, which connects samples to two (noise‐free) neighbours and has been proven to output a manifold for a relaxed sampling condition. Applying this condition to noisy samples by projecting their k‐nearest neighbourhoods onto local circular fits leads to multiple candidate neighbour pairs and thus makes connecting them consistently an NP‐hard problem. To solve this efficiently, we design an algorithm that searches that solution space iteratively on different scales of k. It achieves linear time complexity in terms of point count plus quadratic time in the size of noise clusters. Our algorithm extends seamlessly to connect both samples with and without noise, performs as locally as the recovered features and can output multiple open or closed piecewise curves. Incidentally, our method simplifies the output geometry by eliminating all but a representative point from noisy clusters. Since local neighbourhood fits overlap consistently, the resulting connectivity represents an ordering of the samples along a manifold. This permits us to simply blend the local fits for denoising with the locally estimated noise extent. Aside from applications like reconstructing silhouettes of noisy sensed data, this lays important groundwork to improve surface reconstruction in 3D. Our open‐source algorithm is available online.
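The "local circular fit" at the heart of the method can be illustrated with the classic algebraic (Kåsa) least-squares circle fit: the circle equation is linearized and solved in one least-squares step. This sketch is a generic textbook fit, not the paper's implementation; the test circle and noise level are made up.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones(len(pts))])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    return (cx, cy), float(np.sqrt(cx ** 2 + cy ** 2 - c))

# Noisy samples around a circle of radius 2 centred at (3, -1).
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 40)
pts = np.column_stack([3.0 + 2.0 * np.cos(t), -1.0 + 2.0 * np.sin(t)])
pts += rng.normal(0.0, 0.02, pts.shape)
(cx, cy), r = fit_circle(pts)
```

Projecting a noisy neighbourhood onto such a fitted circle is one way to obtain the denoised candidate neighbours the abstract refers to.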
  • Item
    Generation and Visual Exploration of Medical Flow Data: Survey, Research Trends and Future Challenges
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Oeltze‐Jafra, S.; Meuschke, M.; Neugebauer, M.; Saalfeld, S.; Lawonn, K.; Janiga, G.; Hege, H.‐C.; Zachow, S.; Preim, B.; Chen, Min and Benes, Bedrich
    Simulations and measurements of blood and airflow inside the human circulatory and respiratory system play an increasingly important role in personalized medicine for prevention, diagnosis and treatment of diseases. This survey focuses on three main application areas. (1) Computational fluid dynamics (CFD) simulations of blood flow in cerebral aneurysms assist in predicting the outcome of this pathologic process and of therapeutic interventions. (2) CFD simulations of nasal airflow allow for investigating the effects of obstructions and deformities and provide therapy decision support. (3) 4D phase‐contrast (4D PC) magnetic resonance imaging of aortic haemodynamics supports the diagnosis of various vascular and valve pathologies as well as their treatment. An investigation of the complex and often dynamic simulation and measurement data requires the coupling of sophisticated visualization, interaction and data analysis techniques. In this paper, we survey the large body of work that has been conducted within this realm. We extend previous surveys by incorporating nasal airflow, addressing the joint investigation of blood flow and vessel wall properties and providing a more fine‐granular taxonomy of the existing techniques. From the survey, we extract major research trends and identify open problems and future challenges. The survey is intended for researchers interested in medical flow but also, more generally, in the combined visualization of physiology and anatomy, the extraction of features from flow field data and feature‐based visualization, the visual comparison of different simulation results and the interactive visual analysis of the flow field and derived characteristics.
  • Item
    An Adaptive Multi‐Grid Solver for Applications in Computer Graphics
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Kazhdan, Misha; Hoppe, Hugues; Chen, Min and Benes, Bedrich
    A key processing step in numerous computer graphics applications is the solution of a linear system discretized over a spatial domain. Often, the linear system can be represented using an adaptive domain tessellation, either because the solution will only be sampled sparsely, or because the solution is known to be ‘interesting’ (e.g. high frequency) only in localized regions. In this work, we propose an adaptive finite‐element multi‐grid solver capable of efficiently solving such linear systems. Our solver is designed to be general‐purpose, supporting finite elements of different degrees across different dimensions, and supporting both integrated and pointwise constraints. We demonstrate the efficacy of our solver in applications including surface reconstruction, image stitching and Euclidean Distance Transform calculation.
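For readers unfamiliar with multi-grid, the following is a textbook sketch of the core idea on a uniform 1D Poisson problem (not the paper's adaptive finite-element solver): smooth the error on the fine grid, restrict the residual to a coarser grid, solve there recursively, and interpolate the correction back.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted Jacobi smoothing for -u'' = f (zero Dirichlet boundaries)."""
    for _ in range(iters):
        u[1:-1] = (1.0/3.0) * u[1:-1] + (2.0/3.0) * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    """One recursive V-cycle on a uniform 1D grid."""
    n = len(u) - 1
    if n == 2:                                   # coarsest grid: solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    smooth(u, f, h)                              # pre-smoothing
    r = np.zeros_like(u)                         # residual r = f - A u
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    rc = np.zeros(n // 2 + 1)                    # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros(n // 2 + 1), rc, 2*h)  # coarse-grid correction
    e = np.zeros_like(u)                         # linear interpolation back
    e[2:-1:2] = ec[1:-1]
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return smooth(u, f, h)                       # post-smoothing

n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n)
err = float(np.abs(u - np.sin(np.pi * x)).max())
```

The paper's contribution is to carry this hierarchy over to adaptively tessellated domains rather than uniform grids.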
  • Item
    Realtime Performance‐Driven Physical Simulation for Facial Animation
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Barrielle, V.; Stoiber, N.; Chen, Min and Benes, Bedrich
    We present the first realtime method for generating facial animations enhanced by physical simulation from realtime performance capture data. Unlike purely data‐based techniques, our method is able to produce physical effects on the fly through the simulation of volumetric skin behaviour, lip contacts and sticky lips. It remains practical, however, as it does not require any physical/medical data, which are complex to acquire and process, and instead relies only on the input of a blendshapes model. We achieve realtime performance on the CPU by introducing a progressive Projective Dynamics solver that efficiently solves the physical integration steps even when confronted with constantly changing constraints. Also key to our realtime performance is a new Taylor approximation and memoization scheme for the computation of the Singular Value Decompositions required for the simulation of volumetric skin. We demonstrate the applicability of our method by animating blendshape characters from a simple webcam feed.
  • Item
    A Survey of Simple Geometric Primitives Detection Methods for Captured 3D Data
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Kaiser, Adrien; Ybanez Zepeda, Jose Alonso; Boubekeur, Tamy; Chen, Min and Benes, Bedrich
    The amount of captured 3D data is continuously increasing, with the democratization of consumer depth cameras, the development of modern multi‐view stereo capture setups and the rise of single‐view 3D capture based on machine learning. The analysis and representation of this ever growing volume of 3D data, often corrupted with acquisition noise and reconstruction artefacts, is a serious challenge at the frontier between computer graphics and computer vision. To that end, segmentation and optimization are crucial analysis components of the shape abstraction process, which can themselves be greatly simplified when performed on lightened geometric formats. In this survey, we review the algorithms which extract simple geometric primitives from raw dense 3D data. After giving an introduction to these techniques, from the acquisition modality to the underlying theoretical concepts, we propose an application‐oriented characterization, designed to help select an appropriate method based on one's application needs and compare recent approaches. We conclude by giving hints for how to evaluate these methods and a set of research challenges to be explored.
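As a minimal entry point to primitive detection, here is a generic RANSAC plane detector on a synthetic cloud. This is a classic baseline sketch, not any particular method from the survey; the point counts, noise level and tolerance are invented for the demo.

```python
import numpy as np

def ransac_plane(pts, iters=200, tol=0.02, seed=0):
    """Return a boolean inlier mask for the best plane found."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-12:
            continue                          # degenerate triple, skip
        n = n / np.linalg.norm(n)
        inliers = np.abs((pts - a) @ n) < tol # point-to-plane distance test
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(4)
plane_pts = np.column_stack([rng.uniform(-1, 1, 300),
                             rng.uniform(-1, 1, 300),
                             rng.normal(0.0, 0.005, 300)])  # near z = 0
clutter = rng.uniform(-1, 1, (100, 3))
pts = np.vstack([plane_pts, clutter])
mask = ransac_plane(pts)
```

Most of the surveyed methods can be read as refinements of this loop: better sampling, better scoring, and support for more primitive types than planes.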
  • Item
    Applying Visual Analytics to Physically Based Rendering
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Simons, G.; Herholz, S.; Petitjean, V.; Rapp, T.; Ament, M.; Lensch, H.; Dachsbacher, C.; Eisemann, M.; Eisemann, E.; Chen, Min and Benes, Bedrich
    Physically based rendering is a well‐understood technique to produce realistic‐looking images. However, different algorithms exist for efficiency reasons, which work well in certain cases but fail or produce rendering artefacts in others. Few tools allow a user to gain insight into the algorithmic processes. In this work, we present such a tool, which combines techniques from information visualization and visual analytics with physically based rendering. It consists of an interactive parallel coordinates plot, with a built‐in sampling‐based data reduction technique to visualize the attributes associated with each light sample. Two‐dimensional (2D) and three‐dimensional (3D) heat maps depict any desired property of the rendering process. An interactively rendered 3D view of the scene displays animated light paths based on the user's selection to gain further insight into the rendering process. The provided interactivity enables the user to guide the rendering process for more efficiency. To show its usefulness, we present several applications based on our tool. This includes differential light transport visualization to optimize light setup in a scene, finding the causes of and resolving rendering artefacts, such as fireflies, as well as a path length contribution histogram to evaluate the efficiency of different Monte Carlo estimators.
  • Item
    Visualization of Neural Network Predictions for Weather Forecasting
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Roesch, Isabelle; Günther, Tobias; Chen, Min and Benes, Bedrich
    Recurrent neural networks are prime candidates for learning evolutions in multi‐dimensional time series data. The performance of such a network is judged by the loss function, which is aggregated into a scalar value that decreases during training. Observing only this number hides the variation that occurs within the typically large training and testing data sets. Understanding these variations is of the highest importance for adjusting network hyper‐parameters, such as the number of neurons or the number of layers, or for adjusting the training set to include more representative examples. In this paper, we design a comprehensive and interactive system that allows users to study the output of recurrent neural networks on both the complete training data and testing data. We follow a coarse‐to‐fine strategy, providing overviews of annual, monthly and daily patterns in the time series and directly support a comparison of different hyper‐parameter settings. We applied our method to a recurrent convolutional neural network that was trained and tested on 25 years of climate data to forecast meteorological attributes, such as temperature, pressure and wind velocity. We further visualize the quality of the forecasting models when applied to various locations on the Earth, and we examine the combination of several forecasting models.
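A tiny synthetic example of the problem the abstract describes: an aggregated scalar loss can look healthy while a subset of samples fails badly, which only the per-sample distribution reveals. The numbers below are made up for illustration.

```python
import numpy as np

# 1000 hypothetical per-sample errors: most samples are predicted well,
# a small group fails badly.
rng = np.random.default_rng(2)
per_sample = np.concatenate([rng.normal(0.1, 0.02, 950),
                             rng.normal(2.0, 0.30, 50)])
mean_loss = float(per_sample.mean())           # the single reported number
q50, q99 = np.quantile(per_sample, [0.5, 0.99])
```

The mean stays small while the 99th percentile is an order of magnitude above the median; a visualization system like the one surveyed here surfaces exactly this kind of hidden variation.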
  • Item
    MegaViews: Scalable Many‐View Rendering With Concurrent Scene‐View Hierarchy Traversal
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Kol, Timothy R.; Bauszat, Pablo; Lee, Sungkil; Eisemann, Elmar; Chen, Min and Benes, Bedrich
    We present a scalable solution to render complex scenes from a large amount of viewpoints. While previous approaches rely either on a scene or a view hierarchy to process multiple elements together, we make full use of both, enabling sublinear performance in terms of views and scene complexity. By concurrently traversing the hierarchies, we efficiently find shared information among views to amortize rendering costs. One example application is many‐light global illumination. Our solution accelerates shadow map generation for virtual point lights, whose number can now be raised to over a million while maintaining interactive rates.
  • Item
    Stylized Image Triangulation
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Lawonn, Kai; Günther, Tobias; Chen, Min and Benes, Bedrich
    The art of representing images with triangles is known as image triangulation, which purposefully uses abstraction and simplification to guide the viewer's attention. The manual creation of image triangulations is tedious and thus several tools have been developed in the past that assist in the placement of vertices by means of image feature detection and subsequent Delaunay triangulation. In this paper, we formulate the image triangulation process as an optimization problem. We provide an interactive system that optimizes the vertex locations of an image triangulation to reduce the root mean squared approximation error. Along the way, the triangulation is incrementally refined by splitting triangles until certain refinement criteria are met. The calculation of the energy gradients is expensive, however, and thus we propose an efficient rasterization‐based GPU implementation. To ensure that artists have control over details, the system offers a number of direct and indirect editing tools that split, collapse and re‐triangulate selected parts of the image. For final display, we provide a set of rendering styles, including constant colours, linear gradients, tonal art maps and textures. Finally, we demonstrate temporal coherence for animations and compare our method with existing image triangulation tools.
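The objective being minimized can be sketched in a few lines. For a fixed partition of the image (here simple blocks stand in for triangles, and the image is a synthetic ramp), the optimal constant colour per region is its mean, and the energy is the root mean squared error; the paper optimizes vertex positions to drive this quantity down.

```python
import numpy as np

# A 64x64 grey-value ramp, partitioned into four blocks (stand-ins for
# triangles); the best constant colour per region is its mean.
img = (np.arange(64)[:, None] + np.arange(64)[None, :]) / 126.0
labels = (np.arange(64)[:, None] // 32) * 2 + (np.arange(64)[None, :] // 32)

approx = np.zeros_like(img)
for region in range(4):
    mask = labels == region
    approx[mask] = img[mask].mean()           # constant colour = region mean
rmse = float(np.sqrt(np.mean((img - approx) ** 2)))
global_rmse = float(img.std())                # one constant for the whole image
```

Even this crude four-region partition roughly halves the error of a single constant colour, which is why refining and moving the partition pays off.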
  • Item
    Autonomous Particles for Interactive Flow Visualization
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Engelke, Wito; Lawonn, Kai; Preim, Bernhard; Hotz, Ingrid; Chen, Min and Benes, Bedrich
    We present an interactive approach to analyse flow fields using a new type of particle system, which is composed of autonomous particles exploring the flow. While particles provide a very intuitive way to visualize flows, it is a challenge to capture the important features with such systems. Particles tend to cluster in regions of low velocity and regions of interest are often sparsely populated. To overcome these disadvantages, we propose an automatic adaptation of the particle density with respect to local importance measures. These measures are user defined and the system's sensitivity to them can be adjusted interactively. Together with the particle history, these measures define a probability for particles to multiply or die. There is no communication between the particles and no neighbourhood information has to be maintained. Thus, the particles can be handled in parallel and support a real‐time investigation of flow fields. To enhance the visualization, the particles' properties and selected field measures are also used to specify the system's rendering parameters, such as colour and size. We demonstrate the effectiveness of our approach on different simulated vector fields from technical and medical applications.
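A toy version of the multiply-or-die mechanism, with assumptions of our own: the flow is a rigid 2D vortex, and the importance measure is hypothetically taken to be the local speed. Each particle is advected and resampled independently, with no communication, so density drifts toward important (fast) regions.

```python
import numpy as np

rng = np.random.default_rng(3)

def velocity(p):
    """A rigid 2D vortex around the origin; speed grows with radius."""
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

def step(pts, dt=0.05):
    pts = pts + dt * velocity(pts)                 # advect independently
    speed = np.linalg.norm(velocity(pts), axis=1)
    importance = speed / speed.max()               # user-defined measure
    survive = rng.random(len(pts)) < 0.5 + 0.5 * importance
    split = rng.random(len(pts)) < 0.3 * importance
    parents = survive & split                      # survivors that multiply
    children = pts[parents] + rng.normal(0.0, 0.01,
                                         (np.count_nonzero(parents), 2))
    return np.vstack([pts[survive], children])

pts = rng.uniform(-1.0, 1.0, (2000, 2))
r0 = float(np.linalg.norm(pts, axis=1).mean())     # mean radius before
for _ in range(10):
    pts = step(pts)
r1 = float(np.linalg.norm(pts, axis=1).mean())     # mean radius after
```

After a few steps the surviving population concentrates at large radii, i.e. in the high-importance region, exactly the density adaptation the abstract describes.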
  • Item
    Flexible Use of Temporal and Spatial Reasoning for Fast and Scalable CPU Broad‐Phase Collision Detection Using KD‐Trees
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Serpa, Ygor Rebouças; Rodrigues, Maria Andréia Formico; Chen, Min and Benes, Bedrich
    Realistic computer simulations of physical elements such as rigid and deformable bodies, particles and fractures are commonplace in the modern world. In these simulations, broad‐phase collision detection plays an important role in ensuring that simulations can scale with the number of objects. In these applications, several degrees of motion coherency, distinct spatial distributions and different types of objects exist; however, few attempts have been made at a generally applicable solution for their broad phase. In this regard, this work presents a novel broad‐phase collision detection algorithm based upon a hybrid SIMD‐optimized KD‐Tree and sweep‐and‐prune, aimed at general applicability. Our solution is optimized for several object distributions, degrees of motion coherence and varying object sizes. These features are made possible by an efficient and idempotent two‐step tree optimization algorithm and by selectively enabling coherency optimizations. We have tested our solution under varying scenario setups and compared it to other solutions available in the literature and industry, up to a million simulated objects. The results show that our solution is competitive, with average performance values two to three times better than those achieved by other state‐of‐the‐art AABB‐based CPU solutions.
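For orientation, a minimal sweep-and-prune in 2D (one ingredient of the hybrid above, sketched from the textbook definition rather than the paper's SIMD implementation): sort boxes along one axis, sweep while maintaining an active list, and confirm candidates with a full AABB overlap test.

```python
def sweep_and_prune(boxes):
    """Broad-phase pairs from 2D AABBs given as (min_x, max_x, min_y, max_y)."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    active, pairs = [], []
    for i in order:
        # Drop boxes whose x-interval ended before this one starts.
        active = [j for j in active if boxes[j][1] >= boxes[i][0]]
        for j in active:  # x-intervals overlap; check y to confirm
            if boxes[i][2] <= boxes[j][3] and boxes[j][2] <= boxes[i][3]:
                pairs.append(tuple(sorted((i, j))))
        active.append(i)
    return pairs

demo = [(0, 2, 0, 2), (1, 3, 1, 3), (5, 6, 0, 1), (2.5, 5.5, 0.5, 2)]
pairs = sweep_and_prune(demo)
```

The paper's contribution lies in combining such sweeps with a KD-tree and exploiting temporal coherence, which this sketch deliberately omits.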
  • Item
    Controllable Image‐Based Transfer of Flow Phenomena
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Bosch, Carles; Patow, Gustavo; Chen, Min and Benes, Bedrich
    Modelling flow phenomena and their related weathering effects is often cumbersome due to their dependence on the environment, materials and geometric properties of objects in the scene. Example‐based modelling provides many advantages for reproducing real textures, but little effort has been devoted to reproducing and transferring complex phenomena. In order to produce realistic flow effects, it is possible to take advantage of the widespread availability of flow images on the Internet, which can be used to gather key information about the flow. In this paper, we present a technique that allows the transfer of flow phenomena between photographs, adapting the flow to the target image and giving the user flexibility and control through specifically tailored parameters. This is done through two types of control curves: a fitted theoretical curve to control the mass of deposited material, and an extended colour map for properly adapting to the target appearance. In addition, our method filters and warps the input flow in order to account for the geometric details of the target surface. This leads to a fast and intuitive approach for transferring phenomena between images, with a set of simple parameters to control the process.
  • Item
    On Visualizing Continuous Turbulence Scales
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Liu, Xiaopei; Mishra, Maneesh; Skote, Martin; Fu, Chi‐Wing; Chen, Min and Benes, Bedrich
    Turbulent flows are multi‐scale with vortices spanning a wide range of scales continuously. Due to such complexities, turbulence scales are particularly difficult to analyse and visualize. In this work, we present a novel and efficient optimization‐based method for turbulence structure visualization with scale decomposition directly in the Kolmogorov energy spectrum. To achieve this, we first derive a new analytical objective function based on integration approximation. Using this new formulation, we can significantly improve the efficiency of the underlying optimization process and obtain the desired filter in the Kolmogorov energy spectrum for scale decomposition. More importantly, such a decomposition allows a ‘continuous‐scale visualization’ that enables us to efficiently explore the decomposed turbulence scales and further analyse the turbulence structures in a continuous manner. With our approach, we can present scale visualizations of direct numerical simulation data sets continuously over the scale domain for both isotropic and boundary layer turbulent flows. Compared with previous works on multi‐scale turbulence analysis and visualization, our method is highly flexible and efficient in generating scale decomposition and visualization results. The application of the proposed technique to both isotropic and boundary layer turbulence data sets verifies the capability of our technique to produce desirable scale visualization results.
  • Item
    Projected Field Similarity for Comparative Visualization of Multi‐Run Multi‐Field Time‐Varying Spatial Data
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Fofonov, A.; Linsen, L.; Chen, Min and Benes, Bedrich
    The purpose of multi‐run simulations is often to capture the variability of the output with respect to different initial settings. Comparative analysis of multi‐run spatio‐temporal simulation data requires us to investigate the differences in the dynamics of the simulations' changes over time. To capture the changes and differences, aggregated statistical information may often be insufficient, and it is desirable to capture the local differences between spatial data fields at different times and between different runs. To calculate the pairwise similarity between data fields, we generalize the concept of isosurface similarity from individual surfaces to entire fields and propose efficient computation strategies. The described approach can be applied considering a single scalar field for all simulation runs or can be generalized to a similarity measure capturing all data fields of a multi‐field data set simultaneously. Given the field similarity, we use multi‐dimensional scaling approaches to visualize the similarity in two‐dimensional or three‐dimensional projected views as well as plotting one‐dimensional similarity projections over time. Each simulation run is depicted as a polyline within the similarity maps. The overall visual analysis concept can be applied using our proposed field similarity or any other existing measure for field similarity. We evaluate our measure in comparison to popular existing measures for different configurations and discuss their advantages and limitations. We apply them to generate similarity maps for real‐world data sets within the overall concept for comparative visualization of multi‐run spatio‐temporal data and discuss the results.
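The projection step can be sketched generically: compute a pairwise distance matrix between fields and embed it with classical multi-dimensional scaling. The similarity measure below is a plain L2 distance and the six "runs" are synthetic phase-shifted sines, assumptions for the demo only; the paper's measure generalizes isosurface similarity instead.

```python
import numpy as np

# Six "runs": phase-shifted sine fields with a little noise. Their pairwise
# L2 distances stand in for a field-similarity measure; classical MDS turns
# the distance matrix into a 2D similarity map, one point per run.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 128)
runs = [np.sin(2*np.pi*(x + 0.02*k)) + 0.01*rng.standard_normal(128)
        for k in range(6)]

n = len(runs)
D = np.array([[np.linalg.norm(a - b) for b in runs] for a in runs])

J = np.eye(n) - np.ones((n, n)) / n            # double-centring matrix
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)                 # ascending eigenvalues
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))

# Distances in the 2D map should approximate the field distances.
D_map = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
```

Connecting the per-time-step points of one run yields the polylines the abstract describes.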
  • Item
    TexNN: Fast Texture Encoding Using Neural Networks
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Pratapa, S.; Olson, T.; Chalfin, A.; Manocha, D.; Chen, Min and Benes, Bedrich
    We present a novel deep learning‐based method for fast encoding of textures into current texture compression formats. Our approach uses state‐of‐the‐art neural network methods to compute the appropriate encoding configurations for fast compression. A key bottleneck in the current encoding algorithms is the search step, and we reduce that computation to a classification problem. We use a trained neural network approximation to quickly compute the encoding configuration for a given texture. We have evaluated our approach for compressing the textures for the widely used adaptive scalable texture compression format and evaluate the performance for different block sizes corresponding to 4 × 4, 6 × 6 and 8 × 8. Overall, our method (TexNN) speeds up the encoding computation up to an order of magnitude compared to prior compression algorithms with very little or no loss in the visual quality.
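The abstract's central idea is replacing the encoder's exhaustive search step with a classifier. A purely schematic sketch of that substitution, with a trivial threshold rule standing in for the trained network and made-up configuration names (the real TexNN operates on ASTC encoding configurations), might look like:

```python
# Hypothetical sketch: the slow path tries every encoding configuration,
# the fast path predicts one directly.  CONFIGS, encode_error and
# predict_config are all illustrative stand-ins, not the paper's method.
CONFIGS = ["mode_a", "mode_b", "mode_c"]

def encode_error(block, config):
    """Stand-in distortion measure for encoding `block` with `config`."""
    bias = {"mode_a": 0.0, "mode_b": 0.5, "mode_c": 1.0}[config]
    return abs(sum(block) / len(block) - bias)

def search_best(block):
    """Slow path: exhaustive search over configurations (the bottleneck)."""
    return min(CONFIGS, key=lambda c: encode_error(block, c))

def predict_config(block):
    """Fast path: a classifier maps block statistics to a configuration.
    A threshold rule stands in for the trained neural network here."""
    mean = sum(block) / len(block)
    return "mode_a" if mean < 0.25 else "mode_b" if mean < 0.75 else "mode_c"

block = [0.9, 1.0, 1.0, 1.1]
```

When the classifier agrees with the search on most blocks, the per-block cost drops from one error evaluation per candidate configuration to a single prediction.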
  • Item
    Denoising Deep Monte Carlo Renderings
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Vicini, D.; Adler, D.; Novák, J.; Rousselle, F.; Burley, B.; Chen, Min and Benes, Bedrich
    We present a novel algorithm to denoise deep Monte Carlo renderings, in which pixels contain multiple colour values, each for a different range of depths. Deep images are a more expressive representation of the scene than conventional flat images. However, since each depth bin receives only a fraction of the flat pixel's samples, denoising the bins is harder due to the less accurate mean and variance estimates. Furthermore, deep images lack a regular structure in depth—the number of depth bins and their depth ranges vary across pixels. This prevents a straightforward application of patch‐based distance metrics frequently used to improve the robustness of existing denoising filters. We address these constraints by combining a flat image‐space non‐local means filter operating on pixel colours with a cross‐bilateral filter operating on auxiliary features (albedo, normal, etc.). Our approach significantly reduces noise in deep images while preserving their structure. To our best knowledge, our algorithm is the first to enable efficient deep‐compositing workflows with denoised Monte Carlo renderings. We demonstrate the performance of our filter on a range of scenes highlighting the challenges and advantages of denoising deep images.
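One of the two filter families the abstract combines is a cross-bilateral filter guided by auxiliary features. A minimal 1D sketch (not the paper's filter, and with hypothetical parameter values) shows the key property, that edges present in the guide signal survive smoothing:

```python
import math

def cross_bilateral_1d(colors, guide, sigma_s=1.0, sigma_g=0.1, radius=2):
    """Minimal 1D cross-bilateral filter: each weight combines spatial
    distance with distance in an auxiliary guide signal (e.g. albedo or
    normals), so edges present in the guide are preserved in the output."""
    out = []
    n = len(colors)
    for i in range(n):
        acc, wsum = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) \
              * math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_g ** 2))
            acc += w * colors[j]
            wsum += w
        out.append(acc / wsum)
    return out

# noisy colour signal with a sharp edge that the guide marks exactly
colors = [1.0, 0.9, 1.1, 0.1, 0.0, 0.1]
guide  = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
smoothed = cross_bilateral_1d(colors, guide)
```

Because the guide difference across the edge is large, cross-edge weights vanish and the two sides are averaged independently.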
  • Item
    Privacy Preserving Visualization: A Study on Event Sequence Data
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Chou, Jia‐Kai; Wang, Yang; Ma, Kwan‐Liu; Chen, Min and Benes, Bedrich
    The inconceivable ability and common practice to collect personal data as well as the power of data‐driven approaches to businesses, services and security nowadays also introduce significant privacy issues. There have been extensive studies on addressing privacy preserving problems in the data mining community but relatively few have provided supervised control over the anonymization process. Preserving both the value and privacy of the data is largely a non‐trivial task. We present the design and evaluation of a visual interface that assists users in employing commonly used data anonymization techniques for making privacy preserving visualizations. Specifically, we focus on event sequence data due to its vulnerability to privacy concerns. Our interface is designed for data owners to examine potential privacy issues, obfuscate information as suggested by the algorithm and fine‐tune the results per their discretion. Multiple use case scenarios demonstrate the utility of our design. A user study similarly investigates the effectiveness of the privacy preserving strategies. Our results show that using a visual‐based interface is effective for identifying potential privacy issues, for revealing underlying anonymization processes, and for allowing users to balance between data utility and privacy.
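The kind of privacy issue such an interface surfaces can be illustrated with a standard k-anonymity check on event sequences: any sequence shared by fewer than k individuals is a re-identification risk. This toy check (not the paper's algorithm) flags such candidates for suppression or generalization:

```python
from collections import Counter

def k_anonymity_violations(sequences, k):
    """Return the event sequences shared by fewer than k individuals,
    with their counts.  A toy stand-in for the anonymization checks a
    privacy-preserving visualization interface would surface."""
    counts = Counter(tuple(s) for s in sequences)
    return {seq: c for seq, c in counts.items() if c < k}

records = [
    ["admit", "xray", "discharge"],
    ["admit", "xray", "discharge"],
    ["admit", "surgery", "icu", "discharge"],   # unique -> risky
]
violations = k_anonymity_violations(records, k=2)
```

The data owner could then generalize the rare sequence (e.g. collapse "surgery" and "icu" into "treatment") or suppress it, trading data utility for privacy.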
  • Item
    A Survey on Data‐Driven 3D Shape Descriptors
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Rostami, R.; Bashiri, F. S.; Rostami, B.; Yu, Z.; Chen, Min and Benes, Bedrich
    Recent advances in scanning device technologies and improvements in techniques that generate and synthesize 3D shapes have made 3D models widespread in various fields including medical research, biology, engineering, etc. 3D shape descriptors play a fundamental role in many 3D shape analysis tasks such as point matching, establishing point‐to‐point correspondence, shape segmentation and labelling, and shape retrieval to name a few. Various methods have been proposed to calculate succinct and informative descriptors for 3D models. Emerging data‐driven techniques use machine learning algorithms to construct accurate and reliable shape descriptors. This survey provides a thorough review of the data‐driven 3D shape descriptors from the machine learning point of view and compares them in different criteria. Also, a comprehensive taxonomy of the existing descriptors is proposed based on the exploited machine learning algorithms. Advantages and disadvantages of each category have been discussed in detail. Besides, two alternative categorizations from the data type and the application perspectives are presented. Finally, some directions for possible future research are also suggested.
  • Item
    Gradient‐Guided Local Disparity Editing
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Scandolo, Leonardo; Bauszat, Pablo; Eisemann, Elmar; Chen, Min and Benes, Bedrich
    Stereoscopic 3D technology gives visual content creators a new dimension of design when creating images and movies. While useful for conveying emotion, laying emphasis on certain parts of the scene, or guiding the viewer's attention, editing stereo content is a challenging task. Not respecting comfort zones or adding incorrect depth cues, for example depth inversion, leads to a poor viewing experience. In this paper, we present a solution for editing stereoscopic content that allows an artist to impose disparity constraints and removes resulting depth conflicts using an optimization scheme. Using our approach, an artist only needs to focus on important high‐level indications that are automatically made consistent with the entire scene while avoiding contradictory depth cues and respecting viewer comfort.
  • Item
    Superpixel Generation by Agglomerative Clustering With Quadratic Error Minimization
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Dong, Xiao; Chen, Zhonggui; Yao, Junfeng; Guo, Xiaohu; Chen, Min and Benes, Bedrich
    Superpixel segmentation is a popular image pre‐processing technique in many computer vision applications. In this paper, we present a novel superpixel generation algorithm by agglomerative clustering with quadratic error minimization. We use a quadratic error metric (QEM) to measure the difference of spatial compactness and colour homogeneity between superpixels. Based on the quadratic function, we propose a bottom‐up greedy clustering algorithm to obtain higher quality superpixel segmentation. There are two steps in our algorithm: merging and swapping. First, we calculate the merging cost of two superpixels and iteratively merge the pair with the minimum cost until the termination condition is satisfied. Then, we optimize the boundary of superpixels by swapping pixels according to their swapping cost to improve the compactness. Due to the quadratic nature of the energy function, each of these atomic operations has only O(1) time complexity. We compare the new method with other state‐of‐the‐art superpixel generation algorithms on two datasets, and our algorithm demonstrates superior performance.
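The claim that each merge operation costs only O(1) follows from a standard property of quadratic errors: a cluster's sum of squared errors is fully determined by its count, sum, and sum of squares, and merging just adds these statistics. A minimal 1D sketch of that bookkeeping (not the paper's full colour-plus-position metric):

```python
# For a quadratic error E(S) = sum ||x - mean||^2, each cluster only
# needs the sufficient statistics (count, sum, sum of squares), so the
# cost of merging two clusters is evaluable in O(1) time.

def stats(points):
    """Sufficient statistics (n, sum, sum of squares) of a 1D cluster."""
    return (len(points), sum(points), sum(x * x for x in points))

def sse(st):
    """Sum of squared errors about the mean, from the statistics alone."""
    n, s, ss = st
    return ss - s * s / n

def merge_cost(a, b):
    """Increase in quadratic error if clusters a and b merge: O(1)."""
    merged = (a[0] + b[0], a[1] + b[1], a[2] + b[2])
    return sse(merged) - sse(a) - sse(b)

a = stats([1.0, 1.2])   # two similar clusters...
b = stats([1.1])
c = stats([9.0, 9.1])   # ...and one far away
```

The greedy loop then repeatedly merges the minimum-cost pair; only the pair selection, not the cost evaluation, needs extra machinery such as a priority queue.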
  • Item
    Shading‐Based Surface Recovery Using Subdivision‐Based Representation
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Deng, Teng; Zheng, Jianmin; Cai, Jianfei; Cham, Tat‐Jen; Chen, Min and Benes, Bedrich
    This paper presents subdivision‐based representations for both lighting and geometry in shape‐from‐shading. A very recent shading‐based method introduced a per‐vertex overall illumination model for surface reconstruction, which has the advantage of conveniently handling complicated lighting conditions and avoiding explicit estimation of visibility and varied albedo. However, due to its discrete nature, the per‐vertex overall illumination requires a large amount of memory and lacks intrinsic coherence. To overcome these problems, in this paper we propose to use classic subdivision to define the basic smooth lighting function and surface, and introduce additional independent variables into the subdivision to adaptively model sharp changes of illumination and geometry. Compared to previous works, the new model not only preserves the merits of the per‐vertex illumination model, but also greatly reduces the number of variables required in surface recovery and intrinsically regularizes the illumination vectors and the surface. These features make the new model very suitable for multi‐view stereo surface reconstruction under general, unknown illumination conditions. In particular, a variational surface reconstruction method built upon the subdivision representations for lighting and geometry is developed. The experiments on both synthetic and real‐world data sets have demonstrated that the proposed method can achieve memory efficiency and improve surface detail recovery.
  • Item
    Learning A Stroke‐Based Representation for Fonts
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Balashova, Elena; Bermano, Amit H.; Kim, Vladimir G.; DiVerdi, Stephen; Hertzmann, Aaron; Funkhouser, Thomas; Chen, Min and Benes, Bedrich
    Designing fonts and typefaces is a difficult process for both beginner and expert typographers. Existing workflows require the designer to create every glyph, while adhering to many loosely defined design suggestions to achieve an aesthetically appealing and coherent character set. This process can be significantly simplified by exploiting the similar structure that character glyphs share across different fonts and the shared stylistic elements within the same font. To capture these correlations, we propose learning a stroke‐based font representation from a collection of existing typefaces. To enable this, we develop a stroke‐based geometric model for glyphs and a fitting procedure to reparametrize arbitrary fonts to our representation. We demonstrate the effectiveness of our model through a manifold learning technique that estimates a low‐dimensional font space. Our representation captures a wide range of everyday fonts with topological variations and naturally handles discrete and continuous variations, such as presence and absence of stylistic elements as well as slants and weights. We show that our learned representation can be used for iteratively improving fit quality, as well as exploratory style applications such as completing a font from a subset of observed glyphs, interpolating or adding and removing stylistic elements in existing fonts.
  • Item
    Real‐Time Facial Expression Transformation for Monocular RGB Video
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Ma, L.; Deng, Z.; Chen, Min and Benes, Bedrich
    This paper describes a novel real‐time end‐to‐end system for facial expression transformation, without the need of any driving source. Its core idea is to directly generate desired and photo‐realistic facial expressions on top of input monocular RGB video. Specifically, an unpaired learning framework is developed to learn the mapping between any two facial expressions in the facial blendshape space. Then, it automatically transforms the source expression in an input video clip to a specified target expression through the combination of automated 3D face construction, the learned bi‐directional expression mapping and automated lip correction. It can be applied to new users without additional training. Its effectiveness is demonstrated through many experiments on faces from live and online video, with different identities, ages, speeches and expressions.
  • Item
    Real‐Time Human Shadow Removal in a Front Projection System
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Kim, Jaedong; Seo, Hyunggoog; Cha, Seunghoon; Noh, Junyong; Chen, Min and Benes, Bedrich
    When a person is located between a display and an operating projector, a shadow is cast on the display. The shadow on the display may eliminate important visual information and therefore adversely affect the viewing experiences. There have been various attempts to remove the human shadow cast on a projection display by using multiple projectors. While previous approaches successfully removed the shadow region when a person moderately moves around or stands stationary in front of the display, there is still an afterimage effect due to the lack of consideration of the limb motion of the person. We propose a new real‐time approach to removing the shadow cast by a person who dynamically interacts with the display, making limb motions in a front projection system. The proposed method utilizes a human skeleton obtained from a depth camera to track the posture of the person which changes over time. A model that consists of spheres and conical frustums is constructed based on the skeleton information in order to represent volumetric information of the person being tracked. Our method precisely estimates the shadow region by projecting the volumetric model onto the display. In addition, employment of intensity masks that are built based on a distance field helps suppress the afterimage of the shadow that appears when the person moves abruptly. It also helps blend the projected overlapping images from different projectors and show one smoothly combined display. The experiment results verify that our approach removes the shadow of a person effectively in a front projection environment and is fast enough to achieve real‐time performance.
  • Item
    Urban Walkability Design Using Virtual Population Simulation
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Mathew, C. D. Tharindu; Knob, Paulo R.; Musse, Soraia Raupp; Aliaga, Daniel G.; Chen, Min and Benes, Bedrich
    We present a system to generate a procedural environment that produces a desired crowd behaviour. Instead of altering the behavioural parameters of the crowd itself, we automatically alter the environment to yield such desired crowd behaviour. This novel inverse approach is useful both to crowd simulation in virtual environments and to urban crowd planning applications. Our approach tightly integrates and extends a space discretization crowd simulator with inverse procedural modelling. We extend crowd simulation by goal exploration (i.e. agents are initially unaware of the goal locations), variable‐appealing sign usage and several acceleration schemes. We use Markov chain Monte Carlo to quickly explore the solution space and yield interactive design. We have applied our method to a variety of virtual and real‐world locations, yielding one order of magnitude faster crowd simulation performance over related methods and several fold improvement of crowd indicators.
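The Markov chain Monte Carlo exploration the abstract mentions can be sketched generically as a Metropolis-Hastings loop over a design parameter. Everything below is a hypothetical stand-in: a single one-dimensional "sign position" replaces the full procedural environment, and a closed-form score replaces running the crowd simulator:

```python
import math
import random

def crowd_score(sign_position):
    """Stand-in objective: how well one design parameter matches a
    desired crowd indicator.  The real system scores full crowd
    simulations of the procedural environment instead."""
    target = 3.0          # hypothetical ideal placement
    return -(sign_position - target) ** 2

def metropolis(steps=2000, temperature=0.5, seed=1):
    """Generic Metropolis-Hastings search over the design space."""
    rng = random.Random(seed)
    x = 0.0
    best = x
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)            # propose a design change
        delta = crowd_score(cand) - crowd_score(x)
        if delta > 0 or rng.random() < math.exp(delta / temperature):
            x = cand                               # accept; downhill moves
        if crowd_score(x) > crowd_score(best):     # sometimes, to escape
            best = x                               # local optima
    return best

best_design = metropolis()
```

Accepting occasional downhill moves is what lets the chain explore the solution space quickly enough for the interactive design loop the paper targets.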
  • Item
    A Variational Approach to Registration with Local Exponential Coordinates
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Paman, Ashish; Rangarajan, Ramsharan; Chen, Min and Benes, Bedrich
    We identify a novel parameterization for the group of finite rotations SO(3), consisting of an atlas of exponential maps defined over local tangent planes, for the purpose of computing isometric transformations in registration problems that arise in machine vision applications. Together with a simple representation for translations, the resulting system of coordinates for rigid body motions is proper, free from singularities, is unrestricted in the magnitude of motions that can be represented and poses no difficulties in computer implementations despite their multi‐chart nature. Crucially, such a parameterization helps to admit varied types of data sets, to adopt data‐dependent error functionals for registration, seamlessly bridges correspondence and pose calculations, and inspires systematic variational procedures for computing optimal solutions. As a representative problem, we consider that of registering point clouds onto implicit surfaces without introducing any discretization of the latter. We derive coordinate‐free stationarity conditions, compute consistent linearizations, provide algorithms to compute optimal solutions and examine their performance with detailed examples. The algorithm generalizes naturally to registering curves and surfaces onto implicit manifolds, is directly adaptable to handle the familiar problem of pairwise registration of point clouds and allows for incorporating scale factors during registration.
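The building block of such an atlas is the exponential map from a tangent vector to a rotation, which in SO(3) is Rodrigues' formula. A self-contained sketch of one chart (the paper's contribution is the multi-chart atlas and the registration machinery around it, not this classical map):

```python
import math

def exp_so3(w):
    """Exponential map from a tangent vector w in R^3 (axis-angle) to a
    3x3 rotation matrix via Rodrigues' formula:
        R = I + sin(t) K + (1 - cos(t)) K^2,  t = |w|, K = [w/t]_x."""
    theta = math.sqrt(sum(x * x for x in w))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (x / theta for x in w)
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]
    s, c = math.sin(theta), math.cos(theta)
    R = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            k2 = sum(K[i][m] * K[m][j] for m in range(3))  # (K @ K)[i][j]
            R[i][j] = (1.0 if i == j else 0.0) + s * K[i][j] + (1 - c) * k2
    return R

# quarter turn about z: maps the x-axis onto the y-axis
R = exp_so3([0.0, 0.0, math.pi / 2])
```

A single chart degrades near rotations of angle π; re-centring the map on local tangent planes, as the paper does, is what keeps the parameterization singularity-free for motions of any magnitude.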
  • Item
    Incremental Labelling of Voronoi Vertices for Shape Reconstruction
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Peethambaran, J.; Parakkat, A.D.; Tagliasacchi, A.; Wang, R.; Muthuganapathy, R.; Chen, Min and Benes, Bedrich
    We present an incremental Voronoi vertex labelling algorithm for approximating contours, medial axes and dominant points (high curvature points) from 2D point sets. Though there exist many algorithms for reconstructing curves, medial axes or dominant points, a unified framework capable of approximating all three at once from points is missing in the literature. Our algorithm estimates the normals at each sample point through poles (farthest Voronoi vertices of a sample point) and uses the estimated normals and the corresponding tangents to determine the spatial locations (inner or outer) of the Voronoi vertices with respect to the original curve. The vertex classification helps to construct a piece‐wise linear approximation to the object boundary. We provide a theoretical analysis of the algorithm for points non‐uniformly (ε‐sampling) sampled from simple, closed, concave and smooth curves. The proposed framework has been thoroughly evaluated for its usefulness using various test data. Results indicate that even sparsely and non‐uniformly sampled curves with outliers or collections of curves are faithfully reconstructed by the proposed algorithm.
  • Item
    Visual Exploration of Dynamic Multichannel EEG Coherence Networks
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Ji, C.; Gronde, J. J.; Maurits, N. M.; Roerdink, J. B. T. M.; Chen, Min and Benes, Bedrich
    Electroencephalography (EEG) coherence networks represent functional brain connectivity, and are constructed by calculating the coherence between pairs of electrode signals as a function of frequency. Visualization of such networks can provide insight into unexpected patterns of cognitive processing and help neuroscientists to understand brain mechanisms. However, visualizing EEG coherence networks is a challenge for the analysis of brain connectivity, especially when the spatial structure of the network needs to be taken into account. In this paper, we present a design and implementation of a visualization framework for such dynamic networks. First, requirements for supporting typical tasks in the context of dynamic functional connectivity network analysis were collected from neuroscience researchers. In our design, we consider groups of network nodes and their corresponding spatial location for visualizing the evolution of the dynamic coherence network. We introduce an augmented timeline‐based representation to provide an overview of the evolution of functional units (FUs) and their spatial location over time. This representation can help the viewer to identify relations between functional connectivity and brain regions, as well as to identify persistent or transient functional connectivity patterns across the whole time window. In addition, we introduce the time‐annotated FU map representation to facilitate comparison of the behaviour of nodes between consecutive FU maps. A colour coding is designed that helps to distinguish distinct dynamic FUs. Our implementation also supports interactive exploration. The usefulness of our visualization design was evaluated by an informal user study. The feedback we received shows that our design supports exploratory analysis tasks well. 
The method can serve as a first step before a complete analysis of dynamic EEG coherence networks.
  • Item
    Style Invariant Locomotion Classification for Character Control
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Boehs, G.E.; Vieira, M.L.H.; Chen, Min and Benes, Bedrich
    We present a real‐time system for character control that relies on the classification of locomotive actions in skeletal motion capture data. Our method is both progress dependent and style invariant. Two deep neural networks are used to correlate body shape and implicit dynamics to locomotive types and their respective progress. In comparison to related work, our approach does not require a setup step and enables the user to act in a natural, unconstrained manner. Also, our method displays better performance than the related work in scenarios where the actor performs sharp changes in direction and highly stylized motions while maintaining at least as good performance in other scenarios. Our motivation is to enable character control of non‐bipedal characters in virtual production and live immersive experiences, where mannerisms in the actor's performance may be an issue for previous methods.
  • Item
    A Probabilistic Steering Parameter Model for Deterministic Motion Planning Algorithms
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Agethen, Philipp; Gaisbauer, Felix; Rukzio, Enrico; Chen, Min and Benes, Bedrich
    The simulation of two‐dimensional human locomotion in a bird's eye perspective is a key technology for various domains to realistically predict walk paths. The generated trajectories, however, frequently deviate from reality due to simplifying assumptions. For instance, common deterministic motion planning algorithms predominantly utilize a set of static steering parameters (e.g. maximum acceleration or velocity of the agent) to simulate the walking behaviour of a person. This procedure neglects important influence factors which have a significant impact on the spatio‐temporal characteristics of the resulting motion, such as the operator's physical condition or the probabilistic nature of the human locomotor system. To overcome this drawback, this paper presents an approach to derive probabilistic motion models from a database of captured human motions. Although initially designed for industrial purposes, this method can be applied to a wide range of use cases while considering an arbitrary number of dependencies (input) and steering parameters (output). To underline its applicability, a probabilistic steering parameter model is implemented, which models velocity, angular velocity and acceleration as a function of the travel distance, path curvature and height of a respective person. Finally, the technical performance and advantages of this model are demonstrated within an evaluation.
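A steering parameter model of the kind described above can be illustrated with a minimal sketch: fit the mean of a steering parameter (here, velocity) as a function of the dependencies, and sample around that mean. The linear-Gaussian form, the synthetic database and all names are assumptions for illustration, not the paper's learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical motion database: rows of
# (travel distance [m], path curvature [1/m], body height [m]) -> velocity [m/s].
X = rng.uniform([1.0, 0.0, 1.5], [10.0, 2.0, 2.0], size=(200, 3))
true_w = np.array([0.05, -0.4, 0.6])          # synthetic ground truth
v = X @ true_w + 0.8 + rng.normal(0.0, 0.05, size=200)

# Fit the conditional mean by least squares; residual spread gives the
# probabilistic part of the model.
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, v, rcond=None)
residual_std = float(np.std(v - A @ w))

def sample_velocity(distance, curvature, height, rng):
    """Draw a steering parameter from the fitted conditional Gaussian."""
    mean = w[:3] @ np.array([distance, curvature, height]) + w[3]
    return rng.normal(mean, residual_std)

# Sharper curves should yield lower predicted walking speed.
v_straight = w[:3] @ np.array([5.0, 0.1, 1.8]) + w[3]
v_curved = w[:3] @ np.array([5.0, 1.5, 1.8]) + w[3]
```

Replacing the static parameter set of a deterministic planner with draws from such a conditional distribution is what makes the resulting walk paths person- and situation-dependent.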
  • Item
    Solid Geometry Processing on Deconstructed Domains
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Sellán, Silvia; Cheng, Herng Yi; Ma, Yuming; Dembowski, Mitchell; Jacobson, Alec; Chen, Min and Benes, Bedrich
    Many tasks in geometry processing are modelled as variational problems solved numerically using the finite element method. For solid shapes, this requires a volumetric discretization, such as a boundary conforming tetrahedral mesh. Unfortunately, tetrahedral meshing remains an open challenge and existing methods either struggle to conform to complex boundary surfaces or require manual intervention to prevent failure. Rather than create a single volumetric mesh for the entire shape, we advocate for solid geometry processing on deconstructed domains, where a large and complex shape is composed of overlapping solid subdomains. As each smaller and simpler part is now easier to tetrahedralize, the question becomes how to account for overlaps during problem modelling and how to couple solutions on each subdomain together. We explore how and why previous coupling methods fail, and propose a method that couples solid domains only along their boundary surfaces. We demonstrate the superiority of this method through empirical convergence tests and qualitative applications to solid geometry processing on a variety of popular second‐order and fourth‐order partial differential equations.
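The idea of coupling per-subdomain solutions through boundary values can be seen in a classic toy setting: alternating Schwarz iteration for the 1D Laplace equation on two overlapping intervals, where each subdomain reads its interior boundary condition off the other's current solution. This is a standard textbook stand-in, not the paper's FEM coupling method.

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
iA, iB = 60, 40                # subdomain A = x[0..60], B = x[40..100], overlap [0.4, 0.6]
uA = np.zeros(iA + 1)          # Dirichlet: u(0) = 0
uB = np.zeros(n - iB)
uB[-1] = 1.0                   # Dirichlet: u(1) = 1

def solve_laplace(left, right, m):
    """Exact solution of u'' = 0 on a grid of m points: linear interpolation."""
    return np.linspace(left, right, m)

for _ in range(50):
    uA = solve_laplace(0.0, uB[iA - iB], iA + 1)   # A's right BC read from B
    uB = solve_laplace(uA[iB], 1.0, n - iB)        # B's left BC read from A

# The coupled subdomain solutions converge to the global solution u(x) = x.
```

With overlap, the iteration contracts geometrically; the abstract's point is that on solid shapes the naive couplings behave much worse, motivating coupling restricted to boundary surfaces.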
  • Item
    Selective Padding for Polycube‐Based Hexahedral Meshing
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Cherchi, G.; Alliez, P.; Scateni, R.; Lyon, M.; Bommes, D.; Chen, Min and Benes, Bedrich
    Hexahedral meshes generated from polycube mapping often exhibit a low number of singularities but also poor‐quality elements located near the surface. It is thus necessary to improve the overall mesh quality, in terms of the minimum scaled Jacobian (MSJ) or average scaled Jacobian (ASJ). The quality may be improved via global padding (or pillowing), which pushes the singularities inside by adding an extra layer of hexahedra on the entire domain boundary. Such a global padding operation suffers from a large increase of complexity, with unnecessary hexahedra added. In addition, the quality of elements near the boundary may decrease. We propose a novel optimization method which inserts sheets of hexahedra so as to perform selective padding, where it is most needed for improving the mesh quality. A sheet can pad part of the domain boundary, traverse the domain and form singularities. Our global formulation, based on solving a binary problem, enables us to control the balance between quality improvement, increase of complexity and number of singularities. We show in a series of experiments that our approach increases the MSJ value and preserves (or even improves) the ASJ, while adding fewer hexahedra than global padding.
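The flavour of the binary selection problem can be sketched in miniature: each candidate padding sheet carries an estimated quality gain and a complexity cost, and a subset is chosen to balance the two. The candidates, scores and brute-force enumeration below are hypothetical stand-ins; the paper solves a structured binary optimization over the hex mesh, not an exhaustive search.

```python
from itertools import chain, combinations

# Hypothetical candidate sheets: name -> (estimated quality gain, hexahedra added).
sheets = {
    "boundary_top":   (0.30, 120),
    "boundary_side":  (0.10, 400),
    "through_domain": (0.25, 150),
}
lam = 1e-3   # trade-off weight between quality improvement and added complexity

def score(subset):
    gain = sum(sheets[s][0] for s in subset)
    cost = sum(sheets[s][1] for s in subset)
    return gain - lam * cost

# Enumerate all subsets of sheets and pick the best-scoring one.
subsets = chain.from_iterable(combinations(sheets, r) for r in range(len(sheets) + 1))
best = max(subsets, key=score)
```

The weight `lam` plays the role the abstract describes: it controls the balance between quality improvement and the increase in element count.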
  • Item
    A Survey of Information Visualization Books
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Rees, D.; Laramee, R. S.; Chen, Min and Benes, Bedrich
    Information visualization is a rapidly evolving field with a growing volume of scientific literature and texts continually published. To keep abreast of the latest developments in the domain, survey papers and state‐of‐the‐art reviews provide valuable tools for managing the large quantity of scientific literature. Recently, a survey of survey papers was published to keep track of the quantity of refereed survey papers in information visualization conferences and journals. However, no such resources exist to inform readers of the large volume of books being published on the subject, leaving the possibility of valuable knowledge being overlooked. We present the first literature survey of information visualization books that addresses this challenge by surveying the large volume of books on the topic of information visualization and visual analytics. This unique survey addresses some special challenges associated with collections of books (as opposed to research papers) including searching, browsing and cost. This paper features a novel two‐level classification based on both books and chapter topics examined in each book, enabling the reader to quickly identify to what depth a topic of interest is covered within a particular book. Readers can use this survey to identify the most relevant book for their needs amongst a quickly expanding collection. In indexing the landscape of information visualization books, this survey provides a valuable resource to both experienced researchers and newcomers in the data visualization discipline.
  • Item
    Increasing the Spatial Resolution of BTF Measurement with Scheimpflug Imaging
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Havran, V.; Hošek, J.; Němcová, Š.; Čáp, J.; Chen, Min and Benes, Bedrich
    We present an improved way of acquiring spatially varying surface reflectance represented by a bidirectional texture function (BTF). Planar BTF samples are measured as images at several inclination angles which puts constraints on the minimum depth of field of cameras used in the measurement instrument. For standard perspective imaging, we show that the size of a sample measured and the achievable spatial resolution are strongly interdependent and limited by diffraction at the lens' aperture. We provide a formula for this relationship. We overcome the issue of the required depth of field by using Scheimpflug imaging further enhanced by an anamorphic attachment. The proposed optics doubles the spatial resolution of images compared to standard perspective imaging optics. We built an instrument prototype with the proposed optics that is portable and allows for measurement on site. We show rendered images using the visual appearance measured by the instrument prototype.
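The diffraction limit mentioned above can be made concrete with a standard optics estimate (not the paper's specific formula relating sample size and resolution): stopping the lens down for depth of field enlarges the Airy disk, whose diameter is roughly 2.44 times the wavelength times the f-number, capping the achievable spatial resolution at the sensor.

```python
# Standard diffraction bound: Airy-disk diameter ~ 2.44 * wavelength * f-number.
# Values are illustrative for a lens stopped down to gain depth of field.
wavelength = 550e-9    # green light [m]
f_number = 16.0        # small aperture for large depth of field
airy_diameter = 2.44 * wavelength * f_number   # [m], about 21 micrometres
```

A spot of roughly 21 µm spans several pixels on typical sensors, which is why simply stopping down cannot deliver both a large measured sample and high spatial resolution, motivating the Scheimpflug arrangement.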
  • Item
    MyEvents: A Personal Visual Analytics Approach for Mining Key Events and Knowledge Discovery in Support of Personal Reminiscence
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Parvinzamir, F.; Zhao, Y.; Deng, Z.; Dong, F.; Chen, Min and Benes, Bedrich
    Reminiscence is an important aspect of our lives. It preserves precious memories, allows us to form our own identities and encourages us to accept the past. Our work takes advantage of modern sensor technologies to support reminiscence, enabling self‐monitoring of personal activities and individual movement in space and time on a daily basis. This paper presents MyEvents, a web‐based personal visual analytics platform designed for non‐computing experts that allows for the collection of long‐term location and movement data and the generation of event mementos. Our research is focused on two prominent goals in event reminiscence: (1) selection subjectivity and human involvement in the process of self‐knowledge discovery and memento creation; and (2) the enhancement of event familiarity by presenting target events and their related information for optimal memory recall and reminiscence. A novel multi‐significance event ranking model is proposed to determine significant events in the personal history according to user preferences for event category, frequency and regularity. The evaluation results show that MyEvents effectively fulfils the reminiscence goals and tasks.
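A ranking model combining category preference, frequency and regularity, in the spirit of the abstract, can be sketched as a weighted score. The event data, weights and score form below are hypothetical illustrations, not the paper's model.

```python
import statistics

# Hypothetical event log: (type, category, days on which it occurred).
events = [
    ("morning run", "sport",   [1, 8, 15, 22, 29]),
    ("conference",  "work",    [90]),
    ("café visit",  "leisure", [3, 4, 20, 21, 55]),
]
category_pref = {"sport": 0.9, "work": 0.5, "leisure": 0.7}  # user preferences

def significance(category, days, w=(0.5, 0.3, 0.2)):
    """Weighted combination of category preference, frequency and regularity."""
    freq = len(days)
    gaps = [b - a for a, b in zip(days, days[1:])]
    # Regular events (low gap deviation) score close to 1; one-offs score 0.
    regularity = 1.0 / (1.0 + statistics.pstdev(gaps)) if gaps else 0.0
    return w[0] * category_pref[category] + w[1] * freq / 10 + w[2] * regularity

ranked = sorted(events, key=lambda e: significance(e[1], e[2]), reverse=True)
```

Adjusting the weights `w` corresponds to the user-preference control the abstract describes: whether category, frequency or regularity dominates what counts as a significant event.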
  • Item
    Filtered Quadrics for High‐Speed Geometry Smoothing and Clustering
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Legrand, Hélène; Thiery, Jean‐Marc; Boubekeur, Tamy; Chen, Min and Benes, Bedrich
    Modern 3D capture pipelines produce dense surface meshes at high speed, which challenge geometric operators to process such massive data on‐the‐fly. In particular, aiming at instantaneous feature‐preserving smoothing and clustering disqualifies global variational optimizers and one usually relies on high‐performance parallel kernels based on simple measures performed on the positions and normal vectors associated with the surface vertices. Although these operators are effective on small supports, they fail at properly capturing larger scale surface structures. To cope with this problem, we propose to enrich the surface representation with filtered quadrics, a compact and discriminating range space to guide processing. Compared to normal‐based approaches, this additional vertex attribute significantly improves feature preservation for fast bilateral filtering and mode‐seeking clustering, while exhibiting a linear memory cost in the number of vertices and retaining the simplicity of convolutional filters. In particular, the overall performance of our approach stems from its natural compatibility with modern fine‐grained parallel computing architectures such as graphics processing units (GPUs). As a result, filtered quadrics offer a superior ability to handle a broad spectrum of frequencies and preserve large salient structures, delivering meshes on‐the‐fly for interactive and streaming applications, as well as quickly processing large data collections, instrumental in learning‐based geometry analysis.
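The quadric vertex attribute can be sketched with the classic plane quadric of Garland and Heckbert: a 4x4 matrix whose quadratic form gives squared distance to a plane, which neighbouring vertices can average in a convolution-like filter. This is a minimal reading of the abstract, not the paper's exact filtering scheme.

```python
import numpy as np

def plane_quadric(normal, point):
    """4x4 quadric Q such that v^T Q v is the squared distance of the
    homogeneous point v to the plane through `point` with normal `normal`."""
    n = normal / np.linalg.norm(normal)
    d = -n @ point
    p = np.append(n, d)            # plane coefficients (a, b, c, d)
    return np.outer(p, p)

def filtered_quadric(quadrics, weights):
    """Convolution-like weighted average of neighbouring vertex quadrics."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(quadrics), axes=1)

# Squared distance of a query point to the plane z = 0 via its quadric:
Q = plane_quadric(np.array([0.0, 0.0, 1.0]), np.zeros(3))
v = np.array([3.0, -2.0, 5.0, 1.0])   # homogeneous query point at height 5
# v @ Q @ v evaluates to 25.0 (squared distance 5^2)
```

Because a quadric encodes the local tangent plane up to second order, averaging quadrics over a neighbourhood discriminates surface structure better than averaging normals alone, which is the property the abstract exploits.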
  • Item
    Functional Maps Representation On Product Manifolds
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Rodolà, E.; Lähner, Z.; Bronstein, A. M.; Bronstein, M. M.; Solomon, J.; Chen, Min and Benes, Bedrich
    We consider the tasks of representing, analysing and manipulating maps between shapes. We model maps as densities over the product manifold of the input shapes; these densities can be treated as scalar functions and therefore are manipulable using the language of signal processing on manifolds. Being a manifold itself, the product space endows the set of maps with a geometry of its own, which we exploit to define map operations in the spectral domain; we also derive relationships with other existing representations (soft maps and functional maps). To apply these ideas in practice, we discretize product manifolds and their Laplace–Beltrami operators, and we introduce localized spectral analysis of the product manifold as a novel tool for map processing. Our framework applies to maps defined between and across 2D and 3D shapes without requiring special adjustment, and it can be implemented efficiently with simple operations on sparse matrices.
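The representation can be illustrated on tiny discrete shapes: encode a map as a density matrix on the product of two path graphs, and expand it in products of per-shape Laplacian eigenfunctions. The path-graph Laplacians and the reversal map below are illustrative stand-ins for the paper's Laplace–Beltrami setting.

```python
import numpy as np

def path_laplacian(n):
    """Graph Laplacian of an n-vertex path (Neumann-like endpoints)."""
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    return L

n = 6
_, phi = np.linalg.eigh(path_laplacian(n))   # eigenbasis of shape M
_, psi = np.linalg.eigh(path_laplacian(n))   # eigenbasis of shape N

# A map encoded as a density on the product: vertex i of M goes to
# vertex n-1-i of N (a reversal), written as a permutation matrix.
P = np.eye(n)[::-1]
C = phi.T @ P @ psi      # spectral coefficients in the product eigenbasis
P_rec = phi @ C @ psi.T  # reconstruction; exact when the full basis is kept
```

Truncating `C` to the leading eigenfunctions yields the usual low-pass map approximation, and operations on maps become matrix operations on these spectral coefficients.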