Visual Attention for Rendered 3D Shapes
dc.contributor.author | Lavoué, Guillaume | en_US |
dc.contributor.author | Cordier, Frédéric | en_US |
dc.contributor.author | Seo, Hyewon | en_US |
dc.contributor.author | Larabi, Mohamed-Chaker | en_US |
dc.contributor.editor | Gutierrez, Diego and Sheffer, Alla | en_US |
dc.date.accessioned | 2018-04-14T18:23:43Z | |
dc.date.available | 2018-04-14T18:23:43Z | |
dc.date.issued | 2018 | |
dc.description.abstract | Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering are still to be understood. In this work, we conduct two eye-tracking experiments involving 3D shapes, with both static and time-varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes, with the aim of producing a publicly available benchmark of 3D meshes with fixation density maps. First, the collected data are used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as does the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state-of-the-art mesh saliency models in predicting ground-truth fixations, using two different metrics. We show that, even combined with a center-bias model, 3D saliency algorithms remain poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally compare human eye fixations with Schelling points and show that their correlation is weak. | en_US |
dc.description.number | 2 | |
dc.description.sectionheaders | Gaze and Attention | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 37 | |
dc.identifier.doi | 10.1111/cgf.13353 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 191-203 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13353 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13353 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | Computing methodologies | |
dc.subject | Interest point and salient region detections | |
dc.subject | Perception | |
dc.subject | Mesh models | |
dc.title | Visual Attention for Rendered 3D Shapes | en_US |