Grid Labeling: Crowdsourcing Task-Specific Importance from Visualizations

Date
2025
Publisher
The Eurographics Association
Abstract
Knowing where people look in visualizations is key to effective design. Yet, existing research primarily focuses on free-viewing-based saliency models, although visual attention is inherently task-dependent. Collecting task-relevant importance data remains a resource-intensive challenge. To address this, we introduce Grid Labeling, a novel annotation method for collecting task-specific importance data to enhance saliency prediction models. Grid Labeling dynamically segments visualizations into Adaptive Grids, enabling efficient, low-effort annotation while adapting to visualization structure. We conducted a human-subject study comparing Grid Labeling with two existing annotation methods, ImportAnnots and BubbleView, across multiple metrics. Results show that Grid Labeling produces the least noisy data and the highest inter-participant agreement with fewer participants, while requiring less physical (e.g., clicks/mouse movements) and cognitive effort. An interactive demo and the accompanying dataset are available at https://github.com/jangsus1/Grid-Labeling.
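The abstract describes segmenting a visualization into an adaptive grid whose cells follow the image's structure. The paper does not specify the algorithm here, but one common way to realize such adaptive segmentation is a quadtree-style recursive split: a cell is subdivided until its pixel values are sufficiently homogeneous. The sketch below illustrates that idea; the function name, threshold, and stopping criteria are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_grid(values, threshold=0.01, min_size=2):
    """Illustrative quadtree-style segmentation (not the paper's method).

    Recursively split a 2D array into quadrants until each cell's
    variance falls below `threshold` or the cell reaches `min_size`.
    Returns a list of cells as (row, col, height, width) tuples.
    """
    cells = []

    def split(r, c, h, w):
        block = values[r:r + h, c:c + w]
        # Stop splitting when the cell is small or nearly homogeneous.
        if h <= min_size or w <= min_size or block.var() <= threshold:
            cells.append((r, c, h, w))
            return
        hh, hw = h // 2, w // 2
        split(r,      c,      hh,     hw)      # top-left
        split(r,      c + hw, hh,     w - hw)  # top-right
        split(r + hh, c,      h - hh, hw)      # bottom-left
        split(r + hh, c + hw, h - hh, w - hw)  # bottom-right

    split(0, 0, values.shape[0], values.shape[1])
    return cells
```

A uniform image yields a single cell, while regions with visual structure are subdivided into finer cells, which is the behavior an adaptive annotation grid needs: coarse cells over empty background, fine cells over dense chart elements.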
CCS Concepts: Human-centered computing → Visualization techniques; Empirical studies in visualization

        
@inproceedings{10.2312:evs.20251092,
  booktitle = {EuroVis 2025 - Short Papers},
  editor    = {El-Assady, Mennatallah and Ottley, Alvitta and Tominski, Christian},
  title     = {{Grid Labeling: Crowdsourcing Task-Specific Importance from Visualizations}},
  author    = {Chang, Minsuk and Wang, Yao and Wang, Huichen Will and Bulling, Andreas and Bearfield, Cindy Xiong},
  year      = {2025},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-282-0},
  DOI       = {10.2312/evs.20251092}
}