Data-driven Evaluation of Visual Quality Measures

Authors: Sedlmair, Michael; Aupetit, Michael
Editors: H. Carr, K.-L. Ma, and G. Santucci
Date issued: 2015
Abstract: Visual quality measures seek to algorithmically imitate human judgments of patterns such as class separability, correlation, or outliers. In this paper, we propose a novel data-driven framework for evaluating such measures. The basic idea is to take a large set of visually encoded data, such as scatterplots, with reliable human "ground truth" judgments, and to use this human-labeled data to learn how well a measure would predict human judgments on previously unseen data. Measures can then be evaluated based on predictive performance, an approach that is crucial for generalizing across datasets but has gained little attention so far. To illustrate our framework, we use it to evaluate 15 state-of-the-art class separation measures, using human ground truth data from 828 class separation judgments on color-coded 2D scatterplots.
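A minimal sketch of the evaluation idea described in the abstract, not the authors' implementation. It assumes hypothetical inputs: a list of 2D scatterplots given as (points, class labels) pairs, each paired with a binary human judgment of class separability, and a toy `centroid_distance_measure` standing in for any of the 15 measures studied. The measure is scored by how well it predicts the human labels on held-out plots, here via cross-validated accuracy with scikit-learn.

```python
# Sketch only: hypothetical data and a toy measure illustrating the
# "predictive performance" evaluation idea, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def centroid_distance_measure(points, classes):
    """Toy class-separation measure: distance between the two class centroids,
    normalized by the average within-class spread."""
    a, b = points[classes == 0], points[classes == 1]
    between = np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
    within = 0.5 * (a.std() + b.std()) + 1e-9
    return between / within

def evaluate_measure(measure, scatterplots, human_labels, folds=5):
    """Cross-validated accuracy of predicting human separability judgments
    from the measure's scalar output (via a 1D logistic regression)."""
    scores = np.array([[measure(p, c)] for p, c in scatterplots])
    y = np.asarray(human_labels)
    return cross_val_score(LogisticRegression(), scores, y,
                           cv=folds, scoring="accuracy").mean()

# Synthetic plots standing in for human-judged, color-coded 2D scatterplots.
rng = np.random.default_rng(0)
plots, labels = [], []
for _ in range(60):
    sep = rng.uniform(0, 4)                      # true separation of the two clusters
    pts = np.vstack([rng.normal(0, 1, (50, 2)),
                     rng.normal(sep, 1, (50, 2))])
    cls = np.repeat([0, 1], 50)
    plots.append((pts, cls))
    labels.append(int(sep > 2))                  # proxy for a human "separable" judgment
print(evaluate_measure(centroid_distance_measure, plots, labels))
```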
Journal: Computer Graphics Forum, volume 34, number 3
Section: Evaluation and Design
Pages: 201-210
DOI: 10.1111/cgf.12632
URI: https://doi.org/10.1111/cgf.12632
Publisher: The Eurographics Association and John Wiley & Sons Ltd.
Subjects: H.5.0 [Information Interfaces and Presentation]: General