EuroRVVV: EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization
Browsing by Subject "concepts and models"
Showing 2 of 2 results
Item
Examining the Components of Trust in Map-Based Visualizations (The Eurographics Association, 2019)
Authors: Xiong, Cindy; Padilla, Lace; Grayson, Kent; Franconeri, Steven
Editors: Kosara, Robert; Lawonn, Kai; Linsen, Lars; Smit, Noeska
Prior research suggests that perceived transparency is often associated with perceived trust. For some data types, greater transparency in data visualization is also associated with an increase in the amount of information depicted. Based on prior work in economics and political science that has identified four dimensions of transparency, we examined the influence of accuracy, clarity, amount of disclosure, and thoroughness on a decision task where participants relied on map-based visualizations with varying complexity to solve a crisis. The results of our preliminary analysis suggest that perceived clarity, amount of disclosure, and thoroughness significantly predicted individuals' selection of a Google Maps-like application with either less information or more information. Trust and perceived accuracy did not significantly predict which navigation application visualization participants decided to use (i.e., one with more information or less information). Further, our preliminary results suggest that an individual's ratings of accuracy and disclosure of a visualization predicted their ratings of the trustworthiness of that visualization. We discuss the implications of a possible dissociation between trust and decision tasks on visualization evaluation. In future work, we aim to examine the influence of the amount of information shown in a visualization on ratings of trust and determine the generalizability of our preliminary findings to different task types and visualization approaches.

Item
Towards Supporting Interpretability of Clustering Results with Uncertainty Visualization (The Eurographics Association, 2019)
Authors: Kinkeldey, Christoph; Korjakow, Tim; Benjamin, Jesse Josua
Editors: Kosara, Robert; Lawonn, Kai; Linsen, Lars; Smit, Noeska
Interpretation of machine learning results is a major challenge for non-technical experts, with visualization being a common approach to support this process. For instance, interpretation of clustering results is usually based on scatterplots that provide information about cluster characteristics implicitly through the relative location of objects. However, the locations and distances tend to be distorted because of artifacts stemming from dimensionality reduction. This makes interpretation of clusters difficult and may lead to distrust in the system. Most existing approaches that counter this drawback explain the distances in the scatterplot (e.g., error visualization) to foster the interpretability of implicit information. Instead, we suggest explicit visualization of the uncertainty related to the information needed for interpretation, specifically the uncertain membership of each object to its cluster. In our approach, we place objects on a grid, and add a continuous "topography" in the background, expressing the distribution of uncertainty over all clusters. We motivate our approach from a use case in which we visualize research projects, clustered by topics extracted from scientific abstracts. We hypothesize that uncertainty visualization can increase trust in the system, which we specify as an emergent property of interaction with an interpretable system. We present a first prototype and outline possible procedures for evaluating if and how the uncertainty visualization approach affects interpretability and trust.
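The first item above (Xiong et al.) reports that rated transparency dimensions "significantly predicted" a binary choice between two map applications. The paper does not include its analysis code; the following is a minimal sketch of what such an analysis could look like, assuming numeric ratings for the four dimensions and a binary choice variable. All data here are synthetic placeholders, not the study's data.

```python
# Hypothetical sketch: logistic regression relating rated transparency
# dimensions to a binary visualization choice. Synthetic data only;
# this is not the authors' analysis pipeline.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
# Synthetic 1-7 ratings for accuracy, clarity, disclosure, thoroughness.
ratings = rng.integers(1, 8, size=(n, 4)).astype(float)
accuracy, clarity, disclosure, thoroughness = ratings.T
# Synthetic binary outcome: 1 = chose the app with more information.
logit = 0.6 * clarity + 0.5 * disclosure + 0.4 * thoroughness - 6.0
choice = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Fit the model and inspect per-predictor coefficients and p-values.
X = sm.add_constant(ratings)
model = sm.Logit(choice, X).fit(disp=False)
names = ["const", "accuracy", "clarity", "disclosure", "thoroughness"]
print(model.summary(xname=names))
```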
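The second item (Kinkeldey et al.) describes rendering per-object cluster-membership uncertainty as a continuous background "topography". The prototype itself is not shown in the listing; below is a hedged sketch of the general idea, assuming a soft clustering (here a Gaussian mixture, with posterior entropy as the uncertainty measure) and a synthetic 2-D layout standing in for the paper's grid placement. It is an illustration of the technique's spirit, not the authors' implementation.

```python
# Hypothetical sketch: per-object membership uncertainty drawn as a
# continuous background field behind the object layout.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic 2-D stand-in for a dimensionality-reduced project layout.
points, _ = make_blobs(n_samples=200, centers=3, cluster_std=1.6, random_state=0)

# Soft clustering; entropy of the membership distribution = uncertainty.
gmm = GaussianMixture(n_components=3, random_state=0).fit(points)
probs = gmm.predict_proba(points)
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Interpolate point-wise uncertainty onto a grid to form the "topography".
xi = np.linspace(points[:, 0].min(), points[:, 0].max(), 200)
yi = np.linspace(points[:, 1].min(), points[:, 1].max(), 200)
XI, YI = np.meshgrid(xi, yi)
field = griddata(points, entropy, (XI, YI), method="cubic")

plt.contourf(XI, YI, field, levels=20, cmap="Greys")
plt.scatter(points[:, 0], points[:, 1], c=gmm.predict(points), s=12, cmap="tab10")
plt.title("Membership uncertainty as background topography (sketch)")
plt.show()
```

In this sketch the uncertainty is highest between clusters, so the background darkens exactly where point positions are least trustworthy, which is the interpretive cue the paper argues for.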