EuroRVVV: EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization
Browsing EuroRVVV: EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization by Title
Now showing 1 - 20 of 42
Item Choosing the Right Sample? Experiences of Selecting Participants for Visualization Evaluation (The Eurographics Association, 2015)
Authors: Kriglstein, Simone; Pohl, Margit
Editors: W. Aigner; P. Rosenthal; C. Scheidegger
Abstract: Conducting and reporting evaluation studies has become increasingly popular over the last few years in the information visualization community. A big challenge is to describe such studies so that the investigations are repeatable and comparable with other studies. This includes not only the description of the methodology, tasks, and procedure of the study but also information about the participants, including the reasons for their selection, to make the work reproducible and to assess its validity. In this paper we give a short overview of our past research to show in which contexts and situations which types of test persons (e.g., students or experts) were considered.

Item Classifying Medical Projection Techniques based on Parameterization Attribute Preservation (The Eurographics Association, 2016)
Authors: Kreiser, Julian; Ropinski, Timo
Editors: Kai Lawonn; Mario Hlawitschka; Paul Rosenthal
Abstract: In many areas of medicine, visualization researchers can help by contributing to task simplification, abstraction, or complexity reduction. As these approaches can allow a better workflow in medical environments by exploiting easier communication through visualization, it is important to question their reliability and their reproducibility. Therefore, within this short paper, we investigate how projections used in medical visualization can be classified with respect to the handled data and the underlying tasks. Many of these techniques are inspired by mesh parameterization, which allows for reducing a surface from R^3 to R^2. This often makes complex structures easier to understand for humans and machines. In the following section, we classify different algorithms in this area (see Table 1) and discuss how these mappings benefit medical visualization.

Item A Crowdsourced Approach to Colormap Assessment (The Eurographics Association, 2017)
Authors: Turton, Terece L.; Ware, Colin; Samsel, Francesca; Rogers, David H.
Editors: Kai Lawonn; Noeska Smit; Douglas Cunningham
Abstract: Despite continual research and discussion on the perceptual effects of color in scientific visualization, psychophysical testing is often limited. In-person lab studies can be expensive and time-consuming, while results can be difficult to extrapolate from meticulously controlled laboratory conditions to the real world of the visualization user. We draw on lessons learned from the use of crowdsourced participant pools in the behavioral sciences and information visualization to apply a crowdsourced approach to a classic psychophysical experiment assessing the ability of a colormap to impart metric information. We use an online presentation analogous to the color key task from Ware's 1988 paper, Color Sequences for Univariate Maps, testing colormaps similar to those in the original paper along with contemporary colormap standards and new alternatives in the scientific visualization domain. We explore the issue of potential contamination from color-deficient participants and establish that perceptual color research can appropriately leverage a crowdsourced participant pool without significant CVD concerns. The updated version of the Ware color key task also provides a method to assess and compare colormaps.

Item Debugging Vega through Inspection of the Data Flow Graph (The Eurographics Association, 2015)
Authors: Hoffswell, Jane; Satyanarayan, Arvind; Heer, Jeffrey
Editors: W. Aigner; P. Rosenthal; C. Scheidegger
Abstract: Vega is a declarative visualization grammar that decouples specification from execution to allow users to focus on the visual representation rather than on low-level implementation decisions. However, this representation comes at the cost of effective debugging, as the execution is obfuscated. By presenting the developer with Vega's data flow graph along with interactive capabilities, we can bridge the gap between specification and execution to enable direct inspection of the connections between each component. This inspection can augment the developer's mental model of the specification, enabling the developer to more easily identify areas of interest and implement changes to the resulting visualization.

Item Detection of Confirmation and Distinction Biases in Visual Analytics Systems (The Eurographics Association, 2019)
Authors: Nalcaci, Atilla Alpay; Girgin, Dilara; Balki, Semih; Talay, Fatih; Boz, Hasan Alp; Balcisoy, Selim
Editors: Kosara, Robert; Lawonn, Kai; Linsen, Lars; Smit, Noeska
Abstract: Cognitive bias is a systematic error that introduces drifts and distortions into human judgment, pulling visual interpretation toward the dominant instance. It plays a significant role in the decision-making processes involved in evaluating data visualizations. This paper elaborates on the experimental depiction of two cognitive bias types, namely distinction bias and confirmation bias, through related visual experiments. The main goal is to demonstrate the existence of these biases in visual analytics systems by adjusting data visualizations and using crowdsourcing. Two distinct surveys containing biased and unbiased data visualizations related to a given data set were set up in order to detect and measure the extent of the introduced bias types. Crowdsourcing via Amazon Mechanical Turk was used to run the prepared surveys. The results statistically indicate that both distinction and confirmation biases have a substantial and significant effect on the decision-making process.

Item Detection of Diabetic Neuropathy - Can Visual Analytics Methods Really Help in Practice? (The Eurographics Association, 2016)
Authors: Röhlig, Martin; Stachs, Oliver; Schumann, Heidrun
Editors: Kai Lawonn; Mario Hlawitschka; Paul Rosenthal
Abstract: Visual analytics (VA) methods are valuable means for supporting the detection of diabetic neuropathy, the most common long-term complication of diabetes mellitus. We suggest two strategies for strengthening the reliability, reproducibility, and applicability of dedicated VA methods in practice. First, we introduce a novel workflow visualization that shows activities together with metadata and produced output, facilitating a guided step-wise analysis. Second, we present a tailored user interface that integrates various VA tools, unifying access to their functionality and enabling free exploration to further assist medical diagnosis. By applying both strategies, we effectively enhance the practical utility of our VA approach for detecting diabetic neuropathy.
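Relating to the "Detection of Confirmation and Distinction Biases in Visual Analytics Systems" entry above: the abstract describes comparing crowdsourced responses to biased and unbiased visualization conditions but does not spell out the statistical analysis. The following is only a minimal sketch with hypothetical response counts, using a chi-square test of independence as one plausible way to test whether the choice distribution differs between conditions; it is not the authors' actual analysis.

```python
# Minimal sketch: does the distribution of choices differ between a "biased"
# and an "unbiased" visualization condition? Counts below are hypothetical
# placeholders, not data from the paper.
from scipy.stats import chi2_contingency

# Rows: survey condition; columns: participants choosing option A vs. option B.
contingency = [
    [34, 16],  # biased visualization condition (hypothetical counts)
    [22, 28],  # unbiased visualization condition (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Choice distribution differs between conditions (possible bias effect).")
else:
    print("No significant difference detected between conditions.")
```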
Item EuroRV3 2016: Frontmatter (Eurographics Association, 2016)
Editors: Kai Lawonn; Mario Hlawitschka; Paul Rosenthal

Item EuroRV3 2017: Frontmatter (Eurographics Association, 2017)
Editors: Lawonn, Kai; Smit, Noeska; Cunningham, Douglas

Item EuroRV3 2018: Frontmatter (The Eurographics Association, 2018)
Editors: Kai Lawonn; Noeska Smit; Lars Linsen; Robert Kosara

Item Evaluating the Perceptual Uniformity of Color Sequences for Feature Discrimination (The Eurographics Association, 2017)
Authors: Ware, Colin; Turton, Terece L.; Samsel, Francesca; Bujack, Roxana; Rogers, David H.
Editors: Kai Lawonn; Noeska Smit; Douglas Cunningham
Abstract: Probably the most common method for visualizing univariate data maps is pseudocoloring, and one of the most commonly cited requirements of a good colormap is that it be perceptually uniform. This means that differences between adjacent colors in the sequence should be equally distinct. The practical value of uniformity is for features in the data to be equally distinctive no matter where they lie in the colormap, but there are reasons for thinking that uniformity in terms of feature detection may not be achieved by current methods, which are based on the use of uniform color spaces. In this paper we provide a new method for directly evaluating colormaps in terms of their capacity for feature resolution. We apply the method in a study using Amazon Mechanical Turk to evaluate seven colormaps. Among other findings, the results show that two new double-ended sequences have the highest discriminative power and good uniformity. Ways in which the technique can be applied include the design of colormaps for uniformity and a method for evaluating colormaps through feature discrimination curves for differently sized features.

Item Examining the Components of Trust in Map-Based Visualizations (The Eurographics Association, 2019)
Authors: Xiong, Cindy; Padilla, Lace; Grayson, Kent; Franconeri, Steven
Editors: Kosara, Robert; Lawonn, Kai; Linsen, Lars; Smit, Noeska
Abstract: Prior research suggests that perceived transparency is often associated with perceived trust. For some data types, greater transparency in data visualization is also associated with an increase in the amount of information depicted. Based on prior work in economics and political science that has identified four dimensions of transparency, we examined the influence of accuracy, clarity, amount of disclosure, and thoroughness on a decision task in which participants relied on map-based visualizations of varying complexity to solve a crisis. The results of our preliminary analysis suggest that perceived clarity, amount of disclosure, and thoroughness significantly predicted individuals' selection of a Google Maps-like application with either less or more information. Trust and perceived accuracy did not significantly predict which navigation application visualization participants decided to use (i.e., one with more information or less information). Further, our preliminary results suggest that an individual's ratings of the accuracy and disclosure of a visualization predicted their ratings of the trustworthiness of that visualization. We discuss the implications of a possible dissociation between trust and decision tasks for visualization evaluation. In future work, we aim to examine the influence of the amount of information shown in a visualization on ratings of trust and to determine the generalizability of our preliminary findings to different task types and visualization approaches.

Item Experiences on Validation of Multi-Component System Simulations for Medical Training Applications (The Eurographics Association, 2016)
Authors: Law, Yuen C.; Weyers, Benjamin; Kuhlen, Torsten W.
Editors: Kai Lawonn; Mario Hlawitschka; Paul Rosenthal
Abstract: In the simulation of multi-component systems, we often encounter the problem of missing ground-truth data. This situation makes the validation of our simulation methods and models a difficult task. In this work we present a guideline for designing validation methodologies that can be applied to the validation of multi-component simulations that lack ground-truth data. Additionally, we present an example applied to an ultrasound image simulation for medical training and give an overview of the considerations made and the results for each of the validation methods. With these guidelines we expect to obtain more comparable and reproducible validation results from which other similar work can benefit.

Item From a User Study to a Valid Claim: How to Test Your Hypothesis and Avoid Common Pitfalls (The Eurographics Association, 2017)
Authors: Hoon, Niels H. L. C. de; Eisemann, Elmar; Vilanova, Anna
Editors: Kai Lawonn; Noeska Smit; Douglas Cunningham
Abstract: The evaluation of visualization methods or designs often relies on user studies. Apart from the difficulties involved in the design of the study itself, the existing mechanisms for obtaining sound conclusions are often unclear. In this work, we review and summarize some of the common statistical techniques that can be used to validate a claim in the scenarios that are commonly present in user studies in visualization, i.e., hypothesis testing. Usually, the number of participants is small and the mean and variance of the distribution are not known; therefore, we focus on the techniques that are adequate within these limitations. Our aim in this paper is to clarify the goals and limitations of hypothesis testing from a user study perspective, which can be of interest to the visualization community. We provide an overview of the most common mistakes made when testing a hypothesis that can lead to erroneous claims, and we present strategies to avoid them.

Item Frontmatter: EuroRV3 2015 EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (Eurographics Association, 2015)
Editors: Rosenthal, Paul; Aigner, Wolfgang; Scheidegger, Carlos

Item Gaze into Hierarchy: A Practice-oriented Eye Tracking Study (The Eurographics Association, 2013)
Authors: Müller, N. H.; Liebold, B.; Pietschmann, D.; Ohler, P.; Rosenthal, P.
Editors: P. Rosenthal; R. S. Laramee; M. Kirby; G. L. Kindlmann
Abstract: The visualization of hierarchical data is a wide field, and plenty of different approaches have been proposed for various applications and purposes. A comprehensive survey of hierarchy visualizations was recently presented by Schulz et al. [SHS11]. Although every approach has its own claimed advantages, for practitioners it is often unclear what these mean in a specific context and which method to use.
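Relating to the "Evaluating the Perceptual Uniformity of Color Sequences for Feature Discrimination" entry above: perceptual uniformity is commonly checked by measuring color differences between adjacent colormap entries in a nominally uniform color space such as CIELAB. The sketch below is not the paper's feature-discrimination method; it is only an illustration, assuming recent matplotlib and scikit-image installations, of how adjacent Delta E steps of a colormap could be inspected.

```python
# Minimal sketch: per-step CIELAB differences (Delta E 1976) along a colormap.
# A colormap that is uniformly spaced in a perceptually uniform space would
# show near-constant step sizes.
import numpy as np
import matplotlib
from skimage.color import rgb2lab, deltaE_cie76

cmap = matplotlib.colormaps["viridis"]          # any registered colormap name
rgb = cmap(np.linspace(0.0, 1.0, 256))[:, :3]   # drop alpha, keep RGB in [0, 1]

lab = rgb2lab(rgb.reshape(1, -1, 3)).reshape(-1, 3)
steps = deltaE_cie76(lab[:-1], lab[1:])          # Delta E between neighbouring entries

print(f"mean step: {steps.mean():.3f}, min: {steps.min():.3f}, max: {steps.max():.3f}")
print(f"step ratio (max/min): {steps.max() / steps.min():.2f}")  # 1.0 would be perfectly uniform
```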
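Relating to the "From a User Study to a Valid Claim" entry above: with few participants and unknown population mean and variance, the abstract points to hypothesis tests suited to those limitations, of which Student's t-test is the standard example. The snippet below is only a generic illustration with made-up completion times, not an analysis taken from the paper.

```python
# Minimal sketch: two-sample t-test on hypothetical task completion times (in
# seconds) for a baseline and a new visualization technique. Small samples with
# unknown variance are exactly the setting where the t distribution applies.
from scipy import stats

baseline = [41.2, 38.5, 44.0, 39.7, 42.3, 40.1, 43.8, 37.9]   # hypothetical data
new_tech = [35.4, 33.9, 38.2, 34.7, 36.1, 32.8, 37.5, 34.0]   # hypothetical data

# Welch's t-test: does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(baseline, new_tech, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below the chosen significance level (e.g. 0.05) only rejects the
# null hypothesis of equal means; it does not by itself quantify effect size.
```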
Item Guidelines and Recommendations for the Evaluation of New Visualization Techniques by Means of Experimental Studies (The Eurographics Association, 2017)
Authors: Luz, Maria; Lawonn, Kai; Hansen, Christian
Editors: Kai Lawonn; Noeska Smit; Douglas Cunningham
Abstract: This paper addresses important issues in the evaluation of new visualization techniques. It describes the principles of quantitative research in general and presents the idea of experimental studies. The goal of experimental studies is to provide the basis for testing hypotheses that newly developed visualization solutions are better than older ones. Moreover, the paper provides guidelines for the successful planning of experimental studies in terms of independent and dependent variables, participants, tasks, data collection, and statistical evaluation of the collected data. It describes how the results should be interpreted and reported in publications. Finally, the paper points out useful literature and thus contributes to a better understanding of how to evaluate new visualization techniques.

Item High-Performance Motion Correction of Fetal MRI (The Eurographics Association, 2016)
Authors: Kainz, Bernhard; Lloyd, David F. A.; Alansary, Amir; Murgasova, Maria Kuklisova; Khlebnikov, Rostislav; Rueckert, Daniel; Rutherford, Mary; Razavi, Reza; Hajnal, Jo V.
Editors: Kai Lawonn; Mario Hlawitschka; Paul Rosenthal
Abstract: Fetal Magnetic Resonance Imaging (MRI) shows promising results for prenatal diagnostics. The detection of potentially life-threatening abnormalities in the fetus can be difficult with ultrasound alone, and MRI is one of the few safe alternative imaging modalities in pregnancy. However, to date it has been limited by unpredictable fetal and maternal motion during acquisition. Motion between the acquisitions of individual slices of a 3D volume results in spatial inconsistencies that can be resolved by slice-to-volume reconstruction (SVR) methods to provide high-quality 3D image data. Existing algorithms for this problem have evolved from very slow implementations targeting a single organ to general high-performance solutions that reconstruct the whole uterus. In this paper we give a brief overview of the current state of the art in fetal motion compensation methods and show currently emerging clinical applications of these techniques.

Item An Introduction to Evaluation in Medical Visualization (The Eurographics Association, 2016)
Authors: Smit, Noeska; Lawonn, Kai
Editors: Kai Lawonn; Mario Hlawitschka; Paul Rosenthal
Abstract: Medical visualization papers often deal with data that is interpreted by medical domain experts in a research or clinical context. Since visualizations are by definition designed to be interpreted by a human observer, an evaluation is often performed to confirm the utility of a presented method. The exact type of evaluation required is not always clear, especially to new researchers. With this paper, we hope to clarify the different types of evaluation methods that exist and provide practical guidelines for choosing the most suitable evaluation method to increase the value of the work.

Item A Mixed Approach for the Evaluation of a Guided Exploratory Visualization System (The Eurographics Association, 2015)
Authors: Boukhelifa, Nadia; Bezerianos, Anastasia; Lutton, Evelyne
Editors: W. Aigner; P. Rosenthal; C. Scheidegger
Abstract: We summarise and reflect upon our experience in evaluating a guided exploratory visualization system. Our system guides users in their exploration of multidimensional datasets to pertinent views of their data, where the notion of pertinence is defined by automatic indicators, such as the amount of visual patterns in the view, and by subjective user feedback obtained during their interaction with the tool. To evaluate this type of system, we argue for deploying a collection of validation methods that are user-centered, observing the utility and effectiveness of the system for the end user, and algorithm-centered, analysing the computational behaviour of the system. We report on observations and lessons learnt from working with expert users in both the design and the evaluation of our system.

Item On the Evaluation of a Semi-Automatic Vortex Flow Classification in 4D PC-MRI Data of the Aorta (The Eurographics Association, 2016)
Authors: Meuschke, Monique; Köhler, Ben; Preim, Bernhard; Lawonn, Kai
Editors: Kai Lawonn; Mario Hlawitschka; Paul Rosenthal
Abstract: In this paper, we report on the experiences we gained during our contributions to the field of visualizing flow characteristics. We mainly focused on vortex flow classification in 4D PC-MRI data, as current medical studies assume a strong correlation between cardiovascular diseases and blood flow patterns such as vortices. For further analysis, medical experts are asked to manually extract and classify such vortices according to specific properties. We presented and evaluated techniques that enable a fast and robust vortex classification [MLK 16, MKP 16] to support medical experts. The main focus of this paper is a report describing our conversations with the domain experts. This dialog was the foundation that gave us direction on what the experts need. We derived several requirements that should be fulfilled by our tool and, from these, developed a prototype that supports the experts. Finally, we describe the evaluation of our framework and discuss current limitations.
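Relating to the "On the Evaluation of a Semi-Automatic Vortex Flow Classification" entry above: the abstract does not detail which evaluation metrics were used. As one hedged illustration, agreement between a semi-automatic classification and expert labels is often summarized with a chance-corrected measure such as Cohen's kappa; the labels below are invented placeholders, not results from the paper.

```python
# Minimal sketch: agreement between hypothetical expert labels and the output
# of a semi-automatic classifier for a set of candidate vortices, summarized
# with Cohen's kappa (chance-corrected agreement).
from sklearn.metrics import cohen_kappa_score, confusion_matrix

expert    = ["vortex", "vortex", "none", "vortex", "none", "none", "vortex", "none"]
automatic = ["vortex", "none",   "none", "vortex", "none", "vortex", "vortex", "none"]

kappa = cohen_kappa_score(expert, automatic)
print("confusion matrix:\n", confusion_matrix(expert, automatic, labels=["vortex", "none"]))
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0.0 = chance level
```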