EuroRVVV15
Browsing EuroRVVV15 by Title
Now showing 1 - 10 of 10
Item: Choosing the Right Sample? Experiences of Selecting Participants for Visualization Evaluation (The Eurographics Association, 2015)
Kriglstein, Simone; Pohl, Margit. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
Conducting and reporting evaluation studies has become increasingly popular over the last few years in the information visualization community. A big challenge is to describe such studies in a way that makes the investigations repeatable and comparable with other studies. This includes not only the description of the methodology, tasks, and procedure of the study, but also information about the participants, including the reasons for their selection, to make the work reproducible and to assess its validity. In this paper, we give a short overview of our past research, showing in which contexts and situations which types of participants (e.g., students or experts) were considered.

Item: Debugging Vega through Inspection of the Data Flow Graph (The Eurographics Association, 2015)
Hoffswell, Jane; Satyanarayan, Arvind; Heer, Jeffrey. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
Vega is a declarative visualization grammar that decouples specification from execution to allow users to focus on the visual representation rather than low-level implementation decisions. However, this representation comes at the cost of effective debugging, as its execution is obfuscated. By presenting the developer with Vega's data flow graph along with interactive capabilities, we can bridge the gap between specification and execution to enable direct inspection of the connections between each component. This inspection can augment the developer's mental model of the specification, enabling the developer to more easily identify areas of interest and implement changes to the resulting visualization.

Item: Frontmatter: EuroRV3 2015 EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (Eurographics Association, 2015)
Rosenthal, Paul; Aigner, Wolfgang; Scheidegger, Carlos.

Item: A Mixed Approach for the Evaluation of a Guided Exploratory Visualization System (The Eurographics Association, 2015)
Boukhelifa, Nadia; Bezerianos, Anastasia; Lutton, Evelyne. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
We summarise and reflect upon our experience in evaluating a guided exploratory visualization system. Our system guides users in their exploration of multidimensional datasets to pertinent views of their data, where the notion of pertinence is defined by automatic indicators, such as the amount of visual patterns in the view, and by subjective user feedback obtained during their interaction with the tool. To evaluate this type of system, we argue for deploying a collection of validation methods that are: user-centered, observing the utility and effectiveness of the system for the end user; and algorithm-centered, analysing the computational behaviour of the system. We report on observations and lessons learnt from working with expert users both for the design and the evaluation of our system.

Item: On the Reproducibility of Line Integral Convolution for Real-Time Illustration of Molecular Surface Shape and Salient Regions (The Eurographics Association, 2015)
Lawonn, Kai; Krone, Michael; Ertl, Thomas; Preim, Bernhard. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
In this paper, we discuss the reproducibility of our work presented at EuroVis 2014 [LKEP14], which describes an illustrative rendering method tailored to molecular surfaces. We distinguish between the reproducibility of the data sets that were used for figures and performance analysis and reproducibility in the sense of re-implementing the method. For the latter, we focus on each step of the algorithm and discuss the implementation challenges. We give further details and explain the most difficult parts. Additionally, we discuss how the models that were used can be recreated and the availability of the underlying data. Finally, we discuss the current state of reproducibility of our method and reflect on the problem of offering the source code of a research project in general.

Item: On the Reproducibility of our Biomolecular Visualization (The Eurographics Association, 2015)
Scharnowski, Katrin; Krone, Michael; Reina, Guido; Ertl, Thomas. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
We reflect on the reproducibility of our work presented at EuroVis 2014 [SKR 14], which applies deformable models to compare molecular surfaces. We discuss both negative and positive aspects of our work in terms of reproducibility and put these aspects in a wider, more general context, in particular for the more critical points.

Item: On the Reproducibility of VisRuption: A Tool for Intuitive and Efficient Visualization of Airline Disruption Data (The Eurographics Association, 2015)
Müller, Nicholas Hugo; Pfeiffer, Linda; Ohler, Peter; Rosenthal, Paul. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
Managing the vast amount of resources and processes of large airlines, with several hundred aircraft and several thousand operated flights per day, is a very complex task and makes computer-aided operation irreplaceable. Moreover, there is a multitude of disruptions which can occur every day during airline operation and can result in very expensive delays or cancellations [CTA04, MHR10, Now09]. In our paper at EuroVis 2013 [RPMO13], we presented a design study of the tool VisRuption, which provides intuitive and efficient access to airline disruption data.

Item: Reproducibility Made Easy (The Eurographics Association, 2015)
Freire, Juliana. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
Ever since Francis Bacon, a hallmark of the scientific method has been that experiments should be described in enough detail that they can be repeated and perhaps generalized. When Newton said that he could see farther because he stood on the shoulders of giants, he depended on the truth of his predecessors' observations and the correctness of their calculations. In modern terms, this implies the possibility of repeating results on nominally equal configurations and then generalizing the results by replaying them on new data sets, and seeing how they vary with different parameters. In principle, this should be easier for computational experiments than for natural science experiments, because not only can computational processes be automated but computational systems also do not suffer from the "biological variation" that plagues the life sciences. Unfortunately, the state of the art falls far short of this goal. Most computational experiments are specified only informally in papers, where experimental results are briefly described in figure captions; the code that produced the results is seldom available; and configuration parameters change results in unforeseen ways.

Item: Reproducibility, Verification, and Validation of Experiments on the Marschner-Lobb Test Signal (The Eurographics Association, 2015)
Vad, Viktor; Csébfalvi, Balázs; Rautek, Peter; Gröller, Eduard. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
The Marschner-Lobb (ML) test signal has been used for two decades to evaluate the visual quality of different volumetric reconstruction schemes. Previously, reproducing these experiments was very simple, as the ML signal was used to evaluate only compact filters applied on the traditional Cartesian lattice. As the Cartesian lattice is separable, it is easy to implement these filters as separable tensor-product extensions of well-known 1D filter kernels. Recently, however, non-separable reconstruction filters, which are much more difficult to implement than the traditional tensor-product filters, have received increased attention. Even if these are piecewise polynomial filters, the space partitions of the polynomial pieces are geometrically rather complicated. Therefore, reproducing the ML experiments is becoming more and more difficult. Recently, we reproduced a previously published ML experiment comparing Cartesian Cubic (CC), Body-Centered Cubic (BCC), and Face-Centered Cubic (FCC) lattices in terms of prealiasing. We recognized that the previously applied settings were biased and gave an undue advantage to the FCC-sampled ML representation. This result clearly shows that reproducibility, verification, and validation of the ML experiments are of crucial importance, as the ML signal is the most frequently used benchmark for demonstrating the superiority of a reconstruction scheme or volume representation on non-Cartesian lattices.

Item: Should we Dream the Impossible Dream of Reproducibility in Visual Analytics Evaluation? (The Eurographics Association, 2015)
Smuc, Michael; Schreder, Günther; Mayr, Eva; Windhager, Florian. Edited by W. Aigner, P. Rosenthal, and C. Scheidegger.
Especially in the field of Visual Analytics, where many design decisions have to be taken, researchers strive for reproducible results. We present two different evaluation approaches aiming for more general design knowledge: the isolation of features and the abstraction of results. Both approaches have potential, but also problems, with respect to generating reproducible results. We discuss whether reproducibility is possible, or even the right aim, in the evaluation of Visual Analytics methods.