Browsing by Author "Keim, Daniel"
Now showing 1 - 7 of 7
Item: A Comprehensive Workflow for Effective Imitation and Reinforcement Learning with Visual Analytics (The Eurographics Association, 2022)
Metz, Yannick; Schlegel, Udo; Seebacher, Daniel; El-Assady, Mennatallah; Keim, Daniel; Bernard, Jürgen; Angelini, Marco
Despite recent successes, multiple challenges hinder the application of reinforcement learning algorithms in experimental and real-world use cases. These challenges occur at different stages of the development and deployment of such models. While reinforcement learning workflows share similarities with machine learning approaches, we argue that their distinct challenges can be tackled and overcome using visual analytics concepts. Thus, we propose a comprehensive workflow for reinforcement learning and present an implementation of this workflow that integrates tailored views and visualizations for the different stages and tasks of the workflow.

Item: EduClust - A Visualization Application for Teaching Clustering Algorithms (The Eurographics Association, 2019)
Fuchs, Johannes; Isenberg, Petra; Bezerianos, Anastasia; Miller, Matthias; Keim, Daniel; Tarini, Marco and Galin, Eric
We present EduClust, a visualization application for teaching clustering algorithms. EduClust is an online application that combines visualizations, interactions, and animations to facilitate the understanding and teaching of clustering steps, parameters, and procedures. Traditional classroom settings aim for cognitive processes like remembering and understanding; we designed EduClust for expanded educational objectives like applying and evaluating. Educators can use the tool in class to show the effect of different clustering parameters on various datasets while animating through each algorithm's steps, but they can also use it to prepare traditional teaching material quickly by exporting animations and images. Students, on the other hand, benefit from the ability to compare and contrast the influence of clustering parameters on different datasets while seeing technical details such as pseudocode and step-by-step explanations.

Item: Immersive Analytics with Abstract 3D Visualizations: A Survey (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022)
Kraus, Matthias; Fuchs, Johannes; Sommer, Björn; Klein, Karsten; Engelke, Ulrich; Keim, Daniel; Schreiber, Falk; Hauser, Helwig and Alliez, Pierre
After a long period of scepticism, a growing number of publications describe not only basic research but also practical approaches to presenting abstract data in immersive environments for effective and efficient data understanding. Central aspects of this research question in immersive analytics concern the use of 3D for visualization, the embedding in the immersive space, the combination with spatial data, suitable interaction paradigms, and the evaluation of use cases. We provide a characterization that facilitates the comparison and categorization of published works and present a survey of publications that gives an overview of the state of the art, current trends, and gaps and challenges in current research.

Item: Interactive Dense Pixel Visualizations for Time Series and Model Attribution Explanations (The Eurographics Association, 2023)
Schlegel, Udo; Keim, Daniel; Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko
The field of Explainable Artificial Intelligence (XAI) for deep neural network models is developing rapidly, offering numerous techniques for extracting explanations from models. However, evaluating explanations is often not trivial, and differences between applied metrics can be subtle, especially with non-intelligible data. Thus, there is a need for visualizations tailored to exploring explanations in domains with such data, e.g., time series. We propose DAVOTS, an interactive visual analytics approach for exploring raw time series data, activations of neural networks, and attributions in a dense-pixel visualization to gain insights into the data, the model's decisions, and the explanations. To further support users in exploring large datasets, we apply clustering approaches to the visualized data domains to highlight groups and present ordering strategies for individual and combined data exploration to facilitate finding patterns. We visualize a CNN trained on the FordA dataset to demonstrate the approach.

Item: Learning Contextualized User Preferences for Co-Adaptive Guidance in Mixed-Initiative Topic Model Refinement (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Sperrle, Fabian; Schäfer, Hanna; Keim, Daniel; El-Assady, Mennatallah; Borgo, Rita and Marai, G. Elisabeta and Landesberger, Tatiana von
Mixed-initiative visual analytics systems support collaborative human-machine decision-making processes. However, many multi-objective optimization tasks, such as topic model refinement, are highly subjective and context-dependent. Hence, systems need to adapt their optimization suggestions throughout the interactive refinement process to provide efficient guidance. To tackle this challenge, we present a technique for learning context-dependent user preferences and demonstrate its applicability to topic model refinement. We deploy agents with distinct associated optimization strategies that compete for the user's acceptance of their suggestions. To decide when to provide guidance, each agent maintains an intelligible, rule-based classifier over context vectorizations that captures the development of quality metrics between distinct analysis states. By observing implicit and explicit user feedback, agents learn in which contexts to provide their specific guidance operation. An agent in topic model refinement might, for example, learn to react to declining model coherence by suggesting to split a topic. Our results confirm that the rules learned by agents capture contextual user preferences. Further, we show that the learned rules are transferable between similar datasets, avoiding common cold-start problems and enabling a continuous refinement of agents across corpora.

Item: A Survey of Human-Centered Evaluations in Human-Centered Machine Learning (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Sperrle, Fabian; El-Assady, Mennatallah; Guo, Grace; Borgo, Rita; Chau, Duen Horng; Endert, Alex; Keim, Daniel; Smit, Noeska and Vrotsou, Katerina and Wang, Bei
Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Such applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human-centered machine learning. We particularly focus on human-related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human-computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.

Item: VISITOR: Visual Interactive State Sequence Exploration for Reinforcement Learning (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Metz, Yannick; Bykovets, Eugene; Joos, Lucas; Keim, Daniel; El-Assady, Mennatallah; Bujack, Roxana; Archambault, Daniel; Schreck, Tobias
Understanding the behavior of deep reinforcement learning agents is a crucial requirement throughout their development. Existing work has addressed the identification of observable behavioral patterns in state sequences or the analysis of isolated internal representations; however, the overall decision-making of deep RL agents remains opaque. To tackle this, we present VISITOR, a visual analytics system enabling the analysis of entire state sequences, the diagnosis of singular predictions, and the comparison between agents. A sequence embedding view enables the multiscale analysis of state sequences, utilizing custom embedding techniques for a stable spatialization of the observations and internal states. We provide multiple layers: (1) a state space embedding, highlighting different groups of states inside the state-action sequences, (2) a trajectory view, emphasizing decision points, (3) a network activation mapping, visualizing the relationship between observations and network activations, and (4) a transition embedding, enabling the analysis of state-to-state transitions. The embedding view is accompanied by an interactive reward view that captures the temporal development of metrics, which can be linked directly to states in the embedding. Lastly, a model list allows for the quick comparison of models across multiple metrics. Annotations can be exported to communicate results to different audiences. Our two-stage evaluation with eight experts confirms the system's effectiveness in identifying states of interest, comparing the quality of policies, and reasoning about the agents' internal decision-making processes.
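The EduClust entry above centers on showing how clustering parameters influence results on different datasets. The following is a minimal, hedged sketch of that kind of comparison; it is not code from EduClust itself (which is an online visualization application), and the synthetic datasets, the scikit-learn estimators, and the printed silhouette scores are illustrative assumptions only.

```python
# Illustrative sketch (not EduClust): how clustering parameters change results
# on two synthetic datasets. Requires scikit-learn.
from sklearn.datasets import make_blobs, make_moons
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

datasets = {
    "blobs": make_blobs(n_samples=300, centers=3, random_state=0)[0],
    "moons": make_moons(n_samples=300, noise=0.05, random_state=0)[0],
}

for name, X in datasets.items():
    # Varying k shows how a partitioning algorithm reacts to the parameter choice.
    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(f"{name}: k-means, k={k}, silhouette={silhouette_score(X, labels):.2f}")
    # DBSCAN groups by density instead of a fixed cluster count; eps controls
    # the neighbourhood radius and changes the outcome qualitatively.
    labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"{name}: DBSCAN, eps=0.3 -> {n_clusters} clusters")
```

In EduClust this comparison happens visually and interactively with animated algorithm steps; the sketch only reproduces the underlying parameter sweep numerically.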
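The DAVOTS entry describes dense-pixel visualizations of time series combined with clustering-based ordering strategies. The sketch below illustrates that general idea only; it is not the DAVOTS system, and the synthetic series plus a single KMeans-based ordering stand in for the paper's FordA data, network activations, attributions, and its actual ordering strategies.

```python
# Minimal dense-pixel sketch: each row is one time series, each pixel one time
# step, and rows are reordered by cluster membership so similar series appear
# as contiguous visual bands. Requires numpy, scikit-learn, and matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 128)
# Two synthetic classes of noisy series, shuffled together.
series = np.vstack([np.sin(t) + rng.normal(0, 0.3, (60, t.size)),
                    np.sign(np.sin(t)) + rng.normal(0, 0.3, (60, t.size))])
rng.shuffle(series)

# Ordering strategy: cluster the rows, then sort by cluster label.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(series)
order = np.argsort(labels)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].imshow(series, aspect="auto", cmap="viridis")
axes[0].set_title("unordered")
axes[1].imshow(series[order], aspect="auto", cmap="viridis")
axes[1].set_title("ordered by cluster")
for ax in axes:
    ax.set_xlabel("time step")
axes[0].set_ylabel("time series index")
plt.tight_layout()
plt.show()
```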
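The mixed-initiative guidance entry describes agents that maintain rule-based classifiers over context vectorizations and learn from user feedback when to offer their guidance operation, e.g., suggesting a topic split when model coherence declines. The following is a deliberately simplified, hypothetical illustration of that idea; the class name, the scalar coherence-delta context, and the threshold update rule are assumptions, not the authors' method.

```python
# Hypothetical, simplified guidance agent: it learns from accept/reject feedback
# how large a coherence drop must be before suggesting a "split topic" operation.
from dataclasses import dataclass, field

@dataclass
class SplitTopicAgent:
    threshold: float = 0.0                       # suggest when the coherence delta falls at or below this value
    accepted: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

    def should_suggest(self, coherence_delta: float) -> bool:
        # Rule: offer the "split topic" operation when coherence declines enough.
        return coherence_delta <= self.threshold

    def observe_feedback(self, coherence_delta: float, accepted: bool) -> None:
        # Feedback adjusts the trigger threshold: start from the largest rejected
        # drop, then relax it so every accepted context still triggers.
        (self.accepted if accepted else self.rejected).append(coherence_delta)
        if self.rejected:
            self.threshold = min(self.rejected)
        if self.accepted:
            self.threshold = max(self.threshold, max(self.accepted))

agent = SplitTopicAgent()
agent.observe_feedback(coherence_delta=-0.05, accepted=False)  # small drop, suggestion rejected
agent.observe_feedback(coherence_delta=-0.20, accepted=True)   # larger drop, suggestion accepted
print(agent.should_suggest(-0.30))  # True: coherence drops sharply, suggest a split
print(agent.should_suggest(-0.02))  # False: drop too small, stay quiet
```

In the paper, the context is a vectorization over several quality metrics and the classifier is rule-based over those vectors; the scalar threshold here only mirrors the accept/reject learning loop in miniature.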