Browsing by Author "Sperrle, Fabian"
Now showing 1 - 3 of 3
Item: Learning Contextualized User Preferences for Co-Adaptive Guidance in Mixed-Initiative Topic Model Refinement
(The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Sperrle, Fabian; Schäfer, Hanna; Keim, Daniel; El-Assady, Mennatallah
Editors: Borgo, Rita; Marai, G. Elisabeta; Landesberger, Tatiana von

Mixed-initiative visual analytics systems support collaborative human-machine decision-making processes. However, many multi-objective optimization tasks, such as topic model refinement, are highly subjective and context-dependent. Hence, systems need to adapt their optimization suggestions throughout the interactive refinement process to provide efficient guidance. To tackle this challenge, we present a technique for learning context-dependent user preferences and demonstrate its applicability to topic model refinement. We deploy agents with distinct associated optimization strategies that compete for the user's acceptance of their suggestions. To decide when to provide guidance, each agent maintains an intelligible, rule-based classifier over context vectorizations that captures the development of quality metrics between distinct analysis states. By observing implicit and explicit user feedback, agents learn in which contexts to provide their specific guidance operation. An agent in topic model refinement might, for example, learn to react to declining model coherence by suggesting to split a topic. Our results confirm that the rules learned by agents capture contextual user preferences. Further, we show that the learned rules are transferable between similar datasets, avoiding common cold-start problems and enabling a continuous refinement of agents across corpora.

Item: A Survey of Human-Centered Evaluations in Human-Centered Machine Learning
(The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Sperrle, Fabian; El-Assady, Mennatallah; Guo, Grace; Borgo, Rita; Chau, Duen Horng; Endert, Alex; Keim, Daniel
Editors: Smit, Noeska; Vrotsou, Katerina; Wang, Bei

Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human-centered machine learning. We particularly focus on human-related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human-computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.

Item: A Typology of Guidance Tasks in Mixed-Initiative Visual Analytics Environments
(The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Pérez-Messina, Ignacio; Ceneda, Davide; El-Assady, Mennatallah; Miksch, Silvia; Sperrle, Fabian
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias

Guidance has been proposed as a conceptual framework to understand how mixed-initiative visual analytics approaches can actively support users as they solve analytical tasks. While user tasks have received a fair share of attention, it is still not completely clear how they could be supported with guidance and how such support could influence the progress of the task itself. We observe a research gap in understanding the effect of guidance on the analytical discourse, in particular on knowledge generation in mixed-initiative approaches. As a consequence, guidance in a visual analytics environment is usually indistinguishable from common visualization features, making user responses challenging to predict and measure. To address these issues, we take a system perspective and propose the notion of guidance tasks, presented as a typology closely aligned with established user task typologies. We derive the proposed typology directly from a model of guidance in the knowledge generation process and illustrate its implications for guidance design. Through three case studies, we show how our typology can be applied to analyze existing guidance systems. We argue that without a clear consideration of the system perspective, the analysis of tasks in mixed-initiative approaches is incomplete. Finally, by analyzing matchings of user and guidance tasks, we describe how guidance tasks can either help the user conclude the analysis or change its course.