Browsing by Author "Endert, Alex"
A Survey of Human-Centered Evaluations in Human-Centered Machine Learning
(The Eurographics Association and John Wiley & Sons Ltd., 2021)
Sperrle, Fabian; El-Assady, Mennatallah; Guo, Grace; Borgo, Rita; Chau, Duen Horng; Endert, Alex; Keim, Daniel
Editors: Smit, Noeska; Vrotsou, Katerina; Wang, Bei

Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human-centered machine learning. We particularly focus on human-related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human-computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.

A User-based Visual Analytics Workflow for Exploratory Model Analysis
(The Eurographics Association and John Wiley & Sons Ltd., 2019)
Cashman, Dylan; Humayoun, Shah Rukh; Heimerl, Florian; Park, Kendall; Das, Subhajit; Thompson, John; Saket, Bahador; Mosca, Abigail; Stasko, John; Endert, Alex; Gleicher, Michael; Chang, Remco
Editors: Gleicher, Michael; Viola, Ivan; Leitte, Heike

Many visual analytics systems allow users to interact with machine learning models towards the goals of data exploration and insight generation on a given dataset. However, in some situations, insights may be less important than the production of an accurate predictive model for future use.
In that case, users are more interested in generating diverse and robust predictive models, verifying their performance on holdout data, and selecting the most suitable model for their usage scenario. In this paper, we consider the concept of Exploratory Model Analysis (EMA), which is defined as the process of discovering and selecting relevant models that can be used to make predictions on a data source. We delineate the differences between EMA and the well-known concept of exploratory data analysis in terms of the desired outcome of the analytic process: insights into the data or a set of deployable models. The contributions of this work are a visual analytics system workflow for EMA, a user study, and two use cases validating the effectiveness of the workflow. We found that our system workflow enabled users to generate complex models, to assess them for various qualities, and to select the most relevant model for their task.