Browsing by Author "Chevalier, Fanny"
Now showing 1 - 2 of 2
Item: RiskFix: Supporting Expert Validation of Predictive Timeseries Models in High-Intensity Settings (The Eurographics Association, 2023)
Authors: Morgenshtern, Gabriela; Verma, Arnav; Tonekaboni, Sana; Greer, Robert; Bernard, Jürgen; Mazwi, Mjaye; Goldenberg, Anna; Chevalier, Fanny
Editors: Hoellt, Thomas; Aigner, Wolfgang; Wang, Bei
Many real-world machine learning (ML) workflows exist in longitudinal, interactive settings. This longitudinal nature often stems from the incremental accrual of data, e.g., in clinical settings, where observations about patients evolve over their care period. Additionally, experts may become a bottleneck in the workflow, as their limited availability, combined with their role as human oracles, often leads to a lack of ground-truth data. When ground-truth data is scarce, the validation of interactive ML workflows relies on domain experts: only they can assess the validity of a model prediction, especially in new situations that are only weakly covered by the available training data. Based on our experiences working with domain experts in a pediatric hospital's intensive care unit, we derive requirements for the design of support interfaces for validating interactive ML workflows in fast-paced, high-intensity environments. We present RiskFix, a software package optimized for the validation workflow of domain experts in such contexts. RiskFix is adapted to the cognitive resources and needs of domain experts when validating and giving feedback to the model.
RiskFix also supports data scientists in their model-building work, with appropriate data structuring for the re-calibration (and possible retraining) of ML models.

Item: A Survey of Tasks and Visualizations in Multiverse Analysis Reports (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022)
Authors: Hall, Brian D.; Liu, Yang; Jansen, Yvonne; Dragicevic, Pierre; Chevalier, Fanny; Kay, Matthew
Editors: Hauser, Helwig; Alliez, Pierre
Analysing data from experiments is a complex, multi-step process, often with multiple defensible choices available at each step. While analysts often report a single analysis without documenting how it was chosen, this can cause serious transparency and methodological issues. To make the sensitivity of analysis results to analytical choices transparent, some statisticians and methodologists advocate the use of 'multiverse analysis': reporting the full range of outcomes that result from all combinations of defensible analytic choices. Summarizing this combinatorial explosion of statistical results presents unique challenges; several approaches to visualizing the output of multiverse analyses have been proposed across a variety of fields (e.g. psychology, statistics, economics, neuroscience). In this article, we (1) introduce a consistent conceptual framework and terminology for multiverse analyses that can be applied across fields; (2) identify the tasks researchers try to accomplish when visualizing multiverse analyses; and (3) classify multiverse visualizations into 'archetypes', assessing how well each archetype supports each task. Our work sets a foundation for subsequent research on developing visualization tools and techniques to support multiverse analysis and its reporting.