
Browsing by Author "Ottley, Alvitta"

Now showing 1 - 13 of 13
    Benchmarking Visual Language Models on Standardized Visualization Literacy Tests
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Pandey, Saugat; Ottley, Alvitta; Aigner, Wolfgang; Andrienko, Natalia; Wang, Bei
The increasing integration of Visual Language Models (VLMs) into visualization systems demands a comprehensive understanding of their visual interpretation capabilities and constraints. While existing research has examined individual models, systematic comparisons of VLMs' visualization literacy remain unexplored. We bridge this gap through a rigorous, first-of-its-kind evaluation of four leading VLMs (GPT-4, Claude, Gemini, and Llama) using standardized assessments: the Visualization Literacy Assessment Test (VLAT) and the Critical Thinking Assessment for Literacy in Visualizations (CALVI). Our methodology uniquely combines randomized trials with structured prompting techniques to control for order effects and response variability, a critical consideration overlooked in many VLM evaluations. Our analysis reveals that while specific models demonstrate competence in basic chart interpretation (Claude achieving 67.9% accuracy on VLAT), all models exhibit substantial difficulties in identifying misleading visualization elements (maximum 30.0% accuracy on CALVI). We uncover distinct performance patterns: strong capabilities in interpreting conventional charts like line charts (76-96% accuracy) and detecting hierarchical structures (80-100% accuracy), but consistent difficulties with data-dense visualizations involving multiple encodings (bubble charts: 18.6-61.4%) and anomaly detection (25-30% accuracy). Significantly, we observe distinct uncertainty-management behavior across models, with Gemini displaying heightened caution (22.5% question omission) compared to others (7-8%). These findings provide crucial insights for the visualization community by establishing reliable VLM evaluation benchmarks, identifying areas where current models fall short, and highlighting the need for targeted improvements in VLM architectures for visualization tasks.
To promote reproducibility, encourage further research, and facilitate benchmarking of future VLMs, our complete evaluation framework, including code, prompts, and analysis scripts, is available at https://github.com/washuvis/VisLit-VLM-Eval.
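The accuracy and omission-rate figures reported above can be computed directly from trial records. Below is a minimal sketch assuming a hypothetical (model, response, correct_answer) record format in which None marks an omitted question; the names are illustrative and not drawn from the authors' evaluation framework.

```python
from collections import Counter

def score_trials(trials):
    """Score (model, response, correct_answer) trials.

    A response of None counts as an omitted question, mirroring the
    omission behavior discussed above. Accuracy is computed over
    attempted questions; omission rate over all questions posed.
    """
    answered = Counter()  # questions the model attempted
    correct = Counter()   # attempted questions answered correctly
    omitted = Counter()   # questions the model declined to answer
    total = Counter()     # all questions posed to the model

    for model, response, answer in trials:
        total[model] += 1
        if response is None:
            omitted[model] += 1
            continue
        answered[model] += 1
        if response == answer:
            correct[model] += 1

    return {
        model: {
            "accuracy": correct[model] / answered[model] if answered[model] else 0.0,
            "omission_rate": omitted[model] / total[model],
        }
        for model in total
    }

trials = [
    ("gemini", "B", "B"), ("gemini", None, "A"),
    ("claude", "C", "C"), ("claude", "A", "B"),
]
print(score_trials(trials))
```

Separating accuracy from omission rate, as sketched here, is what makes Gemini's cautious behavior visible rather than folding it into a lower accuracy score.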
  • Loading...
    Thumbnail Image
    Item
    EuroVis 2025 Short Papers: Frontmatter
(The Eurographics Association, 2025) El-Assady, Mennatallah; Ottley, Alvitta; Tominski, Christian
    Follow The Clicks: Learning and Anticipating Mouse Interactions During Exploratory Data Analysis
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Ottley, Alvitta; Garnett, Roman; Wan, Ran; Gleicher, Michael and Viola, Ivan and Leitte, Heike
The goal of visual analytics is to create a symbiosis between human and computer by leveraging their unique strengths. While this model has demonstrated immense success, we are yet to realize the full potential of such a human-computer partnership. In a perfect collaborative mixed-initiative system, the computer must possess skills for learning and anticipating the users' needs. Addressing this gap, we propose a framework for inferring attention from passive observations of the user's clicks, thereby allowing accurate predictions of future events. We demonstrate this technique with a crime map and found that users' clicks can appear in our prediction set 92%-97% of the time. Further analysis shows that we can achieve high prediction accuracy typically after three clicks. Altogether, we show that passive observations of interaction data can reveal valuable information that will allow the system to learn and anticipate future events.
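To illustrate the prediction-set idea, the following is a deliberately simplified proximity heuristic, not the paper's probabilistic attention model: rank candidate points by distance to the centroid of the observed clicks and take the top k as the prediction set. All names and the centroid heuristic are illustrative assumptions.

```python
import math

def prediction_set(points, clicks, k=50):
    """Return the k candidate points closest to the centroid of the
    observed clicks, as a stand-in prediction set for the next click.

    points and clicks are (x, y) tuples; a real model would weight
    candidates probabilistically rather than by raw distance.
    """
    cx = sum(x for x, _ in clicks) / len(clicks)
    cy = sum(y for _, y in clicks) / len(clicks)
    return sorted(points, key=lambda p: math.dist(p, (cx, cy)))[:k]

# A "hit" occurs when the user's next click falls inside the set;
# the paper's 92%-97% figure is the analogous hit rate over sessions.
candidates = [(0, 0), (1, 1), (10, 10)]
print(prediction_set(candidates, clicks=[(0, 1), (1, 0)], k=2))
```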
    A Grammar-Based Approach for Applying Visualization Taxonomies to Interaction Logs
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Gathani, Sneha; Monadjemi, Shayan; Ottley, Alvitta; Battle, Leilani; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
Researchers collect large amounts of user interaction data with the goal of mapping users' workflows and behaviors to their high-level motivations, intuitions, and goals. Although the visual analytics community has proposed numerous taxonomies to facilitate this mapping process, no formal methods exist for systematically applying these existing theories to user interaction logs. This paper seeks to bridge the gap between visualization task taxonomies and interaction log data by making the taxonomies more actionable for interaction log analysis. To achieve this, we leverage structural parallels between how people express themselves through interactions and language by reformulating existing theories as regular grammars. We represent interactions as terminals within a regular grammar, similar to the role of individual words in a language, and patterns of interactions or non-terminals as regular expressions over these terminals to capture common language patterns. To demonstrate our approach, we generate regular grammars for seven existing visualization taxonomies and develop code to apply them to three public interaction log datasets. In analyzing these regular grammars, we find that the taxonomies at the low level (i.e., terminals) show mixed results in expressing multiple interaction log datasets, and taxonomies at the high level (i.e., regular expressions) have limited expressiveness, due primarily to two challenges: inconsistencies in interaction log dataset granularity and structure, and under-expressiveness of certain terminals. Based on our findings, we suggest new research directions for the visualization community to augment existing taxonomies, develop new ones, and build better interaction log recording processes to facilitate the data-driven development of user behavior taxonomies.
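The grammar reformulation can be sketched concretely: if each logged interaction maps to a terminal symbol, a log becomes a string, and taxonomy-level patterns become regular expressions over it. The terminal alphabet and the "explore" pattern below are illustrative inventions, not drawn from the seven taxonomies studied in the paper.

```python
import re

# Hypothetical terminal alphabet: each logged interaction type maps to
# a single-character terminal, so an interaction log becomes a string
# that regular expressions (the non-terminals) can match against.
TERMINALS = {"filter": "f", "select": "s", "zoom": "z", "hover": "h"}

# A hypothetical high-level pattern in the spirit of a task taxonomy:
# an "explore" phase is one or more hovers or zooms ending in a select.
EXPLORE = re.compile(r"[hz]+s")

def encode(log):
    """Translate a list of interaction events into a terminal string."""
    return "".join(TERMINALS[event] for event in log)

log = ["hover", "zoom", "hover", "select", "filter"]
print(EXPLORE.findall(encode(log)))  # → ['hzhs']
```

The two challenges the paper identifies surface naturally in this framing: logs recorded at a different granularity yield strings the patterns cannot match, and interaction types missing from the alphabet cannot be encoded at all.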
    Guided By AI: Navigating Trust, Bias, and Data Exploration in AI-Guided Visual Analytics
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Ha, Sunwoo; Monadjemi, Shayan; Ottley, Alvitta; Aigner, Wolfgang; Archambault, Daniel; Bujack, Roxana
    The increasing integration of artificial intelligence (AI) in visual analytics (VA) tools raises vital questions about the behavior of users, their trust, and the potential of induced biases when provided with guidance during data exploration. We present an experiment where participants engaged in a visual data exploration task while receiving intelligent suggestions supplemented with four different transparency levels. We also modulated the difficulty of the task (easy or hard) to simulate a more tedious scenario for the analyst. Our results indicate that participants were more inclined to accept suggestions when completing a more difficult task despite the AI's lower suggestion accuracy. Moreover, the levels of transparency tested in this study did not significantly affect suggestion usage or subjective trust ratings of the participants. Additionally, we observed that participants who utilized suggestions throughout the task explored a greater quantity and diversity of data points. We discuss these findings and the implications of this research for improving the design and effectiveness of AI-guided VA tools.
    Human-Computer Collaboration for Visual Analytics: an Agent-based Framework
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Monadjemi, Shayan; Guo, Mengtian; Gotz, David; Garnett, Roman; Ottley, Alvitta; Bujack, Roxana; Archambault, Daniel; Schreck, Tobias
    The visual analytics community has long aimed to understand users better and assist them in their analytic endeavors. As a result, numerous conceptual models of visual analytics aim to formalize common workflows, techniques, and goals leveraged by analysts. While many of the existing approaches are rich in detail, they each are specific to a particular aspect of the visual analytic process. Furthermore, with an ever-expanding array of novel artificial intelligence techniques and advances in visual analytic settings, existing conceptual models may not provide enough expressivity to bridge the two fields. In this work, we propose an agent-based conceptual model for the visual analytic process by drawing parallels from the artificial intelligence literature. We present three examples from the visual analytics literature as case studies and examine them in detail using our framework. Our simple yet robust framework unifies the visual analytic pipeline to enable researchers and practitioners to reason about scenarios that are becoming increasingly prominent in the field, namely mixed-initiative, guided, and collaborative analysis. Furthermore, it will allow us to characterize analysts, visual analytic settings, and guidance from the lenses of human agents, environments, and artificial agents, respectively.
    Inferential Tasks as an Evaluation Technique for Visualization
    (The Eurographics Association, 2022) Suh, Ashley; Mosca, Ab; Robinson, Shannon; Pham, Quinn; Cashman, Dylan; Ottley, Alvitta; Chang, Remco; Agus, Marco; Aigner, Wolfgang; Hoellt, Thomas
Designing suitable tasks for visualization evaluation remains challenging. Traditional evaluation techniques commonly rely on 'low-level' or 'open-ended' tasks to assess the efficacy of a proposed visualization; however, nontrivial trade-offs exist between the two. Low-level tasks allow for robust quantitative evaluations, but are not indicative of the complex usage of a visualization. Open-ended tasks, while excellent for insight-based evaluations, are typically unstructured and require time-consuming interviews. Bridging this gap, we propose inferential tasks: a complementary task category based on inferential learning in psychology. Inferential tasks produce quantitative evaluation data in which users are prompted to form and validate their own findings with a visualization. We demonstrate the use of inferential tasks through a validation experiment on two well-known visualization tools.
    Investigating the Role of Locus of Control in Moderating Complex Analytic Workflows
    (The Eurographics Association, 2020) Crouser, R. Jordan; Ottley, Alvitta; Swanson, Kendra; Montoly, Ananda; Kerren, Andreas and Garth, Christoph and Marai, G. Elisabeta
    Throughout the last decade, researchers have shown that the effectiveness of a visualization tool depends on the experience, personality, and cognitive abilities of the user. This work has also demonstrated that these individual traits can have significant implications for tools that support reasoning and decision-making with data. However, most studies in this area to date have involved only short-duration tasks performed by lay users. This short paper presents a preliminary analysis of a series of exercises with 22 trained intelligence analysts that seeks to deepen our understanding of how individual differences modulate expert behavior in complex analysis tasks.
    Linking and Layout: Exploring the Integration of Text and Visualization in Storytelling
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Zhi, Qiyu; Ottley, Alvitta; Metoyer, Ronald; Gleicher, Michael and Viola, Ivan and Leitte, Heike
Modern web technologies are enabling authors to create various forms of text-visualization integration for storytelling. This integration may shape the stories' flow and thereby affect the reading experience. In this paper, we seek to understand two text-visualization integration forms: (i) different text and visualization spatial arrangements (layout), namely, vertical and slideshow; and (ii) interactive linking of text and visualization (linking). Here, linking refers to a bidirectional interaction mode that explicitly highlights the explanatory visualization element when selecting narrative text and vice versa. Through a crowdsourced study with 180 participants, we measured the effect of layout and linking on the degree to which users engage with the story (user engagement), their understanding of the story content (comprehension), and their ability to recall the story information (recall). We found that participants performed significantly better in comprehension tasks with the slideshow layout. Participant recall was better with the slideshow layout under conditions with linking versus no linking. We also found that linking significantly increased user engagement. Additionally, linking and the slideshow layout were preferred by the participants. We also explored user reading behaviors with different conditions.
    Mini-VLAT: A Short and Effective Measure of Visualization Literacy
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Pandey, Saugat; Ottley, Alvitta; Bujack, Roxana; Archambault, Daniel; Schreck, Tobias
The visualization community regards visualization literacy as a necessary skill. Yet, despite the recent increase in research into visualization literacy by the education and visualization communities, we lack practical and time-effective instruments for the widespread measurement of people's comprehension and interpretation of visual designs. We present Mini-VLAT, a brief but practical visualization literacy test. The Mini-VLAT is a 12-item short form of the 53-item Visualization Literacy Assessment Test (VLAT). The Mini-VLAT is reliable (coefficient omega = 0.72) and strongly correlates with the VLAT. Five visualization experts validated the Mini-VLAT items, yielding an average content validity ratio (CVR) of 0.6. We further validate Mini-VLAT by demonstrating a strong positive correlation between study participants' Mini-VLAT scores and their aptitude for learning an unfamiliar visualization using a Parallel Coordinate Plot test. Overall, the Mini-VLAT items showed a similar pattern of validity and reliability as the 53-item VLAT. The results show that Mini-VLAT is a psychometrically sound and practical short measure of visualization literacy.
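The content validity ratio cited above is Lawshe's standard index, CVR = (n_e − N/2) / (N/2), where n_e of N experts rate an item as essential. A minimal sketch (the abstract reports only the average CVR, not per-item ratings):

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2).

    Ranges from -1 (no expert rates the item essential) to +1
    (every expert does); 0 means exactly half do.
    """
    half = n_experts / 2
    return (n_essential - half) / half

# With five experts, an item rated essential by four of them scores
# (4 - 2.5) / 2.5 = 0.6, matching the average reported above.
print(content_validity_ratio(4, 5))  # → 0.6
```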
    The State of the Art in User‐Adaptive Visualizations
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Yanez, Fernando; Conati, Cristina; Ottley, Alvitta; Nobre, Carolina
Research shows that user traits can modulate the use of visualization systems and have a measurable influence on users' accuracy, speed, and attention when performing visual analysis. This highlights the importance of user‐adaptive visualizations that can adapt themselves to the characteristics and preferences of the user. However, there are very few such visualization systems, as creating them requires broad knowledge from various sub‐domains of the visualization community. A user‐adaptive system must consider which user traits it adapts to, its adaptation logic, and the types of interventions it supports. In this STAR, we survey a broad space of existing literature and consolidate it to structure the process of creating user‐adaptive visualizations into five components: capture Ⓐ input from the user and any relevant peripheral information; perform computational Ⓑ inference with this input to construct a Ⓒ user model; and employ Ⓓ adaptation logic to identify when and how to introduce Ⓔ interventions. Our novel taxonomy provides a road map for work in this area, describing the rich space of current approaches and highlighting open areas for future work.
    Survey on Individual Differences in Visualization
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Liu, Zhengliang; Crouser, R. Jordan; Ottley, Alvitta; Smit, Noeska and Oeltze-Jafra, Steffen and Wang, Bei
    Developments in data visualization research have enabled visualization systems to achieve great general usability and application across a variety of domains. These advancements have improved not only people's understanding of data, but also the general understanding of people themselves, and how they interact with visualization systems. In particular, researchers have gradually come to recognize the deficiency of having one-size-fits-all visualization interfaces, as well as the significance of individual differences in the use of data visualization systems. Unfortunately, the absence of comprehensive surveys of the existing literature impedes the development of this research. In this paper, we review the research perspectives, as well as the personality traits and cognitive abilities, visualizations, tasks, and measures investigated in the existing literature. We aim to provide a detailed summary of existing scholarship, produce evidence-based reviews, and spur future inquiry.
    Survey on the Analysis of User Interactions and Visualization Provenance
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Xu, Kai; Ottley, Alvitta; Walchshofer, Conny; Streit, Marc; Chang, Remco; Wenskovitch, John; Smit, Noeska and Oeltze-Jafra, Steffen and Wang, Bei
There is fast-growing literature on provenance-related research, covering aspects such as its theoretical framework, use cases, and techniques for capturing, visualizing, and analyzing provenance data. As a result, there is an increasing need to identify and taxonomize the existing scholarship. Such an organization of the research landscape will provide a complete picture of the current state of inquiry and identify knowledge gaps or possible avenues for further investigation. In this STAR, we aim to produce a comprehensive survey of work in the data visualization and visual analytics field that focuses on the analysis of user interaction and provenance data. We structure our survey around three primary questions: (1) WHY analyze provenance data, (2) WHAT provenance data to encode and how to encode it, and (3) HOW to analyze provenance data. A concluding discussion provides evidence-based guidelines and highlights concrete opportunities for future development in this emerging area.

Eurographics Association © 2013-2025  |  System hosted at Graz University of Technology      
DSpace software copyright © 2002-2025 LYRASIS
