EuroVisEducation2024
Browsing EuroVisEducation2024 by Subject "Empirical studies in HCI"
Now showing 1 - 2 of 2
Item
Impacts of Student LLM Usage on Creativity in Data Visualization Education
(The Eurographics Association, 2024) Ahmad, Mak; Ma, Kwan-Liu; Firat, Elif E.; Laramee, Robert S.; Andersen, Nicklas Sindelv
Large language models (LLMs) offer new possibilities for enhancing data visualization education, but the impacts on student experiences remain underexplored. Leveraging tenets of behaviorism, constructivism, and experiential learning theories, our mixed-methods study examines LLM integration strategies. We conducted two experiments with different groups of students. The first experiment involved 95 Masters of Business Analytics students who created data narratives based on the Titanic dataset either with or without LLM assistance. The second experiment involved 30 Masters of Information and Data Science students who suggested effective visual encodings for different scenarios with or without LLM assistance in a Viz of the Day activity. We collected quantitative data from surveys and project scores and qualitative data from open-ended responses. Our results show that LLMs can enhance students' ability to create clear, accurate, and effective data stories and visualizations, but they can also pose challenges, such as requiring careful prompt crafting, producing inconsistent or inaccurate outputs, and potentially reducing students' creativity and critical thinking. We discuss how our findings suggest a nuanced balance between LLM guidance and human creativity in data storytelling education and practice, and provide specific directions for future research on LLMs and data visualization.

Item
More Than Chatting: Conversational LLMs for Enhancing Data Visualization Competencies
(The Eurographics Association, 2024) Ahmad, Mak; Ma, Kwan-Liu; Firat, Elif E.; Laramee, Robert S.; Andersen, Nicklas Sindelv
This study investigates the integration of Large Language Models (LLMs) like ChatGPT and Claude into data visualization courses to enhance literacy among computer science students. Through a structured 3-week workshop involving 30 graduate students, we examine the effects of LLM-assisted conversational prompting on students' visualization skills and confidence. Our findings reveal that while engagement and confidence levels increased significantly, improvements in actual visualization proficiency were modest. Our study underscores the importance of prompt engineering skills in maximizing the educational value of LLMs and offers evidence-based insights for software engineering educators on effectively leveraging conversational AI. This research contributes to the ongoing discussion on incorporating AI tools in education, providing a foundation for future ethical and effective LLM integration strategies.