Browsing by Author "Isenberg, Tobias"
Now showing 1 - 5 of 5
Item Hybrid Touch/Tangible Spatial Selection in Augmented Reality (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Sereno, Mickael; Gosset, Stéphane; Besançon, Lonni; Isenberg, Tobias; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
We study tangible touch tablets combined with Augmented Reality Head-Mounted Displays (AR-HMDs) to perform spatial 3D selections. We are primarily interested in the exploration of unstructured 3D datasets such as point clouds or volumetric datasets. AR-HMDs immerse users by showing datasets stereoscopically, and tablets provide a set of 2D exploration tools. Because AR-HMDs merge the visualization, interaction, and the users' physical spaces, users can also use the tablets as tangible objects in their 3D space. Nonetheless, the tablets' touch displays provide their own visualization and interaction spaces, separated from those of the AR-HMD. This raises several research questions compared to traditional setups. In this paper, we theorize, discuss, and study different available mappings for manual spatial selections using a tangible tablet within an AR-HMD space. We then study the use of this tablet within a 3D AR environment, compared to its use with a 2D external screen.

Item LineageD: An Interactive Visual System for Plant Cell Lineage Assignments based on Correctable Machine Learning (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Hong, Jiayi; Trubuil, Alain; Isenberg, Tobias; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
We describe LineageD, a hybrid web-based system to predict, visualize, and interactively adjust plant embryo cell lineages. Currently, plant biologists explore the development of an embryo and its hierarchical cell lineage manually, based on a 3D dataset that represents the embryo status at one point in time. This human decision-making process, however, is time-consuming, tedious, and error-prone due to the lack of integrated graphical support for specifying the cell lineage. To fill this gap, we developed a new system to support the biologists in their tasks using an interactive combination of 3D visualization, abstract data visualization, and correctable machine learning to modify the proposed cell lineage. We use existing manually established cell lineages to obtain a neural network model. We then allow biologists to use this model to repeatedly predict assignments of a single cell division stage. After each hierarchy level prediction, we allow them to interactively adjust the machine-learning-based assignment, which we then integrate into the pool of verified assignments for further predictions. In addition to building the hierarchy this way in a bottom-up fashion, we also allow users to divide the whole embryo and create the hierarchy tree in a top-down fashion for a few steps, improving the ML-based assignments by reducing the potential for wrong predictions. We visualize the continuously updated embryo and its hierarchical development using both 3D spatial and abstract tree representations, together with information about the model's confidence and spatial properties. We conducted case study validations with five expert biologists to explore the utility of our approach and to assess the potential for LineageD to be used in their daily workflow. We found that the visualizations of both the 3D and the abstract representations help with decision making, and that the top-down approach to building the hierarchy tree can reduce assignment errors in real practice.
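The LineageD abstract outlines a human-in-the-loop, correctable-ML workflow: the model predicts one cell-division level, the biologist adjusts the proposal, and the corrections are folded back into the pool of verified assignments before the next prediction. The Python sketch below only illustrates that loop; the names (predict_level, review_fn, merge_fn) are hypothetical stand-ins, not LineageD's actual API.

# Minimal sketch of a correctable, level-by-level lineage-assignment loop.
# All names are hypothetical illustrations, not LineageD's implementation.
def build_lineage(model, cells, verified, num_levels, review_fn, merge_fn):
    """Predict one hierarchy level at a time and let an expert correct it."""
    for _level in range(num_levels):
        # The model proposes sibling assignments for the current division
        # stage, conditioned on everything the expert has already verified.
        proposed = model.predict_level(cells, verified)

        # The expert reviews the proposal (e.g., in linked 3D and tree views)
        # and returns corrected assignments.
        corrected = review_fn(proposed)

        # Corrections join the pool of verified assignments, so later
        # predictions build on trusted data rather than on model guesses.
        verified.update(corrected)

        # Merge sibling cells into their parent nodes before moving to the
        # next (earlier) division stage.
        cells = merge_fn(cells, corrected)
    return verified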
Item Reducing Affective Responses to Surgical Images and Videos Through Stylization (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Besançon, Lonni; Semmo, Amir; Biau, David; Frachet, Bruno; Pineau, Virginie; Sariali, El Hadi; Soubeyrand, Marc; Taouachi, Rabah; Isenberg, Tobias; Dragicevic, Pierre; Benes, Bedrich and Hauser, Helwig
We present the first empirical study on using colour manipulation and stylization to make surgery images/videos more palatable. While aversion to such material is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques to test them on both surgeons and lay people. While colour manipulation techniques and many artistic methods were found unusable by surgeons, edge-preserving image smoothing yielded good results both for preserving information (as judged by surgeons) and for reducing repulsiveness (as judged by lay people). We then conducted a second set of interviews with surgeons to assess whether these methods could also be used on videos and to derive good default parameters for information preservation. We provide extensive supplemental material at .
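As a rough illustration of the kind of edge-preserving smoothing this study found promising, the snippet below applies OpenCV's bilateral filter to a single frame. The choice of filter, file names, and parameter values are assumptions for illustration, not the method or defaults derived in the paper.

# Illustrative edge-preserving smoothing with OpenCV's bilateral filter.
# Filter choice and parameters are assumptions, not the paper's defaults.
import cv2

image = cv2.imread("surgical_frame.png")  # hypothetical input frame

# The bilateral filter smooths colour and texture (toning down graphic
# detail) while preserving strong edges such as instrument and tissue
# boundaries, which keeps the image informative.
smoothed = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)

cv2.imwrite("surgical_frame_smoothed.png", smoothed)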
Item The State of the Art of Spatial Interfaces for 3D Visualization (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Besançon, Lonni; Ynnerman, Anders; Keefe, Daniel F.; Yu, Lingyun; Isenberg, Tobias; Benes, Bedrich and Hauser, Helwig
We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand and the visualization task the techniques support on the other hand. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data, (2) 3D interaction techniques for dissemination, which are under‐explored yet show great promise for helping museums and science centers in their mission to share recent knowledge, and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.

Item Supporting Volumetric Data Visualization and Analysis by Combining Augmented Reality Visuals with Multi-Touch Input (The Eurographics Association, 2019)
Sereno, Mickael; Besançon, Lonni; Isenberg, Tobias; Madeiras Pereira, João and Raidou, Renata Georgia
We present our vision and steps toward implementing a collaborative 3D data analysis tool based on wearable Augmented Reality Head-Mounted Displays (AR-HMDs). We envision a hybrid environment that combines such AR-HMD devices with multi-touch devices to allow multiple collaborators to visualize and jointly discuss volumetric datasets. The multi-touch devices permit users to manipulate the datasets' states, either publicly or privately, while also providing 2D input for, e.g., drawing annotations. The headsets allow each user to visualize the dataset in physically correct perspective stereoscopy, either in the public or in their private space. The public space is viewed by all, with modifications shared in real time. The private space allows each user to investigate the same dataset with their own preferences, for instance, with a different clipping range. The user can later decide to merge their private space with the public one or to cancel the changes.
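The public/private workspace model described in this last abstract amounts to a simple state-branching pattern: each user takes a private copy of the shared visualization state and later either merges it back or discards it. The class below is a hypothetical Python illustration of that idea under assumed state fields (clipping range, transfer function), not the system's actual data model.

# Hypothetical sketch of the public/private workspace idea; the class and
# its field names are assumptions, not the actual system's implementation.
from dataclasses import dataclass, replace

@dataclass
class ViewState:
    clipping_min: float = 0.0          # example visualization parameters
    clipping_max: float = 1.0
    transfer_function: str = "default"

class SharedWorkspace:
    """Public state seen by all collaborators, plus per-user private copies."""

    def __init__(self):
        self.public = ViewState()
        self._private = {}             # user id -> private ViewState

    def open_private(self, user):
        # Start a private exploration from the current public state.
        self._private[user] = replace(self.public)
        return self._private[user]

    def merge(self, user):
        # Publish the user's private changes so everyone sees them.
        self.public = self._private.pop(user)

    def cancel(self, user):
        # Discard private changes and return to the shared public view.
        self._private.pop(user, None)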