Browsing by Author "Wang, Bei"
Now showing 1 - 4 of 4
Item: EuroVis 2020 CGF 39-3 STARs: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Smit, Noeska; Oeltze-Jafra, Steffen; Wang, Bei

Item: EuroVis 2023 Short Papers: Frontmatter (The Eurographics Association, 2023)
Authors: Hoellt, Thomas; Aigner, Wolfgang; Wang, Bei

Item: State of the Art in Time-Dependent Flow Topology: Interpreting Physical Meaningfulness Through Mathematical Properties (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Bujack, Roxana; Yan, Lin; Hotz, Ingrid; Garth, Christoph; Wang, Bei
Editors: Smit, Noeska; Oeltze-Jafra, Steffen; Wang, Bei
Abstract: We present a state-of-the-art report on time-dependent flow topology. We survey representative papers in visualization and provide a taxonomy of existing approaches that generalize flow topology from time-independent to time-dependent settings. The approaches are classified into four categories: tracking of steady topology, reference frame adaption, pathline classification or clustering, and generalization of critical points. Our unique contributions include introducing a set of desirable mathematical properties to interpret physical meaningfulness for time-dependent flow visualization, inferring the mathematical properties associated with selected research papers, and utilizing such properties for classification.
The five most important properties identified in the existing literature are coincidence with the steady case, induction of a partition of the domain, Lagrangian invariance, objectivity, and Galilean invariance.

Item: TopoAct: Visually Exploring the Shape of Activations in Deep Learning (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Authors: Rathore, Archit; Chalapathi, Nithin; Palande, Sourabh; Wang, Bei
Editors: Benes, Bedrich; Hauser, Helwig
Abstract: Deep neural networks such as GoogLeNet, ResNet, and BERT have achieved impressive performance in tasks such as image and text classification. To understand how such performance is achieved, we probe a trained deep neural network by studying neuron activations, i.e., combinations of neuron firings, at various layers of the network in response to a particular input. With a large number of inputs, we aim to obtain a global view of what neurons detect by studying their activations. In particular, we develop visualizations that show the shape of the activation space, the organizational principle behind neuron activations, and the relationships of these activations within a layer. Applying tools from topological data analysis, we present TopoAct, a visual exploration system to study topological summaries of activation vectors. We present exploration scenarios using TopoAct that provide valuable insights into the learned representations of neural networks. We expect TopoAct to give a topological perspective that enriches the current toolbox of neural network analysis and to provide a basis for network architecture diagnosis and data anomaly detection.