
Browsing by Author "Hauptmann, Hanna"

Now showing 1 - 2 of 2
    LMFingerprints: Visual Explanations of Language Model Embedding Spaces through Layerwise Contextualization Scores
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Sevastjanova, Rita; Kalouli, Aikaterini-Lida; Beck, Christin; Hauptmann, Hanna; El-Assady, Mennatallah; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
    Language models, such as BERT, construct multiple, contextualized embeddings for each word occurrence in a corpus. Understanding how the contextualization propagates through the model's layers is crucial for deciding which layers to use for a specific analysis task. Currently, most embedding spaces are explained by probing classifiers; however, some findings remain inconclusive. In this paper, we present LMFingerprints, a novel scoring-based technique for the explanation of contextualized word embeddings. We introduce two categories of scoring functions, which measure (1) the degree of contextualization, i.e., the layerwise changes in the embedding vectors, and (2) the type of contextualization, i.e., the captured context information. We integrate these scores into an interactive explanation workspace. By combining visual and verbal elements, we provide an overview of contextualization in six popular transformer-based language models. We evaluate hypotheses from the domain of computational linguistics, and our results not only confirm findings from related work but also reveal new aspects of the information captured in the embedding spaces. For instance, we show that while numbers are poorly contextualized, stopwords show an unexpectedly high contextualization in the models' upper layers, where their neighborhoods shift from tokens of similar functionality to tokens that contribute to the meaning of the surrounding sentences.
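
    The degree-of-contextualization idea described in the abstract can be illustrated with a small sketch; this is not the paper's implementation, and the choice of model ("bert-base-uncased") and of a 1 - cosine-similarity change score are assumptions made purely for illustration:

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Illustrative sketch: how much each token's embedding changes between
    # consecutive layers, i.e. a simple layerwise contextualization score.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
    model.eval()

    sentence = "The stopword the appears twice in this sentence."
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # hidden_states: tuple of (num_layers + 1) tensors, each (1, seq_len, hidden_dim)
    hidden_states = outputs.hidden_states
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

    # Sum, over all adjacent layer pairs, of 1 - cosine similarity per token:
    # large values indicate tokens that are strongly contextualized by the model.
    total_change = sum(
        1 - torch.nn.functional.cosine_similarity(lower[0], upper[0], dim=-1)
        for lower, upper in zip(hidden_states[:-1], hidden_states[1:])
    )
    for tok, score in zip(tokens, total_change.tolist()):
        print(f"{tok:>12s}  {score:.3f}")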
  • Item
    Why am I reading this? Explaining Personalized News Recommender Systems
    (The Eurographics Association, 2023) Arnórsson, Sverrir; Abeillon, Florian; Al-Hazwani, Ibrahim; Bernard, Jürgen; Hauptmann, Hanna; El-Assady, Mennatallah; Angelini, Marco; El-Assady, Mennatallah
    Social media and online platforms significantly impact what millions of people get exposed to daily, mainly through recommended content. Hence, recommendation processes have to benefit individuals and society. With this in mind, we present the visual workspace NewsRecXplain, with the goals of (1) explaining and raising awareness about recommender systems, (2) enabling individuals to control and customize news recommendations, and (3) empowering users to contextualize their news recommendations to escape from their filter bubbles. This visual workspace achieves these goals by allowing users to configure their own individualized recommender system, whose news recommendations can then be explained within the workspace by way of embeddings and statistics on content diversity.
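
    The "statistics on content diversity" mentioned in the abstract could, for example, be a slate-level diversity score; the concrete statistic used by NewsRecXplain is not specified here, so the function name and the mean-pairwise-cosine-distance formulation below are hypothetical:

    import numpy as np

    def diversity_score(embeddings: np.ndarray) -> float:
        """Mean pairwise cosine distance of a recommendation slate's item embeddings."""
        # Normalize rows so dot products equal cosine similarities.
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        sims = normed @ normed.T
        n = len(embeddings)
        # Average the off-diagonal similarities, then convert to a distance.
        mean_sim = (sims.sum() - np.trace(sims)) / (n * (n - 1))
        return float(1.0 - mean_sim)

    # Hypothetical slate of five recommended articles with random 384-dim embeddings.
    rng = np.random.default_rng(0)
    slate = rng.normal(size=(5, 384))
    print(f"slate diversity: {diversity_score(slate):.3f}")

    Higher values would indicate a more diverse slate; recommendations drawn from a narrow filter bubble would score closer to zero.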
