Eurographics Digital Library

This is the DSpace 7 platform of the Eurographics Digital Library.
  • The contents of the Eurographics Digital Library Archive are freely accessible. Only access to the full-text documents of the journal Computer Graphics Forum (joint property of Wiley and Eurographics) is restricted to Eurographics members, members of institutions that hold an Institutional Membership at Eurographics, or users of the TIB Hannover. On the item pages you will find so-called purchase links to the TIB Hannover.
  • As a Eurographics member, you can log in with your email address and password from https://services.eg.org. If you belong to an institutional member and are on a computer within a Eurographics-registered IP domain, you can proceed immediately.
  • From 2022 onward, all new publications by Eurographics are licensed under Creative Commons. Publishing with Eurographics is Plan-S compliant. Please visit Eurographics Licensing and Open Access Policy for more details.

Recent Submissions

Item
Passive Electric Field Sensing for Ubiquitous and Environmental Perception
(E-Publishing-Service der TU Darmstadt, 2022) Wilmsdorff, Julian von
Electric Field Sensing plays an important role in the research branches of Environmental Perception and Ubiquitous Computing. Environmental Perception aims to collect data about the surroundings, while Ubiquitous Computing has the objective of making computing available at any time; this includes the unobtrusive integration of sensors to perceive environmental influences. Electric Field Sensing, also referred to as Capacitive Sensing, is a frequently used sensing modality in these research fields, for example to detect the presence of persons or to locate touches and interactions on user interfaces. Electric Field Sensing has a number of advantages over other technologies: Capacitive Sensing does not require a direct line of sight to the object being sensed, and the sensing system can be compact in design. These advantages facilitate high integrability and allow the collection of data as required in Environmental Perception, as well as the invisible incorporation into a user's environment needed in Ubiquitous Computing. However, disadvantages are often attributed to Capacitive Sensing, such as a low sensing range of only a few centimeters and the need to generate electric fields, which wastes energy and raises further implementation problems. As shown in this thesis, these drawbacks only affect a subset of this sensing technology, namely the subcategory of active capacitive measurements. This thesis therefore focuses on the largely open area of Passive Electric Field Sensing in the context of Ubiquitous Computing and Environmental Perception, as active Capacitive Sensing is a research field that already receives a great deal of attention. The thesis is divided into three main research questions. First, I address the question of whether and how Passive Electric Field Sensing can be made available in a cost-effective and simple manner.
To this end, I present various techniques for reducing installation costs and simplifying the handling of these sensor systems. After the question of low-cost applicability, I examine for which applications passive electric field sensor technology is suitable in the first place, and present several fields of application in which Passive Electric Field Sensing data can be collected. Taking these possible fields of application into account, the work finally turns to optimizing Passive Electric Field Sensing for these use cases: different, already established signal processing methods are investigated for their applicability to Passive Electric Field sensor data, and, beyond these software optimizations, hardware optimizations for improved use of the technology are presented.
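A passive electric field sensor does not emit its own field; it observes perturbations of ambient fields caused, for example, by a person moving nearby. A minimal sketch of one way such data could be used for presence detection is a sliding-window variance test on the raw samples. The window size and threshold here are illustrative assumptions, not values from the thesis:

```python
from statistics import pvariance

def detect_presence(samples, window=8, threshold=4.0):
    """Flag windows whose signal variance exceeds a threshold.

    A passive electric-field sensor mostly sees a steady ambient field;
    a person moving nearby perturbs it, raising the local variance of
    the sampled signal. Window size and threshold are illustrative.
    """
    flags = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        flags.append(pvariance(chunk) > threshold)
    return flags
```

For instance, a quiet stretch of samples followed by a strongly fluctuating one would yield `[False, True]`. Real deployments would of course need calibration and drift compensation, which is part of what the thesis investigates.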
Item
Constrained Spectral Uplifting for HDR Environment Maps
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Tódová, L.; Wilkie, A.
Spectral representation of assets is an important precondition for achieving physical realism in rendering. However, defining assets by their spectral distribution is complicated and tedious. Therefore, it has become general practice to create RGB assets and convert them into their spectral counterparts prior to rendering. This process is called spectral uplifting. While a multitude of techniques focusing on reflectance uplifting exist, the current state of the art of uplifting emission for image‐based lighting consists of simply scaling reflectance uplifts. Although this is usable insofar as the obtained overall scene appearance is not unrealistic, the generated emission spectra are only metamers of the original illumination. This, in turn, can cause deviations from the expected appearance even if the rest of the scene corresponds to real‐world data. In a recent publication, we proposed a method capable of uplifting HDR environment maps based on spectral measurements of light sources similar to those present in the maps. To identify the illuminants, we employ an extensive set of emission measurements, and we combine the results with an existing reflectance uplifting method. In addition, we address the problem of environment map capture for the purposes of a spectral rendering pipeline, for which we propose a novel solution. We further extend this work with a detailed evaluation of the method, both in terms of colour error and performance.
Item
Erratum to “Rational Bézier Guarding”
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Item
THGS: Lifelike Talking Human Avatar Synthesis From Monocular Video Via 3D Gaussian Splatting
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Chen, Chuang; Yu, Lingyun; Yang, Quanwei; Zheng, Aihua; Xie, Hongtao
Despite the remarkable progress in 3D talking head generation, directly generating 3D talking human avatars still suffers from rigid facial expressions, distorted hand textures and out‐of‐sync lip movements. In this paper, we extend the speaker‐specific talking head generation task to talking human avatar synthesis and propose a novel pipeline, THGS, that animates lifelike Talking Human avatars using 3D Gaussian Splatting (3DGS). Given speech audio, expression and body poses as input, THGS effectively overcomes the limitations of 3DGS human reconstruction methods in capturing expressive dynamics from a short monocular video. Firstly, we introduce a simple yet effective approach to facial dynamics reconstruction, where subtle facial dynamics can be generated by linearly combining the static head model and expression blendshapes. Secondly, we propose a mechanism for lip‐synced mouth movement animation, building connections between speech audio and mouth Gaussian movements. Thirdly, we optimize these parameters on the fly, which aligns hand movements and expressions better with the video input. Experimental results demonstrate that THGS can achieve high‐fidelity 3D talking human avatar animation at 150+ fps on a web‐based rendering system, meeting the requirements of real‐time applications. Our project page is at .
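The facial-dynamics step the abstract describes, linearly combining a static head model with expression blendshapes, is the classic linear blendshape formula v = v0 + Σᵢ wᵢ dᵢ, where dᵢ are per-vertex offsets. A minimal sketch with illustrative names and flattened vertex arrays (the paper's actual Gaussian parameterisation is richer):

```python
def blend_head(static_vertices, blendshape_offsets, weights):
    """Linear blendshape model: v = v0 + sum_i w_i * d_i.

    static_vertices:    flattened rest-pose coordinates of the head model.
    blendshape_offsets: one offset list per expression blendshape.
    weights:            per-blendshape activation weights.
    """
    out = list(static_vertices)
    for w, offsets in zip(weights, blendshape_offsets):
        for i, d in enumerate(offsets):
            out[i] += w * d
    return out
```

Driving the weights from tracked or audio-predicted expression coefficients then yields continuously varying facial dynamics from a single static reconstruction.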
Item
The State of the Art in User‐Adaptive Visualizations
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Yanez, Fernando; Conati, Cristina; Ottley, Alvitta; Nobre, Carolina
Research shows that user traits can modulate the use of visualization systems and have a measurable influence on users' accuracy, speed, and attention when performing visual analysis. This highlights the importance of user‐adaptive visualizations that can adapt themselves to the characteristics and preferences of the user. However, there are very few such visualization systems, as creating them requires broad knowledge from various sub‐domains of the visualization community. A user‐adaptive system must consider which user traits it adapts to, its adaptation logic and the types of interventions it supports. In this STAR, we survey a broad space of existing literature and consolidate it to structure the process of creating user‐adaptive visualizations into five components: capture input Ⓐ from the user and any relevant peripheral information; perform computational analysis Ⓑ on this input to construct a user model Ⓒ; and employ adaptation Ⓓ logic to identify when and how to introduce interventions Ⓔ. Our novel taxonomy provides a road map for work in this area, describing the rich space of current approaches and highlighting open areas for future work.
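The capture → inference → user model → adaptation logic → intervention pipeline the STAR describes can be sketched as a small loop. All names here are illustrative assumptions mapped loosely onto the survey's stages, not an API from the paper:

```python
def adapt(observations, user_model, rules):
    """One pass of a user-adaptive visualization loop.

    observations: captured user input, e.g. {"trait": ..., "value": ...}.
    user_model:   dict of inferred traits, updated in place.
    rules:        adaptation logic, each {"when": predicate, "do": intervention}.
    Returns the list of interventions to apply to the visualization.
    """
    for obs in observations:               # capture input from the user
        user_model[obs["trait"]] = obs["value"]  # inference -> user model
    interventions = []
    for rule in rules:                     # adaptation logic
        if rule["when"](user_model):
            interventions.append(rule["do"])     # interventions
    return interventions
```

A rule such as "if the inferred expertise is novice, show tooltips" would then fire whenever the model reflects that trait; real systems replace the dictionary update with probabilistic inference over interaction logs.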