43-Issue 2
Browsing 43-Issue 2 by Issue Date
Now showing 1 - 20 of 54
Item TailorMe: Self-Supervised Learning of an Anatomically Constrained Volumetric Human Shape Model (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wenninger, Stephan; Kemper, Fabian; Schwanecke, Ulrich; Botsch, Mario; Bermano, Amit H.; Kalogerakis, Evangelos
Human shape spaces have been extensively studied, as they are a core element of human shape and pose inference tasks. Classic methods for creating a human shape model register a surface template mesh to a database of 3D scans and use dimensionality reduction techniques, such as Principal Component Analysis, to learn a compact representation. While these shape models enable global shape modifications by correlating anthropometric measurements with the learned subspace, they only provide limited localized shape control. We instead register a volumetric anatomical template, consisting of skeleton bones and soft tissue, to the surface scans of the CAESAR database. We further enlarge our training data to the full Cartesian product of all skeletons and all soft tissues using physically plausible volumetric deformation transfer. This data is then used to learn an anatomically constrained volumetric human shape model in a self-supervised fashion. The resulting TailorMe model enables shape sampling, localized shape manipulation, and fast inference from given surface scans.
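As context for the classic pipeline this abstract contrasts against, the sketch below shows how a PCA shape space is typically built from registered scans and then sampled. It is a minimal illustration assuming a hypothetical `scans` array of flattened vertex positions, not the TailorMe method itself.

```python
# Minimal sketch of the classic PCA shape-space baseline (not TailorMe).
# Assumes a hypothetical array `scans` of N registered meshes, each
# flattened to 3V vertex coordinates.
import numpy as np

def build_pca_shape_space(scans: np.ndarray, n_components: int = 10):
    """scans: (N, 3V) registered template vertices, one row per subject."""
    mean = scans.mean(axis=0)
    centered = scans - mean
    # SVD of the centered data matrix gives the principal shape directions.
    _, sigma, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                      # (k, 3V) shape basis
    stdev = sigma[:n_components] / np.sqrt(len(scans) - 1)
    return mean, basis, stdev

def sample_shape(mean, basis, stdev, rng=np.random.default_rng()):
    # Draw PCA coefficients from the learned Gaussian and decode a mesh.
    coeffs = rng.standard_normal(len(stdev)) * stdev
    return mean + coeffs @ basis
```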
Item HaLo-NeRF: Learning Geometry-Guided Semantics for Exploring Unconstrained Photo Collections (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Dudai, Chen; Alper, Morris; Bezalel, Hana; Hanocka, Rana; Lang, Itai; Averbuch-Elor, Hadar; Bermano, Amit H.; Kalogerakis, Evangelos
Internet image collections containing photos captured by crowds of photographers show promise for enabling digital exploration of large-scale tourist landmarks. However, prior works focus primarily on geometric reconstruction and visualization, neglecting the key role of language in providing a semantic interface for navigation and fine-grained understanding. In more constrained 3D domains, recent methods have leveraged modern vision-and-language models as a strong prior of 2D visual semantics. While these models display an excellent understanding of broad visual semantics, they struggle with unconstrained photo collections depicting such tourist landmarks, as they lack expert knowledge of the architectural domain and fail to exploit the geometric consistency of images capturing multiple views of such scenes. In this work, we present a localization system that connects neural representations of scenes depicting large-scale landmarks with text describing a semantic region within the scene, by harnessing the power of state-of-the-art vision-and-language models with adaptations for understanding landmark scene semantics. To bolster such models with fine-grained knowledge, we leverage large-scale Internet data containing images of similar landmarks along with weakly-related textual information. Our approach is built upon the premise that images physically grounded in space can provide a powerful supervision signal for localizing new concepts, whose semantics may be unlocked from Internet textual metadata with large language models. We use correspondences between views of scenes to bootstrap spatial understanding of these semantics, providing guidance for 3D-compatible segmentation that ultimately lifts to a volumetric scene representation. To evaluate our method, we present a new benchmark dataset containing large-scale scenes with ground-truth segmentations for multiple semantic concepts. Our results show that HaLo-NeRF can accurately localize a variety of semantic concepts related to architectural landmarks, surpassing the results of other 3D models as well as strong 2D segmentation baselines. Our code and data are publicly available at https://tau-vailab.github.io/HaLo-NeRF/.

Item TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Franke, Linus; Rückert, Darius; Fink, Laura; Stamminger, Marc; Bermano, Amit H.; Kalogerakis, Evangelos
Point-based radiance field rendering has demonstrated impressive results for novel view synthesis, offering a compelling blend of rendering quality and computational efficiency. However, even the latest approaches in this domain have shortcomings. 3D Gaussian Splatting [KKLD23] struggles when tasked with rendering highly detailed scenes due to blurring and cloudy artifacts. ADOP [RFS22], on the other hand, produces crisper images, but its neural reconstruction network reduces performance, suffers from temporal instability, and cannot effectively fill large gaps in the point cloud. In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP. The fundamental concept behind our novel technique involves rasterizing points into a screen-space image pyramid, with the selection of the pyramid layer determined by the projected point size. This approach allows rendering arbitrarily large points using a single trilinear write. A lightweight neural network is then used to reconstruct a hole-free image including detail beyond splat resolution. Importantly, our render pipeline is entirely differentiable, allowing for automatic optimization of both point sizes and positions. Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality while maintaining a real-time frame rate of 60 frames per second on readily available hardware. This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage. The project page is located at: https://lfranke.github.io/trips
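The layer-selection idea described above can be made concrete with a small sketch: a point whose projected size is s lands between pyramid levels floor(log2 s) and floor(log2 s)+1, and the write is blended linearly across the two levels (bilinear within each level, hence trilinear overall). Function and parameter names here are illustrative assumptions, not the authors' implementation, which also handles depth ordering and differentiability.

```python
# Sketch of trilinear point splatting into a screen-space image pyramid.
import numpy as np

def splat_point(pyramid, x, y, size, color):
    """pyramid: list of (H_l, W_l, C) float arrays, level 0 = full res."""
    level = np.log2(max(size, 1.0))             # fractional pyramid level
    l0 = int(np.clip(np.floor(level), 0, len(pyramid) - 2))
    w1 = np.clip(level - l0, 0.0, 1.0)          # blend toward coarser level
    for l, w in ((l0, 1.0 - w1), (l0 + 1, w1)):
        img = pyramid[l]
        scale = 2.0 ** l
        u, v = x / scale, y / scale             # position at this level
        iu, iv = int(u), int(v)
        fu, fv = u - iu, v - iv
        # Bilinear footprint at this level; linear blend across levels
        # makes the overall write trilinear.
        for du, dv, wb in ((0, 0, (1 - fu) * (1 - fv)), (1, 0, fu * (1 - fv)),
                           (0, 1, (1 - fu) * fv), (1, 1, fu * fv)):
            if 0 <= iv + dv < img.shape[0] and 0 <= iu + du < img.shape[1]:
                img[iv + dv, iu + du] += w * wb * np.asarray(color)
```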
Item Practical Method to Estimate Fabric Mechanics from Metadata (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Dominguez-Elvira, Henar; Nicás, Alicia; Cirio, Gabriel; Rodríguez, Alejandro; Garces, Elena; Bermano, Amit H.; Kalogerakis, Evangelos
Estimating fabric mechanical properties is crucial to create realistic digital twins. Existing methods typically require testing physical fabric samples with expensive devices or cumbersome capture setups. In this work, we propose a method to estimate fabric mechanics just from known manufacturer metadata such as the fabric family, the density, the composition, and the thickness. Further, to alleviate the need to know the fabric family (which might be ambiguous or unknown to non-specialists), we propose an end-to-end neural method that works with planar images of the textile as input. We evaluate our methods using extensive tests that include the industry-standard Cusick drape test and demonstrate that both of them produce drapes that strongly correlate with the ground-truth estimates provided by lab equipment. Our method is the first to offer such a simple capture process for mechanical properties, outperforming other methods that require testing the fabric in dedicated setups.

Item Computational Smocking through Fabric-Thread Interaction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhou, Ningfeng; Ren, Jing; Sorkine-Hornung, Olga; Bermano, Amit H.; Kalogerakis, Evangelos
We formalize Italian smocking, an intricate embroidery technique that gathers flat fabric into pleats along meandering lines of stitches, resulting in pleats that fold and gather where the stitching veers. In contrast to English smocking, characterized by colorful stitches decorating uniformly shaped pleats, and Canadian smocking, which uses localized knots to form voluminous pleats, Italian smocking permits the fabric to move freely along the stitched threads following curved paths, resulting in complex and unpredictable pleats with highly diverse, irregular structures, achieved simply by pulling on the threads. We introduce a novel method for digital previewing of Italian smocking results, given the thread stitching path as input. Our method uses a coarse-grained mass-spring system to simulate the interaction between the threads and the fabric. This configuration guides the fine-level fabric deformation through an adaptation of the state-of-the-art simulator C-IPC [LKJ21]. Our method models the general problem of fabric-thread interaction and can be readily adapted to preview Canadian smocking as well. We compare our results to baseline approaches and physical fabrications to demonstrate the accuracy of our method.
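A coarse mass-spring stage of the kind the smocking abstract describes can be sketched in a few lines. This is a generic explicit integrator (names and constants are assumptions, not the authors' solver) that accumulates Hooke spring forces and advances positions; pulling a thread can then be emulated by gradually shortening the rest lengths of its stitch segments.

```python
# Illustrative coarse mass-spring step for thread/fabric interaction.
# `verts` are node positions, `springs` are (i, j, rest_len) tuples.
import numpy as np

def mass_spring_step(verts, vel, springs, k=50.0, mass=0.01,
                     damping=0.98, dt=1e-3):
    force = np.zeros_like(verts)
    for i, j, rest in springs:
        d = verts[j] - verts[i]
        length = np.linalg.norm(d) + 1e-12
        f = k * (length - rest) * (d / length)   # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    # Damped symplectic Euler; explicit integration needs small steps.
    vel = damping * (vel + dt * force / mass)
    return verts + dt * vel, vel
```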
Item Cinematographic Camera Diffusion Model (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan; Bermano, Amit H.; Kalogerakis, Evangelos
Designing effective camera trajectories in virtual 3D environments is a challenging task even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables the specification of camera motions through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities in deciding how to place and move cameras with characters. Dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, ...), the results either lack variety or ease of control. In this paper, we propose a cinematographic camera diffusion model using a transformer-based architecture to handle temporality and exploit the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned on high-level textual descriptions. We extend this work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, augmenting the designers' degree of control. We demonstrate the strengths of this text-to-camera-motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control.

Item Real-time Neural Rendering of Dynamic Light Fields (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Coomans, Arno; Dominici, Edoardo Alberto; Döring, Christian; Mueller, Joerg H.; Hladky, Jozef; Steinberger, Markus; Bermano, Amit H.; Kalogerakis, Evangelos
Synthesising high-quality views of dynamic scenes via path tracing is prohibitively expensive. Although caching offline-quality global illumination in neural networks alleviates this issue, existing neural view synthesis methods are limited to mainly static scenes, have low inference performance, or do not integrate well with existing rendering paradigms. We propose a novel neural method that is able to capture a dynamic light field, renders at real-time frame rates at 1920×1080 resolution, and integrates seamlessly with Monte Carlo ray tracing frameworks. We demonstrate how spatial and temporal encodings and a novel surface-space encoding are each effective at capturing different kinds of spatio-temporal signals. Together with a compact fully-fused neural network and architectural improvements, we achieve a twenty-fold increase in network inference speed compared to related methods at equal or better quality. Our approach is suitable for providing offline-quality real-time rendering in a variety of scenarios, such as free-viewpoint video, interactive multi-view rendering, or streaming rendering. Finally, our work can be integrated into other rendering paradigms, e.g., providing a dynamic background for interactive scenarios where the foreground is rendered with traditional methods.

Item Wavelet Potentials: An Efficient Potential Recovery Technique for Pointwise Incompressible Fluids (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lyu, Luan; Ren, Xiaohua; Cao, Wei; Zhu, Jian; Wu, Enhua; Yang, Zhi-Xin; Bermano, Amit H.; Kalogerakis, Evangelos
We introduce an efficient technique for recovering the vector potential in wavelet space to simulate pointwise incompressible fluids. This technique ensures that fluid velocities remain divergence-free at any point within the fluid domain and preserves local volume during the simulation. Divergence-free wavelets are utilized to calculate the wavelet coefficients of the vector potential, resulting in a smooth vector potential with enhanced accuracy, even when the input velocities exhibit some degree of divergence. This enhanced accuracy eliminates the need for additional computational time to achieve a specific accuracy threshold, as fewer iterations are required for the pressure Poisson solver. Additionally, in 3D, since the wavelet transform is performed in place, only the memory for storing the vector potential is required. These two features make the method remarkably efficient for recovering the vector potential for fluid simulation. Furthermore, the method can handle various boundary conditions during the wavelet transform, making it adaptable for simulating fluids with Neumann and Dirichlet boundary conditions. Our approach is highly parallelizable and features a time complexity of O(n), allowing for seamless deployment on GPUs and yielding remarkable computational efficiency. Experiments demonstrate that, taking into account the time consumed by the pressure Poisson solver, the method achieves an approximate 2x speedup on GPUs compared to state-of-the-art vector potential recovery techniques while maintaining a precision level of 10^-6 when single-precision floats are employed. The source code of 'Wavelet Potentials' can be found at https://github.com/yours321dog/WaveletPotentials.
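The reason a vector potential is worth recovering at all: any velocity field of the form u = ∇×ψ is divergence-free by construction. The finite-difference sketch below illustrates the curl on a regular grid; the paper instead works in a divergence-free wavelet basis, which makes the field pointwise divergence-free, a property that simple central differences only approximate.

```python
# Velocity from a vector potential via the curl (illustrative sketch;
# the paper uses divergence-free wavelets rather than finite differences).
import numpy as np

def curl_3d(psi, h=1.0):
    """psi: (3, X, Y, Z) vector potential sampled on a regular grid."""
    dx = lambda f: np.gradient(f, h, axis=0)
    dy = lambda f: np.gradient(f, h, axis=1)
    dz = lambda f: np.gradient(f, h, axis=2)
    u = dy(psi[2]) - dz(psi[1])   # u = d(psi_z)/dy - d(psi_y)/dz
    v = dz(psi[0]) - dx(psi[2])   # v = d(psi_x)/dz - d(psi_z)/dx
    w = dx(psi[1]) - dy(psi[0])   # w = d(psi_y)/dx - d(psi_x)/dy
    return np.stack([u, v, w])    # analytically divergence-free field
```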
Item DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced 3-Point Trackers (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yang, Dongseok; Kang, Jiho; Ma, Lingni; Greer, Joseph; Ye, Yuting; Lee, Sung-Hee; Bermano, Amit H.; Kalogerakis, Evangelos
Full-body avatar presence is important for immersive social and environmental interactions in digital reality. However, current devices only provide three six-degree-of-freedom (6-DOF) poses, from the headset and two controllers (i.e., three-point trackers). Because it is a highly under-constrained problem, inferring full-body pose from these inputs is challenging, especially when supporting the full range of body proportions and use cases represented by the general population. In this paper, we propose a deep learning framework, DivaTrack, which outperforms existing methods when applied to diverse body sizes and activities. We augment the sparse three-point inputs with linear accelerations from Inertial Measurement Units (IMUs) to improve foot contact prediction. We then condition the otherwise ambiguous lower-body pose with the predictions of foot contact and upper-body pose in a two-stage model. We further stabilize the inferred full-body pose in a wide range of configurations by learning to blend predictions that are computed in two reference frames, each of which is designed for different types of motions. We demonstrate the effectiveness of our design on a large dataset that captures 22 subjects performing challenging locomotion for three-point tracking, including lunges, hula-hooping, and sitting. As shown in a live demo using the Meta VR headset and Xsens IMUs, our method runs in real time while accurately tracking a user's motion when they perform a diverse set of movements.

Item Estimating Cloth Simulation Parameters From Tag Information and Cusick Drape Test (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Ju, Eunjung; Kim, Kwang-yun; Yoon, Sungjin; Shim, Eungjune; Kang, Gyoo-Chul; Chang, Phil Sik; Choi, Myung Geol; Bermano, Amit H.; Kalogerakis, Evangelos
In recent years, the fashion apparel industry has been increasingly employing virtual simulations for the development of new products. The first step in virtual garment simulation involves identifying the optimal simulation parameters that accurately reproduce the drape properties of the actual fabric. Recent techniques advocate for a data-driven approach, estimating parameters from the outcomes of a Cusick drape test. Such methods deviate from the standard Cusick drape test by introducing high-cost tools, which reduces practicality. Our research presents a more practical model, utilizing 2D silhouette images from the ISO-standardized Cusick drape test. Notably, while past models have shown limitations in estimating stretching parameters, our novel approach leverages the fabric's tag information, including fabric type and fiber composition. Our proposed model functions as a cascaded system: first, it estimates stretching parameters using tag information; then, in the subsequent step, it considers the estimated stretching parameters alongside the fabric sample's Cusick drape test results to determine bending parameters. We validated our model against existing methods and applied it in practical scenarios, showing promising outcomes.
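The cascaded structure described in the Cusick-drape abstract reduces to a simple function composition, sketched below with hypothetical trained regressors `stretch_net` and `bend_net` (assumed names, not the authors' code).

```python
# Sketch of the two-stage cascade: stage one maps tag metadata to
# stretching parameters; stage two conditions the bending-parameter
# estimate on the stage-one output plus the Cusick drape silhouette.
def estimate_cloth_parameters(tag, silhouette, stretch_net, bend_net):
    stretch = stretch_net(tag["fabric_type"], tag["fiber_composition"])
    bend = bend_net(stretch, silhouette)   # conditioned on stage one
    return stretch, bend
```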
Item Stylized Face Sketch Extraction via Generative Prior with Limited Data (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yun, Kwan; Seo, Kwanggyoon; Seo, Chang Wook; Yoon, Soyeon; Kim, Seongcheol; Ji, Soohyun; Ashtari, Amirsaman; Noh, Junyong; Bermano, Amit H.; Kalogerakis, Evangelos
Facial sketches are both a concise way of showing the identity of a person and a means to express artistic intention. While a few techniques have recently emerged that allow sketches to be extracted in different styles, they typically rely on a large amount of data that is difficult to obtain. Here, we propose StyleSketch, a method for extracting high-resolution stylized sketches from a face image. Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with 16 pairs of face images and corresponding sketches. The sketch generator utilizes part-based losses with two-stage learning for fast convergence during training for high-quality sketch extraction. Through a set of comparisons, we show that StyleSketch outperforms existing state-of-the-art sketch extraction methods and few-shot image adaptation methods for the task of extracting high-resolution abstract face sketches. We further demonstrate the versatility of StyleSketch by extending its use to other domains and explore the possibility of semantic editing. The project page can be found at https://kwanyun.github.io/stylesketch_project.

Item Neural Denoising for Deep-Z Monte Carlo Renderings (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos
We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images. We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant rendering cost of path-traced deep-Z images, we believe our approach will pave the way for broader adoption of deep-Z workflows in future productions.

Item Navigating the Manifold of Translucent Appearance (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lanza, Dario; Masia, Belen; Jarabo, Adrian; Bermano, Amit H.; Kalogerakis, Evangelos
We present a perceptually-motivated manifold for translucent appearance, designed for intuitive editing of translucent materials by navigating through the manifold. Classic tools for editing translucent appearance, based on the use of sliders to tune a number of parameters, are challenging for non-expert users: these parameters have a highly non-linear effect on appearance, and exhibit complex interplay and similarity relations between them. Instead, we pose editing as a navigation task in a low-dimensional space of appearances, which abstracts the user from the underlying optical parameters. To achieve this, we build a low-dimensional continuous manifold of translucent appearance that correlates with how humans perceive this type of material. We first analyze the correlation of different distance metrics in image space with human perception. We select the best-performing metric to build a low-dimensional manifold, which can be used to navigate the space of translucent appearance. To evaluate the validity of our proposed manifold within its intended application scenario, we build an editing interface that leverages the manifold, and relies on image navigation plus a fine-tuning step to edit appearance. We compare our intuitive interface to a traditional, slider-based one in a user study, demonstrating its effectiveness and superior performance when editing translucent objects.
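One standard way to realize the manifold construction sketched in the translucency abstract, building a low-dimensional embedding from pairwise perceptual image distances, is classical multidimensional scaling. The snippet below is that textbook construction, not necessarily the paper's exact recipe.

```python
# Classical MDS: embed N rendered appearances into `dim` dimensions from
# an (N, N) matrix of pairwise perceptual distances.
import numpy as np

def classical_mds(dist: np.ndarray, dim: int = 2):
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j             # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(b)
    idx = np.argsort(evals)[::-1][:dim]        # keep largest eigenvalues
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))
```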
Item An Empirically Derived Adjustable Model for Particle Size Distributions in Advection Fog (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kolárová, Monika; Lachiver, Loïc; Wilkie, Alexander; Bermano, Amit H.; Kalogerakis, Evangelos
Realistically modelled atmospheric phenomena are a long-standing research topic in rendering. While significant progress has been made in modelling clear skies and clouds, fog has often been simplified as a medium that is homogeneous throughout, or as a simple density gradient. However, these approximations neglect the characteristic variations real advection fog shows throughout its vertical span, and do not provide the particle distribution data needed for accurate rendering. Based on data from the meteorological literature, we developed an analytical model that yields the distribution of particle size as a function of altitude within an advection fog layer. The thickness of the fog layer is an additional input parameter, so that fog layers of varying thickness can be realistically represented. We also demonstrate that, based on Mie scattering, one can easily integrate this model into a Monte Carlo renderer. Our model is the first ever non-trivial volumetric model for advection fog that is based on real measurement data and contains all the components needed for inclusion in a modern renderer. The model is provided as an open-source component, and can serve as a reference for rendering problems that involve fog layers.

Item Physically Based Real-Time Rendering of Atmospheres using Mie Theory (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Schneegans, Simon; Meyran, Tim; Ginkel, Ingo; Zachmann, Gabriel; Gerndt, Andreas; Bermano, Amit H.; Kalogerakis, Evangelos
Most real-time rendering models for atmospheric effects have been designed and optimized for Earth's atmosphere. Some authors have proposed approaches for rendering other atmospheres, but these methods still use approximations that are only valid on Earth. For instance, the iconic blue glow of Martian sunsets cannot be represented properly, as the complex interference effects of light scattered by dust particles cannot be captured by these approximations. In this paper, we present an approach for generalizing an existing model to make it capable of rendering extraterrestrial atmospheres. This is done by replacing the approximations with a physical model based on Mie theory. We use the particle-size distribution, the particle-density distribution, as well as the wavelength-dependent refractive index of atmospheric particles as input. To demonstrate the feasibility of this idea, we extend the model by Bruneton et al. [BN08] and implement it in CosmoScout VR, an open-source visualization of our Solar System. In a first step, we use Mie theory to precompute the scattering behaviour of a particle mixture. Then, multi-scattering is simulated, and finally the precomputation results are used for real-time rendering. We demonstrate that this not only improves the visualization of the Martian atmosphere, but also creates more realistic results for our own atmosphere.
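To connect the Mie-theory pipeline above to code: the analytic phase functions usually used as Earth-centric approximations (Rayleigh, Henyey-Greenstein) and a lookup into a Mie-precomputed table can be sketched as follows. The table precomputation itself (e.g., from a particle-size distribution) is outside this sketch, and all names are illustrative.

```python
# Analytic approximations vs. a tabulated Mie phase function lookup.
import numpy as np

def rayleigh_phase(cos_theta):
    # Normalized Rayleigh phase function for molecular scattering.
    return 3.0 / (16.0 * np.pi) * (1.0 + cos_theta ** 2)

def henyey_greenstein_phase(cos_theta, g=0.76):
    # One-parameter anisotropic approximation often used for aerosols.
    return (1.0 - g * g) / (4.0 * np.pi *
                            (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def tabulated_mie_phase(cos_theta, table):
    """table: (K,) phase values precomputed over uniformly spaced angles
    in [0, pi], e.g., via a Mie solver for a measured particle mixture."""
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return np.interp(theta, np.linspace(0.0, np.pi, len(table)), table)
```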
Item Surface-aware Mesh Texture Synthesis with Pre-trained 2D CNNs (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Bermano, Amit H.; Kalogerakis, Evangelos
Mesh texture synthesis is a key component in the automatic generation of 3D content. Existing learning-based methods have drawbacks: they either disregard the shape manifold during texture generation or require a large number of different views to mitigate occlusion-related inconsistencies. In this paper, we present a novel surface-aware approach for mesh texture synthesis that overcomes these drawbacks by leveraging the pre-trained weights of 2D Convolutional Neural Networks (CNNs) with the same architecture, but with convolutions designed for 3D meshes. Our proposed network keeps track of the oriented patches surrounding each texel, enabling seamless texture synthesis and retaining local similarity to classical 2D convolutions with square kernels. Our approach allows us to synthesize textures that account for the geometric content of mesh surfaces, eliminating discontinuities and achieving comparable quality to 2D image synthesis algorithms. We compare our approach with state-of-the-art methods where, through qualitative and quantitative evaluations, we demonstrate that our approach is more effective for a variety of meshes and styles, while also producing visually appealing and consistent textures on meshes.

Item Interactive Exploration of Vivid Material Iridescence based on Bragg Mirrors (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Fourneau, Gary; Pacanowski, Romain; Barla, Pascal; Bermano, Amit H.; Kalogerakis, Evangelos
Many animals, plants, and gems exhibit iridescent material appearance in nature. These effects are due to specific geometric structures at scales comparable to visible wavelengths, yielding so-called structural colors. The most vivid examples are due to photonic crystals, where the same structure is repeated in one, two, or three dimensions, augmenting the magnitude and complexity of interference effects. In this paper, we study the appearance of 1D photonic crystals (repetitive pairs of thin films), also called Bragg mirrors. Previous work has considered the effect of multiple thin films using the classical transfer-matrix approach, whose complexity grows as the number of repetitions increases. Our first contribution is to introduce a more efficient closed-form Bragg mirror reflectance formula [Yeh88] to the graphics community, as well as an approximation that lends itself to efficient spectral integration for RGB rendering. We then explore the appearance of stacks made of rough Bragg layers. Here our contribution is to show that they may lead to ballistic transmission, significantly speeding up position-free rendering and leading to an efficient single-reflection BRDF model.
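The brute-force baseline the Bragg-mirror paper improves on is the classical transfer-matrix product, one 2x2 characteristic matrix per layer; the sketch below evaluates it at normal incidence. Its cost grows linearly with the number of repetitions, which is exactly what the closed form of [Yeh88] avoids. Parameter names are assumptions.

```python
# Transfer-matrix reflectance of a Bragg mirror (N repeated film pairs)
# at normal incidence, using the standard characteristic-matrix method.
import numpy as np

def bragg_reflectance(wavelength, n1, d1, n2, d2, repeats,
                      n_in=1.0, n_out=1.5):
    m = np.eye(2, dtype=complex)
    for n, d in [(n1, d1), (n2, d2)] * repeats:
        delta = 2.0 * np.pi * n * d / wavelength   # phase thickness
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        m = m @ layer
    # Standard conversion of the stack matrix to a reflection coefficient.
    num = n_in * m[0, 0] + n_in * n_out * m[0, 1] - m[1, 0] - n_out * m[1, 1]
    den = n_in * m[0, 0] + n_in * n_out * m[0, 1] + m[1, 0] + n_out * m[1, 1]
    return abs(num / den) ** 2
```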
Item Learning to Stabilize Faces (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Bednarik, Jan; Wood, Erroll; Choutas, Vassilis; Bolkart, Timo; Wang, Daoye; Wu, Chenglei; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos
Nowadays, it is possible to scan faces and automatically register them with high quality. However, the resulting face meshes often need further processing: we need to stabilize them to remove unwanted head movement. Stabilization is important for tasks like game development or movie making which require facial expressions to be cleanly separated from rigid head motion. Since manual stabilization is labor-intensive, there have been attempts to automate it. However, previous methods remain impractical: they either still require some manual input, produce imprecise alignments, rely on dubious heuristics and slow optimization, or assume a temporally ordered input. Instead, we present a new learning-based approach that is simple and fully automatic. We treat stabilization as a regression problem: given two face meshes, our network directly predicts the rigid transform between them that brings their skulls into alignment. We generate synthetic training data using a 3D Morphable Model (3DMM), exploiting the fact that 3DMM parameters separate skull motion from facial skin motion. Through extensive experiments we show that our approach outperforms the state of the art both quantitatively and qualitatively on the tasks of stabilizing discrete sets of facial expressions as well as dynamic facial performances. Furthermore, we provide an ablation study detailing the design choices and best practices to help others adopt our approach for their own uses.
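The regression target in the stabilization paper, a rigid transform between two meshes, has a closed-form solution when point correspondences are known (Kabsch/Procrustes), sketched below. The paper's network predicts such a transform directly, so that the skulls, rather than the visible skin, are brought into alignment.

```python
# Kabsch/Procrustes: closed-form rigid alignment of corresponding points.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding points. Returns R (3x3), t (3,)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)              # cross-covariance
    u, _, vt = np.linalg.svd(h)
    s = np.eye(3)
    s[2, 2] = np.sign(np.linalg.det(vt.T @ u.T))   # avoid reflections
    r = vt.T @ s @ u.T
    t = mu_d - r @ mu_s
    return r, t
```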
Item EUROGRAPHICS 2024: CGF 43-2 Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Bermano, Amit H.; Kalogerakis, Evangelos

Item Real-Time Underwater Spectral Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Monzon, Nestor; Gutierrez, Diego; Akkaynak, Derya; Muñoz, Adolfo; Bermano, Amit H.; Kalogerakis, Evangelos
The light field in an underwater environment is characterized by complex multiple scattering interactions and wavelength-dependent attenuation, requiring significant computational resources for the simulation of underwater scenes. We present a novel approach that makes it possible to simulate multi-spectral underwater scenes, in a physically-based manner, in real time. Our key observation is the following: in the vertical direction, the steady decay in irradiance as a function of depth is characterized by the diffuse downwelling attenuation coefficient, which oceanographers routinely measure for different types of water. We rely on a database of such real-world measurements to obtain an analytical approximation to the Radiative Transfer Equation, allowing for real-time spectral rendering with results comparable to Monte Carlo ground-truth references, in a fraction of the time. We show results simulating underwater appearance for the different optical water types, including volumetric shadows and dynamic, spatially varying lighting near the water surface.
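The key physical observation in the underwater-rendering abstract can be stated in one line: downwelling irradiance decays approximately exponentially with depth, at a rate given by the measured diffuse downwelling attenuation coefficient Kd(lambda). A minimal spectral sketch (with illustrative, not measured, Kd values):

```python
# Exponential decay of downwelling irradiance with depth, governed by the
# diffuse downwelling attenuation coefficient Kd (per wavelength band).
import numpy as np

def downwelling_irradiance(e_surface, kd, depth):
    """e_surface: (L,) spectral irradiance just below the surface;
    kd: (L,) diffuse attenuation coefficients in 1/m; depth in meters."""
    return e_surface * np.exp(-kd * depth)

# Example: clear ocean water attenuates red much faster than blue, which
# is why scenes shift toward blue-green with depth (values illustrative).
e0 = np.array([1.0, 1.0, 1.0])       # blue, green, red channels
kd = np.array([0.02, 0.07, 0.35])    # illustrative Kd values, 1/m
print(downwelling_irradiance(e0, kd, depth=10.0))
```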