43-Issue 7
Browsing 43-Issue 7 by Issue Date
Now showing 1 - 20 of 57
Item
Curved Image Triangulation Based on Differentiable Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wang, Wanyi; Chen, Zhonggui; Fang, Lincong; Cao, Juan; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Image triangulation methods, which decompose an image into a series of triangles, are fundamental in artistic creation and image processing. This paper introduces a novel framework that integrates cubic Bézier curves into image triangulation, enabling the precise reconstruction of curved image features. The framework constructs a well-structured curved triangle mesh, effectively preventing overlaps between curves. A refined energy function, grounded in differentiable rendering, establishes a direct link between mesh geometry and rendering effects and guides the curved mesh generation. Additionally, we derive an explicit gradient formula with respect to the mesh parameters, enabling their adaptive and efficient optimization to fully leverage the capabilities of cubic Bézier curves. Through experiments and comparisons with state-of-the-art methods, our approach demonstrates a significant improvement in both numerical accuracy and visual quality.
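As a concrete reference for the curve primitive this entry builds on, here is a minimal NumPy sketch of cubic Bézier evaluation in Bernstein form; the function name and sample points are illustrative, not taken from the paper.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter(s) t in [0, 1] (Bernstein form)."""
    t = np.asarray(t)[..., None]          # allow a batch of parameters
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

# A curved triangle edge from (0,0) to (1,0), bent upward by its control points.
pts = cubic_bezier(np.array([0.0, 0.0]), np.array([0.3, 0.4]),
                   np.array([0.7, 0.4]), np.array([1.0, 0.0]),
                   np.linspace(0.0, 1.0, 5))
print(pts)  # 5 sample points along the curved edge
```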
Item
Exploring Fast and Flexible Zero-Shot Low-Light Image/Video Enhancement (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Han, Xianjun; Bao, Taoli; Yang, Hongyu; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Low-light image/video enhancement is a challenging task when images or video are captured under harsh lighting conditions. Existing methods mostly formulate this task as an image-to-image conversion problem via supervised or unsupervised learning. However, such conversion methods require an extremely large amount of training data, whether paired or unpaired. In addition, these methods are tied to their specific training data, making it difficult for the trained model to enhance other types of images or video. In this paper, we explore a novel, fast, and flexible zero-shot framework for low-light image and video enhancement. Without relying on prior training or on relationships among neighboring frames, we estimate the illumination of the input image/frame with a well-designed network. The proposed zero-shot architecture consists of illumination estimation and residual correction modules. The network is very concise and requires no paired or unpaired data during training, which allows low-light enhancement to be performed in a few simple iterations. Despite its simplicity, we show that the method is fast and generalizes well to diverse lighting conditions. Extensive experiments on various images and videos qualitatively and quantitatively demonstrate the advantages of our method over state-of-the-art methods.

Item
GSEditPro: 3D Gaussian Splatting Editing with Attention-based Progressive Localization (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Sun, Yanhao; Tian, Runze; Han, Xiao; Liu, Xinyao; Zhang, Yan; Xu, Kai; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
With the emergence of large-scale Text-to-Image (T2I) models and implicit 3D representations such as Neural Radiance Fields (NeRF), many text-driven generative editing methods based on NeRF have appeared. However, the implicit encoding of geometric and textural information poses challenges in accurately locating and controlling objects during editing. Recently, significant advances have been made in editing methods for 3D Gaussian Splatting, a real-time rendering technology based on explicit representation. However, these methods still suffer from inaccurate localization and limited control during editing. To tackle these challenges, we propose GSEditPro, a novel 3D scene editing framework that allows users to perform various creative and precise edits using only text prompts. Leveraging the explicit nature of the 3D Gaussian distribution, we introduce an attention-based progressive localization module that adds semantic labels to each Gaussian during rendering. This enables precise localization of editing areas by classifying Gaussians according to their relevance to the editing prompts, derived from the cross-attention layers of the T2I model. Furthermore, we present an editing optimization method based on 3D Gaussian Splatting that obtains stable and refined editing results through the guidance of Score Distillation Sampling and pseudo ground truth. We demonstrate the efficacy of our method through extensive experiments.

Item
SOD-diffusion: Salient Object Detection via Diffusion-Based Image Generators (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Shuo; Huang, Jiaming; Chen, Shizhe; Wu, Yan; Hu, Tao; Liu, Jing; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Salient Object Detection (SOD) is a challenging task that aims to precisely identify and segment salient objects. However, existing SOD methods still struggle to make accurate predictions near object edges and often lack end-to-end training capabilities. To alleviate these problems, we propose SOD-diffusion, a novel framework that formulates salient object detection as a denoising diffusion process from noisy masks to object masks. Specifically, object masks diffuse from ground-truth masks to a random distribution in latent space, and the model learns to reverse this noising process to reconstruct object masks. To enhance the denoising learning process, we design an attention feature interaction module (AFIM) and a specific fine-tuning protocol to integrate conditional semantic features from the input image with the diffusion noise embedding. Extensive experiments on five widely used SOD benchmark datasets demonstrate that SOD-diffusion achieves favorable performance compared to previous well-established methods. Furthermore, leveraging the strong generalization capability of SOD-diffusion, we applied it to publicly available images, generating high-quality masks that serve as an additional SOD benchmark test set.
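As background for the diffusion formulation in SOD-diffusion above, the sketch below shows the standard DDPM forward-noising step q(x_t | x_0) that such mask-diffusion methods build on. The linear beta schedule, step count, and random mask are illustrative defaults, not the paper's settings.

```python
import numpy as np

def q_sample(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise, noise

# Linear beta schedule with T = 1000 steps (a common default, not the paper's).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
mask = rng.random((64, 64)) > 0.5                    # stand-in for a binary object mask
x0 = mask.astype(np.float32) * 2 - 1                 # map {0, 1} to {-1, 1}
x_t, eps = q_sample(x0, t=500, alpha_bar=alpha_bar, rng=rng)
```

A denoising network would then be trained to predict eps (or x0) from x_t; at inference, iterating the learned reverse step turns pure noise into an object mask.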
Item
CoupNeRF: Property-aware Neural Radiance Fields for Multi-Material Coupled Scenario Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Jin; Gao, Yang; Song, Wenfeng; Li, Yacong; Li, Shuai; Hao, Aimin; Qin, Hong; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Neural Radiance Fields (NeRFs) have achieved significant recognition for their proficiency in scene reconstruction and rendering, using neural networks to depict intricate volumetric environments. Despite considerable research dedicated to reconstructing physical scenes, few works succeed in challenging scenarios involving dynamic, multi-material objects. To address this, we introduce CoupNeRF, an efficient neural network architecture that is aware of multiple material properties. The architecture combines physically grounded continuum mechanics with NeRF, facilitating the identification of motion systems across a wide range of physical coupling scenarios. We first reconstruct objects of specific materials within 3D physical fields to learn material parameters. We then develop a method to model neighbouring particles, enhancing the learning process specifically in regions where material transitions occur. The effectiveness of CoupNeRF is demonstrated through extensive experiments, showcasing its proficiency in accurately coupling and identifying the behavior of complex physical scenes spanning multiple physics domains.

Item
G-Style: Stylized Gaussian Splatting (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as, compared to approaches based on Neural Radiance Fields, it provides fast scene rendering and user control over the scene. Recent preprints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during stylization, current solutions fall short of producing satisfactory results. Our algorithm addresses these limitations in three steps: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes. Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image while maintaining, as much as possible, the integrity of the original scene content. During stylization, following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.
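G-Style's pre-processing step removes Gaussians with large projection areas or highly elongated shapes. The toy sketch below shows one way such scale-based culling could look; the axis-ratio criterion and both thresholds are hypothetical stand-ins, not the authors' actual tests.

```python
import numpy as np

def prune_gaussians(scales, max_ratio=10.0, max_extent=0.05):
    """Keep Gaussians whose axis ratio and largest axis stay below thresholds.

    scales: (N, 3) per-axis extents of each 3D Gaussian. Both thresholds are
    hypothetical placeholders, not values from the paper.
    """
    s_max = scales.max(axis=1)
    s_min = np.maximum(scales.min(axis=1), 1e-8)   # avoid division by zero
    keep = (s_max / s_min < max_ratio) & (s_max < max_extent)
    return keep

scales = np.abs(np.random.default_rng(1).normal(0.01, 0.02, size=(1000, 3)))
keep = prune_gaussians(scales)
print(f"kept {keep.sum()} of {len(keep)} Gaussians")
```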
Item
Spatially and Temporally Optimized Audio-Driven Talking Face Generation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Dong, Biao; Ma, Bo-Yao; Zhang, Lei; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Audio-driven talking face generation is essentially a cross-modal mapping from audio to video frames. The main challenge lies in the intricate one-to-many mapping, which affects lip sync accuracy. Moreover, the loss of facial details during image reconstruction often results in visual artifacts in the generated video. To overcome these challenges, this paper proposes to enhance the quality of generated talking faces with a new spatio-temporal consistency. Specifically, temporal consistency is achieved through the consecutive frames of each phoneme, which form temporal modules that exhibit similar lip-appearance changes. This allows adaptive adjustment of lip movement for accurate sync. Spatial consistency pertains to the uniform distribution of textures within local regions; these form spatial modules that regulate the texture distribution in the generator, yielding fine details in the reconstructed facial images. Extensive experiments show that our method generates more natural talking faces than previous state-of-the-art methods, with both more accurate lip sync and more realistic facial details.

Item
Adversarial Unsupervised Domain Adaptation for 3D Semantic Segmentation with 2D Image Fusion of Dense Depth (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Xindan; Li, Ying; Sheng, Huankun; Zhang, Xinnian; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Unsupervised domain adaptation (UDA) is increasingly used for 3D point cloud semantic segmentation tasks due to its ability to address the issue of missing labels in new domains. However, most existing unsupervised domain adaptation methods focus only on uni-modal data and are rarely applied to multi-modal data. Therefore, we propose a cross-modal UDA method for 3D semantic segmentation on multi-modal datasets containing 3D point clouds and 2D images. Specifically, we first propose a Dual discriminator-based Domain Adaptation (Dd-bDA) module to enhance adaptability across domains. Second, given that the robustness of depth information to domain shifts can provide more detail for semantic segmentation, we further employ a Dense depth Feature Fusion (DdFF) module to extract image features with rich depth cues. We evaluate our model in four unsupervised domain adaptation scenarios: dataset-to-dataset (A2D2→SemanticKITTI), day-to-night, country-to-country (USA→Singapore), and synthetic-to-real (VirtualKITTI→SemanticKITTI). In all settings, the experimental results show significant improvements over state-of-the-art models.

Item
LightUrban: Similarity Based Fine-grained Instancing for Lightweighting Complex Urban Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lu, Zi Ang; Xiong, Wei Dan; Ren, Peng; Jia, Jin Yuan; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Large-scale urban point clouds play a vital role in various applications, yet rendering and transmitting such data remains challenging due to its large volume, complicated structure, and significant redundancy. In this paper, we present LightUrban, the first point cloud instancing framework for efficient rendering and transmission of fine-grained, complex urban scenes. We first introduce a segmentation method to organize the point clouds into individual buildings and vegetation instances from coarse to fine. Next, we propose an unsupervised similarity detection approach to accurately group instances with similar shapes. Furthermore, a fast pose and size estimation component is applied to compute the transformations between the representative instance and the corresponding similar instances in each group. By replacing individual instances with their group's representative instance, the data volume and redundancy can be dramatically reduced. Experimental results on large-scale urban scenes demonstrate the effectiveness of our algorithm. In summary, our method not only structures urban point clouds but also significantly reduces data volume and redundancy, filling the gap in lightweighting urban landscapes through instancing.

Item
CrystalNet: Texture-Aware Neural Refraction Baking for Global Illumination (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Ziyang; Simo-Serra, Edgar; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Neural rendering bakes global illumination and other computationally costly effects into the weights of a neural network, allowing photorealistic images to be synthesized efficiently without relying on path tracing. In neural rendering approaches, G-buffers obtained from rasterization provide scene information such as positions, normals, and textures to the neural network, achieving accurate and stable rendering quality in real time. However, due to the use of G-buffers, existing methods struggle to accurately render transparency and refraction effects, as G-buffers do not capture ray information from multiple light bounces. This limitation results in blurriness, distortion, and loss of detail in rendered images that contain transparency and refraction, and is particularly notable in scenes with refracted objects that have high-frequency textures. In this work, we propose a neural network architecture that encodes critical rendering information, including texture coordinates from refracted rays, and enables the reconstruction of high-frequency textures in areas with refraction. Our approach achieves accurate refraction rendering in challenging scenes with a diversity of overlapping transparent objects. Experimental results demonstrate that our method can interactively render high-quality refraction effects with global illumination, unlike existing neural rendering approaches. Our code can be found at https://github.com/ziyangz5/CrystalNet
Item
FastFlow: GPU Acceleration of Flow and Depression Routing for Landscape Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jain, Aryamaan; Kerbl, Bernhard; Gain, James; Finley, Brandon; Cordonnier, Guillaume; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Terrain analysis plays an important role in computer graphics, hydrology, and geomorphology. In particular, analyzing the path of material flow over a terrain, with consideration of local depressions, is a precursor to many further tasks in erosion, river formation, and plant ecosystem simulation. For example, fluvial erosion simulation used in terrain modeling computes water discharge to repeatedly locate erosion channels for soil removal and transport. Despite its significance, traditional methods face performance constraints that limit their broader applicability. In this paper, we propose a novel GPU flow routing algorithm that computes the water discharge in O(log n) iterations for a terrain with n vertices (assuming n processors). We also provide a depression routing algorithm to route water out of local minima formed by depressions in the terrain, which converges in O(log² n) iterations. Our implementation of these algorithms yields a 5× speedup for flow routing and a 34× to 52× speedup for depression routing compared to previous work on a 1024² terrain, enabling interactive control of terrain simulation.
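Logarithmic iteration counts of the kind FastFlow reports typically rest on parallel pointer jumping over the downstream-receiver graph. The toy NumPy sketch below shows that primitive on a 1D terrain: each doubling step halves the remaining path length, so every cell finds its basin outlet in O(log n) sweeps. This illustrates the primitive only, under my own assumptions; it is not the authors' discharge or depression-routing algorithm.

```python
import numpy as np

def terminal_receivers(recv):
    """Resolve each cell's final receiver (its basin outlet) by pointer jumping.

    recv[i] is the index of the cell that cell i drains into; outlets point to
    themselves. Converges in O(log n) doubling steps, each fully parallel.
    """
    recv = recv.copy()
    while True:
        nxt = recv[recv]              # jump two hops at once
        if np.array_equal(nxt, recv):
            return recv
        recv = nxt

# Toy 1D terrain: cells 0..5 drain leftward toward the outlet at cell 0.
recv = np.array([0, 0, 1, 2, 3, 4])
print(terminal_receivers(recv))       # -> [0 0 0 0 0 0]
```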
Item
Anisotropic Specular Image-Based Lighting Based on BRDF Major Axis Sampling (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Cocco, Giovanni; Zanni, Cédric; Chermain, Xavier; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Anisotropic specular appearances are ubiquitous in the environment: brushed stainless steel pans, kettles, elevator walls, fur, or scratched plastics. Real-time rendering of these materials with image-based lighting is challenging due to the complex shape of the bidirectional reflectance distribution function (BRDF). We propose an anisotropic specular image-based lighting method that can serve as a drop-in replacement for the standard bent normal technique [Rev11]. Our method yields more realistic results at a 50% increase in computation time over the previous technique, using the same high dynamic range (HDR) preintegrated environment image. We use several environment samples positioned along the major axis of the specular microfacet BRDF. We derive an analytic formula to determine the two closest and two farthest points from the reflected direction on an approximation of the BRDF confidence region boundary. The two farthest points define the BRDF major axis, while the two closest points are used to approximate the BRDF width. The environment level of detail is derived from the BRDF width and the distance between the samples. We extensively compare our method with the bent normal technique and with the ground truth using the GGX specular BRDF.
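For reference, the anisotropic GGX normal distribution behind the elongated BRDF lobe discussed above has the standard form sketched below, with roughness ax along the tangent and ay along the bitangent; the example vectors are arbitrary and not from the paper.

```python
import numpy as np

def ggx_anisotropic_ndf(h, t, b, n, ax, ay):
    """Anisotropic GGX normal distribution D(h) with roughness (ax, ay).

    h: unit half vector; t, b, n: orthonormal tangent, bitangent, and normal.
    """
    denom = (np.dot(h, t) / ax) ** 2 + (np.dot(h, b) / ay) ** 2 + np.dot(h, n) ** 2
    return 1.0 / (np.pi * ax * ay * denom ** 2)

t = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
h = np.array([0.1, 0.0, 0.995])
h /= np.linalg.norm(h)
# Brushed-metal-like roughness: much rougher along the tangent than the bitangent,
# which is what stretches the lobe into the long "major axis" the paper samples.
print(ggx_anisotropic_ndf(h, t, b, n, ax=0.3, ay=0.05))
```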
Item
Multiscale Spectral Manifold Wavelet Regularizer for Unsupervised Deep Functional Maps (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wang, Haibo; Meng, Jing; Li, Qinsong; Hu, Ling; Guo, Yueyu; Liu, Xinru; Yang, Xiaoxia; Liu, Shengjun; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
In deep functional maps, the regularizer used to compute the functional map is especially crucial for ensuring the global consistency of the computed pointwise map. Because regularizers integrated into deep learning must be differentiable, it is not trivial to incorporate informative axiomatic structural constraints, such as the orientation-preserving term, into the deep functional map. Commonly used regularizers include the Laplacian-commutativity term and the resolvent Laplacian commutativity term, but these are limited to single-scale analysis when capturing geometric information. To this end, we propose a novel and theoretically well-justified regularizer that makes the functional map commute with the multiscale spectral manifold wavelet operator. This regularizer enhances the isometric constraints on the functional map and endows it with better structural properties through multiscale analysis. Furthermore, we design an unsupervised deep functional map with this regularizer in a fully differentiable way. Quantitative and qualitative comparisons with several existing techniques on (near-)isometric and non-isometric datasets show our method's superior accuracy and generalization capabilities. Additionally, we show that our regularizer can easily be inserted into other functional map methods to improve their accuracy.
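Commutativity regularizers of the kind discussed above share one generic form: penalize a functional map C that fails to commute with an intrinsic operator expressed in each shape's spectral basis. A minimal sketch follows; it uses diagonal stand-in operators (as with the Laplacian in its own eigenbasis), whereas the paper's regularizer instead uses multiscale spectral manifold wavelet operators.

```python
import numpy as np

def commutativity_loss(C, W_src, W_tgt):
    """|| C W_src - W_tgt C ||_F^2 for a k x k functional map C and
    per-shape operators W_src, W_tgt in their spectral bases."""
    R = C @ W_src - W_tgt @ C
    return np.sum(R ** 2)

k = 30                                    # number of spectral basis functions
rng = np.random.default_rng(0)
# Diagonal stand-ins: the Laplacian is diagonal in its own eigenbasis.
W_src = np.diag(np.sort(rng.random(k)))
W_tgt = np.diag(np.sort(rng.random(k)))
C = np.eye(k)                             # identity map as a trivial example
print(commutativity_loss(C, W_src, W_tgt))
```

In a deep functional maps pipeline, this scalar would be added (with a weight) to the loss driving the network that regresses C, which is why differentiability of the regularizer matters.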
Item
TempDiff: Enhancing Temporal-awareness in Latent Diffusion for Real-World Video Super-Resolution (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jiang, Qin; Wang, Qing Lin; Chi, Li Hua; Chen, Xin Hai; Zhang, Qing Yang; Zhou, Richard; Deng, Zheng Qiu; Deng, Jin Sheng; Tang, Bin Bing; Lv, Shao He; Liu, Jie; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Latent diffusion models (LDMs) have demonstrated remarkable success in generative modeling, and it is promising to leverage diffusion priors to enhance performance in image and video tasks. However, applying LDMs to video super-resolution (VSR) presents significant challenges due to the high demands for realistic details and temporal consistency in generated videos, exacerbated by the inherent stochasticity of the diffusion process. In this work, we propose a novel diffusion-based framework, the Temporal-awareness Latent Diffusion Model (TempDiff), specifically designed for real-world video super-resolution, where degradations are diverse and complex. TempDiff harnesses the powerful generative prior of a pre-trained diffusion model and enhances temporal awareness through the following mechanisms: 1) incorporating temporal layers into the denoising U-Net and VAE decoder, and fine-tuning these added modules to maintain temporal coherency; 2) estimating optical flow guidance using a pre-trained flow network for latent optimization and propagation across video sequences, ensuring overall stability in the generated high-quality video. Extensive experiments demonstrate that TempDiff achieves compelling results, outperforming state-of-the-art methods on both synthetic and real-world VSR benchmark datasets. Code will be available at https://github.com/jiangqin567/TempDiff

Item
Point-AGM: Attention Guided Masked Auto-Encoder for Joint Self-supervised Learning on Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Liu, Jie; Yang, Mengna; Tian, Yu; Li, Yancui; Song, Da; Li, Kang; Cao, Xin; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Masked point modeling (MPM) has gained considerable attention in self-supervised learning for 3D point clouds. While existing self-supervised methods have progressed in learning from point clouds, we aim to address their limited ability to capture high-level semantics through our novel attention-guided masking framework, Point-AGM. Our approach introduces an attention-guided masking mechanism that selectively masks low-attended regions, enabling the model to concentrate on reconstructing more critical areas and addressing the limitations of random and block masking strategies. Furthermore, we exploit the inherent advantages of the teacher-student network to enable cross-view contrastive learning on augmented dual-view point clouds, enforcing consistency between complete and partially masked views of the same 3D shape in feature space. This unified framework leverages the complementary strengths of masked point modeling, attention-guided masking, and contrastive learning for robust representation learning. Extensive experiments show the effectiveness of our approach and its strong transferability across various downstream tasks. Specifically, our model achieves an accuracy of 94.12% on ModelNet40 and 87.16% on the PB-T50-RS setting of ScanObjectNN, outperforming other self-supervised learning methods.

Item
Symmetric Piecewise Developable Approximations (The Eurographics Association and John Wiley & Sons Ltd., 2024)
He, Ying; Fang, Qing; Zhang, Zheng; Dai, Tielin; Wu, Kang; Liu, Ligang; Fu, Xiao-Ming; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We propose a novel method for generating symmetric piecewise developable approximations of shapes with approximate global reflectional or rotational symmetry. Given a shape and its symmetry constraint, the algorithm contains two crucial steps: (i) a symmetric deformation that achieves a nearly developable model and (ii) a symmetric segmentation aided by the deformed shape. The key to the deformation step is the use of symmetric implicit neural representations of the shape and the deformation field. A new mesh extraction from the implicit function is introduced to construct a strictly symmetric mesh for the subsequent segmentation. The symmetry constraint is carefully integrated into the partition to achieve the symmetric piecewise developable approximation. We demonstrate the effectiveness of our algorithm on various meshes.

Item
Strictly Conservative Neural Implicits (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Ludwig, Ingmar; Campen, Marcel; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We describe a method to convert 3D shapes into neural implicit form such that the shape is approximated in a guaranteed conservative manner, meaning the input shape is strictly contained inside the neural implicit or, alternatively, vice versa. Such conservative approximations are of interest in a variety of applications, including collision detection, occlusion culling, and intersection testing. Our approach is the first to guarantee conservativeness in this context of neural implicits. We support input given as a mesh, a voxel set, or an implicit function. Adaptive affine arithmetic is employed in the neural network fitting process, enabling reasoning over infinite sets of points despite the use of a finite set of training data. Combined with an interior-point-style optimization approach, this yields the desired guarantee.
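To illustrate how one can reason over infinite sets of points with a finite computation, the sketch below propagates plain interval bounds through a small ReLU MLP, producing a guaranteed enclosure of the network's output over an input box. The paper itself uses the tighter adaptive affine arithmetic, so this is a simplified stand-in with made-up weights.

```python
import numpy as np

def linear_bounds(W, b, lo, hi):
    """Bound W x + b over the box [lo, hi] with interval arithmetic."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr

def mlp_bounds(weights, lo, hi):
    """Enclose a ReLU MLP's output over an input box (ReLU on hidden layers)."""
    for i, (W, b) in enumerate(weights):
        lo, hi = linear_bounds(W, b, lo, hi)
        if i < len(weights) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(0)
weights = [(rng.normal(size=(16, 3)), rng.normal(size=16)),
           (rng.normal(size=(1, 16)), rng.normal(size=1))]
lo, hi = mlp_bounds(weights, np.array([-0.1] * 3), np.array([0.1] * 3))
# If the network encodes a signed distance (positive outside) and lo > 0 holds
# over the box, the box is provably outside the implicit surface.
print(lo, hi)
```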
Item
MISNeR: Medical Implicit Shape Neural Representation for Image Volume Visualisation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jin, Ge; Jung, Younhyun; Bi, Lei; Kim, Jinman; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Three-dimensional visualisation of meshes reconstructed from medical images is commonly used for various clinical applications, including pre-/post-surgical planning. Such meshes are conventionally generated by extracting the surface from volumetric segmentation masks and therefore suffer from staircase artefacts caused by anisotropic voxel dimensions. The time-consuming manual refinement needed to remove artefacts and/or isolated regions further adds to these limitations. Methods that generate meshes directly from volumetric data by template deformation are often limited to simple topological structures, and methods that use implicit functions for continuous surfaces do not reach the mesh reconstruction accuracy of segmentation-based methods. In this study, we address these limitations by combining an implicit function representation with a multi-level deep learning architecture. We introduce a novel multi-level local feature sampling component that leverages spatial features for the implicit function regression to enhance the segmentation result. We further introduce a shape boundary estimator that accelerates explicit mesh reconstruction by minimising the number of signed-distance queries during model inference. The result is a multi-level deep learning network that directly regresses the implicit function from medical image volumes to a continuous surface model, which can be used for mesh reconstruction at arbitrarily high volume resolution to minimise staircase artefacts. We evaluated our method using pelvic computed tomography (CT) datasets from two public sources with varying z-axis resolutions. Our method minimised staircase artefacts while achieving surface accuracy comparable to state-of-the-art segmentation algorithms, and it was 9 times faster in volume reconstruction than comparable implicit shape representation networks.

Item
Surface Cutting and Flattening to Target Shapes (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Yuanhao; Wu, Wenzheng; Liu, Ligang; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We introduce a novel framework for surface cutting and flattening that aims to align the boundary of the planar parameterization with a target shape. Diverging from traditional methods focused on minimizing distortion, we also aim to achieve shape similarity between the parameterized mesh and a specific planar target, which is important in applications such as art design and texture mapping. However, with existing methods commonly limited to ellipsoidal surfaces, solving this problem on general surfaces remains a challenge. Our framework models the general case as a joint optimization of cuts and parameterization, guided by a novel metric assessing shape similarity. To circumvent the common issue of local minima, we introduce an additional global seam-updating strategy guided by the target shape. Experimental results show that our framework not only matches previous approaches on ellipsoidal surfaces but also achieves satisfactory results on more complex ones.
Item
Color-Accurate Camera Capture with Multispectral Illumination and Multiple Exposures (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Gao, Hongyun; Mantiuk, Rafal K.; Finlayson, Graham D.; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Cameras cannot capture the same colors as those seen by the human eye because the eye and the cameras' sensors differ in their spectral sensitivities. To obtain a plausible approximation of perceived colors, the camera's Image Signal Processor (ISP) employs a color correction step. However, even advanced color correction methods cannot solve this underdetermined problem, and visible color inaccuracies are always present. Here, we explore an approach in which accurate colors can be captured with a regular camera by optimizing the spectral composition of the illuminant and capturing one or more exposures. We jointly optimize the signal-to-noise ratio and the color accuracy, irrespective of the spectral composition of the scene. One or more images captured under controlled multispectral illuminants are then converted into a color-accurate image as seen under the standard D65 illuminant. Our optimization reduces the color error by 20-60% (in terms of CIEDE 2000), depending on the number of exposures and the camera type. The method can be used in applications where illumination can be controlled and high color accuracy is required, such as product photography, or with a multispectral camera flash. The code is available at https://github.com/gfxdisp/multispectral_color_correction.
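The ISP color correction step mentioned above is commonly posed as a least-squares fit of a 3×3 matrix from camera responses to target tristimulus values. A minimal sketch of that baseline follows, on synthetic data; it is not the paper's joint SNR/accuracy optimization over illuminant spectra.

```python
import numpy as np

def fit_color_matrix(cam_rgb, target):
    """Least-squares 3x3 matrix M minimizing ||cam_rgb @ M - target||_F."""
    M, *_ = np.linalg.lstsq(cam_rgb, target, rcond=None)
    return M

rng = np.random.default_rng(0)
cam_rgb = rng.random((24, 3))            # e.g. a 24-patch chart's camera responses
M_true = np.array([[0.90, 0.10, 0.00],   # synthetic ground-truth transform
                   [0.05, 0.80, 0.15],
                   [0.00, 0.20, 0.80]])
target = cam_rgb @ M_true + rng.normal(0.0, 0.01, (24, 3))
M = fit_color_matrix(cam_rgb, target)
print(np.round(M, 2))                    # recovers M_true up to the added noise
```

Because the 3×3 fit cannot be exact for arbitrary scene spectra, residual errors remain, which is the gap the paper closes by also shaping the illuminant and using multiple exposures.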