Browsing by Author "Peers, Pieter"
Now showing 1 - 9 of 9
Item: An Adaptive BRDF Fitting Metric (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Bieron, James; Peers, Pieter
Editors: Dachsbacher, Carsten; Pharr, Matt
Abstract: We propose a novel image-driven fitting strategy for isotropic BRDFs. Whereas existing BRDF fitting methods minimize a cost function directly on the error between the fitted analytical BRDF and the measured isotropic BRDF samples, we also take into account the resulting material appearance in visualizations of the BRDF. This change of fitting paradigm improves the appearance reproduction fidelity, especially for analytical BRDF models that lack the expressiveness to reproduce the measured surface reflectance. We formulate BRDF fitting as a two-stage process that first generates a series of candidate BRDF fits based only on the error with respect to the measured BRDF samples. Next, from these candidates, we select the BRDF fit that minimizes the visual error. We demonstrate qualitatively and quantitatively improved fits for the Cook-Torrance and GGX microfacet BRDF models. Furthermore, we present an analysis of the BRDF fitting results, and show that the image-driven isotropic BRDF fits generalize well to other lighting conditions, and that, depending on the measured material, a different weighting of errors with respect to the measured BRDF is necessary.

Item: An Adaptive Metric for BRDF Appearance Matching (The Eurographics Association, 2020)
Authors: Bieron, James; Peers, Pieter
Editors: Klein, Reinhard; Rushmeier, Holly
Abstract: Image-based BRDF matching is a special case of inverse rendering, where the parameters of a BRDF model are optimized based on a photograph of a homogeneous material under natural lighting. Directly optimizing the difference between a rendering and a reference image under a perceptual image metric can provide a close visual match between the model and the reference material. However, perceptual image metrics rely on image features and thus require full-resolution renderings, which can be costly to produce, especially when embedded in a non-linear search procedure for the optimal BRDF parameters. A pixel-based metric, such as the squared difference, can approximate the image error from a small subset of pixels. Unfortunately, pixel-based metrics are often a poor approximation of the human perception of a material's appearance. We show that results of comparable quality to a perceptual metric can be obtained using an adaptive pixel-based metric that is optimized based on the appearance similarity of the material. As the core of our adaptive metric is pixel-based, our method is amenable to image subsampling, thereby greatly reducing the computational cost.
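The two-stage "fit, then select by appearance" strategy described in the first item above can be summarized in a short sketch. The code below is a minimal illustration only: `eval_brdf` and `render` are assumed placeholder callables, the visual error is a plain squared pixel difference, and none of it is the authors' implementation.

```python
# Minimal sketch of a two-stage BRDF fit: stage 1 produces candidate fits against
# the measured samples under different error weightings; stage 2 keeps the candidate
# whose rendering best matches a reference image. Placeholder names throughout.
import numpy as np
from scipy.optimize import minimize

def fit_candidates(measured, dirs, eval_brdf, weightings, x0):
    """Stage 1: one candidate parameter vector per error weighting."""
    candidates = []
    for w in weightings:
        def brdf_error(params):
            pred = eval_brdf(params, dirs)          # analytic BRDF evaluated at sample dirs
            return np.sum(w * (pred - measured) ** 2)
        res = minimize(brdf_error, x0, method="Nelder-Mead")
        candidates.append(res.x)
    return candidates

def select_by_appearance(candidates, render, reference_image):
    """Stage 2: pick the candidate with the lowest visual error."""
    visual_error = [np.mean((render(p) - reference_image) ** 2) for p in candidates]
    return candidates[int(np.argmin(visual_error))]
```

In the spirit of the second item above, the full-image difference in `select_by_appearance` could be replaced by a weighted error evaluated on a subsampled set of pixels to cut the rendering cost; the weighting used in the paper may differ from this sketch.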
Item: Deep Reflectance Scanning: Recovering Spatially-varying Material Appearance from a Flash-lit Video Sequence (© 2021 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Authors: Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining
Editors: Benes, Bedrich; Hauser, Helwig
Abstract: In this paper we present a novel method for recovering high-resolution spatially-varying isotropic surface reflectance of a planar exemplar from a flash-lit close-up video sequence captured with a regular hand-held mobile phone. We do not require careful calibration of the camera and lighting parameters, but instead compute a per-pixel flow map using a deep neural network to align the input video frames. For each video frame, we also extract the reflectance parameters, warp the neural reflectance features directly using the per-pixel flow, and subsequently pool the warped features. Our method facilitates convenient hand-held acquisition of spatially-varying surface reflectance with commodity hardware by non-expert users. Furthermore, our method enables aggregation of reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high-resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real-world spatially-varying materials.

Item: Deep Separation of Direct and Global Components from a Single Photograph under Structured Lighting (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Duan, Zhaoliang; Bieron, James; Peers, Pieter
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Abstract: We present a deep learning based solution for separating the direct and global light transport components from a single photograph captured under high-frequency structured lighting with a co-axial projector-camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between both decoders, to ensure a consistent decomposition of the two light transport components. Furthermore, our deep learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network is able to achieve high-fidelity decompositions for lighting-frequency-sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes.

Item: Estimating Homogeneous Data-driven BRDF Parameters from a Reflectance Map under Known Natural Lighting (The Eurographics Association, 2019)
Authors: Cooper, Victoria L.; Bieron, James C.; Peers, Pieter
Editors: Klein, Reinhard; Rushmeier, Holly
Abstract: In this paper we demonstrate robust estimation of the model parameters of a fully linear data-driven BRDF model from a reflectance map under known natural lighting. To regularize the estimation of the model parameters, we leverage the reflectance similarities within a material class. We approximate the space of homogeneous BRDFs using a Gaussian mixture model, and assign a material class to each Gaussian in the mixture model. Next, we compute a linear solution per material class. Finally, we select the best candidate as the final estimate. We demonstrate the efficacy and robustness of our method using the MERL BRDF database under a variety of natural lighting conditions.
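The item directly above follows a "one linear solve per material class, then keep the best candidate" pattern. The sketch below illustrates that pattern under assumed inputs: a linear map `A` from data-driven BRDF coefficients to reflectance-map pixels, an observed pixel vector `b`, and per-class Gaussian statistics. The Tikhonov-style regularization toward the class mean is an illustrative choice, not necessarily the paper's exact formulation.

```python
# Sketch: regularized linear solve per material class, selecting the candidate
# with the lowest residual against the observed reflectance map.
import numpy as np

def estimate_per_class(A, b, class_means, class_covs, reg=1e-2):
    """A: (num_pixels, num_coeffs) linear map under known lighting.
    b: (num_pixels,) observed reflectance-map pixels.
    class_means, class_covs: per-class Gaussian statistics over coefficients."""
    best, best_residual = None, np.inf
    for mu, cov in zip(class_means, class_covs):
        prec = np.linalg.inv(cov + reg * np.eye(cov.shape[0]))
        # One linear solve per class, biased toward that class's mean coefficients.
        lhs = A.T @ A + prec
        rhs = A.T @ b + prec @ mu
        x = np.linalg.solve(lhs, rhs)
        residual = np.linalg.norm(A @ x - b)
        if residual < best_residual:
            best, best_residual = x, residual
    return best
```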
Item: Interactive Curation of Datasets for Training and Refining Generative Models (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ye, Wenjie; Dong, Yue; Peers, Pieter
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: We present a novel interactive learning-based method for curating datasets using user-defined criteria for training and refining Generative Adversarial Networks. We employ a novel batch-mode active learning strategy to progressively select small batches of candidate exemplars for which the user is asked to indicate whether they match the, possibly subjective, selection criteria. After each batch, a classifier that models the user's intent is refined and subsequently used to select the next batch of candidates. After the selection process ends, the final classifier, trained with limited but adaptively selected training data, is used to sift through the large collection of input exemplars and extract a sufficiently large subset for training or refining a generative model that matches the user's selection criteria. A key distinguishing feature of our system is that we do not assume that the user can always make a firm binary decision (i.e., "meets" or "does not meet" the selection criteria) for each candidate exemplar; instead, we allow the user to label an exemplar as "undecided". We rely on a non-binary query-by-committee strategy to distinguish between the user's uncertainty and the trained classifier's uncertainty, and develop a novel disagreement distance metric to encourage a diverse candidate set. In addition, a number of optimization strategies are employed to achieve an interactive experience. We demonstrate our interactive curation system on several applications related to training or refining generative models: training a Generative Adversarial Network that meets user-defined criteria, adjusting the output distribution of an existing generative model, and removing unwanted samples from a generative model.

Item: Mean Value Caching for Walk on Spheres (The Eurographics Association, 2023)
Authors: Bakbouk, Ghada; Peers, Pieter
Editors: Ritschel, Tobias; Weidlich, Andrea
Abstract: Walk on Spheres (WoS) is a grid-free Monte Carlo method for numerically estimating solutions of elliptic partial differential equations (PDEs) such as the Laplace and Poisson equations. While WoS is efficient for computing the solution at a single evaluation point, it becomes less efficient when the solution is required over a whole domain or a region of interest. WoS computes a solution for each evaluation point separately, possibly recomputing similar sub-walks multiple times over multiple evaluation points. In this paper, we introduce a novel filtering and caching strategy that leverages the volume mean value property (in contrast to the boundary mean value property that forms the core of WoS). In addition, to improve quality under sparse cache regimes, we describe a weighted mean as well as a non-uniform sampling method. Finally, we show that we can reduce the variance within the cache by recursively applying the volume mean value property to the cached elements.
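For context on the base method the item above builds on, here is a minimal sketch of a plain Walk on Spheres estimator for Laplace's equation with Dirichlet boundary conditions. The `distance_to_boundary` and `boundary_value` callables are assumed descriptions of the domain; the paper's caching, filtering, and volume mean value reuse are deliberately not shown.

```python
# Plain Walk on Spheres for Laplace's equation: repeatedly jump to a uniformly
# random point on the largest empty sphere around the current position until the
# walk lands within eps of the boundary, then read off the boundary value.
import numpy as np

def walk_on_spheres(x, distance_to_boundary, boundary_value,
                    eps=1e-3, max_steps=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    for _ in range(max_steps):
        r = distance_to_boundary(x)        # radius of the largest empty sphere
        if r < eps:                        # close enough to the boundary
            return boundary_value(x)
        d = rng.normal(size=x.shape)       # uniform direction via normalized Gaussian
        x = x + r * d / np.linalg.norm(d)
    return boundary_value(x)               # truncated walk: fall back to boundary value

def estimate(x, distance_to_boundary, boundary_value, num_walks=256):
    """Monte Carlo average over independent walks."""
    return np.mean([walk_on_spheres(x, distance_to_boundary, boundary_value)
                    for _ in range(num_walks)])
```

Because every evaluation point runs its own set of walks, evaluating a whole region repeats similar sub-walks many times, which is the inefficiency the caching strategy in the item above targets.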
Item: On-Site Example-Based Material Appearance Acquisition (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Lin, Yiming; Peers, Pieter; Ghosh, Abhijeet
Editors: Boubekeur, Tamy; Sen, Pradeep
Abstract: We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible "rapid appearance modeling".

Item: Single Image Surface Appearance Modeling with Self-augmented CNNs and Inexact Supervision (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Ye, Wenjie; Li, Xiao; Dong, Yue; Peers, Pieter; Tong, Xin
Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes
Abstract: This paper presents a deep learning based method for estimating spatially varying surface reflectance properties from a single image of a planar surface under unknown natural lighting, trained using only photographs of exemplar materials, without referencing any artist-generated or densely measured spatially varying surface reflectance training data. Our method is based on an empirical study of Li et al.'s [LDPT17] self-augmentation training strategy, which shows that the main role of the initial approximative network is to provide guidance on the inherent ambiguities in single-image appearance estimation. Furthermore, our study indicates that this initial network can be inexact (i.e., trained from other data sources) as long as it resolves the inherent ambiguities. We show that the single image estimation network trained without manually labeled data outperforms prior work in terms of accuracy as well as generality.
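The self-augmentation strategy referenced in the item above can be illustrated with a short training-step sketch: the current network turns an unlabeled photograph into pseudo ground-truth reflectance maps, a renderer synthesizes a new image from those maps, and the network is then trained on that synthetic pair. This is a minimal sketch under assumed placeholders (`net`, `render_svbrdf`, `lighting`), not the authors' implementation.

```python
# One self-augmentation step in the spirit of Li et al. [LDPT17]:
# unlabeled photo -> predicted SVBRDF maps (pseudo label) -> re-rendered image,
# then train the network to recover the pseudo label from the rendering.
import torch

def self_augmentation_step(net, render_svbrdf, unlabeled_image, lighting, optimizer):
    with torch.no_grad():
        pseudo_maps = net(unlabeled_image)                 # pseudo ground truth
        synthetic = render_svbrdf(pseudo_maps, lighting)   # synthetic training image

    optimizer.zero_grad()
    predicted_maps = net(synthetic)                        # estimate maps from the rendering
    loss = torch.nn.functional.mse_loss(predicted_maps, pseudo_maps)
    loss.backward()
    optimizer.step()
    return loss.item()
```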