Browsing by Author "Dong, Zhao"
Now showing 1 - 5 of 5
Item: Efficient Path-Space Differentiable Volume Rendering With Respect To Shapes (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Yu, Zihan; Zhang, Cheng; Maury, Olivier; Hery, Christophe; Dong, Zhao; Zhao, Shuang; Ritschel, Tobias; Weidlich, Andrea
Differentiable rendering of translucent objects with respect to their shapes has been a long-standing problem. State-of-the-art methods require detecting object silhouettes or specifying change rates inside translucent objects, both of which can be expensive for translucent objects with complex shapes. In this paper, we address this problem for translucent objects with no refractive or reflective boundaries. By reparameterizing interior components of differential path integrals, our new formulation does not require change rates to be specified in the interior of objects. Further, we introduce new Monte Carlo estimators based on this formulation that do not require explicit detection of object silhouettes. (See the first sketch after this listing.)

Item: Material and Lighting Reconstruction for Complex Indoor Scenes with Texture-space Differentiable Rendering (The Eurographics Association, 2021)
Nimier-David, Merlin; Dong, Zhao; Jakob, Wenzel; Kaplanyan, Anton; Bousseau, Adrien and McGuire, Morgan
Modern geometric reconstruction techniques achieve impressive levels of accuracy in indoor environments. However, such captured data typically keeps lighting and materials entangled, making it impossible to manipulate the resulting scenes in photorealistic settings such as augmented/mixed reality and robotics simulation. Moreover, various imperfections in the captured data, such as missing detailed geometry, camera misalignment, and uneven coverage of observations, pose challenges for scene recovery. To address these challenges, we present a robust optimization pipeline based on differentiable rendering to recover physically based materials and illumination, leveraging RGB and geometry captures. We introduce a novel texture-space sampling technique and carefully chosen inductive priors to help guide reconstruction, avoiding low-quality or implausible local minima. Our approach enables robust and high-resolution reconstruction of complex materials and illumination in captured indoor scenes. This enables a variety of applications, including novel view synthesis, scene editing, local and global relighting, synthetic data augmentation, and other photorealistic manipulations. (See the second sketch after this listing.)

Item: Physics-Based Inverse Rendering using Combined Implicit and Explicit Geometries (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Cai, Guangyan; Yan, Kai; Dong, Zhao; Gkioulekas, Ioannis; Zhao, Shuang; Ghosh, Abhijeet; Wei, Li-Yi
Mathematically representing the shape of an object is a key ingredient for solving inverse rendering problems. Explicit representations like meshes are efficient to render in a differentiable fashion but have difficulty handling topology changes. Implicit representations like signed-distance functions, on the other hand, offer better support for topology changes but are much more difficult to use for physics-based differentiable rendering. We introduce a new physics-based inverse rendering pipeline that uses both implicit and explicit representations. Our technique enjoys the benefits of both representations by supporting both topology changes and differentiable rendering of complex effects such as environmental illumination, soft shadows, and interreflection. We demonstrate the effectiveness of our technique using several synthetic and real examples. (See the third sketch after this listing.)

Item: PSAO: Point-Based Split Rendering for Ambient Occlusion (The Eurographics Association, 2023)
Neff, Thomas; Budge, Brian; Dong, Zhao; Schmalstieg, Dieter; Steinberger, Markus; Bikker, Jacco; Gribble, Christiaan
Recent advances in graphics hardware have enabled ray tracing to produce high-quality ambient occlusion (AO) in real time, free of the artifacts typically found in real-time screen-space approaches. However, the high computational cost of ray tracing remains a significant hurdle for low-power devices like standalone VR headsets or smartphones. To address this challenge, inspired by point-based global illumination and texture-space split rendering, we propose point-based split ambient occlusion (PSAO), a novel split-rendering system that streams points sparsely from server to client. PSAO first distributes points evenly across the scene and subsequently transmits only the points that have changed by more than a given threshold, using an efficient hash grid to blend neighboring points for the final compositing pass on the client. PSAO outperforms recent texture-space shading approaches in terms of quality and required network bit rate, while demonstrating performance similar to commonly used lower-quality screen-space approaches. Our point-based split-rendering representation lends itself to highly compressible signals such as AO and scales toward quality or bandwidth requirements by adjusting the number of points in the scene. (See the fourth sketch after this listing.)

Item: Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Luan, Fujun; Zhao, Shuang; Bala, Kavita; Dong, Zhao; Bousseau, Adrien and McGuire, Morgan
Reconstructing the shape and appearance of real-world objects from measured 2D images has been a long-standing inverse rendering problem. In this paper, we introduce a new analysis-by-synthesis technique capable of producing high-quality reconstructions through robust coarse-to-fine optimization and physics-based differentiable rendering. Unlike most previous methods that handle geometry and reflectance largely separately, our method unifies the optimization of both by leveraging image gradients with respect to both object reflectance and geometry. To obtain physically accurate gradient estimates, we develop a new GPU-based Monte Carlo differentiable renderer leveraging recent advances in differentiable rendering theory to offer unbiased gradients while enjoying better performance than existing tools like PyTorch3D [RRN*20] and redner [LADL18]. To further improve robustness, we utilize several shape and material priors as well as a coarse-to-fine optimization strategy to reconstruct geometry. Using both synthetic and real input images, we demonstrate that our technique produces higher-quality reconstructions than previous methods. (See the fifth sketch after this listing.)
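First sketch. A minimal illustration of the reparameterization idea behind the first entry, not the paper's estimator: it differentiates a one-dimensional integral whose domain depends on a parameter (a stand-in for a shape parameter). Differentiating only the integrand misses the contribution of the moving domain, whereas reparameterizing onto a fixed domain lets an ordinary Monte Carlo estimator recover the full derivative. The integrand f, the parameter theta, and the sample counts are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy integrand standing in for a volumetric path contribution.
    return np.exp(-x) * np.sin(3.0 * x) + 1.5

def f_prime(x):
    return np.exp(-x) * (3.0 * np.cos(3.0 * x) - np.sin(3.0 * x))

def integral(a, b, n=20_000):
    # Deterministic midpoint-rule reference for I = int_a^b f(x) dx.
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return (b - a) * np.mean(f(x))

theta = 1.3   # the parameter moves the domain [0, theta], like a shape parameter
eps = 1e-3

# Reference derivative of I(theta) = int_0^theta f(x) dx via finite differences.
fd = (integral(0.0, theta + eps) - integral(0.0, theta - eps)) / (2 * eps)

# Naive "interior" derivative: the integrand f(x) does not depend on theta, so
# differentiating under the integral sign without reparameterization gives 0
# and misses the moving-domain contribution entirely.
naive = 0.0

# Reparameterize x = theta * u with u ~ U(0, 1):
#   I(theta) = int_0^1 f(theta * u) * theta du
# Now every sample depends on theta, and a plain Monte Carlo estimator of the
# differentiated integrand recovers the full derivative (analytically f(theta)).
u = rng.uniform(0.0, 1.0, 200_000)
reparam = np.mean(f(theta * u) + theta * u * f_prime(theta * u))

print(f"finite differences    : {fd:.4f}")
print(f"interior only (naive) : {naive:.4f}")
print(f"reparameterized MC    : {reparam:.4f}")
print(f"analytic f(theta)     : {f(theta):.4f}")
```

The finite-difference and reparameterized values agree up to Monte Carlo noise, while the interior-only estimate stays at zero.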
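Second sketch. For the indoor-scene reconstruction entry, this toy shows the general shape of a texture-space optimization with an inductive prior: a texture-space albedo map is fit to a "captured" image through a diffuse forward model, regularized by a Laplacian smoothness prior. The forward model, resolution, step size, and prior weight are all assumptions standing in for the paper's physically based differentiable renderer and carefully chosen priors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Texture-space unknown: a smooth albedo map. The "renderer" is a toy diffuse
# model (texel colour = albedo * known irradiance), not a physically based one.
H = W = 32
yy, xx = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
albedo_true = 0.2 + 0.6 * xx                 # smooth ground-truth texture
shading = 0.3 + 0.7 * rng.random((H, W))     # known per-texel irradiance
target = albedo_true * shading               # "captured" image

def laplacian(a):
    # 4-neighbour Laplacian (periodic boundary) used as a smoothness prior.
    return (-4 * a
            + np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1))

albedo = np.full((H, W), 0.5)   # initial guess
lr, lam = 0.5, 0.05             # step size and prior weight (assumed)

for _ in range(500):
    residual = albedo * shading - target
    # Gradient of ||albedo*shading - target||^2 + lam * ||grad(albedo)||^2.
    grad = 2 * residual * shading - 2 * lam * laplacian(albedo)
    albedo = np.clip(albedo - lr * grad, 0.0, 1.0)

print("mean absolute albedo error:", np.abs(albedo - albedo_true).mean())
```

The prior keeps poorly observed texels (here, texels with weak irradiance) from drifting to implausible values, which is the role the inductive priors play in the reconstruction described above.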
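Third sketch. For the combined implicit/explicit entry, the code below (assuming scikit-image is available) extracts an explicit mesh from a signed-distance field with marching cubes and uses the level-set relation dv/dphi = -n to map a per-vertex quantity back onto the implicit field. This is one standard way of coupling the two representations, shown on a toy sphere; it is not presented as the paper's exact differentiation scheme.

```python
import numpy as np
from skimage.measure import marching_cubes   # assumes scikit-image is installed

# Implicit representation: a signed-distance field of a sphere on a grid.
N = 64
lin = np.linspace(-1.0, 1.0, N)
X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
radius = 0.6
sdf = np.sqrt(X**2 + Y**2 + Z**2) - radius

# Explicit representation: a triangle mesh extracted from the zero level set.
spacing = (lin[1] - lin[0],) * 3
verts, faces, _, _ = marching_cubes(sdf, level=0.0, spacing=spacing)
verts = verts - 1.0                          # back to [-1, 1]^3 world coordinates

# Level-set relation: phi(v) = 0 stays true under a local change of the field,
# so dv/dphi = -n, where n is the outward unit normal. For the toy sphere the
# outward normal is known analytically.
normals = verts / np.linalg.norm(verts, axis=1, keepdims=True)

# A toy per-vertex quantity dL/dv (here: a loss that grows as vertices move
# outward). The chain rule maps it to a per-vertex sensitivity of the SDF.
dL_dverts = normals.copy()
dL_dphi = np.sum(dL_dverts * -normals, axis=1)   # = dL/dv . dv/dphi

print(f"{len(verts)} vertices, {len(faces)} faces;",
      f"mean dL/dphi = {dL_dphi.mean():.3f} (moving outward <=> decreasing phi)")
```

Using analytic sphere normals sidesteps the orientation convention of the normals returned by marching_cubes; in a real pipeline the normals would come from the SDF gradient.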
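Fourth sketch. For the PSAO entry, the following mimics the streaming-and-blending idea on toy data: the server sends only points whose AO value changed by more than a threshold, and the client looks up neighboring points through a spatial hash grid and blends them with inverse-distance weights. The cell size, threshold, blend radius, and weighting are assumptions, and no real AO is computed.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
CELL = 0.25          # hash-grid cell size (assumed)
THRESHOLD = 0.05     # stream only points whose AO changed by more than this

def cell_key(p):
    return tuple(np.floor(p / CELL).astype(int))

# --- Server side ------------------------------------------------------------
points = rng.random((1_000, 3))      # points distributed over a unit-cube "scene"
ao_prev = np.zeros(len(points))      # last values acknowledged by the client

def server_update(ao_now):
    """Return (index, value) pairs for points that changed beyond the threshold."""
    changed = np.nonzero(np.abs(ao_now - ao_prev) > THRESHOLD)[0]
    ao_prev[changed] = ao_now[changed]
    return [(int(i), float(ao_now[i])) for i in changed]

# --- Client side ------------------------------------------------------------
client_ao = np.zeros(len(points))
grid = defaultdict(list)
for i, p in enumerate(points):
    grid[cell_key(p)].append(i)

def client_receive(updates):
    for i, v in updates:
        client_ao[i] = v

def client_shade(q, blend_radius=0.2):
    """Blend AO of neighbouring points with inverse-distance weights."""
    k = np.array(cell_key(q))
    num = den = 0.0
    for off in np.ndindex(3, 3, 3):                      # visit the 27 nearby cells
        for i in grid.get(tuple(k + np.array(off) - 1), []):
            d = np.linalg.norm(points[i] - q)
            if d < blend_radius:
                w = 1.0 / (d + 1e-4)
                num += w * client_ao[i]
                den += w
    return num / den if den > 0 else 1.0                 # default: fully unoccluded

# Simulate one frame: the server computes fresh AO and streams only the deltas.
ao_frame = rng.random(len(points))
updates = server_update(ao_frame)
client_receive(updates)
print(f"streamed {len(updates)} of {len(points)} points;",
      f"blended AO at query point: {client_shade(np.array([0.5, 0.5, 0.5])):.3f}")
```

On subsequent frames, only points whose AO actually changes are transmitted, which is what keeps the streamed signal small and highly compressible.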
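Fifth sketch. For the last entry, this toy illustrates joint optimization of reflectance and geometry from image gradients: the albedo and radius of a softly rasterized disk are recovered from a single target image by gradient descent on analytic derivatives with respect to both parameters. The soft-silhouette forward model and learning rates are assumptions; the paper instead uses a GPU Monte Carlo differentiable renderer with shape and material priors and coarse-to-fine optimization.

```python
import numpy as np

# Toy forward model: a softly rasterized disk with unknown radius (geometry)
# and unknown albedo (reflectance). Gradients w.r.t. both are taken jointly.
H = W = 64
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
dist = np.sqrt((xs - W / 2) ** 2 + (ys - H / 2) ** 2)
SOFT = 1.5   # edge softness so the silhouette stays differentiable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def render(albedo, radius):
    return albedo * sigmoid((radius - dist) / SOFT)

# "Captured" image with ground-truth parameters.
albedo_true, radius_true = 0.75, 20.0
target = render(albedo_true, radius_true)

albedo, radius = 0.4, 12.0      # initial guesses
lr_a, lr_r = 2e-4, 2e-2         # learning rates (assumed)

for _ in range(2000):
    cov = sigmoid((radius - dist) / SOFT)     # per-pixel coverage
    res = albedo * cov - target
    # Analytic gradients of the L2 image loss w.r.t. reflectance and geometry.
    g_albedo = np.sum(2 * res * cov)
    g_radius = np.sum(2 * res * albedo * cov * (1 - cov) / SOFT)
    albedo -= lr_a * g_albedo
    radius -= lr_r * g_radius

print(f"albedo {albedo:.3f} (true {albedo_true}), radius {radius:.2f} (true {radius_true})")
```

Because the soft silhouette keeps the geometric term differentiable, both parameters descend toward their ground-truth values from a single image, which is the "unified" behaviour the entry describes at a much larger scale.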