Browsing by Author "Harada, Takahiro"
Now showing 1 - 4 of 4
Item
Local Positional Encoding for Multi-Layer Perceptrons
(The Eurographics Association, 2023) Fujieda, Shin; Yoshimura, Atsushi; Harada, Takahiro; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
A multi-layer perceptron (MLP) is a type of neural network with a long history of research, and it has recently been studied actively in the computer vision and graphics fields. One well-known limitation of an MLP is its difficulty expressing high-frequency signals from low-dimensional inputs. Several studies have proposed input encodings that improve the reconstruction quality of an MLP by pre-processing the input data. This paper proposes a novel input encoding method, local positional encoding, which is an extension of positional and grid encodings. The proposed method combines these two encoding techniques so that a small MLP learns high-frequency signals by applying positional encoding, with fewer frequencies, to the local position and scale within each cell of a lower-resolution grid. We demonstrate the effectiveness of the proposed method by applying it to common 2D and 3D regression tasks, where it shows higher-quality results than positional and grid encodings, and results comparable to hierarchical variants of grid encoding, such as multi-resolution grid encoding, at an equivalent memory footprint.

Item
Multi-Fragment Rendering for Glossy Bounces on the GPU
(The Eurographics Association, 2022) Yoshimura, Atsushi; Tokuyoshi, Yusuke; Harada, Takahiro; Ghosh, Abhijeet; Wei, Li-Yi
Multi-fragment rendering provides additional degrees of freedom in postprocessing. It allows us to edit images rendered with antialiasing, motion blur, depth of field, and transparency. To store multiple fragments, relationships between pixels and scene elements are often encoded into an existing image format. Most multi-fragment rendering systems, however, consider only fragments directly visible along primary rays; the pixel coverage of fragments indirectly visible along reflected or refracted rays has not been well discussed. In this paper, we extend the generation of multiple fragments to support indirect visibility across multiple bounces, which artists often require for image manipulation in production. Our method is compatible with existing multi-fragment image formats such as Cryptomatte and does not need any additional ray traversals during path tracing.

Item
Neural Intersection Function
(The Eurographics Association, 2023) Fujieda, Shin; Kao, Chih Chen; Harada, Takahiro; Bikker, Jacco; Gribble, Christiaan
The ray casting operation in the Monte Carlo ray tracing algorithm usually adopts a bounding volume hierarchy (BVH) to accelerate finding intersections when evaluating visibility. However, BVH traversal is irregular, with divergent memory accesses and branch execution, so it cannot achieve maximum efficiency on GPUs. This paper proposes a novel Neural Intersection Function based on a multilayer perceptron whose core operation contains only dense matrix multiplication with predictable memory access. Our method is the first solution integrating a neural network-based approach and a BVH-based ray tracing pipeline into one unified rendering framework. It evaluates the visibility and occlusion of secondary rays without traversing the most irregular and time-consuming part of the BVH, and thus accelerates ray casting. The experiments show the proposed method can reduce the secondary ray casting time for direct illumination by up to 35% compared to a BVH-based implementation while preserving image quality.

Item
Stochastic Light Culling for Single Scattering in Participating Media
(The Eurographics Association, 2022) Fujieda, Shin; Tokuyoshi, Yusuke; Harada, Takahiro; Pelechano, Nuria; Vanderhaeghe, David
We introduce a simple but efficient method to compute single scattering from point and arbitrarily shaped area light sources in participating media. Our method extends the stochastic light culling method to volume rendering by intersecting each ray with the spherical bounds of the lights' influence ranges. For primary rays, this allows simple computation of the lighting in participating media without hierarchical data structures such as a light tree. First, we show how to combine equiangular sampling with the proposed light culling method in the simple case of point lights. We then apply it to arbitrarily shaped area lights by placing virtual point lights on their surfaces. Using our method, we improve rendering quality for scenes with many lights without tree construction or traversal.
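The local positional encoding summarized in the first item combines a coarse grid with a sinusoidal positional encoding of the position inside each grid cell. A minimal sketch of that idea in Python — the function name and the `grid_res`/`num_freqs` parameters are illustrative, not the paper's notation:

```python
import numpy as np

def local_positional_encoding(x, grid_res=8, num_freqs=4):
    """Sketch: encode a point x in [0, 1)^d by combining a coarse grid
    index with a sinusoidal encoding of the local position inside the
    containing cell. Illustrative only, not the paper's exact method."""
    x = np.asarray(x, dtype=np.float64)
    cell = np.floor(x * grid_res).astype(int)   # index of the containing grid cell
    local = x * grid_res - cell                 # local position in [0, 1) within the cell
    # Sinusoidal positional encoding applied to the local coordinate only,
    # so few frequencies suffice even on a low-resolution grid.
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    enc = np.concatenate([np.sin(local[:, None] * freqs).ravel(),
                          np.cos(local[:, None] * freqs).ravel()])
    return cell, enc
```

In a full implementation the cell index would look up learned grid features, which are concatenated with the encoded local position and fed to the small MLP.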
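The last item builds on equiangular sampling, a standard technique for choosing a scattering distance along a ray with probability density proportional to the inverse-square falloff from a point light. A self-contained sketch (variable names are illustrative):

```python
import math

def equiangular_sample(ray_o, ray_d, light_p, t_min, t_max, u):
    """Sketch of equiangular sampling for a point light.

    Samples a distance t in [t_min, t_max] along the ray (origin ray_o,
    unit direction ray_d) with pdf proportional to 1 / r^2, where r is
    the distance from the light at light_p to the sampled point.
    Returns (t, pdf). Assumes the light does not lie on the ray.
    """
    # Signed distance along the ray to the point closest to the light.
    delta = sum((lp - o) * d for lp, o, d in zip(light_p, ray_o, ray_d))
    closest = [o + delta * d for o, d in zip(ray_o, ray_d)]
    # Perpendicular distance from the light to the ray.
    D = math.sqrt(sum((lp - c) ** 2 for lp, c in zip(light_p, closest)))
    # Uniform sampling in angle as seen from the light.
    theta_a = math.atan2(t_min - delta, D)
    theta_b = math.atan2(t_max - delta, D)
    theta = theta_a + u * (theta_b - theta_a)
    t = delta + D * math.tan(theta)
    pdf = D / ((theta_b - theta_a) * (D * D + (t - delta) ** 2))
    return t, pdf
```

The paper's contribution is orthogonal to this routine: stochastic light culling bounds each light's influence to a sphere, and the ray/sphere intersection supplies the `[t_min, t_max]` interval over which sampling is performed.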