NeuPreSS: Compact Neural Precomputed Subsurface Scattering for Distant Lighting of Heterogeneous Translucent Objects

dc.contributor.author: Tg, Thomson
dc.contributor.author: Frisvad, Jeppe Revall
dc.contributor.author: Ramamoorthi, Ravi
dc.contributor.author: Jensen, Henrik W.
dc.contributor.editor: Chen, Renjie
dc.contributor.editor: Ritschel, Tobias
dc.contributor.editor: Whiting, Emily
dc.date.accessioned: 2024-10-13T18:08:46Z
dc.date.available: 2024-10-13T18:08:46Z
dc.date.issued: 2024
dc.description.abstract: Monte Carlo rendering of translucent objects with heterogeneous scattering properties is often expensive in terms of both memory and computation. If the scattering properties are described by a 3D texture, memory consumption is high. If we do path tracing and use a high dynamic range lighting environment, the computational cost of rendering can easily become significant. We propose a compact and efficient neural method for representing and rendering the appearance of heterogeneous translucent objects. Instead of assuming only surface variation of optical properties, our method represents the appearance of a full object, taking its geometry and volumetric heterogeneities into account. This is similar to a neural radiance field, but our representation works for an arbitrary distant lighting environment. In a sense, we present a version of neural precomputed radiance transfer that captures relighting of heterogeneous translucent objects. We use a multi-layer perceptron (MLP) with skip connections to represent the appearance of an object as a function of spatial position, direction of observation, and direction of incidence. The latter is treated as a directional light incident across the entire non-self-shadowed part of the object. We demonstrate the ability of our method to compactly store highly complex materials with high accuracy when compared to reference images of the represented object in unseen lighting environments. Compared with path tracing of a heterogeneous light scattering volume behind a refractive interface, our method more easily enables importance sampling of the directions of incidence and can be integrated into existing rendering frameworks while achieving interactive frame rates.
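The abstract describes the core representation: an MLP with skip connections mapping spatial position, view direction, and light direction to outgoing appearance. The following is only a minimal illustrative sketch of that query interface, not the authors' network; all sizes (9 inputs, width 32, depth 4, skip at the middle layer, RGB output) and the `SkipMLP` name are assumptions chosen for brevity, and the weights are random rather than trained.

```python
import math
import random

random.seed(0)

def linear(x, w, b):
    # y = W x + b for a list-of-lists weight matrix
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def init(n_out, n_in):
    # Uniform fan-in initialization (placeholder; a trained net loads weights)
    s = 1.0 / math.sqrt(n_in)
    return ([[random.uniform(-s, s) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

class SkipMLP:
    """Tiny MLP that re-injects the raw input at a middle layer (skip connection),
    as in NeRF-style architectures."""
    def __init__(self, n_in=9, width=32, depth=4, n_out=3):
        self.n_in, self.depth = n_in, depth
        self.layers = []
        d = n_in
        for i in range(depth):
            if i == depth // 2:      # skip connection: input concatenated again
                d += n_in
            self.layers.append(init(width, d))
            d = width
        self.head = init(n_out, d)   # linear RGB output head

    def forward(self, x):
        h = x
        for i, (w, b) in enumerate(self.layers):
            if i == self.depth // 2:
                h = h + x            # list concatenation = skip connection
            h = relu(linear(h, w, b))
        return linear(h, *self.head)

# One query: 3D position, view direction, light direction -> RGB appearance
net = SkipMLP()
rgb = net.forward([0.1, 0.2, 0.3,  0.0, 0.0, 1.0,  0.5, 0.5, 0.7071])
print(len(rgb))  # 3 channels
```

In the paper's setting, one such query per shading point and per sampled incident direction replaces a full subsurface light-transport simulation, which is what makes importance sampling of the incident direction and interactive frame rates feasible.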
dc.description.number: 7
dc.description.sectionheaders: Rendering and Lighting II
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 43
dc.identifier.doi: 10.1111/cgf.15234
dc.identifier.issn: 1467-8659
dc.identifier.pages: 13 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.15234
dc.identifier.uri: https://diglib.eg.org/handle/10.1111/cgf15234
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Reflectance modeling; Neural networks
dc.title: NeuPreSS: Compact Neural Precomputed Subsurface Scattering for Distant Lighting of Heterogeneous Translucent Objects
Files
Original bundle (2 files)
Name: cgf15234.pdf
Size: 83.78 MB
Format: Adobe Portable Document Format
Name: paper1511_mm.mp4
Size: 140.74 MB
Format: Video MP4