Browsing by Author "Rhee, Taehyun"
Item Art-directing Appearance using an Environment Map Latent Space (The Eurographics Association, 2021)
Petikam, Lohit; Chalmers, Andrew; Anjyo, Ken; Rhee, Taehyun
Editors: Lee, Sung-Hee; Zollmann, Stefanie; Okabe, Makoto; Wünsche, Burkhard
In look development, environment maps (EMs) are used to verify 3D appearance under varied lighting (e.g., overcast, sunny, and indoor). Artists can assign only one fixed material, making it laborious to edit appearance individually for every EM. Artists can art-direct material and lighting in film post-production, but this is impossible in dynamic real-time games and live augmented reality (AR), where environment lighting is unpredictable. We present a new workflow to customize appearance variation across a wide range of EM lighting for live applications. Appearance edits can be predefined and then automatically adapted to environment lighting changes. We achieve this by learning a novel 2D latent space of varied EM lighting. The latent space lets artists browse EMs in a semantically meaningful 2D view. For different EMs, artists can paint different material and lighting parameter values directly onto the latent space. We robustly encode new EMs into the same space for automatic look-up of the desired appearance. This solves the new problem of preserving art direction in live applications without any artist intervention.

Item Illumination Space: A Feature Space for Radiance Maps (The Eurographics Association, 2020)
Chalmers, Andrew; Zickler, Todd; Rhee, Taehyun
Editors: Lee, Sung-hee; Zollmann, Stefanie; Okabe, Makoto; Wuensche, Burkhard
Radiance maps (RMs) capture the lighting properties of real-world environments. Databases of RMs are useful for various rendering applications such as look development, live-action compositing, mixed reality, and machine learning, but such databases are of little use if they cannot be organized in a meaningful way. To address this, we introduce the illumination space, a feature space that arranges RM databases by their illumination properties. We avoid manual labeling by automatically extracting features from an RM that provide a concise and semantically meaningful representation of its typical lighting effects. This is made possible by the following contributions: a method to automatically extract a small set of dominant and ambient lighting properties from RMs, and a low-dimensional (5D) light feature vector summarizing these properties to form the illumination space. Our method is motivated by how an RM illuminates the scene, rather than by the textural content of the RM itself.
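To make the look-up step of the first item above concrete, here is a minimal Python sketch: painted edits live at 2D latent coordinates, and a new EM is encoded into the same space to blend the nearest edits. `encode_em` is a hypothetical placeholder for the paper's learned encoder, and the inverse-distance blend is an illustrative assumption rather than the authors' interpolation scheme.

```python
import numpy as np

def encode_em(em: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the paper's learned EM -> 2D latent encoder."""
    return np.array([em.mean(), em.std()])       # crude lighting statistics

def lookup_appearance(latent: np.ndarray,
                      painted_coords: np.ndarray,   # N x 2 painted latent positions
                      painted_params: np.ndarray,   # N x P painted parameter values
                      k: int = 3) -> np.ndarray:
    """Inverse-distance blend of the k nearest painted edits (assumed scheme)."""
    d = np.linalg.norm(painted_coords - latent, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * painted_params[idx]).sum(0) / w.sum()

# At runtime new lighting arrives and the appearance adapts with no artist input.
em = np.random.rand(64, 128, 3).astype(np.float32)  # stand-in environment map
coords = np.random.rand(10, 2)                       # artist-painted latent positions
params = np.random.rand(10, 4)                       # e.g. roughness, tint, ...
print(lookup_appearance(encode_em(em), coords, params))
```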
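Similarly, the second item's feature extraction can be sketched for an equirectangular RM. The particular five features below (dominant direction as two angles, dominant energy, angular spread, ambient energy) are an assumed decomposition for illustration only; the paper's exact feature definitions may differ.

```python
import numpy as np

def illumination_features(rm: np.ndarray) -> np.ndarray:
    """rm: H x W x 3 linear-radiance equirectangular map -> assumed 5D feature."""
    h, w, _ = rm.shape
    lum = rm @ np.array([0.2126, 0.7152, 0.0722])   # per-pixel luminance

    # Solid-angle weights: each equirectangular row is weighted by sin(theta).
    theta = (np.arange(h) + 0.5) / h * np.pi         # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi       # azimuth per column
    weights = np.sin(theta)[:, None] * np.ones((h, w))
    wl = lum * weights
    total = wl.sum() + 1e-8

    # Dominant light: luminance-weighted mean direction over the sphere.
    dirs = np.stack([np.sin(theta)[:, None] * np.cos(phi)[None, :],
                     np.sin(theta)[:, None] * np.sin(phi)[None, :],
                     np.cos(theta)[:, None] * np.ones((1, w))], axis=-1)
    mean_dir = (wl[..., None] * dirs).reshape(-1, 3).sum(0) / total
    spread = 1.0 - np.linalg.norm(mean_dir)          # 0 = point light, 1 = uniform
    d = mean_dir / (np.linalg.norm(mean_dir) + 1e-8)
    dom_theta = np.arccos(np.clip(d[2], -1.0, 1.0))
    dom_phi = np.arctan2(d[1], d[0])

    # Split energy into a bright "dominant" tail and an "ambient" remainder.
    thresh = np.percentile(lum, 99)
    dominant = wl[lum >= thresh].sum() / weights.sum()
    ambient = wl[lum < thresh].sum() / weights.sum()

    return np.array([dom_theta, dom_phi, dominant, spread, ambient])
```

Feature vectors like this could then be compared with ordinary Euclidean distance to organize an RM database.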
Item Neural Screen Space Rendering of Direct Illumination (The Eurographics Association, 2021)
Suppan, Christian; Chalmers, Andrew; Zhao, Junhong; Doronin, Alex; Rhee, Taehyun
Editors: Lee, Sung-Hee; Zollmann, Stefanie; Okabe, Makoto; Wünsche, Burkhard
Neural rendering is a class of methods that use deep learning to produce novel images of scenes from more limited information than traditional rendering methods. This is useful for information-scarce applications such as mixed reality or semantic photo synthesis, but it comes at the cost of control over the final appearance. We introduce the Neural Direct-illumination Renderer (NDR), a neural screen-space renderer capable of rendering direct-illumination images of any geometry with opaque materials under distant illumination. The NDR uses screen-space buffers describing material, geometry, and illumination as inputs, providing direct control over the output. We introduce the use of intrinsic image decomposition to allow a Convolutional Neural Network (CNN) to learn a mapping from a large number of pixel buffers to rendered images. The NDR predicts shading maps, which are then combined with albedo maps to create the rendered image. We show that the NDR produces plausible images that can be edited by modifying the input maps, and that it marginally outperforms the state of the art while providing more functionality.
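The intrinsic-decomposition idea in the last abstract (predict a shading map, then multiply by albedo) can be sketched in a few lines of PyTorch. The toy network below stands in for the NDR's actual architecture, and the nine-channel buffer layout is an assumption for illustration.

```python
import torch
import torch.nn as nn

class ToyShadingNet(nn.Module):
    """Toy stand-in for the NDR: screen-space buffers -> shading map."""
    def __init__(self, in_channels: int = 9):     # e.g. normals + depth + light (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Softplus(),  # non-negative shading
        )

    def forward(self, buffers: torch.Tensor) -> torch.Tensor:
        return self.net(buffers)

buffers = torch.rand(1, 9, 128, 128)   # screen-space G-buffers (stand-in data)
albedo = torch.rand(1, 3, 128, 128)    # albedo buffer
shading = ToyShadingNet()(buffers)     # predicted shading map
image = shading * albedo               # intrinsic composition: shading x albedo
```

Because the albedo enters only in the final multiplication, editing the input material buffers changes the output directly, which is the source of the controllability the abstract describes.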