34-Issue 4
Browsing 34-Issue 4 by Subject "shading" (showing 5 of 5 results)
Item: Extracting Microfacet-based BRDF Parameters from Arbitrary Materials with Power Iterations
The Eurographics Association and John Wiley & Sons Ltd., 2015
Authors: Dupuy, Jonathan; Heitz, Eric; Iehl, Jean-Claude; Poulin, Pierre; Ostromoukhov, Victor
Editors: Jaakko Lehtinen and Derek Nowrouzezahrai
Abstract: We introduce a novel fitting procedure that takes as input an arbitrary material, possibly anisotropic, and automatically converts it to a microfacet BRDF. Our algorithm is based on the property that the distribution of microfacets may be retrieved by solving an eigenvector problem built solely from backscattering samples. We show that the eigenvector associated with the largest eigenvalue is always the only solution to this problem, and we compute it using the power iteration method. This approach is straightforward to implement, much faster to compute, and considerably more robust than solutions based on nonlinear optimization. In addition, we provide simple procedures to convert our fits into both Beckmann and GGX roughness parameters, and discuss the advantages of microfacet slope space for making our fits editable. We apply our method to measured materials from two large databases that include anisotropic materials, and demonstrate the benefits of spatially varying roughness on texture-mapped geometric models.
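Note: the fitting step above reduces to finding the dominant eigenvector of a matrix built from backscattering samples. A minimal sketch of plain power iteration (with a random positive semi-definite stand-in for that matrix; not the authors' implementation, and the matrix assembly from BRDF samples is not shown):

```python
import numpy as np

def power_iteration(K, num_iters=1000, tol=1e-10):
    """Dominant eigenvector of a square matrix K via power iteration."""
    n = K.shape[0]
    v = np.full(n, 1.0 / np.sqrt(n))        # uniform, unit-length initial guess
    for _ in range(num_iters):
        w = K @ v
        w /= np.linalg.norm(w)              # re-normalize every step
        if np.linalg.norm(w - v) < tol:     # converged
            v = w
            break
        v = w
    return v

# Toy stand-in: a random symmetric positive semi-definite matrix. In the paper,
# the matrix comes from backscattering BRDF samples and its dominant eigenvector
# encodes the microfacet normal distribution; neither step is reproduced here.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
K = A @ A.T
d = power_iteration(K)
print(np.linalg.norm(K @ d - (d @ K @ d) * d))   # small residual => eigenvector
```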
Item: Path-space Motion Estimation and Decomposition for Robust Animation Filtering
The Eurographics Association and John Wiley & Sons Ltd., 2015
Authors: Zimmer, Henning; Rousselle, Fabrice; Jakob, Wenzel; Wang, Oliver; Adler, David; Jarosz, Wojciech; Sorkine-Hornung, Olga; Sorkine-Hornung, Alexander
Editors: Jaakko Lehtinen and Derek Nowrouzezahrai
Abstract: Renderings of animation sequences with physics-based Monte Carlo light transport simulations are exceedingly costly to generate frame by frame, yet much of this computation is highly redundant due to the strong coherence in space, time, and among samples. A promising approach pursued in prior work entails subsampling the sequence in space, time, and number of samples, followed by image-based spatio-temporal upsampling and denoising. These methods can provide significant performance gains, though major issues remain: first, in a multiple-scattering simulation, the final pixel color is the composite of many different light transport phenomena, and this conflicting information causes artifacts in image-based methods. Second, motion vectors are needed to establish correspondence between the pixels in different frames, but it is unclear how to obtain them for most kinds of light paths (e.g., an object seen through a curved glass panel). To reduce these ambiguities, we propose a general decomposition framework, where the final pixel color is separated into components corresponding to disjoint subsets of the space of light paths. Each component is accompanied by motion vectors and other auxiliary features such as reflectance and surface normals. The motion vectors of specular paths are computed using a temporal extension of manifold exploration, and the remaining components use a specialized variant of optical flow. Our experiments show that this decomposition leads to significant improvements in three image-based applications: denoising, spatial upsampling, and temporal interpolation.

Item: Physically Meaningful Rendering using Tristimulus Colours
The Eurographics Association and John Wiley & Sons Ltd., 2015
Authors: Meng, Johannes; Simon, Florian; Hanika, Johannes; Dachsbacher, Carsten
Editors: Jaakko Lehtinen and Derek Nowrouzezahrai
Abstract: In photorealistic image synthesis, the radiative transfer equation is often not solved by simulating every wavelength of light, but instead by computing tristimulus transport, for instance using sRGB primaries as a basis. This choice is convenient because input texture data is usually stored in RGB colour spaces. However, there are problems with this approach which are often overlooked or ignored. By comparing to spectral reference renderings, we show how rendering in tristimulus colour spaces introduces colour shifts in indirect light, violations of energy conservation, and unexpected behaviour in participating media. Furthermore, we introduce a fast method to compute spectra from almost any given XYZ input colour. It creates spectra that match the input colour precisely. Additionally, as in natural reflectance spectra, their energy is smoothly distributed over wide wavelength bands. This method is useful both for upsampling RGB input data when spectral transport is used and as an intermediate step for corrected tristimulus-based transport. Finally, we show how energy conservation can be enforced in RGB by mapping colours to valid reflectances.

Item: Stochastic Soft Shadow Mapping
The Eurographics Association and John Wiley & Sons Ltd., 2015
Authors: Liktor, Gabor; Spassov, Stanislav; Mückl, Gregor; Dachsbacher, Carsten
Editors: Jaakko Lehtinen and Derek Nowrouzezahrai
Abstract: In this paper, we extend the concept of pre-filtered shadow mapping to stochastic rasterization, enabling real-time rendering of soft shadows from planar area lights. Most existing soft shadow mapping methods lose important visibility information by relying on pinhole renderings from an area light source, providing plausible results only for small light sources. Since we sample the entire 4D shadow light field stochastically, we are able to closely approximate shadows of large area lights as well. In order to efficiently reconstruct smooth shadows from this sparse data, we exploit the analogy of soft shadow computation to rendering defocus blur, and introduce a multiplane pre-filtering algorithm. We demonstrate how existing pre-filterable approximations of the visibility function, such as variance shadow mapping, can be extended to four dimensions within our framework.
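Note: as background for the item above, variance shadow mapping (which the paper extends to the 4D shadow light field) estimates visibility from two pre-filtered depth moments via a one-sided Chebyshev bound. A minimal sketch of that classic 2D test only, with made-up moment values; the paper's multiplane, four-dimensional extension is not reproduced here:

```python
import numpy as np

def vsm_visibility(m1, m2, receiver_depth):
    """Chebyshev upper bound on visibility from pre-filtered depth moments.

    m1, m2: filtered first and second depth moments E[z] and E[z^2]
    receiver_depth: depth of the surface being shaded (same units as the map)
    """
    mean = m1
    variance = np.maximum(m2 - m1 * m1, 1e-6)   # clamp to avoid division issues
    d = receiver_depth - mean
    p_max = variance / (variance + d * d)       # one-sided Chebyshev bound
    return np.where(receiver_depth <= mean, 1.0, p_max)

# Toy usage with invented moment values (no real shadow map involved):
m1 = np.array([0.40, 0.40, 0.40])
m2 = np.array([0.17, 0.17, 0.17])
print(vsm_visibility(m1, m2, np.array([0.35, 0.45, 0.80])))
```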
Item: Unifying Color and Texture Transfer for Predictive Appearance Manipulation
The Eurographics Association and John Wiley & Sons Ltd., 2015
Authors: Okura, Fumio; Vanhoey, Kenneth; Bousseau, Adrien; Efros, Alexei A.; Drettakis, George
Editors: Jaakko Lehtinen and Derek Nowrouzezahrai
Abstract: Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color from those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as a change of season (e.g., leaves on bare trees or piles of snow on a street) and flooding.
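Note: the analysis phase described above decides where plain color transfer suffices and where new content must be synthesized. A toy, heavily simplified illustration of that idea only (per-channel statistics matching stands in for a real color transfer, the images are random arrays, and pixel alignment between source and exemplar is assumed; this is not the authors' algorithm):

```python
import numpy as np

def toy_color_transfer(src, ref):
    """Per-channel mean/std matching: a crude stand-in for a real color transfer."""
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return np.clip(out, 0.0, 1.0)

def analysis_mask(source, exemplar, threshold=0.15):
    """Flag pixels where color transfer alone fails to reproduce the exemplar.

    Flagged regions are candidates for synthesizing new content (texture transfer);
    this only illustrates the idea sketched in the abstract, nothing more.
    """
    transferred = toy_color_transfer(source, exemplar)
    error = np.linalg.norm(transferred - exemplar, axis=-1)   # per-pixel RGB error
    return error > threshold

# Toy usage with random "images" (H x W x 3, values in [0, 1]):
rng = np.random.default_rng(1)
source = rng.random((32, 32, 3))
exemplar = rng.random((32, 32, 3))
print(analysis_mask(source, exemplar).mean())   # fraction of pixels flagged
```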