Browsing by Author "Montazeri, Zahra"
Now showing 1 - 3 of 3
Item A Practical and Hierarchical Yarn-based Shading Model for Cloth (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhu, Junqiu; Montazeri, Zahra; Aubry, Jean-Marie; Yan, Ling-Qi; Weidlich, Andrea; Ritschel, Tobias; Weidlich, Andrea
Realistic cloth rendering is a longstanding challenge in computer graphics due to the intricate, hierarchical structure of cloth: fibers form plies, which in turn are combined into yarns, which are then woven or knitted into fabrics. Previous fiber-based models have achieved high-quality close-up renderings, but they suffer from high computational cost, which limits their practicality. In this paper, we propose a novel hierarchical model that analytically aggregates light simulation at the fiber level by building on dual-scattering theory. Based on this, we can perform an efficient simulation of ply and yarn shading. Compared to previous methods, our approach is faster and uses less memory while preserving similar accuracy; we demonstrate both through comparisons with existing fiber-based shading models. Our yarn shading model can be applied to curves or surfaces, making it highly versatile for cloth shading. This duality, paired with its simplicity and flexibility, makes the model particularly useful for film and game production.

Item Practical Ply-Based Appearance Modeling for Knitted Fabrics (The Eurographics Association, 2021) Montazeri, Zahra; Gammelmark, Søren; Jensen, Henrik Wann; Zhao, Shuang; Bousseau, Adrien and McGuire, Morgan
Modeling the geometry and the appearance of knitted fabrics has been challenging due to their complex geometries and interactions with light. Previous surface-based models have difficulty capturing fine-grained knit geometries; micro-appearance models, on the other hand, typically store individual cloth fibers explicitly and are expensive to generate and render.
Further, neither type of model offers the flexibility to accurately capture both the reflection and the transmission of light simultaneously. In this paper, we introduce an efficient technique to generate knit models with user-specified knitting patterns. Our model stores individual knit plies, with fiber-level details depicted using normal and tangent mapping. We evaluate our generated models using a wide array of knitting patterns. Further, we qualitatively compare renderings of our models to photos of real samples.

Item Velocity-Based LOD Reduction in Virtual Reality: A Psychophysical Approach (The Eurographics Association, 2023) Petrescu, David; Warren, Paul A.; Montazeri, Zahra; Pettifer, Steve; Babaei, Vahid; Skouras, Melina
Virtual Reality headsets enable users to explore the environment by performing self-induced movements. The retinal velocity produced by such motion reduces the visual system's ability to resolve fine detail. We measured the impact of self-induced head rotations on the ability to detect quality changes in a realistic 3D model in an immersive virtual reality environment. We varied the Level of Detail (LOD) as a function of rotational head velocity with different degrees of severity. Using a psychophysical method, we asked 17 participants to identify which of two presented intervals contained the higher-quality model under two different maximum-velocity conditions. After fitting psychometric functions to data relating the percentage of correct responses to the aggressiveness of the LOD manipulations, we identified the threshold severity at which participants could reliably (75%) detect the lower-LOD model. Participants accepted an approximately four-fold LOD reduction even in the low maximum-velocity condition without a significant impact on perceived quality, suggesting that there is considerable potential for optimisation when users are moving (increased range of perceptual uncertainty).
Moreover, LOD could be degraded significantly further (around 84%) in the high maximum-velocity condition, suggesting that these effects are indeed speed-dependent.