Browsing by Author "Raman, Shanmuganathan"

Now showing 1 - 5 of 5
  • Hand Shadow Art: A Differentiable Rendering Perspective
    (The Eurographics Association, 2023) Gangopadhyay, Aalok; Singh, Prajwal; Tiwari, Ashish; Raman, Shanmuganathan; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Shadow art is an exciting form of sculptural art that produces captivating artistic effects through the 2D shadows cast by 3D shapes. Hand shadows, also known as shadow puppetry or shadowgraphy, involve creating various shapes and figures with one's hands and fingers to cast meaningful shadows on a wall. In this work, we propose a differentiable rendering-based approach that deforms hand models so that they cast a shadow consistent with a desired target image and the associated lighting configuration (a schematic of this kind of optimization loop is sketched after the listing). We showcase shadows cast by a pair of hands and the interpolation of hand poses between two desired shadow images. We believe that this work will be a useful tool for the graphics community.
  • Search Me Knot, Render Me Knot: Embedding Search and Differentiable Rendering of Knots in 3D
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Gangopadhyay, Aalok; Gupta, Paras; Sharma, Tarun; Singh, Prajwal; Raman, Shanmuganathan; Hu, Ruizhen; Lefebvre, Sylvain
    We introduce the problem of knot-based inverse perceptual art. Given multiple target images and their corresponding viewing configurations, the objective is to find a 3D knot-based tubular structure whose appearance resembles the target images when viewed from the specified viewing configurations. To solve this problem, we first design a differentiable rendering algorithm for rendering tubular knots embedded in 3D for arbitrary perspective camera configurations. Utilizing this differentiable rendering algorithm, we search over the space of knot configurations to find the ideal knot embedding. We represent the knot embeddings via homeomorphisms of the desired template knot, where the weights of an invertible neural network parametrize the homeomorphisms. Our approach is fully differentiable, making it possible to find the ideal 3D tubular structure for the desired perceptual art using gradient-based optimization (see the sketch after this listing). We propose several loss functions that impose additional physical constraints, enforcing that the tube is free of self-intersection, lies within a predefined region of space, satisfies the physical bending limits of the tube material, and keeps the material cost within a specified budget. Our results demonstrate that the knot representation is highly expressive and handles even challenging target images under both single-view and multiple-view constraints. Through an extensive ablation study, we show that each proposed loss function effectively ensures physical realizability. We construct a real-world 3D-printed object to demonstrate the practical utility of our approach.
  • SS-SfP: Neural Inverse Rendering for Self Supervised Shape from (Mixed) Polarization
    (The Eurographics Association, 2023) Tiwari, Ashish; Raman, Shanmuganathan; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We present a novel inverse rendering-based framework to estimate the 3D shape (per-pixel surface normals and depth) of objects and scenes from single-view polarization images, a problem popularly known as Shape from Polarization (SfP). Existing physics-based and learning-based methods for SfP operate under restrictive assumptions: (a) purely diffuse or purely specular reflection, which is rarely the case for real surfaces, (b) availability of ground-truth surface normals for direct supervision, which are hard to acquire and limited by the scanner's resolution, and (c) a known refractive index. To overcome these restrictions, we start by learning to separate the partially polarized diffuse and specular reflection components, which we call reflectance cues, based on a modified polarization reflection model. We then estimate shape under mixed polarization through an inverse-rendering-based self-supervised deep learning framework, SS-SfP, guided by the polarization data and the estimated reflectance cues (the standard polarization cues involved are sketched after the listing). Furthermore, we obtain the refractive index as a non-linear least squares solution. Through extensive quantitative and qualitative evaluation, we establish the efficacy of the proposed framework on simple single-object scenes from the DeepSfP dataset and complex in-the-wild scenes from the SPW dataset in an entirely self-supervised setting. To the best of our knowledge, this is the first learning-based approach to address SfP under mixed polarization in a completely self-supervised framework. Code will be made publicly available.
  • TensoIS: A Step Towards Feed-Forward Tensorial Inverse Subsurface Scattering for Perlin Distributed Heterogeneous Media
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Tiwari, Ashish; Bhardwaj, Satyam; Bachwana, Yash; Sahu, Parag Sarvoday; Ali, T. M. Feroz; Chintalapati, Bhargava; Raman, Shanmuganathan; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Estimating scattering parameters of heterogeneous media from images is a severely under-constrained and challenging problem. Most existing approaches model the BSSRDF either through analysis-by-synthesis, approximating complex path integrals, or using differentiable volume rendering techniques to account for heterogeneity. Only a few studies have applied learning-based methods to estimate subsurface scattering parameters, and they assume homogeneous media. To our knowledge, no specific distribution is known that explicitly models heterogeneous scattering parameters in the real world. Notably, procedural noise models such as Perlin and Fractal Perlin noise have been effective in representing intricate heterogeneities of natural, organic, and inorganic surfaces. Leveraging this, we first create HeteroSynth, a synthetic dataset comprising photorealistic images of heterogeneous media whose scattering parameters are modeled using Fractal Perlin noise. Furthermore, we propose Tensorial Inverse Scattering (TensoIS), a learning-based feed-forward framework to estimate these Perlin-distributed heterogeneous scattering parameters from sparse multi-view image observations. Instead of directly predicting the 3D scattering parameter volume, TensoIS uses learnable low-rank tensor components to represent the scattering volume (a minimal low-rank volume parameterization is sketched after the listing). We evaluate TensoIS on unseen heterogeneous variations over shapes from the HeteroSynth test set, on smoke and cloud geometries obtained from open-source realistic volumetric simulations, and on real-world samples to establish its effectiveness for inverse scattering. Overall, this study explores the Perlin noise distribution, given the lack of any such well-defined distribution in the literature, to potentially model real-world heterogeneous scattering in a feed-forward manner. Project Page: https://yashbachwana.github.io/TensoIS/
  • TreeGCN-ED: A Tree-Structured Graph-Based Autoencoder Framework For Point Cloud Processing
    (The Eurographics Association, 2023) Singh, Prajwal; Tiwari, Ashish; Sadekar, Kaustubh; Raman, Shanmuganathan; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Point clouds are a widely used representation for storing 3D geometric data. Several methods have been proposed for processing point clouds for tasks such as 3D shape classification and clustering. This work presents a tree-structured autoencoder framework that generates robust embeddings of point clouds through hierarchical information aggregation using graph convolution (a much-simplified autoencoder of this kind is sketched after the listing). We visualize the t-SNE map to highlight the ability of the learned embeddings to distinguish between different object classes. We further demonstrate the robustness of these embeddings in applications such as point cloud interpolation, completion, and single-image-based point cloud reconstruction. The anonymized code is available here for research purposes.
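
For "Hand Shadow Art: A Differentiable Rendering Perspective" (first item above), the following is a minimal sketch of the kind of optimization loop the abstract describes, not the authors' implementation: hand-pose parameters are updated by gradient descent so that a differentiably rendered shadow matches a target image. The linear point "hand model", the Gaussian-splat shadow renderer, the resolution, the learning rate, and the random target are all placeholder assumptions.

# Illustrative sketch only (not the authors' code). A toy "hand model" maps an
# 8-D pose vector to 3D points; a soft Gaussian-splat projection stands in for
# the differentiable shadow renderer; gradient descent fits the target shadow.
import torch

torch.manual_seed(0)
H = W = 64                                    # shadow image resolution (placeholder)
basis = torch.randn(8, 200, 3)                # toy linear "hand" shape basis
target = (torch.rand(H, W) > 0.7).float()     # stand-in target shadow image

def hand_points(pose):
    # Toy hand model: pose (8,) -> 200 points in 3D via a linear basis.
    return torch.einsum('p,pnk->nk', pose, basis)

def soft_shadow(points, sigma=0.05):
    # Differentiable "shadow": orthographically project the points onto the
    # image plane and splat each as a Gaussian; combine with a soft union.
    xy = torch.tanh(points[:, :2])                        # keep inside [-1, 1]
    ys, xs = torch.linspace(-1, 1, H), torch.linspace(-1, 1, W)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')
    grid = torch.stack([gx, gy], dim=-1)                  # (H, W, 2)
    d2 = ((grid[None] - xy[:, None, None]) ** 2).sum(-1)  # (N, H, W)
    occ = torch.exp(-d2 / sigma)                          # per-point occupancy
    return 1.0 - torch.prod(1.0 - occ, dim=0)             # soft union of splats

pose = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([pose], lr=0.05)
for step in range(200):
    opt.zero_grad()
    shadow = soft_shadow(hand_points(pose))
    loss = torch.nn.functional.binary_cross_entropy(
        shadow.clamp(1e-6, 1 - 1e-6), target)
    loss.backward()
    opt.step()
print('final shadow loss:', loss.item())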
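
For "Search Me Knot, Render Me Knot" (second item above), the sketch below illustrates the overall recipe from the abstract under simplifying assumptions, not the authors' implementation: a template trefoil knot is deformed through invertible (RealNVP-style) coupling layers, and the deformation is optimized with a stand-in image term (a 2D chamfer distance to target points, replacing the paper's differentiable tube renderer) plus simple bending, bounding-region, and material-budget penalties; the self-intersection penalty is omitted. All weights, sizes, and the toy target are assumptions.

# Illustrative sketch only (not the authors' code).
import math
import torch

torch.manual_seed(0)

# Template trefoil knot sampled as a closed polyline (400 points in 3D).
t = torch.linspace(0.0, 2.0 * math.pi, 400)
template = torch.stack([torch.sin(t) + 2 * torch.sin(2 * t),
                        torch.cos(t) - 2 * torch.cos(2 * t),
                        -torch.sin(3 * t)], dim=1) / 3.0

# Toy 2D target: points on a circle (stands in for a target silhouette).
a = torch.linspace(0.0, 2.0 * math.pi, 300)
target2d = 0.7 * torch.stack([torch.cos(a), torch.sin(a)], dim=1)

class Coupling(torch.nn.Module):
    # RealNVP-style affine coupling on 3D points: two coordinates are scaled
    # and shifted conditioned on the third, so the map is invertible; the
    # output coordinates are rolled so successive layers condition differently.
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4))
    def forward(self, x):
        cond, rest = x[:, :1], x[:, 1:]
        scale, shift = self.net(cond).chunk(2, dim=1)
        rest = rest * torch.exp(0.2 * torch.tanh(scale)) + shift
        return torch.cat([rest, cond], dim=1)

homeo = torch.nn.Sequential(Coupling(), Coupling(), Coupling())
opt = torch.optim.Adam(homeo.parameters(), lr=1e-3)

def chamfer(p, q):
    d = torch.cdist(p, q)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

for step in range(500):
    opt.zero_grad()
    knot = homeo(template)                          # deformed knot embedding
    image_loss = chamfer(knot[:, :2], target2d)     # stand-in for the render loss
    seg = knot[1:] - knot[:-1]
    bend_loss = (seg[1:] - seg[:-1]).pow(2).sum()   # discrete bending penalty
    region_loss = torch.relu(knot.abs() - 1.0).pow(2).sum()   # stay in a box
    budget_loss = torch.relu(seg.norm(dim=1).sum() - 8.0)     # material budget
    loss = image_loss + 0.1 * bend_loss + region_loss + 0.01 * budget_loss
    loss.backward()
    opt.step()
print('final loss:', loss.item())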
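
For "SS-SfP" (third item above), the sketch below shows the standard polarization cues used in Shape-from-Polarization pipelines (degree and angle of linear polarization from four polarizer-angle captures) and one possible self-supervision term that compares cues re-rendered from estimated normals and a refractive index against the measured ones. It is not the authors' code: a purely diffuse polarization model is used for brevity, whereas SS-SfP handles mixed polarization, and the toy data, weights, and pi-ambiguity handling are assumptions.

# Illustrative sketch only (not the authors' code). Standard SfP preprocessing
# plus a diffuse-only self-supervision term; SS-SfP itself handles mixed
# diffuse + specular reflection and estimates per-pixel normals with a network.
import torch

def stokes_cues(i0, i45, i90, i135, eps=1e-6):
    # Degree and angle of linear polarization from four polarizer angles.
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = torch.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    aolp = 0.5 * torch.atan2(s2, s1)
    return dolp, aolp

def diffuse_dop(zenith, n):
    # Classical Fresnel-based diffuse degree of polarization as a function of
    # the zenith angle and the refractive index n.
    s2 = torch.sin(zenith) ** 2
    num = (n - 1.0 / n) ** 2 * s2
    den = (2.0 + 2.0 * n ** 2 - (n + 1.0 / n) ** 2 * s2
           + 4.0 * torch.cos(zenith) * torch.sqrt(n ** 2 - s2))
    return num / den

def self_supervised_loss(normals, n, meas_dolp, meas_aolp):
    # Compare cues predicted from the estimated normals with the measured
    # cues; cos(2*(phi - aolp)) absorbs the pi-ambiguity of the angle.
    zenith = torch.acos(normals[..., 2].clamp(-1 + 1e-6, 1 - 1e-6))
    azimuth = torch.atan2(normals[..., 1], normals[..., 0])
    dolp_loss = (diffuse_dop(zenith, n) - meas_dolp).abs().mean()
    aolp_loss = (1.0 - torch.cos(2.0 * (azimuth - meas_aolp))).mean()
    return dolp_loss + aolp_loss

# Toy usage with random data standing in for real polarization captures.
h, w = 8, 8
captures = torch.rand(4, h, w)                          # I_0, I_45, I_90, I_135
dolp, aolp = stokes_cues(*captures)
normals = torch.nn.functional.normalize(torch.randn(h, w, 3), dim=-1)
refr = torch.tensor(1.5, requires_grad=True)            # refractive index
loss = self_supervised_loss(normals, refr, dolp, aolp)
loss.backward()                                         # gradient w.r.t. refr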
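
For "TensoIS" (fourth item above), the sketch below illustrates the general idea of representing a 3D parameter volume with learnable low-rank tensor components rather than a dense voxel grid, decoded feed-forward from an image feature. It is not the authors' architecture: the CP-style factorization, the rank, the volume resolution, the parameter channels, and the global-mean "encoder" are all placeholder assumptions.

# Illustrative sketch only (not the authors' code). A 3D volume of scattering
# parameters is represented by rank-R CP-style tensor components (three 1D
# factor matrices plus a channel mixer) predicted feed-forward from an image
# feature vector.
import torch

class CPVolume(torch.nn.Module):
    def __init__(self, feat_dim, depth=32, rank=8, channels=3):
        super().__init__()
        self.depth, self.rank, self.channels = depth, rank, channels
        # One head per axis predicts that axis' (rank, depth) factor matrix,
        # plus a head that mixes the rank components into parameter channels.
        self.axis_heads = torch.nn.ModuleList(
            [torch.nn.Linear(feat_dim, rank * depth) for _ in range(3)])
        self.channel_head = torch.nn.Linear(feat_dim, rank * channels)

    def forward(self, feat):
        D, R, C = self.depth, self.rank, self.channels
        fx, fy, fz = [head(feat).reshape(R, D) for head in self.axis_heads]
        mix = self.channel_head(feat).reshape(R, C)
        # Sum over the rank index of outer products of the per-axis factors.
        vol = torch.einsum('rx,ry,rz,rc->xyzc', fx, fy, fz, mix)
        return torch.nn.functional.softplus(vol)    # keep parameters positive

# Toy feed-forward use: pool sparse multi-view images into one feature vector
# (a real system would use a learned encoder) and decode a low-rank volume.
views = torch.rand(4, 3, 64, 64)                    # 4 input views
feat = views.mean(dim=(0, 2, 3))                    # stand-in "encoder": (3,)
volume = CPVolume(feat_dim=3)(feat)                 # (32, 32, 32, 3) parameters
print(volume.shape)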
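
For "TreeGCN-ED" (fifth item above), the sketch below is a much-simplified point-cloud autoencoder in the same spirit, not the authors' network: a shared-MLP + max-pool encoder stands in for the paper's hierarchical graph-convolution aggregation, and a tree-structured decoder repeatedly branches each node into children before mapping them to 3D points. The layer sizes, branching factor, depth, and the chamfer training objective are assumptions.

# Illustrative sketch only (not the authors' code).
import torch

def mlp(i, o):
    return torch.nn.Sequential(torch.nn.Linear(i, 128), torch.nn.ReLU(),
                               torch.nn.Linear(128, o))

class TreeAutoencoder(torch.nn.Module):
    def __init__(self, latent=128, k=4, levels=4):
        super().__init__()
        self.k = k
        self.encoder = mlp(3, latent)                 # shared per-point MLP
        self.branch = torch.nn.ModuleList(
            [mlp(latent, k * latent) for _ in range(levels)])
        self.to_xyz = mlp(latent, 3)

    def encode(self, pts):                            # pts: (N, 3)
        return self.encoder(pts).max(dim=0).values    # global embedding

    def decode(self, z):
        nodes = z[None, :]                            # start from the root
        for layer in self.branch:                     # each node -> k children
            children = layer(nodes).reshape(-1, z.shape[0])
            nodes = children + nodes.repeat_interleave(self.k, dim=0)
        return self.to_xyz(nodes)                     # (k**levels, 3) points

    def forward(self, pts):
        return self.decode(self.encode(pts))

def chamfer(p, q):
    d = torch.cdist(p, q)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Toy reconstruction step on a random point cloud (4**4 = 256 output points).
model = TreeAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pts = torch.rand(256, 3)
loss = chamfer(model(pts), pts)
loss.backward()
opt.step()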
