Browsing by Author "Raman, Shanmuganathan"
Item Hand Shadow Art: A Differentiable Rendering Perspective (The Eurographics Association, 2023)
Gangopadhyay, Aalok; Singh, Prajwal; Tiwari, Ashish; Raman, Shanmuganathan; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Shadow art is an exciting form of sculptural art that produces captivating artistic effects through the 2D shadows cast by 3D shapes. Hand shadows, also known as shadow puppetry or shadowgraphy, involve creating various shapes and figures with one's hands and fingers to cast meaningful shadows on a wall. In this work, we propose a differentiable rendering-based approach to deform hand models so that they cast a shadow consistent with a desired target image and the associated lighting configuration. We showcase results for shadows cast by a pair of hands and for the interpolation of hand poses between two desired shadow images. We believe that this work will be a useful tool for the graphics community.

Item Search Me Knot, Render Me Knot: Embedding Search and Differentiable Rendering of Knots in 3D (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Gangopadhyay, Aalok; Gupta, Paras; Sharma, Tarun; Singh, Prajwal; Raman, Shanmuganathan; Hu, Ruizhen; Lefebvre, Sylvain
We introduce the problem of knot-based inverse perceptual art. Given multiple target images and their corresponding viewing configurations, the objective is to find a 3D knot-based tubular structure whose appearance resembles the target images when viewed from the specified viewing configurations. To solve this problem, we first design a differentiable rendering algorithm for tubular knots embedded in 3D under arbitrary perspective camera configurations. Using this differentiable rendering algorithm, we search over the space of knot configurations to find the ideal knot embedding. We represent knot embeddings via homeomorphisms of the desired template knot, where the weights of an invertible neural network parametrize the homeomorphisms.
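The idea of parameterizing a knot deformation by an invertible network can be illustrated with a minimal additive coupling layer (a hypothetical sketch; the layer form, shapes, and names are illustrative assumptions, not the paper's architecture). Because the coupling map only shifts one coordinate by a function of the others, it is exactly invertible, so the deformed curve remains a homeomorphic image of the template.

```python
import numpy as np

def coupling_forward(points, W, b):
    # Additive coupling: shift the x-coordinate by a function of (y, z).
    # The map is invertible by construction, for any W and b.
    x, yz = points[:, :1], points[:, 1:]
    shift = np.tanh(yz @ W + b)          # shape (N, 1)
    return np.hstack([x + shift, yz])

def coupling_inverse(points, W, b):
    # Exact inverse: recompute the same shift from the untouched (y, z).
    x, yz = points[:, :1], points[:, 1:]
    shift = np.tanh(yz @ W + b)
    return np.hstack([x - shift, yz])

# Template knot: a circle (unknot) embedded in 3D.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
template = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 1)), rng.normal(size=(1,))

deformed = coupling_forward(template, W, b)
recovered = coupling_inverse(deformed, W, b)
print(np.allclose(recovered, template))  # True: the deformation is invertible
```

In an optimization loop, the layer weights would be updated by gradients of an image-space loss; stacking several such layers (alternating which coordinate is shifted) yields a more expressive invertible deformation.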
Our approach is fully differentiable, making it possible to find the ideal 3D tubular structure for the desired perceptual art using gradient-based optimization. We propose several loss functions that impose additional physical constraints, enforcing that the tube is free of self-intersection, lies within a predefined region of space, respects the physical bending limits of the tube material, and keeps the material cost within a specified budget. Our results demonstrate that the knot representation is highly expressive and produces impressive results even for challenging target images, under both single-view and multiple-view constraints. Through an extensive ablation study, we show that each proposed loss function effectively ensures physical realizability. We also construct a real-world 3D-printed object to demonstrate the practical utility of our approach.

Item SS-SfP: Neural Inverse Rendering for Self Supervised Shape from (Mixed) Polarization (The Eurographics Association, 2023)
Tiwari, Ashish; Raman, Shanmuganathan; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
We present a novel inverse rendering-based framework to estimate the 3D shape (per-pixel surface normals and depth) of objects and scenes from single-view polarization images, a problem popularly known as Shape from Polarization (SfP). Existing physics-based and learning-based methods for SfP operate under certain restrictions: (a) purely diffuse or purely specular reflections, which are seldom found on real surfaces; (b) availability of ground-truth surface normals for direct supervision, which are hard to acquire and limited by the scanner's resolution; and (c) a known refractive index.
To overcome these restrictions, we first learn to separate the partially polarized diffuse and specular reflection components, which we call reflectance cues, based on a modified polarization reflection model. We then estimate shape under mixed polarization through an inverse-rendering-based self-supervised deep learning framework called SS-SfP, guided by the polarization data and the estimated reflectance cues. Furthermore, we obtain the refractive index as a non-linear least squares solution. Through extensive quantitative and qualitative evaluation, we establish the efficacy of the proposed framework on simple single-object scenes from the DeepSfP dataset and complex in-the-wild scenes from the SPW dataset in an entirely self-supervised setting. To the best of our knowledge, this is the first learning-based approach to address SfP under mixed polarization in a completely self-supervised framework. Code will be made publicly available.

Item TreeGCN-ED: A Tree-Structured Graph-Based Autoencoder Framework For Point Cloud Processing (The Eurographics Association, 2023)
Singh, Prajwal; Tiwari, Ashish; Sadekar, Kaustubh; Raman, Shanmuganathan; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Point clouds are a widely used representation for storing 3D geometric data. Several methods have been proposed for processing point clouds in tasks such as 3D shape classification and clustering. This work presents a tree-structured autoencoder framework that generates robust embeddings of point clouds through hierarchical information aggregation using graph convolution. We visualize a t-SNE map to highlight the ability of the learned embeddings to distinguish between different object classes. We further demonstrate the robustness of these embeddings in applications such as point cloud interpolation, completion, and single-image-based point cloud reconstruction. The anonymized code is available for research purposes.
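Hierarchical information aggregation over a tree with graph convolution can be sketched minimally as follows (a toy illustration under stated assumptions: the mean-over-children rule, the weight shapes, and all names are hypothetical, not the TreeGCN-ED architecture). Each node combines its own feature with an aggregate of its children's features through shared linear maps and a nonlinearity.

```python
import numpy as np

def tree_aggregate(features, children, W_self, W_child):
    # One hierarchical aggregation step over a tree:
    # each node mixes its own feature with the mean of its
    # children's features via shared weights, then applies ReLU.
    out = np.zeros((features.shape[0], W_self.shape[1]))
    for node, kids in children.items():
        agg = features[kids].mean(axis=0) if kids else np.zeros(features.shape[1])
        out[node] = np.maximum(features[node] @ W_self + agg @ W_child, 0.0)
    return out

# Toy tree: node 0 is the root with two leaf children, 1 and 2.
children = {0: [1, 2], 1: [], 2: []}
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [2.0, 2.0]])

rng = np.random.default_rng(1)
W_self = rng.normal(size=(2, 4))
W_child = rng.normal(size=(2, 4))

embedding = tree_aggregate(features, children, W_self, W_child)
print(embedding.shape)  # (3, 4)
```

Repeating this step level by level, from leaves to root, produces the kind of hierarchical embedding an encoder can feed to downstream tasks such as classification or interpolation.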