Browsing by Author "Mitra, Niloy J."
Now showing 1 - 8 of 8
Item: Computational Design and Optimization of Non-Circular Gears (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Xu, Hao; Fu, Tianwen; Song, Peng; Zhou, Mingjun; Fu, Chi-Wing; Mitra, Niloy J.
Editors: Panozzo, Daniele; Assarsson, Ulf
Abstract: We study a general form of gears known as non-circular gears, which can transfer periodic motion with variable speed through their irregular shapes and eccentric rotation centers. Designing functional non-circular gears is nontrivial: the gear pair must have compatible shapes that stay in contact during motion, so that the driver gear can push the follower to rotate with a torque bounded by what the motor can exert. To address this challenge, we model the geometry, kinematics, and dynamics of non-circular gears, formulate the design problem as a shape optimization, and identify the necessary independent variables in the optimization search. Taking a pair of 2D shapes as input, our method optimizes them into gears by locating the rotation center on each shape, minimally modifying each shape to form the gear's boundary, and constructing appropriate teeth for gear meshing. Our optimized gears not only resemble the inputs but can also drive the motion with relatively small torque. We demonstrate our method's usability by generating a rich variety of non-circular gears from various inputs and 3D printing several of them.

Item: Deep Detail Enhancement for Any Garment (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Zhang, Meng; Wang, Tuanfeng; Ceylan, Duygu; Mitra, Niloy J.
Editors: Mitra, Niloy; Viola, Ivan
Abstract: Creating fine garment details requires significant effort and large computational resources. In contrast, a coarse shape may be easy to acquire in many scenarios (e.g., via low-resolution physically based simulation, linear blend skinning driven by skeletal motion, or portable scanners). In this paper, we show how to add rich yet plausible details to a coarse garment geometry in a data-driven manner. Once a parameterization of the garment is given, we formulate the task as a style transfer problem over the space of associated normal maps. To facilitate generalization across garment types and character motions, we introduce a patch-based formulation that produces high-resolution details by matching a Gram-matrix-based style loss, hallucinating geometric details (i.e., wrinkle density and shape). We extensively evaluate our method on a variety of production scenarios and show that it is simple, lightweight, efficient, and generalizes across underlying garment types, sewing patterns, and body motions. Project page: http://geometry.cs.ucl.ac.uk/projects/2021/DeepDetailEnhance/
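For readers curious how a Gram-matrix style loss of the kind mentioned in the Deep Detail Enhancement abstract can look in practice, here is a minimal sketch, assuming a PyTorch/torchvision setup with a pretrained VGG16 feature extractor; the layer choices and class names are illustrative assumptions, not the authors' implementation (which operates patch-wise on garment normal maps).

```python
# Illustrative sketch (not the authors' code): a Gram-matrix style loss computed
# between feature maps of a predicted normal-map patch and a high-resolution
# exemplar patch. The VGG16 backbone and layer indices are assumed choices;
# ImageNet normalization is omitted for brevity.
import torch
import torch.nn.functional as F
import torchvision

def gram_matrix(feat):
    """Channel-wise correlations of a feature map: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

class StyleLoss(torch.nn.Module):
    def __init__(self, layers=(3, 8, 15)):  # early VGG16 conv layers (assumption)
        super().__init__()
        vgg = torchvision.models.vgg16(weights="DEFAULT").features.eval()
        self.slices = torch.nn.ModuleList(vgg[: l + 1] for l in layers)
        for p in self.parameters():
            p.requires_grad_(False)  # frozen feature extractor

    def forward(self, pred_patch, style_patch):
        """Both inputs: (B, 3, H, W) normal-map patches, values roughly in [0, 1]."""
        loss = 0.0
        for slc in self.slices:
            loss = loss + F.mse_loss(gram_matrix(slc(pred_patch)),
                                     gram_matrix(slc(style_patch)))
        return loss
```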
Item: Factored Neural Representation for Scene Understanding (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Wong, Yu-Shiang; Mitra, Niloy J.
Editors: Memari, Pooran; Solomon, Justin
Abstract: A long-standing goal in scene understanding is to obtain interpretable and editable representations that can be constructed directly from a raw monocular RGB-D video, without requiring a specialized hardware setup or priors. The problem is significantly more challenging in the presence of multiple moving and/or deforming objects. Traditional methods have approached the setup with a mix of simplifications, scene priors, pretrained templates, or known deformation models. The advent of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end-to-end optimization to collectively capture geometry, appearance, and object motion. However, current approaches produce global scene encodings, assume multi-view capture with limited or no motion in the scene, and do not facilitate easy manipulation beyond novel view synthesis. In this work, we introduce a factored neural scene representation that can be learned directly from a monocular RGB-D video to produce object-level neural representations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformation (e.g., nonrigid motion). We evaluate our representation against a set of neural approaches on both synthetic and real data to demonstrate that it is efficient, interpretable, and editable (e.g., changing an object's trajectory). Code and data are available at: http://geometry.cs.ucl.ac.uk/projects/2023/factorednerf/

Item: MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Chen, Xuelin; Li, Weiyu; Cohen-Or, Daniel; Mitra, Niloy J.; Chen, Baoquan
Editors: Chaine, Raphaëlle; Kim, Min H.
Abstract: Synthesizing novel views of dynamic humans from stationary monocular cameras is a specialized but desirable setup. It is particularly attractive because it does not require static scenes, controlled environments, or specialized capture hardware. In contrast to techniques that exploit multi-view observations, modeling a dynamic scene from a single view is significantly more under-constrained and ill-posed. In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans in stationary monocular cameras using a 4D continuous time-variant function. We learn the proposed representation by optimizing for a dynamic scene that minimizes the total rendering error over all observed images. At the heart of our work lies a carefully designed optimization scheme, which includes a dedicated initialization step and is constrained by a motion consensus regularization on the estimated motion flow. We extensively evaluate MoCo-Flow on several datasets containing human motions of varying complexity, and compare, both qualitatively and quantitatively, against several baselines and ablated variations of our method, showing the efficacy and merits of the proposed approach. Pretrained model, code, and data will be released for research purposes upon paper acceptance.
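As a rough illustration of the kind of representation the MoCo-Flow abstract describes (a 4D continuous, time-variant function optimized against a rendering loss with a motion-consensus regularizer), here is a minimal PyTorch sketch; the architecture, the flow head, and the exact form of the consensus term are assumptions, and positional encoding and volume rendering are omitted.

```python
# Illustrative sketch (not the authors' code): a 4D time-variant scene function
# queried at (x, y, z, t), plus a loss combining rendering error with an assumed
# motion-consensus penalty on an estimated flow field.
import torch
import torch.nn as nn

class TimeVariantField(nn.Module):
    """Maps a 4D point (x, y, z, t) to (density, RGB) and a 3D scene-flow estimate."""
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.radiance = nn.Linear(hidden, 4)   # density + RGB
        self.flow = nn.Linear(hidden, 3)       # estimated motion at this point/time

    def forward(self, xyzt):
        h = self.trunk(xyzt)
        return self.radiance(h), self.flow(h)

def moco_style_loss(rendered, observed, flow, flow_smoothed, lam=0.1):
    """Rendering error plus a consensus penalty that keeps the estimated flow close
    to a smoothed/aggregated version of itself (assumed form of the regularizer)."""
    render_err = ((rendered - observed) ** 2).mean()
    consensus = ((flow - flow_smoothed) ** 2).mean()
    return render_err + lam * consensus
```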
Item: Neurosymbolic Models for Computer Graphics (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Ritchie, Daniel; Guerrero, Paul; Jones, R. Kenny; Mitra, Niloy J.; Schulz, Adriana; Willis, Karl D. D.; Wu, Jiajun
Editors: Bousseau, Adrien; Theobalt, Christian
Abstract: Procedural models (i.e., symbolic programs that output visual data) are a historically popular method for representing graphics content: vegetation, buildings, textures, etc. They offer many advantages: interpretable design parameters, stochastic variations, high-quality outputs, compact representation, and more. But they also have limitations, such as the difficulty of authoring a procedural model from scratch. More recently, AI-based methods, and especially neural networks, have become popular for creating graphics content. These techniques allow users to directly specify desired properties of the artifact they want to create (via examples, constraints, or objectives), while a search, optimization, or learning algorithm takes care of the details. However, this ease of use comes at a cost, as it is often hard to interpret or manipulate these representations. In this state-of-the-art report, we summarize research on neurosymbolic models in computer graphics: methods that combine the strengths of both AI and symbolic programs to represent, generate, and manipulate visual data. We survey recent work applying these techniques to represent 2D shapes, 3D shapes, and materials and textures. Along the way, we situate each prior work in a unified design space for neurosymbolic models, which helps reveal underexplored areas and opportunities for future research.

Item: RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Wong, Yu-Shiang; Li, Changjian; Nießner, Matthias; Mitra, Niloy J.
Editors: Mitra, Niloy; Viola, Ivan
Abstract: Although surface reconstruction from depth data has made significant advances in recent years, handling changing environments remains a major challenge. This is unsatisfactory, as humans regularly move objects in their environments. Existing solutions focus on a restricted set of objects (e.g., those detected by semantic classifiers), possibly with template meshes, assume a static camera, or mark objects touched by humans as moving. We remove these assumptions by introducing RigidFusion. Our core idea is a novel asynchronous moving-object detection method combined with a modified volumetric fusion. This is achieved by a model-to-frame TSDF decomposition that leverages free-space carving of the tracked depth values of the current frame with respect to the background model at run-time. As output, we produce separate volumetric reconstructions for the background and for each moving object in the scene, along with its trajectory over time. Our method does not rely on object priors (e.g., semantic labels or pre-scanned meshes) and is insensitive to the motion residuals between objects and the camera. In comparison to state-of-the-art methods (e.g., Co-Fusion, MaskFusion), we handle significantly more challenging reconstruction scenarios involving a moving camera, and improve moving-object detection (26% on the miss-detection ratio), tracking (27% on MOTA), and reconstruction (3% on the reconstruction F1 score) on a synthetic dataset. Please refer to the supplementary material and the project website for a video demonstration (geometry.cs.ucl.ac.uk/projects/2021/rigidfusion).
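To make the model-to-frame free-space idea in the RigidFusion abstract more concrete, below is a minimal NumPy sketch that flags pixels whose current depth observation contradicts depth rendered from the background model; the threshold and the exact rule are assumptions for illustration, not the paper's actual TSDF decomposition.

```python
# Illustrative sketch (not the authors' code): compare depth rendered from the
# background model against the current depth frame and flag pixels that violate
# the static-background assumption as candidate moving-object pixels.
import numpy as np

def moving_object_mask(depth_frame, depth_from_background_model, eps=0.05):
    """depth_frame, depth_from_background_model: (H, W) arrays in meters; 0 = invalid.
    Returns a boolean mask of pixels whose observation contradicts the background model."""
    valid = (depth_frame > 0) & (depth_from_background_model > 0)
    # Either the frame observes a surface noticeably in front of the model surface
    # (something new entered), or the model surface sits in observed free space
    # (something left): both disagree with a static background.
    disagreement = np.abs(depth_frame - depth_from_background_model) > eps
    return valid & disagreement
```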
Item: Towards a Neural Graphics Pipeline for Controllable Image Generation (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Chen, Xuelin; Cohen-Or, Daniel; Chen, Baoquan; Mitra, Niloy J.
Editors: Mitra, Niloy; Viola, Ivan
Abstract: In this paper, we leverage advances in neural networks to form a neural rendering approach for controllable image generation, thereby bypassing the need for detailed modeling in the conventional graphics pipeline. To this end, we present the Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models. NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation. To form an image, NGP generates coarse 3D models that are fed into neural rendering modules to produce view-specific interpretable 2D maps, which are then composited into the final output image using a traditional image formation model. Our approach offers control over image generation by providing direct handles for controlling illumination and camera parameters, in addition to control over shape and appearance variations. The key challenge is to learn these controls through unsupervised training that links generated coarse 3D models with unpaired real images via neural and traditional (e.g., Blinn-Phong) rendering functions, without establishing an explicit correspondence between them. We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes. We evaluate our hybrid modeling framework, compare with neural-only generation methods (namely, DCGAN, LSGAN, WGAN-GP, VON, and SRNs), report improvement in FID scores against real images, and demonstrate that NGP supports direct controls common in traditional forward rendering. Code is available at http://geometry.cs.ucl.ac.uk/projects/2021/ngp.

Item: Z2P: Instant Visualization of Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Metzer, Gal; Hanocka, Rana; Giryes, Raja; Mitra, Niloy J.; Cohen-Or, Daniel
Editors: Chaine, Raphaëlle; Kim, Min H.
Abstract: We present a technique for visualizing point clouds using a neural network. Our technique allows for an instant preview of any point cloud, and bypasses the notoriously difficult surface reconstruction problem and the need to estimate oriented normals for splat-based rendering. We cast the preview problem as a conditional image-to-image translation task, and design a neural network that translates a point depth map directly into an image in which the point cloud is visualized as though a surface had been reconstructed from it. Furthermore, the resulting appearance of the visualized point cloud can optionally be conditioned on simple control variables (e.g., color and light). We demonstrate that our technique instantly produces plausible images, and can effectively handle, on the fly, noise, non-uniform sampling, and thin surface sheets.
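As a small illustration of the input the Z2P abstract describes, here is a minimal NumPy sketch that z-buffers a point cloud into a point depth map via a pinhole projection; the camera intrinsics, image size, and function name are assumptions, and the conditional translation network itself is not shown.

```python
# Illustrative sketch (not the authors' code): rasterize a point cloud into the kind
# of point depth map that a conditional image-to-image network could translate into
# a shaded preview. Intrinsics and image size are placeholder assumptions.
import numpy as np

def point_depth_map(points_cam, fx=500.0, fy=500.0, cx=128.0, cy=128.0, size=(256, 256)):
    """points_cam: (N, 3) array in camera coordinates with z pointing into the scene.
    Returns an (H, W) depth map; np.inf marks pixels that no point projects to."""
    h, w = size
    depth = np.full((h, w), np.inf)
    front = points_cam[:, 2] > 1e-6                   # keep points in front of the camera
    x, y, z = points_cam[front, 0], points_cam[front, 1], points_cam[front, 2]
    u = np.round(fx * x / z + cx).astype(int)         # pinhole projection to pixel coords
    v = np.round(fy * y / z + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        if zi < depth[vi, ui]:                        # z-buffer: keep the closest point
            depth[vi, ui] = zi
    return depth
```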