Browsing by Author "Cohen-Or, Daniel"
Now showing 1 - 4 of 4
Item: Deep Video-Based Performance Cloning (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Aberman, Kfir; Shi, Mingyi; Liao, Jing; Lischinski, Dani; Chen, Baoquan; Cohen-Or, Daniel
Editors: Alliez, Pierre; Pellacini, Fabio
Abstract: We present a new video-based performance cloning technique. After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator, using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data, self-generated from the reference video. The second branch uses unpaired data to improve generation of temporally coherent video renditions of unseen pose sequences. Through data augmentation, our network is able to synthesize images of the target actor in poses never captured by the reference video. We demonstrate a variety of promising results, where our method generates temporally coherent videos even for challenging scenarios in which the reference and driving videos consist of very different dance performances.

Item: MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Chen, Xuelin; Li, Weiyu; Cohen-Or, Daniel; Mitra, Niloy J.; Chen, Baoquan
Editors: Chaine, Raphaëlle; Kim, Min H.
Abstract: Synthesizing novel views of dynamic humans from stationary monocular cameras is a specialized but desirable setup. It is particularly attractive as it does not require static scenes, controlled environments, or specialized capture hardware.
In contrast to techniques that exploit multi-view observations, modeling a dynamic scene from a single view is significantly more under-constrained and ill-posed. In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans in stationary monocular cameras using a 4D continuous time-variant function. We learn the proposed representation by optimizing for a dynamic scene that minimizes the total rendering error over all the observed images. At the heart of our work lies a carefully designed optimization scheme, which includes a dedicated initialization step and is constrained by a motion consensus regularization on the estimated motion flow. We extensively evaluate MoCo-Flow on several datasets containing human motions of varying complexity, and compare, both qualitatively and quantitatively, to several baselines and ablated variants of our method, showing the efficacy and merits of the proposed approach. Pretrained model, code, and data will be released for research purposes upon paper acceptance.

Item: NeuralMLS: Geometry-Aware Control Point Deformation (The Eurographics Association, 2022)
Authors: Shechter, Meitar; Hanocka, Rana; Metzer, Gal; Giryes, Raja; Cohen-Or, Daniel
Editors: Pelechano, Nuria; Vanderhaeghe, David
Abstract: We introduce NeuralMLS, a space-based deformation technique guided by a set of displaced control points. We leverage the power of neural networks to inject the underlying shape geometry into the deformation parameters. The goal of our technique is to enable realistic and intuitive shape deformation. Our method is built upon moving least-squares (MLS), since it minimizes a weighted sum of the given control point displacements. Traditionally, the influence of each control point on every point in space (i.e., the weighting function) is defined using inverse-distance heuristics.
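As an illustration of the classical scheme this abstract refers to, here is a minimal sketch of inverse-distance MLS weighting, reduced to a weighted average of control-point displacements (a translation-only simplification; the function names and the power parameter are illustrative, not from the paper):

```python
import numpy as np

def mls_weights(points, controls, power=2.0, eps=1e-8):
    """Classical inverse-distance weights: w_ij ~ 1 / |p_i - c_j|^power."""
    d = np.linalg.norm(points[:, None, :] - controls[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return w / w.sum(axis=1, keepdims=True)  # normalize per space point

def mls_deform(points, controls, displacements, power=2.0):
    """Move each point by the weighted sum of control-point displacements."""
    w = mls_weights(points, controls, power)
    return points + w @ displacements

# Two 2D control points; only the second is displaced (to the right).
pts = np.array([[0.1, 0.0], [0.9, 0.0]])
ctrl = np.array([[0.0, 0.0], [1.0, 0.0]])
disp = np.array([[0.0, 0.0], [0.5, 0.0]])
out = mls_deform(pts, ctrl, disp)
# The point near the displaced control moves almost the full 0.5;
# the point near the fixed control barely moves.
```

Points close to a control inherit almost all of its displacement because the inverse-distance weight dominates there; NeuralMLS replaces this hand-crafted weighting with a learned, geometry-aware one.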
In this work, we opt to learn the weighting function by training a neural network on the control points of a single input shape, exploiting the innate smoothness of neural networks. Our geometry-aware control point deformation is agnostic to the surface representation and quality; it can be applied to point clouds or meshes, including non-manifold and disconnected surface soups. We show that our technique facilitates intuitive piecewise-smooth deformations, which are well suited for manufactured objects. We demonstrate the advantages of our approach over existing surface- and space-based deformation techniques, both quantitatively and qualitatively.

Item: Z2P: Instant Visualization of Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Metzer, Gal; Hanocka, Rana; Giryes, Raja; Mitra, Niloy J.; Cohen-Or, Daniel
Editors: Chaine, Raphaëlle; Kim, Min H.
Abstract: We present a technique for visualizing point clouds using a neural network. Our technique allows for an instant preview of any point cloud, and bypasses the notoriously difficult surface reconstruction problem and the need to estimate oriented normals for splat-based rendering. We cast the preview problem as a conditional image-to-image translation task, and design a neural network that translates a point depth-map directly into an image, where the point cloud is visualized as though a surface had been reconstructed from it. Furthermore, the resulting appearance of the visualized point cloud can optionally be conditioned on simple control variables (e.g., color and light). We demonstrate that our technique instantly produces plausible images, and can effectively handle, on the fly, noise, non-uniform sampling, and thin surface sheets.
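The point depth-map that Z2P takes as network input can be produced with a simple z-buffer projection. A minimal sketch, assuming an orthographic camera and normalized coordinates (the function name and resolution are illustrative, not the paper's code):

```python
import numpy as np

def point_depth_map(points, res=64):
    """Orthographic z-buffer splat: keep the nearest depth per pixel.
    points: (N, 3) array with x, y in [0, 1] and z = depth (smaller = closer)."""
    depth = np.full((res, res), np.inf)
    ij = np.clip((points[:, :2] * res).astype(int), 0, res - 1)  # pixel indices
    for (i, j), z in zip(ij, points[:, 2]):
        if z < depth[j, i]:  # z-buffer test: closer point wins the pixel
            depth[j, i] = z
    depth[np.isinf(depth)] = 0.0  # background pixels carry no depth
    return depth

# Two points project onto the same pixel; the closer one (z=1.0) survives.
pts = np.array([[0.5, 0.5, 2.0], [0.5, 0.5, 1.0], [0.1, 0.9, 3.0]])
dm = point_depth_map(pts, res=10)
```

Such a depth image is the conditioning input for the image-to-image translation network, which then hallucinates a shaded, surface-like rendering without any explicit reconstruction.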