Browsing by Author "Cohen-Or, Daniel"
Now showing 1 - 3 of 3
Item: State-of-the-Art in the Architecture, Methods and Applications of StyleGAN (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Bermano, Amit Haim; Gal, Rinon; Alaluf, Yuval; Mokady, Ron; Nitzan, Yotam; Tov, Omer; Patashnik, Or; Cohen-Or, Daniel
Editors: Meneveaux, Daniel; Patanè, Giuseppe

Generative Adversarial Networks (GANs) have established themselves as a prevalent approach to image synthesis. Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and its ability to support a large array of downstream tasks. This state-of-the-art report covers the StyleGAN architecture and the ways it has been employed since its conception, while also analyzing its severe limitations. It aims to be of use both for newcomers who wish to get a grasp of the field and for more experienced readers who might benefit from seeing current research trends and existing tools laid out. Among StyleGAN's most interesting aspects is its learned latent space. Despite being learned without supervision, it is surprisingly well-behaved and remarkably disentangled. Combined with StyleGAN's visual quality, these properties gave rise to unparalleled editing capabilities. However, the control offered by StyleGAN is inherently limited to the generator's learned distribution and can only be applied to images generated by StyleGAN itself. Seeking to bring StyleGAN's latent control to real-world scenarios, the study of GAN inversion and latent space embedding has quickly gained in popularity. Meanwhile, this same study has helped shed light on the inner workings and limitations of StyleGAN. We map out StyleGAN's impressive story through these investigations, and discuss the details that have made StyleGAN the go-to generator. We further elaborate on the visual priors StyleGAN constructs, and discuss their use in downstream discriminative tasks. Looking forward, we point out StyleGAN's limitations and speculate on current trends and promising directions for future research, such as task- and target-specific fine-tuning.
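The report above centers on editing via StyleGAN's disentangled latent space. As a minimal sketch of what such an edit looks like, the snippet below moves a latent code along a single learned direction; the `generator` stand-in and the random `edit_direction` are hypothetical placeholders for a pretrained StyleGAN synthesis network and a direction obtained by a method such as InterFaceGAN or GANSpace.

```python
# Minimal sketch of latent-direction editing, assuming a pretrained
# generator and a learned edit direction; both are hypothetical
# stand-ins here, not the report's code.
import numpy as np

rng = np.random.default_rng(0)
W_DIM = 512  # dimensionality of StyleGAN's intermediate W space

def generator(w: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: maps a (W_DIM,) latent code to an image."""
    return np.tanh(w[:3])  # placeholder "image"

# A disentangled direction (e.g., age or smile), typically found by
# fitting a linear boundary or factorizing the latent space.
edit_direction = rng.standard_normal(W_DIM)
edit_direction /= np.linalg.norm(edit_direction)

w = rng.standard_normal(W_DIM)      # latent code of the source image
for alpha in (-3.0, 0.0, 3.0):      # edit strength along the direction
    edited_image = generator(w + alpha * edit_direction)
```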
Item: Towards a Neural Graphics Pipeline for Controllable Image Generation (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Chen, Xuelin; Cohen-Or, Daniel; Chen, Baoquan; Mitra, Niloy J.
Editors: Mitra, Niloy and Viola, Ivan

In this paper, we leverage advances in neural networks to form a neural rendering pipeline for controllable image generation, thereby bypassing the need for detailed modeling in the conventional graphics pipeline. To this end, we present Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models. NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation. To form an image, NGP generates coarse 3D models that are fed into neural rendering modules to produce view-specific interpretable 2D maps, which are then composited into the final output image using a traditional image formation model. Our approach offers control over image generation by providing direct handles controlling illumination and camera parameters, in addition to control over shape and appearance variations. The key challenge is to learn these controls through unsupervised training that links generated coarse 3D models with unpaired real images via neural and traditional (e.g., Blinn-Phong) rendering functions, without establishing an explicit correspondence between them. We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes. We evaluate our hybrid modeling framework, compare with neural-only generation methods (namely, DCGAN, LSGAN, WGAN-GP, VON, and SRNs), report improvements in FID scores against real images, and demonstrate that NGP supports the direct controls common in traditional forward rendering. Code is available at http://geometry.cs.ucl.ac.uk/projects/2021/ngp.

Item: What's in a Face? Metric Learning for Face Characterization (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Sendik, Omry; Lischinski, Dani; Cohen-Or, Daniel
Editors: Alliez, Pierre and Pellacini, Fabio

We present a method for determining which facial parts (mouth, nose, etc.) best characterize an individual, given a set of that individual's portraits. We introduce a novel distinctiveness analysis of a set of portraits, which leverages the deep features extracted by a pre-trained face recognition CNN and a hair segmentation FCN, in the context of a weakly supervised metric learning scheme. Our analysis enables the generation of a polarized class activation map (PCAM) for an individual's portrait via a transformation that localizes and amplifies the discriminative regions of the deep feature maps extracted by the aforementioned networks. A user study that we conducted shows surprisingly good agreement between the face parts that users indicate as characteristic and the face parts automatically selected by our method. We demonstrate a few applications of our method, including determining the most and least representative portraits among a set of portraits of an individual, and the creation of facial hybrids: portraits that combine the characteristic, recognizable facial features of two individuals. Our face characterization analysis is also effective for ranking portraits in order to find an individual's look-alikes (Doppelgängers).
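The entry above derives its PCAM by localizing and amplifying discriminative regions of deep feature maps. As a rough illustration of the underlying class-activation idea, and not the paper's exact polarized transformation, the sketch below combines hypothetical feature maps with learned per-channel weights into a localization heatmap.

```python
# Class-activation-style heatmap sketch, in the spirit of (but not
# identical to) the PCAM above. `features` stands in for deep feature
# maps from a pretrained face recognition CNN; `weights` for the
# metric-learned per-channel importances. Both are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 256, 7, 7
features = rng.random((C, H, W))      # C x H x W deep feature maps
weights = rng.standard_normal(C)      # learned per-channel weights

# Weighted sum over channels highlights discriminative regions...
cam = np.tensordot(weights, features, axes=1)   # H x W activation map
# ...then normalize to [0, 1]; a real pipeline would upsample this
# map to the portrait's resolution before overlaying it.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```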
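Likewise, the NGP entry further above composites the neural modules' 2D maps with a traditional image formation model such as Blinn-Phong. The sketch below shows that classical shading step on per-pixel maps; the map names, shapes, and single directional light are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal Blinn-Phong shading sketch, assuming H x W x 3 normal and
# albedo maps like those NGP's neural modules might supply. Names and
# constants are illustrative assumptions.
import numpy as np

def blinn_phong(normals, albedo, light_dir, view_dir,
                specular=0.5, shininess=32.0, ambient=0.1):
    """Shade per-pixel normal/albedo maps with one directional light."""
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)                    # halfway vector
    n_dot_l = np.clip((normals * l).sum(-1, keepdims=True), 0.0, 1.0)
    n_dot_h = np.clip((normals * h).sum(-1, keepdims=True), 0.0, 1.0)
    diffuse = albedo * n_dot_l                             # Lambertian term
    spec = specular * (n_dot_h ** shininess)               # highlight term
    return np.clip(ambient * albedo + diffuse + spec, 0.0, 1.0)

H, W = 64, 64
normals = np.zeros((H, W, 3)); normals[..., 2] = 1.0   # flat, camera-facing
albedo = np.full((H, W, 3), 0.7)
image = blinn_phong(normals, albedo,
                    light_dir=np.array([0.3, 0.5, 0.8]),
                    view_dir=np.array([0.0, 0.0, 1.0]))
```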