Browsing by Author "Ashtari, Amirsaman"
Now showing 1 - 2 of 2
Item
Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Cha, Sihun; Seo, Kwanggyoon; Ashtari, Amirsaman; Noh, Junyong; Myszkowski, Karol; Niessner, Matthias

There has been significant progress in generating an animatable 3D human avatar from a single image. However, recovering the texture for the 3D human avatar from a single image has been relatively less addressed. Because the generated 3D human avatar reveals the occluded texture of the given image as it moves, it is critical to synthesize the occluded texture patterns that are unseen in the source image. To generate a plausible texture map for 3D human avatars, the occluded texture pattern needs to be synthesized consistently with the visible texture from the given image. Moreover, the generated texture should align with the surface of the target 3D mesh. In this paper, we propose a texture synthesis method for a 3D human avatar that incorporates geometry information. The proposed method consists of two convolutional networks for the sampling and refinement processes. The sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh using the geometry information. The sampled texture is further refined and adjusted by the refiner network. To preserve the clear details of the given image, the sampled and refined textures are blended to produce the final texture map. To effectively guide the sampler network toward its goal, we designed a curriculum learning scheme that starts from a simple sampling task and gradually progresses to tasks where the alignment must be considered. Experiments show that our method outperforms previous methods both qualitatively and quantitatively.

Item
Stylized Face Sketch Extraction via Generative Prior with Limited Data (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yun, Kwan; Seo, Kwanggyoon; Seo, Chang Wook; Yoon, Soyeon; Kim, Seongcheol; Ji, Soohyun; Ashtari, Amirsaman; Noh, Junyong; Bermano, Amit H.; Kalogerakis, Evangelos

Facial sketches are both a concise way of showing the identity of a person and a means to express artistic intention. While a few techniques have recently emerged that allow sketches to be extracted in different styles, they typically rely on a large amount of data that is difficult to obtain. Here, we propose StyleSketch, a method for extracting high-resolution stylized sketches from a face image. Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and the corresponding sketches. The sketch generator utilizes part-based losses with two-stage learning, enabling fast convergence during training and high-quality sketch extraction. Through a set of comparisons, we show that StyleSketch outperforms existing state-of-the-art sketch extraction methods and few-shot image adaptation methods for the task of extracting high-resolution abstract face sketches. We further demonstrate the versatility of StyleSketch by extending its use to other domains and explore the possibility of semantic editing. The project page can be found at https://kwanyun.github.io/stylesketch_project.
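The first abstract describes a sample-then-refine-then-blend texture pipeline: a sampler warps visible source pixels onto the texture layout using geometry information, a refiner adjusts the result, and a blend keeps the sharp details of the sampled texture. The sketch below is a minimal, hypothetical rendering of that flow; the module names, channel sizes, grid_sample-based warping, and mask-based blending are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a sample-refine-blend texture pipeline (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class SamplerNet(nn.Module):
    """Predicts per-texel sampling coordinates that pull visible source
    pixels onto the texture layout, guided by geometry information."""
    def __init__(self):
        super().__init__()
        self.encoder = ConvBlock(3 + 3, 32)          # RGB image + geometry map (e.g. normals)
        self.coord_head = nn.Conv2d(32, 2, 3, padding=1)

    def forward(self, image, geometry):
        feats = self.encoder(torch.cat([image, geometry], dim=1))
        coords = torch.tanh(self.coord_head(feats))  # normalized (x, y) in [-1, 1]
        grid = coords.permute(0, 2, 3, 1)            # N x H x W x 2, as grid_sample expects
        return F.grid_sample(image, grid, align_corners=False)

class RefinerNet(nn.Module):
    """Refines the sampled texture and predicts a blend mask so the final
    map keeps sharp details where the sampled texture is reliable."""
    def __init__(self):
        super().__init__()
        self.body = ConvBlock(3, 32)
        self.rgb_head = nn.Conv2d(32, 3, 3, padding=1)
        self.mask_head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, sampled):
        feats = self.body(sampled)
        refined = torch.sigmoid(self.rgb_head(feats))
        mask = torch.sigmoid(self.mask_head(feats))  # 1 = keep the sampled texel
        return refined, mask

def generate_texture(image, geometry, sampler, refiner):
    sampled = sampler(image, geometry)               # fill occlusions, align to the mesh surface
    refined, mask = refiner(sampled)                 # adjust the sampled texture
    return mask * sampled + (1 - mask) * refined     # blend into the final texture map

texture = generate_texture(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256),
                           SamplerNet(), RefinerNet())
print(texture.shape)  # torch.Size([1, 3, 256, 256])
```

The curriculum learning scheme mentioned in the abstract would, in this sketch, amount to training SamplerNet first on inputs where the sampling grid is easy to predict and only later on inputs requiring full alignment with the mesh surface.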
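The second abstract trains a sketch generator on the deep features of a frozen pretrained StyleGAN with only 16 paired examples, using part-based losses and two-stage learning. The sketch below mimics that recipe under stated assumptions: the frozen feature extractor is a stand-in for intermediate StyleGAN feature maps, and the loss definitions and two-stage schedule are simplified placeholders, not the StyleSketch implementation.

```python
# Illustrative few-shot training loop on frozen generator features (assumed design).
import torch
import torch.nn as nn

class FrozenFeatures(nn.Module):
    """Stand-in for deep features of a pretrained StyleGAN; kept frozen."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 64, 3, padding=1)
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.conv(x)

class SketchGenerator(nn.Module):
    """Maps deep features to a one-channel sketch image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.net(feats)

def part_loss(pred, target, part_masks):
    """Part-based loss: L1 error averaged per facial region (eyes, mouth, ...),
    so small parts contribute as much as large ones."""
    per_part = [((pred - target).abs() * m).sum() / m.sum().clamp(min=1.0)
                for m in part_masks]
    return torch.stack(per_part).mean()

features, generator = FrozenFeatures(), SketchGenerator()
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

# Tiny paired dataset: 16 face/sketch pairs, as in the abstract.
faces = torch.rand(16, 3, 64, 64)
sketches = torch.rand(16, 1, 64, 64)
part_masks = [torch.ones(16, 1, 64, 64)]    # placeholder facial-region masks

for stage in range(2):                      # stage 0: global L1; stage 1: part-weighted
    for _ in range(100):
        pred = generator(features(faces))
        if stage == 0:
            loss = (pred - sketches).abs().mean()
        else:
            loss = part_loss(pred, sketches, part_masks)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Only the small SketchGenerator is optimized; keeping the feature extractor frozen is what makes training feasible from 16 pairs, since the pretrained features already encode facial semantics.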