Browsing by Author "Gao, Chengying"
Controllable Garment Image Synthesis Integrated with Frequency Domain Features
Liang, Xinru; Mo, Haoran; Gao, Chengying; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H. (The Eurographics Association and John Wiley & Sons Ltd., 2023)

Synthesizing garment images from sketches and textures conveniently previews realistic visual effects during the design phase, which greatly increases the efficiency of fashion design. Existing methods that synthesize a garment image from a sketch and a texture tend to fail on complex textures, especially those with periodic patterns. We propose a controllable garment image synthesis framework that takes an outline sketch and a texture patch as inputs and generates garment images with complicated and diverse texture patterns. To improve global texture expansion, we exploit frequency-domain features in the generative process; these features come from a Fast Fourier Transform (FFT) and represent the periodic information of the patterns. We also introduce a perceptual loss in the frequency domain that measures the similarity of two texture patches in terms of their intrinsic periodicity and regularity. Comparisons with existing approaches and extensive ablation studies demonstrate the effectiveness of our method, which synthesizes convincing garment images with diverse texture patterns while guaranteeing proper texture expansion and pattern consistency.

L0 Gradient-Preserving Color Transfer
Wang, Dong; Zou, Changqing; Li, Guiqing; Gao, Chengying; Su, Zhuo; Tan, Ping; Barbic, Jernej; Lin, Wen-Chieh; Sorkine-Hornung, Olga (The Eurographics Association and John Wiley & Sons Ltd., 2016)

This paper presents a new two-step color transfer method consisting of color mapping and detail preservation. To map source colors to target colors, which may come from an image or a palette, the proposed similarity-preserving color mapping algorithm uses the similarities between each pixel color and the dominant colors, as existing algorithms do, and additionally emphasizes the similarities among the source image's own pixel colors. Detail preservation is performed by an L0 gradient-preserving algorithm, which relaxes the large gradients of the sparse pixels along color region boundaries and preserves the small gradients of pixels within color regions. The proposed method preserves both source image color similarity and image details well. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art visual performance.
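
The frequency-domain perceptual loss in the garment synthesis paper above lends itself to a compact illustration. The sketch below is a hypothetical, minimal rendering of the idea: comparing the FFT amplitude spectra of two texture patches so that shared periodicity, rather than exact pixel alignment, drives the similarity score. The function name and the plain L1-on-amplitude formulation are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def frequency_perceptual_loss(pred_patch, real_patch):
    """Sketch of a frequency-domain similarity loss for texture patches.

    Both inputs are (B, C, H, W) tensors. Comparing amplitude spectra makes
    the loss sensitive to the periodicity and regularity of a pattern while
    staying tolerant to spatial shifts, which phase would otherwise penalize.
    """
    pred_spec = torch.fft.fft2(pred_patch, norm="ortho")
    real_spec = torch.fft.fft2(real_patch, norm="ortho")
    # Amplitude captures how strongly each spatial frequency repeats;
    # phase (deliberately ignored) captures where the repetitions sit.
    return F.l1_loss(pred_spec.abs(), real_spec.abs())
```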
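
The detail-preservation step of the L0 color transfer paper builds on L0 gradient minimization. For reference, here is the classic L0 smoothing solver of Xu et al. [2011] on which such schemes are commonly built, written for a grayscale image in [0, 1]; it is a sketch of the underlying machinery only and does not reproduce the paper's specific relaxation of boundary gradients.

```python
import numpy as np

def l0_smooth(img, lam=0.02, beta_max=1e5, kappa=2.0):
    """Classic L0 gradient minimization (Xu et al. 2011) for a 2D array.

    Alternates between (1) hard-thresholding gradients, which zeroes the
    small gradients inside regions while keeping large boundary gradients,
    and (2) a quadratic image update solved in closed form via the FFT.
    """
    S = img.astype(np.float64)
    H, W = S.shape
    # Transfer functions of the circular forward-difference operators.
    fx = np.zeros((H, W)); fx[0, 0] = -1; fx[0, -1] = 1
    fy = np.zeros((H, W)); fy[0, 0] = -1; fy[-1, 0] = 1
    otf_x, otf_y = np.fft.fft2(fx), np.fft.fft2(fy)
    denom_grad = np.abs(otf_x) ** 2 + np.abs(otf_y) ** 2
    F_I = np.fft.fft2(S)

    beta = 2.0 * lam
    while beta < beta_max:
        # Subproblem 1: closed-form hard threshold on the gradient field.
        h = np.roll(S, -1, axis=1) - S  # horizontal forward difference
        v = np.roll(S, -1, axis=0) - S  # vertical forward difference
        mask = (h ** 2 + v ** 2) < lam / beta
        h[mask] = 0.0
        v[mask] = 0.0
        # Subproblem 2: least-squares image update in the Fourier domain.
        numer = F_I + beta * (np.conj(otf_x) * np.fft.fft2(h)
                              + np.conj(otf_y) * np.fft.fft2(v))
        S = np.real(np.fft.ifft2(numer / (1.0 + beta * denom_grad)))
        beta *= kappa
    return S
```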
Line Art Colorization Based on Explicit Region Segmentation
Cao, Ruizhi; Mo, Haoran; Gao, Chengying; Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan (The Eurographics Association and John Wiley & Sons Ltd., 2021)

Automatic line art colorization plays an important role in the anime and comic industries. While existing methods for line art colorization are able to generate plausible colorized results, they tend to suffer from color bleeding. We introduce an explicit segmentation fusion mechanism that helps colorization frameworks avoid color bleeding artifacts. The mechanism explicitly provides region segmentation information to the colorization process, so that the colorization model can learn to avoid assigning the same color across regions with different semantics, or inconsistent colors within an individual region. The proposed mechanism is designed in a plug-and-play manner, so it can be applied to a variety of line art colorization frameworks with various kinds of user guidance. We evaluate this mechanism on tag-based and reference-based line art colorization tasks by incorporating it into state-of-the-art models. Comparisons with these existing models corroborate the effectiveness of our method, which largely alleviates color bleeding artifacts. The code is available at https://github.com/Ricardo-L-C/ColorizationWithRegion.

Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism
Ling, Peng; Mo, Haoran; Gao, Chengying; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak (The Eurographics Association, 2022)

Scene sketch segmentation based on referring expressions plays an important role in sketch editing for the anime industry. While most existing referring image segmentation approaches are designed for the standard task of generating a binary segmentation mask for a single target or a group of targets, we consider it necessary to equip these models with the ability to perform multi-instance segmentation. To this end, we propose GRM-Net, a one-stage framework tailored for multi-instance referring image segmentation of scene sketches. We extract language features from the expression and fuse them into a conventional instance segmentation pipeline, filtering out undesired instances in a coarse-to-fine manner while keeping the matched ones. To model the relative arrangement of the objects and the relationships among them from a global view, we propose a global reference mechanism (GRM) that assigns references to each detected candidate to identify its position. We compare against existing methods designed for multi-instance referring image segmentation of scene sketches, as well as methods for the standard referring image segmentation task, and the results demonstrate the effectiveness and superiority of our approach.

PencilArt: A Chromatic Penciling Style Generation Framework
Gao, Chengying; Tang, Mengyue; Liang, Xiangguo; Su, Zhuo; Zou, Changqing; Chen, Min; Benes, Bedrich (The Eurographics Association and John Wiley & Sons Ltd., 2018)

Non-photorealistic rendering has been an active area of research for decades, yet little of this work concentrates on rendering in a chromatic penciling style. In this paper, we present a framework named PencilArt for generating a chromatic penciling style from wild photographs. The structural outline and the textured map that compose the chromatic pencil drawing are generated separately. First, we take advantage of a deep neural network to produce the structural outline with proper intensity variation and conciseness. Next, for the textured map, we follow the painting process of artists and adjust the tone of the input image to match the luminance histogram and pencil textures of real drawings. Finally, we evaluate PencilArt via a series of comparisons to previous work, showing that our results better capture the main features of real chromatic pencil drawings and have an improved visual appearance.
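
The plug-and-play segmentation fusion mechanism of the line art colorization paper can be pictured as a small module that injects region information into a colorization generator. The block below is an illustrative sketch only; all module and argument names are assumptions, and the actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentationFusion(nn.Module):
    """Hypothetical fusion block: inject region segmentation into a decoder.

    Given explicit region logits, the block embeds them and concatenates
    the embedding with a decoder feature map, giving the generator a signal
    for keeping colors consistent inside a region and distinct across
    semantically different regions.
    """
    def __init__(self, num_regions, feat_channels):
        super().__init__()
        self.embed = nn.Conv2d(num_regions, feat_channels, kernel_size=1)
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels,
                              kernel_size=3, padding=1)

    def forward(self, decoder_feat, region_logits):
        # Match the segmentation map to the decoder feature resolution.
        seg = F.interpolate(region_logits, size=decoder_feat.shape[-2:],
                            mode="bilinear", align_corners=False)
        seg = self.embed(seg.softmax(dim=1))
        return self.fuse(torch.cat([decoder_feat, seg], dim=1))
```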
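
In GRM-Net, candidates from an instance segmentation pipeline are filtered against the language features of the referring expression. A much-simplified sketch of that filtering step follows; using zero-padded normalized boxes as a global position reference is a stand-in for the paper's global reference mechanism, and every name and the cosine-score formulation here are assumptions.

```python
import torch
import torch.nn.functional as F

def filter_instances(inst_feats, inst_boxes, lang_feat, keep_thresh=0.5):
    """Illustrative language-guided filtering of candidate instances.

    inst_feats: (N, D) features of the N detected candidates.
    inst_boxes: (N, 4) normalized boxes, used here as crude global
                position references injected into each candidate.
    lang_feat:  (D,) embedding of the referring expression.
    Returns a boolean mask selecting the candidates that match.
    """
    # Lift each 4-d box into the D-d feature space by zero-padding.
    pos_ref = F.pad(inst_boxes, (0, inst_feats.shape[1] - 4))
    fused = inst_feats + pos_ref
    scores = F.cosine_similarity(fused, lang_feat.unsqueeze(0), dim=1)
    return scores > keep_thresh
```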
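
PencilArt's tone adjustment matches the luminance distribution of the input photograph to that of real pencil drawings. The textbook histogram-matching routine below (NumPy, 8-bit luminance) shows the kind of operation involved; the paper's actual tone model may differ in detail.

```python
import numpy as np

def match_luminance_histogram(src_lum, ref_lum):
    """Remap src_lum so its histogram matches that of ref_lum.

    Both inputs are 2D arrays of luminance values. The source image's
    cumulative distribution is mapped onto the reference's, so the output
    takes on the tonal character of the reference drawing.
    """
    _, src_idx, src_counts = np.unique(
        src_lum.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref_lum.ravel(), return_counts=True)
    # Normalized cumulative distribution functions of both images.
    src_cdf = np.cumsum(src_counts) / src_lum.size
    ref_cdf = np.cumsum(ref_counts) / ref_lum.size
    # For each source quantile, take the reference value at that quantile.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(src_lum.shape)
```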
Rain Wiper: An Incremental Randomly Wired Network for Single Image Deraining
Liang, Xiwen; Qiu, Bin; Su, Zhuo; Gao, Chengying; Shi, Xiaohong; Wang, Ruomei; Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon (The Eurographics Association and John Wiley & Sons Ltd., 2019)

Single image rain removal is a challenging ill-posed problem due to the various shapes and densities of rain streaks. We present a novel incremental randomly wired network (IRWN) for single image deraining. Unlike previous methods, most module structures in IRWN are generated by a stochastic network generator based on random graph theory, which eases the burden of manual design and further helps characterize more complex rain streaks. To decrease the number of network parameters and extract details more efficiently, an image pyramid is fused via the multi-scale network structure. An incremental rectified loss is proposed to better remove rain streaks under different rain conditions and to recover the texture information of target objects. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method significantly outperforms state-of-the-art methods. In addition, an ablation study illustrates the improvements obtained by the different modules and loss terms in IRWN.
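
The stochastic network generator behind Rain Wiper builds module structures from random graphs rather than by hand. A minimal PyTorch sketch of the idea follows, assuming a Watts-Strogatz generator and low-to-high edge orientation to obtain a DAG; the paper's actual generator, node operations, multi-scale pyramid, and incremental structure are not reproduced here.

```python
import networkx as nx
import torch.nn as nn

def random_dag_edges(num_nodes=8, k=4, p=0.75, seed=0):
    """Random graph -> DAG: orient each edge from lower to higher index."""
    g = nx.connected_watts_strogatz_graph(num_nodes, k, p, seed=seed)
    return [(min(u, v), max(u, v)) for u, v in g.edges()]

class RandomlyWiredStage(nn.Module):
    """One stage whose wiring comes from a random graph generator.

    Each node sums its incoming activations (or reads the stage input if
    it has none) and applies conv-ReLU; the outputs of sink nodes are
    averaged to form the stage output.
    """
    def __init__(self, channels, num_nodes=8):
        super().__init__()
        self.edges = random_dag_edges(num_nodes)
        self.num_nodes = num_nodes
        self.nodes = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU())
            for _ in range(num_nodes))

    def forward(self, x):
        outputs = {}
        for i in range(self.num_nodes):  # low-to-high order respects the DAG
            incoming = [outputs[u] for u, v in self.edges if v == i]
            outputs[i] = self.nodes[i](sum(incoming) if incoming else x)
        # Nodes that feed no other node are the stage's sinks.
        sinks = [i for i in range(self.num_nodes)
                 if all(u != i for u, _ in self.edges)]
        return sum(outputs[i] for i in sinks) / len(sinks)
```

Because every edge points from a lower to a higher node index, evaluating nodes in index order guarantees each node's inputs are ready, which is what makes the randomly generated wiring usable as a feed-forward network.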