Browsing by Author "Yamaguchi, Yasushi"
Now showing 1 - 2 of 2
An Interactive Tuning Method for Generator Networks Trained by GAN (The Eurographics Association, 2022)
Zhou, Mengyuan; Yamaguchi, Yasushi; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo

Recent studies on GANs have achieved impressive results in image synthesis. However, they are still imperfect, and output images may contain unnatural regions. We propose a tuning method for generator networks trained by GANs that improves their results by interactively removing unexpected objects and textures or changing object colors. Our method finds and ablates the units in the generator network that are highly related to specific regions or their colors. Compared to related studies, our method can tune pre-trained generator networks without relying on any additional information such as segmentation networks. We built an interactive system based on our method that tunes generator networks so that the resulting images match user expectations. Experiments show that our method removes only the unexpected objects and textures, and can also change the color of a selected area. The method also offers insight into the properties of generator networks, namely which layers and units are associated with objects, textures, or colors.

JPEG Line-drawing Restoration With Masks (The Eurographics Association, 2023)
Zhu, Yan; Yamaguchi, Yasushi; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda

Learning-based JPEG restoration methods usually do not take the visual content of images into account. Although these methods achieve satisfying results on photographs, applying them directly to line drawings, which consist of lines on a white background, is not suitable. The large background area of a digital line drawing carries no intensity information and should be uniformly white (the maximum brightness).
Existing JPEG restoration networks consistently fail to output uniformly white pixels for the background. Worse, training on the background can reduce learning efficiency in the regions where texture exists. To tackle these problems, we propose a line-drawing restoration framework that can be applied to existing state-of-the-art restoration networks. Our framework takes an existing restoration network as its backbone and processes an input rasterized JPEG line drawing in two steps. First, a proposed mask-predicting network predicts a binary mask that indicates the locations of lines and background in the underlying undegraded line drawing. Then, the mask is concatenated with the input JPEG line drawing and fed into the backbone restoration network, where the conventional L1 loss is replaced by a masked Mean Square Error (MSE) loss. Besides learning-based mask generation, we also evaluate other, direct mask-generation methods. Experiments show that our framework with learned binary masks achieves both better visual quality and better performance on quantitative metrics than state-of-the-art methods on the task of JPEG line-drawing restoration.
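The masked MSE loss that the second abstract describes can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the function name `masked_mse`, the toy arrays, and the normalization over masked pixels are all assumptions; the paper's exact weighting may differ.

```python
import numpy as np

def masked_mse(restored, target, mask):
    """Masked MSE: average the squared error only over pixels the binary
    mask marks as line content (mask == 1), so the white background does
    not contribute to the loss. Hypothetical sketch of the loss described
    in the abstract."""
    mask = mask.astype(bool)
    diff = (restored - target)[mask]
    return float(np.mean(diff ** 2))

# Toy 4x4 "line drawing": white (1.0) background with a black diagonal line.
target = np.ones((4, 4))
np.fill_diagonal(target, 0.0)
mask = (target < 0.5).astype(np.float32)  # 1 on line pixels, 0 on background

restored = target.copy()
restored[0, 0] = 0.5   # restoration error on a line pixel: counted
restored[0, 3] = 0.9   # error on a background pixel: ignored by the loss

print(masked_mse(restored, target, mask))
```

Only the error on the line pixel contributes to the loss; perturbing the background leaves it unchanged, which mirrors the framework's goal of keeping training focused on textured regions.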