Predicting Perceived Gloss: Do Weak Labels Suffice?
dc.contributor.author | Guerrero-Viu, Julia | en_US |
dc.contributor.author | Subias, Jose Daniel | en_US |
dc.contributor.author | Serrano, Ana | en_US |
dc.contributor.author | Storrs, Katherine R. | en_US |
dc.contributor.author | Fleming, Roland W. | en_US |
dc.contributor.author | Masia, Belen | en_US |
dc.contributor.author | Gutierrez, Diego | en_US |
dc.contributor.editor | Bermano, Amit H. | en_US |
dc.contributor.editor | Kalogerakis, Evangelos | en_US |
dc.date.accessioned | 2024-04-30T09:08:48Z | |
dc.date.available | 2024-04-30T09:08:48Z | |
dc.date.issued | 2024 | |
dc.description.abstract | Estimating perceptual attributes of materials directly from images is a challenging task due to their complex, not fully understood interactions with external factors, such as geometry and lighting. Supervised deep learning models have recently been shown to outperform traditional approaches, but rely on large datasets of human-annotated images for accurate perception predictions. Obtaining reliable annotations is a costly endeavor, aggravated by the limited ability of these models to generalise to different aspects of appearance. In this work, we show how a much smaller set of human annotations ("strong labels") can be effectively augmented with automatically derived "weak labels" in the context of learning a low-dimensional image-computable gloss metric. We evaluate three alternative weak labels for predicting human gloss perception from limited annotated data. Incorporating weak labels enhances our gloss prediction beyond the current state of the art. Moreover, it enables a substantial reduction in human annotation costs without sacrificing accuracy, whether working with rendered images or real photographs. | en_US |
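The abstract describes augmenting a small set of human "strong labels" with automatically derived "weak labels" to train an image-computable gloss predictor. The sketch below is purely illustrative and is not the paper's implementation: it assumes PyTorch, a ResNet-18 backbone, a scalar gloss score in [0, 1], and a simple weighted sum of strong- and weak-label regression losses; GlossPredictor, combined_loss, and weak_weight are hypothetical names and choices, and the paper's actual architecture, weak-label sources, and weighting scheme may differ.

```python
# Minimal sketch of mixing strong (human) and weak (automatic) gloss labels.
# Assumptions: PyTorch + torchvision, ResNet-18 encoder, scalar gloss output,
# and a down-weighted MSE term for the weak labels (weak_weight is a guess).
import torch
import torch.nn as nn
import torchvision.models as models


class GlossPredictor(nn.Module):
    """CNN regressor mapping an RGB image to a scalar gloss value in [0, 1]."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)               # image encoder
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)    # scalar head
        self.net = backbone

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)          # gloss in [0, 1]


def combined_loss(pred_strong, y_strong, pred_weak, y_weak, weak_weight=0.5):
    """Strong labels drive the loss; weak labels contribute a down-weighted term."""
    mse = nn.functional.mse_loss
    return mse(pred_strong, y_strong) + weak_weight * mse(pred_weak, y_weak)


# Toy usage: random tensors stand in for a small annotated batch and a larger
# weakly labelled batch; a real pipeline would load images and labels instead.
model = GlossPredictor()
imgs_strong, y_strong = torch.randn(4, 3, 224, 224), torch.rand(4)
imgs_weak, y_weak = torch.randn(16, 3, 224, 224), torch.rand(16)
loss = combined_loss(model(imgs_strong), y_strong, model(imgs_weak), y_weak)
loss.backward()
```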
dc.description.number | 2 | |
dc.description.sectionheaders | Perceptual Rendering | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 43 | |
dc.identifier.doi | 10.1111/cgf.15037 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 13 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.15037 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf15037 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution-NonCommercial 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Perception; Dimensionality reduction and manifold learning; Supervised learning | |
dc.subject | Computing methodologies | |
dc.subject | Perception | |
dc.subject | Dimensionality reduction and manifold learning | |
dc.subject | Supervised learning | |
dc.title | Predicting Perceived Gloss: Do Weak Labels Suffice? | en_US |