Variational Image Fusion with Optimal Local Contrast
dc.contributor.author | Hafner, David | en_US |
dc.contributor.author | Weickert, Joachim | en_US |
dc.contributor.editor | Chen, Min and Zhang, Hao (Richard) | en_US |
dc.date.accessioned | 2016-03-01T14:13:09Z | |
dc.date.available | 2016-03-01T14:13:09Z | |
dc.date.issued | 2016 | en_US |
dc.description.abstract | In this paper, we present a general variational method for image fusion. In particular, we combine different images of the same subject to a single composite that offers optimal exposedness, saturation and local contrast. Previous research approaches this task by first pre‐computing application‐specific weights based on the input, and then combining these weights with the images to the final composite later on. In contrast, we design our model assumptions directly on the fusion result. To this end, we formulate the output image as a convex combination of the input and incorporate concepts from perceptually inspired contrast enhancement such as a local and non‐linear response. This output‐driven approach is the key to the versatility of our general image fusion model. In this regard, we demonstrate the performance of our fusion scheme with several applications such as exposure fusion, multispectral imaging and decolourization. For all application domains, we conduct thorough validations that illustrate the improvements compared to state‐of‐the‐art approaches that are tailored to the individual tasks. | en_US |
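The abstract's core idea (the fused image is a per-pixel convex combination of the inputs, with the model assumptions placed directly on the output rather than on pre-computed weights) can be made concrete with a short sketch. The JAX snippet below is an illustrative simplification, not the paper's actual variational energy: the local-contrast and exposedness terms, their relative weighting, and the plain gradient ascent are assumptions chosen for brevity.

import jax
import jax.numpy as jnp

def fused(logits, stack):
    # Softmax over the image axis yields per-pixel convex weights,
    # so the result is always a convex combination of the inputs.
    w = jax.nn.softmax(logits, axis=0)
    return jnp.sum(w * stack, axis=0)

def score(logits, stack):
    # The objective is stated directly on the fused output u,
    # mirroring the "output-driven" formulation described in the abstract.
    u = fused(logits, stack)
    gx = u[1:, :] - u[:-1, :]                  # finite-difference gradients
    gy = u[:, 1:] - u[:, :-1]
    contrast = jnp.mean(gx ** 2) + jnp.mean(gy ** 2)
    exposedness = -jnp.mean((u - 0.5) ** 2)    # keep values away from 0 and 1
    return contrast + 0.5 * exposedness        # 0.5 trade-off is an assumption

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    k1, k2 = jax.random.split(key)
    dark = 0.3 * jax.random.uniform(k1, (32, 32))         # stand-in exposures
    bright = 0.7 + 0.3 * jax.random.uniform(k2, (32, 32))
    stack = jnp.stack([dark, bright])
    logits = jnp.zeros_like(stack)
    grad_fn = jax.grad(score)
    for _ in range(200):                       # plain gradient ascent
        logits = logits + 0.5 * grad_fn(logits, stack)
    u = fused(logits, stack)
    print(u.shape, float(u.min()), float(u.max()))

Because the weights come from a softmax, the output can never leave the range spanned by the input images, which reflects the convex-combination constraint mentioned in the abstract.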
dc.description.number | 1 | en_US |
dc.description.sectionheaders | Articles | en_US |
dc.description.seriesinformation | Computer Graphics Forum | en_US |
dc.description.volume | 35 | en_US |
dc.identifier.doi | 10.1111/cgf.12690 | en_US |
dc.identifier.uri | https://doi.org/10.1111/cgf.12690 | en_US |
dc.publisher | Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | image fusion | en_US |
dc.subject | multispectral imaging | en_US |
dc.subject | exposure fusion | en_US |
dc.subject | decolourization | en_US |
dc.subject | contrast | en_US |
dc.subject | variational | en_US |
dc.subject | I.3.3 [Computer Graphics]: Picture/Image Generation - Display algorithms; I.4.3 [Image Processing and Computer Vision]: Enhancement - Grayscale manipulation; I.4.8 [Image Processing and Computer Vision]: Scene Analysis - Color | en_US |
dc.subject | Photometry | en_US |
dc.subject | Sensor Fusion | en_US |
dc.title | Variational Image Fusion with Optimal Local Contrast | en_US |