Browsing by Author "Fu, Qiang"
Now showing 1 - 2 of 2
Item
Linear Polarization Demosaicking for Monochrome and Colour Polarization Focal Plane Arrays
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Qiu, Simeng; Fu, Qiang; Wang, Congli; Heidrich, Wolfgang; Benes, Bedrich and Hauser, Helwig
Division‐of‐focal‐plane (DoFP) polarization image sensors allow for snapshot imaging of linear polarization effects with inexpensive and straightforward setups. However, conventional interpolation-based image reconstruction methods for such sensors produce unreliable and noisy estimates of quantities such as Degree of Linear Polarization (DoLP) or Angle of Linear Polarization (AoLP). In this paper, we propose a polarization demosaicking algorithm by inverting the polarization image formation model for both monochrome and colour DoFP cameras. Compared to previous interpolation methods, our approach can significantly reduce noise-induced artefacts and drastically increase the accuracy in estimating polarization states. We evaluate and demonstrate the performance of the methods on a new high‐resolution colour polarization dataset. Simulation and experimental results show that the proposed reconstruction and analysis tools offer an effective solution to polarization imaging.

Item
Transfer Deep Learning for Reconfigurable Snapshot HDR Imaging Using Coded Masks
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Alghamdi, Masheal; Fu, Qiang; Thabet, Ali; Heidrich, Wolfgang; Benes, Bedrich and Hauser, Helwig
High dynamic range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this paper we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware and building a deep learning algorithm to reconstruct the HDR image. We leverage transfer learning to overcome the lack of sufficiently large HDR datasets. We show how transferring from a different large‐scale task (image classification on ImageNet) leads to considerable improvements in HDR reconstruction. We achieve a reconfigurable HDR camera design that does not require custom sensors, and instead can be reconfigured between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware–software solution offers a flexible yet robust way to modulate per‐pixel exposures, and the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparison results show that our method outperforms the state of the art in terms of visual perception quality.
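As background for the first item above: DoLP and AoLP are derived from the linear Stokes parameters, which in turn come from the four polarizer orientations of a DoFP super-pixel. The sketch below shows only this standard post-processing step, not the paper's inverse-model demosaicking; the function names are illustrative, and it assumes the four channels have already been reconstructed to full resolution.

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    # Intensities behind linear polarizers at 0, 45, 90, 135 degrees,
    # i.e. the four orientations of a DoFP super-pixel (full resolution).
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal component
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-8):
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization, in [0, 1]
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization, in [-pi/2, pi/2]
    return dolp, aolp

Because DoLP and AoLP involve differences and ratios of noisy channel estimates, interpolation errors in the individual channels are amplified at this step, which is why the paper reconstructs the polarization states by inverting the image formation model instead.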
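For the second item, the hardware idea is that a per-pixel transmission mask attenuates each pixel by a different, known factor, so a single limited-bit-depth exposure preserves information across a wider range of scene radiances, and the learned network then inverts this encoding. The following is a minimal, hypothetical sketch of simulating such a coded capture (e.g. for generating training pairs); the function name, the noise-free sensor model, and the normalisation convention are assumptions, not the paper's pipeline.

import numpy as np

def simulate_coded_capture(hdr_radiance, mask, bit_depth=10):
    # hdr_radiance: HxW scene radiance, scaled so that 1.0 corresponds to the
    #               saturation level of an unmasked pixel.
    # mask:         HxW per-pixel transmission in (0, 1], the spatially varying
    #               modulation pattern placed in front of the sensor.
    levels = 2**bit_depth - 1
    modulated = hdr_radiance * mask                          # per-pixel exposure modulation
    raw = np.clip(np.round(modulated * levels), 0, levels)   # quantisation and saturation
    return raw / levels                                      # normalised coded capture

A reconstruction network would take this coded capture (and, optionally, the calibrated mask) as input and regress the HDR radiance; removing the mask and skipping this modulation step corresponds to the conventional imaging mode mentioned in the abstract.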