Eurographics Local Chapter Events
Browsing Eurographics Local Chapter Events by Subject "3D imaging"
Item
Deep Tracking for Robust Real-time Object Scanning (The Eurographics Association, 2022)
Lombardi, Marco; Savardi, Mattia; Signoroni, Alberto; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Nowadays, a high-fidelity 3D model can be obtained easily by means of handheld optical scanners, which offer a good level of reconstruction quality, portability, and low scan-to-data latency. However, it is well known that the tracking process can be critical for such devices: sub-optimal lighting conditions, smooth surfaces in the scene, occluded views, and repetitive patterns are all sources of error. In this scenario, recent disruptive technologies such as sparse convolutional neural networks have been tailored to address common problems in 3D vision and analysis. Our work aims to integrate the most promising of these solutions into an operating framework that can then be used to achieve compelling results in real-time 3D reconstruction. Several scenes from a dataset containing dense views of objects are tested using our proposed pipeline and compared with the current state of the art in online reconstruction.

Item
A Stereo-Integrated Novel View Synthesis Pipeline for the Enhancement of Road Surface Reconstruction Dataset (The Eurographics Association, 2024)
Zhan, Mochuan; Morley, Terence; Turner, Martin; Hunter, David; Slingsby, Aidan
This proposal outlines a novel view synthesis pipeline designed for road reconstruction in autonomous driving scenarios that leverages virtual camera technology to synthesise images from unvisited camera poses, thereby enhancing and expanding current datasets. It consists of three main steps: data acquisition; data preprocessing and fusion; and, most importantly, novel 3D view synthesis with geometric priors. The modular design allows each component to be independently optimised and upgraded, ensuring flexibility and adaptability to various datasets and task requirements. The proposed approach aims to improve the robustness, realism, and photometric consistency of novel view synthesis, effectively handling dynamic scenes and varying lighting conditions. Additionally, this research plans to open-source a low-cost stereo camera hardware solution with the included software implementation.

Item
VarIS: Variable Illumination Sphere for Facial Capture, Model Scanning, and Spatially Varying Appearance Acquisition (The Eurographics Association, 2023)
Baron, Jessica; Li, Xiang; Joshi, Parisha; Itty, Nathaniel; Greene, Sarah; Dhillon, Daljit Singh J.; Patterson, Eric; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
We introduce VarIS, our Variable Illumination Sphere – a multi-purpose system for acquiring and processing real-world geometric and appearance data for computer-graphics research and production. Its key applications among many are (1) human-face capture, (2) model scanning, and (3) spatially varying material acquisition. Facial capture requires high-resolution cameras at multiple viewpoints, photometric capabilities, and a swift process due to human movement. Acquiring a digital version of a physical model is somewhat similar but with different constraints for image processing and more allowable time. Each requires detailed estimations of geometry and physically based shading properties.
Measuring spatially varying light-scattering properties requires spanning four dimensions of illumination and viewpoint with angular, spatial, and spectral accuracy, and this process can also be assisted using multiple simultaneous viewpoints or rapid switching of lights, with no movement necessary. VarIS is a system of hardware and software for spherical illumination and imaging that has been custom designed and developed by our team. It was inspired by Light Stages and goniophotometers, but it costs less through the use of primarily off-the-shelf components and additionally extends capabilities beyond these devices. In this paper, we describe the unique system and its contributions, including practical details that could assist other researchers and practitioners.
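Note on the "four dimensions of illumination and viewpoint" mentioned in the VarIS abstract above: they correspond to the angular parameterization of a spatially varying reflectance function. As a minimal sketch in standard notation (ours, not taken from the paper), the quantity such a device samples can be written as

\[
  f_r(\mathbf{x}, \lambda, \omega_i, \omega_o),
  \qquad
  \omega_i = (\theta_i, \phi_i), \quad
  \omega_o = (\theta_o, \phi_o),
\]

where \(\mathbf{x}\) is the surface position (spatial accuracy), \(\lambda\) is the wavelength (spectral accuracy), and the two angles each of the incident-light direction \(\omega_i\) and the viewing direction \(\omega_o\) make up the four angular dimensions that the illumination and imaging sphere must span.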