Browsing by Author "Corsini, Massimiliano"
Now showing 1 - 6 of 6
Item: Another Brick in the Wall: Improving the Assisted Semantic Segmentation of Masonry Walls (The Eurographics Association, 2020)
Authors: Pavoni, Gaia; Giuliani, Francesca; De Falco, Anna; Corsini, Massimiliano; Ponchio, Federico; Callieri, Marco; Cignoni, Paolo
Editors: Spagnuolo, Michela; Melero, Francisco Javier
In Architectural Heritage, the interpretation of masonry is an essential instrument for analyzing construction phases, assessing structural properties, and monitoring the state of conservation. This work is generally carried out by specialists who, based on visual observation and their knowledge, manually annotate ortho-images of the masonry generated by photogrammetric surveys. The result is a set of vectorial thematic maps segmented according to construction technique (isolating areas of homogeneous materials, structure, or texture) or state of conservation, including degradation areas and damaged parts. This time-consuming manual work, often done with tools that were not designed for this purpose, represents a bottleneck in the documentation and management workflow and is a severely limiting factor in monitoring large-scale monuments (e.g., city walls). This paper explores the potential of AI-based solutions to improve the efficiency of masonry annotation in Architectural Heritage. This experimentation aims at providing interactive tools that support and empower the current workflow, benefiting from specialists' expertise.

Item: Enhanced Visualization of Detected 3D Geometric Differences (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Palma, Gianpaolo; Sabbadin, Manuele; Corsini, Massimiliano; Cignoni, Paolo
Editors: Chen, Min; Benes, Bedrich
The wide availability of 3D acquisition devices makes their use for shape monitoring viable.
Current techniques for the analysis of time-varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting, and coverage). On the other hand, effectively visualizing such detected changes is challenging when we also want to show the original appearance of the 3D model. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences detected as significant; at the same time, it visually hides the other negligible, yet visible, variations. The main idea is to use two distinct screen-space, time-based interpolation functions: one for the significant 3D differences and one for the small variations to hide. We validated the proposed approach in a user study on different classes of datasets, proving the objective and subjective effectiveness of the method.
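The "two distinct screen-space time-based interpolation functions" idea in the entry above can be sketched as a per-pixel blend between the two renderings: a steep ramp for pixels flagged as significant differences (so the switch is visually abrupt and draws attention) and a slow smoothstep fade for negligible variations (so they pass unnoticed). The specific curves below are illustrative assumptions, not the paper's actual functions.

```python
import numpy as np

def blend_weight(t, significant_mask):
    """Per-pixel blend weight at time t in [0, 1].

    significant_mask is a boolean (H, W) array marking pixels whose
    geometric difference was detected as significant.
    """
    # Significant differences: steep ramp around t = 0.5, so the model
    # switch happens almost as a pop (hypothetical curve).
    w_sig = np.clip(6.0 * (t - 0.5) + 0.5, 0.0, 1.0)
    # Negligible variations: slow smoothstep cross-fade that masks the
    # transition (hypothetical curve).
    w_neg = t * t * (3.0 - 2.0 * t)
    return np.where(significant_mask, w_sig, w_neg)

def blend_frames(img_a, img_b, t, significant_mask):
    """Screen-space blend of the two rendered frames (H, W, 3) at time t."""
    w = blend_weight(t, significant_mask)[..., None]  # (H, W, 1) for broadcasting
    return (1.0 - w) * img_a + w * img_b
```

At t = 0 both functions show model A and at t = 1 both show model B; they differ only in how quickly each pixel transitions in between, which is what lets significant changes stand out while small variations are hidden.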
Item: Evaluating Deep Learning Methods for Low Resolution Point Cloud Registration in Outdoor Scenarios (The Eurographics Association, 2021)
Authors: Siddique, Arslan; Corsini, Massimiliano; Ganovelli, Fabio; Cignoni, Paolo
Editors: Frosini, Patrizio; Giorgi, Daniela; Melzi, Simone; Rodolà, Emanuele
Point cloud registration is a fundamental task in 3D reconstruction and environment perception. We explore the performance of modern Deep Learning-based registration techniques, in particular Deep Global Registration (DGR) and Learning Multiview Registration (LMVR), on outdoor real-world data consisting of thousands of range maps of a building acquired by a Velodyne LIDAR mounted on a drone. We used these pairwise registration methods in a sequential pipeline to obtain an initial rough registration, whose output can then be further globally refined. This simple registration pipeline allows us to assess whether these modern methods are able to deal with such low-quality data. Our experiments demonstrated that, despite some design choices adopted to take into account the peculiarities of the data, more work is required to improve the registration results.

Item: STAG 2019: Frontmatter (Eurographics Association, 2019)
Authors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
Editors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero

Item: A Validation Tool For Improving Semantic Segmentation of Complex Natural Structures (The Eurographics Association, 2019)
Authors: Pavoni, Gaia; Corsini, Massimiliano; Palma, Marco; Scopigno, Roberto
Editors: Cignoni, Paolo; Miguel, Eder
The automatic recognition of natural structures is a challenging task in the supervised learning field. Complex morphologies are difficult to detect both by the networks, which may suffer from generalization issues, and by human operators, which affects the consistency of training datasets.
Manually annotating biological structures is not comparable to the generic task of detecting an object (a car, a cat, or a flower) within an image. Biological structures are more similar to textures, and specimen borders exhibit intricate shapes. In this specific context, manual labelling is very sensitive to human error. Interactive validation of the predictions is a valuable resource to improve network performance and address the inaccuracy caused by the lack of annotation consistency among human operators reported in the literature. The proposed tool, inspired by the Yes/No Answer paradigm, integrates the semantic segmentation results coming from a CNN with the previous human labeling, allowing a more accurate annotation of thousands of instances in a short time. At the end of the validation, it is possible to obtain corrected statistics or to export the integrated dataset and re-train the network.

Item: ViDA 3D: Towards a View-based Dataset for Aesthetic prediction on 3D models (The Eurographics Association, 2020)
Authors: Angelini, Mattia; Ferrulli, Vito; Banterle, Francesco; Corsini, Massimiliano; Pascali, Maria Antonietta; Cignoni, Paolo; Giorgi, Daniela
Editors: Biasotti, Silvia; Pintus, Ruggero; Berretti, Stefano
We present the ongoing effort to build the first benchmark dataset for aesthetic prediction on 3D models. The dataset is built on top of Sketchfab, a popular platform for 3D content sharing. In our dataset, the visual 3D content is aligned with aesthetics-related metadata: each 3D model is associated with a number of snapshots taken from different camera positions, the number of times the model was viewed between its upload and its retrieval, the number of likes the model received, and the tags and comments received from users. This metadata provides precious supervisory information for data-driven research on 3D visual attractiveness and preference prediction. The paper's contribution is twofold.
First, we introduce an interactive platform for visualizing data about Sketchfab. We report a detailed qualitative and quantitative analysis of numerical scores (views and likes collected by 3D models) and textual information (tags and comments) for different 3D object categories. The analysis of the content of Sketchfab provided the basis for selecting a reasoned subset of annotated models. The second contribution is the first version of the ViDA 3D dataset, which contains the full set of content required for data-driven approaches to 3D aesthetic analysis. While similar datasets are available for images, to our knowledge this is the first attempt to create a benchmark for aesthetic prediction on 3D models. We believe our dataset can be a great resource to boost research on this hot and far-from-solved problem.
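As a rough illustration of the kind of per-model record the ViDA 3D description implies (snapshots from several viewpoints plus views, likes, tags, and comments as supervisory metadata), one might structure an entry as below. All class and field names are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ViDA3DEntry:
    """Hypothetical record for one 3D model in a ViDA-3D-style dataset."""
    model_id: str                  # Sketchfab model identifier
    snapshot_paths: List[str]      # renders taken from different camera positions
    views: int                     # times viewed between upload and retrieval
    likes: int                     # likes collected by the model
    tags: List[str] = field(default_factory=list)
    comments: List[str] = field(default_factory=list)

    def popularity(self) -> float:
        # One possible weak supervision signal: likes normalized by views.
        return self.likes / self.views if self.views else 0.0
```

A ratio like `popularity()` is only one conceivable way to turn the raw counts into a preference label; the abstract itself does not specify how the metadata should be aggregated.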