An approach for precise 2D/3D semantic annotation of spatially-oriented images for in situ visualization applications
dc.contributor.author | Manuel, Adeline | en_US |
dc.contributor.author | Gattet, Eloi | en_US |
dc.contributor.author | Luca, Livio De | en_US |
dc.contributor.author | Veron, Philippe | en_US |
dc.date.accessioned | 2015-04-27T14:57:45Z | |
dc.date.available | 2015-04-27T14:57:45Z | |
dc.date.issued | 2013 | en_US |
dc.description.abstract | Innovative digital tools now make it possible to deepen our knowledge of historic monuments in the field of cultural heritage preservation and valorization. These tools aim to help experts create, enrich, and share information about historical buildings. Among the various documentary sources, photographs offer a high level of detail about shapes and colors. With the development of image analysis and image-based modeling techniques, large sets of images can be spatially oriented around a digital mock-up. For these reasons, digital photographs prove to be an easy-to-use, affordable, and flexible medium for heritage documentation. This article first presents an approach for 2D/3D semantic annotation of a set of spatially oriented photographs (whose positions and orientations in space are automatically estimated). It then focuses on a method for displaying those annotations on new images acquired in situ by mobile devices. First, an automated image-based reconstruction method produces 3D information (specifically 3D coordinates) by processing a large image set. The images are then semantically annotated, and a process uses the previously generated 3D information inherent to the images to transfer the annotations between them. As a consequence, this protocol provides a simple way to finely annotate a large quantity of images at once instead of one by one. Because these image annotations are directly linked to 3D information, they can be stored as 3D files. To bring up the information related to a building on screen, the user takes a picture in situ. An image-processing method estimates the orientation parameters of this new photograph within the already oriented image base. The annotations can then be precisely projected onto the oriented picture and sent back to the user. In this way, a continuity of information is established from the initial acquisition to the in situ visualization. | en_US |
dc.description.sectionheaders | Track 2, Full Papers | en_US |
dc.description.seriesinformation | Digital Heritage International Congress | en_US |
dc.identifier.doi | 10.1109/DigitalHeritage.2013.6743752 | en_US |
dc.identifier.uri | https://doi.org/10.1109/DigitalHeritage.2013.6743752 | en_US |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1109/DigitalHeritage | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Semantic annotations | en_US |
dc.subject | dense image matching | en_US |
dc.subject | image processing | en_US |
dc.subject | photogrammetry | en_US |
dc.title | An approach for precise 2D/3D semantic annotation of spatially-oriented images for in situ visualization applications | en_US |