Browsing by Author "Mara, Hubert"
Now showing 1 - 3 of 3
Item: Lithic Feature Identification in 3D based on Discrete Morse Theory (The Eurographics Association, 2022)
Authors: Bullenkamp, Jan Philipp; Linsel, Florian; Mara, Hubert
Editors: Ponchio, Federico; Pintus, Ruggero
Neanderthals and our human ancestors coexisted for a long period of time and shared many things in common, including the production of tools, which are among the few remaining artefacts providing a possible insight into their different paths of evolution and extinction. These earliest tools were made of stone using different strategies to reduce a rather round stone to a sharp tool for slicing, scraping, piercing or chopping. The type of strategy is assumed to be correlated with either our ancestors or the Neanderthals. Recent research uses computational methods to analyse the shapes of lithic artefacts using Geometric MorphoMetrics (GMM) as known in anthropology. As the main criteria for determining a production strategy are morphologic measures such as shape, size, and the roughness of convex ridges and concave scars, we propose a new method based on discrete Morse theory for surface segmentation to enable GMM analysis in future work. We present the theoretical concepts for the proposed segmentation, which have been applied to a dataset available via Open Access. For validation we created a statistically significant subset of segmented simple and complex lithic tools, which were manually segmented by an expert as ground truth. We finally show results of our experiments on this real dataset.

Item: R-CNN based Polygonal Wedge Detection Learned from Annotated 3D Renderings and Mapped Photographs of Open Data Cuneiform Tablets (The Eurographics Association, 2023)
Authors: Stötzner, Ernst; Homburg, Timo; Bullenkamp, Jan Philipp; Mara, Hubert
Editors: Bucciero, Alberto; Fanini, Bruno; Graf, Holger; Pescarin, Sofia; Rizvic, Selma
Motivated by the demands of Digital Assyriology and the challenges of detecting cuneiform signs, we propose a new approach using an R-CNN architecture to classify and localize wedges. We utilize the 3D models of 1977 cuneiform tablets from the Frau Professor Hilprecht Collection, available as open data. About 500 of these tablets have a transcription available in the Cuneiform Digital Library Initiative (CDLI) database. We annotated 21,000 cuneiform signs as well as 4,700 wedges, resulting in the new open-data Mainz Cuneiform Benchmark Dataset (MaiCuBeDa), including metadata, cropped signs, and partial wedge annotations. The latter is also a good basis for manual paleography. Our inputs are MSII renderings computed using the GigaMesh Software Framework and photographs with the annotations automatically transferred from the renderings. Our approach consists of a pipeline with two components: a sign detector and a wedge detector. The sign detector uses a RepPoints model with a ResNet18 backbone to locate individual cuneiform characters in the tablet segment image. The signs are then cropped based on the sign locations and fed into the wedge detector. The wedge detector is based on the idea of the Point R-CNN approach. It uses a Feature Pyramid Network (FPN) and RoI Align to predict the positions and classes of the wedges. The method is evaluated using different hyperparameters, and post-processing techniques such as Non-Maximum Suppression (NMS) are applied for refinement. The proposed method shows promising results in cuneiform wedge detection.
Our detector was evaluated using the Gottstein system and the PaleoCodage encoding. Our results show that the sign detector performs better when trained on 3D renderings than on photographs; accuracy on photographs improves when 3D renderings are included in the training data. Overall, our pipeline achieves decent results, with some limitations due to the relatively small amount of data. However, even small amounts of high-quality renderings of 3D datasets with expert annotations dramatically improved sign detection.

Item: Visualizing Networks of Maya Glyphs by Clustering Subglyphs (The Eurographics Association, 2018)
Authors: Bogacz, Bartosz; Feldmann, Felix; Prager, Christian; Mara, Hubert
Editors: Sablatnig, Robert; Wimmer, Michael
Deciphering Maya writing is an ongoing process that began in the early 19th century. Among the reasons why the Maya hieroglyphic script and language are still undeciphered are inexpertly created drawings of Maya writing, which have resulted in a large number of misinterpretations concerning the contents of these glyphs. As a consequence, the decipherment of Maya writing has experienced several setbacks. Modern research in the domain of cultural heritage requires a maximum amount of precision in capturing and analyzing artifacts, so that scholars can work on preferably unmodified data as much as possible. This work presents an approach to visualize similar Maya glyphs and parts thereof and to enable the discovery of novel connections between glyphs based on a machine learning pipeline. The algorithm is demonstrated on 3D scans of sculptured monuments, which have been filtered using a Multiscale Integral Invariant (MSII) filter and then projected as a 2D image. Maya glyphs are segmented from the 2D images using projection profiles to generate a grid of columns and rows. Then, the glyphs themselves are segmented using the random walker approach, where background and foreground are separated based on the surface curvature of the original 3D surface. The retrieved subglyphs are first clustered by their sizes into a set of common sizes. For each glyph a feature vector based on Histograms of Oriented Gradients (HOG) is computed and used for a subsequent hierarchical clustering. The resulting clusters of glyph parts are used to discover and visualize connections between glyphs using a force-directed network layout.
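The cuneiform wedge detector above applies Non-Maximum Suppression (NMS) as a post-processing step. The greedy form of that technique can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the box format (x1, y1, x2, y2) and IoU threshold are assumptions:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy Non-Maximum Suppression over axis-aligned boxes (x1, y1, x2, y2)."""
    order = np.argsort(scores)[::-1]          # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]       # drop boxes overlapping the kept one
    return keep
```

A detection that heavily overlaps a higher-scoring one is suppressed, so each wedge is reported once.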
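The Maya-glyph pipeline segments glyphs from the 2D images using projection profiles. The idea can be sketched for one axis as follows (a simplified illustration under the assumption of a binary ink image; the authors' segmentation also handles columns and noise):

```python
import numpy as np

def split_rows(binary_img):
    """Find row bands of a binary image via its horizontal projection profile.

    Returns (start, end) row-index pairs of bands that contain foreground pixels;
    applying the same idea along axis 0 yields columns, giving a grid of cells.
    """
    profile = binary_img.sum(axis=1)          # amount of "ink" per image row
    ink = profile > 0
    bands, start = [], None
    for r, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = r                          # band begins
        elif not has_ink and start is not None:
            bands.append((start, r))           # band ends at first empty row
            start = None
    if start is not None:
        bands.append((start, len(ink)))        # band runs to the image border
    return bands
```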
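The final steps of that pipeline compute a HOG-based feature vector per glyph and cluster the vectors hierarchically. A minimal sketch of that combination, using a single whole-patch orientation histogram in place of a full cell-based HOG descriptor and SciPy's agglomerative clustering (both simplifications relative to the paper):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def hog_vector(img, n_bins=9):
    """Coarse HOG-style descriptor: one orientation histogram over the whole patch."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2-normalise

def cluster_glyphs(patches, n_clusters=2):
    """Agglomerative (Ward) clustering of glyph patches by their HOG vectors."""
    feats = np.stack([hog_vector(p) for p in patches])
    tree = linkage(feats, method="ward")           # bottom-up merge hierarchy
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```

Patches with similar stroke orientations end up in the same cluster, which is the grouping the force-directed network visualization then draws on.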