3DOR 19
Browsing 3DOR 19 by Title
Now showing 1 - 17 of 17
Item: A 3D CAD Assembly Benchmark (The Eurographics Association, 2019)
Lupinetti, Katia; Giannini, Franca; Monti, Marina; Pernot, Jean-Philippe
Editors: Biasotti, Silvia; Lavoué, Guillaume; Veltkamp, Remco

Evaluating the effectiveness of systems for the retrieval of 3D assembly models is not trivial. CAD assembly models can be considered similar according to different criteria and at different levels (i.e. globally or partially). Indeed, besides the shape criterion, CAD assembly models have further characteristic elements, such as the mutual position of parts or the type of connecting joint. Thus, when retrieving 3D models, these characteristics can match over the entire model (globally) or just in local subparts (partially). The available 3D model repositories do not include complex CAD assembly models and are generally suitable for evaluating one characteristic at a time, neglecting important properties in the evaluation of assembly similarity. In this paper, we present a benchmark for the evaluation of content-based retrieval systems for 3D assembly models.
A crucial feature of this benchmark is its ability to consider the various aspects that characterize models of mechanical assemblies.

Item: 3DOR 2019: Frontmatter (Eurographics Association, 2019)
Biasotti, Silvia; Lavoué, Guillaume; Veltkamp, Remco

Item: Classification in Cryo-Electron Tomograms (The Eurographics Association, 2019)
Gubins, Ilja; Schot, Gijs van der; Veltkamp, Remco C.; Förster, Friedrich; Du, Xuefeng; Zeng, Xiangrui; Zhu, Zhenxi; Chang, Lufan; Xu, Min; Moebel, Emmanuel; Martinez-Sanchez, Antonio; Kervrann, Charles; Lai, Tuan M.; Han, Xusi; Terashi, Genki; Kihara, Daisuke; Himes, Benjamin A.; Wan, Xiaohua; Zhang, Jingrong; Gao, Shan; Hao, Yu; Lv, Zhilong; Yang, Zhidong; Ding, Zijun; Cui, Xuefeng; Zhang, Fa

Different imaging techniques allow us to study the organization of life at different scales. Cryo-electron tomography (cryo-ET) can three-dimensionally visualize the cellular architecture, as well as the structural details of macro-molecular assemblies, under near-native conditions. Due to the beam sensitivity of biological samples, an individual tomogram has a maximal resolution of 5 nanometers. By averaging volumes, each depicting copies of the same type of molecule, resolutions beyond 4 Å have been achieved. Key in this process is the ability to localize and classify the components of interest, which is challenging due to the low signal-to-noise ratio. Innovation in computational methods remains key to mining biological information from the tomograms. To promote such innovation, we organize this SHREC track and provide a simulated dataset with the goal of establishing a benchmark for the localization and classification of biological particles in cryo-electron tomograms. The publicly available dataset contains ten reconstructed tomograms obtained from a simulated cell-like volume.
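The volume-averaging step mentioned in this abstract (averaging many noisy copies of the same molecule to beat the per-tomogram resolution limit) can be illustrated with a minimal NumPy sketch; the volume size, noise level and copy counts below are purely illustrative and not taken from the track:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth density map of one particle (illustrative only).
signal = rng.normal(size=(16, 16, 16))

def average_copies(signal, n_copies, noise_sigma=5.0):
    """Average n_copies independently noisy observations of the same volume."""
    noisy = signal + rng.normal(scale=noise_sigma,
                                size=(n_copies,) + signal.shape)
    return noisy.mean(axis=0)

# Residual noise shrinks roughly as 1/sqrt(n): averaging more copies
# of the same molecule yields a cleaner map.
err_10 = np.abs(average_copies(signal, 10) - signal).mean()
err_1000 = np.abs(average_copies(signal, 1000) - signal).mean()
assert err_1000 < err_10
```

This is why accurate localization and classification matter: only correctly grouped copies of the same particle can be averaged together.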
Each volume contains twelve different types of proteins, varying in size and structure. Participants had access to 9 of the 10 cell-like ground-truth volumes for learning-based methods, and had to predict protein class and location in the test tomogram. Five groups submitted eight sets of results, using seven different methods. While our sample size gives only an anecdotal overview of current approaches in cryo-ET classification, we believe it shows trends and highlights interesting areas for future work. The results show that learning-based approaches are the current trend in cryo-ET classification research, and that end-to-end 3D learning-based approaches in particular achieve the best performance.

Item: CMH: Coordinates Manifold Harmonics for Functional Remeshing (The Eurographics Association, 2019)
Marin, Riccardo; Melzi, Simone; Musoni, Pietro; Bardon, Filippo; Tarini, Marco; Castellani, Umberto

In digital world reconstruction, the 2-dimensional surfaces of real objects are often obtained as polygonal meshes after an acquisition procedure using 3D sensors. However, such a representation requires considerable manual effort from highly skilled experts to correct the irregularity of the tessellation and make it suitable for professional applications, such as those in the gaming or movie industry. Moreover, for modelling and animation purposes it is often required that the same connectivity is shared among two or more different shapes. In this paper we propose a new method that exploits a remeshing-by-matching approach, where the observed noisy shape inherits a regular tessellation from a target shape that already satisfies the professional constraints. A fully automatic pipeline is introduced, based on a variation of the functional mapping framework. In particular, a new set of basis functions, namely the Coordinates Manifold Harmonics (CMH), is specifically designed for this tessellation transfer task.
In our experiments, an exhaustive quantitative and qualitative evaluation is reported for human body shapes in T-pose, where the effectiveness of the proposed functional remeshing is clearly shown in comparison with other methods.

Item: Depth-Based Face Recognition by Learning from 3D-LBP Images (The Eurographics Association, 2019)
Neto, Joao Baptista Cardia; Marana, Aparecido Nilceu; Ferrari, Claudio; Berretti, Stefano; Bimbo, Alberto Del

In this paper, we propose a hybrid framework for face recognition from depth images that is both effective and efficient. It consists of two main stages: first, the 3D-LBP operator is applied to the raw depth data of the face and used to build the corresponding descriptor images (DIs). However, this operator quantizes relative depth differences over/under ±7 into the same bin, so as to generate a fixed-dimensional descriptor. To account for this behavior, we also propose a modification of the traditional operator that encodes depth differences using a sigmoid function. Then, a not-so-deep (shallow) convolutional neural network (SCNN) is designed that learns from the DIs. This architecture showed two main advantages over the direct application of a deep CNN (DCNN) to depth images of the face: on the one hand, the DIs are capable of enriching the raw depth data, emphasizing relevant traits of the face while reducing acquisition noise. This proved decisive in improving the learning capability of the network. On the other hand, the DIs capture low-level features of the face, thus playing for the SCNN the role that the first layers play in a DCNN architecture. In this way, the SCNN we have designed has far fewer layers and can be trained more easily and faster.
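The contrast between the traditional hard-clipped encoding and a sigmoid encoding of depth differences can be sketched as follows; the function names, the slope parameter `alpha`, and the values are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def encode_clipped(d, t=7):
    # Traditional 3D-LBP-style behavior: all differences beyond +/-t
    # collapse into the same saturated bin.
    return np.clip(np.rint(d), -t, t).astype(int)

def encode_sigmoid(d, alpha=0.5):
    # Sigmoid re-encoding (alpha is an illustrative slope): large
    # differences are compressed smoothly instead of hard-clipped,
    # so ordering information beyond the threshold survives.
    return 1.0 / (1.0 + np.exp(-alpha * np.asarray(d, dtype=float)))

diffs = np.array([-20, -7, -1, 0, 1, 7, 20])
clipped = encode_clipped(diffs)
smooth = encode_sigmoid(diffs)

# Clipping maps 7 and 20 to the same code; the sigmoid keeps them distinct.
assert clipped[-1] == clipped[-2]
assert smooth[-1] > smooth[-2]
```

Both encodings still produce a fixed-dimensional descriptor; the sigmoid simply avoids throwing away the magnitude of large depth differences.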
Extensive experiments on low- and high-resolution depth face datasets confirmed the above advantages, showing results that are comparable or superior to the state of the art while using far less training data, training time, and network memory.

Item: Extended 2D Scene Image-Based 3D Scene Retrieval (The Eurographics Association, 2019)
Abdul-Rashid, Hameed; Yuan, Juefei; Li, Bo; Lu, Yijuan; Schreck, Tobias; Bui, Ngoc-Minh; Do, Trong-Le; Holenderski, Mike; Jarnikov, Dmitri; Le, Khiem T.; Menkovski, Vlado; Nguyen, Khac-Tuan; Nguyen, Thanh-An; Nguyen, Vinh-Tiep; Ninh, Tu V.; Rey, Perez; Tran, Minh-Triet; Wang, Tianyang

In the months following our SHREC 2018 - 2D Scene Image-Based 3D Scene Retrieval (SceneIBR2018) track, we extended the number of scene categories from the initial 10 classes of the SceneIBR2018 benchmark to 30 classes, resulting in a new benchmark, SceneIBR2019, which has 30,000 scene images and 3,000 3D scene models. We therefore seek to further evaluate the performance of existing and new 2D scene image-based 3D scene retrieval algorithms on this extended and more comprehensive benchmark. Three groups from the Netherlands, the United States and Vietnam participated and collectively submitted eight runs. This report documents the evaluation of each method based on seven performance metrics, offers an in-depth discussion and analysis of the methods employed, and discusses future directions with the potential to address this task. Again, deep learning techniques demonstrated notable performance in terms of both accuracy and scalability when applied to this demanding retrieval task.
To further enrich the current state of 3D scene understanding and retrieval, our evaluation toolkit, all participating methods' results, and the comprehensive 2D/3D benchmark have all been made publicly available.

Item: Extended 2D Scene Sketch-Based 3D Scene Retrieval (The Eurographics Association, 2019)
Yuan, Juefei; Abdul-Rashid, Hameed; Li, Bo; Lu, Yijuan; Schreck, Tobias; Bui, Ngoc-Minh; Do, Trong-Le; Nguyen, Khac-Tuan; Nguyen, Thanh-An; Nguyen, Vinh-Tiep; Tran, Minh-Triet; Wang, Tianyang

Sketch-based 3D scene retrieval aims to retrieve 3D scene models given a user's hand-drawn 2D scene sketch. It is a brand new but also very challenging research topic in the field of 3D object retrieval, due to the semantic gap between the representations: 3D scene models or views differ from non-realistic 2D scene sketches. To boost this interesting research, we organized a 2D Scene Sketch-Based 3D Scene Retrieval track in SHREC'18, resulting in the SceneSBR2018 benchmark, which contains 10 scene classes. To make it more comprehensive, we have extended the number of scene categories from the initial 10 classes to 30 classes, resulting in a new and more challenging benchmark, SceneSBR2019, which has 750 2D scene sketches and 3,000 3D scene models. The objective of this track is therefore to further evaluate the performance and scalability of different 2D scene sketch-based 3D scene model retrieval algorithms on this extended and more comprehensive benchmark. In this track, two groups from the USA and Vietnam successfully submitted 4 runs. Based on 7 commonly used retrieval metrics, we evaluate their retrieval performance. We have also conducted a comprehensive analysis and discussion of these methods and proposed several future research directions for this challenging research topic.
Deep learning techniques have again proved their great potential in dealing with this challenging retrieval task, in terms of both retrieval accuracy and scalability to a larger dataset. We hope this publicly available benchmark, together with its evaluation results and source code, will further enrich and promote the 2D scene sketch-based 3D scene retrieval research area and its corresponding applications.

Item: Feature Curve Extraction on Triangle Meshes (The Eurographics Association, 2019)
Moscoso Thompson, Elia; Arvanitis, G.; Moustakas, Konstantinos; Hoang-Xuan, N.; Nguyen, E. R.; Tran, M.; Lejemble, T.; Barthe, L.; Mellado, N.; Romanengo, C.; Biasotti, S.; Falcidieno, Bianca

This paper presents the results of the SHREC'19 track: Feature Curve Extraction on Triangle Meshes. Given a model, the challenge consists in automatically extracting a subset of the mesh vertices that jointly represent a feature curve. As an optional task, participants were also asked to submit a similarity evaluation among the extracted feature curves. The various approaches presented by the participants are discussed, together with their results. The proposed methods highlight different points of view on the problem of feature curve extraction. It is interesting to see that the problem can be addressed with good results, despite the different approaches.

Item: Generalizing Discrete Convolutions for Unstructured Point Clouds (The Eurographics Association, 2019)
Boulch, Alexandre

Point clouds are unstructured and unordered data, as opposed to images. Thus, most machine learning approaches developed for images cannot be directly transferred to point clouds. This usually requires data transformations such as voxelization, inducing a possible loss of information.
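The voxelization mentioned above, and the information loss it induces, can be sketched with a simple occupancy grid; the grid size and the synthetic cloud below are purely illustrative:

```python
import numpy as np

def voxelize(points, grid=32):
    """Occupancy-grid voxelization of an (N, 3) point cloud.

    A common preprocessing step for applying 3D CNNs to unstructured
    points; distinct points falling into one cell become a single
    occupied voxel, which is the loss of information the abstract
    refers to.
    """
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    idx = np.floor((pts - lo) / (hi - lo + 1e-9) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    vol = np.zeros((grid, grid, grid), dtype=bool)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return vol

rng = np.random.default_rng(0)
cloud = rng.uniform(size=(10_000, 3))
vol = voxelize(cloud, grid=8)
# 10,000 points collapse into at most 8**3 = 512 occupied cells.
assert 0 < vol.sum() <= 8 ** 3
```

Continuous-kernel convolutions of the kind proposed in this paper avoid this step by operating on the raw point coordinates directly.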
In this paper, we propose a generalization of discrete convolutional neural networks (CNNs) able to deal with sparse input point clouds. We replace the discrete kernels by continuous ones. The formulation is simple, does not fix the input point cloud size, and can easily be used for neural network design, similarly to 2D CNNs. We present experimental results, competitive with the state of the art, on shape classification, part segmentation and semantic segmentation for large-scale clouds.

Item: Matching Humans with Different Connectivity (The Eurographics Association, 2019)
Melzi, S.; Marin, R.; Rodolà, E.; Castellani, U.; Ren, J.; Poulenard, A.; Wonka, P.; Ovsjanikov, M.

Object matching is a ubiquitous problem in computer science with particular relevance for many applications; property transfer between 3D models and statistical studies for learning are just some notable examples. The research community has spent considerable effort on this problem, and a large and growing set of innovative methods has been proposed for its solution. To provide a fair comparison among these methods, different benchmarks have been proposed. However, all these benchmarks are domain specific, e.g., real scans coming from the same acquisition pipeline, or synthetic watertight meshes with the same triangulation. To the best of our knowledge, no cross-dataset comparisons have been proposed to date. This track provides the first matching evaluation in terms of large connectivity changes between models that come from totally different modeling methods. We provide a dataset of 44 shapes with dense correspondences obtained by a highly accurate shape registration method (FARM).
Our evaluation shows that connectivity changes make object matching difficult, and we hope this will promote further research in matching shapes with wildly different connectivity.

Item: Monocular Image Based 3D Model Retrieval (The Eurographics Association, 2019)
Li, Wenhui; Liu, Anan; Nie, Weizhi; Song, Dan; Li, Yuqian; Wang, Weijie; Xiang, Shu; Zhou, Heyu; Bui, Ngoc-Minh; Cen, Yunchi; Chen, Zenian; Chung-Nguyen, Huy-Hoang; Diep, Gia-Han; Do, Trong-Le; Doubrovski, Eugeni L.; Duong, Anh-Duc; Geraedts, Jo M. P.; Guo, Haobin; Hoang, Trung-Hieu; Li, Yichen; Liu, Xing; Liu, Zishun; Luu, Duc-Tuan; Ma, Yunsheng; Nguyen, Vinh-Tiep; Nie, Jie; Ren, Tongwei; Tran, Mai-Khiem; Tran-Nguyen, Son-Thanh; Tran, Minh-Triet; Vu-Le, The-Anh; Wang, Charlie C. L.; Wang, Shijie; Wu, Gangshan; Yang, Caifei; Yuan, Meng; Zhai, Hao; Zhang, Ao; Zhang, Fan; Zhao, Sicheng

Monocular image based 3D object retrieval is a novel and challenging research topic in the field of 3D object retrieval. Given an RGB image captured in the real world, it aims to search for relevant 3D objects in a dataset. To advance this promising research, we organize this SHREC track and build the first monocular image based 3D object retrieval benchmark by collecting 2D images from ImageNet and 3D objects from popular 3D datasets such as NTU, PSB, ModelNet40 and ShapeNet. The benchmark contains 21,000 classified 2D images and 7,690 3D objects in 21 categories. This track attracted 9 groups from 4 countries and the submission of 20 runs. For a comprehensive comparison, 7 commonly used retrieval performance metrics have been used to evaluate retrieval performance. The evaluation results show that supervised cross-domain learning achieves superior retrieval performance (best NN: 97.4%) by bridging the domain gap with label information.
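The NN (nearest neighbour) metric quoted in these track reports is commonly computed as the fraction of queries whose single closest match is of the correct class. A toy sketch, with made-up distances and labels:

```python
import numpy as np

def nearest_neighbour_score(dist, query_labels, target_labels):
    """NN metric: fraction of queries whose closest target shares
    the query's class (queries and targets are distinct sets here,
    as in image-to-3D retrieval)."""
    nn = dist.argmin(axis=1)
    return float(np.mean(target_labels[nn] == query_labels))

# 3 image queries against 4 3D models; values are illustrative only.
dist = np.array([[0.1, 0.9, 0.8, 0.7],
                 [0.6, 0.2, 0.9, 0.8],
                 [0.9, 0.8, 0.3, 0.4]])
query_labels = np.array([0, 1, 2])
target_labels = np.array([0, 1, 1, 2])

# Queries 0 and 1 hit their own class; query 2's closest model has
# the wrong label, so the score is 2/3.
score = nearest_neighbour_score(dist, query_labels, target_labels)
```

Other metrics in the reports (First Tier, Second Tier, mAP, NDCG) generalize this idea from the first match to ranked lists.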
However, unsupervised cross-domain learning (best NN: 61.2%), which is more practical for real applications, remains a big challenge. Although we provided both view images and an OBJ file for each 3D model, all participants used the view images to represent the 3D model. An interesting direction for future work is to use the 3D information together with the 2D RGB information directly to solve the task of monocular image based 3D model retrieval.

Item: mpLBP: An Extension of the Local Binary Pattern to Surfaces based on an Efficient Coding of the Point Neighbours (The Eurographics Association, 2019)
Moscoso Thompson, Elia; Biasotti, Silvia; Digne, Julie; Chaine, Raphaëlle

The description of surface textures in terms of repeated colorimetric and geometric local surface variations is a crucial task for several applications, such as object interpretation or style identification. Recently, methods based on extensions to surface meshes of the Local Binary Pattern (LBP) or the Scale-Invariant Feature Transform (SIFT) descriptors have been proposed for geometric and colorimetric pattern retrieval and classification. With respect to previous work, we consider a novel LBP-based descriptor based on the assignment of the point neighbours into sectors of equal area and a non-uniform, multiple-ring sampling. Our method is able to deal with surfaces represented as point clouds. Experiments on different benchmarks confirm the competitiveness of the method within the existing literature, in terms of accuracy and computational complexity.

Item: Online Gesture Recognition (The Eurographics Association, 2019)
Caputo, F. M.; Burato, S.; Pavan, G.; Voillemin, T.; Wannous, H.; Vandeborre, J. P.; Maghoumi, M.; Taranta II, E. M.; Razmjoo, A.; LaViola Jr., J. J.; Manganaro, F.; Pini, S.; Borghi, G.; Vezzani, R.; Cucchiara, R.; Nguyen, H.; Tran, M. T.; Giachetti, A.

This paper presents the results of the Eurographics 2019 SHape Retrieval Contest track on online gesture recognition. The goal of this contest was to test state-of-the-art methods that can detect command gestures online from hand-movement tracking, on a basic benchmark where simple gestures are performed interleaved with other actions. Unlike previous contests and benchmarks on trajectory-based gesture recognition, we proposed an online gesture recognition task: rather than providing pre-segmented gestures, we asked the participants to find gestures within recorded trajectories. The results submitted by the participants show that online detection and recognition of sets of very simple gestures from 3D trajectories captured with a cheap sensor can be performed effectively. The best methods proposed could therefore be directly exploited to design effective gesture-based interfaces in different contexts, from Virtual and Mixed Reality applications to the remote control of home devices.

Item: POP: Full Parametric model Estimation for Occluded People (The Eurographics Association, 2019)
Marin, Riccardo; Melzi, Simone; Mitra, Niloy J.; Castellani, Umberto

In recent decades, we have witnessed advances in both hardware and associated algorithms, resulting in unprecedented access to volumes of 2D and, more recently, 3D data capturing human movement. We are no longer satisfied with recovering human pose as an image-space 2D skeleton, but seek to obtain a full 3D human body representation. The main challenges in acquiring 3D human shape from such raw measurements are identifying which parts of the data relate to body measurements and recovering from partial observations, often arising from severe occlusion: for example, a person occluded by a piece of furniture, or self-occluded in a profile view.
In this paper, we propose POP, a novel and efficient paradigm for the estimation and completion of human shape that produces a full parametric 3D model directly from single RGBD images, even under severe occlusion. At the heart of our method is a novel human body pose retrieval formulation that explicitly models and handles occlusion. The retrieved result is then refined by a robust optimization to yield a full representation of the human shape. We demonstrate our method on a range of challenging real-world scenarios and produce high-quality results not possible with competing alternatives. The method opens up exciting AR/VR application possibilities by working on 'in-the-wild' measurements of human motion.

Item: Protein Shape Retrieval Contest (The Eurographics Association, 2019)
Langenfeld, Florent; Axenopoulos, Apostolos; Benhabiles, Halim; Daras, Petros; Giachetti, Andrea; Han, Xusi; Hammoudi, Karim; Kihara, Daisuke; Lai, Tuan M.; Liu, Haiguang; Melkemi, Mahmoud; Mylonas, Stelios K.; Terashi, Genki; Wang, Yufan; Windal, Feryal; Montes, Matthieu

This track aimed at retrieving the evolutionary classification of proteins based on their surface meshes only. Given that proteins are dynamic, non-rigid objects, and that evolution tends to conserve patterns related to their activity and function, this track poses a challenging problem using biologically relevant molecules. We evaluated the performance of 5 different algorithms and analyzed their ability, over a dataset of 5,298 objects, to retrieve various conformations of identical proteins and various conformations of ortholog proteins (proteins from different organisms showing the same activity). All methods were able to retrieve a member of the same class as the query in at least 94% of the cases when considering the first match, but diverged more when further matches were considered.
Finally, similarity metrics trained on databases dedicated to proteins improved the results.

Item: Shape Correspondence with Isometric and Non-Isometric Deformations (The Eurographics Association, 2019)
Dyke, R. M.; Stride, C.; Lai, Y.-K.; Rosin, P. L.; Aubry, M.; Boyarski, A.; Bronstein, A. M.; Bronstein, M. M.; Cremers, D.; Fisher, M.; Groueix, T.; Guo, D.; Kim, V. G.; Kimmel, R.; Lähner, Z.; Li, K.; Litany, O.; Remez, T.; Rodolà, E.; Russell, B. C.; Sahillioglu, Y.; Slossberg, R.; Tam, G. K. L.; Vestner, M.; Wu, Z.; Yang, J.

The registration of surfaces with non-rigid deformation, especially non-isometric deformation, is a challenging problem. When applying such techniques to real scans, the problem is compounded by topological and geometric inconsistencies between shapes. In this paper, we capture a benchmark dataset of scanned 3D shapes undergoing various controlled deformations (articulating, bending, stretching and topologically changing), along with ground-truth correspondences. With the aid of this tiered benchmark of increasingly challenging real scans, we explore the problem and investigate how robustly current state-of-the-art methods perform in different challenging registration and correspondence scenarios.
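Correspondence quality against ground truth, of the kind this benchmark enables, is often summarised as the fraction of matches falling within an error tolerance. A rough sketch, using Euclidean distance as a stand-in for the geodesic distance typically used on meshes; all data below are synthetic:

```python
import numpy as np

def correspondence_accuracy(pred, gt, coords, tol):
    """Fraction of predicted correspondences whose target point lies
    within distance `tol` of the ground-truth target point."""
    err = np.linalg.norm(coords[pred] - coords[gt], axis=1)
    return float(np.mean(err <= tol))

rng = np.random.default_rng(0)
coords = rng.uniform(size=(100, 3))      # toy target-shape vertex positions
gt = np.arange(100)                      # ground-truth point-to-point map
pred = gt.copy()
pred[:10] = rng.integers(0, 100, 10)     # corrupt 10% of the matches
acc = correspondence_accuracy(pred, gt, coords, tol=0.05)
```

Sweeping `tol` and plotting the accuracy yields the cumulative error curves commonly used to compare correspondence methods.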
We discover that changes in topology are a challenging problem for some methods, and that machine learning-based approaches prove more capable of handling non-isometric deformations on shapes that are moderately similar to the training set.

Item: Sketch-Aided Retrieval of Incomplete 3D Cultural Heritage Objects (The Eurographics Association, 2019)
Lengauer, Stefan; Komar, Alexander; Labrada, Arniel; Karl, Stephan; Trinkl, Elisabeth; Preiner, Reinhold; Bustos, Benjamin; Schreck, Tobias

Due to advances in digitization technology, documentation efforts and digital library systems, increasingly large collections of visual Cultural Heritage (CH) object data become available, offering rich opportunities for domain analysis, e.g., for comparing, tracing and studying objects created over time. In principle, existing shape- and image-based similarity search methods can aid such domain analysis tasks. In practice, however, visual object data are given in different modalities, including 2D, 3D, sketches, or conventional drawings like profile sections or unwrappings. In addition, collections may be distributed across different publications and repositories, posing a challenge for implementing encompassing search and analysis systems. We introduce a methodology and system for cross-modal visual search in CH object data. Specifically, we propose a new query modality based on 3D views enhanced by user sketches (3D+sketch). This adds new context to the search, which is useful, e.g., for searching based on incomplete query objects, or for testing hypotheses about the existence of certain shapes in a collection. We present an appropriately designed workflow for constructing query views from incomplete 3D objects enhanced by a user sketch, based on shape completion and texture inpainting. Visual cues additionally help users compare retrieved objects with the query.
We apply our method to a set of relevant 3D and view-based CH object data, demonstrating the feasibility of our approach and its potential to support the analysis of domain experts in Archaeology and the field of CH in general.