3DOR 17
Browsing 3DOR 17 by Title
Now showing 1 - 19 of 19
Item 3D Hand Gesture Recognition Using a Depth and Skeletal Dataset (The Eurographics Association, 2017)
Authors: Smedt, Quentin De; Wannous, Hazem; Vandeborre, Jean-Philippe; Guerry, J.; Saux, B. Le; Filliat, D.
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Hand gesture recognition has recently become one of the most attractive fields of research in pattern recognition. The objective of this track is to evaluate the performance of recent recognition approaches using a challenging hand gesture dataset containing 14 gestures, performed by 28 participants executing the same gesture with two different numbers of fingers. Two research groups participated in this track; the accuracy of their recognition algorithms has been evaluated and compared to three other state-of-the-art approaches.

Item 3D Mesh Unfolding via Semidefinite Programming (The Eurographics Association, 2017)
Authors: Liu, Juncheng; Lian, Zhouhui; Xiao, Jianguo
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Mesh unfolding is a powerful pre-processing tool for many tasks such as non-rigid shape matching and retrieval. Shapes with articulated parts may exhibit large variations in pose, which brings difficulties to those tasks. With mesh unfolding, shapes in different poses can be transformed into similar canonical forms, which facilitates the subsequent applications. In this paper, we propose an automatic mesh unfolding algorithm based on semidefinite programming. The basic idea is to maximize the total variance of the vertex set for a given 3D mesh, while preserving the details by minimizing locally linear reconstruction errors. By optimizing a specifically designed objective function, vertices tend to move away from each other as far as possible, which leads to the unfolding operation. Compared to other Multi-Dimensional Scaling (MDS) based unfolding approaches, our method preserves significantly more details and requires no geodesic distance calculation. We demonstrate the advantages of our algorithm by performing 3D shape matching and retrieval on two publicly available datasets. Experimental results validate the effectiveness of our method both in visual judgement and in quantitative comparison.
(An illustrative code sketch appears after this listing.)

Item 3DOR 2017: Frontmatter (Eurographics Association, 2017)
Authors: Pratikakis, Ioannis; Dupont, Florent; Ovsjanikov, Maks

Item Deformable Shape Retrieval with Missing Parts (The Eurographics Association, 2017)
Authors: Rodolà, E.; Cosmo, L.; Litany, O.; Bronstein, M. M.; Bronstein, A. M.; Audebert, N.; Hamza, A. Ben; Boulch, A.; Castellani, U.; Do, M. N.; Duong, A.-D.; Furuya, T.; Gasparetto, A.; Hong, Y.; Kim, J.; Saux, B. Le; Litman, R.; Masoumi, M.; Minello, G.; Nguyen, H.-D.; Nguyen, V.-T.; Ohbuchi, R.; Pham, V.-K.; Phan, T. V.; Rezaei, M.; Torsello, A.; Tran, M.-T.; Tran, Q.-T.; Truong, B.; Wan, L.; Zou, C.
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Partial similarity problems arise in numerous applications that involve real data acquisition by 3D sensors, inevitably leading to missing parts due to occlusions and partial views. In this setting, the shapes to be retrieved may undergo a variety of transformations simultaneously, such as non-rigid deformations (changes in pose), topological noise, and missing parts - a combination of nuisance factors that renders the retrieval process extremely challenging. With this benchmark, we aim to evaluate the state of the art in deformable shape retrieval under this kind of transformation. The benchmark is organized in two sub-challenges exemplifying different data modalities (3D vs. 2.5D). A total of 15 retrieval algorithms were evaluated in the contest; this paper presents the details of the dataset and shows thorough comparisons among all competing methods.

Item Directed Curvature Histograms for Robotic Grasping (The Eurographics Association, 2017)
Authors: Schulz, Rodrigo; Guerrero, Pablo; Bustos, Benjamin
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Three-dimensional descriptors are a common tool nowadays, used in a wide range of tasks. Most of the descriptors that have been proposed in the literature focus on tasks such as object recognition and identification. This paper proposes a novel three-dimensional local descriptor, structured as a set of histograms of the curvature observed on the surface of the object in different directions. This descriptor is designed with a focus on solving the robotic grasping problem, especially on determining the orientation required to grasp an object. We validate our proposal following a data-driven approach, using grasping information and examples generated with the Gazebo simulator and a simulated PR2 robot. Experimental results show that the proposed descriptor is well suited for the grasping problem, exceeding the performance observed with recent descriptors.
(An illustrative code sketch appears after this listing.)

Item Exploiting the PANORAMA Representation for Convolutional Neural Network Classification and Retrieval (The Eurographics Association, 2017)
Authors: Sfikas, Konstantinos; Theoharis, Theoharis; Pratikakis, Ioannis
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
A novel 3D model classification and retrieval method, based on the PANORAMA representation and Convolutional Neural Networks, is presented. Initially, the 3D models are pose normalized using the SYMPAN method; subsequently, the PANORAMA representation is extracted and used to train a convolutional neural network. The training is based on an augmented set of the extracted panoramic representation views. The proposed method is tested in terms of classification and retrieval accuracy on standard large-scale datasets.
(An illustrative code sketch appears after this listing.)

Item A Framework Based on Compressed Manifold Modes for Robust Local Spectral Analysis (The Eurographics Association, 2017)
Authors: Haas, Sylvain; Baskurt, Atilla; Dupont, Florent; Denis, Florence
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Compressed Manifold Modes (CMM) were recently introduced as a solution to one of the drawbacks of spectral analysis on triangular meshes. The eigenfunctions of the Laplace-Beltrami operator on a mesh depend on the whole shape, which makes them sensitive to local aspects. CMM are solutions of an extended problem that have a compact rather than global support and are thus suitable for a wider range of applications. In order to use CMM in real applications, an extensive test has been performed to better understand the limits of their computation (convergence and speed) according to the compactness parameter, the mesh resolution, and the number of requested modes. The contribution of this paper is to propose a robust choice of parameters, the automated computation of an adequate number of modes (or eigenfunctions), stability under multiresolution and isometric meshes, and an example application with high potential for shape indexation.
(An illustrative code sketch appears after this listing.)

Item GSHOT: a Global Descriptor from SHOT to Reduce Time and Space Requirements (The Eurographics Association, 2017)
Authors: Mateo, Carlos M.; Gil, Pablo; Torres, Fernando
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
This paper presents a new 3D global feature descriptor for object recognition using shape representation on organized point clouds. Object recognition applications usually require significant speed and memory. The proposed descriptor requires 57 times less memory and is also up to 3 times faster than the local feature descriptor on which it is based. Experimental results indicate that this new 3D global descriptor obtains better matching scores in comparison with known state-of-the-art 3D feature descriptors on two standard benchmark datasets.

Item Large-Scale 3D Shape Retrieval from ShapeNet Core55 (The Eurographics Association, 2017)
Authors: Savva, Manolis; Yu, Fisher; Su, Hao; Kanezaki, Asako; Furuya, Takahiko; Ohbuchi, Ryutarou; Zhou, Zhichao; Yu, Rui; Bai, Song; Bai, Xiang; Aono, Masaki; Tatsuma, Atsushi; Thermos, S.; Axenopoulos, A.; Papadopoulos, G. Th.; Daras, P.; Deng, Xiao; Lian, Zhouhui; Li, Bo; Johan, Henry; Lu, Yijuan; Mk, Sanjeev
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
With the advent of commodity 3D capturing devices and better 3D modeling tools, 3D shape content is becoming increasingly prevalent. Therefore, the need for shape retrieval algorithms that can handle large-scale shape repositories is becoming more and more important. This track provides a benchmark to evaluate large-scale 3D shape retrieval based on the ShapeNet dataset. It is a continuation of the SHREC 2016 large-scale shape retrieval challenge, with the goal of measuring progress with recent developments in deep learning methods for shape retrieval. We use ShapeNet Core55, which provides more than 50 thousand models over 55 common categories in total, for training and evaluating several algorithms. Eight participating teams submitted a variety of retrieval methods, which were evaluated on several standard information retrieval performance metrics. The approaches vary in terms of the 3D representation, using multi-view projections, point sets, volumetric grids, or traditional 3D shape descriptors. Overall performance on the shape retrieval task has improved significantly compared to the previous iteration of this competition at SHREC 2016. We release all data, results, and evaluation code for the benefit of the community and to catalyze future research into large-scale 3D shape retrieval (website: https://www.shapenet.org/shrec17).

Item LightNet: A Lightweight 3D Convolutional Neural Network for Real-Time 3D Object Recognition (The Eurographics Association, 2017)
Authors: Zhi, Shuaifeng; Liu, Yongxiang; Li, Xiang; Guo, Yulan
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
With the rapid growth of 3D data, accurate and efficient 3D object recognition becomes a major problem. Machine learning methods have achieved state-of-the-art performance in the area, especially deep convolutional neural networks. However, existing network models have high computational cost and are unsuitable for real-time 3D object recognition applications. In this paper, we propose LightNet, a lightweight 3D convolutional neural network for real-time 3D object recognition. It achieves comparable accuracy to the state-of-the-art methods with a single model and extremely low computational cost. Experiments have been conducted on the ModelNet and Sydney Urban Objects datasets. It is shown that our model improves on the VoxNet model by a relative 17.4% and 23.1% on the ModelNet10 and ModelNet40 benchmarks, respectively, with less than 67% of the training parameters. It is also demonstrated that the model can be applied in real-time scenarios.
(An illustrative code sketch appears after this listing.)

Item Point-Cloud Shape Retrieval of Non-Rigid Toys (The Eurographics Association, 2017)
Authors: Limberger, F. A.; Wilson, R. C.; Aono, M.; Audebert, N.; Boulch, A.; Bustos, B.; Giachetti, A.; Godil, A.; Saux, B. Le; Li, B.; Lu, Y.; Nguyen, H.-D.; Nguyen, V.-T.; Pham, V.-K.; Sipiran, I.; Tatsuma, A.; Tran, M.-T.; Velasco-Forero, S.
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
In this paper, we present the results of the SHREC'17 Track: Point-Cloud Shape Retrieval of Non-Rigid Toys. The aim of this track is to create a fair benchmark to evaluate the performance of methods on the non-rigid point-cloud shape retrieval problem. The database used in this task contains 100 3D point-cloud models which are classified into 10 different categories. All point clouds were generated by scanning each one of the models in their final poses using a 3D scanner, i.e., all models had been articulated before being scanned. The retrieval performance is evaluated using seven commonly used statistics (PR-plot, NN, FT, ST, E-measure, DCG, mAP). In total, 8 groups and 31 submissions took part in this contest. The evaluation results of this work suggest that researchers are on the right track towards shape descriptors that can capture the main characteristics of 3D models; however, more tests still need to be made, since this is the first time non-rigid signatures have been compared for point-cloud shape retrieval.
(An illustrative code sketch appears after this listing.)

Item Protein Shape Retrieval (The Eurographics Association, 2017)
Authors: Song, Na; Craciun, Daniela; Christoffer, Charles W.; Han, Xusi; Kihara, Daisuke; Levieux, Guillaume; Montes, Matthieu; Qin, Hong; Sahu, Pranjal; Terashi, Genki; Liu, Haiguang
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
The large number of protein structures deposited in the protein database provides an opportunity to examine the structure relations using computational algorithms, which can be used to classify the structures based on shape similarity. In this paper, we report the results of the SHREC 2017 track on shape retrieval from a protein database. The goal of this track is to test the performance of the algorithms proposed by participants for the retrieval of bio-shapes (proteins). The test set is composed of 5,854 abstracted shapes from actual protein structures after removing model redundancy. Ten query shapes were selected from a set of representative molecules that have important biological functions. Six methods from four teams were evaluated, and their performance is summarized in this report, in which both retrieval accuracy and computational speed were compared. The biological relevance of the shape retrieval approaches is discussed. We also discuss future perspectives of shape retrieval for biological molecular models.

Item Retrieval of Surfaces with Similar Relief Patterns (The Eurographics Association, 2017)
Authors: Biasotti, S.; Thompson, E. Moscoso; Aono, M.; Hamza, A. Ben; Bustos, B.; Dong, S.; Du, B.; Fehri, A.; Li, H.; Limberger, F. A.; Masoumi, M.; Rezaei, M.; Sipiran, I.; Sun, L.; Tatsuma, A.; Forero, S. Velasco; Wilson, R. C.; Wu, Y.; Zhang, J.; Zhao, T.; Fornasa, F.; Giachetti, A.
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
This paper presents the results of the SHREC'17 contest on retrieval of surfaces with similar relief patterns. The proposed task was created in order to verify the possibility of retrieving, from a database of small surface elements, surface patches with a relief pattern similar to an example. This task, related to many real-world applications, requires an effective characterization of local "texture" information that does not depend on patch size and bending. The retrieval performance of the proposed methods reveals that the problem is not easy to solve and, even if some of the proposed methods demonstrate promising results, further research is surely needed to find effective relief-pattern characterization techniques for practical applications.

Item RGB-D to CAD Retrieval with ObjectNN Dataset (The Eurographics Association, 2017)
Authors: Hua, Binh-Son; Truong, Quang-Trung; Tran, Minh-Khoi; Pham, Quang-Hieu; Kanezaki, Asako; Lee, Tang; Chiang, HungYueh; Hsu, Winston; Li, Bo; Lu, Yijuan; Johan, Henry; Tashiro, Shoki; Aono, Masaki; Tran, Minh-Triet; Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen, Vinh-Tiep; Tran, Quang-Thang; Phan, Thuyen V.; Truong, Bao; Do, Minh N.; Duong, Anh-Duc; Yu, Lap-Fai; Nguyen, Duc Thanh; Yeung, Sai-Kit
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. This is inspired by the practical need to pair an object acquired from a consumer-grade depth camera with CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well-segmented and annotated RGB-D objects from SceneNN [HPN 16] and CAD models from ShapeNet [CFG 15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained with multi-view and 3D geometry. The best method in this track scores 82% in accuracy.

Item Semantic Correspondence Across 3D Models for Example-based Modeling (The Eurographics Association, 2017)
Authors: Léon, Vincent; Itier, Vincent; Bonneel, Nicolas; Lavoué, Guillaume; Vandeborre, Jean-Philippe
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Modeling 3D shapes is a specialized skill not accessible to most novice artists due to its complexity and tediousness. At the same time, databases of complex models ready for use are becoming widespread, and can help the modeling task in a process called example-based modeling. We introduce such an example-based mesh modeling approach which, contrary to prior work, allows for the replacement of any localized region of a mesh by a region of similar semantics (but different geometry) within a mesh database. For that, we introduce a selection tool in a space of semantic descriptors that co-selects areas of similar semantics within the database. Moreover, this tool can be used for part-based retrieval across the database. Then, we show how semantic information improves the assembly process. This allows for modeling complex meshes from a coarse geometry and a database of more detailed meshes, and makes modeling accessible to the novice user.
(An illustrative code sketch appears after this listing.)

Item Shape Similarity System driven by Digital Elevation Models for Non-rigid Shape Retrieval (The Eurographics Association, 2017)
Authors: Craciun, Daniela; Levieux, Guillaume; Montes, Matthieu
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Shape similarity computation is the main functionality of shape matching and shape retrieval systems. Existing shape similarity frameworks proceed by parameterizing shapes through global and/or local representations computed in 3D or 2D space. Up to now, global methods have demonstrated their rapidity, while local approaches offer slower, but more accurate, solutions. This paper presents a shape similarity system driven by a global descriptor encoded as a Digital Elevation Model (DEM) associated with the input mesh. The DEM descriptor is obtained through the joint use of a mesh flattening technique and a 2D panoramic projection. Experimental results on the public dataset TOSCA [BBK08] and a comparison with state-of-the-art methods illustrate the effectiveness of the proposed method in terms of accuracy and efficiency.
(An illustrative code sketch appears after this listing.)

Item Sketch-based 3D Object Retrieval with Skeleton Line Views - Initial Results and Research Problems (The Eurographics Association, 2017)
Authors: Zhao, Xueqing; Gregor, Robert; Mavridis, Pavlos; Schreck, Tobias
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
Hand-drawn sketches are a convenient way to define 3D object retrieval queries. Numerous methods have been proposed for sketch-based 3D object retrieval. Such methods employ a non-photorealistic rendering step to create sketch-like views from 3D objects for comparison with the sketch queries. An implicit assumption here is often that the sketch query resembles a perspective view of the 3D shape. However, based on personal inclination or the type of object, users often tend to draw skeleton views instead of perspective ones. In those cases, a retrieval relying on perspective views is not the best choice, as features extracted from skeleton-based sketches and from perspective views can be expected to diverge vastly. In this paper, we report on our ongoing work to implement sketch-based 3D object retrieval for skeleton query sketches. Furthermore, we provide an initial benchmark dataset consisting of skeleton sketches for a selection of generic object classes. Then, we design a sketch-based retrieval processing pipeline involving a sketch rendering step using Laplacian contraction. Additional experimental results indicate that skeleton sketches can be automatically distinguished from perspective sketches, and that the proposed method works for selected object classes. We also identify object classes for which the rendering of skeleton views is difficult, motivating further research.

Item Towards Recognizing of 3D Models Using A Single Image (The Eurographics Association, 2017)
Authors: Rashwan, Hatem A.; Chambon, Sylvie; Morin, Geraldine; Gurdjos, Pierre; Charvillat, Vincent
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
As 3D data is getting more popular, techniques for retrieving a particular 3D model are necessary. We want to recognize a 3D model from a single photograph; since any user can easily get an image of a model he/she would like to find, querying by an image is simple and natural. However, a 2D intensity image depends on viewpoint, texture, and lighting conditions, and thus matching it with a 3D geometric model is very challenging. This paper proposes a first step towards matching a 2D image to models, based on features that are repeatable in 2D images and in depth images (generated from 3D models); we show their independence from texture and lighting. Then, the detected features are matched to recognize 3D models by combining HOG (Histogram of Oriented Gradients) descriptors and repeatability scores. The proposed method reaches a recognition rate of 72% over 12 3D object categories and outperforms classical feature detection techniques for recognizing 3D models from a single image.
(An illustrative code sketch appears after this listing.)

Item Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks (The Eurographics Association, 2017)
Authors: Boulch, Alexandre; Saux, Bertrand Le; Audebert, Nicolas
Editors: Ioannis Pratikakis; Florent Dupont; Maks Ovsjanikov
In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of efficiently using deep Convolutional Neural Networks (CNNs) on 3D data is still a pending issue, we propose a framework which applies CNNs to multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud. We generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform a fast back-projection of the label predictions into 3D space, using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds such as Lidar or photogrammetric data.
(An illustrative code sketch appears after this listing.)
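
Illustrative code sketches referenced in the listing above. None of them reproduce the papers' implementations; function names, parameters, sizes, and file names are assumptions unless stated otherwise.

For "3D Mesh Unfolding via Semidefinite Programming": a minimal maximum-variance-unfolding-style SDP in cvxpy. The paper's own objective penalizes locally linear reconstruction errors; here plain edge-length constraints stand in for that local-detail term.

```python
# Maximum-variance-unfolding-style sketch (cvxpy + numpy); edge-length
# constraints stand in for the paper's locally linear reconstruction term.
import numpy as np
import cvxpy as cp

def unfold_mesh(V, edges):
    """V: (n, 3) vertex positions, edges: iterable of (i, j) index pairs."""
    n = V.shape[0]
    G = cp.Variable((n, n), PSD=True)           # Gram matrix of the unfolded vertices
    constraints = [cp.sum(G) == 0]              # keep the embedding centered
    for i, j in edges:
        d2 = float(np.sum((V[i] - V[j]) ** 2))  # preserve edge lengths (local detail proxy)
        constraints.append(G[i, i] + G[j, j] - 2 * G[i, j] == d2)
    # Maximizing trace(G) = total variance pushes vertices apart -> unfolding.
    cp.Problem(cp.Maximize(cp.trace(G)), constraints).solve(solver=cp.SCS)
    w, U = np.linalg.eigh(G.value)              # recover 3D coordinates from top eigenpairs
    return U[:, -3:] * np.sqrt(np.maximum(w[-3:], 0.0))
```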
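For "Directed Curvature Histograms for Robotic Grasping": a rough per-point sketch that histograms directional curvature estimates on an oriented point cloud. The paper's exact curvature estimation and binning scheme differ; the bin counts and curvature range below are assumptions.

```python
# Per-point histogram of directional curvature estimates (numpy + scipy).
import numpy as np
from scipy.spatial import cKDTree

def directed_curvature_histogram(points, normals, i, k=30, n_dirs=8, n_bins=8, kappa_max=5.0):
    """Rows index tangent directions, columns index curvature bins."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points[i], k=k + 1)
    nbrs = nbrs[1:]                                   # drop the query point itself
    n = normals[i] / np.linalg.norm(normals[i])
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:                      # normal parallel to the x-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    w = np.cross(n, u)                                # (u, w) spans the tangent plane
    hist = np.zeros((n_dirs, n_bins))
    for j in nbrs:
        d = points[j] - points[i]
        kappa = 2.0 * (d @ n) / (d @ d + 1e-12)       # normal-section curvature estimate
        theta = np.arctan2(d @ w, d @ u) % np.pi      # undirected tangent direction
        di = min(int(theta / np.pi * n_dirs), n_dirs - 1)
        bi = int(np.clip((kappa + kappa_max) / (2 * kappa_max) * n_bins, 0, n_bins - 1))
        hist[di, bi] += 1.0
    return hist / max(hist.sum(), 1.0)                # normalize to a distribution
```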
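For "Exploiting the PANORAMA Representation for Convolutional Neural Network Classification and Retrieval": a minimal PyTorch classifier over panoramic views. The input resolution (1 x 64 x 192), layer widths, and class count are placeholders, not the architecture used in the paper.

```python
# Illustrative CNN over a PANORAMA-style cylindrical projection (PyTorch).
import torch
import torch.nn as nn

class PanoramaCNN(nn.Module):
    def __init__(self, num_classes=55):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # global pooling keeps it size-agnostic
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                            # x: (batch, 1, H, W) panorama images
        return self.classifier(self.features(x).flatten(1))

logits = PanoramaCNN()(torch.randn(4, 1, 64, 192))   # -> shape (4, 55)
```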
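For "A Framework Based on Compressed Manifold Modes for Robust Local Spectral Analysis": compressed modes require an L1-regularized eigenproblem and a dedicated solver (typically ADMM), which is not reproduced here. The sketch only builds the standard cotangent Laplacian and its ordinary global-support eigenfunctions with SciPy, i.e. the baseline that CMM extend; the mass matrix is omitted for brevity.

```python
# Baseline (uncompressed) Laplace-Beltrami eigenmodes with scipy.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplace_beltrami_modes(V, F, k=10):
    """V: (n, 3) vertices, F: (m, 3) triangle indices. Returns the first k eigenpairs."""
    n = V.shape[0]
    I, J, W = [], [], []
    for tri in F:                                    # cotangent weight per opposite edge
        for a in range(3):
            i, j, o = tri[a], tri[(a + 1) % 3], tri[(a + 2) % 3]
            u, v = V[i] - V[o], V[j] - V[o]
            cot = np.dot(u, v) / (np.linalg.norm(np.cross(u, v)) + 1e-12)
            I += [i, j]; J += [j, i]; W += [0.5 * cot, 0.5 * cot]
    Wmat = sp.coo_matrix((W, (I, J)), shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(Wmat.sum(axis=1)).ravel()) - Wmat   # stiffness matrix
    # smallest eigenpairs via shift-invert; a tiny negative shift keeps L - sigma*I nonsingular
    vals, vecs = spla.eigsh(L.tocsc(), k=k, sigma=-1e-8, which='LM')
    return vals, vecs
```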
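For "LightNet": the layer configuration below is not LightNet's; it is only a minimal PyTorch illustration of the lightweight volumetric-CNN idea, i.e. 3D convolutions over an occupancy grid with a small parameter budget.

```python
# Tiny volumetric CNN over a 32x32x32 occupancy grid (PyTorch); illustrative only.
import torch
import torch.nn as nn

class TinyVoxelNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, stride=2), nn.BatchNorm3d(16), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):                  # x: (batch, 1, 32, 32, 32) occupancy grid
        return self.net(x)

print(sum(p.numel() for p in TinyVoxelNet().parameters()))   # parameter count stays small
```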
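For "Point-Cloud Shape Retrieval of Non-Rigid Toys": a sketch of four of the seven listed statistics (NN, FT, ST, mAP) computed from a pairwise distance matrix and class labels, with each query excluded from its own ranked list. Exact conventions vary between SHREC tracks, and classes are assumed to have more than one member.

```python
# Nearest-neighbour, first/second tier and mAP from a distance matrix (numpy).
import numpy as np

def retrieval_stats(D, labels):
    """D: (n, n) pairwise distances, labels: (n,) class ids."""
    n = len(labels)
    nn_hits, ft, st, aps = [], [], [], []
    for q in range(n):
        order = np.argsort(D[q])
        order = order[order != q]                    # drop the query itself
        rel = (labels[order] == labels[q]).astype(float)
        c = max(int(rel.sum()), 1)                   # class size minus the query
        nn_hits.append(rel[0])
        ft.append(rel[:c].sum() / c)                 # first tier
        st.append(rel[:2 * c].sum() / c)             # second tier
        prec = np.cumsum(rel) / np.arange(1, n)      # precision at each rank
        aps.append((prec * rel).sum() / c)           # average precision
    return {name: float(np.mean(vals)) for name, vals in
            [('NN', nn_hits), ('FT', ft), ('ST', st), ('mAP', aps)]}
```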
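For "Semantic Correspondence Across 3D Models for Example-based Modeling": a toy version of the co-selection step, mapping a user-selected query region to database vertices whose semantic descriptors are nearby. The descriptors themselves and the radius are assumptions; the paper's selection tool is more elaborate.

```python
# Co-selection in descriptor space (scipy cKDTree): database vertices whose
# descriptors lie close to those of the selected query region are selected too.
import numpy as np
from scipy.spatial import cKDTree

def co_select(query_desc, selected_ids, db_desc, radius=0.1):
    """query_desc: (nq, d), selected_ids: indices of the user selection,
    db_desc: (nd, d). Returns a boolean mask over database vertices."""
    tree = cKDTree(db_desc)
    mask = np.zeros(len(db_desc), dtype=bool)
    for idx_list in tree.query_ball_point(query_desc[selected_ids], r=radius):
        mask[idx_list] = True
    return mask
```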
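For "Shape Similarity System driven by Digital Elevation Models for Non-rigid Shape Retrieval": only the elevation-map projection is sketched, binning vertices on a cylinder around the principal axis and keeping the maximum radial distance per cell. The mesh-flattening step the paper applies beforehand is not reproduced, and the grid resolution is an assumption.

```python
# Elevation-map part only: max radial distance per (height, angle) cell (numpy).
import numpy as np

def cylindrical_dem(V, rows=64, cols=128):
    """V: (n, 3) vertices. Returns a (rows, cols) elevation map."""
    P = V - V.mean(axis=0)
    axis = np.linalg.svd(P, full_matrices=False)[2][0]       # principal axis
    h = P @ axis                                              # height along the axis
    radial = P - np.outer(h, axis)
    r = np.linalg.norm(radial, axis=1)                        # elevation value
    u = np.cross(axis, [1.0, 0.0, 0.0])                       # arbitrary tangent frame
    if np.linalg.norm(u) < 1e-8:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    w = np.cross(axis, u)
    theta = np.arctan2(radial @ w, radial @ u)                # angle around the axis
    rows_i = np.clip(((h - h.min()) / (np.ptp(h) + 1e-12) * rows).astype(int), 0, rows - 1)
    cols_i = ((theta + np.pi) / (2 * np.pi) * cols).astype(int) % cols
    dem = np.zeros((rows, cols))
    np.maximum.at(dem, (rows_i, cols_i), r)                   # keep max radius per cell
    return dem
```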
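For "Towards Recognizing of 3D Models Using A Single Image": a minimal scikit-image comparison of HOG descriptors between a query photograph and a depth rendering of a candidate model. File names are placeholders, and the repeatability-score weighting used in the paper is omitted.

```python
# HOG comparison between a photograph and a model depth rendering (scikit-image).
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog

def hog_vec(gray_img, size=(128, 128)):
    img = resize(gray_img, size, anti_aliasing=True)          # common resolution
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

photo = imread('query_photo.png', as_gray=True)               # placeholder paths
depth = imread('model_depth_render.png', as_gray=True)
a, b = hog_vec(photo), hog_vec(depth)
score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
print('cosine similarity between HOG descriptors:', score)
```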
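For "Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks": a sketch of the back-projection step (iii) only, under an assumed pinhole camera: points are projected, a z-buffer keeps the nearest point per pixel, and that point inherits the pixel's predicted label. The paper's buffering is more efficient and labels every point across many snapshots.

```python
# Back-projection of 2D segmentation labels onto 3D points (numpy).
import numpy as np

def backproject_labels(points, pixel_labels, R, t, f=500.0, H=480, W=640):
    """points: (n, 3); pixel_labels: (H, W) ints from the 2D segmentation network."""
    cam = points @ R.T + t                            # world -> camera coordinates
    z = cam[:, 2]
    front = z > 1e-6
    u = np.full(len(z), -1); v = np.full(len(z), -1)
    u[front] = (f * cam[front, 0] / z[front] + W / 2).astype(int)
    v[front] = (f * cam[front, 1] / z[front] + H / 2).astype(int)
    ok = front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    zbuf = np.full((H, W), np.inf)
    winner = np.full((H, W), -1, dtype=int)
    for idx in np.flatnonzero(ok):                    # nearest point wins each pixel
        if z[idx] < zbuf[v[idx], u[idx]]:
            zbuf[v[idx], u[idx]] = z[idx]
            winner[v[idx], u[idx]] = idx
    point_labels = np.full(len(points), -1, dtype=int)   # -1 = not visible in this view
    vis = winner >= 0
    point_labels[winner[vis]] = pixel_labels[vis]
    return point_labels
```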