Browsing by Author "Gobbetti, Enrico"
Now showing 1 - 20 of 22
Item: Aging Prediction of Cultural Heritage Samples Based on Surface Microgeometry (The Eurographics Association, 2018)
Authors: Ciortan, Irina Mihaela; Marchioro, Giacomo; Daffara, Claudia; Pintus, Ruggero; Gobbetti, Enrico; Giachetti, Andrea
Editors: Sablatnig, Robert; Wimmer, Michael
Abstract: A critical and challenging aspect of the study of Cultural Heritage (CH) assets is the characterization of the materials that compose them and of how these materials vary with time. In this paper, we exploit a realistic dataset of artificially aged metallic samples, treated with different coatings commonly used for the protection of artworks, to evaluate different approaches for extracting material features from high-resolution depth maps. In particular, we estimated, on microprofilometric surface acquisitions of the samples performed at different aging steps, standard roughness descriptors used in materials science as well as classical and recent image texture descriptors. We analyzed the ability of these features to discriminate different aging steps and performed supervised classification tests, showing the feasibility of a texture-based aging analysis and the effectiveness of coatings in reducing surface change over time.

Item: Automatic Modeling of Cluttered Multi-room Floor Plans From Panoramic Images (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Pintore, Giovanni; Ganovelli, Fabio; Villanueva, Alberto Jaspe; Gobbetti, Enrico
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: We present a novel and lightweight approach to capture and reconstruct structured 3D models of multi-room floor plans. Starting from a small set of registered panoramic images, we automatically generate a 3D layout of the rooms and of all the main objects inside them. Such a 3D layout is directly suitable for use in a number of real-world applications, such as guidance, location, routing, or content creation for security and energy management. Our novel pipeline introduces several contributions to indoor reconstruction from purely visual data. In particular, we automatically partition panoramic images into a connectivity graph, according to the visual layout of the rooms, and exploit this graph to support object recovery and room boundary extraction. Moreover, we introduce a plane-sweeping approach to jointly reason about the content of multiple images and solve the problem of object inference in a top-down 2D domain. Finally, we combine these methods in a fully automated pipeline for creating a structured 3D model of a multi-room floor plan and of the location and extent of clutter objects. These contributions make our pipeline able to handle cluttered scenes with complex geometry that are challenging for existing techniques. The effectiveness and performance of our approach are evaluated on both real-world and synthetic models.

Item: Automatic Surface Segmentation for Seamless Fabrication Using 4-axis Milling Machines (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Nuvoli, Stefano; Tola, Alessandro; Muntoni, Alessandro; Pietroni, Nico; Gobbetti, Enrico; Scateni, Riccardo
Editors: Mitra, Niloy; Viola, Ivan
Abstract: We introduce a novel geometry-processing pipeline to guide the fabrication of complex shapes from a single block of material using 4-axis CNC milling machines. This setup extends classical 3-axis CNC machining with an extra degree of freedom to rotate the object around a fixed axis. The first step of our pipeline identifies the rotation axis that maximizes the overall fabrication accuracy. Then we identify two height-field regions at the rotation axis's extremes, used to secure the block on the rotation tool. We segment the remaining portion of the mesh into a set of height-fields whose principal directions are orthogonal to the rotation axis. The segmentation balances the approximation quality, the boundary smoothness, and the total number of patches. Additionally, the segmentation process takes into account the object's geometric features, as well as saliency information. The output is a set of meshes ready to be processed by off-the-shelf software for 3-axis tool-path generation. We present several results to demonstrate the quality and efficiency of our approach on a range of inputs.
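As a rough illustration of the direction assignment underlying such a height-field segmentation, the following sketch (not the authors' algorithm; the function names and the 0.1 alignment threshold are assumptions) labels each face of a triangle mesh with the best-aligned of four milling directions orthogonal to a given rotation axis:

```python
# Minimal sketch (not the paper's method): greedily label each face of a
# triangle mesh with one of four milling directions orthogonal to a given
# rotation axis, as a naive starting point for a height-field segmentation.
import numpy as np

def candidate_directions(axis):
    """Return four unit directions orthogonal to the rotation axis."""
    axis = axis / np.linalg.norm(axis)
    # Any vector not parallel to the axis works to build the orthogonal frame.
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    return np.stack([u, -u, v, -v])

def assign_milling_directions(face_normals, axis):
    """Label each face with the orthogonal direction its normal aligns with best."""
    dirs = candidate_directions(axis)                 # (4, 3)
    scores = face_normals @ dirs.T                    # (F, 4) cosine alignment
    labels = np.argmax(scores, axis=1)                # best direction per face
    # Faces whose best alignment is poor (e.g., nearly parallel to the axis)
    # would need the paper's more sophisticated handling.
    poorly_covered = scores[np.arange(len(labels)), labels] < 0.1
    return labels, poorly_covered
```

In the actual pipeline, such per-face labels would only be a seed: the paper additionally balances approximation quality, boundary smoothness, patch count, and feature/saliency terms.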
Item: Crack Detection in Single- and Multi-Light Images of Painted Surfaces using Convolutional Neural Networks (The Eurographics Association, 2019)
Authors: Dulecha, Tinsae Gebrechristos; Giachetti, Andrea; Pintus, Ruggero; Ciortan, Irina; Villanueva, Alberto Jaspe; Gobbetti, Enrico
Editors: Rizvic, Selma; Rodriguez Echavarria, Karina
Abstract: Cracks represent an imminent danger for painted surfaces and need to be detected before they degenerate into more severe aging effects, such as color loss. Automatic detection of cracks from images of painted surfaces would therefore be extremely useful for art conservators; however, classical image processing solutions are not effective at detecting them and distinguishing them from other lines or surface characteristics. A possible solution to improve the quality of crack detection exploits Multi-Light Image Collections (MLICs), which are often acquired in the Cultural Heritage domain thanks to the diffusion of the Reflectance Transformation Imaging (RTI) technique, allowing a low-cost and rich digitization of artworks' surfaces. In this paper, we propose a pipeline for the detection of cracks on egg-tempera paintings from multi-light image acquisitions that can also be used on single images. The method is based on single- or multi-light edge detection and on a custom Convolutional Neural Network, trained on RTI data, that classifies image patches around edge points as crack or non-crack. The pipeline classifies regions with cracks with good accuracy when applied to MLICs, and still gives reasonable results when used on single images. The analysis of the performance for different lighting directions also reveals optimal lighting directions.
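The patch-classification step described above can be illustrated with a small PyTorch sketch; this is not the authors' network, and the architecture, patch size, and class layout are assumptions:

```python
# Minimal sketch of a binary crack / non-crack patch classifier in PyTorch.
# Patches would be cropped around edge points detected in the (multi-light)
# images and labeled from annotated RTI data before training with cross-entropy.
import torch
import torch.nn as nn

class CrackPatchNet(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # crack vs. non-crack

    def forward(self, x):                    # x: (B, C, H, W) image patches
        return self.classifier(self.features(x).flatten(1))

model = CrackPatchNet()
logits = model(torch.randn(8, 3, 32, 32))    # toy batch of 8 patches
```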
Item: Ebb & Flow: Uncovering Costantino Nivola's Olivetti Sandcast through 3D Fabrication and Virtual Exploration (The Eurographics Association, 2022)
Authors: Ahsan, Moonisa; Altea, Giuliana; Bettio, Fabio; Callieri, Marco; Camarda, Antonella; Cignoni, Paolo; Gobbetti, Enrico; Ledda, Paolo; Lutzu, Alessandro; Marton, Fabio; Mignemi, Giuseppe; Ponchio, Federico; Pintus, Ruggero
Abstract: We report on the outcomes of a large multi-disciplinary project targeting the physical reproduction and the virtual documentation and exploration of the Olivetti sandcast, a monumental (over 100 m²) semi-abstract frieze by the Italian sculptor Costantino Nivola. After summarizing the goal and motivation of the project, we provide details on the acquisition and processing steps that led to the creation of a 3D digital model. We then discuss the technical details and the challenges that we faced in the physical fabrication of a massive replica, which was the centerpiece of a recent exhibition. We finally discuss the design and application of an interactive web-based tool for the exploration of an annotated virtual replica. The main components of the tool will be released as open source.

Item: Effective Interactive Visualization of Neural Relightable Images in a Web-based Multi-layered Framework (The Eurographics Association, 2023)
Authors: Righetto, Leonardo; Bettio, Fabio; Ponchio, Federico; Giachetti, Andrea; Gobbetti, Enrico
Editors: Bucciero, Alberto; Fanini, Bruno; Graf, Holger; Pescarin, Sofia; Rizvic, Selma
Abstract: Relightable images created from Multi-Light Image Collections (MLICs) are one of the most commonly employed models for interactive object exploration in cultural heritage. In recent years, neural representations have been shown to produce higher-quality images, at similar storage costs, than more classic analytical models such as Polynomial Texture Maps (PTM) or Hemispherical Harmonics (HSH). However, their integration in practical interactive tools has so far been limited, due to the higher evaluation cost, which makes it difficult to employ them for interactive inspection of large images, and to the integration cost arising from the need to incorporate deep-learning libraries in relightable renderers. In this paper, we illustrate how a state-of-the-art neural reflectance model can be directly evaluated, using common WebGL shader features, inside a multiplatform renderer. We then show how this solution can be embedded in a scalable framework capable of handling multi-layered relightable models in web settings. We finally show the performance and capabilities of the method on cultural heritage objects.

Item: Exploiting Neighboring Pixels Similarity for Effective SV-BRDF Reconstruction from Sparse MLICs (The Eurographics Association, 2021)
Authors: Pintus, Ruggero; Ahsan, Moonisa; Marton, Fabio; Gobbetti, Enrico
Editors: Hulusic, Vedad; Chalmers, Alan
Abstract: We present a practical solution to create a relightable model from Multi-light Image Collections (MLICs) acquired using standard acquisition pipelines. The approach targets the difficult but very common situation in which the optical behavior of a flat, but visually and geometrically rich, object, such as a painting or a bas-relief, is measured using a fixed camera taking a few images, each under a different local illumination. By exploiting information from neighboring pixels through a carefully crafted weighting and regularization scheme, we are able to efficiently infer subtle per-pixel analytical Bidirectional Reflectance Distribution Function (BRDF) representations from few per-pixel samples. The method is qualitatively and quantitatively evaluated on both synthetic data and real paintings in the scope of image-based relighting applications.
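A highly simplified sketch of the neighbor-weighting idea: instead of the paper's full analytical BRDF and regularization scheme, this illustrative code fits only a per-pixel Lambertian albedo, weighting samples from nearby pixels by spatial distance and intensity similarity (all weights and names are assumptions; the loops are kept explicit for clarity, not speed):

```python
# Minimal sketch of neighbor-weighted per-pixel reflectance fitting.
# The paper fits full analytical BRDFs; here we only fit a per-pixel Lambertian
# albedo from MLIC samples, borrowing samples from similar neighboring pixels.
import numpy as np

def fit_albedo(images, cosines, sigma_s=1.5, sigma_c=0.1, radius=2):
    """images: (L, H, W) observed intensities for L light positions.
       cosines: (L, H, W) clamped N.L terms for the same lights."""
    L, H, W = images.shape
    albedo = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < H and 0 <= nx < W):
                        continue
                    w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    w_c = np.exp(-np.mean((images[:, ny, nx] - images[:, y, x]) ** 2)
                                 / (2 * sigma_c ** 2))
                    w = w_s * w_c
                    # Weighted least squares for I ~ albedo * cos(theta)
                    num += w * np.dot(images[:, ny, nx], cosines[:, ny, nx])
                    den += w * np.dot(cosines[:, ny, nx], cosines[:, ny, nx])
            albedo[y, x] = num / max(den, 1e-8)
    return albedo
```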
Item: A Framework for GPU-accelerated Exploration of Massive Time-varying Rectilinear Scalar Volumes (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Marton, Fabio; Agus, Marco; Gobbetti, Enrico
Editors: Gleicher, Michael; Viola, Ivan; Leitte, Heike
Abstract: We introduce a novel flexible approach to spatiotemporal exploration of rectilinear scalar volumes. Our out-of-core representation, based on per-frame levels of hierarchically tiled non-redundant 3D grids, efficiently supports spatiotemporal random access and streaming to the GPU in compressed formats. A novel low-bitrate codec, able to store into fixed-size pages a variable-rate approximation based on sparse coding with learned dictionaries, is exploited to meet stringent bandwidth constraints during time-critical operations, while a near-lossless representation is employed to support high-quality static frame rendering. A flexible high-speed GPU decoder and raycasting framework mixes and matches GPU kernels performing parallel object-space and image-space operations for seamless support, on fat and thin clients, of different exploration use cases, including animation and temporal browsing, dynamic exploration of single frames, and high-quality snapshots generated from near-lossless data. The quality and performance of our approach are demonstrated on large datasets with thousands of multi-billion-voxel frames.

Item: Guiding Lens-based Exploration using Annotation Graphs (The Eurographics Association, 2021)
Authors: Ahsan, Moonisa; Marton, Fabio; Pintus, Ruggero; Gobbetti, Enrico
Editors: Frosini, Patrizio; Giorgi, Daniela; Melzi, Simone; Rodolà, Emanuele
Abstract: We introduce a novel approach for guiding users in the exploration of annotated 2D models using interactive visualization lenses. Information on the interesting areas of the model is encoded in an annotation graph generated at authoring time. Each graph node contains an annotation, in the form of a visual markup of the area of interest, as well as the optimal lens parameters that should be used to explore the annotated area and a scalar representing the annotation's importance. Graph edges are used, instead, to represent preferred ordering relations in the presentation of annotations. A scalar associated with each edge determines the strength of this prescription. At run-time, the graph is exploited to assist users in their navigation by determining the next best annotation in the database and moving the lens towards it when the user releases interactive control. The selection is based on the current view and lens parameters, the graph content and structure, and the navigation history. This approach supports the seamless blending of an automatic tour of the data with interactive lens-based exploration. The approach is tested and discussed in the context of the exploration of multi-layer relightable models.
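A minimal sketch of how a "next best annotation" could be chosen from such a graph; the scoring terms (importance, edge strength, distance penalty, visit history) follow the ingredients listed in the abstract, but the actual scoring function and data layout are assumptions:

```python
# Minimal sketch of "next best annotation" selection on an annotation graph.
import math

def next_best_annotation(graph, current_id, lens_center, visited):
    """graph: {node_id: {"importance": float, "center": (x, y),
                         "edges": {neighbor_id: strength}}}"""
    best_id, best_score = None, -math.inf
    for node_id, node in graph.items():
        if node_id == current_id or node_id in visited:
            continue
        dx = node["center"][0] - lens_center[0]
        dy = node["center"][1] - lens_center[1]
        distance_penalty = math.hypot(dx, dy)
        # Edges leaving the current node express preferred presentation order.
        edge_bonus = graph[current_id]["edges"].get(node_id, 0.0) if current_id in graph else 0.0
        score = node["importance"] + edge_bonus - 0.01 * distance_penalty
        if score > best_score:
            best_id, best_score = node_id, score
    return best_id   # None once every annotation has been visited
```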
Item: HexBox: Interactive Box Modeling of Hexahedral Meshes (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Zoccheddu, Francesco; Gobbetti, Enrico; Livesu, Marco; Pietroni, Nico; Cherchi, Gianmarco
Editors: Memari, Pooran; Solomon, Justin
Abstract: We introduce HexBox, an intuitive modeling method and interactive tool for creating and editing hexahedral meshes. HexBox brings the major and widely validated paradigm of surface box modeling into the world of hex meshing. The main idea is to allow the user to box-model a volumetric mesh by primarily modifying its surface through a set of topological and geometric operations. We support, in particular, local and global subdivision, various instantiations of extrusion, removal, and cloning of elements, the creation of non-conformal or conformal grids, as well as shape modifications through vertex positioning, including manual editing, automatic smoothing, or, eventually, projection on an externally provided target surface. At the core of the efficient implementation of the method is the coherent maintenance, at all steps, of two parallel data structures: a hexahedral mesh representing the topology and geometry of the currently modeled shape, and a directed acyclic graph that connects operation nodes to the affected mesh hexahedra. Operations are realized by exploiting recent advancements in grid-based meshing, such as mixing of 3-refinement, 2-refinement, and face-refinement, and using templated topological bridges to enforce on-the-fly mesh conformity across pairs of adjacent elements. A direct manipulation user interface lets users control all operations. The effectiveness of our tool, released as open source to the community, is demonstrated by modeling several complex shapes hard to realize with competing tools and techniques.

Item: HistoContours: a Framework for Visual Annotation of Histopathology Whole Slide Images (The Eurographics Association, 2022)
Authors: Al-Thelaya, Khaled; Joad, Faaiz; Gilal, Nauman Ullah; Mifsud, William; Pintore, Giovanni; Gobbetti, Enrico; Agus, Marco; Schneider, Jens
Editors: Raidou, Renata G.; Sommer, Björn; Kuhlen, Torsten W.; Krone, Michael; Schultz, Thomas; Wu, Hsiang-Yun
Abstract: We present an end-to-end framework for histopathological analysis of whole slide images (WSIs). Our framework uses deep-learning-based localization and classification of cell nuclei, followed by spatial data aggregation to propagate the classes of sparsely distributed nuclei across the entire slide. We use YOLO ("You Only Look Once") for localization instead of more costly segmentation approaches and show that using HistAuGAN boosts its performance. YOLO finds bounding boxes around nuclei with good accuracy, but the classification accuracy can be improved by other methods. To this end, we extract patches around nuclei from the WSI and consider models from the SqueezeNet, ResNet, and EfficientNet families for classification. Where we do not achieve a clear separation between the highest and second-highest softmax activations of the classifier, we use YOLO's output as a secondary vote. The result is a sparse annotation of the WSI, which we turn dense using kernel density estimation, yielding a full vector of per-pixel probabilities for each class of nucleus we consider. This allows us to visualize our results using both color-coding and isocontouring, reducing visual clutter. Our novel nuclei-to-tissue coupling allows histopathologists to work at both the nucleus and the tissue level, a feature appreciated by domain experts in a qualitative user study.
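The densification step (from sparse per-nucleus labels to per-pixel class probabilities via kernel density estimation) can be sketched as follows; the Gaussian kernel bandwidth and array layout are assumptions, not the paper's settings:

```python
# Minimal sketch of turning sparse per-nucleus class labels into dense per-pixel
# class probability maps with kernel density estimation (Gaussian smoothing of
# per-class point masses).
import numpy as np
from scipy.ndimage import gaussian_filter

def dense_class_probabilities(nuclei_xy, nuclei_class, num_classes, shape, sigma=25.0):
    """nuclei_xy: (N, 2) integer pixel coordinates; nuclei_class: (N,) labels."""
    density = np.zeros((num_classes,) + shape)
    for (x, y), c in zip(nuclei_xy, nuclei_class):
        density[c, y, x] += 1.0
    for c in range(num_classes):
        density[c] = gaussian_filter(density[c], sigma)   # KDE with Gaussian kernel
    total = density.sum(axis=0, keepdims=True)
    return density / np.maximum(total, 1e-12)   # (num_classes, H, W) probabilities
```

Isocontours for the visual annotation can then be drawn on each probability channel.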
Item: InShaDe: Invariant Shape Descriptors for Visual Analysis of Histology 2D Cellular and Nuclear Shapes (The Eurographics Association, 2020)
Authors: Agus, Marco; Al-Thelaya, Khaled; Cali, Corrado; Boido, Marina M.; Yang, Yin; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens
Editors: Kozlíková, Barbora; Krone, Michael; Smit, Noeska; Nieselt, Kay; Raidou, Renata Georgia
Abstract: We present a shape processing framework for visual exploration of cellular nuclear envelopes extracted from histology images. The framework is based on a novel shape descriptor of closed contours relying on a geodesically uniform resampling of discrete curves, which allows for discrete differential-geometry-based computation of unsigned curvature at vertices and edges. Our descriptor is, by design, invariant under translation, rotation, and parameterization, and additionally offers optional uniform-scale invariance. The optional scale invariance is achieved by scaling features to z-scores, while invariance under parameterization shifts is achieved by using elliptic Fourier analysis (EFA) on the resulting curvature vectors. These invariant shape descriptors provide an embedding into a fixed-dimensional feature space that can be utilized for various applications: (i) as input features for deep and shallow learning techniques; (ii) as input for dimension reduction schemes providing a visual reference for clustering collections of shapes. The capabilities of the proposed framework are demonstrated in the context of visual analysis and unsupervised classification of histology images.

Item: Interactive Volumetric Visual Analysis of Glycogen-derived Energy Absorption in Nanometric Brain Structures (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Agus, Marco; Calì, Corrado; Al-Awami, Ali K.; Gobbetti, Enrico; Magistretti, Pierre J.; Hadwiger, Markus
Editors: Gleicher, Michael; Viola, Ivan; Leitte, Heike
Abstract: Digital acquisition and processing techniques are changing the way neuroscience investigation is carried out. Emerging applications range from statistical analysis on image stacks to complex connectomics visual analysis tools targeted at developing and testing hypotheses of brain development and activity. In this work, we focus on neuroenergetics, a field where neuroscientists analyze nanoscale brain morphology and relate energy consumption to glucose storage in the form of glycogen granules. In order to facilitate the understanding of neuroenergetic mechanisms, we propose a novel customized pipeline for the visual analysis of nanometric-level reconstructions based on electron microscopy image data. Our framework supports analysis tasks by combining i) a scalable volume visualization architecture able to selectively render image stacks and corresponding labelled data, ii) a method for highlighting distance-based energy absorption probabilities in the form of glow maps, and iii) a hybrid connectivity-based and absorption-based interactive layout representation able to support queries for selective analysis of areas of interest and potential activity within the segmented datasets. This working pipeline is currently used in a variety of studies in the neuroenergetics domain. Here, we discuss a test case in which the framework was successfully used by domain scientists for the analysis of aging effects on glycogen metabolism, extracting knowledge from a series of nanoscale brain stacks of the rodent somatosensory cortex.
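One plausible way to compute such a distance-based glow map is sketched below; the exponential falloff and its length scale are assumptions, not the paper's absorption model:

```python
# Minimal sketch of a distance-based "glow map": an absorption pseudo-probability
# that decays with distance from segmented glycogen granules.
import numpy as np
from scipy.ndimage import distance_transform_edt

def glow_map(granule_mask, voxel_size=1.0, decay_length=50.0):
    """granule_mask: 3D boolean array marking glycogen granule voxels."""
    # Distance (in physical units) from every voxel to the nearest granule.
    distance = distance_transform_edt(~granule_mask, sampling=voxel_size)
    # Map distance to a [0, 1] value used for glow rendering.
    return np.exp(-distance / decay_length)
```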
Item: MTV-Player: Interactive Spatio-Temporal Exploration of Compressed Large-Scale Time-Varying Rectilinear Scalar Volumes (The Eurographics Association, 2019)
Authors: Díaz, Jose; Marton, Fabio; Gobbetti, Enrico
Editors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
Abstract: We present an approach for supporting fully interactive exploration of massive time-varying rectilinear scalar volumes on commodity platforms. We decompose each frame into a forest of bricked octrees. Each brick is further subdivided into smaller blocks, which are compactly approximated by quantized variable-length sparse linear combinations of prototype blocks stored in a data-dependent dictionary learned from the input sequence. This variable-bit-rate compact representation, obtained through a tolerance-driven learning and approximation process, is stored in a GPU-friendly format that supports direct adaptive streaming to the GPU with spatial and temporal random access. An adaptive compression-domain renderer closely coordinates off-line data selection, streaming, decompression, and rendering. The resulting system provides total control over the spatial and temporal dimensions of the data, supporting the same exploration metaphor as traditional video players. Since we employ a highly compressed representation, the bandwidth provided by current commodity platforms proves sufficient to fully stream and render dynamic representations without relying on partial updates, thus avoiding any unwanted dynamic effects introduced by current incremental loading approaches. Moreover, our variable-rate encoding based on sparse representations provides high-quality approximations, while offering real-time decoding and rendering performance. The quality and performance of our approach are demonstrated on massive time-varying datasets at the terascale, which are nonlinearly explored at interactive rates on a commodity graphics PC.

Item: A Novel Approach for Exploring Annotated Data With Interactive Lenses (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Bettio, Fabio; Ahsan, Moonisa; Marton, Fabio; Gobbetti, Enrico
Editors: Borgo, Rita; Marai, G. Elisabeta; Landesberger, Tatiana von
Abstract: We introduce a novel approach for assisting users in exploring 2D data representations with an interactive lens. Focus-and-context exploration is supported by translating user actions into joint adjustments of camera and lens parameters that ensure a good placement and sizing of the lens within the view. This general approach, implemented using standard device mappings, overcomes the limitations of current solutions, which force users to continuously switch from lens positioning and scaling to view panning and zooming. Navigation is further assisted by exploiting data annotations. In addition to traditional visual markups and information links, we associate with each annotation a lens configuration that highlights the region of interest. During interaction, an assisting controller determines the next best lens in the database based on the current view and lens parameters and the navigation history. Then, the controller interactively guides the user's lens towards the selected target and displays its annotation markup. As only one annotation markup is displayed at a time, clutter is reduced. Moreover, in addition to guidance, the navigation can also be automated to create a tour through the data. While our methods are applicable to general 2D visualization, we have implemented them for the exploration of stratigraphic relightable models. The capabilities of our approach are demonstrated in cultural heritage use cases. A user study has been performed in order to validate our approach.

Item: Objective and Subjective Evaluation of Virtual Relighting from Reflectance Transformation Imaging Data (The Eurographics Association, 2018)
Authors: Pintus, Ruggero; Dulecha, Tinsae; Jaspe, Alberto; Giachetti, Andrea; Ciortan, Irina; Gobbetti, Enrico
Editors: Sablatnig, Robert; Wimmer, Michael
Abstract: Reflectance Transformation Imaging (RTI) is widely used to produce relightable models from multi-light image collections. These models are used for a variety of tasks in the Cultural Heritage field. In this work, we carry out an objective and subjective evaluation of RTI data visualization. We start from the acquisition of a series of objects with different geometry and appearance characteristics using a common dome-based configuration. We then transform the acquired data into relightable representations using different approaches: PTM, HSH, and RBF. We then perform an objective error estimation by comparing ground-truth images with relighted ones in a leave-one-out framework using the PSNR and SSIM error metrics. Moreover, we carry out a subjective investigation through perceptual experiments involving end users with a variety of backgrounds. Objective and subjective tests are shown to behave consistently, and significant differences are found between the various methods. While the proposed analysis has been performed on three common and state-of-the-art RTI visualization methods, our approach is general enough to be extended and applied in the future to newly developed multi-light processing pipelines and rendering solutions, to assess their numerical precision and accuracy, and their perceptual visual quality.
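The objective part of the evaluation can be sketched as follows, with `fit_relightable_model` standing in for any fitter (PTM, HSH, RBF) and `model.relight` an assumed interface rather than an actual API:

```python
# Minimal sketch of the leave-one-out objective evaluation: for each light
# direction, a relightable model is fit on the remaining images and the
# held-out image is compared to its relit prediction with PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def leave_one_out_scores(images, light_dirs, fit_relightable_model):
    """images: list of (H, W) float arrays in [0, 1]; light_dirs: list of (3,) unit vectors."""
    scores = []
    for i in range(len(images)):
        train_imgs = [im for j, im in enumerate(images) if j != i]
        train_dirs = [d for j, d in enumerate(light_dirs) if j != i]
        model = fit_relightable_model(train_imgs, train_dirs)
        predicted = model.relight(light_dirs[i])   # assumed interface
        scores.append({
            "psnr": peak_signal_noise_ratio(images[i], predicted, data_range=1.0),
            "ssim": structural_similarity(images[i], predicted, data_range=1.0),
        })
    return scores
```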
Item: Recovering 3D Indoor Floor Plans by Exploiting Low-cost Spherical Photography (The Eurographics Association, 2018)
Authors: Pintore, Giovanni; Ganovelli, Fabio; Pintus, Ruggero; Scopigno, Roberto; Gobbetti, Enrico
Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes
Abstract: We present a novel approach to automatically recover, from a small set of partially overlapping panoramic images, an indoor structure representation in terms of a 3D floor plan registered with a set of 3D environment maps. Our improvements over previous approaches include a new method for geometric context extraction based on a 3D facets representation, which combines color distribution analysis of individual images with sparse multi-view clues, as well as an efficient method to combine the facets from different points of view in the same world space, considering the reliability of each facet's contribution. The resulting capture and reconstruction pipeline automatically generates 3D multi-room environments where most previous approaches fail, such as in the presence of hidden corners, large clutter, and sloped ceilings, even without involving additional dense 3D data or tools. We demonstrate the effectiveness and performance of our approach on different real-world indoor scenes.

Item: SPIDER: SPherical Indoor DEpth Renderer (The Eurographics Association, 2022)
Authors: Tukur, Muhammad; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Agus, Marco
Editors: Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Abstract: Today's Extended Reality (XR) applications that call for specific Diminished Reality (DR) strategies to hide specific classes of objects increasingly use 360° cameras, which can capture entire areas in a single picture. In this work, we present an interactive image editing and rendering system named SPIDER, which takes a spherical 360° indoor scene as input. The system incorporates the output of deep learning models that abstract the segmentation and depth images of full and empty rooms, allowing users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: i) rendering of the scene in various modalities (point cloud, polygonal, wireframe); ii) refurnishing (transferring portions of rooms); iii) deferred shading through the use of precomputed normal maps. These kinds of scene editing and manipulation can be used to assess the inference of the deep learning models and enable several XR applications in areas such as furniture retail, interior design, and real estate. Moreover, they can also be useful for data augmentation, art, design, and painting.
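The core geometric step behind point-cloud rendering of a spherical indoor capture, back-projecting an equirectangular depth map to 3D points, can be sketched as follows (coordinate conventions are assumptions, not necessarily SPIDER's):

```python
# Minimal sketch of turning an equirectangular (spherical) depth image into a
# 3D point cloud, one ray per pixel.
import numpy as np

def spherical_depth_to_points(depth):
    """depth: (H, W) array of distances along each viewing ray."""
    H, W = depth.shape
    # Longitude in [-pi, pi), latitude in [-pi/2, pi/2], sampled at pixel centers.
    lon = (np.arange(W) + 0.5) / W * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(H) + 0.5) / H * np.pi
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),   # x
                     np.sin(lat),                 # y (up)
                     np.cos(lat) * np.cos(lon)],  # z
                    axis=-1)
    return dirs * depth[..., None]                # (H, W, 3) points
```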
Item: State-of-the-art in Automatic 3D Reconstruction of Structured Indoor Environments (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Pintore, Giovanni; Mura, Claudio; Ganovelli, Fabio; Fuentes-Perez, Lizeth Joseline; Pajarola, Renato; Gobbetti, Enrico
Editors: Mantiuk, Rafal; Sundstedt, Veronica
Abstract: Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task with important applications in many fields. Given the complexity and variability of interior environments, and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this survey, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends.

Item: State-of-the-art in Multi-Light Image Collections for Surface Visualization and Analysis (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Pintus, Ruggero; Dulecha, Tinsae Gebrechristos; Ciortan, Irina Mihaela; Gobbetti, Enrico; Giachetti, Andrea
Editors: Laramee, Robert S.; Oeltze, Steffen; Sedlmair, Michael
Abstract: Multi-Light Image Collections (MLICs), i.e., stacks of photos of a scene acquired with a fixed viewpoint and varying surface illumination, provide large amounts of visual and geometric information. In this survey, we provide an up-to-date integrative view of MLICs as a means to gain insight into objects through the analysis and visualization of the acquired data. After a general overview of MLIC capture and storage, we focus on the main approaches to produce representations usable for visualization and analysis. In this context, we first discuss methods for direct exploration of the raw data. We then summarize approaches that strive to emphasize shape and material details by fusing all acquisitions into a single enhanced image. Subsequently, we focus on approaches that produce relightable images through intermediate representations. This can be done either by fitting various analytic forms of the light transform function, or by locally estimating the parameters of physically plausible models of shape and reflectance and using them for visualization and analysis. We finally review techniques that improve object understanding by using illustrative approaches to enhance relightable models, or by extracting features and derived maps. We also review how these methods are applied in several main application domains, and which tools are available for MLIC visualization and analysis. We finally point out relevant research issues, analyze research trends, and offer guidelines for practical applications.
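As a concrete example of fitting an analytic light-transform function, the classic per-pixel Polynomial Texture Map (PTM) fit from an MLIC reduces to a small least-squares problem; this sketch follows the standard six-term biquadratic PTM formulation and is not tied to any specific tool covered by the survey:

```python
# Minimal sketch of a per-pixel PTM fit: each pixel's intensity is modeled as a
# 6-term biquadratic polynomial of the light direction components (lu, lv) and
# fit by least squares across all images of the MLIC.
import numpy as np

def ptm_basis(light_dirs):
    """light_dirs: (L, 2) array of (lu, lv) light direction components."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    return np.stack([lu * lu, lv * lv, lu * lv, lu, lv, np.ones_like(lu)], axis=1)

def fit_ptm(images, light_dirs):
    """images: (L, H, W) intensities; returns (H, W, 6) PTM coefficients."""
    L, H, W = images.shape
    A = ptm_basis(light_dirs)                         # (L, 6)
    B = images.reshape(L, H * W)                      # (L, H*W)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)    # (6, H*W)
    return coeffs.T.reshape(H, W, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM for a new light direction."""
    basis = np.array([lu * lu, lv * lv, lu * lv, lu, lv, 1.0])
    return coeffs @ basis                             # (H, W) relit image
```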