Italian Chapter Conference 2023 - Smart Tools and Apps in Graphics
Browsing Italian Chapter Conference 2023 - Smart Tools and Apps in Graphics by Subject "Applied computing"
Item: A Gaze Detection System for Neuropsychiatric Disorders Remote Diagnosis Support (The Eurographics Association, 2023)
Authors: Cangelosi, Antonio; Antola, Gabriele; Iacono, Alberto Lo; Santamaria, Alfonso; Clerico, Marinella; Al-Thani, Dena; Agus, Marco; Calì, Corrado
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Accurate and early diagnosis of neuropsychiatric disorders, such as Autism Spectrum Disorder (ASD), is a significant challenge in clinical practice. This study explores the use of real-time gaze tracking as a tool for unbiased, quantitative analysis of eye gaze. The results could support the diagnosis of such disorders and potentially serve as a tool in rehabilitation. The proposed setup consists of an RGB-D camera embedded in latest-generation smartphones and a set of processing components for analysing the recorded data on patient interactivity. The proposed system is easy to use, requires little prior knowledge or expertise, and achieves a high level of accuracy; because of this, it can be used remotely (telemedicine) to simplify diagnosis and rehabilitation processes. We present initial findings showing that real-time gaze tracking can be a valuable tool for clinicians: it is non-invasive and provides unbiased quantitative data that can aid early detection, monitoring, and treatment evaluation. These findings have significant implications for the advancement of ASD research, and the proposed approach has the potential to enhance diagnostic accuracy and improve patient outcomes.

Item: JPEG Line-drawing Restoration With Masks (The Eurographics Association, 2023)
Authors: Zhu, Yan; Yamaguchi, Yasushi
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Learning-based JPEG restoration methods usually take little account of the visual content of images.
Even though these methods achieve satisfying results on photos, applying them directly to line drawings, which consist of lines on a white background, is not suitable. The large background area of a digital line drawing carries no intensity information and should be constantly white (maximum brightness). Existing JPEG restoration networks consistently fail to output constant white pixels for the background; worse, training on the background can reduce learning efficiency in the areas where texture actually exists. To tackle these problems, we propose a line-drawing restoration framework that can be applied to existing state-of-the-art restoration networks. Our framework takes an existing restoration network as its backbone and processes an input rasterized JPEG line drawing in two steps. First, a proposed mask-predicting network predicts a binary mask indicating the locations of lines and background in the potential undeteriorated line drawing. Then the mask is concatenated with the input JPEG line drawing and fed into the backbone restoration network, where the conventional L1 loss is replaced by a masked mean-squared-error (MSE) loss. Besides learning-based mask generation, we also evaluate other direct mask-generation methods. Experiments show that our framework with learnt binary masks achieves both better visual quality and better scores on quantitative metrics than state-of-the-art methods for JPEG line-drawing restoration.

Item: Mixed Reality for Orthopedic Elbow Surgery Training and Operating Room Applications: A Preliminary Analysis (The Eurographics Association, 2023)
Authors: Cangelosi, Antonio; Riberi, Giacomo; Salvi, Massimo; Molinari, Filippo; Titolo, Paolo; Agus, Marco; Calì, Corrado
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
The use of Mixed Reality in medicine is widely documented as a candidate to revolutionize surgical interventions.
In this paper we present a system to simulate K-wire placement, a common orthopedic procedure used to stabilize fractures, dislocations, and other traumatic injuries. With the described system, it is possible to leverage Mixed Reality (MR) and advanced visualization techniques, applied to a surgical simulation phantom, to enhance surgical training and critical orthopedic surgical procedures. The analysis centers on evaluating the precision and proficiency of K-wire placement in an elbow surgical phantom, designed in 3D modeling software starting from a virtual 3D anatomical reference. By visually superimposing 3D reconstructions of internal structures and the target K-wire position on the physical model, we expect not only to improve the learning curve but also to establish a foundation for potential real-time surgical guidance in challenging clinical scenarios. Performance is measured as the difference between the K-wires' actual placement and the target position; the quantitative measurements are then used to compare the risk of iatrogenic injury to nerves and vascular structures in MR-guided vs. non-MR-guided simulated interventions.

Item: Semantic Segmentation of High-resolution Point Clouds Representing Urban Contexts (The Eurographics Association, 2023)
Authors: Romanengo, Chiara; Cabiddu, Daniela; Pittaluga, Simone; Mortara, Michela
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Point clouds are becoming an increasingly common digital representation of real-world objects, and they are particularly efficient when dealing with large-scale objects and/or when extremely high resolution is required.
The focus of our work is the analysis, 3D feature extraction, and semantic annotation of point clouds representing urban scenes, coming from various acquisition technologies, e.g., terrestrial (fixed or mobile) or aerial laser scanning, or photogrammetry; the task is challenging due to data dimensionality and noise. In particular, we present a pipeline to segment high-resolution point clouds representing urban environments into geometric primitives. We focus on planes, cylinders, and spheres, which are the main features of buildings (walls, roofs, arches, ...) and ground surfaces (streets, pavements, platforms), and identify the unique parameters of each instance. This paper focuses on the semantic segmentation of buildings, but the approach is currently being generalised to manage extended urban areas. Given a dense point cloud representing a specific building, we first apply a binary space partitioning method to obtain sub-clouds small enough to be processed. Then, a combination of the well-known RANSAC algorithm and a recognition method based on the Hough transform (HT) is applied to each sub-cloud to obtain a semantic segmentation into salient elements, such as façades, walls, and roofs. The parameters of primitive instances are saved as metadata to document the structural elements of buildings for further thematic analyses, e.g., energy efficiency. We present a case study on the city of Catania, Italy, where two buildings of historical and artistic value have been digitized at very high resolution.
Our approach semantically segments these huge point clouds and proves robust to uneven sampling density, input noise, and outliers.

Item: VarIS: Variable Illumination Sphere for Facial Capture, Model Scanning, and Spatially Varying Appearance Acquisition (The Eurographics Association, 2023)
Authors: Baron, Jessica; Li, Xiang; Joshi, Parisha; Itty, Nathaniel; Greene, Sarah; Dhillon, Daljit Singh J.; Patterson, Eric
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
We introduce VarIS, our Variable Illumination Sphere, a multi-purpose system for acquiring and processing real-world geometric and appearance data for computer-graphics research and production. Its key applications include (1) human-face capture, (2) model scanning, and (3) spatially varying material acquisition. Facial capture requires high-resolution cameras at multiple viewpoints, photometric capabilities, and a swift process due to human movement. Acquiring a digital version of a physical model is somewhat similar, but with different constraints on image processing and more allowable time. Each requires detailed estimation of geometry and physically based shading properties. Measuring spatially varying light-scattering properties requires spanning four dimensions of illumination and viewpoint with angular, spatial, and spectral accuracy; this process can also be assisted by multiple simultaneous viewpoints or rapid switching of lights with no movement necessary. VarIS is a system of hardware and software for spherical illumination and imaging, custom designed and developed by our team. It was inspired by Light Stages and goniophotometers, but costs less through the use of primarily off-the-shelf components, and it extends capabilities beyond those devices. In this paper we describe the system and its contributions, including practical details that could assist other researchers and practitioners.
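The masked MSE loss mentioned in the JPEG line-drawing restoration item, which averages the squared error over line pixels only so the large white background does not dominate training, can be sketched as follows. This is a minimal NumPy illustration under our own conventions (the function name and array layout are not from the paper, which uses this loss inside a neural-network training loop):

```python
import numpy as np

def masked_mse(restored, target, mask):
    """Mean squared error computed only where mask == 1 (line pixels).

    restored, target: image arrays with values in [0, 1].
    mask: binary array of the same shape; 1 marks line pixels,
    0 marks background, which is excluded from the average.
    """
    restored = np.asarray(restored, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    mask = np.asarray(mask, dtype=np.float64)
    weighted = mask * (restored - target) ** 2
    # Guard against an all-zero mask to avoid division by zero.
    return weighted.sum() / np.clip(mask.sum(), 1.0, None)
```

With an all-ones mask this reduces to the ordinary MSE; with the predicted line mask, background pixels contribute nothing to the gradient.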
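The Mixed Reality elbow-surgery item measures performance as the difference between the actual and target K-wire placement. One simple way to quantify such a difference, assuming each wire is represented by an entry point and an axis direction (a representation we adopt for illustration; the paper does not specify it), is a positional error between entry points plus an angular error between axes:

```python
import numpy as np

def kwire_deviation(real_entry, real_dir, target_entry, target_dir):
    """Return (positional error, angular error in degrees) between a
    placed K-wire and its target, each given as entry point + direction.
    """
    real_dir = np.asarray(real_dir, dtype=np.float64)
    target_dir = np.asarray(target_dir, dtype=np.float64)
    real_dir = real_dir / np.linalg.norm(real_dir)
    target_dir = target_dir / np.linalg.norm(target_dir)
    # Euclidean distance between the two entry points.
    pos_err = np.linalg.norm(np.asarray(real_entry, dtype=np.float64)
                             - np.asarray(target_entry, dtype=np.float64))
    # A wire axis is undirected, so take |cos| before the arccos.
    cos_a = np.clip(abs(np.dot(real_dir, target_dir)), 0.0, 1.0)
    ang_err = np.degrees(np.arccos(cos_a))
    return pos_err, ang_err
```

Such scalar deviations make the MR-guided and non-MR-guided trials directly comparable, though the paper's actual risk analysis of nerve and vessel injury involves anatomy-specific criteria beyond this sketch.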
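The RANSAC step in the urban point-cloud segmentation item can be sketched for the plane case as follows. This is a minimal NumPy version under our own conventions; the paper combines RANSAC with Hough-transform recognition and also handles cylinders and spheres, none of which is shown here:

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane n·x + d = 0 to a point cloud by RANSAC.

    Repeatedly fits a plane through 3 random points and keeps the
    candidate with the most inliers within `threshold` distance.
    Returns (unit normal, offset d, inlier count).
    """
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=np.float64)
    best = (None, None, -1)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:  # degenerate (collinear) sample, skip it
            continue
        n = n / norm
        d = -np.dot(n, p0)
        # Point-to-plane distances; count points within the threshold.
        count = int((np.abs(points @ n + d) < threshold).sum())
        if count > best[2]:
            best = (n, d, count)
    return best
```

In the paper's pipeline an analogous detection runs per sub-cloud after binary space partitioning, and the recovered primitive parameters are stored as metadata for the building elements.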