Graphics Dissertation Online
For information about the PhD Award, please visit the Eurographics Annual Award for Best PhD Thesis.
Eurographics PhD Award Winners (under construction)
Browsing Graphics Dissertation Online by Title
Now showing 1 - 20 of 405
Item: 3D Modelling and Reconstruction of Peripheral Arteries (La Cruz, Jan 2006). La Cruz, Alexandra

A model is a simplified representation of an object. The modeling stage could be described as shaping individual objects that are later used in the scene. For many years scientists have been trying to create an appropriate model of the blood vessels. It seems intuitive to model a blood vessel as a tubular object, and this is true, but problems appear when one wants to create an accurate model that can deal with the wide variability of shapes of diseased blood vessels. From the medical point of view it is important to identify not just the center of the vessel lumen but also the center of the vessel, particularly in the presence of anomalies, as is the case for diseased blood vessels. An accurate estimation of vessel parameters is a prerequisite for automated visualization and analysis of healthy and diseased blood vessels. We believe that a model-based technique is the most suitable one for parameterizing blood vessels. The main focus of this work is to present a new strategy to parameterize diseased blood vessels of the lower extremity arteries.

The first part presents an evaluation of different methods for approximating the centerline of the vessel in a phantom simulating the peripheral arteries. Six algorithms were used to determine the centerline of a synthetic peripheral arterial vessel. They are based on: ray casting using a threshold-based or maximum-gradient stop criterion, pixel-motion estimation between successive images (block matching), center of gravity, and shape-based segmentation. The Randomized Hough Transform and ellipse fitting were used as shape-based segmentation techniques. Since the centerline of the synthetic data set is known, the error can be estimated in order to determine the accuracy achieved by a given method.

The second part describes an estimation of the dimensions of lower extremity arteries imaged by computed tomography. The vessel is modeled as an elliptical or cylindrical structure with specific dimensions, orientation, and CT attenuation values. The model separates two homogeneous regions: its inner side represents a region of vessel density, and its outer side a region of background. The point spread function of the CT scanner is modeled with a Gaussian kernel, which smooths the vessel boundary in the model. An optimization process is used to find the model that best fits the input data. The method provides the center location, diameter, and orientation of the vessel as well as the mean density values of blood and background.

The third part presents the results of a clinical evaluation of our methods, a prerequisite for use in a clinical environment. For this evaluation, twenty cases from available patient data were selected and classified as mildly diseased and severely diseased datasets. Manual identification was used as our reference standard. We compared the model-fitting method against a standard method currently used in the clinical environment. In general, the mean distance error for each method was within the inter-operator variability. However, the non-linear model-fitting technique based on a cylindrical model showed a better center approximation in most of the cases, mildly diseased as well as severely diseased. Clinically, the non-linear model-fitting technique is more robust and produced a better estimation in most of the cases. Nevertheless, radiologists and clinical experts have the last word with respect to the use of this technique in a clinical environment.
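The model-fitting step described in the second part above lends itself to a compact illustration. The sketch below is a simplified 2D analogue, not the author's implementation: it fits a circular cross-section model (center, radius, vessel density, background density), blurred by a Gaussian stand-in for the scanner's point spread function, to a single CT slice by least squares. The psf_sigma value, the Nelder-Mead optimizer, and all parameter names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def render_model(params, shape, psf_sigma=1.5):
    """Ideal circular cross-section (vessel density inside, background outside),
    blurred by a Gaussian approximation of the scanner's point spread function."""
    cx, cy, r, vessel_hu, background_hu = params
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    ideal = np.where(inside, vessel_hu, background_hu).astype(float)
    return gaussian_filter(ideal, psf_sigma)

def fit_cross_section(slice_hu, init):
    """Least-squares fit of the blurred model to one CT slice (2D array of HU values)."""
    def residual(params):
        return np.mean((render_model(params, slice_hu.shape) - slice_hu) ** 2)
    return minimize(residual, init, method="Nelder-Mead").x

# Example: synthetic 64x64 slice with a vessel at (30, 34), radius 6.
truth = (30.0, 34.0, 6.0, 300.0, 50.0)
slice_hu = render_model(truth, (64, 64)) + np.random.normal(0, 5, (64, 64))
print(fit_cross_section(slice_hu, init=(32, 32, 5, 250, 0)))
```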
Item: 3D Reconstruction and Rendering from High Resolution Light Fields (ETH Zurich, 2015). Kim, Changil

This thesis presents a complete processing pipeline for densely sampled, high resolution light fields, from acquisition to rendering. The key components of the pipeline include 3D scene reconstruction, geometry-driven sampling analysis, and controllable multiscopic 3D rendering. The thesis first addresses 3D geometry reconstruction from light fields. We show that the dense sampling of a scene attained in light fields allows for more robust and accurate depth estimation without resorting to patch matching and costly global optimization processes. Our algorithm estimates the depth for each and every light ray in the light field with great accuracy, and its pixel-wise depth computation results in particularly favorable quality around depth discontinuities. In fact, most operations are kept localized over small portions of the light field, which by itself is crucial to scalability for higher resolution input and is also well suited for efficient parallelized implementations. Resulting reconstructions retain fine details of the scene and exhibit precise localization of object boundaries. While it is the key to the success of our reconstruction algorithm, the dense sampling of light fields entails difficulties when it comes to the acquisition and processing of light fields. This raises the question of the optimal sampling density required for faithful geometry reconstruction. Existing works focus more on the alias-free rendering of light fields, and geometry-driven analysis has seen much less research effort. We propose an analysis model for determining sampling locations that are optimal in the sense of high quality geometry reconstruction. This is achieved by analyzing the visibility of scene points and the resolvability of depth, and by estimating the distribution of reliable estimates over potential sampling locations. A light field with accurate depth information enables an entirely new approach to flexible and controllable 3D rendering. We develop a novel algorithm for multiscopic rendering of light fields which provides great controllability over the perceived depth conveyed in the output. The algorithm synthesizes a pair of stereoscopic images directly from light fields and allows us to control stereoscopic and artistic constraints on a per-pixel basis. It computes non-planar 2D cuts over a light field volume that best meet the described constraints by minimizing an energy functional. The output images are synthesized by sampling light rays on the cut surfaces. The algorithm generalizes to multiscopic 3D displays by computing multiple cuts. The resulting algorithms are highly relevant to many application scenarios. They can readily be applied to 3D scene reconstruction and object scanning, depth-assisted segmentation, image-based rendering, and stereoscopic content creation and post-processing, and can also be used to improve the quality of light field rendering techniques that require depth information, such as super-resolution and extended depth of field.
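As a rough illustration of per-ray depth estimation from a densely sampled light field, the sketch below is a generic plane sweep over candidate disparities, not the algorithm developed in the thesis: for each pixel of a reference view it picks the disparity that minimizes the color variance of the corresponding samples across a 1D camera array. The array layout and variable names are assumptions.

```python
import numpy as np

def depth_from_light_field(views, baselines, disparities):
    """views: list of HxWx3 images from a 1D camera array.
    baselines: per-view horizontal offsets relative to the reference view.
    disparities: candidate disparities (pixels per unit baseline) to test.
    Returns the per-pixel disparity minimizing color variance across views."""
    h, w, _ = views[0].shape
    xs = np.arange(w)
    best_cost = np.full((h, w), np.inf)
    best_disp = np.zeros((h, w))
    for d in disparities:
        samples = []
        for img, b in zip(views, baselines):
            shifted_x = np.clip(np.round(xs + d * b).astype(int), 0, w - 1)
            samples.append(img[:, shifted_x, :])      # re-sample each view at the sheared position
        cost = np.var(np.stack(samples, axis=0), axis=0).mean(axis=-1)  # color variance across views
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp
```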
Item: 3D scene analysis through non-visual cues (University College London, 2019-10-06). Monszpart, Aron

The wide applicability of scene analysis from as few viewpoints as possible attracts the attention of many scientific fields, ranging from augmented reality to autonomous driving and robotics. When approaching 3D problems in the wild, one has to admit that the problems to solve are particularly challenging, since a monocular setup is severely under-constrained. One has to design algorithmic solutions that resourcefully take advantage of abundant prior knowledge, much like the way human reasoning is performed. I propose the utilization of non-visual cues to interpret visual data. I investigate how making non-restrictive assumptions about the scene, such as "obeys Newtonian physics" or "is made by or for humans", greatly improves the quality of information retrievable from the same type of data. I successfully reason about the hidden constraints that shaped the acquired scene to come up with abstractions that represent likely estimates about the unobservable or difficult-to-acquire parts of scenes. I hypothesize that jointly reasoning about these hidden processes and the observed scene allows for more accurate inference and lays the way for prediction through understanding. Applications of the retrieved information range from image and video editing (e.g., visual effects) through robotic navigation to assisted living.

Item: 3D Simulation of external beam radiotherapy (Karangelis, Grigorios, Dec 2004). Karangelis, Grigorios

Cancer belongs to a group of diseases characterized by tumor growth and spread, and is the most significant health care problem in European and Western countries. The clinical processes used to treat cancer can be separated into drug treatments, radiation therapy [Meyer96] (RT), or a combination of them [Zambo94]. RT uses radiation to deliver a very accurate dose to a well-defined target volume with minimal damage to the surrounding healthy tissues. The desired result is the eradication of the disease and the improvement or prolongation of the patient's life. The required dose can be applied to the tumor site using external beam radiotherapy or brachytherapy [Kolot99]. Hence RT is a very demanding process that requires accuracy and effectiveness, not only for the elimination of the cancer cells but also for the protection of the healthy organs within the human body. This dissertation is concerned with the radiation therapy process (RTP) using external beam radiotherapy (EBRT).
Item: Accelerating the Rendering Process Using Impostors (Jeschke, 2005). Jeschke, Stefan

The interactive rendering of three-dimensional geometric models is a research area of great interest in computer graphics. The generation of a fluent animation of more than 60 frames per second for complex models consisting of many millions of primitives is a special challenge. Possible applications include ship, driving, and flight simulators, virtual reality, and computer games. Although the performance of common computer graphics hardware has increased dramatically in recent years, the demand for more realism and complexity in common scenes is growing even faster. This dissertation is about one approach for accelerating the rendering of such complex scenes. We take advantage of the fact that the appearance of distant scene parts hardly changes over several successive output images. Those scene parts are replaced by precomputed image-based representations, so-called impostors. Impostors are very fast to render while maintaining the appearance of the scene part as long as the viewer moves within a bounded viewing region, a so-called view cell. However, unsolved problems of impostors are the support of a satisfying visual quality with reasonable computational effort for the impostor generation, as well as very high memory requirements for impostors in common scenes. Until today, these problems have been the main reason why impostors are hardly used for rendering acceleration.

This thesis presents two new impostor techniques that are based on partitioning the scene part to be represented into image layers with different distances to the observer. A new error metric allows a guarantee for a minimum visual quality of an impostor even for large view cells. Furthermore, invisible scene parts are efficiently excluded from the representation without requiring any knowledge about the scene structure, which provides a more compact representation. One of the techniques combines every image layer separately with geometric information. This allows fast generation of memory-efficient impostors for distant scene parts. In the other technique, the geometry is independent of the depth layers, which allows a compact representation for near scene parts. The second part of this work is about the efficient usage of impostors for a given scene. The goal is to guarantee a minimum frame rate for every view within the scene while at the same time minimizing the memory requirements for all impostors. The presented algorithm automatically selects impostors and view cells so that, for every view, only the most suitable scene parts are represented as impostors. Previous approaches generated numerous similar impostors for neighboring view cells, thus wasting memory. The new algorithm overcomes this problem. The simultaneous use of additional acceleration techniques further reduces the required impostor memory and allows making best use of all available techniques at the same time. The approach is general in the sense that it can handle arbitrary scenes and a broad range of impostor techniques, and the acceleration provided by the impostors can be adapted to the bottlenecks of different rendering systems. In summary, the provided techniques and algorithms dramatically reduce the required impostor memory and simultaneously guarantee a minimum output image quality. This makes impostors useful for numerous scenes and applications where they could hardly be used before.
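A hedged sketch of the kind of geometric reasoning behind impostor validity follows; it is a textbook parallax bound, not the error metric developed in the thesis. It estimates the worst-case angular error seen from anywhere in a spherical view cell when geometry spanning a depth range is collapsed onto a single image layer. All parameter names and the spherical view cell are assumptions.

```python
import math

def max_parallax_error_deg(layer_depth, depth_min, depth_max, view_cell_radius):
    """Worst-case angular error (degrees) when geometry in [depth_min, depth_max]
    is flattened onto an image layer at layer_depth, for any viewpoint inside a
    spherical view cell of the given radius centered at the reference viewpoint.
    Depths are measured from the view cell center along the viewing direction."""
    def angular_shift(depth):
        # Apparent angular displacement of a point at 'depth' versus the layer,
        # as seen after moving view_cell_radius sideways from the reference view.
        return abs(math.atan2(view_cell_radius, depth) -
                   math.atan2(view_cell_radius, layer_depth))
    return math.degrees(max(angular_shift(depth_min), angular_shift(depth_max)))

# Example: a layer at 100 m representing geometry between 80 m and 130 m,
# valid for viewpoints within 2 m of the reference position.
print(max_parallax_error_deg(100.0, 80.0, 130.0, 2.0))
```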
Item: Accurate 3D-reconstruction and -navigation for high-precision minimal-invasive interventions (2016-02-03). El Hakimi, Wissam

Current lateral skull base surgery is largely invasive, since it requires wide exposure and direct visualization of anatomical landmarks to avoid damaging critical structures. A multi-port approach aiming to reduce this invasiveness has recently been investigated: three canals are drilled from the skull surface to the surgical region of interest, the first canal for the instrument, the second for the endoscope, and the third for material removal or an additional instrument. The transition to minimally invasive approaches in lateral skull base surgery requires sub-millimeter accuracy and high outcome predictability, which results in high requirements for the image acquisition as well as for the navigation. Computed tomography (CT) is a non-invasive imaging technique allowing the visualization of the internal patient organs. Planning optimal drill channels based on patient-specific models requires highly accurate three-dimensional (3D) CT images. This thesis focuses on the reconstruction of high quality CT volumes. Two conventional imaging systems are investigated: spiral CT scanners and C-arm cone-beam CT (CBCT) systems. Spiral CT scanners acquire volumes with typically anisotropic resolution, i.e. the voxel spacing in the slice-selection direction is larger than the in-plane spacing. A new super-resolution reconstruction approach is proposed to recover images with high isotropic resolution from two orthogonal low-resolution CT volumes. C-arm CBCT systems offer CT-like 3D imaging capabilities while being appropriate for interventional suites. A main drawback of these systems is the commonly encountered CT artifacts due to several limitations in the imaging system, such as mechanical inaccuracies. This thesis contributes new methods to enhance the CBCT reconstruction quality by addressing two main reconstruction artifacts: the misalignment artifacts caused by mechanical inaccuracies, and the metal artifacts caused by the presence of metal objects in the scanned region. CBCT scanners are appropriate for intra-operative image-guided navigation. For instance, they can be used to control the drill process based on intra-operatively acquired 2D fluoroscopic images. For successful navigation, an accurate estimate of the C-arm pose relative to the patient anatomy and the associated surgical plan is required. A new algorithm has been developed to fulfill this task with high precision. The performance of the introduced methods is demonstrated on simulated and real data.

Item: Acquisition, Encoding and Rendering of Material Appearance Using Compact Neural Bidirectional Texture Functions (2021-11-23). Rainer, Gilles

This thesis addresses the problem of photo-realistic rendering of real-world materials. Currently the most faithful approach to render an existing material is scanning the Bidirectional Texture Function (BTF), which relies on exhaustive acquisition of reflectance data from the material sample. This incurs heavy costs in terms of both capture times and memory requirements, meaning the main drawback is the lack of practicability. The scope of this thesis is two-fold: implementation of a full BTF pipeline (data acquisition, processing and rendering) and design of a compact neural material representation. We first present our custom BTF scanner, which uses a freely positionable camera and light source to acquire light- and view-dependent textures. During the processing phase, the textures are extracted from the images and rectified onto a unique grid using an estimated proxy surface. At rendering time, the rectification is reverted, and the estimated height field additionally allows the preservation of material silhouettes. The main part of the thesis is the development of a neural BTF model that is both compact in memory and practical for rendering. Concretely, the material is modeled by a small fully-connected neural network, parametrized on light and view directions as well as a vector of latent parameters that describe the appearance of the point. We first show that one network can efficiently learn to reproduce the appearance of one given material. The second focus of our work is to find an efficient method to translate BTFs into our representation. Rather than training a new network instance for each new material, the latent space and network are shared, and we use an encoder network to quickly predict latent parameters for new, unseen materials. All contributions are geared towards making photo-realistic rendering with BTFs more common and practicable in computer graphics applications like games and virtual environments.
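The decoder described in the abstract above can be sketched as a small fully-connected network that maps a light direction, a view direction, and a per-texel latent code to an RGB reflectance value. The sketch below is a generic stand-in under assumed sizes (latent dimension, layer widths, activation choices), not the network from the thesis.

```python
import torch
import torch.nn as nn

class NeuralBTFDecoder(nn.Module):
    """Maps (light direction, view direction, per-texel latent code) -> RGB reflectance.
    Layer widths and the latent dimension are illustrative assumptions."""
    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # non-negative reflectance
        )

    def forward(self, light_dir, view_dir, latent):
        return self.mlp(torch.cat([light_dir, view_dir, latent], dim=-1))

# Shading a batch of texels; latent codes would come from a texture of per-texel parameters.
decoder = NeuralBTFDecoder()
n = 1024
rgb = decoder(torch.randn(n, 3), torch.randn(n, 3), torch.randn(n, 8))
print(rgb.shape)  # torch.Size([1024, 3])
```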
Item: Adaptive Semantics Visualization (2014-11-27). Nazemi, Kawa

Human access to the increasing amount of information and data plays an essential role at the professional level and also in everyday life. While information visualization has developed new and remarkable ways of visualizing data and enabling the exploration process, adaptive systems focus on users' behavior to tailor information for supporting the information acquisition process. Recent research on adaptive visualization shows promising ways of synthesizing these two complementary approaches and making use of the strengths of both disciplines. The emerged methods and systems aim to increase the performance, acceptance, and user experience of graphical data representations for a broad range of users. Although the evaluation results of the recently proposed systems are promising, some important aspects of information visualization are not considered in the adaptation process. The visual adaptation is commonly limited to changing visual parameters or replacing visualizations entirely. Further, no existing approach adapts the visualization based on both data and user characteristics. Other limitations of existing approaches include the fact that the visualizations require training by experts in the field. In this thesis, we introduce a novel model for adaptive visualization. In contrast to existing approaches, we have focused our investigation on the potential of information visualization for adaptation. Our reference model for visual adaptation not only considers the entire transformation, from data to visual representation, but also enhances it to meet the requirements for visual adaptation. Our model adapts different visual layers that were identified based on various models and studies on human visual perception and information processing. In its adaptation process, our conceptual model considers the impact of both data and user on visualization adaptation. We investigate different approaches and models and their effects on system adaptation to gather implicit information about users and their behavior. These are then transformed and applied to affect the visual representation and to model human interaction behavior with visualizations and data, in order to achieve a more appropriate visual adaptation. Our enhanced user model further makes use of the semantic hierarchy to enable a domain-independent adaptation. To face the problem of a system that requires training by experts, we introduce the canonical user model, which models the average usage behavior with the visualization environment. Our approach learns from the behavior of the average user to adapt the different visual layers and transformation steps. This approach is further enhanced with similarity and deviation analysis for individual users, to determine similar behavior on an individual level and to identify behavior that differs from the canonical model. Users with similar behavior get similar visualization and data recommendations, while behavioral anomalies lead to a lower level of adaptation. Our model includes a set of various visual layouts that can be used to compose a multi-visualization interface, a sort of "visualization cockpit". This model facilitates various visual layouts to provide different perspectives and enhance the ability to solve difficult and exploratory search challenges. Data from different data sources can be visualized and compared in a visual manner. These different visual perspectives on data can be chosen by users or selected automatically by the system. This thesis further introduces the implementation of our model, which includes additional approaches for an efficient adaptation of visualizations, as proof of feasibility. We further conduct a comprehensive user study that aims to prove the benefits of our model and underscore limitations for future work. The user study, with 53 participants overall and four conditions, focuses on our enhanced reference model to evaluate the adaptation effects of the different visual layers.
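As a loose illustration of the canonical-user idea described above (an assumed toy formulation, not the thesis model): keep per-user counts of interactions with each visualization feature, define the canonical profile as their average, and scale the degree of adaptation by how closely an individual's behavior matches that profile.

```python
import numpy as np

def canonical_profile(user_counts):
    """user_counts: (num_users, num_features) interaction counts per visualization feature."""
    profiles = user_counts / user_counts.sum(axis=1, keepdims=True)  # normalize per user
    return profiles.mean(axis=0)

def adaptation_level(user_count, canonical, floor=0.2):
    """Cosine similarity to the canonical profile, mapped to [floor, 1]:
    behavior far from the canonical model gets a lower level of adaptation."""
    p = user_count / user_count.sum()
    sim = float(p @ canonical / (np.linalg.norm(p) * np.linalg.norm(canonical)))
    return floor + (1.0 - floor) * max(sim, 0.0)

counts = np.array([[12, 3, 9, 0], [10, 4, 7, 1], [0, 25, 1, 30]], dtype=float)
canon = canonical_profile(counts)
print([round(adaptation_level(u, canon), 2) for u in counts])  # the outlier gets a lower level
```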
Item: Adjoint-Driven Importance Sampling in Light Transport Simulation (Charles University, Prague, 2017-06-26). Vorba, Jiří

Monte Carlo (MC) light transport simulation has recently been adopted by the movie industry as a standard tool for producing photo-realistic imagery. As the industry pushes current technologies to the very edge of their possibilities, the unprecedented complexity of rendered scenes has underlined a fundamental weakness of MC light transport simulation: slow convergence in the presence of indirect illumination. The culprit of this poor behaviour is that the sampling schemes used in state-of-the-art MC transport algorithms usually do not adapt to the conditions of the rendered scenes. We base our work on the observation that the vast amount of samples needed by these algorithms forms an abundant source of information that can be used to derive superior sampling strategies, tailored for a given scene.

In the first part of this thesis, we adapt general machine learning techniques to train directional distributions for biasing the scattering directions of camera paths towards incident illumination (radiance). Our approach allows progressive training from a stream of particles while maintaining a bounded memory footprint. This progressive nature makes the method robust even in scenarios where we have little information in the early stages of the training due to difficult visibility. The proposed method is not restricted to path tracing, where paths start at the camera, but can also be employed in light tracing or photon mapping, where paths are emitted from light sources, as well as in combined bidirectional methods. In the second part of this thesis we revisit Russian roulette and splitting, two variance reduction techniques that have been used in computer graphics for more than 25 years. So far, however, the path termination (Russian roulette) and splitting rates have been based only on local material properties in the scene, which can result in inefficient simulation in the presence of indirect illumination. In contrast, we base the termination and splitting rates on a pre-computed approximation of the adjoint quantity (i.e. radiance in the case of path tracing), which yields superior results to previous approaches. To increase the robustness of our method, we adopt the so-called weight window, a standard technique in neutron transport simulations. Both methods, that is, the biasing of scattering directions introduced in the first part of the thesis and the adjoint-driven Russian roulette and splitting, are based on a prior estimate of the adjoint quantity. Nevertheless, they constitute two complementary importance sampling strategies for transported light and, as we show, their combination yields superior results to each strategy alone. As one of our contributions, we present a theoretical analysis that provides insights into the importance sampling properties of our adjoint-driven Russian roulette and splitting, and also explains the synergic behaviour of the two strategies.
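The weight-window mechanic referenced above is simple to sketch. The version below is the standard formulation from particle transport, not the exact rule derived in the thesis: paths whose weight falls below the window are probabilistically terminated, paths above it are split; the window bounds in the example are arbitrary.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Returns a list of (zero, one, or several) path weights to continue with.
    Below the window: Russian roulette with survival weight at the window center.
    Above the window: split into copies whose weights fall back inside the window."""
    w_survive = 0.5 * (w_low + w_high)
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive (unbiased in expectation).
        return [w_survive] if rng() < weight / w_survive else []
    if weight > w_high:
        # Splitting: n copies, each carrying weight / n.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]

# Example: a low-weight path is usually killed, a high-weight path is split.
print(apply_weight_window(0.05, 0.5, 2.0))
print(apply_weight_window(7.3, 0.5, 2.0))
```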
Item: Advanced Editing Methods for Image and Video Sequences (Granados, 2013-09-10). Granados, Miguel

In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: first, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models of the content of the scene, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results surpassing the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.
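A minimal sketch of the exposure merging these methods build on follows: a plain noise-weighted HDR merge with a crude consistency test against a reference exposure, far simpler than the camera-noise-model and energy-minimization approach in the thesis. The linear camera response, the clipping bounds, and the tolerance are assumptions.

```python
import numpy as np

def merge_hdr(images, exposure_times, ref_index=0, tol=0.25):
    """images: list of HxW arrays (linear sensor values in [0, 1]), one per exposure.
    Merges radiance estimates, ignoring pixels inconsistent with the reference exposure
    (a crude stand-in for deghosting)."""
    ref_rad = images[ref_index] / exposure_times[ref_index]
    num = np.zeros_like(ref_rad)
    den = np.zeros_like(ref_rad)
    for img, t in zip(images, exposure_times):
        rad = img / t
        well_exposed = (img > 0.05) & (img < 0.95)           # skip clipped or very noisy pixels
        consistent = np.abs(rad - ref_rad) < tol * (ref_rad + 1e-6)
        w = np.where(well_exposed & consistent, t, 0.0)      # longer exposures carry less noise
        num += w * rad
        den += w
    return np.where(den > 0, num / np.maximum(den, 1e-12), ref_rad)
```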
Item: Advanced Methods for Relightable Scene Representations in Image Space (Fuchs, Martin, 2008-12-15). Fuchs, Martin

The realistic reproduction of the visual appearance of real-world objects requires accurate computer graphics models that describe the optical interaction of a scene with its surroundings. Data-driven approaches that model the scene globally as a reflectance field function in eight parameters deliver high quality and work for most material combinations, but are costly to acquire and store. Image-space relighting, which constrains the application to creating photos with a fixed virtual camera under freely chosen illumination, requires only a 4D data structure to provide full fidelity. This thesis contributes to image-space relighting in four respects: (1) We investigate the acquisition of 4D reflectance fields in the context of sampling, propose a practical setup for pre-filtering reflectance data during recording, and apply it in an adaptive sampling scheme. (2) We introduce a feature-driven image synthesis algorithm for the interpolation of coarsely sampled reflectance data in software to achieve highly realistic images. (3) We propose an implicit reflectance data representation, which uses a Bayesian approach to relight complex scenes from the example of much simpler reference objects. (4) Finally, we construct novel, passive devices out of optical components that render reflectance field data in real time, shaping the incident illumination into the desired image.
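For a fixed camera, image-space relighting with a 4D reflectance field reduces to a weighted sum of basis images recorded under basis lights. The sketch below shows that linear combination under assumed array shapes; it is generic linear relighting, not the adaptive sampling or implicit representations contributed by the thesis.

```python
import numpy as np

def relight(basis_images, light_coeffs):
    """basis_images: (L, H, W, 3) photos of the scene, one per basis light direction.
    light_coeffs: (L,) intensities of the novel illumination expressed in that basis.
    Returns the relit (H, W, 3) image: a per-pixel linear combination of the basis photos."""
    return np.tensordot(light_coeffs, basis_images, axes=1)

# Example: 64 basis lights, a novel illumination that mixes three of them.
L, H, W = 64, 120, 160
basis = np.random.rand(L, H, W, 3)
coeffs = np.zeros(L)
coeffs[[3, 17, 42]] = [0.8, 0.5, 0.2]
print(relight(basis, coeffs).shape)  # (120, 160, 3)
```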
Item: Advances on computational imaging, material appearance, and virtual reality (Universidad de Zaragoza, 2019-04-29). Serrano, Ana

Visual computing is a recently coined term that embraces many subfields in computer science related to the acquisition, analysis, or synthesis of visual data through the use of computer resources. What brings all these fields together is that they are all related to the visual aspects of computing and, more importantly, that during the last years they have started to share similar goals and methods. This thesis presents contributions in three different areas within the field of visual computing: computational imaging, material appearance, and virtual reality.

The first part of this thesis is devoted to computational imaging, and in particular to rich image and video acquisition. First, we deal with the capture of high dynamic range images in a single shot, where we propose a novel reconstruction algorithm based on sparse coding and reconstruction to recover the full range of luminances of the captured scene from a single coded low dynamic range image. Second, we focus on the temporal domain, where we propose to capture high speed videos via a novel reconstruction algorithm, again based on sparse coding, that allows recovering high speed video sequences from a single photograph with encoded temporal information. The second part attempts to address the long-standing problem of visual perception and editing of real-world materials. We propose an intuitive, perceptually based editing space for captured data. We derive a set of meaningful attributes for describing appearance, and we build a control space based on these attributes by means of a large-scale user study. Finally, we propose a series of applications for this space. One of these applications, to which we devote particular attention, is gamut mapping. The range of appearances displayable on a particular display or printer is called the gamut. Given a desired appearance that may lie outside of that gamut, the process of gamut mapping consists of making it displayable without excessively distorting the final perceived appearance. For this task, we make use of our previously derived perceptually based space to introduce visual perception into the mapping process, helping to minimize the perceived visual distortions that may arise. The third part is devoted to virtual reality. We first focus on the study of human gaze behavior in static omnistereo panoramas. We collect gaze samples and provide an analysis of this data, then propose a series of applications that make use of the derived insights. We then investigate more intricate behaviors in dynamic environments in a cinematographic context. We gather gaze data from viewers watching virtual reality videos containing different edits with varying parameters, and provide the first systematic analysis of viewers' behavior and the perception of continuity in virtual reality video. Finally, we propose a novel method for adding parallax for 360° video visualization in virtual reality headsets.

Item: Algorithms and Interfaces for Real-Time Deformation of 2D and 3D Shapes (Jacobson, 2013-05-01). Jacobson, Alec

This thesis investigates computer algorithms and user interfaces which assist in the process of deforming raster images, vector graphics, geometric models and animated characters in real time. Many recent works have focused on deformation quality, but often at the sacrifice of interactive performance. A goal of this thesis is to approach such high quality at a fraction of the cost. This is achieved by leveraging the geometric information implicitly contained in the input shape and the semantic information derived from user constraints. Existing methods also often require or assume a particular interface between their algorithm and the user. Another goal of this thesis is to design user interfaces that are not only ancillary to real-time deformation applications, but also empowering to the user, freeing maximal creativity and expressiveness. This thesis first deals with discretizing continuous Laplacian-based energies and equivalent partial differential equations. We approximate solutions to higher-order polyharmonic equations with piecewise-linear triangle meshes in a way that supports a variety of boundary conditions. This mathematical foundation permeates the subsequent chapters. We aim this energy-minimization framework at skinning weight computation for deforming shapes in real time using linear blend skinning (LBS). We add additional constraints that explicitly enforce boundedness and, later, monotonicity. We show that these properties and others are mandatory for intuitive response. Through the boundary conditions of our energy optimization and tetrahedral volume meshes, we can support all popular types of user control structures in 2D and 3D. We then consider the skeleton control structure specifically, and show that with small changes to LBS we can expand the space of deformations, allowing individual bones to stretch and twist without artifacts. We also allow the user to specify only a subset of the degrees of freedom of LBS, automatically choosing the rest by optimizing nonlinear elasticity energies within the LBS subspace. We carefully manage the complexity of this optimization so that real-time rates are undisturbed. In fact, we achieve unprecedented rates for nonlinear deformation. This optimization invites new control structures, too: shape-aware inverse kinematics and disconnected skeletons. All our algorithms in 3D work best on volume representations of solid shapes. To ensure their practical relevance, we design a method to segment inside from outside given a shape represented by a triangle surface mesh with artifacts such as open boundaries, non-manifold edges, multiple connected components and self-intersections. This brings a new level of robustness to the field of volumetric tetrahedral meshing. The resulting quiver of algorithms and interfaces will be useful in a wide range of applications, including interactive 3D modeling, 2D cartoon keyframing, detailed image editing, and animations for video games and crowd simulation.
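Linear blend skinning, the deformation model at the core of the work above, is compact enough to state directly: each vertex is transformed by a weight-blended combination of handle transformations. The sketch below is the standard LBS formula under assumed array conventions, not the bounded biharmonic weight computation itself.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """vertices: (V, 3) rest-pose positions.
    weights: (V, H) skinning weights, each row summing to one.
    transforms: (H, 3, 4) affine handle transformations [R | t].
    Returns deformed positions v_i' = sum_j w_ij * (R_j v_i + t_j)."""
    v_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)   # (V, 4) homogeneous
    per_handle = np.einsum('hij,vj->vhi', transforms, v_h)                  # (V, H, 3)
    return np.einsum('vh,vhi->vi', weights, per_handle)                     # blend per vertex

# Two handles: identity and a translation by (1, 0, 0); a vertex with equal weights moves halfway.
T = np.stack([np.hstack([np.eye(3), np.zeros((3, 1))]),
              np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])])
print(linear_blend_skinning(np.array([[0.0, 0.0, 0.0]]), np.array([[0.5, 0.5]]), T))  # [[0.5 0. 0.]]
```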
Item: Algorithms for 3D Isometric Shape Correspondence (Sahillioglu, 2012-08-01). Sahillioglu, Yusuf

There are many pairs of objects in the digital world that need to be related before performing any comparison, transfer, or analysis between them. Shape correspondence algorithms essentially address this problem by taking two shapes as input with the aim of finding a mapping that couples similar or semantically equivalent surface points of the given shapes. We focus on computing correspondences between some featured or all present points of two semantically similar 3D shapes whose surfaces overlap completely or partially up to isometric, i.e., distance-preserving, deformations and scaling. Put differently, our isometric shape correspondence algorithms handle several different cases of the shape correspondence problem that can be differentiated based on how similar the shape pairs are, whether they are partially overlapped, the resolution of the desired mapping, and so on. Although there exist methods that can, in most cases, satisfactorily establish 3D correspondences between two given shapes, these methods commonly suffer from certain drawbacks such as high computational load, incapability of establishing a correspondence which is partial and dense at the same time, approximation and embedding errors, and confusion of symmetrical parts of the shapes. While the existing methods constitute a solid foundation and a good starting point for the shape correspondence problem, our novel solutions designed for a given scenario achieve significant improvements as well as contributions. We specifically explore the 3D shape correspondence problem in two categories, complete and partial correspondence, where the former is categorized further according to the output resolution as coarse and dense correspondence. For complete correspondence at coarse resolution, after jointly sampling evenly-spaced feature vertices on the shapes, we formulate the problem as combinatorial optimization over the domain of all possible mappings between source and target features, which then reduces within a probabilistic framework to a log-likelihood maximization problem that we solve via the EM (Expectation Maximization) algorithm. Due to the computational limitations of this approach, we design a fast coarse-to-fine algorithm to achieve dense correspondence between all vertices of complete models, with specific care for the symmetric flip issue. Our scale normalization method, based on a novel scale-invariant isometric distortion measure, handles a particular and rather restricted setting of partial matching, whereas our rank-and-vote-and-combine (RAVAC) algorithm deals with the most general matching setting; both solutions produce correspondences that are partial and dense at the same time. In comparison with many state-of-the-art methods, our algorithms are tested on a variety of two-manifold meshes representing 3D shape models based on real and synthetic data.
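The notion of isometric distortion driving these algorithms can be illustrated with a small sketch: the plain average geodesic-distance discrepancy of a candidate correspondence, under assumed inputs, not the specific scale-invariant measure or the EM formulation of the thesis.

```python
import numpy as np

def isometric_distortion(d_source, d_target, correspondence):
    """d_source: (n, n) geodesic distances between sampled points on the source shape.
    d_target: (m, m) geodesic distances on the target shape.
    correspondence: length-n integer array; correspondence[i] is the target point matched to source i.
    Returns the average absolute discrepancy between matched pairwise distances:
    zero for a perfect isometry, larger as the map distorts geodesics."""
    mapped = d_target[np.ix_(correspondence, correspondence)]
    n = len(correspondence)
    off_diag = ~np.eye(n, dtype=bool)
    return float(np.mean(np.abs(d_source - mapped)[off_diag]))

# Toy example: three points on each shape; the identity map is a perfect isometry here.
d = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.5], [2.0, 1.5, 0.0]])
print(isometric_distortion(d, d, np.array([0, 1, 2])))  # 0.0
```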
Item: Algorithms for Data-Driven Geometric Stylization & Acceleration (University of Toronto, 2022-09-29). Liu, Hsueh-Ti Derek

In this thesis, we investigate computer algorithms for creating stylized 3D digital content and numerical tools for processing high-resolution geometric data. The thesis first addresses the problem of geometric stylization. Existing 3D content creation tools lack support for creating stylized 3D assets; they often require years of professional training and are tedious to use for creating complex geometries. One goal of this thesis is to address this difficulty by presenting a novel suite of easy-to-use stylization algorithms. This involves a differentiable rendering technique to generalize image filters to filter 3D objects, and a machine learning approach to renovate classic modeling operations. In addition, we address the problem by proposing an optimization framework for stylizing 3D shapes. We demonstrate how these new modeling tools can lower the difficulty of stylizing 3D geometric objects. The second part of the thesis focuses on scalability. Most geometric algorithms suffer from expensive computation costs when scaling up to high-resolution meshes. The computation bottleneck of these algorithms often lies in fundamental numerical operations, such as solving systems of linear equations. In this thesis, we present two directions for overcoming these challenges. We first show that it is possible to coarsen a geometry and enjoy the efficiency of working on the coarsened representation without sacrificing the quality of solutions. This is achieved by simplifying a mesh while preserving its spectral properties, such as the eigenvalues and eigenvectors of a differential operator. Instead of coarsening the domain, we also present a scalable geometric multigrid solver for curved surfaces. We show that this can serve as a drop-in replacement for existing linear solvers to accelerate several geometric applications, such as shape deformation and physics simulation. The resulting algorithms in this thesis can be used to develop data-driven 3D stylization tools for inexperienced users and to scale up existing geometry processing pipelines.
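The multigrid idea mentioned above can be sketched at its simplest as a two-level cycle: smooth on the fine system, correct on a coarsened system, smooth again. The sketch below is a generic two-grid solver under an assumed prolongation matrix P, not the surface-adapted hierarchy developed in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def jacobi(A, b, x, iters=3, omega=0.7):
    """Weighted Jacobi smoothing iterations."""
    Dinv = 1.0 / A.diagonal()
    for _ in range(iters):
        x = x + omega * Dinv * (b - A @ x)
    return x

def two_grid_solve(A, b, P, tol=1e-8, max_cycles=50):
    """Two-level multigrid for a sparse SPD system: pre-smooth, coarse correction, post-smooth.
    A: fine matrix, P: prolongation (fine x coarse)."""
    Ac = (P.T @ A @ P).tocsc()          # Galerkin coarse operator
    coarse = spla.factorized(Ac)        # direct solve on the small coarse system
    x = np.zeros_like(b)
    for _ in range(max_cycles):
        x = jacobi(A, b, x)             # pre-smoothing
        r = b - A @ x                   # fine-grid residual
        x = x + P @ coarse(P.T @ r)     # coarse-grid correction
        x = jacobi(A, b, x)             # post-smoothing
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# Example: 1D Poisson problem, coarsened by aggregating pairs of nodes.
n = 64
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
P = sp.kron(sp.eye(n // 2), np.array([[1.0], [1.0]]), format='csr')
x = two_grid_solve(A, np.ones(n), P)
print(np.linalg.norm(np.ones(n) - A @ x))
```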
Item: Algorithms for User-Guided Surface Mappings (ETH Zurich, 2015). Diamanti, Olga

Computing mappings between spaces is a very general problem that appears in various forms in geometry processing. Mappings can be used to provide descriptions or representations of shapes, or to place shapes in correspondence. Their applications range from surface modeling and analysis to shape matching, morphing, attribute transfer and deformation. This thesis addresses two particular mapping problems that are of interest in the field, namely inter-surface maps and parameterizations. We focus on methods that are suitable for user-guided applications; we do not consider automatic methods that leave no room for the user to control the result. Existing methods for the particular sub-problems that we study often either suffer from performance limitations or cannot guarantee that the produced results align with the user's intent; we improve upon the state of the art in both respects. The first problem we study in this thesis is that of inter-surface mapping with given sparse landmark point correspondences. We found that an efficient solution to this otherwise difficult topic emerges if one reformulates the mapping problem as a problem of finding affine combinations of points on the involved shapes. We extend the notion of standard Euclidean weighted averaging to 3D manifold shapes, and introduce a fast approximation that can be used to solve this problem much faster than the state of the art. We showcase applications of this approach in interactive attribute transfer between shapes. Next, we move on to the problem of surface parameterization. Here, we study the problem from the application point of view of surface remeshing: a popular way to generate a quadrilateral mesh for a given triangular mesh is to first compute a global parameterization, which is guided by a tangent vector field. This field then determines the directions of the quadrilateral edges on the output mesh. In order to design such a direction field, recent methods are based on integer optimization problems, which often suffer from slow performance and local minima. We reformulate the problem in a way that makes the field design problem linear. We also add more flexibility by allowing non-orthogonal directions. Still on the same problem of field-aligned surface parameterizations, we notice that the standard way of producing fields, namely an optimization focused only on field smoothness, does not necessarily guarantee that the resulting quadrilateral meshing will be what the user intended in terms of edge directions. This is due to errors introduced in the post-processing of the field, during the later stages of the remeshing pipeline. This renders such fields suboptimal for user-guided meshing applications. We extend our efficient reformulation of the field design problem to generate fields that are guaranteed not to introduce such further errors, and thus make sure that users obtain the expected results. Additionally, we allow users more flexible control by supporting the assignment of partial constraints for only some of the directions.

Item: Alves dos Santos, Luiz: Asymmetric and Adaptive Conference Systems for Enabling Computer-Supported Mobile Activities (Alves dos Santos, Luiz Manoel, 2003). Alves dos Santos, Luiz Manoel

This work was conducted at the Darmstadt University of Technology, essentially between 1998 and 2002. Before and during this period, I was working as a researcher at the INI-GraphicsNet, Darmstadt, first at the Zentrum für Graphische Datenverarbeitung e.V. and later at the Fraunhofer-Institut für Graphische Datenverarbeitung (IGD). This thesis addresses the investigations and results achieved during my work at these organizations. My initial development projects in the area of mobile computing were very challenging due to the immense constraints posed by the then incipient hardware and wireless network infrastructures, and similarly overwhelming due to the desire to employ those fascinating appliances by all means possible.
The endeavour to keep the respective application systems in a course of continuous improvement (i.e., with richer media presentation and "interactiveness"), and at the same astonishing pace as the technological evolution, was both demanding and rewarding; however, it turned out to be a questionable procedure. After several prototype demonstrations and observations, there came a turning point, following the acknowledgement that, for application cases involving user mobility, the supporting tool is appraised significantly on the basis of its adequacy for the usage conditions and its flexibility to adapt to changing requirements and to any platform specification or resource availability. The circumstances of mobile use (e.g., outdoors, on the move, in confined places) require new approaches in application system development and create a high demand for specialized, task-oriented system features. Any service being offered has to be able to account for, adjust itself to, and be responsive to the increasing and unpredictable diversity of prospective users and their usage environments. Achieving this attribute is even more challenging when the service should be a basis for a digitally mediated human-to-human communication process involving all kinds of diversity between the individual partners and technical arrangements. In this thesis, proposals and innovative solutions to these challenges have been investigated and implemented, and are presented in this report. Some contributions of this work are: an adaptive conference system for heterogeneous environments; tools to assess, distribute, and respond to User Profiles at both the individual and collective level; adaptive, flexible individual interaction modes and media that are nevertheless consistent for collaborative work; and mechanisms for remote awareness (of constraints) for structuring interaction. Above any technological advances, however, the major research challenge concerned the human factor and the achievement of an effective integration of a supporting tool into people's daily activities and lives.

Item: Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method (University of Genoa, Department of Mathematics, 2022-08-31). Sorgente, Tommaso

This thesis explores the concept of the quality of a mesh, the latter being intended as the discretization of a two- or three-dimensional domain. The topic is interdisciplinary in nature, as meshes are massively used in several fields from both the geometry processing and the numerical analysis communities. The goal is to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results in a target range of accuracy; in other words, a good quality mesh that is also cheap to handle, overcoming the typical trade-off between quality and computational cost. To reach this goal, we first need to answer the question: "How, and how much, does the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depend on the particular mesh adopted to model the problem? And which geometrical features of the mesh most influence the result?" We present a comparative study of the different mesh types, mesh generation techniques, and mesh quality measures currently available in the literature related to both engineering and computer graphics applications. This analysis leads to a precise definition of the notion of quality for a mesh, in the particular context of numerical simulations of partial differential equations with the virtual element method, and the consequent construction of criteria to determine and optimize the quality of a given mesh. Our main contribution is a new mesh quality indicator for polytopal meshes, able to predict the performance of the virtual element method over a particular mesh before running the simulation. Closely related to this, we also define a quality agglomeration algorithm that optimizes the quality of a mesh by wisely agglomerating groups of neighboring elements. The accuracy and reliability of both tools are thoroughly verified in a series of tests in different scenarios.
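As a small illustration of what a per-element quality measure can look like, the sketch below computes a standard isoperimetric-style regularity score for planar polygons under assumed conventions; the indicator developed in the thesis for the virtual element method is more involved.

```python
import numpy as np

def polygon_quality(vertices):
    """vertices: (n, 2) polygon vertices in counter-clockwise order.
    Returns 4*pi*area / perimeter^2, a scale-invariant regularity score in (0, 1]:
    close to 1 for round, regular elements, small for slivers and stretched elements."""
    v = np.asarray(vertices, dtype=float)
    nxt = np.roll(v, -1, axis=0)
    area = 0.5 * abs(np.sum(v[:, 0] * nxt[:, 1] - nxt[:, 0] * v[:, 1]))  # shoelace formula
    perimeter = np.sum(np.linalg.norm(nxt - v, axis=1))
    return 4.0 * np.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (1, 0), (1, 0.05), (0, 0.05)]
print(round(polygon_quality(square), 3), round(polygon_quality(sliver), 3))
```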
Item Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method(University of Genoa, Department of Mathematics, 2022-08-31) Sorgente, Tommaso
This thesis explores the concept of the quality of a mesh, the latter being intended as the discretization of a two- or three-dimensional domain. The topic is interdisciplinary in nature, as meshes are used massively in several fields by both the geometry processing and the numerical analysis communities. The goal is to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results in a target range of accuracy; in other words, a good-quality mesh that is also cheap to handle, overcoming the typical trade-off between quality and computational cost. To reach this goal, we first need to answer the question: "How, and how much, does the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depend on the particular mesh adopted to model the problem? And which geometrical features of the mesh most influence the result?" We present a comparative study of the different mesh types, mesh generation techniques, and mesh quality measures currently available in the literature for both engineering and computer graphics applications. This analysis leads to a precise definition of the notion of quality for a mesh, in the particular context of numerical simulations of partial differential equations with the virtual element method, and to the consequent construction of criteria to determine and optimize the quality of a given mesh. Our main contribution is a new mesh quality indicator for polytopal meshes, able to predict the performance of the virtual element method over a particular mesh before running the simulation. Strictly related to this, we also define a quality agglomeration algorithm that optimizes the quality of a mesh by wisely agglomerating groups of neighboring elements. The accuracy and reliability of both tools are thoroughly verified in a series of tests in different scenarios.
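The thesis-specific indicator targets the virtual element method and is not reproduced here. As a generic illustration of the per-element geometric quantities such indicators are built from, the sketch below scores a planar polygonal element by combining an isoperimetric (area-to-perimeter) term, normalized so a regular n-gon scores 1, with an edge-length uniformity term; the function name and weighting are assumptions made for this example.

import numpy as np

def polygon_quality(vertices):
    """vertices: (n, 2) array of polygon corners in counter-clockwise order.
    Returns a value in (0, 1]; higher means a more shape-regular element."""
    v = np.asarray(vertices, dtype=float)
    nxt = np.roll(v, -1, axis=0)
    lengths = np.linalg.norm(nxt - v, axis=1)
    perimeter = lengths.sum()
    # shoelace formula for the polygon area
    area = 0.5 * np.abs(np.sum(v[:, 0] * nxt[:, 1] - nxt[:, 0] * v[:, 1]))
    n = len(v)
    # area of the regular n-gon with the same perimeter (isoperimetric reference)
    regular_area = 0.25 * n * (perimeter / n) ** 2 / np.tan(np.pi / n)
    roundness = area / regular_area
    uniformity = lengths.min() / lengths.max()
    return roundness * uniformity

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (1, 0), (1, 0.05), (0, 0.05)]
print(polygon_quality(square), polygon_quality(sliver))  # the sliver scores far lower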
Item Analysis and Visualization of Industrial CT Data(Heinzl, Dec 2008) Heinzl, Christoph
Industrial X-ray 3D computed tomography (3DCT) is on the edge of advancing from a non-destructive testing method to a fully standardized means of dimensional measurement for everyday industrial use. Currently, 3DCT has drawn attention especially in the area of first-part inspections of new components, mainly in order to overcome limitations and drawbacks of common methods. An increasing number of companies is benefitting from industrial 3DCT, and the first pioneers have sporadically started using it for quality control in the production phase of a component. As 3DCT is still a very young technology of industrial quality control, the method also faces severe problems which seriously affect measurement results. Some of the major drawbacks for quality control are the following:
- Artefacts modify the spatial grey values, generating artificial structures in the datasets which do not correspond to reality.
- Discrete sampling introduces further irregularities due to the Nyquist-Shannon sampling theorem.
- Uncertainty information is missing when extracting dimensional measurement features.
- Specifications and limitations of the components and of the special setup of a 3DCT constrain the best achievable measurement precision.
This thesis contributes to the state of the art with algorithmic solutions to typical industrial tasks in the area of dimensional measurement using 3DCT. The main focus lies in the development and implementation of novel pipelines for everyday industrial use, including comparisons to common methods. Convenient and easy-to-understand means of visualization are evaluated and used to provide insight into the generated results. In particular, three pipelines are introduced which cover some of the major aspects of metrology using industrial 3DCT. The considered aspects are robust surface extraction, artefact reduction via dual-energy CT, local surface extraction of multi-material components, and statistical analysis of multi-material components. The generated results of each pipeline are demonstrated and verified using test specimens as well as real-world components.
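One of the pipelines concerns robust surface extraction from grey-value volumes. A common, much simpler baseline in CT metrology is the ISO-50% rule: place the surface at the grey value halfway between the background peak and the material peak of the volume histogram, and feed that iso-value to an iso-surface extractor such as marching cubes. The sketch below shows only this baseline with a deliberately crude two-peak detection (assuming a single material and an air background); the function name is hypothetical and it is not the locally adaptive technique developed in the thesis.

import numpy as np

def iso50_threshold(volume, bins=256):
    """volume: 3D array of CT grey values. Returns a surface iso-value."""
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    half = bins // 2
    # crude two-peak detection: background peak in the lower half of the grey range,
    # material peak in the upper half
    background_peak = centers[np.argmax(hist[:half])]
    material_peak = centers[half + np.argmax(hist[half:])]
    return 0.5 * (background_peak + material_peak)

# Example on a synthetic volume: noisy air around a denser cube.
vol = np.random.normal(0.0, 5.0, (64, 64, 64))
vol[16:48, 16:48, 16:48] += 1000.0
print(iso50_threshold(vol))  # roughly halfway between the two grey-value peaks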
Item Anatomical Modeling for Image Analysis in Cardiology(Zambal, Mar 2009) Zambal, Sebastian
The main cause of death in the western world is cardiovascular disease. Modern medical imaging modalities offer great possibilities for effective diagnosis of this kind of disease. In cardiology, the advent of computed tomography (CT) and magnetic resonance (MR) scanners with high temporal resolution has made imaging of the beating heart possible. Large amounts of data are acquired in everyday clinical practice, and intelligent software is required to analyze them optimally and support reliable and effective diagnosis. This thesis focuses on model-based approaches for automatic segmentation and extraction of clinically relevant properties from medical images in cardiology. Typical properties of interest are the volume of blood that is ejected per cardiac cycle (stroke volume, SV) or the mass of the heart muscle (myocardial mass). Compared to other segmentation and image processing algorithms, the investigated model-based approaches have the advantage that they exploit prior knowledge, which increases robustness. Throughout this thesis, models are discussed which consist of two important parts: shape and texture. Shape is modeled in order to restrict the geometric properties of the investigated anatomical structures. Texture, on the other hand, is used to describe gray values and plays an important role in matching the model to new, unseen images. Automatic initialization of model-based segmentation is important for many applications; for cardiac MR images, this thesis proposes a sequence of image processing steps which calculate an initial placement of a model. A special two-component model for segmentation of functional cardiac MR studies is presented, combining individual 2D Active Appearance Models with a 3D statistical shape model. An approach to effective texture modeling is introduced, in which an information-theoretic objective function is proposed for optimized probabilistic texture representation. Finally, a model-based coronary artery centerline extraction algorithm is presented. The results of this method were validated at a workshop at the international MICCAI conference, where, in a direct comparison, it outperformed four other automatic centerline extraction algorithms.
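The statistical shape component mentioned above is typically built from aligned training landmark sets via principal component analysis. The sketch below shows only that generic building block, with random stand-in data and hypothetical function names; the thesis's actual two-component model additionally couples such a shape model with 2D Active Appearance Models and texture, which is not reproduced here.

import numpy as np

def build_shape_model(shapes, keep=0.95):
    """shapes: (num_samples, num_landmarks * 3) pre-aligned landmark coordinates.
    Returns (mean shape, principal modes, per-mode standard deviations)."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), keep)) + 1  # keep ~95% variance
    return mean, Vt[:k], s[:k] / np.sqrt(max(len(X) - 1, 1))

def fit_shape(model, new_shape, clamp=3.0):
    """Project a new landmark set onto the model, clamping each mode to +/- clamp std."""
    mean, modes, stds = model
    b = modes @ (np.asarray(new_shape, dtype=float) - mean)
    b = np.clip(b, -clamp * stds, clamp * stds)
    return mean + modes.T @ b  # closest plausible shape under the learned statistics

# Example with random training shapes as a stand-in for aligned cardiac landmarks.
rng = np.random.default_rng(0)
training = rng.normal(size=(20, 30))          # 20 samples, 10 landmarks in 3D
model = build_shape_model(training)
reconstructed = fit_shape(model, rng.normal(size=30))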