Guided Interactive Volume Editing in Medicine
Date
2016-06-28
Abstract
Various medical imaging techniques, such as Computed Tomography, Magnetic
Resonance Imaging, and Ultrasound Imaging, are now gold standards in
the diagnosis of many diseases. The diagnostic process can be greatly
improved with the aid of automatic and interactive analysis tools, which, however, require certain
prerequisites in order to operate. Such analysis tools can, for example, be used for pathology
assessment, various standardized measurements, and treatment and operation planning. One of the
major requirements of such tools is a segmentation mask of the object-of-interest. However,
the segmentation of medical data remains error-prone. Physicians often have
to inspect and correct the segmentation results manually, as (semi-)automatic techniques do
not immediately deliver the required quality. Consequently, interactive segmentation editing is an
integral part of medical image processing and visualization.
In this thesis, we present three advanced segmentation-editing techniques. They focus
on simple interaction operations that allow the user to edit segmentation masks quickly and
effectively. These operations are based on a topology-aware representation that captures structural
features of the segmentation mask of the object-of-interest.
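As a rough, purely illustrative sketch (not the representation developed in the thesis), such structural features of a binary segmentation mask could, for instance, be derived from its connected components and enclosed cavities; the function below uses SciPy, and all names are hypothetical.

    # Illustrative sketch only: count simple topological features of a binary
    # 3D segmentation mask. The feature choice and names are assumptions and
    # do not reproduce the thesis' actual topology-aware representation.
    import numpy as np
    from scipy import ndimage

    def structural_features(mask: np.ndarray) -> dict:
        mask = mask.astype(bool)
        # Connected foreground components (possible fragments of the object).
        _, num_components = ndimage.label(mask)
        # Enclosed cavities: voxels that binary hole-filling would add.
        filled = ndimage.binary_fill_holes(mask)
        _, num_cavities = ndimage.label(filled & ~mask)
        return {"components": num_components, "cavities": num_cavities}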
Firstly, in order to streamline the correction process, we classify segmentation defects according
to underlying structural features and propose a correction procedure for each type of defect.
This relieves users of having to choose and apply the proper editing operations manually, although the
segmentation defects themselves still have to be located by the users.
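The mapping from defect type to correction procedure could look roughly like the following sketch, which assumes three simplified defect classes (internal cavity, disconnected fragment, noisy boundary) purely for illustration and does not reproduce the classification used in the thesis.

    # Illustrative dispatch of a correction operation per defect type; the
    # defect classes and the operations are simplified assumptions.
    import numpy as np
    from scipy import ndimage

    def correct(mask: np.ndarray, defect_type: str) -> np.ndarray:
        mask = mask.astype(bool)
        if defect_type == "internal_cavity":
            # Fill holes that are fully enclosed by the object.
            return ndimage.binary_fill_holes(mask)
        if defect_type == "disconnected_fragment":
            # Keep only the largest connected component.
            labels, n = ndimage.label(mask)
            if n <= 1:
                return mask
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            return labels == int(np.argmax(sizes)) + 1
        if defect_type == "noisy_boundary":
            # Smooth small protrusions on the surface.
            return ndimage.binary_opening(mask, iterations=2)
        raise ValueError(f"unknown defect type: {defect_type}")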
Secondly, we extend the basic editing process by detecting regions that potentially contain defects.
Based on the subsequently suggested correction scenarios, users can then immediately correct
a specific defect instead of having to search for defects manually beforehand. For each suggested
correction scenario, we automatically determine the corresponding region of the respective defect
in the segmentation mask and propose a suitable correction operation. In order to create the
correction scenarios, we detect dissimilarities within the data values of the mask and then classify
them according to the characteristics of a certain type of defect. Potential findings are presented
with a glyph-based visualization that allows users to interactively explore the suggested
correction scenarios on different levels-of-detail. As a consequence, our approach even offers
users the possibility of fine-tuning the chosen correction scenario instead of directly manipulating
the segmentation mask, which would be a time-consuming and cumbersome task.
Thirdly and finally, we guide users through the multitude of suggested correction scenarios of the
entire correction process. After statistically evaluating all suggested correction scenarios, we rank
them according to the significance of their dissimilarities, offering fine-grained editing capabilities at
a user-specified level-of-detail. Since we visually convey this ranking in a radial layout, users can
easily spot and select the most (or the least) dissimilar correction scenario, which improves the
segmentation mask the most towards the desired result.
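Such a ranking could, in a very simplified form, be computed as in the following sketch, which standardizes a per-scenario dissimilarity value and sorts the scenarios accordingly; the score and all names are illustrative assumptions rather than the statistics used in the thesis.

    # Illustrative ranking of correction scenarios by the significance of their
    # dissimilarities; the scoring is a simplified assumption.
    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class CorrectionScenario:
        name: str
        dissimilarity: float  # e.g. deviation of data values inside the defect region

    def rank_scenarios(scenarios):
        values = np.array([s.dissimilarity for s in scenarios], dtype=float)
        # Standardize, so the rank reflects how strongly a scenario deviates
        # from the average dissimilarity of all suggestions.
        z = (values - values.mean()) / (values.std() or 1.0)
        order = np.argsort(-z)  # most dissimilar (most significant) first
        return [scenarios[i] for i in order]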
All techniques proposed within this thesis have been evaluated by collaborating radiologists. We
assessed the usability, the interaction aspects, the accuracy of the results, and the time required
for the entire correction process. The outcome of this assessment showed that our guided volume
editing not only leads to acceptable segmentation results with only a few interaction steps, but
is also applicable to various application scenarios.