Browsing by Author "Albuquerque, Georgia"
Now showing 1 - 2 of 2
Item
Automatic Infant Face Verification via Convolutional Neural Networks (The Eurographics Association, 2018)
Authors: Wöhler, Leslie; Zhang, Hangjian; Albuquerque, Georgia; Magnor, Marcus
Editors: Beck, Fabian; Dachsbacher, Carsten; Sadlo, Filip
In this paper, we investigate how convolutional neural networks (CNNs) can learn to solve the verification task for faces of young children. One of the main issues of automatic face verification approaches is how to deal with facial changes resulting from aging. Since facial shape and features change drastically during early childhood, recognizing children can be challenging even for human observers. Therefore, we design CNNs that take two infant photographs as input and verify whether they belong to the same child. To specifically train our CNNs to recognize young children, we collect a new infant face dataset comprising 4,528 face images of 42 subjects in the age range of 0 to 6 years. Our results show a face verification accuracy of up to 85 percent on our dataset, with no overlapping subjects between the training and test data, and 72 percent on the FG-NET dataset for the age range of 0 to 4 years.

Item
Learning a Perceptual Quality Metric for Correlation in Scatterplots (The Eurographics Association, 2019)
Authors: Wöhler, Leslie; Zou, Yuxin; Mühlhausen, Moritz; Albuquerque, Georgia; Magnor, Marcus
Editors: Schulz, Hans-Jörg; Teschner, Matthias; Wimmer, Michael
Visual quality metrics describe the quality and efficiency of multidimensional data visualizations in order to guide data analysts during exploration tasks. Current metrics are usually based on empirical algorithms which do not accurately represent human perception and therefore often differ from the analysts' expectations. We propose a new perception-based quality metric, built with deep learning, that rates the correlation of data dimensions visualized by scatterplots. First, we created a dataset containing over 15,000 pairs of scatterplots with human annotations of the perceived correlation between the data dimensions. Afterwards, we trained two different convolutional neural networks (CNNs), one extracting features from scatterplot images and the other working directly on data vectors. We evaluated both CNNs on our test set and compared them to previous visual quality metrics. The experiments show that our new metric represents human perception more accurately than previous methods.
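The first item above describes a CNN that takes two infant photographs and decides whether they show the same child. As a rough illustration of that kind of two-input verification setup, here is a minimal Siamese-style sketch in PyTorch; the layer sizes, input resolution, and class names are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of a two-input (Siamese-style) verification CNN.
# Assumption: the paper's exact architecture is not given here; all sizes are illustrative.
import torch
import torch.nn as nn

class FaceEmbeddingNet(nn.Module):
    """Small CNN that maps a face image to an embedding vector."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))

class VerificationNet(nn.Module):
    """Takes two face images and predicts whether they show the same child."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.embed = FaceEmbeddingNet(embedding_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * embedding_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        # Shared weights: both photographs pass through the same embedding network.
        e_a, e_b = self.embed(img_a), self.embed(img_b)
        return self.classifier(torch.cat([e_a, e_b], dim=1))  # logit: same child?

# Usage: two batches of 3x64x64 face crops -> one verification logit per pair.
logits = VerificationNet()(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```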
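The second item describes two CNNs that rate perceived correlation in scatterplots, one from rendered images and one from raw data vectors. The sketch below shows only the image-based variant as a regression network trained against human annotations; again, the architecture and input size are assumptions for illustration, not the models evaluated in the paper.

```python
# Minimal sketch of an image-based correlation-perception regressor.
# Assumption: the paper's actual architectures and training details differ.
import torch
import torch.nn as nn

class ScatterplotCorrelationCNN(nn.Module):
    """Maps a rendered scatterplot image to a single perceived-correlation score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # One score per plot; training would regress this against human ratings.
        return self.head(self.features(img).flatten(1))

# Usage: a batch of grayscale 64x64 scatterplot renderings -> one score each.
scores = ScatterplotCorrelationCNN()(torch.randn(8, 1, 64, 64))
```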