A Deep Learned Method for Video Indexing and Retrieval
dc.contributor.author | Men, Xin | en_US |
dc.contributor.author | Zhou, Feng | en_US |
dc.contributor.author | Li, Xiaoyong | en_US |
dc.contributor.editor | Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes | en_US |
dc.date.accessioned | 2018-10-07T14:32:27Z | |
dc.date.available | 2018-10-07T14:32:27Z | |
dc.date.issued | 2018 | |
dc.description.abstract | In this paper, we propose a deep neural network based method for content-based video retrieval. Our approach leverages deep neural networks to generate semantic information and introduces a graph-based storage structure to build the video index. We devise the Inception-Single Shot Multibox Detector (ISSD) to extract spatial semantic information (objects) and the RI3D model to extract temporal semantic information (actions). Our ISSD model achieves a mAP of 26.7% on the MS COCO dataset, an increase of 3.2% over the original SSD model, while the RI3D model achieves a top-1 accuracy of 97.7% on the UCF-101 dataset. We then use the graph structure to build the video index from the temporal and spatial semantic information. Our experimental results show that the deep learned semantic information is highly effective for video indexing and retrieval. | en_US |
dc.description.sectionheaders | Visual Content Matching and Retrieval | |
dc.description.seriesinformation | Pacific Graphics Short Papers | |
dc.identifier.doi | 10.2312/pg.20181287 | |
dc.identifier.isbn | 978-3-03868-073-4 | |
dc.identifier.pages | 85-88 | |
dc.identifier.uri | https://doi.org/10.2312/pg.20181287 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/pg20181287 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computing methodologies | |
dc.subject | Visual content-based indexing and retrieval
dc.title | A Deep Learned Method for Video Indexing and Retrieval | en_US |
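A minimal sketch of the graph-based index idea described in the abstract, assuming hypothetical names (VideoIndex, add_segment) and using toy labels in place of the detections actually produced by the paper's ISSD and RI3D models: segments are stored as labelled nodes, consecutive segments are linked by temporal edges, and retrieval matches query terms against node labels.

from collections import defaultdict

class VideoIndex:
    """Toy graph-based video index: each node is a video segment labelled with
    detected objects (spatial semantics) and actions (temporal semantics);
    edges link consecutive segments of the same video."""

    def __init__(self):
        self.labels = {}               # segment id -> set of semantic labels
        self.edges = defaultdict(set)  # segment id -> adjacent segment ids

    def add_segment(self, seg_id, objects, actions, prev_seg=None):
        # Store the segment's combined object/action labels as a node.
        self.labels[seg_id] = set(objects) | set(actions)
        if prev_seg is not None:
            # Temporal edge to the previous segment of the same video.
            self.edges[prev_seg].add(seg_id)
            self.edges[seg_id].add(prev_seg)

    def retrieve(self, query_terms):
        """Return all segments whose labels contain every query term."""
        q = set(query_terms)
        return [seg for seg, labs in self.labels.items() if q <= labs]

# Usage: index two consecutive segments of one video, then query by labels.
idx = VideoIndex()
idx.add_segment("v1_s0", objects=["person", "bicycle"], actions=["riding"])
idx.add_segment("v1_s1", objects=["person", "dog"], actions=["walking"], prev_seg="v1_s0")
print(idx.retrieve(["person", "riding"]))   # -> ['v1_s0']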