Learning Structured Video Descriptions: Automated Video Knowledge Extraction for Video Understanding Tasks

Vision-to-language problems, such as video annotation or visual question answering, stand out from perceptual video understanding tasks (e.g., classification) through their cognitive nature and their tight connection to the field of natural language processing. While most current solutions to vision-to-language problems are inspired by machine translation methods, aiming to directly map visual features to text, several recent results on image and video understanding have demonstrated the importance of specifically and formally representing the semantic content of a visual scene before reasoning over it and mapping it to natural language. This paper proposes a deep learning solution to the problem of generating structured descriptions for videos and evaluates it on a dataset of formally annotated videos, which was automatically generated as part of this work. The recorded results confirm the potential of the solution, indicating that it describes the semantic content of a video scene with an accuracy similar to that of state-of-the-art natural language captioning models.

Keywords: Structured video captioning; Video understanding

Daniel Vasile, Thomas Lukasiewicz


Department of Computer Science, University of Oxford, Oxford, UK

International Conference on the Move to Meaningful Internet Systems; Conference on Cooperative Information Systems; Conference on Cloud and Trusted Computing; Conference on Ontologies, Databases, and Applications of Semantics

Valletta, Malta

On the Move to Meaningful Internet Systems: OTM 2018 Conferences

pp. 315-332

2018