Visual Relationship Detection with Contextual Information
1 School of Computer and Cyberspace Security, Communication University of China, Beijing, 100024, China.
2 Academy of Broadcasting Science, Beijing, 100866, China.
3 School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, 639798, Singapore.
* Corresponding Author: Yugang Li. Email: .
Computers, Materials & Continua 2020, 63(3), 1575-1589. https://doi.org/10.32604/cmc.2020.07451
Received 21 May 2019; Accepted 01 July 2019; Issue published 30 April 2020
Abstract
Understanding an image goes beyond recognizing and locating the objects in it; the relationships between objects are also very important for image understanding. Most previous methods have focused on making local predictions of individual relationships, but real-world relationships in images are often determined by the surrounding objects and other contextual information. In this work, we employ this insight to propose a novel framework for visual relationship detection. The core of the framework is a relationship inference network, a recurrent structure designed to combine the global contextual information of the objects to infer the relationships in the image. Experimental results on Stanford VRD and Visual Genome demonstrate that the proposed method achieves good performance in both efficiency and accuracy. Finally, we demonstrate the value of visual relationships on two computer vision tasks: image retrieval and scene graph generation.
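To make the idea of a recurrent, context-aware relationship classifier concrete, the following is a minimal sketch assuming PyTorch. The layer sizes, the choice of a GRU as the recurrent encoder, and the fusion of subject, object, and global context by concatenation are illustrative assumptions only, not the authors' exact architecture from the paper.

```python
# Minimal sketch: infer a predicate for a subject-object pair using a
# recurrent encoding of the global object context (assumed design).
import torch
import torch.nn as nn


class RelationshipInferenceNet(nn.Module):
    def __init__(self, obj_dim=512, ctx_dim=256, num_predicates=70):
        super().__init__()
        # Recurrent encoder that aggregates contextual information
        # over all detected objects in the image.
        self.context_rnn = nn.GRU(obj_dim, ctx_dim, batch_first=True)
        # Classifier over subject features, object features, and global context.
        self.classifier = nn.Sequential(
            nn.Linear(2 * obj_dim + ctx_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_predicates),
        )

    def forward(self, obj_feats, subj_idx, obj_idx):
        # obj_feats: (num_objects, obj_dim) features of all detected objects.
        _, h = self.context_rnn(obj_feats.unsqueeze(0))  # h: (1, 1, ctx_dim)
        context = h.squeeze(0).squeeze(0)                # global context vector
        pair = torch.cat([obj_feats[subj_idx], obj_feats[obj_idx], context])
        return self.classifier(pair)                     # predicate logits


if __name__ == "__main__":
    net = RelationshipInferenceNet()
    feats = torch.randn(5, 512)                 # features for 5 detected objects
    logits = net(feats, subj_idx=0, obj_idx=3)  # e.g., "person" -> "bike"
    print(logits.shape)                         # torch.Size([70])
```

In this sketch the recurrent state summarizes all objects in the image, so the predicate prediction for a pair is conditioned on the surrounding objects rather than on the pair alone, which is the intuition the abstract describes.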
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.