Open Access
ARTICLE
A Novel Feature Aggregation Approach for Image Retrieval Using Local and Global Features
1 Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou, 450001, China
2 China Mobile Group Henan Company Limited, Xinxiang, 453000, China
3 Department of Mechanical Engineering, MCKV Institute of Engineering, Howrah, 711204, India
4 Department of Logistics, University of Defence in Belgrade, Belgrade, 11000, Serbia
* Corresponding Authors: Junxia Ma; Zhifeng Zhang
(This article belongs to the Special Issue: Intelligent Computing for Engineering Applications)
Computer Modeling in Engineering & Sciences 2022, 131(1), 239-262. https://doi.org/10.32604/cmes.2022.016287
Received 23 February 2021; Accepted 13 October 2021; Issue published 24 January 2022
Abstract
Current retrieval methods based on deep convolutional features cannot fully exploit the characteristics of salient image regions, nor can they effectively suppress background noise, so retrieving objects in cluttered scenes remains a challenging task. To address this problem, we propose a new image retrieval method that employs a novel feature aggregation approach with an attention mechanism and combines local and global features. The method first extracts the global and local features of the input image and then selects keypoints from the local features using the attention mechanism. The feature aggregation mechanism then aggregates the keypoints into a compact vector representation according to the scores assigned by the attention mechanism; the core of the aggregation mechanism is to allow features with high scores to participate in the residual operations of all cluster centers. Finally, the improved image representation is obtained by fusing the aggregated feature descriptor with the global feature of the input image. To evaluate the proposed method, we carried out a series of experiments on large-scale image datasets and compared the results with other state-of-the-art methods. The experiments show that the method substantially improves both retrieval precision and computational efficiency.
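The aggregation step described in the abstract can be illustrated with a minimal NumPy sketch. It assumes a VLAD-style formulation: attention scores select the top-scoring local descriptors, each selected descriptor contributes attention-weighted residuals to all cluster centers via a soft assignment, and the result is fused with the global feature by concatenation. All function names, the soft-assignment choice, and the concatenation-based fusion are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def attention_aggregate(local_feats, scores, centers, top_k=32):
    """Aggregate attention-selected local descriptors into a compact
    VLAD-style vector (hypothetical sketch, not the paper's exact method)."""
    # keep the top_k descriptors with the highest attention scores
    idx = np.argsort(scores)[::-1][:top_k]
    feats, w = local_feats[idx], scores[idx]
    K, D = centers.shape
    # soft assignment of each selected feature to every cluster center
    sim = feats @ centers.T                              # (top_k, K)
    e = np.exp(sim - sim.max(axis=1, keepdims=True))     # stable softmax
    assign = e / e.sum(axis=1, keepdims=True)
    # high-score features contribute residuals to ALL centers,
    # weighted by both the soft assignment and the attention score
    vlad = np.zeros((K, D))
    for k in range(K):
        resid = feats - centers[k]                       # residuals to center k
        vlad[k] = (w[:, None] * assign[:, k:k + 1] * resid).sum(axis=0)
    # intra-normalize per center, then L2-normalize the flattened vector
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12
    v = vlad.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def fuse(aggregated, global_feat):
    """Fuse the aggregated local descriptor with the global feature by
    concatenation plus L2 normalization (one plausible fusion scheme)."""
    v = np.concatenate([aggregated, global_feat])
    return v / (np.linalg.norm(v) + 1e-12)
```

The resulting fused vector can be compared across images with a dot product, since both stages end with L2 normalization.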
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.