Open Access

ARTICLE

DIEONet: Domain-Invariant Information Extraction and Optimization Network for Visual Place Recognition

Shaoqi Hou1,2,3,*, Zebang Qin2, Chenyu Wu2, Guangqiang Yin2, Xinzhong Wang1, Zhiguo Wang2,*

1 School of Computer Science and Technology, Xinjiang University, Urumqi, 830046, China
2 School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
3 Institute of Public Security, Kash Institute of Electronics and Information Industry, Kashi, 844000, China

* Corresponding Authors: Shaoqi Hou. Email: email; Zhiguo Wang. Email: email

Computers, Materials & Continua 2025, 82(3), 5019-5033. https://doi.org/10.32604/cmc.2025.058233

Abstract

Visual Place Recognition (VPR) technology aims to use visual information to determine the location of an agent, playing an irreplaceable role in tasks such as loop closure detection and relocalization. Previous VPR algorithms have emphasized the extraction and integration of general image features while neglecting the mining of the salient features that are decisive for discrimination in VPR tasks. To this end, this paper proposes a Domain-invariant Information Extraction and Optimization Network (DIEONet) for VPR. The core of the algorithm is a newly designed Domain-invariant Information Mining Module (DIMM) and a Multi-sample Joint Triplet Loss (MJT Loss). Specifically, DIMM models the interdependence between different spatial regions of the feature map within a cascaded convolutional unit group, which strengthens the model's attention to domain-invariant static object classes. MJT Loss introduces a "joint processing of multiple samples" mechanism into the original triplet loss and adds a new distance constraint term between positive and negative samples, so that the model avoids falling into local optima during training. We demonstrate the effectiveness of our algorithm through extensive experiments on several authoritative benchmarks. In particular, the proposed method achieves the best performance on the TokyoTM dataset, with a Recall@1 of 92.89%.
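The abstract's description of MJT Loss, joint processing of multiple samples plus an added distance constraint between positive and negative samples, can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so the squared-Euclidean distance, the margin values, and the hinge form of the positive-negative constraint are assumptions chosen for illustration only:

```python
import numpy as np

def mjt_loss(anchor, positives, negatives, margin=0.1, pn_margin=0.1):
    """Hedged sketch of a Multi-sample Joint Triplet (MJT) loss.

    Per the abstract, MJT Loss (i) processes multiple positive and negative
    samples jointly rather than one triplet at a time and (ii) adds a
    distance constraint between positive and negative samples. The exact
    formulation is not specified in the abstract; this sketch assumes
    squared-Euclidean distances and a hinge over every pairwise combination.
    anchor: (D,) embedding; positives: (P, D); negatives: (N, D).
    """
    def d(x, y):
        return np.sum((x - y) ** 2, axis=-1)

    d_ap = d(anchor[None, :], positives)   # (P,) anchor-positive distances
    d_an = d(anchor[None, :], negatives)   # (N,) anchor-negative distances

    # Joint triplet term over all P x N positive/negative combinations,
    # instead of a single (anchor, positive, negative) triplet.
    triplet = np.maximum(d_ap[:, None] - d_an[None, :] + margin, 0.0)

    # Assumed extra constraint term: directly keep positives at least
    # pn_margin away from negatives in embedding space.
    d_pn = d(positives[:, None, :], negatives[None, :, :])  # (P, N)
    pn_term = np.maximum(pn_margin - d_pn, 0.0)

    return triplet.mean() + pn_term.mean()
```

Under this sketch, a well-separated configuration (positives near the anchor, negatives far from both) incurs zero loss, while a confused configuration is penalized by both terms jointly.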

Keywords


Cite This Article

APA Style
Hou, S., Qin, Z., Wu, C., Yin, G., Wang, X., & Wang, Z. (2025). DIEONet: Domain-invariant information extraction and optimization network for visual place recognition. Computers, Materials & Continua, 82(3), 5019–5033. https://doi.org/10.32604/cmc.2025.058233
Vancouver Style
Hou S, Qin Z, Wu C, Yin G, Wang X, Wang Z. DIEONet: domain-invariant information extraction and optimization network for visual place recognition. Comput Mater Contin. 2025;82(3):5019–5033. https://doi.org/10.32604/cmc.2025.058233
IEEE Style
S. Hou, Z. Qin, C. Wu, G. Yin, X. Wang, and Z. Wang, “DIEONet: Domain-Invariant Information Extraction and Optimization Network for Visual Place Recognition,” Comput. Mater. Contin., vol. 82, no. 3, pp. 5019–5033, 2025. https://doi.org/10.32604/cmc.2025.058233



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.