Open Access

ARTICLE

Comparison of Local Descriptors for Humanoid Robots Localization Using a Visual Bag of Words Approach

Noé G. Aldana-Murillo, Jean-Bernard Hayet, Héctor M. Becerra

Computer Science Department, Centro de Investigación en Matemáticas (CIMAT), Guanajuato, Gto., México

* Corresponding Author: Noé G. Aldana-Murillo

Intelligent Automation & Soft Computing 2018, 24(3), 471-481. https://doi.org/10.1080/10798587.2017.1304508

Abstract

In this paper, we address the problem of the appearance-based localization of a humanoid robot, in the context of robot navigation. We use only the information obtained by a single sensor, in this case the camera mounted on the robot. We aim at determining the most similar image to the current view of the robot's monocular camera within a previously acquired set of key images (also referred to as a visual memory). The robot is initially kidnapped, and the current image has to be compared against the visual memory. To solve this problem, we rely on a hierarchical visual bag-of-words approach. The contribution of this paper is twofold: (1) we compare binary, floating-point and color descriptors, which feed the bag-of-words representation, using images captured by a humanoid robot; (2) a specific visual vocabulary is proposed to deal with the typical issues generated by humanoid locomotion.
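To make the retrieval idea in the abstract concrete, the following is a minimal, self-contained sketch of bag-of-words image matching against a visual memory. It is not the authors' hierarchical implementation: it assumes a flat, pre-built vocabulary of synthetic binary descriptors (stand-ins for ORB/BRIEF-style features), quantizes each descriptor to its nearest visual word by Hamming distance, and ranks key images by cosine similarity of their word histograms. All names and data here are illustrative.

```python
import random
from collections import Counter
from math import sqrt

random.seed(0)
D = 32  # bits per synthetic binary descriptor (ORB/BRIEF-style stand-in)

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def quantize(desc, vocabulary):
    """Index of the nearest visual word for one descriptor."""
    return min(range(len(vocabulary)), key=lambda i: hamming(desc, vocabulary[i]))

def bow_histogram(descriptors, vocabulary):
    """Normalized bag-of-words histogram for one image's descriptors."""
    counts = Counter(quantize(d, vocabulary) for d in descriptors)
    n = sum(counts.values())
    return [counts.get(i, 0) / n for i in range(len(vocabulary))]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def localize(query_descriptors, memory, vocabulary):
    """Return the index of the key image most similar to the query image."""
    q = bow_histogram(query_descriptors, vocabulary)
    scores = [cosine(q, bow_histogram(img, vocabulary)) for img in memory]
    return max(range(len(scores)), key=scores.__getitem__)

# Synthetic setup: a 16-word vocabulary, a visual memory of 5 key images
# with 50 descriptors each, and a query that is a lightly corrupted copy
# of key image 3 (one bit flipped in a few descriptors).
vocabulary = [random.getrandbits(D) for _ in range(16)]
memory = [[random.getrandbits(D) for _ in range(50)] for _ in range(5)]
query = [d ^ 1 if i < 5 else d for i, d in enumerate(memory[3])]

print(localize(query, memory, vocabulary))  # should recover key image 3
```

In a real pipeline the vocabulary is learned offline (e.g. by clustering descriptors from training images, hierarchically in the paper's approach), and descriptors come from a feature detector rather than a random generator; the quantize/histogram/score structure is the same.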

Cite This Article

N. G. Aldana-Murillo, J.-B. Hayet and H. M. Becerra, "Comparison of local descriptors for humanoid robots localization using a visual bag of words approach," Intelligent Automation & Soft Computing, vol. 24, no. 3, pp. 471–481, 2018. https://doi.org/10.1080/10798587.2017.1304508



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.