Open Access

ARTICLE


MFF-Net: Multimodal Feature Fusion Network for 3D Object Detection

Peicheng Shi1,*, Zhiqiang Liu1, Heng Qi1, Aixi Yang2

1 School of Mechanical Engineering, Anhui Polytechnic University, Wuhu, 241000, Anhui Province, China
2 School of Mechanical, Polytechnic Institute of Zhejiang University, Hangzhou, 310000, Zhejiang Province, China

* Corresponding Author: Peicheng Shi.

Computers, Materials & Continua 2023, 75(3), 5615-5637. https://doi.org/10.32604/cmc.2023.037794

Abstract

In complex traffic environments, it is critical for autonomous vehicles to accurately perceive, in advance, the dynamic state of surrounding vehicles. The accuracy of 3D object detection is affected by illumination changes, object occlusion, and detection distance. To address these challenges, we propose a multimodal feature fusion network for 3D object detection (MFF-Net). First, a spatial transformation projection algorithm maps image features into the point cloud feature space, so that image and point cloud features share the same spatial dimensions when fused. Second, an adaptive expression augmentation fusion network weights the feature channels to enhance important features, suppress useless ones, and strengthen the network's attention to informative features. Finally, an additional one-dimensional threshold in the non-maximum suppression algorithm reduces the probability of false and missed detections. Together, these components form a complete 3D object detection network based on multimodal feature fusion. Experimental results show that the proposed network achieves an average precision of 82.60% on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, outperforming previous state-of-the-art multimodal fusion networks. On the Easy, Moderate, and Hard evaluation settings, it reaches 90.96%, 81.46%, and 75.39%, respectively. These results show that MFF-Net performs well in 3D object detection.
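The channel-weighting step described above can be illustrated with a minimal squeeze-and-excitation-style sketch. This is an assumption about the general mechanism (global pooling, a bottleneck gating MLP, and per-channel rescaling), not the paper's exact architecture; the function name `channel_weighted_fusion` and the weight matrices `w1`/`w2` are hypothetical, and the image features are assumed to already be projected into the point cloud's spatial frame:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_weighted_fusion(img_feat, pc_feat, w1, w2):
    """Fuse two (C, H, W) feature maps with per-channel attention weights.

    w1: (2C/r, 2C) and w2: (2C, 2C/r) are the bottleneck weights of the
    gating MLP, with reduction ratio r.
    """
    # Concatenate the two modalities along the channel axis: (2C, H, W)
    fused = np.concatenate([img_feat, pc_feat], axis=0)
    # Squeeze: global average pooling yields one descriptor per channel
    squeeze = fused.mean(axis=(1, 2))                        # shape (2C,)
    # Excite: bottleneck MLP + sigmoid gives channel weights in (0, 1)
    weights = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))    # shape (2C,)
    # Reweight: scale each channel, amplifying informative features
    return fused * weights[:, None, None]
```

Because the sigmoid gate lies in (0, 1), each channel is attenuated in proportion to its estimated importance; in training, the gating weights would be learned jointly with the detector.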




Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.