Open Access
ARTICLE
HGG-CNN: The Generation of the Optimal Robotic Grasp Pose Based on Vision
1 Shanghai Maritime University, Shanghai, China
2 HZ University of Applied Sciences, Vlissingen, Zeeland, Netherlands
* Corresponding Author: Shiyin Qiu. Email:
Intelligent Automation & Soft Computing 2020, 26(6), 1517-1529. https://doi.org/10.32604/iasc.2020.012144
Received 16 June 2020; Accepted 23 August 2020; Issue published 24 December 2020
Abstract
Robotic grasping is an important problem in robot control. To determine the optimal grasping pose for a robotic arm, a new convolutional neural network, the Hybrid Generative Grasping Convolutional Neural Network (HGG-CNN), is proposed. It builds on the Generative Grasping Convolutional Neural Network (GG-CNN) by combining three small network structures: Inception Block, Dense Block and SELayer. This architecture improves grasping-pose accuracy over the original GG-CNN, thereby raising the grasping success rate. In addition, HGG-CNN overcomes the original GG-CNN's recognition rate of less than 70% on complex, man-made irregular objects. In experiments, HGG-CNN improves the average grasping-pose accuracy of the original GG-CNN from 83.83% to 92.48%. For irregular objects with complex man-made shapes, such as spoons, the grasping-pose recognition rate also increases from 21.38% to 55.33%.
Keywords
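The SELayer named in the abstract refers to a squeeze-and-excitation channel-attention block. As an illustration only (this is not the authors' implementation; the weights `w1`/`w2` and the reduction ratio implied by their shapes are hypothetical), a minimal NumPy sketch of the idea is:

```python
import numpy as np

def se_layer(x, w1, w2):
    """Squeeze-and-excitation sketch.

    x  : (C, H, W) feature map
    w1 : (C // r, C) excitation weights (r = reduction ratio)
    w2 : (C, C // r) excitation weights
    """
    z = x.mean(axis=(1, 2))            # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)        # excitation: FC + ReLU -> (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # excitation: FC + sigmoid -> (C,)
    return x * s[:, None, None]        # channel-wise rescaling of the input

# Example usage with random weights (reduction ratio 4 on 8 channels):
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
y = se_layer(x, w1, w2)   # same shape as x, each channel scaled by a factor in (0, 1)
```

Because the sigmoid gate lies strictly between 0 and 1, the layer can only attenuate or preserve channels, which is what lets the network re-weight feature maps by their global importance.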
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.