Submission Deadline: 15 April 2021 (closed)
In the area of computer vision, human action recognition, gait recognition, and gesture recognition (HARGRGR) have been important research topics over the last decade. The most prominent application of HARGRGR is video surveillance. As imaging techniques improve and camera devices become more widely available, novel approaches for HAR continue to emerge. Nowadays, camera networks capture large volumes of video of human activities, and from these activities it becomes possible to predict a person's future actions. For this purpose, computer vision researchers have proposed many automated systems based on machine learning algorithms. However, the question remains: how can these systems handle such a large number of videos, and how can they remove redundant or irrelevant information in order to monitor the activities of interest? More recently, deep learning has achieved great success within machine learning by handling large amounts of data with higher accuracy than classical techniques. Deep learning can be especially useful for HARGRGR, since it requires a large amount of training data and these tasks supply such data in abundance.
Sometimes deep learning models are trained on complex imaging datasets, and because of this complexity the required accuracy cannot be achieved. One remedy is to fuse two or more deep neural networks (layer information, features, etc.). The question then is how the fusion process affects the system's computational time; this problem can be addressed by employing feature reduction techniques.
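As a rough illustration of this fusion-plus-reduction idea, the sketch below concatenates feature vectors taken from two hypothetical network backbones and then applies PCA as one possible reduction step. The array shapes, the random placeholder features, and the choice of PCA are assumptions for illustration only, not methods prescribed by this call.

```python
# Minimal sketch: feature-level fusion of two deep networks, followed by
# feature reduction to limit the downstream computational cost.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical deep features extracted from two backbones for 500 video clips
# (in practice these would come from, e.g., the last pooling layers).
features_net_a = rng.normal(size=(500, 2048))
features_net_b = rng.normal(size=(500, 1024))

# Fusion by serial concatenation: 2048 + 1024 = 3072-dimensional vectors.
fused = np.concatenate([features_net_a, features_net_b], axis=1)

# Feature reduction (here PCA) shrinks the fused representation, and with it
# the classifier's training and inference time, while retaining most variance.
reducer = PCA(n_components=256)
reduced = reducer.fit_transform(fused)

print(fused.shape, "->", reduced.shape)  # (500, 3072) -> (500, 256)
```

PCA is used here purely as a stand-in; submissions to the issue may equally employ feature selection or other reduction techniques in its place.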
This special issue aims to gather achievements in deep learning, information fusion, and feature selection in the fields of action recognition, gait recognition, and gesture recognition.