Vol.63, No.1, 2020, pp.243-262, doi:10.32604/cmc.2020.06898
OPEN ACCESS
ARTICLE
Human Action Recognition Based on Supervised Class-Specific Dictionary Learning with Deep Convolutional Neural Network Features
Binjie Gu1,*, Weili Xiong1, Zhonghu Bai2
1 Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Jiangnan University, Wuxi, China.
2 National Engineering Laboratory for Cereal Fermentation Technology, Jiangnan University, Wuxi, China.
* Corresponding Author: Binjie Gu. Email: gubinjie1980@jiangnan.edu.cn.
Received 09 April 2019; Accepted 13 June 2019; Issue published 30 March 2020
Abstract
Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results on human action recognition problems under various conditions. The main idea of sparse representation classification is to build a general classification scheme in which the training samples of each class serve as a dictionary to express a query sample, and the class yielding the minimal reconstruction error determines the predicted label. However, learning a discriminative dictionary remains difficult. In this work, we make two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model that consists of a representation-constrained term and a coefficient incoherence term. Experimental results on benchmark datasets show that our modified model obtains competitive results in comparison with other state-of-the-art models.
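The classification idea stated in the abstract — each class's training samples act as a dictionary, and the class with the minimal reconstruction error wins — can be illustrated with a minimal sketch. The snippet below is a simplified (least-squares rather than sparse) illustration of that reconstruction-error rule, not the paper's actual model; the function name, data layout, and toy dictionaries are hypothetical.

```python
import numpy as np

def classify_by_reconstruction(query, class_dictionaries):
    """Assign the query to the class whose dictionary reconstructs it best.

    Simplified sketch of the reconstruction-error rule: each class's
    training samples form the columns of a dictionary matrix, and the
    class with the minimal reconstruction residual is predicted.
    class_dictionaries maps label -> (d, n_samples) array (hypothetical
    layout; the paper's model additionally enforces sparsity and
    coefficient incoherence).
    """
    errors = {}
    for label, D in class_dictionaries.items():
        # least-squares coefficients expressing the query over D
        coef, *_ = np.linalg.lstsq(D, query, rcond=None)
        # residual norm = reconstruction error for this class
        errors[label] = np.linalg.norm(query - D @ coef)
    return min(errors, key=errors.get)

# Toy example in R^3: two classes, two training samples each.
D0 = np.array([[1.0, 0.9],
               [0.0, 0.1],
               [0.0, 0.0]])          # class 0 samples (columns)
D1 = np.array([[0.0, 0.0],
               [1.0, 0.9],
               [0.0, 0.1]])          # class 1 samples (columns)
q = np.array([0.95, 0.05, 0.0])      # query lying in class 0's span
print(classify_by_reconstruction(q, {0: D0, 1: D1}))  # → 0
```

In the full framework described in the abstract, the query would be a deep CNN feature vector and the dictionaries would be learned with the representation-constrained and coefficient-incoherence terms rather than taken directly from raw samples.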
Keywords
Action recognition, deep CNN features, sparse model, supervised dictionary learning.
Cite This Article
Gu, B., Xiong, W., Bai, Z. (2020). Human Action Recognition Based on Supervised Class-Specific Dictionary Learning with Deep Convolutional Neural Network Features. CMC-Computers, Materials & Continua, 63(1), 243–262.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.