Open Access
ARTICLE
A Modified Method for Scene Text Detection by ResNet
1 School of Computer Science and Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China.
2 College of Arts and Sciences, Boston University, Boston, MA 02215, USA.
* Corresponding Author: Shaozhang Niu. Email: .
Computers, Materials & Continua 2020, 65(3), 2233-2245. https://doi.org/10.32604/cmc.2020.09471
Received 18 December 2019; Accepted 21 June 2020; Issue published 16 September 2020
Abstract
In recent years, images have played an increasingly important role in daily life and social communication. The textual information contained in an image is often an important cue for understanding the content of the scene itself: the more accurately text in natural scenes is detected, the more accurate the semantic understanding of the image will be. Scene text detection has therefore become a hot topic in computer vision. In this paper, we present a modified text detection network based on further study and improvement of the Connectionist Text Proposal Network (CTPN) proposed by previous researchers. To extract deeper features that are less affected by variation across images, we replace the Visual Geometry Group network (VGGNet) used in the original architecture with a Residual Network (ResNet). To improve the robustness of the model across multiple languages, we train on the multi-lingual scene text detection and script identification datasets (MLT) from the 2017 International Conference on Document Analysis and Recognition (ICDAR2017). In addition, an attention mechanism is used to obtain a more reasonable weight distribution. The proposed model achieves an F1-score of 0.91 on the ICDAR2011 test set, about 5% better than CTPN trained on the same data.
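To make the backbone swap concrete, the following is a minimal PyTorch sketch, not the authors' released code, of replacing the VGG16 feature extractor in a CTPN-style pipeline with a ResNet, combined with a channel-attention block. The specific choices here (resnet50, cutting at the conv4 stage, the 512-channel projection, and a squeeze-and-excitation style attention) are illustrative assumptions, since the abstract does not fix these details.

import torch
import torch.nn as nn
import torchvision.models as models


class ChannelAttention(nn.Module):
    """SE-style channel attention; the paper's exact attention design is
    not specified in the abstract, so this block is an assumption."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)                            # reweight the feature maps


class ResNetBackbone(nn.Module):
    """ResNet feature extractor standing in for VGG16 in a CTPN-style detector."""

    def __init__(self):
        super().__init__()
        resnet = models.resnet50(weights=None)  # load pretrained weights in practice
        # Keep stages through conv4_x: overall stride 16, matching VGG16's conv5.
        self.features = nn.Sequential(
            resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
            resnet.layer1, resnet.layer2, resnet.layer3,
        )
        self.attention = ChannelAttention(1024)
        # Project ResNet's 1024 channels down to the 512 that CTPN's
        # sliding-window + BLSTM head expects.
        self.reduce = nn.Conv2d(1024, 512, kernel_size=1)

    def forward(self, x):
        return self.reduce(self.attention(self.features(x)))


if __name__ == "__main__":
    net = ResNetBackbone().eval()
    with torch.no_grad():
        feat = net(torch.randn(1, 3, 600, 900))
    print(feat.shape)  # torch.Size([1, 512, 38, 57])

Because the output keeps CTPN's expected 512 channels and stride-16 resolution, the rest of a CTPN-style head (sliding window, BLSTM, and anchor regression) could consume this feature map unchanged.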
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.