Open Access

ARTICLE


An Enhanced Hybrid Model Based on CNN and BiLSTM for Identifying Individuals via Handwriting Analysis

Md. Abdur Rahim1, Fahmid Al Farid2, Abu Saleh Musa Miah3, Arpa Kar Puza1, Md. Nur Alam4, Md. Najmul Hossain5, Sarina Mansor2, Hezerul Abdul Karim2,6,*

1 Department of Computer Science and Engineering, Pabna University of Science and Technology, Pabna, 6600, Bangladesh
2 Faculty of Engineering, Multimedia University, Cyberjaya, 63100, Malaysia
3 Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur, 5311, Bangladesh
4 Department of Mathematics, Pabna University of Science and Technology, Pabna, 6600, Bangladesh
5 Department of Electrical, Electronic and Communication Engineering, Pabna University of Science and Technology, Pabna, 6600, Bangladesh
6 Department of Electrical and Communication Engineering, Pabna University of Science and Technology, Pabna, 6600, Bangladesh

* Corresponding Author: Hezerul Abdul Karim.

(This article belongs to the Special Issue: Artificial Intelligence Emerging Trends and Sustainable Applications in Image Processing and Computer Vision)

Computer Modeling in Engineering & Sciences 2024, 140(2), 1689-1710. https://doi.org/10.32604/cmes.2024.048714

Abstract

Handwriting is a unique and significant human feature that distinguishes individuals from one another. Many researchers have endeavored to develop writing recognition systems that identify people through verification of specific signatures or symbols. However, such systems are susceptible to forgery, posing security risks. In response to these challenges, we propose an innovative hybrid technique for individual identification based on independent handwriting, eliminating the reliance on specific signatures or symbols. Our method encompasses five distinct phases: data collection, preprocessing, feature extraction, significant feature selection, and classification. One key advancement lies in the creation of a novel dataset specifically tailored for Bengali handwriting (BHW), setting the foundation for our comprehensive approach. After preprocessing, we performed an exhaustive feature extraction process integrating kinematic, statistical, spatial, and composite features, resulting in a robust set of 91 features. To enhance the efficiency of our system, we selected the most pertinent features using an analysis of variance (ANOVA) F test and mutual information scores. In the identification phase, we harnessed cutting-edge deep learning models, notably the Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM), which were rigorously trained and tested to discern individuals based on their handwriting characteristics. Moreover, our methodology introduces a hybrid model that synergizes CNN and BiLSTM, capitalizing on fine motor features for enhanced individual classification. Crucially, our experimental results underscore the superiority of our approach: the CNN, BiLSTM, and hybrid models outperformed prevailing state-of-the-art techniques in individual classification. This validates our method's efficacy and underscores its potential, marking a significant stride forward in individual identification through handwriting analysis.

Keywords


1  Introduction

In recent years, the study of handwriting has attracted interest from various fields, including biometrics [1], personality traits [2], medical areas [3], and symbols representing multiple languages. Handwriting has significant measurable properties that can be used to describe or identify writers, and numerous studies have examined it as a biometric trait. Handwriting is also used to authenticate individuals in settings such as banks, private companies, and institutions. In banking transactions, each person's signature, a specific name or symbol, is checked [4], and most institutions and private companies verify identity in the same way. However, a single word, title, or symbol poses a security threat, as unscrupulous individuals are likely to steal and copy it. In addition, handwriting is the most common form of behavioral biometrics employed in forensic science [5], where identifying fake handwritten documents is a challenging problem. The question therefore remains, "Can individuals be identified based on their unique writing characteristics?" This study analyzes various characteristics, including kinematic, statistical, spatial, and composite features, obtained from pen-tablet handwriting for individual identification.

Moreover, handwriting is vital for identifying patients in medical fields, such as detecting Parkinson's disease and identifying autistic children [6]. Researchers investigate handwritten characters from many sources, such as paper documents [7], images, touch and non-touch screens [8], and other devices; handwriting is easy to collect, less stressful for humans, and suitable for classification. Two forms of handwriting data are used: offline data collected with scanning machines and online data entered with a digital pen and tablet. In this work, we employed an online handwriting database. Online handwriting recognition has numerous applications, including signature authentication in the industrial and finance sectors, authentication in criminal investigations and legal proceedings, and document analysis. However, when only specific patterns are written and analyzed, identification can be challenging. We propose a technique to identify a person based on independent text writing.

Nowadays, revealing unique characteristics through online handwriting analysis is gaining prominence among academics due to rapid technological advancement and the connection between technology and the human brain. The authors of [9] present an online handwritten alphabet recognition system based on machine learning, where classification is based on the writing of each letter of the alphabet; in our research, by contrast, we classify by examining the entire writing of mixed phrases or sentences containing alphabets. A person's fine motor skills and handwriting patterns are used for user identification and verification, and can also reflect age, gender, hand function, and mental state. Each individual has a distinctive handwriting style, making it a compelling subject for user identification and valuable for various applications, including personal identification, pattern recognition, biometric analysis, and signature verification. We extracted 91 features for handwriting recognition and used them to identify individuals through inference analysis. Additionally, we identified the essential handwriting characteristics that yield better outcomes.

The experiments in this paper use datasets with various combinations, modifications, and numbers of individuals, as well as task-specific accuracy levels. Our main contributions to this study are as follows:

1   We collected and propose a new Bengali Handwriting (BHW) dataset. We perform exploratory data analysis and statistical preprocessing to clean and prepare the data for analysis.

2   In this paper, 91 features are analyzed, including some new features that facilitate user identification from handwriting. Moreover, we applied the analysis of variance (ANOVA) F test and the mutual information score to extract the essential features influencing user identification.

3   We propose deep learning architectures using handwriting fine motor features for user recognition, namely a Convolutional Neural Network (CNN) and a bidirectional long short-term memory (BiLSTM) network. In addition, a combined CNN-BiLSTM method was analyzed to evaluate the proposed methods on our dataset in a comparative experiment.

This paper is organized into five sections. Section 2 provides an overview of the relevant research in our field and concisely summarizes its findings. Section 3 contains a complete discussion of the proposed methodology, including a clear description of the new dataset, dataset preprocessing, feature extraction, and classification methods. The results obtained from the proposed techniques and features are elaborated in Section 4. Finally, Section 5 summarizes our findings.

2  Related Work

Scientists worldwide are performing extensive studies on handwriting analysis, particularly in handwriting recognition, diagnosis, person identification, signature verification, image, pattern, and gesture analysis. This section summarizes the various image and pattern analysis-based handwriting recognition and individual identification techniques developed by previous researchers.

The classification of adult and child handwriting has been investigated by analyzing various significant features in writing [10]. The authors employed sequential forward floating selection (SFFS) for feature selection and different machine learning methods for classification, achieving accuracies of 93.5% on a handwritten-text database and 89.8% on a handwritten-pattern database. In [11], the authors proposed classifying handwriting by extracting information from a person's drawing patterns, with an average handedness classification accuracy of 95.20%. A neural network (NN)-based handwritten digit (HD) recognition technique was suggested in [12], where the authors conducted an extensive review of several existing HD recognition strategies and surveyed the MNIST and EMNIST handwritten character datasets. Building on previous work with the original dataset, the authors of [13,14] suggested a CNN with data augmentation. Image-based handwritten signature verification was proposed in [15] within a functional framework using a hybrid approach, achieving 90.13% accuracy with 10 training samples.

Static handwriting images that have been dynamically enhanced are examined in [16]; the enhanced images are created synthetically by combining the static and dynamic properties of handwriting. However, image-capturing quality can be a significant concern in image-based research, as lighting, image encoding, and brightness all affect the captured image. Kinematic and pressure features of several handwriting symbols were used for the differential diagnosis of Parkinson's disease [17], where the authors compared K-nearest neighbors, ensemble AdaBoost, and support vector machine classifiers. Deep learning approaches have recently succeeded in extracting and categorizing significant handwriting characteristics [18,19]. A study on identifying personality traits from handwriting analyzed various feature extraction strategies for predicting scribal personality and drew connections between handwriting and personality psychology [2]. Other authors examined in-the-moment and context-free handwriting data from digital pen-tablet sensors to develop reliable user authentication systems, presenting an effective machine-learning-based user detection system built on the properties of the sensor signals from pen and tablet devices [20]. The study of entity recognition using perception technologies such as knowledge graphs or SNA (Social Network Analysis) focuses on activities and time, extracting features from textual data [21]. A relationship-based global-local cognition fusion training methodology with adversarial sample production aims to improve comprehension of the intrinsic interactions between items in distinct local locations [22]; however, those authors proposed an image-text matching system, whereas we consider only pen-tablet handwriting. Most of the aforementioned authentication models show suboptimal performance due to inappropriate and insufficient features. In this paper, we extract 91 features and identify the person by applying hybrid deep-learning techniques.

3  Proposed Methodology

This section provides an overview of the proposed deep learning-based hybrid framework. First, the newly created BHW dataset is used as input; the dataset is then preprocessed and used to extract features. The person identification procedure is assessed using feature selection, classification, and evaluation metrics. Fig. 1 depicts the general framework of the proposed system.


Figure 1: General flow diagram of the proposed individuals’ identification system

3.1 Bengali Handwriting (BHW) Dataset

A pen tablet system (Wacom Intuos Pro tablet) was used to collect the handwriting data 1. The tablet was attached to a laptop PC (CX-XZ, Panasonic Corporation, Osaka, Japan) running Windows 10 (Microsoft Corporation, WA, USA). A specially developed program used signals from the pen tablet to record six kinematic parameters: time, pressure, x, y, azimuth, and altitude. The coordinate system of the pen tablet is shown in Fig. 2. The tablet measures 338 mm × 219 mm × 8 mm and supports 8,192 pen pressure levels; the writing pressure was recorded on a 15-bit scale. The azimuth angle of the pen was measured in steps of 0.1 degrees, with 0, 90, 180, and 270 degrees corresponding to the top, right, bottom, and left, respectively. The elevation (altitude) angle of the pen was measured from 0 to 90 degrees in steps of 1 degree, where 0 and 90 degrees correspond to the pen lying parallel and standing perpendicular to the tablet surface, respectively.


Figure 2: General view of the pen tablet writing system

We used ten Bengali handwriting tasks; Fig. 3 shows a sample. Participants followed the laptop display to write the tasks on the tablet surface. The sample keywords are written on a flat tablet margin platform. When individuals write on the tablet's surface with the digital pen, the corresponding numeric values are generated automatically and stored in an Excel sheet. Our dataset is therefore authentic and reliable, as it comprises distinctive characteristics of each person's handwriting. In this study, 30 participants (12 males and 18 females) were asked to write ten distinct keywords and to repeat each task five times, yielding a minimum of 1,500 data samples for this investigation. Since the numeric values are stored directly in an Excel sheet without any filtering or preprocessing, and our automated process uses sensor signals from the pen and tablet devices, the Bengali handwriting data is more precise and reliable than that of image-based systems. Our proposed model is thus significant in achieving greater accuracy and paves the way for future sensor-based Bengali handwriting data collection research.


Figure 3: Sample of Bengali handwriting

3.1.1 Parameters of the Handwriting Data

The dataset in this study contains six key parameters or features from which various dynamic, statistical, spatial, and composite features are computed. The following features are explained in detail:

1.   Time: The time taken to complete a written piece varies among individuals, with some writing at a rapid pace, others at a moderate speed, and others at a slow rate. Our examination revealed that fast writers exhibit a dynamic handwriting style characterized by fluctuating baseline features, such as rising, straight, dropping, or erratic baselines.

2.   X-axis: The X-axis reflects the X-coordinate region on the pen-tablet surface where the pen pressure, p(t), exceeds zero; it can range from 130 to 592 for one person and from 185 to 995 for another. In our study, we analyzed five different occurrences of the same keyword written by the same individual and found that the X-axis position values were similar for a particular individual. This suggests that the X-axis values could help distinguish one individual from another.

3.   Y-axis: When writing with a pen tablet, the Y-axis corresponds to the location in terms of the Y coordinate. Time-series Y-coordinate data showed that the range of Y-axis values varies among individuals. The Y-axis position values were consistent across all five cases, suggesting that it can be a valuable identifier for distinguishing one person from another.

4.   Pen Pressure: Pen pressure is an essential attribute that reflects the magnitude of the force exerted onto the tablet surface during writing, captured at any instant where the pen pressure, p(t), exceeds zero. This parameter is distinct for each individual, varying between heavy and light pressure levels. Analysis of the collected dataset reveals that an individual's pen pressure is not constant but varies from one iteration to the next, with pressure ranges varying among individuals, e.g., from (1–32767) to (1376–28124).

5.   Horizontal Angle/Azimuth: The Horizontal Angle, which quantifies the angular separation between two lines with the exact origin, is measured by the pen tablet. In our study, this metric is a crucial attribute for individual identification.

6.   Vertical Angle/Altitude: The vertical angles, which complement each other when traversing two lines, significantly impact the study's ability to distinguish between individuals. In our dataset, no vertical angle exceeds a numeric value of 800.

3.2 Data Preprocessing

In this section, the dataset is preprocessed to make it more consistent and to improve performance. The dataset has several column properties, each of which should contain a valid numeric value; however, some values in our dataset are missing. The discrepancy in processing times between the data collection and data acquisition equipment could cause this, and a lost write signal can produce zero values. We used mean imputation to replace the missing values: it computes a statistic for each column and replaces any missing values in that column with that statistic, based on the data closest to the missing values. Additionally, mean imputation preserves the whole dataset without reducing the sample size, and running several imputations for the missing data allows us to generate more accurate standard-error estimates. The mean imputation is calculated using Eq. (1).

$m_{imp} = \dfrac{1}{N}\sum_{i=1}^{N} fre_i$ (1)

where $fre_i$ represents the frequency of instances, and $N$ is the total number of instances in a feature vector.
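As an illustration, here is a minimal pandas sketch of this column-wise mean imputation; the file name and column names are assumptions about the BHW Excel layout:

```python
import pandas as pd

# Column names are assumptions about the BHW Excel layout.
cols = ["time", "x", "y", "pen_pressure", "azimuth", "altitude"]
df = pd.read_excel("bhw_sample.xlsx", header=None, names=cols)  # illustrative path

# Treat zero readings caused by a lost write signal as missing,
# then replace each missing entry with its column mean (Eq. (1)).
df["pen_pressure"] = df["pen_pressure"].replace(0, float("nan"))
df = df.fillna(df.mean(numeric_only=True))
```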

3.3 Handcrafted Feature Extraction

In this section, we extract numerous significant features from the handwriting data for person identification. Four groups of features are considered: kinematic (12), statistical (25), spatial (39), and composite (15) features [23].

3.3.1 Kinematic Feature Extraction

A multidimensional analysis with kinematic feature extraction from the input handwriting data gives a comprehensive understanding of the dynamic aspects of the writing process. By providing important insights into the cadence and motion of the pen strokes, these data greatly aid the understanding of a person's handwriting style. The position data are used to determine velocity and acceleration; these profiles provide insight into the hand's speed and its variability while writing, and acceleration peaks can indicate dramatic direction changes or pen lifts. For pressure sensitivity, the average and maximal pressure variations are retrieved, along with other variables related to pressure changes during the writing process. To enhance classification performance, we considered pressure, writing speed, velocity, and acceleration as kinematic features in this paper. We also calculated the average pen-tip pressure over all pressure samples at the specified timestamps, together with the maximum and average writing speed. For velocity, the average, standard deviation, peak, and minimum were extracted, and acceleration features are derived in the same manner as the velocity features. Table 1 describes the kinematic features.

Table 1: Description of the kinematic features
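A minimal sketch of how such kinematic features can be computed from the raw time, position, and pressure signals (the feature names and exact definitions are assumptions):

```python
import numpy as np

def kinematic_features(t, x, y, p):
    """Sketch of the kinematic group: pressure, velocity, and acceleration."""
    dt = np.diff(t).astype(float)
    dt[dt == 0] = np.finfo(float).eps            # guard against repeated timestamps
    v = np.hypot(np.diff(x), np.diff(y)) / dt    # point-to-point velocity
    a = np.diff(v) / dt[1:]                      # acceleration from velocity
    return {
        "mean_pressure": float(np.mean(p)),
        "max_pressure": float(np.max(p)),
        "mean_velocity": float(v.mean()),
        "std_velocity": float(v.std()),
        "peak_velocity": float(v.max()),
        "min_velocity": float(v.min()),
        "mean_acceleration": float(a.mean()),
        "peak_acceleration": float(np.abs(a).max()),
    }
```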

3.3.2 Statistical Feature Extraction

Statistical feature extraction analyzes and summarizes useful information from raw data using statistical measurements. Several statistical functions can be applied to the pen-tablet data to obtain useful information. The statistical feature extraction process computes the mean, median, maximum, minimum, and variance, as shown in Table 2. We extracted all of these statistics for each raw property, such as mean_pen_pressure, mean_X_axis, mean_Y_axis, mean_azimuth, and mean_altitude of the input data.

Table 2: Description of the statistical features
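A sketch of this step, applying the five statistics to each raw signal, which yields the 25 statistical features (column names follow the earlier sketch):

```python
import pandas as pd

def statistical_features(df: pd.DataFrame) -> dict:
    """Five statistics for each of the five raw signals (5 x 5 = 25 features)."""
    feats = {}
    for col in ["pen_pressure", "x", "y", "azimuth", "altitude"]:
        s = df[col]
        feats[f"mean_{col}"] = s.mean()
        feats[f"median_{col}"] = s.median()
        feats[f"max_{col}"] = s.max()
        feats[f"min_{col}"] = s.min()
        feats[f"var_{col}"] = s.var()
    return feats
```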

3.3.3 Spatial Feature Extraction

Spatial features refer to attributes related to the physical aspects of writing. They include the start and end times of writing, which give insight into the speed and duration of the writing process; the top and bottom positions on the tablet surface, which indicate the vertical placement of handwriting on the writing surface; and the length and height of the handwriting, which describe the size and proportions of the written characters. We extracted a single value each for handwriting width, height, and total length, while statistical functions such as the average, maximum, and minimum are computed for each of the remaining spatial properties. Table 3 describes the spatial features.

Table 3: Description of the spatial features
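A sketch of a few of these spatial features, assuming the column names used above and that pen-down samples are those with positive pressure:

```python
import numpy as np
import pandas as pd

def spatial_features(df: pd.DataFrame) -> dict:
    """Sketch of a few spatial features; names and conventions are assumptions."""
    on_paper = df[df["pen_pressure"] > 0]    # samples where the pen touches
    x, y = on_paper["x"].to_numpy(), on_paper["y"].to_numpy()
    return {
        "width": x.max() - x.min(),
        "height": y.max() - y.min(),
        "total_length": float(np.hypot(np.diff(x), np.diff(y)).sum()),
        "start_time": df["time"].iloc[0],
        "end_time": df["time"].iloc[-1],
        "top_position": y.min(),             # assuming y grows downward
        "bottom_position": y.max(),
    }
```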

3.3.4 Composite Feature Extraction

Composite features combine the raw features to create new ones that capture more complex patterns or characteristics of handwriting. Positive and negative pressure changes, the pressure and speed over the first and last 10% of a sample, and the loop count are examples of composite features. Moreover, we calculated the mean, standard deviation, and maximum of each composite function, as shown in Table 4.

Table 4: Description of the composite features
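A sketch of some composite features built from the pressure signal; the 10% window rule follows the description above, while the names and details are assumptions:

```python
import numpy as np
import pandas as pd

def composite_features(df: pd.DataFrame) -> dict:
    """Sketch of composite features derived from the raw pressure signal."""
    p = df["pen_pressure"].to_numpy()
    dp = np.diff(p)
    n = max(len(p) // 10, 1)                 # 10% of the samples
    return {
        "positive_pressure_changes": int((dp > 0).sum()),
        "negative_pressure_changes": int((dp < 0).sum()),
        "first10_mean_pressure": float(p[:n].mean()),
        "last10_mean_pressure": float(p[-n:].mean()),
    }
```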

3.4 Potential Feature Selection from Handcrafted Features

In our experiment, 91 features are retrieved from the handwriting sensor output for each individual using the handcrafted feature extraction approach; these features are not all equally important for identifying the user. We present a hybrid feature selection technique that retains significant features while discarding non-significant ones to achieve excellent classification results. The hybrid process combines the ANOVA F test (AFT) and mutual information (MI). Three premises are needed for an AFT: (i) the samples are drawn from normally distributed populations, (ii) the samples are independent and random, and (iii) the standard deviations (or variances) of the populations are equal. The approach also evaluates the homogeneity of variance across category groups with respect to the numerical outcome. When the variance across groups is equal, the feature in question has no meaningful effect on the response variable, and the corresponding categorical variable is excluded from model training. MI, in turn, assesses the dependence between two concurrently measured random variables. The MI between two discrete random variables X and Y, jointly distributed according to p(x, y), is given by Eq. (2).

$I(X,Y) = \sum_{x,y} p(x,y)\,\log\dfrac{p(x,y)}{p(x)\,p(y)}$ (2)

Moreover, mutual information can be expressed through the entropies of the channel input and output and their joint entropy using Eq. (3). The general structure of the hybrid feature selection model is shown in Fig. 4.

$I(X,Y) = H(X) + H(Y) - H(X,Y)$ (3)


Figure 4: Application process of AFT and MI scores to identify the optimal features
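As a sketch of this hybrid selection, the two scores can be computed with scikit-learn's `f_classif` and `mutual_info_classif`; the rank-fusion rule below is an assumption, since the exact combination rule behind Fig. 4 is not spelled out here:

```python
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif

def hybrid_select(X, y, k=40):
    """Rank the 91 features by ANOVA F-score and MI, keep the top-k combined."""
    f_scores, _ = f_classif(X, y)                      # ANOVA F test (AFT)
    mi_scores = mutual_info_classif(X, y, random_state=0)
    # Fuse the two criteria by summing rank positions (0 = best); the paper
    # does not spell out its fusion rule, so this is one plausible choice.
    f_rank = np.argsort(np.argsort(-f_scores))
    mi_rank = np.argsort(np.argsort(-mi_scores))
    return np.argsort(f_rank + mi_rank)[:k]            # indices of kept features
```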

3.5 Convolutional Neural Network (CNN)

After selecting the potential features, we employed a deep learning-based CNN model to enhance the extracted spatial features and perform classification [24]. CNN is a type of deep learning model widely used in computer vision tasks such as image classification, object detection, and segmentation. A one-dimensional CNN handles sequence or time-series data and can automatically learn spatial patterns from the input using convolutional layers: the convolutional filters slide along the sequence, extracting local patterns, and the filter weights are learned during training, allowing the model to identify relevant structures within the sequential data. This paper uses a one-dimensional CNN architecture for classification. The architecture includes two convolution layers with rectified linear unit (ReLU) activations, using 128 and 64 filters, respectively, with 1 × 1 kernels and a stride of two. MaxPooling follows the convolution layers, after which the outputs are combined, flattened, and sent to a fully connected layer for prediction. Fig. 5 illustrates the CNN model architecture.


Figure 5: The CNN model architecture
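A minimal Keras sketch of this architecture, assuming the selected features are fed as a length-`n_features` sequence with one channel (layer sizes follow the description above; other hyperparameters are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(n_features: int, n_classes: int) -> keras.Model:
    """1D-CNN sketch following the description above; exact sizes are assumptions."""
    return keras.Sequential([
        layers.Input(shape=(n_features, 1)),   # selected features as a sequence
        layers.Conv1D(128, kernel_size=1, strides=2, activation="relu"),
        layers.Conv1D(64, kernel_size=1, strides=2, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(n_classes, activation="softmax"),
    ])
```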

3.6 Bidirectional LSTM

BiLSTM is short for Bidirectional Long Short-Term Memory; unlike a standard LSTM, it can capture both past and future context. It combines two LSTMs over the sequence, one processing the input forward and the other backward. Fig. 6 illustrates the BiLSTM model architecture used in this study, where $h_t$ denotes the hidden state of the forward pass through the sequence. The output of the first LSTM layer is fed into a second LSTM layer with a specified number of units, and the final hidden state of the second layer assigns a weight to each time step of the output sequence for the final classification task. In this process, we used the tanh activation function for the summation operation of the two dense layers on the corresponding input; this score is generated for each time step in the sequence [25].


Figure 6: BiLSTM model architecture
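A minimal Keras sketch of the stacked BiLSTM described above; the unit count and the placement of the tanh layer are assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_bilstm(n_features: int, n_classes: int, units: int = 64) -> keras.Model:
    """Two stacked BiLSTM layers; the unit count is an assumption."""
    return keras.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Bidirectional(layers.LSTM(units, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(units)),   # final hidden state
        layers.Dense(units, activation="tanh"),     # tanh scoring, as described
        layers.Dense(n_classes, activation="softmax"),
    ])
```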

3.7 Hybrid Model (CNN-BiLSTM)

We used the CNN-BiLSTM architecture to obtain hierarchical features that help extract intricate patterns from the time-series characteristics, aiming to improve the effectiveness of the system [21,22,26,27]. In this system, the CNN layers learn convolved relations among the initial handcrafted features, while the BiLSTM models the sequence for prediction. In contrast to existing methods that convert time series to images, our model uses only the original raw data. The CNN layers apply kernels that iteratively process the input sequences. Fig. 7 visually represents the CNN-BiLSTM model for multivariate time data. The output of the CNN layers serves as input to a BiLSTM layer, whose outputs feed a fully connected layer that performs the classification.


Figure 7: CNN-BiLSTM model architecture
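A minimal Keras sketch of the hybrid pipeline, with CNN layers feeding a BiLSTM and a fully connected classifier; filter counts and unit sizes are assumptions carried over from the individual models above:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_bilstm(n_features: int, n_classes: int) -> keras.Model:
    """CNN front-end feeding a BiLSTM, as in Fig. 7; sizes are assumptions."""
    return keras.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(128, kernel_size=1, activation="relu"),
        layers.Conv1D(64, kernel_size=1, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Bidirectional(layers.LSTM(64)),   # sequence modelling
        layers.Dense(n_classes, activation="softmax"),
    ])
```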

4  Experimental Results

In this study, we used our dataset to test different combinations of the models in several ways. We split the dataset into training and testing sets in a 70:30 ratio to evaluate the models. We extracted 91 features with the statistical and kinematic techniques described above and then selected the most effective features using the ANOVA and MI methods. We used three models for feature enhancement and classification: CNN, BiLSTM, and the hybrid model, denoted CNN-BiLSTM. In the performance tables, we report the minimum validation loss (MVL), training accuracy (TA), test accuracy (TSA), precision, recall, and F1-score. Precision, recall, and F1-score are calculated from the true positives (TrP), false positives (FrP), true negatives (TrN), and false negatives (FrN), as defined below:

$\text{Accuracy}(\%) = \dfrac{TrP + TrN}{TrP + FrP + TrN + FrN} \times 100$ (4)

$\text{Recall}(\%) = \dfrac{TrP}{TrP + FrN} \times 100$ (5)

$\text{Precision}(\%) = \dfrac{TrP}{TrP + FrP} \times 100$ (6)

$\text{F1-Score}(\%) = \dfrac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \times 100$ (7)
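For reference, these metrics can be computed with scikit-learn as sketched below; macro averaging across the writer classes is our assumption:

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

def evaluate(y_true, y_pred) -> dict:
    """Eqs. (4)-(7); macro averaging over the writers is an assumption."""
    return {
        "accuracy": 100 * accuracy_score(y_true, y_pred),
        "recall": 100 * recall_score(y_true, y_pred, average="macro"),
        "precision": 100 * precision_score(y_true, y_pred, average="macro"),
        "f1": 100 * f1_score(y_true, y_pred, average="macro"),
    }
```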

4.1 Performance with All Features

In this study, we collected data from 30 persons and used it to identify individuals by extracting qualitative features. To visualize the impact of data size on the models, various subsets of the personal data are used for training and evaluation. The training and testing accuracy, precision, recall, and F1-score for 5-, 10-, 15-, 20-, 25-, and all 30-person data are shown in Table 5. The training and testing accuracy using 5-person data is 100% for all proposed models, while the training accuracy for 10-person data is 99.74%, 98.72%, and 100% for CNN, BiLSTM, and CNN-BiLSTM, respectively. The highest training accuracies are 100%, 98.13%, 98.35%, and 98.3%, achieved by CNN-BiLSTM using 15-, 20-, 25-, and 30-person data. In contrast, the highest testing accuracies are 98.92% (BiLSTM, 15-person data), 93.69% (CNN, 20-person data), 93.42% (CNN-BiLSTM, 25-person data), and 93.62% (CNN-BiLSTM, 30-person data). Moreover, the testing accuracy using all data and all features is 93.22%, 93.22%, and 93.62% for CNN, BiLSTM, and CNN-BiLSTM, respectively.

Table 5: Training and testing accuracy, precision, recall, and F1-score for 5- to 30-person data

4.2 Performance with ANOVA and Mutual Information Selected Features

To visualize the effectiveness of the various features, we used feature selection to choose the potential features, reducing computational complexity and protecting the model from biased training. We employed the ANOVA and mutual information techniques to select the potential features that could improve performance accuracy and efficiency. Table 6 lists the combinations of kinematic, statistical, spatial, and composite features, and Table 7 presents the feature selection process using the ANOVA F test. Table 8 reports the performance after selecting the effective features from the 91 candidates with the ANOVA technique: the training accuracy is 96.61%, 93.12%, and 98.40% for CNN, BiLSTM, and CNN-BiLSTM, respectively, while the testing accuracy is 92.43%, 93.62%, and 93.42%, respectively.

Table 6: Combinations of kinematic, statistical, spatial, and composite features

Table 7: Feature selection process using the ANOVA F test

Table 8: Performance after feature selection with the ANOVA technique

Table 9 demonstrates the performance after selecting the effective features with the mutual information technique. The training accuracy is 96.96%, 92.13%, and 98.41% for CNN, BiLSTM, and CNN-BiLSTM, respectively, and the testing accuracy is 93.22%, 94.02%, and 94.62%, respectively. Moreover, Tables 8 and 9 also report the precision, recall, and F1-score of the proposed models.

Table 9: Performance after feature selection with the mutual information technique

Table 10 shows the per-task accuracy for all proposed techniques. The average training and testing accuracies are 100% and 93.59% for CNN, 96.26% and 87.69% for BiLSTM, and 99.80% and 93.84% for CNN-BiLSTM. For CNN, the training accuracy is 100% for all tasks, and the highest testing accuracy is 100% for tasks 3 and 6. For CNN-BiLSTM, tasks 1, 2, 3, 5, 6, 8, and 10 achieve the highest training accuracy of 100%, and task 3 achieves the highest testing accuracy of 100%. Using BiLSTM, the highest training accuracy is 98.69% and the highest testing accuracy is 97.43%.

Table 10: Individual task accuracy for all proposed techniques

Fig. 8 depicts the training and validation loss and accuracy of the proposed hybrid CNN-BiLSTM method, and Fig. 9 shows its confusion matrix. Furthermore, we separately report the classification accuracies of the different feature groups: kinematic, statistical, spatial, and composite. The training accuracy of the proposed hybrid model on these groups alone is 78.17%, 97.55%, 96.91%, and 88.49%, respectively; Table 11 presents the classification accuracy of the various feature groups. Compared with using the feature groups independently, the proposed hybrid model achieved higher accuracy by combining all feature groups and applying the proposed feature selection methods.


Figure 8: Model loss and accuracy for CNN-BiLSTM


Figure 9: Confusion matrix of the proposed CNN-BiLSTM method

Table 11: Classification accuracy of the various feature groups

Furthermore, we compared the proposed model with a benchmark dataset [20], which contains numerous handwritten samples from 24 individuals, and with different machine-learning techniques. Table 12 compares the accuracy of the proposed CNN, BiLSTM, and hybrid (CNN-BiLSTM) models on our dataset and the benchmark dataset.

Table 12: Accuracy comparison of the proposed models on our dataset and the benchmark dataset [20]

In addition, we measured the classification accuracy of different machine learning algorithms to evaluate and compare them against the proposed methods, as shown in Table 13.

Table 13: Classification accuracy of different machine learning algorithms

5  Conclusion

In this study, we developed a Bangla handwriting recognition system comprising effective feature extraction, feature selection, and classification modules. Because few datasets are available for the Bangla handwriting recognition task, we collected a new BHW dataset. We preprocessed the dataset, extracted a total of 91 features, and selected the potential features with the ANOVA F test and mutual information scores. Finally, we applied deep learning-based CNN, BiLSTM, and CNN-BiLSTM techniques, with which our model performed well in individual classification. The average training accuracy over all tasks is 100%, 96.26%, and 99.80% for CNN, BiLSTM, and CNN-BiLSTM, respectively, and the average testing accuracy over all tasks is 93.59%, 87.39%, and 93.84%, respectively. Our model achieved good performance compared with similar state-of-the-art work. In the future, we plan to deploy it as a real-life BHW system.

Acknowledgement: This work is supported by the 2023–2024 Research Project at Pabna University of Science and Technology, Bangladesh.

Funding Statement: MMU Postdoctoral and Research Fellow (Account: MMUI/230023.02).

Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Md. Abdur Rahim, Fahmid Al Farid, and Hezerul Abdul Karim; data collection: Abu Saleh Musa Miah, Arpa Kar Puza, and Md. Najmul Hossain; analysis and interpretation of results: Md. Abdur Rahim, Md. Nur Alam, and Abu Saleh Musa Miah; draft manuscript preparation: Md. Abdur Rahim, Fahmid Al Farid, Arpa Kar Puza, Md. Nur Alam, and Sarina Mansor. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Bengali Handwriting Dataset: https://github.com/PUSTCSE/Bengali-Handwriting-.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

1 https://github.com/PUSTCSE/Bengali-Handwriting

References

1. Ballard L, Lopresti D, Monrose F. Evaluating the security of handwriting biometrics. In: Tenth International Workshop on Frontiers in Handwriting Recognition. France: Suvisoft; 2006.

2. Chaudhari K, Thakkar A. Survey on handwriting-based personality trait identification. Expert Syst Appl. 2019;124:282–308. doi:10.1016/j.eswa.2019.01.028.

3. Diaz M, Moetesum M, Siddiqi I, Vessio G. Sequence-based dynamic handwriting analysis for Parkinson's disease detection with one-dimensional convolutions and BiGRUs. Expert Syst Appl. 2021;168:114405. doi:10.1016/j.eswa.2020.114405.

4. Shin J, Maruyama K, Kim CM. Signature verification based on inter-stroke and intra-stroke information. ACM SIGAPP Appl Comput Rev. 2017;17(1):26–34. doi:10.1145/3090058.3090062.

5. Jadhav EB, Sankhla MS, Kumar R. Artificial intelligence: advancing automation in forensic science and criminal investigation. J Seybold Rep. 2020;15(8):2064–75. doi:10.2174/2666484401666220819111603.

6. Haroon AS, Padma T. An ensemble classification and binomial cumulative based PCA for diagnosis of Parkinson's disease and autism spectrum disorder. Int J Syst Assur Eng Manag. 2022;1–6. doi:10.1007/s13198-022-01699-x.

7. Ahlawat S, Choudhary A, Nayyar A, Singh S, Yoon B. Improved handwritten digit recognition using convolutional neural networks (CNN). Sensors. 2020;20(12):3344. doi:10.3390/s20123344.

8. Rahim MA, Shin J, Islam MR. Gestural flick input-based non-touch interface for character input. Vis Comput. 2020;36(8):1559–72. doi:10.1007/s00371-019-01758-8.

9. Popli R, Kansal I, Garg A, Goyal N, Garg K. Classification and recognition of online hand-written alphabets using machine learning methods. IOP Conf Ser: Mater Sci Eng. 2021;1022(1):012111. doi:10.1088/1757-899X/1022/1/012111.

10. Shin J, Maniruzzaman M, Uchida Y, Hasan MA, Megumi A, Suzuki A, et al. Important features selection and classification of adult and child from handwriting using machine learning methods. Appl Sci. 2022;12(10):5256. doi:10.3390/app12105256.

11. Shin J, Rahim MA. Handedness detection based on drawing patterns using machine learning techniques. In: Proceedings of the Thirteenth International Conference on Advances in Computer-Human Interactions; 2020; Valencia, Spain.

12. Ramzan M, Khan HU, Awan SM, Akhtar W, Ilyas M, Mahmood A, et al. A survey on using neural network based algorithms for hand written digit recognition. Int J Adv Comput Sci Appl. 2018;9(9):519–28. doi:10.14569/IJACSA.2018.090965.

13. Ashiquzzaman A, Tushar AK, Rahman A, Mohsin F. An efficient recognition method for handwritten Arabic numerals using CNN with data augmentation and dropout. In: Data Management, Analytics and Innovation. Singapore: Springer; 2019. p. 299–309.

14. Altwaijry N, Al-Turaiki I. Arabic handwriting recognition system using convolutional neural network. Neural Comput Appl. 2021;33(7):2249–61. doi:10.1007/s00521-020-05070-8.

15. Ooi SY, Teoh AB, Pang YH, Hiew BY. Image-based handwritten signature verification using hybrid methods of discrete radon transform, principal component analysis and probabilistic neural network. Appl Soft Comput. 2016;40:274–82. doi:10.1016/j.asoc.2015.11.039.

16. Diaz M, Ferrer MA, Impedovo D, Pirlo G, Vessio G. Dynamically enhanced static handwriting representation for Parkinson's disease detection. Pattern Recognit Lett. 2019;128:204–10. doi:10.1016/j.patrec.2019.08.018.

17. Drotár P, Mekyska J, Rektorová I, Masarová L, Smékal Z, Faundez-Zanuy M. Evaluation of handwriting kinematics and pressure for differential diagnosis of Parkinson's disease. Artif Intell Med. 2016;67:39–46. doi:10.1016/j.artmed.2016.01.004.

18. Mezghani A, Elleuch M, Kherallah M. DL vs. traditional ML algorithms to recognize Arabic handwriting script: a review. In: Intelligent Systems Design and Applications. Cham: Springer Nature Switzerland; 2023. p. 404–14.

19. Aouraghe I, Khaissidi G, Mrabti M. A literature review of online handwriting analysis to detect Parkinson's disease at an early stage. Multimed Tools Appl. 2023;82(8):11923–48. doi:10.1007/s11042-022-13759-2.

20. Begum N, Akash MA, Rahman S, Shin J, Islam MR, Islam ME. User authentication based on handwriting analysis of pen-tablet sensor data using optimal feature selection model. Future Internet. 2021;13(9):231. doi:10.3390/fi13090231.

21. Liu S, He TH, Li JY, Li YT, Kumar A. An effective learning evaluation method based on text data with real-time attribution: a case study for mathematical class with students of junior middle school in China. ACM Trans Asian Low-Resour Lang Inf Process. 2023;22(3):1–22. doi:10.1145/3474367.

22. Huang S, Fu W, Zhang Z, Liu S. Global-local fusion based on adversarial sample generation for image-text matching. Inf Fusion. 2024;103:102084. doi:10.1016/j.inffus.2023.102084.

23. Hasan T, Rahim MA, Shin J, Nishimura S, Hossain MN. Dynamics of digital pen-tablet: handwriting analysis for person identification using machine and deep learning techniques. IEEE Access. 2024;12:8154–77. doi:10.1109/ACCESS.2024.3352070.

24. Taye MM. Theoretical understanding of convolutional neural network: concepts, architectures, applications, future directions. Computation. 2023;11(3):52. doi:10.3390/computation11030052.

25. Shin J, Konnai S, Maniruzzaman M, Hasan MA, Hirooka K, Megumi A, et al. Identifying ADHD for children with coexisting ASD from fNIRs signals using deep learning approach. IEEE Access. 2023;11:82794–801. doi:10.1109/ACCESS.2023.3299960.

26. Méndez M, Merayo MG, Nunez M. Long-term traffic flow forecasting using a hybrid CNN-BiLSTM model. Eng Appl Artif Intell. 2023;121:106041. doi:10.1016/j.engappai.2023.106041.

27. Sharma N, Mangla M, Yadav S, Goyal N, Singh A, Verma S, et al. A sequential ensemble model for photovoltaic power forecasting. Comput Electr Eng. 2021;96:107484. doi:10.1016/j.compeleceng.2021.107484.




This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.