Computers, Materials & Continua
DOI:10.32604/cmc.2022.019801
Article

Effect of Direct Statistical Contrast Enhancement Technique on Document Image Binarization

Wan Azani Mustafa1,2,*, Haniza Yazid3, Ahmed Alkhayyat4, Mohd Aminudin Jamlos3 and Hasliza A. Rahim3

1Advanced Computing (AdvCOMP), Centre of Excellence, Universiti Malaysia Perlis (UniMAP), Pauh Putra Campus, Arau, 02600, Perlis, Malaysia
2Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis (UniMAP), Pauh Putra Campus, Arau, 02600, Perlis, Malaysia
3Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), Pauh Putra Campus, Arau, 02600, Perlis, Malaysia
4Faculty of Engineering, The Islamic University, Najaf, 54001, Iraq
*Corresponding Author: Wan Azani Mustafa. Email: wanazani@unimap.edu.my
Received: 26 April 2021; Accepted: 15 June 2021

Abstract: Background: Contrast enhancement plays an important role in the image processing field. Contrast correction adjusts the darkness or brightness of the input image and increases its quality. Objective: This paper proposes a novel method based on statistical data from the local mean and local standard deviation. Method: The proposed method modifies the mean and standard deviation of a neighbourhood at each pixel and divides the image into three categories: background, foreground, and problematic (contrast & luminosity) regions. Experimental results from both visual and objective aspects show that the proposed method normalizes the contrast variation problem effectively compared to Histogram Equalization (HE), Difference of Gaussian (DoG), and Butterworth Homomorphic Filtering (BHF). Seven (7) types of binarization methods were tested on the corrected images and produced positive and impressive results. Result: Finally, a comparison in terms of Signal to Noise Ratio (SNR), Misclassification Error (ME), F-measure, Peak Signal to Noise Ratio (PSNR), Misclassification Penalty Metric (MPM), and Accuracy was calculated. Each binarization method shows an improved result when applied to the corrected image compared to the original image. The SNR of the proposed corrected image is 9.350, higher than that of the three (3) other methods. The average increments over five (5) types of evaluation are: Otsu = 41.64%, Local Adaptive = 7.05%, Niblack = 30.28%, Bernsen = 25%, Bradley = 3.54%, Nick = 1.59%, and Gradient-Based = 14.6%. Conclusion: The results presented in this paper effectively solve the contrast problem and produce better quality images.

Keywords: Binarization; contrast; luminosity; illumination; document image

1  Introduction

One of the most significant current discussions in image processing is contrast enhancement, since the contrast problem is crucial for the binarization process [1–3]. Background and contrast normalization has become challenging for many researchers, and various methods have been proposed to solve this problem [4]. In recent years, there has been an increasing amount of literature on contrast enhancement techniques. The main purpose is to improve the quality of the image by adjusting the brightness or darkness of its intensities [5–7]. Many researchers argue that Histogram Equalization (HE) is a simple yet effective method to improve contrast and image quality [7–9]. Moreover, Kim [10] raised several concerns on contrast problems and suggested Brightness preserving Bi-Histogram Equalization (BBHE), which improves the contrast using the average intensity value to separate dark and bright areas. This finding is supported by the Quantized Bi-Histogram Equalization (QBHE) method proposed in 1997 [11]. The above findings contradict the study by Wang et al. [12], where the authors showed that the median intensity value is a more accurate separating point than the average intensity. These results in turn contradict the experiments of Chen et al. [13], who suggested that the minimum mean brightness error between the input and output image gives a more specific and accurate separating point than BBHE and Dualistic Sub-Image Histogram Equalization (DSIHE). In investigations by Hasikin et al. [14] and Zhou et al. [15], local adaptive and global adaptive information were set as the main parameters to enhance low brightness and low contrast in non-uniform images.

Besides that, homomorphic filtering is another well-known technique to solve the contrast problem, especially when the original image is badly illuminated or has a contrast problem [16–18]. In 2014, Shahamat et al. [19] published a paper describing a homomorphic filter with a simple kernel in the spatial domain, combined with bicubic interpolation, to improve contrast enhancement and recognition. To solve the contrast problem under varying lighting conditions, Fan et al. [20] found that a modification of homomorphic filtering using a Gaussian high-pass filter, called Difference of Gaussian (DoG) filtering, gave effective performance compared to several other methods. The study by Delac et al. [21] also found that a slight modification of the original homomorphic filtering technique applied to sub-images significantly improved the contrast and eliminated the illumination effect in face images. These findings based on a modification of the homomorphic filter equation are consistent with the findings by Adelmann [22]. Still, in terms of the cut-off, Adelmann proposed adjusting the filter response transition manually until the best setting is found.

Although the above investigations concentrated on contrast enhancement, to the best of the authors' knowledge, only a few references in the literature systematically describe the effect of contrast variation on the document image before the binarization process. This was the motivation behind the present study. Based on previous research, the first limitation is finding a separating point to differentiate between the bright and dark areas before applying the contrast enhancement method. Second, the main problem when considering a filtering method (such as homomorphic filtering) is the cut-off value. According to the literature, researchers obtained the cut-off and other parameter values by manual testing [18,19]. Thus, such parameter values are neither efficient nor accurate for all types of non-uniform input images.

The main objective of this paper is to investigate the binarization effect and to propose a novel method for contrast enhancement on document images before the binarization process. This paper focuses on the pre-processing stage, but the result after applying binarization to the corrected images was impressive. The method effectively reduces the contrast variation problem compared with other illumination techniques [19–21]. Finally, this finding can assist many researchers in concentrating on the binarization method or the post-processing stage. The rest of the paper is organized as follows. Section 1.1 explains related works based on contrast correction or illumination techniques, while Section 1.2 describes contrast and luminosity modelling. Section 2 presents the proposed method based on statistical values, while Section 3 shows the experimental results and the comparison with a few selected methods. Finally, Section 4 concludes this work.

1.1 Related Work

i) Histogram Equalization

Histogram equalization (HE) [7] is a common image enhancement technique. The purpose is to create an image with a uniform distribution over the whole brightness scale by using the cumulative density function of the image as a transfer function.
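As a minimal illustration (not code from the paper), the sketch below equalizes an 8-bit greyscale image by using its normalized cumulative histogram as the transfer function; img is assumed to be a 2-D uint8 NumPy array.

    import numpy as np

    def histogram_equalize(img):
        # Histogram of the 8-bit input and its cumulative density function (CDF)
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = np.cumsum(hist) / img.size
        # The scaled CDF acts as the intensity transfer function (look-up table)
        lut = np.round(255 * cdf).astype(np.uint8)
        return lut[img]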

ii) Difference of Gaussian (DoG)

Difference of Gaussian (DoG) is a method modified from homomorphic filtering to eliminate the illumination effect and automatically enhance the contrast. The effect of illumination is effectively reduced by adding new parameters, γH and γL, to the original Butterworth homomorphic filter, yielding the final equation given below [20]:

H(u,v) = (γH − γL)[1 − exp(−D²(u,v)/2D0²)] + γL (1)

Next, histogram equalization was applied in order to get a uniform gray distribution and enhanced contrast. D(u,v) denotes the distance from the origin of the centred Fourier transform. The optimal value for γH is 1.2 while γL is 0.02.
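A frequency-domain sketch of Eq. (1) is given below; this is an illustration using the quoted γH = 1.2 and γL = 0.02, not the authors' code, and d0 (the cut-off distance, in frequency samples) and the log/exp wrapping are assumptions of the usual homomorphic pipeline.

    import numpy as np

    def dog_homomorphic(img, d0, gamma_h=1.2, gamma_l=0.02):
        # Work on the log image so illumination and reflectance separate additively
        rows, cols = img.shape
        F = np.fft.fftshift(np.fft.fft2(np.log1p(img.astype(np.float64))))
        # Squared distance of each frequency sample from the centre of the spectrum
        u = np.arange(rows) - rows / 2
        v = np.arange(cols) - cols / 2
        D2 = u[:, None] ** 2 + v[None, :] ** 2
        # Eq. (1): Gaussian high-frequency emphasis rising from gamma_l to gamma_h
        H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0 ** 2))) + gamma_l
        filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
        return np.expm1(filtered)

Histogram equalization would then be applied to the filtered result, as described above.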

iii) Butterworth Homomorphic Filtering

Like the DoG method, this technique is based on a modification of the Butterworth homomorphic equation. However, Adelmann [22] focused on setting the transition point and the transition slope of the filter. A suitable and adjustable filter function was derived for frequency-domain processing with the homomorphic filter approach. The final equation is given below:

H(u,v) = [1 − 1/(1 + (q/a)^n)] d + e (2)

where the maximal filter order (n) value is 2, the offset (e) = 0.5 is the minimal value of the filter response [23], the radius (q) is the distance from the origin of the centred Fourier transform, a is the transition point of the filter, and the optimal amplification constant (d) is 1.5.
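Using the parameter values quoted above (n = 2, d = 1.5, e = 0.5), a sketch of the corresponding transfer function and its homomorphic application could look as follows; the transition point a and the log/exp wrapping are assumptions.

    import numpy as np

    def butterworth_homomorphic(img, a, n=2, d=1.5, e=0.5):
        # q: distance of each frequency sample from the origin of the centred spectrum
        rows, cols = img.shape
        u = np.arange(rows) - rows / 2
        v = np.arange(cols) - cols / 2
        q = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
        # Eq. (2): response rises from e at q = 0 towards d + e at high frequencies
        H = (1 - 1 / (1 + (q / a) ** n)) * d + e
        F = np.fft.fftshift(np.fft.fft2(np.log1p(img.astype(np.float64))))
        filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
        return np.expm1(filtered)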

1.2 The Contrast and Luminosity Modelling

An image of the best quality contains only foreground and background, without any noise. The original model of an image is denoted as follows:

Io = Fo + Bo (3)

where Io is the original image, while Fo and Bo represent the foreground pixels and background pixels, respectively. During the acquisition process, these images are often non-uniformly illuminated, exhibiting local luminosity and contrast variability [24]. Shadows, non-uniform illumination, ink bleed-through, blur, and perspective distortion frequently appear in the background of the image [25,26]. Luminosity mostly appears during the scanning process [27]. Therefore, contrast C(x,y) and luminosity L(x,y) may affect both the background and the foreground within the image, given by:

Background: Ibo = B(x,y)[L(x,y) + C(x,y)] (4)

Foreground: Ifo = F(x,y)[L(x,y) + C(x,y)] (5)

Combining expressions (4) and (5), the original image includes both contrast and luminosity. Therefore, the model of the original image with contrast and luminosity can be expressed as:

Io = [L(x,y) + C(x,y)][F(x,y) + B(x,y)] (6)


This work treats the luminosity (illumination) and contrast variation as noise, denoted N(x,y) = L(x,y) + C(x,y). Noise is a common problem in image processing [28]. It is described as pixels in the image showing intensity values different from the true pixel values [29]. Thus, the final expression can be simplified as follows:

Io = N(x,y)[F(x,y) + B(x,y)] (7)
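To make the model concrete, the toy sketch below (not from the paper) composes a synthetic degraded document according to Eqs. (6)–(7); the foreground, background, and noise fields are hypothetical.

    import numpy as np

    # Hypothetical clean components: dark text strokes on a bright page
    text = np.zeros((100, 100), dtype=bool)
    text[40:60, 20:80] = True
    F = np.where(text, 0.2, 0.0)        # foreground pixels (dark)
    B = np.where(text, 0.0, 0.9)        # background pixels (bright)
    # Noise field N(x,y) = L(x,y) + C(x,y): luminosity gradient plus contrast ripple
    x = np.linspace(0.0, 1.0, 100)
    N = 0.6 + 0.4 * x[None, :] + 0.05 * np.sin(10 * np.pi * x)[:, None]
    # Eq. (7): the observed image is the clean content modulated by the noise field
    Io = N * (F + B)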

2  Proposed Method

A set of 14 document images from H-DIBCO (http://utopia.duth.gr/~ipratika/HDIBCO2012/-benchmark/) was used in this study. The images suffer from a contrast variation problem; each input is a 400 × 400-pixel image with 8-bit depth. In this paper, we assume the foreground (object) is darker than the background. First, the non-uniform document image is subjected to mean filtering to determine the mean value within a 3 by 3 window. The corresponding standard deviation is calculated as a boundary to distinguish between low intensity (dark region) and high intensity (bright region). The combination of mean and standard deviation values is used to detect the background, foreground, and noise. A new intensity replaces the intensity of the contrast or luminosity region to enhance the contrast. Finally, a few selected binarization methods were applied to the corrected image, and their accuracy was measured. The flow of the proposed method is illustrated in Fig. 1 below.


Figure 1: The block diagram of the proposed method

2.1 Statistic Parameter (Mean & Standard Deviation)

To improve the image quality based on contrast enhancement, the local standard deviation and local mean for each 3 by 3 window are obtained to classify the region into three groups: background, foreground, and contrast/luminosity. After testing three window sizes of 3 by 3, 9 by 9, and 15 by 15, the result produced by the 3 by 3 window was better than the others. A smaller window size (3 by 3) is selected because it is effective and increases the accuracy of detecting contrast variation pixels compared to a large window size. After applying this process, each pixel coordinate has three values: original intensity, mean value, and standard deviation. The main function of the standard deviation is to measure the spread of the intensity from the mean value. In this work, a combination of mean and standard deviation was used to separate the low intensity (dark area) and the high intensity (bright area).
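A sketch of this step is shown below (window size and variable names are illustrative; img is assumed to be a 2-D greyscale NumPy array). It computes the local mean and local standard deviation over a 3 by 3 neighbourhood, together with the global counterparts used as boundaries in Sections 2.2–2.4.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_statistics(img, size=3):
        # Local mean and local standard deviation over a size x size neighbourhood
        img = img.astype(np.float64)
        local_mean = uniform_filter(img, size=size)
        local_sq_mean = uniform_filter(img ** 2, size=size)
        local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
        return local_mean, local_std

    # Usage on the document image:
    # local_mean, local_std = local_statistics(img, size=3)
    # global_mean, global_std = img.mean(), img.std()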

2.2 Extraction of Foreground (Object) Pixels

The detection of the foreground area is crucial to avoid any change of intensity level and automatically retain the original information from the foreground area. This paper assumes that all foreground pixels are within the lower intensity, which means the foreground is darker than the background pixels. By using a statistical parameter from Section 2.1, a condition is set to detect the foreground region, denoted as follows:

std_local(i,j) < std_global & mean_local(i,j) < mean_global

The first condition is based on the local standard deviation within the dark region (foreground); the local standard deviation should be lower than the global value. However, this condition alone can also capture the problem area, such as the luminosity area, where the standard deviation is likewise low. Thus, a second condition based on the local mean is added to isolate the foreground region: in the dark area (low intensity), the local mean value should be lower than the global mean. When both conditions hold, the original intensity is kept unchanged.
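Reusing the names from the sketch in Section 2.1, one illustrative reading of this rule is a boolean mask over the image:

    # Dark, low-variation pixels: foreground, whose intensity is kept unchanged
    foreground_mask = (local_std < global_std) & (local_mean < global_mean)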

2.3 Noise Region

This paper concentrates on this region to normalize the contrast variation. Normally, in the luminosity and contrast region, the intensity value is higher than that of the background and object. We treat this region as a bright region and as noise. The proposed condition to detect the noise region is given as follows:

std_local(i,j) < std_global & mean_local(i,j) > mean_global

As explained in Section 2.2, a local standard deviation smaller than the global one also indicates the problematic region, as shown in Fig. 2. The local mean condition is therefore added to detect the noise region: in this bright area, the local mean should be higher than the global mean. In this case, the problem intensity is replaced with a new intensity based on the global mean value.
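Continuing the same sketch, the noise region can be detected and its intensity replaced with the global mean; treating "new intensity based on the mean global value" as a direct assignment of the global mean is an assumption made here.

    # Low local variation but brighter than the global mean: contrast/luminosity noise
    noise_mask = (local_std < global_std) & (local_mean > global_mean)
    corrected = img.astype(np.float64).copy()
    corrected[noise_mask] = global_mean    # replace problem pixels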


Figure 2: Simple illustration of (a) a dark region and (b) a bright region. Both cases present a lower standard deviation but a different mean

2.4 Extraction of Background Pixels

As discussed in Section 1.2, the luminosity and contrast problem has a strong effect, especially on the binarization stage. Pixels that do not fulfill the conditions in Sections 2.2 and 2.3 are assumed to belong to the background region, and their intensity values are kept unchanged. Under normal conditions, the background is slightly brighter than the foreground and darker than the luminosity (noise) region. This section aims to solve the problem of the border region between the background and the (dark) foreground. Thus, the background region is given by:

std_local(i,j) > std_global
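Putting Sections 2.1–2.4 together, a self-contained sketch of the whole correction step could be written as below; it is an interpretation of the rules above, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def correct_contrast(img, size=3):
        img = img.astype(np.float64)
        local_mean = uniform_filter(img, size=size)
        local_var = uniform_filter(img ** 2, size=size) - local_mean ** 2
        local_std = np.sqrt(np.maximum(local_var, 0.0))
        global_mean, global_std = img.mean(), img.std()

        corrected = img.copy()
        # Section 2.3: low variation and brighter than the global mean -> noise region
        noise = (local_std < global_std) & (local_mean > global_mean)
        corrected[noise] = global_mean
        # Sections 2.2 and 2.4: foreground and background pixels keep their intensity
        return corrected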

Fig. 3 illustrates the document image and its surface plot before and after applying the proposed method. Based on the surface plot, the original image clearly presents a severe contrast variation problem, and the background contrast is improved after applying the proposed method.


Figure 3: (a) The original image with contrast problem, (b) the resulting image after applying the proposed method

3  Experimental Result

In this experiment, the programs were written in C and run on Ubuntu (Linux 3.5) on an Asus laptop with an AMD Athlon™ II P320 Dual-Core 2.10 GHz processor and 3.00 GB RAM. The proposed method was applied to the 14 document images with contrast problems from the H-DIBCO 2012 dataset. Performance was evaluated using SNR, PSNR, Misclassification Error (ME), F-measure, and MPM. A limitation when applying a filtering technique is the choice of cut-off value.


Figure 4: (a) Original image, (b) HE, (c) Butterworth Homomorphic, (d) Difference of Gaussian (DoG), (e) Proposed method

The optimal cut-off value for the DoG [20] and Butterworth homomorphic (BHF) [22] methods on document images is 0.5. Fig. 4 shows the resulting images and the comparison with a few selected contrast enhancement methods. The result of the proposed method appears smoother and free of noise compared to the other methods. The contrast is improved, which automatically increases the image quality.

To prove the effectiveness of the proposed approach, the signal to noise ratio (SNR) is calculated using the following equation:

SNR = 10 log10(Mean[I] / Std[I])

where I represents the input image, Mean[I] is its mean intensity, and Std[I] is its standard deviation [23]. A high SNR value is a good indication of increased image quality, since the contrast variation has been normalized. Tab. 1 shows that the proposed method produced the highest SNR of 9.3500 compared to the other three methods.
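Written out, the measure reduces to a few lines (a sketch following the equation as reconstructed here):

    import numpy as np

    def snr(img):
        # SNR = 10 * log10(mean intensity / standard deviation); higher is better
        img = img.astype(np.float64)
        return 10.0 * np.log10(img.mean() / img.std())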

Tab. 1

A simple binarization using Otsu thresholding is applied to assess the effectiveness and accuracy of the proposed correction, with the result compared against the benchmark. Fig. 5 presents the binarized images using Otsu on the original image, HE [23], DoG [20], Butterworth homomorphic (BHF) [22], and the proposed corrected image. The binarization result for the proposed image was impressive and better than the other methods.

After applying the proposed method, the performance of the binarized image was evaluated based on the misclassification error (ME). A lower ME value indicates closer similarity between the resulting image and the benchmark and thus a higher-quality image [30]. The equation for ME is given by:

ME = 1 − (|B_O ∩ B_T| + |F_O ∩ F_T|) / (|B_O| + |F_O|) (8)
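With the background/foreground of the ground truth (B_O, F_O) and of the binarization result (B_T, F_T) expressed as boolean masks, ME can be computed as in the sketch below (an illustration of the standard definition; the mask names are chosen here).

    import numpy as np

    def misclassification_error(gt_foreground, result_foreground):
        B_O, F_O = ~gt_foreground, gt_foreground          # ground-truth background/foreground
        B_T, F_T = ~result_foreground, result_foreground  # result background/foreground
        correct = np.sum(B_O & B_T) + np.sum(F_O & F_T)
        return 1.0 - correct / (np.sum(B_O) + np.sum(F_O))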

The ME results of the selected contrast enhancement methods are shown in Fig. 6. The overall ME of the proposed image, 0.0270, is lower than that of the other methods, and Fig. 6 shows that it is consistently lower and close to 0.

Document image binarization plays an important role in document processing, especially for recognition and identification. Normally, poor image quality is caused by noise, illumination, artifacts from the camera, and degradation of the document [31–36].

Otsu Thresholding [37]:

In 1979, Otsu published a paper describing an automatic binarization method for grey-level images that selects an optimal threshold based on the global (total) variance and the between-class variance.
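A compact sketch of that criterion, maximizing the between-class variance over all candidate thresholds of an 8-bit image, is:

    import numpy as np

    def otsu_threshold(img):
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        prob = hist / hist.sum()
        levels = np.arange(256)
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()       # class probabilities
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (levels[:t] * prob[:t]).sum() / w0      # class means
            mu1 = (levels[t:] * prob[t:]).sum() / w1
            between = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
            if between > best_var:
                best_var, best_t = between, t
        return best_t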


Figure 5: Otsu thresholding on (a) original greyscale, (b) original Otsu, (c) HE, (d) Butterworth homomorphic, (e) DoG, (f) proposed image


Figure 6: Misclassification error (ME) for document image H-DIBCO dataset

Niblack method [38]:

This method exploits the relationship between the local mean and the local standard deviation to determine a specific threshold value within each sub-region. The equation is denoted as follows:

T(x,y)=m(x,y)+kδ(x,y) (9)

where the standard deviation δ(x,y) and local mean m(x,y) are determined using an 80 by 80 window size [39], while the standard k value is −0.2. This method does not work correctly if the image suffers from non-uniform illumination.
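A direct sketch of Eq. (9) with the quoted defaults (80 by 80 window, k = -0.2) follows; returning True for background pixels is a convention chosen here.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def niblack_binarize(img, size=80, k=-0.2):
        img = img.astype(np.float64)
        m = uniform_filter(img, size=size)                                          # local mean
        s = np.sqrt(np.maximum(uniform_filter(img ** 2, size=size) - m ** 2, 0.0))  # local std
        T = m + k * s                                                               # Eq. (9)
        return img > T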

Nick Method [40]:

This method improves the Niblack method by shifting the thresholding value downward using the following equation:

T(x,y) = m + k·sqrt((ΣI² − m²)/N) (10)

The k factor value is the same as in the Niblack method. The window size is defined as 15 by 15, while I and m represent the pixel intensity and the mean of the greyscale image, and N is the number of pixels in the window.
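Under one common reading of Eq. (10), with the sum of squared intensities taken over the local 15 by 15 neighbourhood and N its pixel count, a sketch is:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def nick_binarize(img, size=15, k=-0.2):
        img = img.astype(np.float64)
        n_pix = size * size                                   # N: pixels per window
        m = uniform_filter(img, size=size)                    # local mean
        sum_sq = uniform_filter(img ** 2, size=size) * n_pix  # local sum of I^2
        T = m + k * np.sqrt(np.maximum((sum_sq - m ** 2) / n_pix, 0.0))   # Eq. (10)
        return img > T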

Gradient-Based [41]: This method uses adaptive thresholding to separate the objects of interest from the non-uniform illumination background condition. This technique involves a few steps like edge detection and threshold surface construction.

Bradley Method [42]: This method uses the integral image of the input image. It improves Wellner's method [43] and is robust to illumination changes within the image. The default window size (w) is 15 by 15 and the local threshold (T) is 10.

Bernsen method [44]: This method obtains the threshold from the mean of the local maximum and minimum intensities and uses a user-provided contrast threshold (k). The algorithm depends on the value of k and on the N by N window size. The default window size (w) is 3 by 3 and k is 15.

T(x,y) = (Zmax + Zmin) / 2 (11)
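A sketch of Eq. (11) with the quoted defaults (3 by 3 window, contrast threshold k = 15) is shown below; assigning low-contrast windows to the background is a simplifying assumption, not the paper's rule.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def bernsen_binarize(img, size=3, k=15):
        img = img.astype(np.float64)
        z_max = maximum_filter(img, size=size)
        z_min = minimum_filter(img, size=size)
        T = (z_max + z_min) / 2.0              # Eq. (11): local midrange
        binary = img > T
        binary[(z_max - z_min) < k] = True     # low-contrast windows -> background (assumed)
        return binary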

Local adaptive thresholding [23]: This basic and simple algorithm separates the foreground from a background with non-uniform illumination. The default local window size (w) is 15 by 15 and the local threshold (T) is 0.05. Fig. 7 displays the seven (7) binarization methods applied to the corrected image. The results compare the images before and after applying the proposed method, where all seven (7) binarization methods show clear improvement. A great improvement was obtained for the Otsu method [37], Niblack [38], Bernsen [44], and Gradient-Based [41], while the Local Adaptive method [23], Bradley et al. [42], and Khurshid et al. [40] improved slightly.

In order to check and prove the performance before and after applying the proposed method, a few evaluation parameters such as F-Measure, Peak Signal to Noise Ratio (PSNR), and Misclassification Penalty Metric (MPM) are calculated. All the equations can be referred to in the H-DIBCO competition report [45].


Figure 7: Comparison of selected binarization methods before and after applying the proposed method

The performance evaluation is presented in Tab. 2, where all evaluation metrics improve after applying the proposed method. High values for F-measure, Accuracy, and PSNR and low values for ME and MPM indicate the best quality images. All binarization methods show an improvement after segmenting the corrected images.

Tab. 2

The histogram in Fig. 8 indicates the increment (%) of the seven (7) binarization methods after employing the proposed correction, where all the methods show an increment. Based on Fig. 8, the MPM evaluation shows the greatest improvement across all binarization methods. The average increments over five (5) types of evaluation are: Otsu = 41.64%, Local Adaptive = 7.05%, Niblack = 30.28%, Bernsen = 25%, Bradley = 3.54%, Nick = 1.59%, and Gradient-Based = 14.6%.


Figure 8: Increment evaluation result of seven (7) binarization methods

4  Conclusion

In image processing, bad illumination conditions influence image quality, especially the contrast of dark and bright regions. The present study was developed to determine the effect of contrast variation on document images before the binarization process. This work proposes a novel method for contrast enhancement and background correction based on statistical data. It aims to normalize the contrast and luminosity problem and thereby improve the binarization results. This paper classifies the image into three (3) regions: background, foreground, and contrast variation/luminosity. The contrast and luminosity region is replaced with a new intensity based on the local mean and local standard deviation. This investigation shows that the proposed method is very effective at normalizing the contrast variation problem, as supported by the SNR result, where the proposed corrected image achieves 9.350, higher than the other three (3) methods. The main finding presented in this paper is that correcting the contrast improves the binarization results, as shown in Tab. 2 and Fig. 8. Based on five (5) evaluation techniques, all binarization methods show an improvement when using the corrected images. The Otsu method shows the highest increment of 41.64%, while the Nick method shows the lowest increment of 1.59%.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. M. Kaur, K. Jain and V. Lather, “Study of image enhancement techniques: A review,” International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, no. 4, pp. 846– 848, 2013.
  2. M. K. Jain and I. B. Arya, “A survey of contrast enhancement technique for remote sensing images,” International Journal of Electrical and Computer Engineering, vol. 3, no. 2, pp. 1–6, 2014.
  3. R. F. Moghaddam and M. Cheriet, “Low quality document image modeling and enhancement,” International Journal on Document Analysis and Recognition, vol. 11, no. 4, pp. 183–201, 2009.
  4. W. A. Mustafa and M. M. M. A. Kader, “Binarization of document image using optimum threshold modification,” Journal of Physics: Conference Series, vol. 1019, no. 012022, pp. 1–8, 2018.
  5. S. C. F. Lin, C. Y. Wonga, M. A. Rahman, G. Jiang, S. Liu et al., “Image enhancement using the averaging histogram equalization (AVHEQ) approach for contrast improvement and brightness preservation,” Computers & Electrical Engineering, vol. 46, no. 1–2, pp. 356–370, 201
  6. S. Singh and S. Sharma, “A survey of image enhancement techniques,” International Journal of Computer Science, vol. 2, no. 5, pp. 1–5, 2014.
  7. N. Longkumer, M. Kumar and R. Saxena, “Contrast enhancement techniques using histogram equalization: A survey,” International Journal of Current Engineering and Technology, vol. 4, no. 3, pp. 1561–1565, 2014.
  8. M. Kaur, J. Kaur and J. Kaur, “Survey of contrast enhancement techniques based on histogram equalization,” International Journal of Advanced Computer Science and Applications, vol. 2, no. 7, pp. 1–5, 2011.
  9. R. Jaiswal, A. G. Rao and H. P. Shukla, “Image enhancement techniques based on histogram equalization,” International Journal of Advances in Electrical and Electronics Engineering, vol. 1, no. 2, pp. 69–78, 2010.
  10. Y.-T. Kim, “Contrast enhancement using brightness preserving bi-histogram equalization,” IEEE Transactions on Consumer Electronics, vol. 43, no. 1, pp. 1–8, 1997.
  11. Y.-T. Kim, “Quantized bi-histogram equalization,” in Int. Conf. on Acoustics, Speech, & Signal Processing, Munich, Germany, vol. 4, pp. 2797–2800, 1997.
  12. Y. Wang, Q. Chen and B. Zhang, “Image enhancement based on equal area dualistic sub-image histogram equalization method,” IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68– 75, 1999.
  13. S. D. Chen and A. R. Ramli, “Minimum mean brightness error bi-histogram equalization in contrast enhancement,” IEEE Transactions on Consumer Electronics, vol. 49, no. 4, pp. 1310–1319, 2003.
  14. K. Hasikin and N. A. Mat Isa, “Adaptive fuzzy contrast factor enhancement technique for low contrast and non-uniform illumination images,” Signal Image and Video Processing, vol. 8, no. 8, pp. 1591– 1603, 2012.
  15. Z. Zhou, N. Sang and X. Hu, “Global brightness and local contrast adaptive enhancement for low illumination color image,” Optik (Stuttg), vol. 125, no. 6, pp. 1795–1799, 2014.
  16. S. A. M. Saleh and H. Ibrahim, “Mathematical equations for homomorphic filtering in frequency domain: A literature survey,” in Int. Conf. on Information and Knowledge Management, Kuala Lumpur, Malaysia, vol. 45, pp. 74–77, 2012.
  17. W. Wang and X. Cui, “A background correction method for particle image under non-uniform illumination conditions,” in Int. Conf. on Signal Processing Systems, Dalian, China, pp. 695–699, 2010.
  18. E. Ardizzone, R. Pirrone and O. Gambino, “Illumination correction on MR images,” Journal of Clinical Monitoring and Computing, vol. 20, no. 6, pp. 391–398, 2006.
  19. H. Shahamat and A. A. Pouyan, “Face recognition under large illumination variations using homomorphic filtering in spatial domain,” Journal of Visual Communication and Image Representation, vol. 25, no. 5, pp. 970–977, 2014.
  20. C.-N. Fan and F.-Y. Zhang, “Homomorphic filtering based illumination normalization method for face recognition,” Pattern Recognition Letters, vol. 32, no. 10, pp. 1468–1479, 2011.
  21. K. Delac, M. Grgic and T. Kos, “Sub-image homomorphic filtering technique for improving facial identification under difficult illumination conditions,” in Int. Conf. on Systems, Signals and Image Processing, Budapest, Hungary, pp. 95–98, 2006.
  22. H. G. Adelmann, “Butterworth equations for homomorphic filtering of images,” Computers in Biology and Medicine, vol. 28, no. 2, pp. 169–181, 1998.
  23. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2008.
  24. M. Foracchia, E. Grisan and A. Ruggeri, “Luminosity and contrast normalization in retinal images,” Medical Image Analysis, vol. 9, no. 3, pp. 179–190, 2005.
  25. J. Liang, D. Doermann and H. Li, “Camera-based analysis of text and documents: A survey,” International Journal on Document Analysis and Recognition, vol. 7, no. 2–3, pp. 84–104, 2005.
  26. B. M. Singh, R. Sharma, D. Ghosh and A. Mittal, “Adaptive binarization of severely degraded and non-uniformly illuminated documents,” International Journal on Document Analysis and Recognition, vol. 17, no. 4, pp. 393–412, 2014.
  27. E. H. B. Smith, “Characterization of image degradation caused by scanning,” Pattern Recognition Letters, vol. 19, no. 13, pp. 1191–1197, 1998.
  28. A. Stubbe, C. Ringlstetter and K. U. Schulz, “Genre as noise: Noise in genre,” International Journal on Document Analysis and Recognition, vol. 10, no. 3–4, pp. 199–209, 2007.
  29. R. Verma and J. Ali, “A comparative study of various types of image noise and efficient noise removal techniques,” International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, no. 10, pp. 617–622, 2013.
  30. W. Azani Mustafa, H. Yazid and S. Yaacob, “Illumination normalization of non-uniform images based on double mean filtering,” in IEEE Int. Conf. on Control System, Computing and Engineering, Penang, Malaysia, pp. 366–371, 2014.
  31. W. A. Mustafa, H. Aziz, W. Khairunizam, Z. Ibrahim, S. Ab et al., “Review of different binarization approaches on degraded document images,” in IEEE Int. Conf. on Computational Approach in Smart Systems Design and Applications, Kuching, Malaysia, pp. 1–8, 2018.
  32. J. Sauvola and M. Pietika, “Adaptive document image binarization,” Pattern Recognition, vol. 33, no. 2, pp. 225–236, 2000.
  33. Y. Zhang and L. Wu, “Fast document image binarization based on an improved adaptive Otsu’s method and destination word accumulation,” Journal of Computer Information Systems, vol. 6, no. 7, pp. 1886–1892, 2011.
  34. K. Ntirogiannis, B. Gatos and I. Pratikakis, “A combined approach for the binarization of handwritten document images,” Pattern Recognition Letters, vol. 35, no. 6, pp. 3–15, 2014.
  35. W. A. Mustafa and M. M. M. A. Kader, “Binarization of document images: A comprehensive review,” Journal of Physics: Conference Series, vol. 1019, no. 12023, pp. 1–9, 2018.
  36. W. A. Mustafa, H. Yazid and M. Jaafar, “An improved sauvola approach on document images binarization,” Journal of Telecommunication, Electronic and Computer Engineering, vol. 10, no. 2, pp. 43–50, 2018.
  37. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  38. W. Niblack, An Introduction to Digital Image Processing. Englewood Cliffs: Prentice-Hall, 1986.
  39. J. Shi, N. Ray and H. Zhang, “Shape-based local thresholding for binarization of document images,” Pattern Recognition Letters, vol. 33, no. 1, pp. 24–32, 2012.
  40. K. Khurshid, I. Siddiqi, C. Faure and N. Vincent, “Comparison of Niblack inspired binarization methods for ancient documents,” Proceedings of SPIE-IS&T Electronic Imaging, vol. 7247, pp. 1–9, 2009.
  41. H. Yazid and H. Arof, “Gradient-based adaptive thresholding,” Journal of Visual Communication and Image Representation, vol. 24, no. 7, pp. 926–936, 2013.
  42. D. Bradley and G. Roth, “Adaptive thresholding using the integral image,” Journal of Graphics, GPU, and Game Tools, vol. 12, no. 2, pp. 13–21, 2011.
  43. Wellner, “Adaptive thresholding for the digital desk,” EuroPARC, pp. 93–110, 1993. [Online]. Available: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.8856&rep=rep1&type=pdf.
  44. J. Bernsen, “Dynamic thresholding of grey-level images,” in Proc. of the Eighth Int. Conf. on Pattern Recognition, Berlin, Germany, pp. 1251–1255, 1986.
  45. I. Pratikakis, B. Gatos and K. Ntirogiannis, “ICDAR 2011 document image binarization contest,” in Int. Conf. Document Analysis and Recognition, Beijing, China, pp. 1506–1510, 2011.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.