Computer Systems Science & Engineering
DOI: 10.32604/csse.2023.027187
Article
Visual Enhancement of Underwater Images Using Transmission Estimation and Multi-Scale Fusion
1Department of Electronics and Communication Engineering, RMK College of Engineering and Technology, Tiruvallur, Tamilnadu, 601206, India
2Department of Electronics and Communication Engineering, RMD Engineering College, Gummidipundi, Tamilnadu, 601206, India
*Corresponding Author: R. Vijay Anandh. Email: vijayanandhphd@gmail.com
Received: 12 January 2022; Accepted: 10 March 2022
Abstract: The demand for the exploration of ocean resources is increasing exponentially, and underwater image data play a significant role in many research areas. Despite this, the visual quality of underwater images is degraded by two main factors, namely backscattering and attenuation. Visual enhancement has therefore become an essential step in recovering useful information from these images. Many algorithms have been proposed over the past decade for improving image quality. This paper proposes a single-image enhancement technique that requires no external datasets. The degraded images are subjected to two main processes, namely color correction and image fusion. First, the veiling light and the transmission are estimated to determine the correction needed; veiling light refers to unwanted scattered light, whereas transmission refers to the light required for color correction. These estimates are applied in the scene recovery equation. The color-corrected image is then passed to a fusion process, in which two versions of the image are produced by white balance and contrast enhancement techniques. From these, three weight maps, namely luminance, saliency and chromaticity, are derived and fused using the Laplacian pyramid. The results are compared graphically with the input data using RGB histogram plots. Finally, image quality is measured and tabulated using underwater image quality measures.
Keywords: Underwater image; backscattering; attenuation; image fusion; veiling light; white balance; Laplacian pyramid
1 Introduction

Underwater exploration has become a demanding field today. Research is being undertaken in resource exploration and extraction, as the ocean contains an abundance of resources that are essential for mankind. Assessing the state of those resources is essential, and this is where visual data play a predominant role. However, the acquired visual data suffer from degradation. The two main factors degrading the visual quality of underwater images are backscattering and attenuation.
Backscattering [1] occurs when light is reflected by suspended particles back toward its source. These particles lie between the camera lens and the object. The intensity of backscattered light depends mostly on the turbidity of the water: the greater the turbidity, the stronger the backscattering effect. Besides suspended light-scattering particles, excessive sand and plankton content can also cause backscattering. Fig. 1 describes the underwater imaging model, showing the path of backscattered light.
Attenuation is also known as extinction. In physics, attenuation is defined as the loss of flux intensity through a medium. The intensity of ambient light decreases with depth due to absorption, which makes the scene look increasingly bluish as the other colors are absorbed. Passive factors that cause attenuation include scattering and noise. The color spectrum model of light attenuation is shown in Fig. 2.
The paper is organized as follows: Section 2 reviews related works; Section 3 gives a detailed explanation of the proposed method; Section 4 presents the results; Section 5 concludes the paper; and Section 6 describes the future scope of the proposed work.
2 Related Works

New visual enhancement techniques are developed continually, each with its own specificity based on the requirements. In this section, some important techniques related to the proposed approach are reviewed.
Histogram equalization improves the visual quality of underwater images taken in distorted light conditions; its main objective is to enhance the contrast of the degraded image. Gwanggil [2] developed a histogram equalization method for color images. In this approach, an RGB image is taken as input, first converted to the HSV (Hue, Saturation, Value) color space and split into three channels. Histogram equalization is then applied to the S and V channels. Finally, the channels are merged and converted back to an RGB image, as shown in Fig. 3.
This approach is not applicable when a flat histogram is needed. To overcome this, Adaptive Histogram Equalization (AHE) was developed, in which several histograms are computed, one for each region of the image, and each pixel is transformed using a function derived from its neighboring region. The main problem with AHE is the over-amplification of noise. To overcome this, Rajesh Kumar et al. [3] proposed Contrast Limited Adaptive Histogram Equalization (CLAHE).
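For illustration, the following sketch (using OpenCV; the file name and parameter values are illustrative, not taken from the paper) applies plain histogram equalization to the S and V channels as in [2], followed by the CLAHE variant of [3]:

```python
import cv2

# Load a degraded underwater image (the path is illustrative).
bgr = cv2.imread("underwater.png")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Plain histogram equalization on the S and V channels, as in [2].
s_eq, v_eq = cv2.equalizeHist(s), cv2.equalizeHist(v)
he_result = cv2.cvtColor(cv2.merge([h, s_eq, v_eq]), cv2.COLOR_HSV2BGR)

# CLAHE limits contrast amplification per tile, curbing AHE's noise blow-up [3].
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_result = cv2.cvtColor(cv2.merge([h, s, clahe.apply(v)]), cv2.COLOR_HSV2BGR)
```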
The Dark Channel Prior (DCP) approach was initially proposed for atmospheric hazy images. Its main objective is to estimate the transmission of the input image, which is then used to remove the hazy component. Drews et al. [4] developed the Underwater Dark Channel Prior (UDCP) to estimate transmission underwater more efficiently than the standard DCP algorithm.
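A minimal sketch of the UDCP idea, assuming a BGR uint8 input and an illustrative patch size, computes the dark channel from the green and blue channels only:

```python
import cv2
import numpy as np

def underwater_dark_channel(img, patch=15):
    """UDCP-style dark channel: minimum over G and B only, since
    the red channel is too attenuated underwater to be informative [4]."""
    # In OpenCV's BGR layout, channels 0 and 1 are blue and green.
    gb_min = np.min(img[:, :, :2].astype(np.float32) / 255.0, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(gb_min, kernel)  # local minimum filter over the patch
```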
Fig. 4 shows the flow diagram of the dark channel prior analysis. Galdran et al. [5] observed that red channel intensity decreases as the distance from the camera increases, and developed the Red Channel Prior to compensate; it exploits the rapid attenuation of the red (long-wavelength) component underwater. Simon et al. [6] proposed a hierarchy-based model that identifies haze-opaque regions, with the main objective of estimating backscatter.
Some methods rely on specialized hardware for the visual enhancement of underwater images. For example, the divergent-beam underwater LIDAR imaging system [7] uses an optical sensing technique to capture underwater images in highly turbid water. However, the required investment is very expensive and the process is time-consuming. Even after deployment, the device must be monitored and cleaned regularly, which is impractical. This method is therefore unsuitable for continuous retrieval of underwater image data.
3 Proposed Method

In the proposed method, underwater images are visually enhanced in two important steps: color restoration and fusion.
The main objective of the color restoration process is to solve the attenuation problem in underwater images. As shown in Fig. 5, the color with the lowest intensity (most often red) is recovered in three main steps: veiling light estimation, transmission estimation and scene recovery.
1) Veiling Light Estimation: Veiling light, or background light [8], is a key quantity for many dehazing algorithms. It is the ambient light scattered by underwater particles in a hazy area into the imaging line of sight, degrading the image and lowering its visual quality. First, bright regions are estimated by building a histogram of the luma (Y) channel in the YCbCr color space. Equivalent pixels are identified for a refinement step, and the veiling light is finally estimated as the average of the remaining pixels [9]. The veiling light is denoted V = (x_v, c), where x_v ∈ ℝ² is the image location of the veiling light and c ∈ ℝ³ is the RGB value at x_v.
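A simplified reading of this step might look as follows; the function name, the top_fraction parameter and the omission of the refinement step are assumptions made for illustration:

```python
import cv2
import numpy as np

def estimate_veiling_light(bgr, top_fraction=0.001):
    """Average the color of the brightest pixels in the luma channel,
    a simplified version of the veiling light estimation in [9]."""
    y = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    n = max(1, int(top_fraction * y.size))
    idx = np.unravel_index(np.argsort(y, axis=None)[-n:], y.shape)
    return bgr[idx].astype(np.float32).mean(axis=0)  # c in R^3 (BGR order)
```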
2) Transmission Estimation: The main objective of transmission estimation is to prevent oversaturation and artefacts in background regions. Oversaturation arises when image values fall outside the range [0, 1], typically because bright regions are estimated incorrectly, which in turn yields incorrect transmission values. Artefacts are features that appear in an image but are not present in the captured scene; they normally appear in background regions where the estimated transmission is low.
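As a hedged sketch, a common DCP-style transmission estimate, combined with the clipping that this step calls for, could be written as:

```python
import numpy as np

def estimate_transmission(dark, omega=0.95, t0=0.1):
    """A common DCP-style estimate: t = 1 - omega * dark_channel,
    clipped to [t0, 1] so background regions with very low transmission
    do not produce oversaturated values or artefacts."""
    t = 1.0 - omega * dark
    return np.clip(t, t0, 1.0)
```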
3) Scene Recovery: The estimated veiling light and transmission are applied in the image formation model. Under the standard hazy image formation model $I(x) = J(x)\,t(x) + V\,(1 - t(x))$, the scene radiance is recovered as

$$J(x) = \frac{I(x) - V}{\max(t(x), t_0)} + V,$$

where $I$ is the observed image, $J$ the recovered scene, $t$ the transmission and $t_0$ a small lower bound that prevents division by near-zero transmission.
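A sketch of the recovery step under the model above (the function and parameter names are illustrative):

```python
import numpy as np

def recover_scene(bgr, V, t, t0=0.1):
    """Invert the formation model I = J*t + V*(1 - t) per pixel.
    V is the veiling light color in 0..255, t the transmission map."""
    I = bgr.astype(np.float32) / 255.0
    t = np.clip(t, t0, 1.0)[..., np.newaxis]   # broadcast over color channels
    J = (I - V / 255.0) / t + V / 255.0
    return np.clip(J, 0.0, 1.0)
```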
The degradation caused by backscattering is corrected using a fusion process [10]. The image obtained from the restoration process is taken as input, and two versions of it are produced by applying white balance and contrast enhancement. From the results, three weight maps are derived, namely luminance, saliency and chromaticity. Finally, the images are fused using Laplacian fusion. The flow diagram of the image fusion process is shown in Fig. 6.
1) White Balance: The main objective of the white balance algorithm is to remove color casts [11], which arise from the selective absorption of colors with depth. The primary step of white balancing is to improve the image appearance by eliminating unnatural color casts caused by the illumination. A simple white balance algorithm is used to improve color constancy: the first version of the restored image is taken as input, the mean luminance is identified by converting the RGB image to grayscale, the red, green and blue channels are extracted and their means computed, and finally the channel means are equalized and the channels recombined into a single RGB image.
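A minimal gray-world sketch of this procedure:

```python
import numpy as np

def gray_world_white_balance(bgr):
    """Equalize the per-channel means toward the overall mean luminance,
    a simple gray-world white balance."""
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gray = means.mean()                       # target mean luminance
    return np.clip(img * (gray / means), 0, 255).astype(np.uint8)
```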
2) Contrast Enhancement: The purpose of contrast enhancement is to make the objects of interest in the image distinguishable. In this process, low-contrast regions are enhanced. Low contrast arises in various ways, such as airlight influence, attenuation, turbidity and backscattering, and the strength of these factors increases linearly with the distance of the object from the water surface and from the camera. Following Ancuti et al. [12], the contrast-enhanced version is commonly obtained by amplifying deviations from the mean luminance, $I_{ce}(x) = \gamma\,(I(x) - \bar{I})$, with gain factor $\gamma$ [13].
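A sketch of this second fusion input; re-centering on the mean after amplification is an implementation choice made here so the result stays displayable, and the gain value is illustrative:

```python
import numpy as np

def enhance_contrast(bgr, gamma=2.5):
    """Second fusion input: amplify deviations from the mean luminance,
    in the spirit of the gamma * (I - I_mean) formulation."""
    img = bgr.astype(np.float32) / 255.0
    mean_lum = img.mean()
    return np.clip(gamma * (img - mean_lum) + mean_lum, 0.0, 1.0)
```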
3) Weight Maps: The main disadvantage of the enhancement operations above is that the same operation is applied to all regions of the image, which alters regions that do not require enhancement. To overcome this, weight maps are introduced; their objective is to identify the spatially relevant regions of the degraded image. Three weight maps, namely luminance, saliency and chromaticity, are used to identify those regions.
a) Luminance weight map: The luminance weight map distinguishes visible regions, assigning higher values to them and lower values to non-visible regions. To achieve this, the visibility of each pixel is measured using the RGB color channels. The following expression [7] is applied at every pixel:

$$W_L^k(x) = \sqrt{\tfrac{1}{3}\big[(R^k(x) - L^k(x))^2 + (G^k(x) - L^k(x))^2 + (B^k(x) - L^k(x))^2\big]},$$

where $W_L$ is the weight map to be computed, $L$ is the luminance, $R$, $G$, $B$ are the color channels and $k$ indexes the input version.
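A direct NumPy transcription of this expression, assuming a float RGB image in [0, 1]:

```python
import numpy as np

def luminance_weight(img):
    """W_L = sqrt(mean over channels of (channel - luminance)^2),
    computed per pixel; img is a float RGB array in [0, 1]."""
    lum = img.mean(axis=2, keepdims=True)        # L = (R + G + B) / 3
    return np.sqrt(((img - lum) ** 2).mean(axis=2))
```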
b) Saliency weight map: Saliency is a perceptual quality measure that identifies the attractive parts of an image; it is also termed visual attention, and it makes salient portions stand out from their neighboring regions. Following Ancuti et al. [12], saliency can be estimated with an Achanta-style measure:

$$W_S^k(x) = \big\| I_{\mu}^k - I_{\omega hc}^k(x) \big\|,$$

where $I_{\mu}^k$ is the mean image vector of input $k$ and $I_{\omega hc}^k$ is a Gaussian-blurred version of the same input.
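A sketch of this measure, with the Gaussian kernel size chosen for illustration:

```python
import cv2
import numpy as np

def saliency_weight(bgr):
    """Achanta-style saliency: per-pixel distance between the mean image
    color and a Gaussian-blurred version, computed in Lab space."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    mean = lab.reshape(-1, 3).mean(axis=0)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    return np.linalg.norm(blurred - mean, axis=2)
```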
c) Chromatic weight map: The chromatic weight map controls the saturation gain of the resulting image; saturation strongly influences human preference in visual appeal. It can be determined using [12]:

$$W_C^k(x) = \exp\!\left(-\frac{(S^k(x) - S_{\max})^2}{2\sigma^2}\right),$$

where $S^k$ is the saturation of input $k$, $S_{\max}$ is the maximum of the saturation range and $\sigma$ is a standard deviation, typically set to 0.3.
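A sketch, assuming $S_{\max} = 1$ and $\sigma = 0.3$:

```python
import cv2
import numpy as np

def chromatic_weight(bgr, sigma=0.3):
    """Gaussian falloff of the gap between each pixel's saturation
    and the maximum saturation (S_max = 1)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float32) / 255.0
    return np.exp(-((s - 1.0) ** 2) / (2 * sigma ** 2))
```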
4) Multi-Scale Fusion: Image fusion merges the details of two or more images into a single image. The weight maps obtained from the two versions of the image are combined into one image using the following expression [14]:

$$R_f(x) = \sum_{k} \bar{W}_k(x)\, I_k(x),$$

where $\bar{W}_k(x)$ are the normalized weight map values of input $k$, $I_k(x)$ is the corresponding input image and $R_f(x)$ is the fused output. Artefacts can occur when this expression is applied directly. To overcome this, a pyramid approach is used: a multi-scale representation decomposes the image or signal into progressively subsampled versions. A Gaussian pyramid [15] is used to achieve the multi-scale representation, in which the resolution of the image is reduced at each level by low-pass filtering and subsampling.
Here $h$ and $w$ represent the rows and columns of the layered image, and the number of required decomposition levels is typically chosen as $n = \lfloor \log_2 \min(h, w) \rfloor$. The final expression of the fusion pyramid is

$$R_f(x) = \sum_{l=1}^{n} \sum_{k} G_l\{\bar{W}_k(x)\}\; L_l\{I_k(x)\},$$

where $G_l\{\cdot\}$ is level $l$ of the Gaussian pyramid of the normalized weight map and $L_l\{\cdot\}$ is level $l$ of the Laplacian pyramid of the input; the fused levels are upsampled and summed to reconstruct the output.
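The following sketch implements this multi-scale fusion with OpenCV pyramid primitives; the level count and the normalization epsilon are illustrative choices, not values from the paper:

```python
import cv2
import numpy as np

def pyramid_fuse(inputs, weights, levels=5):
    """Blend Gaussian pyramids of the normalized weights with Laplacian
    pyramids of the inputs, then collapse the fused pyramid."""
    wsum = np.sum(weights, axis=0) + 1e-8
    weights = [w / wsum for w in weights]          # normalize weight maps

    fused = None
    for img, w in zip(inputs, weights):
        # Gaussian pyramid of the weight, Laplacian pyramid of the input.
        gp_w, gp_i = [w.astype(np.float32)], [img.astype(np.float32)]
        for _ in range(levels):
            gp_w.append(cv2.pyrDown(gp_w[-1]))
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        lap = [gp_i[l] - cv2.pyrUp(gp_i[l + 1], dstsize=gp_i[l].shape[1::-1])
               for l in range(levels)] + [gp_i[-1]]  # last entry: residual
        contrib = [lap[l] * gp_w[l][..., np.newaxis] for l in range(levels + 1)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]

    # Collapse the fused pyramid from coarse to fine.
    out = fused[-1]
    for l in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused[l].shape[1::-1]) + fused[l]
    return np.clip(out, 0.0, 1.0)
```

In this sketch the low-pass weight pyramids suppress the seams that the naive per-pixel sum would produce, which is exactly why the pyramid approach avoids the artefacts mentioned above.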
4 Results

Fig. 7 shows the input images used for the single-image visual enhancement technique, captured at various depths and turbidity levels. These images first undergo the color restoration process, in which the transmission is estimated, and are then passed to image fusion based on the weight maps derived from the outputs of the white balance and contrast enhancement algorithms. Fig. 8 shows the final restored images.
RGB Plot:
The RGB plot visualizes the data layers of an image in two-dimensional space, showing the color intensity of each channel with brightness on the x-axis and pixel count on the y-axis. Comparing the plots of the input and output images shows that, in the degraded input, the red channel intensity is overshadowed by the blue and green channels, as illustrated in Figs. 9 and 10.
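A sketch of such a plot with OpenCV and Matplotlib:

```python
import cv2
from matplotlib import pyplot as plt

def rgb_plot(bgr, title):
    """Per-channel intensity histograms: brightness (0-255) on the
    x-axis, pixel count on the y-axis."""
    for i, color in enumerate(("b", "g", "r")):
        hist = cv2.calcHist([bgr], [i], None, [256], [0, 256])
        plt.plot(hist, color=color)
    plt.title(title)
    plt.xlabel("brightness")
    plt.ylabel("pixel count")
    plt.show()
```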
UIQM:
The results obtained are verified and tabulated using the Underwater Image Quality Measure (UIQM) [16]. The visual quality of underwater images is assessed with three component measures: the Underwater Image Colorfulness Measure (UICM), the Underwater Image Sharpness Measure (UISM) and the Underwater Image Contrast Measure (UIConM).
1) UICM: The UICM quantifies the colorfulness of the given image. Underwater images are typically degraded by the low intensity of red light, so a key objective of enhancement is color rendition. Following [16], colorfulness is computed from the opponent color components as

$$UICM = -0.0268\,\sqrt{\mu_{\alpha,RG}^2 + \mu_{\alpha,YB}^2} + 0.1586\,\sqrt{\sigma_{\alpha,RG}^2 + \sigma_{\alpha,YB}^2},$$

where $\mu_{\alpha}$ and $\sigma_{\alpha}^2$ are the asymmetric alpha-trimmed mean and variance of the opponent components $RG = R - G$ and $YB = (R + G)/2 - B$.
2) UISM: The UISM estimates the sharpness of the image, which reflects how well details are preserved. Following [16], edge maps are obtained with a Sobel operator on each color channel, and sharpness is measured as

$$UISM = \sum_{c \in \{R,G,B\}} \lambda_c \, EME(\text{grayscale edge}_c),$$

where $EME$ is the block-based enhancement measure $EME = \frac{2}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{m=1}^{k_2} \log \frac{I_{\max,l,m}}{I_{\min,l,m}}$ and the weights are $\lambda_R = 0.299$, $\lambda_G = 0.587$, $\lambda_B = 0.114$.
3) UIConM: Contrast is defined as the difference between bright and dark pixels, and the UIConM estimates the contrast of the image. Following [16], it is obtained by applying the logAMEE measure to the intensity image,

$$UIConM = \operatorname{logAMEE}(\text{Intensity}),$$

where logAMEE is a block-based, logarithmic Michelson-contrast measure computed with parameterized logarithmic image processing (PLIP) operations.
The values obtained from the UICM, UISM and UIConM are combined to calculate the UIQM as the linear combination [17]

$$UIQM = c_1 \times UICM + c_2 \times UISM + c_3 \times UIConM,$$

with coefficients $c_1 = 0.0282$, $c_2 = 0.2953$ and $c_3 = 3.5753$ as reported in [16].
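Assuming the three component scores have already been computed, the final combination reduces to:

```python
def uiqm(uicm, uism, uiconm, c1=0.0282, c2=0.2953, c3=3.5753):
    """Linear combination of the three component measures, with the
    coefficients reported by Panetta et al. [16]."""
    return c1 * uicm + c2 * uism + c3 * uiconm
```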
The image quality measured for the degraded input images and the dehazed images is tabulated in Tabs. 1 and 2.
5 Conclusion

Underwater images play a significant role in many research areas; geologists, for instance, require such visual data for continuous measurement of the ocean environment. In this paper, the causes of underwater image distortion, backscattering and attenuation, were identified, and a survey of traditional single-image enhancement algorithms was conducted. Based on those approaches, the proposed method was developed. First, the color requiring restoration is identified and the transmission needed for the restoration process is estimated. Dehazing is then performed by the image fusion process, using the weight maps obtained from the white balance and contrast enhancement steps applied to the two versions of the restored image. Finally, RGB plots are produced to show the color difference between the input and resultant images.

6 Future Scope

The proposed single-image visual enhancement can be improved in various ways. The approach can be extended to greater depths and higher turbidity levels. The results obtained can serve as a reference dataset for deep learning algorithms. The dehazing step can be upgraded with newer fusion techniques, and more weight maps can be used in the fusion process to improve the accuracy of the recovered data.
Funding Statement: The authors received no specific funding for this study.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
References

1. H. Lu, Y. Li and L. Zhang, “Contrast enhancement for images in turbid water,” Journal of the Optical Society of America A, vol. 32, no. 5, pp. 886–893, 2015. [Google Scholar]
2. J. Gwanggil, “Color image enhancement by histogram equalization in heterogeneous color space,” International Journal of Multimedia and Ubiquitous Engineering, vol. 9, no. 7, pp. 309–318, 2014. [Google Scholar]
3. R. Rajesh Kumar, G. Puran and S. Balvant, “Underwater image segmentation using clahe enhancement and thresholding,” International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 1, pp. 118–123, 2012. [Google Scholar]
4. S. Drews, R. Paulo and J. Nascimento, “Underwater depth estimation and image restoration based on single images,” IEEE Computer Graphics and Applications, vol. 36, no. 9, pp. 24–35, 2016. [Google Scholar]
5. A. Galdran, D. Pardo and A. Picón, “Automatic red-channel underwater image restoration,” Journal of Visual Communication and Image Representation, vol. 26, no. 10, pp. 132–145, 2015. [Google Scholar]
6. E. Simon and C. Lars, “Hierarchical rank-based veiling light estimation for underwater dehazing,” in British Machine Vision Conf., Israel Institute, North America, pp. 73–81, 2015. [Google Scholar]
7. A. Derya and T. Haifa, “Sea-thru: A method for removing water from underwater images,” in IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, pp. 832–843, 2019. [Google Scholar]
8. J. Lu, N. Li and S. Zhang, “Multi-scale adversarial network for underwater image restoration,” Optics and Laser Technology, vol. 110, no. 23, pp. 105–113, 2019. [Google Scholar]
9. D. Berman, T. Treibitz and S. Avidan, “Diving into haze-lines: Color restoration of underwater images,” in British Machine Vision Conf. (BMVC), Israel Institute, North America, pp. 723–731, 2017. [Google Scholar]
10. T. Ye, D. Dong and W. Xu, “A novel two-step strategy based on white-balancing and fusion for underwater image enhancement,” IEEE Access, vol. 8, no. 2, pp. 217651–217670, 2020. [Google Scholar]
11. S. Anwar, C. Li and F. Porikli, “Deep underwater image enhancement,” Computer Vision and Pattern Recognition, vol. 1, no. 1, pp. 1–10, 2018. [Google Scholar]
12. C. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271–3282, 2013. [Google Scholar]
13. K. Barnard, V. Cardei and B. Funt, “A comparison of computational color constancy algorithms-part I: Experiments with image data,” IEEE Transactions on Image Processing, vol. 2, no. 9, pp. 505–513, 2002. [Google Scholar]
14. X. Yadong, Y. Cheng and S. Beibei, “A novel multi-scale fusion framework for detail-preserving low-light image enhancement,” Information Sciences, vol. 548, no. 23, pp. 378–397, 2021. [Google Scholar]
15. C. Li, S. Anwar and F. Porikli, “Underwater scene prior inspired deep underwater image and video enhancement,” Pattern Recognition, vol. 98, no. 22, pp. 107038–107049, 2020. [Google Scholar]
16. P. Karen and G. Chen, “Human-visual-system-inspired underwater image quality measures,” IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–556, 2016. [Google Scholar]
17. Y. Miao and S. Arcot, “An underwater color image quality evaluation metric,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 213–217, 2015. [Google Scholar]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.