Open Access

ARTICLE


Survey on Segmentation and Classification Techniques of Satellite Images by Deep Learning Algorithm

by Atheer Joudah1,*, Souheyl Mallat2, Mounir Zrigui1

1 Department of Computer Sciences, University of Monastir, Monastir, 1001, Tunisia
2 Research Laboratory in Algebra, Numbers Theory and Intelligent Systems, Monastir, 1001, Tunisia

* Corresponding Author: Atheer Joudah. Email: email

Computers, Materials & Continua 2023, 75(3), 4973-4984. https://doi.org/10.32604/cmc.2023.036483

Abstract

This survey paper presents methods to analyze and classify field satellite images using deep learning and machine learning algorithms. The planned application scenarios include users who employ deep learning-based Convolutional Neural Network (CNN) technology to harvest fields from satellite images or to generate regions of interest (ROI). Using machine learning, the satellite image is located in the input image, segmented, and then tagged. Contemporary categorization takes into account the field size ratio, Local Binary Pattern (LBP) histograms, and color data. Field satellite image localization has several practical applications, including pest management, scene analysis, and field tracking. The relationship between satellite images in a specific area, i.e., contextual information, is essential to comprehending the field as a whole.

Keywords


1  Introduction

Without electricity, modern life would not be conceivable. Electric power supply components such as transformers, power pylons, and circuit breakers have therefore become part of a transnational infrastructure. As noted in [1], sustaining the dependability of electrical power involves routine infrastructure repair. For this reason, it is vital to collect data from neighboring farms in order to keep track of a variety of maintenance activities. In addition to the more usual textual information such as serial numbers, manufacturing years, and vendor names, these agricultural fields may also contain graphical elements such as schematic drawings, logos, or barcodes [2]. Typically, metal or synthetic materials are used to make rectangular farm fields. This diversity in manufacturing means that not all of these fields have identical dimensions, styles, fonts, or construction. As seen in the first image, a field of corroded aluminum contains a barcode and machine- and human-readable text. Others are composed of synthetic material, have a glossy appearance, are larger than the field outlined in [3], and contain diagrams. Historically, field-collected data on routine maintenance had to be recorded manually by operators. Inaccuracies or omissions in this transcription could result in significant damage to the relevant machinery or even power outages [4,5]. Since most agricultural fields are intricate, this method's lengthy completion time is a key drawback. Recent instances of agricultural products are also provided; since they are brand-new and have never been used, damage is not a concern. Meanwhile, outdoor metal farming fields have existed for decades. Reference [5] graphically depicts some of the challenges of manual transcription and computational text extraction. After years of exposure to the elements, the fields invariably appear aged and discolored. Occasionally, the paint on older fields peels off, rendering the lettering unreadable.
Due to high reflections, shadows, light variations, and partial occlusions, acquiring images outdoors for autonomous processing can be difficult. According to [6], the field demonstrates a specular reflection that rises to the top of the image, as seen in Fig. 1.


Figure 1: Applications of deep learning and satellite-image-based agricultural field recognition in real time [6]

It is possible to extract semantic data from the field by transforming pictures of the text into machine-readable text. Optical character recognition (OCR) is applied for this purpose. Several methodologies and open-source implementations exist to address this task, as explained in [7]. However, the majority of existing systems are designed and built to process digital paper documents. Consequently, as demonstrated in [8], each of the four accessible agricultural regions employs a font that differs from the fonts used in text processing. One of the fields also contains embossed text, which has a thinner stroke width and lower contrast than painted or printed letters. Therefore, it is probable that the results of current CNN solutions will be inadequate. Agricultural fields that are connected to the city's electrical system must be rectangular and labeled so that the plots can be read. No processing of schematics or barcodes is performed. Our strategy consists of the following steps: the supplied image undergoes a search for the field, extraction, and vertical flipping. Following feature extraction, machine learning is applied to determine the species. Each type's names and localities are listed in [9]. All sections are preprocessed to eliminate dust, the text and its surrounding text frames, and any background noise. After filtering, the regions are relayed to the CNN processing unit. If a region's recognition score falls below a predetermined threshold, the user is prompted to take a new image of the problematic field region [10].

Therefore, we set out to be the first to provide a cutting-edge method for detecting satellite images in raw data. In contrast, related studies employ extremal regions, a superset of MSER, to identify Category-Specific Extremal Regions (CSER). The second stage of the proposed deep learning-based CNN approach entails thresholding the image at each gray level, beginning with level one, so as to count all extremal regions.
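The thresholding step described above can be illustrated with a minimal pure-Python sketch: at each gray level, the image is binarized and the connected components (the dark extremal regions at that level) are counted. The function names and the toy image are illustrative, not the proposed implementation.

```python
from collections import deque

def connected_components(mask):
    """Count 4-connected foreground components in a boolean grid."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

def extremal_region_counts(gray, levels=256):
    """For each gray level t, count the dark extremal regions:
    connected components of the set {pixels <= t}."""
    return [connected_components([[p <= t for p in row] for row in gray])
            for t in range(levels)]
```

Real detectors (e.g., MSER implementations) compute this far more efficiently with incremental flood-filling, but the enumeration of levels is the same idea.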

1.1 Problem Statement

Our work on field detection and classification resembles the problem of data mining satellite photos in many ways. Satellite images are squares or rectangles with human-readable text placed on them. It is essential to locate these agricultural zones in the image and train a Convolutional Neural Network (CNN) over them in order to extract the data stored within. As demonstrated in [11], we provide a method for detecting and monitoring satellite pictures used for traffic control. We apply Maximally Stable Extremal Regions (MSER) to locate the satellite in the input image. The suggested method labels satellite photos with MSER+, whereas MSER- labels crop fields on the ground as light portions surrounded by dark margins. Below are the results of MSER processing on three typical satellite pictures. A satellite image can be identified if there are many MSER- zones within a larger MSER+ region. The dark specks must be relatively uniform in size and their centers must lie roughly on a straight line. As noted in [12], the average height of the dark areas should match that of the surrounding bright zone. More information from the field images used to train the deep convolutional neural network could improve the segmentation. Using the placements of text or logos to establish the boundaries of a field is one technique to do so more precisely. Occasionally, the classifier will wrongly assign an item to one of two classes. The three visually distinct images in the first group are very difficult to categorize. Due to the presence of reflections, the second one exhibits a great deal of visual diversity. This has little influence on the actual training process, which is expected to improve recognition performance under specific conditions.
These findings indicated that neurons in the inferior visual areas respond to fundamental field segmentation aspects, such as form, diversity, and release variation. All field segmentation-based networks could fit the added information similarly well. Feedback networks are likely able to fit the data better than feedforward networks. In this study, a convolutional neural network (CNN) using a one-vs.-one strategy is used to identify characters, with the MSER regions' division into their various agricultural areas serving as classifier inputs. To further increase categorization accuracy, the satellite image is tracked through numerous frames, thereby offering access to multiple perspectives of the field. The final detection result of the satellite image is determined by a vote based on the best character results across all frames.
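The voting across frames described above can be sketched as follows. Whole-string and character-wise majority votes are shown; assuming equal-length reads per frame is a simplification for illustration, not a detail taken from the method itself.

```python
from collections import Counter

def vote_across_frames(per_frame_results):
    """Majority vote over whole recognition results, one string per
    tracked frame. Ties are resolved by first occurrence."""
    return Counter(per_frame_results).most_common(1)[0][0]

def vote_per_character(frame_reads):
    """Vote character-by-character across equal-length reads from
    multiple frames; each position takes its most frequent character."""
    return "".join(Counter(chars).most_common(1)[0][0]
                   for chars in zip(*frame_reads))
```

Character-wise voting can recover the correct string even when no single frame was read perfectly, e.g. `vote_per_character(["A12", "A12", "A17", "B12"])` yields `"A12"`.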

2  Related Works

A novel fusion rule in the fast discrete curvelet transform (FDCT) domain is produced by combining high-pass modulation with the local magnitude ratio (LMR). This fusion rule enables the generation of a high-resolution (HR) multispectral image with a 2.5-m spatial resolution [13].

The authors of [14] addressed the problem of producing high-resolution (HR) copies of low-resolution (LR) images in the wavelet domain. They recommended beginning the search for a clear image with a high-frequency (HF) subband [14].

The authors of [15] investigate four segmentation methods that fall under the two major categories of boundary-based and region-based segmentation. Every algorithm was analyzed using the empirical discrepancy evaluation method [15].

A straightforward morphological superpixel technique (SUM) was presented in 2014. Using the watershed transformation approach, it can generate superpixels rapidly and easily [16].

The authors of [17] present a method for recognizing and segmenting images by combining two textural elements. They use a back-propagation neural network (BPNN) and AdaBoost, respectively, for creating co-occurrence and Haar-like feature combinations [17].

The authors of [18] proposed an effective fuzzy-based segmentation algorithm applicable to YCbCr satellite image processing. The proposed method entails transferring images from the red, green, and blue (RGB) color system to the YCbCr color space and then dividing them into three groups based on brightness and chrominance [18].
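The RGB-to-YCbCr transfer mentioned above uses a standard conversion. A sketch with the full-range ITU-R BT.601 coefficients follows; the choice of this particular variant is an assumption, since the coefficient set is not specified here.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion for 8-bit
    channels. Y carries brightness; Cb and Cr carry chrominance,
    centered at 128 for neutral (gray) pixels."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

Neutral pixels (e.g., pure white or black) map to Cb = Cr = 128, which is what makes a brightness/chrominance split of the kind described in [18] possible.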

For the goal of segmenting satellite images using Kapur's, Otsu's, and Tsallis' functions, a multilevel thresholding-based artificial bee colony technique has been adopted. This technique was evaluated against ant colony, particle swarm, and genetic algorithm (GA) approaches [19].

The authors of [20] present a straightforward and useful segmentation technique based on a region-growing strategy; the implementation domain for this technique is GI-IPS(SPRINC) INPE [20].

Using Synthetic Aperture Radar (SAR) imaging settings and selected primitive features, Dumitru and Datcu demonstrated a dependence among information extraction approaches. After TerraSAR-X data processing, the image's details are automatically displayed [21].

Using high-resolution multispectral pictures, the authors of [22] provide a novel approach to object identification and recognition. This method is self-sufficient, and minimal user input is required. Using geostatistical and local cluster analysis, this research devised a method for rapidly recognizing objects in big satellite images [22].

A unique unsupervised algorithm for mining association patterns in climate and remote sensing time series is presented as part of a new time series mining technique for remote sensing applications that improves sugarcane field monitoring [23].

The authors of [24] utilize high-resolution data and object-oriented image processing to detect possible fuel sources. They found that a pixel-based analysis of very-high-resolution satellite data was used to determine the types of fuel present [24].

Using IKONOS satellite images, Unsalan and Boyer offered an outstanding method for detecting buildings and tracing roadway networks in 2004.

Two of the four pillars of their methodology are the segmentation of human-populated areas and the use of multispectral analysis to search for signals of cultural activity.

One work offers a method for tracking motorways using satellite images as an illustration of the generic computational strategy "active testing" for monitoring similar 1D structures using computer vision [24].

Combining multispectral classification and texture filtering can boost the detection of structures [25].

The proposed target recognition algorithms attempt to provide a fully automated, rapid, and reliable target recognition process. Using salient contour identification, unique contour clusters are extracted from a real-world image [26].

The authors of [27] present a method for object tracking applicable to a collection of multispectral satellite photos. For multi-angle data collection, the suggested method employs a three-stage processing chain consisting of moving object estimation, target modeling, and target matching [27].

Using Very High Resolution (VHR) satellite images, the authors of [28] investigate a variety of shadow identification studies. The results of their test on a VHR image strongly support this strategy for urban object extraction and detection [28].

According to Tatem et al., the suggested neural network is an effective tool for locating land cover objectives and producing maps with sub-pixel geometric accuracy [29].

A newly introduced interest point matching technique for high-resolution satellite images first retrieves the strongest interest points and constructs a control network. The "super points" notion is key to this strategy [30].

The objective of one study was to develop an automatic registration method employing the wavelet transform and multi-resolution analysis to determine the grey-level information content of an image. Researchers in [31] describe a similar technique that targets computationally restricted mobile devices, but they only address the segmentation of specific agricultural fields. To perform background subtraction, the variance of each non-overlapping pixel block in the input image is determined. Everything not classified as field is discarded; this includes graphic elements, lines, and ambient noise. Connected components are then obtained from the remaining field-generated structures. Multiple distorted or tightly spaced lines of field may combine into a single connected component, whereas a single line of field with equal spacing would be represented by a single connected component. According to [32], the skew angle of the field lines can be calculated by measuring the horizontal distance between the first pixel at the bottom of the connected component and a parallel line. The skew is assessed at the component's edges and its center. If there are considerable differences, the procedure is repeated using the connected component's top pixels. The skew angle is finally the average of the three values. A threshold is applied to the region and a horizontal histogram is computed in accordance with [33] to split the vertically connected components into separate lines. The histogram divides the region into reasonably uniform sections. Unfortunately, this prevents the correct separation of italic or cursive fields, as demonstrated in [34]. Similar to the horizontal histogram, a vertical histogram can be computed and analyzed to divide a line into distinct farming parcels. Thus, the agricultural fields associated with each line may be retrieved and fed into a convolutional neural network.
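The horizontal-histogram line splitting described above can be sketched in a few lines: rows with no foreground pixels separate text lines. The function name and the row-span output format are illustrative choices, not details from [33].

```python
def split_lines(mask):
    """Split a binary text region (lists of 0/1 rows) into line spans
    using the horizontal projection histogram: rows with no foreground
    pixels act as separators. Returns (start_row, end_row) pairs,
    end exclusive."""
    hist = [sum(row) for row in mask]   # foreground count per row
    lines, start = [], None
    for y, count in enumerate(hist):
        if count and start is None:
            start = y                   # a line begins
        elif not count and start is not None:
            lines.append((start, y))    # a line ends at an empty row
            start = None
    if start is not None:               # line runs to the bottom edge
        lines.append((start, len(mask)))
    return lines
```

As the text notes, this fails for italic or cursive content, where ascenders and descenders of adjacent lines overlap and no empty row exists between them.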

Researchers in [35] suggest a scanner for business cards based on CNN technology. Rather than creating their own CNN like the other options, they use Tesseract, which is similar to our method. After several morphological operations have been applied to the image, adaptive thresholding is employed to produce white regions inside the business card and dark parts outside. As a second morphological technique, the thresholded difference between the Hough transforms of the eroded and dilated pictures [36] is computed to determine the card's boundaries. The four boundary lines are determined by the four highest peaks of the Hough accumulator, and all corners calculated by intersecting the lines must be contained within the image. If the provided image has no lines that satisfy this constraint, it is used without further alteration. The authors highlight that, as described in [37], satellite images may not always have a uniform background and may instead contain color gradients or graphics, both of which might result in false positives when attempting to detect outlines. To detect and address such instances, a comprehensive examination of the field flow is conducted. The field flow angle is determined for each of the six bars that make up the bounding box of the card's black pixels. It is essential that the angles of the outermost bars and their respective card border lines are relatively close. When the difference between them becomes excessive, the field flow angle takes precedence. Because vertical fields, which could be used to determine the angle of vertical lines, are rare in satellite images, the authors note that only horizontal lines are practicable. The image is initially preprocessed to straighten it, allowing the background to disappear.
Example approaches for preparing satellite images are given in [38]. If there was a large shift in the fraction of black pixels from the previous round, the returned image undergoes a second round of adaptive thresholding; otherwise, the prior result is used. To boost CNN performance, the image is split along the middle at corridors that are at least 20 pixels wide and surrounded by black pixels. As described in [39], this is accomplished by sweeping the image in various directions, all of which begin above the road.
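A minimal sketch of the adaptive mean thresholding used in the pipeline above: each pixel is compared against the mean of its local neighborhood rather than a global value, which tolerates uneven illumination. The neighborhood half-size `block` and offset `c` are illustrative assumptions, not parameters from [35] or [38].

```python
def adaptive_mean_threshold(gray, block=3, c=0):
    """Binarize a 2D gray image (lists of ints) by comparing each pixel
    against the mean of its (2*block+1)^2 neighborhood minus offset c.
    Pixels strictly brighter than the local mean become 1."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[ny][nx]
                    for ny in range(max(0, y - block), min(h, y + block + 1))
                    for nx in range(max(0, x - block), min(w, x + block + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if gray[y][x] > mean - c else 0
    return out
```

Production code would use an integral image (or `cv2.adaptiveThreshold`) to avoid the per-pixel neighborhood scan, but the decision rule is the same.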

Table 1 shows a summary of training and testing with respect to the number of epochs; each epoch yields different training accuracy, validation loss, and validation accuracy.


Table 2 shows the comparison of the proposed work with the literature that is already available.


Academics propose a similar strategy in [41] for identifying fields in photographs of the physical environment. Extremal Regions, a more extensive data collection than MSER, are utilized to construct category-specific extremal regions. In accordance with the approach described in [42], thresholds are computed for each gray level, beginning with level 1, and all visual extrema are tallied. Descriptors are first computed for each intensity level before the connected components that represent the most extreme locations are removed. According to [32], there are only three possible futures for the current extremal regions: a region will expand, two regions will merge into one, or an entirely new region will appear. Utilizing descriptors that are likewise incrementally computable is one way to take advantage of this, gaining incremental updates and reduced runtime complexity. In addition to the characteristics listed in [43], they recommend using the entropy of the cumulative histogram, the number of convexities, and the size of the convex hull. These characteristics include compactness, the Euler number (the number of objects minus the number of holes in them), and normalized central algebraic moments. Additional data from the field images used to train the deep convolutional neural network may improve the segmentation. One such application is using a field's location to better predict the field's edge. Occasionally, the classifier may incorrectly assign an object to one of two groups. The first group, which consists of three very distinct photos, is especially challenging to categorize. Reflections cause the second group to require multiple cycles. The actual training process remains unaffected, and it is envisaged that under specific conditions, recognition performance will improve.
These results indicate that neurons in the inferior visual areas classify visual information based on fundamental parameters such as morphology, diversity, and release variance. Field-based classification systems are a possible application for the additional information. Feedback networks are more likely to fit the data well than feedforward networks. Training a classifier with these characteristics across all intensities reveals whether a region is of interest. This necessitates the inclusion of tilted or rotated agricultural regions in the training data in order to detect them, in accordance with [34]'s recommendations. This approach is notable for its ability to recognize fields in photographs captured in the real world. Therefore, as illustrated in [44], a unique technique for grouping and filtering is required to collect only satellite photos. Category-Specific Extremal Regions (CSER) are identified along with a sample of the generated data. Agricultural fields labelled with their country of origin are also clearly visible in the satellite image. This strategy can be applied beyond the playing area to retrieve concealed objects, as depicted in Fig. 2.


Figure 2: Satellite image detection using CNN [20]. (a) CNN convolutional backbone; the classifier determines whether an extremal region contains relevant content. (b) CNN classifier head; since all fields in the image are detected, the letters on the country-of-origin sticker are found as well
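The incremental evolution of extremal regions described above (as the threshold rises, a region expands, two regions merge, or a new region appears) is naturally tracked with a union-find structure, which also makes descriptors such as area incrementally computable. This is a toy sketch of that bookkeeping, not the implementation from [41] or [42].

```python
def incremental_regions(gray):
    """Process pixels in increasing gray order. Each new pixel either
    starts a region, grows an existing one, or merges two regions.
    Region areas are updated incrementally, never recomputed.
    Returns (gray_level, number_of_regions) after each level."""
    h, w = len(gray), len(gray[0])
    parent, area = {}, {}

    def find(p):                      # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    counts, n_regions = [], 0
    for level in sorted({p for row in gray for p in row}):
        for y in range(h):
            for x in range(w):
                if gray[y][x] == level:
                    parent[(y, x)] = (y, x)   # new region appears
                    area[(y, x)] = 1
                    n_regions += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (ny, nx) in parent:
                            ra, rb = find((y, x)), find((ny, nx))
                            if ra != rb:      # two regions merge into one
                                parent[rb] = ra
                                area[ra] += area.pop(rb)
                                n_regions -= 1
        counts.append((level, n_regions))
    return counts
```

The same merge step is where incrementally computable descriptors (area here; histograms or moments in the cited work) are combined in constant time instead of being recomputed per threshold.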

The writers of [36], for instance, are concerned with the challenge of identifying traffic signs. Researchers in [37] present a strategy that utilizes the colors of the fields, which is distinct from previous approaches. The procedure consists of two steps: locating the field using satellite images and then determining the precise field. The authors state that only white, black, red, and green will be detectable in satellite photographs; consequently, they cannot operate without a color image. To reduce the frequency of false positives, they recommend employing a color edge detector that is only sensitive to edges created by specific color combinations. Before calculating the edge map, the image is converted to the HSV color space. Each picture plane and the edge map are used to build a fuzzy map displaying color transitions and indicating the likelihood that a given location contains a satellite image. Due to the large number of repeated edges with significant gradient magnitudes in satellite images, the maps are merged into a single map for use in the detection process. According to [38], the field is obtained by adaptive thresholding, and then connected components are identified and filtered based on their aspect ratio. If a component's center does not lie on a straight line with the others, it is either halved or discarded. Because there are exactly eight agricultural fields in the supporting satellite photos, connected components are eliminated from smallest to largest. If the count is lower, the search for agricultural land begins at the extremities and progresses toward the center. In addition, the field pattern shown in [39] is used to validate the chosen farming group. Before attempting character recognition, they manually separate numeric and alphabetic agricultural fields using the field's specified format. Then, they employ information such as the number of holes and the type of node in a topological sort.
This is done so that the final step of comparing the candidate against a smaller number of fields can be executed more efficiently. Using a model of self-organizing recognition, such as the one described in [40], the optimal character is chosen by comparing it to the remaining fields, utilizing an artificial neural network with node weights learned from the input character. A similarity metric is computed by summing the changes in the network's weights after the network has stabilized from the field's input.
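The color restriction described above (only white, black, red, and green are expected in the imagery) can be sketched as a per-pixel classifier in HSV space using Python's standard `colorsys` module. The specific saturation, value, and hue thresholds below are illustrative assumptions, not values taken from [37].

```python
import colorsys

def classify_field_color(r, g, b):
    """Classify an 8-bit RGB pixel into the four colors the method
    expects: white, black, red, or green. Thresholds are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.2:                      # too dark to have a reliable hue
        return "black"
    if s < 0.2:                      # bright but unsaturated
        return "white"
    if h < 1 / 12 or h > 11 / 12:    # hue near 0 (wraps around) -> red
        return "red"
    if 1 / 4 < h < 5 / 12:           # hue near 1/3 -> green
        return "green"
    return "other"
```

Checking value and saturation before hue matters: for gray pixels the hue channel is undefined (arbitrarily 0), so testing hue first would misread dark or white pixels as red.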

In general, the methods investigated for satellite image detection involve detecting the field in the input image, extracting it, and then using a Convolutional Neural Network to locate the field's region. These images are highly noisy and comprise just a small portion of the original satellite image, unlike the input images we anticipate for our work. In addition, as indicated in [41], a satellite image has a much simpler arrangement. The segmentation could be enhanced with additional data learned from the field images that are used to train the deep convolutional neural network. The location of fields is one example that can be used to estimate the field's limits more precisely. The findings of the classifier's evaluation indicate that there are instances in which classes are incorrectly classified. Since there are only three visually distinct images in the first class, reliable categorization is quite difficult. The second features many peculiarities, primarily in the form of mirrored surfaces. However, this in no way decreases the importance of the preparation procedure, which, as stated previously, often results in enhanced recognition performance under specified conditions. These results indicated that neurons in the inferior visual cortex respond to fundamental field segmentation qualities, including form, diversity, and difference in release. The additional data was an excellent fit for any system that used field segmentation. In general, feedback networks are more likely to fit the data than feedforward networks. Few typefaces are available, and neither the range nor the diversity of agricultural fields is particularly wide. Because they rely on the known layout and (partially) on the colors and field area format, the investigated approaches are not a suitable fit for our goal of field identification. However, some of the procedures can be adapted to extract fields from farms.

3  Analysis of Techniques Discussed in Literature Survey

Due to their structure and hostile environment, CNN-based satellite image detection units confront problems and obstacles; these are overcome by establishing a better environment solution, one that overcomes the weaknesses of conventional networks for satellite image detection and managing operations. In order to comprehend the impetus for this work, a thorough review of these challenges and constraints is presented below, together with a comprehensive analysis of CNN systems and their benefits. More information from the field images used to train the deep convolutional neural network could improve the segmentation. Information such as the positioning of logos or fields can help define the boundaries of a field. Occasionally, the classifier will wrongly assign an item to one of two classes. The three visually distinct images in the first group are very difficult to categorize. Reflections also result in multiple iterations for the second one. This has little influence on the actual training process, which is anticipated to improve recognition performance under specific situations. These findings revealed that neurons in lower visual regions process fundamental field segmentation characteristics such as form, diversity, and variation in release. There is a possibility that field-based categorization systems will function similarly with the new data. Feedback networks are more likely than feedforward networks to fit the data well.

4  Conclusion

This survey presented a way to recognize, classify, and extract fields from a predefined set of fields using deep learning technology. The intended scenario featured a user who crops fields from satellite photos or creates regions of interest (ROI) using a deep learning-based CNN technique to extract content from images. The satellite image is positioned in the input image, segmented, then labeled using machine learning. At this stage, a new technique is employed to determine categorization based on information such as color information, LBP histograms, and the field size ratio. Each species' field layout was detailed, as were the precise locations of the several earmarked fields.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. M. Agrawal and R. Dash, “Image resolution enhancement using lifting wavelet and stationary wavelet transform,” in 2014 Int. Conf. on Electronic Systems, Signal Processing and Computing Technologies, vol. 2, no. 4, Nagpur, Maharashtra, India, pp. 10–27, 2014. [Google Scholar]

2. A. K. Bhandari, A. Kumar and G. K. Singh, “Segmentation and edge detection,” Image Processing, vol. 65, no. 3, pp. 7–22, 2020. [Google Scholar]

3. A. K. Bhandari, A. Kumar and G. K. Singh, “Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur’s, Otsu and Tsallis functions,” Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015. [Google Scholar]

4. Y. Byun, D. Kim, J. Lee and Y. Kim, “A framework for the segmentation of high-resolution satellite imagery using modified seeded-region growing and region merging,” International Journal of Remote Sensing, vol. 32, no. 16, pp. 4589–4609, 2011. [Google Scholar]

5. A. P. Carleer, O. Debeir and E. Wolff, “Assessment of very high spatial resolution satellite image segmentations,” Photogrammetric Engineering & Remote Sensing, vol. 71, no. 11, pp. 1285–1294, 2005. [Google Scholar]

6. L. A. Diago, M. Kitago and I. Hagiwara, “Wavelet domain solution of CSRBF SLAE for image interpolation using iterative methods,” Computer Technology and Applications, vol. 6, no. 1, pp. 33–56, 2004. [Google Scholar]

7. H. Duan and L. Gan, “Elitist chemical reaction optimization for contour-based target recognition in aerial images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, pp. 2845–2859, 2015. [Google Scholar]

8. S. Adelipour and H. Ghassemian, “The fusion of morphological and contextual information for building detection from very high-resolution SAR images,” in Electrical Engineering (ICEE), Iranian Conf. on, Tehran, Iran, pp. 621–632, 2018. [Google Scholar]

9. P. Ganesan, V. Rajini, B. S. Sathish, V. Kalist and S. K. Khamar Basha, “Satellite image segmentation based on YCbCr color space,” Indian Journal of Science and Technology, vol. 8, no. 1, pp. 35, 2015. [Google Scholar]

10. D. Geman and B. Jedynak, “An active testing model for tracking roads in satellite images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 1, pp. 1–14, 1996. [Google Scholar]

11. X. Jin and C. H. Davis, “Automated building extraction from high-resolution satellite imagery in urban areas using structural, contextual, and spectral information,” EURASIP Journal on Advances in Signal Processing, vol. 2005, no. 14, pp. 1033–1057, 2005. [Google Scholar]

12. N. M. S. M. Kadhim, M. Mourshed and T. M. Bray, “Shadow detection from very high-resolution satellite image using grab cut segmentation and ratio-band algorithms,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 3, no. W2, pp. 95–101, 2015. [Google Scholar]

13. C. Giri, “Observation and monitoring of mangrove forests using remote sensing: Opportunities and challenges,” Remote Sensing, vol. 8, no. 9, pp. 783, 2016. [Google Scholar]

14. H. L. Le and D. -M. Woo, “Combination of two textural features for the improvement of terrain segmentation,” Advanced Science and Technology Letters, vol. 7, no. 3, pp. 673, 2015. [Google Scholar]

15. A. J. Tatem, H. G. Lewis, P. M. Atkinson and M. S. Nixon, “Super-resolution target identification from remotely sensed images using a Hopfield neural network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 4, pp. 781–796, 2001. [Google Scholar]

16. L. A. S. Romani, “New time series mining approach applied to multitemporal remote sensing imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 1, pp. 140–150, 2013. [Google Scholar]

17. S. Malladi, S. Ram and J. J. Rodriguez, “Super pixels using morphology for rock image segmentation,” in 2014 Southwest Symp. on Image Analysis and Interpretation, San Diego, CA, USA, pp. 62–65, 2014. [Google Scholar]

18. H. Y. Gu, H. T. Li and T. Blaschke, “An object-based semantic classification method of high-resolution satellite imagery using ontology,” in GEOBIA 2016: Solutions and Synergies, Enschede, Netherlands, pp. 6–15, 2016. [Google Scholar]

19. S. Singh, M. Suresh and K. Jain, “Land information extraction with boundary preservation for high resolution satellite image,” International Journal of Computer Applications, vol. 120, no. 7, pp. 39–43, 2015. [Google Scholar]

20. C. V. Rao, J. M. Rao, A. S. Kumar, D. S. Jain and V. K. Dadhwal, “Satellite image fusion using fast discrete curvelet transforms,” in IEEE Int. Advance Computing Conf. (IACC), Gurgaon, Haryana, India, pp. 21–32, 2014. [Google Scholar]

21. L. A. S. Romani, “New time series mining approach applied to multitemporal remote sensing imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 1, pp. 140–150, 2013. [Google Scholar]

22. C. Serief, Y. Bentoutou and M. Barkat, “Automatic registration of satellite images,” in 2009 First Int. Conf. on Advances in Satellite and Space Communications, Colmar, France, pp. 421–432, 2009. [Google Scholar]

23. M. A. Tanase and I. Z. Gitas, “Examination of the effects of spatial resolution and image analysis technique on indirect fuel mapping,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 1, no. 4, pp. 220–229, 2008. [Google Scholar]

24. A. J. Tatem, H. G. Lewis, P. M. Atkinson and M. S. Nixon, “Super-resolution target identification from remotely sensed images using a Hopfield neural network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 4, pp. 781–796, 2001. [Google Scholar]

25. D. R. Waghule and R. S. Ochawar, “Overview on edge detection methods,” in 2014 Int. Conf. on Electronic Systems, Signal Processing and Computing Technologies, NW Washington, DC, United States, pp. 65–75, 2014. [Google Scholar]

26. C. Unsalan and K. L. Boyer, “System to detect houses and residential street networks in multispectral satellite images,” in Proc. of the 17th Int. Conf. on Pattern Recognition, Cambridge, UK, ICPR, pp. 10–20, 2004. [Google Scholar]

27. Z. Xiong and Y. Zhang, “Novel interest-point-matching algorithm for high-resolution satellite images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 12, pp. 4189–4200, 2009. [Google Scholar]

28. P. Goovaerts, G. M. Jacquez and A. Marcus, “Geostatistical and local cluster analysis of high-resolution hyperspectral imagery for detection of anomalies,” Remote Sensing of Environment, vol. 95, no. 3, pp. 351–367, 2005. [Google Scholar]

29. A. Joudah Mounir, S. Mallat and M. Zrigui, “Analyzing satellite images by apply deep learning instance segmentation of agricultural fields,” Periodicals of Engineering and Natural Sciences (PEN), vol. 9, no. 4, pp. 1056, 2021. [Google Scholar]

30. W. Wang, Q. Jiang, X. Zhou and W. Wan, “Car license plate detection based on MSER,” in 2011 Int. Conf. on Consumer Electronics, Communications and Networks (CECNet), Xiamen, China, pp. 110–120, 2011. [Google Scholar]

31. A. K. M. Brillantes, A. A. Bandala, E. P. Dadios and J. A. Jose, “Detection of fonts and characters with hybrid graphic-text plate numbers,” in TENCON, 2018-2018 IEEE Region 10 Conf., Jeju, Korea, pp. 11–22, 2018. [Google Scholar]

32. S. Du, M. Ibrahim, M. Shehata and W. Badawy, “Automatic license plate recognition (ALPRA state-of-the-art review,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 2, pp. 311–325, 2013. [Google Scholar]

33. S. Sivaraman and M. M. Trivedi, “General active-learning framework for on-road vehicle recognition and tracking,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 267–276, 2010. [Google Scholar]

34. S. Montazzolli and C. Jung, “Real-time license plate detection and recognition using deep convolutional neural networks,” in 2017 30th SIBGRAPI Conf. on Graphics, Patterns and Images (SIBGRAPI), Niterói, Rio de Janeiro, Brazil, pp. 220–232, 2017. [Google Scholar]

35. M. Waleed, A. S. Abdullah and S. R. Ahmed, “Classification of vegetative pests for cucumber plants using artificial neural networks,” in 2020 3rd Int. Conf. on Engineering Technology and its Applications (IICETA), Najaf, Iraq, pp. 11–23, 2020. [Google Scholar]

36. S. Ahmed, Z. A. Abbood, H. M. Farhan, B. T. Yasen, M. R. Ahmed et al., “Speaker identification model based on deep neural networks,” Iraqi Journal For Computer Science and Mathematics, vol. 3, no. 1, pp. 108–114, 2022. [Google Scholar]

37. J. Matas and K. Zimmermann, “Unconstrained licence plate and text localization and recognition,” in 2005 IEEE Intelligent Transportation Systems, Vienna, Austria, pp. 43–55, 2005. [Google Scholar]

38. S. R. A. Ahmed and E. Sonuç, “Deepfake detection using rationale-augmented convolutional neural network,” Applied Nanoscience, vol. 3, no. 1, pp. 108–114, 2021. [Google Scholar]

39. R. Girshick, “Fast R-CNN,” in 2015 IEEE Int. Conf. on Computer Vision (ICCV), NW Washington, DC, United States, pp. 12–43, 2015. [Google Scholar]

40. S. Ren, K. He, R. Girshick and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017. [Google Scholar] [PubMed]

41. A. Kolekar and V. Dalal, “Barcode detection and classification using SSD (single shot multibox detector) deep learning algorithm,” SSRN Electronic Journal, vol. 37, no. 3, pp. 297–311, 2020. [Google Scholar]

42. R. Araki, T. Onishi, T. Hirakawa, T. Yamashita and H. Fujiyoshi, “Deconvolutional single shot detector using multitask learning for object detection, segmentation, and grasping detection,” in 2020 IEEE Int. Conf. on Robotics and Automation (ICRA), Virtual, pp. 132–144, 2020. [Google Scholar]

43. T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona et al., “Common fields in cornfield,” in The European Conf. on Computer Vision (ECCV), Tel Aviv, Israel, pp. 133–155, 2014. [Google Scholar]

44. J. Uijlings, K. Sande, T. Gevers and A. W. Smeulders, “Selective search for field recognition,” International Journal of Computer Vision (IJCV), vol. 37, no. 3, pp. 297–311, 2013. [Google Scholar]




Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.