Computer Modeling in Engineering & Sciences

DOI: 10.32604/cmes.2022.017822

ARTICLE

Wavelet Decomposition Impacts on Traditional Forecasting Time Series Models

W. A. Shaikh1,2,*, S. F. Shah2, S. M. Pandhiani3 and M. A. Solangi2

1Department of Mathematics and Statistics, Quaid-e-Awam University of Engineering, Science & Technology, Nawabshah, Sindh, Pakistan
2Department of Basic Sciences & Related Studies, Mehran University of Engineering & Technology, Jamshoro, Sindh, Pakistan
3Department of General Studies, Jubail University College, Al Jubail, Saudi Arabia
*Corresponding Author: W. A. Shaikh. Email: wajid@quest.edu.pk
Received: 09 June 2021; Accepted: 10 September 2021

Abstract: This study investigates the impact of wavelets on traditional forecasting time-series models and demonstrates the usefulness of wavelet algorithms. The Wavelet Decomposition (WD) algorithm is combined with several traditional forecasting time-series models, namely the Least Square Support Vector Machine (LSSVM), the Artificial Neural Network (ANN) and Multivariate Adaptive Regression Splines (MARS), and its effect is examined through statistical estimation. The WD is applied as a mathematical preprocessing step in traditional forecast modelling of periodically measured parameters and yields markedly constructive outcomes. The wavelet-combined models are found to be superior to the corresponding traditional time-series models on a performance basis; combining wavelets with forecasting models therefore produces substantially better results.

Keywords: Impact; wavelet decomposition; combined; traditional forecasting models; statistical analysis

1  Introduction

Because of the importance of prediction, researchers have developed various forecasting models. Since better environmental forecasting tools can support appropriate management decisions, researchers continually strive to improve the effectiveness and efficiency of these models. For decades the term wavelet has been used in the exploration of signal processing and geophysics, and the last decade has shown vast interest in wavelets, a subject area that can be appropriately applied to and coalesced with various fields such as applied mathematics, physics and electrical engineering. This article therefore examines the WD combined with various traditional forecasting time-series models.

Consequently, the WD has significantly impacted various fields, such as image processing, differential equations, statistics, and chemical signal processing [1,2]. The WD algorithm is used as a mathematical approach for extracting nontrivial and potentially useful information from different data types such as historical records, re-analyses, and local or global climate model simulations. WD-based models analyse datasets by decomposing them thoroughly, with the aim of reconstructing the datasets with minimal loss. The wavelet transform maps the signal from the time domain to the wavelet domain, and the newly acquired domain contains more complex basis functions called wavelets, mother wavelets, or analysing wavelets [3].

Wavelet-based models are a noteworthy advance in de-noising datasets in order to develop efficient models. They make it easy to analyse streamflow processes with respect to different parameters without eliminating time-frequency effects, in contrast to conventional bandpass filters. The WD tool reveals information within the signal in the frequency, time and scale domains [4,5]. The WD application controls the time-frequency or scale content of a signal and judges its temporal variation spectrum [6]. In contrast, the Fourier transform offers quite a different perspective: it allows estimation of the signal frequency but is not suitable for estimating time-frequency dependence. Although the wavelet transform has its origins in the Fourier transform, the WD allows the time evolution of processes to be tracked at various scales in the signal because it localises in both time and scale. Signals processed with the WD tool can be classified as band-limited high-frequency events or as a significant number of scale-variable processes, because the WD provides explicit information for forecasting classification [7]. Reviews of the applications of the wavelet transform in hydrological dataset modelling describe the multifaceted information that can be obtained from such an analysis, including the recognition of seasonality, streamflow trends, and data de-noising.

The performance and accuracy of traditional time-series forecasting models can be continuously improved, which inspires researchers to design improved versions of the models [8]. This study describes the performance impact of the WD as an optimisation of the traditional time-series models, in which the optimal response is continuously exchanged during the simulation and can be validated. The effectiveness of the models has been tested on two different streamflow datasets, from the Indus and Chenab Rivers.

2  Study Areas and Data Utilization

To validate the discussed time-series forecast models and to forecast the rivers, monthly streamflow rates were collected for 484 and 550 months from two renowned rivers of Pakistan, the Indus and the Chenab, respectively (Figs. 1 and 2).

Figure 1: Monthly streamflow rate of the Indus River (484 months)

Figure 2: Monthly streamflow rate of the Chenab River (550 months)

3  Methodology

The use of the wavelet application with the various traditional forecasting models, such as LSSVM, ANN and MARS, has improved the efficiency of the models and produced excellent outcomes. These tractable combined models have been applied as efficient tools to streamflow datasets to forecast phenomena, providing comprehensive information about the signals. The developed combinations of wavelets with AI models implement the following two-step protocol for forecasting activities.

1.    The WD methodology is used as a preprocessor of the input datasets. As a result, it provides a time-frequency analysis of the signal at distinct intervals in the time domain and considerable detail about the input datasets.

2.    The signal obtained by the WD is then used as the AI input for further processing in the various traditional forecasting models.

Initially, the forecasting time-series datasets are decomposed by the WD algorithm into the sub-time-series $\{W_1, W_2, \ldots, W_p, C_p\}$, where $W_1, W_2, \ldots, W_p$ are the detailed time series and $C_p$ is the background (approximation) time series. These components play different roles in the forecasting time-series datasets, and each sub-time-series shows different behaviour, so their attributes and influence on the forecasting dataset differ from one another. In the developed LSSVM, ANN and MARS forecast models, the sub-time-series at time $t$ are the model inputs, and the forecasting time-series values at time $t+T$ are the model outputs, where $T$ is the forecast horizon. Finally, wavelet network models (WNM) are created, in which the weights are learned using specific techniques [9]. The key objective of applying the WNM algorithm to forecasting time-series datasets is to construct models combined with the WD algorithm.
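As an illustration of this two-step protocol, the following minimal Python sketch decomposes a series into sub-series and builds input-output pairs at forecast horizon T. It assumes the PyWavelets package and a generic scikit-learn regressor as a stand-in for the LSSVM/ANN/MARS models used in the paper; the helper names, wavelet family and data are illustrative, not the authors' original code.

```python
# Sketch of the two-step wavelet-forecasting protocol (illustrative only).
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression  # stand-in for LSSVM/ANN/MARS

def wavelet_subseries(x, wavelet="db4", level=3):
    """Step 1: decompose x into the approximation and detail sub-series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    subseries = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subseries.append(pywt.waverec(keep, wavelet)[: len(x)])
    return np.column_stack(subseries)   # columns: A_level, D_level, ..., D_1

def make_supervised(x, subs, horizon=1):
    """Step 2: inputs are the sub-series at time t, target is x at t + horizon."""
    X = subs[:-horizon]
    y = x[horizon:]
    return X, y

# Hypothetical monthly streamflow series (the paper uses Indus/Chenab records).
rng = np.random.default_rng(0)
flow = np.sin(np.arange(480) * 2 * np.pi / 12) + 0.3 * rng.standard_normal(480)

subs = wavelet_subseries(flow, level=3)
X, y = make_supervised(flow, subs, horizon=1)
model = LinearRegression().fit(X[:400], y[:400])   # train on the first part
print("test forecast example:", model.predict(X[400:405]))
```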

3.1 Wavelet Decomposition (WD)

The ability of the WD algorithm to de-noise non-stationary signals into sub-signals at different levels is a suitable resource for improved streamflow elucidation [10]. The newly developed forecasting models combine the individual capacities of the WD and conventional AI model techniques, and combining wavelets with various AI forecasting models has proved to be an appropriate methodology [11]. For forecasting time-series models, the wavelet technique is becoming an ever more effective and essential tool. The primary purpose of the WD is to analyse the time-series datasets in the time and frequency domains through a valuable decomposition of the original time series, extracting useful information at various frequency levels by means of wavelet functions. The main advantage of using the WD is its robustness, as it does not involve any potentially erroneous assumptions or parametric testing procedures. For a continuous time series $x(t)$, $t \in [-\infty, \infty]$, the WD is built on the following family of functions:

\[
\psi_{\tau,s}(t) = \frac{1}{\sqrt{s}}\,\psi\!\left(\frac{t-\tau}{s}\right)
\tag{1}
\]

where $t$, $\tau$ and $s \in (0,\infty)$ denote time, the time shift that translates the window function, and the wavelet scale, respectively, and $\psi(t)$ is the mother wavelet, which satisfies

\[
\int_{-\infty}^{\infty} \psi(t)\,dt = 0
\tag{2}
\]

Therefore, the continuous wavelet transform (CWT) is defined as follows [12]:

\[
W(\tau,s) = \frac{1}{\sqrt{s}} \int_{-\infty}^{\infty} x(t)\,\overline{\psi}\!\left(\frac{t-\tau}{s}\right) dt
\tag{3}
\]

where $\overline{\psi}(t)$ denotes the complex conjugate of $\psi(t)$, and $W(\tau,s)$ represents the sum over all time of the time series multiplied by a scaled and shifted version of the wavelet function $\psi(t)$. Using the CWT directly for forecasting is not feasible, since measuring the wavelet coefficients at every conceivable scale is time-consuming and produces a great many results. The discrete WD is preferred in most forecasting problems because of its simplicity and lower computational cost. The WD chooses scales and positions based on powers of 2, the so-called dyadic scales and translations; the analysis is then much more efficient as well as more accurate. The discrete form of the WD is given by

\[
\psi_{m,n}(t) = s_0^{-m/2}\,\psi\!\left(\frac{t - n\,\tau_0\, s_0^{m}}{s_0^{m}}\right)
\tag{4}
\]

where $m, n \in \mathbb{Z}$ control the scale and the translation along the time series, respectively, $s_0 > 1$ is a fixed dilation step, and $\tau_0 > 0$ is the location parameter. The most common choice is $s_0 = 2$ and $\tau_0 = 1$. For a discrete time series $x(t)$ observed at discrete times $t$, the WD takes the form

\[
W_{m,n} = 2^{-m/2} \sum_{t=0}^{N-1} \psi\!\left(2^{-m} t - n\right) x(t)
\tag{5}
\]

where $W_{m,n}$ is the wavelet coefficient of the WD at scale $s = 2^{m}$ and location $\tau = 2^{m} n$; $x(t)$ is the discrete time series ($t = 0, 1, \ldots, N-1$) with $N$ an integer power of 2 ($N = 2^{M}$); $n$ is the time-translation parameter, which varies over $0 < n < 2^{M-m}$ with $1 < m < M$; and $M$ is the decomposition level. The analysed dataset is decomposed into several wavelet input components, the number of which depends on the chosen decomposition level. Deciding on an advantageous decomposition level plays a marked role in preserving the information in the data while reducing its roughness. However, there is no prevailing theory to indicate how many decomposition levels are required for a given time-series dataset. The decomposition level is estimated by the following formula:

\[
M = \log(n)
\tag{6}
\]

Decomposition levels of $M = 2.6848$ and $M = 2.7403$ are obtained from Eq. (6) (with a base-10 logarithm) for the $n = 484$ and $n = 550$ monthly streamflow datasets of the Indus and Chenab Rivers, respectively (Figs. 1 and 2); the rounded level $M = 3$ is therefore used for both rivers in this work. According to Mallat's hypothesis, the original discrete time series $x(t)$ is decomposed by the inverse WD into a series of linearly independent approximation and detail signals, defined as follows:

\[
x(t) = T + \sum_{m=1}^{M} \sum_{n=0}^{2^{M-m}-1} W_{m,n}\, 2^{-m/2}\, \psi\!\left(2^{-m} t - n\right)
\tag{7}
\]

or, in a simpler form, the inverse WD is written as:

\[
x(t) = A_M(t) + \sum_{m=1}^{M} D_m(t)
\tag{8}
\]

where $A_M(t)$ is the approximation (residual) term at level $M$ and $D_m(t)$, $m = 1, 2, \ldots, M$, are the detailed sub-series, which are capable of capturing specific interpretive features of the data. The following transfer functions (TF) have emerged and are widely used in such models [13]:

\[
\left.
\begin{array}{ll}
\text{Polynomial function:} & y = z\\[2pt]
\text{Sigmoid function:} & y = \dfrac{1}{1 + \exp(-z)}\\[8pt]
\text{Radial basis function:} & y = \exp(-z^{2})\\[2pt]
\text{Hyperbolic tangent function:} & y = \tanh(z) = \dfrac{2}{1 + \exp(-2z)} - 1
\end{array}
\right\}
\tag{9}
\]
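To make Eqs. (6)-(8) concrete, the short Python sketch below estimates the decomposition level via Eq. (6) and verifies the additive reconstruction of Eq. (8). It assumes the PyWavelets package; the 'db4' wavelet and the synthetic series are illustrative choices not stated in the paper.

```python
# Sketch: decomposition level from Eq. (6) and additivity check of Eq. (8).
import numpy as np
import pywt

n = 484                                   # e.g., the Indus monthly record length
M = int(round(np.log10(n)))               # Eq. (6): M = log(484) ~ 2.68 -> 3
print("decomposition level M =", M)

x = np.cos(np.arange(n) * 2 * np.pi / 12) + 0.2 * np.random.default_rng(1).standard_normal(n)

coeffs = pywt.wavedec(x, "db4", level=M)  # [cA_M, cD_M, ..., cD_1]
components = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[:n])

A_M, details = components[0], components[1:]            # A_M(t) and D_m(t)
reconstruction = A_M + np.sum(details, axis=0)           # Eq. (8)
print("max reconstruction error:", np.max(np.abs(reconstruction - x)))
```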

3.2 Wavelet Artificial Neural Networks (WANN)

The functioning principle of the human brain inspired the artificial neural network (ANN) as a forecasting model. Several architectures are available in the literature for forecasting streamflow and many other applications, of which the ANN algorithm is among the most widely used. It comprises a network of many interconnected nodes called neurons. An ANN is classified by its number of layers, with hidden layer(s) existing between the input and the output layer. A single-layer feed-forward (SLFF) neural network is an architecture with just one layer establishing connections among the nodes of the input, middle, and output layers, whereas a network built with more than one middle layer is characterised as a multi-layer feed-forward (MLFF) neural network [14]. The ANN has the merits of fault tolerance and an efficient nonlinear modelling capability, which provides a well-organised model for streamflow forecasting. The model nevertheless has drawbacks, such as slow optimisation, model complexity, and approximation issues that cannot be overlooked; this forecasting architecture therefore needs to be improved. For this purpose, the WD has been coupled with the ANN model. The WANN model combines the strengths of the WD and ANN applications to achieve strong nonlinear approximation capability. The WANN architecture is based on the multi-layer perceptron (MLP); it closely corresponds to a (1+1/2)-layer neural network and contains the following three layers [15]:

Layer-i. the input layer, which introduces one or more inputs to the network.

Layer-ii. the hidden layer, where the data are processed by a feed-forward neural network whose activation functions are built from an orthonormal WD basis.

Layer-iii. the output layer, which contains one or more linear combiners whose estimates are consistent with the given inputs.

The weights of the network connections are obtained through the training process. The WANN model can use different TFs for different nodes in the same or different layers; TFs such as the sigmoid and hyperbolic tangent functions are used for the hidden layers, while no transfer function is applied at the output layer. The WANN model has been used successfully for forecasting estimations. The two key approaches to developing the WANN technique are described as follows (a small sketch is given after this list):

•   The WD technique and the ANN model are used separately. The input signal is first decomposed using WD basis functions (Eq. (1)); the resulting wavelet coefficients then feed one or more neurons in the hidden layer, whose input weights are adjusted according to a certain learning algorithm.

•   In this case, the two structures, the WD mathematical algorithm and the ANN artificial-intelligence algorithm, are combined and executed together. The translation and dilation parameters of the WD act as weights that are adjusted according to a certain learning algorithm.

In the first approach, only dyadic dilations and translations of the WD are used to build the wavelet basis functions; this variant of the WANN is therefore often known as a wavenet.
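A minimal sketch of the first (wavenet-style) approach follows, assuming scikit-learn's MLPRegressor as a generic feed-forward network; the layer size, activation and data are illustrative assumptions, not the configuration reported by the authors.

```python
# Sketch: WD preprocessing followed by a feed-forward network (WANN-like).
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_subseries(x, wavelet="db4", level=3):
    """Reconstruct each wavelet component of x as a full-length sub-series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    parts = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        parts.append(pywt.waverec(keep, wavelet)[: len(x)])
    return np.column_stack(parts)

rng = np.random.default_rng(2)
flow = np.sin(np.arange(550) * 2 * np.pi / 12) + 0.3 * rng.standard_normal(550)

subs = wavelet_subseries(flow, level=3)
X, y = subs[:-1], flow[1:]                      # forecast one step ahead

# Hidden layer with a tanh transfer function (cf. Eq. (9)); sizes are illustrative.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=2000, random_state=0)
net.fit(X[:450], y[:450])
print("testing RMSE:", np.sqrt(np.mean((net.predict(X[450:]) - y[450:]) ** 2)))
```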

3.3 Wavelet Least-Squares Support Vector Machine (WLSSVM)

The LSSVM reformulates the support vector machine (SVM) classifier as the following minimisation problem [16]:

\[
\min J(\omega, b, e) = \frac{\mu}{2}\,\omega^{T}\omega + \frac{\zeta}{2}\sum_{i=1}^{N} e_i^{2}
\tag{10}
\]

subject to the following equality constraints:

\[
y_i\left[\omega^{T}\phi(x_i) + b\right] = 1 - e_i, \qquad i = 1, 2, \ldots, N
\tag{11}
\]

The use of the LSSVM classifier is implicitly compatible with a regression formulation through the binary conditions $y_i = \pm 1$, i.e., $y_i^{2} = 1$, which gives

\[
\sum_{i=1}^{N} e_i^{2} = \sum_{i=1}^{N} (y_i e_i)^{2} = \sum_{i=1}^{N}\left[y_i - \left(\omega^{T}\phi(x_i) + b\right)\right]^{2}
\tag{12}
\]

Hence, developing a least-squares (LS) data-fitting formulation is equivalent to developing the LSSVM classifier:

\[
J(\omega, b, e) = \mu E_w + \zeta E_D, \qquad E_w = \frac{1}{2}\,\omega^{T}\omega, \quad E_D = \frac{1}{2}\sum_{i=1}^{N}\left[y_i - \left(\omega^{T}\phi(x_i) + b\right)\right]^{2}
\tag{13}
\]

where $\mu$ and $\zeta$ are hyper-parameters that weight the regularisation term against the sum-squared error. In the original formulation, the ratio $\gamma = \zeta/\mu$ provides the solution as a tuning parameter; the parameters $\mu$ and $\zeta$ can also be assigned in the LSSVM using a Bayesian interpretation. The solution of the LSSVM model is obtained from the following Lagrangian function:

\[
L(\omega, b, e, \alpha) = J(\omega, e) - \sum_{i=1}^{N} \alpha_i\!\left[\left\{\omega^{T}\phi(x_i) + b\right\} + e_i - y_i\right]
= \frac{\omega^{T}\omega}{2} + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^{2} - \sum_{i=1}^{N} \alpha_i\!\left[\left\{\omega^{T}\phi(x_i) + b\right\} + e_i - y_i\right]
\tag{14}
\]

where the $\alpha_i \in \mathbb{R}$ are Lagrange multipliers. The optimality conditions of the LSSVM model are:

\[
\frac{\partial L}{\partial \omega} = 0 \Rightarrow \omega = \sum_{i=1}^{N} \alpha_i \phi(x_i), \quad
\frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{N} \alpha_i = 0, \quad
\frac{\partial L}{\partial e_i} = 0 \Rightarrow \alpha_i = \gamma e_i, \quad
\frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow \omega^{T}\phi(x_i) + b + e_i - y_i = 0
\tag{15}
\]

After eliminating $\omega$ and $e$, instead of a quadratic programming (QP) problem, the following linear system is obtained:

\[
\begin{bmatrix} 0 & 1_N^{T} \\ 1_N & \Omega + \gamma^{-1} I_N \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ Y \end{bmatrix}
\tag{16}
\]

where $1_N = [1, \ldots, 1]^{T}$, $I_N$ is the $N$th-order identity matrix, $\Omega \in \mathbb{R}^{N \times N}$ is the kernel matrix defined by $\Omega_{ij} = K(x_i, x_j) = \phi(x_i)^{T}\phi(x_j)$, and $Y = [y_1, \ldots, y_N]^{T}$.

Here, $K(x, x_i) = \exp\!\left(-\lVert x - x_i \rVert^{2} / \sigma^{2}\right)$ is chosen as the kernel function, where $\sigma \in \mathbb{R}^{+}$ is a scale parameter governing the scaling of the inputs in the RBF kernel [17,18].

The wavelet least-squares support vector machine (WLSSVM) model combines the potential of the WD algorithm with LSSVM processing to obtain optimum nonlinear approximation ability. The WLSSVM model consists of an input layer, a hidden layer and an output layer, and has been used successfully for forecasting approximations [16].
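As a worked illustration of Eqs. (14)-(16), the following NumPy sketch solves the LSSVM linear system with an RBF kernel. The data, $\gamma$ and $\sigma$ values are arbitrary placeholders rather than the tuned parameters of the study.

```python
# Sketch: LSSVM regression by solving the linear system of Eq. (16).
import numpy as np

def rbf_kernel(A, B, sigma):
    """Omega_ij = exp(-||a_i - b_j||^2 / sigma^2)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / sigma**2)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    N = len(y)
    Omega = rbf_kernel(X, X, sigma)
    # Assemble [[0, 1^T], [1, Omega + I/gamma]] [b; alpha] = [0; y].
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = Omega + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                                # b, alpha

def lssvm_predict(Xnew, Xtrain, b, alpha, sigma=1.0):
    return rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b    # f(x) = sum_i alpha_i K(x, x_i) + b

# Toy usage with placeholder data.
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(80)
b, alpha = lssvm_fit(X, y, gamma=50.0, sigma=0.5)
print("fit error:", np.max(np.abs(lssvm_predict(X, X, b, alpha, 0.5) - y)))
```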

3.4 Multivariate Adaptive Regression Splines (MARS)

The MARS model is a scheme for forecasting continuous numeric outcomes. It is implemented in two stages, comprising forward and backward stepwise techniques. The forward stepwise technique considers a large set of input variables (basis functions) with different knots; however, this step can produce a complex, multi-layered model [19], and such a model tends to have weak forecasting performance. To increase forecast accuracy, the backward stepwise technique eliminates unnecessary variables from the previously chosen set, i.e., those with little effect on the approximation, which are pruned by MARS. The projection of the input variable $x$ onto a new output variable $y$ is based on so-called basis functions that define points of inflection along the input range [20]:

\[
y =
\begin{cases}
\max(0,\; x - c)\\
\max(0,\; c - x)
\end{cases}
\tag{17}
\]

In these functions, $x$ is the input and the chosen threshold value $c$ is called a knot. The functions are used in the forward-backward stepwise techniques to locate, for each input variable, the knots at which the value of the function changes. These basis functions are called spline functions and form a reflected pair about the knot $c$. The standard form of the MARS model is [21]:

\[
y = f(x) = c_0 + \sum_{i=1}^{M} c_i B_i(x)
\tag{18}
\]

where $y$ is the output variable estimated by the MARS model, $c_0$ is a constant, $c_i$ is the coefficient of the $i$th basis function, determined by minimising the Root Mean Squared Error (RMSE), and $B_i(x)$ is the $i$th basis function. The optimal MARS model is selected on the basis of the smallest value of the Generalised Cross-Validation (GCV) criterion, defined as follows [17]:

\[
\mathrm{GCV}(M) = \frac{\dfrac{1}{n}\sum_{i=1}^{n}\left[y_i - f(x_i)\right]^{2}}{\left(1 - \dfrac{C(M)}{n}\right)^{2}}
\tag{19}
\]

where $y_i$ is the target output, $f(x_i)$ is the predicted output, $n$ is the number of inputs, and $C(M)$ is a penalty, expressed as follows:

\[
C(M) = d \times M + M + 1
\tag{20}
\]

where $d$ is the penalty for each basis function included in the model and $M$ is the number of basis functions. The Wavelet Multivariate Adaptive Regression Splines (WMARS) model combines the prospective techniques of the WD and MARS to achieve robust nonlinear estimation capability. The WMARS architecture is based on a multi-layer perceptron (MLP); the model relies on an input layer, a hidden layer and an output layer and is used for forecast estimations [22].
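The following simplified NumPy sketch illustrates Eqs. (17)-(20): it builds reflected hinge pairs at candidate knots, fits the coefficients of Eq. (18) by least squares, and ranks candidate knot sets by GCV. It omits the full forward-backward stepwise search of a real MARS implementation, and the knots, penalty $d$ and data are illustrative assumptions.

```python
# Sketch: hinge basis functions, least-squares coefficients and GCV (Eqs. (17)-(20)).
import numpy as np

def hinge_pair(x, c):
    """Reflected pair of basis functions about knot c, Eq. (17)."""
    return np.maximum(0.0, x - c), np.maximum(0.0, c - x)

def fit_mars_like(x, y, knots, d=3.0):
    """Least-squares fit of Eq. (18) on fixed knots, scored by GCV (Eqs. (19)-(20))."""
    columns = [np.ones_like(x)]                       # intercept c0
    for c in knots:
        columns.extend(hinge_pair(x, c))
    B = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    resid = y - B @ coef
    n, M = len(x), B.shape[1] - 1                     # M basis functions (excluding c0)
    C_M = d * M + M + 1                               # Eq. (20)
    gcv = np.mean(resid**2) / (1.0 - C_M / n) ** 2    # Eq. (19)
    return coef, gcv

# Toy usage with placeholder data: compare two candidate knot sets by GCV.
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, 0.5 * x, 2.0 + 1.5 * (x - 4)) + 0.2 * rng.standard_normal(200)
for knots in ([4.0], [2.0, 7.0]):
    _, gcv = fit_mars_like(x, y, knots)
    print("knots", knots, "GCV =", round(gcv, 4))
```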

3.5 Statistical Parameters Assessment

Statistical parameters are used to demonstrate the forecasting effectiveness of the models by comparing the actual and forecasted values. The Mean Absolute Error (MAE), the Root Mean Square Error (RMSE), and the correlation coefficient (CC) are commonly used to determine the efficiency of the models and how well the outcomes fit the best-fit line [23,24]:

\[
\mathrm{MAE} = \frac{1}{m} \sum_{t=1}^{m} \left| y_t - \hat{y}_t \right|
\tag{21}
\]

\[
\mathrm{RMSE} = \sqrt{\frac{1}{m} \sum_{t=1}^{m} \left( y_t - \hat{y}_t \right)^{2}}
\tag{22}
\]

\[
\mathrm{CC} = \frac{\dfrac{1}{m}\sum_{t=1}^{m}\left(y_t - \bar{y}_t\right)\left(\hat{y}_t - \bar{\hat{y}}_t\right)}{\sqrt{\dfrac{1}{m}\sum_{t=1}^{m}\left(y_t - \bar{y}_t\right)^{2}}\;\sqrt{\dfrac{1}{m}\sum_{t=1}^{m}\left(\hat{y}_t - \bar{\hat{y}}_t\right)^{2}}}
\tag{23}
\]

where $y_t$, $\hat{y}_t$, $\bar{y}_t$ and $\bar{\hat{y}}_t$ are the actual (observed streamflow), forecasted, average observed and average forecasted values at time $t$, respectively, and $m$ is the total number of observations.

The statistical parameters (21)-(23) have been used to assess the impact of the different models discussed. The MAE statistic gives an appropriate picture of the error of the projected values, whereas the RMSE statistic measures the deviation of the model between observed and projected values; the smallest values of these statistics are the evaluation criteria for the best model. Similarly, the degree of linear correlation is measured by the CC, with higher values indicating a better fit to the observed flow [25].
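A minimal NumPy sketch of Eqs. (21)-(23), using placeholder arrays in place of the rivers' observed and forecasted series:

```python
# Sketch: MAE, RMSE and CC as in Eqs. (21)-(23).
import numpy as np

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))                        # Eq. (21)

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))                # Eq. (22)

def cc(y, yhat):
    yc, fc = y - y.mean(), yhat - yhat.mean()
    return np.mean(yc * fc) / (np.sqrt(np.mean(yc**2)) * np.sqrt(np.mean(fc**2)))  # Eq. (23)

# Placeholder observed/forecasted values.
rng = np.random.default_rng(5)
observed = rng.gamma(2.0, 50.0, size=120)
forecast = observed + rng.normal(0, 10, size=120)
print("MAE =", mae(observed, forecast),
      "RMSE =", rmse(observed, forecast),
      "CC =", cc(observed, forecast))
```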

4  Results and Discussion

This article examines the impact of wavelets on traditional forecasting models by fitting input hydrological time-series datasets collected from the Indus and Chenab Rivers. The computational code of the discussed forecasting models was written in MATLAB, including its wavelet toolbox.

Six (M1-M6) appropriate input data specimens have been prepared and used in the training and testing phases for the traditional and WD-combined forecast models, as shown in Table 1. The M1-M6 and WM1-WM6 input combinations denote sets of input variables based on earlier analysis of the monthly river streamflow rates in Figs. 1 and 2.

The training dataset of each model is used to approximate the parameters, and the testing dataset is used to choose the best combination model among the numbers of hidden layers considered. A trial-and-error technique is used to estimate the optimum complexity of the discussed models. The outcomes are evaluated with statistical measures, namely the Correlation Coefficient (CC), the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE). The estimates for both streamflow datasets are presented in Tables 2-7 in terms of the accuracy and errors of the forecasting time-series models, i.e., LSSVM, WLSSVM, ANN, WANN, MARS and WMARS, respectively.
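The trial-and-error selection can be sketched as a simple search over candidate input combinations and hidden-layer sizes, keeping the configuration with the lowest testing RMSE. The candidate lag sets, regressor and train/test split below are illustrative assumptions rather than the study's actual settings.

```python
# Sketch: trial-and-error model selection by testing-phase RMSE.
import numpy as np
from itertools import product
from sklearn.neural_network import MLPRegressor

def lagged_inputs(x, lags):
    """Build an input matrix of past values x[t-lag] and the target x[t]."""
    p = max(lags)
    X = np.column_stack([x[p - lag:-lag] for lag in lags])
    return X, x[p:]

rng = np.random.default_rng(6)
flow = np.sin(np.arange(484) * 2 * np.pi / 12) + 0.3 * rng.standard_normal(484)

candidates = {"M1": [1], "M2": [1, 2], "M3": [1, 2, 3]}      # illustrative lag sets
best = None
for (name, lags), hidden in product(candidates.items(), [5, 10]):
    X, y = lagged_inputs(flow, lags)
    split = int(0.8 * len(y))
    net = MLPRegressor(hidden_layer_sizes=(hidden,), activation="tanh",
                       max_iter=2000, random_state=0).fit(X[:split], y[:split])
    err = np.sqrt(np.mean((net.predict(X[split:]) - y[split:]) ** 2))
    if best is None or err < best[0]:
        best = (err, name, hidden)
print("best testing RMSE %.4f with combination %s, %d hidden nodes" % best)
```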

The testing-phase outcomes of the LSSVM model in Table 2 show that the M6 input combination performed well for both the Indus and Chenab Rivers. Likewise, Table 3 shows that the WM2 and WM6 input combinations of the WLSSVM model gave good results for the Indus and Chenab Rivers, respectively. Comparing the two tables, Table 3 yields more encouraging outcomes than Table 2, owing to the wavelet-transform-combined model.

Additionally, Table 4 shows that the M2, M5 and M1, M3 input combinations of the ANN model perform better for the Indus and Chenab Rivers, respectively. Similarly, the WM5 and WM1 input combinations of the WANN model give improved outcomes in Table 5 for the Indus and Chenab Rivers. Comparing the two tables, the estimations in Table 5, obtained with the combined model, are more attractive than those in Table 4.

Furthermore, Table 6 shows that the M3 and M6 input combinations of the MARS model obtain appropriate results for the Indus and Chenab Rivers, while Table 7 shows that the WM4 and WM5 input combinations of the WMARS model yield improved estimations for the two rivers. The valuations in Table 7 are therefore improved, owing to the use of the combined model, relative to Table 6.

Figs. 3 and 4 present scatter plots with fitted linear trend lines for the LSSVM, WLSSVM, ANN, WANN, MARS and WMARS models in the testing phase, showing their performance relative to the observed data for the Indus and Chenab Rivers. The trend lines are described by the regression-line equation (y = a + bx), fitted in MATLAB for each model.

Figure 3: Scatter plots with linear trend lines of predicted vs. observed Indus streamflow during the testing phase

Figure 4: Scatter plots with linear trend lines of predicted vs. observed Chenab streamflow during the testing phase

Fig. 3 clearly shows that the forecasting models with wavelet transforms perform remarkably well, since their scatter plots lie closer to the fitted line than those of the traditional models.

Similarly, Fig. 4 shows that the forecasting models with wavelet transforms have better forecasting scatter plots than the traditional models.

The approximations of each model, scrutinised through the various statistical parameters, are summarised in Table 8; the small errors and large correlation coefficients are evidence that the models with wavelet decomposition are more efficient than the traditional models.

The CC values of the WD-combined models are close to 100% for the Indus and Chenab datasets, compared to those of the traditional models. The WD algorithm combined with the traditional models therefore has a tremendous impact and acts as a device delivering improved estimations for both rivers. Consequently, the combined WLSSVM, WANN and WMARS forecasting methodologies, used as the second type of model, provide excellent results compared with the first type of traditional models, LSSVM, ANN and MARS.

5  Conclusions

It is concluded that the use of the wavelet application with the addressed forecasting time-series models improves their efficiency and yields excellent results. The traditional forecasting time-series models have been enhanced by exploiting the impact of the wavelet algorithm. The significance of the wavelet information lies in improving the efficiency of the models, which determines appropriate outcomes for time-series modelling. The performance of the wavelet-combined models is consistent with their associated resampling outcomes. Filtering of the streamflow time-series datasets was achieved through the WD application, a feature not available in the traditional models. Nonlinear input-combination models were constructed with the WD application and used as input estimators with the traditional models, which improves the forecast efficiency of the combined models. The WD application has therefore become an efficient and valuable tool for analysing and simulating time-series models in various domains.

Thus, researchers have a good argument for the extensive future use of the wavelet algorithm, whether to build novelty into a model or to improve existing models with a suitable transform other than the wavelet. The WD algorithm provides an optimal ability to pick appropriate inputs and logically improves the output of the traditional forecasting models.

Acknowledgement: This study has been supported by the Department of Basic Sciences & Related Studies, Mehran University of Engineering & Technology, Jamshoro, Sindh, Pakistan. The authors gratefully acknowledge the institute for its support and cooperation in the research activity and for providing a healthy research environment and facilities.

Funding Statement: The authors have not received any financial support for the research, authorship, and publication of this article.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. AL W. S., Ismail, M. T., Karim, S. A. A. (2010). Forecasting financial time series data base on wavelet transforms and neural network model. International Conference on Mathematical Applications in Engineering , pp. 3–5. Kuala Lumpur, Malaysia.
  2. Shao, X. G., Leung, A. K. M., & Chau, F. T. (2003). Wavelet: A new trend in chemistry. Accounts of Chemical Research, 36(4), 276-283. [Google Scholar] [CrossRef]
  3. Kisi, O. (2010). Wavelet regression model for short-term streamflow forecasting. Journal of Hydrology, 389(3–4), 344-353. [Google Scholar] [CrossRef]
  4. Kaboudan, M. (2004). Wavelets in forecasting. School of Business, University of Redlands, Redlands.
  5. Nourani, V., Baghanam, A. H., Adamowski, J., & Kisi, O. (2014). Applications of hybrid wavelet–artificial intelligence models in hydrology: A review. Journal of Hydrology, 514, 358-377. [Google Scholar] [CrossRef]
  6. Lee, H. Y., Beh, W. L., & Lem, K. H. (2020). Wavelet as a viable alternative for time series forecasting. Austrian Journal of Statistics, 49(3), 38-47. [Google Scholar] [CrossRef]
  7. Esfetanaj, N. N., Nojavan, S. (2018). The use of hybrid neural networks, wavelet transform and heuristic algorithm of WIPSO in smart grids to improve short-term prediction of load, solar power, and wind energy. Operation of Distributed Energy Resources in Smart Distribution Networks, 75–100. DOI 10.1016/B978-0-12-814891-4.00004-7. [CrossRef]
  8. Pritpal, S. (2021). FQTSFM: A fuzzy-quantum time series forecasting model. Information Sciences, 566, 57–79. DOI 10.1016/j.ins.2021.02.024. [CrossRef]
  9. Wang, W., & Ding, J. (2003). Wavelet network model and its application to the prediction of hydrology. Nature and Science, 1(1), 67-71. [Google Scholar]
  10. Sang, Y., & Wang, D. (2008). Wavelets selection method in hydrologic series wavelet analysis. Journal of Hydraulic Engineering, 39(3), 295-300. [Google Scholar]
  11. Mirbagheri, S. A., Nourani, V., Rajaee, T., & Alikhani, A. (2010). Neuro-fuzzy models employing wavelet analysis for suspended sediment concentration prediction in rivers. Hydrological Sciences, 55(7), 1175-1189. [Google Scholar] [CrossRef]
  12. Pandhiani, S. M., & Shabri, A. B. (2015). Time series forecasting by using hybrid models for monthly streamflow data. Applied Mathematical Sciences, 9(57), 2809-2829. [Google Scholar] [CrossRef]
  13. Senkal, S., Ozgonenel, O. (2013). Performance analysis of artificial and wavelet neural networks for short term wind speed prediction. 8th International Conference on Electrical and Electronics Engineering, pp. 196–198. Antalya, Turkey.
  14. Pritpal, S., & Huang, Y. P. (2019). A high-order neutrosophic-neuro-gradient descent algorithm-based expert system for time series forecasting. International Journal of Fuzzy Systems, 21(7), 2245-2257. [Google Scholar] [CrossRef]
  15. Yonaba, H., Anctil, F., & Fortin, V. (2010). Comparing sigmoid transfer functions for neural network multistep ahead streamflow forecasting. Journal of Hydrologic Engineering, 15(4), 275-283. [Google Scholar] [CrossRef]
  16. Pandhiani, S. M., & Shabri, A. B. (2013). Time series forecasting using wavelet-least squares support vector machines and wavelet regression models for monthly stream flow data. Open Journal of Statistics, 3(3), 183. [Google Scholar] [CrossRef]
  17. Shaikh, W., Shah, S., Solangi, M., & Pandhiani, S. (2019). Forecasting analysis of GMDH model with LSSVM and MARS models for hydrological datasets (case study). Indian Journal of Science and Technology, 12, 1-6. [Google Scholar] [CrossRef]
  18. Ismail, S., Shabri, A., & Samsudin, R. (2012). A hybrid model of self organising maps and least square support vector machine for river flow forecasting. Hydrology and Earth System Sciences, 16(11), 4417-4433. [Google Scholar] [CrossRef]
  19. Bansal, P., Salling, J. (2013). Multivariate Adaptive Regression Splines (MARS). http://www.ideal.ece.utexas.edu/courses/ee380l_ese/2013/mars.pdf.
  20. Adamowski, J., Chan, H. F., Prasher, S. O., & Sharda, V. N. (2012). Comparison of multivariate adaptive regression splines with coupled wavelet transform artificial neural networks for runoff forecasting in himalayan micro-watersheds with limited data. Journal of Hydroinformatics, 14(3), 731-744. [Google Scholar] [CrossRef]
  21. Fathian, F., Mehdizadeh, S., Kozekalani Sales, A., & Safari, M. J. S. (2019). Hybrid models to improve the monthly river flow prediction: Integrating artificial intelligence and non-linear time series models. Journal of Hydrology, 575, 1200-1213. [Google Scholar] [CrossRef]
  22. Kao, L. J., Chiu, C. C., Lu, C. J., & Chang, C. H. (2013). A hybrid approach by integrating wavelet-based feature extraction with MARS and SVR for stock index forecasting. Decision Support Systems, 54(3), 1228-1244. [Google Scholar] [CrossRef]
  23. Sahoo, B. B., Jha, R., Singh, A., & Kumar, D. (2019). Application of support vector regression for modeling low flow time series. KSCE Journal of Civil Engineering, 23(2), 923-934. [Google Scholar] [CrossRef]
  24. Pritpal, S. (2020). A novel hybrid time series forecasting model based on neutrosophic–PSO approach. International Journal of Machine Learning and Cybernetics, 11, 1643-1658. [Google Scholar] [CrossRef]
  25. Sun, Y., Niu, J., & Sivakumar, B. (2019). A comparative study of models for short-term streamflow forecasting with emphasis on wavelet-based approach. Stochastic Environmental Research and Risk Assessment, 33, 1875-1891. [Google Scholar] [CrossRef]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.