
Open Access

ARTICLE

Statistical Inference for Kumaraswamy Distribution under Generalized Progressive Hybrid Censoring Scheme with Application

Magdy Nagy*

Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh, 11451, Saudi Arabia

* Corresponding Author: Magdy Nagy. Email: email

Computer Modeling in Engineering & Sciences 2025, 143(1), 185-223. https://doi.org/10.32604/cmes.2025.061865

Abstract

In the present work, we propose expected Bayesian and hierarchical Bayesian approaches to estimate the shape parameter and hazard rate of the Kumaraswamy distribution under a generalized progressive hybrid censoring scheme. These estimates are obtained using gamma priors under various loss functions, namely the squared error, entropy, weighted balance, and minimum expected loss functions. A Monte Carlo simulation study is carried out to evaluate the effectiveness of the suggested estimators; by comparing them in terms of mean squared error, it provides a quantitative assessment of their accuracy and efficiency under various conditions. Additionally, the monthly water capacity of the Shasta reservoir is examined to offer a real-world example of how the suggested estimates may be applied and how they perform.

Keywords

Bayesian estimation; E-Bayesian estimation; H-Bayesian estimation; generalized progressive hybrid; Kumaraswamy distribution; censoring sample; maximum likelihood estimation

1  Introduction

Kumaraswamy [1] created a novel two-parameter distribution with hydrological concerns in mind, because conventional probability distributions such as the beta, normal, log-normal, and Student-t distributions, as well as other empirical distributions, do not fit hydrological data well. For further details on how the Kumaraswamy distribution fits several natural phenomena, including engineering modeling, daily rainfall, water flows, and other pertinent fields, see [2-4] and [5]. This is particularly true for quantities that have upper and lower bounds, such as economic statistics, test scores, human height, and air temperatures. Although the Kumaraswamy distribution is comparable to the beta distribution, it has a closed-form cumulative distribution function (CDF) on the closed interval [0, 1]. The probability density function (PDF), CDF, reliability function (RF), and hazard rate function (HRF) of the Kumaraswamy distribution can be written in exponential form, respectively, as follows:

$$f(z;\lambda,\delta)=\lambda\delta z^{\delta-1}\left(1-z^{\delta}\right)^{\lambda-1}=\frac{\lambda\delta z^{\delta-1}}{1-z^{\delta}}\exp\left[\lambda\ln\left(1-z^{\delta}\right)\right],\quad 0\le z\le 1,\tag{1}$$

$$F(z;\lambda,\delta)=1-\left(1-z^{\delta}\right)^{\lambda}=1-\exp\left[\lambda\ln\left(1-z^{\delta}\right)\right],\quad 0\le z\le 1,\tag{2}$$

$$R(t)=\left(1-t^{\delta}\right)^{\lambda}=\exp\left[\lambda\ln\left(1-t^{\delta}\right)\right],\quad t>0,\tag{3}$$

$$H(t)=\frac{\lambda\delta t^{\delta-1}}{1-t^{\delta}},\quad t>0,\tag{4}$$

where $\lambda$ and $\delta$ are non-negative shape and scale parameters, respectively. In recent years, estimation for the Kumaraswamy distribution has increasingly attracted the attention of researchers; see [6-10] for more information. Fig. 1a-d shows the PDF, CDF, RF, and HRF of the Kumaraswamy distribution, respectively, for different values of the shape and scale parameters.
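For reference, Eqs. (1)-(4) are direct to evaluate numerically. The sketch below is a minimal Python transcription; the function names (`kw_pdf`, etc.) are our own and not from the paper.

```python
import numpy as np

def kw_pdf(z, lam, delta):
    """PDF of the Kumaraswamy distribution, Eq. (1)."""
    return lam * delta * z**(delta - 1) * (1.0 - z**delta)**(lam - 1)

def kw_cdf(z, lam, delta):
    """CDF, Eq. (2)."""
    return 1.0 - (1.0 - z**delta)**lam

def kw_rf(t, lam, delta):
    """Reliability function, Eq. (3); equals 1 - CDF."""
    return (1.0 - t**delta)**lam

def kw_hrf(t, lam, delta):
    """Hazard rate, Eq. (4); equals PDF divided by RF."""
    return lam * delta * t**(delta - 1) / (1.0 - t**delta)
```

As a sanity check, $H(t)=f(t)/R(t)$ and $F(t)+R(t)=1$ hold identically on $(0,1)$.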


Figure 1: The PDF, CDF, RF and HRF for the Kumaraswamy distribution with different values of shape and scale parameters

Due to time, cost, and resource limitations, quality assessments that use lifetime data may encounter practical issues. To address these problems, censoring schemes (CSs) have been employed. The progressive censoring scheme (PCS) has become the norm due to its versatility and the range of data it provides. Under a PCS with $n$ experimental units, $R_1$ surviving items are randomly removed from the $(n-1)$ surviving items at the moment of the first failure in the test. At the moment of the second failure, $R_2$ surviving items are randomly removed from the $(n-R_1-2)$ surviving items. The procedure continues until the $m$th failure time, at which all remaining $R_m$ surviving items are removed from the experiment. The $m$ ordered observed failure times are denoted by $(Z_{1:m:n}^{R_1},Z_{2:m:n}^{R_2},\ldots,Z_{m:m:n}^{R_m})$ and referred to as a progressively censored sample of size $m$ from a sample of size $n$ with PCS $(R_1,R_2,\ldots,R_m)$, $m\le n$. The use of PCS is often motivated by practical considerations, such as cost constraints or limited resources, where it may not be feasible or efficient to monitor the subjects continuously or until failure occurs. Data analysis under PCS poses challenges because the censoring mechanism relies on observed failure times, which may be long, and the time intervals between successive observations may vary; this requires the development of specialized statistical methods for estimation and inference. A generalized progressive hybrid censoring scheme (GPHCS) was recently presented by Cho et al. [11]; it allows the experimenter to control the total test duration within an allocated time by fixing a suitable total test time $T$. Fig. 2 shows the three different cases of GPHCS.
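As a rough illustration of how a GPHC sample might be generated, the sketch below first draws a progressively Type-II censored Kumaraswamy sample via the Balakrishnan-Sandhu uniform transformation (a standard algorithm, not detailed in this paper) and then applies the stopping rule with threshold time T and minimum failure count k. The function name `gphc_sample` and the case labels are illustrative assumptions.

```python
import numpy as np

def gphc_sample(n, m, k, T, R, lam, delta, rng):
    """Sketch: draw one GPHC sample from the Kumaraswamy distribution.

    Step 1: progressively Type-II censored sample of size m
            (Balakrishnan-Sandhu uniform transformation).
    Step 2: GPHC stopping rule with threshold T and minimum failures k."""
    R = np.asarray(R)
    assert n == m + R.sum(), "scheme must satisfy n = m + R_1 + ... + R_m"
    W = rng.uniform(size=m)
    gam = np.arange(1, m + 1) + np.cumsum(R[::-1])   # i + R_m + ... + R_{m-i+1}
    V = W ** (1.0 / gam)
    U = 1.0 - np.cumprod(V[::-1])                    # censored uniform order stats
    z = (1.0 - (1.0 - U) ** (1.0 / lam)) ** (1.0 / delta)  # Kumaraswamy quantile
    if T < z[k - 1]:                 # Case I: terminate at the k-th failure
        return z[:k], "I"
    if T < z[m - 1]:                 # Case II: terminate at T, s failures seen
        s = int(np.sum(z < T))
        return z[:s], "II"
    return z, "III"                  # Case III: terminate at the m-th failure
```

The returned failure times are strictly increasing and lie in the Kumaraswamy support $(0,1)$.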


Figure 2: Layout representation of the generalized progressive hybrid censoring scheme

In recent years, GPHCS has gained considerable attention in reliability and survival analysis. Based on GPHCS samples from various distributions, Bayesian estimation has been studied by Nagy et al. [12,13] and Nagy et al. [14].

Bayesian estimation involves updating prior beliefs with observed data to obtain a posterior distribution. In expected Bayesian (E-Bayesian), we impose certain restrictions on our hyperparameters; it provides more stable and reliable estimates, especially in situations with uncertainty about the appropriate prior specification. The hierarchical Bayesian (H-Bayesian) method is more robust because it entails a two-stage process for constructing the prior distribution. However, a drawback of this approach lies in the intricate integrals within the estimator expression. These integrals often necessitate resolution through numerous numerical approximation techniques, making calculations tedious and time-consuming.

In the recent past, E- and H-Bayesian estimations have been considered for the parameters and the reliability characteristics under different censored data. Han [15] discussed the E-Bayesian and H-Bayesian estimations of the parameter derived from the Pareto distribution under different loss functions. Okasha et al. [16] discussed the E-Bayesian estimation for geometric distribution. Yousefzadeh [17] considered the E- and H-Bayesian estimates for Pascal distribution. Rabie et al. [18] discussed the Bayesian and E-Bayesian estimation methods of the parameter and the reliability function of the Burr-X distribution based on a generalized Type-I hybrid censoring scheme. Yaghoobzadeh [19] conducted research on E-Bayesian and H-Bayesian estimation of a scalar parameter in the Gompertz distribution under type-II censoring schemes based on fuzzy data. Their study focused on developing estimation techniques that account for uncertainty and imprecision in the data. Nassar et al. [20] conducted research on E-Bayesian estimation and associated properties of the simple step-stress model for the exponential distribution based on type-II censoring. Their study focused on developing estimation methods that account for the censoring scheme employed and investigating the properties of the estimated parameters. Nagy et al. [21] provided an E-Bayesian estimation for an exponential model based on simple step stress with Type I hybrid censored data. They developed estimation methods to obtain reliable parameter estimates considering the specific censoring scheme employed. Balakrishnan et al. [22] discussed the best linear unbiased estimation (BLUE) and maximum likelihood estimation (MLE) techniques for exponential distributions under general progressive type-II censored samples. They provided statistical methodologies for parameter estimation under this type of censoring scenario. 
Mohie El-Din et al. [23] proposed an E-Bayesian estimation approach for the parameters and HRF of the Gompertz distribution using type-II PCS. Their research aimed to provide reliable estimation techniques for this specific censoring scheme and demonstrated the application of these methods in practical scenarios.

Recent research has demonstrated that E- and H-Bayesian estimates are more effective than both classical and Bayesian approaches. To the best of our knowledge, no article in the literature provides both E- and H-Bayesian estimation for the Kumaraswamy distribution under GPHCS. Motivated by the effectiveness of E- and H-Bayesian estimates, the usefulness of the GPHCS, and the importance of the Kumaraswamy distribution in reliability applications, our main objective in this paper is to estimate the shape parameter and the HRF of the Kumaraswamy distribution based on GPHCS using E-Bayesian and H-Bayesian approaches, which we believe will be of profound interest to applied statisticians and quality control engineers.

The structure of the article is as follows: Section 1 provides an introduction, outlining the research problem and objectives. Section 2 focuses on the MLE and Bayesian estimation techniques for the shape parameter and HRF, considering different loss functions. Section 3 presents the derivation of E-Bayesian estimators for the shape parameter under different loss functions. In Section 4, H-Bayesian estimators are obtained, again considering different loss functions. To assess the performance of the estimators, a simulation study is conducted in Section 5, where different loss functions are considered and the estimators are compared. Section 6 presents the analysis of real-life data to demonstrate the practical application of the proposed estimators. Section 7 concludes by summarizing the key findings and implications of the study. Finally, the properties of the E- and H-Bayesian estimators are discussed in Appendix A.

2  Maximum Likelihood and Bayesian Estimation

In this section, we estimate the shape parameter $\lambda$ and the HRF $H(t)$ of the Kumaraswamy distribution using the MLE method. Let $\underline{Z}=(Z_{1:n^{*}:n}^{R_1},Z_{2:n^{*}:n}^{R_2},\ldots,Z_{n^{*}:n^{*}:n}^{R_{n^{*}}})=(Z_1,Z_2,\ldots,Z_{n^{*}})$, where $Z_{j:n^{*}:n}^{R_j}=Z_j$ for simplicity of notation, be the ordered observed failure times under GPHCS from an absolutely continuous distribution with CDF $F(\cdot)$ and RF $R(\cdot)$; the joint PDF is given by

$$f_{\underline{Z}}(\underline{z})=\left[\prod_{i=1}^{n^{*}}\sum_{j=i}^{m}\left(R_j+1\right)\right]\prod_{i=1}^{n^{*}}f\left(z_{i:n^{*}:n}\right)\left[R\left(z_{i:n^{*}:n}\right)\right]^{R_i}\left[R(T)\right]^{R_{\tau}},\tag{5}$$

$$0<z_1<z_2<\cdots<z_{n^{*}}<\infty,$$

where $R_j$ is the $j$th component of the censoring vector $R$. The observed data and censoring numbers in the three cases are

$$[\underline{z},R]=\begin{cases}\left[(z_1,R_1),\ldots,(z_s,R_s),(z_{s+1},0),\ldots,(z_{k-1},0),\left(z_k,R_k=n-k-\sum_{j=1}^{s}R_j\right)\right],&\text{Case-I},\\[4pt]\left[(z_1,R_1),\ldots,(z_s,R_s)\right],&\text{Case-II},\\[4pt]\left[(z_1,R_1),\ldots,(z_m,R_m)\right],&\text{Case-III},\end{cases}\tag{6}$$

and Rτ is the number of units eliminated at time T, as determined by

$$R_{\tau}=\begin{cases}0,&\text{Case-I},\\ n-s-\sum_{j=1}^{s}R_j,&\text{Case-II},\\ 0,&\text{Case-III},\end{cases}\qquad\text{and}\qquad n^{*}=\begin{cases}k,&\text{Case-I},\\ s,&\text{Case-II},\\ m,&\text{Case-III}.\end{cases}\tag{7}$$

Let $(Z_1,Z_2,\ldots,Z_{n^{*}})$ be the ordered data observed under GPHCS from the Kumaraswamy distribution with PDF and RF as in Eqs. (1) and (3); the likelihood function of $\lambda,\delta$ can be derived from Eq. (5) as

$$L(\lambda,\delta;\underline{z})=C\,(\lambda\delta)^{n^{*}}\prod_{j=1}^{n^{*}}\frac{z_j^{\delta-1}}{1-z_j^{\delta}}\exp\left[-\lambda D(\delta,\underline{z})\right],\tag{8}$$

where $C=\prod_{j=1}^{n^{*}}\sum_{i=j}^{m}(R_i+1)$ and $D(\delta,\underline{z})=-\left[\sum_{j=1}^{n^{*}}(R_j+1)\ln\left(1-z_j^{\delta}\right)+R_{\tau}\ln\left(1-T^{\delta}\right)\right]$. Taking the logarithm of Eq. (8), we get

$$\log L(\lambda\mid\underline{z})\propto n^{*}\log(\lambda\delta)+(\delta-1)\sum_{j=1}^{n^{*}}\log z_j-\sum_{j=1}^{n^{*}}\log\left(1-z_j^{\delta}\right)-\lambda D(\delta,\underline{z}).$$

Differentiating the above equation with respect to $\lambda$ and equating to zero gives

$$\frac{\partial\ln L(\lambda,\delta\mid\underline{z})}{\partial\lambda}=\frac{n^{*}}{\lambda}-D(\delta,\underline{z})=0,$$

$$\hat{\lambda}_{ML}=\frac{n^{*}}{D(\delta,\underline{z})}.\tag{9}$$

The ML estimate of H(t) for given δ is given as

$$\hat{H}_{ML}(t)=\hat{\lambda}_{ML}\,\frac{\delta t^{\delta-1}}{1-t^{\delta}}.\tag{10}$$
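The ML estimate in Eq. (9) is a one-line computation once $D(\delta,\underline{z})$ is formed. The sketch below assumes the sign convention that makes $D$ positive (the terms $\ln(1-z_j^{\delta})$ are negative); helper names such as `D_stat` are our own.

```python
import numpy as np

def D_stat(z, R, R_tau, T, delta):
    """D(delta, z) = -[sum_j (R_j+1) ln(1 - z_j^delta) + R_tau ln(1 - T^delta)];
    positive under this sign convention."""
    z, R = np.asarray(z), np.asarray(R)
    return -(np.sum((R + 1.0) * np.log(1.0 - z**delta))
             + R_tau * np.log(1.0 - T**delta))

def lambda_mle(z, R, R_tau, T, delta):
    """Eq. (9): hat-lambda_ML = n*/D, with n* = number of observed failures."""
    return len(z) / D_stat(z, R, R_tau, T, delta)

def hrf_mle(t, z, R, R_tau, T, delta):
    """Eq. (10): plug the MLE into the hazard rate H(t)."""
    lam_hat = lambda_mle(z, R, R_tau, T, delta)
    return lam_hat * delta * t**(delta - 1) / (1.0 - t**delta)
```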

2.1 Bayesian Estimation

Bayesian inference for the scale parameter of the Kumaraswamy distribution involves integrals with no closed form, and the corresponding E- and H-Bayesian estimators involve even more complicated integral expressions; numerical approximation of these quantities often leads to large errors. In this manuscript, we therefore estimate the shape parameter, whose Bayes estimator has a closed form, so that the E- and H-Bayesian estimates can be computed more accurately and efficiently. Here, we develop Bayesian estimators for the shape parameter $\lambda$ and the HRF of the Kumaraswamy distribution using various loss functions: the squared error loss function (SELF), entropy loss function (ELF), weighted balance loss function (WBLF), and minimum expected loss function (MELF).

We use the gamma distribution as the prior for $\lambda$ because it is conjugate. The gamma prior with shape parameter $a$ and rate parameter $b$ has the following PDF:

$$P(\lambda\mid a,b)=\frac{b^{a}\lambda^{a-1}}{\Gamma(a)}e^{-b\lambda};\quad \lambda,a,b>0.\tag{11}$$

Here the hyper-parameters a and b have been chosen based on a formula, which can be found in Dutta et al. [24]. Using the likelihood function and the prior in Eq. (11), the posterior distribution for the parameter λ becomes

$$P(\lambda\mid\underline{z})=\frac{L(\lambda\mid\underline{z})\,P(\lambda\mid a,b)}{\int_{0}^{\infty}L(\lambda\mid\underline{z})\,P(\lambda\mid a,b)\,d\lambda}=\frac{\left[g_1(\delta,\underline{z})\right]^{n^{*}+a}\lambda^{(n^{*}+a)-1}\exp\left[-\lambda g_1(\delta,\underline{z})\right]}{\Gamma(n^{*}+a)},\tag{12}$$

where $g_1(\delta,\underline{z})=b+D(\delta,\underline{z})$; we write it as $g_1$ for simplicity of notation.

2.2 Bayesian Estimate with SELF

The Bayes estimator of $\lambda$ under the squared error loss function (SELF), $\hat{\lambda}_{BS}$, is given by

$$\hat{\lambda}_{BS}=E\left[\lambda\mid\underline{z}\right],\tag{13}$$

provided that $E[\lambda\mid\underline{z}]$ exists and is finite. Then, using Eqs. (12) and (13) with the sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$, the Bayes estimator of $\lambda$ under SELF is given by

$$\hat{\lambda}_{BS}=E\left[\lambda\mid\underline{z}\right]=\int_{0}^{\infty}\lambda P(\lambda\mid\underline{z})\,d\lambda=\frac{n^{*}+a}{g_1}.\tag{14}$$

The Bayes estimate of H(t) for given δ is given as

$$\hat{H}_{BS}(t)=\hat{\lambda}_{BS}\,\frac{\delta t^{\delta-1}}{1-t^{\delta}}.\tag{15}$$

2.3 Bayesian Estimate with ELF

Dey et al. [25] discussed the ELF; the Bayes estimator of $\lambda$ under ELF, $\hat{\lambda}_{BE}$, is given by

$$\hat{\lambda}_{BE}=\left(E\left[\lambda^{-1}\mid\underline{z}\right]\right)^{-1},\tag{16}$$

provided that $E[\lambda^{-1}\mid\underline{z}]$ exists and is finite. Then, using Eqs. (12) and (16) with the sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$, the Bayes estimator of $\lambda$ under ELF is given by

$$\hat{\lambda}_{BE}=\left(E\left[\lambda^{-1}\mid\underline{z}\right]\right)^{-1}=\left(\int_{0}^{\infty}\frac{P(\lambda\mid\underline{z})}{\lambda}\,d\lambda\right)^{-1}=\frac{n^{*}+a-1}{g_1}.\tag{17}$$

The Bayes estimate of H(t) for given δ under ELF is given as

$$\hat{H}_{BE}(t)=\hat{\lambda}_{BE}\,\frac{\delta t^{\delta-1}}{1-t^{\delta}}.\tag{18}$$

2.4 Bayesian Estimate with WBLF

The WBLF can be expressed as in Nasir et al. [26]; the Bayes estimator of $\lambda$ under WBLF, $\hat{\lambda}_{BW}$, is given by

$$\hat{\lambda}_{BW}=\frac{E\left[\lambda^{2}\mid\underline{z}\right]}{E\left[\lambda\mid\underline{z}\right]},\tag{19}$$

provided that $E[\lambda^{2}\mid\underline{z}]$ and $E[\lambda\mid\underline{z}]$ exist and are finite. Then, using Eqs. (12) and (19) with the sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$, we have

$$E\left[\lambda^{2}\mid\underline{z}\right]=\int_{0}^{\infty}\lambda^{2}P(\lambda\mid\underline{z})\,d\lambda=\frac{(n^{*}+a)(n^{*}+a+1)}{\left[g_1\right]^{2}}\quad\text{and}\quad E\left[\lambda\mid\underline{z}\right]=\frac{n^{*}+a}{g_1}.$$

Hence, the Bayes estimator of λ under WBLF is given by

$$\hat{\lambda}_{BW}=\frac{n^{*}+a+1}{g_1}.\tag{20}$$

The Bayes estimate of H(t) for given δ under WBLF is given as

$$\hat{H}_{BW}(t)=\hat{\lambda}_{BW}\,\frac{\delta t^{\delta-1}}{1-t^{\delta}}.\tag{21}$$

2.5 Bayesian Estimate with MELF

Tummala et al. [27] defined the MELF, where the Bayes estimator of $\lambda$ under MELF, $\hat{\lambda}_{BM}$, is given by

$$\hat{\lambda}_{BM}=\frac{E\left[\lambda^{-1}\mid\underline{z}\right]}{E\left[\lambda^{-2}\mid\underline{z}\right]},\tag{22}$$

provided that $E[\lambda^{-1}\mid\underline{z}]$ and $E[\lambda^{-2}\mid\underline{z}]$ exist and are finite. Then, using Eqs. (12) and (22) with the sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$, we have

$$E\left[\lambda^{-2}\mid\underline{z}\right]=\int_{0}^{\infty}\frac{1}{\lambda^{2}}P(\lambda\mid\underline{z})\,d\lambda=\frac{\left[g_1\right]^{n^{*}+a}}{\Gamma(n^{*}+a)}\cdot\frac{\Gamma(n^{*}+a-2)}{\left[g_1\right]^{n^{*}+a-2}}\quad\text{and}\quad E\left[\lambda^{-1}\mid\underline{z}\right]=\frac{g_1}{n^{*}+a-1}.$$

Hence, by using MELF, the Bayes estimate is obtained as

$$\hat{\lambda}_{BM}=\frac{n^{*}+a-2}{g_1}.\tag{23}$$

The Bayes estimate of H(t) for given δ under MELF is given as

$$\hat{H}_{BM}(t)=\hat{\lambda}_{BM}\,\frac{\delta t^{\delta-1}}{1-t^{\delta}}.\tag{24}$$
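All four Bayes estimates share the form $(n^{*}+c)/g_1$ and differ only in the constant $c$. The sketch below collects them in one helper; the function name, dictionary keys, and the symbol `n_star` (the number of observed failures) are our own labels.

```python
def bayes_estimates(n_star, a, b, D):
    """Closed-form Bayes estimates of the shape parameter under the four
    loss functions (cf. Eqs. (14), (17), (20), (23)), with g1 = b + D."""
    g1 = b + D
    return {
        "SELF": (n_star + a) / g1,      # posterior mean
        "ELF":  (n_star + a - 1) / g1,  # inverse of E[1/lambda]
        "WBLF": (n_star + a + 1) / g1,  # E[lambda^2] / E[lambda]
        "MELF": (n_star + a - 2) / g1,  # E[1/lambda] / E[1/lambda^2]
    }
```

By construction the estimates are ordered as MELF < ELF < SELF < WBLF for any fixed data.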

3  E-Bayesian Estimation

According to Han [28], it is recommended to select the hyperparameters $a$ and $b$ in such a way that the prior distribution in Eq. (11) is a decreasing function of $\lambda$. The derivative of the prior with respect to $\lambda$ is

$$\frac{d}{d\lambda}P(\lambda\mid a,b)=\frac{b^{a}}{\Gamma(a)}\,\lambda^{a-2}e^{-b\lambda}\left[(a-1)-b\lambda\right].$$

Specifically, for $0<a<1$ and $b>0$, the prior distribution is a decreasing function of $\lambda$. The E-Bayesian estimate of $\lambda$ is given by

$$\hat{\lambda}_{EB}=\int_{0}^{1}\!\!\int_{0}^{k}\hat{\lambda}_{B}\,P(a,b)\,db\,da.$$

That is, the E-Bayesian estimate of $\lambda$ is the expectation of the Bayesian estimate $\hat{\lambda}_{B}$ with respect to a hyperprior distribution $P(a,b)$ of the hyperparameters $a$ and $b$, taken over a domain on which the prior density is a decreasing function of $\lambda$.

The specific distributions chosen for the hyperparameters $a$ and $b$ are as follows:

$$P_1(a,b)=\frac{2(k-b)}{k^{2}},\quad 0<a<1,\ 0<b<k,\tag{25}$$

$$P_2(a,b)=\frac{1}{k},\quad 0<a<1,\ 0<b<k,\tag{26}$$

and

$$P_3(a,b)=\frac{2b}{k^{2}},\quad 0<a<1,\ 0<b<k.\tag{27}$$

3.1 E-Bayesian Estimation under SELF

For a sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$ from the Kumaraswamy distribution, using the Bayesian estimator of $\lambda$ under SELF in Eq. (14) with the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $\lambda$ under SELF are given by

$$\begin{aligned}\hat{\lambda}_{EBS1}&=\frac{2n^{*}+1}{k^{2}}\left[\left(k+g_2(\delta,\underline{z})\right)\log\left(\frac{k+g_2(\delta,\underline{z})}{g_2(\delta,\underline{z})}\right)-k\right],\\ \hat{\lambda}_{EBS2}&=\frac{2n^{*}+1}{2k}\log\left(\frac{k+g_2(\delta,\underline{z})}{g_2(\delta,\underline{z})}\right),\\ \hat{\lambda}_{EBS3}&=\frac{2n^{*}+1}{k^{2}}\left[g_2(\delta,\underline{z})\log\left(\frac{g_2(\delta,\underline{z})}{k+g_2(\delta,\underline{z})}\right)+k\right].\end{aligned}\tag{28}$$

For a time $t$, using Eq. (15) and the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $H(t)$ under SELF are given by

$$\begin{aligned}\hat{H}_{EBS1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}+1}{k^{2}}\left[(k+g_2)\log\left(\frac{k+g_2}{g_2}\right)-k\right],\\ \hat{H}_{EBS2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}+1}{2k}\log\left(\frac{k+g_2}{g_2}\right),\\ \hat{H}_{EBS3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}+1}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right],\end{aligned}\tag{29}$$

where $g_2(\delta,\underline{z})=D(\delta,\underline{z})=-\left[\sum_{j=1}^{n^{*}}(R_j+1)\ln\left(1-z_j^{\delta}\right)+R_{\tau}\ln\left(1-T^{\delta}\right)\right]$; we write it as $g_2$ for simplicity of notation.
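The closed forms in Eq. (28) are cheap to evaluate once $g_2$ is computed. A minimal sketch under the notation above; the function name is our own, and `n_star` denotes the number of observed failures.

```python
import numpy as np

def e_bayes_self(n_star, k, g2):
    """E-Bayesian estimates of lambda under SELF for the three hyperpriors
    (cf. Eq. (28)); closed forms from integrating (n*+a)/(b+g2) over
    a in (0,1) and b in (0,k)."""
    r = np.log((k + g2) / g2)
    c = 2.0 * n_star + 1.0
    e1 = c / k**2 * ((k + g2) * r - k)   # hyperprior P1 = 2(k-b)/k^2
    e2 = c / (2.0 * k) * r               # hyperprior P2 = 1/k
    e3 = c / k**2 * (k - g2 * r)         # hyperprior P3 = 2b/k^2
    return e1, e2, e3
```

For $g_2>k$ the three estimates are ordered as in Theorem A1 of the Appendix.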

3.2 E-Bayesian Estimation under ELF

For a sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$ from the Kumaraswamy distribution, using the Bayesian estimator of $\lambda$ under ELF in Eq. (17) with the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $\lambda$ under ELF are given by

$$\begin{aligned}\hat{\lambda}_{EBE1}&=\frac{2n^{*}-1}{k^{2}}\left[(k+g_2)\log\left(\frac{k+g_2}{g_2}\right)-k\right],\\ \hat{\lambda}_{EBE2}&=\frac{2n^{*}-1}{2k}\log\left(\frac{k+g_2}{g_2}\right),\\ \hat{\lambda}_{EBE3}&=\frac{2n^{*}-1}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right].\end{aligned}\tag{30}$$

For a time $t$, using Eq. (18) and the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $H(t)$ under ELF are given by

$$\begin{aligned}\hat{H}_{EBE1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}-1}{k^{2}}\left[(k+g_2)\log\left(\frac{k+g_2}{g_2}\right)-k\right],\\ \hat{H}_{EBE2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}-1}{2k}\log\left(\frac{k+g_2}{g_2}\right),\\ \hat{H}_{EBE3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}-1}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right].\end{aligned}\tag{31}$$

3.3 E-Bayesian Estimation under WBLF

For a sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$ from the Kumaraswamy distribution, using the Bayesian estimator of $\lambda$ under WBLF in Eq. (20) with the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $\lambda$ under WBLF are given by

$$\begin{aligned}\hat{\lambda}_{EBW1}&=\frac{2n^{*}+3}{k^{2}}\left[(k+g_2)\log\left(\frac{k+g_2}{g_2}\right)-k\right],\\ \hat{\lambda}_{EBW2}&=\frac{2n^{*}+3}{2k}\log\left(\frac{k+g_2}{g_2}\right),\\ \hat{\lambda}_{EBW3}&=\frac{2n^{*}+3}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right].\end{aligned}\tag{32}$$

For a time $t$, using Eq. (21) and the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $H(t)$ under WBLF are given by

$$\begin{aligned}\hat{H}_{EBW1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}+3}{k^{2}}\left[(k+g_2)\log\left(\frac{k+g_2}{g_2}\right)-k\right],\\ \hat{H}_{EBW2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}+3}{2k}\log\left(\frac{k+g_2}{g_2}\right),\\ \hat{H}_{EBW3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}+3}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right].\end{aligned}\tag{33}$$

3.4 E-Bayesian Estimation under MELF

For a sample $\underline{z}=(z_1,\ldots,z_{n^{*}})$ from the Kumaraswamy distribution, using the Bayesian estimator of $\lambda$ under MELF in Eq. (23) with the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $\lambda$ under MELF are given by

$$\begin{aligned}\hat{\lambda}_{EBM1}&=\frac{2n^{*}-3}{k^{2}}\left[(k+g_2)\log\left(\frac{k+g_2}{g_2}\right)-k\right],\\ \hat{\lambda}_{EBM2}&=\frac{2n^{*}-3}{2k}\log\left(\frac{k+g_2}{g_2}\right),\\ \hat{\lambda}_{EBM3}&=\frac{2n^{*}-3}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right].\end{aligned}\tag{34}$$

For a time $t$, using Eq. (24) and the priors given in Eqs. (25)-(27), the E-Bayesian estimators of $H(t)$ under MELF are given by

$$\begin{aligned}\hat{H}_{EBM1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}-3}{k^{2}}\left[(k+g_2)\log\left(\frac{k+g_2}{g_2}\right)-k\right],\\ \hat{H}_{EBM2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}-3}{2k}\log\left(\frac{k+g_2}{g_2}\right),\\ \hat{H}_{EBM3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{2n^{*}-3}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right].\end{aligned}\tag{35}$$

4  H-Bayesian Estimation

In this section, the H-Bayesian estimates of the shape parameter of the Kumaraswamy distribution are obtained under different loss functions, namely SELF, ELF, WBLF, and MELF. Following the methodology proposed by Lindley et al. [29], we introduce hyperparameters $a$ and $b$ in the prior distribution $P(\lambda\mid a,b)$ given in Eq. (11). The hyperprior distributions of $a$ and $b$ are defined in Eqs. (25)-(27). The corresponding hierarchical prior distributions of $\lambda$ are then derived from these hyperpriors:

$$P_4(\lambda)=\int_{0}^{1}\!\!\int_{0}^{k}P(\lambda\mid a,b)P_1(a,b)\,db\,da=\int_{0}^{1}\!\!\int_{0}^{k}\frac{b^{a}\lambda^{a-1}}{\Gamma(a)}\exp(-b\lambda)\,\frac{2(k-b)}{k^{2}}\,db\,da,\tag{36}$$

$$P_5(\lambda)=\int_{0}^{1}\!\!\int_{0}^{k}P(\lambda\mid a,b)P_2(a,b)\,db\,da=\int_{0}^{1}\!\!\int_{0}^{k}\frac{b^{a}\lambda^{a-1}}{\Gamma(a)}\exp(-b\lambda)\,\frac{1}{k}\,db\,da,\tag{37}$$

and

$$P_6(\lambda)=\int_{0}^{1}\!\!\int_{0}^{k}P(\lambda\mid a,b)P_3(a,b)\,db\,da=\int_{0}^{1}\!\!\int_{0}^{k}\frac{b^{a}\lambda^{a-1}}{\Gamma(a)}\exp(-b\lambda)\,\frac{2b}{k^{2}}\,db\,da.\tag{38}$$

Using Bayes' theorem, the likelihood function, and Eqs. (36)-(38), the hierarchical posterior distributions of $\lambda$ can be written as

$$P_1(\lambda\mid\underline{z})=\frac{L(\lambda\mid\underline{z})P_4(\lambda)}{\int_{0}^{\infty}L(\lambda\mid\underline{z})P_4(\lambda)\,d\lambda}=\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\lambda^{n^{*}+a-1}\exp\left[-\lambda g_1\right]\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da},$$

$$P_2(\lambda\mid\underline{z})=\frac{L(\lambda\mid\underline{z})P_5(\lambda)}{\int_{0}^{\infty}L(\lambda\mid\underline{z})P_5(\lambda)\,d\lambda}=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\lambda^{n^{*}+a-1}\exp\left[-\lambda g_1\right]\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da},$$

and

$$P_3(\lambda\mid\underline{z})=\frac{L(\lambda\mid\underline{z})P_6(\lambda)}{\int_{0}^{\infty}L(\lambda\mid\underline{z})P_6(\lambda)\,d\lambda}=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\lambda^{n^{*}+a-1}\exp\left[-\lambda g_1\right]\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}.$$

4.1 H-Bayesian Estimation Based on SELF

Using the SELF in Eq. (13) and the hierarchical posterior distributions derived from Eqs. (36)-(38), the H-Bayesian estimates $\hat{\lambda}_{HBS1}$, $\hat{\lambda}_{HBS2}$, $\hat{\lambda}_{HBS3}$ of $\lambda$ can be defined as

$$\hat{\lambda}_{HBSj}=E\left[\lambda\mid\underline{z}\right];\quad j=1,2,3.$$

Here,

$$\begin{aligned}\hat{\lambda}_{HBS1}&=\int_{0}^{\infty}\lambda P_1(\lambda\mid\underline{z})\,d\lambda=\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da},\\[4pt]\hat{\lambda}_{HBS2}&=\int_{0}^{\infty}\lambda P_2(\lambda\mid\underline{z})\,d\lambda=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da},\\[4pt]\hat{\lambda}_{HBS3}&=\int_{0}^{\infty}\lambda P_3(\lambda\mid\underline{z})\,d\lambda=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}.\end{aligned}\tag{39}$$

For a time t, the H-Bayesian estimators of H(t) under SELF are given by

$$\begin{aligned}\hat{H}_{HBS1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da},\\[4pt]\hat{H}_{HBS2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da},\\[4pt]\hat{H}_{HBS3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}.\end{aligned}\tag{40}$$
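The H-Bayesian estimators have no closed form, so the double integrals must be evaluated numerically. The sketch below approximates the SELF estimate for the uniform hyperprior (middle expression of Eq. (39), where the constant $1/k$ cancels in the ratio), working in log space via `gammaln` for stability; the function name and the bracketing bound used in the usage note are our own assumptions.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gammaln

def h_bayes_self_p2(n_star, k, g2):
    """Numerical H-Bayesian estimate of lambda under SELF with the
    uniform hyperprior P2(a,b) = 1/k: ratio of two double integrals
    over a in (0,1), b in (0,k)."""
    def integrand(b, a, shift):
        if b <= 0.0 or a <= 0.0:
            return 0.0
        # b^a / Gamma(a) * Gamma(n*+a+shift) / (b+g2)^(n*+a+shift)
        p = n_star + a + shift
        return np.exp(a * np.log(b) - gammaln(a) + gammaln(p) - p * np.log(b + g2))
    num, _ = dblquad(integrand, 0.0, 1.0, lambda a: 0.0, lambda a: k, args=(1,))
    den, _ = dblquad(integrand, 0.0, 1.0, lambda a: 0.0, lambda a: k, args=(0,))
    return num / den
```

Since the ratio is a weighted average of $(n^{*}+a)/(b+g_2)$ over the hyperparameter box, the result must lie between $n^{*}/(k+g_2)$ and $(n^{*}+1)/g_2$, which gives a quick sanity check.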

4.2 H-Bayesian Estimation Based on ELF

Using the ELF in Eq. (16) and the hierarchical posterior distributions derived from Eqs. (36)-(38), the H-Bayesian estimates $\hat{\lambda}_{HBE1}$, $\hat{\lambda}_{HBE2}$, $\hat{\lambda}_{HBE3}$ of $\lambda$ can be defined as

$$\hat{\lambda}_{HBEj}=\left(E\left[\lambda^{-1}\mid\underline{z}\right]\right)^{-1};\quad j=1,2,3.$$

Here,

$$\begin{aligned}\hat{\lambda}_{HBE1}&=\left(\int_{0}^{\infty}\frac{P_1(\lambda\mid\underline{z})}{\lambda}\,d\lambda\right)^{-1}=\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da},\\[4pt]\hat{\lambda}_{HBE2}&=\left(\int_{0}^{\infty}\frac{P_2(\lambda\mid\underline{z})}{\lambda}\,d\lambda\right)^{-1}=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da},\\[4pt]\hat{\lambda}_{HBE3}&=\left(\int_{0}^{\infty}\frac{P_3(\lambda\mid\underline{z})}{\lambda}\,d\lambda\right)^{-1}=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}.\end{aligned}\tag{41}$$

For a time t, the H-Bayesian estimates of H(t) under ELF are given by

$$\begin{aligned}\hat{H}_{HBE1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da},\\[4pt]\hat{H}_{HBE2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da},\\[4pt]\hat{H}_{HBE3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a)}{\left[g_1\right]^{n^{*}+a}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}.\end{aligned}\tag{42}$$

4.3 H-Bayesian Estimation Based on WBLF

Assuming the WBLF as defined in Eq. (19) and using the priors defined in Eqs. (36)-(38), the H-Bayesian estimates $\hat{\lambda}_{HBW1}$, $\hat{\lambda}_{HBW2}$, $\hat{\lambda}_{HBW3}$ of $\lambda$ are

$$\hat{\lambda}_{HBWj}=\frac{E\left[\lambda^{2}\mid\underline{z}\right]}{E\left[\lambda\mid\underline{z}\right]};\quad j=1,2,3.$$

Here,

$$\begin{aligned}\hat{\lambda}_{HBW1}&=\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+2)}{\left[g_1\right]^{n^{*}+a+2}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da},\\[4pt]\hat{\lambda}_{HBW2}&=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+2)}{\left[g_1\right]^{n^{*}+a+2}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da},\\[4pt]\hat{\lambda}_{HBW3}&=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+2)}{\left[g_1\right]^{n^{*}+a+2}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}.\end{aligned}\tag{43}$$

For a time t, the H-Bayesian estimates of H(t) under WBLF are given by

$$\begin{aligned}\hat{H}_{HBW1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+2)}{\left[g_1\right]^{n^{*}+a+2}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da},\\[4pt]\hat{H}_{HBW2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+2)}{\left[g_1\right]^{n^{*}+a+2}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da},\\[4pt]\hat{H}_{HBW3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+2)}{\left[g_1\right]^{n^{*}+a+2}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a+1)}{\left[g_1\right]^{n^{*}+a+1}}\,db\,da}.\end{aligned}\tag{44}$$

4.4 H-Bayesian Estimation Based on MELF

Assuming the MELF as defined in Eq. (22) and using the priors defined in Eqs. (36)-(38), the H-Bayesian estimates $\hat{\lambda}_{HBM1}$, $\hat{\lambda}_{HBM2}$, $\hat{\lambda}_{HBM3}$ of $\lambda$ are

$$\hat{\lambda}_{HBMj}=\frac{E\left[\lambda^{-1}\mid\underline{z}\right]}{E\left[\lambda^{-2}\mid\underline{z}\right]};\quad j=1,2,3.$$

Then, we get

$$\begin{aligned}\hat{\lambda}_{HBM1}&=\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-2)}{\left[g_1\right]^{n^{*}+a-2}}\,db\,da},\\[4pt]\hat{\lambda}_{HBM2}&=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-2)}{\left[g_1\right]^{n^{*}+a-2}}\,db\,da},\\[4pt]\hat{\lambda}_{HBM3}&=\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-2)}{\left[g_1\right]^{n^{*}+a-2}}\,db\,da}.\end{aligned}\tag{45}$$

Also, the H-Bayesian estimates of H(t) under MELF are given by

$$\begin{aligned}\hat{H}_{HBM1}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{(k-b)b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-2)}{\left[g_1\right]^{n^{*}+a-2}}\,db\,da},\\[4pt]\hat{H}_{HBM2}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-2)}{\left[g_1\right]^{n^{*}+a-2}}\,db\,da},\\[4pt]\hat{H}_{HBM3}&=\frac{\delta t^{\delta-1}}{1-t^{\delta}}\cdot\frac{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-1)}{\left[g_1\right]^{n^{*}+a-1}}\,db\,da}{\int_{0}^{1}\int_{0}^{k}\frac{b^{a+1}}{\Gamma(a)}\frac{\Gamma(n^{*}+a-2)}{\left[g_1\right]^{n^{*}+a-2}}\,db\,da}.\end{aligned}\tag{46}$$

In the Appendix A of this paper, we review the properties of E-Bayesian and H-Bayesian estimators of parameter λ and HRF H(t).

5  Simulation Study

In this section, a Monte Carlo simulation study is performed to compare the performance of the proposed estimates based on GPHCS for the Kumaraswamy distribution. The study has been conducted using the R software. To generate the data, the true values of $\lambda$ and $\delta$ are taken as 2 and 3, and the hyperprior bound is set to $k=1$. The value of the HRF at $t=0.5$, $H(0.5)$, is considered. In this simulation study, 10,000 GPHCS samples from the Kumaraswamy distribution have been generated for the sets $n=(40, 60, 80)$, $m=(20, 30, 40)$, $k=(10, 20, 30)$, and $T=(0.3, 0.5, 1.0)$, using the following three progressive schemes:

•   Scheme 1: Ri=1 for all i;

•   Scheme 2: Ri=2 if i is odd and Ri=0 if i is even;

•   Scheme 3: Ri=0 if i is odd and Ri=2 if i is even.

The GPHCS samples have been generated using the algorithm in Nagy et al. [13]. Using the gamma prior, the Bayes, E-Bayes, and H-Bayes estimates are obtained under four different loss functions: SELF, ELF, WBLF, and MELF. The hyperparameters $(a,b)$ of the gamma prior are taken as $(0.5, 0.5)$. The performance of the proposed estimates has been assessed using the average estimates (AEs) and the mean squared error (MSE). Tables 1-4 show the values of AE and MSE for the parameter $\lambda$, while Tables 5-8 show the values of AE and MSE for the HRF $H(t)$. From the tables, the following conclusions can be drawn:

•   As the values of n, m, and T increase, the MSE decreases.

•   The Bayesian, E-Bayesian and H-Bayesian estimates outperform MLE in terms of MSE.

•   For any fixed loss function, the E-Bayesian estimates have smaller MSE than the Bayesian and H-Bayesian estimates.

•   For a fixed value of n, as m or T increases, the AEs come closer to the true value.

•   In most cases, the estimates under ELF perform better than the estimates obtained using the other loss functions.

•   In most cases, Scheme 3 has the minimum mean squared error compared to Schemes 1 and 2 for the same time T.
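A small-scale version of the simulation loop above can be sketched as follows for Scheme 1 with $(n,m)=(40,20)$ in the pure progressive case ($R_\tau=0$, all $m$ failures observed), comparing the MLE with the Bayes estimate under SELF. The generator, the replication count of 2000, and the seed are our own illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2025)
lam, delta, m = 2.0, 3.0, 20
a, b = 0.5, 0.5
R = np.ones(m, dtype=int)      # Scheme 1: R_i = 1, so n = m + sum(R) = 40

def prog_censored_sample():
    # Balakrishnan-Sandhu: progressively Type-II censored uniform order stats
    W = rng.uniform(size=m)
    gam = np.arange(1, m + 1) + np.cumsum(R[::-1])
    V = W ** (1.0 / gam)
    U = 1.0 - np.cumprod(V[::-1])
    # Kumaraswamy quantile: F^{-1}(u) = (1 - (1-u)^{1/lam})^{1/delta}
    return (1.0 - (1.0 - U) ** (1.0 / lam)) ** (1.0 / delta)

mle, bayes = [], []
for _ in range(2000):
    z = prog_censored_sample()
    D = -np.sum((R + 1) * np.log(1.0 - z ** delta))  # R_tau = 0 here
    mle.append(m / D)                                # Eq. (9)
    bayes.append((m + a) / (b + D))                  # Eq. (14), SELF

mse = lambda est: float(np.mean((np.asarray(est) - lam) ** 2))
print("MSE(MLE) =", round(mse(mle), 4))
print("MSE(Bayes-SELF) =", round(mse(bayes), 4))
```

Runs along these lines are consistent with the tables' qualitative conclusion that the Bayes-type estimates compete well with the MLE in terms of MSE.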


Combining all the above results, it is recommended to use the E-Bayesian technique under ELF to estimate the parameters and the HRF for GPHCS data, due to its better performance than the other estimates in terms of MSE.

6  Applications to Real Data

This section uses a real-world dataset as an example to analyze the applicability of the suggested estimation techniques. The dataset, which spans the years 1975 to 2016, records the monthly water capacity of the Shasta reservoir for the months of August through December. Some statisticians, including Kohansal [30] and Tu et al. [31], have previously used these data. We apply the following GPHCS settings: suppose $n=42$, $m=28$, $k=14$, and different values of $T=0.3$, 0.5, and 1.0, with the censoring vector $R$ chosen according to the following three schemes,

•   Scheme 1: $R_j=0$ for all $j=1,2,\ldots,m-1$ and $R_{28}=14$;

•   Scheme 2: $R_j=1$ if $j$ is odd and $R_j=0$ if $j$ is even;

•   Scheme 3: $R_j=0$ if $j$ is odd and $R_j=1$ if $j$ is even.

Table 9 shows the ML, Bayesian, E-Bayesian, and H-Bayesian estimates for the unknown parameter λ and HRF H(t) based on the GPHCS.


7  Conclusions

This paper proposes E-Bayesian and H-Bayesian estimation under GPHCS for the unknown shape parameter and HRF of the Kumaraswamy distribution. To obtain the Bayesian, E-Bayesian, and H-Bayesian estimates, the squared error, entropy, weighted balance, and minimum expected loss functions are introduced. A Monte Carlo simulation study has been performed to compare the performance of the estimates of the shape parameter and HRF. In terms of AE and MSE, the simulation study shows that E-Bayesian estimates outperform all other estimates. Finally, a real data set has been analyzed to illustrate the applicability of the proposed estimates. After analyzing these data, it is concluded that the E-Bayesian estimates of the parameter and the HRF perform better than the other estimates, and that the sample size n as well as the observed sample size m are very important factors in determining the efficiency of the estimators. E- and H-Bayesian estimation of the scale parameter and reliability function of the Kumaraswamy distribution under GPHCS using different loss functions involves more complex integral expressions and is therefore an interesting but difficult problem requiring more time; we are currently addressing it as part of our future research.

Acknowledgement: The author is grateful to the editor and anonymous referees for their insightful comments and suggestions, which helped to improve the paper’s presentation. The author extends his appreciation to King Saud University for funding this work through Researchers Supporting Project, King Saud University, Riyadh, Saudi Arabia.

Funding Statement: This research was funded by Researchers Supporting Project number (RSPD2025R969), King Saud University, Riyadh, Saudi Arabia.

Availability of Data and Materials: The data are provided within the paper.

Ethics Approval: The author declares that this study does not include human or animal subjects.

Conflicts of Interest: The author declares no conflicts of interest to report regarding the present study.

Appendix A Properties of E-Bayesian and H-Bayesian Estimation of λ: In this section, the properties of E-Bayesian estimates and the relations among the E-Bayesian and H-Bayesian estimates are discussed.

Appendix A.1 The Relations between the E-Bayesian Estimates under Different Loss Functions

Theorem A1: It follows from Eq. (28) that

(1) $\hat{\lambda}_{EBS3}<\hat{\lambda}_{EBS2}<\hat{\lambda}_{EBS1}$,

(2) $\lim_{g_2\to\infty}\hat{\lambda}_{EBS1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBS2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBS3}$,

(3) $\lim_{g_2\to\infty}\hat{\lambda}_{EBSj}=0,\ j=1,2,3$.

Proof: (1) From Eq. (28), we have

$$\begin{aligned}\hat{\lambda}_{EBS1}-\hat{\lambda}_{EBS2}&=\frac{2n^{*}+1}{k^{2}}\left[(k+g_2)\log\left(1+\frac{k}{g_2}\right)-k\right]-\frac{2n^{*}+1}{2k}\log\left(1+\frac{k}{g_2}\right)\\&=\frac{2n^{*}+1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]\end{aligned}$$

and

$$\begin{aligned}\hat{\lambda}_{EBS2}-\hat{\lambda}_{EBS3}&=\frac{2n^{*}+1}{2k}\log\left(1+\frac{k}{g_2}\right)-\frac{2n^{*}+1}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right]\\&=\frac{2n^{*}+1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right].\end{aligned}$$

Therefore,

$$\hat{\lambda}_{EBS1}-\hat{\lambda}_{EBS2}=\hat{\lambda}_{EBS2}-\hat{\lambda}_{EBS3}=\frac{2n^{*}+1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right].$$

For $-1<t\le 1$, we have $\ln(1+t)=t-\frac{t^{2}}{2}+\frac{t^{3}}{3}-\frac{t^{4}}{4}+\frac{t^{5}}{5}-\cdots=\sum_{i=1}^{\infty}(-1)^{i+1}\frac{t^{i}}{i}$.

Let $t=\frac{k}{g_2}$; when $0<k<g_2$, so that $0<\frac{k}{g_2}<1$, we get

$$\begin{aligned}\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1&=\left(\frac{g_2}{k}+\frac{1}{2}\right)\left[\frac{k}{g_2}-\frac{1}{2}\left(\frac{k}{g_2}\right)^{2}+\frac{1}{3}\left(\frac{k}{g_2}\right)^{3}-\frac{1}{4}\left(\frac{k}{g_2}\right)^{4}+\cdots\right]-1\\&=\left[1-\frac{1}{2}\frac{k}{g_2}+\frac{1}{3}\left(\frac{k}{g_2}\right)^{2}-\frac{1}{4}\left(\frac{k}{g_2}\right)^{3}+\frac{1}{5}\left(\frac{k}{g_2}\right)^{4}-\cdots\right]\\&\quad+\left[\frac{1}{2}\frac{k}{g_2}-\frac{1}{4}\left(\frac{k}{g_2}\right)^{2}+\frac{1}{6}\left(\frac{k}{g_2}\right)^{3}-\frac{1}{8}\left(\frac{k}{g_2}\right)^{4}+\cdots\right]-1\\&=\frac{1}{12}\left(\frac{k}{g_2}\right)^{2}\left[1-\frac{k}{g_2}\right]+\frac{3}{40}\left(\frac{k}{g_2}\right)^{4}\left[1-\frac{8}{9}\frac{k}{g_2}\right]+\cdots,\end{aligned}$$

where $0<\frac{k}{g_2}<1$. Thus,

$$\hat{\lambda}_{EBS1}-\hat{\lambda}_{EBS2}=\hat{\lambda}_{EBS2}-\hat{\lambda}_{EBS3}=\frac{2n^{*}+1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]>0,$$

which yields $\hat{\lambda}_{EBS3}<\hat{\lambda}_{EBS2}<\hat{\lambda}_{EBS1}$.

(2) From (1), we get

$$\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBS1}-\hat{\lambda}_{EBS2}\right)=\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBS2}-\hat{\lambda}_{EBS3}\right)=\frac{2n^{*}+1}{k}\lim_{g_2\to\infty}\left[\frac{1}{12}\left(\frac{k}{g_2}\right)^{2}\left(1-\frac{k}{g_2}\right)+\frac{3}{40}\left(\frac{k}{g_2}\right)^{4}\left(1-\frac{8}{9}\frac{k}{g_2}\right)+\cdots\right]=0.$$

So it can easily be obtained that $\lim_{g_2\to\infty}\hat{\lambda}_{EBS1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBS2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBS3}$.

(3) From Eq. (28) and the proof of (1), we have

$$\lim_{g_2\to\infty}\hat{\lambda}_{EBS1}=\lim_{g_2\to\infty}\frac{2n^{*}+1}{k}\left[\frac{1}{2}\left(\frac{k}{g_2}\right)-\frac{1}{6}\left(\frac{k}{g_2}\right)^{2}+\frac{1}{12}\left(\frac{k}{g_2}\right)^{3}-\frac{1}{20}\left(\frac{k}{g_2}\right)^{4}+\cdots\right]=0.$$

Using (2), we get $\lim_{g_2\to\infty}\hat{\lambda}_{EBSj}=0$, $j=1,2,3$.
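Theorem A1 can be spot-checked numerically: the three E-Bayesian SELF estimates are ordered, equally spaced (the proof shows the two gaps are identical), and shrink toward zero as $g_2$ grows. A sketch with an assumed function name:

```python
import numpy as np

def e_bayes_self_triplet(n_star, k, g2):
    """Closed-form E-Bayesian SELF estimates (cf. Eq. (28))."""
    r = np.log1p(k / g2)            # log((k + g2) / g2)
    c = 2.0 * n_star + 1.0
    return (c / k**2 * ((k + g2) * r - k),
            c / (2.0 * k) * r,
            c / k**2 * (k - g2 * r))

# Check ordering, equal gaps, and decay to zero as g2 increases.
prev = None
for g2 in (2.0, 10.0, 100.0, 1e6):
    e1, e2, e3 = e_bayes_self_triplet(20, 1.0, g2)
    assert e3 < e2 < e1                        # Theorem A1, part (1)
    assert abs((e1 - e2) - (e2 - e3)) < 1e-8   # equal-gap identity from the proof
    if prev is not None:
        assert e1 < prev                       # estimates shrink as g2 grows
    prev = e1
```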

Theorem A2: It follows from Eq. (30) that

(1) $\hat{\lambda}_{EBE3}<\hat{\lambda}_{EBE2}<\hat{\lambda}_{EBE1}$,

(2) $\lim_{g_2\to\infty}\hat{\lambda}_{EBE1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBE2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBE3}$,

(3) $\lim_{g_2\to\infty}\hat{\lambda}_{EBEj}=0,\ j=1,2,3$.

Proof: (1) From Eq. (30), we have

$$\begin{aligned}\hat{\lambda}_{EBE1}-\hat{\lambda}_{EBE2}&=\frac{2n^{*}-1}{k^{2}}\left[(k+g_2)\log\left(1+\frac{k}{g_2}\right)-k\right]-\frac{2n^{*}-1}{2k}\log\left(1+\frac{k}{g_2}\right)\\&=\frac{2n^{*}-1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]\end{aligned}$$

and

$$\begin{aligned}\hat{\lambda}_{EBE2}-\hat{\lambda}_{EBE3}&=\frac{2n^{*}-1}{2k}\log\left(1+\frac{k}{g_2}\right)-\frac{2n^{*}-1}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right]\\&=\frac{2n^{*}-1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right].\end{aligned}$$

Therefore,

$$\hat{\lambda}_{EBE1}-\hat{\lambda}_{EBE2}=\hat{\lambda}_{EBE2}-\hat{\lambda}_{EBE3}=\frac{2n^{*}-1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right].$$

As in the proof of Theorem A1, expanding $\ln(1+t)$ with $t=\frac{k}{g_2}$ for $0<k<g_2$ gives

$$\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1=\frac{1}{12}\left(\frac{k}{g_2}\right)^{2}\left[1-\frac{k}{g_2}\right]+\frac{3}{40}\left(\frac{k}{g_2}\right)^{4}\left[1-\frac{8}{9}\frac{k}{g_2}\right]+\cdots>0.$$

Thus,

$$\hat{\lambda}_{EBE1}-\hat{\lambda}_{EBE2}=\hat{\lambda}_{EBE2}-\hat{\lambda}_{EBE3}=\frac{2n^{*}-1}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]>0,$$

which yields $\hat{\lambda}_{EBE3}<\hat{\lambda}_{EBE2}<\hat{\lambda}_{EBE1}$.

(2) From (1), we get

$$\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBE1}-\hat{\lambda}_{EBE2}\right)=\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBE2}-\hat{\lambda}_{EBE3}\right)=\frac{2n^{*}-1}{k}\lim_{g_2\to\infty}\left[\frac{1}{12}\left(\frac{k}{g_2}\right)^{2}\left(1-\frac{k}{g_2}\right)+\frac{3}{40}\left(\frac{k}{g_2}\right)^{4}\left(1-\frac{8}{9}\frac{k}{g_2}\right)+\cdots\right]=0.$$

So it can easily be obtained that $\lim_{g_2\to\infty}\hat{\lambda}_{EBE1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBE2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBE3}$.

(3) From Eq. (30) and the proof of (1), we have

$$\lim_{g_2\to\infty}\hat{\lambda}_{EBE1}=\lim_{g_2\to\infty}\frac{2n^{*}-1}{k}\left[\frac{1}{2}\left(\frac{k}{g_2}\right)-\frac{1}{6}\left(\frac{k}{g_2}\right)^{2}+\frac{1}{12}\left(\frac{k}{g_2}\right)^{3}-\frac{1}{20}\left(\frac{k}{g_2}\right)^{4}+\cdots\right]=0.$$

Using (2), we get $\lim_{g_2\to\infty}\hat{\lambda}_{EBEj}=0$, $j=1,2,3$.

Theorem A3: It follows from Eq. (32) that

(1) $\hat{\lambda}_{EBW3}<\hat{\lambda}_{EBW2}<\hat{\lambda}_{EBW1}$,

(2) $\lim_{g_2\to\infty}\hat{\lambda}_{EBW1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBW2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBW3}$,

(3) $\lim_{g_2\to\infty}\hat{\lambda}_{EBWj}=0,\ j=1,2,3$.

Proof: (1) From Eq. (32), we have

$$\begin{aligned}\hat{\lambda}_{EBW1}-\hat{\lambda}_{EBW2}&=\frac{2n^{*}+3}{k^{2}}\left[(k+g_2)\log\left(1+\frac{k}{g_2}\right)-k\right]-\frac{2n^{*}+3}{2k}\log\left(1+\frac{k}{g_2}\right)\\&=\frac{2n^{*}+3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]\end{aligned}$$

and

$$\begin{aligned}\hat{\lambda}_{EBW2}-\hat{\lambda}_{EBW3}&=\frac{2n^{*}+3}{2k}\log\left(1+\frac{k}{g_2}\right)-\frac{2n^{*}+3}{k^{2}}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right]\\&=\frac{2n^{*}+3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right].\end{aligned}$$

Therefore,

$$\hat{\lambda}_{EBW1}-\hat{\lambda}_{EBW2}=\hat{\lambda}_{EBW2}-\hat{\lambda}_{EBW3}=\frac{2n^{*}+3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right].$$

As in the proof of Theorem A1, expanding $\ln(1+t)$ with $t=\frac{k}{g_2}$ for $0<k<g_2$ gives

$$\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1=\frac{1}{12}\left(\frac{k}{g_2}\right)^{2}\left[1-\frac{k}{g_2}\right]+\frac{3}{40}\left(\frac{k}{g_2}\right)^{4}\left[1-\frac{8}{9}\frac{k}{g_2}\right]+\cdots>0.$$

Then,

$$\hat{\lambda}_{EBW1}-\hat{\lambda}_{EBW2}=\hat{\lambda}_{EBW2}-\hat{\lambda}_{EBW3}=\frac{2n^{*}+3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]>0.$$

This shows that $\hat{\lambda}_{EBW3}<\hat{\lambda}_{EBW2}<\hat{\lambda}_{EBW1}$.

(2) From (1), we get

$$\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBW1}-\hat{\lambda}_{EBW2}\right)=\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBW2}-\hat{\lambda}_{EBW3}\right)=\frac{2n^{*}+3}{k}\lim_{g_2\to\infty}\left[\frac{1}{12}\left(\frac{k}{g_2}\right)^{2}\left(1-\frac{k}{g_2}\right)+\frac{3}{40}\left(\frac{k}{g_2}\right)^{4}\left(1-\frac{8}{9}\frac{k}{g_2}\right)+\cdots\right]=0.$$

This yields $\lim_{g_2\to\infty}\hat{\lambda}_{EBW1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBW2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBW3}$.

(3) From Eq. (32) and the proof of (1), we have $\lim_{g_2\to\infty}\hat{\lambda}_{EBW1}=0$. Using (2), we get $\lim_{g_2\to\infty}\hat{\lambda}_{EBWj}=0$ for $j=1,2,3$.

Theorem A4: It follows from Eq. (34) that

(1) $\hat{\lambda}_{EBM3}<\hat{\lambda}_{EBM2}<\hat{\lambda}_{EBM1}$,

(2) $\lim_{g_2\to\infty}\hat{\lambda}_{EBM1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBM2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBM3}$,

(3) $\lim_{g_2\to\infty}\hat{\lambda}_{EBMj}=0$, $j=1,2,3$.

Proof: (1) From Eq. (34), we have

$$\hat{\lambda}_{EBM1}-\hat{\lambda}_{EBM2}=\frac{2n-3}{k^2}\left[(k+g_2)\log\left(1+\frac{k}{g_2}\right)-k\right]-\frac{2n-3}{2k}\log\left(1+\frac{k}{g_2}\right)=\frac{2n-3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right],$$

and

$$\hat{\lambda}_{EBM2}-\hat{\lambda}_{EBM3}=\frac{2n-3}{2k}\log\left(1+\frac{k}{g_2}\right)-\frac{2n-3}{k^2}\left[g_2\log\left(\frac{g_2}{k+g_2}\right)+k\right]=\frac{2n-3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right],$$

so that

$$\hat{\lambda}_{EBM1}-\hat{\lambda}_{EBM2}=\hat{\lambda}_{EBM2}-\hat{\lambda}_{EBM3}=\frac{2n-3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right].$$

For $-1<t\le 1$, we have

$$\ln(1+t)=t-\frac{t^2}{2}+\frac{t^3}{3}-\frac{t^4}{4}+\frac{t^5}{5}-\cdots=\sum_{i=1}^{\infty}(-1)^{i+1}\frac{t^i}{i}.$$

Let $t=\frac{k}{g_2}$; when $0<k<g_2$, we have $0<\frac{k}{g_2}<1$, and we get

$$\begin{aligned}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]&=\left(\frac{g_2}{k}+\frac{1}{2}\right)\left[\frac{k}{g_2}-\frac{1}{2}\left(\frac{k}{g_2}\right)^2+\frac{1}{3}\left(\frac{k}{g_2}\right)^3-\frac{1}{4}\left(\frac{k}{g_2}\right)^4+\cdots\right]-1\\&=\left[1-\frac{1}{2}\frac{k}{g_2}+\frac{1}{3}\left(\frac{k}{g_2}\right)^2-\frac{1}{4}\left(\frac{k}{g_2}\right)^3+\frac{1}{5}\left(\frac{k}{g_2}\right)^4-\cdots\right]+\left[\frac{1}{2}\frac{k}{g_2}-\frac{1}{4}\left(\frac{k}{g_2}\right)^2+\frac{1}{6}\left(\frac{k}{g_2}\right)^3-\frac{1}{8}\left(\frac{k}{g_2}\right)^4+\cdots\right]-1\\&=\frac{1}{12}\left(\frac{k}{g_2}\right)^2\left[1-\frac{k}{g_2}\right]+\frac{3}{40}\left(\frac{k}{g_2}\right)^4\left[1-\frac{8}{9}\frac{k}{g_2}\right]+\cdots,\end{aligned}$$

where $0<\frac{k}{g_2}<1$.

Thus,

$$\hat{\lambda}_{EBM1}-\hat{\lambda}_{EBM2}=\hat{\lambda}_{EBM2}-\hat{\lambda}_{EBM3}=\frac{2n-3}{k}\left[\log\left(1+\frac{k}{g_2}\right)\left(\frac{g_2}{k}+\frac{1}{2}\right)-1\right]>0.\tag{A1}$$

That is, $\hat{\lambda}_{EBM3}<\hat{\lambda}_{EBM2}<\hat{\lambda}_{EBM1}$.

(2) From (1), we get

$$\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBM1}-\hat{\lambda}_{EBM2}\right)=\lim_{g_2\to\infty}\left(\hat{\lambda}_{EBM2}-\hat{\lambda}_{EBM3}\right)=\frac{2n-3}{k}\lim_{g_2\to\infty}\left[\frac{1}{12}\left(\frac{k}{g_2}\right)^2\left(1-\frac{k}{g_2}\right)+\frac{3}{40}\left(\frac{k}{g_2}\right)^4\left(1-\frac{8}{9}\frac{k}{g_2}\right)+\cdots\right]=0.\tag{A2}$$

That is, $\lim_{g_2\to\infty}\hat{\lambda}_{EBM1}=\lim_{g_2\to\infty}\hat{\lambda}_{EBM2}=\lim_{g_2\to\infty}\hat{\lambda}_{EBM3}$.

(3) From Eq. (34) and the proof of (1), we have $\lim_{g_2\to\infty}\hat{\lambda}_{EBM1}=0$. Using (2), we get $\lim_{g_2\to\infty}\hat{\lambda}_{EBMj}=0$ for $j=1,2,3$.
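Theorem A4 can likewise be illustrated numerically. The sketch below uses the MELF closed forms as reconstructed from the differences in this proof; note that all three estimates share the factor $(2n-3)$, so the strict ordering presumes $2n-3>0$ (i.e., $n\ge 2$). The values of $n$, $k$, and $g_2$ are illustrative.

```python
import math

# E-Bayesian estimates under the minimum expected loss function (MELF),
# reconstructed from the differences in the proof of Theorem A4. They share
# the common factor (2n-3), so the strict ordering presumes 2n-3 > 0.
def ebm_estimates(n, k, g2):
    c = 2 * n - 3
    t = math.log1p(k / g2)  # log(1 + k/g2)
    ebm1 = c / k**2 * ((k + g2) * t - k)
    ebm2 = c / (2 * k) * t
    ebm3 = c / k**2 * (g2 * math.log(g2 / (k + g2)) + k)
    return ebm1, ebm2, ebm3

# Illustrative values with 0 < k < g2.
m1, m2, m3 = ebm_estimates(n=20, k=0.5, g2=2.0)
print(m3 < m2 < m1)                       # True: Theorem A4(1)
print(abs((m1 - m2) - (m2 - m3)) < 1e-9)  # True: equal gaps, Eq. (A1)

# Parts (2)-(3): all three estimates shrink together as g2 grows.
for g2 in (2.0, 20.0, 200.0):
    print(g2, ebm_estimates(n=20, k=0.5, g2=g2))
```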

Similar relationships hold for the E-Bayesian estimates of $H(t)$ under the different loss functions.

Appendix A.2 The Relations between the H-Bayesian Estimates under Different Loss Functions

Theorem A5: It follows from Eq. (39) that $\lim_{g_2\to\infty}\hat{\lambda}_{HBSj}=0$ for $j=1,2,3$.

Proof: Based on SELF, the H-Bayesian estimate of λ can be expressed as

$$\hat{\lambda}_{HBS1}=\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a)}{[g_1]^{n+a}}\,db\,da}.$$

Using the result Γ(n+a+1)=(n+a)Γ(n+a), we have

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}\,db\,da=\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{(n+a)\,\Gamma(n+a)}{[g_1]^{n+a+1}}\,db\,da.$$

Since $g_1=b+g_2$, and for $a\in(0,1)$, $b\in(0,k)$, the function $(n+a)(b+g_2)^{-1}$ is continuous and $\frac{b^a}{\Gamma(a)}\frac{\Gamma(n+a)}{[g_1]^{n+a}}>0$; hence, by the generalized mean value theorem, there exist numbers $a_2\in(0,1)$ and $b_2\in(0,k)$ such that

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}\,db\,da=\frac{n+a_2}{b_2+g_2}\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a)}{[g_1]^{n+a}}\,db\,da.$$

Therefore,

$$\hat{\lambda}_{HBS1}=\frac{n+a_2}{b_2+g_2}\cdot\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a)}{[g_1]^{n+a}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a)}{[g_1]^{n+a}}\,db\,da}=\frac{n+a_2}{b_2+g_2}.$$

Taking the limit as $g_2\to\infty$, we see that $\lim_{g_2\to\infty}\hat{\lambda}_{HBS1}=0$. Similarly, $\lim_{g_2\to\infty}\hat{\lambda}_{HBS2}=\lim_{g_2\to\infty}\hat{\lambda}_{HBS3}=0$. Thus, $\lim_{g_2\to\infty}\hat{\lambda}_{HBSj}=0$ for $j=1,2,3$.
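The mean-value-theorem argument can be illustrated numerically. The sketch below evaluates the ratio of double integrals defining $\hat{\lambda}_{HBS1}$ with a simple midpoint rule, using the integrand as reconstructed in this proof with $g_1=b+g_2$; the values of $n$ and $k$ are illustrative. The ratio falls toward 0 as $g_2$ grows, as the theorem asserts.

```python
from math import gamma

# Integrand of the H-Bayesian numerator/denominator under SELF, as
# reconstructed in the proof: (k-b) * b^a/Gamma(a) * Gamma(n+a+s)/(b+g2)^(n+a+s),
# with s = 1 for the numerator and s = 0 for the denominator.
def integrand(a, b, n, k, g2, s):
    return (k - b) * b**a / gamma(a) * gamma(n + a + s) / (b + g2) ** (n + a + s)

def dbl_midpoint(f, k, m=100):
    """Midpoint rule over (0,1) x (0,k) with an m x m grid."""
    h1, h2 = 1.0 / m, k / m
    return h1 * h2 * sum(f((i + 0.5) * h1, (j + 0.5) * h2)
                         for i in range(m) for j in range(m))

def hbs1(n, k, g2):
    num = dbl_midpoint(lambda a, b: integrand(a, b, n, k, g2, 1), k)
    den = dbl_midpoint(lambda a, b: integrand(a, b, n, k, g2, 0), k)
    return num / den

vals = [hbs1(n=10, k=0.5, g2=g2) for g2 in (1.0, 10.0, 100.0)]
print(vals)  # strictly decreasing toward 0 as g2 grows
```

Since the two integrands differ pointwise by the factor $(n+a)/(b+g_2)$, the ratio is a weighted average of that factor, which is exactly the form $(n+a_2)/(b_2+g_2)$ produced by the mean value theorem.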

Theorem A6: It follows from Eq. (41) that $\lim_{g_2\to\infty}\hat{\lambda}_{HBEj}=0$ for $j=1,2,3$.

Proof: Based on ELF, the H-Bayesian estimate of λ can be expressed as

$$\hat{\lambda}_{HBE1}=\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a)}{[g_1]^{n+a}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}\,db\,da}.$$

Using the result Γ(n+a)=(n+a1)Γ(n+a1), we have

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a)}{[g_1]^{n+a}}\,db\,da=\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{(n+a-1)\,\Gamma(n+a-1)}{[g_1]^{n+a}}\,db\,da.$$

Since $g_1=b+g_2$, and for $a\in(0,1)$, $b\in(0,k)$, the function $(n+a-1)(b+g_2)^{-1}$ is continuous and $\frac{b^a}{\Gamma(a)}\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}>0$; hence, by the generalized mean value theorem, there exist numbers $a_3\in(0,1)$ and $b_3\in(0,k)$ such that

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a)}{[g_1]^{n+a}}\,db\,da=\frac{n+a_3-1}{b_3+g_2}\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}\,db\,da.$$

Therefore,

$$\hat{\lambda}_{HBE1}=\frac{n+a_3-1}{b_3+g_2}\cdot\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}\,db\,da}=\frac{n+a_3-1}{b_3+g_2}.$$

Taking the limit as $g_2\to\infty$, we see that $\lim_{g_2\to\infty}\hat{\lambda}_{HBE1}=0$. Similarly, $\lim_{g_2\to\infty}\hat{\lambda}_{HBE2}=\lim_{g_2\to\infty}\hat{\lambda}_{HBE3}=0$. Thus, $\lim_{g_2\to\infty}\hat{\lambda}_{HBEj}=0$ for $j=1,2,3$.

Theorem A7: It follows from Eq. (43) that $\lim_{g_2\to\infty}\hat{\lambda}_{HBWj}=0$ for $j=1,2,3$.

Proof: Under WBLF, the H-Bayesian estimate of λ can be expressed as

$$\hat{\lambda}_{HBW1}=\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+2)}{[g_1]^{n+a+2}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}\,db\,da}.$$

Using the result $\Gamma(n+a+2)=(n+a+1)\Gamma(n+a+1)$, we have

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+2)}{[g_1]^{n+a+2}}\,db\,da=\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{(n+a+1)\,\Gamma(n+a+1)}{[g_1]^{n+a+2}}\,db\,da.$$

Since $g_1=b+g_2$, and for $a\in(0,1)$, $b\in(0,k)$, the function $(n+a+1)(b+g_2)^{-1}$ is continuous and $\frac{b^a}{\Gamma(a)}\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}>0$; hence, by the generalized mean value theorem, there exist numbers $a_4\in(0,1)$ and $b_4\in(0,k)$ such that

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+2)}{[g_1]^{n+a+2}}\,db\,da=\frac{n+a_4+1}{b_4+g_2}\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}\,db\,da.$$

Therefore,

$$\hat{\lambda}_{HBW1}=\frac{n+a_4+1}{b_4+g_2}\cdot\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a+1)}{[g_1]^{n+a+1}}\,db\,da}=\frac{n+a_4+1}{b_4+g_2}.\tag{A3}$$

Taking the limit as $g_2\to\infty$, we obtain $\lim_{g_2\to\infty}\hat{\lambda}_{HBW1}=0$. Similarly, $\lim_{g_2\to\infty}\hat{\lambda}_{HBW2}=\lim_{g_2\to\infty}\hat{\lambda}_{HBW3}=0$. Thus, $\lim_{g_2\to\infty}\hat{\lambda}_{HBWj}=0$ for $j=1,2,3$.

Theorem A8: It follows from Eq. (45) that $\lim_{g_2\to\infty}\hat{\lambda}_{HBMj}=0$ for $j=1,2,3$.

Proof: Under MELF, the H-Bayesian estimate of λ can be expressed as

$$\hat{\lambda}_{HBM1}=\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-2)}{[g_1]^{n+a-2}}\,db\,da}.$$

Using the result Γ(n+a1)=(n+a2)Γ(n+a2), we have

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}\,db\,da=\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{(n+a-2)\,\Gamma(n+a-2)}{[g_1]^{n+a-1}}\,db\,da.$$

Since $g_1=b+g_2$, and for $a\in(0,1)$, $b\in(0,k)$, the function $(n+a-2)(b+g_2)^{-1}$ is continuous and $\frac{b^a}{\Gamma(a)}\frac{\Gamma(n+a-2)}{[g_1]^{n+a-2}}>0$; hence, by the generalized mean value theorem, there exist numbers $a_5\in(0,1)$ and $b_5\in(0,k)$ such that

$$\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-1)}{[g_1]^{n+a-1}}\,db\,da=\frac{n+a_5-2}{b_5+g_2}\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-2)}{[g_1]^{n+a-2}}\,db\,da.$$

Therefore,

$$\hat{\lambda}_{HBM1}=\frac{n+a_5-2}{b_5+g_2}\cdot\frac{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-2)}{[g_1]^{n+a-2}}\,db\,da}{\int_0^1\int_0^k(k-b)\,\frac{b^a}{\Gamma(a)}\,\frac{\Gamma(n+a-2)}{[g_1]^{n+a-2}}\,db\,da}=\frac{n+a_5-2}{b_5+g_2}.\tag{A4}$$

Taking the limit as $g_2\to\infty$ gives $\lim_{g_2\to\infty}\hat{\lambda}_{HBM1}=0$. Similarly, $\lim_{g_2\to\infty}\hat{\lambda}_{HBM2}=\lim_{g_2\to\infty}\hat{\lambda}_{HBM3}=0$. Thus, $\lim_{g_2\to\infty}\hat{\lambda}_{HBMj}=0$ for $j=1,2,3$.

Similar relationships hold for the H-Bayesian estimates of the HRF under the different loss functions.

References

1. Kumaraswamy P. A generalized probability density function for double-bounded random processes. J Hydrol. 1980;46(1–2):79–88. doi:10.1016/0022-1694(80)90036-0.

2. Sundar V, Subbiah K. Application of double bounded probability density function for analysis of ocean waves. Ocean Eng. 1989;16(2):193–200. doi:10.1016/0029-8018(89)90005-X.

3. Fletcher SG, Ponnambalam K. Estimation of reservoir yield and storage distribution using moments analysis. J Hydrol. 1996;182(1–4):259–75. doi:10.1016/0022-1694(95)02946-X.

4. Mitnik PA. New properties of the Kumaraswamy distribution. Commun Stat-Theory Methods. 2013;42(5):741–55. doi:10.1080/03610926.2011.581782.

5. Ponnambalam K, Seifi A, Vlach J. Probabilistic design of systems with general distributions of parameters. Int J Circuit Theory Appl. 2001;29(6):527–36. doi:10.1002/cta.173.

6. Dey S, Mazucheli J, Nadarajah S. Kumaraswamy distribution: different methods of estimation. Comput Appl Math. 2018;37(2):2094–111. doi:10.1007/s40314-017-0441-1.

7. Jamal F, Arslan Nasir M, Ozel G, Elgarhy M, Mamode Khan N. Generalized inverted Kumaraswamy generated family of distributions: theory and applications. J Appl Stat. 2019;46(16):2927–44. doi:10.1080/02664763.2019.1623867.

8. Alshkaki R. A generalized modification of the Kumaraswamy distribution for modeling and analyzing real-life data. Stat Optim Inf Comput. 2020;8(2):521–48. doi:10.19139/soic-2310-5070-869.

9. Mahto AK, Lodhi C, Tripathi YM, Wang L. Inference for partially observed competing risks model for Kumaraswamy distribution under generalized progressive hybrid censoring. J Appl Stat. 2022;49(8):2064–92. doi:10.1080/02664763.2021.1889999.

10. Alduais FS, Yassen MF, Almazah MM, Khan Z. Estimation of the Kumaraswamy distribution parameters using the E-Bayesian method. Alex Eng J. 2022;61(12):11099–110. doi:10.1016/j.aej.2022.04.040.

11. Cho Y, Sun H, Lee K. Exact likelihood inference for an exponential parameter under generalized progressive hybrid censoring scheme. Stat Methodol. 2015;23:18–34. doi:10.1016/j.stamet.2014.09.002.

12. Nagy M, Sultan KS, Abu-Moussa MH. Analysis of the generalized progressive hybrid censoring from Burr Type-XII lifetime model. AIMS Math. 2021;6(9):9675–704. doi:10.3934/math.2021564.

13. Nagy M, Bakr ME, Alrasheedi AF. Analysis with applications of the generalized Type-II progressive hybrid censoring sample from Burr Type-XII model. Math Probl Eng. 2022;2022(1):1241303. doi:10.1155/2022/1241303.

14. Nagy M, Alrasheedi AF. The lifetime analysis of the Weibull model based on generalized Type-I progressive hybrid censoring schemes. Math Biosci Eng. 2022;19(3):2330–54. doi:10.3934/mbe.2022108.

15. Han M. The E-Bayesian and hierarchical Bayesian estimations of Pareto distribution parameter under different loss functions. J Stat Comput Simul. 2017;87(3):577–93. doi:10.1080/00949655.2016.1221408.

16. Okasha HM, Wang J. E-Bayesian estimation for the geometric model based on record statistics. Appl Math Model. 2016;40(1):658–70. doi:10.1016/j.apm.2015.05.004.

17. Yousefzadeh F. E-Bayesian and hierarchical Bayesian estimations for the system reliability parameter based on asymmetric loss function. Commun Stat-Theory Methods. 2017;46(1):1–8. doi:10.1080/03610926.2014.968736.

18. Rabie A, Li J. E-Bayesian estimation for Burr-X distribution based on generalized Type-I hybrid censoring scheme. Am J Math Manag Sci. 2020;39(1):41–55. doi:10.1080/01966324.2019.1579123.

19. Yaghoobzadeh Shahrastani S. Estimating E-Bayesian and hierarchical Bayesian of scalar parameter of Gompertz distribution under Type-II censoring schemes based on fuzzy data. Commun Stat-Theory Methods. 2019;48(4):831–40. doi:10.1080/03610926.2017.1417438.

20. Nassar M, Okasha H, Albassam M. E-Bayesian estimation and associated properties of simple step-stress model for exponential distribution based on type-II censoring. Qual Reliab Eng Int. 2021;37(3):997–1016. doi:10.1002/qre.2778.

21. Nagy M, Abu-Moussa M, Alrasheedi AF, Rabie A. Expected Bayesian estimation for exponential model based on simple step stress with Type-I hybrid censored data. Math Biosci Eng. 2022;19(10):9773–979. doi:10.3934/mbe.2022455.

22. Balakrishnan N, Sandhu RA. Best linear unbiased and maximum likelihood estimation for exponential distributions under general progressive type-II censored samples. Sankhyā: Indian J Stat, Ser B. 1996;58(1):1–9.

23. Mohie El-Din MM, Sharawy A, Abu-Moussa MH. E-Bayesian estimation for the parameters and hazard function of Gompertz distribution based on progressively type-II right censoring with application. Qual Reliab Eng Int. 2023;39(4):1299–317. doi:10.1002/qre.3292.

24. Dutta S, Kayal S. Estimation and prediction for Burr type III distribution based on unified progressive hybrid censoring scheme. J Appl Stat. 2024;51(1):1–33. doi:10.1080/02664763.2022.2113865.

25. Dey D, Ghosh M, Srinivasan C. Simultaneous estimation of parameters under entropy loss. J Stat Plan Inference. 1986;15:347–63. doi:10.1016/0378-3758(86)90108-4.

26. Nasir W, Aslam M. Bayes approach to study shape parameter of Frechet distribution. Int J Basic Appl Sci. 2015;4(3):246. doi:10.14419/ijbas.v4i3.4644.

27. Tummala VMR, Sathe PT. Minimum expected loss estimators of reliability and parameters of certain lifetime distributions. IEEE Trans Reliab. 1978;27(4):283–5. doi:10.1109/TR.1978.5220373.

28. Han M. The structure of hierarchical prior distribution and its applications. Chin Oper Res Manag Sci. 1997;6(3):31–40.

29. Lindley DV, Smith AF. Bayes estimates for the linear model. J R Stat Soc Ser B (Methodol). 1972;34(1):1–18. doi:10.1111/j.2517-6161.1972.tb00885.x.

30. Kohansal A. On estimation of reliability in a multicomponent stress-strength model for a Kumaraswamy distribution based on progressively censored sample. Stat Pap. 2019;60(6):2185–224. doi:10.1007/s00362-017-0916-6.

31. Tu J, Gui W. Bayesian inference for the Kumaraswamy distribution under generalized progressive hybrid censoring. Entropy. 2020;22(9):1032. doi:10.3390/e22091032.




cc Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.