1 Introduction

Although in classical shock models the event of system failure is defined through the magnitudes of random shocks, in the delta shock model the system fails if the time between two successive shocks is less than a given fixed threshold. Such a model has been studied by Li (1984), Li and Kong (2007), and Li and Zhao (2007) when the shocks occur according to a Poisson process, i.e., when the interarrival times between shocks are exponentially distributed. Tuncel and Eryilmaz (2018) obtained the survival function and the mean time to failure of the system when the intershock times follow the proportional hazard rate model. Eryilmaz (2017) studied the reliability properties of a system under the delta shock model when the shocks occur according to a Polya process. Under the Polya process assumption, the interarrival times between successive shocks are no longer independent.

Various modifications and generalizations of the delta shock (or simply, \(\delta\)-shock) model have been introduced to represent real-life systems and processes. Mallor and Santos (2003) studied a general shock model that generalizes the extreme, cumulative and run shock models. Wang and Zhang (2005) defined a mixed shock model according to which the system fails if either the time between two successive shocks is less than a given threshold or the magnitude of a single shock exceeds a given threshold; this is a mixed model that combines the delta and extreme shock models. See also Parvardeh and Balakrishnan (2015) for mixed shock models. Eryilmaz (2012) extended the delta shock model to the case when the system fails if a specified number of consecutive interarrival times are less than a threshold \(\delta\). Jiang (2020) defined and studied a generalized delta shock model with multiple failure thresholds. Goyal et al. (2022a) introduced a general \(\delta\)-shock model in which the recovery time depends on both the arrival times and the magnitudes of shocks. See also Lorvand et al. (2019), Poursaeed (2021), Goyal et al. (2022b) and Lorvand and Nematollahi (2022) for other extensions and generalizations of the delta shock model.

Shock models have wide applications for modeling various real-life processes and systems. The term shock is generic, and it can be replaced by an appropriate concept depending on the process under concern. For example, in a spare parts inventory management system, demands of random size arrive randomly over time; the interarrival times are then the times between successive demands, and the size of a demand corresponds to the magnitude of a shock. Similarly, in a portfolio of an insurance company, claims occur randomly over time; the interarrival times are the times between successive claims, and the magnitudes of the shocks are the sizes of the claims.

In most real-life situations, the characteristics of the shock process, e.g., the distribution of the shock sizes, may change after a shock that is above a certain level. For example, if the shock is a kind of environmental effect, e.g., pressure, temperature, or wind, then a harsher environment that is significantly different from the previous one may prevail after such a shock. Eryilmaz and Kan (2021) studied reliability and an optimal replacement policy for an extreme shock model when the distribution of the shock size changes after a random number of shocks following a particular distribution. In particular, for a sequence of shock sizes denoted by \(Y_{1}\), \(Y_{2}\), \(\cdots\), they considered the case when \(Y_{1}\), \(Y_{2}\), \(\cdots\), \(Y_{M}\) are independent and identically distributed (iid) with common distribution \(G_{1}\) and \(Y_{M + 1}\), \(Y_{M + 2}\), \(\cdots\) are iid with common distribution \(G_{2}\), where \(M\) is a positive discrete random variable independent of the \(Y_{i}\)'s that follows a particular distribution. Eryilmaz and Kan (2021) also studied the reliability of an extreme shock model in which the change in the shock size distribution occurs upon the occurrence of a shock that is above a fixed warning threshold. They assumed that the distribution of the magnitudes of shocks changes after the first shock of size at least a given level \(d_{1}\). That is, if \(N_{{d_{1} }}\) denotes the number of shocks until the first shock of size at least \(d_{1}\), then \(Y_{1}\), \(Y_{2}\), \(\cdots\), \(Y_{{N_{{d_{1} }} }}\) have common distribution \(G_{1}\), and \(Y_{{N_{{d_{1} }} + 1}}\), \(Y_{{N_{{d_{1} }} + 2}}\), \(\cdots\) have another common distribution \(G_{2}\). In this case, the random variable \(N_{{d_{1} }}\), which denotes the number of shocks until the change point, depends on \(Y_{1}\), \(Y_{2}\), \(\cdots\). Applications of this model to statistical process control and wind turbines are given by Eryilmaz and Kan (2021).

In this paper, we study a new mixed shock model that combines the extreme and delta shock models under the change point setup proposed by Eryilmaz and Kan (2021). Such a model is useful for modeling various real-life systems and processes. In general, the proposed change point setup models situations where the behavior of a system or process changes upon the occurrence of an unexpected event, e.g., a sudden change in environmental conditions, a large claim due to an extreme event, or an enormous demand shock for health care systems and ventilators due to COVID-19. Such events induce a change in the probability distribution of the shock/claim/demand sizes. The model is also useful for insurance portfolio management. Consider a heterogeneous portfolio of losses and/or payments of an insurance company. Expected losses are defined by a threshold, say \(d_{1}\), in the sense that their sizes do not exceed \(d_{1}\). An unexpected loss (a loss whose size exceeds the threshold) can be used as a warning of an approaching high-risk situation for the portfolio, since the distribution of the claim sizes may change after an unexpected loss. This may occur if the portfolio is affected by a random event, such as an explosion or breakdown, that influences all policies. A plausible criterion for concluding that a high-risk situation is approaching would be the observation of at least two unexpected losses that are very close to each other or, more generally, at least two unexpected losses separated by at most \(\delta\) expected losses (this corresponds to the classical \(\delta\)-shock model), or the occurrence of an unexpected catastrophic loss of size at least \(d > d_{1}\) (this corresponds to the extreme shock model).

The findings of the paper can be summarized as follows:

  • The probability generating function (Laplace–Stieltjes transform) of the system’s lifetime is obtained when the times between shocks follow a discrete (continuous) distribution.

  • Matrix-based expressions for the survival function of the system are obtained when the times between shocks follow a discrete matrix-geometric (continuous matrix-exponential) distribution.

In mixed shock models, system failure results from competing failure criteria. Indeed, in most real-life processes, system failure occurs upon the occurrence of one of two or more different events. The novelty of the present paper lies in combining the mixed shock model previously studied in the literature with the change point-based model. Such a combined model is flexible and can be used in a more general setting.

The paper is organized as follows. In Sect. 2, the model is described, and some preliminary results are presented. In Sect. 3, the reliability properties of the system are investigated for both cases, i.e., when the interarrival times between successive shocks follow a discrete and a continuous probability distribution. Section 4 contains numerical illustrations, Sect. 5 presents a reliability application, and Sect. 6 concludes.

2 Description of the model and preliminary results

We consider a system that is subject to random shocks. Let \(X_{1}\), \(X_{2}\), \(\cdots\) be the interarrival times between successive shocks, with common distribution function (df) \(F\left( x \right) = 1 - \overline{F}\left( x \right) = {\text{Pr}}\left( {X \le x} \right)\), where \(X\) denotes a generic random variable of the \(X_{i}\)'s. Also, let \(Y_{1}\), \(Y_{2}\), \(\cdots\) be the random magnitudes of the shocks. It is assumed that the distribution of the magnitudes of shocks changes after the first shock of size at least \(d_{1} > 0\). That is, the random shock magnitudes \(Y_{1}\), \(Y_{2}\), \(\cdots\), \(Y_{{N_{{d_{1} }} }}\) have common df \(G_{1}\), and \(Y_{{N_{{d_{1} }} + 1}}\), \(Y_{{N_{{d_{1} }} + 2}}\), \(\cdots\) have common df \(G_{2}\), where \(N_{{d_{1} }}\) denotes the random number of shocks until the first shock of size at least \(d_{1}\). The system fails when the time between successive shocks is less than a given threshold \(\delta > 0\), or when the magnitude of a single shock is at least \(d\), where \(d > d_{1}\). Let also \(N_{d}\) denote the number of shocks until either the time between successive shocks is less than \(\delta\) or the magnitude of a single shock is at least \(d\). Then, the lifetime of the system is \(T = \mathop \sum \limits_{i = 1}^{{N_{d} }} X_{i}\). Hence, \(T\) is a random sum and has a compound distribution.
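
Under these assumptions the model is straightforward to simulate, which is useful for checking the analytical results that follow. The Python sketch below is a minimal illustration (not part of the original development): the exponential and uniform choices for \(F\), \(G_{1}\) and \(G_{2}\) are arbitrary placeholders, and, following the convention used in the proof of Lemma 1 below, a delta-failure is recorded when \(X_{i} \le \delta\).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lifetime(draw_X, draw_Y1, draw_Y2, delta, d1, d):
    """One realization of the system lifetime T under the mixed
    delta/extreme shock model with a change point at the first
    shock of size >= d1 (shock sizes switch from G1 to G2)."""
    T, changed = 0.0, False
    while True:
        X = draw_X()                             # next interarrival time
        Y = draw_Y2() if changed else draw_Y1()  # shock magnitude
        T += X
        if X <= delta or Y >= d:                 # delta-shock or extreme-shock failure
            return T
        if not changed and Y >= d1:              # change point reached
            changed = True

# placeholder distributions: exponential intershock times, uniform shock sizes
sample = [simulate_lifetime(lambda: rng.exponential(1.0),
                            lambda: rng.uniform(0.0, 1.0),   # G1
                            lambda: rng.uniform(0.0, 2.0),   # G2
                            delta=0.2, d1=0.8, d=1.5)
          for _ in range(100_000)]
print("estimated MTTF:", np.mean(sample))
```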

For simplicity, let us denote by:

\(p_{1} = G_{1} \left( {d_{1} } \right)\): the probability that the magnitude of a shock before the change in shock size distribution is below \(d_{1}\),

\(p_{2} = G_{1} \left( d \right)\): the probability that the magnitude of a shock before the change in shock size distribution is below \(d\),

\(p_{3} = G_{2} \left( d \right)\): the probability that the magnitude of a shock after the change in shock size distribution is below \(d\).

It should be noted that the mixed shock model considered in the present paper coincides with the model studied by Eryilmaz and Kan (2021) when \(\delta \to 0\).

In the following Lemma 1, we obtain the joint probability mass function (pmf) of the random vector \(\left( {N_{{d_{1} }} , N_{d} } \right)\).

Lemma 1.

The joint pmf of the random variables \(N_{{d_{1} }}\) and \(N_{d}\) is

$${\text{Pr}}(N_{{d_{1} }} = m,N_{d} = n) = \left\{ {\begin{array}{*{20}c} {\left( {p_{2} - p_{1} } \right)p_{1}^{m - 1} p_{3}^{n - m - 1} \overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right], n > m} \\ {p_{1}^{n - 1} \overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{2} \overline{F}\left( \delta \right)} \right], n = m} \\ \end{array} } \right..$$
(1)

Proof.

For \(n > m\) we have

$$\begin{aligned} \{ N_{{d_{1} }} = m,N_{d} = n\} & = \{ X_{1} > \delta , Y_{1} < d_{1} , \cdots , X_{m - 1} > \delta , Y_{m - 1} < d_{1} , X_{m} > \delta , d_{1} \le Y_{m} < d, \\ & \qquad X_{m + 1} > \delta , Y_{m + 1} < d, \cdots , X_{n - 1} > \delta , Y_{n - 1} < d, X_{n} \le \delta \} \\ & \quad \cup \{ X_{1} > \delta , Y_{1} < d_{1} , \cdots , X_{m - 1} > \delta , Y_{m - 1} < d_{1} , X_{m} > \delta , d_{1} \le Y_{m} < d, \\ & \qquad X_{m + 1} > \delta , Y_{m + 1} < d, \cdots , X_{n - 1} > \delta , Y_{n - 1} < d, X_{n} > \delta , Y_{n} \ge d\} \\ \end{aligned}$$

and

$$\begin{aligned} {\text{Pr}}(N_{{d_{1} }} = m,N_{d} = n) & = \overline{F}^{n - 1} \left( \delta \right)G_{1}^{m - 1} \left( {d_{1} } \right)\left[ {G_{1} \left( d \right) - G_{1} \left( {d_{1} } \right)} \right]G_{2}^{n - m - 1} \left( d \right)F\left( \delta \right) \\ & \quad + \overline{F}^{n} \left( \delta \right)G_{1}^{m - 1} \left( {d_{1} } \right)\left[ {G_{1} \left( d \right) - G_{1} \left( {d_{1} } \right)} \right]G_{2}^{n - m - 1} \left( d \right)\left[ {1 - G_{2} \left( d \right)} \right] \\ & = \overline{F}^{n - 1} \left( \delta \right)p_{1}^{m - 1} \left( {p_{2} - p_{1} } \right)p_{3}^{n - m - 1} F\left( \delta \right) \\ & \quad + \overline{F}^{n} \left( \delta \right)p_{1}^{m - 1} \left( {p_{2} - p_{1} } \right)p_{3}^{n - m - 1} \left( {1 - p_{3} } \right) \\ & = \left( {p_{2} - p_{1} } \right)p_{1}^{m - 1} p_{3}^{n - m - 1} \overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]. \\ \end{aligned}$$

For \(n = m\) we have

$$\begin{aligned} \{ N_{{d_{1} }} = n,N_{d} = n\} & = \{ X_{1} > \delta , Y_{1} < d_{1} , \ldots ,X_{n - 1} > \delta , Y_{n - 1} < d_{1} , X_{n} \le \delta \} \\ & \cup \{ X_{1} > \delta , Y_{1} < d_{1} , \ldots ,X_{n - 1} > \delta , Y_{n - 1} <d_{1} , X_{n} >\delta , Y_{n} \ge d\} \\ \end{aligned}$$

and

$$\begin{aligned} {\text{Pr}}(N_{{d_{1} }} = n,N_{d} = n) & = \overline{F}^{n - 1} \left( \delta \right)G_{1}^{n - 1} \left( {d_{1} } \right)F\left( \delta \right) + \overline{F}^{n} \left( \delta \right)G_{1}^{n - 1} \left( {d_{1} } \right)\left[ {1 - G_{1} \left( d \right)} \right] \\ & = \overline{F}^{n - 1} \left( \delta \right)p_{1}^{n - 1} F\left( \delta \right) + \overline{F}^{n} \left( \delta \right)p_{1}^{n - 1} \left( {1 - p_{2} } \right) \\ & = p_{1}^{n - 1} \overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{2} \overline{F}\left( \delta \right)} \right] \\ \end{aligned}$$

\(\square\)
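
As a quick numerical sanity check of Lemma 1, the joint pmf (1) must sum to one over \(1 \le m \le n\). A short sketch (the parameter values are arbitrary illustrations):

```python
# Sanity check: the joint pmf (1) sums to one over 1 <= m <= n.
# p1, p2, p3 and Fbar = Fbar(delta) take arbitrary illustrative values.
p1, p2, p3, Fbar = 0.7, 0.9, 0.6, 0.5

def joint_pmf(m, n):
    if n > m:
        return (p2 - p1) * p1**(m - 1) * p3**(n - m - 1) \
               * Fbar**(n - 1) * (1 - p3 * Fbar)
    if n == m:
        return p1**(n - 1) * Fbar**(n - 1) * (1 - p2 * Fbar)
    return 0.0

total = sum(joint_pmf(m, n) for n in range(1, 400) for m in range(1, n + 1))
print(total)   # ~1.0 up to geometric truncation error
```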

Using Lemma 1, we can easily obtain the marginal pmfs of the random variables \({N}_{{d}_{1}}\) and \({N}_{d}\), given in the following:

Corollary 1.

(i) The marginal pmf of \({N}_{{d}_{1}}\) is given by

$$\Pr \left( {N_{{d_{1} }} = m} \right) = \left[ {1 - p_{1} \overline{F}\left( \delta \right)} \right]p_{1}^{m - 1} \overline{F}^{m - 1} \left( \delta \right)$$
(2)

(ii) The marginal pmf of \(N_{d}\) is given by

$$\Pr \left( {N_{d} = n} \right) = \left( {\frac{{p_{3} - p_{2} }}{{p_{3} - p_{1} }}} \right)\left[ {p_{1} \overline{F}\left( \delta \right)} \right]^{n - 1} \left[ {1 - p_{1} \overline{F}\left( \delta \right)} \right] + \left( {\frac{{p_{2} - p_{1} }}{{p_{3} - p_{1} }}} \right)\left[ {p_{3} \overline{F}\left( \delta \right)} \right]^{n - 1} \left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right],$$
(3)

for \(p_{1} \ne p_{3}\), and

$$\begin{aligned} \Pr \left( {N_{d} = n} \right) & = \left( {\frac{{\left( {p_{2} - p_{3} } \right)\overline{F}\left( \delta \right)}}{{1 - p_{3} \overline{F}\left( \delta \right)}}} \right)\left( {n - 1} \right)\left[ {p_{3} \overline{F}\left( \delta \right)} \right]^{n - 2} \left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]^{2} \\ & \quad + \left( {\frac{{1 - p_{2} \overline{F}\left( \delta \right)}}{{1 - p_{3} \overline{F}\left( \delta \right)}}} \right)\left[ {p_{3} \overline{F}\left( \delta \right)} \right]^{n - 1} \left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right] \\ \end{aligned}$$
(4)

for \(p_{1} = p_{3}\).

Proof.

(i) Using (1) we get

$$\begin{aligned} \Pr \left( {N_{{d_{1} }} = m} \right) & = {\text{Pr}}(N_{{d_{1} }} = m,N_{d} = m) + \mathop \sum \limits_{n = m + 1}^{\infty } \Pr \left( {N_{{d_{1} }} = m,{ }N_{d} = n} \right) \\ & = p_{1}^{m - 1} \overline{F}^{m - 1} \left( \delta \right)\left[ {1 - p_{2} \overline{F}\left( \delta \right)} \right] \\ & \quad + \left( {p_{2} - p_{1} } \right)p_{1}^{m - 1} \left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]\mathop \sum \limits_{n = m + 1}^{\infty } p_{3}^{n - m - 1} \overline{F}^{n - 1} \left( \delta \right) \\ & = p_{1}^{m - 1} \overline{F}^{m - 1} \left( \delta \right)\left[ {1 - p_{2} \overline{F}\left( \delta \right)} \right] + \left( {p_{2} - p_{1} } \right)p_{1}^{m - 1} \overline{F}^{m} \left( \delta \right) \\ & = \left[ {1 - p_{1} \overline{F}\left( \delta \right)} \right]p_{1}^{m - 1} \overline{F}^{m - 1} \left( \delta \right). \\ \end{aligned}$$

(ii) Similarly, from (1) we obtain

$$\begin{aligned} \Pr \left( {N_{d} = n} \right) & = \mathop \sum \limits_{m = 1}^{n - 1} \Pr \left( {N_{{d_{1} }} = m,{ }N_{d} = n} \right) + {\text{Pr}}\left( {N_{{d_{1} }} = n,{ }N_{d} = n} \right) \\ & = \left( {p_{2} - p_{1} } \right)\overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]\mathop \sum \limits_{m = 1}^{n - 1} p_{1}^{m - 1} p_{3}^{n - m - 1} \\ & \quad + p_{1}^{n - 1} \overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{2} \overline{F}\left( \delta \right)} \right] \\ & = \left( {p_{2} - p_{1} } \right)\overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]p_{3}^{n - 2} \mathop \sum \limits_{m = 0}^{n - 2} \left( {\frac{{p_{1} }}{{p_{3} }}} \right)^{m} \\ & \quad + p_{1}^{n - 1} \overline{F}^{n - 1} \left( \delta \right)\left[ {1 - p_{2} \overline{F}\left( \delta \right)} \right]. \\ \end{aligned}$$

Now, the result follows immediately, since

$$\mathop \sum \limits_{m = 0}^{n - 2} \left( {\frac{{p_{1} }}{{p_{3} }}} \right)^{m} = \left\{ {\begin{array}{*{20}l} {\frac{{p_{3} }}{{p_{3} - p_{1} }}\left[ {1 - \left( {\frac{{p_{1} }}{{p_{3} }}} \right)^{n - 1} } \right],} & {{\text{if}}\;p_{1} \ne p_{3} ,} \\ {n - 1,} & {{\text{if}}\;p_{1} = p_{3} .} \\ \end{array} } \right.$$

\(\square\)

Remark 1. (i) From Eq. (2), we have that the random variable \(N_{{d_{1} }}\) follows the geometric distribution with parameter \(1 - p_{1} \overline{F}\left( \delta \right)\).

(ii) Let \(p_{1} \ne p_{3}\).

Let also

$$\omega = \frac{{p_{3} - p_{2} }}{{p_{3} - p_{1} }}\quad {\text{and}}\quad 1 - \omega = \frac{{p_{2} - p_{1} }}{{p_{3} - p_{1} }}$$
(5)

and consider the geometric random variables \(W_{1}\) and \(W_{2}\) having probability mass functions

$$\Pr \left( {W_{1} = n} \right) = \left[ {p_{1} \overline{F}\left( \delta \right)} \right]^{n - 1} \left[ {1 - p_{1} \overline{F}\left( \delta \right)} \right],\quad n = 1,2, \ldots$$
(6)

and

$$\Pr \left( {W_{2} = n} \right) = \left[ {p_{3} \overline{F}\left( \delta \right)} \right]^{n - 1} \left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right],\quad n = 1,2, \ldots$$
(7)

respectively. Note that \(W_{1} \triangleq N_{{d_{1} }}\), where the symbol \(\triangleq\) means equality in distribution.

Then Eq. (3) can be rewritten as

$$\Pr \left( {N_{d} = n} \right) = \omega \Pr \left( {W_{1} = n} \right) + \left( {1 - \omega } \right)\Pr \left( {W_{2} = n} \right),\quad n = 1,2, \ldots$$
(8)

Therefore, the distribution of the random variable \(N_{d}\) is a mixture of two geometric distributions. It should be noted that the weights \(\omega\) and \(1 - \omega\) may take negative values and hence, Eq. (8) is a generalized mixture of two geometric distributions.

(iii) Let \(p_{1} = p_{3}\).

Then, Eq. (4) can also be rewritten as

$$\Pr \left( {N_{d} = n} \right) = \theta \Pr \left( {U_{1} = n} \right) + \left( {1 - \theta } \right){\text{Pr}}\left( {W_{2} = n} \right),\quad n = 1,2, \ldots$$
(9)

where

$$\theta = \frac{{\left( {p_{2} - p_{3} } \right)\overline{F}\left( \delta \right)}}{{1 - p_{3} \overline{F}\left( \delta \right)}},\quad 1 - \theta = \frac{{1 - p_{2} \overline{F}\left( \delta \right)}}{{1 - p_{3} \overline{F}\left( \delta \right)}}$$
(10)

\(\Pr \left( {W_{2} = n} \right)\) is given by (7), and the random variable \(U_{1}\) has the negative binomial distribution with probability mass function

$$\Pr \left( {U_{1} = n} \right) = \left( {n - 1} \right)\left[ {p_{3} \overline{F}\left( \delta \right)} \right]^{n - 2} \left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]^{2} ,\quad n = 2,3, \ldots .$$
(11)

Therefore, since the weights \(\theta\) and \(1 - \theta\) may take negative values, Eq. (9) is a generalized mixture of negative binomial and geometric distributions.
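
The mixture representation (8) is convenient for computation. The sketch below (illustrative values with \(p_{1} \ne p_{3}\)) evaluates \(\Pr \left( {N_{d} = n} \right)\) both from (8) and by direct marginalization of the joint pmf (1); the two columns agree:

```python
p1, p2, p3, Fbar = 0.7, 0.9, 0.6, 0.5   # illustrative values with p1 != p3

def pmf_mixture(n):
    """Eq. (8): generalized mixture of the geometric pmfs (6) and (7)."""
    w = (p3 - p2) / (p3 - p1)
    g1 = (p1 * Fbar)**(n - 1) * (1 - p1 * Fbar)
    g2 = (p3 * Fbar)**(n - 1) * (1 - p3 * Fbar)
    return w * g1 + (1 - w) * g2

def pmf_marginal(n):
    """Pr(N_d = n) by summing the joint pmf (1) over m = 1, ..., n."""
    head = sum((p2 - p1) * p1**(m - 1) * p3**(n - m - 1)
               * Fbar**(n - 1) * (1 - p3 * Fbar) for m in range(1, n))
    return head + p1**(n - 1) * Fbar**(n - 1) * (1 - p2 * Fbar)

for n in (1, 2, 5, 10):
    print(n, pmf_mixture(n), pmf_marginal(n))   # the two columns agree
```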

3 Reliability of the system

For our system, the following two dependent lifetime random variables are of interest

$$T_{1} = \mathop \sum \limits_{i = 1}^{{N_{{d_{1} }} }} X_{i} ,\quad T = \mathop \sum \limits_{i = 1}^{{N_{d} }} X_{i} .$$

Note that \({T}_{1}\) and \(T\) are compound random variables (or random sums): \({T}_{1}\) denotes the time until the first shock of size at least \({d}_{1}\), i.e., the time until the change point, and \(T\) denotes the lifetime of the system.

In the sequel, we examine two cases, according to whether the interarrival times \({X}_{1}\), \({X}_{2}\), \(\cdots\) between successive shocks have a discrete or a continuous distribution.

3.1 The discrete case

Suppose that the interarrival times \({X}_{1}\), \({X}_{2}\), \(\cdots\) between successive shocks have a discrete distribution with an arbitrary common df \(F\). Let \(f\left(t\right)=\mathrm{Pr}(X=t)\) be the pmf of \(X\) and \({f}_{T}\left(t\right)=\mathrm{Pr}(T=t)\) be the pmf of the lifetime \(T\). Define the probability generating functions (pgf) of the conditional random variables \(X\left|X\le \delta \right.\) and \(X\left|X>\delta \right.\) by

$$P_{X,\delta } \left( u \right) = E\left( {u^{X} {|}X \le \delta } \right)$$

and

$$\overline{P}_{X,\delta } \left( u \right) = E\left( {u^{X} {|}X > \delta } \right).$$

Then, we have the following.

Theorem 1.

Let \(P_{{T_{1} ,T}} \left( {u_{1} ,u} \right) = E\left[ {u_{1}^{{T_{1} }} u^{T} } \right]\) be the joint pgf of the random vector \(\left( {T_{1} , T} \right)\). Then, it holds that

$$\begin{aligned} P_{{T_{1} ,T}} \left( {u_{1} ,u} \right) & = \frac{{P_{X,\delta } \left( u \right)F\left( \delta \right)\left( {p_{2} - p_{1} } \right)\overline{P}_{X,\delta } \left( {u_{1} u} \right)\overline{F}\left( \delta \right) + \overline{P}_{X,\delta } \left( u \right)\left( {p_{2} - p_{1} } \right)\left( {1 - p_{3} } \right)\overline{P}_{X,\delta } \left( {u_{1} u} \right)\overline{F}^{2} \left( \delta \right)}}{{\left[ {1 - p_{3} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( u \right)} \right]\left[ {1 - p_{1} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( {u_{1} u} \right)} \right]}} \\ & \quad + \frac{{P_{X,\delta } \left( {u_{1} u} \right)F\left( \delta \right) + \overline{P}_{X,\delta } \left( {u_{1} u} \right)\overline{F}\left( \delta \right)\left( {1 - p_{2} } \right)}}{{1 - p_{1} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( {u_{1} u} \right)}}. \\ \end{aligned}$$

Proof.

By conditioning on \(\left( {N_{{d_{1} }} , N_{d} } \right)\) we get

$$ \begin{aligned}{P_{{T_1},T}}\left( {{u_1},u} \right) &= \mathop \sum \limits_{m = 1}^\infty \mathop \sum \limits_{n = m + 1}^\infty E\left[ {u_1^{\mathop \sum \limits_{i = 1}^{{N_{{d_1}}}} {X_i}}{u^{\mathop \sum \limits_{i = 1}^{{N_d}} {X_i}}}\left| {{N_{{d_1}}} = m,~{N_d} = n} \right.} \right]{\rm{Pr}}({N_{{d_1}}} = m,{N_d} = n) \\ &\quad+ \mathop \sum \limits_{n = 1}^\infty \ E\left[ {u_1^{\mathop \sum \limits_{i = 1}^{{N_{{d_1}}}} {X_i}}{u^{\mathop \sum \limits_{i = 1}^{{N_d}} {X_i}}}\left| {{N_{{d_1}}} = n,~{N_d} = n} \right.} \right]{\rm{Pr}}({N_{{d_1}}} = n,{N_d} = n)\end{aligned} $$

The result follows using the details in the proof of Lemma 1. \(\square\)

By letting \(u = 1\) and \(u_{1} = 1\) in Theorem 1, we obtain the marginal pgfs of the lifetimes \(T_{1}\) and \(T\), respectively. Thus, we have the following:

Corollary 2.

(i) The pgf \(P_{{T_{1} }} \left( u \right) = E\left[ {u^{{T_{1} }} } \right]\) of the lifetime random variable \(T_{1}\) is

$$\begin{aligned} P_{{T_{1} }} \left( u \right) & = \frac{{F\left( \delta \right)\left( {p_{2} - p_{1} } \right)\overline{P}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right) + \left( {p_{2} - p_{1} } \right)\left( {1 - p_{3} } \right)\overline{P}_{X,\delta } \left( u \right)\overline{F}^{2} \left( \delta \right)}}{{\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]\left[ {1 - p_{1} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( u \right)} \right]}} \\ & \quad + \frac{{P_{X,\delta } \left( u \right)F\left( \delta \right) + \overline{P}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right)\left( {1 - p_{2} } \right)}}{{1 - p_{1} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( u \right)}}. \\ \end{aligned}$$
(12)

(ii) The pgf \(P_{T} \left( u \right) = E\left[ {u^{T} } \right]\) of the lifetime random variable \(T\) is

$$\begin{aligned} P_{T} \left( u \right) & = \frac{{P_{X,\delta } \left( u \right)F\left( \delta \right)\left( {p_{2} - p_{1} } \right)\overline{P}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right) + \overline{P}_{X,\delta } \left( u \right)\left( {p_{2} - p_{1} } \right)\left( {1 - p_{3} } \right)\overline{P}_{X,\delta } \left( u \right)\overline{F}^{2} \left( \delta \right)}}{{\left[ {1 - p_{3} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( u \right)} \right]\left[ {1 - p_{1} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( u \right)} \right]}} \\ & \quad + \frac{{P_{X,\delta } \left( u \right)F\left( \delta \right) + \overline{P}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right)\left( {1 - p_{2} } \right)}}{{1 - p_{1} \overline{F}\left( \delta \right)\overline{P}_{X,\delta } \left( u \right)}}. \\ \end{aligned}$$
(13)
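
Since \(T\) is integer-valued here, its pmf can be recovered from the pgf (13) by series expansion around \(u = 0\). A sketch using sympy for geometric intershock times, with the conditional pgfs derived in the proof of Theorem 2 below (all parameter values are illustrative):

```python
import sympy as sp

u = sp.symbols('u')
p, delta = sp.Rational(1, 5), 2                      # illustrative values
q = 1 - p
p1, p2, p3 = sp.Rational(7, 10), sp.Rational(9, 10), sp.Rational(6, 10)
F, Fbar = 1 - q**delta, q**delta                     # F(delta), Fbar(delta)

# conditional pgfs of X | X <= delta and X | X > delta (proof of Theorem 2)
P = p*u*(1 - (q*u)**delta) / ((1 - q*u)*(1 - q**delta))
Pbar = (p*u/(1 - q*u) - P*(1 - q**delta)) / q**delta

# pgf of T, Eq. (13)
PT = (P*F*(p2 - p1)*Pbar*Fbar + Pbar*(p2 - p1)*(1 - p3)*Pbar*Fbar**2) \
     / ((1 - p3*Fbar*Pbar)*(1 - p1*Fbar*Pbar)) \
     + (P*F + Pbar*Fbar*(1 - p2)) / (1 - p1*Fbar*Pbar)

series = sp.series(sp.together(PT), u, 0, 8).removeO()
for t in range(1, 8):
    # e.g. Pr(T = 1) = p and Pr(T = 2) = p*q, since delta = 2 makes any X <= 2 fatal
    print(t, sp.expand(series).coeff(u, t))          # Pr(T = t)
```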

Using that \(E\left[ {T_{1} } \right] = P_{{T_{1} }}^{\prime} \left( 1 \right)\) and \(E\left[ T \right] = P_{T}^{\prime} \left( 1 \right)\) together with Corollary 2, we can obtain the means of the random variables \(T_{1}\) and \(T\). Thus, we have the following:

Corollary 3.

The mean of \(T_{1}\) is

$$E\left[ {T_{1} } \right] = \frac{1}{{1 - p_{1} \overline{F}\left( \delta \right)}}E\left[ X \right]$$

and the mean time to failure (MTTF) of the system is

$$E\left[ T \right] = \frac{{1 + \left( {p_{2} - p_{1} - p_{3} } \right)\overline{F}\left( \delta \right)}}{{\left[ {1 - p_{1} \overline{F}\left( \delta \right)} \right]\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]}}E\left[ X \right].$$
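
Corollary 3 makes the computation of \(E\left[ {T_{1} } \right]\) and the MTTF immediate; for instance (with illustrative values):

```python
# E[T1] and the MTTF via Corollary 3 (illustrative parameter values)
p1, p2, p3 = 0.7, 0.9, 0.6
Fbar, EX = 0.5, 2.0                     # Fbar(delta) and E[X]

ET1 = EX / (1 - p1 * Fbar)
MTTF = (1 + (p2 - p1 - p3) * Fbar) / ((1 - p1 * Fbar) * (1 - p3 * Fbar)) * EX
print(f"E[T1] = {ET1:.4f}, MTTF = {MTTF:.4f}")
```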

For some specific distributions of the random variable \(X\) we can obtain analytical results for the evaluation of the distributions of the lifetime random variables \(T_{1}\) and \(T\).

At first, we consider the particular case when the interarrival times between successive shocks follow the geometric distribution with common pmf

$$f\left( t \right) = p\left( {1 - p} \right)^{t - 1} ,t = 1,2 \ldots ,$$
(14)

where \(q = 1 - p\). We then represent the distributions of the random variables \(T_{1}\) and \(T\) as matrix-geometric distributions.

A discrete random variable \(W\) with no probability mass at zero that has a probability generating function of the form

$$E\left[ {u^{W} } \right] = \frac{{c_{1} u + \ldots + c_{m} u^{m} }}{{1 + d_{1} u + \ldots + d_{m} u^{m} }}$$
(15)

for some \(m \ge 1\) and real constants \(c_{1} , \ldots ,c_{m}\) and \(d_{1} , \ldots ,d_{m}\), is said to have a matrix-geometric distribution (see, e.g., Bladt and Nielsen (2017)). In this case, the probability mass function and the survival function of the random variable \(W\) can be represented, respectively, as

$$\Pr \left( {W = n} \right) = {\varvec{\pi}}{\varvec{Q}}^{n - 1} {\varvec{u}}^{\prime}$$
(16)

and

$$\Pr \left( {W > n} \right) = {\varvec{\pi}}{\varvec{Q}}^{n} \left( {{\varvec{I}}_{m} - {\varvec{Q}}} \right)^{ - 1} {\varvec{u}}^{\prime} ,$$
(17)

where \({\varvec{\pi}} = \left( {1,0, \ldots ,0} \right)\) is a \(1 \times m\) row vector, \({\varvec{I}}_{m}\) is the identity matrix of order \(m\), and the \(m \times m\) matrix \({\varvec{Q}}\) and the \(m \times 1\) column vector \({u^{\prime}}\) are given by

$$ {\varvec{Q}} = \left[ {\begin{array}{*{20}c} { - d_{1} } & 0 & 0 & \cdots & 0 & 1 \\ { - d_{m} } & 0 & 0 & \cdots & 0 & 0 \\ { - d_{m - 1} } & 1 & 0 & \cdots & 0 & 0 \\ { - d_{m - 2} } & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ { - d_{2} } & 0 & 0 & \cdots & 1 & 0 \\ \end{array} } \right],\quad {\varvec{u}}^{\prime} = \left[ {\begin{array}{*{20}c} {c_{1} } \\ {c_{m} } \\ {c_{m - 1} } \\ {c_{m - 2} } \\ \vdots \\ {c_{2} } \\ \end{array} } \right]. $$
(18)

We write \(W\sim MG_{m} \left( {{\varvec{Q}}, {\varvec{u}}} \right)\) to denote that the random variable \(W\) has a matrix-geometric distribution.
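
Assembling the representation (18) from the pgf coefficients and evaluating (16) and (17) is mechanical; a numpy sketch (the function and variable names are ours, for illustration only; the construction assumes \(m \ge 2\)):

```python
import numpy as np

def mg_representation(c, d):
    """Build (pi, Q, u') of Eq. (18) from the pgf coefficients
    c_1, ..., c_m and d_1, ..., d_m of Eq. (15); assumes m >= 2."""
    m = len(c)
    Q = np.zeros((m, m))
    Q[0, 0], Q[0, -1] = -d[0], 1.0
    for i in range(1, m):
        Q[i, 0] = -d[m - i]            # first column: -d_1, -d_m, ..., -d_2
        if i >= 2:
            Q[i, i - 1] = 1.0          # ones below the diagonal from row 3 on
    u = np.array([c[0]] + [c[m - i] for i in range(1, m)], dtype=float)
    pi = np.zeros(m); pi[0] = 1.0
    return pi, Q, u

def mg_pmf(pi, Q, u, n):               # Eq. (16)
    return pi @ np.linalg.matrix_power(Q, n - 1) @ u

def mg_survival(pi, Q, u, n):          # Eq. (17)
    m = len(u)
    return pi @ np.linalg.matrix_power(Q, n) @ np.linalg.solve(np.eye(m) - Q, u)

# example: W uniform on {1, 2} has pgf (u + u^2)/2, i.e. c = (1/2, 1/2), d = (0, 0)
pi, Q, u = mg_representation([0.5, 0.5], [0.0, 0.0])
print(mg_pmf(pi, Q, u, 1), mg_pmf(pi, Q, u, 2), mg_survival(pi, Q, u, 1))
# -> 0.5 0.5 0.5
```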

Theorem 2.

Let the interarrival times between successive shocks follow the geometric distribution with common pmf given by (14). Then

(i) The random variable \(T_{1} \sim MG_{\delta + 1} \left( {{\varvec{Q}}, {\varvec{u}}} \right)\), where the \(\left( {\delta + 1} \right) \times \left( {\delta + 1} \right)\) matrix \({\varvec{Q}}\) and the \(\left( {\delta + 1} \right) \times 1\) vector \({\varvec{u}}^{\prime}\) are given by (18) with

$$\begin{gathered} c_{1} = p;\quad c_{i} = 0, 2 \le i \le \delta ;{\kern 1pt} \quad c_{\delta + 1} = - p_{1} pq^{\delta } ; \hfill \\ d_{1} = - q,\quad d_{i} = 0, 2 \le i \le \delta ;\quad d_{\delta + 1} = - p_{1} pq^{\delta } . \hfill \\ \end{gathered}$$

(ii) The random variable \(T\sim MG_{2\delta + 2} \left( {{\varvec{Q}}, {\varvec{u}}} \right)\), where the \(2\left( {\delta + 1} \right) \times 2\left( {\delta + 1} \right)\) matrix \({\varvec{Q}}\) and the \(2\left( {\delta + 1} \right) \times 1\) vector \({\varvec{u}}^{\prime}\) are given by (18) with

$$\begin{gathered} c_{1} = p;\quad c_{2} = - pq; \hfill \\ c_{i} = 0,\quad 2 < i \le \delta ; \hfill \\ c_{\delta + 1} = - p_{2} pq^{\delta } ;\quad c_{\delta + 2} = p_{2} p^{2} q^{\delta } - p_{1} p^{2} q^{\delta } + p_{2} pq^{\delta + 1} - p_{3} p^{2} q^{\delta } ; \hfill \\ c_{i} = 0,\quad \delta + 2 < i \le 2\delta + 1;\quad c_{2\delta + 2} = p_{1} p_{3} p^{2} q^{2\delta } ; \hfill \\ d_{1} = - 2q;\quad d_{2} = q^{2} ; \hfill \\ d_{i} = 0,\quad 2 < i \le \delta ; \hfill \\ d_{\delta + 1} = - \left( {p_{1} pq^{\delta } + p_{3} pq^{\delta } } \right);\quad d_{\delta + 2} = p_{1} pq^{\delta + 1} + p_{3} pq^{\delta + 1} ; \hfill \\ d_{i} = 0,\quad \delta + 2 < i \le 2\delta + 1;\quad d_{2\delta + 2} = p_{1} p_{3} p^{2} q^{2\delta } . \hfill \\ \end{gathered}$$

Proof.

Under the assumptions of the Theorem,

$$P_{X,\delta } \left( u \right) = E\left( {u^{X} {|}X \le \delta } \right) = \frac{{pu\left( {1 - \left( {qu} \right)^{\delta } } \right)}}{{\left( {1 - qu} \right)\left( {1 - q^{\delta } } \right)}},$$

and

$$\overline{P}_{X,\delta } \left( u \right) = E\left( {u^{X} {|}X > \delta } \right) = \frac{1}{{q^{\delta } }}\left[ {\left( {\frac{pu}{{1 - qu}}} \right) - P_{X,\delta } \left( u \right)\left( {1 - q^{\delta } } \right)} \right].$$

(i) Using Eq. (12), the pgf of \(T_{1}\) is obtained as

$$P_{{T_{1} }} \left( u \right) = \frac{{pu - p_{1} pq^{\delta } u^{\delta + 1} }}{{1 - qu - p_{1} pq^{\delta } u^{\delta + 1} }},$$

and hence the result follows directly from (15).

(ii) Using Eq. (13), the pgf of \(T\) can be written as

$$P_{T} \left( u \right) = \left( {\mathop \sum \limits_{i = 1}^{{2\left( {\delta + 1} \right)}} c_{i} u^{i} } \right)/\left( {1 + \mathop \sum \limits_{i = 1}^{{2\left( {\delta + 1} \right)}} d_{i} u^{i} } \right),$$

where the coefficients \(c_{i}\) and \(d_{i}\) are as given in the theorem. Since \(P_{T} \left( u \right)\) is of the form of Eq. (15), the result follows immediately. \(\square\)
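
To make Theorem 2(i) concrete, the sketch below builds the \(MG_{\delta + 1}\) representation of \(T_{1}\) for illustrative values of \(p\), \(p_{1}\) and \(\delta\), and checks the resulting pmf (16) against a direct Monte Carlo simulation of the change-point time:

```python
import numpy as np

p, p1, delta = 0.3, 0.7, 2               # illustrative values
q = 1 - p
m = delta + 1                            # order of the MG representation

# coefficients of Theorem 2(i): c_1 = p, c_{delta+1} = d_{delta+1} = -p1*p*q^delta,
# d_1 = -q, all other c_i, d_i zero
c = np.zeros(m); d = np.zeros(m)
c[0] = p; c[m - 1] = -p1 * p * q**delta
d[0] = -q; d[m - 1] = -p1 * p * q**delta

# representation (18)
Q = np.zeros((m, m))
Q[0, 0], Q[0, -1] = -d[0], 1.0
for i in range(1, m):
    Q[i, 0] = -d[m - i]
    if i >= 2:
        Q[i, i - 1] = 1.0
u = np.array([c[0]] + [c[m - i] for i in range(1, m)])
pi = np.zeros(m); pi[0] = 1.0

def pmf_T1(n):                           # Eq. (16)
    return pi @ np.linalg.matrix_power(Q, n - 1) @ u

# Monte Carlo check: T1 accumulates intershock times until the first
# X <= delta or the first shock of size >= d1 (probability 1 - p1)
rng = np.random.default_rng(0)
def sim_T1():
    t = 0
    while True:
        X = rng.geometric(p)             # pmf (14)
        t += X
        if X <= delta or rng.random() > p1:
            return t

sims = np.array([sim_T1() for _ in range(200_000)])
for n in range(1, 6):
    print(n, pmf_T1(n), np.mean(sims == n))
```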

3.2 The continuous case

Now suppose that the interarrival times \(X_{1}\), \(X_{2}\), \(\cdots\) between successive shocks have a continuous distribution with an arbitrary common df \(F\left( t \right) = {\text{Pr}}\left( {X \le t} \right)\), and let \(f\left( t \right) = F^{\prime}\left( t \right)\) denote its common probability density function (pdf). Also, let \(\hat{f}\left( u \right) = E\left[ {e^{ - uX} } \right] = \mathop \smallint \limits_{0}^{\infty } e^{ - ut} f\left( t \right)dt\) be the Laplace–Stieltjes transform (LST) of \(X\). Similarly to the proof of Theorem 1, we can obtain the joint LST \(\hat{f}_{{T_{1} , T}} \left( {u_{1} , u} \right) = E\left[ {e^{{ - u_{1} T_{1} - uT}} } \right]\) of the random vector \(\left( {T_{1} , T} \right)\), given in the following Theorem 3. Define the LSTs of the conditional random variables \(X\left| {X \le \delta } \right.\) and \(X\left| {X > \delta } \right.\) by

$$\hat{f}_{X,\delta } \left( u \right) = E\left( {e^{ - uX} {|}X \le \delta } \right)$$

and

$$\hat{g}_{X,\delta } \left( u \right) = E\left( {e^{ - uX} {|}X > \delta } \right),$$

respectively. Then we have the following.

Theorem 3.

The joint LST \(\hat{f}_{{T_{1} , T}} \left( {u_{1} , u} \right) = E\left[ {e^{{ - u_{1} T_{1} - uT}} } \right]\) of the random vector \(\left( {T_{1} , T} \right)\) is

$$\begin{aligned} \hat{f}_{{T_{1} , T}} \left( {u_{1} , u} \right) & = \frac{{\hat{f}_{X,\delta } \left( u \right)F\left( \delta \right)\left( {p_{2} - p_{1} } \right)\hat{g}_{X,\delta } \left( {u_{1} + u} \right)\overline{F}\left( \delta \right) + \hat{g}_{X,\delta } \left( u \right)\left( {p_{2} - p_{1} } \right)\left( {1 - p_{3} } \right)\hat{g}_{X,\delta } \left( {u_{1} + u} \right)\overline{F}^{2} \left( \delta \right)}}{{\left[ {1 - p_{3} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( u \right)} \right]\left[ {1 - p_{1} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( {u_{1} + u} \right)} \right]}} \\ & \quad + \frac{{\hat{f}_{X,\delta } \left( {u_{1} + u} \right)F\left( \delta \right) + \hat{g}_{X,\delta } \left( {u_{1} + u} \right)\overline{F}\left( \delta \right)\left( {1 - p_{2} } \right)}}{{1 - p_{1} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( {u_{1} + u} \right)}}. \\ \end{aligned}$$

Using the LST of Theorem 3, we get the marginal LSTs \(\hat{f}_{{T_{1} }} \left( u \right) = E\left[ {e^{{ - uT_{1} }} } \right]\) and \(\hat{f}_{T} \left( u \right) = E\left[ {e^{ - uT} } \right]\) of the random variables \(T_{1}\) and \(T\).

Corollary 4.

The marginal LSTs corresponding to \(T_{1}\) and \(T\) are given, respectively, by

$$\hat{f}_{{T_{1} }} \left( u \right) = \frac{{F\left( \delta \right)\left( {p_{2} - p_{1} } \right)\hat{g}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right) + \left( {p_{2} - p_{1} } \right)\left( {1 - p_{3} } \right)\hat{g}_{X,\delta } \left( u \right)\overline{F}^{2} \left( \delta \right)}}{{\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]\left[ {1 - p_{1} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( u \right)} \right]}} + \frac{{\hat{f}_{X,\delta } \left( u \right)F\left( \delta \right) + \hat{g}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right)\left( {1 - p_{2} } \right)}}{{1 - p_{1} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( u \right)}}.$$

and

$$\hat{f}_{T} \left( u \right) = \frac{{\hat{f}_{X,\delta } \left( u \right)F\left( \delta \right)\left( {p_{2} - p_{1} } \right)\hat{g}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right) + \hat{g}_{X,\delta } \left( u \right)\left( {p_{2} - p_{1} } \right)\left( {1 - p_{3} } \right)\hat{g}_{X,\delta } \left( u \right)\overline{F}^{2} \left( \delta \right)}}{{\left[ {1 - p_{3} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( u \right)} \right]\left[ {1 - p_{1} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( u \right)} \right]}} + \frac{{\hat{f}_{X,\delta } \left( u \right)F\left( \delta \right) + \hat{g}_{X,\delta } \left( u \right)\overline{F}\left( \delta \right)\left( {1 - p_{2} } \right)}}{{1 - p_{1} \overline{F}\left( \delta \right)\hat{g}_{X,\delta } \left( u \right)}}.$$

It should be noted that using the relations \(E\left[ {T_{1} } \right] = - \hat{f}_{{T_{1} }}^{\prime} \left( 0 \right)\) and \(E\left[ T \right] = - \hat{f}_{T}^{\prime} \left( 0 \right)\) together with Corollary 4, we can obtain the means of the random variables \(T_{1}\) and \(T\). Their expressions are the same as those given in Corollary 3 for the discrete case.

As in the discrete case, by considering specific distributions of the random variable \(X\), we can obtain analytical results for the evaluation of the distributions of the lifetime random variables \(T_{1}\) and \(T\). We consider the particular case when the interarrival times between successive shocks follow the exponential distribution with common pdf

$$f\left( t \right) = \lambda e^{ - \lambda t} ,\quad t \ge 0,\;\lambda > 0.$$

In this case, we have

$$\hat{f}_{X,\delta } \left( u \right) = E\left( {e^{ - uX} {|}X \le \delta } \right) = \frac{1}{{1 - e^{ - \lambda \delta } }}\frac{\lambda }{u + \lambda }\left[ {1 - e^{{ - \left( {u + \lambda } \right)\delta }} } \right],$$

and

$$\hat{g}_{X,\delta } \left( u \right) = E\left( {e^{ - uX} {|}X > \delta } \right) = \frac{\lambda }{u + \lambda }e^{ - \delta u} .$$

Using Corollary 4, the LSTs of the random variables \(T_{1}\) and \(T\) are found to be

$$\hat{f}_{{T_{1} }} \left( u \right) = \frac{{\lambda \left[ {1 - p_{1} e^{{ - \left( {u + \lambda } \right)\delta }} - p_{3} e^{ - \lambda \delta } + p_{1} p_{3} e^{{ - \left( {u + 2\lambda } \right)\delta }} } \right]}}{{\lambda + u - \lambda p_{1} e^{{ - \left( {u + \lambda } \right)\delta }} - \left( {\lambda + u} \right)p_{3} e^{ - \lambda \delta } + \lambda p_{1} p_{3} e^{{ - \left( {u + 2\lambda } \right)\delta }} }},$$
(19)

and

$$\hat{f}_{T} \left( u \right) = \frac{{ {\lambda \left( {\lambda + u} \right)\left( {1 - p_{2} e^{{ - \left( {u + \lambda } \right)\delta }} } \right) + \lambda^{2} p_{1} p_{3} e^{{ - 2\left( {u + \lambda } \right)\delta }} + \lambda^{2} (p_{2} - p_{1} - p_{3} )e^{{ - \left( {u + \lambda } \right)\delta }} } }}{{\left( {\lambda + u} \right)^{2} - \lambda \left( {\lambda + u} \right)p_{3} e^{{ - \left( {u + \lambda } \right)\delta }} - \lambda \left( {\lambda + u} \right)p_{1} e^{{ - \left( {u + \lambda } \right)\delta }} + \lambda^{2} p_{1} p_{3} e^{{ - 2\left( {u + \lambda } \right)\delta }} }}.$$
(20)

Corollary 5.

Let \(X_{i}\) have the exponential distribution with parameter \(\lambda > 0\). If \(\delta \to \infty\), then the distributions of the random variables \(T_{1}\) and \(T\) approach the exponential distribution with parameter \(\lambda\); that is,

$$\mathop {\lim }\limits_{\delta \to \infty } \Pr \left( {T_{1} > t} \right) = \mathop {\lim }\limits_{\delta \to \infty } \Pr \left( {T > t} \right) = e^{ - \lambda t} .$$

Proof.

The result follows immediately from (19) and (20), since

$$\mathop {lim}\limits_{\delta \to \infty } \hat{f}_{{T_{1} }} \left( u \right) = \frac{\lambda }{\lambda + u}\quad {\text{and}}\quad \mathop {lim}\limits_{\delta \to \infty } \hat{f}_{T} \left( u \right) = \frac{{\lambda \left( {\lambda + u} \right)}}{{\left( {\lambda + u} \right)^{2} }} = \frac{\lambda }{\lambda + u},$$

and \(\lambda /\left( {\lambda + u} \right)\) is the LST of the exponential distribution with parameter \(\lambda\). \(\square\)

A continuous random variable \(W\) is said to have a matrix-exponential distribution if its distribution function is given by

$$F_{W} \left( t \right) = \Pr \left( {W \le t} \right) = 1 - \overline{F}_{W} \left( t \right) = 1 + {\varvec{a}}^{T} exp\left( {{\varvec{S}}t} \right){\varvec{S}}^{ - 1} {\varvec{s}}{,}$$
(21)

where \({\varvec{a}}\) is a \(p \times 1\) vector, \({\varvec{S}}\) is a \(p \times p\) matrix (called a companion matrix) and \({\varvec{s}}\) is a \(p \times 1\) vector. We write \(W\sim ME_{p} (\user2{a,S,s})\) to denote that the random variable \(W\) has a matrix-exponential distribution with parameters \((\user2{a,S,s})\). Its probability density function is \(f_{W} \left( t \right) = {\varvec{a}}^{T} \exp \left( {{\varvec{S}}t} \right){\varvec{s}}\). If \(\hat{f}_{W} \left( u \right) = E\left[ {e^{ - uW} } \right] = \mathop \smallint \limits_{0}^{\infty } e^{ - ut} f_{W} \left( t \right)dt\) denotes the Laplace–Stieltjes transform of \(W\), then \(\hat{f}_{W} \left( u \right)\) can be computed from

$$\hat{f}_{W} \left( u \right) = {\varvec{a}}^{T} \left( {u{\varvec{I}}_{p} - {\varvec{S}}} \right)^{ - 1} {\varvec{s}},$$

where \({\varvec{I}}_{p}\) is the identity matrix of order \(p\).

The non-centralized moments of \(W\) can be computed from

$$E\left[ {W^{n} } \right] = n!{\varvec{a}}^{T} \left( { - {\varvec{S}}} \right)^{{ - \left( {n + 1} \right)}} {\varvec{s}}.$$

Asmussen and Bladt (1997) showed that if \(W\sim ME_{p} ({\varvec{a}}\), \({\varvec{S}}\), \({\varvec{s}})\), then the LST \(\hat{f}_{W} \left( u \right)\) is rational and can be written as

$$\hat{f}_{W} \left( u \right) = \frac{{b_{1} + b_{2} u + \cdots + b_{p} u^{p - 1} }}{{a_{1} + a_{2} u + \cdots + a_{p} u^{p - 1} + u^{p} }},$$
(22)

where \(a_{i}\), \(b_{i}\), \(1 \le i \le p\), are real numbers. It should be noted that Eq. (22) holds true when there is no point mass at zero. Then, the representation \(({\varvec{a}}\), \({\varvec{S}}\), \({\varvec{s}})\) is given by

$${\varvec{a}}^{T} = (b_{1} ,b_{2} , \ldots ,b_{p} ),\quad {\varvec{S}} = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ { - a_{1} } & { - a_{2} } & { - a_{3} } & { - a_{4} } & \cdots & { - a_{p} } \\ \end{array} } \right], \quad {\varvec{s}} = \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \\ \end{array} } \right].$$
(23)

For a comprehensive review of matrix-exponential distributions and their sub-class of phase-type distributions, we refer to Bladt and Nielsen (2017).
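
Given a rational LST in the form (22), building the representation (23) and evaluating the survival function (21) is routine; a scipy sketch, with an Erlang(2) distribution (LST \(1/\left( {1 + u} \right)^{2}\)) used only as a correctness check:

```python
import numpy as np
from scipy.linalg import expm

def me_representation(b, a):
    """Build (a_vec, S, s) of Eq. (23) from the rational LST (22) with
    numerator b_1 + ... + b_p u^{p-1} and monic denominator of degree p."""
    p = len(a)
    S = np.diag(np.ones(p - 1), k=1)     # superdiagonal of ones
    S[-1, :] = -np.asarray(a, float)     # last row: -a_1, ..., -a_p
    s = np.zeros(p); s[-1] = 1.0
    return np.asarray(b, float), S, s

def me_survival(avec, S, s, t):          # Pr(W > t) = -a^T exp(St) S^{-1} s
    return float(-avec @ expm(S * t) @ np.linalg.solve(S, s))

# correctness check with Erlang(2, rate 1): LST 1/(1 + u)^2 = 1/(1 + 2u + u^2),
# so b = (1, 0) and a = (1, 2); exact survival at t = 1 is 2*exp(-1) ~ 0.7358
avec, S, s = me_representation(b=[1.0, 0.0], a=[1.0, 2.0])
print(me_survival(avec, S, s, 1.0))
```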

As is clear from (19) and (20), the LSTs of the random variables \(T_{1}\) and \(T\) are not of the form (22); that is, their numerators and denominators are not polynomials in \(u\). Therefore, the random variables \(T_{1}\) and \(T\) do not have matrix-exponential distributions. However, using the Taylor expansion of the exponential terms involved in (19) and (20), the LSTs, and hence the survival functions, of \(T_{1}\) and \(T\) can be approximated by those of matrix-exponential distributions. Such a method has been used by Kus et al. (2022) and Chadjiconstantinidis and Eryilmaz (2022). To obtain the approximate values of the survival functions, we first bring (19) and (20) into the form (22) via Taylor expansion of the exponential terms and then use (23).

For illustration purposes, let us consider the LST of the random variable \(T_{1}\). Canceling the common factor \(1 - p_{3} e^{ - \lambda \delta }\) in (19) and multiplying both the numerator and the denominator by \(e^{u\delta }\), we get

$$\hat{f}_{{T_{1} }} \left( u \right) = \frac{{\lambda e^{\delta u} - \lambda p_{1} e^{ - \lambda \delta } }}{{\lambda e^{\delta u} + ue^{\delta u} - \lambda p_{1} e^{ - \lambda \delta } }}.$$

For sufficiently large \(q\), using the Taylor polynomial \(\mathop \sum \limits_{i = 0}^{q - 1} \frac{{\left( {\delta u} \right)^{i} }}{i!}\) in place of the term \(e^{\delta u}\), the LST of \(T_{1}\) can be approximated by the rational LST

$$\hat{f}_{{\tilde{T}_{1} }} \left( u \right) = \frac{{b_{1} + b_{2} u + \cdots + b_{q} u^{q - 1} }}{{a_{1} + a_{2} u + \cdots + a_{q} u^{q - 1} + u^{q} }},$$

where

$$b_{1} = \lambda \left( {1 - p_{1} e^{ - \lambda \delta } } \right)\frac{{\left( {q - 1} \right)!}}{{\delta^{q - 1} }};\quad b_{i} = \frac{{\lambda \left( {q - 1} \right)!}}{{\delta^{q - i} \left( {i - 1} \right)!}},\quad 2 \le i \le q,$$
$$a_{1} = \frac{{\lambda \left( {1 - p_{1} e^{ - \lambda \delta } } \right)\left( {q - 1} \right)!}}{{\delta^{q - 1} }};\quad a_{i} = \left( {\frac{\lambda }{{\left( {i - 1} \right)!}} + \frac{1}{{\delta \left( {i - 2} \right)!}}} \right)\frac{{\left( {q - 1} \right)!}}{{\delta^{q - i} }},\quad 2 \le i \le q,$$
so that \(b_{1} = a_{1}\) and \(\hat{f}_{{\tilde{T}_{1} }} \left( 0 \right) = 1\).

Therefore, the approximated random variable \(\tilde{T}_{1}\) of \(T_{1}\) has a matrix-exponential distribution, and hence, using these coefficients, the survival function of \(T_{1}\) can be approximated by

$$\Pr \left( {\tilde{T}_{1} > t} \right) = - a^{T} \exp \left( {St} \right)S^{ - 1} s,$$

where

$${a^T} = ({b_1},{b_2}, \ldots ,{b_q}),\quad S = \left[ {\begin{array}{*{20}{c}} 0&1&0&0& \cdots &0\\ 0&0&1&0& \cdots &0\\ 0&0&0&1& \cdots &0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0&0& \cdots &1\\ { - {a_1}}&{ - {a_2}}&{ - {a_3}}&{ - {a_4}}& \cdots &{ - {a_q}} \end{array}} \right],\quad s = \left[ {\begin{array}{*{20}{c}} 0\\ 0\\ 0\\ \vdots \\ 0\\ 1 \end{array}} \right],$$

where \(S\) is \(q \times q\), \(s\) is \(q \times 1\), and the \(b_{i}\)’s and \(a_{i}\)’s are given as above.
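
A sketch of this approximation for \(T_{1}\), with the coefficients above, a moderate truncation order \(q\) and illustrative parameter values (larger \(q\) sharpens the approximation at the price of numerical conditioning), together with a Monte Carlo check of the resulting survival values:

```python
import numpy as np
from math import factorial, exp
from scipy.linalg import expm

lam, p1, delta = 1.0, 0.7, 2.0           # illustrative values
q = 15                                    # truncation order of the Taylor expansion

# coefficients b_i, a_i of the rational approximation of the LST of T1
b = np.zeros(q); a = np.zeros(q)
b[0] = a[0] = lam * (1 - p1*exp(-lam*delta)) * factorial(q - 1) / delta**(q - 1)
for i in range(2, q + 1):
    b[i - 1] = lam * factorial(q - 1) / (delta**(q - i) * factorial(i - 1))
    a[i - 1] = (lam/factorial(i - 1) + 1/(delta*factorial(i - 2))) \
               * factorial(q - 1) / delta**(q - i)

# matrix-exponential representation (23) and approximated survival function (21)
S = np.diag(np.ones(q - 1), k=1)
S[-1, :] = -a
s = np.zeros(q); s[-1] = 1.0

def survival_T1(t):
    return float(-b @ expm(S * t) @ np.linalg.solve(S, s))

# Monte Carlo check: T1 accumulates exponential intershock times until the
# first X <= delta or the first shock of size >= d1 (probability 1 - p1)
rng = np.random.default_rng(0)
def sim_T1():
    t = 0.0
    while True:
        x = rng.exponential(1/lam)
        t += x
        if x <= delta or rng.random() > p1:
            return t

sims = np.array([sim_T1() for _ in range(100_000)])
for t in (1.0, 2.0, 4.0):
    print(t, survival_T1(t), np.mean(sims > t))
```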

In a similar way, using (20), we can approximate the LST of \(T\) by a rational LST, and hence approximate the distribution of \(T\) by a matrix-exponential distribution. Using Corollary 4, this approach can easily be applied also in the more general case when the interarrival times between successive shocks have a matrix-exponential distribution. The details are omitted (see, for example, Kus et al. (2022) and Chadjiconstantinidis and Eryilmaz (2022)).

4 Numerical illustrations

In this section, we present some illustrative computational results. First, consider the case when the intershock times follow the geometric distribution with mean \(\frac{1}{p}\), with \(\delta =2\) and \({p}_{1}=0.7\), \({p}_{2}=0.9\), \({p}_{3}=0.6\). In Table 1, we present the survival functions of \({T}_{1}\) and \(T\) for selected values of \(t\). Clearly, the survival functions are decreasing in \(p\), since we expect more frequent shocks as \(p\) increases.

Table 1 Survival functions when the intershock times have geometric distribution

Table 2 contains approximate values of the survival functions when the intershock times have the exponential distribution with mean \(\frac{1}{\lambda }\), for \({p}_{2}=0.9\), \({p}_{3}=0.6\) and \(\delta =2\). As expected, an increase in \(\lambda\) leads to a decrease in the survival probabilities.

Table 2 Survival functions when the intershock times have exponential distribution

5 Reliability application

Kus et al. (2022) used the mixed \(\delta\)-shock model to represent a repairable system consisting of an active and a cold standby component. As noted by Kus et al. (2022), at time \(t = 0\) both components are new, and component 1 starts operation first while component 2 is in a cold standby state. The standby component is switched into operation when the active component (component 1) fails, and a repair action is immediately taken for component 1. Suppose that the repair time of a failed component is fixed at \(\delta\), and that a component is as good as new after repair. A damage of random size occurs upon the failure of a component. Let \({Y}_{i+1}\) denote the magnitude of the damage for the component whose lifetime is \({X}_{i}\), \(i = 0, 1, 2, \ldots\). If the size of the damage is above \(d\), then the unit cannot be repaired, and hence the system fails upon the failure of the standby component. Such a repairable system thus fails either if one of the \({X}_{i}\)'s is less than or equal to \(\delta\) or if the size of the damage for a failed component is above \(d\). The lifetime of the system is then represented as

$$T = \mathop \sum \limits_{i = 0}^{{N_{d} }} X_{i} .$$

Under the change point setup considered in the present paper, the random variable \({N}_{{d}_{1}}\) denotes the number of component failures until the first damage of size at least \({d}_{1}\). Assume that the size of the damage is determined by a certain environmental factor; thus, the probabilistic law of the damage size changes after the first damage above the threshold \({d}_{1}\).

Suppose that there is a desired level for the mean time to failure (MTTF) of the system. If the MTTF of the system is below a given specified level, then an additional standby component may be added to increase the performance of the system. The MTTF of the system can be computed from

$$E\left[ T \right] = \left[ {1 + \frac{{1 + \left( {p_{2} - p_{1} - p_{3} } \right)\overline{F}\left( \delta \right)}}{{\left[ {1 - p_{1} \overline{F}\left( \delta \right)} \right]\left[ {1 - p_{3} \overline{F}\left( \delta \right)} \right]}}} \right]E\left[ X \right],$$

where \(F\left(x\right)=\mathrm{Pr}(X\le x)\) is the time-to-failure distribution of a component. The distribution \(F\left(x\right)\) may be either discrete or continuous depending on the system. If the lifetime of a component corresponds to a number of cycles, then \(F\left(x\right)\) should be modeled as discrete. For example, the lifetimes of batteries may be measured by how many times their charge–discharge process has been repeated; in this case, a cycle is one complete use of the battery to store power and release it (Willis and Scott (2000)).
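
For instance, with geometrically distributed component lifetimes measured in cycles, the resulting decision rule is immediate; a sketch with hypothetical parameter values and target level:

```python
# MTTF of the repairable standby system with geometric (cycle-count)
# component lifetimes; all numbers are hypothetical illustrations
p, delta = 0.05, 3                  # Pr(X = t) = p*(1-p)**(t-1); repair time delta
p1, p2, p3 = 0.7, 0.9, 0.6          # damage-size probabilities before/after change
Fbar = (1 - p)**delta               # Pr(X > delta)
EX = 1 / p

mttf = (1 + (1 + (p2 - p1 - p3)*Fbar) / ((1 - p1*Fbar)*(1 - p3*Fbar))) * EX
target = 50.0                       # hypothetical desired MTTF level
print(f"MTTF = {mttf:.2f} cycles; add another standby: {mttf < target}")
```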

6 Summary and conclusions

In this paper, a mixed shock model that combines the extreme and delta shock models has been studied when there is a change in the shock size distribution. Computationally efficient matrix-based expressions were obtained for the survival function and the mean time to failure of the system. The results extend and generalize previous results in two main directions. First, the model of Eryilmaz and Kan (2021) has been extended to a mixed model. Second, discrete intershock times have also been considered for the reliability evaluation of the system; the latter is also new for the extreme shock model with a change point studied in Eryilmaz and Kan (2021). As future work, more general models and extensions can be considered. For example, a mixed model that combines run and delta shock models can be studied, and the dependence between shock magnitude and intershock time may be considered as well. A change point in the interarrival times is also worth investigating, since it is applicable to real-life systems.