1 Introduction

Hepatitis C is a major health problem on a global scale. This is illustrated by the following facts [17]. There are 1.5 million infections per year. Of those infected, 30% recover after a few months, while the disease becomes chronic in the remaining 70% of cases. The estimated number of people with chronic hepatitis C is 58 million. Chronic infection leads in many cases to cirrhosis and liver cancer. In 2019 the number of people who died from hepatitis C (mainly from its long-term complications) was 290,000. These numbers may be compared with those for hepatitis B, a disease due to an unrelated virus which also affects the liver. In that case there are also 1.5 million infections per year. The disease can become chronic, mainly in the case of childhood infections. The estimated number of people with chronic hepatitis B is 296 million. As in the case of hepatitis C, chronic infection leads in many cases to cirrhosis and liver cancer. In 2019 the number of people who died from hepatitis B (mainly from its long-term complications) was 820,000. These numbers may be compared with those for a viral infection better known to the general public, HIV/AIDS. In that case the estimated number of infections per year is again 1.5 million. The disease always becomes chronic. The estimated number of people infected with HIV is 37.7 million. In 2020 the number of people who died from the effects of HIV was 680,000.

There is no effective vaccine against hepatitis C. However, there are treatments with direct-acting antivirals (DAA) which can cure most chronic cases. They do not suffer from the very long treatment times and severe side effects of earlier treatments such as interferon \(\alpha \) and ribavirin. The availability of the new treatments has been limited by their high costs. In the case of hepatitis B there is an effective vaccine but there is no curative treatment. There are drugs which may help to reduce viral load and hence long-term damage to the liver. In the case of HIV there is neither an effective vaccine nor a curative treatment. There are drugs which can be used to maintain the viral load at a low level over the long term (highly active anti-retroviral therapy, HAART). There is a great need to understand these diseases better and in doing so it may be helpful to compare them with each other.

One way to attempt to better understand hepatitis C and its differences from other diseases caused by viral infections is to use mathematical models of the evolution of the disease within a host. The basic model of viral infection [12] is a system of three ordinary differential equations where the unknowns are the numbers of uninfected cells x, infected cells y and virus particles v. It can be used to model any viral infection, and any information about differences between viruses must be contained in the choice of parameters and initial data. The basic model was originally applied to HIV and there are two aspects of it which may be inappropriate for modelling hepatitis C (infection with HCV) and hepatitis B (infection with HBV). The first is that it includes a constant source term in the evolution equation for uninfected cells. In the case of HIV the main target cells of the virus are T cells and these can be replenished from production in the bone marrow. In the case of HCV and HBV the target cells are the hepatocytes of the liver and it is not clear that there is an external source for those. On the other hand it has been suggested that blood cells coming from the bone marrow may be transformed into hepatocytes [15] and to the authors’ knowledge this possibility has not been ruled out. The second aspect of the basic model which is problematic for its application to hepatitis is the fact that the infection rate is described by a mass action term, i.e. one proportional to xv, and this can lead to the conclusion that adults should be more susceptible to hepatitis than children, something which is not seen in reality [3]. This can be avoided by replacing mass action by the so-called standard incidence function, which is proportional to \(\frac{xv}{x+y}\). It has the mathematical inconvenience that this expression is singular for \(x+y=0\).

In [11] a model for hepatitis C was introduced and some of the properties of its solutions established. A central aim of what follows is to extend those results on the dynamics of the model of [11]. Another aim is to better understand those results in a wider context, both mathematical and biological. The model of [11] is a system of five ODE which augments a modification of the basic model by a simple description of virus production in the cell. The model for virus production comes from [4], where it was added to a previous model of hepatitis C introduced in [13]. The model of [11] includes a constant source of uninfected cells and uses the standard incidence function.

The basic model has been extended in many directions, some of which are mentioned in what follows. One key issue is that of the role of the immune system in the infection. The immune system is not included explicitly in the basic model. It is included implicitly due to the fact that the parameters describing the elimination of infected cells and virus particles may include contributions due to effects of the immune system. This issue is important since the differences between the effects of HCV, HBV and HIV probably have a lot to do with differences in the immune response. It is not simple to identify these since the mechanisms involved may go beyond the most obvious candidates. For instance it is observed that in the initial phase of an infection with hepatitis C the amount of virus reaches a maximum and decreases again. If this were due to killing of infected cells an increase in liver enzymes should be seen. In fact no such effect is observed. It is believed that the decrease of the virus in this phase is due to the influence of natural killer cells (NK cells). However, this influence takes place not through the killing which gives NK cells their name but through the fact that they produce interferon \(\gamma \) [10]. In this context it should be noted that important features of the dynamics of hepatitis C evolve on very different time scales, from a few days for the acute infection to many years for the chronic infection. Thus it might be appropriate to use different models for different phases of the disease. In view of this the strategy of the present paper is to understand the dynamics of the model without any restrictions on the parameters other than their positivity. Restrictions on the parameters which are appropriate for specific applications of the model and the restrictions on the phenomenology which may result are left for future investigation.

In this paper we extend the analysis in [11] of the model introduced there. Section 2 is concerned with basic properties of that model. Theorem 2 states that there are up to three steady states \(E_0\), \(E_0'\) and \(E_0''\) on the boundary of the positive orthant. The steady state \(E_0''\) was not seen in [11] since it is not consistent with the stronger restrictions on the parameters made in that paper. Theorem 3 shows, among other things, that there is an open set of initial data leading to solutions which converge to \(E_0''\) as \(t\rightarrow \infty \) and that the study of the late-time behaviour of solutions which do not converge to either \(E_0\) or \(E_0''\) can be reduced to that of the late-time behaviour of a three-dimensional system. In Sect. 3 the stability properties of the boundary steady states are studied by linearization. In [11] a condition on the parameters in the system was found which ensures that all solutions converge to steady states but the question whether this is true for all values of the parameters was left open. In Sect. 4 we provide a negative answer to this question by proving that there are parameter choices for which periodic solutions exist. This is done with the help of Hopf bifurcations. In Sect. 5 a parametrization of all steady states is given which extends one given in [11]. It follows that there are at most three positive steady states for any choice of parameters. In many models related to infectious diseases the number of positive steady states is controlled by a basic reproductive number \(R_0\): it is zero for \(R_0\le 1\) and one for \(R_0>1\). For the model of [11] we are able to prove that for a certain quantity \({{\mathcal {R}}}_0''\) (see (17) for its definition) the number of positive steady states is even for \({{\mathcal {R}}}_0''\le 1\) and odd for \({{\mathcal {R}}}_0''>1\). We were, however, not able to decide whether the number of positive steady states is ever greater than the minimum consistent with this parity statement. One way in which a greater number of positive steady states might occur is through the occurrence of a backward bifurcation from the infection-free steady state \(E_0'\), where an unstable positive steady state arises [2, 8]. We show that this does not happen in the model studied here. Instead there is a forward bifurcation where a stable positive steady state arises. We also prove the existence of a stable steady state in a different region of parameter space by a perturbation analysis of the limit where the source strength s tends to zero. Section 6 gives an outlook on possible future research directions.

2 Basic Properties of the Model

The following model for the dynamics of the infection of a host by the hepatitis C virus (HCV) was introduced in [11].

$$\begin{aligned} \frac{dT}{dt}= & {} s+r_TT\left( 1-\frac{T+I}{T_\textrm{max}}\right) -dT -\frac{bTV}{T+I}, \end{aligned}$$
(1)
$$\begin{aligned} \frac{dI}{dt}= & {} r_II\left( 1-\frac{T+I}{T_\textrm{max}}\right) +\frac{bTV}{T+I}-\delta I,\end{aligned}$$
(2)
$$\begin{aligned} \frac{dV}{dt}= & {} \rho RI-cV-\frac{bTV}{T+I},\end{aligned}$$
(3)
$$\begin{aligned} \frac{dU}{dt}= & {} \beta R\left( 1-\frac{U}{U_\textrm{max}}\right) -\gamma U, \end{aligned}$$
(4)
$$\begin{aligned} \frac{dR}{dt}= & {} \alpha (1-\epsilon )U-\sigma R. \end{aligned}$$
(5)

In that paper several properties of the solutions of this model were determined. In what follows these results will be extended and some features of solutions of this model will be compared with those of solutions of related models.

In these equations T, I, V, U and R are functions of time t which are supposed to be non-negative. All other quantities in these equations are positive constants and \(\epsilon <1\). U and R are the total amounts of certain types of RNA in the cells and they give a representation of the replication machinery of HCV. T, I and V are the numbers of uninfected cells, infected cells and virus particles outside the cells. Note that the equations (4)–(5) form a closed system for U and R.
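
For readers who wish to experiment with the model, the following minimal sketch (in Python, with arbitrary placeholder parameter values that are not taken from [11]) integrates (1)–(5) numerically; it is intended purely as an illustration of the structure of the system.

```python
# Sketch of the system (1)-(5); parameter values are arbitrary placeholders.
import numpy as np
from scipy.integrate import solve_ivp

par = dict(s=1.0, r_T=1.0, r_I=0.5, T_max=1e4, d=0.01, delta=0.1, b=0.01,
           rho=5.0, c=1.0, beta=2.0, U_max=10.0, gamma=0.5, alpha=1.0,
           eps=0.5, sigma=1.0)

def rhs(t, y, p):
    T, I, V, U, R = y
    inc = p['b'] * T * V / (T + I)                      # standard incidence term
    return [p['s'] + p['r_T'] * T * (1 - (T + I) / p['T_max']) - p['d'] * T - inc,
            p['r_I'] * I * (1 - (T + I) / p['T_max']) + inc - p['delta'] * I,
            p['rho'] * R * I - p['c'] * V - inc,
            p['beta'] * R * (1 - U / p['U_max']) - p['gamma'] * U,
            p['alpha'] * (1 - p['eps']) * U - p['sigma'] * R]

sol = solve_ivp(rhs, (0.0, 500.0), [1000.0, 10.0, 1.0, 1.0, 1.0],
                args=(par,), rtol=1e-8, atol=1e-10)
print("state at t = 500:", sol.y[:, -1])
print("minimum of T + I along the trajectory:", (sol.y[0] + sol.y[1]).min())
```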

The right hand sides of (1)–(5) define smooth functions on the positive orthant and thus local existence holds for the initial value problem with positive initial data. These functions cannot all be extended continuously to the non-negative orthant and so to go beyond local existence it is not enough to show that solutions are bounded on any finite time interval. It is also necessary to show that a solution which is initially positive cannot approach the singular set defined by \(T=I=0\). The following theorem collects some results proved in [11] and extends them slightly by removing some restrictions on the parameters.

Theorem 1

Let \(T_0,I_0,V_0,U_0,R_0\) be positive real numbers. Then there exists a unique positive solution (T(t), I(t), V(t), U(t), R(t)) of (1)–(5) on the interval \([0,\infty )\) with \(T(0)=T_0\), \(I(0)=I_0\), \(V(0)=V_0\), \(U(0)=U_0\) and \(R(0)=R_0\). It is bounded and there exists a constant \(C>0\) such that \(T+I\ge C\).

Proof

In [11] it was proved that for any solution with positive initial data defined on an interval \([0,t_1)\) with \(t_1\) finite or infinite there exists a constant \(C>0\) such that \(T+I\ge C\). It was also proved in [11] that all solutions of the system considered in that paper exist for all positive times and are bounded under the restrictions that \(d\le \delta \) and \(r_I\le r_T\). In fact it is not hard to extend the proof given there to general positive parameters. To do so let \(r_1=\max \{r_T,r_I\}\) and \(d_1=\min \{d,\delta \}\). Then

$$\begin{aligned} \frac{d(T+I)}{dt}\le s+r_1(T+I)\left( 1-\frac{T+I}{T_\textrm{max}}\right) -d_1(T+I). \end{aligned}$$

This allows us to argue as in [11] to show that \(T+I\) is bounded and thus that the whole solution is bounded. The bound of \(T+I\) by the quantity \(p_0\) introduced in [11] is replaced by a bound in terms of the quantity obtained from \(p_0\) by replacing d by \(d_1\) and \(r_T\) by \(r_1\). The definition of \(p_0\) is recalled in (6). Once the solution is known to be bounded and remain away from the singular set it is straightforward to show that all concentrations remain positive as long as the solution exists. \(\square \)

Note that it is essential for this result that \(s>0\). It was shown in [5] and [6] that for some models related to the system (1)–(5) with \(s=0\) there are solutions which tend to the origin as \(t\rightarrow \infty \). In fact it is also true, and simpler to prove, that there are solutions of this type in the model of Guedj and Neumann [4]. There the origin is a steady state and the linearization of the system about that point has a negative eigenvalue when \(r<d\). The analogues for the system of [4] of all statements in Theorem 1 other than that concerning boundedness below by a positive constant hold and can be proved in the same way.

In order to obtain insights into the dynamics of the model a useful first step is to determine the boundary steady states, i.e. the non-negative steady states for which at least one of the variables vanishes. Some information on these is collected in the following theorem.

Theorem 2

For given values of the parameters, the system (1)–(5) has up to three steady states on the boundary. Let

$$\begin{aligned} p_0=\left( 1-\frac{d}{r_T}+\left( \left( 1-\frac{d}{r_T}\right) ^2 +\frac{4s}{r_T T_\textrm{max}}\right) ^{\frac{1}{2}}\right) \frac{T_\textrm{max}}{2}. \end{aligned}$$
(6)

1. \(E_0=(p_0,0,0,0,0)\) is a non-negative steady state.

2. Let \(U^*=U_\textrm{max}\left( 1-\frac{1}{{{\mathcal {R}}}_0'}\right) \) and \(R^*=U_\textrm{max}\frac{\gamma }{\beta }({{\mathcal {R}}}_0'-1)\), where \({{\mathcal {R}}}_0'=\frac{\alpha \beta (1-\epsilon )}{\gamma \sigma }\). Then if \({{\mathcal {R}}}_0'>1\) the point \(E_0'=(p_0,0,0,U^*,R^*)\) is a non-negative steady state.

3. If \(d>\frac{r_T\delta }{r_I}\) and \(\frac{s}{r_TT_\textrm{max}}<\left( 1-\frac{\delta }{r_I}\right) \left( \frac{d}{r_T}-\frac{\delta }{r_I}\right) \), let \(T^*=s\left( d-\frac{r_T\delta }{r_I}\right) ^{-1}\) and

$$\begin{aligned} I^*=T_\textrm{max}\left( 1-\frac{\delta }{r_I} -\frac{s}{r_TT_\textrm{max}}\left( \frac{d}{r_T} -\frac{\delta }{r_I}\right) ^{-1}\right) . \end{aligned}$$

Then the point \(E_0''=(T^*,I^*,0,0,0)\) is a non-negative steady state. There are no boundary steady states other than those given in points 1.-3.

Proof

For a non-negative steady state of (1)–(5) it follows from (1) that T is positive. If V is positive then it follows from (3) that I and R are positive. It then follows from (5) that U is positive and that we do not have a boundary steady state. Thus boundary steady states of (1)–(5) satisfy \(V=0\). It then follows from (3) that at least one of R and I must be zero. The case \(I=0\) was analysed completely in [11]. Under the assumptions made on the parameters in [11] it was the only case. In that case \(T=p_0\) and only the steady states \(E_0\) and \(E_0'\) occur. \((U^*,R^*)\) is the unique positive steady state of the sub-system (4)–(5). It remains to treat the case that \(I\ne 0\), so that \(R=0\). Then it follows that \(U=0\). As a consequence of (2) we have \(1-\frac{T+I}{T_\textrm{max}}=\frac{\delta }{r_I}\). Substituting this back into (1) gives \(s=\left( d-\frac{r_T\delta }{r_I}\right) T\). If \(d\le \frac{r_T\delta }{r_I}\) this gives a contradiction. Suppose then that \(d>\frac{r_T\delta }{r_I}\). Solving for T we get the equation \(T=s\left( d-\frac{r_T\delta }{r_I}\right) ^{-1}\). The evolution equation for I gives \(I=T_\textrm{max}\left( 1-\frac{\delta }{r_I}-\frac{T}{T_\textrm{max}}\right) \). Hence

$$\begin{aligned} I=T_\textrm{max}\left( 1-\frac{\delta }{r_I} -\frac{s}{r_TT_\textrm{max}}\left( \frac{d}{r_T} -\frac{\delta }{r_I}\right) ^{-1}\right) . \end{aligned}$$

It may be concluded that necessary and sufficient conditions for the existence of a boundary steady state with \(I>0\) are \(d>\frac{r_T\delta }{r_I}\) and \(\frac{s}{r_TT_\textrm{max}}<\left( 1-\frac{\delta }{r_I}\right) \left( \frac{d}{r_T}-\frac{\delta }{r_I}\right) \). When these conditions are satisfied we get explicit expressions for T and I and \(V=R=U=0\). \(\square \)
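
As a quick numerical sanity check of Theorem 2 (not part of the proof), one can compute \(p_0\), \((U^*,R^*)\) and \((T^*,I^*)\) for one admissible choice of parameters and verify that the corresponding right hand sides vanish. The values below are arbitrary, chosen only so that \({{\mathcal {R}}}_0'>1\) and the conditions of point 3 hold.

```python
# Numerical check of the boundary steady states in Theorem 2 for one
# admissible (arbitrary) choice of parameters; all printed residuals
# should vanish up to rounding error.
import numpy as np

s, r_T, r_I, T_max, d, delta = 1.0, 1.0, 2.0, 100.0, 1.5, 0.5
alpha, beta, eps, gamma, sigma, U_max = 1.0, 2.0, 0.25, 0.5, 1.0, 10.0

# E_0: T = p_0 from (6) solves s + r_T T (1 - T/T_max) - d T = 0.
p0 = 0.5 * T_max * (1 - d / r_T + np.sqrt((1 - d / r_T) ** 2 + 4 * s / (r_T * T_max)))
print("E_0 residual:", s + r_T * p0 * (1 - p0 / T_max) - d * p0)

# E_0': requires R_0' > 1; (U*, R*) is then a steady state of (4)-(5).
R0p = alpha * beta * (1 - eps) / (gamma * sigma)
U_star = U_max * (1 - 1 / R0p)
R_star = U_max * gamma / beta * (R0p - 1)
print("R_0' =", R0p, " E_0' residuals:",
      beta * R_star * (1 - U_star / U_max) - gamma * U_star,
      alpha * (1 - eps) * U_star - sigma * R_star)

# E_0'': exists since d > r_T delta / r_I and the inequality of point 3 holds.
T_star = s / (d - r_T * delta / r_I)
I_star = T_max * (1 - delta / r_I - s / (r_T * T_max) / (d / r_T - delta / r_I))
print("E_0'' residuals:",
      s + r_T * T_star * (1 - (T_star + I_star) / T_max) - d * T_star,
      r_I * I_star * (1 - (T_star + I_star) / T_max) - delta * I_star)
```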

Remark 1

In the model of [4] there exist boundary steady states corresponding to \(E_0\) and \(E_0'\) when \(d<r\), with \(p_0\) replaced by \(1-\frac{d}{r}\). When \(r\le d\) the only boundary steady states satisfy \(T=I=V=0\).

We recall some facts about the late time behaviour of solutions of (1)–(5) proved in [11].

Theorem 3

Any solution of the system (1)–(5) with positive initial data belongs to one of the following three cases.

1. The given solution converges to \(E_0\) as \(t\rightarrow \infty \).

2. The given solution converges to \(E_0''\) as \(t\rightarrow \infty \).

3. The solution passing through any \(\omega \)-limit point of the given solution is of the form \((T,I,V,U^*,R^*)\), where \((T,I,V)\) is a solution of the system (1)–(3) with \(R=R^*\).

Proof

Given a positive solution of (1)–(5) we can consider a solution passing through one of its \(\omega \)-limit points. In the limiting solution R and U are constant while the other unknowns satisfy the equations obtained from (1)–(3) by setting R equal to that given value, either \(R=0\) or \(R=R^*\). This was proved at the beginning of Section 5 of [11]. The proof is based on the facts that the evolution equations for R and U decouple from those for T, I and V, that the resulting two-dimensional system is cooperative on the region \(U\le U_{\max }\) and, as shown in the proof of Theorem 2.5 of [11], each solution is contained in this region for t sufficiently large. The \(\omega \)-limit set of a given solution is connected. Hence, for a given solution, R either converges to 0 or to \(R^*\) for \(t\rightarrow \infty \). Consider first the case of the system (1)–(3) with \(R=0\). In that case V is a Lyapunov function and V converges to zero for all solutions. Considering again a solution passing through an \(\omega \)-limit point leads to a solution of the system (1)–(2) with \(V=0\). As remarked in [11] this system is competitive and so each solution converges to a steady state. In [11] it was claimed that all points of this system with \(T=0\) were steady states. This is not true. In fact on that boundary we have \(\dot{T}=s>0\) and so no positive solution can approach a point of that boundary. On the boundary \(I=0\) the unique steady state is given by \(T=p_0\). Under the assumptions made on the parameters in [11] there did not exist an interior steady state. With the less restrictive assumptions made here there is an interior steady state for certain choices of parameters. It corresponds to the point \(E_0''\) considered above. We can conclude that any solution with the property that \(R\rightarrow 0\) for \(t\rightarrow \infty \) either converges to \(E_0\) or \(E_0''\). The only remaining case is that where R does not tend to zero and then \(R\rightarrow R^*\) for \(t\rightarrow \infty \) and the solution belongs to case 3. This completes the proof. \(\square \)

Under the assumptions made on the parameters in [11] only the first and third cases listed in Theorem 3 occur. It will be seen in the next section that there are parameter values for which the second case occurs. Theorem 3 implies that either a solution has very simple asymptotics (case 1 or 2) or the analysis of its asymptotics can be reduced to that of a solution of (1)–(3) with \(R=R^*\). A similar statement holds for the model defined by the equations (3) of [4]. Either a solution converges to \(E_0'\), it converges to a steady state with \(T=I=V=0\) or its asymptotics are determined by those of a solution of the equations (1) of [4].

The biological interpretation of the three boundary steady states will now be considered. The point \(E_0\) is a state where no viral RNA is present in the system. It gives a representation of an individual who has never come into contact with hepatitis C. The point \(E'_0\) is a state which is free of infectious virus and cells which produce that virus but where the cells contain virus RNA in the form of replication units. These might come from a previous infection of a given individual and would have an effect in the event of a new infection. \({{\mathcal {R}}}_0'>1\) is the condition for a new infection of this type to be possible. The point \(E''_0\) is a state in which a population of infected cells maintains itself in the absence of free virus. Whether this can happen in an infected individual is unclear.

3 Linearization

We now consider the linearization of the system (1)–(5) about an arbitrary steady state. It has a block upper triangular form with the bottom left \(2\times 3\) submatrix being zero and so its eigenvalues are the union of those of the top left \(3\times 3\) submatrix and the bottom right \(2\times 2\) submatrix. The latter is the linearization of the system (4)–(5) and is of the form

$$\begin{aligned} \left[ {\begin{array}{cc} -\frac{\beta R}{U_\textrm{max}}-\gamma &{} \beta \left( 1-\frac{U}{U_\textrm{max}}\right) \\ \alpha (1-\epsilon ) &{} -\sigma \end{array}}\right] . \end{aligned}$$
(7)

The former is the linearization of the system (1)–(3) and is of the form

$$\begin{aligned} \left[ {\begin{array}{ccc} -a_1 &{}\quad b_1 &{}\quad -\frac{bT}{T+I}\\ a_2 &{}\quad -b_2 &{}\quad \frac{bT}{T+I}\\ -\frac{bIV}{(T+I)^2} &{}\quad \rho R+\frac{bTV}{(T+I)^2} &{}\quad -c-\frac{bT}{T+I} \end{array}} \right] \end{aligned}$$
(8)

where

$$\begin{aligned}{} & {} a_1=d-r_T+\frac{2r_TT}{T_\textrm{max}}+\frac{r_TI}{T_\textrm{max}} +\frac{bIV}{(T+I)^2},\\{} & {} a_2=-\frac{r_II}{T_\textrm{max}}+\frac{bIV}{(T+I)^2},\\{} & {} b_1=-\frac{r_TT}{T_\textrm{max}}+\frac{bTV}{(T+I)^2},\\{} & {} b_2=\delta -r_I+\frac{r_IT}{T_\textrm{max}}+2\frac{r_II}{T_\textrm{max}} +\frac{bTV}{(T+I)^2}. \end{aligned}$$
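
As an aside for readers checking the algebra, the matrix (8) can be compared with a finite-difference Jacobian of (1)–(3) at a generic point. The snippet below (with arbitrary placeholder parameter values and \(R\) held fixed) is one way to do this.

```python
# Compare the matrix (8) with a central-difference Jacobian of (1)-(3);
# parameter values and the evaluation point are arbitrary placeholders.
import numpy as np

s, r_T, r_I, T_max, d, delta, b, rho, c, R = 1.0, 1.0, 0.5, 100.0, 0.1, 0.2, 0.3, 2.0, 1.0, 1.5

def f(x):
    T, I, V = x
    inc = b * T * V / (T + I)
    return np.array([s + r_T * T * (1 - (T + I) / T_max) - d * T - inc,
                     r_I * I * (1 - (T + I) / T_max) + inc - delta * I,
                     rho * R * I - c * V - inc])

def matrix_8(x):
    T, I, V = x
    a1 = d - r_T + 2 * r_T * T / T_max + r_T * I / T_max + b * I * V / (T + I) ** 2
    a2 = -r_I * I / T_max + b * I * V / (T + I) ** 2
    b1 = -r_T * T / T_max + b * T * V / (T + I) ** 2
    b2 = delta - r_I + r_I * T / T_max + 2 * r_I * I / T_max + b * T * V / (T + I) ** 2
    return np.array([[-a1, b1, -b * T / (T + I)],
                     [a2, -b2, b * T / (T + I)],
                     [-b * I * V / (T + I) ** 2,
                      rho * R + b * T * V / (T + I) ** 2,
                      -c - b * T / (T + I)]])

x, h = np.array([30.0, 10.0, 5.0]), 1e-6
J_fd = np.column_stack([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(3)])
print("maximal discrepancy:", np.max(np.abs(J_fd - matrix_8(x))))  # of order h**2
```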

Theorem 4

The eigenvalues of the linearization at the boundary steady states of the system (1)–(5) have the following properties.

1. The number of eigenvalues at the point \(E_0\) which have positive real parts is equal to the number of positive quantities among \(\left( 1-\frac{\delta }{r_I}\right) \left( \frac{d}{r_T}-\frac{\delta }{r_I}\right) -\frac{s}{r_TT_\textrm{max}}\) and \({{\mathcal {R}}}_0'-1\).

2. The point \(E_0'\) always has at least four eigenvalues with negative real part. The sign of the remaining eigenvalue is equal to that of \({{\mathcal {R}}}_0''-1\).

3. All eigenvalues of \(E_0''\) have negative real part.

Proof

It was shown in [11] that the eigenvalues of (7) at the origin have negative real parts for \({{\mathcal {R}}}_0'<1\) and that for \({{\mathcal {R}}}_0'>1\) there is one positive and one negative eigenvalue. It was also shown that the eigenvalues of (7) at the positive steady state always have negative real parts.

If we evaluate the matrix (8) at the point \(E_0\) we see that the entries below the diagonal vanish so that the eigenvalues are just the diagonal elements. The third diagonal element is obviously negative. The first is \(r_T\left[ \left( 1-\frac{d}{r_T}\right) -\frac{2p_0}{T_\textrm{max}}\right] \), which is also negative. The sign of the second depends on the choice of parameters. Note that \(p_0\) is a positive root of a quadratic polynomial p which has a positive leading term and is negative at the origin. The eigenvalue has the same sign as the value of the polynomial at the point \(T_\textrm{max}\left( 1-\frac{\delta }{r_I}\right) \). Hence it is negative precisely when

$$\begin{aligned} \left( 1-\frac{\delta }{r_I}\right) \left( \frac{d}{r_T}-\frac{\delta }{r_I}\right) <\frac{s}{r_TT_\textrm{max}}. \end{aligned}$$
(9)

This proves 1.

The signs of the real parts of the eigenvalues of the linearization at the point \((p_0,0,0,U^*,R^*)\) were determined in [11] and this proves 2.

If we evaluate the matrix (8) at the point \(E_0''\) we get

$$\begin{aligned} \left[ {\begin{array}{ccc} r_T-d-\frac{r_T(2T+I)}{T_\textrm{max}} &{}\quad -\frac{r_TT}{T_\textrm{max}} &{} \quad -\frac{bT}{T+I}\\ -\frac{r_II}{T_\textrm{max}}&{}\quad r_I-\delta -\frac{r_I(T+2I)}{T_\textrm{max}} &{}\quad \frac{bT}{T+I}\\ 0 &{}\quad 0 &{}\quad -c-\frac{bT}{T+I} \end{array}} \right] . \end{aligned}$$

Evidently one eigenvalue is equal to \(-c-\frac{bT}{T+I}\) and the other two are given by the eigenvalues of the upper left \(2\times 2\) submatrix. After some manipulation using the expressions for T and I at the steady state it can be shown that the trace of this submatrix is equal to \(-d+\frac{r_T\delta }{r_I}-\frac{r_TT}{T_{\max }}-\frac{r_II}{T_{\max }}<0\). Its determinant is given by \(\frac{r_II}{T_{\max }}\left( d-\frac{r_T\delta }{r_I}\right) >0\). We conclude that all eigenvalues have negative real part. This proves 3. \(\square \)
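
The following small computation (an illustration only, using the same arbitrary parameter values as in the check after Theorem 2, for which \(E_0''\) exists) evaluates the matrix displayed above at \(E_0''\) and confirms numerically that its eigenvalues have negative real parts, as well as the stated expressions for the trace and determinant of the upper left \(2\times 2\) submatrix.

```python
# Numerical illustration of point 3 of Theorem 4 at E_0'' (arbitrary
# admissible parameter values, as in the earlier check of Theorem 2).
import numpy as np

s, r_T, r_I, T_max, d, delta, b, c = 1.0, 1.0, 2.0, 100.0, 1.5, 0.5, 0.3, 1.0

T = s / (d - r_T * delta / r_I)
I = T_max * (1 - delta / r_I - s / (r_T * T_max) / (d / r_T - delta / r_I))

M = np.array([[r_T - d - r_T * (2 * T + I) / T_max, -r_T * T / T_max, -b * T / (T + I)],
              [-r_I * I / T_max, r_I - delta - r_I * (T + 2 * I) / T_max, b * T / (T + I)],
              [0.0, 0.0, -c - b * T / (T + I)]])

print("eigenvalues:", np.linalg.eigvals(M))
print("trace of the 2x2 block:", M[0, 0] + M[1, 1],
      "=", -d + r_T * delta / r_I - r_T * T / T_max - r_I * I / T_max)
print("det of the 2x2 block:  ", M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0],
      "=", r_I * I / T_max * (d - r_T * delta / r_I))
```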

Remark 2

It follows from 3. that \(E_0''\) is a hyperbolic sink. In particular this implies that there exist positive solutions which converge to \(E_0''\) as \(t\rightarrow \infty \).

Remark 3

If the parameters are varied so that the quantities on the left and right hand sides of (9) become equal then I tends to zero, which means that \(E_0\) and \(E_0''\) approach each other.

Remark 4

In the system of [4] the number of positive eigenvalues at the point \(E_0\) is equal to the number of positive quantities among \(\frac{d}{r}-1\) and \({{\mathcal {R}}}_0'-1\). The stability properties of \(E_0'\) are as in Theorem 4. The number of positive eigenvalues at one of the remaining steady states is zero for \({{\mathcal {R}}}_0'<1\) and one for \({{\mathcal {R}}}_0'>1\).

Consider now the case where \(E_0''\) does not exist. Then every solution for which \(R\rightarrow 0\) must approach \(E_0\) and this suggests that \(-b_2<0\) at \(E_0\). One case in which \(E_0''\) does not exist is when \(d\le \frac{r_T\delta }{r_I}\). Then \(\frac{d}{r_T}\le \frac{\delta }{r_I}\). We will show that this indeed implies that \(-b_2<0\). If \(\frac{d}{r_T}\ge 1\) then \(\frac{\delta }{r_I}\ge 1\) and \(-b_2<0\). If, on the other hand, \(\frac{d}{r_T}<1\) then \(p_0>\left( 1-\frac{d}{r_T}\right) T_\textrm{max} \ge \left( 1-\frac{\delta }{r_I}\right) T_\textrm{max}\) and again \(-b_2<0\). The other case where \(E_0''\) does not exist is that where \(d>\frac{r_T\delta }{r_I}\) and \(\frac{s}{r_TT_\textrm{max}}\ge \left( 1-\frac{\delta }{r_I}\right) \left( \frac{d}{r_T}-\frac{\delta }{r_I}\right) \). It has been shown above that when this inequality is strict \(-b_2<0\) while in the case of equality \(b_2=0\).

4 Oscillations

It has been known for a long time that in-host models of viral dynamics can exhibit oscillations. This was seen numerically in a model for HIV in [14]. The model studied in that paper has four unknowns, since it distinguishes between latently and productively infected cells. It uses mass-action kinetics for the process of infection. There are both a constant source and a logistic proliferation of uninfected cells. It was found that for certain values of the parameters there are periodic solutions which arise in a Hopf bifurcation. However these parameter values do not seem to be appropriate for the concrete application considered in [14]. A version of this model without latently infected cells was studied mathematically in [1]. It was proved that there exist stable periodic solutions for certain values of the parameters. This model is similar to that of [4] but not identical to it. The difference is that the expression \(T+I\) in the model of [4] is replaced by T. This is essential for the proofs in [1] since they make use of the fact that the system is three-dimensional and competitive.

In this section it will be proved that the system (1)–(5) possesses periodic solutions and thus, in particular, that there are solutions which do not converge to a steady state as \(t\rightarrow \infty \). In order to do this we start with the system obtained from (1)–(3) by setting \(R=R^*\), \(s=0\), \(d=0\) and \(r_I=0\) and replacing the last term in (3) by \(-\frac{\eta bTV}{T+I}\) with a constant \(\eta \). The result is

$$\begin{aligned} \frac{dT}{dt}= & {} r_TT\left( 1-\frac{T+I}{T_\textrm{max}}\right) -\frac{bTV}{T+I}, \end{aligned}$$
(10)
$$\begin{aligned} \frac{dI}{dt}= & {} \frac{bTV}{T+I}-\delta I, \end{aligned}$$
(11)
$$\begin{aligned} \frac{dV}{dt}= & {} \rho R^*I-cV-\frac{\eta bTV}{T+I}. \end{aligned}$$
(12)

It is identical to a model studied in [5] except for an additional term in the last equation which takes account of the absorption of virions by the cells they infect and the use of a different notation. It coincides with the system of [5] in the case \(\eta =0\) while the case \(\eta =1\) is that with absorption of virions. The equations are defined and smooth except when \(T+I=0\). There are at most two steady states in the regular region where \(T+I\ne 0\). The first is a disease-free steady state with coordinates \((p_0,0,0)\). Consider now the equations

$$\begin{aligned} T^*= & {} \frac{T_{\max }\delta }{r_T}\left( \frac{R_1}{R_0}-1\right) , \end{aligned}$$
(13)
$$\begin{aligned} I^*= & {} \frac{T_{\max }\delta }{r_T}\left( \frac{R_1}{R_0}-1\right) (R_0-1), \end{aligned}$$
(14)
$$\begin{aligned} V^*= & {} \frac{T_{\max }\delta \rho R^*\omega }{r_T}\left( \frac{R_1}{R_0}-1\right) (R_0-1), \end{aligned}$$
(15)

where \(R_0=\frac{b}{c}\left( \frac{\rho R^*}{\delta }-\eta \right) \), \(R_1=\frac{r_T+\delta }{\delta }\) and \(\omega =\left( c+\frac{\eta b}{R_0}\right) ^{-1}\). When the expressions on the right hand sides of these equations are all positive then they define a positive steady state. Conversely, when a positive steady state exists it satisfies these equations. These formulae generalize those given for the case \(\eta =0\) in [5]. We see that a positive steady state exists precisely when \(1<R_0<R_1\). Let \(X=\frac{T^*}{T^*+I^*}\). Then \(X=R_0^{-1}=\frac{c\delta }{b(\rho R^*-\eta \delta )}\). If we let \(X_-=(R_1)^{-1}=\frac{\delta }{r_T+\delta }\) then the values of X corresponding to steady states lie in the interval \((X_-,1)\).
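
A short numerical check (illustrative only; the parameter values are arbitrary and \(\rho R^*\) is treated as a single number, written rhoR) confirms that (13)–(15) define a steady state of (10)–(12) whenever \(1<R_0<R_1\), both for \(\eta =0\) and \(\eta =1\).

```python
# Check that (13)-(15) give a steady state of (10)-(12); rhoR stands for
# rho * R^*. Parameter values are arbitrary, chosen so that 1 < R_0 < R_1.
import numpy as np

r_T, T_max, b, delta, c, rhoR = 1.0, 100.0, 0.5, 0.3, 1.0, 2.0

for eta in (0, 1):
    R0 = b / c * (rhoR / delta - eta)
    R1 = (r_T + delta) / delta
    omega = 1.0 / (c + eta * b / R0)
    assert 1 < R0 < R1
    T = T_max * delta / r_T * (R1 / R0 - 1)                             # (13)
    I = T_max * delta / r_T * (R1 / R0 - 1) * (R0 - 1)                  # (14)
    V = T_max * delta * rhoR * omega / r_T * (R1 / R0 - 1) * (R0 - 1)   # (15)
    res = (r_T * T * (1 - (T + I) / T_max) - b * T * V / (T + I),
           b * T * V / (T + I) - delta * I,
           rhoR * I - c * V - eta * b * T * V / (T + I))
    print("eta =", eta, " R_0 =", round(R0, 4), " residuals:", res)
```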

The linearization at the positive steady state is of the form

$$\begin{aligned} \left[ {\begin{array}{ccc} r_T\left( 1-\frac{(R_0+1)T^*}{T_\textrm{max}}\right) -\frac{b\rho R^*\omega (R_0-1)^2}{R_0^2}&{} -\frac{r_TT^*}{T_\textrm{max}}+\frac{b\rho R^*\omega (R_0-1)}{R_0^2} &{} -\frac{b}{R_0}\\ \frac{b\rho R^*\omega (R_0-1)^2}{R_0^2} &{} -\frac{b\rho R^*\omega (R_0-1)}{R_0^2}-\delta &{} \frac{b}{R_0}\\ -\frac{\eta b\rho R^*\omega (R_0-1)^2}{R_0^2}&{} \rho R^*+\frac{\eta b\rho R^*\omega (R_0-1)}{R_0^2}&{} -c-\frac{\eta b}{R_0}\\ \end{array}} \right] . \end{aligned}$$

The eigenvalues are the roots of the characteristic polynomial \(\lambda ^3+a_2\lambda ^2+a_1\lambda +a_0\) where

$$\begin{aligned} a_2= & {} c+\frac{\delta R_1+\eta b}{R_0},\\ a_1= & {} 2\delta ^2\left( \frac{R_1}{R_0}-1\right) (R_0-1) +\frac{\delta ^2}{R_0^2}(R_0-1)^2 -\frac{\delta r_T}{R_0}(R_0-1)+\delta c\left( \frac{R_1}{R_0}-1\right) \\{} & {} +\frac{\eta b^2\rho R^*\omega (R_0-1)}{R_0^2} +\frac{\eta \delta b R_1}{R_0^2},\\ a_0= & {} \delta ^2c (R_0-1)\left( \frac{R_1}{R_0}-1\right) -\eta \delta ^2 b (R_0-1)\\{} & {} +\frac{\eta b}{R_0}\left[ \delta ^2\left( \frac{R_1}{R_0}-1\right) +\delta (R_0-1) +\frac{\delta b\rho R^* (R_0+1)}{R_0}\left( \frac{R_1}{R_0}-1\right) \right] . \end{aligned}$$

Remark 5

Later we will be interested in the limit of these equations where \(b\rightarrow 0\), b/c is constant and the parameters \(r_T\), \(T_\textrm{max}\), \(\delta \) and \(\rho R^*\) are constant. In this limit \(R_0\) and \(R_1\) are constant while \(\omega =O(b^{-1})\). If \(a_{i,0}\) denotes the values of these coefficients when \(\eta =0\) then \(a_i=a_{i,0}+O(b)\) for \(b\rightarrow 0\).

Lemma 1

There are parameter values for which the linearization of the system (10)–(12) with \(\eta =0\) or \(\eta =1\) about the unique positive steady state has a pair of purely imaginary eigenvalues and the remaining eigenvalue is negative.

Proof

The proof uses the Routh–Hurwitz criterion. Both \(a_2\) and \(a_{0,0}\) are positive. It can be ensured that \(a_0\) is positive by choosing b sufficiently small as described in Remark 5. Note that if we start with parameters for which the condition \(R_0>1\) holds, this condition is maintained in this limiting procedure. In [5] two parameters are introduced in order to understand the eigenvalues of the linearization at the point \((T^*,I^*,V^*)\). Calling the first of these parameters \({\hat{\delta }}\) instead of \(\delta \) to avoid a conflict between the notation of [5] and that of the present paper the parameters are given by

$$\begin{aligned} {\hat{\delta }}= & {} \left[ \delta ^2\left( \frac{R_1}{R_0}-1\right) +\frac{\delta ^2}{R_0^2}-\frac{\delta r_T}{R_0}\right] (R_0-1) +\delta c\left( \frac{R_1}{R_0}-1\right) ,\\ \sigma= & {} \frac{-\delta ^2(R_1-R_0)(R_0-1)}{R_0}. \end{aligned}$$

To apply the Routh–Hurwitz criterion we must consider the quantity

$$\begin{aligned} a_2a_1-a_0=\left( c+\delta \frac{R_1}{R_0}\right) {\hat{\delta }} +\delta ^2\left( c+\delta \frac{R_1}{R_0}\right) \left( \frac{R_1}{R_0}-1\right) (R_0-1) +\eta \xi \end{aligned}$$

where \(\xi =O(b)\). As observed in [5] in the case \(\eta =0\) the linearization has a pair of purely imaginary eigenvalues precisely when \({\hat{\delta }}=\sigma \). In the general case the condition for imaginary eigenvalues is modified to

$$\begin{aligned} {\hat{\delta }}+\eta \xi \frac{R_0^2}{cR_0+\delta R_1}=\sigma . \end{aligned}$$
(16)

Now we prove that in the case \(\eta =0\) there are parameters for which the condition \({\hat{\delta }}=\sigma \) is satisfied. We note that no proof of this fact was given in [5] or in the later more general paper [6]. Consider fixed positive values of the parameters \(\delta \), \(r_T\) and c. Then \(R_1\) is also fixed. We can vary \(R_0\) freely in the interval \((1,R_1)\), for instance by varying \(\rho \) between \(\frac{\delta c}{bR^*}\) and \(\frac{(\delta +r_T)c}{bR^*}\) while keeping the other parameters fixed. We think of \({\hat{\delta }}\) and \(\sigma \) as functions of \(R_0\). The quantity \(\sigma \) is zero for \(R_0\) at both ends of the interval and negative in between. Now \({\hat{\delta }} (1)=\delta ^2+c r_T>0\) and \({\hat{\delta }}(R_1)=\frac{\delta ^2r_T^2}{(\delta +r_T)^2} \left[ \left( \frac{\delta }{r_T}\right) ^2 -\left( \frac{\delta }{r_T}\right) -1\right] \). For suitable values of \(r_T\) and \(\delta \), for instance with \(r_T=\delta \), \({\hat{\delta }}(R_1)<0\) and so by the intermediate value theorem there is a value of \(R_0\) for which \({\hat{\delta }}=\sigma \). Consider now the case \(\eta =1\). \(R_0\) can be varied between 1 and \(R_1\) by varying \(\rho \) between \(\frac{\delta c}{bR^*}+1\) and \(\frac{(\delta +r_T)c}{bR^*}+1\). If b is sufficiently small the quantity on the left hand side of (16) is positive for \(R_0=1\) and negative for \(R_0=R_1\). The expressions for \({\hat{\delta }} (1)\) and \({\hat{\delta }}(R_1)\) are as in the case \(\eta =0\) and so in the case \(\eta =1\) (16) is satisfied at some point. \(\square \)

Remark 6

In [5] it is shown numerically that in the case \(\eta =0\) there is a Hopf bifurcation at points in parameter space where the condition \({\hat{\delta }}=\sigma \) is satisfied and that periodic solutions arise. No analytical proof of the existence of a Hopf bifurcation is given in [5].

Theorem 5

There are parameter values for which the system (10)–(12) with \(\eta =0\) or \(\eta =1\) has Hopf bifurcations and therefore periodic solutions.

Proof

In order to prove that a Hopf bifurcation occurs it remains to show that the eigenvalues which lie on the imaginary axis at the bifurcation point cross the axis with non-zero velocity as a suitable parameter is varied [7]. Assuming the existence of the bifurcation point this crossing condition has been verified for the model with \(\eta =0\) in [6]. We give a new proof that this condition is satisfied for the model with \(\eta =0\). The advantage of the new argument is that we were able to extend it to treat the case \(\eta =1\). It follows from a result of [9] that in order to verify the crossing condition it is enough to show that the Hurwitz coefficient \(a_2a_1-a_0\) passes through zero with non-zero velocity. To achieve this in the case \(\eta =0\) we need a curve \(\zeta (u)\) in the space of parameters \((r_T,T_\textrm{max},b,\delta ,\rho R^*,c)\), passing through the bifurcation point for \(u=0\), which satisfies \(\nabla ({\hat{\delta }}-\sigma )\cdot \zeta '(0)\ne 0\). Curves of this kind exist when the gradient of \({\hat{\delta }}-\sigma \) at the bifurcation point is non-zero. To show that this holds we can again fix the parameters \(\delta \), \(r_T\) and c and vary \(R_0\) freely. We have

$$\begin{aligned} \frac{d{\hat{\delta }}}{dR_0}= & {} \frac{1}{R_0^2}[-\delta ^2-c (\delta +r_T)] -\frac{2\delta ^2}{R_0^3},\\ \frac{d\sigma }{dR_0}= & {} \delta ^2\left( 1-\frac{1}{R_0^2}\right) . \end{aligned}$$

Hence

$$\begin{aligned} \frac{d({\hat{\delta }}-\sigma )}{dR_0}=-\delta ^2-\frac{c (\delta +r_T)}{R_0^2} -\frac{2\delta ^2}{R_0^3}<0. \end{aligned}$$

It follows that the condition on the gradient is always satisfied. In the case \(\eta =1\) we can argue similarly. For b small the gradient of \({\hat{\delta }}-\sigma +\eta \xi \frac{R_0^2}{c R_0+\delta R_1}\) may not be non-zero everywhere but it will be non-zero somewhere. Since it has now been proved that the system admits Hopf bifurcations, both for \(\eta =0\) and \(\eta =1\), it follows that in both cases there are parameters for which it admits non-constant periodic solutions [7]. \(\square \)
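
The following numerical companion to Theorem 5 (not a proof, and with parameter values chosen here purely for illustration: \(r_T=\delta =c=1\), \(T_\textrm{max}=100\), \(b=0.01\)) tabulates the leading eigenvalue of a finite-difference Jacobian of (10)–(12) at the positive steady state (13)–(15) as \(R_0\) increases towards \(R_1=2\). For these values the leading eigenvalue moves from the left to the right half plane between \(R_0=1.5\) and \(R_0=1.7\), both for \(\eta =0\) and \(\eta =1\), consistent with the Hopf bifurcation established above.

```python
# Locate the loss of stability of the positive steady state of (10)-(12)
# numerically; parameter values are illustrative only.
import numpy as np

r_T, T_max, b, delta, c = 1.0, 100.0, 0.01, 1.0, 1.0
R1 = (r_T + delta) / delta

def rhs(x, rhoR, eta):
    T, I, V = x
    inc = b * T * V / (T + I)
    return np.array([r_T * T * (1 - (T + I) / T_max) - inc,
                     inc - delta * I,
                     rhoR * I - c * V - eta * inc])

def steady_state(R0, rhoR, eta):
    T = T_max * delta / r_T * (R1 / R0 - 1)                  # formula (13)
    return np.array([T, T * (R0 - 1), rhoR * T * (R0 - 1) / (c + eta * b / R0)])

for eta in (0, 1):
    for R0 in (1.3, 1.5, 1.7, 1.9):
        rhoR = delta * (c * R0 / b + eta)    # inverts R_0 = (b/c)(rho R*/delta - eta)
        x, h = steady_state(R0, rhoR, eta), 1e-6
        J = np.column_stack([(rhs(x + h * e, rhoR, eta) - rhs(x - h * e, rhoR, eta))
                             / (2 * h) for e in np.eye(3)])
        lead = complex(max(np.linalg.eigvals(J), key=lambda z: z.real))
        print(f"eta = {eta}, R_0 = {R0:.1f}: "
              f"leading eigenvalue = {lead.real:+.4f} {lead.imag:+.4f}i")
```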

This result can be used to prove that the system of five equations describing the in-host dynamics of hepatitis C studied in [11] admits Hopf bifurcations and hence periodic solutions for certain values of its parameters. Consider first the system (1.3a)–(1.3c) of [11]. Setting the parameters s, d and \(r_I\) in that system to zero gives, up to differences in notation, exactly the system (10)–(12) with \(\eta =1\). To avoid any confusion we note that, even after allowing for the differences in notation for the basic quantities, the basic reproductive ratio \(R_0\) given above for the case \(\eta =1\) is different from the quantity \({{\mathcal {R}}}''_0\) used in [11]. In fact, as explained in [16], there are choices which can be made in defining \(R_0\) so that this quantity is not unique. What is independent of these choices is whether \(R_0<1\), \(R_0=1\) or \(R_0>1\). That this is true for the choices of \(R_0\) made in [5] and [11] can easily be checked directly: in the notation used above \({{\mathcal {R}}}''_0=\frac{b\rho R^*}{(b+c)\delta }\).

It will now be shown that the system (1.3a)–(1.3c) of [11] inherits the Hopf bifurcations and hence the periodic solutions from the system (10)–(12) with \(\eta =1\). To do this we use the following result.

Lemma 2

Let \(f:U\times (-\epsilon ,\epsilon )^{k+1}\rightarrow {{\mathbb {R}}}^n\), \((x,\alpha _0,\ldots ,\alpha _k)\mapsto f(x,\alpha _0,\ldots ,\alpha _k)\) be a \(C^1\) mapping, where U is an open subset of \({{\mathbb {R}}}^n\). Suppose that \(f(0,0,\ldots ,0)=0\), that the parameter-dependent system \(\dot{x}=f(x,\alpha _0,0,\ldots ,0)\) has a Hopf bifurcation at \((0,0,\ldots ,0)\) and that no more than two eigenvalues of \(D_xf(0,0,\ldots ,0)\) lie on the imaginary axis. Then for sufficiently small fixed values of \((\alpha _1,\ldots ,\alpha _k)\) the system \(\dot{x}=f(x,\alpha _0,\alpha _1,\ldots ,\alpha _k)\) has a Hopf bifurcation at a point \((x^*,\alpha _0^*)\), where \(x^*(\alpha _1,\ldots ,\alpha _k)\) and \(\alpha _0^*(\alpha _1,\ldots ,\alpha _k)\) are close to zero.

Proof

Denote the derivative \(D_xf\) by A. Then by assumption \(A(0,0,\ldots ,0)\) has a pair of purely imaginary eigenvalues and all other eigenvalues lie off the imaginary axis. Let \(\lambda ^*\) be one of the imaginary eigenvalues. Because of the smooth dependence of simple eigenvalues on the matrix there is a \(C^1\) function \(\lambda (\alpha _0,\alpha _1,\ldots ,\alpha _k)\) defined on a neighbourhood of the origin with \(\lambda (0,\ldots ,0)=\lambda ^*\). By the definition of a Hopf bifurcation \(\frac{\partial }{\partial \alpha _0}(\textrm{Re}\ \lambda )\ne 0\) at the bifurcation point. Hence for fixed \((\alpha _1,\ldots ,\alpha _k)\) in a small neighbourhood of the origin in \({{\mathbb {R}}}^k\) the quantity \({\textrm{Re}\ \lambda }\) has one sign for \(\alpha _0\) small and positive and the other sign for \(\alpha _0\) small and negative. By the implicit function theorem the curve \(\lambda (\alpha _0,\alpha _1,\ldots ,\alpha _k)\) for \((\alpha _1,\ldots ,\alpha _k)\) fixed and sufficiently small also passes through the imaginary axis for some \(\alpha _0\) where its derivative is non-zero. Thus a Hopf bifurcation occurs at that point. \(\square \)

Theorem 6

There are parameter values for which the system (1)–(5) has Hopf bifurcations and therefore periodic solutions.

Proof

The system obtained from (1.3a)–(1.3c) of [11] by setting s, d and \(r_I\) to zero admits a Hopf bifurcation depending on the parameter \(\rho \). Now Lemma 2 will be used to show that there are also Hopf bifurcations in the system with s, d and \(r_I\) small and positive. It suffices to choose \(\alpha _0=\rho \), \(\alpha _1=s\), \(\alpha _2=d\) and \(\alpha _3=r_I\). This proves that (1.3a)–(1.3c) of [11] admits a Hopf bifurcation. For a suitable choice of parameters augmenting a periodic solution of (1.3a)–(1.3c) of [11] by the constants \((U^*,R^*)\) gives a periodic solution of the full system (1.3a)–(1.3e) of that paper. \(\square \)

In [6] the authors considered a system which corresponds to that obtained from (1.3a)–(1.3c) of [11] by setting \(d=s=\eta =0\), system (14)–(16) of [6]. Lemma 2 can be used as above to prove that that system has Hopf bifurcations for small values of the parameter \(\rho \) in that system, which corresponds to \(r_I\) in our notation. In [6] it was suggested that there are Hopf bifurcations in that case on the basis of computer calculations.

The proofs above give us no information about the stability of the periodic solutions arising. The following simulation (Fig. 1) indicates that there exist sustained oscillations. It corresponds to the parameter values \(s= 10\); \(d= 10^{-5}\); \( r_{T}=0.99\); \( r_{I}= 10^{-3}\); \(T_\text {max}= 10 \times 10^{8}\); \(b = 0.0014\); \(\delta = 0.0693\); \(c = 0.693\); \(\rho = 21.5\); \( \alpha = 50\); \(\beta = 15\); \( \epsilon = 0.5 \); \(\sigma = 30\); \(U_\text {max} = 30\); \(\gamma =5\) and initial data \((T_{0}, I_{0}, V_{0}, U_{0}, R_{0})=(1000, 500, 4, 15, 20)\).

Fig. 1 The time histories and the phase trajectory of the system (1)–(5) after the Hopf bifurcation occurs for these parameter values, for which the composite basic reproductive number \(\mathcal {R}''_{0}=25.0270\) and the intracellular basic reproductive number \(\mathcal {R}'_{0}= 2.5\)
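
A sketch which can be used to rerun this simulation is given below; the choice of solver, tolerances and time window is ours and may need to be adjusted to resolve the oscillations accurately.

```python
# Rerun of the simulation described above (parameters from the text);
# solver settings and the sampling window are our own choices.
import numpy as np
from scipy.integrate import solve_ivp

s, d, r_T, r_I, T_max = 10.0, 1e-5, 0.99, 1e-3, 10e8
b, delta, c, rho = 0.0014, 0.0693, 0.693, 21.5
alpha, beta, eps, sigma, U_max, gamma = 50.0, 15.0, 0.5, 30.0, 30.0, 5.0

def rhs(t, y):
    T, I, V, U, R = y
    inc = b * T * V / (T + I)
    return [s + r_T * T * (1 - (T + I) / T_max) - d * T - inc,
            r_I * I * (1 - (T + I) / T_max) + inc - delta * I,
            rho * R * I - c * V - inc,
            beta * R * (1 - U / U_max) - gamma * U,
            alpha * (1 - eps) * U - sigma * R]

y0 = [1000.0, 500.0, 4.0, 15.0, 20.0]
sol = solve_ivp(rhs, (0.0, 3000.0), y0, method="LSODA",
                rtol=1e-9, atol=1e-9, dense_output=True)
t = np.linspace(2000.0, 3000.0, 4000)      # sample after an initial transient
V = sol.sol(t)[2]
print("V on the late time window: min =", V.min(), " max =", V.max())
# Sustained oscillations show up as a persistent gap between these two values.
```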

5 Positive Steady States and Their Stability

In [11] it was left open how many positive steady states the system (1.3a)–(1.3c) of that paper has. It was proved that there are at most three. However it was not decided whether there can be more than one or whether there must be at least one when the quantity \({{\mathcal {R}}}''_0\) introduced in [11] is greater than one. An expression for this quantity in the notation used above is

$$\begin{aligned} {{\mathcal {R}}}''_0=\frac{b\rho R^*}{(b+c)(\delta -r_I(1-\frac{p_0}{T_\textrm{max}}))}, \end{aligned}$$
(17)

where \(p_0\) is the quantity defined in (6). The expression for \({{\mathcal {R}}}''_0\) in (17) can be obtained from the calculation of the basic reproductive number presented in [16] when its denominator is positive. Otherwise it cannot, since condition (A5) of that paper fails to be satisfied. Note that \(\delta -r_I\left( 1-\frac{p_0}{T_\textrm{max}}\right) \ge \delta \left( 1-\max \left\{ \frac{r_Id}{r_T\delta },\frac{r_I}{\delta }\right\} \right) \). Thus sufficient conditions to ensure that the denominator is positive are that \(r_I<\delta \) (maximum proliferation rate of infected cells is less than their death rate) and \(r_T>d\) (proliferation rate of uninfected cells is greater than their death rate). As an alternative to the second condition we can assume that \(r_T\ge r_I\) (infected cells proliferate more slowly than uninfected ones) and \(d\le \delta \) (infected cells die faster than uninfected ones). These conditions have simple biological interpretations. It is not clear that they must hold in all biologically interesting situations. It is, for instance, conceivable that it is of evolutionary advantage for the virus to cause the cells it infects to proliferate faster than they would otherwise do.

Lemma 3

(i) When \(\rho R^*+r_I-\delta \le 0\) the system (1)–(3) with \(R=R^*\) has no positive steady states.

(ii) When \(\rho R^*+r_I-\delta >0\) there is a one-to-one correspondence between positive steady states of the system and roots of a polynomial p in the interval \((X_-,1)\) where \(X_-=\textrm{max}\left\{ 0,{\hat{X}}\right\} \) and \({\hat{X}}=\frac{c(\delta -r_I)}{b(\rho R^*+r_I-\delta )}\). In the case \(X_-\ge 1\) this is to be interpreted as the statement that there are no positive steady states.

Proof

Consider a steady state \((T^*,I^*,V^*)\) of the system (1)–(3) with \(R=R^*\). We use the variable X introduced in the last section. In general this leads to the equations

$$\begin{aligned}{} & {} s+r_TT^*\left( 1-\frac{T^*}{T_\textrm{max}X}\right) -dT^* -bV^*X=0, \end{aligned}$$
(18)
$$\begin{aligned}{} & {} r_II^*\left( 1-\frac{T^*}{T_\textrm{max}X}\right) +bV^*X-\delta I^*=0, \end{aligned}$$
(19)
$$\begin{aligned}{} & {} \rho R^*I^*-cV^*-bV^*X=0. \end{aligned}$$
(20)

A given steady state defines a value of X in the interval (0, 1). Suppose that conversely X is given and we look for a steady state giving rise to it. Equation (20) can be rewritten as

$$\begin{aligned} V^*=\frac{\rho R^*I^*}{c+bX}. \end{aligned}$$
(21)

Substituting this into (19) and cancelling a factor \(I^*\) gives

$$\begin{aligned} r_I\left( 1-\frac{T^*}{T_\textrm{max}X}\right) +\frac{b\rho R^* X}{c+bX} -\delta =0. \end{aligned}$$

Hence

$$\begin{aligned} T^*=\frac{T_\textrm{max}X[b(\rho R^*+r_I-\delta )X+c(r_I-\delta )]}{r_I(c+bX)}. \end{aligned}$$
(22)

If \(\rho R^*+r_I-\delta \) were not positive then \(r_I-\delta \) would be negative and this relation could not hold for a positive steady state. In other words \(\rho R^*+r_I-\delta >0\) is a necessary condition for the existence of a positive steady state. This completes the proof of part (i) of the lemma.

If \(r_I\ge \delta \) then the expression (22) for \(T^*\) is positive. If, on the other hand, \(r_I<\delta \) then it is necessary to impose the restriction \(X>{\hat{X}}\). When \(T^*\) has been calculated in terms of X it is possible to determine \(I^*=\frac{T^*(1-X)}{X}\) and \(V^*\). This shows that there is at most one steady state consistent with a given value of X. The quantities \((T^*,I^*,V^*)\) defined in terms of X in this way satisfy the steady state versions of equations (2) and (3). Substituting these expressions into (18) and multiplying the result by \(r_I^2(c+bX)^2\) gives

$$\begin{aligned}{} & {} sr_I^2(c+bX)^2+r_TT_\textrm{max}X[b(\rho R^*+r_I-\delta )X+c(r_I-\delta )] [-b(\rho R^*-\delta )X+c\delta ]\\{} & {} \quad +r_IT_\textrm{max}X[b(\rho R^*+r_I-\delta )X+c(r_I-\delta )] [b (\rho R^*-d)X-dc-b\rho R^*]=0. \end{aligned}$$

Alternatively we can write this equation in the form

$$\begin{aligned}{} & {} sr_I^2(c+bX)^2+T_\textrm{max}X[b(\rho R^*+r_I-\delta )X+c(r_I-\delta )]\nonumber \\{} & {} \quad \times [b ((\delta r_T-dr_I)-\rho R^*(r_T-r_I))X +c(\delta r_T-dr_I)-b\rho R^*r_I)]=0.\nonumber \\ \end{aligned}$$
(23)

We can also write it in the form \(p(X)=\sum _{i=0}^3b_iX^i=0\) where

$$\begin{aligned} b_3= & {} T_\textrm{max}b^2(\rho R^*+r_I-\delta )[(\delta -\rho R^*)r_T -(d-\rho R^*)r_I],\\ b_2= & {} sr_I^2b^2+T_\textrm{max}b [(\rho R^*+r_I-\delta )(r_Tc\delta -r_I(dc+b\rho R^*))\\{} & {} +(r_I-\delta )c ((\delta -\rho R^*)r_T-(d-\rho R^*)r_I)],\\ b_1= & {} 2sr_I^2bc+T_\textrm{max}(r_I-\delta )c[r_Tc\delta -r_I (dc+b\rho R^*)],\\ b_0= & {} sr_I^2c^2. \end{aligned}$$

This is the polynomial p mentioned in the statement of the lemma. Suppose now that \(X\in (X_-,1)\) satisfies \(p(X)=0\). Then it is possible to define corresponding positive quantities \((T^*,I^*,V^*)\) as above and they satisfy (2) and (3). Now divide the equation \(p(X)=0\) by \(r_I^2(c+bX)^2\). Using (2) and (3) we can write the expressions depending on X in the resulting equation in terms of \(T^*\) and \(V^*\). This shows that (1) holds. Thus \((T^*,I^*,V^*)\) is a positive steady state. It has now been shown that provided \(\rho R^*+r_I-\delta >0\) there is a one-to-one correspondence between positive steady states and roots of the polynomial p in the interval \((X_-,1)\). This is part (ii) of the lemma. \(\square \)
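
The correspondence in Lemma 3 is easy to check numerically. The sketch below (with arbitrary admissible parameter values, writing rhoR for \(\rho R^*\)) builds the coefficients of p, selects the roots lying in \((X_-,1)\), reconstructs \((T^*,I^*,V^*)\) from them and verifies that the right hand sides of (1)–(3) vanish there.

```python
# Roots of p in (X_-, 1) versus positive steady states of (1)-(3) with
# R = R^*; parameter values are arbitrary, rhoR stands for rho * R^*.
import numpy as np

s, r_T, r_I, T_max, d, delta, b, c, rhoR = 1.0, 1.0, 0.5, 100.0, 0.1, 1.0, 1.0, 1.0, 3.0
assert rhoR + r_I - delta > 0

b3 = T_max * b ** 2 * (rhoR + r_I - delta) * ((delta - rhoR) * r_T - (d - rhoR) * r_I)
b2 = (s * r_I ** 2 * b ** 2
      + T_max * b * ((rhoR + r_I - delta) * (r_T * c * delta - r_I * (d * c + b * rhoR))
                     + (r_I - delta) * c * ((delta - rhoR) * r_T - (d - rhoR) * r_I)))
b1 = (2 * s * r_I ** 2 * b * c
      + T_max * (r_I - delta) * c * (r_T * c * delta - r_I * (d * c + b * rhoR)))
b0 = s * r_I ** 2 * c ** 2

X_minus = max(0.0, c * (delta - r_I) / (b * (rhoR + r_I - delta)))
for root in np.roots([b3, b2, b1, b0]):
    if abs(root.imag) > 1e-10 or not (X_minus < root.real < 1):
        continue
    X = root.real
    T = T_max * X * (b * (rhoR + r_I - delta) * X + c * (r_I - delta)) / (r_I * (c + b * X))
    I = T * (1 - X) / X
    V = rhoR * I / (c + b * X)
    res = (s + r_T * T * (1 - (T + I) / T_max) - d * T - b * T * V / (T + I),
           r_I * I * (1 - (T + I) / T_max) + b * T * V / (T + I) - delta * I,
           rhoR * I - c * V - b * T * V / (T + I))
    print("root X =", round(X, 6), " residuals of (1)-(3):", res)
```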

It turns out that a root of the polynomial p can only lie outside the region where it corresponds to a steady state if \({{\mathcal {R}}}_0''<1\).

Lemma 4

If \(\rho R^*+r_I-\delta \le 0\) or \(X_-\ge 1\) then \({{\mathcal {R}}}_0''<1\).

Proof

When \(\rho R^*+r_I-\delta \le 0\) it follows that \(\rho R^*\le \delta -r_I\) and, in particular, that \(\delta -r_I>0\). It can be concluded that

$$\begin{aligned} {{\mathcal {R}}}_0''=\frac{b\rho R^*}{(b+c)\left( \delta -r_I +\frac{p_0r_I}{T_\textrm{max}}\right) } \le \frac{b(\delta -r_I)}{(b+c)\left( \delta -r_I +\frac{p_0r_I}{T_\textrm{max}}\right) }<1. \end{aligned}$$

When \(\rho R^*+r_I-\delta >0\) and \(X_-\ge 1\) it follows that \(b\rho R^*\le (b+c)(\delta -r_I)\). Thus

$$\begin{aligned} {{\mathcal {R}}}_0''=\frac{b\rho R^*}{(b+c)\left( \delta -r_I +\frac{p_0r_I}{T_\textrm{max}}\right) } \le \frac{(b+c)(\delta -r_I)}{(b+c)\left( \delta -r_I +\frac{p_0r_I}{T_\textrm{max}}\right) }<1. \end{aligned}$$

\(\square \)

It is possible to get some information about the number and stability of positive steady states. First we compare with the model of [6] which can be obtained from our model by setting \(d=s=\eta =0\). In that case for general values of the parameters there are up to three boundary steady states. We concentrate on the boundary steady state called \(E_i\) in [6]. Its existence corresponds to the fact that p(X) has a factor X when \(s=0\). Its coordinates are \(\left( 0,\frac{T_\textrm{max}(r_I-\delta )}{r_I}, \frac{\rho R^* T_\textrm{max}(r_I-\delta )}{r_Ic}\right) \). It evidently exists as a non-negative steady state which is not at the origin precisely when \(r_I>\delta \). In this case \(X_-=0\). It is always the case that at least two of the eigenvalues of the linearization about that point are negative. The third has the sign opposite to that of \({{\mathcal {R}}}_0''-\frac{r_T}{r_I}\). Thus if \({{\mathcal {R}}}_0''>\frac{r_T}{r_I}\) it is asymptotically stable. Note that in this case \({{\mathcal {R}}}_0''=\frac{b\rho R^*}{ca}\). Next we perturb the system of [6] by making s slightly positive.

Theorem 7

Suppose that \(r_I>\delta \). If \({{\mathcal {R}}}_0''>\frac{r_T}{r_I}\) then for sufficiently small values of d and s with the other parameters fixed the system (1)–(3) has a positive steady state \({\hat{E}}_i\) close to \(\left( 0,\frac{T_\textrm{max}(r_I-\delta )}{r_I}, \frac{\rho R^* T_\textrm{max}(r_I-\delta )}{r_Ic}\right) \). If \({{\mathcal {R}}}_0''<\frac{r_T}{r_I}\) no such steady state exists. An analogous result holds for the system obtained from (1)–(3) by omitting the term \(-\frac{bTV}{T+I}\) in (3). In that case \({\hat{E}}_i\) is asymptotically stable.

Proof

By the implicit function theorem the perturbed system has precisely one steady state close to \(E_i\) when s is small enough. It cannot be on the plane \(I=0\) when \(s\ne 0\). Consider the case where \(E_i\) is asymptotically stable. Call the perturbed steady state \({\hat{E}}_i\). By continuity there is an \(\epsilon >0\) such that each solution which starts in \(B_\epsilon ({\hat{E}}_i)\) converges to \({\hat{E}}_i\). When the perturbation is small enough this ball will intersect the plane \(I=0\). Thus there is a solution which starts on that plane and converges to \({\hat{E}}_i\). When the perturbation and \(\epsilon \) are small enough \(\dot{I}\) is positive on that ball. Hence \({\hat{E}}_i\) lies in the positive octant and is asymptotically stable. If d is increased a little from zero this solution continues to exist and be asymptotically stable. We can also obtain a corresponding solution in the case \(\eta =1\) by the following trick. We consider the system obtained from (1)–(3) by replacing the term \(\frac{bTV}{T+I}\) by \(\frac{\eta bTV}{T+I}\). A simple perturbation argument gives a positive steady state of the system where \(\eta \) is a small positive constant. We can go from there to a steady state with \(\eta =1\) by scaling \(\eta \), \(\rho \) and c by the same factor, since this scaling leaves the set of steady states invariant. Note, however, that it is not clear that this scaling preserves the stability of the steady state. Note that in the case where \(E_i\) is not asymptotically stable there does not exist a positive steady state \({\hat{E}}_i\). To see this consider now the case where \(E_i\) is a saddle. Then there are points on its unstable manifold arbitrarily close to \(E_i\) for which T becomes negative. After a sufficiently small perturbation this will still be the case. If \({\hat{E}}_i\) satisfied the condition \(T>0\) then this would give a contradiction since it would mean that a solution initially in the positive octant leaves the positive region. \(\square \)

In some cases positive steady states can be obtained from steady states on the boundary by a perturbation analysis, sometimes associated with the term 'backward bifurcation'. The setting is one where the boundary steady state to be perturbed is such that the reproductive ratio \(R_0\) is equal to one. Under a perturbation of the parameters this point remains a steady state but \(R_0\) varies. In some well-known simple cases a positive steady state near the boundary steady state exists for \(R_0>1\) and it is stable. In other cases a positive steady state exists for \(R_0<1\) and is unstable. The latter case has been called a backward bifurcation. This is because the positive steady state arises when moving to smaller values of \(R_0\) (backwards) rather than (as in the most familiar case) when moving to larger values of \(R_0\) (forwards). An example where this occurs is an in-host model for HIV studied in [8]. There it is proved that there is a backward bifurcation and that for \(R_0\) slightly less than one there are two positive steady states, one of which is stable and one unstable. That model includes a description of therapy appropriate for a reverse transcriptase inhibitor. Mathematically this means that the coefficient of the infection term in the evolution equation for I is less than that in the evolution equation for T. In that model mass action kinetics is assumed for the infection and absorption of the virus is not included.

To investigate which of these phenomena occur in our model we follow the analysis of [16]. This will be done for the boundary steady state \(E_0'\) where the corresponding reproductive ratio is \({{\mathcal {R}}}_0''\). We now want to vary a parameter which leaves \(E_0'\) invariant but changes \({{\mathcal {R}}}_0''\). This suggests leaving d, \(r_T\), s and \(T_\textrm{max}\) unchanged while varying \(\delta \), \(r_I\), b, \(\rho \) or c. The next step in the analysis is to find left and right eigenvectors corresponding to the eigenvalue zero. The vector with components \((-a_{12}(b+c)-b\rho R^*,a_{11}(b+c),a_{11}\rho R^*)\) is in the right kernel while that with components \((0,b+c,b)\) is in the left kernel. The other two eigenvalues are negative. We would now like to compute the quantities a and b defined in Section 5 of [16]. As explained there it is enough to know a subset of the second derivatives of the right hand sides in order to do this. In the terminology of [16] we choose I and V to be the infected variables and the term containing b as the only one which contributes to the matrix \({\mathcal {F}}\). Denote the right hand sides of the evolution equations for I and V by \(f_2\) and \(f_3\). Then the relevant second derivatives are the following derivatives of \(f_2\) and \(f_3\).

$$\begin{aligned} \frac{\partial ^2 f_2}{\partial I^2}&= -\frac{2r_I}{T_\textrm{max}} +\frac{2b VT}{(I+T)^3},\ \frac{\partial ^2 f_2}{\partial I\partial V}=-\frac{bT}{(I+T)^2},\ \frac{\partial ^2 f_2}{\partial V^2}=0,\\ \frac{\partial ^2 f_3}{\partial I^2}&= -\frac{2b VT}{(I+T)^3},\ \frac{\partial ^2 f_3}{\partial I\partial V}=\frac{bT}{(I+T)^2},\ \frac{\partial ^2 f_3}{\partial V^2}=0,\\ \frac{\partial ^2 f_2}{\partial T\partial I}&= -\frac{r_I}{T_\textrm{max}} +\frac{bV(T-I)}{(I+T)^3},\ \frac{\partial ^2 f_2}{\partial T\partial V}=\frac{bI}{(I+T)^2},\\ \frac{\partial ^2 f_3}{\partial T\partial I}&= -\frac{bV(T-I)}{(I+T)^3},\ \frac{\partial ^2 f_3}{\partial T\partial V}=-\frac{bI}{(I+T)^2}. \end{aligned}$$
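
As a cross-check, these derivatives can be reproduced symbolically. The sketch below (in Python, using sympy) assumes explicit forms for \(f_2\) and \(f_3\) reconstructed from the terms appearing in this section, namely standard incidence \(\frac{bTV}{T+I}\), logistic proliferation of infected cells with rate \(r_I\), loss of infected cells at rate \(\delta \), virus production \(\rho R^*I\), clearance cV and absorption of virus by infection. These explicit forms are an assumption and are not quoted from the model equations; the linear terms play no role for the second derivatives in any case.

    # Symbolic cross-check of the second derivatives listed above.  The
    # explicit right-hand sides f2 (infected cells) and f3 (virus) are a
    # reconstruction from the terms mentioned in the text, not a quotation.
    import sympy as sp

    T, I, V = sp.symbols('T I V', positive=True)
    # rhoR stands for the product rho*R^* appearing in the text
    b, c, r_I, T_max, delta, rhoR = sp.symbols('b c r_I T_max delta rhoR', positive=True)

    f2 = b*T*V/(T + I) + r_I*I*(1 - (T + I)/T_max) - delta*I
    f3 = rhoR*I - c*V - b*T*V/(T + I)

    for name, f in (('f2', f2), ('f3', f3)):
        for x1, x2 in ((I, I), (I, V), (V, V), (T, I), (T, V)):
            print(f'd2 {name} / d{x1} d{x2} =', sp.simplify(sp.diff(f, x1, x2)))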

These derivatives are to be evaluated at the disease-free steady state, which means in particular that we may set \(I=0\) and \(V=0\), so that \(I+T=T\). In the present case the matrix \((\alpha _{lk})\) introduced in [16] is \((0,\frac{\rho R^*}{b+c})\). We now have all the elements necessary to compute the quantity a introduced in [16] to characterize backward bifurcations, which we denote by \(a_{VW}\).

$$\begin{aligned} a_{VW}&= a_{11}^2(b+c)\left[ -\frac{r_I}{T_\textrm{max}}(b+c)^2 -\frac{b\rho R^*c}{I}\right] \\ &\quad +\frac{a_{11}b(\rho R^*)^2c I}{(b+c)I^2} \left[ -a_{12}(b+c)-b\rho R^*\right] . \end{aligned}$$

This calculation shows that \(a_{VW}\) is always negative and thus that no backward bifurcation occurs in this system. To show that there is a forward bifurcation it remains only to show that the bifurcation parameter can be chosen so that the quantity b of [16], call it \(b_{VW}\), is non-zero. A suitable choice of bifurcation parameter is \(\rho -\rho _0\), where \(\rho _0\) is the value of \(\rho \) corresponding to \({{\mathcal {R}}}_0''=1\). In that case \(b_{VW}=a_{11}(b+c)\rho R^*>0\). It follows from Theorem 4 of [16] that for \(\rho \) slightly larger than \(\rho _0\) there is an asymptotically stable positive steady state close to the bifurcation point. More interesting than the fact that a forward bifurcation occurs here is the fact that no backward bifurcation occurs.

We now study the way in which the number of steady states changes when the parameters are varied.

Lemma 5

Consider a sequence of positive parameter sets converging to a limit, with \(\rho _nR^*_n+(r_I)_n-\delta _n>0\) for all n, and a sequence \(X_n\in ((X_-)_n,1)\) of roots of p for these parameter values converging to some limit \(X^*\). Suppose that \(\rho _nR^*_n+(r_I)_n-\delta _n\) does not tend to zero as \(n\rightarrow \infty \) and that \({{\mathcal {R}}}_0''\) does not tend to one along the sequence. Then \(X^*\) equals neither the value \(X_-^*\) of \(X_-\) corresponding to the limiting parameter values nor one.

Proof

Suppose first that \(X_n\) tends to \(X_-^*\). Then \(p(X_-^*)=0\) for the limiting parameters, contradicting (23). Next suppose that \(X_n\) tends to one. For each n there is a steady state \((T_n,I_n,V_n)\) corresponding to \(X_n\) and, since \(\rho R^*+r_I-\delta \) remains positive in the limit, there is also a corresponding non-negative steady state \((T^*,I^*,V^*)\). In fact, since \(I=\frac{(1-X)T}{X}\) and \(X_n\rightarrow 1\), we have \(V^*=I^*=0\) and thus this steady state is \(E_0'\). It follows that along the sequence two steady states approach each other and that the linearization at the limiting steady state must have zero as an eigenvalue. This implies that \({{\mathcal {R}}}_0''\) tends to one, contradicting the assumptions of the lemma. \(\square \)

Theorem 8

If \(\rho R^*+r_I-\delta >0\) the number of positive steady states of the system (10)–(12) with \(\eta =1\) is even for \({{\mathcal {R}}}_0''\le 1\) and odd for \({{\mathcal {R}}}_0''>1\). In particular, there always exists at least one positive steady state in the latter case.

Proof

Call the given parameter set \(z_0\) and suppose that for that parameter set \(\rho =\rho _0\). Consider a family z(u), \(u\in [0,1]\), with \(z(0)=z_0\), obtained from \(z_0\) by setting \(\rho (u)=(1-u)\rho _0+u\rho _1\) and leaving the other parameters unchanged. Choose \(\rho _1\) so that \({{\mathcal {R}}}_0''=1\) for \(u=1\). Consider first the case \({{\mathcal {R}}}_0''<1\). Then within the parameter family \(\rho \) is an increasing function of u and \(\rho R^*+r_I-\delta \) does not approach zero. It follows from Lemma 5 that the roots of p, which vary continuously with u, cannot approach the endpoints of the interval for \(u<1\). Hence the parity of the number of positive steady states is independent of \(\rho \) for \({{\mathcal {R}}}_0''<1\). Consider next the case \({{\mathcal {R}}}_0''>1\). If \(\rho R^*+r_I-\delta \) approached zero for some \(u<1\) then \({{\mathcal {R}}}_0''\) would become less than one as a consequence of Lemma 4, in contradiction to the definition of the family. We can then argue as before to see that the parity of the number of positive steady states is independent of \(\rho \) for \({{\mathcal {R}}}_0''>1\). We have seen in the discussion of backward bifurcations what happens when \({{\mathcal {R}}}_0''\) is perturbed a little: for \({{\mathcal {R}}}_0''\) slightly less than one the number of roots does not change, while for \({{\mathcal {R}}}_0''\) slightly greater than one it increases by one. Thus the parity changes when \({{\mathcal {R}}}_0''\) passes through one. It remains to determine the parity of the number of positive steady states for \({{\mathcal {R}}}_0''=1\). In that case \(p(X_-)>0\) and \(p(1)=0\). Thus the number of roots in the interval of interest is even. \(\square \)
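
The homotopy argument in this proof rests on an elementary fact: if a real polynomial does not vanish at the endpoints of an interval, the parity of the number of its roots in that interval (counted with multiplicity) is determined by whether the values at the endpoints have the same sign, and hence cannot change under a continuous deformation during which no root reaches an endpoint. The following sketch (in Python) illustrates the endpoint-sign characterization with random cubics; it is purely illustrative, since the coefficients of the polynomial p itself are not reproduced in this section.

    # Parity of the number of roots in an interval versus the signs at the
    # endpoints, illustrated with random cubics standing in for p.
    import numpy as np

    rng = np.random.default_rng(0)
    a_end, b_end = 0.2, 1.0      # stand-ins for X_- and 1

    for _ in range(5):
        q = np.polynomial.Polynomial(rng.normal(size=4))   # random cubic
        roots = [r.real for r in q.roots()
                 if abs(r.imag) < 1e-9 and a_end < r.real < b_end]
        same_sign = q(a_end) * q(b_end) > 0
        print(len(roots), 'root(s) inside;',
              'endpoint values have the same sign' if same_sign
              else 'endpoint values have opposite signs')

In each case an even number of roots inside the interval goes together with equal signs at the endpoints and an odd number with opposite signs.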

6 Summary and Outlook

In [11] an in-host model for hepatitis C was introduced and some properties of its solutions were determined. At the same time a variety of questions concerning this model were left open. In the present paper we obtain some answers to these questions. In [11] a restriction on the parameters was identified which implies that every solution converges to a steady state. It is shown here that convergence to steady states does not hold without restriction, since there exist periodic solutions for some values of the parameters. Another question is that of the number of steady states. It is shown that the parity of the number of steady states is determined by the sign of \({{\mathcal {R}}}_0''-1\). This is achieved without any restriction on the parameters other than their positivity. It follows in particular that for \({{\mathcal {R}}}_0''>1\) there is always at least one positive steady state. It is shown that when this steady state is close enough to the disease-free steady state it is asymptotically stable.

There remain a number of open questions concerning the model studied in this paper. Does there ever exist a positive steady state in the case \({{\mathcal {R}}}_0''\le 1\)? Does there ever exist more than one positive steady state in the case \({{\mathcal {R}}}_0''>1\)? Although a general parametrization of steady states by the roots of a polynomial has been obtained, we did not succeed in answering these questions. One route which might have led to an answer is via backward bifurcations, but it turns out that these do not occur in this model. Related to this is the fact that we do not know whether there ever exist unstable positive steady states in this model. Another route which might lead to answers is to investigate the existence of fold bifurcations in the positive region. At the moment the question of whether bifurcations of this type occur remains open.

It is proved that there exist periodic solutions but no proof of their stability is available. The only indications we have that they are stable come from simulations. It might be possible to investigate their stability using the method of Li and Muldowney, generalizing what was done in [11]. In the course of studying the model of [11] some properties of solutions of related models were obtained as by-products. This concerns in particular the model of Guedj and Neumann [4], but in that case the question of the existence of periodic solutions was not settled.
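
For readers who wish to carry out such simulations, the following sketch (in Python, using scipy) shows how the long-time behaviour of solutions can be inspected numerically. The right-hand sides for I and V are the same reconstruction used in the symbolic check above, and the equation for T (with source s, death rate d and logistic proliferation at rate \(r_T\)) is likewise an assumed form; the parameter values are placeholders only and are not values for which periodic solutions are known to exist.

    # Numerical integration of the (reconstructed) three-dimensional model;
    # all explicit forms and parameter values are assumptions for purposes
    # of illustration, not data from [11].
    import numpy as np
    from scipy.integrate import solve_ivp

    par = dict(s=1.0, d=0.01, r_T=1.0, r_I=0.5, T_max=1e3,
               b=0.02, delta=0.2, rhoR=5.0, c=2.0)

    def rhs(t, y, p):
        T, I, V = y
        N = T + I
        inf = p['b'] * T * V / N                     # standard incidence
        dT = p['s'] + p['r_T'] * T * (1 - N / p['T_max']) - p['d'] * T - inf
        dI = inf + p['r_I'] * I * (1 - N / p['T_max']) - p['delta'] * I
        dV = p['rhoR'] * I - p['c'] * V - inf        # production, clearance, absorption
        return [dT, dI, dV]

    sol = solve_ivp(rhs, (0.0, 2000.0), [900.0, 1.0, 1.0], args=(par,),
                    rtol=1e-8, atol=1e-10, dense_output=True)
    t_late = np.linspace(1500.0, 2000.0, 2000)
    I_late = sol.sol(t_late)[1]
    # A sustained oscillation would show up as a non-vanishing spread of I(t)
    # at late times; convergence to a steady state as a spread close to zero.
    print('late-time spread of I:', I_late.max() - I_late.min())

Scanning such runs over parameter values can of course only give an indication; it does not replace a proof of the stability of the periodic solutions.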

It is clear from this discussion that there remain many open mathematical questions concerning the model of [11] and related ones. Beyond this there remain questions concerning the relations of these mathematical models to the diseases which they are intended to describe. In which phases of which diseases is which model most appropriate? Once a model has been chosen for a given biological situation, what are appropriate restrictions on the parameters? We hope that in the future answers to these questions will lead to a better understanding of hepatitis C and other viral diseases and to new ideas for treating them.