1 Introduction

In this note we study random one-dimensional systems consisting of increasing homeomorphisms defined on the interval [0, 1]. These systems can be looked upon from various points of view. Here we are interested in the existence of a unique invariant measure on (0, 1) (unique ergodicity). Observe that existence of such a measure may not be obtained by the Krylov–Bogolubov theorem since we are looking for a measure on a non-compact interval. On the other hand, every convex combination of the Dirac measures \(\delta _0\) and \(\delta _1\) is invariant, therefore the aforementioned theorem, applied to the closed interval [0, 1], does not carry information about the problem we are interested in.

Alsedá and Misiurewicz studied a similar problem for function systems consisting of piecewise linear homeomorphisms (see [2]). More general iterated function systems were considered by Gharaei and Homburg in [14]. Recently, Malicet obtained unique ergodicity as a consequence of a contraction principle for time-homogeneous random walks on the topological group of homeomorphisms of the circle and of the interval (see [17]). In turn, Zdunik and the second author [24] defined a class of iterated function systems (the so-called admissible iterated function systems) satisfying unique ergodicity on (0, 1). All these results were formulated for systems consisting of finitely many transformations.

Here we give a necessary and sufficient condition for the existence and uniqueness of an invariant measure for more general IFSs with probabilities. Our condition is of the same type as that given in [22]; in fact, it is expressed in the language of functional equations. The proof of uniqueness also relies on results on the existence of solutions of functional equations.

The second aim of this paper is to study properties of the considered iterated function system with probabilities such as properties of supports of its invariant measures, asymptotic stability, and strong law of large numbers.

2 Preliminaries

Fix a probability space \((\Omega ,{\mathcal {A}},P)\) and a function \(f:[0,1]\times \Omega \rightarrow [0,1]\) such that:

(H\(_1\)):

for every \(x\in [0,1]\) the function \(f(x,\cdot ):\Omega \rightarrow [0,1]\) is \({\mathcal {A}}\)-measurable;

(H\(_2\)):

for every \(\omega \in \Omega \) the function \(f_\omega :[0,1]\rightarrow [0,1]\) defined by \(f_\omega (x)=f(x,\omega )\) is an increasing homeomorphism of [0, 1] onto itself.

The family \({\mathcal {F}}=\{f_\omega \,|\,\omega \in \Omega \}\) forms an iterated function system (IFS for short) and the pair \(({\mathcal {F}},P)\), in which we are interested in this paper, forms an iterated function system with probabilities (IFSP for short). Such systems have been widely studied (see [8] and the references therein). It is worth mentioning here that we make no assumption of contractivity, or even of local contractivity on average, as is usually done (see e.g. [3, 19, 20]); global contractivity, for instance, cannot hold for homeomorphisms of [0, 1] onto itself. Neither do we assume any smoothness of the homeomorphisms \(f_\omega \).

Note that f is a Carathéodory function (i.e., continuous with respect to the first variable and \({\mathcal {A}}\)-measurable with respect to the second one), and hence it is measurable with respect to the \(\sigma \)-algebra \({\mathcal {B}}\otimes {\mathcal {A}}\), where \({\mathcal {B}}\) denotes the \(\sigma \)-algebra of all Borel subsets of [0, 1] (see [1, Lemma 4.51]). Therefore, f is also a random-valued function (rv-function for short), whose iterates were introduced independently in [5] and [10] for different needs, as follows:

$$\begin{aligned} f^{1}(x,\omega )=f(x,\omega _1)\quad \text {and}\quad f^{n+1}(x,\omega )=f(f^{n}(x,\omega ),\omega _{n+1}) \end{aligned}$$

for all \(x\in [0,1]\), \(n\in {\mathbb {N}}\) and \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }\); here \(\Omega ^\infty \) is defined as the one-sided shift space on \(\Omega \), which plays an essential role in topological and symbolic dynamics (for more details see [23]). Observe that for every \(n\in {\mathbb {N}}\) the iterate \(f^n:[0,1]\times \Omega ^\infty \rightarrow [0,1]\) is again an rv-function, but now on the product probability space \((\Omega ^\infty ,{\mathcal {A}}^\infty , P^\infty )\). More precisely, the iterate \(f^n\) is measurable with respect to the product \(\sigma \)-algebra \({\mathcal {B}}\otimes {\mathcal {A}}_n\), where \({\mathcal {A}}_n\) denotes the \(\sigma \)-algebra of all sets of the form

$$\begin{aligned} \{(\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }\,|\,(\omega _1,\ldots ,\omega _n)\in A\} \end{aligned}$$

with A from the product \(\sigma \)-algebra \({\mathcal {A}}^n\).
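For illustration only, the recursion defining the iterates can be coded directly. The sketch below uses the toy family \(f_\omega (x)=x^{\omega }\) with \(\omega \) ranging over finitely many positive exponents (an illustrative choice, not taken from the text; cf. Example 4.3 below), for which the n-th iterate has the closed form \(f^n(x,\omega )=x^{\omega _1\cdots \omega _n}\).

```python
from functools import reduce

def f(x, w):
    # one member of the illustrative family f_w(x) = x**w, w > 0
    return x ** w

def iterate(f, x, ws):
    # the recursion f^{n+1}(x, omega) = f(f^n(x, omega), omega_{n+1})
    for w in ws:
        x = f(x, w)
    return x

# For this family, f^n(x, omega) = x ** (w1 * ... * wn):
ws = (2.0, 0.5, 3.0)
val = iterate(f, 0.5, ws)
closed = 0.5 ** reduce(lambda p, w: p * w, ws, 1.0)
assert abs(val - closed) < 1e-12  # both equal 0.5**3 = 0.125
```

Any other family satisfying (H\(_1\)) and (H\(_2\)) could be substituted for `f`; only the recursion itself is dictated by the definition above.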

Fix \(B\in {\mathcal {B}}\). Since f is an rv-function, it follows that \(f^{-1}(B)\in {\mathcal {B}}\otimes {\mathcal {A}}\). Hence the \(\Omega \)-section \(f^{-1}(B)_\omega \) of the set \(f^{-1}(B)\), determined by \(\omega \in \Omega \), is Borel and

$$\begin{aligned} f^{-1}(B)_\omega =\{x\in [0,1]\,|\,f(x,\omega )\in B\}=\{x\in [0,1]\,|\,f_\omega (x)\in B\}=f_\omega ^{-1}(B). \end{aligned}$$

Moreover, if \(\mu \) is a Borel probability measure on [0, 1], then the function \(\omega \mapsto \mu (f_\omega ^{-1}(B))\) is \({\mathcal {A}}\)-measurable and \(\int _{\Omega }\mu (f_\omega ^{-1}(B))dP(\omega )=P\otimes \mu (f^{-1}(B))\). This justifies the following definition.

A Borel probability measure \(\mu _*\), defined on [0, 1], is said to be an invariant measure for the IFSP \(({\mathcal {F}},P)\), if

$$\begin{aligned} \mu _*(B)=\int _{\Omega }\mu _*\big (f_\omega ^{-1}(B)\big )dP(\omega ) \end{aligned}$$
(2.1)

for every Borel set \(B\subset [0,1]\). Note that \(\mu _*\) is an invariant measure for \(({\mathcal {F}},P)\) if and only if \(\mu _*(B)=P\otimes \mu _*(f^{-1}(B))\) for every Borel set \(B\subset [0,1]\).

Denote by \(B([0,1],{\mathbb {R}})\) the set of all bounded functions from [0, 1] to \({\mathbb {R}}\), and by \(C([0,1],{\mathbb {R}})\) its subset of all continuous functions. Put

$$\begin{aligned} {\mathcal {I}}&=\{\varphi \in B([0,1],{\mathbb {R}})\,|\,\varphi \text { is increasing}, \varphi (0)=0\text { and }\varphi (1)=1\},\\ {\mathcal {D}}&={\mathcal {I}}\cap C([0,1],{\mathbb {R}}). \end{aligned}$$

Following the idea from [22] we introduce the following conditions:

(A):

    for any \(x\in (0,1)\) there exists a set \(\Omega ^x_-\in {\mathcal {A}}\) with \(P(\Omega ^x_-)>0\) and such that \(f_\omega (x)<x\) for every \(\omega \in \Omega ^x_-\) or for any \(x\in (0,1)\) there exists a set \(\Omega ^x_+\in {\mathcal {A}}\) with \(P(\Omega ^x_+)>0\) and such that \(x<f_\omega (x)\) for every \(\omega \in \Omega ^x_+\);

(B):

there exist right-continuous functions \(\rho _1,\rho _2\in {\mathcal {I}}\), continuous at 1, such that \(\rho _1\le \rho _2\) and

    $$\begin{aligned} \rho _1(x)\le \int _\Omega \rho _1(f_\omega ^{-1}(x))dP(\omega )\quad \text {and}\quad \int _\Omega \rho _2(f_\omega ^{-1}(x))dP(\omega )\le \rho _2(x) \end{aligned}$$

    for every \(x\in [0,1]\).

Note that if the point \(x\in (0,1)\) is a common fixed point of all functions of the family \({\mathcal {F}}\), then the Dirac measure \(\delta _x\) is an invariant measure for \(({\mathcal {F}},P)\). In such a case, the search for invariant measures for \(({\mathcal {F}},P)\) reduces to two independent searches: for invariant measures for \(({\mathcal {F}}|_{[0,x]},P)\) and for \(({\mathcal {F}}|_{[x,1]},P)\), where the families \({\mathcal {F}}|_{[0,x]}\) and \({\mathcal {F}}|_{[x,1]}\) consist of all functions from \({\mathcal {F}}\) restricted to [0, x] and [x, 1], respectively. Condition (A) excludes this situation. In particular, the invariant measure we are looking for cannot be the Dirac measure of any point of the interval (0, 1). Moreover, as we will see later, condition (A) guarantees the atomlessness of the invariant measure we are interested in, as well as its uniqueness. On the other hand, it is well known that to prove the existence of an invariant measure on (0, 1), some conditions on the behaviour in the neighbourhoods of 0 and 1 have to be assumed. Condition (B) parallels the existence of positive Lyapunov exponents for smooth systems.

From now on we fix an IFSP \(({\mathcal {F}},P)\) with \(f:[0,1]\times \Omega \rightarrow [0,1]\) satisfying (H\(_{1}\)) and (H\(_{2}\)), and consider a function \(g:[0,1]\times \Omega \rightarrow [0,1]\) given by

$$\begin{aligned} g(x,\omega )=f_\omega ^{-1}(x). \end{aligned}$$
(2.2)

It is clear that \(g_\omega =f_\omega ^{-1}\) for every \(\omega \in \Omega \), and hence g satisfies (H\(_{2}\)). To see that g satisfies also (H\(_{1}\)) it suffices to fix \(x,a\in [0,1]\) and note that

$$\begin{aligned} \{\omega \in \Omega \,|\,g(x,\omega )\le a\} =\{\omega \in \Omega \,|\,(a,\omega )\in f^{-1}([x,1])\}\in {\mathcal {A}}. \end{aligned}$$

Therefore, g is a Carathéodory function as well as an rv-function.

In the proofs of our results it will be more convenient to use the function g instead of f.

3 Key Observation

The following simple observation (cf. e.g. [12, 9.1.1]) is motivated by [18] and reads as follows.

Proposition 3.1

  1. (i)

    If \(\mu _*\) is an invariant measure for \(({\mathcal {F}},P)\), then the function \(\varphi _*:[0,1]\rightarrow [0,1]\) given by

    $$\begin{aligned} \varphi _*(x)=\mu _*([0,x]) \end{aligned}$$
    (3.1)

    is increasing, right-continuous, \(\varphi _*(1)=1\) and

    $$\begin{aligned} \varphi _*(x)=\int _\Omega \varphi _*(g(x,\omega ))dP(\omega )\quad \text {for every }x\in [0,1]. \end{aligned}$$
    (3.2)
  2. (ii)

    If \(\varphi _*:[0,1]\rightarrow [0,1]\) is an increasing and right-continuous function satisfying (3.2) with \(\varphi _*(1)=1\), then formula (3.1) determines uniquely an invariant measure for \(({\mathcal {F}},P)\).

Proposition 3.1 establishes a one-to-one correspondence between invariant measures for \(({\mathcal {F}},P)\) and right-continuous increasing functions satisfying (3.2) that take the value 1 at 1. Writing \(\chi _A\) for the characteristic function of a set \(A\subset [0,1]\), defined on [0, 1], we observe that the function \(\chi _{[0,1]}\) corresponds to the Dirac measure \(\delta _0\) and the function \(\chi _{\{1\}}\) corresponds to the Dirac measure \(\delta _1\).

The next observation shows that if \(\varphi _*\) solves (3.2), then there is a fairly large family of pairs of functions \(\rho _1,\rho _2\) satisfying (B).

Remark 3.2

Fix \(\varphi _*\in {\mathcal {I}}\) satisfying (3.2) and assume that it is right-continuous and continuous at 1. If \(\psi \in {\mathcal {D}}\) is convex and \(\phi \in {\mathcal {D}}\) is concave, then \(\psi \circ \varphi _*,\phi \circ \varphi _*\in {\mathcal {I}}\) are right-continuous, continuous at 1, and, by the Jensen inequality (see e.g. [12, 10.2.7]), we have

$$\begin{aligned} (\psi \circ \varphi _*)(x)&\le \int _{\Omega } (\psi \circ \varphi _*)(f_\omega ^{-1}(x))dP(\omega )\le \int _{\Omega }(\phi \circ \varphi _*)(f_\omega ^{-1}(x))dP(\omega )\\&\le (\phi \circ \varphi _*)(x) \end{aligned}$$

for every \(x\in [0,1]\).

Proposition 3.3

Assume that

$$\begin{aligned} \{x\in (0,1)\,|\,f_\omega (x)=x\text { for { P}-almost all }\omega \in \Omega \}=\emptyset . \end{aligned}$$
(3.3)

If \(\mu \) is an invariant measure for \(({\mathcal {F}},P)\), then \(\mu (\{x\})=0\) for every \(x\in (0,1)\).

Proof

Let \(\varphi _*:[0,1]\rightarrow [0,1]\) be the function that corresponds to \(\mu \) by Proposition 3.1. It follows from [21, Lemma 1] that \(\varphi _*\) is continuous at every point of (0, 1), or equivalently, that \(\mu (\{x\})=0\) for every \(x\in (0,1)\). \(\square \)

Corollary 3.4

Assume that \(({\mathcal {F}},P)\) satisfies (3.3). Then the existence of an invariant measure for \(({\mathcal {F}},P)\) different from all convex combinations of Dirac measures \(\delta _0\) and \(\delta _1\) is equivalent to the existence of a continuous and increasing function \(\varphi _*:[0,1]\rightarrow [0,1]\) satisfying (3.2) with \(\varphi _*(0)=0\) and \(\varphi _*(1)=1\).

Proof

\((\Rightarrow )\) Let \(\mu \) be an invariant measure for \(({\mathcal {F}},P)\) that is not a convex combination of \(\delta _0\) and \(\delta _1\). By assertion (i) of Proposition 3.1 there exists a right-continuous and increasing function \(\varphi :[0,1]\rightarrow [0,1]\) satisfying (3.2) with \(\varphi (1)=1\) and such that \(\varphi \ne c\chi _{[0,1]}+(1-c)\chi _{\{1\}}\) for every \(c\in [0,1]\). Moreover, Proposition 3.3 implies that \(\varphi \) is continuous at every point of (0, 1). Therefore, the function \(\varphi _*:[0,1]\rightarrow [0,1]\) given by

$$\begin{aligned} \varphi _*(x)={\left\{ \begin{array}{ll} \frac{\displaystyle {\varphi (x)-\varphi (0)}}{\displaystyle {\lim _{y\rightarrow 1^-}\varphi (y)-\varphi (0)}},&{}\text {if }x\in [0,1)\\ 1,&{}\text {if }x=1 \end{array}\right. } \end{aligned}$$

is continuous, increasing, satisfies (3.2), \(\varphi _*(0)=0\) and \(\varphi _*(1)=1\).

\((\Leftarrow )\) It is enough to apply assertion (ii) of Proposition 3.1. \(\square \)

Remark 3.5

If \(({\mathcal {F}},P)\) satisfies (A), then (3.3) holds.

In view of Remark 3.5, Corollary 3.4 shows that if there is an invariant measure \(\mu \) for \(({\mathcal {F}},P)\) satisfying (A) and such that \(\mu ((0,1))>0\), then there is an extreme invariant measure \(\mu _*\) for \(({\mathcal {F}},P)\) such that \(\mu _*((0,1))=1\); recall that an extreme invariant measure for \(({\mathcal {F}},P)\) is an invariant measure that cannot be represented as a convex combination of other invariant measures for \(({\mathcal {F}},P)\).

We now prove that under condition (A) the considered IFSP has at most three extreme invariant measures.

Theorem 3.6

Assume that \(({\mathcal {F}},P)\) satisfies (A). Then there exists at most one invariant measure \(\mu _*\) for \(({\mathcal {F}},P)\) with \(\mu _*((0,1))=1\).

Proof

According to Propositions 3.1 and 3.3, and Remark 3.5, it suffices to prove that there exists at most one continuous function \(\varphi _*:[0,1]\rightarrow [0,1]\) satisfying (3.2) with \(\varphi _*(0)=0\) and \(\varphi _*(1)=1\).

Suppose, on the contrary, that there are two different continuous functions \(\varphi _1,\varphi _2:[0,1]\rightarrow [0,1]\) satisfying (3.2) such that \(\varphi _1(0)=\varphi _2(0)=0\) and \(\varphi _1(1)=\varphi _2(1)=1\). Put \(\varphi =\varphi _1-\varphi _2\). Clearly, \(\varphi \) is a non-trivial continuous solution of equation (3.2). Put \(M=\sup \{|\varphi (x)|\,|\,x\in [0,1]\}\) and \(S=\{x\in [0,1]\,|\,|\varphi (x)|=M\}\). Obviously, \(a=\min S\in (0,1)\) and \(b=\max S\in (0,1)\). By (A) and (2.2), at least one of the following cases occurs: there exists a set \(\Omega _-^b\) with \(P(\Omega _-^b)>0\) such that

$$\begin{aligned} b<g(b,\omega )\quad \text { for every }\omega \in \Omega _-^b \end{aligned}$$
(3.4)

or there exists a set \(\Omega _+^a\) with \(P(\Omega _+^a)>0\) such that

$$\begin{aligned} g(a,\omega )<a\quad \text { for every }\omega \in \Omega _+^a. \end{aligned}$$
(3.5)

Note that for every \(x\in S\) we have

$$\begin{aligned} M=|\varphi (x)|\le \int _\Omega |\varphi (g(x,\omega ))|dP(\omega )\le M, \end{aligned}$$

and hence \(g(x,\omega )\in S\) for P-almost all \(\omega \in \Omega \). In consequence,

$$\begin{aligned} a\le g(a,\omega )\le g(b,\omega )\le b \end{aligned}$$

for P-almost all \(\omega \in \Omega \), which contradicts both (3.4) and (3.5). \(\square \)

We end this section with an application of the results obtained so far.

Example 3.7

Fix a real number \(a>1\) and an \({\mathcal {A}}\)-measurable function \(c:\Omega \rightarrow (0,a)\) with \(\int _\Omega c(\omega )dP(\omega )=1\) that is not P-almost everywhere equal to 1, and consider the family \({{\mathcal {F}}}\) consisting of functions of the form

$$\begin{aligned} f_\omega (x)={\left\{ \begin{array}{ll} \frac{x}{c(\omega )}&{} \text { for}~ x\in [0,\frac{c(\omega )}{a}],\\ \frac{(a-1)x+1-c(\omega )}{a-c(\omega )}&{} \text { for}~ x\in (\frac{c(\omega )}{a},1]. \end{array}\right. } \end{aligned}$$

Clearly, (A) holds and for every \(\omega \in \Omega \) we have

$$\begin{aligned} g(x,\omega )={\left\{ \begin{array}{ll} c(\omega )x&{} \text {for }x\in [0,\frac{1}{a}],\\ \frac{c(\omega )(1-x)+ax-1}{a-1}&{} \text {for }x\in (\frac{1}{a},1]. \end{array}\right. } \end{aligned}$$

Moreover, if \(x\in [0,\frac{1}{a}]\), then

$$\begin{aligned} \int _\Omega g(x,\omega )dP(\omega )=x\int _\Omega c(\omega )dP(\omega )=x, \end{aligned}$$

and if \(x\in (\frac{1}{a},1]\), then

$$\begin{aligned} \int _\Omega g(x,\omega )dP(\omega )=\frac{1-x}{a-1}\int _\Omega c(\omega )dP(\omega )+\frac{ax-1}{a-1}=x. \end{aligned}$$

Thus we see that the function \(\textrm{id}_{[0,1]}\) solves equation (3.2); in particular, (B) holds with \(\rho _1=\rho _2=\textrm{id}_{[0,1]}\). Therefore, Proposition 3.1 implies that the one-dimensional Lebesgue measure on [0, 1] is an extreme invariant measure for the considered \(({\mathcal {F}},P)\). Moreover, by Theorem 3.6, there is no other extreme invariant measure \(\mu \) for \(({\mathcal {F}},P)\) with \(\mu ((0,1))>0\).
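The identities above can be confirmed numerically. The sketch below takes, purely for illustration, a two-point sample space with \(a=2\) and c taking the values 1/2 and 3/2 with probability 1/2 each (so that \(\int _\Omega c\,dP=1\) and c is not almost everywhere 1), and checks on a grid that \(x\mapsto \int _\Omega \varphi (g(x,\omega ))dP(\omega )\) returns \(\varphi \) for \(\varphi =\textrm{id}_{[0,1]}\) and dominates \(\varphi \) for the convex choice \(\varphi (x)=x^2\) (cf. Remark 3.2).

```python
A = 2.0                      # the parameter a > 1 (illustrative value)
CS = [0.5, 1.5]              # values of c on a two-point Omega, each with P = 1/2
assert abs(sum(CS) / len(CS) - 1.0) < 1e-12   # int_Omega c dP = 1

def g(x, c):
    # g(x, omega) = f_omega^{-1}(x) from Example 3.7
    if x <= 1.0 / A:
        return c * x
    return (c * (1.0 - x) + A * x - 1.0) / (A - 1.0)

def T(phi, x):
    # the averaging (T phi)(x) = int_Omega phi(g(x, omega)) dP(omega)
    return sum(phi(g(x, c)) for c in CS) / len(CS)

grid = [i / 1000.0 for i in range(1001)]
# id solves (3.2) ...
assert all(abs(T(lambda t: t, x) - x) < 1e-12 for x in grid)
# ... and rho_1(x) = x**2 satisfies rho_1 <= T rho_1, as in Remark 3.2
assert all(T(lambda t: t * t, x) >= x * x - 1e-12 for x in grid)
```

The two-point choice of \(\Omega \) is only a convenience; any finite distribution of c with mean 1 and values in (0, a) would exhibit the same identities.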

4 The Case of Two Extreme Measures

Combining Proposition 3.1 with [4, Theorem 2.2] we obtain the following result.

Proposition 4.1

Assume that the family \({\mathcal {F}}\) consists of pairwise commuting functions. If

$$\begin{aligned} \int _\Omega g(x,\omega )dP(\omega )\ne x\quad \text {for every }x\in (0,1), \end{aligned}$$
(4.1)

then \(\delta _0\) and \(\delta _1\) are the only extreme invariant measures for \(({\mathcal {F}},P)\).

The next result is a counterpart to Proposition 4.1.

Proposition 4.2

Assume that \(({\mathcal {F}},P)\) satisfies (A). If there exists an increasing homeomorphism \(h:[0,1]\rightarrow [0,1]\), commuting with every member of \({\mathcal {F}}\), such that

$$\begin{aligned} h(x)\ne x\quad \text {for every }x\in (0,1), \end{aligned}$$
(4.2)

then \(\delta _0\) and \(\delta _1\) are the only extreme invariant measures for \(({\mathcal {F}},P)\).

Proof

Suppose, on the contrary, that there is an invariant measure \(\mu _*\) for \(({\mathcal {F}},P)\) with \(\mu _*((0,1))=1\). Let \(\varphi _*\) be the function that corresponds to \(\mu _*\) by Proposition 3.1. From Theorem 3.6, Remark 3.5 and Proposition 3.3 we conclude that \(\varphi _*\) is the unique increasing and continuous function satisfying equation (3.2) with \(\varphi _*(0)=0\) and \(\varphi _*(1)=1\). Since the function \(\varphi =\varphi _*\circ h\) is increasing and continuous with \(\varphi (0)=0\) and \(\varphi (1)=1\), and for every \(x\in [0,1]\) we have

$$\begin{aligned} \varphi (x)&=\int _\Omega \varphi _*(g(h(x),\omega ))dP(\omega )=\int _\Omega \varphi _*(h(g(x,\omega )))dP(\omega )\\&=\int _\Omega \varphi (g(x,\omega ))dP(\omega ) \end{aligned}$$

it follows that \(\varphi =\varphi _*\), i.e. \(\varphi _*(h(x))=\varphi _*(x)\) for every \(x\in [0,1]\). This jointly with the assumptions imposed on h implies that \(\varphi _*\) is constant on (0, 1), which contradicts its continuity. \(\square \)

Note that (4.1) implies (A), but (A) does not yield (4.1). Moreover, if the family \({\mathcal {F}}\) consists of pairwise commuting functions, then (4.1) yields the existence of at least one member of the family \({\mathcal {F}}\) which satisfies (4.2) and commutes with every member of \({\mathcal {F}}\).

The next example shows the possible application of Propositions 4.1 and 4.2.

Example 4.3

Fix an \({\mathcal {A}}\)-measurable function \(\alpha :\Omega \rightarrow (0,\infty )\) and consider the family \({\mathcal {F}}\) that consists of functions of the form

$$\begin{aligned} f_\omega (x)=x^{\alpha (\omega )}. \end{aligned}$$

We first observe that the function \(h:[0,1]\rightarrow [0,1]\) given by \(h(x)=x^2\) is an increasing homeomorphism satisfying (4.2) that commutes with every member of \({\mathcal {F}}\). If there exists \(\Omega _0\subset \Omega \) with \(P(\Omega _0)>0\) such that \(\alpha (\omega )\ne 1\) for every \(\omega \in \Omega _0\), then \(({\mathcal {F}},P)\) satisfies (A), and hence \(\delta _0\) and \(\delta _1\) are the only extreme invariant measures for \(({\mathcal {F}},P)\), by Proposition 4.2. If there is no set \(\Omega _0\) with the above property, then clearly any probability measure on [0, 1] is an invariant measure for \(({\mathcal {F}},P)\).

Note that in specific situations of the considered family \({\mathcal {F}}\), Proposition 4.1 can also be applied. For example, assume that \(\Omega =[0,1]\), \({\mathcal {A}}={\mathcal {B}}\), P is the one-dimensional Lebesgue measure on [0, 1] and \(\alpha (\omega )=\frac{1}{\omega +\alpha }\), where \(\alpha \in (0,\infty )\) is fixed. Then for every \(x\in (0,1)\) we have

$$\begin{aligned} \int _0^1 g(x,\omega )d\omega =\int _0^1x^{\omega +\alpha }d\omega =x^\alpha \,\frac{x-1}{\log x}, \end{aligned}$$

and hence, \(x^\alpha \,\frac{x-1}{\log x}\ge \sqrt{x}\,\frac{x-1}{\log x}>x\) if \(\alpha \in (0,\frac{1}{2}]\), whereas \(x^{\alpha }\,\frac{x-1}{\log x}\le x\,\frac{x-1}{\log x} <x\) if \(\alpha \in [1,\infty )\). Therefore, Proposition 4.1 can be applied for every \(\alpha \in (0,\frac{1}{2}]\cup [1,\infty )\); however it cannot be applied if \(\alpha \in (\frac{1}{2},1)\) because (4.1) does not hold.
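The closed form of the integral and the sign of \(\int _0^1 g(x,\omega )d\omega -x\) can be double-checked numerically. The sketch below compares the formula with a midpoint-rule quadrature (grid size chosen arbitrarily) for a few sample values of x and \(\alpha \), and verifies the inequality in each regime.

```python
import math

def closed_form(x, alpha):
    # int_0^1 x**(w + alpha) dw = x**alpha * (x - 1) / log(x)  for x in (0, 1)
    return x ** alpha * (x - 1.0) / math.log(x)

def midpoint(x, alpha, n=20000):
    # midpoint-rule quadrature of int_0^1 x**(w + alpha) dw
    h = 1.0 / n
    return h * sum(x ** ((k + 0.5) * h + alpha) for k in range(n))

for x in (0.1, 0.5, 0.9):
    for alpha in (0.4, 0.75, 1.2):
        assert abs(closed_form(x, alpha) - midpoint(x, alpha)) < 1e-6
    # alpha in (0, 1/2]: the mean of g(x, .) exceeds x; alpha >= 1: it lies below x
    assert closed_form(x, 0.4) > x
    assert closed_form(x, 1.2) < x
```

The strict inequalities reflect the classical fact that the logarithmic mean \((x-1)/\log x\) of x and 1 lies strictly between their geometric mean \(\sqrt{x}\) and 1.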

5 The Case of Three Extreme Measures

We are now in a position to prove the main results of this paper. For this purpose define the operator \(\textbf{T}:{\mathcal {I}}\rightarrow {\mathcal {I}}\) by

$$\begin{aligned} \textbf{T}\varphi (x)=\int _\Omega \varphi (g(x,\omega ))dP(\omega ) \end{aligned}$$

for every \(x\in [0,1]\).

Theorem 5.1

Assume that \(({{\mathcal {F}}},P)\) satisfies (A) and (B). Then there exists exactly one invariant measure \(\mu _*\) for \(({{\mathcal {F}}},P)\) with \(\mu _*((0,1))=1\).

Proof

The uniqueness is settled by Theorem 3.6. It is only the existence that must be proved.

Since

$$\begin{aligned} \rho _1\le \textbf{T}^n\rho _1\le \textbf{T}^{n+1}\rho _1\le \textbf{T}^{n+1}\rho _2\le \textbf{T}^n\rho _2\le \rho _2 \quad \text {for every }n\in {\mathbb {N}}, \end{aligned}$$

the sequence \((\textbf{T}^n\rho _1)_{n\in {\mathbb {N}}}\) converges pointwise and its limit \(\varphi \) is an increasing function with \(\rho _1\le \varphi \le \rho _2\) and, by the Monotone Convergence Theorem,

$$\begin{aligned} \varphi (x)=\int _\Omega \varphi (g(x,\omega ))dP(\omega )\quad \text {for every }x\in [0,1]. \end{aligned}$$

Consequently, the function \(\varphi _*:[0,1]\rightarrow [0,1]\) defined by

$$\begin{aligned} \varphi _*(x)=\lim _{y\rightarrow x^+}\varphi (y)\quad \text {for every }x\in [0,1),\quad \varphi _*(1)=1, \end{aligned}$$

is increasing, right-continuous, satisfies (3.2), and \(\rho _1\le \varphi _*\le \rho _2\). In particular, \(\varphi _*\) is continuous at 1. By Proposition 3.1 formula (3.1) determines an invariant measure for \(({{\mathcal {F}}},P)\), which vanishes on \(\{0,1\}\). \(\square \)
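The monotone approximation scheme used in the proof can be observed numerically. The sketch below, assuming the concrete system of Example 3.7 with \(a=2\) and c taking the values 1/2 and 3/2 with equal probability (an illustrative choice), iterates a discretized version of \(\textbf{T}\) on a grid with linear interpolation, starting from \(\rho _1(x)=x^2\) (cf. Remark 3.2) with \(\rho _2=\textrm{id}_{[0,1]}\), and checks the chain \(\textbf{T}^n\rho _1\le \textbf{T}^{n+1}\rho _1\le \rho _2\) along the way.

```python
N = 400                                   # grid resolution (arbitrary)
A = 2.0
CS = [0.5, 1.5]                           # illustrative c-values, each with P = 1/2
xs = [i / N for i in range(N + 1)]

def g(x, c):
    # g(x, omega) = f_omega^{-1}(x) for the system of Example 3.7
    return c * x if x <= 1.0 / A else (c * (1.0 - x) + A * x - 1.0) / (A - 1.0)

def interp(vals, t):
    # linear interpolation of grid values at t in [0, 1]
    pos = min(t * N, float(N))
    i = min(int(pos), N - 1)
    return vals[i] + (pos - i) * (vals[i + 1] - vals[i])

def T(vals):
    # discretized (T phi)(x) = int_Omega phi(g(x, omega)) dP(omega)
    return [sum(interp(vals, g(x, c)) for c in CS) / len(CS) for x in xs]

phi = [x * x for x in xs]                 # rho_1; rho_2 = id is a fixed point of T
for _ in range(50):
    nxt = T(phi)
    # T^n rho_1 <= T^{n+1} rho_1 <= rho_2, as in the proof of Theorem 5.1
    assert all(b >= a - 1e-9 for a, b in zip(phi, nxt))
    assert all(b <= x + 1e-9 for b, x in zip(nxt, xs))
    phi = nxt
```

For this system \(\textrm{id}_{[0,1]}\) is the fixed point singled out by Theorem 3.6, and the iterates creep up towards it from \(\rho _1\); the interpolation step is only a discretization device, not part of the proof.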

It turns out that condition (B) is not only sufficient but also necessary for an IFSP \(({{\mathcal {F}}},P)\) satisfying (A) to have a third extreme invariant measure. More precisely, we have the following consequence of Theorem 5.1.

Corollary 5.2

Assume that \(({{\mathcal {F}}},P)\) satisfies (A). Then there exists an invariant measure \(\mu _*\) for \(({{\mathcal {F}}},P)\) with \(\mu _*((0,1))=1\) if and only if (B) holds.

Proof

\((\Leftarrow )\) This implication follows directly from Theorem 5.1.

\((\Rightarrow )\) Suppose that there exists an invariant measure \(\mu _*\) for \(({{\mathcal {F}}},P)\) with \(\mu _*((0,1))=1\). Let \(\varphi _*\) be the function that corresponds to \(\mu _*\) by Proposition 3.1. From Proposition 3.3, which we may apply by Remark 3.5, we see that \(\varphi _*\) is continuous on (0, 1), and since \(\mu _*(\{0\})=\mu _*(\{1\})=0\), we conclude that \(\varphi _*\) is continuous on the whole of [0, 1]. Hence \(\varphi _*\in {\mathcal {D}}\). Now, it is enough to apply Remark 3.2. \(\square \)

Returning to the family \({\mathcal {F}}\) from Example 4.3, note that it does not satisfy (B) whenever (A) holds; this is an immediate consequence of Corollary 5.2 or Theorem 5.1. However, if (A) does not hold, then (B) is satisfied with any functions \(\rho _1,\rho _2\in {\mathcal {I}}\) that are right-continuous, continuous at 1 and satisfy \(\rho _1\le \rho _2\).

6 Supports of Invariant Measures

Given a family \({\mathcal {H}}=\{h_\omega \,|\,\omega \in \Omega \}\) of increasing homeomorphisms of an interval \(I\subset {\mathbb {R}}\) onto itself, we say that an interval \(J\subset I\) is \({\mathcal {H}}\)-invariant if \(h_\omega (J)\subset J\) for P-almost all \(\omega \in \Omega \), and that it is \({\mathcal {H}}\)-expansive if \(J\subset h_\omega (J)\) for P-almost all \(\omega \in \Omega \).

Put

$$\begin{aligned} {\mathcal {J}}_1&=\big \{[a,b]\subset (0,1)\,|\,[a,b]\text { is a minimal (in the sense of inclusion) }{\mathcal {F}}\text {-expansive interval}\big \},\\ {\mathcal {J}}_2&=\big \{[a,b]\subsetneq [0,1]\,|\,[a,b]\text { is a maximal (in the sense of inclusion) }{\mathcal {F}}\text {-expansive interval}\\ &\qquad \text {disjoint from }\bigcup {\mathcal {J}}_1\text { with }a=0\text { or }b=1\big \},\\ {\mathcal {J}}&={\mathcal {J}}_1\cup {\mathcal {J}}_2. \end{aligned}$$

Note that (3.3) implies that the family \({\mathcal {J}}_1\) consists of pairwise disjoint non-degenerate intervals, whereas (A) yields \({\mathcal {J}}_1=\emptyset \).

Theorem 6.1

Assume that \(({{\mathcal {F}}},P)\) satisfies (3.3). If \(\mu \) is an invariant measure for \(({{\mathcal {F}}},P)\), then \({{\,\textrm{supp}\,}}\mu \cap (a,b)=\emptyset \) for every \([a,b]\in {\mathcal {J}}\).

Proof

Let \(\mu \) be an invariant measure for \(({{\mathcal {F}}},P)\). If \(\mu \) is a convex combination of \(\delta _0\) and \(\delta _1\), then there is nothing to prove. Therefore, we assume that \(\mu ((0,1))>0\). Put \(\mu _*=\frac{1}{\mu ((0,1))}\,\mu (\,\cdot \cap (0,1))\); since every \(f_\omega \) fixes 0 and 1, the measure \(\mu _*\) is again an invariant measure for \(({{\mathcal {F}}},P)\). Let \(\varphi _*\) be the function that corresponds to \(\mu _*\) by Proposition 3.1. Note that \(\varphi _*\) is continuous: on (0, 1) this follows from Proposition 3.3, while \(\mu _*(\{0\})=\mu _*(\{1\})=0\) gives continuity at the endpoints.

Let \(\psi :{\mathbb {R}}\rightarrow (0,1)\) be an increasing homeomorphism. For every \(\omega \in \Omega \) we put \(h_\omega =\psi ^{-1}\circ g_\omega \circ \psi \) and define a function \(h:{\mathbb {R}}\times \Omega \rightarrow {\mathbb {R}}\) putting \(h(x,\omega )=h_\omega (x)\). It is clear that for every \(x\in {\mathbb {R}}\) the function \(h(x,\cdot ):\Omega \rightarrow {\mathbb {R}}\) is \({\mathcal {A}}\)-measurable and for every \(\omega \in \Omega \) the function \(h_\omega \) is an increasing homeomorphism of \({\mathbb {R}}\) onto itself. By (3.3) we have

$$\begin{aligned} \{x\in {\mathbb {R}}\,|\,h(x,\omega )=x\text { for { P}-almost all }\omega \in \Omega \}=\emptyset . \end{aligned}$$

Moreover, \(\varphi _*\) satisfies (3.2) if and only if \(\varphi =\varphi _*\circ \psi \) satisfies

$$\begin{aligned} \varphi (x)=\int _\Omega \varphi (h(x,\omega ))dP(\omega )\quad \text {for every }x\in {\mathbb {R}}. \end{aligned}$$

Fix \([a,b]\in {\mathcal {J}}_1\). Then \([\psi ^{-1}(a),\psi ^{-1}(b)]\) is minimal \(\{h_\omega ^{-1}\,|\,\omega \in \Omega \}\)-expansive. Applying [16, Theorem 2] we obtain \(\varphi (\psi ^{-1}(a))=\varphi (\psi ^{-1}(b))\), and hence \(\varphi _*(a)=\varphi _*(b)\). In consequence, \({{\,\textrm{supp}\,}}\mu ={{\,\textrm{supp}\,}}\mu _*\subset [0,a]\cup [b,1]\).

Let \([0,b]\in {\mathcal {J}}_2\). Then \((-\infty ,\psi ^{-1}(b)]\) is a maximal \(\{h_\omega ^{-1}\,|\,\omega \in \Omega \}\)-expansive interval disjoint from any \(\{h_\omega ^{-1}\,|\,\omega \in \Omega \}\)-expansive compact interval \(I\subset {\mathbb {R}}\). Applying [16, Remark 2] we conclude that \(\varphi \) is constant on \((-\infty ,\psi ^{-1}(b)]\), i.e., \(\varphi _*\) is constant on (0, b]. In consequence, \({{\,\textrm{supp}\,}}\mu ={{\,\textrm{supp}\,}}\mu _*\subset \{0\}\cup [b,1]\).

If \([a,1]\in {\mathcal {J}}_2\), then arguing as above we conclude that \({{\,\textrm{supp}\,}}\mu \subset [0,a]\cup \{1\}\). \(\square \)

The next result gives additional information about supports of invariant measures for \(({{\mathcal {F}}},P)\) satisfying (A).

Theorem 6.2

Assume that \(({{\mathcal {F}}},P)\) satisfies (A). If \(\mu \) is an invariant measure for \(({{\mathcal {F}}},P)\), then \({{\,\textrm{supp}\,}}\mu \cap \{0,1\}\ne \emptyset \).

Proof

Without loss of generality we can assume that \(\mu ((0,1))=1\). Let \(\varphi \) be the function that corresponds to \(\mu \) by Proposition 3.1.

Put

$$\begin{aligned} a=\sup \{x\in [0,1]\,|\,\varphi (x)=0\}\quad \text {and}\quad b=\inf \{x\in [0,1]\,|\,\varphi (x)=1\}. \end{aligned}$$

Proposition 3.3, jointly with Remark 3.5, implies that \(\varphi (a)=0\) and \(\varphi (b)=1\). In particular, \(a<b\). Since

$$\begin{aligned} 0=\varphi (a)=\int _\Omega \varphi (g(a,\omega ))dP(\omega )\quad \text {and}\quad 1=\varphi (b)=\int _\Omega \varphi (g(b,\omega ))dP(\omega ), \end{aligned}$$

it follows that \(\varphi (g(a,\omega ))=0\) and \(\varphi (g(b,\omega ))=1\) for P-almost all \(\omega \in \Omega \). Hence \(g(a,\omega )\le a<b\le g(b,\omega )\) for P-almost all \(\omega \in \Omega \), i.e.

$$\begin{aligned} a\le f_\omega (a)<f_\omega (b)\le b\quad \text {for { P}-almost all }\omega \in \Omega . \end{aligned}$$
(6.1)

Suppose that \({{\,\textrm{supp}\,}}\mu \cap \{0,1\}=\emptyset \). Then we would have \(0<a<b<1\), which jointly with (6.1) contradicts (A). \(\square \)

Remark 6.3

Let \(\mu \) be an invariant measure for \(({{\mathcal {F}}},P)\), \(x\in (0,1)\) and \(n\in {\mathbb {N}}\). Then \(\{\omega \in \Omega ^{\infty }\,|\,f^n(x,\omega )\in (\inf {{\,\textrm{supp}\,}}\mu ,\sup {{\,\textrm{supp}\,}}\mu )\} \subset \{\omega \in \Omega ^{\infty }\,|\,f^{n+1}(x,\omega )\in (\inf {{\,\textrm{supp}\,}}\mu ,\sup {{\,\textrm{supp}\,}}\mu )\}\) holds \(P^\infty \)-a.s.

Proof

Put \(a=\inf {{\,\textrm{supp}\,}}\mu \) and \(b=\sup {{\,\textrm{supp}\,}}\mu \). Clearly, the interval [a, b] is \({\mathcal {F}}\)-invariant. Therefore, if \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^\infty \) and \(a<f^n(x,\omega )<b\), then \(a\le f(a,\omega _{n+1})<f(f^n(x,\omega ),\omega _{n+1})=f^{n+1}(x,\omega )<f(b,\omega _{n+1})\le b\) for P-almost all \(\omega _{n+1}\in \Omega \). \(\square \)

7 Stability

Using some martingale techniques developed in [13] (see also [8]), we prove the stability of \(({{\mathcal {F}}},P)\). Before formulating our result, let us denote by \(\textbf{P}\) the Markov–Feller operator that corresponds to \(({{\mathcal {F}}},P)\), i.e.

$$\begin{aligned} \textbf{P}\mu (B)=\int _{[0,1]}\left( \int _\Omega \chi _B(f(x,\omega ))dP(\omega )\right) d\mu (x) \end{aligned}$$
(7.1)

for all \(\mu \in {\mathcal {M}}_1\) and \(B\in {\mathcal {B}}\); here \({\mathcal {M}}_1\) denotes the family of all Borel probability measures on [0, 1].

Theorem 7.1

Assume that \(({{\mathcal {F}}},P)\) satisfies (A) and (B). If \(\mu _*\) is the unique invariant measure for \(({{\mathcal {F}}},P)\) with \(\mu _*((0,1))=1\), then for every \(x\in (0,1)\) we have

$$\begin{aligned} \begin{aligned} \lim _{n\rightarrow \infty }\int _{\Omega ^\infty }\psi (f^n(x,\omega ))dP^{\infty }(\omega ) =\int _{[0,1]}\psi (y)d\mu _*(y)\\ \text {for any }\psi \in C([0,1],{\mathbb {R}}). \end{aligned} \end{aligned}$$
(7.2)

Proof

Following [15], for every \(n\in {\mathbb {N}}\), we consider the operator \(\sigma _n:\Omega ^\infty \rightarrow \Omega ^\infty \) given by

$$\begin{aligned} \sigma _n(\omega _1,\omega _2,\ldots )=(\omega _n,\ldots ,\omega _1,\omega _{n+1},\ldots ). \end{aligned}$$

Next, for all \(\omega \in \Omega ^\infty \) and \(x\in (0,1)\) we define a sequence \((f_n(x,\omega ))_{n\in {\mathbb {N}}}\) by setting

$$\begin{aligned} f_n(x,\omega )=f^n(x,\sigma _n(\omega )) \end{aligned}$$

and for any \(\psi \in C([0,1],{\mathbb {R}})\) we define a sequence \((\xi _n^{\psi })_{n\in {\mathbb {N}}}\) of random variables putting

$$\begin{aligned} \xi _n^{\psi }(\omega )=\int _{[0,1]}\psi (f_n(y,\omega ))d\mu _*(y)\quad \text {for every } \omega \in \Omega ^\infty . \end{aligned}$$
(7.3)
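In words, \(\sigma _n\) reverses the first n coordinates, so \(f_n(x,\omega )=f_{\omega _1}\circ \cdots \circ f_{\omega _n}(x)\) is the backward iteration associated with the forward iteration \(f^n(x,\omega )=f_{\omega _n}\circ \cdots \circ f_{\omega _1}(x)\); for each fixed n the two have the same distribution, which is the key to the martingale property of \((\xi _n^{\psi })\). A sketch for a hypothetical two-map system (illustration only, not the paper's \(({\mathcal {F}},P)\)):

```python
import math
from itertools import product

# Hypothetical piecewise-linear increasing homeomorphisms of [0,1]
# fixing 0 and 1 (illustration only).
def f(x):
    return 0.5*x if x < 0.2 else (2.0*x - 0.3 if x < 0.6 else 0.25*x + 0.75)

def g(x):
    return 2.0*x if x < 0.3 else (x/3.0 + 0.5 if x < 0.9 else 2.0*x - 1.0)

MAPS, PROBS = (f, g), (0.3, 0.7)

def forward(x, word):
    # f^n(x, omega): apply f_{omega_1} first, f_{omega_n} last
    for i in word:
        x = MAPS[i](x)
    return x

def backward(x, word):
    # f_n(x, omega) = f^n(x, sigma_n(omega)): apply f_{omega_n} first
    for i in reversed(word):
        x = MAPS[i](x)
    return x

def expectation(iterate, x, n, psi=lambda t: t):
    """Exact E[psi(iterate(x, .))], summing over all 2^n words of length n."""
    return sum(psi(iterate(x, word)) * math.prod(PROBS[i] for i in word)
               for word in product(range(2), repeat=n))

# sigma_n identity: forward iteration of the reversed word is backward iteration
assert abs(forward(0.4, (1, 0, 0)) - backward(0.4, (0, 0, 1))) < 1e-12
# equal one-dimensional distributions of f^n(x, .) and f_n(x, .)
assert abs(expectation(forward, 0.4, 6) - expectation(backward, 0.4, 6)) < 1e-12
```

Unlike the forward orbit, the backward sequence \(f_n(x,\omega )\) stabilizes pointwise; this is exactly what the martingale argument below extracts.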

Since \(\mu _*\) is an invariant measure, we easily check that \((\xi _n^{\psi })_{n\in {\mathbb {N}}}\) is a bounded martingale with respect to the natural filtration. From the Martingale Convergence Theorem (see [11, Section XI.14]) it follows that \((\xi _n^{\psi })_{n\in {\mathbb {N}}}\) is convergent \(P^{\infty }\)-a.s. This jointly with the separability of \(C([0,1],{\mathbb {R}})\) implies that there exists a set \(\Omega _0\subset \Omega ^{\infty }\) with \(P^{\infty }(\Omega _0)=1\) such that \((\xi _n^{\psi }(\omega ))_{n\in {\mathbb {N}}}\) is convergent for any \(\psi \in C([0,1],{\mathbb {R}})\) and \(\omega \in \Omega _0\). Therefore, by the Riesz Representation Theorem, there exists a function \(\mu :\Omega _0\rightarrow {\mathcal {M}}_1\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\xi _n^{\psi }(\omega )=\int _{[0,1]}\psi (y)d\mu (\omega )(y)\quad \text {for every }\psi \in C([0,1],{\mathbb {R}})\text { and }\omega \in \Omega _0. \end{aligned}$$
(7.4)

Arguing as in the proof of [8, Theorem 2] we can easily show that there exist a set \(\Omega _0^0\subset \Omega _0\) and a function \(s:\Omega _0^0\rightarrow [0,1]\) such that \(P^{\infty }(\Omega _0^0)=1\) and

$$\begin{aligned} \mu (\omega )=\delta _{s(\omega )}\quad \text {for every }\omega \in \Omega _0^0. \end{aligned}$$
(7.5)

Moreover,

$$\begin{aligned} \mu _*=\int _{\Omega _0^0}\delta _{s(\omega )}dP^{\infty }(\omega ). \end{aligned}$$
(7.6)
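Formula (7.6) says that \(\mu _*\) is the distribution of the random backward limit \(s(\omega )\). Numerically, \(s(\omega )\) can be approximated by running the backward iteration along a long finite word; the empirical measure of many such samples then approximates \(\mu _*\). A sketch, again for a hypothetical two-map system (illustration only):

```python
import random

# Hypothetical piecewise-linear increasing homeomorphisms of [0,1]
# fixing 0 and 1 (illustration only).
def f(x):
    return 0.5*x if x < 0.2 else (2.0*x - 0.3 if x < 0.6 else 0.25*x + 0.75)

def g(x):
    return 2.0*x if x < 0.3 else (x/3.0 + 0.5 if x < 0.9 else 2.0*x - 1.0)

def sample_s(rng, n=60, x0=0.5, p_f=0.3):
    """Approximate the backward limit s(omega) by
    f_{omega_1} o ... o f_{omega_n}(x0) for one random word of length n."""
    word = [0 if rng.random() < p_f else 1 for _ in range(n)]
    x = x0
    for i in reversed(word):          # innermost map first
        x = f(x) if i == 0 else g(x)
    return x

rng = random.Random(7)
samples = [sample_s(rng) for _ in range(2000)]
# the empirical distribution of `samples` approximates mu_* via (7.6)
```

For this toy system one observes that the sampled values are essentially unchanged when `x0` is varied, mirroring the fact that the backward limit \(s(\omega )\) does not depend on the starting point.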

By Theorem 6.2, we have \({{\,\textrm{supp}\,}}\mu _*\cap \{0,1\}\ne \emptyset \). Assume that \(0\in {{\,\textrm{supp}\,}}\mu _*\) (the case where \(1\in {{\,\textrm{supp}\,}}\mu _*\) can be considered in the same way) and set \(\gamma =\sup {{\,\textrm{supp}\,}}\mu _*\).

We begin the proof of (7.2) with the case where \(x\in (0,\gamma )\).

Fix \(x\in (0,\gamma )\). We first want to show that

$$\begin{aligned} \lim _{n\rightarrow \infty }f_n(x,\omega )=s(\omega )\quad \text {for every }\omega \in \Omega _0^0. \end{aligned}$$
(7.7)

For this purpose we fix \(\omega \in \Omega _0^0\), \(\varepsilon >0\), and let \(\psi _\omega \in C([0,1],{\mathbb {R}})\) be such that \(0\le \psi _{\omega }\le 1\), \(\psi _{\omega }(s(\omega ))=1\) and \(\psi _{\omega }(y)=0\) if \(|y-s(\omega )|>\varepsilon \). Then making use of (7.3), (7.4) and (7.5) we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{[0,1]}\psi _{\omega }(f_n(y,\omega ))d\mu _*(y) =\int _{[0,1]}\psi _\omega (y)d\delta _{s(\omega )}(y)=\psi _{\omega }(s(\omega ))=1. \end{aligned}$$

This, jointly with the fact that \(\mu _*((0,x))>0\) and \(\mu _*((x,\gamma ))>0\) and the definition of \(\psi _\omega \), implies that there exists \(n_0\in {\mathbb {N}}\) such that for any \(n\ge n_0\) there are \(y_n\in (0,x)\) and \(z_n\in (x,\gamma )\) with \(|f_n(y_n,\omega )-s(\omega )|\le \varepsilon \) and \(|f_n(z_n,\omega )-s(\omega )|\le \varepsilon \). Since

$$\begin{aligned} f_n(y_n,\omega )<f_n(x,\omega )<f_n(z_n,\omega )\quad \text {for every } n\ge n_0, \end{aligned}$$

we conclude that \(|f_n(x,\omega )-s(\omega )|<\varepsilon \) for every \(n\ge n_0\), which means that (7.7) holds.

According to the Alexandrov Theorem (see [7, Theorem 2.1]), to prove (7.2) it is enough to show that for any \(a\in (0,1)\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty } P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\,f^n(x,\omega )<a\})=\mu _*((0,a)). \end{aligned}$$

To verify this condition we fix \(a\in (0,1)\) and note that

$$\begin{aligned} P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\,f^n(x,\omega )<a\})&= P^\infty (\{\omega \in \Omega ^\infty \,|\, f^n(x,\sigma _n(\omega ))<a\})\\&=P^\infty (\{\omega \in \Omega ^\infty \,|\, f_n(x,\omega )<a\}) \end{aligned}$$

for every \(n\in {\mathbb {N}}\). Then, applying (7.7) and (7.6), we get

$$\begin{aligned} \lim _{n\rightarrow \infty } P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\, f^n(x,\omega )<a\})&=P^{\infty }(\{\omega \in \Omega _0^0\,|\, s(\omega )<a\})\\&=\int _{\{\omega \in \Omega _0^0\,|\, s(\omega )<a\}}\delta _{s(\omega )}((0,1))dP^{\infty }(\omega )\\&=\int _{\Omega _0^0}\delta _{s(\omega )}((0,a))dP^{\infty }(\omega )=\mu _*((0,a)), \end{aligned}$$

which completes the proof in case where \(x\in (0,\gamma )\).

Now, we are going to show that condition (7.2) also holds for any \(x\in [\gamma ,1)\).

Fix \(x\in [\gamma ,1)\) and observe that to complete the proof, it suffices to show that

$$\begin{aligned} \lim _{m\rightarrow \infty } P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\,f^m(x,\omega )\in (0,\gamma )\})=1; \end{aligned}$$
(7.8)

indeed, having this, for every \(\varepsilon >0\) we can choose \(m\in {\mathbb {N}}\) such that \(f^m(x,\omega )\in (0,\gamma )\) holds with \(P^{\infty }\)-probability at least \(1-\varepsilon \), and then, keeping in mind that \(f^{n+m}=f^{n}\circ f^m\) for every \(n\in {\mathbb {N}}\), we are able to apply (7.2), which is valid by the first part of the proof.

To prove (7.8) we fix \(\varepsilon >0\). Observe first that, by Remark 6.3, for every \(n\in {\mathbb {N}}\) we have

$$\begin{aligned} P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\,f^{n+1}(x,\omega )\in (0,\gamma )\})\ge P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\,f^{n}(x,\omega )\in (0,\gamma )\}). \end{aligned}$$
(7.9)

Further, by the Krylov–Bogolubov Theorem (see [9, Theorem 7.1]), the sequence

$$\begin{aligned} \left( \frac{1}{n}\sum _{m=0}^{n-1}\textbf{P}^m\delta _x\right) _{n\in {\mathbb {N}}} \end{aligned}$$

converges weakly to an invariant measure for \(({{\mathcal {F}}},P)\). Since \(\mu _*\) is the unique invariant measure for \(({{\mathcal {F}}},P)\) with \(\mu _*((0,1))=1\), the Alexandrov Theorem implies the existence of \(m\in {\mathbb {N}}\) such that

$$\begin{aligned} \textbf{P}^m\delta _x((0,\gamma ))>\mu _*((0,\gamma ))-\varepsilon =1-\varepsilon , \end{aligned}$$

due to the fact that \(\mu _*\) is atomless (see Proposition 3.3 and Remark 3.5). Therefore,

$$\begin{aligned} P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\, f^m(x,\omega )\in (0,\gamma )\}) =\textbf{P}^m\delta _x((0,\gamma ))> 1-\varepsilon , \end{aligned}$$

which jointly with (7.9) yields (7.8). \(\square \)
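The last step of the proof can be explored numerically: representing finitely supported measures by exact atom-splitting, one can compute \(\textbf{P}^m\delta _x\) and watch the mass that \(\textbf{P}^m\delta _x\) assigns to an interval \((0,c)\) stabilize as m grows. A sketch for a hypothetical two-map system (illustration only; no claim is made that it satisfies (A) and (B)):

```python
import numpy as np

# Hypothetical piecewise-linear increasing homeomorphisms of [0,1]
# fixing 0 and 1 (illustration only).
def f(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.2, 0.5 * x,
           np.where(x < 0.6, 2.0 * x - 0.3, 0.25 * x + 0.75))

def g(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.3, 2.0 * x,
           np.where(x < 0.9, x / 3.0 + 0.5, 2.0 * x - 1.0))

P_F = 0.3  # probability of applying f

def push(atoms, weights):
    """Exact application of the operator P of (7.1) to a discrete measure."""
    return (np.concatenate([f(atoms), g(atoms)]),
            np.concatenate([P_F * weights, (1.0 - P_F) * weights]))

def mass_below(atoms, weights, c):
    """(P^m delta_x)((0, c)) for the current discrete measure."""
    return float(weights[(atoms > 0.0) & (atoms < c)].sum())

atoms, weights = np.array([0.5]), np.array([1.0])
history = []
for m in range(12):                 # 2**12 atoms at the end
    atoms, weights = push(atoms, weights)
    history.append(mass_below(atoms, weights, 0.5))
```

For a system satisfying (A) and (B), Theorem 7.1 predicts that `history` converges to \(\mu _*((0,1/2))\); for the toy maps above the successive values are observed to settle quickly.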

8 Strong Law of Large Numbers

From the uniqueness of an invariant measure and Breiman’s law of large numbers we may easily derive the strong law of large numbers.

Theorem 8.1

Assume that \(({{\mathcal {F}}},P)\) satisfies (A) and (B). If \(\mu _*\) is the unique invariant measure for \(({{\mathcal {F}}},P)\) with \(\mu _*((0,1))=1\), then for all \(x\in (0,1)\) and \(P^{\infty }\)-almost all \(\omega \in \Omega ^{\infty }\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{k=0}^{n-1}\psi (f^k(x,\omega ))=\int _{[0,1]}\psi (y)d\mu _*(y) \quad \text {for any }\psi \in C([0,1],{\mathbb {R}}). \end{aligned}$$

Proof

Fix \(x\in (0,1)\). From Breiman’s law of large numbers (see [6, Corollary 3.4]) it follows that for \(P^{\infty }\)-almost all \(\omega \in \Omega ^\infty \) the sequence

$$\begin{aligned} \left( \frac{1}{n}\sum _{m=0}^{n-1}\delta _{f^m(x,\omega )}\right) _{n\in {\mathbb {N}}} \end{aligned}$$
(8.1)

converges weakly to an invariant measure for \(({{\mathcal {F}}},P)\). To complete the proof we have to show that this invariant measure is equal to \(\mu _*\). This, however, easily follows from (7.8) (or its counterpart \(\lim _{m\rightarrow \infty } P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\,f^m(x,\omega )\in (\inf {{\,\textrm{supp}\,}}\mu _*,1)\})=1\)) and the Birkhoff Ergodic Theorem applied to the dynamical system corresponding to the considered Markov process on \((\Omega ,{\mathcal {A}},P)\). In fact, since

$$\begin{aligned} \lim _{m\rightarrow \infty } P^{\infty }(\{\omega \in \Omega ^{\infty }\,|\,f^m(x,\omega )\in (\beta ,\gamma )\})=1, \end{aligned}$$

where \(\beta =\inf {{\,\textrm{supp}\,}}\mu _*\) and \(\gamma =\sup {{\,\textrm{supp}\,}}\mu _*\), with no loss of generality we may (and do) assume that \(x\in (\beta ,\gamma )\); note that, by Theorem 6.2, \(\beta =0\) or \(\gamma =1\). Choose \(x_1,x_2\in (\beta ,\gamma )\) such that \(x_1<x<x_2\). Due to the Birkhoff Ergodic Theorem, for all \(i\in \{1,2\}\), \(c\in (\beta ,\gamma )\) and \(P^{\infty }\)-almost all \(\omega \in \Omega ^{\infty }\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{m=0}^{n-1}\chi _{(0,c)}(f^m(x_i,\omega ))=\mu _*((0,c))\in (0,1) \end{aligned}$$
(8.2)

and

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{m=0}^{n-1}\chi _{(c,1)}(f^m(x_i,\omega ))= \mu _*((c,1))=1-\mu _*((0,c)), \end{aligned}$$
(8.3)

since \(\mu _*\) is atomless. Furthermore, (H\(_{2}\)) yields

$$\begin{aligned} \frac{1}{n}\sum _{m=0}^{n-1}f^m(x_1,\omega )<\frac{1}{n}\sum _{m=0}^{n-1}f^m(x,\omega ) <\frac{1}{n}\sum _{m=0}^{n-1}f^m(x_2,\omega ), \end{aligned}$$

which jointly with (8.2) and (8.3) implies that the weak limit of the sequence (8.1) is equal to \(\mu _*\). \(\square \)
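Theorem 8.1 can be illustrated by computing a Birkhoff time average along a single random orbit: for a uniquely ergodic system this average approximates \(\int \psi \,d\mu _*\) regardless of the starting point and, almost surely, of the realization. A sketch for a hypothetical two-map system (illustration only; no claim is made that it satisfies (A) and (B)):

```python
import random

# Hypothetical piecewise-linear increasing homeomorphisms of [0,1]
# fixing 0 and 1 (illustration only).
def f(x):
    return 0.5*x if x < 0.2 else (2.0*x - 0.3 if x < 0.6 else 0.25*x + 0.75)

def g(x):
    return 2.0*x if x < 0.3 else (x/3.0 + 0.5 if x < 0.9 else 2.0*x - 1.0)

def orbit_average(x0, n, seed, psi=lambda t: t, p_f=0.3):
    """Time average (1/n) * sum_{k<n} psi(f^k(x0, omega)) along one
    random orbit, as in Theorem 8.1."""
    rng = random.Random(seed)
    x, total = x0, 0.0
    for _ in range(n):
        total += psi(x)
        x = f(x) if rng.random() < p_f else g(x)
    return total / n

avg = orbit_average(0.4, 100_000, seed=3)
# for a uniquely ergodic system, avg approximates the integral of psi
# with respect to mu_*, independently of x0 and (a.s.) of the orbit
```

Re-running with a different `seed` or `x0` and observing approximately the same value is the numerical face of the strong law of large numbers.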