1 Introduction

The Fermi–Ulam ping-pong is a model describing how charged particles bounce off magnetic mirrors and thereby gain energy. They undergo the so-called Fermi acceleration, and one central question is whether the particles' velocities can approach the speed of light in this way. The model was introduced by Fermi [13] in order to explain the origin of high-energy cosmic radiation. A common one-dimensional mathematical formulation of this problem is as follows: a point particle bounces completely elastically between two vertical plates of infinite mass, one fixed at \(x=0\) and one moving in time as \(x=p(t)\) for some forcing function \(p=p(t)>0\). The particle alternately hits the walls and experiences no external force between the collisions. The motion can be described by the successor map \(f:(t_0,v_0) \mapsto (t_1,v_1)\), mapping the time \(t_0\in {\mathbb {R}}\) of an impact at the left plate \(x=0\) and the corresponding velocity \(v_0>0\) right after the collision to \((t_1,v_1)\), representing the subsequent impact at \(x=0\). Since one is interested in the long-term behavior, we study the forward iterates \((t_n,v_n) = f^n(t_0,v_0)\) for \(n \in {\mathbb {N}}\) and in particular the ‘escaping set’

$$\begin{aligned} E = \left\{ (t_0,v_0) : \lim _{n\rightarrow \infty } v_n = \infty \right\} , \end{aligned}$$

consisting of the initial data which lead to infinitely fast particles. The most studied case is that of a periodic forcing p(t). Ulam [27] conjectured an increase in energy with time on average. Based on numerical simulations, however, he realized that rather large fluctuations and no clear gain in energy seemed to be the typical behavior. Two decades later, the development of KAM theory made it possible to prove that the conjecture is indeed false. If the forcing p is sufficiently smooth, all orbits stay bounded in phase space, since the existence of invariant curves prevents the orbits from escaping [17, 23]. The proofs are based on Moser’s twist theorem [18], which relies on the higher regularity. And indeed, Zharnitsky [29] showed the existence of escaping orbits if only continuity is imposed on p. In the non-periodic case, one can even find \({\mathcal {C}}^\infty \)-forcings with this behavior [15]. More recently, Dolgopyat and De Simoi developed a new approach. They consider the periodic case and study maps which are essentially approximations of the successor map f. In this way they proved several results regarding the Lebesgue measure of the escaping set E [9,10,11, 25].
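To make the dynamics concrete, the following sketch iterates the standard simplified (‘static-wall’) approximation of the Fermi–Ulam model, in which the moving plate is treated as fixed for the flight time but still transfers momentum \(2p'(t)\) at each impact. The plate distance, the amplitude and the sinusoidal forcing are illustrative assumptions, not data from the text, and this is an approximation of the exact successor map f, not f itself.

```python
import math

def simplified_fermi_ulam(t0, v0, n, L=2.0, eps=0.1):
    """Static-wall approximation of the Fermi-Ulam successor map for the
    forcing p(t) = L + eps*sin(t) (an illustrative choice): the flight time
    uses the mean plate distance L, while each impact with the moving plate
    adds the momentum transfer 2*p'(t) = 2*eps*cos(t)."""
    t, v = t0, v0
    for _ in range(n):
        t = t + 2.0 * L / v                    # left plate -> plate -> left plate
        v = abs(v + 2.0 * eps * math.cos(t))   # elastic kick from the moving plate
    return t, v

# with a static forcing (eps = 0) the velocity is conserved exactly
assert simplified_fermi_ulam(0.0, 1.0, 10, eps=0.0)[1] == 1.0
```

Following many orbits of this map and recording whether \(v_n\) grows is precisely the kind of experiment behind Ulam's observation of large fluctuations without a clear energy gain.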

Finally, Zharnitsky [30] investigated the case of a quasi-periodic forcing function whose frequencies satisfy a Diophantine inequality. Again, using an invariant curve theorem, he was able to show that the velocity of every particle is uniformly bounded in time. Since no such theorem is available if the Diophantine condition is dropped, a different approach is necessary in this case. This was carried out by Kunze and Ortega in [16]. They apply a refined version of the Poincaré recurrence theorem due to Dolgopyat [8] to the set of initial conditions leading to unbounded orbits, and thereby show that most orbits are recurrent. Thus, typically the escaping set E will have Lebesgue measure zero. In this work we give an affirmative answer to the question raised in [16] of whether this result can be generalized to the almost periodic case. Indeed, most of their arguments translate naturally into the language of Bohr almost periodic functions. Our main theorem (Theorem 5.1) states that the escaping set E is most likely to have measure zero, provided the almost periodic forcing p is sufficiently smooth.

In order to explain more precisely what we mean by ‘most likely’, we first need to introduce some properties and notation regarding almost periodic functions. This is done in Sect. 2. Subsequently, we will study measure-preserving successor maps of a certain type and their iterations. We end this part by stating Theorem 3.1, a slightly generalized version of a theorem by Kunze and Ortega [16], which describes conditions under which the escaping set typically has measure zero. This will be our most important tool, and its proof will be given in the following section. In the last section we then discuss the ping-pong model in more detail and finally state and prove the main theorem.

2 Almost Periodic Functions and Their Representation

2.1 Compact Topological Groups and Minimal Flows

Let \(\Omega \) be a commutative topological group, which is metrizable and compact. We will consider the group operation to be additive. Moreover, suppose there is a continuous homomorphism \(\psi :{\mathbb {R}}\rightarrow \Omega \), such that the image \(\psi ({\mathbb {R}})\) is dense in \(\Omega \). This function \(\psi \) induces a canonical flow on \(\Omega \), namely

$$\begin{aligned} \Omega \times {\mathbb {R}}\rightarrow \Omega , \;\; \omega \cdot t = \omega + \psi (t). \end{aligned}$$

This flow is minimal, since

$$\begin{aligned} \overline{\omega \cdot {\mathbb {R}}} = \overline{\omega + \psi ({\mathbb {R}})} = \omega + \overline{\psi ({\mathbb {R}})} = \Omega \end{aligned}$$

holds for every \(\omega \in \Omega \). Let us also note that in general \(\psi \) can be nontrivial and periodic, but this happens if and only if \(\Omega \cong {{\mathbb {S}}}^1\) [21].

Now consider the unit circle \({{\mathbb {S}}}^1= \{ z \in {\mathbb {C}}: |z |= 1\}\) and a continuous homomorphism \(\varphi :\Omega \rightarrow {{\mathbb {S}}}^1\). Such functions \(\varphi \) are called characters, and together with the pointwise product they form a group, the so-called dual group \(\Omega ^*\). Its trivial element is the constant map with value 1. It is a well-known fact that nontrivial characters exist whenever \(\Omega \) is nontrivial [22]. Non-compact groups admit a dual group as well. Crucial to us will be the fact that

$$\begin{aligned} {\mathbb {R}}^* = \{ t \mapsto e^{i\alpha t}: \alpha \in {\mathbb {R}}\}. \end{aligned}$$

Now, for a nontrivial character \(\varphi \in \Omega ^*\) we define

$$\begin{aligned} \Sigma = \ker \varphi = \{ \omega \in \Omega : \varphi (\omega ) = 1 \}. \end{aligned}$$

Then \(\Sigma \) is a compact subgroup of \(\Omega \). If in addition \(\Omega \ncong {{\mathbb {S}}}^1\), it can be shown that \(\Sigma \) is perfect [21]. This subgroup will act as a global cross section to the flow on \(\Omega \). Concerning this, note that since \(\varphi \circ \psi \) describes a nontrivial character of \({\mathbb {R}}\), there is a unique \(\alpha \ne 0\) such that

$$\begin{aligned} \varphi (\psi (t))=e^{i \alpha t } \end{aligned}$$

for all \(t \in {\mathbb {R}}\). Therefore, the minimal period of this function,

$$\begin{aligned} S = \frac{2\pi }{|\alpha |}, \end{aligned}$$

can be seen as a return time on \(\Sigma \) in the following sense. If we denote by \(\tau (\omega )\) the unique number in [0, S) such that \(\varphi (\omega ) = e^{i\alpha \tau (\omega )}\), then one has

$$\begin{aligned} \varphi (\omega \cdot t)=\varphi (\omega +\psi (t)) = \varphi (\omega )\varphi (\psi (t))= e^{i \alpha \tau (\omega )} e^{i \alpha t} \end{aligned}$$

and thus

$$\begin{aligned} \omega \cdot t \in \Sigma \Leftrightarrow t \in -\tau (\omega ) + S{\mathbb {Z}}. \end{aligned}$$

Also, \(\tau \) as defined above is a function \(\tau :\Omega \rightarrow [0,S)\) that is continuous wherever \(\tau (\omega )\ne 0\), i.e. on \(\Omega \setminus \Sigma \). From this we can derive that the restricted flow

$$\begin{aligned} \Phi : \Sigma \times [0,S) \rightarrow \Omega , \;\; \Phi (\sigma ,t) = \sigma \cdot t, \end{aligned}$$

is a continuous bijection. Like \(\tau (\omega )\), its inverse

$$\begin{aligned} \Phi ^{-1}(\omega ) = (\omega \cdot (-\tau (\omega )), \tau (\omega )) \end{aligned}$$

is continuous only on \(\Omega \setminus \Sigma \). Therefore, \(\Phi \) describes a homeomorphism from \(\Sigma \times (0,S)\) to \(\Omega \setminus \Sigma \).

Example 2.1

One important example of such a group \(\Omega \) is the N-torus \({\mathbb {T}}^N\), where \({\mathbb {T}}= {\mathbb {R}}/ {\mathbb {Z}}\). We will denote the class of \(\theta \in {\mathbb {R}}\) in \({\mathbb {T}}\) by \({\bar{\theta }}= \theta + {\mathbb {Z}}\). Then, the image of the homomorphism

$$\begin{aligned} \psi (t) = (\overline{\nu _1 t},\ldots ,\overline{\nu _N t}) \end{aligned}$$

winds densely around the torus \({\mathbb {T}}^N\), whenever the frequency vector \(\nu =(\nu _1,\ldots ,\nu _N) \in {\mathbb {R}}^N\) is nonresonant, i.e. rationally independent. It is easy to verify that the dual group of \({\mathbb {T}}^N\) is given by

$$\begin{aligned} ({\mathbb {T}}^N)^* = \{(\bar{\theta }_1,\ldots ,\bar{\theta }_N) \mapsto e^{2\pi i (k_1\theta _1+\ldots + k_N \theta _N)} : k \in {\mathbb {Z}}^N \}. \end{aligned}$$

Therefore, one possible choice for the cross section would be

$$\begin{aligned} \Sigma = \{ (\bar{\theta }_1,\ldots ,\bar{\theta }_N)\in {\mathbb {T}}^N : e^{2\pi i \theta _1} = 1 \} = \{0\}\times {\mathbb {T}}^{N-1}, \end{aligned}$$

so \(\varphi (\bar{\theta }_1,\ldots ,\bar{\theta }_N)=e^{2\pi i \theta _1}\). In this case, consecutive intersections of the flow with \(\Sigma \) are separated by time intervals of length \(1/\nu _1\).

Fig. 1: On the 2-torus \({\mathbb {T}}^2\), intersections of \(\Sigma =\{ 0 \}\times {\mathbb {T}}\) and the orbit of \(\psi (t)\) are separated by time intervals of length \(S=1/\nu _1\)
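The cross-section structure of this example can be verified numerically. In the sketch below, the frequency vector is an illustrative assumption; for \(\Sigma = \{0\}\times{\mathbb{T}}\) one has \(\varphi(\bar\theta_1,\bar\theta_2)=e^{2\pi i\theta_1}\), so \(\tau(\omega)=(\theta_1 \bmod 1)/\nu_1\) and the orbit of \(\omega\) hits \(\Sigma\) exactly at the times \(-\tau(\omega)+S{\mathbb{Z}}\).

```python
import math

NU = (math.sqrt(2), 1.0)       # nonresonant frequency vector (assumption)
S = 1.0 / NU[0]                # return time to Sigma = {0} x T

def flow(theta, t):
    """The flow omega . t on T^2: componentwise translation by nu_i * t."""
    return tuple((x + nu * t) % 1.0 for x, nu in zip(theta, NU))

def tau(theta):
    """Unique time in [0, S) with phi(theta) = e^{2 pi i nu_1 tau(theta)}."""
    return (theta[0] % 1.0) / NU[0]

omega = (0.3, 0.7)
# the orbit of omega hits Sigma at t in -tau(omega) + S*Z
for k in range(3):
    hit = flow(omega, -tau(omega) + k * S)
    assert min(hit[0], 1.0 - hit[0]) < 1e-9    # first coordinate is 0 mod 1
```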

Example 2.2

Let us consider another important topological group. Let \(\Omega = {\mathcal {S}}_{\varvec{p}}\) be the \(\varvec{p}\)-adic solenoid, where \(\varvec{p}= (p_i)_{i\in {\mathbb {N}}}\) is a sequence of prime numbers. \({\mathcal {S}}_{\varvec{p}}\) is defined as the projective limit of the inverse system

$$\begin{aligned} {\mathcal {S}}_{\varvec{p}}: {\mathbb {S}}^1 \xleftarrow {\; z^{p_1} \;} {\mathbb {S}}^1 \xleftarrow {\; z^{p_2} \;} {\mathbb {S}}^1 \xleftarrow {\; z^{p_3} \;} \cdots , \end{aligned}$$

where \({\mathbb {S}}^1 \xleftarrow {\; z^{p_i} \;} {\mathbb {S}}^1\) denotes the mapping \(z \mapsto z^{p_i}\) of the circle \({\mathbb {S}}^1\) into itself. A point \(z \in {\mathcal {S}}_{\varvec{p}}\) has the form \(z=(z_0,z_1,z_2,\ldots )\), where \(z_{k-1} = z_k^{p_k}\) for \(k \in {\mathbb {N}}\). Moreover, if we take coordinatewise multiplication as the group operation, \({\mathcal {S}}_{\varvec{p}}\) becomes a compact abelian group with neutral element \((1,1,\ldots )\) [14, Theorem 10.13]. It can be endowed with the metric

$$\begin{aligned} d(z,w)= \sum _{k= 0}^{\infty }\frac{d_{{\mathbb {S}}^1}(z_k,w_k)}{q_k} , \end{aligned}$$

where \(d_{{\mathbb {S}}^1}\) denotes the canonical metric on \({\mathbb {S}}^1\) and \(q_0=1\), \(q_k=p_1\cdots p_k\) for \(k \in {\mathbb {N}}\). For \(\lambda >0\), the map

$$\begin{aligned} \psi (t)=(e^{2\pi i \lambda t/q_0},e^{2\pi i \lambda t/q_1},e^{2\pi i \lambda t/q_2},\ldots ) \end{aligned}$$

provides a minimal flow on \({\mathcal {S}}_{\varvec{p}}\). A cross section with return time \(S=1/\lambda \) is then given by

$$\begin{aligned} \Sigma = \{z \in {\mathcal {S}}_{\varvec{p}} : z_0 = 1\}. \end{aligned}$$

Geometrically, \({\mathcal {S}}_{\varvec{p}}\) can be described as the intersection of a nested sequence of solid tori \(T_1\supset T_2 \supset \ldots \) in \({\mathbb {R}}^3\), where \(T_{k+1}\) is wrapped \(p_k\) times longitudinally inside \(T_k\) without self-intersecting. See [3] for a nice description of the construction in the case of the dyadic solenoid \({\mathcal {S}}_{\varvec{2}}\).
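The defining relations of the solenoid and its metric can be illustrated with finite truncations. The sketch below fixes the dyadic case with four coordinates and \(\lambda = 1\) (both assumptions for illustration) and checks the compatibility \(z_{k-1}=z_k^{p_k}\) along the flow \(\psi\), as well as the return of the first coordinate to the cross-section \(\Sigma = \{z : z_0 = 1\}\) after time \(S = 1/\lambda\).

```python
import cmath

P = (2, 2, 2)                  # truncated dyadic solenoid (assumption)
Q = [1, 2, 4, 8]               # q_0 = 1, q_k = p_1 * ... * p_k
LAM = 1.0                      # flow speed lambda (assumption)

def psi(t, depth=4):
    """Truncation of psi(t) = (e^{2 pi i lam t / q_k})_k on S_p."""
    return [cmath.exp(2j * cmath.pi * LAM * t / Q[k]) for k in range(depth)]

def d(z, w):
    """Truncation of the solenoid metric: circle distances weighted by 1/q_k."""
    return sum(abs(cmath.phase(zk / wk)) / Q[k] for zk, wk in zip(z, w))

z = psi(0.3)
# compatibility of the coordinates: z_{k-1} = z_k^{p_k}
for k in range(1, 4):
    assert abs(z[k] ** P[k - 1] - z[k - 1]) < 1e-12
# after time S = 1/lambda the flow returns to the cross-section z_0 = 1
assert abs(psi(1.0 / LAM)[0] - 1.0) < 1e-12
```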

2.2 Almost Periodic Functions

The notion of almost periodic functions was introduced by H. Bohr as a generalization of strictly periodic functions [5]. A function \(u \in {\mathcal {C}}({\mathbb {R}})\) is called (Bohr) almost periodic if for any \(\epsilon >0\) there is a relatively dense set of \(\epsilon \)-almost-periods of this function. By this we mean that for any \(\epsilon >0\) there exists \(L=L(\epsilon )\) such that any interval of length L contains at least one number T such that

$$\begin{aligned} |u(t+T) - u(t) |< \epsilon \;\; \forall t\in {\mathbb {R}}. \end{aligned}$$
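Bohr's definition can be probed numerically. The sketch below takes a quasi-periodic test function (an illustrative assumption) and searches a window for one \(\epsilon\)-almost-period by checking \(|u(t+T)-u(t)|<\epsilon\) on a finite grid; of course this only exhibits a single almost period, not the relative density required by the definition.

```python
import math

def u(t):
    # a quasi-periodic, hence almost periodic, test function (assumption)
    return math.sin(t) + math.sin(math.sqrt(2.0) * t)

def is_almost_period(T, eps, t_max=200.0, step=0.01):
    """Grid check of |u(t+T) - u(t)| < eps for t in [0, t_max]."""
    t = 0.0
    while t <= t_max:
        if abs(u(t + T) - u(t)) >= eps:
            return False
        t += step
    return True

# scan a window for an eps-almost-period of u
eps, found = 0.5, None
T = 1.0
while T < 200.0:
    if is_almost_period(T, eps):
        found = T
        break
    T += 0.05
assert found is not None       # e.g. T near 10*pi works for this u and eps
```

The successful T balances being close to a multiple of \(2\pi\) and, simultaneously, close to a multiple of \(2\pi/\sqrt{2}\), which is exactly the simultaneous-approximation mechanism behind almost periodicity.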

Later, Bochner [4] gave an alternative but equivalent definition of this property: For a continuous function u, denote by \(u_\tau (t)\) the translated function \(u(t + \tau )\). Then u is (Bohr) almost periodic if and only if every sequence \(\left( u_{\tau _n}\right) _{n\in {\mathbb {N}}}\) of translations of u has a subsequence that converges uniformly.

There are several other characterizations of almost periodicity, as well as generalizations due to Stepanov [26], Weyl [28] and Besicovitch [1]. In this work we will only consider the notion described above and therefore call the corresponding functions just almost periodic (a.p.). We will, however, introduce one more way to describe a.p. functions, using the framework of the previous section:

Consider \((\Omega ,\psi )\) as above and a function \(U \in {\mathcal {C}}(\Omega )\). Then, the function defined by

$$\begin{aligned} u(t)= U(\psi (t)) \end{aligned}$$
(2.1)

is almost periodic. This can be verified easily with the alternative definition due to Bochner: since \(U \in {\mathcal {C}}(\Omega )\), any sequence \(\left( u_{\tau _n}\right) _{n\in {\mathbb {N}}}\) will be uniformly bounded and equicontinuous, so the Arzelà–Ascoli theorem guarantees the existence of a uniformly convergent subsequence. We will call any function obtainable in this manner representable over \((\Omega ,\psi )\). Since the image of \(\psi \) is assumed to be dense, it is clear that the function \(U \in {\mathcal {C}}(\Omega )\) is uniquely determined by this relation. As an example take \(\Omega \cong {{\mathbb {S}}}^1\); then \(\psi \) is periodic, and thus (2.1) gives rise to periodic functions. Conversely, it is true that any almost periodic function can be constructed this way. For this purpose we introduce the notion of the hull. The hull \({\mathcal {H}}_u\) of a function u is defined by

$$\begin{aligned} {\mathcal {H}}_u = \overline{\{u_\tau : \tau \in {\mathbb {R}}\}}, \end{aligned}$$

where the closure is taken with respect to uniform convergence on the whole real line. Therefore, if u is a.p., then \({\mathcal {H}}_u\) is a compact metric space. If one uses the continuous extension of the rule

$$\begin{aligned} u_\tau * u_s = u_{\tau +s} \;\; \forall \tau ,s \in {\mathbb {R}}\end{aligned}$$

onto all of \({\mathcal {H}}_u\) as the group operation, then the hull becomes a commutative topological group with neutral element u. (For \(v,w \in {\mathcal {H}}_u\) with \(v = \lim _{n\rightarrow \infty } u_{\tau _n^v}\) and \(w = \lim _{n\rightarrow \infty } u_{\tau _n^w}\) we have

$$\begin{aligned} v * w = \lim _{n\rightarrow \infty } u_{\tau _n^v+\tau _n^w}, \;\; -v = \lim _{n\rightarrow \infty } u_{-\tau _n^v}. \end{aligned}$$

These limits exist by Lemma 6.1 from the appendix. The continuity of both operations can be shown by a similar argument.) If we further define the flow

$$\begin{aligned} \psi _u (\tau ) = u_\tau , \end{aligned}$$
(2.2)

then the pair \(({\mathcal {H}}_u,\psi _u)\) matches perfectly the setup of the previous section. Now, the representation formula (2.1) holds for \(U \in {\mathcal {C}}({\mathcal {H}}_u)\) defined by

$$\begin{aligned} U(w) = w(0) \;\; \forall w \in {\mathcal {H}}_u. \end{aligned}$$
(2.3)

This function is sometimes called the ‘extension by continuity’ of the almost periodic function u(t) to its hull \({\mathcal {H}}_u\). This construction is standard in the theory of a.p. functions and we refer the reader to [20] for a more detailed discussion.

For a function \(U:\Omega \rightarrow {\mathbb {R}}\) let us introduce the derivative along the flow by

$$\begin{aligned} \partial _\psi U(\omega ) = \lim _{t \rightarrow 0} \frac{U(\omega + \psi (t)) - U(\omega )}{t}. \end{aligned}$$

Let \({\mathcal {C}}^1_\psi (\Omega )\) be the space of continuous functions \(U:\Omega \rightarrow {\mathbb {R}}\) such that \(\partial _\psi U\) exists for all \(\omega \in \Omega \) and \(\partial _\psi U \in {\mathcal {C}}(\Omega )\). The spaces \({\mathcal {C}}^k_\psi (\Omega )\) for \(k \ge 2\) are defined accordingly. Let us also introduce the norm \(\Vert U \Vert _{{\mathcal {C}}^k_\psi (\Omega )} = \Vert U \Vert _\infty + \sum _{n=1}^{k} \Vert \partial _\psi ^{(n)} U \Vert _\infty \). Now consider \(U\in {\mathcal {C}}(\Omega )\) and assume the almost periodic function \(u(t)=U(\psi (t))\) is continuously differentiable. Then \(\partial _\psi U\) exists on \(\psi ({\mathbb {R}})\) and we have

$$\begin{aligned} u'(t) = \partial _\psi U\left( \psi (t)\right) \;\; \text { for all } t \in {\mathbb {R}}. \end{aligned}$$

Lemma 2.3

Let \(U\in {\mathcal {C}}(\Omega )\) and \(u\in {\mathcal {C}}({\mathbb {R}})\) be such that \(u(t)=U(\psi (t))\). Then we have \(u\in {\mathcal {C}}^1({\mathbb {R}})\) and \(u'(t)\) is a.p. if and only if \(U \in {\mathcal {C}}^1_\psi (\Omega )\).

One part of the equivalence is trivial. The proof of the other part can be found in [21, Lemma 13]. We also note that the derivative \(u'(t)\) of an almost periodic function is itself a.p. if and only if it is uniformly continuous. This, and many other interesting properties of a.p. functions are demonstrated in [1].

Example 2.4

Let us continue Example 2.1, where \(\Omega = {\mathbb {T}}^N\). For \(U \in {\mathcal {C}}({\mathbb {T}}^N)\) consider the function

$$\begin{aligned} u(t) = U(\psi (t)) = U(\overline{\nu _1 t},\ldots ,\overline{\nu _N t}). \end{aligned}$$

Such functions are called quasi-periodic. In this case, \(\partial _\psi \) is just the derivative in the direction of \(\nu \in {\mathbb {R}}^N\). So if U is in the space \({\mathcal {C}}^1({\mathbb {T}}^N)\) of functions in \({\mathcal {C}}^1({\mathbb {R}}^N)\), which are 1-periodic in each argument, then

$$\begin{aligned} \partial _\psi U = \sum _{i=1}^{N} \nu _i \, \partial _{\theta _i} U. \end{aligned}$$

Note however, that in general \({\mathcal {C}}^1({\mathbb {T}}^N)\) is a proper subspace of \({\mathcal {C}}^1_\psi ({\mathbb {T}}^N)\).
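For U in \({\mathcal {C}}^1({\mathbb {T}}^N)\), the identity \(u'(t) = \partial _\psi U(\psi (t)) = \sum _i \nu _i\, \partial _{\theta _i} U(\psi (t))\) can be checked numerically. The specific U and frequency vector below are illustrative assumptions.

```python
import math

NU = (math.sqrt(2.0), 1.0)      # nonresonant frequencies (assumption)

def U(th1, th2):
    # a smooth 1-periodic test function on T^2 (assumption)
    return math.sin(2 * math.pi * th1) * math.cos(2 * math.pi * th2)

def u(t):
    """u(t) = U(psi(t)) with psi(t) = (nu_1 t, nu_2 t) mod 1."""
    return U(NU[0] * t % 1.0, NU[1] * t % 1.0)

def dpsi_U(th1, th2):
    """Directional derivative along nu: nu_1 dU/dth1 + nu_2 dU/dth2."""
    d1 = 2 * math.pi * math.cos(2 * math.pi * th1) * math.cos(2 * math.pi * th2)
    d2 = -2 * math.pi * math.sin(2 * math.pi * th1) * math.sin(2 * math.pi * th2)
    return NU[0] * d1 + NU[1] * d2

t = 0.37
num = (u(t + 1e-6) - u(t - 1e-6)) / 2e-6    # central difference for u'(t)
assert abs(num - dpsi_U(NU[0] * t % 1.0, NU[1] * t % 1.0)) < 1e-5
```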

Example 2.5

The so-called limit periodic functions form another important subclass of the a.p. functions. Here, a map \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is called limit periodic if it is the uniform limit of continuous periodic functions. Now, in continuation of Example 2.2, let \(\Omega = {\mathcal {S}}_{\varvec{p}}\) and consider \(U \in {\mathcal {C}}({\mathcal {S}}_{\varvec{p}})\). Then the function u(t) defined by

$$\begin{aligned} u(t) = U(\psi (t)) = U(e^{2\pi i \lambda t/q_0},e^{2\pi i \lambda t/q_1},e^{2\pi i \lambda t/q_2},\ldots ) \end{aligned}$$

is limit periodic, since it is the uniform limit of a sequence of \(q_k\)-periodic functions \(u_k\) given by

$$\begin{aligned} u_k(t) = U(e^{2\pi i\lambda t/q_0},\ldots ,e^{2\pi i\lambda t/q_k},1,1,\ldots ). \end{aligned}$$

Conversely, it is true that for suitable \(\varvec{p}\) and \(\lambda >0\) any limit periodic function v(t) can be obtained in this manner. To see this, first note that v can be expanded in a uniformly convergent series of continuous 1-periodic functions,

$$\begin{aligned} v(t)=\sum _{k=0}^{\infty } v_k(t/T_k), \end{aligned}$$

with \(T_0>0\) and \(T_k\) such that \(T_{k}/T_{k-1}\in {\mathbb {N}}\) for all \(k\in {\mathbb {N}}\) (cf. [6]). W.l.o.g. we can assume that \(p_k := \frac{T_k}{T_{k-1}}\) is a prime number for all k. Moreover, one can show that the hull \({\mathcal {H}}_v\) of v(t) consists of those functions \(w_{\phi }(t)\) which can be written in the form

$$\begin{aligned} w_{\phi }(t)=\sum _{k=0}^{\infty } v_k((t+\phi _k)/T_k), \end{aligned}$$

where \(\phi _k\) is an angle defined modulo \(T_k\) such that \(\phi _{k-1} \equiv \phi _{k} \mod T_{k-1}\) for all \(k\in {\mathbb {N}}\) (see [19] for a more detailed discussion in the case of a specific example). We can then define \(z_k = e^{2\pi i \phi _k / T_k}\) to obtain a sequence \((z_k)_{k\in {\mathbb {N}}_0}\) in \({\mathbb {S}}^1\) so that

$$\begin{aligned} z_k^{p_k} = e^{2\pi i \phi _{k} / T_{k-1} } = e^{2\pi i \phi _{k-1} / T_{k-1}} = z_{k-1}. \end{aligned}$$

In other words, we have \(z \in {\mathcal {S}}_{\varvec{p}}\), where \(\varvec{p} = (p_k)_{k\in {\mathbb {N}}}\). We write \(\eta :{\mathcal {S}}_{\varvec{p}} \rightarrow {\mathcal {H}}_v\) for the resulting continuous map \((z_k)= (e^{2\pi i \phi _k / T_k})\mapsto w_\phi (t)\). The translation flow restricted to \({\mathcal {H}}_v\) as described in (2.2) then corresponds to \(\psi \) as above with \(\lambda = 1/T_0\), that is \(\psi (\tau ) = (e^{2\pi i\tau /T_0},e^{2\pi i\tau /T_1},\ldots )\). Indeed, defining \(V = U\circ \eta \), where U is the extension by continuity as in (2.3), yields

$$\begin{aligned} V(\psi (\tau )) = U(w_\tau ) = w_\tau (0) = \sum _{k=0}^{\infty } v_k(\tau /T_k) = v(\tau ). \end{aligned}$$
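The expansion \(v(t)=\sum _k v_k(t/T_k)\) can be illustrated concretely. The sketch below uses the dyadic periods \(T_k = 2^k\) and geometrically decaying blocks \(v_k\) (both illustrative assumptions) and checks two structural facts: each truncation is \(T_m\)-periodic, and the tail is uniformly small, which is exactly why v is a uniform limit of periodic functions.

```python
import math

T = [2 ** k for k in range(6)]          # T_k = 2^k, so T_k / T_{k-1} = 2

def v_k(s, k):
    # illustrative 1-periodic blocks with summable sup-norms (assumption)
    return math.sin(2 * math.pi * s) / 2 ** k

def v(t, depth=6):
    """Partial sum of v(t) = sum_k v_k(t / T_k) up to the given depth."""
    return sum(v_k(t / T[k], k) for k in range(depth))

for t in (0.0, 0.37, 1.9):
    # the truncation with 3 terms has period lcm(T_0, T_1, T_2) = T_2 = 4
    assert abs(v(t, 3) - v(t + T[2], 3)) < 1e-12
    # uniform tail bound: |v - truncation| <= sum_{k>=3} 2^{-k} = 1/4
    assert abs(v(t, 6) - v(t, 3)) <= 0.25 + 1e-12
```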

2.3 Haar Measure and Decomposition Along the Flow

It is a well-known fact that for every compact commutative topological group \(\Omega \) there is a unique Borel probability measure \(\mu _\Omega \) which is invariant under the group operation, i.e. \(\mu _\Omega ({\mathcal {D}}+\omega ) = \mu _\Omega ({\mathcal {D}})\) holds for every Borel set \({\mathcal {D}}\subset \Omega \) and every \(\omega \in \Omega \). This measure is called the Haar measure of \(\Omega \). (This follows from the existence of the invariant Haar integral on \(\Omega \) and the Riesz representation theorem; proofs can be found in [22] and [14], respectively.) For example, if \(\Omega ={{\mathbb {S}}}^1\) we have

$$\begin{aligned} \mu _{{{\mathbb {S}}}^1}({\mathcal {B}}) = \frac{1}{2\pi }\lambda \{t\in [0,2\pi ):e^{it}\in {\mathcal {B}}\}, \end{aligned}$$

where \(\lambda \) is the Lebesgue measure on \({\mathbb {R}}\). Let \(\psi \), \(\Sigma \) and \(\Phi \) be as in Sect. 2.1. Then \(\Phi \) defines a decomposition \(\Omega \cong \Sigma \times [0,S) \) along the flow. Since \(\Sigma \) is a subgroup, it has a Haar measure \(\mu _\Sigma \) itself. Also the interval [0, S) naturally inherits the probability measure

$$\begin{aligned} \mu _{[0,S)}(I) = \frac{1}{S} \lambda (I). \end{aligned}$$

As shown in [7], the restricted flow \(\Phi :\Sigma \times [0,S)\rightarrow \Omega , \Phi (\sigma ,t)= \sigma \cdot t\) also allows for a decomposition of the Haar measure \(\mu _\Omega \) along the flow.

Lemma 2.6

The map \(\Phi \) is an isomorphism of measure spaces, i.e.

$$\begin{aligned} \mu _\Omega ({\mathcal {B}}) = \frac{1}{S} (\mu _\Sigma \otimes \lambda ) (\Phi ^{-1}({\mathcal {B}})) \end{aligned}$$
(2.4)

holds for every Borel set \({\mathcal {B}} \subset \Omega \).

Before we prove this lemma, let us begin with some preliminaries. Consider the function \(\chi : \Sigma \times [0,\infty ) \rightarrow \Sigma \times [0,S)\) defined by

$$\begin{aligned} \chi (\sigma ,t) = \Phi ^{-1}(\sigma \cdot t) = \Phi ^{-1}(\sigma +\psi (t)). \end{aligned}$$
(2.5)

Since \(\Phi \) is just the restricted flow, we have \(\chi =\text {id}\) on \(\Sigma \times [0,S)\). This yields

$$\begin{aligned} \chi (\sigma ,t)&= \Phi ^{-1}(\sigma + \psi (t)) = \Phi ^{-1}\left( \sigma + \psi \left( \left\lfloor \frac{t}{S}\right\rfloor S \right) + \psi \left( t-\left\lfloor \frac{t}{S}\right\rfloor S\right) \right) \\&= \left( \sigma + \psi \left( \left\lfloor \frac{t}{S}\right\rfloor S \right) , t-\left\lfloor \frac{t}{S}\right\rfloor S \right) \end{aligned}$$

for every \((\sigma ,t)\in \Sigma \times [0,\infty )\), where \(\lfloor \cdot \rfloor \) denotes the floor function. This representation shows that \(\chi \) is measure-preserving on every strip \( \Sigma \times [t,t+S)\) of width S, since \(\mu _{\Sigma }\) and \(\lambda \) are invariant under translations in \(\Sigma \) and \({\mathbb {R}}\), respectively. Moreover, the equality

$$\begin{aligned} \chi (\Phi ^{-1}(\omega ) + \Phi ^{-1}({{\tilde{\omega }}})) = \Phi ^{-1}(\omega + {\tilde{\omega }}) \;\; \forall \omega ,{\tilde{\omega }}\in \Omega \end{aligned}$$
(2.6)

follows directly from the definition of \(\chi \).
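The explicit formula for \(\chi\) can be sanity-checked in the torus setting of Example 2.1. The frequency below is an illustrative assumption; the check confirms that \(\chi\) reduces the time coordinate modulo S while leaving the underlying point \(\sigma \cdot t \in \Omega\) unchanged.

```python
import math

NU1 = math.sqrt(2.0)           # frequency with phi(psi(t)) = e^{2 pi i NU1 t} (assumption)
S = 1.0 / NU1                  # return time to Sigma = {0} x T

def flow2(theta, t):
    """The flow on T^2 with frequency vector (NU1, 1)."""
    return ((theta[0] + NU1 * t) % 1.0, (theta[1] + t) % 1.0)

def circ(a, b):
    """Distance on the circle R/Z."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def chi(sigma, t):
    """chi(sigma, t) = (sigma + psi(floor(t/S)*S), t - floor(t/S)*S), cf. (2.5)."""
    m = math.floor(t / S)
    return flow2(sigma, m * S), t - m * S

sigma, t = (0.0, 0.42), 3.7              # sigma in Sigma = {0} x T
sig2, s = chi(sigma, t)
assert 0.0 <= s < S                      # time coordinate reduced mod S
assert circ(sig2[0], 0.0) < 1e-9         # chi lands in Sigma x [0, S)
# chi preserves the underlying point of Omega: sig2 . s = sigma . t
a, b = flow2(sig2, s), flow2(sigma, t)
assert circ(a[0], b[0]) < 1e-9 and circ(a[1], b[1]) < 1e-9
```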

Fig. 2: Let \(\chi (\sigma ,t)=({\tilde{\sigma }},s)\). The map \(\chi \) ‘divides out’ every complete period of \(\varphi \circ \psi \), i.e. \(s = t \mod S\), while preserving the relation \({\tilde{\sigma }}\cdot s = \omega = \sigma \cdot t\)

Proof of Lemma 2.6

First we show that \(\Phi ^{-1}\) is Borel measurable. For this it suffices to show that the image \(\Phi (A\times I)\) of every open rectangle \(A\times I\subset \Sigma \times [0,S)\) is a Borel set. If \(0 \notin I\), this image is open in \(\Omega \setminus \Sigma \), since \(\Phi ^{-1}\) is continuous there. If \(0 \in I\), then again \(\Phi (A\times (I\setminus \{0\}))\) is open, and \(\Phi (A\times \{0\}) = A\) is a Borel set as well.

Now, consider the measure \(\mu _\Phi \) on \(\Omega \) defined by

$$\begin{aligned} \mu _\Phi ({\mathcal {B}}) = \frac{1}{S} (\mu _\Sigma \otimes \lambda ) (\Phi ^{-1}({\mathcal {B}})). \end{aligned}$$
(2.7)

Since \(\mu _\Phi (\Omega )=1\), this is a Borel probability measure. We will show that \(\mu _\Phi \) is also invariant under addition in the group. For this purpose, let \({\mathcal {B}} \subset \Omega \) be a Borel set and let \(\omega _0 \in \Omega \). Then, by (2.6) we have

$$\begin{aligned} \mu _\Phi ({\mathcal {B}}+\omega _0) =&\frac{1}{S} (\mu _\Sigma \otimes \lambda ) (\Phi ^{-1}({\mathcal {B}}+\omega _0)) \end{aligned}$$
(2.8)
$$\begin{aligned} =&\frac{1}{S} (\mu _\Sigma \otimes \lambda ) \left( \chi (\Phi ^{-1}({\mathcal {B}}) + \Phi ^{-1}(\omega _0))\right) . \end{aligned}$$
(2.9)

Denoting \(\Phi ^{-1}(\omega _0)=(\sigma _0,s_0)\), we get \(\Phi ^{-1}({\mathcal {B}}) + \Phi ^{-1}(\omega _0)\subset \Sigma \times [s_0,s_0+S)\). So it is contained in a strip of width S and therefore

$$\begin{aligned} \frac{1}{S} (\mu _\Sigma \otimes \lambda ) \left( \chi (\Phi ^{-1}({\mathcal {B}}) + (\sigma _0,s_0))\right) = \frac{1}{S} (\mu _\Sigma \otimes \lambda ) \left( \Phi ^{-1}({\mathcal {B}}) + (\sigma _0,s_0)\right) . \end{aligned}$$

But the product measure \(\mu _\Sigma \otimes \lambda \) is invariant under translations in \(\Sigma \times {\mathbb {R}}\). Thus, in total we have

$$\begin{aligned} \mu _\Phi ({\mathcal {B}}+\omega _0) = \frac{1}{S} (\mu _\Sigma \otimes \lambda ) \left( \Phi ^{-1}({\mathcal {B}})\right) =\mu _\Phi ({\mathcal {B}}). \end{aligned}$$
(2.10)

Therefore, \(\mu _\Phi \) is a Borel probability measure on \(\Omega \) which is invariant under the group action. Since the Haar measure is unique, it follows that \(\mu _\Omega = \mu _\Phi \). \(\square \)

3 A Theorem About Escaping Sets

3.1 Measure-Preserving Embeddings

From now on we will consider functions

$$\begin{aligned} f:{\mathcal {D}}\subset \Omega \times (0,\infty ) \rightarrow \Omega \times (0,\infty ), \end{aligned}$$

where \({\mathcal {D}}\) is an open set. We will call such a function a measure-preserving embedding if f is continuous, injective and furthermore

$$\begin{aligned} (\mu _{\Omega } \otimes \lambda ) (f({\mathcal {B}}))= (\mu _{\Omega } \otimes \lambda ) ({\mathcal {B}}) \end{aligned}$$

holds for all Borel sets \({\mathcal {B}}\subset {\mathcal {D}}\), where \(\lambda \) denotes the Lebesgue measure of \({\mathbb {R}}\). It is easy to show that under these conditions, \(f:{\mathcal {D}} \rightarrow \tilde{{\mathcal {D}}}\) is a homeomorphism, where \(\tilde{{\mathcal {D}}}=f({\mathcal {D}})\).

Since we want to use the iterations of f, we have to carefully construct a suitable domain on which these forward iterations are well-defined. We initialize \({\mathcal {D}}_1 = {\mathcal {D}}, \;\; f^1=f\) and set

$$\begin{aligned} {\mathcal {D}}_{n+1}=f^{-1}({\mathcal {D}}_{n}), \;\; f^{n+1}= f^n \circ f \; \text { for } \; n \in {\mathbb {N}}. \end{aligned}$$

This way \(f^n\) is well-defined on \({\mathcal {D}}_n\), and clearly \(f^n\) is a measure-preserving embedding as well. Moreover, it can be shown inductively that \({\mathcal {D}}_{n+1}=\{(\omega ,r)\in {\mathcal {D}}:f(\omega ,r),\ldots ,f^n(\omega ,r)\in {\mathcal {D}} \}\) and therefore \({\mathcal {D}}_{n+1}\subset {\mathcal {D}}_n \subset {\mathcal {D}}\) for all \(n \in {\mathbb {N}}\). Initial conditions in the set

$$\begin{aligned} {\mathcal {D}}_\infty = \bigcap \limits _{n=1}^\infty {\mathcal {D}}_n \subset \Omega \times (0,\infty ) \end{aligned}$$

correspond to complete forward orbits, i.e. if \((\omega _0,r_0)\in {\mathcal {D}}_\infty \), then

$$\begin{aligned} (\omega _n,r_n)=f^n(\omega _0,r_0) \end{aligned}$$

is defined for all \(n \in {\mathbb {N}}\). It could, however, happen that \({\mathcal {D}}_\infty = \emptyset \) or even \({\mathcal {D}}_{n} = \emptyset \) for some \(n\ge 2\). The set of initial data leading to unbounded orbits is denoted by
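The inductive construction of the domains \({\mathcal {D}}_n\) can be mirrored computationally: a point lies in \({\mathcal {D}}_n\) precisely when its first \(n-1\) images stay inside \({\mathcal {D}}\). The sketch below is generic; the concrete f and domain at the end are toy assumptions chosen only to exercise the bookkeeping.

```python
def iterate_in_domain(f, in_domain, x0, n):
    """Return the orbit segment x0, f(x0), ..., f^{n-1}(x0) if x0 lies in D_n,
    i.e. if x0, f(x0), ..., f^{n-2}(x0) all belong to D; otherwise None."""
    orbit = [x0]
    x = x0
    for _ in range(n - 1):
        if not in_domain(x):
            return None                  # x0 is not in D_n
        x = f(x)
        orbit.append(x)
    return orbit if in_domain(orbit[0]) or n > 1 else orbit

# toy example (assumption): D = {(omega, r) : r > 1}, f lowers r by 1/2
f = lambda x: (x[0], x[1] - 0.5)
in_domain = lambda x: x[1] > 1.0
# starting at r = 2.2: the values 2.2, 1.7, 1.2 lie in D, but 0.7 does not,
# so the point belongs to D_3 yet not to D_4
assert iterate_in_domain(f, in_domain, (0.0, 2.2), 4) is not None
assert iterate_in_domain(f, in_domain, (0.0, 2.2), 5) is None
```

Here \(n\) applications of f require membership checks at the first \(n\) points of the orbit, matching \({\mathcal {D}}_{n+1}=\{x \in {\mathcal {D}} : f(x),\ldots ,f^{n}(x)\in {\mathcal {D}}\}\).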

$$\begin{aligned} {\mathcal {U}} = \left\{ (\omega _0,r_0)\in {\mathcal {D}}_\infty : \limsup _{n\rightarrow \infty } r_n = \infty \right\} . \end{aligned}$$
(3.1)

Complete orbits such that \(\lim _{n\rightarrow \infty } r_n = \infty \) will be called escaping orbits. The corresponding set of initial data is

$$\begin{aligned} {\mathcal {E}} = \left\{ (\omega _0,r_0)\in {\mathcal {D}}_\infty : \lim _{n\rightarrow \infty } r_n = \infty \right\} . \end{aligned}$$

3.2 Almost Periodic Successor Maps

Now, consider a measure-preserving embedding \(f:{\mathcal {D}}\subset \Omega \times (0,\infty ) \rightarrow \Omega \times (0,\infty )\), which has the special structure

$$\begin{aligned} f(\omega ,r)=(\omega +\psi (F(\omega ,r)),r + G(\omega ,r)), \end{aligned}$$
(3.2)

where \(F,G:{\mathcal {D}}\rightarrow {\mathbb {R}}\) are continuous. For \(\omega \in \Omega \) we introduce the notation \(\psi _\omega (t) = \omega + \psi (t) = \omega \cdot t\) and define

$$\begin{aligned} {D}_{\omega } = (\psi _{\omega } \times \text {id})^{-1}({\mathcal {D}}) \subset {\mathbb {R}}\times (0,\infty ). \end{aligned}$$

On this open set, consider the map \(f_{\omega }: {D}_{\omega } \subset {\mathbb {R}}\times (0,\infty ) \rightarrow {\mathbb {R}}\times (0,\infty )\) given by

$$\begin{aligned} f_{\omega }(t,r)=(t+F(\psi _\omega (t),r),r+G(\psi _\omega (t),r)). \end{aligned}$$
(3.3)

Then \(f_{\omega }\) is continuous and meets the identity

$$\begin{aligned} f \circ (\psi _{\omega } \times \text {id}) = (\psi _{\omega } \times \text {id}) \circ f_{\omega } \;\; \text {on} \;\; D_{\omega }, \end{aligned}$$

i.e. the diagram formed by \(f\), \(f_{\omega }\) and \(\psi _{\omega } \times \text {id}\) commutes.

Therefore \(f_{\omega }\) is injective as well. Again we define \(D_{\omega ,1} = D_{\omega }\) and \(D_{\omega ,n+1} = f_{\omega }^{-1}(D_{\omega ,n})\) to construct the set

$$\begin{aligned} D_{\omega , \infty } = \bigcap \limits _{n=1}^{\infty } D_{\omega ,n} \subset {\mathbb {R}}\times (0,\infty ), \end{aligned}$$

where the forward iterates \((t_n,r_n)=f_{\omega }^n(t_0,r_0)\) are defined for all \(n\in {\mathbb {N}}\). Analogously, unbounded orbits are generated by initial conditions in the set

$$\begin{aligned} U_\omega = \left\{ (t_0,r_0)\in {D}_{{\omega },\infty }: \limsup _{n\rightarrow \infty }r_n=\infty \right\} \end{aligned}$$

and escaping orbits originate in

$$\begin{aligned} {E_{\omega }}= \left\{ (t_0,r_0)\in {D}_{{\omega },\infty }: \lim _{n\rightarrow \infty }r_n=\infty \right\} . \end{aligned}$$

These sets can also be obtained through the relations

$$\begin{aligned} D_{\omega , \infty } = (\psi _{\omega } \times \text {id})^{-1}({\mathcal {D}}_\infty ), \;\; U_{\omega } = (\psi _{\omega } \times \text {id})^{-1}({\mathcal {U}}), \;\; E_{\omega } = (\psi _{\omega } \times \text {id})^{-1}({\mathcal {E}}). \end{aligned}$$
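The conjugacy \(f \circ (\psi _{\omega } \times \text {id}) = (\psi _{\omega } \times \text {id}) \circ f_{\omega }\) can be verified numerically in the torus setting. The functions F, G and the frequency vector below are illustrative assumptions; any continuous choice would do for the check.

```python
import math

NU = (math.sqrt(2.0), 1.0)               # illustrative frequency vector

def psi_omega(omega, t):
    """psi_omega(t) = omega + psi(t) on T^2."""
    return tuple((x + nu * t) % 1.0 for x, nu in zip(omega, NU))

def circ(a, b):
    """Distance on the circle R/Z."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

# toy F and G (assumptions), continuous on Omega x (0, infinity)
F = lambda om, r: 1.0 / r
G = lambda om, r: 0.1 * math.sin(2.0 * math.pi * om[0])

def f(om, r):
    """f(omega, r) = (omega + psi(F(omega, r)), r + G(omega, r)), cf. (3.2)."""
    return psi_omega(om, F(om, r)), r + G(om, r)

def f_omega(omega, t, r):
    """f_omega(t, r) = (t + F(psi_omega(t), r), r + G(psi_omega(t), r)), cf. (3.3)."""
    om = psi_omega(omega, t)
    return t + F(om, r), r + G(om, r)

# check the conjugacy at a sample point
omega, (t, r) = (0.2, 0.9), (1.3, 2.0)
t1, r1 = f_omega(omega, t, r)
om1, r2 = f(psi_omega(omega, t), r)
assert abs(r1 - r2) < 1e-12
assert all(circ(a, b) < 1e-9 for a, b in zip(psi_omega(omega, t1), om1))
```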

Finally, we are in a position to state the theorem [16, Theorem 3.1]:

Theorem 3.1

Let \(f:{\mathcal {D}}\subset \Omega \times (0,\infty ) \rightarrow \Omega \times (0,\infty )\) be a measure-preserving embedding of the form (3.2) and suppose that there is a function \(W=W(\omega ,r)\) satisfying \(W\in {\mathcal {C}}^1_\psi (\Omega \times (0,\infty ))\),

$$\begin{aligned} 0<\beta \le \partial _r W(\omega ,r) \le \delta \;\; \text {for} \;\; \omega \in \Omega , \;\; r \in (0,\infty ), \end{aligned}$$
(3.4)

with some constants \(\beta ,\delta >0\), and furthermore

$$\begin{aligned} W(f(\omega ,r)) \le W(\omega ,r) + k(r) \;\; \text {for} \;\; (\omega ,r)\in {\mathcal {D}}, \end{aligned}$$
(3.5)

where \(k:(0,\infty ) \rightarrow {\mathbb {R}}\) is a decreasing and bounded function such that \(\lim _{r\rightarrow \infty } k(r)=0\). Then, for almost all \(\omega \in \Omega \), the set \(E_{\omega }\subset {\mathbb {R}}\times (0,\infty )\) has Lebesgue measure zero.

Here, \({\mathcal {C}}^1_\psi (\Omega \times (0,\infty ))\) denotes the space of continuous functions \(U(\omega ,r)\) such that both derivatives \(\partial _\psi U\) and \(\partial _r U\) exist on \(\Omega \times (0,\infty )\) and \(\partial _\psi U,\partial _r U \in {\mathcal {C}}(\Omega \times (0,\infty ))\). The function W can be seen as a generalized adiabatic invariant, since any growth will be slow for large energies.

4 Proof of Theorem 3.1

The proof of Theorem 3.1 is based on the fact that almost all unbounded orbits of f are recurrent. In order to show this, we will apply the Poincaré recurrence theorem to the set \({\mathcal {U}}\) of unbounded orbits and the corresponding restricted map \(f\big |_{\mathcal {U}}\). We will use it in the following form [16, Lemma 4.2].

Lemma 4.1

Let \((X,{\mathcal {F}},\mu )\) be a measure space such that \(\mu (X)<\infty \). Suppose that there exists a measurable set \(\Gamma \subset X\) of measure zero and a map \(T:X\setminus \Gamma \rightarrow X\) which is injective and so that the following holds:

  1. (a)

    T is measurable, in the sense \(T(B),T^{-1}(B) \in {\mathcal {F}}\) for \(B\in {\mathcal {F}}\), and

  2. (b)

    T is measure-preserving, in the sense that \(\mu (T(B))=\mu (B)\) for \(B\in {\mathcal {F}}\).

Then for every measurable set \(B\subset X\) almost all points of B visit B infinitely many times in the future (i.e. T is infinitely recurrent).

Since we cannot guarantee that \({\mathcal {U}}\) has finite measure, we will also need the following refined version of the recurrence theorem due to Dolgopyat [8, Lemma 4.3].

Lemma 4.2

Let \((X,{\mathcal {F}},\mu )\) be a measure space and suppose that the map \(T:X\rightarrow X\) is injective and such that the following holds:

  1. (a)

    T is measurable, in the sense \(T(B),T^{-1}(B) \in {\mathcal {F}}\) for \(B\in {\mathcal {F}}\),

  2. (b)

    T is measure-preserving, in the sense that \(\mu (T(B))=\mu (B)\) for \(B\in {\mathcal {F}}\), and

  3. (c)

    there is a set \(A\in {\mathcal {F}}\) such that \(\mu (A)<\infty \) with the property that almost all points from X visit A in the future.

Then for every measurable set \(B\subset X\) almost all points of B visit B infinitely many times in the future (i.e. T is infinitely recurrent).

For the sake of completeness let us state the proof.

Proof of Lemma 4.2

Let \(\Gamma \subset X\) be measurable such that \(\mu (\Gamma )=0\) and all points of \(X\setminus \Gamma \) visit A in the future. Thus, the first return time \(r(x)=\min \{ k\in {\mathbb {N}}: T^k(x)\in A \}\) is well-defined for \(x \in X \setminus \Gamma \). It induces a map \(S:X \setminus \Gamma \rightarrow A\) defined by \(S(x) = T^{r(x)}(x)\). The restriction \(S\big |_{A\setminus \Gamma }\) is injective: assume \(S(x)=S(y)\) for distinct points \(x,y\in A\setminus \Gamma \), say with \(r(x)>r(y)\); then \(T^{r(x)-r(y)}(x) = y \in A\) contradicts the minimality of r(x). It is also measure-preserving [12, cf. Lemma 2.43]. Now, consider a measurable set \(B\subset X\) and define \(B_j = \{ y \in B\setminus \Gamma : r(y) \le j\}\) as well as

$$\begin{aligned} A_j = S(B_j) = \bigcup _{k=1}^j (T^k(B)\cap A) \subset A \;\; \forall j \in {\mathbb {N}}. \end{aligned}$$

But since \(\mu (A)< \infty \) by assumption, the Poincaré recurrence theorem (Lemma 4.1) applies to \(A_j\). Thus we can find measurable sets \(\Gamma _j \subset A_j\) with measure zero, such that every point \(x \in A_j\setminus \Gamma _j\) returns to \(A_j\) infinitely often (via S). Now consider the set

$$\begin{aligned} F = B \cap \bigg ( \Gamma \cup \bigcup _{j\in {\mathbb {N}}} S^{-1}(\Gamma _j) \bigg ). \end{aligned}$$

Then \(\mu (F)=0\) and every point \(y \in B\setminus F\) returns to B infinitely often in the future. To see this, select \(j \in {\mathbb {N}}\) such that \(r(y)\le j\), i.e. \(y \in B_j\). Then \(x = S(y) \in A_j \setminus \Gamma _j\). Hence there exist infinitely many \(k \in {\mathbb {N}}\) so that \(k \ge j\) and \(S^k(x) \in A_j\). Let us fix one of these k. Then \(S^k(x)=S(z)\) for some \(z \in B_j\). So in total we have

$$\begin{aligned} T^{r(z)}(z) = S(z) = S^k(x) = S^{k+1}(y) = T^{\sum _{i=0}^{k} r(S^i(y))}(y). \end{aligned}$$

Now, since \(\sum _{i=0}^{k} r(S^i(y)) \ge k+1 > j \ge r(z)\), this yields \(T^m(y) = z \in B_j \subset B\), where \(m = \sum _{i=0}^{k} r(S^i(y)) -r(z) \in {\mathbb {N}}\). \(\square \)
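The recurrence phenomenon behind Lemmas 4.1 and 4.2 can be illustrated numerically. The following minimal Python sketch (our own toy example, not part of the proof) iterates the measure-preserving irrational rotation \(T(x) = x + \alpha \bmod 1\) on \([0,1)\) and counts the returns of a point of \(B = [0,0.1)\) to B; as the lemmas predict, the orbit revisits B again and again.

```python
import math

# Toy illustration of Poincare recurrence: the irrational rotation
# T(x) = x + alpha mod 1 preserves Lebesgue measure on [0, 1), so
# almost every point of B = [0, 0.1) must return to B infinitely often.
alpha = math.sqrt(2) - 1  # irrational rotation number

def T(x):
    return (x + alpha) % 1.0

x = 0.05          # a point of B
returns = []
for n in range(1, 20000):
    x = T(x)
    if x < 0.1:   # the orbit is back in B
        returns.append(n)

print(len(returns) > 100)  # many returns are observed -> True
```

Of course this only visualizes the statement; the proofs above cover arbitrary measure-preserving injections, not just rotations.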

One way to construct such a set A of finite measure is given by the next lemma [16]. It is based on the function \(W(\omega ,r)\) introduced in Theorem 3.1 and in fact is the only reason to assume the existence of W in the first place.

Lemma 4.3

Let \(f:{\mathcal {D}}\subset \Omega \times (0,\infty ) \rightarrow \Omega \times (0,\infty )\) be a measure-preserving embedding and suppose that there is a function \(W=W(\omega ,r)\) satisfying \(W \in {\mathcal {C}}^1_\psi (\Omega \times (0,\infty ))\), (3.4) and (3.5). Let \((\epsilon _j)_{j\in {\mathbb {N}}}\) and \((W_j)_{j\in {\mathbb {N}}}\) be sequences of positive numbers with the properties \(\sum _{j=1}^{\infty }\epsilon _j < \infty \), \(\lim _{j\rightarrow \infty } W_j = \infty \) and \(\lim _{j\rightarrow \infty } \epsilon _j^{-1} k(\frac{1}{4\delta }W_j) = 0\). Denote

$$\begin{aligned} {\mathcal {A}} = \bigcup _{j\in {\mathbb {N}}} {\mathcal {A}}_j, \;\; {\mathcal {A}}_j = \left\{ (\omega ,r)\in \Omega \times (0,\infty ): |W(\omega ,r) - W_j |\le \epsilon _j \right\} . \end{aligned}$$
(4.1)

Then \({\mathcal {A}}\) has finite measure and every unbounded orbit of f enters \({\mathcal {A}}\). More precisely, if \((\omega _0,r_0)\in {\mathcal {U}}\), where \({\mathcal {U}}\) is from (3.1), and if \((\omega _n,r_n)_{n\in {\mathbb {N}}}\) denotes the forward orbit under f, then there is \(K\in {\mathbb {N}}\) so that \((\omega _K,r_K) \in {\mathcal {A}}\).

Proof

First let us show that \({\mathcal {A}}\) has finite measure. By Fubini’s theorem,

$$\begin{aligned} (\mu _{\Omega }\otimes \lambda )({\mathcal {A}}_j) = \int _\Omega \lambda ({\mathcal {A}}_{j,\omega }) \, d\mu _\Omega (\omega ) \end{aligned}$$

holds for the sections \({\mathcal {A}}_{j,\omega } = \{ r \in (0,\infty ): (\omega ,r) \in {\mathcal {A}}_j \}\). Now, consider the diffeomorphism \(w_\omega :r \mapsto W(\omega ,r)\). Its inverse \(w_\omega ^{-1}\) is Lipschitz continuous with constant \(\beta ^{-1}\), due to (3.4). But then, \({\mathcal {A}}_{j,\omega } = w_\omega ^{-1}( (W_j-\epsilon _j, W_j + \epsilon _j) )\) implies \(\lambda ({\mathcal {A}}_{j,\omega })\le 2\beta ^{-1}\epsilon _j\). Thus in total we have

$$\begin{aligned} (\mu _\Omega \otimes \lambda )({\mathcal {A}}) \le \sum _{j=1}^{\infty } (\mu _\Omega \otimes \lambda )({\mathcal {A}}_j) \le \sum _{j=1}^{\infty } \frac{2\epsilon _j}{\beta } < \infty . \end{aligned}$$

Next we will prove the recurrence property. To this end, let \((\omega _0,r_0) \in {\mathcal {U}}\) be fixed and denote by \((\omega _n,r_n)\) the forward orbit under f. We will start with some preliminaries. Using (3.4) and the mean value theorem, we can find \({\hat{r}}\) such that

$$\begin{aligned} \frac{\beta }{2} \le \frac{W(\omega ,r)}{r} \le 2\delta \;\; \forall (\omega ,r)\in \Omega \times ({\hat{r}},\infty ). \end{aligned}$$
(4.2)

Furthermore, by assumption we can find an index \(j_0\ge 2\) such that

$$\begin{aligned} W_{j_0} > \max \{ W(\omega _1,r_1), \Vert k \Vert _\infty + \max _{\omega \in \Omega } W(\omega ,{\hat{r}}) , 2 \Vert k \Vert _\infty \} \;\; \text {and} \;\; k\bigg (\frac{1}{4\delta }W_{j_0}\bigg ) \le \epsilon _{j_0}. \end{aligned}$$

Moreover we have \(\limsup _{n \rightarrow \infty } W(\omega _n,r_n) = \infty \): Due to \(\limsup _{n \rightarrow \infty } r_n = \infty \), (3.4) implies

$$\begin{aligned} W(\omega _n,r_n) \ge \beta (r_n-r_1) + W(\omega _n,r_1) \end{aligned}$$

for n sufficiently large. But then \(\limsup _{n \rightarrow \infty } W(\omega _n,r_n) = \infty \) follows from the compactness of \(\Omega \). Now, since \(W(\omega _1,r_1)<W_{j_0}\) we can select the first index \(K\ge 2\) such that \(W(\omega _K,r_K) > W_{j_0}\). So in particular this means \(W(\omega _{K-1},r_{K-1}) \le W_{j_0}\). Since (3.5) yields \(W(\omega _K,r_K) \le W(\omega _{K-1},r_{K-1}) + k(r_{K-1})\), we can derive the following inequality:

$$\begin{aligned} W(\omega _{K-1},r_{K-1}) \ge W(\omega _{K},r_{K}) - \Vert k \Vert _\infty > W_{j_0} - \Vert k \Vert _\infty \ge \max _{\omega \in \Omega } W(\omega ,{\hat{r}}) \ge W(\omega _{K-1},{\hat{r}}). \end{aligned}$$

Then, the monotonicity of \(w_{\omega _{K-1}}\) implies \(r_{K-1} > {\hat{r}}\). Hence we can combine (4.2) with the previous estimate to obtain

$$\begin{aligned} r_{K-1}\ge \frac{1}{2\delta } W(\omega _{K-1},r_{K-1}) \ge \frac{1}{2\delta } (W_{j_0} - \Vert k \Vert _\infty ) \ge \frac{1}{4\delta } W_{j_0}. \end{aligned}$$

Finally, since k(r) is decreasing, \(W(\omega _{K},r_{K}) > W_{j_0} \ge W(\omega _{K-1},r_{K-1})\) yields

$$\begin{aligned} |W(\omega _{K},r_{K}) - W_{j_0} |\le W(\omega _{K},r_{K}) - W(\omega _{K-1},r_{K-1}) \le k(r_{K-1}) \le k\bigg (\frac{1}{4\delta } W_{j_0}\bigg ) \le \epsilon _{j_0}, \end{aligned}$$

which implies \((\omega _K,r_K) \in {\mathcal {A}}_{j_0}\). \(\square \)
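The measure estimate at the start of this proof rests on the Lipschitz bound \(\lambda ({\mathcal {A}}_{j,\omega }) \le 2\beta ^{-1}\epsilon _j\). A quick numerical sanity check (our own toy choice of W, not from the paper) confirms it: if \(\partial _r W \ge \beta \), the band \(\{r : |W(r)-W_j| \le \epsilon _j\}\) has length at most \(2\epsilon _j/\beta \).

```python
import math

# Toy check of the band-measure bound in Lemma 4.3: take W(r) = 2r + sin(r),
# whose derivative lies in [1, 3], so beta = 1 works. We measure the band
# {r : |W(r) - W_j| <= eps_j} on a fine grid and compare with 2*eps_j/beta.
beta = 1.0

def W(r):
    return 2.0 * r + math.sin(r)

W_j, eps_j = 50.0, 0.5
h = 1e-4
grid = (k * h for k in range(1, 400000))          # r in (0, 40)
band_len = h * sum(1 for r in grid if abs(W(r) - W_j) <= eps_j)
print(0.0 < band_len <= 2 * eps_j / beta + 2 * h)  # within grid tolerance
```

Here the band sits near \(r \approx 25\) and its measured length stays below the predicted bound \(2\epsilon _j/\beta = 1\), up to the grid resolution.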

Now, we are ready to prove the theorem.

Proof of Theorem 3.1

Consider the set

$$\begin{aligned} {\mathcal {U}} = \left\{ (\omega _0,r_0) \in {\mathcal {D}}_\infty : \limsup _{n \rightarrow \infty } r_n = \infty \right\} . \end{aligned}$$

We will assume that \({\mathcal {U}}\ne \emptyset \), since otherwise the assertion would be a direct consequence.

Step 1: Almost all unbounded orbits are recurrent. We will prove the existence of a set \({\mathcal {Z}}\subset {\mathcal {U}}\) of measure zero such that if \((\omega _0,r_0)\in {\mathcal {U}}\setminus {\mathcal {Z}}\), then

$$\begin{aligned} \liminf _{n\rightarrow \infty } r_n <\infty . \end{aligned}$$

In particular, we would have \({\mathcal {E}} \subset {\mathcal {Z}}\). To show this, we consider the restriction \(T=f\big |_{\mathcal {U}}:{\mathcal {U}}\rightarrow {\mathcal {U}}\). This map is well-defined, injective and, like f, measure-preserving. We will distinguish three cases:

  1. (i)

    \((\mu _\Omega \otimes \lambda )({\mathcal {U}})=0\),

  2. (ii)

    \(0<(\mu _\Omega \otimes \lambda )({\mathcal {U}})<\infty \), and

  3. (iii)

    \((\mu _\Omega \otimes \lambda )({\mathcal {U}})=\infty \).

In the first case \({\mathcal {Z}}={\mathcal {U}}\) is a valid choice. In case (ii) we can apply the Poincaré recurrence theorem (Lemma 4.1), whereas in case (iii) the modified version of Dolgopyat (Lemma 4.2) is applicable due to Lemma 4.3. Now, let us cover \(\Omega \times (0,\infty )\) by the sets \({\mathfrak {B}}_j = \Omega \times (j-1,j+1)\) for \(j \in {\mathbb {N}}\). Then, for \({\mathcal {B}}_j = {\mathfrak {B}}_j \cap {\mathcal {U}}\) one can use the recurrence property to find sets \({\mathcal {Z}}_j\subset {\mathcal {B}}_j\) of measure zero such that every orbit \((\omega _n,r_n)_{n\in {\mathbb {N}}}\) starting in \({\mathcal {B}}_j \setminus {\mathcal {Z}}_j\) returns to \({\mathcal {B}}_j\) infinitely often. But this implies \(\liminf _{n\rightarrow \infty } r_n \le r_0 +2<\infty \). Therefore, the set \({\mathcal {Z}}= \bigcup _{j\in {\mathbb {N}}} {\mathcal {Z}}_j \subset {\mathcal {U}}\) has all the desired properties.

Step 2: We will show the existence of a subgroup \(\Sigma \subset \Omega \) such that \(E_\sigma \) has Lebesgue measure zero for almost all \(\sigma \in \Sigma \). Since \({\mathcal {E}}\subset {\mathcal {Z}}\) by construction, the inclusion

$$\begin{aligned} E_\omega = (\psi _\omega \times \text {id})^{-1}({\mathcal {E}}) \subset (\psi _\omega \times \text {id})^{-1}({\mathcal {Z}}) \end{aligned}$$

holds for all \(\omega \in \Omega \). For each \(j\in {\mathbb {Z}}\) we can consider the restricted flow

$$\begin{aligned} \Phi _j : \Sigma \times [jS,(j+1)S) \rightarrow \Omega , \;\; \Phi _j(\sigma ,t) = \sigma \cdot t = \psi _\sigma (t). \end{aligned}$$

It is easy to verify that, just like \(\Phi =\Phi _0\) from Lemma 2.6, these functions are isomorphisms of measure spaces. In other words, \(\Phi _j\) is bijective up to a set of measure zero, both \(\Phi _j\) and \(\Phi _j^{-1}\) are measurable, and for every Borel set \({\mathcal {B}} \subset \Omega \) we have

$$\begin{aligned} \mu _\Omega ({\mathcal {B}}) = \frac{1}{S} (\mu _\Sigma \otimes \lambda ) (\Phi ^{-1}_j({\mathcal {B}})). \end{aligned}$$
(4.3)

This clearly implies

$$\begin{aligned} (\mu _\Omega \otimes \lambda )(B) = \frac{1}{S} (\mu _\Sigma \otimes \lambda ^2) \left( (\Phi ^{-1}_j\times \text {id}) (B)\right) \end{aligned}$$
(4.4)

for every Borel set \(B \subset \Omega \times (0,\infty )\). Let

$$\begin{aligned} C_j = \{ (\sigma ,t,r) \in \Sigma \times [jS,(j+1)S) \times (0,\infty ) : (\Phi _j(\sigma ,t),r) \in {\mathcal {Z}} \} = (\Phi _j^{-1}\times \text {id})({\mathcal {Z}}). \end{aligned}$$

Since \({\mathcal {Z}}\) has measure zero, (4.4) yields \((\mu _\Sigma \otimes \lambda ^2) (C_j) = 0\). Next we consider the cross sections

$$\begin{aligned} C_{j,\sigma } = \{ (t,r) \in [jS,(j+1)S) \times (0,\infty ) : (\sigma ,t,r) \in C_j \}. \end{aligned}$$

Then, \(\lambda ^2(C_{j,\sigma })=0\) for \(\mu _\Sigma \)-almost all \(\sigma \in \Sigma \) follows from Fubini’s theorem. So for every \(j\in {\mathbb {Z}}\) there is a set \(M_j \subset \Sigma \) with \(\mu _\Sigma (M_j)=0\) such that \(\lambda ^2(C_{j,\sigma }) = 0\) for all \(\sigma \in \Sigma \setminus M_j\). Thus \(M= \bigcup _{j\in {\mathbb {Z}}}M_j\) has measure zero as well and

$$\begin{aligned} \lambda ^2\bigg (\bigcup _{j\in {\mathbb {Z}}} C_{j,\sigma } \bigg ) = 0 \end{aligned}$$

for all \(\sigma \in \Sigma \setminus M\). But we have

$$\begin{aligned} \bigcup _{j\in {\mathbb {Z}}} C_{j,\sigma } = \{ (t,r) \in {\mathbb {R}}\times (0,\infty ) : (\psi _\sigma (t),r)\in {\mathcal {Z}} \} = (\psi _\sigma \times \text {id})^{-1}({\mathcal {Z}}), \end{aligned}$$

and recalling that \(E_\sigma \subset (\psi _\sigma \times \text {id})^{-1}({\mathcal {Z}})\), we therefore conclude \(\lambda ^2(E_\sigma )=0\) for all \(\sigma \in \Sigma \setminus M\).

Step 3: Concluding from \(\Sigma \) to \(\Omega \). If we denote by \(T_s(t,r)=(t+s,r)\) the translation in time, then clearly

$$\begin{aligned} f_{\omega \cdot s} = T_{-s}\circ f_\omega \circ T_s \;\; \text {on} \;\; D_{\omega \cdot s} \end{aligned}$$

holds for all \(\omega \in \Omega \) and \(s\in {\mathbb {R}}\). But this implies \(T_s(E_{\omega \cdot s}) = E_\omega \), since the identity above stays valid under iterations. In particular we have

$$\begin{aligned} \lambda ^2(E_{\omega \cdot s}) = \lambda ^2(E_\omega ), \;\;\forall \omega \in \Omega ,s\in {\mathbb {R}}. \end{aligned}$$

Again, we consider the restricted flow \(\Phi :\Sigma \times [0,S) \rightarrow \Omega \), \(\Phi (\sigma ,t)=\sigma \cdot t\). Using the set \(M\subset \Sigma \) from Step 2 we define \(Z_* = \Phi (M\times [0,S)) \subset \Omega \). Then, (4.3) and \(\mu _\Sigma (M)=0\) imply that \(Z_*\) also has measure zero. Now let \(\omega \in \Omega \setminus Z_*\) be fixed and let \((\sigma ,\tau ) = \Phi ^{-1}(\omega )\). Then \(\sigma \in \Sigma \setminus M\) and \(\sigma \cdot \tau = \omega \). Therefore, Step 2 implies

$$\begin{aligned} \lambda ^2(E_\omega )= \lambda ^2(E_{\sigma \cdot \tau }) = \lambda ^2(E_\sigma ) = 0, \end{aligned}$$

which proves the assertion. \(\square \)

5 Statement and Proof of the Main Result

We start with a rigorous description of the ping-pong map. To this end, let p be a forcing such that

$$\begin{aligned} p \in {\mathcal {C}}^2({\mathbb {R}}), \;\; 0<a\le p(t) \le b \;\;\forall t \in {\mathbb {R}}, \;\; \Vert p\Vert _{{\mathcal {C}}^2} = \Vert p \Vert _\infty + \Vert \dot{p} \Vert _\infty + \Vert \ddot{p} \Vert _\infty < \infty . \end{aligned}$$
(5.1)

Now, we consider the map

$$\begin{aligned} (t_0,v_0) \mapsto (t_1,v_1), \end{aligned}$$

which sends a time \(t_0\) of impact at the left plate \(x=0\) and the corresponding velocity \(v_0>0\) immediately after the impact to their successors \(t_1\) and \(v_1\), describing the subsequent impact at \(x=0\). If we further denote by \({\tilde{t}}\in (t_0,t_1)\) the time of the particle's impact at the moving plate, then we can determine \({\tilde{t}}={\tilde{t}}(t_0,v_0)\) implicitly through the equation

$$\begin{aligned} ({\tilde{t}}-t_0)v_0 = p({\tilde{t}}), \end{aligned}$$
(5.2)

since this relation describes the distance that the particle has to travel before hitting the moving plate. With that we derive a formula for the successor map:

$$\begin{aligned} t_1 = {\tilde{t}} + \frac{p({\tilde{t}})}{v_1}, \;\; v_1 = v_0-2{\dot{p}}({\tilde{t}}) \end{aligned}$$
(5.3)
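The one-step map (5.2)-(5.3) can be sketched numerically. In the following Python illustration (our own sample choice of forcing, \(p(t)=2+0.1\sin t\), not taken from the paper), the impact time \({\tilde{t}}\) is located by bisection; since \(v_0\) exceeds \(\sup \dot{p}\), the function \(t \mapsto (t-t_0)v_0 - p(t)\) is strictly increasing and has a unique root.

```python
import math

# One step of the ping-pong map: the impact time t~ solves
# (t~ - t0)*v0 = p(t~) as in (5.2); then (t1, v1) follows from (5.3).
# The forcing p(t) = 2 + 0.1*sin(t) is a sample choice of ours.
def p(t):
    return 2.0 + 0.1 * math.sin(t)

def dp(t):
    return 0.1 * math.cos(t)

def step(t0, v0):
    # g is strictly increasing in t because v0 > sup |dp| = 0.1,
    # so the impact time is the unique root of g.
    g = lambda t: (t - t0) * v0 - p(t)
    lo, hi = t0, t0 + 2.1 / v0 + 1.0   # bracket: g(lo) < 0 < g(hi), as p <= 2.1
    for _ in range(80):                # bisection down to machine precision
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    tt = 0.5 * (lo + hi)               # impact time at the moving plate
    v1 = v0 - 2.0 * dp(tt)             # elastic reflection at the moving plate
    t1 = tt + p(tt) / v1               # free flight back to x = 0
    return t1, v1

t1, v1 = step(0.0, 5.0)                # v0 = 5 lies above v_* = 0.2 here
print(t1 > 0.0 and 4.8 <= v1 <= 5.2)   # prints True
```

Since \(|2\dot{p}| \le 0.2\) for this forcing, each bounce changes the velocity by at most 0.2, which is exactly the scale of the threshold \(v_*\) appearing below.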

To ensure that this map is well defined, we will assume that

$$\begin{aligned} v_0 > v_* := 2 \max \left\{ \sup _{t\in {\mathbb {R}}} {\dot{p}}(t),0 \right\} . \end{aligned}$$
(5.4)

This condition guarantees that \(v_1\) is positive and also implies that there is a unique solution \({\tilde{t}}={\tilde{t}} (t_0,v_0) \in {\mathcal {C}}^1({\mathbb {R}}\times (v_*,\infty ))\) to (5.2). Thus we can take \({\mathbb {R}}\times (v_*,\infty )\) as the domain of the ping-pong map (5.3). Now, we are finally ready to state the main theorem.

Theorem 5.1

Assume \(0<a<b\) and \(P \in {\mathcal {C}}^2_\psi (\Omega )\) are such that

$$\begin{aligned} a \le P(\omega ) \le b \;\; \forall \omega \in \Omega . \end{aligned}$$
(5.5)

Consider the family \(\{p_\omega \}_{\omega \in \Omega }\) of almost periodic forcing functions defined by

$$\begin{aligned} p_\omega (t) = P(\omega + \psi (t)), \;\; t \in {\mathbb {R}}. \end{aligned}$$
(5.6)

Let \(v_{*}= 2\max \{ \max _{\varpi \in \Omega } \partial _\psi P(\varpi ),0\}\) and denote by

$$\begin{aligned} E_\omega = \left\{ (t_0,v_0) \in {\mathbb {R}}\times (v_*,\infty ): (t_n,v_n)_{n\in {\mathbb {N}}} \text { is well defined and } \lim _{n\rightarrow \infty } v_n = \infty \right\} \end{aligned}$$

the escaping set for the ping-pong map with forcing function \(p(t) = p_\omega (t)\). Then, for almost all \(\omega \in \Omega \), the set \(E_\omega \subset {\mathbb {R}}^2\) has Lebesgue measure zero.

Remark 5.2

The notation \(v_{*}= 2\max \{ \max _{\varpi \in \Omega } \partial _\psi P(\varpi ),0\}\) is consistent with (5.4), since for every \(\omega \in \Omega \) the set \(\omega \cdot {\mathbb {R}}\) lies dense in \(\Omega \) and thus

$$\begin{aligned} \sup _{t\in {\mathbb {R}}} {\dot{p}}_\omega (t) = \sup _{t\in {\mathbb {R}}} \partial _\psi P(\omega + \psi (t)) = \max _{\varpi \in \Omega } \partial _\psi P(\varpi ). \end{aligned}$$
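For a concrete instance of the setup (our own hypothetical example, not from the theorem), one may take \(\Omega = {\mathbb {T}}^2\), \(\psi (t)=(t,\sqrt{2}\,t)\) and \(P(x,y)= 2 + 0.05(\sin x + \sin y)\); then \(p_\omega (t)=P(\omega +\psi (t))\) is quasi-periodic, hence almost periodic, and (5.5) holds with \(a=1.9\), \(b=2.1\). The following sketch checks these bounds numerically.

```python
import math

# A hypothetical instance of (5.6): Omega = T^2, psi(t) = (t, sqrt(2)*t)
# and P(x, y) = 2 + 0.05*(sin x + sin y), giving a quasi-periodic forcing
# p_omega(t) = P(omega + psi(t)).
SQRT2 = math.sqrt(2)

def p(t, omega=(0.0, 0.0)):
    x, y = omega[0] + t, omega[1] + SQRT2 * t
    return 2.0 + 0.05 * (math.sin(x) + math.sin(y))

# Bounds (5.5) hold with a = 1.9, b = 2.1, and the threshold in (5.4)
# satisfies v_* <= 2 * 0.05 * (1 + sqrt(2)) < 0.25 for this choice.
samples = [p(0.01 * k) for k in range(100000)]
print(1.9 <= min(samples) and max(samples) <= 2.1)  # prints True
```

The two frequencies 1 and \(\sqrt{2}\) are rationally independent, so this forcing is not periodic, yet it fits the almost periodic framework of Theorem 5.1.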

We will give some further preliminaries before starting the actual proof. First we note that the ping-pong map \((t_0,v_0) \mapsto (t_1,v_1)\) is not symplectic. To remedy this defect, we reformulate the model in terms of time t and energy \(E=\frac{1}{2}v^2\). In these new coordinates the ping-pong map becomes

$$\begin{aligned}&{\mathcal {P}}:(t_0,E_0) \mapsto (t_1,E_1), \end{aligned}$$
(5.7)
$$\begin{aligned}&\quad t_1 = {\tilde{t}} + \frac{p({\tilde{t}})}{\sqrt{2 E_1}}, \;\; E_1 = E_0 - 2\sqrt{2 E_0}{\dot{p}}({\tilde{t}}) + 2{\dot{p}}({\tilde{t}})^2 = \left( \sqrt{E_0}-\sqrt{2}{\dot{p}}({\tilde{t}})\right) ^2, \end{aligned}$$
(5.8)

where \({\tilde{t}}={\tilde{t}}(t_0,E_0)\) is determined implicitly through the relation \({\tilde{t}} = t_0 + \frac{p({\tilde{t}})}{\sqrt{2E_0}}\). This map is defined for \((t_0,E_0) \in {\mathbb {R}}\times (\frac{1}{2}v_*^2,\infty )\). Since it has a generating function [15, Lemma 3.7], it is measure-preserving. Furthermore, from the inverse function theorem we can derive that \({\mathcal {P}}\) is locally injective. Note, however, that in general \({\mathcal {P}}\) fails to be globally injective (see Appendix 6.2).

Now, we will demonstrate that \(W(t_0,E_0)= p(t_0)^2 E_0\) acts as an adiabatic invariant for the ping-pong map. For this purpose we will cite the following lemma [16, Lemma 5.1]:

Lemma 5.3

There is a constant \(C>0\), depending only upon \(\Vert p \Vert _{{\mathcal {C}}^2}\) and \(a,b>0\) from (5.1), such that

$$\begin{aligned} |p(t_1)^2 E_1 - p(t_0)^2 E_0|\le C \Delta (t_0,E_0) \;\; \forall (t_0,E_0)\in {\mathbb {R}}\times (v_*^2/2,\infty ), \end{aligned}$$

where \((t_1,E_1)={\mathcal {P}}(t_0,E_0)\) denotes the ping-pong map for the forcing p, and \(\Delta (t_0,E_0) = E_0^{-1/2} + \sup \{ |\ddot{p}(t) - \ddot{p}(s)|: t,s \in [t_0-C,t_0+C],|t-s |\le C E_0^{-1/2} \}\).

So far we have considered the case of a general forcing function p. Now we will replace p(t) by \(p_\omega (t)\) from (5.6) and study the resulting ping-pong map. First we note that due to \(P \in {\mathcal {C}}^2_\psi (\Omega )\) we have \(p_\omega \in {\mathcal {C}}^2({\mathbb {R}})\). Also \(0<a\le p_\omega (t) \le b\) holds for all \(\omega \in \Omega \) by assumption. Furthermore, since \(\omega \cdot {\mathbb {R}}\) lies dense in \(\Omega \), we have

$$\begin{aligned} \Vert {p}_\omega \Vert _\infty = \Vert P \Vert _\infty , \;\; \Vert {\dot{p}}_\omega \Vert _\infty = \Vert \partial _\psi P \Vert _\infty , \;\; \Vert \ddot{p}_\omega \Vert _\infty = \Vert \partial _\psi ^2 P \Vert _\infty . \end{aligned}$$

In particular this means \(\Vert p_\omega \Vert _{{\mathcal {C}}^2({\mathbb {R}})} = \Vert P \Vert _{{\mathcal {C}}^2_\psi (\Omega )}\) for all \(\omega \in \Omega \). Therefore all considerations above apply with uniform constants. As noted in Remark 5.2, the threshold \(v_*= 2\max \{ \max _{\varpi \in \Omega } \partial _\psi P(\varpi ),0\}\) is also uniform in \(\omega \). Finally, since \(\ddot{p}_\omega (t) = \partial _\psi ^2 P(\omega +\psi (t))\), the function \(\Delta (t_0,E_0)\) can be uniformly bounded by

$$\begin{aligned} \Delta (E_0) = E_0^{-1/2} + \sup \{ |\partial _\psi ^2 P(\varpi ) - \partial _\psi ^2 P(\varpi ') |: \varpi ,\varpi '\in \Omega , \Vert \varpi - \varpi ' \Vert \le C E_0^{-1/2} \}. \end{aligned}$$

Hence, from Lemma 5.3 we obtain

Lemma 5.4

There is a constant \(C>0\), uniform in \(\omega \in \Omega \), such that

$$\begin{aligned} |p(t_1)^2 E_1 - p(t_0)^2 E_0|\le C \Delta (E_0) \;\; \forall (t_0,E_0)\in {\mathbb {R}}\times (v_*^2/2,\infty ), \end{aligned}$$

where \((t_0,E_0)\mapsto (t_1,E_1)\) denotes the ping-pong map \({\mathcal {P}}\) for the forcing function \(p_\omega (t)\).
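The adiabatic-invariant character of \(W = p(t)^2 E\) can also be observed numerically. The following Python sketch (our own illustration, with the sample periodic forcing \(p(t)=2+0.1\sin t\), not part of the paper) performs one step of (5.7)-(5.8), solving the implicit relation for \({\tilde{t}}\) by fixed-point iteration, and compares the jump of W at two energies; in line with Lemma 5.3 the jump shrinks as \(E_0\) grows.

```python
import math

# Numerical sanity check of the adiabatic invariant W(t, E) = p(t)^2 * E
# for one ping-pong step in time-energy coordinates (5.7)-(5.8), with the
# sample forcing p(t) = 2 + 0.1*sin(t) (our own choice).
def p(t):
    return 2.0 + 0.1 * math.sin(t)

def dp(t):
    return 0.1 * math.cos(t)

def step(t0, E0):
    # solve t~ = t0 + p(t~)/sqrt(2*E0) by fixed-point iteration;
    # the contraction factor is |dp|/sqrt(2*E0), tiny for large E0
    tt = t0
    for _ in range(100):
        tt = t0 + p(tt) / math.sqrt(2.0 * E0)
    E1 = (math.sqrt(E0) - math.sqrt(2.0) * dp(tt)) ** 2   # as in (5.8)
    t1 = tt + p(tt) / math.sqrt(2.0 * E1)                 # as in (5.8)
    return t1, E1

def jump(E0, t0=0.3):
    t1, E1 = step(t0, E0)
    return abs(p(t1) ** 2 * E1 - p(t0) ** 2 * E0)

print(jump(1.0e8) < jump(1.0e4))  # W drifts less at high energy -> True
```

Note that the energy increment \(E_1 - E_0\) itself is of order \(\sqrt{E_0}\); it is only the combination \(p(t)^2E\) whose one-step change decays, which is what makes W useful in Theorem 3.1.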

Consider the equation

$$\begin{aligned} \tau = \frac{1}{\sqrt{2E_0}}P(\omega _0 + \psi (\tau )). \end{aligned}$$
(5.9)

Since \(P \in {\mathcal {C}}^1_\psi (\Omega )\) and \(1-(2E_0)^{-1/2} \partial _\psi P(\omega _0 + \psi (\tau )) \ge \frac{1}{2}>0\) for \(E_0 > \frac{1}{2}v_*^2\), equation (5.9) can be solved implicitly for \(\tau =\tau (\omega _0,E_0)\in {\mathcal {C}}(\Omega \times (v_*^2/2,\infty ))\) (cf. [2] for a suitable implicit function theorem). For \(\omega \in \Omega \) and \(t_0\in {\mathbb {R}}\) one can consider (5.9) with \(\omega _0 = \omega + \psi (t_0)\). Then \(P\in {\mathcal {C}}^1_\psi (\Omega )\) and the classical implicit function theorem yield \(\tau \in {\mathcal {C}}^1_\psi (\Omega \times (v_*^2/2,\infty ))\). Moreover, comparing this to the definition of \({\tilde{t}}\), we observe the following relation:

$$\begin{aligned} {\tilde{t}}(t_0,E_0) = t_0 + \tau (\omega + \psi (t_0), E_0). \end{aligned}$$
(5.10)

Now we will give the proof of the main theorem, in which we will link the ping-pong map corresponding to \(p_\omega (t)\) to the setup of Sect. 3.

Proof of Theorem 5.1

Let \({\mathcal {D}} = \Omega \times (E^*,\infty )\), where \(E^* = \max \{ \frac{1}{2}v_*^2,E_{**} \}\) and \(E_{**}\) will be determined below. Consider \(f:{\mathcal {D}}\subset \Omega \times (0,\infty ) \rightarrow \Omega \times (0,\infty ), f(\omega _0,E_0) = (\omega _1,E_1)\), given by

$$\begin{aligned} \omega _1 = \omega _0 + \psi (F(\omega _0,E_0)), \;\; E_1 = E_0 + G(\omega _0,E_0), \end{aligned}$$

where

$$\begin{aligned}&F(\omega _0,E_0) = \bigg (\frac{1}{\sqrt{2E_0}} + \frac{1}{\sqrt{2E_1}}\bigg ) P(\omega _0 + \psi (\tau )),\\&G(\omega _0,E_0) = -2\sqrt{2E_0} \partial _\psi P(\omega _0 + \psi (\tau )) + 2\partial _\psi P(\omega _0 + \psi (\tau ))^2, \end{aligned}$$

for \(\tau = \tau (\omega _0,E_0)\). Then f has special form (3.2) and therefore we can study the family \(\{ f_\omega \}_{\omega \in \Omega }\) of planar maps defined by (3.3). But plugging (5.10) into the definition of \({\mathcal {P}}\) shows, that \(f_\omega \) is just the ping-pong map \({\mathcal {P}}\) in the case of the forcing \(p_\omega (t)\). Independently of \(\omega \), these maps are defined on \(D_\omega = (\psi _\omega \times \text {id})^{-1}({\mathcal {D}}) = {\mathbb {R}}\times (E^*,\infty )\).

Let us show that f is injective on \(\Omega \times (E_{**},\infty )\) if \(E_{**}\) is sufficiently large. To this end, suppose \(f(\omega _0,E_0) = (\omega _1,E_1) = f({\tilde{\omega }}_0,{\tilde{E}}_0)\). Since \(\omega _0 + \psi (F(\omega _0,E_0)) = {\tilde{\omega }}_0 + \psi (F({\tilde{\omega }}_0,{\tilde{E}}_0))\), there are \(\omega \in \Omega \) and \(t_0,{\tilde{t}}_0\in {\mathbb {R}}\) such that \(\omega _0 = \omega +\psi (t_0)\) and \({\tilde{\omega }}_0 = \omega +\psi ({\tilde{t}}_0)\). Implicit differentiation yields \(\partial _{t_0} \tau (\omega + \psi (t_0),E_0) = {\mathcal {O}}(E_0^{-1/2})\) and \(\partial _{E_0} \tau (\omega + \psi (t_0),E_0) = {\mathcal {O}}(E_0^{-3/2})\). Moreover, \(E_1={\mathcal {O}}(E_0)\) implies

$$\begin{aligned} Df_\omega (t_0,E_0) = \begin{pmatrix} 1 + {\mathcal {O}}(E_0^{-1/2}) & {\mathcal {O}}(E_0^{-3/2}) \\ {\mathcal {O}}(E_0^{1/2}) & 1 + {\mathcal {O}}(E_0^{-1/2}) \end{pmatrix} \end{aligned}$$

for the Jacobian matrix of \(f_\omega \). Throughout this paragraph C will denote positive constants depending only on \(E_{**}\) and \(\Vert P \Vert _{{\mathcal {C}}^2_\psi (\Omega )}\), which will not be further specified. Without loss of generality we may assume \(E_0 \le {\tilde{E}}_0\). Then, applying the mean value theorem yields \(|t_0 - {\tilde{t}}_0 |\le C E_0^{-1/2} |t_0 - {\tilde{t}}_0 |+ C E_0^{-3/2} |E_0 - {\tilde{E}}_0 |\) and \(|E_0 - {\tilde{E}}_0 |\le C {\tilde{E}}_0^{1/2} |t_0 - {\tilde{t}}_0 |+ C E_0^{-1/2} |E_0 - {\tilde{E}}_0 |\), provided \(E_{**}\) is sufficiently large. Thus, for large \(E_{**}\) we get \(|t_0 - {\tilde{t}}_0 |\le C E_0^{-3/2} |E_0 - {\tilde{E}}_0 |\) and \(|E_0 - {\tilde{E}}_0 |\le C {\tilde{E}}_0^{1/2} |t_0 - {\tilde{t}}_0 |\). Now, combining these inequalities gives us \(|t_0 - {\tilde{t}}_0 |\le C E_0^{-3/2} {\tilde{E}}_0^{1/2} |t_0 - {\tilde{t}}_0 |\). But since \(E_1 ={\mathcal {O}}(E_0)\) and also \({\tilde{E}}_0 ={\mathcal {O}}(E_1)\), we can conclude \(|t_0 - {\tilde{t}}_0 |\le C E_0^{-1} |t_0 - {\tilde{t}}_0 |\). In turn, this implies \(t_0 = {\tilde{t}}_0\) and \(E_0 = {\tilde{E}}_0\) for \(E_{**}\) sufficiently large, which proves the injectivity of \(f_\omega \) and f.

Next we want to show that f is also measure-preserving. To this end, consider the map \(g:\Sigma \times [0,S) \times (E^*,\infty ) \rightarrow \Sigma \times [0,\infty ) \times (0,\infty )\) defined by

$$\begin{aligned} g(\sigma ,s,E) = (\sigma , f_{\sigma }(s,E)) \end{aligned}$$

and \(\chi : \Sigma \times [0,\infty ) \rightarrow \Sigma \times [0,S) , \;\; \chi (\sigma ,t) = \Phi ^{-1}(\sigma \cdot t)\) from (2.5). Then, the identity

$$\begin{aligned} f = (\Phi \times \text {id}) \circ (\chi \times \text {id}) \circ g \circ (\Phi ^{-1}\times \text {id}) \end{aligned}$$

holds on \({\mathcal {D}}\). This can be illustrated as follows:

[Figure b: commutative diagram relating f, g, \(\Phi \times \text {id}\) and \(\chi \times \text {id}\)]

Recalling Lemma 2.6 and the fact that \(f_\omega \) has a generating function, it suffices to show that \(\chi \times \text {id}\) preserves the measure of any Borel set \({\mathcal {B}} \subset g \left( (\Phi ^{-1}\times \text {id})({\mathcal {D}})\right) \). Therefore, consider the sets

$$\begin{aligned} {\mathcal {B}}_k = {\mathcal {B}} \cap \left( \Sigma \times [(k-1)S,kS) \times (0,\infty )\right) , \;\; k\in {\mathbb {N}}. \end{aligned}$$

Then we have

$$\begin{aligned} (\mu _{\Sigma }\otimes \lambda ^2) \left( (\chi \times \text {id})({\mathcal {B}}_k )\right) = (\mu _{\Sigma }\otimes \lambda ^2) \left( {\mathcal {B}}_k\right) , \end{aligned}$$

as depicted in Sect. 2.3. Moreover, the injectivity of f implies the injectivity of \(\chi \times \text {id}\) on \({\mathcal {B}}\) and thus the sets \((\chi \times \text {id})({\mathcal {B}}_k)\) are mutually disjoint. Since \({\mathcal {B}}=\cup _{k\in {\mathbb {N}}} {\mathcal {B}}_k\), this yields \((\mu _{\Sigma }\otimes \lambda ^2) \left( (\chi \times \text {id})({\mathcal {B}})\right) = (\mu _{\Sigma }\otimes \lambda ^2) \left( {\mathcal {B}}\right) \).

Finally, we need to find a function \(W\in {\mathcal {C}}^1_\psi (\Omega \times (0,\infty ))\) such that (3.4) and (3.5) are verified. For this define

$$\begin{aligned} W(\omega _0,E_0) = P(\omega _0)^2 E_0. \end{aligned}$$

Condition (3.4) clearly holds if we take \(\beta =a^2\) and \(\delta = b^2\) with \(a,b\) from (5.5). Moreover, the definition of f yields

$$\begin{aligned} W(f(\omega _0,E_0)) - W(\omega _0,E_0)&= P(\omega _1)^2 E_1 - P(\omega _0)^2 E_0 \\&= P(\omega _0 + \psi (F(\omega _0,E_0)))^2 E_1 - P(\omega _0)^2 E_0 \\&= p_{\omega _0}(F(\omega _0,E_0))^2 E_1 - p_{\omega _0}(0)^2 E_0. \end{aligned}$$

Now let \(t_0=0\) and \((t_1,E_1)= f_{\omega _0}(t_0,E_0)\). Then \(t_1 = F(\omega _0,E_0)\) and thus Lemma 5.4 yields

$$\begin{aligned} W(f(\omega _0,E_0)) - W(\omega _0,E_0) = p_{\omega _0}(t_1)^2 E_1 - p_{\omega _0}(t_0)^2 E_0 \le C \Delta (E_0), \end{aligned}$$

where \(C>0\) is uniform in \(\omega _0\). But then taking \(k(E_0) = C\Delta (E_0)\) proves (3.5), since \(\lim _{r\rightarrow \infty }\Delta (r) = 0\) follows from \(\partial _\psi ^2P \in {\mathcal {C}}(\Omega )\).

Now we have validated all conditions of Theorem 3.1 for the map \(f:{\mathcal {D}} \rightarrow \Omega \times (0,\infty )\). Applying it yields \(\lambda ^2({\hat{E}}_\omega )=0\) for almost all \(\omega \in \Omega \), where \({\hat{E}}_\omega = \{ (t_0,E_0) \in {\hat{D}}_{\omega ,\infty }: \lim _{n\rightarrow \infty } E_n = \infty \}\) and \({\hat{D}}_{\omega ,\infty }\) is defined as in Sect. 3.2. This can be translated back to the original coordinates \((t,v) = (t,\sqrt{2E})\): Let us denote by \(g_\omega \) the ping-pong map \((t_0,v_0)\mapsto (t_1,v_1)\) from (5.3) for the forcing \(p(t)=p_\omega (t)\) and let

$$\begin{aligned} {\tilde{D}}_\omega = {\mathbb {R}}\times (\sqrt{2E^*},\infty ), \;\;\; {\tilde{D}}_{\omega ,1} = {\tilde{D}}_\omega , \;\;\; {\tilde{D}}_{\omega ,n+1} = g_\omega ^{-1}({\tilde{D}}_{\omega ,n}), \;\;\; {\tilde{D}}_{\omega ,\infty } = \bigcap _{n=1}^{\infty } {\tilde{D}}_{\omega ,n}. \end{aligned}$$

Then \(\lambda ^2({\tilde{E}}_\omega ) = 0\) for almost all \(\omega \in \Omega \), where \({\tilde{E}}_\omega = \{ (t_0,v_0) \in {\tilde{D}}_{\omega ,\infty }: \lim _{n\rightarrow \infty } v_n = \infty \}\). Now, consider the escaping set \(E_\omega \) from the theorem and take \((t_0,v_0) \in E_\omega \). Since \(\lim _{n\rightarrow \infty } v_n = \infty \), there is \(n_0\in {\mathbb {N}}\) such that \(v_n > \sqrt{2E^*}\) for all \(n\ge n_0\). But this just means \((t_n,v_n) \in {\tilde{E}}_\omega \) for \(n\ge n_0\). In particular, this implies \(E_\omega \subset \bigcup _{n\in {\mathbb {N}}} g_\omega ^{-n}({\tilde{E}}_\omega )\). Considering that \(g_\omega \) is area-preserving, this proves the assertion: \(\lambda ^2(E_\omega )=0\) for almost all \(\omega \in \Omega \). \(\square \)

Remark 5.5

Let us also point out that the framework developed in the present paper can be applied to many other dynamical systems. A famous example of such a system is given by the so-called Littlewood boundedness problem. There, the question is whether solutions of an equation \(\ddot{x} + G'(x) = p(t)\) stay bounded in the \((x,{\dot{x}})\)-phase space if the potential G satisfies some superlinearity condition. In [24] it is shown that the associated escaping set E typically has Lebesgue measure zero for \(G'(x) = |x |^{\alpha -1}x\) with \(\alpha \ge 3\) and a quasi-periodic forcing function p(t). Indeed, this result can be improved to the almost periodic case in a way analogous to the one presented here for the ping-pong problem.