1 Introduction

A classical problem in the qualitative theory of planar differential equations is the 16th Hilbert Problem, related to the bifurcation of limit cycles (isolated periodic orbits) in a polynomial class of fixed degree. A large number of works in this line have been published so far for several polynomial families of differential equations. A different problem that has aroused interest during the last decades is the study of the isochronicity of a system, as well as its bifurcation of critical periods. These problems consist in analyzing the flatness and the oscillations of the period function of the system, respectively. As a matter of fact, the bifurcations of limit cycles and of critical periods are analogous in terms of the techniques that can be used to approach them. For this reason, in this paper we suggest studying the bifurcation of limit cycles and critical periods simultaneously, a problem that, to the best of our knowledge, has not been formulated yet.

Let us consider a real polynomial system of differential equations in the plane whose origin is a nondegenerate monodromic equilibrium point, so the matrix associated to the differential system evaluated at the origin has zero trace and positive determinant. It is a well-known fact that, by a suitable change of coordinates and time rescaling, it can be written in the form

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=\alpha x-y+X(x,y)=:P(x,y),\\ {\dot{y}}=x+\alpha y+Y(x,y)=:Q(x,y), \end{array}\right. } \end{aligned}$$
(1)

where X and Y are polynomials of degree \(n\ge 2\) which start at least with quadratic monomials and the dot indicates the derivative with respect to time. We can consider system (1) in complex coordinates \((z,w)=(z,{\overline{z}})=(x+{{\,\mathrm{i}\,}}y,x-{{\,\mathrm{i}\,}}y),\) so that it is represented by a single equation,

$$\begin{aligned} {\dot{z}}=(\alpha +{{\,\mathrm{i}\,}}) z+Z(z,w)={\mathcal {Z}}(z,w), \end{aligned}$$
(2)

where Z is a polynomial starting with monomials of at least second degree. For all the results developed in this work, we will consider a fixed transversal section \(\Sigma _x\) defined as the positive x-axis, that is, the semi-axis \(\{(x,0),x>0\}.\) This restriction is necessary for the study of the period function because, when the origin of (1) is not a center, an isochronous section can be constructed, as can be seen in [3]. Hence, for example, the bifurcation of critical periods can depend on the chosen transversal section.

Consider system (1), with \(\alpha =0,\) in polar coordinates,

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{r}}={\mathcal {R}}(r,\varphi ),\\ {\dot{\varphi }}=1+\Theta (r,\varphi ). \end{array}\right. } \end{aligned}$$
(3)

It is straightforward that from this system we can write

$$\begin{aligned} \frac{dr}{d\varphi }=a_2(\varphi )r^2+a_3(\varphi )r^3+O_4(r), \end{aligned}$$
(4)

where we denote by \(O_j(r)\) the terms in r of degree at least j. From here we deduce the solution

$$\begin{aligned} r(\varphi ,\rho )=\rho +v_2(\varphi )\rho ^2+v_3(\varphi )\rho ^3+O_4(\rho ), \end{aligned}$$
(5)

where \(\rho =r(0)\) is the initial condition. Evaluating it at \(2\pi \) we obtain the so-called Poincaré return map:

$$\begin{aligned} r(2\pi ,\rho )=\rho +V_{2k+1}\rho ^{2k+1}+O_{2k+2}(\rho ). \end{aligned}$$
(6)

As we will see in Sect. 2, the first nonvanishing term of the displacement map \(d(\rho )=r(2\pi ,\rho )-\rho \) has odd degree in \(\rho \). See more details in [28].

Solution (5) can be substituted into the second equation of (3) to obtain the time t in terms of the angle \(\varphi .\) Inverting that equation,

$$\begin{aligned} dt=\frac{d\varphi }{1+\Theta (r,\varphi )}=\left( 1+\tau _1(\varphi )r+\tau _2(\varphi )r^2+O_3(r)\right) d\varphi , \end{aligned}$$

and by integrating we obtain the time, the so-called period function, as a Taylor series in \(\rho ,\) that is

$$\begin{aligned} T(\rho )=2\pi +T_{l}\rho ^{l}+O_{l+1}(\rho ). \end{aligned}$$
(7)

When \(\alpha \ne 0,\) the origin of (1) is clearly a hyperbolic focus. When \(\alpha =0\) we are in the center-focus case. Using the expressions of the return and period maps given in (6) and (7), the origin of system (1) satisfies one of the following properties, with respect to the transversal section \(\Sigma _x\):

L:

The origin is an isochronous center, so the system is linearizable and we can say that \(k=\infty \) and \(l=\infty \).

C:

The origin is a center with weakness of order l on the period, so \(k=\infty \) and \(1\le l<\infty .\)

W:

The origin is an isochronous weak focus of order k, so \(1\le k<\infty \) and \(l=\infty .\)

B:

The remaining case is when neither the center nor the isochronicity property holds, so \(1\le k,l<\infty \) and we say that the origin is a bi-weak monodromic equilibrium point of type [k, l].

It is usual to restrict the study of the period function to the class of centers, i.e. systems that remain of type \({\mathbf {C}}\) in the above classification. The study of the monotonicity or of the number of oscillations of such a function is a difficult problem, see for example [6, 11, 26, 36] and the references therein. There are not so many global studies of the period function for general classes of centers. Gavrilov ([18]) proved the existence of at most one critical period for the Hamiltonian potentials \(x''+x+ax^2+bx^3=0,\) a problem started by Chow and Sanders ([8]) in 1986. In 2006, Mañosas and Villadelprat ([25]) proved that the derivative of the period function for Hamiltonian potentials \(x''+x+ax^3+bx^5=0\) has only one zero. Some years later, Grau and Villadelprat ([22]) proved that only two critical periods appear in some cubic homogeneous nonlinearity classes. In those cases, we say that the systems have one and two critical periods, respectively. For centers in the quadratic class, the most relevant study was done by Chicone and Jacobs in 1989 ([7]). Among others, they studied the local problem for the quadratic family, proving that only two critical periods bifurcate from the center equilibrium point. The answer for the global problem remains open. The greatest difficulty to deal with is the fact that the outer boundary of the period annulus changes together with the parameters inside a fixed family, see for example [15, 33]. Hence, the usual perturbation techniques are not useful and new tools need to be developed ([27]). As we have described, the maximal number of zeros of the derivative of the period function under perturbation (in some fixed class) is known as the criticality of the center. Recently, this problem has been studied for low degree polynomial vector fields in the class of reversible centers, see [31, 32].

Similarly to the above problem, we can restrict our analysis to the class of vector fields that remain of type \({\mathbf {W}}\). This is the case associated with the problem of studying isochronous foci, a problem that was addressed for example by Giné in [19,20,21]. In this special class, the cyclicity problem is also of interest and, to the best of our knowledge, it is not completely solved even for low degree vector fields. A special family of systems in this class is the family of so-called rigid (or uniformly isochronous) systems. They satisfy \({{\dot{\varphi }}}=1\) in the usual polar coordinates. Inside this class, quadratics have no limit cycles and there are cubics with at least two ([16]), but there is no answer for the global question about the total number of limit cycles in rigid cubic systems.

When \(\alpha =0,\) the origin of system (1) can be either a center or a weak focus of a certain order. This classical notion of order will be recalled in the next section, where we will generalize it to the bi-weakness property previously defined. This gives rise to the idea of duality in weakness of a nonisochronous focus. In this work we will restrict our analysis to the case in which the transversal section is the horizontal axis.

As we have previously commented and as we will see in Sect. 2, the first nonzero term of the displacement map, defined from (6), always has an odd exponent. The coefficient \(V_{2k+1}\) is known as the kth Lyapunov constant. We also know that the first nonzero coefficient \(T_l\) of the period function (7) is the so-called lth period constant. For centers it corresponds to an even subindex l ([1]) with \(k=\infty .\) However, this is not the case for systems which do not have a center at the origin. For instance, let us consider the quadratic system

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+x^2-\frac{10}{9}xy, \\ {\dot{y}}=x+x^2+4xy-\frac{25}{9}y^2. \end{array}\right. } \end{aligned}$$
(8)

For this system, the origin is bi-weak of type [1, 3], because we can easily find that the first nonzero Lyapunov constant is \(V_3=\pi \) and the first nonzero coefficient of the period function is \(T_3=\pi :\) the origin is not a center, since \(V_3\ne 0,\) and its first nonzero period constant has odd subindex, since \(T_3\ne 0.\) Therefore, if the center property does not hold, then the property that the first nonzero period constant has even subindex does not hold either.
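
These values admit a quick numerical sanity check: integrating one full turn of system (8) around the origin with the polar angle as independent variable and measuring the radial displacement and the elapsed time, the ratio \(d(\rho )/\rho ^3\) should tend to \(V_3=\pi \) as \(\rho \rightarrow 0^+,\) while \((T(\rho )-2\pi )/\rho ^3\) should tend to a nonzero constant corresponding, up to the chosen normalization of the period constants, to \(T_3.\) The following sketch assumes Python with NumPy and SciPy; the helper names are ours.

```python
# Sketch: numerical check of the bi-weak [1, 3] type of system (8); assumes NumPy/SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(phi, u):
    r, t = u
    x, y = r * np.cos(phi), r * np.sin(phi)
    dx = -y + x**2 - 10.0/9.0 * x * y                 # \dot{x} of system (8)
    dy = x + x**2 + 4.0 * x * y - 25.0/9.0 * y**2     # \dot{y} of system (8)
    rdot = (x * dx + y * dy) / r                      # \dot{r}
    phidot = (x * dy - y * dx) / r**2                 # \dot{\varphi}
    return [rdot / phidot, 1.0 / phidot]              # dr/dphi, dt/dphi

for rho in [0.05, 0.025, 0.0125]:
    sol = solve_ivp(rhs, [0.0, 2*np.pi], [rho, 0.0], rtol=1e-12, atol=1e-14)
    r2pi, T = sol.y[0, -1], sol.y[1, -1]
    # d(rho)/rho^3 should tend to V_3 = pi; the second ratio should tend to a
    # nonzero constant (T_3, up to the normalization of the period constants).
    print(rho, (r2pi - rho) / rho**3, (T - 2*np.pi) / rho**3)
```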

The main purpose of this work is to present the problems of simultaneous cyclicity and criticality and of bi-weak equilibrium points of type [k, l], as well as some useful tools to deal with them. The first result we present is related to bi-weak [k, l] foci in Liénard, quadratic, and linear plus cubic homogeneous systems, and our aim is to find the highest [k, l] before either the center or the isochronicity property (or both) occurs. This problem is introduced in Sect. 2 using the Lie bracket, with a method which only finds bi-weak points of type [m, 2m].

Theorem 1

  1. (i)

    There exist cubic Liénard systems of the form

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a_2 x^2 +a_3 x^3,\\ {\dot{y}}=x+b_2 x^2 +b_3 x^3, \end{array}\right. } \end{aligned}$$
    (9)

    where \(a_2,a_3,b_2,b_3\in {\mathbb {R}},\) which have a bi-weak [2, 4] focus at the origin.

  2. (ii)

    There exist quartic Liénard systems of the form

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a_2 x^2 +a_3 x^3+a_4 x^4,\\ {\dot{y}}=x+b_2 x^2 +b_3 x^3+b_4 x^4, \end{array}\right. } \end{aligned}$$
    (10)

    where \(a_2,a_3,a_4,b_2,b_3,b_4\in {\mathbb {R}},\) which have a bi-weak [3, 6] focus at the origin.

  3. (iii)

    There exist quadratic systems of the form

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a_{20} x^2 +a_{11} xy+a_{02} y^2,\\ {\dot{y}}=x+b_{20} x^2 +b_{11} xy+b_{02} y^2, \end{array}\right. } \end{aligned}$$
    (11)

    where \(a_{20},a_{11},a_{02},b_{20},b_{11},b_{02}\in {\mathbb {R}},\) which have a bi-weak [2, 4] focus at the origin.

  4. (iv)

    There exist linear plus cubic homogeneous systems of the form

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a_{30} x^3 +a_{21} x^2y+a_{12} xy^2+a_{03} y^3,\\ {\dot{y}}=x+b_{30} x^3 +b_{21} x^2y+b_{12} xy^2+b_{03} y^3, \end{array}\right. } \end{aligned}$$
    (12)

    where \(a_{30},a_{21},a_{12},a_{03},b_{30},b_{21},b_{12},b_{03}\in {\mathbb {R}},\) which have a bi-weak [3, 6] focus at the origin.

The second result deals with the simultaneous cyclicity and criticality for cubic Liénard systems.

Theorem 2

For the cubic Liénard family (9), adding the trace parameter \(\alpha \) as in (1), we have the following properties with respect to the transversal section \(\Sigma _x\):

  1. (i)

    When the origin is a center, either it is isochronous or it has at most one critical period.

  2. (ii)

    There are at most two limit cycles of small amplitude bifurcating from the origin. Simultaneously, there are at least three critical periods also bifurcating from the origin.

  3. (iii)

    The origin is never an isochronous focus.

This paper is structured as follows. In Sect. 2 we introduce the classical methods to find Lyapunov and period constants as well as a method which uses the Lie bracket. We also discuss the pros and cons of each of the approaches. The presented Lie bracket method is used in Sect. 3 to study the bi-weakness of Liénard, quadratic, and linear plus cubic homogeneous systems in order to prove Theorem 1. Finally, in Sect. 4 we show some properties of several Liénard systems and give the isochronicity conditions for some cases, and finish by proving Theorem 2. We remark, however, that the aim of this work is not to provide any isochronicity characterization, not even for an a priori simple class of systems such as the Liénard family; such a characterization has been obtained very recently in [4].

2 Lyapunov and Period Constants Computation. The Lie Bracket Method

In this section we recall some classical concepts related to the Taylor expansions of the return and period maps. The coefficients of these series, as we will see in Sect. 2.2, allow us to define the Lyapunov and period constants. These values will be computed simultaneously in Sect. 2.3 by using the Lie bracket operator, which is recalled in complex coordinates in Sect. 2.1, together with the characterization of isochronous centers in terms of commutators. See more details in [1, 30].

2.1 The Lie Bracket

In this subsection we present the Lie bracket, a powerful tool to study the isochronicity of a system and to find its Lyapunov and period constants under certain conditions that we will specify. Some parts of this introduction have been directly extracted from the previous work [31]. We define the Lie bracket of two planar vector fields \(F_1=(F_1^{1},F_1^{2})\) and \(F_2=(F_2^{1},F_2^{2})\) in variables \((x,y)\in {\mathbb {K}}^2,\) where \({\mathbb {K}}={\mathbb {R}}\) or \({\mathbb {C}},\) as the new vector field

$$\begin{aligned} \begin{aligned}{} [F_1,F_2]=&\Bigg (\frac{\partial F_1^{1}}{\partial x} F_2^{1}+\frac{\partial F_1^{1}}{\partial y} F_2^{2}-\frac{\partial F_2^{1}}{\partial x} F_1^{1}-\frac{\partial F_2^{1}}{\partial y} F_1^{2},\\&\quad \frac{\partial F_1^{2}}{\partial x} F_2^{1}+\frac{\partial F_1^{2}}{\partial y} F_2^{2}-\frac{\partial F_2^{2}}{\partial x} F_1^{1}-\frac{\partial F_2^{2}}{\partial y} F_1^{2}\Bigg ). \end{aligned} \end{aligned}$$
(13)

Let us now consider the Lie bracket of two complex planar vector fields \({\mathcal {Z}},{\mathcal {U}},\) corresponding to two real vector fields. Observe that, in such a situation, the second components are obtained by complex conjugation of the first ones; thus both vector fields \({\mathcal {Z}}\) and \({\mathcal {U}}\) and their Lie bracket can be described only from their first components, so in some cases we will simply write

$$\begin{aligned}{}[{\mathcal {Z}},{\mathcal {U}}]=\frac{\partial {\mathcal {Z}}}{\partial z} {\mathcal {U}}+\frac{\partial {\mathcal {Z}}}{\partial w} \overline{{\mathcal {U}}}-\frac{\partial {\mathcal {U}}}{\partial z} {\mathcal {Z}}-\frac{\partial {\mathcal {U}}}{\partial w} \overline{{\mathcal {Z}}}. \end{aligned}$$
(14)

This definition also appears in [13].
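
For concreteness, definition (13) can be implemented directly in a computer algebra system. The following sympy sketch (the helper name lie_bracket is ours) checks, as a toy example, that the linear center \((-y,x)\) commutes with the radial field (x, y); in complex coordinates this pair is \({\mathcal {Z}}:{\dot{z}}={{\,\mathrm{i}\,}}z\) and \({\mathcal {U}}:{\dot{z}}=z,\) consistently with the characterization of isochronous centers recalled in Theorem 3 below.

```python
# Minimal sketch of definition (13) with sympy; the helper name is ours.
import sympy as sp

x, y = sp.symbols('x y')

def lie_bracket(F1, F2):
    """Lie bracket (13) of two planar fields F1 = (F1^1, F1^2), F2 = (F2^1, F2^2)."""
    (f11, f12), (f21, f22) = F1, F2
    b1 = sp.diff(f11, x)*f21 + sp.diff(f11, y)*f22 - sp.diff(f21, x)*f11 - sp.diff(f21, y)*f12
    b2 = sp.diff(f12, x)*f21 + sp.diff(f12, y)*f22 - sp.diff(f22, x)*f11 - sp.diff(f22, y)*f12
    return (sp.expand(b1), sp.expand(b2))

# The linear center commutes with the radial field: both components vanish.
print(lie_bracket((-y, x), (x, y)))   # -> (0, 0)
```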

The first proof of the next geometrical equivalence was done by Sabatini in [30].

Theorem 3

([1]) Equation (2) with \(\alpha =0\) has an isochronous center at the origin if and only if there exists \(\dot{z}={\mathcal {U}}(z,w)=z+O(|z,w|^2)\) such that \([{\mathcal {Z}},{\mathcal {U}}]=0.\)

It is a well-known fact that holomorphic systems are isochronous, see for example [1]. The paper [14] also deals with this problem, and gives \({\dot{z}}={{\,\mathrm{i}\,}}f(z)\) as a linearizing system. As a first example of application of Theorem 3, we can prove this same result and see that actually \({\mathcal {U}}:{\dot{z}}=k\, f(z)\) for any \(k\in {\mathbb {C}}\) is a linearizing system. Indeed,

$$\begin{aligned}{}[{\mathcal {Z}},{\mathcal {U}}]= \,&[f(z),k\, f(z)]=\frac{\partial f(z)}{\partial z} k\, f(z)+\frac{\partial f(z)}{\partial w} \overline{k\, f(z)}-\frac{\partial \left( k\, f(z)\right) }{\partial z} f(z)\\ {}&-\frac{\partial \left( k\, f(z)\right) }{\partial w} \overline{f(z)} =\frac{\partial f(z)}{\partial z} k\, f(z)+0-k\frac{\partial f(z)}{\partial z} f(z)-0=0. \end{aligned}$$

2.2 The Classical Method to Compute Lyapunov and Period Constants

We start by presenting the classical method of finding Lyapunov and period constants. Let us write system (1), with \(\alpha =0\), in polar coordinates by performing the usual change \((x,y)=(r\cos \varphi ,r\sin \varphi ),\) and one obtains

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{r}}=\sum \limits _{i=1}^{n-1} U_i (\varphi ) r^{i+1},\\ {\dot{\varphi }}=1+\sum \limits _{i=1}^{n-1} W_i (\varphi )r^{i}, \end{array}\right. } \end{aligned}$$
(15)

where \(U_i (\varphi )\) and \(W_i (\varphi )\) are homogeneous polynomials in \(\sin \varphi \) and \(\cos \varphi \) of degree \(i+2.\) Eliminating time and doing the Taylor series expansion in r we obtain

$$\begin{aligned} \frac{dr}{d\varphi }=\sum _{j=2}^\infty R_j (\varphi ) r^{j}. \end{aligned}$$
(16)

The initial value problem for (16) with the initial condition \((r,\varphi )=(\rho ,0)\) has a unique analytic solution which can be expanded as

$$\begin{aligned} r(\varphi ,\rho )=\rho +\sum _{j=2}^\infty u_j(\varphi )\rho ^j. \end{aligned}$$
(17)

As \(r(0,\rho )=\rho \), it immediately follows that \(u_j(0)=0\) for every j. Let us study the stability near the origin, \(r=0\), by using

$$\begin{aligned} r(2\pi ,\rho )=\rho +V_{m}\,\rho ^{m}+O_{m+1}(\rho ), \end{aligned}$$
(18)

where \(V_{m}:=u_{m}(2\pi )\) is the first coefficient which does not vanish. It is a well-known fact that the first nonidentically zero \(V_m\) has odd m ([5, 28]). As a consequence, expression (18) can be rewritten as

$$\begin{aligned} r(2\pi ,\rho )=\rho +V_{2k+1}\,\rho ^{2k+1}+O_{2k+2}(\rho ). \end{aligned}$$

We will also denote these odd-subscript coefficients as \(L_k:=V_{2k+1};\) these values are the Lyapunov constants. These objects are the key tool to study the center and cyclicity problems of a system of the form (1). Observe that \(r(2\pi ,\rho )\) indicates the radius after a whole loop starting at the initial value \(\rho ,\) and then we define the Poincaré map

$$\begin{aligned} \Pi (\rho ):=\rho +\sum _{j=3}^\infty V_j\,\rho ^j. \end{aligned}$$

Alternatively, this can be written as

$$\begin{aligned} d(\rho ):=\Pi (\rho )-\rho =\sum _{j=3}^\infty V_j\,\rho ^j, \end{aligned}$$
(19)

which is the so-called displacement map. We recall that we are taking a nondegenerate equilibrium point with zero trace, i.e. \(\alpha =0\) in (1).

Now let us illustrate how the period constants can be found, for which the reader is referred to [24, 28, 31]. First, we substitute the power series (17) into the second equation of (15), which yields a differential equation of the form

$$\begin{aligned} \frac{d\varphi }{dt}=1+\sum _{i=1}^{\infty } F_i (\varphi ) \rho ^{i}, \end{aligned}$$

for some trigonometric polynomials \(F_i (\varphi ).\) Rewriting this equation as

$$\begin{aligned} dt=\dfrac{d\varphi }{1+\sum \limits _{i=1}^{\infty } F_i (\varphi ) \rho ^{i}}=\left( 1+\sum _{i=1}^\infty \Psi _i (\varphi ) \rho ^{i} \right) d\varphi \end{aligned}$$

and integrating with respect to \(\varphi \) from 0 to \(2\pi \) yields the period function

$$\begin{aligned} T(\rho ){=}\int _0^{T(\rho )}dt{=}\int _0^{2\pi }\left( 1+\sum _{i=1}^\infty \Psi _i (\varphi ) \rho ^{i} \right) d\varphi =2\pi +\sum _{i=1}^\infty \left( \int _0^{2\pi }\Psi _i (\varphi ) \, d\varphi \right) \rho ^{i}, \nonumber \\ \end{aligned}$$
(20)

where the series converges for \(0\le \varphi \le 2\pi \) and sufficiently small \(\rho \ge 0.\) In the center case, (20) can be seen as the minimal period of the trajectory of (1) passing through \((x,y)=(\rho ,0)\) for \(\rho \ne 0.\) The coefficients \(T_i\) of the period function are then given by the expression

$$\begin{aligned} T_i=\frac{1}{2\pi }\int _0^{2\pi } \Psi _i (\varphi )\,d\varphi , \end{aligned}$$

and the first nonzero \(T_l\) is known as the lth period constant of the system.

If we assume now that system (15) depends on some parameters \(\lambda \in {\mathbb {R}}^d,\) we can follow exactly the same procedure as before, and now we have that both the Lyapunov constants \(V_{2k+1}=u_{2k+1}(2\pi ,\lambda )\) and the period constants

$$\begin{aligned} T_l(\lambda )=\frac{1}{2\pi }\int _0^{2\pi } \Psi _l (\varphi ,\lambda )\,d\varphi \end{aligned}$$
(21)

are polynomials in the parameters \(\lambda \) (see [10, 28]).
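
The whole procedure of this subsection can be mechanized with a computer algebra system. The following sympy sketch (our own helper, not the implementation used for the results of this paper) computes the coefficients \(u_m(2\pi )\) of the return map and the normalized coefficients (21) of the period function for a system with linear part \((-y,x);\) recall that constants obtained by different methods may differ by nonzero multiplicative factors, as remarked in Sect. 4.1.

```python
# Sketch of the classical method of Sect. 2.2 with sympy (our own helper function).
import sympy as sp

x, y, r, phi, rho = sp.symbols('x y r varphi rho', real=True)

def classical_constants(P, Q, order=4):
    """Return ({m: u_m(2*pi)}, {i: T_i}) for x' = P, y' = Q with linear part (-y, x),
    following (15)-(21); series are truncated at the given order in rho."""
    pol = {x: r*sp.cos(phi), y: r*sp.sin(phi)}
    rdot = sp.simplify(sp.cos(phi)*P.subs(pol) + sp.sin(phi)*Q.subs(pol))        # \dot{r}
    phidot = sp.simplify((sp.cos(phi)*Q.subs(pol) - sp.sin(phi)*P.subs(pol))/r)  # \dot{\varphi}
    drdphi = sp.series(rdot/phidot, r, 0, order + 1).removeO()   # equation (16)
    dtdphi = sp.series(1/phidot, r, 0, order + 1).removeO()
    u = {}
    for j in range(2, order + 1):                                # solution (17), term by term
        rser = rho + sum(u[k]*rho**k for k in u)
        cj = sp.expand(drdphi.subs(r, rser)).coeff(rho, j)       # u_j'(phi)
        uj = sp.integrate(cj, phi)
        u[j] = sp.simplify(uj - uj.subs(phi, 0))                 # enforce u_j(0) = 0
    V = {m: sp.simplify(u[m].subs(phi, 2*sp.pi)) for m in u}     # return map coefficients (18)
    Psi = sp.expand(dtdphi.subs(r, rho + sum(u[k]*rho**k for k in u)))
    T = {i: sp.simplify(sp.integrate(Psi.coeff(rho, i), (phi, 0, 2*sp.pi))/(2*sp.pi))
         for i in range(1, order + 1)}                           # normalization of (21)
    return V, T

# Example: the quadratic system (8).
V, T = classical_constants(-y + x**2 - sp.Rational(10, 9)*x*y,
                           x + x**2 + 4*x*y - sp.Rational(25, 9)*y**2, order=3)
print(V, T)   # per the bi-weak [1, 3] type of (8), the first nonzero entries are V[3] and T[3]
```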

Finally, we consider the case with nonzero trace (\(\alpha \ne 0\)). The following result studies how the return map and the period function, together with cyclicity and criticality, change when the considered system has nonzero trace.

Lemma 4

Let us consider a system of the form (1) written as

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=\alpha x-y+a_{20}x^2+a_{11}xy+a_{02}y^2+{\widetilde{X}}(x,y),\\ {\dot{y}}=x+\alpha y+b_{20}x^2+b_{11}xy+b_{02}y^2+{\widetilde{Y}}(x,y), \end{array}\right. } \end{aligned}$$
(22)

where \({\widetilde{X}}\) and \({\widetilde{Y}}\) are polynomials which start at least with cubic monomials, and such that when \(\alpha =0\) the first nonvanishing Lyapunov and period constants are \(V_3\) and \(T_2.\)

  1. (i)

    The displacement map (19) when \(\alpha \ne 0\) has the form

    $$\begin{aligned} d(\rho )=({{\,\mathrm{e}\,}}^{2\pi \alpha }-1)\rho + O_{2}(\rho ). \end{aligned}$$
    (23)

    Moreover, if \(\alpha \, V_3<0\) and \(\alpha \) is small enough, then a unique limit cycle bifurcates from the origin due to a Hopf bifurcation.

  2. (ii)

    The period function (20) when \(\alpha \ne 0\) has the form

    $$\begin{aligned} T(\rho )=2\pi +({{\,\mathrm{e}\,}}^{2\pi \alpha }-1)\widetilde{T_1}(\alpha )\,\rho +O_2(\rho ), \end{aligned}$$
    (24)

    where

    $$\begin{aligned} \widetilde{T_1}(\alpha )=\frac{-\alpha ^3 b_{20}-\alpha ^2 (a_{20} -b_{11})+\alpha (2 a_{11}-2 b_{02}-7 b_{20})-6 a_{02}-3 a_{20}+3 b_{11}}{\alpha ^4+10\alpha ^2+9}. \end{aligned}$$

    Moreover, if \(({{\,\mathrm{e}\,}}^{2\pi \alpha }-1)\,\widetilde{T_1}(\alpha )\, T_2<0\) and \(\alpha \) is small enough, then a unique critical period bifurcates from the origin.

Proof

It is a well-known fact and a straightforward computation that the displacement map of system (22) has the form (23). Thus, as \(\alpha \) and \({{\,\mathrm{e}\,}}^{2\pi \alpha }-1\) have the same sign, it is immediate to see that if \(\alpha \, V_3<0\) then \(d(\rho )\) has an extra change of sign in the coefficients of \(\rho \), so a unique new positive zero can appear and this implies the unfolding of a limit cycle of small amplitude. See more details in [5, 29].

For the case of the period, we can change system (22) to polar coordinates and integrate the angular equation of \(d\varphi /dt\) as we did above. After doing the computations, we obtain that the period function in the case of nonzero trace becomes (24). Notice that in (22) we have only considered the coefficients of the quadratic part because by construction they are the only ones which can actually play a part in the coefficient of \(\rho \) in the period function, in the sense that higher degree coefficients would appear in higher degree terms of the period function. Now if \(({{\,\mathrm{e}\,}}^{2\pi \alpha }-1)\widetilde{T_1}(\alpha )\, T_2<0,\) there is an extra change of sign in the coefficients of the period function, which implies the bifurcation of an extra critical period. \(\square \)
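
Lemma 4 can be illustrated numerically: for a concrete (arbitrary) choice of the quadratic coefficients and a small \(\alpha \ne 0,\) the ratios \(d(\rho )/\rho \) and \((T(\rho )-2\pi )/\rho \) computed from one turn of the flow can be compared with the leading coefficients of (23) and (24). The sketch below assumes NumPy and SciPy, and the coefficient values are hypothetical.

```python
# Numerical illustration of Lemma 4 (sketch; the coefficient values are arbitrary).
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.05
a20, a11, a02 = 0.3, -0.2, 0.1
b20, b11, b02 = -0.4, 0.5, 0.2

def rhs(phi, u):
    r, t = u
    x, y = r*np.cos(phi), r*np.sin(phi)
    dx = alpha*x - y + a20*x**2 + a11*x*y + a02*y**2
    dy = x + alpha*y + b20*x**2 + b11*x*y + b02*y**2
    phidot = (x*dy - y*dx)/r**2
    return [(x*dx + y*dy)/(r*phidot), 1.0/phidot]

rho = 1e-4
sol = solve_ivp(rhs, [0.0, 2*np.pi], [rho, 0.0], rtol=1e-12, atol=1e-16)
d_lin, T_lin = (sol.y[0, -1] - rho)/rho, (sol.y[1, -1] - 2*np.pi)/rho

T1 = (-alpha**3*b20 - alpha**2*(a20 - b11) + alpha*(2*a11 - 2*b02 - 7*b20)
      - 6*a02 - 3*a20 + 3*b11) / (alpha**4 + 10*alpha**2 + 9)
print(d_lin, np.expm1(2*np.pi*alpha))        # compare with the linear coefficient in (23)
print(T_lin, np.expm1(2*np.pi*alpha)*T1)     # compare with the linear coefficient in (24)
```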

2.3 Lyapunov and Period Constants via the Lie Bracket

In the previous section we presented the classical method to compute Lyapunov and period constants. This method involves some integrals (equation (20)), which easily become too difficult to obtain explicitly, so this technique is not useful in many cases. To deal with this inconvenience, a method to find period constants based on the Lie bracket was introduced in [24] and recently used in [31, 32]. However, this technique was only valid when the origin is a center, which is not our case in this paper, since we aim to study cyclicity and criticality simultaneously. Here we present a new approach based on the Lie bracket which allows us to find both Lyapunov and period constants at the same time. This method provides some valuable advantages, even though it also has its limitations.

Let us consider system (2) with \(\alpha =0\). By applying near-identity changes of variables, in the spirit of normal form transformations, such a system can be simplified to

$$\begin{aligned} {\dot{z}}={{\,\mathrm{i}\,}}z+\sum _{j=1}^N ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1})z(zw)^j+O_{2N+3}(z,w), \end{aligned}$$
(25)

where \(N\in {\mathbb {N}}\) is arbitrary, \(O_{2N+3}(z,w)\) denotes a sum of monomials of degree at least \(2N+3\) in z and w, and \( \alpha _{2j+1},\beta _{2j+1}\in {\mathbb {R}}\) (see [1] for more details). Let us consider the Nth order truncation of differential system (25)

$$\begin{aligned} {\dot{z}}={{\,\mathrm{i}\,}}z+\sum _{j=1}^N ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1})z(zw)^j. \end{aligned}$$
(26)

Proposition 5

The truncated system (26) in polar coordinates takes the form

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{r}}=\sum \limits _{j=1}^N \alpha _{2j+1} r^{2j+1},\\ {\dot{\varphi }}=1+\sum \limits _{j=1}^N \beta _{2j+1} r^{2j}. \end{array}\right. } \end{aligned}$$
(27)

Proof

The proof is straightforward by applying the change of variables

$$\begin{aligned} (r,\varphi )\mapsto \left( z=r {{\,\mathrm{e}\,}}^{{{\,\mathrm{i}\,}}\varphi },w=r {{\,\mathrm{e}\,}}^{-{{\,\mathrm{i}\,}}\varphi }\right) \end{aligned}$$

to differential equation (27). Indeed,

$$\begin{aligned} {\dot{z}}=\,&{\dot{r}}{{\,\mathrm{e}\,}}^{{{\,\mathrm{i}\,}}\varphi }+{{\,\mathrm{i}\,}}r {\dot{\varphi }}{{\,\mathrm{e}\,}}^{{{\,\mathrm{i}\,}}\varphi }=\left( \sum \limits _{j=1}^N \alpha _{2j+1} r^{2j+1}\right) {{\,\mathrm{e}\,}}^{{{\,\mathrm{i}\,}}\varphi }+{{\,\mathrm{i}\,}}r {{\,\mathrm{e}\,}}^{{{\,\mathrm{i}\,}}\varphi }\left( 1+\sum \limits _{j=1}^N \beta _{2j+1} r^{2j}\right) \\ =\,&z\left( \sum \limits _{j=1}^N \alpha _{2j+1} r^{2j}\right) +{{\,\mathrm{i}\,}}z\left( 1+\sum \limits _{j=1}^N \beta _{2j+1} r^{2j}\right) ={{\,\mathrm{i}\,}}z+\sum _{j=1}^N ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1})z(zw)^j, \end{aligned}$$

where we have used the trivial relation \(r^2=zw.\) \(\square \)

The next result shows how we can compute the Lyapunov and period constants of (26) by using the Lie bracket. This result was previously obtained in [24], but its proof is included here for completeness.

Theorem 6

Let us denote by \({\mathcal {Z}}\) the vector field (26), and consider system \({\mathcal {U}}\) defined by the differential equation \({\dot{z}}=z+\sum \nolimits _{k=2N+2}^\infty \sum \nolimits _{l=0}^k A_{k-l,l}z^{k-l}w^l.\) Then the Lie bracket between \({\mathcal {Z}}\) and \({\mathcal {U}}\) has the form

$$\begin{aligned}{}[{\mathcal {Z}},{\mathcal {U}}]=\left( \sum \limits _{j=1}^N p_{2j+1} z(zw)^j+O_{2N+2}(z,w),\sum \limits _{j=1}^N {\overline{p}}_{2j+1} w(zw)^j+O_{2N+2}(z,w)\right) , \end{aligned}$$

where \(O_{2N+2}(z,w)\) denotes a sum of monomials of degree at least \(2N+2\) in z and w, and

$$\begin{aligned} V_{2j+1}=&\frac{1}{2j} {{\,\mathrm{Re}\,}}(p_{2j+1})=\frac{p_{2j+1}+{\overline{p}}_{2j+1}}{4j}=\alpha _{2j+1}, \end{aligned}$$
(28)
$$\begin{aligned} T_{2j}=&\frac{1}{2j} {{\,\mathrm{Im}\,}}(p_{2j+1})=\frac{p_{2j+1}-{\overline{p}}_{2j+1}}{4j{{\,\mathrm{i}\,}}}=\beta _{2j+1}, \end{aligned}$$
(29)

are the coefficients of the return map and the period function of system (26), respectively.

Proof

Observe that, by Proposition 5, the quantities \(\alpha _{2j+1}\) and \(\beta _{2j+1}\) from (26) are in fact the Lyapunov and period constants, respectively, of the system in this normal form.

Let us compute the Lie bracket between \({\mathcal {Z}}\) and \({\mathcal {U}}.\) Actually, we only need to find the first component \([{\mathcal {Z}},{\mathcal {U}}]^1\) of such operation, since its second component is its complex conjugate.

$$\begin{aligned}{}[{\mathcal {Z}},{\mathcal {U}}]^1=&\left( {{\,\mathrm{i}\,}}+\sum _{j=1}^N ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1})(j+1)(zw)^j\right) \left( z+\sum \limits _{k=2N+2}^\infty \sum \limits _{l=0}^k A_{k-l,l}z^{k-l}w^l\right) \\&+\left( \sum _{j=1}^N ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1})jz^{j+1}w^{j-1} \right) \left( w+\sum \limits _{k=2N+2}^\infty \sum \limits _{l=0}^k {\overline{A}}_{k-l,l}z^l w^{k-l}\right) \\&-\left( 1+\sum \limits _{k=2N+2}^\infty \sum \limits _{l=0}^{k-1} A_{k-l,l}(k-l)z^{k-l-1}w^l\right) \left( {{\,\mathrm{i}\,}}z+\sum _{j=1}^N ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1})z(zw)^j\right) \\&-\left( \sum \limits _{k=2N+2}^\infty \sum \limits _{l=1}^k A_{k-l,l}lz^{k-l}w^{l-1}\right) \left( -{{\,\mathrm{i}\,}}w+\sum _{j=1}^N ( \alpha _{2j+1}-{{\,\mathrm{i}\,}}\beta _{2j+1})w(zw)^j\right) \\ =&\sum _{j=1}^N 2j ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1})z(zw)^j+O_{2N+2}(z,w)=:\sum \limits _{j=1}^N p_{2j+1} z(zw)^j+O_{2N+2}(z,w), \end{aligned}$$

and the second component is

$$\begin{aligned}{}[{\mathcal {Z}},{\mathcal {U}}]^2= & {} \sum _{j=1}^N 2j ( \alpha _{2j+1}-{{\,\mathrm{i}\,}}\beta _{2j+1})w(zw)^j+O_{2N+2}(z,w)\\= & {} \sum \limits _{j=1}^N {\overline{p}}_{2j+1} w(zw)^j+O_{2N+2}(z,w). \end{aligned}$$

Notice that we have denoted by \(p_{2j+1}\) the coefficient of the Lie bracket with degree \(2j+1\), and it has the expression \(p_{2j+1}=2j ( \alpha _{2j+1}+{{\,\mathrm{i}\,}}\beta _{2j+1}).\) Finally, as we observed that \(\alpha _{2j+1}\) and \(\beta _{2j+1}\) are the Lyapunov and period constants of system (26), formulas (28) and (29) follow. \(\square \)

Remark 7

It is worth remarking that this Lie bracket method does not allow one to obtain the general expression of the Lyapunov and period constants \(V_{2j+1}\) and \(T_{2j},\) but only their expressions under the conditions \(V_{2i+1}=T_{2i}=0\) for every \(i<j.\) In this sense, and with a slight abuse of notation, when using the Lie bracket method throughout this paper we will denote by \(V_{2j+1}\) the jth Lyapunov constant assuming that \(V_{2i+1}=T_{2i}=0\) for every \(i<j,\) and by \(T_{2j}\) the jth period constant under the same assumption.

We have seen that, for a system in the normal form (26), one can use the Lie bracket to find the corresponding Lyapunov and period constants. The last step is to prove that, in fact, the system does not necessarily have to be in such a normal form. In other words, we will show that the method is equally valid if a change of variables is performed on the system, and this will allow us to apply the Lie bracket method to general systems of the form (2) and not only to those which are in normal form. A similar approach can also be seen in [2].
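
The following sympy sketch shows how such a computation can be organized in practice for a system written as (2) with \(\alpha =0\): a commutator ansatz \({\mathcal {U}}\) is built degree by degree, the non-resonant coefficients are solved for, and the resonant obstruction of degree \(2j+1\) gives \(p_{2j+1}\) and hence, through (28) and (29) and subject to the caveat of Remark 7, \(V_{2j+1}\) and \(T_{2j}.\) All helper names are ours, and the constants produced in this way may differ from those obtained by other methods by a nonzero multiplicative factor (cf. Sect. 4.1).

```python
# Sketch of the Lie bracket method of Theorem 6 / Remark 7 (our own implementation).
import sympy as sp

z, w = sp.symbols('z w', real=True)   # formal variables; by convention bar(z) = w

def bar(F):
    """'Conjugate' component: conjugate the coefficients and swap z <-> w."""
    return sp.conjugate(F).subs([(z, w), (w, z)], simultaneous=True)

def bracket1(Zc, Uc):
    """First component (14) of the Lie bracket of the fields with first components Zc, Uc."""
    return (sp.diff(Zc, z)*Uc + sp.diff(Zc, w)*bar(Uc)
            - sp.diff(Uc, z)*Zc - sp.diff(Uc, w)*bar(Zc))

def lie_bracket_constants(Zc, jmax):
    """For Zc = i*z + O(|z,w|^2), return [(V_3, T_2), (V_5, T_4), ...] up to j = jmax,
    each pair being valid only when all the previous ones vanish (Remark 7)."""
    U, out = z, []
    for k in range(2, 2*jmax + 2):
        Ak = sp.symbols(f'A{k}_0:{k + 1}')
        Uk = sum(Ak[l]*z**(k - l)*w**l for l in range(k + 1))
        B = sp.expand(bracket1(Zc, U + Uk))
        eqs, unknowns, resonant = [], [], None
        for l in range(k + 1):
            c = B.coeff(z, k - l).coeff(w, l)
            if k - l == l + 1:                      # resonant monomial z^{j+1} w^j
                resonant = c
            else:
                eqs.append(c)
                unknowns.append(Ak[l])
        sol = sp.solve(eqs, unknowns, dict=True)[0]
        U += Uk.subs(sol)
        if resonant is not None:
            j = (k - 1)//2
            p = sp.expand(resonant.subs(sol))       # p_{2j+1}
            out.append((sp.simplify(sp.re(p)/(2*j)),   # V_{2j+1}, cf. (28)
                        sp.simplify(sp.im(p)/(2*j))))  # T_{2j},   cf. (29)
    return out

# Example: the cubic Liénard system (9) in complex form, with real parameters.
a2, a3, b2, b3 = sp.symbols('a2 a3 b2 b3', real=True)
Zc = sp.I*z + (a2 + sp.I*b2)*((z + w)/2)**2 + (a3 + sp.I*b3)*((z + w)/2)**3
# The first pair should agree with (41) up to a nonzero constant factor.
print(lie_bracket_constants(sp.expand(Zc), 1))
```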

Before the main result we present the following lemma.

Lemma 8

Let \(\phi =(\phi _1,\phi _2):{\mathbb {K}}^2\rightarrow {\mathbb {K}}^2\) be a \({\mathcal {C}}^2\)-diffeomorphism which maps \((u,v)\in {\mathbb {K}}^2,\) where \({\mathbb {K}}={\mathbb {R}}\) or \({\mathbb {C}},\) to new variables \((x,y)=\phi (u,v)=(\phi _1(u,v),\phi _2(u,v))\in {\mathbb {K}}^2,\) and whose inverse is \((u,v)=\phi ^{-1}(x,y)=(\phi _1^{-1}(x,y),\phi _2^{-1}(x,y)).\) Then the following equivalences hold:

$$\begin{aligned} \dfrac{\partial \phi _1^{-1}}{\partial x}(x,y)\cdot \dfrac{\partial \phi _1}{\partial u}(\phi ^{-1}(x,y))+\dfrac{\partial \phi _1^{-1}}{\partial y}(x,y)\cdot \dfrac{\partial \phi _2}{\partial u}(\phi ^{-1}(x,y))&=1, \end{aligned}$$
(30)
$$\begin{aligned} \dfrac{\partial \phi _2^{-1}}{\partial x}(x,y)\cdot \dfrac{\partial \phi _1}{\partial v}(\phi ^{-1}(x,y))+\dfrac{\partial \phi _2^{-1}}{\partial y}(x,y)\cdot \dfrac{\partial \phi _2}{\partial v}(\phi ^{-1}(x,y))&=1, \end{aligned}$$
(31)
$$\begin{aligned} \dfrac{\partial \phi _1^{-1}}{\partial x}(x,y)\cdot \dfrac{\partial \phi _1}{\partial v}(\phi ^{-1}(x,y))+\dfrac{\partial \phi _1^{-1}}{\partial y}(x,y)\cdot \dfrac{\partial \phi _2}{\partial v}(\phi ^{-1}(x,y))&=0, \end{aligned}$$
(32)
$$\begin{aligned} \dfrac{\partial \phi _2^{-1}}{\partial x}(x,y)\cdot \dfrac{\partial \phi _1}{\partial u}(\phi ^{-1}(x,y))+\dfrac{\partial \phi _2^{-1}}{\partial y}(x,y)\cdot \dfrac{\partial \phi _2}{\partial u}(\phi ^{-1}(x,y))&=0. \end{aligned}$$
(33)

Proof

The proof is straightforward by applying the chain rule for two variable functions. For (30), we use that \(\phi _1(u,v)=x\) and \(\phi _2(u,v)=y\) and the chain rule to rewrite it as

$$\begin{aligned} \dfrac{\partial \phi _1^{-1}}{\partial x}(x,y)\cdot \dfrac{\partial x}{\partial u}(\phi ^{-1}(x,y)){+}\dfrac{\partial \phi _1^{-1}}{\partial y}(x,y)\cdot \dfrac{\partial y}{\partial u}(\phi ^{{-}1}(x,y)){=}\dfrac{\partial \phi _1^{-1}(x,y)}{\partial u}{=}\dfrac{\partial u}{\partial u}{=}1. \end{aligned}$$

For (31), (32), and (33), we follow an analogous procedure and we obtain that they are equivalent to \(\partial v/\partial v=1\), \(\partial u/\partial v=0\), and \(\partial v/\partial u=0\), respectively. \(\square \)

Theorem 9

Let us consider two vector fields \(F_1=(F_1^{1},F_1^{2})\) and \(F_2=(F_2^{1},F_2^{2})\) in variables \((u,v)\in {\mathbb {K}}^2,\) where \({\mathbb {K}}={\mathbb {R}}\) or \({\mathbb {C}}.\) Let \(\phi =(\phi _1,\phi _2):{\mathbb {K}}^2\rightarrow {\mathbb {K}}^2\) be a \({\mathcal {C}}^2\)-diffeomorphism which maps \((u,v)\in {\mathbb {K}}^2\) to new variables \((x,y)=\phi (u,v)=(\phi _1(u,v),\phi _2(u,v))\in {\mathbb {K}}^2,\) and whose inverse is \((u,v)=\phi ^{-1}(x,y)=(\phi _1^{-1}(x,y),\phi _2^{-1}(x,y)).\) Let us denote by \(G_1=(G_1^{1},G_1^{2})\) and \(G_2=(G_2^{1},G_2^{2})\) the vector fields \(F_1\) and \(F_2,\) respectively, in variables \((x,y)\) after applying the change of variables \((x,y)=\phi (u,v).\) Then the following equivalence between the Lie brackets of \(F_1,F_2\) and \(G_1,G_2\) holds:

$$\begin{aligned}{}[G_1,G_2]^T=J_\phi (\phi ^{-1}(x,y))\cdot [F_1,F_2]^T_{\phi ^{-1}(x,y)}, \end{aligned}$$
(34)

where \(J_\phi (\phi ^{-1}(x,y))\) is the Jacobian matrix of \(\phi (u,v)\) evaluated at \((u,v)=\phi ^{-1}(x,y),\) \([F_1,F_2]_{\phi ^{-1}(x,y)}\) denotes the Lie bracket between \(F_1\) and \(F_2\) also evaluated at \((u,v)=\phi ^{-1}(x,y),\) and the superscript \(\,^T\) denotes transposition.

Proof

Let us start by observing that equivalence (34) can be rewritten in matrix form as

$$\begin{aligned} \begin{pmatrix} [G_1,G_2]^1\\ [G_1,G_2]^2 \end{pmatrix} = \begin{pmatrix} \dfrac{\partial \phi _1}{\partial u} (\phi ^{-1}(x,y))&{} \dfrac{\partial \phi _1}{\partial v} (\phi ^{-1}(x,y))\\ \dfrac{\partial \phi _2}{\partial u} (\phi ^{-1}(x,y)) &{} \dfrac{\partial \phi _2}{\partial v} (\phi ^{-1}(x,y)) \end{pmatrix} \cdot \begin{pmatrix} [F_1,F_2]^1_{\phi ^{-1}(x,y)}\\ [F_1,F_2]^2_{\phi ^{-1}(x,y)} \end{pmatrix} .\qquad \end{aligned}$$
(35)

We will prove the first component of equivalence (35), since the second one can be analogously seen. Then, our aim is to show that

$$\begin{aligned}{}[G_1,G_2]^1= & {} \dfrac{\partial \phi _1}{\partial u} (\phi ^{{-}1}(x,y))\cdot [F_1,F_2]^1_{\phi ^{{-}1}(x,y)}\nonumber \\&+ \dfrac{\partial \phi _1}{\partial v} (\phi ^{-1}(x,y))\cdot [F_1,F_2]^2_{\phi ^{-1}(x,y)}. \end{aligned}$$
(36)

The idea of the proof is to develop the expression of \([G_1,G_2]^1\) when performing the change of variables \(\phi \) and to check that it satisfies (36). First we will see how \(G_1\) and \(G_2\) can be expressed in terms of \(F_1\) and \(F_2.\) Let us denote by \({\widetilde{G}}_i^j(u,v)\) the vector field such that \(G_i^j(x,y)=({\widetilde{G}}_i^j\circ \phi ^{-1})(x,y)\) for \(i,j=1,2.\) As vector fields \(G_i^1\) and \(G_i^2\) correspond respectively to \({\dot{x}}\) and \({\dot{y}}\), we can take the derivative of \((x,y)=\phi (u,v)\) with respect to time and apply the chain rule to see that

$$\begin{aligned} {\widetilde{G}}_i^j=\dfrac{\partial \phi _j}{\partial u} F_i^1+\dfrac{\partial \phi _j}{\partial v} F_i^2, \end{aligned}$$
(37)

since vector fields \(F_i^1\) and \(F_i^2\) correspond respectively to \({\dot{u}}\) and \({\dot{v}}.\)

The first partial derivative we need to find for \([G_1,G_2]^1\) is \(\partial G_1^1/\partial x\) and, by applying the chain rule, it is

$$\begin{aligned} \dfrac{\partial G^1_1}{\partial x}(x,y)= & {} \dfrac{\partial ({\widetilde{G}}_1^1\circ \phi ^{-1})}{\partial x}(x,y)=\dfrac{\partial {\widetilde{G}}_1^1}{\partial u}(\phi ^{{-}1}(x,y))\cdot \dfrac{\partial u}{\partial x}\nonumber \\&+\dfrac{\partial {\widetilde{G}}_1^1}{\partial v}(\phi ^{-1}(x,y))\cdot \dfrac{\partial v}{\partial x}.\nonumber \\ \end{aligned}$$
(38)

We can find the expression for \(\partial {\widetilde{G}}_1^1/\partial u\) by taking the derivative of (37) with respect to u, and we get

$$\begin{aligned} \dfrac{\partial {\widetilde{G}}_1^1}{\partial u}=\dfrac{\partial }{\partial u}\left( \dfrac{\partial \phi _1}{\partial u} F_1^1+\dfrac{\partial \phi _1}{\partial v} F_1^2\right) =\dfrac{\partial ^2 \phi _1}{\partial u^2} F_1^1+\dfrac{\partial \phi _1}{\partial u} \dfrac{\partial F_1^1}{\partial u}+\dfrac{\partial ^2 \phi _1}{\partial u\partial v} F_1^2+\dfrac{\partial \phi _1}{\partial v} \dfrac{\partial F_1^2}{\partial u}. \end{aligned}$$

To find \(\partial {\widetilde{G}}_1^1/\partial v\) we proceed analogously. Now, by substituting these two expressions in (38), we get \(\partial G_1^1/\partial x\) in terms of the derivatives of \(F_1\) and \(F_2\) as we wanted. The same procedure can be applied to \(\partial G_1^1/\partial y,\) \(\partial G_2^1/\partial x,\) and \(\partial G_2^1/\partial y,\) and then we have all the terms of \([G_1,G_2]^1\) expressed with \(F_1\) and \(F_2\).

For the sake of compactness, from now on we will use the following notation to write this expression of \([G_1,G_2]^1\):

$$\begin{aligned} \phi _{ju}=\dfrac{\partial \phi _j}{\partial u}(\phi ^{-1}(x,y)),\quad \phi _{jv}=\dfrac{\partial \phi _j}{\partial v}(\phi ^{-1}(x,y)),\quad \phi _{juu}=\dfrac{\partial ^2 \phi _j}{\partial u^2}(\phi ^{-1}(x,y)),\\ \phi _{juv}=\dfrac{\partial ^2 \phi _j}{\partial u\partial v}(\phi ^{-1}(x,y))=\dfrac{\partial ^2 \phi _j}{\partial v\partial u}(\phi ^{-1}(x,y)),\quad \phi _{jvv}=\dfrac{\partial ^2 \phi _j}{\partial v^2}(\phi ^{-1}(x,y)). \end{aligned}$$

Also, with a slight abuse of notation, we will denote the derivatives \((\partial F_i^j/\partial u) (\phi ^{{-}1}(x,y))\) and \((\partial F_i^j/\partial v) (\phi ^{-1}(x,y))\) simply as \(\partial F_i^j/\partial u\) and \(\partial F_i^j/\partial v,\) respectively.

The expression of \([G_1,G_2]^1\) can then be written as

$$\begin{aligned} \begin{aligned}&{[}G_1,G_2]^1=\frac{\partial G_1^{1}}{\partial x} G_2^{1}+\frac{\partial G_1^{1}}{\partial y} G_2^{2}-\frac{\partial G_2^{1}}{\partial x} G_1^{1}-\frac{\partial G_2^{1}}{\partial y} G_1^{2}\\&=\left( \left( \phi _{1uu} F_1^1+\phi _{1u}\frac{\partial F_1^1}{\partial u}+\phi _{1uv}F_1^2+\phi _{1v}\frac{\partial F_1^2}{\partial u}\right) \dfrac{\partial u}{\partial x}\right. \\&\quad \left. +\left( \phi _{1uv} F_1^1+\phi _{1u}\frac{\partial F_1^1}{\partial v}+\phi _{1vv}F_1^2+\phi _{1v}\frac{\partial F_1^2}{\partial v}\right) \dfrac{\partial v}{\partial x}\right) \left( \phi _{1u}F_2^1+\phi _{1v}F_2^2\right) \\&\quad +\left( \left( \phi _{1uu} F_1^1+\phi _{1u}\frac{\partial F_1^1}{\partial u}+\phi _{1uv}F_1^2+\phi _{1v}\frac{\partial F_1^2}{\partial u}\right) \dfrac{\partial u}{\partial y}\right. \\&\quad \left. +\left( \phi _{1uv} F_1^1+\phi _{1u}\frac{\partial F_1^1}{\partial v}+\phi _{1vv}F_1^2+\phi _{1v}\frac{\partial F_1^2}{\partial v}\right) \dfrac{\partial v}{\partial y}\right) \left( \phi _{2u}F_2^1+\phi _{2v}F_2^2\right) \\&\quad -\left( \left( \phi _{1uu} F_2^1+\phi _{1u}\frac{\partial F_2^1}{\partial u}+\phi _{1uv}F_2^2+\phi _{1v}\frac{\partial F_2^2}{\partial u}\right) \dfrac{\partial u}{\partial x}\right. \\&\quad \left. +\left( \phi _{1uv} F_2^1+\phi _{1u}\frac{\partial F_2^1}{\partial v}+\phi _{1vv}F_2^2+\phi _{1v}\frac{\partial F_2^2}{\partial v}\right) \dfrac{\partial v}{\partial x}\right) \left( \phi _{1u}F_1^1+\phi _{1v}F_1^2\right) \\&\quad -\left( \left( \phi _{1uu} F_2^1+\phi _{1u}\frac{\partial F_2^1}{\partial u}+\phi _{1uv}F_2^2+\phi _{1v}\frac{\partial F_2^2}{\partial u}\right) \dfrac{\partial u}{\partial y}\right. \\&\quad \left. +\left( \phi _{1uv} F_2^1+\phi _{1u}\frac{\partial F_2^1}{\partial v}+\phi _{1vv}F_2^2+\phi _{1v}\frac{\partial F_2^2}{\partial v}\right) \dfrac{\partial v}{\partial y}\right) \left( \phi _{2u}F_1^1+\phi _{2v}F_1^2\right) . \end{aligned}\nonumber \\ \end{aligned}$$
(39)

This expression can be expanded, and a long sum of terms in products of \(F_1^1,F_1^2,F_2^1,F_2^2\) and their derivatives is obtained. Even though we will not show the complete expansion here due to length reasons, we will split it into several parts and see what happens in each case.

First, it is straightforward to see that the terms with \(F_1^1 F_2^1\) cancel with each other, as well as those terms with \(F_1^2 F_2^2.\) Now let us focus on the terms with \(F_1^1 F_2^2\) and \(F_1^2 F_2^1.\) The coefficient of \(F_1^1 F_2^2\) in (39) is

$$\begin{aligned}&\left( \left( \dfrac{\partial v}{\partial x}\phi _{1v}+\dfrac{\partial v}{\partial y}\phi _{2v}\right) - \left( \dfrac{\partial u}{\partial x}\phi _{1u}+\dfrac{\partial u}{\partial y}\phi _{2u}\right) \right) \phi _{1uv}\\&\quad +\left( \dfrac{\partial u}{\partial x}\phi _{1v}+\dfrac{\partial u}{\partial y}\phi _{2v}\right) \phi _{1uu}-\left( \dfrac{\partial v}{\partial x}\phi _{1u}+\dfrac{\partial v}{\partial y}\phi _{2u}\right) \phi _{1vv}. \end{aligned}$$

We can show that this expression is zero by applying Lemma 8. In particular, the coefficient of \(\phi _{1uv}\) equals zero due to (30) and (31), and the coefficients of \(\phi _{1uu}\) and \(\phi _{1vv}\) also equal zero due to (32) and (33), respectively. Therefore, the term with \(F_1^1 F_2^2\) vanishes in (39). Analogously, by applying Lemma 8, we can see that the coefficient of \(F_1^2 F_2^1\) also vanishes.

Now let us see how the Lie bracket of \(F_1\) and \(F_2\) appears in (39). One can see that the coefficient of \(\dfrac{\partial F_1^1}{\partial u}F_2^1\) in this expression is

$$\begin{aligned} \phi _{1u}\left( \dfrac{\partial u}{\partial x}\phi _{1u}+\dfrac{\partial u}{\partial y}\phi _{2u}\right) =\phi _{1u}, \end{aligned}$$

where we have applied (30) from Lemma 8. Analogously, applying both (30) and (31) from such lemma, we see that the coefficients of \(\dfrac{\partial F_1^{1}}{\partial v} F_2^{2},\) \(\dfrac{\partial F_2^{1}}{\partial u} F_1^{1},\) and \(\dfrac{\partial F_2^{1}}{\partial v} F_1^{2}\) in (39) are respectively \(\phi _{1u},-\phi _{1u},\) and \(-\phi _{1u}\). Therefore, as these are the terms of the first component of the Lie bracket according to (13), we obtain that those terms altogether equal \(\phi _{1u}[F_1,F_2]^1.\) The same procedure can be followed with terms \(\dfrac{\partial F_1^{2}}{\partial u} F_2^{1},\) \(\dfrac{\partial F_1^{2}}{\partial v} F_2^{2},\) \(\dfrac{\partial F_2^{2}}{\partial u} F_1^{1},\) and \(\dfrac{\partial F_2^{2}}{\partial v} F_1^{2}\) from (39), and we see that these terms equal \(\phi _{1v}[F_1,F_2]^2.\)

After all this, expression (39) can be rewritten as

$$\begin{aligned} \begin{aligned} {[}G_1,G_2]^1&=\phi _{1u}[F_1,F_2]^1+\phi _{1v}[F_1,F_2]^2\\&\quad +\left( \dfrac{\partial u}{\partial x}\phi _{1v}+\dfrac{\partial u}{\partial y}\phi _{2v}\right) \left( \phi _{1u}\dfrac{\partial F_1^1}{\partial u}F_2^2+\phi _{1v}\dfrac{\partial F_1^2}{\partial u}F_2^2-\phi _{1u}\dfrac{\partial F_2^1}{\partial u}F_1^2-\phi _{1v}\dfrac{\partial F_2^2}{\partial u}F_1^2 \right) \\&\quad +\left( \dfrac{\partial v}{\partial x}\phi _{1u}+\dfrac{\partial v}{\partial y}\phi _{2u}\right) \left( \phi _{1u}\dfrac{\partial F_1^1}{\partial v}F_2^1+\phi _{1v}\dfrac{\partial F_1^2}{\partial v}F_2^1-\phi _{1u}\dfrac{\partial F_2^1}{\partial v}F_1^1-\phi _{1v}\dfrac{\partial F_2^2}{\partial v}F_1^1 \right) . \end{aligned}\nonumber \\ \end{aligned}$$
(40)

Finally, applying (32) and (33) on the first factor in the second and third lines in (40) and seeing that they are zero, we prove relation (36) for the first component of \([G_1,G_2].\)

To prove the relation for the second component, that is,

$$\begin{aligned}{}[G_1,G_2]^2=\dfrac{\partial \phi _2}{\partial u} (\phi ^{-1}(x,y))\cdot [F_1,F_2]^1_{\phi ^{-1}(x,y)}+ \dfrac{\partial \phi _2}{\partial v} (\phi ^{-1}(x,y))\cdot [F_1,F_2]^2_{\phi ^{-1}(x,y)}, \end{aligned}$$

we follow an analogous procedure and see that

$$\begin{aligned}{}[G_1,G_2]^2=\phi _{2u}[F_1,F_2]^1+\phi _{2v}[F_1,F_2]^2, \end{aligned}$$

and the proof finishes. \(\square \)
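
Theorem 9 is easy to test symbolically on examples: one can pick two polynomial vector fields and a polynomial diffeomorphism, push the fields forward, and compare the bracket of the transformed fields with the transformed bracket. The following sympy sketch uses the hypothetical change of variables \((x,y)=\phi (u,v)=(u,v+u^2)\) and two arbitrary polynomial fields.

```python
# Symbolic check (sketch) of relation (34) for a concrete change of variables.
import sympy as sp

u, v, x, y = sp.symbols('u v x y')

def bracket(F1, F2, a, b):
    """Lie bracket (13) in the variables (a, b)."""
    (p1, p2), (q1, q2) = F1, F2
    return (sp.diff(p1, a)*q1 + sp.diff(p1, b)*q2 - sp.diff(q1, a)*p1 - sp.diff(q1, b)*p2,
            sp.diff(p2, a)*q1 + sp.diff(p2, b)*q2 - sp.diff(q2, a)*p1 - sp.diff(q2, b)*p2)

# Two arbitrary polynomial fields in (u, v) and the diffeomorphism (x, y) = (u, v + u^2).
F1 = (-v + u**2, u)
F2 = (u*v, v**2 - u)
phi = sp.Matrix([u, v + u**2])
phi_inv = {u: x, v: y - x**2}                       # inverse change of variables
J = phi.jacobian([u, v])

def pushforward(F):
    """Field in the (x, y) variables: J_phi * F, written at (u, v) = phi^{-1}(x, y)."""
    return tuple(sp.expand(g.subs(phi_inv)) for g in (J * sp.Matrix(F)))

G1, G2 = pushforward(F1), pushforward(F2)
lhs = bracket(G1, G2, x, y)
rhs = tuple(sp.expand(c.subs(phi_inv)) for c in (J * sp.Matrix(bracket(F1, F2, u, v))))
print([sp.simplify(l - r) for l, r in zip(lhs, rhs)])   # -> [0, 0], as relation (34) predicts
```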

Now let us consider the Lie bracket of two planar real vector fields written (abusing notation) in complex coordinates as \((F_1,{\overline{F}}_1)\) and \((F_2,{\overline{F}}_2)\), and the Lie bracket of new vector fields \((G_1,{\overline{G}}_1)\) and \((G_2,{\overline{G}}_2)\) after a change of variables \((\phi ,{\overline{\phi }})\) on \(F_1\) and \(F_2,\) respectively. As we already stated, these Lie brackets can be described only from their first components, as the second ones are the complex conjugates, so in such a case we will denote the first components of the Lie brackets simply by \([F_1,F_2]\) and \([G_1,G_2],\) with a slight abuse of notation as in (14). Therefore, relation (34) from the theorem can be written, in this case, as

$$\begin{aligned}{}[G_1,G_2]=\phi _{u}[F_1,F_2]+{\overline{\phi }}_{u}\overline{[F_1,F_2]}. \end{aligned}$$

With this relation, the following corollary of Theorem 9 is straightforward.

Corollary 10

Let \((F_1,{\overline{F}}_1)\) and \((F_2,{\overline{F}}_2)\) be two planar complex vector fields which correspond to two real vector fields in the plane, and let \((G_1,{\overline{G}}_1)\) and \((G_2,{\overline{G}}_2),\) respectively, be the same vector fields after a change of variables \((\phi ,{\overline{\phi }}).\) If \([F_1,F_2]=0\) then \([G_1,G_2]=0.\)

3 Bi-Weakness for Certain Families

This section is devoted to proving Theorem 1. The way to do this is to apply the Lie bracket method introduced in Sect. 2, so all the bi-weak types found will be of the form [k, 2k]. Actually, the results proved are maximal in the sense that if we impose \(V_{2m+1}=0\) and \(T_{2m}=0\) for \(m\le k\) then \(V_{2k+3}=0\) and \(T_{2k+2}=0.\) However, this does not mean that such a bi-weak type is the maximal one for the system, but only the maximal one which can be found by means of the Lie bracket method, since this method only detects bi-weak types of the form [k, 2k].

(i) Let us start by studying system (9). We find \(V_3\) and \(T_2\) by using the Lie bracket method,

$$\begin{aligned} V_3=\frac{1}{4}(2 b_2 a_2-3 a_3),\,\, T_2 = \frac{1}{12}(4 a_2^2+10 b_2^2-9 b_3). \end{aligned}$$
(41)

System \(\{V_3=0,T_2=0\}\) has the solution \(S_1^{(1)}=\{a_3 = \frac{2}{3} b_2 a_2, b_3 = \frac{4}{9}a_2^2+\frac{10}{9}b_2^2\}.\) The next step is to find \(V_5\) and \(T_4\) assuming that the conditions in \(S_1^{(1)}\) hold. This gives

$$\begin{aligned} V_5 = -\frac{5}{27}(2 b_2 a_2^3+5 b_2^3 a_2),\,\, T_4 = -\frac{5}{27}(11 a_2^2 b_2^2-14 b_2^4), \end{aligned}$$

and \(\{V_5=0,T_4=0\}\) has the solution \(S_1^{(2)}=\{b_2=0\}.\) Finally, if we compute \(V_7\) and \(T_6\) under conditions \(S_1^{(1)}\cup S_1^{(2)},\) we find \(V_7=0\) and \(T_6=0,\) and this case corresponds to system

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a_2 x^2,\\ {\dot{y}}=x+\frac{4}{9}a_2^2 x^3, \end{array}\right. } \end{aligned}$$
(42)

which is an isochronous center, as we will see in Proposition 12. System (9) under condition \(S_1^{(1)}\) then has a bi-weak type [2, 4], which is maximal among those of the form [k, 2k], as we have seen that [3, 6] does not appear.
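
Since system (42) will appear again below, a quick numerical check of its isochronicity is easy to set up: for an initial point \((\rho ,0)\) in the period annulus, the time of the first return to the positive x-axis should be \(2\pi .\) A sketch assuming NumPy and SciPy, taking \(a_2=1\) for concreteness:

```python
# Numerical sketch: system (42) with a_2 = 1 looks isochronous (return time 2*pi).
import numpy as np
from scipy.integrate import solve_ivp

def field(t, u):
    x, y = u
    return [-y + x**2, x + 4.0/9.0*x**3]      # system (42) with a_2 = 1

def crossing(t, u):                           # positive x-axis: y = 0, crossed upwards
    return u[1]
crossing.direction = 1.0

for rho in [0.1, 0.2, 0.3]:
    sol = solve_ivp(field, [0.0, 20.0], [rho, 0.0], events=crossing,
                    rtol=1e-12, atol=1e-12)
    t_return = [t for t in sol.t_events[0] if t > 1.0][0]   # skip the crossing at t = 0
    print(rho, t_return, 2*np.pi)             # the last two values should agree
```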

(ii) For the quartic Liénard (10), we proceed analogously. By using the Lie bracket technique we find that \(V_3\) and \(T_2\) are the same as in the cubic case, (41), so they vanish again for \(S_2^{(1)}=\{a_3 = \frac{2}{3} b_2 a_2, b_3 = \frac{4}{9}a_2^2+\frac{10}{9}b_2^2\}.\) Now we find the next constants, which are

$$\begin{aligned} V_5=&\frac{1}{54}(-20a_2^3 b_2-50a_2 b_2^3+27 a_2 b_4+90 a_4 b_2),\\ T_4=&\frac{1}{54}(-110 a_2^2 b_2^2-140 b_2^4+72 a_2 a_4+189 b_2 b_4), \end{aligned}$$

and \(\{V_5=0,T_4=0\}\) has two solutions

$$\begin{aligned}S_2^{(2a)}=\left\{ a_4 = -\frac{5}{3}\frac{b_2^2 a_2(a_2^2+7b_2^2)}{4a_2^2-35 b_2^2},b_4 = \frac{10}{27}\frac{b_2(8a_2^4-35 a_2^2 b_2^2-70 b_2^4)}{4 a_2^2-35 b_2^2}\right\} \end{aligned}$$

and \(S_2^{(2b)}=\{a_2=0,b_2=0\}.\) The next step is to find \(V_7\) and \(T_6\), which under conditions \(S_2^{(1)}\cup S_2^{(2a)}\) are

$$\begin{aligned} V_7^{(a)} =&-\frac{35}{216}\frac{b_2^3 a_2(8 a_2^6+525 a_2^4 b_2^2-2520 a_2^2 b_2^4-4900 b_2^6)}{(4 a_2^2-35 b_2^2)^2},\\ T_6^{(a)} =&\frac{35}{648} \frac{b_2^4 (430 a_2^6-3900 a_2^4 b_2^2+46431 a_2^2 b_2^4+56350 b_2^6)}{(4 a_2^2-35 b_2^2)^2}, \end{aligned}$$

and under \(S_2^{(1)}\cup S_2^{(2b)},\)

$$\begin{aligned} V_7^{(b)}= \frac{21}{16} a_4 b_4,\,\,T_6^{(b)} = \frac{1}{80}(84 a_4^2+189 b_4^2). \end{aligned}$$

System \(\{V_7^{(a)}=0,T_6^{(a)}=0\}\) has the solution \(S_2^{(3a)}=\{b_2=0\}\), and \(\{V_7^{(b)}=0,T_6^{(b)}=0\}\) has the solution \(S_2^{(3b)}=\{a_4=0,b_4=0\}.\) Finally, if we substitute \(S_2^{(1)}\cup S_2^{(2a)}\cup S_2^{(3a)}\) and \(S_2^{(1)}\cup S_2^{(2b)}\cup S_2^{(3b)}\) in \(V_9\) and \(T_8\) we obtain \(V_9=0\) and \(T_8=0\) in both cases, so the maximal bi-weak type of the form [k, 2k] is [3, 6], and this proves Theorem 1 (ii). We notice that case \(S_2^{(1)}\cup S_2^{(2a)}\cup S_2^{(3a)}\) corresponds to system (42) and case \(S_2^{(1)}\cup S_2^{(2b)}\cup S_2^{(3b)}\) is the linear center, so both systems are isochronous centers.

(iii) Let us study the quadratic case. First, for the sake of simplicity in the obtained expressions, we will consider system (11) in complex coordinates as

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{z}}={{\,\mathrm{i}\,}}z+r_{20} z^2 +r_{11} zw+r_{02} w^2,\\ {\dot{w}}=-{{\,\mathrm{i}\,}}w+s_{02} z^2 +s_{11} zw+s_{20} w^2, \end{array}\right. } \end{aligned}$$

where \(s_{ij}={\overline{r}}_{ij}.\) The first Lyapunov and period constants are

$$\begin{aligned} V_3={{\,\mathrm{i}\,}}(- r_{11} r_{20}+s_{11} s_{20}),\,\,T_2=\frac{4}{3} r_{02} s_{02}+2 r_{11} s_{11}- r_{11} r_{20}-s_{11} s_{20}. \end{aligned}$$

In this case, solving the systems obtained by setting the Lyapunov and period constants equal to zero, as we did for the Liénard families, is cumbersome, since many more solutions are obtained. For this reason, we will study the Lyapunov and period constants \(V_{2k+1}\) and \(T_{2k}\) in the Bautin-type ideal \({\mathcal {B}}_{k-1}:=\langle V_3,T_2,\ldots ,V_{2k-1},T_{2k-2}\rangle .\)
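
In practice, this kind of reduction can be carried out with Gröbner bases. The toy sympy sketch below only illustrates the pattern: a constant is reduced modulo the ideal generated by the previous ones and, for statements such as \(V_7^2=0\) in \({\mathcal {B}}_2\) appearing later, one checks that the square reduces to zero. The polynomials g1, g2 and f below are placeholders, not the actual constants of system (11).

```python
# Sketch of the ideal-membership computations used in this section (placeholder data).
import sympy as sp

u, v = sp.symbols('u v')
g1, g2 = u**2, u*v          # hypothetical generators of a Bautin-type ideal
f = u + u*v                 # hypothetical "next" constant to be studied

G = sp.groebner([g1, g2], u, v, order='lex')
print(sp.reduced(f, list(G), u, v, order='lex')[1])      # nonzero remainder: f is not in the ideal
print(sp.reduced(f**2, list(G), u, v, order='lex')[1])   # 0: f^2 is, so f vanishes on the variety
```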

First we want to check whether \(V_5\) and \(T_4\) belong to \({\mathcal {B}}_{1}=\langle V_{3},T_{2}\rangle .\) The expressions of \(V_5\) and \(T_4\) in \({\mathcal {B}}_{1}\) are

$$\begin{aligned} V_5=&\,\frac{1}{3}{{\,\mathrm{i}\,}}(-4 r_{02} r_{20}^2 s_{11}+6 r_{02} r_{20} s_{11}^2+4 r_{02} s_{11}^3-4 r_{11}^3 s_{02}-6 r_{11}^2 s_{02} s_{20}+4 r_{11} s_{02} s_{20}^2),\\ T_4=&\,\frac{1}{6}\left( -8 r_{02} r_{20}^2 s_{11}+36 r_{02} r_{20} s_{11}^2-40 r_{02} s_{11}^3-40 r_{11}^3 s_{02}-96 r_{11}^2 r_{20}^2\right. \\&\quad \left. +219 r_{11}^2 r_{20} s_{11}+36 r_{11}^2 s_{02} s_{20}-135 r_{11}^2 s_{11}^2+12 r_{11} r_{20}^2 s_{20}-8 r_{11} s_{02} s_{20}^2\right) . \end{aligned}$$

As these constants are not identically zero, we have a bi-weak [2, 4] type.

Let us check now that [3, 6] does not appear, so we find the expressions of \(V_7\) and \(T_6\) in \({\mathcal {B}}_2=\langle V_3,T_2,V_5,T_4 \rangle ,\)

$$\begin{aligned} V_7=&\frac{1}{64}{{\,\mathrm{i}\,}}\left( -80 r_{02} r_{11} r_{20}^2 s_{11}^2+360 r_{02} r_{11} r_{20} s_{11}^3-100 r_{02} r_{11} s_{11}^4-300 r_{11}^4 s_{02} s_{11}\right. \\&\quad \left. +60 r_{11}^3 r_{20}^3-480 r_{11}^3 r_{20}^2 s_{11}+1095 r_{11}^3 r_{20} s_{11}^2-675 r_{11}^3 s_{11}^3\right) ,\\ T_6=&\frac{1}{480}\left( 3032 r_{02} r_{11} r_{20}^2 s_{11}^2-4548 r_{02} r_{11} r_{20} s_{11}^3-2282 r_{02} r_{11} s_{11}^4\right. \\&\quad \quad +750 r_{11}^4 s_{02} s_{11}+94178 r_{11}^3 r_{20}^3-249834 r_{11}^3 r_{20}^2 s_{11}+287559 r_{11}^3 r_{20} s_{11}^2\\&\quad \quad \left. -117863 r_{11}^3 s_{11}^3-14832 r_{11}^2 r_{20}^3 s_{20}+792 r_{11} r_{20}^3 s_{20}^2\right) . \end{aligned}$$

However, we can see that actually \(V_7^2=0\) in \({\mathcal {B}}_2,\) i.e. \(V_7=0\) on the variety of zeros defined by the elements of \({\mathcal {B}}_2,\) so the bi-weak [3, 6] type cannot appear: in that case we have a center, and [2, 4] is maximal. We notice that there are quadratic centers with \(T_6\ne 0,\) which makes sense since it is a classical result that quadratic centers can have at most 2 oscillations in the period function (see [7]), because in the center case only \(T_2,\) \(T_4,\) and \(T_6\) are necessary to characterize the isochronicity property. In fact, \(T_8^2=0\) in \({\mathcal {B}}_3=\langle V_3,T_2,V_5,T_4,T_6\rangle .\)

Let us make the following observation regarding the quadratic case. It is a well-known fact that the general quadratic family (11) classically unfolds 3 limit cycles, hence the case \(V_3=V_5=0\) with nonvanishing \(V_7\) exists. This is not the case for the simultaneous study of Lyapunov and period constants we have carried out here; but since we are also imposing the vanishing of the corresponding period constants, the fact that the obtained Lyapunov constants vanish at lower orders is not inconsistent.

(iv) Finally, let us study the bi-weakness of the linear plus cubic homogeneous family (12). To simplify the expressions we will perform a change to complex coordinates, obtaining

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{z}}={{\,\mathrm{i}\,}}z+r_{30} z^3 +r_{21} z^2w+r_{12} zw^2+r_{03} w^3,\\ {\dot{w}}=-{{\,\mathrm{i}\,}}w+s_{03} z^3 +s_{12} z^2w+s_{21} zw^2+s_{30} w^3, \end{array}\right. } \end{aligned}$$

where \(s_{ij}={\overline{r}}_{ij}.\) We will proceed analogously to the quadratic case, by studying \(V_{2k+1}\) and \(T_{2k}\) in the Bautin-type ideal \({\mathcal {B}}_{k-1}:=\langle V_{2m+1},T_{2m}\rangle _{m\le k-1}.\)

The first Lyapunov and period constants are

$$\begin{aligned} V_3=-r_{21} - s_{21},\,\,T_2 = {{\,\mathrm{i}\,}}(r_{21} - s_{21}). \end{aligned}$$

The next step is to find the expressions of \(V_5\) and \(T_4\) simplified with respect to \({\mathcal {B}}_{1}=\langle V_3,T_2 \rangle ,\)

$$\begin{aligned} V_5=-2 {{\,\mathrm{i}\,}}(r_{12} r_{30}-s_{12} s_{30}),\,\, T_4= 3 r_{03} s_{03}-2 r_{12} r_{30}+4 r_{12} s_{12}-2 s_{12} s_{30}. \end{aligned}$$

Working in \({\mathcal {B}}_2=\langle V_3,T_2,V_5,T_4 \rangle ,\) we have

$$\begin{aligned} V_7=&\frac{3}{8}(3 r_{03} r_{30}^2-8 r_{03} r_{30} s_{12}-3 r_{03} s_{12}^2-3 r_{12}^2 s_{03}-8 r_{12} s_{03} s_{30}+3 s_{03} s_{30}^2),\\ T_6=&-\frac{3}{8}{{\,\mathrm{i}\,}}(3 r_{03} r_{30}^2-16 r_{03} r_{30} s_{12}+21 r_{03} s_{12}^2-21 r_{12}^2 s_{03}+16 r_{12} s_{03} s_{30}-3 s_{03} s_{30}^2). \end{aligned}$$

As they are not identically zero, this proves the existence of a bi-weak [3, 6] type. The proof finishes because \(V_9=0\) in \({\mathcal {B}}_{3}=\langle V_3,T_2,V_5,T_4,V_7,T_6 \rangle .\)

4 Some Results on Liénard Families

This section is devoted to the study of Liénard systems and it is divided into three parts. First, we use the Lie bracket method to deduce the structure of the Lyapunov and period constants of a Liénard system whose first differential equation starts with an odd and an even degree monomial. Second, we classify the centers and the isochronicity of a Liénard family. Finally, we use the previous results to provide a complete study of the simultaneous bifurcation of limit cycles and critical periods of the cubic Liénard family, which proves Theorem 2.

4.1 A Liénard Family Starting with an Odd and an Even Degree Monomial

In this subsection we will consider the Liénard family

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a_m x^m+a_n x^n+x^d P(x),\\ {\dot{y}}=x, \end{array}\right. } \end{aligned}$$
(43)

where m and n are even and odd natural numbers, respectively, \(d=\max (m,n)+1,\) and P(x) is a polynomial in x.

In order to motivate the problem, let us start by considering (43) with \(P(x)=0.\) It is a well-known fact that in this classical Liénard family the coefficients corresponding to odd powers control the center property, so if \(a_n=0\) we have a center at the origin. In this case the even power controls the isochronicity property, so if \(a_m=0\) the system has an isochronous center. A study in this line is presented for example in [10], but in our case we will present our result by using the Lie bracket method introduced in Subsect. 2.3 to find the Lyapunov and period constants. It is worth recalling that different methods can lead to Lyapunov and period constants that differ by a multiplicative constant, but the dependence on the parameters \(a_m\) and \(a_n\) is the same and the center and linearizability conditions are also preserved. The result is as follows.

Theorem 11

For system (43),

  1. (i)

    if \(n<2m-1,\) then the first nonidentically zero Lyapunov constant is

    $$\begin{aligned} V_n=\frac{a_n}{2^n} (n-1) {{n}\atopwithdelims (){\frac{n-1}{2}}}, \end{aligned}$$
    (44)

    and all the period constants of the system up to order n vanish;

  2. (ii)

    if \(n=2m-1,\) then the first nonidentically zero Lyapunov and period constants are (44) and

    $$\begin{aligned} T_{2m-2}= -\frac{a_m^2}{2^{2m-1}}\frac{m^2(m-1)}{(m+1)^2} {{2m}\atopwithdelims (){m}}, \end{aligned}$$
    (45)

    respectively;

  3. (iii)

    if \(n>2m-1,\) then the first nonidentically zero period constant is (45), and all the Lyapunov constants of the system up to order \(2m-1\) vanish.

Proof

Let us start by writing system (43) in complex coordinates. By using the Binomial Theorem,

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{z}}=iz+\frac{a_m}{2^m}\sum \limits _{j=0}^{m}{{m}\atopwithdelims (){j}}z^{m-j}w^j+\frac{a_n}{2^n}\sum \limits _{k=0}^{n}{{n}\atopwithdelims (){k}}z^{n-k}w^k+\left( \frac{z+w}{2}\right) ^dP(z,w)=:{\mathcal {Z}}(z,w),\\ {\dot{w}}=-iw+\frac{a_m}{2^m}\sum \limits _{j=0}^{m}{{m}\atopwithdelims (){j}}z^{j}w^{m-j}+\frac{a_n}{2^n}\sum \limits _{k=0}^{n}{{n}\atopwithdelims (){k}}z^{k}w^{n-k}+\left( \frac{z+w}{2}\right) ^d{\overline{P}}(z,w)=:\overline{{\mathcal {Z}}}(z,w). \end{array}\right. } \end{aligned}$$
(46)

We will consider the system

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{z}}=z+\sum \limits _{l=0}^{m} (A_{m-l,l}+{{\,\mathrm{i}\,}}B_{m-l,l})z^{m-l}w^l+\sum \limits _{p=0}^{n} (A_{n-p,p}+{{\,\mathrm{i}\,}}B_{n-p,p})z^{n-p}w^p=:{\mathcal {U}}(z,w),\\ {\dot{w}}=w+\sum \limits _{l=0}^{m} (A_{m-l,l}-{{\,\mathrm{i}\,}}B_{m-l,l})z^{l}w^{m-l}+\sum \limits _{p=0}^{n} (A_{n-p,p}-{{\,\mathrm{i}\,}}B_{n-p,p})z^{p}w^{n-p}=:\overline{{\mathcal {U}}}(z,w). \end{array}\right. } \end{aligned}$$
(47)

To calculate the Lyapunov and period constants we will find the structure of the Lie bracket of (46) and (47). Observe that, due to the fact that they are associated to a real vector field, by using (14) their Lie bracket can be described only from its first component:

$$\begin{aligned}{}[{\mathcal {Z}},{\mathcal {U}}]^1=\frac{\partial {\mathcal {Z}}}{\partial z} {\mathcal {U}}+\frac{\partial {\mathcal {Z}}}{\partial w} \overline{{\mathcal {U}}}-\frac{\partial {\mathcal {U}}}{\partial z} {\mathcal {Z}}-\frac{\partial {\mathcal {U}}}{\partial w} \overline{{\mathcal {Z}}}={\mathcal {H}}_m+{\mathcal {H}}_n+{\mathcal {H}}_{2m-1}+{\mathcal {H}}_{2n-1}+O_{m+n-1}(z,w),\nonumber \\ \end{aligned}$$
(48)

where each \({\mathcal {H}}_q\) is a homogeneous polynomial of degree q in z, w. These polynomials have been computed, but they are not written here for the sake of brevity.

We have that

$$\begin{aligned} {\mathcal {H}}_m=\sum \limits _{l=0}^{m} \left[ (-B_{m-l,l}+{{\,\mathrm{i}\,}}A_{m-l,l})(2l-m+1)+\frac{a_m}{2^m}(m-1){{m}\atopwithdelims (){l}}\right] z^{m-l}w^{l}.\qquad \end{aligned}$$
(49)

This homogeneous part of degree m vanishes by taking

$$\begin{aligned} A_{m-l,l}=0,\,\,B_{m-l,l}=\frac{a_m}{2^m}\frac{m-1}{2l-m+1}{{m}\atopwithdelims (){l}}\text { for } l\in \{0,\dots ,m\}. \end{aligned}$$
(50)

The homogeneous polynomial \({\mathcal {H}}_n\) of degree n is analogous to (49), just replacing m by n. We can then take

$$\begin{aligned} \begin{aligned} A_{n-p,p}&=0\text { for } p\in \{0,\dots ,n\},\\ B_{n-p,p}&=\frac{a_n}{2^n}\frac{n-1}{2p-n+1}{{n}\atopwithdelims (){p}}\text { for } p\in \{0,\dots ,n\}\backslash \bigg \{\frac{n-1}{2}\bigg \}, \end{aligned} \end{aligned}$$
(51)

as for the term corresponding to \(p=\frac{n-1}{2}\) we have \(2p-n+1=0,\) and therefore the coefficient of \(z^{\frac{n+1}{2}}w^{\frac{n-1}{2}}=z(zw)^{\frac{n-1}{2}}\) cannot be made to vanish; in this case we take \(B_{\frac{n+1}{2},\frac{n-1}{2}}=0.\) Notice that this happens for n because it is odd, but not for m, which is even.

Let us see the structure of \({\mathcal {H}}_{2m-1}.\) After substituting (50), we obtain

$$\begin{aligned} \begin{aligned} {\mathcal {H}}_{2m-1}=&{{\,\mathrm{i}\,}}\frac{a_m^2}{2^{2m}}\sum \limits _{j,l=0}^m \frac{m-1}{2l-m+1}{{m}\atopwithdelims (){l}}{{m}\atopwithdelims (){j}}\left[ (l-j) z^{2m-l-j-1}w^{j+l}\right. \\&\left. -jz^{m+l-j}w^{m+j-l-1}-lz^{m+j-l}w^{m+l-j-1}\right] . \end{aligned} \end{aligned}$$
(52)

We know by Theorem 6 that the period constant associated to (52) is the coefficient of \(z(zw)^{m-1}.\) Then, after some basic combinatorial computations, this coefficient can be written as

$$\begin{aligned} p_{2m-1}={{\,\mathrm{i}\,}}\frac{a_m^2}{2^{2m}}(m-1)\left( \sum \limits _{j=0}^{m-1} {{m}\atopwithdelims (){j}}{{m}\atopwithdelims (){j+1}}-\sum \limits _{j=0}^m \frac{2j}{2j-m+1}{{m}\atopwithdelims (){j}}^2\right) . \end{aligned}$$
(53)

Let us write this expression in a more compact way. We can use the generating functions \((1+x)^m\) and \((1+x)^{2m}\) to rewrite the first summation in (53). In the relation

$$\begin{aligned} \sum \limits _{k=0}^{2m}{{2m}\atopwithdelims (){k}}x^k =(1+x)^{2m}=\left( (1+x)^{m}\right) ^2=\left( \sum \limits _{k=0}^{m}{{m}\atopwithdelims (){k}}x^k\right) ^2=\sum \limits _{k,j=0}^{m}{{m}\atopwithdelims (){k}}{{m}\atopwithdelims (){j}}x^{k+j}, \end{aligned}$$

we equate the coefficients of \(x^{m-1},\) and we obtain

$$\begin{aligned} \sum \limits _{j=0}^{m-1}{{m}\atopwithdelims (){j}}{{m}\atopwithdelims (){j+1}}={{2m}\atopwithdelims (){m-1}}=\frac{m}{m+1}{{2m}\atopwithdelims (){m}}. \end{aligned}$$
(54)

For the second summation in (53), the equality

$$\begin{aligned} \sum \limits _{j=0}^m \frac{2j}{2j-m+1}{{m}\atopwithdelims (){j}}^2=\frac{m(3m+1)}{(m+1)^2}{{2m}\atopwithdelims (){m}} \end{aligned}$$
(55)

holds. This equality has been obtained with a computer algebra system by means of Zeilberger's algorithm. For more details on how to use this method, the reader is referred to [34, 35], as well as [23] and the references therein. Notice that Zeilberger's algorithm can also be used to justify (54).
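Both identities can also be sanity-checked with a computer algebra system for particular values of m; the following sympy snippet is only an illustrative numerical check, not a replacement for the proof via Zeilberger's algorithm.

```python
# Quick sympy check of the combinatorial identities (54) and (55) for small even m.
import sympy as sp

for m in range(2, 21, 2):   # even m, so 2*j - m + 1 never vanishes
    lhs54 = sum(sp.binomial(m, j) * sp.binomial(m, j + 1) for j in range(m))
    rhs54 = sp.Rational(m, m + 1) * sp.binomial(2 * m, m)
    lhs55 = sum(sp.Rational(2 * j, 2 * j - m + 1) * sp.binomial(m, j)**2
                for j in range(m + 1))
    rhs55 = sp.Rational(m * (3 * m + 1), (m + 1)**2) * sp.binomial(2 * m, m)
    assert lhs54 == rhs54 and lhs55 == rhs55

print("identities (54) and (55) verified for m = 2, 4, ..., 20")
```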

Combining (54) and (55), one can rewrite (53) as

$$\begin{aligned} p_{2m-1}=-{{\,\mathrm{i}\,}}\frac{a_m^2}{2^{2m-1}}\frac{m^2(m-1)}{(m+1)^2}{{2m}\atopwithdelims (){m}}. \end{aligned}$$
(56)

After these calculations we can finally prove our result. To this end, we will consider (48) assuming (50) and (51), so \({\mathcal {H}}_m=0\) and \({\mathcal {H}}_n\) only has the \(z(zw)^{\frac{n-1}{2}}\) term. If \(n<2m-1,\) then the lowest degree homogeneous polynomial is the nth degree one,

$$\begin{aligned} {\mathcal {H}}_n=\frac{a_n}{2^n}(n-1){{n}\atopwithdelims (){\frac{n-1}{2}}}z(zw)^{\frac{n-1}{2}}, \end{aligned}$$

and due to Theorem 6 this coefficient is the first nonidentically zero Lyapunov constant (44), while all period constants vanish up to this order.

If \(n=2m-1\), then the lowest degree homogeneous polynomial in (48) is \({\mathcal {H}}_n+{\mathcal {H}}_{2m-1},\) so the coefficient of \(z(zw)^{\frac{n-1}{2}}=z(zw)^{m-1}\) is (44) plus (56). Therefore, by Theorem 6 the real part is the Lyapunov constant (44) and the imaginary part is the first nonidentically zero period constant (45).

Finally, for \(n>2m-1,\) an analogous procedure shows that the lowest degree homogeneous polynomial in (48) is \({\mathcal {H}}_{2m-1},\) so the period constant is the imaginary part of (56) and all Lyapunov constants vanish up to this order. \(\square \)

4.2 Center and Isochronicity Classification of a Liénard Family

In this subsection we present, as an application of our approach, a characterization of isochronous centers in a Liénard family. For a general proof we refer the reader to the very recent work of Amel’kin ([4]). We start with the following result.

Proposition 12

Let us consider the Liénard system

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a_2 x^2 +a_3 x^3+a_4 x^4+a_5 x^5+a_6 x^6,\\ {\dot{y}}=x+b_2 x^2 +b_3 x^3+b_4 x^4+b_5 x^5+b_6 x^6. \end{array}\right. } \end{aligned}$$
(57)

The only isochronous center having the form (57) is (42).

Proof

We start by finding the first five Lyapunov and period constants of (57) by using the Lie bracket method from Subsect. 2.3. Then we solve the resulting system \(\{V_3=0,V_5=0,V_7=0,V_9=0,V_{11}=0,T_2=0,T_4=0,T_6=0,T_8=0,T_{10}=0\},\) and we find that the only nontrivial solution is \(\{a_3=0,a_4=0,a_5=0,b_2=0,b_3=\frac{4}{9}a_2^2,b_4=0,b_5=0\},\) which corresponds to system (42). Such a system has a center at the origin due to Theorem 1 from [9], which classifies a class of Liénard systems known as composition centers; this center classification can also be found in [17].

To show the isochronicity of (42) we will start by rescaling the system via the change \((x,y)\rightarrow (\frac{3}{2a_2}x,\frac{3}{2a_2}y)\) for \(a_2\ne 0,\) which results in

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+\frac{3}{2} x^2,\\ {\dot{y}}=x+x^3. \end{array}\right. } \end{aligned}$$
(58)

Observe that for the case \(a_2=0,\) the system is isochronous because it corresponds to the linear center. System (58) is isochronous because it commutes with

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=xA(x,y),\\ {\dot{y}}=\left( y+\frac{1}{2}x^2\right) A(x,y), \end{array}\right. } \end{aligned}$$

where \(A(x,y)=-1-y+\frac{1}{2}x^2,\) as the Lie bracket between both systems is 0.
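The commutation claim can be verified directly with a computer algebra system; the following sympy sketch is only an illustrative check of the vanishing of the Lie bracket.

```python
# sympy verification that the Lie bracket of system (58) and the transversal
# system given above vanishes identically.
import sympy as sp

x, y = sp.symbols('x y')

X = sp.Matrix([-y + sp.Rational(3, 2) * x**2, x + x**3])   # system (58)
A = -1 - y + x**2 / 2
U = sp.Matrix([x * A, (y + x**2 / 2) * A])                 # commuting system

v = sp.Matrix([x, y])
bracket = U.jacobian(v) * X - X.jacobian(v) * U            # [X, U]
print(bracket.applyfunc(sp.expand))                        # Matrix([[0], [0]])
```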

We now present an alternative proof of the isochronicity of (58), which we find interesting because it only uses a first integral and the system itself. A first integral of (58) is

$$\begin{aligned} H(x,y)=\frac{x^4-4 x^2 y+4 x^2+4 y^2}{(x^2-2y-2)^2}, \end{aligned}$$
(59)

which satisfies \(H(0,0)=0.\) The idea is to prove the isochronicity of (58) by using (59) to find an expression for the level curves \(\gamma _h\) and checking that the time needed to travel along a whole loop is \(2\pi .\)

Let us consider \(H(x,y)=h^2\) with \(h>0\) and solve it for x. As the equation has degree 4 in x we obtain four solutions, two of which are nonreal for values of h close to 0. The other two solutions are

$$\begin{aligned} X_{\pm }(y,h)=\pm \frac{\sqrt{-2(h^2-1)\left( -h^2 y-h^2+\sqrt{2h^2 y+3h^2-2y+1}+y-1\right) }}{h^2-1}, \end{aligned}$$

which are real for \(0<h<1,\) and they correspond to the branches of the level curve on the right and on the left of the vertical axis, respectively. By solving \(H(0,y)=h^2\) with respect to y, we find that the intersections of the level curve of level \(h^2\) with the vertical axis are \(y_{\pm }=\pm h/(1\mp h),\) for \(0<h<1.\) Considering the second equation in (58), we aim to calculate the time integral

$$\begin{aligned} T=\int _0^T dt= \int _{\gamma _h} \frac{dy}{{\dot{y}}}=2\int _{y_{-}}^{y_{+}} \frac{dy}{X_{+}(y,h)+X_{+}^3(y,h)}=:2 I(h), \end{aligned}$$

and check that we obtain \(2\pi .\) Here we have used that, due to the symmetry of the problem, it is enough to integrate from \(y_{-}\) to \(y_{+},\) which represents half a loop, and check that \(I(h)=\pi .\)

To simplify the expression of I(h) we will consider the change of variables \(z^2=2h^2 y+3h^2-2y+1,\) or equivalently \(y=\frac{3 h^2-z^2+1}{2(1-h^2)}.\) After applying this change both to \(X_{+}(y,h)\) and the integration limits \(y_{\pm },\) the integral becomes

$$\begin{aligned} I(h)=\int _{1+h}^{1-h} -\frac{1-h^2}{(z-2)\sqrt{-h^4+h^2 z^2-2 h^2 z+2 h^2-z^2+2 z-1}} dz. \end{aligned}$$

This integral can be simplified a little more so that the integration limits become \(\pm 1.\) Applying the change \(z=1-hw,\) with w a new variable, we have

$$\begin{aligned} \begin{aligned} I(h)=&\int _{-1}^{1} \frac{\sqrt{1-h^2}}{\sqrt{1-w^2}(1+hw)}dw=\lim _{k\rightarrow 1}\int _{-k}^{k} \frac{\sqrt{1-h^2}}{\sqrt{1-w^2}(1+hw)}dw\\ =&\lim _{k\rightarrow 1}\left[ \arctan {\frac{h+w}{\sqrt{1-h^2}\sqrt{1-w^2}}}\right] _{w=-k}^{w=k}=\frac{\pi }{2}-\left( -\frac{\pi }{2}\right) =\pi , \end{aligned} \end{aligned}$$

for \(0<h<1,\) so the result follows. \(\square \)
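The value \(I(h)=\pi \) can also be checked numerically; the snippet below is only an illustrative verification, which uses the further substitution \(w=\sin \theta \) to remove the endpoint singularities before applying standard quadrature.

```python
# Numerical check that I(h) = pi for 0 < h < 1, after the substitution w = sin(theta).
import numpy as np
from scipy.integrate import quad

def I(h):
    integrand = lambda t: np.sqrt(1 - h**2) / (1 + h * np.sin(t))
    value, _ = quad(integrand, -np.pi / 2, np.pi / 2)
    return value

for h in (0.1, 0.5, 0.9, 0.99):
    print(h, I(h), abs(I(h) - np.pi))   # the last column is negligibly small
```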

By studying the Liénard system from the previous result we have come across the Liénard family

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+a x^n,\\ {\dot{y}}=x+b x^{2n-1}, \end{array}\right. } \end{aligned}$$
(60)

for \(a,b\in {\mathbb {R}}\) and \(n\in {\mathbb {N}}\) with \(n\ge 2.\) Indeed, (42) is a particular case of (60). The centers and the isochronicity of this family are classified in Theorem 14. For the proof of this classification result, we first need the following proposition.

Proposition 13

System (60) with even n, \(b=\frac{n^2}{(n+1)^2}a^2,\) and \(a\ne 0\) has a first integral of the form

$$\begin{aligned} H(x,y)=\frac{A(x,y)}{\left( x^2+y^2-2x^ny+x^{2n}\right) ^{\frac{n-1}{2}}}, \end{aligned}$$
(61)

where \(A(x,y)\) is a polynomial in x and y.

Proof

For \(b=\frac{n^2}{(n+1)^2}a^2\) and \(a\ne 0,\) system (60) can be rewritten as

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=-y+(n+1) x^n=:P(x,y),\\ {\dot{y}}=x+n^2 x^{2n-1}=:Q(x,y), \end{array}\right. } \end{aligned}$$
(62)

after the rescaling \((x,y)\rightarrow \left( \left( \frac{n+1}{a}\right) ^{\frac{1}{n-1}}x,\left( \frac{n+1}{a}\right) ^{\frac{1}{n-1}}y\right) .\) It can be checked that the function \(F(x,y)=x^2+y^2-2x^ny+x^{2n}\) is an invariant curve of system (62) with cofactor \(K=2nx^{n-1},\) and the divergence of the vector field is \({{\,\mathrm{div}\,}}(P,Q)=n(n+1)x^{n-1}.\) Now, according to Darboux integrability theory (see for instance [12]), if \(\lambda \) satisfies \(\lambda K=-{{\,\mathrm{div}\,}}(P,Q)\) then \(R(x,y)=F(x,y)^\lambda \) is an integrating factor of the system. In our case, the condition \(\lambda K=-{{\,\mathrm{div}\,}}(P,Q)\) reads \(2\lambda n x^{n-1}=-n(n+1)x^{n-1},\) so \(\lambda =-\frac{n+1}{2}\) and \(R(x,y)=\left( x^2+y^2-2x^ny+x^{2n}\right) ^{-\frac{n+1}{2}}\) is an integrating factor.
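These facts can be verified symbolically; the sympy sketch below checks, for a few concrete even values of n, the invariance of F with the stated cofactor and the Darboux condition defining the integrating factor.

```python
# sympy check, for concrete even n, that F is an invariant curve of (62) with cofactor
# K = 2*n*x**(n-1) and that lambda = -(n+1)/2 satisfies lambda*K = -div(P, Q),
# so that R = F**lambda is an integrating factor by Darboux theory.
import sympy as sp

x, y = sp.symbols('x y')

for n in (2, 4, 6):
    P = -y + (n + 1) * x**n
    Q = x + n**2 * x**(2 * n - 1)
    F = x**2 + y**2 - 2 * x**n * y + x**(2 * n)
    K = 2 * n * x**(n - 1)

    # invariance: F_x*P + F_y*Q = K*F
    assert sp.expand(sp.diff(F, x) * P + sp.diff(F, y) * Q - K * F) == 0

    # Darboux condition: lambda*K = -div(P, Q) with lambda = -(n+1)/2
    lam = -sp.Rational(n + 1, 2)
    assert sp.expand(lam * K + sp.diff(P, x) + sp.diff(Q, y)) == 0

print("checked for n = 2, 4, 6")
```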

Having an integrating factor \(R(x,y)\) of the system, we know that a first integral \(H(x,y)\) satisfies \(\frac{\partial H}{\partial x}=Q(x,y)R(x,y)\) and \(\frac{\partial H}{\partial y}=-P(x,y)R(x,y),\) so we can integrate the second equation to find the form of such a first integral,

$$\begin{aligned} H(x,y)&=\int -P(x,y)R(x,y) dy =\int \frac{y-nx^n- x^n}{\left( x^2+y^2-2x^ny+x^{2n}\right) ^{\frac{n{+}1}{2}}} dy\nonumber \\&=\int \frac{y-x^n}{\left( x^2+y^2-2x^ny+x^{2n}\right) ^{\frac{n+1}{2}}} dy-n \int \frac{x^n}{\left( x^2+y^2-2x^ny+x^{2n}\right) ^{\frac{n+1}{2}}} dy. \end{aligned}$$
(63)

The first term in (63) is an immediate integral and its value is \(\frac{-1}{(n-1)\left( x^2+y^2-2x^ny+x^{2n}\right) ^{\frac{n-1}{2}}},\) where, since we are constructing a first integral, the integration constant can be taken equal to 0. For the second integral we perform the change of variables \(\omega =\frac{x}{\sqrt{x^2+y^2-2x^ny+x^{2n}}},\) so \(dy=-\frac{x}{\omega ^2\sqrt{1-\omega ^2}}d\omega \) and the integral can be written as

$$\begin{aligned} \int \frac{x^n}{\left( x^2+y^2-2x^ny+x^{2n}\right) ^{\frac{n+1}{2}}} dy=-\int \frac{\omega ^{n-1}}{\sqrt{1-\omega ^2}} d\omega . \end{aligned}$$

Now we can apply a trigonometric change of variables \(\omega =\sin \phi ,\) and

$$\begin{aligned} -\int \frac{\omega ^{n-1}}{\sqrt{1-\omega ^2}} d\omega= & {} -\int \frac{\sin ^{n-1}\phi }{\sqrt{1-\sin ^2\phi }} \cos \phi d\phi =-\int \sin ^{n-1}\phi d\phi \\= & {} -\int \sin \phi \sin ^{n-2}\phi d\phi . \end{aligned}$$

As n is even, so is \(n-2\) and the integral becomes

$$\begin{aligned} -\int \sin \phi \left( \sin ^2\phi \right) ^{\frac{n-2}{2}} d\phi= & {} -\int \sin \phi \left( 1-\cos ^2\phi \right) ^\frac{n-2}{2} d\phi \\= & {} -\int \sin \phi \left( \sum \limits _{j=0}^{\frac{n-2}{2}}a_j\cos ^{2j}\phi \right) d\phi , \end{aligned}$$

for certain coefficients \(a_j,\) where in the last equality we have used the Binomial Theorem. Then, after swapping the sum and the integral we get

$$\begin{aligned} -\sum \limits _{j=0}^{\frac{n-2}{2}}a_j \int \sin \phi \cos ^{2j}\phi d\phi =\sum \limits _{j=0}^{\frac{n-2}{2}}\frac{a_j}{2j+1} \cos ^{2j+1}\phi =\cos \phi \sum \limits _{j=0}^{\frac{n-2}{2}}\frac{a_j}{2j+1} \cos ^{2j}\phi . \end{aligned}$$

By using \(\cos ^{2j}\phi =\left( 1-\sin ^2\phi \right) ^j\) and applying the Binomial Theorem again, we have that for new coefficients \(b_j,\)

$$\begin{aligned} \cos \phi \sum \limits _{j=0}^{\frac{n-2}{2}}\frac{a_j}{2j+1} \cos ^{2j}\phi =\cos \phi \sum \limits _{j=0}^{\frac{n-2}{2}}b_j\sin ^{2j}\phi =\sqrt{1-\omega ^2}\sum \limits _{j=0}^{\frac{n-2}{2}}b_j\omega ^{2j}, \end{aligned}$$

after undoing the change \(\omega =\sin \phi .\) Finally, we can substitute \(\omega =\frac{x}{\sqrt{x^2+y^2-2x^ny+x^{2n}}},\) and after some trivial calculations we get

$$\begin{aligned} \sqrt{1-\omega ^2}\sum \limits _{j=0}^{\frac{n-2}{2}}b_j\omega ^{2j}=\frac{(y-x^n)\sum \nolimits _{j=0}^{\frac{n-2}{2}}b_j x^{2j}(x^2+y^2-2x^ny+x^{2n})^{\frac{n-2}{2}-j} }{(x^2+y^2-2x^ny+x^{2n})^{\frac{n-1}{2}}}, \end{aligned}$$

so joining both terms from (63) the result follows. \(\square \)

Theorem 14

The Liénard family (60) satisfies the following:

  1. (i)

    for odd n,  the system is a center if and only if \(a=0\) –in which case it is a reversible center–, and it is an isochronous center if and only if \(a=b=0;\)

  2. (ii)

    for even n,  the system is a reversible center for any a and b,  and it is an isochronous center if and only if \(b=\frac{n^2}{(n+1)^2}a^2.\)

Proof

Let us start with the case of n being odd. If \(a=0,\) it is immediate to see that (60) is a time-reversible center. Consider the function

$$\begin{aligned} H(x,y)=\frac{1}{2}x^2+\frac{1}{2}y^2+\frac{b}{2n}x^{2n}, \end{aligned}$$
(64)

and compute

$$\begin{aligned} \nabla H\cdot ({\dot{x}},{\dot{y}})=\left( x+b x^{2n-1},y \right) \cdot \left( -y+a x^n,x+b x^{2n-1}\right) =ax^{n+1}\left( 1+bx^{2n-2} \right) . \end{aligned}$$

We can see then that for \(a=0\) the function H is a first integral of the system, and for \(a\ne 0\) it is a Lyapunov function of the system. In the latter case, the origin is a focus and the sign of a determines its stability near the origin.
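This identity is straightforward to reproduce symbolically; the short sympy check below verifies it for the first few odd values of n.

```python
# sympy check that grad(H) . (xdot, ydot) = a*x**(n+1)*(1 + b*x**(2*n-2))
# for system (60) and the function H given in (64), for a few odd n.
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

for n in (3, 5, 7):
    H = x**2 / 2 + y**2 / 2 + b * x**(2 * n) / (2 * n)
    f = sp.Matrix([-y + a * x**n, x + b * x**(2 * n - 1)])
    lhs = sp.diff(H, x) * f[0] + sp.diff(H, y) * f[1]
    rhs = a * x**(n + 1) * (1 + b * x**(2 * n - 2))
    assert sp.expand(lhs - rhs) == 0

print("identity verified for n = 3, 5, 7")
```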

We now prove the isochronicity condition provided that we have a center, i.e. \(a=0.\) If \(b=0,\) (60) becomes the linear center, so it is isochronous. Let us now show that if \(b\ne 0\) then the system is not isochronous; to do so, we integrate the second equation in (60) along the level curves \(\gamma _h\) determined by the first integral (64). The integral to solve is

$$\begin{aligned} T=\int _0^T dt= \int _{\gamma _h} \frac{dy}{{\dot{y}}}=\int _{\gamma _h} \frac{dy}{x+b x^{2n-1}}. \end{aligned}$$
(65)

By considering (64) on a level curve \(H(x,y)=\frac{h^2}{2}\) with \(h>0,\) we obtain that \(y^2=h^2-x^2-\frac{b}{n}x^{2n},\) and we can use this relation to perform a change of variables from y to x in (65) as follows:

$$\begin{aligned} T= \int _{\gamma _h}\frac{dy}{x+b x^{2n-1}}=2\int _{x_h^{-}}^{x_h^{+}}\frac{dx}{\sqrt{h^2-x^2-\frac{b}{n}x^{2n}}}=4\int _{0}^{x_h}\frac{dx}{\sqrt{h^2-x^2-\frac{b}{n}x^{2n}}},\nonumber \\ \end{aligned}$$
(66)

where we have used the symmetry of the integral, and \(x_h^{\pm }=\pm x_h\) are the intersections of the level curve with the horizontal axis, i.e. the real solutions of \(H(x,0)=\frac{h^2}{2},\) or equivalently of \(x^2+\frac{b}{n}x^{2n}-h^2=0.\) As \(x_h\) depends on h, we perform a second change of variables \(x=x_h z\) so that the integration limits become constant, and we rewrite (66) as

$$\begin{aligned} T=4\int _{0}^{1}\frac{x_h}{\sqrt{h^2-{x_h}^2 z^2-\frac{b}{n}{x_h}^{2n}z^{2n}}}dz. \end{aligned}$$
(67)

As we cannot find an explicit solution of \(L(h):=x^2+\frac{b}{n}x^{2n}-h^2=0\) giving an expression of \(x_h,\) we consider a power series expansion of \(x_h\) with respect to h. One can check that such a series expansion starts as

$$\begin{aligned} x_h=h-\frac{b}{2n}h^{2n-1}+O_{4n-3}(h), \end{aligned}$$
(68)

as for this \(x_h\) we have

$$\begin{aligned} L(h)&=\left( h-\frac{b}{2n}h^{2n-1}+O_{4n-3}(h)\right) ^2+\frac{b}{n}\left( h-\frac{b}{2n}h^{2n-1}+O_{4n-3}(h)\right) ^{2n}-h^2\\&=h^2-\frac{b}{n}h^{2n}+O_{4n-2}(h)+\frac{b}{n}\left( h^{2n}+O_{4n-2}(h)\right) -h^2=O_{4n-2}(h). \end{aligned}$$
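The expansion (68) can be double-checked symbolically; the sympy sketch below verifies, for a few values of n, that substituting (68) into L leaves only terms of order \(h^{4n-2}\) and higher.

```python
# sympy check of (68): plugging x_h = h - b/(2n)*h**(2n-1) into
# L = x**2 + (b/n)*x**(2n) - h**2 leaves only terms of order h**(4n-2) and higher.
import sympy as sp

h, b = sp.symbols('h b')

for n in (3, 5, 7):
    xh = h - b / (2 * n) * h**(2 * n - 1)
    L = sp.expand(xh**2 + sp.Rational(1, n) * b * xh**(2 * n) - h**2)
    lowest = min(sp.degree(term, h) for term in L.as_ordered_terms())
    assert lowest >= 4 * n - 2
    print(n, lowest)
```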

Now by substituting (68) in (67) and developing the expression we have

$$\begin{aligned} T=4\int _{0}^{1}\frac{1-\frac{b}{2n}h^{2n-2}+O_{4n-4}(h) }{\sqrt{1-z^2+\frac{b}{n}\left( z^2-z^{2n}\right) {h}^{2n-2}+O_{4n-4}(h)}} dz, \end{aligned}$$

where we can expand the denominator as a power series and obtain

$$\begin{aligned} T&=4\int _{0}^{1}\left( 1-\frac{b}{2n}h^{2n-2}+O_{4n-4}(h) \right) \left( \frac{1}{\sqrt{1-z^2}}-\frac{b}{2n}\frac{z^2-z^{2n}}{(1-z^2)^{3/2}}{h}^{2n-2}+O_{4n-4}(h)\right) dz\\&=4\int _{0}^{1}\left[ \frac{1}{\sqrt{1-z^2}}-\frac{b}{2n}\left( \frac{1}{\sqrt{1-z^2}}+\frac{z^2-z^{2n}}{(1-z^2)^{3/2}}\right) {h}^{2n-2}+O_{4n-4}(h)\right] dz\\&=4\int _{0}^{1}\left[ \frac{1}{\sqrt{1-z^2}}-\frac{b}{2n}\frac{1+z^2+z^4+\cdots +z^{2n-2}}{\sqrt{1-z^2}}{h}^{2n-2}+O_{4n-4}(h)\right] dz. \end{aligned}$$

Notice that in the last equality we have used the formula for the sum of a geometric progression. Finally, this integral becomes

$$\begin{aligned} T&=4\int _{0}^{1}\frac{dz}{\sqrt{1-z^2}}-\frac{2b}{n}{h}^{2n-2}\int _{0}^{1}\frac{1+z^2+z^4+\cdots +z^{2n-2}}{\sqrt{1-z^2}}dz+O_{4n-4}(h)\\&=2\pi -\frac{2b}{n}{h}^{2n-2}\int _{0}^{1}\frac{1+z^2+z^4+\cdots +z^{2n-2}}{\sqrt{1-z^2}}dz+O_{4n-4}(h). \end{aligned}$$

As the integrand of the remaining integral is strictly positive in the considered interval, the integral is strictly positive, so if \(b\ne 0\) the term of order \(2n-2\) in h is nonzero and the center is not isochronous. Hence the isochronicity condition is proved.
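This conclusion can also be illustrated numerically: for \(b\ne 0\) the period given by (66) depends on h. The snippet below is an illustrative numerical check with the hypothetical parameter values \(n=3\) and \(b=1;\) the value \(x_h\) is obtained by root finding and the integral is regularized with the substitution \(x=x_h\sin u\) as in (67).

```python
# Numerical illustration that for odd n and b != 0 the period (66) is not constant.
# Hypothetical values n = 3, b = 1 are used.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

n, b = 3, 1.0

def period(h):
    # x_h is the positive root of x**2 + (b/n)*x**(2n) - h**2 = 0
    xh = brentq(lambda x: x**2 + b / n * x**(2 * n) - h**2, 0.0, h)
    # the substitution x = x_h*sin(u) removes the square-root singularity at x = x_h
    f = lambda u: xh * np.cos(u) / np.sqrt(h**2 - (xh * np.sin(u))**2
                                           - b / n * (xh * np.sin(u))**(2 * n))
    value, _ = quad(f, 0.0, np.pi / 2)
    return 4 * value

for h in (0.2, 0.8):
    print(h, period(h), 2 * np.pi)   # the periods differ from 2*pi and from each other
```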

For the case of n being even, it is straightforward to see that (60) is a time-reversible center for any value of the parameters a and b. To prove the isochronicity condition, we will see that if \(b=\frac{n^2}{(n+1)^2}a^2\) then there exists a transversal system whose Lie bracket with the original system vanishes, and for the converse we will check that if \(b\ne \frac{n^2}{(n+1)^2}a^2\) then the period function is not constant.

Let us assume that \(b=\frac{n^2}{(n+1)^2}a^2,\) and consider the rescaled system (62). According to Proposition 13, this system has a first integral of the form (61). Observe that, since it is a first integral, the condition \(\frac{\partial H}{\partial x} P+\frac{\partial H}{\partial y} Q=0\) must be satisfied. In our case, computing the partial derivatives of the first integral (61) we obtain

$$\begin{aligned} \frac{\partial H}{\partial x} P+\frac{\partial H}{\partial y} Q&=-\Big (n(n-1)x^n A(x,y)+\left( xy-(n+1)x^{n+1}\right) \frac{\partial A}{\partial x}\\&\quad -\left( x^2+n^2 x^{2n}\right) \frac{\partial A}{\partial y}\Big )F(x,y), \end{aligned}$$

where \(F(x,y)=x^2+y^2-2x^ny+x^{2n}\) is the invariant curve of the system found in the proof of Proposition 13. Therefore, the condition that must be fulfilled is

$$\begin{aligned} n(n-1)x^n A(x,y)+\left( xy-(n+1)x^{n+1}\right) \frac{\partial A}{\partial x}-\left( x^2+n^2 x^{2n}\right) \frac{\partial A}{\partial y}=0. \end{aligned}$$
(69)

Let us consider system

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{x}}=xA(x,y),\\ {\dot{y}}=\left( y+(n-1)x^n\right) A(x,y), \end{array}\right. } \end{aligned}$$
(70)

where \(A(x,y)\) is the numerator of the first integral (61) of the system. A straightforward computation shows that the Lie bracket between systems (62) and (70) is exactly the left-hand side of equality (69), so by construction it equals 0 and the system is isochronous due to Theorem 3.
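As an illustration, for \(n=2\) the construction of Proposition 13 gives, up to a constant factor, \(A(x,y)=2x^2-2y-1;\) the sympy sketch below, which assumes this particular A, checks that condition (69) holds and that the Lie bracket of (62) and (70) vanishes.

```python
# sympy check for n = 2 with A = 2*x**2 - 2*y - 1 (the numerator produced by the
# construction of Proposition 13, up to a constant factor): condition (69) holds
# and the Lie bracket of systems (62) and (70) is identically zero.
import sympy as sp

x, y = sp.symbols('x y')
n = 2

A = 2 * x**2 - 2 * y - 1
X = sp.Matrix([-y + (n + 1) * x**n, x + n**2 * x**(2 * n - 1)])   # system (62)
U = sp.Matrix([x * A, (y + (n - 1) * x**n) * A])                  # system (70)

# condition (69)
cond = (n * (n - 1) * x**n * A
        + (x * y - (n + 1) * x**(n + 1)) * sp.diff(A, x)
        - (x**2 + n**2 * x**(2 * n)) * sp.diff(A, y))
print(sp.expand(cond))                                            # 0

# Lie bracket [X, U]
v = sp.Matrix([x, y])
bracket = U.jacobian(v) * X - X.jacobian(v) * U
print(bracket.applyfunc(sp.expand))                               # Matrix([[0], [0]])
```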

Finally, we have to check that if \(b\ne \frac{n^2}{(n+1)^2}a^2,\) then the center is not isochronous. To this end, we will use the Lie bracket method to find the first period constant and check that it only vanishes for \(b=\frac{n^2}{(n+1)^2}a^2.\) System (60) can be rewritten in complex coordinates as

$$\begin{aligned} {\dot{z}}=iz+a \left( \frac{z+w}{2}\right) ^n+{{\,\mathrm{i}\,}}b\left( \frac{z+w}{2}\right) ^{2n-1}, \end{aligned}$$
(71)

which is actually system (46) choosing \(a_m=a,\) \(a_n={{\,\mathrm{i}\,}}b,\) and \(P(z,w)=0,\) and taking into account that (mn) in (46) corresponds to \((n,2n-1)\) in (71). By applying the Lie bracket method analogously to the proof of Theorem 11, one can see that the first nonzero term is that of degree \(2n-1,\) whose coefficient is

$$\begin{aligned} p_{2n-1}={{\,\mathrm{i}\,}}\frac{b}{2^{2n-1}} (2n-2) {{2n-1}\atopwithdelims (){n-1}}-{{\,\mathrm{i}\,}}\frac{a^2}{2^{2n-1}}\frac{n^2(n-1)}{(n+1)^2}{{2n}\atopwithdelims (){n}}. \end{aligned}$$
(72)

By Theorem 6, and using that \((2n-2){{2n-1}\atopwithdelims (){n-1}}=(n-1){{2n}\atopwithdelims (){n}},\) the imaginary part of (72) is the period constant

$$\begin{aligned} T_{2n-2}=\frac{b}{2^{2n-1}} (n-1) {{2n}\atopwithdelims (){n}}- \frac{a^2}{2^{2n-1}}\frac{n^2(n-1)}{(n+1)^2}{{2n}\atopwithdelims (){n}}=\left( b-\frac{n^2}{(n+1)^2}a^2\right) \frac{n-1}{2^{2n-1}}{{2n}\atopwithdelims (){n}}, \end{aligned}$$

which only vanishes for \(b=\frac{n^2}{(n+1)^2}a^2.\) \(\square \)

Observation 15

After presenting the new isochronous family (60) for even n and \(b=\frac{n^2}{(n+1)^2}a^2,\) a natural question is whether these isochronous centers provide a high criticality under perturbation (notice that we do not consider odd n because in that case the only isochronous center is the linear one). To inquire into this question, we find the number of critical periods which can bifurcate from the system by considering only linear parts in the perturbative parameters of the period constants, following the ideas presented in previous works [31, 32]. In particular, we have performed this study for \(n=2,4,6,8\), which correspond to systems of degrees \(2n-1=3,7,11,15\) respectively, and by adding either a time-reversible perturbation with respect to the horizontal axis \((x,y,t)\rightarrow (x,-y,-t)\) or with respect to the vertical axis \((x,y,t)\rightarrow (-x,y,-t)\), none of which breaks the center property. After computing period constants up to first order and calculating the corresponding ranks, we find the number of critical periods presented in the following table. We also show the number of critical periods \((n^2+n-4)/2\) obtained in [32] for a class of holomorphic systems, also with linear parts, which were found with a reversible perturbation with respect to the horizontal axis.

| n | Degree | Critical periods (reversible w.r.t. the horizontal axis) | Critical periods (reversible w.r.t. the vertical axis) | \(\frac{n^2+n-4}{2}\) |
| --- | --- | --- | --- | --- |
| 2 | 3 | 4 | 2 | 4 |
| 4 | 7 | 20 | 13 | 26 |
| 6 | 11 | 44 | 30 | 64 |
| 8 | 15 | 76 | 53 | 118 |

As we can check, the criticality of the isochronous family (60) for even n, obtained up to linear parts in the period constants, is much lower than the one obtained for the holomorphic family in [32], also with linear parts. This leads us to believe that family (60) will not provide a high number of oscillations of the period function, so we do not go further in the study of its criticality.

4.3 Limit Cycles and Critical Periods in the Cubic Liénard Family

The aim of this subsection is to prove Theorem 2, that is, to study the simultaneous bifurcation of limit cycles and critical periods of the cubic Liénard system (9) adding the trace parameter \(\alpha \) as in (1). With the tools introduced in Sect. 2 we can find the first Lyapunov and period constants of system (9), each of them computed assuming that the previous ones vanish:

$$\begin{aligned} \begin{aligned} V_3&=-\frac{\pi }{4}(2 a_2 b_2-3 a_3),&V_5&=\frac{5\pi }{12} a_2 b_2 b_3,\\ T_2&=\frac{\pi }{12} (4 a_2^2+10 b_2^2-9 b_3),&T_3&=\frac{\pi }{12} a_2 (2 a_2 b_2-3 a_3),\\ T_4&=-\frac{\pi }{3456} (3484 a_2^2 b_2^2+4480 b_2^4+81 a_3^2),&T_5&=-\frac{\pi }{162}b_2 (10 a_2^4+3193 a_2^2 b_2^2+4032 b_2^4). \end{aligned}\nonumber \\ \end{aligned}$$
(73)

Let us make the following observation. As we already saw for the quadratic system (8), when the origin is not a center it is not generally true that the first nonzero period constant must have an even subindex. The presented cubic Liénard system is another example of this fact. Comparing \(V_3\) and \(T_3\) we can observe it: for a center we would have \(V_3=0,\) which means \(2 a_2 b_2-3 a_3=0,\) and this implies \(T_3=0;\) however, \(T_3\) can be nonzero if \(V_3\ne 0.\) We can also see, by comparing \(V_5\) and \(T_5,\) that this coincidence of factors between \(V_3\) and \(T_3\) is not a general fact for an arbitrary pair \(V_{2k+1}\) and \(T_{2k+1}.\)

Proof of Theorem 2

(i) Let us start by finding the centers in family (9). To do this we solve the system formed by the first two Lyapunov constants of (9) being equal to 0, that is \(\{V_3=0,V_5=0\},\) and we obtain three solutions:

$$\begin{aligned}&({\dot{x}},{\dot{y}})=(-y,x+b_2 x^2 +b_3 x^3), \end{aligned}$$
(74)
$$\begin{aligned}&({\dot{x}},{\dot{y}})=(-y+a_2 x^2,x+b_3 x^3), \end{aligned}$$
(75)
$$\begin{aligned}&({\dot{x}},{\dot{y}})=\left( -y+a_2 x^2 +\frac{2}{3}a_2 b_2 x^3,x+b_2 x^2\right) . \end{aligned}$$
(76)

Equations (74) and (75) are time-reversible families with respect to the x-axis and the y-axis, respectively, and (76) corresponds to a center due to Theorem 1 from [9]. Observe that such a theorem also gives an alternative proof that (75) has a center at the origin. Therefore, we have seen that for (9) only two Lyapunov constants are necessary to solve the center problem, that is, \(V_3=V_5=0\) implies \(V_{2k+1}=0\) for any \(k\ge 3,\) which is actually a well-known fact.

Now we check that (74) unfolds one critical period. To see this, we find from (73) that for this family \(T_2=\frac{1}{12} \pi \left( 10 b_2^2-9 b_3\right) ,\) \(T_3=0\) (as it is a center), and \(T_4=-\frac{35}{27} \pi b_2^4.\) Then, if \(b_3=\frac{10}{9}b_2^2\) we see that \(T_2=0\) but \(T_4\) can be different from 0, so one critical period unfolds. If we now impose \(T_2=T_4=0,\) automatically \(b_2=b_3=0\) and (74) becomes the linear center, so it is isochronous.

We will also see that for (75) and (76), when \(T_2\) vanishes an isochronous system is obtained, so no critical periods appear; thus the critical period obtained from (74) is maximal. For (75) we have \(T_2=\frac{1}{12} \pi \left( 4 a_2^2-9 b_3\right) ,\) which vanishes for \(b_3=\frac{4}{9} a_2^2;\) this corresponds to system (42), which is isochronous due to Proposition 12. For (76), \(T_2=\frac{1}{12} \pi \left( 4 a_2^2+10 b_2^2\right) ,\) which only vanishes for \(a_2=b_2=0,\) and in this case the system becomes the linear center, hence it is isochronous. This finishes the proof of the statement.

(ii) The first part of the statement follows from the facts that the first two Lyapunov constants in (73) characterize the centers of the Liénard system (9), that they are independent, and that the Bautin ideal \(\langle V_3,V_5 \rangle \) is radical. For more details see [29]. Taking again (73), the simultaneous bifurcation follows by looking for a bi-weak equilibrium point of type [2, 4], that is, solving \(\{\alpha =V_3=T_2=T_3=0\}\) with \(V_5\) and \(T_4\) nonvanishing. Under these conditions, we get the set \(S=\{a_3=\frac{2}{3}a_2 b_2,b_3=\frac{4}{9}a_2^2+\frac{10}{9}b_2^2,a_2b_2\ne 0\}\) and we can check that on S we have

$$\begin{aligned} V_5=\frac{5}{54}\pi a_2 b_2 (2a_2^2+5b_2^2),\quad T_4=-\frac{5}{54}\pi b_2^2(11 a_2^2+14 b_2^2). \end{aligned}$$

We take \(\alpha =0\) and \((a_2,a_3,b_2,b_3)\rightarrow (-3+\varepsilon e_1,6+\varepsilon e_2,-3+\varepsilon e_3,14+\varepsilon e_4).\) Clearly, when \(\varepsilon =0\) we have a bi-weak equilibrium point of type [2, 4]. It can be checked that the linear parts of \(V_3,T_2,\) and \(T_3\) with respect to \(\varepsilon \) are \(V_3^{[1]}=T_3^{[1]}=3\pi \varepsilon (2 e_1+ e_2+2 e_3)/4\) and \(T_2^{[1]}=-\pi \varepsilon (8 e_1+20 e_3+3e_4)/4.\) Therefore, as they are independent, we have one limit cycle and two critical periods of small amplitude. The proof of this statement finishes by adding the parameter \(\alpha \) and using Lemma 4.
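Both the reduction on S and the linear parts above can be reproduced with a short symbolic computation; the sympy sketch below takes the expressions in (73) as input and checks the two claims.

```python
# sympy check of the expressions of V5 and T4 on the set S and of the linear parts
# in epsilon at the bi-weak point (a2, a3, b2, b3) = (-3, 6, -3, 14).
import sympy as sp

a2, a3, b2, b3, eps, e1, e2, e3, e4 = sp.symbols('a2 a3 b2 b3 epsilon e1 e2 e3 e4')

V3 = -sp.pi / 4 * (2 * a2 * b2 - 3 * a3)
V5 = 5 * sp.pi / 12 * a2 * b2 * b3
T2 = sp.pi / 12 * (4 * a2**2 + 10 * b2**2 - 9 * b3)
T3 = sp.pi / 12 * a2 * (2 * a2 * b2 - 3 * a3)
T4 = -sp.pi / 3456 * (3484 * a2**2 * b2**2 + 4480 * b2**4 + 81 * a3**2)

# restriction to S: a3 = 2/3*a2*b2, b3 = 4/9*a2**2 + 10/9*b2**2
S = {a3: sp.Rational(2, 3) * a2 * b2,
     b3: sp.Rational(4, 9) * a2**2 + sp.Rational(10, 9) * b2**2}
print(sp.factor(V5.subs(S)))   # 5*pi*a2*b2*(2*a2**2 + 5*b2**2)/54
print(sp.factor(T4.subs(S)))   # -5*pi*b2**2*(11*a2**2 + 14*b2**2)/54

# linear parts in epsilon at the perturbed bi-weak point
pert = {a2: -3 + eps * e1, a3: 6 + eps * e2, b2: -3 + eps * e3, b3: 14 + eps * e4}
for name, expr in (('V3', V3), ('T2', T2), ('T3', T3)):
    linear = sp.expand(expr.subs(pert)).coeff(eps, 1)
    print(name, sp.factor(linear))
# V3 and T3 give 3*pi*(2*e1 + e2 + 2*e3)/4, while T2 gives -pi*(8*e1 + 20*e3 + 3*e4)/4
```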

(iii) We have already shown that when \(T_2,\) \(T_3,\) and \(T_4\) vanish the system automatically becomes a center, so no isochronous foci exist for this system and the result follows. \(\square \)