1 Introduction

Let us consider a singular point of a germ of analytic vector field \(\varvec{X}\) on \(({\mathbb {C}},0)\). If the singular point is simple, then the germ of vector field is analytically linearizable. If the singular point is multiple, also called parabolic, then the vector field is analytically conjugate to any one of the following normal forms:

$$\begin{aligned} \varvec{X}(x)&=\left( x^{k+1}-\mu x^{2k+1} \right) \frac{\partial }{\partial x},\end{aligned}$$
(1.1)
$$\begin{aligned} \varvec{X}(x)&= \frac{x^{k+1}}{1+\mu x^k}\frac{\partial }{\partial x}, \end{aligned}$$
(1.2)

where \(\mu =\text {Res}_{x=0}\varvec{X}^{-1}\) is the residue of the dual form. The first normal form appears more frequently in the older works of the Russian school. The second one is easier to manipulate: for instance, the rectifying (time) coordinate \(\int \varvec{X}^{-1}\) is simple to calculate. The next natural question is to consider normal forms for unfoldings of germs of analytic vector fields at a singular point. When the singular point is simple, the normal form of an unfolding is linear, and hence unique. When the singular point is parabolic, Kostov proved that the following standard deformation of (1.1) is versal [7]:

$$\begin{aligned} \varvec{X}_1(x,y) =\left( x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_1x+y_0 - (\mu + y_{2k+1}) x^{2k+1}\right) \frac{\partial }{\partial x}. \end{aligned}$$
(1.3)

The proof uses that (1.3) is an infinitesimal deformation of (1.1), and then calls for the machinery of Martinet’s Reduction Lemma (see for instance [2]). The philosophy behind this normal form is two-fold. First, a parabolic point of codimension k is the merging of \(k+1\) simple singular points, each having its own eigenvalue, which is an analytic invariant. Hence it is natural that a full unfolding would involve \(k+1\) parameters. Second, the geometry of an unfolding of a parabolic point is simple, hence the convergence to the normal form.
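
The residue invariant \(\mu \) appearing in (1.1) and (1.2) can be checked directly: the following small sympy sketch, with the hypothetical choice \(k=3\), verifies that the dual forms of both normal forms have residue \(\mu \) at the origin. It is only a sanity check and is not used in the sequel.

```python
# Sanity check (sympy), hypothetical value k = 3: the dual forms of the normal
# forms (1.1) and (1.2) both have residue mu at x = 0.
import sympy as sp

x, mu = sp.symbols('x mu')
k = 3

X1 = x**(k + 1) - mu*x**(2*k + 1)        # coefficient of the normal form (1.1)
X2 = x**(k + 1)/(1 + mu*x**k)            # coefficient of the normal form (1.2)

print(sp.residue(1/X1, x, 0))            # expected output: mu
print(sp.residue(1/X2, x, 0))            # expected output: mu
```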

Kostov’s normal form is very important for many bifurcation problems. For instance, when one studies the unfolding of a parabolic point of a germ of 1-diffeomorphism of \(({\mathbb {C}},0)\) (i.e. a multiple fixed point), then a formal normal form is given by the time one map of a vector field of the form (1.3). The change of coordinate to this normal form diverges and the obstruction to the convergence is the classifying object of the unfoldings (an extension of the Écalle–Voronin modulus to sectors in parameter space, see for instance [10, 15], and [11, 12]). The same normal form is used to classify germs of unfoldings of 2-dimensional vector fields in \(({\mathbb {C}}^2,0)\) with either a saddle-node or a resonant saddle point: indeed the vector fields are orbitally analytically equivalent if and only if the holonomy maps of the separatrices (of the strong separatrices in the case of a saddle-node) are conjugate (see [13] and [14]).

Very soon, other normal forms equivalent to (1.3) appeared in the literature without proof:

$$\begin{aligned} \varvec{X}_2(x,y)&= \left( x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_1x+y_0\right) \!\left( 1 - (\mu + y_{2k+1}) x^k\right) \frac{\partial }{\partial x},\end{aligned}$$
(1.4)
$$\begin{aligned} \varvec{X}_3(x,y)&= \frac{x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_1x+y_0}{1+(\mu +y_{2k+1}) x^k}\frac{\partial }{\partial x}, \end{aligned}$$
(1.5)

and the corresponding statements are all referred to as Kostov’s theorem. In practice, most authors use the normal form (1.5), which is much more suitable for computations.

The paper [14] indirectly suggests that the normal form (1.5) is universal by showing that the normal form (1.5) associated to a generic k-parameter unfolding of a parabolic point of codimension k is unique up to the action of the group \({\mathbb {Z}}/k{\mathbb {Z}}\) of rotations of order k. This uniqueness property is extremely important in all classification problems of unfoldings under conjugacy or analytic equivalence: it shows that the parameters of the normal forms are essentially unique and hence analytic invariants of the unfoldings. Hence, to show that two unfoldings are analytically equivalent, the first step is to change to the canonical parameters and it then suffices to study the equivalence problem for fixed values of the parameters. In this paper, we provide self-contained proofs that the three normal forms (1.3), (1.4) and (1.5) are unique up to the action of the group \({\mathbb {Z}}/k{\mathbb {Z}}\), and universal. These self-contained proofs are useful for further generalizations, for instance when the vector field has some symmetry or reversibility property, and also for the formal case, the \({\mathcal {C}}^\infty \)-case, and mixed cases where the variable is analytic and the dependence on the parameters is only real-analytic: this mixed case occurs when one considers bifurcations of antiholomorphic parabolic fixed points (i.e. \(f(x) = {\bar{x}} \pm {\bar{x}}^{k+1} + o({\bar{x}}^{k+1})\)).

As a second part of the paper, we briefly address the real analytic, formal, and smooth cases. In the first two cases, each of the corresponding unfoldings is universal. In the smooth case, we give an explicit example showing that the unfolding is only versal and cannot be universal: namely, the two vector fields \(\varvec{X}(x,\lambda )=(x^2+\lambda ^2)\tfrac{\partial }{\partial x}\) and \(\varvec{X}'(x,\lambda )=(x^2+(\lambda +\omega (\lambda ))^2)\tfrac{\partial }{\partial x}\) are \({\mathcal {C}}^\infty \)-conjugate when \(\omega (\lambda )\) is infinitely flat at \(\lambda =0\). Let us explain one difference with the analytic case. In the analytic case, the eigenvalues at the singular points are complex \({\mathcal {C}}^1\)-invariants and, for a given set of \(k+1\) eigenvalues at the singular points, there are only a finite number of solutions for the \(y_j\) in any of the normal forms (1.3), (1.4) and (1.5) with the prescribed eigenvalues. In the smooth case, only the eigenvalues at the real singular points are \({\mathcal {C}}^1\)-invariants and we can smoothly glue anything at the complex singular points. The two systems of our counterexample have purely imaginary singular points. An open question is whether we have universal unfoldings in the smooth case when all the singular points are real.

The original articles [7, 8] of Kostov cover a much more general case of deformations of differential forms of real power \(\alpha \). However, our goal is not to redo what has been done well by Kostov, but to provide an elementary and self-contained proof in the case of vector fields, that is power \(\alpha =-1\), which is why we do not discuss the other cases. Nevertheless, we believe that our proof of the uniqueness in the formal/analytic case, which is missing in Kostov’s article, could be well adapted to general \(\alpha \).

2 The Analytic Theory

The following definitions are classical: see for instance [1].

Definition 2.1

  (1)

    Two germs of analytic (resp. real analytic, formal, \({\mathcal {C}}^\infty \)) parametric families of vector fields \(\varvec{X}(x,\lambda )\), \(\varvec{X}'(x',\lambda )\) depending on the same parameter \(\lambda \) are conjugate if there exists an analytic (resp. real analytic, formal, \({\mathcal {C}}^\infty \)) invertible change of coordinate

    $$\begin{aligned} x'= & {} \phi (x,\lambda ), \end{aligned}$$
    (2.1)

    changing one family to the other. We write \(\varvec{X}=\phi ^*\varvec{X}'\) as a pullback of \(\varvec{X}'\). In the analytic, real analytic or \({\mathcal {C}}^\infty \) cases, what is meant is a change of coordinate defined on a fixed neighborhood of the origin in x-space for all values of the parameter in a fixed neighborhood of the origin in parameter space. In the formal case, the invertible change of coordinate is an invertible series in x with coefficients given by power series in \(\lambda \).

  (2)

    Let \(\lambda \mapsto \lambda '=\psi (\lambda )\) be a germ of an analytic (resp. real analytic, formal, \({\mathcal {C}}^\infty \)) map (not necessarily invertible). Then the family \(\varvec{X}(x,\lambda )=\varvec{X}'(x,\psi (\lambda ))\) is said to be induced from \(\varvec{X}'\).

  (3)

    A parametric family of vector fields \(\varvec{X}(x,\lambda )\) is a deformation of \(\varvec{X}(x,0)\). Two deformations \(\varvec{X}(x,\lambda )\), \(\varvec{X}'(x,\lambda )\) of the same initial vector field \(\varvec{X}(x,0)=\varvec{X}'(x,0)\) with the same parameter \(\lambda \) are equivalent (as deformations) if the two families are conjugate by means of an invertible transformation (2.1) with \(\phi (x,0)\equiv x\).

  (4)

    A deformation \(\varvec{X}'(x,\lambda ')\) of \(\varvec{X}'(x,0)\) is versal if any other deformation \(\varvec{X}(x,\lambda )\) of \(\varvec{X}'(x,0)= \varvec{X}(x,0)\) is equivalent to one induced from it. It is universal if the inducing map \(\lambda '=\psi (\lambda )\) is unique.

In this section we provide a self-contained proof of the following theorem:

Theorem 2.2

In the analytic case, for \(k\ge 1\), the deformation (1.5) of (1.2) is universal.

Corollary 2.3

In the analytic case, for \(k\ge 1\), the deformations (1.3) and (1.4) of (1.1) are universal.

As explained in the introduction, the proof of the versality is due to Kostov (for (1.3) see [7], while for (1.5) it has often been stated in the literature without explicit proof, see e.g. [3, p.116]), and the uniqueness comes from [14]. Theorem 2.2 can be rephrased in more precise terms as the following theorem, of which it is a direct consequence.

Theorem 2.4

 

  (i)

    (Variant of Kostov’s theorem [7]) Any analytic germ of a family of vector fields \({\tilde{\varvec{X}}}(x,\lambda )\) depending on a multi-parameter \(\lambda \) and unfolding \({\tilde{\varvec{X}}}(x,0)=x^{k+1}\frac{1}{\omega (x)}\frac{\partial }{\partial x}\), \(\omega (0)\ne 0\), \(k\ge 0\), is analytically conjugate to a family of the form

    $$\begin{aligned} \varvec{X}(x,\lambda )&=c(\lambda )x\frac{\partial }{\partial x},&k&=0,\end{aligned}$$
    (2.2)
    $$\begin{aligned} \varvec{X}(x,\lambda )&=\frac{x^{k+1}+y_{k-1}(\lambda )x^{k-1}+\cdots +y_0(\lambda )}{1+\mu (\lambda )x^k}\frac{\partial }{\partial x},&k&\ge 1, \end{aligned}$$
    (2.3)

    with \(y_0(0)=\cdots =y_{k-1}(0)=0\), where

    $$\begin{aligned} \mu (\lambda )= & {} -\text {Res}_{x=\infty }\varvec{X}(x,\lambda )^{-1} \end{aligned}$$

    is the sum of the residues of \({\tilde{\varvec{X}}}(x,\lambda )^{-1}\) over its local polar locus around the origin.

  (ii)

    (Rousseau, Teyssier [14, Theorem 3.5], [6, Theorem 7.2]) The normal forms (2.2) for \(k=0\) and (2.3) for \(k=1\) are unique, while the normal form (2.3) for \(k > 1\) is unique up to the action of \(x\mapsto e^{2\pi i\frac{l}{k}}x\), \(l\in {\mathbb {Z}}_k\),

    $$\begin{aligned} y_{j}(\lambda )\mapsto e^{-2\pi i\frac{(j-1)l}{k}}y_{j}(\lambda ),\quad j=0,\ldots ,k-1. \end{aligned}$$

    More precisely, if \((x,\lambda )\mapsto (\phi (x,\lambda ),\lambda )\) is a transformation between two vector fields (2.3), then

    $$\begin{aligned} \phi (x,\lambda )= & {} {\left\{ \begin{array}{ll} e^{t(\lambda )}x, &{} \text {if }\ k=0,\\ e^{2\pi i\frac{l}{k}} \exp (t(\lambda )\varvec{X})(x,\lambda ), &{} \text {if }\ k\ge 1,\end{array}\right. } \end{aligned}$$

    for some \(l\in {\mathbb {Z}}_k\) and some analytic germ \(t(\lambda )\).
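
Before turning to the proof, here is a small symbolic sanity check of Theorem 2.4, written as a sympy sketch with the hypothetical values \(k=3\), \(l=1\): it verifies the action of part (ii) on the coefficients \(y_j\) when the normal form (2.3) is rewritten in the rotated coordinate \(x'=e^{2\pi i l/k}x\), and the identity \(\mu =-\text {Res}_{x=\infty }\varvec{X}^{-1}\) of part (i). It is only an illustration, not part of the argument.

```python
# Sanity check (sympy) of Theorem 2.4 for the hypothetical values k = 3, l = 1.
import sympy as sp

k, l = 3, 1
x, u_, mu = sp.symbols('x u_ mu')
y = sp.symbols('y0:%d' % k)
c = sp.exp(2*sp.pi*sp.I*sp.Rational(l, k))            # the rotation e^{2*pi*i*l/k}

X = (x**(k + 1) + sum(y[j]*x**j for j in range(k)))/(1 + mu*x**k)   # normal form (2.3)

# Part (ii): the field written in the rotated coordinate x' = c*x (renamed back to x)
# is again of the form (2.3), with y_j replaced by e^{-2*pi*i*(j-1)*l/k} y_j.
rotated = sp.simplify(c*X.subs(x, x/c))
expected = (x**(k + 1) + sum(sp.exp(-2*sp.pi*sp.I*sp.Rational((j - 1)*l, k))*y[j]*x**j
                             for j in range(k)))/(1 + mu*x**k)
print(sp.simplify(rotated - expected))                 # expected output: 0

# Part (i): mu = -Res_{x=oo} X^{-1}, computed via Res_{x=oo} f = Res_{u=0} -f(1/u)/u^2.
res_inf = sp.residue(sp.cancel(-(1/u_**2)*(1/X).subs(x, 1/u_)), u_, 0)
print(sp.simplify(res_inf + mu))                       # expected output: 0
```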

The first step in proving Theorem 2.4 is the following “prenormal form”, which can also be found for example in [11, Proposition 5.13].

Proposition 2.5

(Prenormal form) Any germ of a family of vector fields \({\tilde{\varvec{X}}}(x,\lambda )\) depending on a multi-parameter \(\lambda \) and unfolding \({\tilde{\varvec{X}}}(x,0)=(cx^{k+1}+\cdots )\frac{\partial }{\partial x}\), \(k\ge 1\), is analytically conjugate to a family of the form

$$\begin{aligned} \varvec{X}(x,\lambda )= & {} \frac{x^{k+1}+y_{k-1}(\lambda )x^{k-1} +\cdots +y_0(\lambda )}{1+u_0(\lambda ) +\cdots +u_{k-1}(\lambda )x^{k-1}+\mu (\lambda )x^k}\frac{\partial }{\partial x}, \end{aligned}$$
(2.4)

where \(y_j(0)=0=u_j(0)\), \(j=0,\ldots ,k-1\), and

$$\begin{aligned} \mu (\lambda )= & {} -\text {Res}_{x=\infty }\varvec{X}(x,\lambda )^{-1}. \end{aligned}$$

Proof

First, let us transform \({\tilde{\varvec{X}}}(x,0)=x^{k+1}\frac{1}{\omega (x)}\frac{\partial }{\partial x}\) to a form \(\varvec{X}(x,0)=\frac{x^{k+1}}{1+\mu (0)x^k}\frac{\partial }{\partial x}\). Up to a linear change \(x\mapsto ax\), \(a\in {\mathbb {C}}\smallsetminus \{0\}\), we can assume \(\omega (0)=1\). Write \(\omega (x)=1+\omega _1x+\cdots +\omega _kx^k+x^{k+1}r(x)\), let \(\mu (0):=\omega _k\), and let

$$\begin{aligned} \alpha (x):= & {} \int \left( {\tilde{\varvec{X}}}(x,0)^{-1}-\varvec{X}(x,0)^{-1}\right) =-\tfrac{\omega _1}{k-1}x^{1-k}-\cdots -\frac{\omega _{k-1}}{1}x^{-1}+\int _0^xr(s)\,ds. \end{aligned}$$

Then \(\alpha \) is a meromorphic germ with pole of order at most \(k-1\) at the origin, and the desired transformation is provided by Lemma 2.6 below.

By Weierstrass preparation and division theorem, any family \({\tilde{\varvec{X}}}(x,\lambda )\) can be written in the form \({\tilde{\varvec{X}}}(x,\lambda )=\frac{P(x,\lambda )}{Q(x,\lambda ) +P(x,\lambda )R(x,\lambda )}\frac{\partial }{\partial x}\), for some Weierstrass polynomials \(P(x,\lambda )=x^{k+1}+y_{k-1}(\lambda )x^{k-1}+\cdots +y_0(\lambda )\), \(Q(x,\lambda )=1+u_0(\lambda )+\cdots +u_{k-1}(\lambda )x^{k-1}+\mu (\lambda )x^k\), and some analytic germ \(R(x,\lambda )\). Let \(\alpha (x,\lambda )=\int R(x,\lambda )dx\), then

$$\begin{aligned} {\tilde{\varvec{X}}}(x,\lambda )= & {} \frac{\varvec{X}(x,\lambda )}{1+\varvec{X}(x,\lambda ).\alpha (x,\lambda )} \end{aligned}$$

for \(\varvec{X}(x,\lambda )=\frac{P(x,\lambda )}{Q(x,\lambda )}\frac{\partial }{\partial x}\) of the form (2.4). The result follows from Lemma 2.6. \(\square \)
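
The explicit primitive \(\alpha \) used in the first step of the proof can be checked symbolically. The following sympy sketch, with the hypothetical value \(k=3\) and a generic analytic remainder \(r\), verifies that the displayed formula for \(\alpha \) is indeed a primitive of \({\tilde{\varvec{X}}}(x,0)^{-1}-\varvec{X}(x,0)^{-1}\); it is a sanity check only.

```python
# Sanity check (sympy), hypothetical value k = 3, of the formula for alpha in the
# proof of Proposition 2.5: d(alpha)/dx equals the difference of the dual forms.
import sympy as sp

x, s = sp.symbols('x s')
k = 3
w = sp.symbols('w1:%d' % (k + 1))                      # omega_1, ..., omega_k
r = sp.Function('r')                                   # generic analytic remainder

omega = 1 + sum(w[j - 1]*x**j for j in range(1, k + 1)) + x**(k + 1)*r(x)
dual_diff = omega/x**(k + 1) - (1 + w[k - 1]*x**k)/x**(k + 1)   # X~^{-1} - X^{-1}, mu(0) = omega_k

alpha = sum(-w[j - 1]/sp.Integer(k - j)*x**(j - k) for j in range(1, k)) \
        + sp.Integral(r(s), (s, 0, x))                 # the formula from the proof

print(sp.simplify(sp.diff(alpha, x) - dual_diff))      # expected output: 0
```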

The following lemma is classical (see for example [16, Proposition 2.2] to which it is essentially equivalent).

Lemma 2.6

Let \(\varvec{X}_0,\ \varvec{X}_1\) be two germs of analytic families of vector fields vanishing at the origin, and assume there exists an analytic germ \(\alpha (x,\lambda )\) such that \(\varvec{X}_1=\frac{\varvec{X}_0}{1+\varvec{X}_0.\alpha }\). Then the flow map of the vector field \(\varvec{Y}(x,t,\lambda )=\frac{\partial }{\partial t}-\frac{\alpha \varvec{X}_0}{1+t\varvec{X}_0.\alpha }\)

$$\begin{aligned} \phi _1(x,\lambda )= & {} x\circ \exp (\varvec{Y})\Big |_{t=0}, \end{aligned}$$

conjugates \(\varvec{X}_1\) with \(\varvec{X}_0=\phi _1^*\varvec{X}_1\).

The statement is also true if \(\alpha \) is meromorphic such that \(\varvec{X}_0.\alpha \) and \(\alpha \varvec{X}_0\) are analytic and vanish for \((x,\lambda )=0\) (so that the flow of \(\varvec{Y}\) is defined for all \(t\in [0,1]\)).

Proof

If \(\varvec{X}_t:=\frac{\varvec{X}_0}{1+t\varvec{X}_0.\alpha }\) and \(\varvec{Y}=\frac{\partial }{\partial t}-\frac{\alpha \varvec{X}_0}{1+t\varvec{X}_0.\alpha }\) are viewed as vector fields in \(x,t,\lambda \), then a direct computation gives \([\varvec{Y},\varvec{X}_t]=0\), which means that the flow \(\exp (s\varvec{Y}):(x,t)\mapsto (\Phi _s(x,t),t+s)\) of \(\varvec{Y}\) preserves \(\varvec{X}_t\): \(\varvec{X}_t=\Phi _s^*\varvec{X}_{t+s}\). In particular \(\phi _s(x):=\Phi _s(x,0)\) is such that \(\phi _s^*\varvec{X}_s=\varvec{X}_0\); taking \(s=1\) yields the statement. \(\square \)
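
The commutation relation \([\varvec{Y},\varvec{X}_t]=0\) used above is a straightforward but slightly tedious computation; the following sympy sketch verifies it for a generic one-dimensional field \(\varvec{X}_0=p(x)\frac{\partial }{\partial x}\) and a generic germ \(\alpha (x)\) (the parameter \(\lambda \) is suppressed, as it plays no role in the identity).

```python
# Symbolic check (sympy) of [Y, X_t] = 0 in the proof of Lemma 2.6.
import sympy as sp

x, t = sp.symbols('x t')
p = sp.Function('p')(x)                 # X_0 = p(x) d/dx
a = sp.Function('alpha')(x)             # the germ alpha

X0a = p*sp.diff(a, x)                   # X_0.alpha
Xt = p/(1 + t*X0a)                      # d/dx-coefficient of X_t = X_0/(1 + t X_0.alpha)
Yx = -a*p/(1 + t*X0a)                   # d/dx-coefficient of Y (its d/dt-coefficient is 1)

# [Y, X_t] has only a d/dx-component:  Y(Xt) - X_t(Yx)
bracket = sp.diff(Xt, t) + Yx*sp.diff(Xt, x) - Xt*sp.diff(Yx, x)
print(sp.simplify(bracket))             # expected output: 0
```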

The following proposition shows how to find a change of coordinates that allows one to get rid of the quantities \(u_i(\lambda )\) in (2.4).

Proposition 2.7

Consider two families of vector fields \(\varvec{X}_0\) and \(\varvec{X}_1\) of the form

$$\begin{aligned} \varvec{X}_t= & {} \frac{x^{k+1}+y_{k-1}x^{k-1}+\cdots +y_0}{1+t(u_0+\cdots +u_{k-1}x^{k-1})+\mu x^k}\frac{\partial }{\partial x}, \end{aligned}$$
(2.5)

depending on parameters \((y,u,\mu )\). Then there exists an analytic transformation \((x,y)\mapsto \big (\phi (x,y,u,\mu ),\psi (y,u,\mu )\big )\), tangent to the identity at \((x,y)=0\), that conjugates \(\varvec{X}_1\) to \(\varvec{X}_0\).

Proof

Let \(\varvec{X}_t=\frac{P(x,y)}{Q(x,t,u,\mu )}\frac{\partial }{\partial x}\) be as in (2.5) above. We want to construct a family of transformations between \(\varvec{X}_0\) and \(\varvec{X}_t\), depending analytically on \(t\in [0,1]\) and defined by the flow of a vector field \(\varvec{Y}\) of the form

$$\begin{aligned} \varvec{Y}= & {} \frac{\partial }{\partial t}+\sum _{j=0}^{k-1}\xi _j(t,y,u,\mu )\frac{\partial }{\partial y_j}+ \frac{H(x,t,y,u,\mu )}{Q(x,t,u,\mu )}\frac{\partial }{\partial x}, \end{aligned}$$

for some \(\xi _j\) and H, such that \([\varvec{Y},\varvec{X}_t]=0\), that is

$$\begin{aligned} -\frac{UP}{Q^2}+\frac{\Xi }{Q}+\frac{H}{Q}\frac{\partial }{\partial x} \left( \frac{P}{Q}\right) -\frac{P}{Q}\frac{\partial }{\partial x}\left( \frac{H}{Q}\right) =0, \end{aligned}$$

where \(U(x,u)=u_0+\cdots +u_{k-1}x^{k-1}\) and \(\Xi (x,\xi )=\xi _0+\cdots +\xi _{k-1}x^{k-1}\), which is equivalent to

$$\begin{aligned} H\tfrac{\partial }{\partial x}P-P\tfrac{\partial }{\partial x}H+Q\Xi= & {} UP. \end{aligned}$$
(2.6)

We see that we can choose H as a polynomial in x:

$$\begin{aligned} H= & {} h_0(t,y,u,\mu )+\cdots +h_k(t,y,u,\mu )x^k. \end{aligned}$$

Write \(UP=b_0(y,u)+\cdots +b_{2k}(y,u)x^{2k}\); then equation (2.6) amounts to the identity of two polynomials of degree at most 2k in x. Identifying the coefficients of the same degree yields a non-homogeneous linear system of \(2k+1\) equations for the \(2k+1\) coefficients \((\xi ,h)=(\xi _0,\ldots ,\xi _{k-1},h_0,\ldots , h_k)\) as functions of \(b=(b_0,\ldots ,b_{2k})\):

$$\begin{aligned} A(t,y,u,\mu )\begin{pmatrix}\xi \\ h\end{pmatrix}=b(y,u). \end{aligned}$$

For \(y=u=0\) the equation (2.6) is

$$\begin{aligned} (k+1)x^kH-x^{k+1}\tfrac{\partial }{\partial x}H+(1+\mu x^k)\Xi =0, \end{aligned}$$

hence

$$\begin{aligned} A(t,0,0,\mu )= & {} \begin{pmatrix} 1&{}&{} &{}0&{}&{} &{}0\\ &{}\ddots &{} &{}&{}\ddots &{} &{}\\ &{}&{}1 &{}&{}&{}0 &{}0\\ \mu &{}&{} &{}k+1&{}&{} &{}0\\ &{}\ddots &{} &{}&{} \ddots &{} &{}\\ &{}&{} \mu &{}&{}&{}2 &{}0\\ 0&{}\ldots &{}0 &{}0&{} \ldots &{}0 &{}1 \end{pmatrix}. \end{aligned}$$

This means that \(A(t,y,u,\mu )\) is invertible for \((t,\mu )\) in any compact subset of \({\mathbb {C}}\times {\mathbb {C}}\) provided \(|y|,\ |u|\) are small enough. Since \(b(0,0)=0\), the constructed vector field \(\varvec{Y}(x,t,y,u,\mu )\) satisfies \(\varvec{Y}(x,t,0,0,\mu )=\frac{\partial }{\partial t}\), and its flow is well-defined for all \(|t|\le 1\) as long as \(|y|,\ |u|\) are small enough; the time-1 flow map then provides the desired transformation \((x,y)\mapsto \big (\phi (x,y,u,\mu ),\psi (y,u,\mu )\big )\) conjugating \(\varvec{X}_1\) to \(\varvec{X}_0\). \(\square \)
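
The linear system behind (2.6) is easy to set up explicitly. The following sympy sketch, with the hypothetical value \(k=2\), builds the matrix \(A(t,y,u,\mu )\), displays it at \(y=u=0\) (compare with the matrix above), and checks that its determinant there is \((k+1)!\), hence nonzero; it is only an illustration of the argument.

```python
# Sketch (sympy), hypothetical value k = 2, of the linear system coming from (2.6).
import sympy as sp

k = 2
x, t, mu = sp.symbols('x t mu')
y = sp.symbols('y0:%d' % k)
u = sp.symbols('u0:%d' % k)
xi = sp.symbols('xi0:%d' % k)
h = sp.symbols('h0:%d' % (k + 1))

P = x**(k + 1) + sum(y[j]*x**j for j in range(k))
Q = 1 + t*sum(u[j]*x**j for j in range(k)) + mu*x**k
U = sum(u[j]*x**j for j in range(k))
Xi = sum(xi[j]*x**j for j in range(k))
H = sum(h[j]*x**j for j in range(k + 1))

lhs = sp.expand(H*sp.diff(P, x) - P*sp.diff(H, x) + Q*Xi - U*P)       # (2.6) as lhs = 0
eqs = [sp.Eq(lhs.coeff(x, j), 0) for j in range(2*k + 1)]
A, b = sp.linear_eq_to_matrix(eqs, list(xi) + list(h))

A0 = A.subs({v: 0 for v in y + u})
print(A0)                  # expected: the matrix A(t,0,0,mu) displayed above, for k = 2
print(A0.det())            # expected output: 6 = (k+1)!
```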

Proof of Theorem 2.4

  (i)

    The existence of an analytic normalizing transformation to (2.3) when \(k\ge 1\) follows directly from Propositions 2.5 and 2.7. For \(k=0\) it follows from Lemma 2.6.

  (ii)

    Let us prove the uniqueness. For \(k=0\) it is obvious. For \(k>0\), let \(\phi (x,\lambda )\) be a transformation between \(\varvec{X}=\frac{P(x,\lambda )}{1+\mu (\lambda ) x^k}\frac{\partial }{\partial x}\) and \(\varvec{X}'=\frac{P'(x,\lambda )}{1+\mu '(\lambda ) x^k}\frac{\partial }{\partial x}\), preserving the parameter \(\lambda \), and such that \(\phi ^*\varvec{X}=\varvec{X}'\). By the invariance of the residue, \(\mu (\lambda )=\mu '(\lambda )\). Write \(\phi (x,0)=cx+\cdots \) for some \(c\ne 0\); since \(\phi (\cdot ,0)\) conjugates two parabolic germs with leading coefficient 1, necessarily \(c^k=1\), i.e. \(c=e^{2\pi i\frac{l}{k}}\) for some \(l\in {\mathbb {Z}}_k\). Up to precomposition with the map \(x\mapsto cx\), we can assume that \(c=1\) and that \(\phi (x,0)=x+\cdots \) is tangent to the identity. Let

    $$\begin{aligned} G(x,t, \lambda )= & {} \exp (-t \varvec{X})\circ \phi (x,\lambda ), \end{aligned}$$

    and

    $$\begin{aligned} K(t,\lambda )= & {} \frac{\partial ^{k+1}G}{\partial x^{k+1}}\big |_{x=0}. \end{aligned}$$

    The map K is analytic and \(\frac{\partial K}{\partial t}(t,0)= -(k+1)!\ne 0\). For \(\lambda =0\), there exists \(t_0\) such that \(K(t_0,0)=0\) (in fact \(t_0=\frac{1}{(k+1)!}\frac{\partial ^{k+1}\phi }{\partial x^{k+1}}(0,0)\) since \(\exp (t\varvec{X}(x,0))=x+tx^{k+1}+\ldots \)). By the implicit function theorem, there exists a unique function \(t(\lambda )\) such that \(K(t(\lambda ), \lambda )\equiv 0\). Then considering the new transformation \(\psi =\exp (-t(\lambda )\varvec{X})\circ \phi \), it suffices to prove that \(\psi \equiv id\). This is done by the following induction. Let \(\psi (x,\lambda )=x+f(x,\lambda )\), where \(\frac{\partial ^{k+1} f}{\partial x^{k+1}}\big |_{x=0}\equiv 0\). Denote by \({\mathcal {I}}_\lambda \) the ideal of analytic functions of \((x,\lambda )\) that vanish when \(\lambda =0\). To show that \(\psi (x,\lambda )\equiv x\) it suffices to show that \(f\in {\mathcal {I}}_\lambda ^n\) for all n. For \(\lambda =0\) both vector fields \(\varvec{X}(x,0)\) and \(\varvec{X}'(x,0)\) are equal to \(\frac{x^{k+1}}{1+\mu (0)x^k}\frac{\partial }{\partial x}\), and it is easy to verify that \(\psi (x,0)\equiv x\) (for instance using power series), which gives us the induction base \(f\in {\mathcal {I}}_\lambda \). Suppose now that \(f\in {\mathcal {I}}_\lambda ^n\). Developing the right-hand side of the transformation equation

    $$\begin{aligned} \tfrac{P}{1+\mu x^k}\tfrac{\partial }{\partial x}\psi= & {} \tfrac{P'\circ \psi }{1+\mu \psi ^k}, \end{aligned}$$

    we have

    $$\begin{aligned} \tfrac{P}{1+\mu x^k}\cdot \tfrac{\partial }{\partial x}(x+f)= & {} \tfrac{P'}{1+\mu x^k}+ f\cdot \tfrac{\partial }{\partial x}\tfrac{P'}{1+\mu x^k}\mod \mathcal {I}_\lambda ^{n+1}, \end{aligned}$$

    from which

    $$\begin{aligned} P-P'= & {} -x^{k+1}\tfrac{\partial }{\partial x}f +f\cdot \big ((k+1)x^k-\tfrac{k\mu (0)x^{2k}}{1+\mu (0)x^k}\big ) \mod \mathcal {I}_\lambda ^{n+1}. \end{aligned}$$

    The left-hand side is a polynomial of degree \(\le k-1\) in x, while the right-hand side is a power series of order \(\ge k\) in x; hence both sides vanish modulo \({\mathcal {I}}_\lambda ^{n+1}\). Therefore on the left side \(P=P'\mod \mathcal {I}_\lambda ^{n+1}\), while the right side, after multiplication by \(\tfrac{1+\mu (0)x^k}{x^k}\), can be rewritten as

    $$\begin{aligned} -(1+\mu (0)x^k)x\tfrac{\partial }{\partial x}f +f\cdot ((k+1)+\mu (0)x^{k})\equiv 0 \mod \mathcal {I}_\lambda ^{n+1}, \end{aligned}$$

    which, upon substituting \(f=\sum _{j=0}^\infty f_j x^j\), yields

    $$\begin{aligned} \sum _{j=0}^{k-1} (k+1-j)f_j x^{j} +\sum ^\infty _{j=k} (k+1-j)(f_j+\mu (0)f_{j-k})x^{j} \equiv 0 \mod \mathcal {I}_\lambda ^{n+1}, \end{aligned}$$

    from which we get, by induction on j, that all \(f_j\in {\mathcal {I}}_\lambda ^{n+1}\), since \(f_{k+1}\equiv 0\). \(\square \)
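
The last displayed identity of the proof is elementary to verify; the following sympy sketch, with the hypothetical value \(k=2\) and a truncated generic series \(f\), checks the coefficients.

```python
# Symbolic check (sympy), hypothetical value k = 2, of the identity
#   -(1 + mu0*x^k)*x*f' + f*((k+1) + mu0*x^k)
#     = sum_{j<k} (k+1-j) f_j x^j + sum_{j>=k} (k+1-j)(f_j + mu0 f_{j-k}) x^j.
import sympy as sp

k, N = 2, 8                                        # N = truncation order of f
x, mu0 = sp.symbols('x mu0')
f = sp.symbols('f0:%d' % (N + 1))
fser = sum(f[j]*x**j for j in range(N + 1))

expr = sp.expand(-(1 + mu0*x**k)*x*sp.diff(fser, x) + fser*((k + 1) + mu0*x**k))
for j in range(N + 1):                             # coefficients up to the truncation order
    expected = (k + 1 - j)*(f[j] + (mu0*f[j - k] if j >= k else 0))
    assert sp.expand(expr.coeff(x, j) - expected) == 0
print("identity verified for the coefficients of x^0, ..., x^%d" % N)
```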

Proof of Corollary 2.3

Consider the two families (1.3) and (1.5)

$$\begin{aligned} \varvec{X}_1(x,y)&= \left( x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_1x+y_0 - (\mu + y_{2k+1}) x^{2k+1}\right) \frac{\partial }{\partial x},\\ \varvec{X}_3(x',y')&= \frac{x'^{k+1}+y'_{k-1} x'^{k-1} + \cdots +y'_1x'+y'_0}{1+(\mu +y'_{2k+1}) x'^k}\frac{\partial }{\partial x'}. \end{aligned}$$

By Theorem 2.4 we know that there exists a map

$$\begin{aligned} x'= & {} \phi (x,y),\quad y_j'=\psi _j(y),\ j=0,\ldots ,k-1,\ 2k+1, \end{aligned}$$

such that \(\varvec{X}_1(x,y)=\phi ^*\varvec{X}_3(x,\psi (y))\), that is, such that

$$\begin{aligned} \begin{aligned}&\frac{\phi ^{k+1}+\psi _{k-1} \phi ^{k-1} + \cdots +\psi _1\phi +\psi _0}{1+(\mu +\psi _{2k+1}) \phi ^k} \\&\qquad \qquad =\left( x^{k+1}+y_{k-1}x^{k-1} + \cdots +y_1 x+y_0 - (\mu + y_{2k+1})x^{2k+1}\right) \frac{\partial }{\partial x}\phi . \end{aligned} \end{aligned}$$

We want to show that \(\psi \) is invertible.

For \(y=0\) we have

$$\begin{aligned} \frac{\phi (x,0)^{k+1}}{1+\mu \phi (x,0)^k}=\left( x^{k+1}-\mu x^{2k+1}\right) \frac{\partial }{\partial x}\phi (x,0), \end{aligned}$$

and (up to a pre-composition with a flow map of \(\varvec{X}_1\) killing the coefficient of \(x^{k+1}\) in \(\phi (x,0)\)) we can assume that

$$\begin{aligned} \phi (x,0)= & {} x+\mu ^2\tfrac{1}{k}x^{2k+1}+O(x^{2k+2}). \end{aligned}$$

Write \(\phi (x,y)=\phi (x,0)+f(x,y)\), with \(f(x,y)=\sum _{l=0}^\infty f_l(y)x^l\), and denote \({\mathcal {I}}_y\) the ideal of functions that vanish when \(y=0\). Then calculating modulo \({\mathcal {I}}_y^2\):

$$\begin{aligned} \begin{aligned}&\frac{\phi (x,0)^{k+1}+\psi _{k-1} \phi (x,0)^{k-1} + \cdots +\psi _1\phi (x,0)+\psi _0}{1+(\mu +\psi _{2k+1}) \phi (x,0)^k}\\&\qquad +\frac{(k+1)\phi (x,0)^k+\mu \,\phi (x,0)^{2k}}{(1+\mu \,\phi (x,0)^k)^2}f(x,y)\\&\quad =\left( x^{k+1}+y_{k-1}x^{k-1} + \cdots +y_1 x+y_0 - (\mu + y_{2k+1})x^{2k+1}\right) \frac{\partial }{\partial x}\phi (x,0)\\&\qquad +\left( x^{k+1}-\mu x^{2k+1}\right) \frac{\partial }{\partial x}f(x,y) \mod \mathcal {I}_y^2. \end{aligned} \end{aligned}$$

Comparing the coefficients of \(x^j\), \(j=0,\ldots ,k-1\), on both sides we have

$$\begin{aligned} \psi _j=y_j\mod \mathcal {I}_y^2,\quad j=0,\ldots ,k-1, \end{aligned}$$

for \(j=k+1\)

$$\begin{aligned} -\psi _1\mu +(k+1)f_1(y)= & {} f_1(y) \mod \mathcal {I}_y^2, \end{aligned}$$

and for \(j=2k+1\)

$$\begin{aligned} \begin{aligned}&-\mu -\psi _{2k+1}+\mu ^2\tfrac{k+1}{k}\psi _1-(2k+1)\mu f_1+(k+1)f_{k+1}=\\&\quad =-\mu - y_{2k+1}+\mu ^2\tfrac{2k+1}{k}y_1+(k+1)f_{k+1}-\mu f_1\mod \mathcal {I}_y^2, \end{aligned} \end{aligned}$$

from which

$$\begin{aligned} \psi _{2k+1}= & {} y_{2k+1}-3\mu ^2y_1 \mod \mathcal {I}_y^2. \end{aligned}$$

Hence the Jacobian matrix of \(\psi \) at \(y=0\) is invertible (in the coordinates \((y_0,\ldots ,y_{k-1},y_{2k+1})\) it is triangular with 1’s on the diagonal), and the transformation \((x,y)\mapsto (\phi (x,y),\psi (y))\) is invertible for small \(x,\ y\).

The argument for the families (1.4) and (1.5) is similar. \(\square \)

3 Real Analytic, Formal and Smooth Theory

Theorem 3.1

(Real analytic theory) The statement of Theorem 2.4 is also true in the real analytic setting, with the exception that (2.3) needs to be replaced by

$$\begin{aligned} \varvec{X}_{\text {real}}(x,\lambda )= & {} \frac{x^{k+1}+y_{k-1}(\lambda )x^{k-1}+\cdots +y_0(\lambda )}{(\pm 1)^{k+1}+\mu (\lambda )x^k}\frac{\partial }{\partial x}. \end{aligned}$$
(3.1)

Consequently, the real analytic parametric family

$$\begin{aligned} \varvec{X}_{3,\text {real}}(x,y)= & {} \frac{x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_0}{(\pm 1)^{k+1}+(\mu +y_{2k+1}) x^k}\frac{\partial }{\partial x}, \end{aligned}$$
(3.2)

is a universal real analytic deformation for \(\varvec{X}_{3,\text {real}}(x,0)\).

Proof

If the initial vector field \({\tilde{\varvec{X}}}\) is real analytic, then so are all the transformations of Propositions 2.5 and 2.7, with one exception: the leading coefficient of \({\tilde{\varvec{X}}}(x,0)=\big (cx^{k+1}+\cdots \big )\frac{\partial }{\partial x}\) can only be brought to \(c=\pm 1\) if k is even, and to \(c=1\) if k is odd. \(\square \)

Corollary 3.2

If we consider deformations which are symmetric (resp. antisymmetric (also called reversible)) with respect to the real axis, then their associated universal deformations

$$\begin{aligned} \varvec{X}_1'(x,y)&=c\left( x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_1x+y_0 - c\,(\mu +y_{2k+1}) x^{2k+1}\right) \frac{\partial }{\partial x},\\ \varvec{X}_2'(x,y)&= c\left( x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_1x+y_0\right) \!\left( 1 - c\,(\mu +y_{2k+1}) x^k\right) \frac{\partial }{\partial x},\\ \varvec{X}_3'(x,y)&= \frac{x^{k+1}+y_{k-1} x^{k-1} + \cdots +y_1x+y_0}{c+(\mu +y_{2k+1}) x^k}\frac{\partial }{\partial x}, \end{aligned}$$

\(y_0,\ldots ,y_{k-1},cy_{2k+1}\in {\mathbb {R}}\), \(c\mu \in {\mathbb {R}}\), with \(c=(\pm 1)^{k+1}\) (resp. \(c=i\,(\pm 1)^{k+1}\)), have the same property, and the conjugacy commutes with the symmetry.

Theorem 3.3

(Formal theory) 

  (1)

    The statement of Theorem 2.4, and therefore of Theorem 2.2, is also true in the formal setting of formal parametric germs of vector fields and formal transformations (2.1), where by formal we mean a formal power series in \((x,\lambda )\). In the formal real case (i.e. when the series have real coefficients), the normal form is given by (3.1).

  (2)

    (Ribon [11, Proposition 6.1]) Two analytic germs of parametric families of vector fields \(\varvec{X},\varvec{X}'\) are formally conjugate if and only if they are analytically conjugate. Moreover, denoting by \({\hat{\mathcal {I}}}_\lambda \) the ideal of formal series that vanish when \(\lambda =0\), if \({\hat{\phi }}(x,\lambda )\) is a formal conjugating transformation, then for any \(n>0\) there exists an analytic conjugacy \(\phi _n(x,\lambda )\), \(\phi _n^*\varvec{X}'=\varvec{X}\), such that \(\phi _n={\hat{\phi }}\mod \hat{\mathcal {I}}_\lambda ^n\).

The second statement is an analogue of the Artin approximation theorem.

Proof

  (1)

    The proof follows exactly the same lines. The key fact is that a formal flow map of a formal vector field

    $$\begin{aligned} {\hat{\varvec{Y}}}= & {} \frac{\partial }{\partial t}+\sum _{j=0}^{k-1}{\hat{\xi }}_j(t,y,u,\mu )\frac{\partial }{\partial y_j}+{\hat{F}}(x,t,y,u,\mu )\frac{\partial }{\partial x}, \end{aligned}$$

    which is analytic in t and in the parameters \((u,\mu )\), is well defined: see Lemma 3.4 below.

  (2)

    This is a consequence of the uniqueness of the normal form (2.3): each analytic germ of parametric vector field is analytically conjugate to a normal form (2.3), and two such normal forms are formally conjugate if and only if they are conjugate by a rotation \(x\mapsto e^{2\pi i\frac{l}{k}}x\), \(l\in {\mathbb {Z}}_k\), which is analytic. Moreover, the formal conjugacy is a composition of the rotation and of a formal time \({\hat{t}}(\lambda )\)-flow map of the vector field. Replacing \({\hat{t}}(\lambda )\) with an analytic \(t_n(\lambda )={\hat{t}}(\lambda )\mod \hat{\mathcal {I}}_\lambda ^n\) does the trick. \(\square \)

Lemma 3.4

Let \({\hat{\varvec{Y}}}=\tfrac{\partial }{\partial t}+{\hat{\varvec{Z}}}(z,t)\), where \({\hat{\varvec{Z}}}(z,t)\) is a formal vector field in \(z\in {\mathbb {C}}^p\) with coefficients entire in \(t\in {\mathbb {C}}\) and vanishing at \(z=0\): \({\hat{\varvec{Z}}}(0,t)=0\). Then \({\hat{\varvec{Y}}}\) has a well-defined flow \(z\circ \exp (s{\hat{\varvec{Y}}})=\sum _{n=0}^{+\infty }\frac{s^n}{n!}{\hat{\varvec{Y}}}^n.z\), which is a formal power series in z with coefficients analytic in \((s,t)\) for all \((s,t)\in {\mathbb {C}}^2\).

Proof

For any \(n\in {\mathbb {Z}}_{\ge 0}\), the n-jet with respect to the variable z of \({\hat{\varvec{Y}}}\) is an entire vector field \(j_z^n{\hat{\varvec{Y}}}(z,t)\) in \({\mathbb {C}}^p\times {\mathbb {C}}\) with a well-defined flow \(z\circ \exp (s\, j_z^n{\hat{\varvec{Y}}})\) fixing the origin in z. For any \(m\le n\), the m-jet of this flow agrees with the m-jet of the flow of \(j_z^m{\hat{\varvec{Y}}}\): \(j_z^m\big (z\circ \exp (s\,j_z^n{\hat{\varvec{Y}}})\big )=j_z^m\big (z\circ \exp (s\,j_z^m{\hat{\varvec{Y}}})\big )\), meaning that these flows converge in the Krull topology as \(n\rightarrow +\infty \) to a well-defined formal flow map \(z\circ \exp (s{\hat{\varvec{Y}}})\). See also [5, Theorem 3.9]. \(\square \)

Theorem 3.5

(\({\mathcal {C}}^\infty \)-smooth theory, Kostov [8]) In the real \({\mathcal {C}}^\infty \)-smooth setting, the deformation \(\varvec{X}_{3,\text {real}}(x,y)\) (3.2) is a versal deformation of the normal form vector field \(\varvec{X}_{3,\text {real}}(x,0)\).

Proof

The only purely analytic tools used in the proof of the existence of a normalizing transformation were the Weierstrass preparation and division theorems (used in the proof of Proposition 2.5), which have their counterparts in the \({\mathcal {C}}^\infty \)-setting in the Malgrange preparation and division theorems [4, 9]. \(\square \)

The deformation (3.2) is not universal in the \({\mathcal {C}}^\infty \)-setting in general. The issue is the non-uniqueness in the Malgrange division and the lack of control over the potential non-real singularities in the family.

Example 3.6

The deformations \(\varvec{X}(x,\lambda )=(x^2+\lambda ^2)\tfrac{\partial }{\partial x}\), and \(\varvec{X}'(x,\lambda )=\big (x^2+(\lambda +\omega (\lambda ))^2\big )\tfrac{\partial }{\partial x}\), where \(\omega (\lambda )\) is infinitely flat at \(\lambda =0\) (i.e. \(\big (\tfrac{\partial }{\partial \lambda }\big )^n\omega \big |_{\lambda =0}=0\), for all \(n\in {\mathbb {Z}}_{\ge 0}\)), are \({\mathcal {C}}^\infty \)-equivalent by means of a conjugacy \(\phi (x,\lambda )\) with \(\Delta (x,\lambda ):=\phi (x,\lambda )-x\) infinitely flat along \(\lambda =0\).

Indeed, we first apply the change \(x\mapsto \frac{\lambda +\omega (\lambda )}{\lambda }x =:C(\lambda )x\) to \(\varvec{X}'\), thus transforming it into \(\varvec{X}''(x,\lambda )=C(\lambda )\varvec{X}(x,\lambda )\). Note that \(\frac{1}{C(\lambda )} = \frac{\lambda }{\lambda +\omega (\lambda )}= 1+\mu (\lambda )\), with \(\mu (\lambda ):=-\frac{\omega (\lambda )}{\lambda +\omega (\lambda )}\) infinitely flat, so that \(\varvec{X}''=\frac{\varvec{X}}{1+\mu }\). We then apply Lemma 2.6. We look for a germ \(\alpha (x,\lambda )\) such that \(1+\varvec{X}.\alpha = \frac{1}{C(\lambda )}=1+\mu (\lambda )\), which is equivalent to \((x^2+\lambda ^2)\tfrac{\partial }{\partial x}\alpha = \mu (\lambda )\). This equation has the odd solution

$$\begin{aligned}{\left\{ \begin{array}{ll} \alpha (x,\lambda ) = \frac{\mu (\lambda )}{\lambda } \arctan \frac{x}{\lambda },&{}\lambda \ne 0,\\ 0, &{}\lambda =0.\end{array}\right. } \end{aligned}$$

The function \(\alpha \) is \({\mathcal {C}}^\infty \) and infinitely flat along \(\lambda =0\): indeed \(\arctan \frac{x}{\lambda }\) is bounded, each of its derivatives grows no faster than \((x^2+\lambda ^2)^{-n}\le \lambda ^{-2n}\) for some n depending on the derivative, and these powers of \(\lambda ^{-1}\) are absorbed by the infinitely flat factor \(\frac{\mu (\lambda )}{\lambda }\). The flow of the vector field

$$\begin{aligned} \varvec{Y}=\frac{\partial }{\partial t}-\frac{\alpha \varvec{X}}{1+t\varvec{X}.\alpha }= \frac{\partial }{\partial t}-\frac{\alpha (x,\lambda )(x^2+\lambda ^2)}{1+t\mu (\lambda )}\frac{\partial }{\partial x} \end{aligned}$$

is well defined and \({\mathcal {C}}^\infty \)-smooth for \(t\in [0,1]\) as long as \(|\mu (\lambda )|<1\).
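
The key identity of the example, \((x^2+\lambda ^2)\tfrac{\partial }{\partial x}\alpha =\mu (\lambda )\) for \(\alpha =\frac{\mu (\lambda )}{\lambda }\arctan \frac{x}{\lambda }\) and \(\lambda \ne 0\), can be confirmed by a one-line sympy computation; \(\mu \) is kept as a generic function, so this is only a sanity check of the algebra.

```python
# Check (sympy) of the identity (x^2 + lambda^2) d(alpha)/dx = mu(lambda) in Example 3.6.
import sympy as sp

x, lam = sp.symbols('x lambda', real=True)
mu = sp.Function('mu')(lam)
alpha = mu/lam*sp.atan(x/lam)

print(sp.simplify((x**2 + lam**2)*sp.diff(alpha, x)))   # expected output: mu(lambda)
```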

Remark 3.7

The deformations \(\varvec{X}(x,\lambda )=(x^2-\lambda ^2)\tfrac{\partial }{\partial x}\), and \(\varvec{X}'(x,\lambda )=\big (x^2-(\lambda +\omega (\lambda ))^2\big )\tfrac{\partial }{\partial x}\), where \(\omega (\lambda )\) is infinitely flat at \(\lambda =0\), are not \({\mathcal {C}}^\infty \)-conjugate. Indeed, the eigenvalues at the singular points are \({\mathcal {C}}^1\) invariants.

Problem 3.8

Can we expect uniqueness of the induced coefficients in (3.1) in the special case where the deformation has \(k+1\) real singular points, counted with multiplicity, merging at the origin?