1 Introduction and Main Results

Solving nonlinear equations is a basic and extremely important task in all fields of science and engineering. Given a function \(f: D\subset \mathbb {C} \rightarrow \mathbb {C}\) defined on a region D of the complex plane \(\mathbb {C}\), one of the most common methods for finding simple roots \(x^{*}\) of a nonlinear equation \(f(x)=0\) is Newton’s method (or the Newton–Raphson method) which, starting at an initial guess \(x_0\), iterates by means of

$$\begin{aligned} x_{n+1} = N_f(x_n) := x_n - \frac{f(x_n)}{f'(x_n)}, \quad n \ge 0. \end{aligned}$$
(1)

Under suitable conditions, this sequence will converge to the root \(x^{*}\).
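For readers who wish to experiment, a minimal sketch of (1) in Python could look as follows; the test function, derivative, starting point, and stopping rule are illustrative choices, not taken from the paper.

```python
# Minimal sketch of Newton's method (1); the stopping rule and the
# test function below are illustrative choices.
def newton(f, fp, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fp(x)  # requires f'(x) != 0 near the root
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence within max_iter iterations")

# Example: approximate the cube root of 2 as a root of x^3 - 2 = 0.
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```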

For an iterative method \(x_{n+1} = I_f(x_n)\) to find a root \(x^{*}\) of \(f(x) = 0\), an important concept is the order of convergence. We say that the method converges with order \(p > 1\) if, starting at suitable points near the root, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{|x^{*}-x_{n+1}|}{|x^{*}-x_{n}|^p} = C \ne 0. \end{aligned}$$
(2)

For instance, Newton’s method (1) has order 2, and Chebyshev’s method

$$\begin{aligned} x_{n+1} = C_f(x_n) := x_n - \left( 1 + \frac{f(x_n)f''(x_n)}{2f'(x_n)^2} \right) \frac{f(x_n)}{f'(x_n)}, \quad n \ge 0, \end{aligned}$$

has order 3. In principle, the larger the order, the faster the convergence, i.e., the fewer iterations are necessary to approximate the root with a desired precision. However, it is important to take into account the cost of each iteration step (measured by the number of evaluations of f and its derivatives). This is done by defining the efficiency index. The efficiency index of an iterative method of order p requiring d function evaluations per iteration step is \(E(d,p) = \root d \of {p}\), see [23]. The unproved conjecture by Kung and Traub [18] states that a method which uses d evaluations can achieve, at most, convergence order \(p = 2^{d-1}\). Methods that reach this bound, whose efficiency index is therefore \(E(d,2^{d-1}) = \root d \of {2^{d-1}}\), are known as optimal. Newton’s method has \(d=2\) and \(p=2\), so it is optimal. Chebyshev’s method has \(d=3\) and \(p=3\), so it is not optimal (there are methods that reach order \(p=4\) using just \(d=3\) evaluations).

Although order 2 (that is, Newton’s method) is in general enough for typical situations, high-order methods are very useful for high-precision computations, and the theoretical and practical interest of these methods is huge.

Considering the Kung–Traub conjecture, many optimal two-point methods (3 function evaluations and order 4) and three-point methods (4 function evaluations and order 8) have been presented in the recent mathematical literature. In particular, the paper [7] considers twenty-nine optimal eighth-order methods, i.e., their efficiency index is \(\root 4 \of {8} \simeq 1.68179\). Some optimal four-step methods (5 function evaluations and order 16) have also been considered; we will mention them later in this paper.

There are several techniques which can be used to increase the convergence order of an iterative method; see, for instance, [8, 16]. Let us start with an optimal (order 4) two-step iterative method for solving nonlinear equations \(f(x)=0\), namely

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ x_{n+1}&:= y_{n} - \dfrac{f(x_n)^2}{(f(x_n)-f(y_n))^2} \dfrac{f(y_n)}{f'(x_n)}. \end{aligned} \right. \end{aligned}$$

Adding an extra Newton step, we obtain the three-step scheme

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ z_{n}&:= y_{n} - \dfrac{f(x_n)^2}{(f(x_n)-f(y_n))^2} \dfrac{f(y_n)}{f'(x_n)}, \\ x_{n+1}&:= z_{n} - \dfrac{f(z_n)}{f'(z_n)} \end{aligned} \right. \end{aligned}$$
(3)

which has order 8 (notice that if an iterative method \(x_{n+1} = F(x_n)\) of order \(\alpha \) is composed with an iterative method \(x_{n+1} = G(x_n)\) of order \(\beta \), the order of the composite method \(x_{n+1} = G(F(x_n))\) is \(\alpha \beta \)), but it requires five function evaluations, so it is not optimal.

The idea for obtaining an optimal method is to apply suitable approximations of \(f'(z_n)\) to avoid an evaluation without loss of order. To this end, we can approximate

$$\begin{aligned} f'(z_n) \approx \frac{f'(x_n)}{W(t_n,s_n)}, \end{aligned}$$

with \(t_n = \frac{f(y_n)}{f(x_n)}\) and \(s_n = \frac{f(z_n)}{f(y_n)}\) for a suitable function W, which is often called weight function. Thus, writing \({f(x_n)^2}/{(f(x_n)-f(y_n))^2}\) in the central part of Eq. (3) as \(1/(1-t_n)^2\), we have

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ z_{n}&:= y_{n} - \dfrac{1}{(1-t_n)^2} \dfrac{f(y_n)}{f'(x_n)}, \\ x_{n+1}&:= z_{n} - W(t_n,s_n)\dfrac{f(z_n)}{f'(x_n)}. \end{aligned} \right. \end{aligned}$$
(4)

For instance, this has been analyzed in the interesting paper [8], although not for a general W(t,s) but for 3-variable weights \(W(t,s,u) = H(t,s)G(u)\) and \(W(t,s,u) = H(t,s)+G(u)\) with \(u=ts\).

We can put a weight function \(Q(t_n)\) instead of \(1/(1-t_n)^2\) in the central part of Eq. (4) and look for functions Q(t) and W(t,s) for which

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ x_{n+1}&:= y_{n} - Q(t_n) \dfrac{f(y_n)}{f'(x_n)} \end{aligned} \right. \end{aligned}$$
(5)

is an optimal 4th order method and

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ z_{n}&:= y_{n} - Q(t_n) \dfrac{f(y_n)}{f'(x_n)}, \\ x_{n+1}&:= z_{n} - W(t_n,s_n)\dfrac{f(z_n)}{f'(x_n)} \end{aligned} \right. \end{aligned}$$

is an optimal 8th order method.

At this point, we can prove the following:

Theorem 1

Let \(f:D\subset \mathbb {C} \rightarrow \mathbb {C}\) be a nine times continuously differentiable function with a simple zero \(x^{*}\in D\), and let \(Q:\mathbb {C} \rightarrow \mathbb {C}\) and \(W:\mathbb {C}^2 \rightarrow \mathbb {C}\) be sufficiently differentiable functions on a neighborhood of the origin, with \(Q(0) = 1\) and \(Q'(0) = 2\). If the initial point \(x_{0}\) is close enough to \(x^{*}\), then the method defined by

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ z_{n}&:= y_n - Q(t_n) \dfrac{f(y_n)}{f'(x_n)}, \\ x_{n+1}&:= z_n - W(t_n,s_n) \dfrac{f(z_n)}{f'(x_n)}, \end{aligned} \right. \end{aligned}$$
(6)

with \(t_n = {f(y_n)}/{f(x_n)}\) and \(s_n = {f(z_n)}/{f(y_n)}\), converges to \(x^{*}\) with order 8 if the following conditions hold:

$$\begin{aligned} \begin{gathered} W(0,0) = 1, \quad W_t(0,0) = 2, \\ W_{tt}(0,0) = 2+Q''(0), \quad W_{ttt}(0,0) = -24 + 6Q''(0) + Q'''(0), \\ W_s(0,0) = 1, \quad W_{ts}(0,0) = 4. \end{gathered} \end{aligned}$$
(7)

We postpone the proof of this theorem to Sect. 4. Of course, as in the case of [8, 19] or [21], the use of symbolic computation packages is of great help in obtaining the proofs (Mathematica version 12 has been used in this paper).

A very simple pair of functions satisfying the conditions of Theorem 1 is \(Q(t) = 1+2t\),

$$\begin{aligned} W(t,s) = 1+2t+t^2-4t^3+s+4ts. \end{aligned}$$

Other choices for Q are \(Q(t) = 1/(1-2t) = \sum _{k=0}^{\infty } 2^kt^k\) or, for \(a \in \mathbb {C}\), the parametric examples

$$\begin{aligned} Q(t) = \frac{1+at}{1+(a-2)t}, \qquad Q(t) = \frac{t^3-at-1}{(2-a)t-1}, \end{aligned}$$

for which suitable functions W can be easily found.

These functions Q and W are merely examples. Choosing functions Q and W which include other kinds of parameters is not difficult, and it allows one to create families of parameter-dependent iterative methods, all of them with convergence of order 8, 4 evaluations, and efficiency index \(\root 4 \of {8}\simeq 1.68179\). We can take the Taylor expansions of Q and W and introduce parameters in some terms not involved in (7). In particular, most of the methods of the survey [7] have convergence of order 8 and could be analyzed (with suitable adjustments of the notation) as particular cases of Theorem 1.
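To make this concrete, here is a minimal Python sketch of one iteration of the optimal eighth-order method (6) with the simple pair above; the driver loop and tolerance are illustrative, and in double precision the eighth order is visible for only a step or two before machine accuracy is reached (a multiprecision library such as mpmath is needed to observe it over several iterations).

```python
# Sketch of the optimal eighth-order method (6) with the simple pair
# Q(t) = 1 + 2t and W(t,s) = 1 + 2t + t^2 - 4t^3 + s + 4ts.
def order8_step(f, fp, x):
    fx, fpx = f(x), fp(x)      # the only evaluation of f'
    y = x - fx / fpx
    fy = f(y)
    t = fy / fx
    z = y - (1 + 2 * t) * fy / fpx
    fz = f(z)
    s = fz / fy
    W = 1 + 2 * t + t**2 - 4 * t**3 + s + 4 * t * s
    return z - W * fz / fpx    # 4 evaluations per step in total

def solve(f, fp, x0, tol=1e-12, max_iter=10):
    x = x0
    for _ in range(max_iter):
        x_new = order8_step(f, fp, x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```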

Of course, we can try to do the same again. The method

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ z_{n}&:= y_n - Q(t_n) \dfrac{f(y_n)}{f'(x_n)}, \\ w_{n}&:= z_n - W(t_n,s_n) \dfrac{f(z_n)}{f'(x_n)}, \\ x_{n+1}&:= w_n - \dfrac{f(w_n)}{f'(w_n)} \end{aligned} \right. \end{aligned}$$

has order 16 with 6 evaluations at every step. Then, the idea is to approximate, in the equation which gives \(x_{n+1}\),

$$\begin{aligned} f'(w_n) \approx \frac{f'(x_n)}{H(t_n,s_n,u_n)}, \end{aligned}$$

with \(t_n = \frac{f(y_n)}{f(x_n)}\), \(s_n = \frac{f(z_n)}{f(y_n)}\) and \(u_n = \frac{f(w_n)}{f(z_n)}\) without loss of the sixteenth order. In this way, we would have an optimal 16th-order iterative method, that is, one with 5 evaluations at every step and efficiency index \(\root 5 \of {16}\simeq 1.74110\). Actually, there are not many optimal 16th-order iterative methods in the mathematical literature, but some of them can be found in [2, 4,5,6, 10, 11, 19, 20, 26,27,28,29,30].

With our strategy of using a weight H(t,s,u), this can be done as follows. We choose the simplest weights Q and W satisfying the conditions (7), i.e., with all the derivatives not involved in (7) equal to 0. This avoids unnecessary complications in the notation and in the computational effort for finding the relations among the derivatives of Q, W, and H.

Let us write the partial derivatives of H(t,s,u) at the origin as

$$\begin{aligned} H_{i,j,k} = \frac{\partial ^{i+j+k}H(t,s,u)}{\partial t^i \,\partial s^j \,\partial u^k}\Big |_{(t,s,u)=(0,0,0)}. \end{aligned}$$
(8)

Theorem 2

Let \(f : D\subset \mathbb {C} \rightarrow \mathbb {C}\) be a 17 times continuously differentiable function with a simple zero \(x^{*}\in D\), and let

$$\begin{aligned} Q(t) = 1+2t, \quad W(t,s) = 1 + 2t + t^2 - 4 t^3 + s + 4ts, \end{aligned}$$
(9)

and \(H:\mathbb {C}^3 \rightarrow \mathbb {C}\) a sufficiently differentiable function on a neighborhood of the origin. If the initial point \(x_{0}\) is close enough to \(x^{*}\), then the method defined by

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ z_{n}&:= y_n - Q(t_n) \dfrac{f(y_n)}{f'(x_n)}, \\ w_{n}&:= z_n - W(t_n,s_n) \dfrac{f(z_n)}{f'(x_n)}, \\ x_{n+1}&:= w_n - H(t_n,s_n,u_n) \frac{f(w_n)}{f'(x_n)}, \end{aligned} \right. \end{aligned}$$
(10)

with \(t_n = {f(y_n)}/{f(x_n)}\), \(s_n = {f(z_n)}/{f(y_n)}\) and \(u_n = {f(w_n)}/{f(z_n)}\), converges to \(x^{*}\) with order 16 if the following conditions hold:

$$\begin{aligned} \begin{gathered} H_{0, 0, 0} = 1, \ H_{0, 0, 1} = 1, \ H_{0, 1, 0} = 1, \ H_{1, 0, 0} = 2, \\ H_{0, 1, 1} = 2, \ H_{0, 2, 0} = 0, \ H_{1, 0, 1} = 2, \ H_{1, 1, 0} = 4, \ H_{2, 0, 0} = 2, \\ H_{0, 3, 0} = -6, \ H_{1, 1, 1} = 8, \ H_{1, 2, 0} = 4, \\ H_{2, 0, 1} = 2, \ H_{2, 1, 0} = 2, \ H_{3, 0, 0} = -24, \\ H_{1, 3, 0} = -24, \ H_{2, 2, 0} = 4, \ H_{3, 0, 1} = -24, \ H_{3, 1, 0} = -24, \ H_{4, 0, 0} = 0, \\ H_{3, 2, 0} = -72, \ H_{4, 1, 0} = -72, \ H_{5, 0, 0} = 0, \\ H_{5, 1, 0} = 720, \ H_{6, 0, 0} = 0, \ H_{7, 0, 0} = 0. \end{gathered} \end{aligned}$$
(11)

In this theorem, if we take \(H_{i,j,k} = 0\) for all the derivatives not involved in (11) and use the 3-variable Taylor expansion

$$\begin{aligned} H(t,s,u) = H(0,0,0) + \sum _{m=1}^{\infty } \sum _{i+j+k=m} \frac{H_{i,j,k}}{i!\,j!\,k!} \,t^i s^j u^k, \end{aligned}$$

we get the polynomial

$$\begin{aligned} \begin{aligned} H(t,s,u)&= 1 + u + s + 2 t + 2 s u + 2 t u + 4 t s + t^2 \\&\qquad - s^3 + 8 t s u + 2 t s^2 + t^2 u + t^2 s - 4 t^3 \\&\qquad - 4 t s^3 + t^2 s^2 - 4 t^3 u - 4 t^3 s - 6 t^3 s^2 - 3 t^4 s + 6 t^5 s. \end{aligned} \end{aligned}$$
(12)
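As an illustration, the polynomial (12) translates directly into code; the following minimal Python sketch implements one step of the optimal sixteenth-order method (10), with Q and W as in (9) (the arrangement of the terms is just one possible transcription of (12)).

```python
# Sketch of the weight (12) and of one step of the sixteenth-order
# method (10), with Q and W taken as in (9).
def H(t, s, u):
    return (1 + u + s + 2*t + 2*s*u + 2*t*u + 4*t*s + t**2
            - s**3 + 8*t*s*u + 2*t*s**2 + t**2*u + t**2*s - 4*t**3
            - 4*t*s**3 + t**2*s**2 - 4*t**3*u - 4*t**3*s
            - 6*t**3*s**2 - 3*t**4*s + 6*t**5*s)

def order16_step(f, fp, x):
    fx, fpx = f(x), fp(x)
    y = x - fx / fpx
    fy = f(y); t = fy / fx
    z = y - (1 + 2*t) * fy / fpx                              # Q(t)
    fz = f(z); s = fz / fy
    w = z - (1 + 2*t + t**2 - 4*t**3 + s + 4*t*s) * fz / fpx  # W(t,s)
    fw = f(w); u = fw / fz
    return w - H(t, s, u) * fw / fpx    # 5 evaluations per step
```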

As commented above, several other sixteenth-order iterative methods appear in the mathematical literature. But, as far as we know, no explicitly stated thirty-second-order methods have been given; actually, optimal methods of order \(2^d\) have been proved to exist for every positive integer d (see [18, 24, 25]), but explicit formulas in closed form have not been provided. Thus, giving a simple thirty-second-order method was a sort of challenge.

In this paper, the idea is to modify (10) taking an additional intermediate point \(h_n\) instead of \(x_{n+1}\) and then compute \(x_{n+1}\) using a new weight function J defined on a neighborhood of the origin (in \(\mathbb {C}^4\)).

With this aim, we take Q and W as in (9), and H as in (12), which are the “simplest” weights to reach order 16. Let us write the partial derivatives of J(t,s,u,v) at the origin as

$$\begin{aligned} J_{i,j,k,l} = \frac{\partial ^{i+j+k+l}J(t,s,u,v)}{\partial t^i \,\partial s^j \,\partial u^k \,\partial v^l}\Big |_{(t,s,u,v)=(0,0,0,0)}. \end{aligned}$$

Then, we have the following:

Theorem 3

Let \(f : D\subset \mathbb {C} \rightarrow \mathbb {C}\) be a 33 times continuously differentiable function with a simple zero \(x^{*}\in D\), let Q and W be as in (9), H as in (12), and let \(J:\mathbb {C}^4 \rightarrow \mathbb {C}\) be a sufficiently differentiable function in a neighborhood of the origin. If the initial point \(x_{0}\) is sufficiently close to \(x^{*}\), then the method defined by

$$\begin{aligned} \left\{ \begin{aligned} y_{n}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ z_{n}&:= y_n - Q(t_n) \dfrac{f(y_n)}{f'(x_n)}, \\ w_{n}&:= z_n - W(t_n,s_n) \dfrac{f(z_n)}{f'(x_n)}, \\ h_{n}&:= w_n - H(t_n,s_n,u_n) \dfrac{f(w_n)}{f'(x_n)}, \\ x_{n+1}&:= h_n - J(t_n,s_n,u_n,v_n) \dfrac{f(h_n)}{f'(x_n)}, \end{aligned} \right. \end{aligned}$$
(13)

with \(t_n = {f(y_n)}/{f(x_n)}\), \(s_n = {f(z_n)}/{f(y_n)}\), \(u_n = {f(w_n)}/{f(z_n)}\) and \(v_n = {f(h_n)}/{f(w_n)}\), converges to \(x^{*}\) with order 32 if the following conditions hold:

$$\begin{aligned}&\quad J_{0, 0, 0, 0} = 1, \ J_{0, 0, 0, 1} = 1, \ J_{0, 0, 1, 0} = 1, \ J_{0, 1, 0, 0} = 1, \ J_{1, 0, 0, 0} = 2,\\&\quad J_{0, 0, 1, 1} = 2, \ J_{0, 0, 2, 0} = 0, \ J_{0, 1, 0, 1} = 1, \ J_{0, 1, 1, 0} = 2, \ J_{0, 2, 0, 0} = 0,\\&\quad J_{1, 0, 0, 1} = 2, \ J_{1, 0, 1, 0} = 2, \ J_{1, 1, 0, 0} = 4, \ J_{2, 0, 0, 0} = 2, \ J_{0, 0, 3, 0} = -6,\\&\quad J_{0, 1, 1, 1} = 4, \ J_{0, 1, 2, 0} = 2, \ J_{0, 2, 0, 1} = 0, \ J_{0, 2, 1, 0} = 0, \ J_{0, 3, 0, 0} = -6,\\&\quad J_{1, 0, 1, 1} = 4, \ J_{1, 0, 2, 0} = 0, \ J_{1, 1, 0, 1} = 4, \ J_{1, 1, 1, 0} = 8, \ J_{1, 2, 0, 0} = 4,\\&\quad J_{2, 0, 0, 1} = 2, \ J_{2, 0, 1, 0} = 2, \ J_{2, 1, 0, 0} = 2, \ J_{3, 0, 0, 0} = -24, \ J_{0, 1, 3, 0} = -12,\\&\quad J_{0, 2, 2, 0} = 0, \ J_{0, 3, 0, 1} = -6, \ J_{0, 3, 1, 0} = -6, \ J_{0, 4, 0, 0} = 0, \ J_{1, 0, 3, 0} = -12, \ \\&\quad J_{1, 1, 1, 1} = 16, \ J_{1, 1, 2, 0} = 8, \ J_{1, 2, 0, 1} = 4, \ J_{1, 2, 1, 0} = 4, \ J_{1, 3, 0, 0} = -24, \\&\quad J_{2, 0, 1, 1} = 4, \ J_{2, 0, 2, 0} = 0, \ J_{2, 1, 0, 1} = 2, \ J_{2, 1, 1, 0} = 2, \ J_{2, 2, 0, 0} = 4,\\&\quad J_{3, 0, 0, 1} = -24, \ J_{3, 0, 1, 0} = -24, \ J_{3, 1, 0, 0} = -24, \ J_{4, 0, 0, 0} = 0, \ J_{0, 3, 2, 0} = -12, \\&\quad J_{0, 4, 1, 0} = 0, \ J_{0, 5, 0, 0} = 0, \ J_{1, 1, 3, 0} = -48, \ J_{1, 2, 2, 0} = 8, \ J_{1, 3, 0, 1} = -24, \\&\quad J_{1, 3, 1, 0} = -24, \ J_{1, 4, 0, 0} = 0, \ J_{2, 0, 3, 0} = -12, \ J_{2, 1, 2, 0} = 0, \ J_{2, 2, 0, 1} = 4,\\&\quad J_{2, 2, 1, 0} = 4, \ J_{2, 3, 0, 0} = 0, \ J_{3, 0, 1, 1} = -48, \ J_{3, 0, 2, 0} = 0, \ J_{3, 1, 0, 1} = -24,\\&\quad J_{3, 1, 1, 0} = -24, \ J_{3, 2, 0, 0} = -72, \ J_{4, 0, 0, 1} = 0, \ J_{4, 0, 1, 0} = 0, \ J_{4, 1, 0, 0} = -72,\\&\quad J_{5, 0, 0, 0} = 0, \ J_{0, 5, 1, 0} = 120, \ J_{0, 6, 0, 0} = 0, \ J_{1, 3, 2, 0} = -72, \ J_{1, 4, 1, 0} = -48,\\&\quad J_{1, 5, 0, 0} = 0, \ J_{2, 2, 2, 0} = 0, \ J_{2, 3, 1, 0} = 12, \ J_{2, 4, 0, 0} = 0, \ J_{3, 0, 3, 0} = 144,\\&\quad J_{3, 1, 2, 0} = 0, \ J_{3, 2, 0, 1} = -72, \ J_{3, 2, 1, 0} = -72, \ J_{3, 3, 0, 0} = 0, \ J_{4, 0, 2, 0} = 0,\\&\quad J_{4, 1, 0, 1} = -72, \ J_{4, 1, 1, 0} = -72, \ J_{4, 2, 0, 0} = 0, \ J_{5, 0, 0, 1} = 0, \ J_{5, 0, 1, 0} = 0,\\&\quad J_{5, 1, 0, 0} = 720, \ J_{6, 0, 0, 0} = 0, \ J_{0, 7, 0, 0} = 0, \ J_{1, 5, 1, 0} = 480, \ J_{1, 6, 0, 0} = 0,\\&\quad J_{2, 4, 1, 0} = -48, \ J_{2, 5, 0, 0} = 0, \ J_{3, 2, 2, 0} = -96, \ J_{3, 3, 1, 0} = -216, \ J_{3, 4, 0, 0} = 0,\\&\quad J_{4, 1, 2, 0} = 0, \ J_{4, 2, 1, 0} = -144, \ J_{4, 3, 0, 0} = 0, \ J_{5, 0, 2, 0} = 0, \ J_{5, 1, 0, 1} = 720,\\&\quad J_{5, 1, 1, 0} = 720, \ J_{5, 2, 0, 0} = 0, \ J_{6, 0, 0, 1} = 0, \ J_{6, 0, 1, 0} = 0, \ J_{6, 1, 0, 0} = 0,\\&\quad J_{7, 0, 0, 0} = 0, \ J_{1, 7, 0, 0} = 0, \ J_{2, 6, 0, 0} = 0, \ J_{3, 4, 1, 0} = 1440, \ J_{3, 5, 0, 0} = 0,\\&\quad J_{4, 3, 1, 0} = -576, \ J_{4, 4, 0, 0} = 0, \ J_{5, 1, 2, 0} = 0, \ J_{5, 2, 1, 0} = 1440, \ J_{5, 3, 0, 0} = 0,\\&\quad J_{6, 0, 2, 0} = 0, \ J_{6, 1, 1, 0} = 0, \ J_{6, 2, 0, 0} = 0, \ J_{7, 0, 0, 1} = 0, \ J_{7, 0, 1, 0} = 0,\\&\quad J_{7, 1, 0, 0} = 0, \ J_{8, 0, 0, 0} = 0, \ J_{3, 6, 0, 0} = 0, \ J_{4, 5, 0, 0} = 0, \ J_{5, 3, 1, 0} = 2880,\\&\quad J_{5, 4, 0, 0} = 0, \ J_{6, 2, 1, 0} = 7200, \ J_{6, 3, 0, 0} = 0, \ J_{7, 0, 2, 0} = 0, \ J_{7, 1, 1, 0} = 0,\\&\quad J_{7, 2, 0, 0} = 0, \ J_{8, 0, 1, 0} = 0, \ J_{8, 1, 0, 0} = 0, \ J_{9, 0, 0, 0} = 0, \ J_{5, 5, 0, 0} = 0,\\&\quad J_{6, 4, 0, 0} = 0, \ J_{7, 2, 1, 0} = -80640, \ J_{7, 3, 0, 0} = 0, \ J_{8, 1, 1, 0} = 0, \ J_{8, 2, 0, 0} = 0,\\&\quad J_{9, 0, 1, 0} = 0, \ J_{9, 1, 0, 0} = 0, \ J_{10, 0, 0, 0} = 0, \ 
J_{7, 4, 0, 0} = 0, \ J_{8, 3, 0, 0} = 0,\\&\quad J_{9, 1, 1, 0} = 0, \ J_{9, 2, 0, 0} = 0, \ J_{10, 0, 1, 0} = 0, \ J_{10, 1, 0, 0} = 0, \ J_{11, 0, 0, 0} = 0,\\&\quad J_{9, 3, 0, 0} = 0, \ J_{10, 2, 0, 0} = 0, \ J_{11, 0, 1, 0} = 0, \ J_{11, 1, 0, 0} = 0, \ J_{12, 0, 0, 0} = 0,\\&\quad J_{11, 2, 0, 0} = 0, \ J_{12, 1, 0, 0} = 0, \ J_{13, 0, 0, 0} = 0, \ J_{13, 1, 0, 0} = 0, \ J_{14, 0, 0, 0} = 0,\\&\quad J_{15, 0, 0, 0} = 0. \end{aligned}$$

The proof of this theorem is postponed to Sect. 5. To be honest, we must acknowledge that we do not have a complete mathematical proof, but rather a satisfactory heuristic proof supported by extensive computational evidence.

As in the case of (12), if we take \(J_{i,j,k,l} = 0\) for all the partial derivatives not involved in Theorem 3 and use the 4-variable Taylor expansion

$$\begin{aligned} J(t,s,u,v) = J(0,0,0,0) + \sum _{m=1}^{\infty } \sum _{i+j+k+l=m} \frac{J_{i,j,k,l}}{i!\,j!\,k!\,l!} \,t^i s^j u^k v^l, \end{aligned}$$

we get a polynomial J(t,s,u,v) with integer coefficients.
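Rather than transcribing this polynomial by hand, it can be assembled mechanically from the table of Theorem 3. A minimal Python sketch, where the dictionary reproduces only the first few of the 166 conditions for the sake of space:

```python
from math import factorial

# Sketch: assemble the weight J from its nonzero Taylor coefficients
# J_{i,j,k,l}; only the first few entries of the table in Theorem 3
# are reproduced here, the rest follow the same pattern.
J_COEFFS = {
    (0, 0, 0, 0): 1, (0, 0, 0, 1): 1, (0, 0, 1, 0): 1,
    (0, 1, 0, 0): 1, (1, 0, 0, 0): 2, (0, 0, 1, 1): 2,
    # ... remaining nonzero conditions of Theorem 3 ...
}

def J(t, s, u, v):
    return sum(c * t**i * s**j * u**k * v**l
               / (factorial(i) * factorial(j) * factorial(k) * factorial(l))
               for (i, j, k, l), c in J_COEFFS.items())
```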

In general, for finding the roots of a function \(f : D\subset \mathbb {C} \rightarrow \mathbb {C}\) we can consider a multipoint method which, after starting at an initial guess \(x_0\), computes \(x_{n+1}\) from \(x_{n}\) by means of a \((d+1)\)-point iterative scheme with weights of the form

$$\begin{aligned} \left\{ \begin{aligned} x_{n}^{[1]}&:= x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\ x_{n}^{[2]}&:= x_{n}^{[1]} - W^{[1]}(t^{[1]}_n) \dfrac{f(x_{n}^{[1]})}{f'(x_n)}, \\ x_{n}^{[3]}&:= x_{n}^{[2]} - W^{[2]}(t^{[1]}_n,t^{[2]}_n) \dfrac{f(x_{n}^{[2]})}{f'(x_n)}, \\ \dots&:= \dots \\ x_{n}^{[d]}&:= x_{n}^{[d-1]} - W^{[d-1]}(t^{[1]}_n,\ldots ,t^{[d-1]}_n) \dfrac{f(x_{n}^{[d-1]})}{f'(x_n)}, \\ x_{n+1}&:= x_{n}^{[d]} - W^{[d]}(t^{[1]}_n,\ldots ,t^{[d]}_n) \dfrac{f(x_{n}^{[d]})}{f'(x_n)}, \end{aligned} \right. \end{aligned}$$
(14)

with \(t^{[i]}_n = f(x^{[i]}_n)/f(x^{[i-1]}_n)\), \(i=1,\dots ,d\), and where \(x^{[0]}_n = x_n\). Here, the weights \(W^{[i]}\) must be functions \(W^{[i]} : \mathbb {C}^i \rightarrow \mathbb {C}\) sufficiently differentiable in a neighborhood of the origin.
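A minimal sketch of the generic scheme (14), where the weights are passed in as a list of Python callables (how suitable weights are obtained is precisely the subject of the theorems above):

```python
# Sketch of one step of the generic (d+1)-point scheme (14):
# `weights` is a list [W1, ..., Wd], where Wi takes the i ratios
# t^[1], ..., t^[i] as arguments.
def multipoint_step(f, fp, x, weights):
    fpx = fp(x)
    f_prev = f(x)               # f(x_n^[0]) = f(x_n)
    cur = x - f_prev / fpx      # x_n^[1]: a Newton step
    ratios = []
    for Wi in weights:
        f_cur = f(cur)
        ratios.append(f_cur / f_prev)   # t^[i] = f(x^[i]) / f(x^[i-1])
        cur = cur - Wi(*ratios) * f_cur / fpx
        f_prev = f_cur
    return cur
```

With `weights` equal to `[Q]`, `[Q, W]`, `[Q, W, H]` or `[Q, W, H, J]`, this single routine reproduces (5), (6), (10), and (13), respectively.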

In this scheme, the case \(d=2\) corresponds to Theorem 1 (4 function evaluations per iteration step and order 8), \(d=3\) corresponds to Theorem 2 (5 evaluations and order 16) and \(d=4\) corresponds to Theorem 3 (6 evaluations and order 32). In general, (14) has \(d+2\) evaluations of f or \(f'\) per iteration step. Then, according to the Kung-Traub conjecture [18], the optimal order of convergence could be, at most, \(2^{d+1}\), and the efficiency index would be \(2^{(d+1)/(d+2)}\).

The papers [18, 24, 25] prove the existence of several families of multipoint optimal methods which attain the convergence order \(2^{d+1}\) using \(d+2\) function evaluations per iteration, but none of these families is based on weight functions (a typical tool for proving their existence is the use of inverse interpolatory polynomials or Hermite interpolation polynomials).

In Theorems 2 and 3, the equations involved in the computation of the Taylor coefficients of the weights are rather complicated, but they always have integer solutions. Based on this, we conjecture not only the existence of families of optimal multipoint methods based on weights, but also the following:

Conjecture

For every positive integer d there exists an optimal multipoint method of the form (14) (that is, with order \(2^{d+1}\)) where the weights \(W^{[i]}\) are polynomials (in i variables) with coefficients in \(\mathbb {Z}\).

It is clearly impossible to prove this conjecture using only the computer-aided proof techniques of this paper (Sects. 4 and 5).

Besides this introductory section and the above mentioned sections with proofs, this paper has two additional sections. In Sect. 2 we experimentally measure the 32nd order of convergence given in Theorem 3 by means of several tests; this is always an important point for preventing errors in the proofs or in the numerical implementations. Finally, in Sect. 3 we give a couple of examples showing how the process to increase the order of convergence influences the basins of attraction of the roots for a common nonlinear equation \(f(x)=0\).

Let us finally mention that, in this paper, we have not tried to find a practical method to solve nonlinear equations, or to compare its advantages with respect to the many other numerical methods published in the mathematical literature. Instead, the idea has been to explore a theoretical general scheme to find optimal methods (in the Kung–Traub sense), to compare the dynamical behavior of the resulting methods through some examples, and to use the scheme for finding the first explicit 32nd-order optimal method, a nice challenge.

2 Experimental Checking of the Order of Convergence

Given an iterative method, computing experimentally the order of convergence in several examples is a good test to detect theoretical errors in the derivation of the method or practical errors in its computer implementation. Using the definition (2) to compute the order of convergence experimentally is not practical; that is why suitable approximations have been introduced.

The computational order of convergence (COC), defined in [32], is given by the formula

$$\begin{aligned} COC \approx \frac{\ln |(x_{n+1}-x^{*})/(x_{n}-x^{*})|}{\ln |(x_{n}-x^{*})/(x_{n-1}-x^{*})|}. \end{aligned}$$
(15)

When \(x^{*}\) is unknown, which is usual in practice (unless the method is checked on an equation whose root is already known), we can use the approximated computational order of convergence (ACOC), defined in [9], as

$$\begin{aligned} ACOC \approx \frac{\ln |(x_{n+1}-x_{n})/(x_{n}-x_{n-1})|}{\ln |(x_{n}-x_{n-1})/(x_{n-1}-x_{n-2})|}. \end{aligned}$$
(16)

For a comparison among several convergence orders, see [12].
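Both estimators are straightforward to implement. A sketch in Python, assuming a list `xs` of successive iterates and using mpmath for the high working precision such a test requires (the precision setting below is illustrative):

```python
from mpmath import mp, log, fabs

mp.dps = 1000   # illustrative working precision, in decimal digits

def coc(xs, root):
    # COC (15): uses the last three iterates and the exact root.
    e = [fabs(x - root) for x in xs[-3:]]
    return log(e[2] / e[1]) / log(e[1] / e[0])

def acoc(xs):
    # ACOC (16): uses the last four iterates; the root is not needed.
    n = len(xs)
    d = [fabs(xs[i + 1] - xs[i]) for i in range(n - 4, n - 1)]
    return log(d[2] / d[1]) / log(d[1] / d[0])
```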

We are going to compute both the COC and the ACOC to check the accuracy of our 32nd-order iterative method (13). This is particularly important in this case, because we do not have a complete proof of Theorem 3 (see more details in Sect. 5). If our numerical experiments find that the method has order 32 for a suitable collection of nonlinear equations, then we will have good empirical evidence in favour of the correctness of the result.

We carry out this kind of numerical experiment using the four test functions \(f_j(x)\), \(j=1,\dots ,4\), given in Table 1. The mathematical literature contains a large number of examples; for instance, the authors of [14] collected, from previous papers, 125 test functions including our \(f_2\) and \(f_4\). In each case, we arrive at the root \(x^{*}\) starting from the point \(x_0\).

Table 1 Test functions \(f_1, \dots , f_4\), root \(x^{*}\), and initial guess \(x_0\)

The results of these numerical experiments can be found in Table 2. Notice that, to estimate the COC and the ACOC, it has been enough to use \(n=3\) in (15) and (16) to get excellent approximations of the order of convergence 32 stated in Theorem 3. To obtain high accuracy and to avoid the loss of significant digits, we have employed multi-precision arithmetic with 100,000 significant decimal digits in the programming package Mathematica.

Table 2 Errors, COC, and ACOC when the iterative method (13) is applied to find the root of test functions \(f_1,\dots ,f_4\) given in Table 1

3 Dynamics of the Methods

Given an iterative scheme F defined in the whole complex plane and a point \(x_0 \in \mathbb {C}\), the orbit of \(x_0\) is the sequence \({\text {orb}}(x_0) = \{x_n\}_{n=0}^{\infty }\) defined by \(x_{n+1} = F(x_n)\). The basin of attraction of \(x^{*} \in \mathbb {C}\) is the set

$$\begin{aligned} {\text {basin}}(x^{*}) = \{ x_0 \in \mathbb {C} : {\text {orb}}(x_0) \rightarrow x^{*} \}. \end{aligned}$$

For a polynomial f and an iterative scheme F for solving the nonlinear equation \(f(x)=0\), it is well known that the boundary between the basins of attraction of the different roots of f shows, in general, an intricate fractal structure. Assigning a color to each basin of attraction, we usually get very nice pictures illustrating the behavior of the iterative method F.

In the case of several iterative methods for a common problem \(f(x) = 0\), the visual comparison of the graphics corresponding to the different methods allows one to observe some aspects of the behavior of these methods. This has been widely used at least since the paper [31] was published in 2002, in such a way that it is common to use the pictures of the basins of attraction to graphically compare these methods. More recent papers carrying out this kind of study are [1, 3, 13,14,15, 22]; and, of course, we cannot forget the book [17] and its nice fractal images.

In particular, here we have root-finding iterative methods of order 2 (Newton’s method (1)), order 4 (Eq. (5) with \(Q(t)=1+2t\)), order 8 (Theorem 1), order 16 (Theorem 2) and order 32 (Theorem 3). The aim of this section is to visually compare them when they are applied to the same problem \(f(x)=0\). In practice, let us observe that, although high-order methods can be useful for high-precision computations, they are much more demanding with respect to the choice of the initial point from which a root of the function f is reached. (Thus, if one wants to use a high-order iterative method to obtain a root with high precision, a good practical idea is to first approximate the root by applying several initial steps of Newton’s method.)

To illustrate the behavior of the five methods, we use a \(600 \times 600\) grid of the square \(D = [-3,3]\times [-3,3] \subset \mathbb {C}\) and assign a color to each point \(x_0\in D\) according to the simple root to which the corresponding orbit of the iterative method starting at \(x_0\) converges. We mark the point as black if the orbit does not converge to a root in the sense that after at most 25 iterations its distance to any of the roots is larger than \(10^{-3}\). In this way, we distinguish the attraction basins by their color.
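A sketch of this rendering procedure in Python (NumPy/Matplotlib); the grid size, the iteration cap, and the tolerance follow the text above, while everything else (function names, the plain pixel loop) is an illustrative choice rather than the authors' actual code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the basin pictures: color each grid point by the root its
# orbit reaches; label 0 (black) marks non-convergence.
def basins(step, roots, n=600, box=3.0, max_iter=25, tol=1e-3):
    axis = np.linspace(-box, box, n)
    img = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            x = axis[j] + 1j * axis[i]
            for _ in range(max_iter):
                x = step(x)
            dist = [abs(x - r) for r in roots]
            if min(dist) < tol:
                img[i, j] = 1 + int(np.argmin(dist))
    return img

# Example: Newton's method for f(x) = x^3 - 1.
newton = lambda x: x - (x**3 - 1) / (3 * x**2)
roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
plt.imshow(basins(newton, roots), extent=[-3, 3, -3, 3], origin="lower")
plt.show()
```

The pixel loop is deliberately naive; vectorizing it over the whole grid makes the computation much faster but obscures the logic.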

We have done this comparison with a couple of functions. First, we have chosen the typical polynomial

$$\begin{aligned} f(x) = x^3-1 \end{aligned}$$

of degree three, whose three complex roots are

$$\begin{aligned} e^{2k\pi i/3}, \qquad k=0,1,2. \end{aligned}$$

The pictures in Fig. 1 present the corresponding basins of attraction. Independently of the beauty of the graphics, the situation is as expected: increasing the order of convergence changes the aspect (in the dynamics) of the picture, decreasing the size of the basins of attraction.

Fig. 1

From left to right and from top to bottom, basins of attraction for \(f(x) = x^3-1\) using methods of order 2 (Newton), 4 (Eq. (5) with \(Q(t)=1+2t\)), 8 (Theorem 1), 16 (Theorem 2) and 32 (Theorem 3)

Secondly, we have chosen the polynomial

$$\begin{aligned} f(x) = (10x^5-1)(x^5+10) \end{aligned}$$

of degree ten, whose ten complex roots are

$$\begin{aligned} {\left( \frac{1}{10}\right) }^{1/5} e^{2k\pi i/5}, \quad -10^{1/5} e^{2k\pi i/5},\qquad k=0,\dots ,4. \end{aligned}$$

The basins of attraction are shown in Fig. 2.

Fig. 2

From left to right and from top to bottom, basins of attraction for \(f(x) = (10x^5-1)(x^5+10)\) using methods of order 2 (Newton), 4 (Eq. (5) with \(Q(t)=1+2t\)), 8 (Theorem 1), 16 (Theorem 2) and 32 (Theorem 3)

4 Proof of Theorems 1 and 2

Proof of Theorem 1

Let us write \(Q_i = \frac{d^iQ(t)}{dt^i}\Big |_{t=0}\) and \(W_{i,j} = \frac{\partial ^{i+j} W(t,s)}{\partial t^i \,\partial s^j}\Big |_{(t,s)=(0,0)}\). Then, the conditions in (7) become

$$\begin{aligned} \begin{gathered} W_{0,0}=1, \quad W_{1,0}=2, \quad W_{2,0} = 2+Q_2, \\ W_{3,0} = -24+6Q_2+Q_3, \quad W_{0,1}=1, \quad W_{1,1}=4. \end{gathered} \end{aligned}$$

Let \(e_{n} := x_{n}-x^{*}\), \(e_{n,y} := y_{n}-x^{*}\), \(e_{n,z} := z_{n}-x^{*}\) for \(n\in \mathbb {N}\), and \(c_{\ell } := \frac{f^{(\ell )}(x^{*})}{\ell !\,f'(x^{*})}\) for \(\ell \ge 2\) (recall that \(f'(x^{*}) \ne 0\) because the root \(x^{*}\) is simple). Since \(f(x^{*})=0\), the Taylor expansion of f at \(x^{*}\) yields

$$\begin{aligned} f(x_{n}) = f'(x^{*}) \big ( e_{n} + c_{2}e_{n}^{2}+c_{3}e_{n}^{3} + \cdots + c_{8}e_{n}^{8} \big ) + O(e_n^{9}) \end{aligned}$$
(17)

and

$$\begin{aligned} f'(x_{n}) = f'(x^{*}) \big ( 1 + 2c_{2}e_{n}+3c_{3}e_{n}^{2}+4c_{4}e_{n}^{3} + \cdots + 9c_9e_n^8 \big ) + O(e_n^{9}). \end{aligned}$$
(18)

Therefore, from (17) and (18),

$$\begin{aligned} \begin{aligned} \frac{f(x_{n})}{f'(x_{n})}&= e_{n} - c_{2}e_{n}^{2} + (2c_{2}^{2}-2c_{3}) e_{n}^{3} + ({-4}c_2^3+7c_2c_3-3c_4) e_n^4\\&\quad + (8c_2^4 - 20c_2^2c_3 + 6c_3^2 + 10c_2c_4 - 4c_5) e_n^5\\&\quad + \big ( {-16}c_2^5 + 52c_2^3c_3 - 28c_2^2c_4 + 17c_3c_4 - c_2(33c_3^2 - 13c_5) - 5c_6 \big ) e_n^6 \\&\quad + O(e_n^{7}) \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} e_{n,y}&= y_n-x^{*} = c_{2}e_{n}^{2} + ({-2}c_2^2+2c_3) e_n^3 + (4c_2^3-7c_2c_3+3c_4) e_n^4 \\&\quad + ({-8}c_2^4+20c_2^2c_3-6c_3^2-10c_2c_4+4c_5) e_n^5 \\&\quad + \big ( 16c_2^5-52c_2^3c_3+28c_2^2c_4-17c_3c_4 + c_2(33c_3^2-13c_5) + 5c_6 \big ) e_n^6 + O(e_n^{7}). \end{aligned} \end{aligned}$$

For \(f(y_n)\), we have the representation

$$\begin{aligned} f(y_{n}) = f'(x^{*}) \big (e_{n,y} + c_{2}e_{n,y}^{2}+c_{3}e_{n,y}^{3} + \cdots + c_{8}e_{n,y}^{8} \big ) + O(e_{n,y}^{9}). \end{aligned}$$
(19)

Then, we obtain from (17) and (19) that

$$\begin{aligned} \begin{aligned} t_n&= \frac{f(y_n)}{f(x_n)} = c_2e_n + ({-3}c_2^2+2c_3)e_n^2+(8c_2^3-10c_2c_3+3c_4) e_n^3 \\&\quad + ({-20}c_2^4+37c_2^2c_3-8c_3^2-14c_2c_4+4c_5) e_n^4 \\&\quad + \big (48c_2^5-118c_2^3c_3+51c_2^2c_4-22c_3c_4 + c_2(55c_3^2-18c_5) + 5c_6\big ) e_n^5 + O(e_n^6). \end{aligned} \end{aligned}$$

Now, let us use

$$\begin{aligned} Q(t_n) = Q_0 + Q_1t_n + Q_2t_n^2/2 + Q_3t_n^3/6 + Q_4t_n^4/24 + \cdots . \end{aligned}$$

Then, the central equation in (6) gives

$$\begin{aligned} e_{n,z}&= z_n - x^{*} = \big (c_2^3 (5-\tfrac{Q_2}{2}) - c_2 c_3\big ) e_n^4 \\&\quad + \big ( c_2^4 (5Q_2-\tfrac{Q_3}{6}-36) + c_3 c_2^2 (32-3 Q_2) -2 c_4 c_2-2c_3^2 \big ) e_n^5 \\&\quad + \Big ( \tfrac{-1}{24} c_2^5 (744 Q_2-52Q_3+Q_4-4080) + \tfrac{1}{3} c_3 c_2^3 (111 Q_2-4 Q_3-786) \\&\quad + \tfrac{3}{2} c_4 c_2^2 (32-3 Q_2) - 3 c_2 \big (2 c_3^2 (Q_2-11)+c_5\big ) - 7 c_3 c_4\Big ) e_n^6 + O(e_n^7) \end{aligned}$$

where we have already used that \(Q_0=1\) and \(Q_1=2\) to cancel the coefficients of \(e_n^2\) and \(e_n^3\) (this is equivalent to saying that the first two lines of (6), with \(x_{n+1}\) in place of \(z_n\), provide a fourth-order method; for this reason, we have already fixed \(Q(0) = 1\) and \(Q'(0) = 2\) in the hypotheses of the theorem).

Using the representation

$$\begin{aligned} f(z_{n}) = f'(x^{*}) \big (e_{n,z} + c_{2}e_{n,z}^{2}+c_{3}e_{n,z}^{3} + \cdots + c_{8}e_{n,z}^{8}\big ) + O(e_{n,z}^{9}), \end{aligned}$$
(20)

from (19) and (20), we get

$$\begin{aligned} \begin{aligned} s_n&= \frac{f(z_n)}{f(y_n)} = \big (c_2^2 (5-\tfrac{Q_2}{2})-c_3\big ) e_n^2 \\&\quad + \big (c_2^3 (4 Q_2-\tfrac{Q_3}{6}-26) - 2 c_3 c_2 (Q_2-10) - 2 c_4\big ) e_n^3 \\&\quad + \Big ( \tfrac{-1}{24} c_2^4 \big (492 Q_2-44 Q_3+Q_4-2232\big ) + \tfrac{1}{2} c_3 c_2^2 \big (43 Q_2-2 (Q_3+130)\big ) \\&\quad + c_4 c_2 (29-3 Q_2) + c_3^2 (19-2 Q_2)-3 c_5 \Big ) e_n^4 \\&\quad + \Big ( c_2^5 (85 Q_2 - \tfrac{73 Q_3}{6} + \tfrac{7 Q_4}{12} - \tfrac{Q_5}{120}-284) \\&\quad + \tfrac{1}{6} c_3 c_2^3 (-828 Q_2+81 Q_3-2 Q_4+3480) + \tfrac{1}{2} c_4 c_2^2 \big (62 Q_2-3(Q_3+120)\big ) \\&\quad + c_2 \big (c_3^2 (38 Q_2-2 Q_3-212) + 2 c_5 (19-2 Q_2)\big ) - 6 c_3 c_4 (Q_2-9)-4 c_6 \Big ) e_n^5 \\&\quad + \Big ( \tfrac{1}{120} c_2^6 \Big (10 \big (3 (Q_2-1260) Q_2+754 Q_3-57Q_4+9840\big ) + 17 Q_5\Big ) \\&\quad + \tfrac{1}{24} c_3 c_2^4 (16500 Q_2-2544 Q_3+131 Q_4-2 Q_5-51600) \\&\quad + \tfrac{1}{6} c_4 c_2^3 (-1170 Q_2+118 Q_3-3 Q_4+4770) \\&\quad - \tfrac{1}{2} c_2^2 \big (c_3^2 (687 Q_2-74 Q_3+2 Q_4-2668) + c_5 (-81 Q_2+4 Q_3+462)\big ) \\&\quad + c_2 \big (c_6 (47-5 Q_2) + c_3 c_4 (109 Q_2-6 Q_3-581)\big ) \\&\quad + \tfrac{1}{2} c_4^2 (76-9 Q_2) + 2 c_3 c_5 (35-4 Q_2) \\&\quad + c_3^3 (22 Q_2-\tfrac{4 Q_3}{3}-113) -5 c_7 \Big ) e_n^6 + O(e_n^7). \end{aligned} \end{aligned}$$

To use the third equation in (6), let us expand W at (0, 0), that is,

$$\begin{aligned} W(t_n,s_n) = W_{0,0} + W_{1,0}t_n + W_{0,1}s_n + \frac{1}{2}W_{2,0}t_n^2 + \frac{1}{2}W_{0,2}s_n^2 + W_{1,1}t_ns_n + \frac{1}{6}W_{3,0}t_n^3 + \cdots . \end{aligned}$$

Substituting these expressions into (6) gives

$$\begin{aligned} e_{n+1} = x_{n+1}-x^{*} = R_4e_n^4 + R_5e_n^5 + R_6e_n^6 + R_7e_n^7 + R_8e_n^8 + O(e_n^9), \end{aligned}$$

with

$$\begin{aligned} R_4 = \frac{1}{2} c_2 (W_{0,0}-1) (c_2^2 (Q_2-10)+2 c_3) \end{aligned}$$

and some expressions for \(R_5\), \(R_6\), \(R_7\) and \(R_8\) that will be considered successively.

By taking

$$\begin{aligned} W_{0,0}=1 \end{aligned}$$

we get \(R_4=0\). Assuming this value, we have

$$\begin{aligned} R_5 = \frac{1}{2} c_2^2 (W_{1,0}-2) \big (c_2^2 (Q_2-10)+2 c_3\big ) \end{aligned}$$

(without the assumption \(W_{0,0}=1\), the expression for \(R_5\) is much more complicated, and the same happens in what follows). The choice

$$\begin{aligned} W_{1,0} = 2 \end{aligned}$$

leads to \(R_5=0\) and

$$\begin{aligned} R_6 = -\frac{1}{4} c_2 \big (c_2^2 (Q_2-10)+2 c_3\big ) \Big ( c_2^2 \big ( (Q_2-10) W_{0,1}-W_{2,0}+12 \big ) + 2 c_3 (W_{0,1}-1) \Big ). \end{aligned}$$

Similarly, taking

$$\begin{aligned} W_{0,1}=1, \quad W_{2,0}=2+Q_2 \end{aligned}$$

gives \(R_6=0\) and

$$\begin{aligned} \begin{aligned} R_7&= -\frac{1}{12} c_2^2 \big ( c_2^2 (Q_2-10)+2 c_3 \big ) \Big ( c_2^2 \big (3 Q_2 (W_{1,1}-2) \\&\quad - 30 W_{1,1}-W_{3,0}+Q_3+96\big ) + 6 c_3 (W_{1,1}-4) \Big ). \end{aligned} \end{aligned}$$

Finally, the choices

$$\begin{aligned} W_{1,1}=4, \quad W_{3,0}=-24+6Q_2+Q_3 \end{aligned}$$

result in \(R_7 = 0\) and thus the convergence is (at least) of order eight.

Actually, a close look at the representation

$$\begin{aligned} R_8&= \frac{1}{48} c_2 \big (c_2^2 (Q_2-10)+2 c_3\big ) \Big ( c_2^4 \Big ( 3 Q_2 \big ( (Q_2-20) W_{0,2}-2 W_{2,1}+36 \big ) \\&\quad + 60 (5W_{0,2}+W_{2,1}-18) + W_{4,0}-8 Q_3-Q_4 \Big ) \\&\quad + 12 c_3 c_2^2 \big (Q_2(W_{0,2}-1) - 10 W_{0,2}-W_{2,1}+38 \big ) + 12 c_3^2 (W_{0,2}-2) -24 c_4 c_2 \Big ) \end{aligned}$$

shows that it is no longer possible to obtain \(R_8=0\) in general independently of the values \(c_{\ell }\), so the order eight cannot be improved in general. \(\square \)
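The whole computation can be reproduced with any symbolic package. As an independent check, here is a sketch in Python/SymPy (the authors used Mathematica) that expands the error of method (6) with the simple weights (9) and verifies that the coefficients of \(e_n,\dots ,e_n^7\) all vanish; it follows the proof above, with generic coefficients \(c_2,\dots ,c_9\) and the root normalized to 0.

```python
import sympy as sp

# Sketch (SymPy rather than the authors' Mathematica): expand the error
# e_{n+1} of method (6) with the weights (9) as a series in e = e_n and
# check that the coefficients of e^1, ..., e^7 vanish, i.e. order >= 8.
e = sp.symbols('e')
c = sp.symbols('c2:10')                       # generic c_2, ..., c_9
N = 9                                         # truncation order

f = e + sum(c[i - 2] * e**i for i in range(2, N + 1))  # f'(x*) scaled to 1
fp = sp.diff(f, e)
ser = lambda expr: sp.series(expr, e, 0, N).removeO()

y = ser(e - f / fp)                           # e_{n,y}
fy = ser(f.subs(e, y))
t = ser(fy / f)
z = ser(y - (1 + 2 * t) * fy / fp)            # Q(t) = 1 + 2t
fz = ser(f.subs(e, z))
s = ser(fz / fy)
W = 1 + 2 * t + t**2 - 4 * t**3 + s + 4 * t * s
e_next = sp.expand(ser(z - W * fz / fp))      # e_{n+1}

assert all(e_next.coeff(e, k) == 0 for k in range(1, 8))
print(sp.factor(e_next.coeff(e, 8)))          # the leading term R_8
```

The analogous computations for Theorems 2 and 3 follow the same pattern but, as discussed below, are far more demanding.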

Proof of Theorem 2

In a mathematical sense, the proof can be done as in Theorem 1, with an additional intermediate point \(w_n\) (instead of \(x_{n+1}\)) in every step of the iterative method, and with the use of the additional weight H to compute \(x_{n+1}\). Of course, we must take the expansions (17) and (18) (and similar expansions) up to \(O(e_n^{17})\), but this is not a significant mathematical difference.

However, from a computational point of view, the computations involved are much larger and more demanding in memory and computer time (and the same can be said about the space that would be needed to present them in a paper). To minimize this and to simplify the expressions which appear in the process, we fix Q and W as in (9) (that is, we take all the derivatives not involved in (7) to be 0).

Finally, we must have

$$\begin{aligned} e_{n+1} = x_{n+1}-x^{*} = \sum _{\ell =8}^{16} R_{\ell } e_n^{\ell } + O(e_n^{17}) \end{aligned}$$

and find the values of \(H_{i,j,k}\) which make \(R_{\ell } = 0\) for \(\ell =8,\ldots ,15\). Anyway, the mathematical process is very similar, so for the sake of brevity we omit the details.

In practice, it is a good idea to do this in several steps. For instance, we can take (17), (18) and similar expressions only up to \(O(e_n^{10})\), and follow the iterative method only to find the \(H_{i,j,k}\) which make \(R_{8} = 0\), and so on. This strategy requires less computational power, and it still allows one to prove that (11) guarantees order 16. (Actually, these conditions are also necessary, because otherwise it is impossible to get \(R_8 = \cdots = R_{15} = 0\) independently of the \(c_{\ell }\).) \(\square \)

5 Heuristic Proof of Theorem 3

As in Sect. 4, we can try to start by taking

$$\begin{aligned} f(x_{n}) = f'(x^{*}) \left( e_{n} + \sum _{\ell =2}^{32} c_{\ell }e_{n}^{\ell } \right) + O(e_n^{33}), \end{aligned}$$
(21)

with general coefficients \(c_{\ell }\). Then, the original purpose can be to follow a similar process to achieve

$$\begin{aligned} e_{n+1} = x_{n+1}-x^{*} = \sum _{\ell =16}^{32} R_{\ell } e_n^{\ell } + O(e_n^{33}) \end{aligned}$$

and to find the values of \(J_{i,j,k,l}\) which make \(R_{\ell } = 0\) for \(\ell =16,\ldots ,31\). But, now, the computational complexity of the process is much greater than in the proofs of Theorems 1 and 2, and it has not been possible to follow the corresponding symbolic procedure to find the equations \(R_{\ell } = 0\) which we want to solve. This also happens if we try to do it step by step, first taking (21) only up to \(O(e_n^{18})\) to find the equation \(R_{16}=0\), and so on.

The idea is, then, to try to simplify the problem, analyzing cases in which only a few of the coefficients \(c_{\ell }\) are nonzero. Recalling that \(c_{\ell } := \frac{f^{(\ell )}(x^{*})}{\ell !\,f'(x^{*})}\) with \(f'(x^{*}) \ne 0\), and since we can assume that \(x^{*}=0\), this is equivalent to studying Theorem 3 for functions of the form

$$\begin{aligned} x + c_2 x^2, \quad x + c_3 x^3, \quad x + c_2 x^2 + c_3 x^3, \quad x + c_2 x^2 + c_4 x^4, \quad \ldots \end{aligned}$$
(22)

and other simple cases (binomials or trinomials). With this kind of simplification, the symbolic procedure using the weights Q, W, H (prefixed) and a generic weight J with partial derivatives \(J_{i,j,k,l}\) becomes computationally feasible.

Now, let us imagine that we follow the procedure with the first example in (22) (that is, only with \(c_2\ne 0\)). Then, we find \(R_{16}\), which will be an expression involving \(c_2\) and some of the \(J_{i,j,k,l}\) (these expressions are not simple, so we do not write them here, but the reader can imagine something like the descriptions of the \(R_{\ell }\) in the proof of Theorem 1). We want to solve the equation \(R_{16}=0\) assigning to the involved \(J_{i,j,k,l}\) values independent of \(c_2\); this gives one or several combinations of \(J_{i,j,k,l}\)’s that must be null (something like, for instance, the coefficients of \(c_2^2\) and \(c_3\) in \(R_7\), in the proof of Theorem 1). If these equations give a uniquely determined value for a \(J_{i,j,k,l}\), we assign it this value (of course, this becomes one of the conditions in the list of values of Theorem 3). If there still were some \(J_{i,j,k,l}\) in \(R_{16}\) without an assigned value, we could repeat the procedure with another of the examples in (22), to get more combinations of \(J_{i,j,k,l}\)’s that must be null.

Once we have done this with a reasonable number of examples like those in (22), we fix a set of coefficients \(J_{i,j,k,l}\) that we can hope are enough to get order of convergence 17 for a general function f. With this procedure we do not have a mathematical proof, but we can check the order using test functions like those of Table 1; if the procedure with the already assigned \(J_{i,j,k,l}\)’s gives COC and ACOC equal to 17 (as in Table 2), we can assume that we have succeeded in identifying the conditions for order 17; otherwise, we repeat the process with more examples like those in (22) to fix more \(J_{i,j,k,l}\)’s.

Once the previous process has been completed for order of convergence 17, we do it again, that is, we identify a new set of \(J_{i,j,k,l}\)’s to get order 18, and we check it by means of test functions. And so on, until we reach order of convergence 32.

In this way, some weeks of symbolic experiments with Mathematica have allowed us to find the suitable values for the \(J_{i,j,k,l}\)’s that appear in Theorem 3, a total of 166 conditions! This has been the most difficult task.

Then, we have checked not only that these conditions give order 32, but also that they are necessary. With this aim, we have checked what happens if we compute the COC and the ACOC after changing only one of the \(J_{i,j,k,l}\)’s in Theorem 3; in all of those experiments, it turns out that the corresponding COC and ACOC are \(\le 31\). On the other hand, we have also checked what happens if we assign a nonzero value to any of the \(J_{i,j,k,l}\)’s not involved in Theorem 3 (up to a reasonable degree in the subindices); we have seen that, in all those cases, the corresponding COC and ACOC remain 32, so it is not necessary to fix more \(J_{i,j,k,l}\)’s.

Thus, although this is not a complete mathematical proof, the result is experimentally very well justified, and it is not plausible that it could be incorrect. (Actually, to check the order of convergence experimentally we have used not only the test functions detailed in Table 1, but also some other ones, which it is not necessary to detail.)