We study algorithms for the approximation of functions; the error is measured in an L2 norm. We consider the worst case setting for a general reproducing kernel Hilbert space of functions. We analyze algorithms that use standard information consisting of n function values, and we are interested in the optimal order of convergence. This is the maximal exponent b for which the worst case error of such an algorithm is of order n^{-b}.
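To fix notation, the following is a minimal sketch of the quantities involved, written in LaTeX in our own notation; the symbols H, D, F, x_i, \varphi and g_n are our choices and do not appear in the original text. Let H be a reproducing kernel Hilbert space of functions on a set D, with unit ball F; an algorithm using standard information samples f at n points and recombines the values:

\[
  % worst case error of the best algorithm using n function values
  % (these quantities are often called sampling numbers)
  g_n \;=\; \inf_{\substack{x_1,\dots,x_n \in D \\ \varphi\colon \mathbb{R}^n \to L_2}}
  \;\sup_{f \in F}\;
  \bigl\| f - \varphi\bigl(f(x_1),\dots,f(x_n)\bigr) \bigr\|_{L_2},
  \qquad
  b \;=\; \sup\bigl\{ \beta \ge 0 : g_n = O(n^{-\beta}) \bigr\}.
\]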
Let p be the optimal order of convergence of all algorithms that may use arbitrary linear functionals, in contrast to function values only. So far it was not known whether p > b is possible, i.e., whether the approximation numbers or linear widths can be essentially smaller than the sampling numbers. This is (implicitly) posed as an open problem in the recent paper [F.Y. Kuo, G.W. Wasilkowski, H. Woźniakowski, On the power of standard information for multivariate approximation in the worst case setting, J. Approx. Theory, to appear], where the authors prove that p > 1/2 implies b ≥ 2p^2/(2p+1), and in particular b > 0. Here we prove that the case p = 1/2 and b = 0 is possible, hence general linear information can be exponentially better than function evaluation. Since the case p > 1/2 is quite different, it is still open whether b = p always holds in that case.
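For comparison, a sketch of the corresponding quantities for general linear information, again in our own notation (a_n, L_i are not symbols from the original text):

\[
  % worst case error of the best algorithm using n arbitrary linear functionals;
  % in this Hilbert space setting these coincide with the approximation numbers
  % (linear widths) of the embedding of H into L_2
  a_n \;=\; \inf_{\substack{L_1,\dots,L_n \in H^{*} \\ \varphi\colon \mathbb{R}^n \to L_2}}
  \;\sup_{f \in F}\;
  \bigl\| f - \varphi(L_1 f,\dots,L_n f) \bigr\|_{L_2},
  \qquad
  p \;=\; \sup\bigl\{ \beta \ge 0 : a_n = O(n^{-\beta}) \bigr\}.
\]

Since every function value f(x) = ⟨f, K(·,x)⟩_H is itself a bounded linear functional on H, we always have g_n ≥ a_n and hence b ≤ p; the point at issue is how large the gap b < p can be.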