Linear information versus function evaluations for L2-approximation

  • Authors: Aicke Hinrichs, Erich Novak, Jan Vybíral
  • Published in: Journal of Approximation Theory, ISSN 0021-9045, Vol. 153, No. 1, 2008, pp. 97-107
  • Language: English
  • DOI: 10.1016/j.jat.2008.02.003
  • Full text not available
  • Abstract
    • We study algorithms for the approximation of functions; the error is measured in an L2 norm. We consider the worst case setting for a general reproducing kernel Hilbert space of functions. We analyze algorithms that use standard information consisting of n function values, and we are interested in the optimal order of convergence. This is the maximal exponent b for which the worst case error of such an algorithm is of order n^{-b}.

      Let p be the optimal order of convergence of all algorithms that may use arbitrary linear functionals, in contrast to function values only. So far it was not known whether p>b is possible, i.e., whether the approximation numbers or linear widths can be essentially smaller than the sampling numbers. This is (implicitly) posed as an open problem in the recent paper [F.Y. Kuo, G.W. Wasilkowski, H. Woźniakowski, On the power of standard information for multivariate approximation in the worst case setting, J. Approx. Theory, to appear], where the authors prove that p>1/2 implies b ≥ 2p²/(2p+1). Here we prove that the case p=1/2 and b=0 is possible, hence general linear information can be exponentially better than function evaluation. Since the case p>1/2 is quite different, it is still open whether b=p always holds in that case. (The exponents b and p are spelled out in the sketch below.)
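
To make the exponents compared in the abstract concrete, the following LaTeX sketch writes out the usual worst-case definitions behind b and p. The symbols g_n (sampling numbers), a_n (approximation numbers), the domain D, the dual space H', and the reconstruction map φ are illustrative notation for the standard reproducing kernel Hilbert space setting, not taken verbatim from the paper.

    % worst-case error of the best algorithm using n function values (sampling numbers)
    g_n = \inf_{x_1,\dots,x_n \in D} \; \inf_{\varphi\colon \mathbb{R}^n \to L_2} \; \sup_{\|f\|_H \le 1} \bigl\| f - \varphi\bigl(f(x_1),\dots,f(x_n)\bigr) \bigr\|_{L_2}

    % worst-case error of the best algorithm using n arbitrary linear functionals (approximation numbers)
    a_n = \inf_{L_1,\dots,L_n \in H'} \; \inf_{\varphi\colon \mathbb{R}^n \to L_2} \; \sup_{\|f\|_H \le 1} \bigl\| f - \varphi\bigl(L_1(f),\dots,L_n(f)\bigr) \bigr\|_{L_2}

    % optimal orders of convergence compared in the paper
    b = \sup\{\beta \ge 0 : g_n = O(n^{-\beta})\}, \qquad p = \sup\{\beta \ge 0 : a_n = O(n^{-\beta})\}

In this notation, the result announced above exhibits a reproducing kernel Hilbert space with p=1/2 (the a_n decay essentially like n^{-1/2}) but b=0, i.e. the g_n do not decay like any positive power of n.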

