In this paper, we propose a least squares regularized regression algorithm with an l1 regularizer over a sum space of base hypothesis spaces. This sum space contains more functions than any single base hypothesis space and therefore has stronger approximation capability. We establish an excess error bound for this algorithm under assumptions on the kernels, the input space, the marginal distribution, and the regression function.
For the error analysis, the excess error is decomposed into the sample error, the hypothesis error, and the regularization error, each of which is estimated separately. From the excess error bound, convergence and a learning rate are derived by choosing a suitable value of the regularization parameter. The utility of the method is illustrated on two simulated data sets and one real-world database.
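The estimator described above can be illustrated with a minimal sketch: build a design matrix whose column blocks come from several base hypothesis spaces (here, kernel sections of Gaussian kernels with different widths, which are illustrative assumptions and not the paper's exact construction), then minimize the l1-regularized least squares objective. The ISTA (iterative soft-thresholding) solver is likewise a generic choice for this kind of problem, not the paper's algorithm.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Gram matrix of a Gaussian kernel between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def l1_sum_space_fit(X, y, sigmas=(0.1, 0.3), lam=0.01, n_iter=1000):
    """Minimize (1/2)||y - Phi c||^2 + lam * ||c||_1 by ISTA,
    where Phi stacks kernel sections from each base hypothesis space.
    The widths in `sigmas` and the solver are illustrative assumptions."""
    Phi = np.hstack([gaussian_kernel(X, X, s) for s in sigmas])
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ c - y)         # gradient of the least-squares term
        z = c - grad / L                     # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return c, Phi

# Toy usage: fit a smooth target from noisy samples.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 30)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(30)
c, Phi = l1_sum_space_fit(X, y)
```

The regularization parameter `lam` plays the role of the regularization parameter in the excess error bound: larger values give sparser coefficient vectors (selecting fewer base-space components) at the cost of a larger regularization error.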