Francisco García Riesgo, Sergio Luis Suárez Gómez, Fernando Sánchez Lasheras, Carlos González Gutiérrez, Carmen Peñalver San Cristóbal, Francisco Javier de Cos Juez
To remove the distortion that the atmosphere causes in observations performed with extremely large telescopes, correction techniques are required. Adaptive optics systems tackle this problem by using wave front sensors to obtain measurements of the atmospheric turbulence and hence estimate a reconstruction of the atmosphere; this reconstruction is applied to deformable mirrors, which compensate the aberrated wave front. In Multi-Object Adaptive Optics (MOAO), several Shack-Hartmann wave front sensors, together with reference guide stars, are used to characterize the aberration produced by the atmosphere. Typically, this is a two-step process: a centroiding algorithm is applied to the image provided by each sensor, and the centroids from the different Shack-Hartmann wave front sensors are then combined using a Least Squares algorithm or an Artificial Neural Network, such as the Multi-Layer Perceptron. In this article a new solution based on Convolutional Neural Networks is proposed, which integrates both the centroiding and the tomographic reconstruction into a single algorithm, achieving a substantial improvement over the traditional Least Squares algorithm and a performance similar to that of the Multi-Layer Perceptron, but without the need to previously compute a centroiding algorithm.
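As context for the two-step pipeline the abstract describes, the centroiding stage is commonly implemented as a center-of-gravity computation over each Shack-Hartmann subaperture image. The following is a minimal sketch of that idea, not the authors' implementation; the function names, the square-sensor layout, and the uniform subaperture grid are all illustrative assumptions.

```python
import numpy as np

def centroid_cog(subaperture):
    """Center-of-gravity centroid of one Shack-Hartmann subaperture image.

    Returns (cx, cy) in pixel coordinates relative to the subaperture origin.
    The spot displacement from the subaperture center is proportional to the
    local wave front slope.
    """
    img = np.asarray(subaperture, dtype=float)
    total = img.sum()
    if total == 0.0:
        # No light in this subaperture; a real pipeline would flag it instead.
        return 0.0, 0.0
    ys, xs = np.indices(img.shape)
    return (img * xs).sum() / total, (img * ys).sum() / total

def centroids_from_sensor(frame, n_sub):
    """Split a square sensor frame into an n_sub x n_sub grid of subapertures
    and return an (n_sub * n_sub, 2) array of centroids, one row per
    subaperture, ordered row by row."""
    size = frame.shape[0] // n_sub
    out = []
    for i in range(n_sub):
        for j in range(n_sub):
            sub = frame[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out.append(centroid_cog(sub))
    return np.array(out)
```

In the two-step approach, the centroid arrays from every sensor are concatenated and fed to the Least Squares reconstructor or the Multi-Layer Perceptron; the proposed Convolutional Neural Network instead consumes the raw sensor images directly, subsuming this stage.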