We present an analysis of the computational capabilities of feed-forward neural networks, focusing on the role of the output function. For small networks, we analyze the space of configurations that implement a given target function when different output functions are considered. We also analyze the generalization complexity and other relevant properties of some complex and useful linearly separable functions. The results indicate that efficient output functions are those whose Hamming weight is similar to that of the target output and that, at the same time, have high complexity.
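As an illustrative aside (not taken from the paper itself), the Hamming weight of a Boolean function's output can be computed directly from its truth table; the sketch below uses the 3-input majority function, a classic linearly separable function, as a hypothetical example.

```python
from itertools import product

def hamming_weight(truth_table):
    """Number of 1s in a Boolean function's output vector."""
    return sum(truth_table)

def majority(bits):
    # Majority: outputs 1 when more than half the inputs are 1;
    # a standard example of a linearly separable Boolean function.
    return int(sum(bits) > len(bits) / 2)

n = 3
inputs = list(product([0, 1], repeat=n))
table = [majority(x) for x in inputs]
print(hamming_weight(table), len(table))  # 4 of the 8 outputs are 1
```

Comparing such weights between a candidate output function and the target gives a simple, concrete measure of the "similar Hamming weight" criterion mentioned in the abstract.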