María Liz Crespo, María Fabiana Piccoli, Alicia Marcela Printista, Raúl Hector Gallard
Fast response, storage efficiency, fault tolerance and graceful degradation in the face of scarce or spurious inputs make neural networks appropriate tools for Intelligent Computer Systems. On the other hand, learning algorithms for neural networks involve CPU-intensive processing, and consequently great effort has been devoted to developing parallel implementations intended to reduce learning time. Looking at both sides of the coin, this paper first shows two alternatives for parallelising the learning process and then an application of neural networks to computing systems. On the parallel side, it presents distributed implementations that parallelise the learning process of neural networks using a pattern partitioning approach. Under this approach, weight changes are computed concurrently, exchanged between system components and adjusted accordingly until the whole parallel learning process is completed. On the application side, some design and implementation insights for building a system where decision support for load distribution is based on a neural network device are shown. Incoming task allocation, as a previous step, is a fundamental service aimed at improving distributed system performance by facilitating further dynamic load balancing. A neural network device inserted into the kernel of a distributed system as an intelligent tool allows automatic allocation of execution requests under predefined performance criteria based on resource availability and incoming process requirements. Performance results of the parallelised approach for learning in backpropagation neural networks are shown. These include a comparison of recall and generalisation abilities to support parallelism.
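The following is a minimal sketch of the pattern partitioning idea described above: each worker holds a disjoint subset of the training patterns, computes its local weight-change contribution by backpropagation, the contributions are exchanged and accumulated, and every replica applies the same update. The toy XOR data, network size, learning rate and helper names (local_gradients, partitions) are assumptions made for illustration, not details taken from the paper; workers are run sequentially here for clarity rather than as distributed processes.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_gradients(W1, W2, X, T):
    """Weight-change contribution (gradient of squared error) on one worker's patterns."""
    H = sigmoid(X @ W1)            # hidden activations
    Y = sigmoid(H @ W2)            # network outputs
    dY = (Y - T) * Y * (1 - Y)     # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H) # hidden-layer delta
    return X.T @ dH, H.T @ dY      # gradients w.r.t. W1 and W2

# Toy training set: XOR patterns with a bias input (illustrative assumption).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
eta = 0.5
n_workers = 2
partitions = np.array_split(np.arange(len(X)), n_workers)  # pattern partitioning

for epoch in range(5000):
    # Each worker computes weight changes on its own pattern subset
    # (concurrently in the real system; sequential here).
    grads = [local_gradients(W1, W2, X[idx], T[idx]) for idx in partitions]
    # Exchange/accumulate the contributions, then apply one common update.
    dW1 = sum(g[0] for g in grads)
    dW2 = sum(g[1] for g in grads)
    W1 -= eta * dW1
    W2 -= eta * dW2

print(sigmoid(sigmoid(X @ W1) @ W2).round(2))  # recall on the training patterns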