Abstract of Beta Hebbian Learning: definition and analysis of a new family of learning rules for exploratory projection pursuit

Héctor Quintián Pardo

  • This thesis comprises an investigation into the derivation of learning rules in artificial neural networks from probabilistic criteria.

    • Beta Hebbian Learning (BHL).

    First, a new family of learning rules is derived by maximising the likelihood of the residual of a negative feedback network when that residual is assumed to follow the Beta distribution. The resulting algorithm, Beta Hebbian Learning, outperforms current neural algorithms in Exploratory Projection Pursuit; a rough sketch of such a network is given below.
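    As a rough illustration of the kind of network involved (not the exact rule derived in the thesis), the sketch below runs a standard negative feedback Exploratory Projection Pursuit loop in which the residual-driven update follows the derivative of a Beta density. The rescaling of the residual into (0, 1), the values of alpha and beta, the learning rate and the function name bhl_sketch are all illustrative assumptions.

```python
import numpy as np

def bhl_sketch(X, n_outputs, alpha=3.0, beta=4.0, eta=0.01, epochs=100, rng=None):
    """Illustrative negative feedback EPP network with a Beta-driven residual update.

    A sketch only: the rescaling of the residual and the parameter values are
    assumptions, not the exact Beta Hebbian Learning rule derived in the thesis.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_samples, n_inputs = X.shape
    W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))

    for _ in range(epochs):
        for x in X[rng.permutation(n_samples)]:
            y = W @ x                        # feedforward activation
            e = x - W.T @ y                  # negative feedback residual
            # squash the residual into (0, 1) so a Beta density applies (assumption)
            e01 = np.clip((e - e.min()) / (np.ptp(e) + 1e-12), 1e-6, 1 - 1e-6)
            # derivative of e^(alpha-1) * (1-e)^(beta-1) with respect to e
            g = e01**(alpha - 2) * (1 - e01)**(beta - 2) * ((alpha - 1) - e01 * (alpha + beta - 2))
            W += eta * np.outer(y, g)        # Hebbian-style update driven by the Beta term
    return W
```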

    • Beta-Scale Invariant Map (Beta-SIM). Secondly, Beta Hebbian Learning is applied to a well-known Topology Preserving Map algorithm, the Scale Invariant Map (SIM), to design a new version of it called the Beta-Scale Invariant Map (Beta-SIM). It is developed to facilitate the effective and efficient clustering and visualization of the internal structure of high-dimensional complex datasets, especially those characterized by an internal radial distribution. The behaviour of Beta-SIM is thoroughly analysed by comparing its results, in terms of performance quality measures, with those of other well-known topology preserving models; a simplified training loop is sketched below.
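    To show how such a rule could be dropped into a topology preserving map, the sketch below runs a one-dimensional SIM-style training loop in which the winner's residual is passed through the same illustrative Beta-derived term before being applied across a Gaussian neighbourhood. The winner selection, neighbourhood function, parameter values and the name beta_sim_sketch are assumptions rather than the thesis's exact Beta-SIM definition.

```python
import numpy as np

def beta_sim_sketch(X, n_units, alpha=3.0, beta=4.0, eta=0.05, sigma=2.0, epochs=50, rng=None):
    """Illustrative 1-D Beta-SIM-style map: a Scale-Invariant-Map-like loop in which
    the winner's residual is passed through a Beta-derived term (a sketch, not the
    thesis's definition)."""
    rng = np.random.default_rng() if rng is None else rng
    n_samples, dim = X.shape
    W = rng.normal(scale=0.1, size=(n_units, dim))
    grid = np.arange(n_units)

    for _ in range(epochs):
        for x in X[rng.permutation(n_samples)]:
            winner = np.argmax(W @ x)                    # most activated unit (dot-product winner)
            e = x - W[winner]                            # feedback residual of the winner
            # squash the residual into (0, 1) so a Beta density applies (assumption)
            e01 = np.clip((e - e.min()) / (np.ptp(e) + 1e-12), 1e-6, 1 - 1e-6)
            # derivative of e^(alpha-1) * (1-e)^(beta-1) with respect to e
            g = e01**(alpha - 2) * (1 - e01)**(beta - 2) * ((alpha - 1) - e01 * (alpha + beta - 2))
            h = np.exp(-(grid - winner) ** 2 / (2 * sigma ** 2))   # Gaussian neighbourhood
            W += eta * h[:, None] * g[None, :]           # update winner and its neighbours
    return W
```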

    • Weighted Voting Superposition Beta-Scale Invariant Map (WeVoS-Beta-SIM).

    Finally, the use of ensembles such as the Weighted Voting Superposition (WeVoS) is tested on the novel Beta-SIM algorithm, in order to improve its stability and to generate accurate topology maps for complex datasets. The resulting WeVoS-Beta-Scale Invariant Map (WeVoS-Beta-SIM) is presented, analysed and compared with other well-known topology preserving models; a simplified fusion sketch is given below.
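    The sketch below illustrates the ensemble idea with a simplified stand-in for the voting superposition: several Beta-SIM-style maps (reusing beta_sim_sketch from the previous snippet) are trained on bootstrap resamples and then fused unit by unit, weighting each member's unit by how many samples it won. The real WeVoS procedure is more elaborate; the hit-count quality measure, the weighted average and the function names here are assumptions.

```python
import numpy as np

def wevos_fuse(maps, hits):
    """Fuse ensemble maps unit by unit, weighting each member's unit by its hit count.
    This weighted average is a simplified stand-in for the WeVoS voting superposition."""
    maps = np.asarray(maps, dtype=float)           # shape (n_members, n_units, dim)
    hits = np.asarray(hits, dtype=float)           # shape (n_members, n_units)
    vote = hits / (hits.sum(axis=0, keepdims=True) + 1e-12)   # per-unit voting weights
    return (vote[:, :, None] * maps).sum(axis=0)   # fused map, shape (n_units, dim)

def wevos_beta_sim_sketch(X, n_units, n_members=5, rng=None, **beta_sim_kwargs):
    """Train several Beta-SIM-style maps on bootstrap resamples and fuse them (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    maps, hits = [], []
    for _ in range(n_members):
        Xb = X[rng.integers(0, len(X), size=len(X))]           # bootstrap resample
        W = beta_sim_sketch(Xb, n_units, rng=rng, **beta_sim_kwargs)
        maps.append(W)
        winners = np.argmax(Xb @ W.T, axis=1)                  # winning unit per sample
        hits.append(np.bincount(winners, minlength=n_units))   # per-unit hit counts
    return wevos_fuse(maps, hits)
```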

    All algorithms have been successfully tested on different artificial datasets, to corroborate their properties, as well as on highly complex real-world datasets.

