Abstract of Parameter learning in hybrid Bayesian networks using prior knowledge

Inmaculada Pérez-Bernabé, Antonio Fernández, Rafael Rumí, Antonio Salmerón Cerdán

Mixtures of truncated basis functions (MoTBFs) have recently been proposed as a generalisation of mixtures of truncated exponentials and mixtures of polynomials for modelling univariate and conditional distributions in hybrid Bayesian networks. In this paper we analyse the problem of learning the parameters of marginal and conditional MoTBF densities when both prior knowledge and data are available. Incorporating prior knowledge provides a valuable tool for obtaining useful models, especially in application domains where data are costly or scarce and prior knowledge is available from practitioners. We explore scenarios where the prior knowledge can be expressed as an MoTBF density that is afterwards combined with another MoTBF density estimated from the available data. The resulting model remains within the MoTBF class, which is a convenient property from the point of view of inference in hybrid Bayesian networks. The performance of the proposed method is tested in a series of experiments carried out on synthetic and real data.
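
As a rough illustration of the closure property mentioned in the abstract, the sketch below is an assumption, not the authors' algorithm: it combines a prior density and a data-estimated density, both expressed as mixtures of polynomials (a special case of MoTBFs), through a convex combination of their coefficients. Since both operands are polynomials on a common support, the result is again a polynomial density, i.e. it stays inside the class. The function name `combine_mop` and the fixed combination weight are hypothetical choices for the example.

```python
# Hypothetical sketch: convex combination of two mixture-of-polynomials densities.
# This is NOT the method from the paper, only an illustration of MoTBF closure.
import numpy as np

def combine_mop(prior_coeffs, data_coeffs, weight=0.5):
    """Combine two polynomial densities defined on a common interval.

    prior_coeffs, data_coeffs: 1-D arrays of polynomial coefficients
    (lowest degree first), each integrating to 1 on the interval.
    weight: confidence placed on the prior (0 = data only, 1 = prior only).
    """
    n = max(len(prior_coeffs), len(data_coeffs))
    p = np.pad(prior_coeffs, (0, n - len(prior_coeffs)))
    d = np.pad(data_coeffs, (0, n - len(data_coeffs)))
    # A convex combination of two densities is again a density,
    # and a linear combination of polynomials is again a polynomial.
    return weight * p + (1.0 - weight) * d

# Example on [0, 1]: prior f(x) = 1 (uniform), data estimate g(x) = 0.5 + x.
combined = combine_mop(np.array([1.0]), np.array([0.5, 1.0]), weight=0.3)
print(combined)  # coefficients of the combined density, still a polynomial on [0, 1]
```

Both input densities in the example integrate to 1 on [0, 1], so the combined coefficients also define a valid density; this is the sense in which a prior-plus-data combination can remain within the MoTBF class and thus stay compatible with standard MoTBF inference machinery.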

