Iraide Zipitria Leaniz-Barrutia
One of the remaining goals in Artificial Intelligence in Education is to create applications that evaluate open-ended text in a human-like manner. This dissertation describes the design and development of a summary evaluation environment based on human performance. Drawing on previous research in psychology, contexts critical to the development of summarization ability have been analysed so that the environment significantly reflects human summary-grading decision making. An empirical study was carried out to identify the processes underlying overall summary grading decisions. As a result, overall grades are computed with a model based on a Bayesian network. The discourse grades involved in the global score are cohesion, coherence, use of language, comprehension and adequacy. Semantic information is captured by means of Latent Semantic Analysis, and syntactic information by means of Natural Language Processing tools. The resulting automatic discourse grades have been shown to significantly reflect human decisions.
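For illustration only, the sketch below pairs the two techniques named in the abstract: a toy Latent Semantic Analysis projection that scores a summary's semantic overlap with its source text, and a naive-Bayes-style Bayesian network that combines the five discourse grades into a posterior over the overall grade. The corpus, the network topology, the probability tables and the dimensionality k are all invented for the example; the dissertation's actual model is not reproduced here.

# A minimal sketch of the two techniques named above, with toy data and
# invented numbers throughout; the dissertation's actual network topology,
# probability tables, corpus and LSA dimensionality are not given here.
import numpy as np

def lsa_vectors(docs, k=2):
    """Project documents into a k-dimensional latent space via SVD."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))       # term-document counts
    for j, d in enumerate(docs):
        for w in d.lower().split():
            A[idx[w], j] += 1
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T           # one row per document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Semantic overlap between a source text and a student summary (toy data).
corpus = [
    "the water cycle moves water between oceans air and land",  # source
    "rain falls from clouds and rivers return water to the sea",
    "water evaporates forms clouds and returns as rain",        # summary
]
vecs = lsa_vectors(corpus, k=2)
semantic_score = cosine(vecs[0], vecs[2])

# Naive-Bayes-style network: the overall grade is the parent of the five
# discourse grades (one possible Bayesian network structure, assumed here).
GRADES = ("low", "mid", "high")
prior = {"low": 0.2, "mid": 0.5, "high": 0.3}    # P(overall), invented
cpt = {                                          # P(discourse | overall), invented
    "low":  {"low": 0.7, "mid": 0.2, "high": 0.1},
    "mid":  {"low": 0.2, "mid": 0.6, "high": 0.2},
    "high": {"low": 0.1, "mid": 0.2, "high": 0.7},
}

def posterior_overall(observed):
    """P(overall | observed discourse grades) by Bayes' rule."""
    joint = {g: prior[g] * np.prod([cpt[g][o] for o in observed])
             for g in GRADES}
    z = sum(joint.values())
    return {g: p / z for g, p in joint.items()}

# Cohesion, coherence, use of language, comprehension, adequacy.
observed = ["high", "mid", "high", "high", "mid"]
print(f"LSA similarity (summary vs. source): {semantic_score:.2f}")
print("P(overall | discourse grades):", posterior_overall(observed))

Running the script prints the toy summary's similarity to its source in the latent space and a normalised posterior over the three overall-grade levels; in the dissertation the analogous quantities are learned from human grading data rather than set by hand.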