
Documat


Relevant Content Selection through Positional Language Models: An Exploratory Analysis

  • Authors: Marta Vicente Moreno, Elena Lloret Pastor
  • Published in: Procesamiento del lenguaje natural, ISSN 1135-5948, No. 65, 2020, pp. 75-82
  • Language: English
  • Parallel titles:
    • Selección de Contenido Relevante mediante Modelos de Lenguaje Posicionales: Un Análisis Experimental
  • Abstract
    • Spanish

      Like many areas in Natural Language Processing, extractive summarisation has succumbed to the general trend set by the success of deep learning and neural network approaches. However, the resources such approaches require (computational, temporal, data) are not always available. In this work we explore an alternative method based on statistical techniques that, by exploiting the semantic information of the source document as well as its structure, provides competitive results. We present DICES, an unsupervised, cost-effective and adaptable method that needs neither powerful resources nor large amounts of data to achieve promising results with respect to the state of the art.

    • English

      Extractive Summarisation, like other areas in Natural Language Processing, has succumbed to the general trend marked by the success of neural approaches. However, the required resources (computational, temporal, data) are not always available. We present an experimental study of a method based on statistical techniques that, exploiting the semantic information from the source and its structure, provides competitive results against the state of the art. We propose a Discourse-Informed approach for Cost-effective Extractive Summarisation (DICES). DICES is an unsupervised, lightweight and adaptable framework that requires neither training data nor high-performance computing resources to achieve promising results.
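      The positional-language-model idea named in the title can be sketched in a few lines: each term occurrence spreads weight to nearby token positions through a kernel, and a sentence is scored by the propagated weight of its own terms, so sentences whose vocabulary recurs across the document score higher. The sketch below is purely illustrative, not the authors' DICES method; the Gaussian kernel, the width `sigma`, and the toy sentences are assumptions.

      ```python
      # Illustrative positional language model for extractive scoring:
      # a term at position j contributes k(i, j) pseudo-counts to every
      # position i; sentences are ranked by the average propagated count
      # of their terms.
      import math
      from collections import Counter

      def gaussian_kernel(i, j, sigma=25.0):
          """Weight that position j propagates to position i."""
          return math.exp(-((i - j) ** 2) / (2 * sigma ** 2))

      def positional_counts(tokens, sigma=25.0):
          """Propagated pseudo-count of every term at every position."""
          counts = [Counter() for _ in tokens]
          for j, w in enumerate(tokens):
              for i in range(len(tokens)):
                  counts[i][w] += gaussian_kernel(i, j, sigma)
          return counts

      def score_sentences(sentences, sigma=25.0):
          """Average propagated count of each sentence's own terms."""
          tokens = [w for s in sentences for w in s]
          counts = positional_counts(tokens, sigma)
          scores, pos = [], 0
          for s in sentences:
              score = sum(counts[pos + k][w] for k, w in enumerate(s))
              scores.append(score / max(len(s), 1))
              pos += len(s)
          return scores

      sentences = [
          "neural models need data".split(),
          "statistical models need little data".split(),
          "cats sleep".split(),
      ]
      scores = score_sentences(sentences)
      ```

      Here the first two sentences outscore the third because their terms ("models", "need", "data") recur nearby, which is the intuition behind rewarding positionally reinforced content.
      
      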

