Evaluation of a transformer model applied to the task of text summarization in different domains

  • Authors: Isabel Segura Bedmar, Lucía Ruz, Sara Guerrero Aspizua
  • Published in: Procesamiento del lenguaje natural, ISSN 1135-5948, No. 66, 2021, pp. 27-39
  • Language: Spanish
  • Original title:
    • Evaluación de un modelo transformador aplicado a la tarea de generación de resúmenes en distintos dominios
  • Abstract

      In recent years, deep learning techniques have driven significant advances in many Natural Language Processing (NLP) tasks. Text summarization has also benefited from these techniques: several deep learning models have recently been proposed, surpassing the previous state of the art. Most of these works have been evaluated on collections of journalistic texts. This article presents preliminary work in which we apply a transformer model, BART, to the task of text summarization and evaluate it on several datasets, one of them consisting of texts from the biomedical domain.
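
      The record contains no code, but the workflow the abstract describes (generating a summary with a pretrained BART model and scoring it against a reference) can be sketched in a few lines. The following is a minimal, illustrative Python sketch using the Hugging Face transformers library and the rouge-score package; the checkpoint facebook/bart-large-cnn, the toy texts, and the generation parameters are assumptions for illustration, not the authors' actual setup.

      # Minimal sketch, not the authors' code: summarize a document with a
      # pretrained BART checkpoint, then score the output with ROUGE.
      from transformers import pipeline
      from rouge_score import rouge_scorer

      # Assumption: a public BART checkpoint fine-tuned on CNN/DailyMail news;
      # the paper's exact checkpoint and fine-tuning setup are not given here.
      summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

      # Toy input document and reference summary, for illustration only.
      document = (
          "Deep learning techniques have provided significant advances in many "
          "Natural Language Processing tasks. Text summarization has also "
          "benefited from these techniques, with recent models surpassing the "
          "previous state of the art on collections of journalistic texts."
      )
      reference = "Deep learning has advanced NLP tasks such as text summarization."

      # max_length / min_length bound the generated summary length in tokens.
      generated = summarizer(document, max_length=40, min_length=8, do_sample=False)
      candidate = generated[0]["summary_text"]

      # ROUGE-1/2/L F-scores of the candidate against the reference (Lin, 2004).
      scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
      scores = scorer.score(reference, candidate)
      for name, score in scores.items():
          print(f"{name}: F1 = {score.fmeasure:.3f}")

      ROUGE (Lin, 2004), listed in the references below, is the standard metric for this task; its F-measure balances n-gram overlap precision and recall between the generated and the reference summary.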

  • References
    • Bae, S., T. Kim, J. Kim, and S.-g. Lee. 2019. Summary level training of sentence rewriting for abstractive summarization. In Proceedings of...
    • Beloki, Z., X. Saralegi, K. Ceberio, and A. Corral. 2020. Grammatical error correction for Basque through a seq2seq neural architecture and...
    • Celikyilmaz, A., A. Bosselut, X. He, and Y. Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018...
    • Chen, Y.-C. and M. Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual...
    • Cheng, J. and M. Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association...
    • Colón-Ruiz, C., I. Segura-Bedmar, and P. Martínez. 2019. Análisis de sentimiento en el dominio salud: analizando comentarios sobre fármacos....
    • Devlin, J., M.-W. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In...
    • Dong, Y., Y. Shen, E. Crawford, H. van Hoof, and J. C. K. Cheung. 2018. BanditSum: Extractive summarization as a contextual bandit. In Proceedings...
    • Durrett, G., T. Berg-Kirkpatrick, and D. Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints....
    • Gehrmann, S., Y. Deng, and A. Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in...
    • Graff, D., J. Kong, K. Chen, and K. Maeda. 2003. English Gigaword. Linguistic Data Consortium, Philadelphia, 4(1):34.
    • Grusky, M., M. Naaman, and Y. Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of...
    • Hermann, K. M., T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. 2015. Teaching machines to read and comprehend....
    • Hochreiter, S. and J. Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
    • Hsu, W.-T., C.-K. Lin, M.-Y. Lee, K. Min, J. Tang, and M. Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency...
    • Lewis, M., Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. 2020. BART: Denoising sequence-to-sequence...
    • Lin, C.-Y. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain,...
    • Liu, Y. and M. Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural...
    • Miranda-Escalada, A. and I. Segura-Bedmar. 2020. One stage versus two stages deep learning approaches for the extraction of drug-drug interactions...
    • Nallapati, R., F. Zhai, and B. Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents....
    • Nallapati, R., B. Zhou, C. dos Santos, Ç. Gülçehre, and B. Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs...
    • Narayan, S., S. B. Cohen, and M. Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme...
    • Papineni, K., S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the...
    • Pappas, D., P. Stavropoulos, I. Androutsopoulos, and R. McDonald. 2020. BioMRC: A dataset for biomedical machine reading comprehension. In...
    • Poncelas, A., K. Sarasola, M. Dowling, A. Way, G. Labaka, and I. Alegria. 2019. Adapting NMT to caption translation in Wikimedia Commons for...
    • Radford, A., K. Narasimhan, T. Salimans, and I. Sutskever. 2018. Improving language understanding by generative pre-training.
    • Raffel, C., N....
    • Rush, A. M., S. Chopra, and J. Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference...
    • See, A., P. J. Liu, and C. D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual...
    • Shi, J., C. Liang, L. Hou, J. Li, Z. Liu, and H. Zhang. 2019. DeepChannel: Salience estimation by contrastive learning for extractive document...
    • Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In Advances...
    • Xu, J. and G. Durrett. 2019. Neural extractive text summarization with syntactic compression. In Proceedings of the 2019 Conference on Empirical...
    • Zhang, H., J. Cai, J. Xu, and J. Wang. 2019. Pretraining-based natural language generation for text summarization. In Proceedings of the 23rd...
    • Zhang, X., F. Wei, and M. Zhou. 2019. HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization....
    • Zhang, Y., D. Li, Y. Wang, Y. Fang, and W. Xiao. 2019. Abstract text summarization with a convolutional seq2seq model. Applied Sciences, 9(8):1665.
    • Zhong, M., P. Liu, D. Wang, X. Qiu, and X. Huang. 2019. Searching for effective neural extractive summarization: What works and what's next....
