
Documat


Data selection for NMT using infrequent n-gram recovery

  • Authors: Zuzanna Parcheta, Germán Sanchis Trilles, Francisco Casacuberta Nolla
  • Published in: Proceedings of the 21st Annual Conference of the European Association for Machine Translation: 28-30 May 2018, Universitat d'Alacant, Alacant, Spain / coordinated by Juan Antonio Pérez Ortiz, Felipe Sánchez Martínez, Miquel Esplà Gomis, Maja Popovic, Celia Rico Pérez, André Martins, Joachim Van den Bogaert, Mikel L. Forcada Zubizarreta, 2018, ISBN 978-84-09-01901-4, pp. 219-227
  • Language: English
  • Abstract
    • Neural Machine Translation (NMT) has achieved promising results comparable with Phrase-Based Statistical Machine Translation (PBSMT). However, training a neural translation engine requires far more powerful machines than developing a PBSMT-based engine. One way to reduce the training cost of NMT systems is to shrink the training corpus through data selection (DS) techniques. Many DS techniques applied in PBSMT yield good results. In this work, we show that the data selection technique based on infrequent n-gram occurrence described by Gascó et al. (2012), commonly used for PBSMT systems, also works well for NMT systems. We focus on selecting data with respect to specific in-domain corpora using this technique; the domain-specific corpora used in our experiments are from the IT and medical domains. The DS technique reduces the time required to train the model by between 87% and 93%, and it improves translation quality by up to 2.8 BLEU points. These improvements are obtained with only a small fraction of the data, between 6% and 20% of the total.
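The core idea of the technique the abstract refers to can be sketched as a greedy selection loop: n-grams that occur fewer than some threshold of times in the in-domain seed corpus are "infrequent", and out-of-domain sentences are picked by how many missing occurrences they recover. The sketch below is a minimal, simplified reading of that idea; the function names, the greedy scoring, and the threshold `t` are illustrative assumptions, not the exact algorithm of Gascó et al. (2012).

```python
from collections import Counter
from itertools import chain


def ngrams(tokens, n_max=3):
    """All n-grams of the token list up to length n_max."""
    return [tuple(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]


def select_infrequent_ngram(pool, in_domain, t=2, n_max=3):
    """Greedy sketch of infrequent n-gram recovery (illustrative).

    Any n-gram seen fewer than t times in the in-domain seed still
    'needs' occurrences. Repeatedly pick the pool sentence that
    recovers the most needed occurrences, update the needs, and stop
    when no sentence recovers anything.
    """
    counts = Counter(chain.from_iterable(
        ngrams(s.split(), n_max) for s in in_domain))
    # Occurrences still needed per infrequent n-gram (simplification:
    # n-grams never seen in-domain are not targeted here).
    needed = Counter({g: t - c for g, c in counts.items() if c < t})

    selected = []
    remaining = list(pool)
    while remaining and needed:
        def gain(sent):
            c = Counter(ngrams(sent.split(), n_max))
            return sum(min(c[g], needed[g]) for g in c if g in needed)

        best = max(remaining, key=gain)
        if gain(best) == 0:
            break
        selected.append(best)
        remaining.remove(best)
        # The selected sentence's n-grams reduce the remaining needs.
        for g, c in Counter(ngrams(best.split(), n_max)).items():
            if g in needed:
                needed[g] -= c
                if needed[g] <= 0:
                    del needed[g]
    return selected
```

With a toy in-domain seed of IT-like sentences, the loop keeps pool sentences that supply rare in-domain n-grams (e.g. "driver", "package") and skips unrelated ones, which is the behaviour that lets the paper train on 6-20% of the data.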

