Overview of EXIST 2022: sEXism Identification in Social neTworks

  • Authors: Francisco Rodríguez Sánchez, Jorge Carrillo de Albornoz, Laura Plaza Morales, Adrián Mendieta Aragón, Guillermo Marco Remón, Maryna Makeienko, María Plaza, Julio Gonzalo Arroyo, Damiano Spina, Paolo Rosso
  • Published in: Procesamiento del lenguaje natural, ISSN 1135-5948, No. 69, 2022, pp. 229-240
  • Language: English
  • Parallel titles:
    • Overview de EXIST 2022: Identificación de Sexismo en Redes Sociales
  • Abstract

      The paper describes the organization, goals, and results of the sEXism Identification in Social neTworks (EXIST) 2022 challenge, a shared task proposed for the second year at IberLEF. EXIST 2022 consists of two challenges: sexism identification and sexism categorization of tweets and gabs, both in Spanish and English. We have received a total of 45 runs for the sexism identification task and 29 runs for the sexism categorization task, submitted by 19 different teams. In this paper, we present the dataset, the evaluation methodology, an overview of the proposed systems, and the results obtained. The final dataset consists of more than 12,000 annotated texts from two social networks (Twitter and Gab) labelled following two different procedures: external contributors and trained experts.
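
As a purely illustrative sketch (not the organizers' baseline or evaluation code), the snippet below shows how the binary sexism identification task described in the abstract could be framed as text classification: TF-IDF features feeding a logistic-regression classifier over short Spanish and English texts. The inline toy examples, the "sexist"/"non-sexist" label names, and the macro-F1 metric are assumptions made for the example; the official EXIST 2022 data, label set, and evaluation protocol are those defined in the paper. The categorization task would follow the same pattern with category labels instead of binary ones, and participating systems typically rely on fine-tuned pre-trained language models rather than a linear baseline.

```python
# Illustrative sketch only: a minimal TF-IDF + logistic regression pipeline
# for a binary sexism identification task like the one described in the
# abstract. The toy texts, the "sexist"/"non-sexist" label names and the
# macro-F1 metric are assumptions for demonstration purposes; they are not
# the official EXIST 2022 data or evaluation protocol.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline

# Toy bilingual training data (the real corpus contains more than 12,000
# texts collected from Twitter and Gab in Spanish and English).
train_texts = [
    "Women belong in the kitchen, not in the office.",
    "Las mujeres no saben conducir, es un hecho.",
    "Congratulations to the whole team on the release!",
    "Hoy hace un día precioso para salir a pasear.",
]
train_labels = ["sexist", "sexist", "non-sexist", "non-sexist"]

# Word unigram/bigram TF-IDF features feeding a linear classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(train_texts, train_labels)

# Score a couple of held-out toy texts.
test_texts = [
    "She only got the job because she is a woman.",
    "El partido de ayer fue espectacular.",
]
test_labels = ["sexist", "non-sexist"]
predictions = model.predict(test_texts)
print(list(predictions))
print("macro F1:", f1_score(test_labels, predictions, average="macro"))
```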

