Documat


Quantitative Analysis of Users’ Agreement on Open Educational Resources Quality Inside Repositories

  • Mayara Sousa Stein [1]; Cristian Cechinel [1]; Vinicius Faria Culmant Ramos [2]
    1. [1] University of Santa Catarina, Araranguá Campus, Araranguá, Santa Catarina, Brazil
    2. [2] Federal University of Santa Catarina, Araranguá, Brazil
  • Published in: Revista Iberoamericana de Tecnologías del Aprendizaje: IEEE-RITA, ISSN 1932-8540, Vol. 18, No. 1, 2023, pp. 2-9
  • Language: English
  • DOI: 10.1109/RITA.2023.3250446
  • Full text not available
  • Abstract
    • Quality assessment inside learning object repositories is normally performed by the community of users who share interest in and rate the same resources, and this strategy is widely adopted in the best-known repositories. However, the overall quality of a resource is normally presented only as the average rating given by the community, hiding both the internal distribution of the ratings and the characteristics of the users involved in the evaluation process. The present paper analyzes to what extent different raters tend to agree about the quality of the resources inside the Merlot repository. For that, data were collected from the repository, the Intra-Class Correlation coefficient was calculated for 102 pairs of evaluators, and the Spearman correlation was computed among the average ratings given to a resource by evaluators from the same discipline categories. Results point to a high concentration of poor agreement between raters (75% to 85% of the pairs tended to disagree) and no correlation among the average ratings of the resources across disciplines. Based on these findings, the authors suggest improvements to the repository interface to better present the overall quality of the resources.
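    The agreement analysis the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it computes ICC(2,1) (two-way random effects, absolute agreement, single rater, the form commonly used for pairs of raters) from the standard ANOVA mean squares, alongside a Spearman rank correlation. The toy ratings are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def icc2_1(ratings):
        """ICC(2,1): two-way random-effects model, absolute agreement, single rater.
        `ratings` is an (n_resources, k_raters) array of quality ratings."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)    # per-resource means
        col_means = ratings.mean(axis=0)    # per-rater means
        ss_rows = k * ((row_means - grand) ** 2).sum()
        ss_cols = n * ((col_means - grand) ** 2).sum()
        ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
        msr = ss_rows / (n - 1)             # mean square for resources
        msc = ss_cols / (k - 1)             # mean square for raters
        mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical toy data: two raters scoring three resources, one rater
    # consistently one point stricter than the other.
    pair = [[1, 2], [2, 3], [3, 4]]
    icc = icc2_1(pair)   # ranks agree but absolute scores differ, so ICC < 1
    rho = spearmanr([1, 2, 3], [2, 3, 4]).correlation  # 1.0: perfect rank agreement
    ```

    The contrast between the two numbers shows why the paper reports both: ICC with absolute agreement penalizes systematic rater strictness, while Spearman only compares rankings.
    
    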

