
Documat


Abstract of Computer-aided assessment in mechanics: question design and test evaluation

M. Gill, Martin Greenhow

This article describes pedagogic issues in setting objective tests in mechanics using Question Mark Perception, coupled with MathML mathematics mark-up and the Scalable Vector Graphics (SVG) syntax for producing diagrams. The content of the questions (for a range of question types such as multiple-choice, numerical input and variants such as confidence-based questions) is scripted with random parameters, thereby producing many millions of realizations of the underlying 'question style'. This means that the question setter must completely specify the algebraic and pedagogic structure of the question. For some question types, we need to understand and encode the ways in which students make mistakes, offering these as distracters or recognizing their use in numerical inputs (we call this responsive numerical input). We have examined several years' worth of exam scripts to discover which 'mal-rules' students apply to each question, and have attempted to characterize them with metadata that makes students' responses, as recorded in the answer files, easier to interpret.
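The mechanism described above can be sketched in miniature. The following Python fragment is purely illustrative (the authors' actual system uses Question Mark Perception scripts, and the specific mal-rules here are hypothetical): a 'question style' is realized with random parameters, each distracter is generated by a named mal-rule, and a typed numerical answer is matched back to the mal-rule that would produce it.

```python
import random

# Hypothetical mal-rules for the question style "F = m * a".
# Each tag is metadata recorded with the student's response, so the
# answer files show *why* a wrong option was chosen.
MAL_RULES = {
    "correct": lambda m, a: m * a,   # F = m a
    "added":   lambda m, a: m + a,   # adds instead of multiplying
    "divided": lambda m, a: m / a,   # divides mass by acceleration
    "acc_only": lambda m, a: float(a),  # forgets the mass entirely
}

def realize_question(seed=None):
    """Produce one realization of the question style with tagged options."""
    rng = random.Random(seed)
    m = rng.randint(3, 12)   # mass in kg (>= 3 keeps all options distinct)
    a = rng.randint(2, 9)    # acceleration in m/s^2
    stem = (f"A body of mass {m} kg accelerates uniformly at {a} m/s^2. "
            f"Find the net force in N.")
    options = {tag: rule(m, a) for tag, rule in MAL_RULES.items()}
    return stem, options

def classify_numeric_input(value, options, tol=1e-6):
    """Responsive numerical input: match a typed answer to a mal-rule tag."""
    for tag, v in options.items():
        if abs(value - v) < tol:
            return tag
    return "unrecognized"

if __name__ == "__main__":
    stem, options = realize_question(seed=1)
    print(stem)
    for tag, value in options.items():
        print(f"  {tag}: {value}")
```

In a multiple-choice rendering, the mal-rule values become the distracters; in a numerical-input rendering, `classify_numeric_input` lets the feedback screen respond to the specific mistake rather than simply marking the answer wrong.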

Results from evaluation experiments are presented; in particular, we are interested in whether the feedback 'feeds forward' to affect students' approaches to problems in a repeat test or exam after a variable delay (almost immediate, 1 week, or 1 month or more). To quantify this when examining end-of-semester exam scripts, we looked at four indicators: using units, identifying vectors, using diagrams, and emulating the good layout of the feedback screens in their own written solutions.

