Showing all 6 results
Peer reviewed
Direct link
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many such systems have been developed. Still, we could not find any system that generates accurate MCQs from school-level textbook content that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
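The entry above concerns automatic generation of MCQs from textbook content. As a rough illustration of the general technique only (not the authors' system), the sketch below builds a cloze-style item by blanking a key term in a sentence and sampling distractors from elsewhere in the passage; the passage, the term-selection rule, and the distractor heuristic are all simplified assumptions.

```python
# Minimal sketch of cloze-style MCQ generation from a text passage.
# Illustrative only: real systems use far richer key-term selection
# and distractor ranking than the naive heuristics shown here.
import random
import re

def make_cloze_mcq(passage: str, answer: str, n_distractors: int = 3):
    """Turn the first sentence containing `answer` into a cloze item,
    using other capitalized terms from the passage as distractors."""
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    stem_sentence = next((s for s in sentences if answer in s), None)
    if stem_sentence is None:
        return None
    stem = stem_sentence.replace(answer, "_____")
    # Naive distractor pool: other capitalized words in the passage.
    pool = {w for w in re.findall(r"\b[A-Z][a-z]+\b", passage) if w != answer}
    distractors = random.sample(sorted(pool), min(n_distractors, len(pool)))
    options = distractors + [answer]
    random.shuffle(options)
    return {"stem": stem, "options": options, "answer": answer}

passage = ("Photosynthesis occurs in the chloroplast. "
           "Respiration occurs in the Mitochondria and releases energy.")
print(make_cloze_mcq(passage, "Mitochondria"))
```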
Peer reviewed
Direct link
Gombert, Sebastian; Di Mitri, Daniele; Karademir, Onur; Kubsch, Marcus; Kolbe, Hannah; Tautz, Simon; Grimm, Adrian; Bohm, Isabell; Neumann, Knut; Drachsler, Hendrik – Journal of Computer Assisted Learning, 2023
Background: Formative assessments are needed to monitor how student knowledge develops throughout a unit. Constructed response items, which require learners to formulate their own free-text responses, are well suited for testing their active knowledge. However, assessing such constructed responses in an automated fashion is a complex task…
Descriptors: Coding, Energy, Scientific Concepts, Formative Evaluation
Peer reviewed
Direct link
Zehner, Fabian; Goldhammer, Frank; Lubaway, Emily; Sälzer, Christine – Education Inquiry, 2019
In 2015, the "Programme for International Student Assessment" (PISA) introduced multiple changes in its study design, the most extensive being the transition from paper- to computer-based assessment. We investigated the differences between German students' text responses to eight reading items from the paper-based study in 2012 to text…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
Direct link
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank – Educational and Psychological Measurement, 2016
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
Descriptors: Educational Assessment, Coding, Automation, Responses
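The entry above states only that open-source NLP and statistical-modelling components were combined to code short text responses automatically. As a hedged illustration of that general approach (not the authors' actual pipeline), the sketch below trains a supervised coder on hand-coded responses using a bag-of-words TF-IDF representation and a logistic-regression classifier; the toy data and the choice of scikit-learn components are assumptions for illustration.

```python
# Hedged sketch: supervised coding of short text responses with a
# bag-of-words pipeline (scikit-learn). The specific TF-IDF +
# logistic-regression setup is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: responses hand-coded as correct (1) or incorrect (0).
responses = [
    "the author wants to warn readers about risks",
    "it is about a dog",
    "the text argues that risks are often underestimated",
    "no idea",
]
codes = [1, 0, 1, 0]

coder = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
coder.fit(responses, codes)

# Apply the trained coder to a new, uncoded response.
print(coder.predict(["the author warns about underestimated risks"]))
```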
Peer reviewed
Direct link
Aldabe, Itziar; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2014
The work we present in this paper aims to help teachers create multiple-choice science tests. We focus on a scientific vocabulary-learning scenario taking place in a Basque-language educational environment. In this particular scenario, we explore the option of automatically generating Multiple-Choice Questions (MCQ) by means of Natural Language…
Descriptors: Science Tests, Test Construction, Computer Assisted Testing, Multiple Choice Tests
Peer reviewed
PDF on ERIC (download full text)
Deane, Paul; Lawless, René R.; Li, Chen; Sabatini, John; Bejar, Isaac I.; O'Reilly, Tenaha – ETS Research Report Series, 2014
We expect that word knowledge accumulates gradually. This article draws on earlier approaches to assessing depth, but focuses on one dimension: richness of semantic knowledge. We present results from a study in which three distinct item types were developed at three levels of depth: knowledge of common usage patterns, knowledge of broad topical…
Descriptors: Vocabulary, Test Items, Language Tests, Semantics