Showing all 11 results
Rushton, Nicky; Vitello, Sylvia; Suto, Irenka – Research Matters, 2021
It is important to define what an error in a question paper is, so that there is a common understanding and so that people's own conceptions do not shape the way in which they write or check question papers. We carried out an interview study to investigate our colleagues' definitions of error. We found that there is no single accepted definition…
Descriptors: Definitions, Tests, Foreign Countries, Problems
Peer reviewed
Direct link
Alammary, Ali – IEEE Transactions on Learning Technologies, 2021
Developing effective assessments is a critical component of quality instruction. Assessments are effective when they are well aligned with the learning outcomes, can confirm that all intended learning outcomes are attained, and yield grades that accurately reflect the level of student achievement. Developing effective assessments is not…
Descriptors: Outcomes of Education, Alignment (Education), Student Evaluation, Data Analysis
Peer reviewed
Direct link
Wood, Carla; Hoge, Rachel; Schatschneider, Christopher; Castilla-Earls, Anny – International Journal of Bilingual Education and Bilingualism, 2021
This study examines the response patterns of 288 Spanish-English dual language learners on a standardized test of receptive Spanish vocabulary. Investigators analyzed responses to 54 items on the "Test de Vocabulario en Imagenes" (TVIP) [Dunn, L. M., D. E. Lugo, E. R. Padilla, and L. M. Dunn. 1986. "Test de Vocabulario en Imagenes…
Descriptors: Predictor Variables, Phonology, Item Analysis, Spanish
Peer reviewed
Direct link
Nichols, Bryan E. – Update: Applications of Research in Music Education, 2016
The purpose of this review of literature was to identify research findings for designing assessments in singing accuracy. The aim was to specify the test construction variables that directly affect test performance to guide future design in singing accuracy assessment for research and classroom uses. Three pitch-matching tasks--single pitch,…
Descriptors: Singing, Accuracy, Music, Music Education
Peer reviewed
Direct link
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education, focused specifically on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
Peer reviewed
Direct link
Meyer, Heinz-Dieter – Comparative Education, 2017
Quantitative measures of student performance are increasingly used as proxies of educational quality and teacher ability. Such assessments assume that the quality of educational practices can be measured quantitatively and unambiguously, and that such measures are sufficiently precise and robust to be aggregated into policy-relevant rankings like…
Descriptors: Student Evaluation, Evaluation Problems, Accuracy, Scholarship
Peer reviewed
Direct link
Ashford-Rowe, Kevin; Herrington, Janice; Brown, Christine – Assessment & Evaluation in Higher Education, 2014
This study sought to determine the critical elements of an authentic learning activity, design them into an applicable framework and then use this framework to guide the design, development and application of work-relevant assessment. Its purpose was to formulate an effective model of task design and assessment. The first phase of the study…
Descriptors: Performance Based Assessment, Models, Test Construction, Transfer of Training
Peer reviewed
Direct link
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
Peer reviewed
PDF on ERIC
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the GRE® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
DiBartolomeo, Matthew – ProQuest LLC, 2010
Multiple factors have influenced testing agencies to more carefully consider the manner and frequency in which pretest item data are collected and analyzed. One potentially promising development is judges' estimates of item difficulty. Accurate estimates of item difficulty may be used to reduce pretest sample sizes, supplement insufficient…
Descriptors: Test Items, Group Discussion, Athletics, Pretests Posttests
Peer reviewed
PDF on ERIC
Graf, Edith Aurora; Peterson, Stephen; Steffen, Manfred; Lawless, René – ETS Research Report Series, 2005
We describe the item modeling development and evaluation process as applied to a quantitative assessment with high-stakes outcomes. In addition to expediting the item-creation process, a model-based approach may reduce pretesting costs, if the difficulty and discrimination of model-generated items can be predicted to a predefined level of…
Descriptors: Psychometrics, Accuracy, Item Analysis, High Stakes Tests