Showing all 5 results
Peer reviewed
Leu, Donald J.; Forzani, Elena; Rhoads, Chris; Maykel, Cheryl; Kennedy, Clint; Timbrell, Nicole – Reading Research Quarterly, 2015
Is there an achievement gap for online reading ability based on income inequality that is separate from the achievement gap in traditional, offline reading? This possibility was examined with students (N = 256) in two pseudonymous school districts: West Town (economically advantaged) and East Town (economically challenged). Performance-based…
Descriptors: Reading Achievement, Achievement Gap, Electronic Learning, School Districts
Peer reviewed
Kluge, Annette – Applied Psychological Measurement, 2008
The use of microworlds (MWs), or complex dynamic systems, in educational testing and personnel selection is hampered by systematic measurement errors because the difficulty of these new and innovative item formats is not adequately controlled. This empirical study introduces a way to operationalize an MW's difficulty and demonstrates the…
Descriptors: Personnel Selection, Self Efficacy, Educational Testing, Computer Uses in Education
Chang, Yu-Wen; Davison, Mark L. – 1992
Standard errors and bias of unidimensional and multidimensional ability estimates were compared in a factorial, simulation design with two item response theory (IRT) approaches, two levels of test correlation (0.42 and 0.63), two sample sizes (500 and 1,000), and a hierarchical test content structure. Bias and standard errors of subtest scores…
Descriptors: Comparative Testing, Computer Simulation, Correlation, Error of Measurement
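A minimal sketch of the kind of simulation summarized in this entry, assuming a unidimensional 2PL model with maximum-likelihood ability estimation; the item parameters, true ability, and replication count below are illustrative, not the study's factorial design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2PL item parameters (hypothetical, not the study's): 20 items.
n_items = 20
a = rng.uniform(0.8, 2.0, n_items)   # discriminations
b = rng.uniform(-2.0, 2.0, n_items)  # difficulties

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle_theta(resp, a, b, grid=np.linspace(-4, 4, 801)):
    """Maximum-likelihood ability estimate via grid search."""
    p = p_correct(grid[:, None], a, b)   # shape: (grid points, items)
    loglik = (resp * np.log(p) + (1 - resp) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Replicate examinees at one true ability and summarize the estimator.
true_theta = 0.5
n_examinees = 500
estimates = np.empty(n_examinees)
for i in range(n_examinees):
    resp = (rng.random(n_items) < p_correct(true_theta, a, b)).astype(float)
    estimates[i] = mle_theta(resp, a, b)

print(f"bias           = {estimates.mean() - true_theta:+.3f}")
print(f"standard error = {estimates.std(ddof=1):.3f}")
```

Across replications, the mean deviation of the estimates from the true ability gives the empirical bias, and their standard deviation gives the empirical standard error, the two quantities compared in the study.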
Sykes, Robert C.; And Others – 1992
A part-form methodology was used to study the effect of varying degrees of multidimensionality on the consistency of pass/fail classification decisions obtained from simulated unidimensional item response theory (IRT) based licensure examinations. A control on the degree of form multidimensionality permitted an assessment throughout the range of…
Descriptors: Classification, Comparative Testing, Computer Simulation, Decision Making
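As a rough illustration of how pass/fail classification consistency can be computed from simulated examinations, here is a sketch assuming a Rasch model, two parallel forms, and a number-correct cutscore; the study's part-form multidimensionality manipulation is not reproduced, and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: two parallel 30-item Rasch forms and a fixed cutscore.
n_items, n_examinees, cut = 30, 2000, 18
b1 = rng.normal(0, 1, n_items)   # form 1 item difficulties
b2 = rng.normal(0, 1, n_items)   # form 2 item difficulties
theta = rng.normal(0, 1, n_examinees)

def number_correct(theta, b):
    """Simulated number-correct scores under a Rasch model."""
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).sum(axis=1)

pass1 = number_correct(theta, b1) >= cut
pass2 = number_correct(theta, b2) >= cut
# Consistency: proportion of examinees classified alike on both forms.
consistency = np.mean(pass1 == pass2)
print(f"pass/fail classification consistency = {consistency:.3f}")
```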
Spray, Judith A.; Miller, Timothy R. – 1992
A popular method of analyzing test items for differential item functioning (DIF) is to compute a statistic that conditions samples of examinees from different populations on an estimate of ability. This conditioning or matching by ability is intended to produce an appropriate statistic that is sensitive to true differences in item functioning,…
Descriptors: Blacks, College Entrance Examinations, Comparative Testing, Computer Simulation
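As one concrete example of the conditioning this entry describes, here is a minimal Mantel-Haenszel DIF sketch that matches reference- and focal-group examinees on total test score, a common choice of ability estimate; the groups, item, and data are simulated and hypothetical:

```python
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """Mantel-Haenszel common odds ratio for a studied item, conditioning
    (matching) on total score as the ability estimate.

    item  : 0/1 responses to the studied item
    total : matching variable, e.g. total test score
    group : True for the reference group, False for the focal group
    """
    item = np.asarray(item, bool)
    group = np.asarray(group, bool)
    num = den = 0.0
    for s in np.unique(total):            # one 2x2 table per score stratum
        m = np.asarray(total) == s
        n = m.sum()
        A = np.sum(item[m] & group[m])    # reference, correct
        B = np.sum(~item[m] & group[m])   # reference, incorrect
        C = np.sum(item[m] & ~group[m])   # focal, correct
        D = np.sum(~item[m] & ~group[m])  # focal, incorrect
        num += A * D / n
        den += B * C / n
    return num / den  # alpha_MH; 1.0 indicates no DIF

# Hypothetical data: 1000 examinees, two groups, a DIF-free studied item,
# and a total score correlated with the underlying ability.
rng = np.random.default_rng(1)
group = rng.random(1000) < 0.5
theta = rng.normal(0, 1, 1000)
item = rng.random(1000) < 1 / (1 + np.exp(-theta))
total = np.clip(np.round(10 + 4 * theta + rng.normal(0, 1, 1000)), 0, 20)
print(f"alpha_MH = {mantel_haenszel_dif(item, total, group):.2f}")
```

Because the simulated item functions identically in both groups, the common odds ratio should land near 1.0; values far from 1.0 after matching on total score would signal true differences in item functioning rather than group ability differences.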