Publication Date

| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Author

| Author | Results |
| --- | --- |
| Chang, Yu-Wen | 1 |
| Davison, Mark L. | 1 |
| Forzani, Elena | 1 |
| Kennedy, Clint | 1 |
| Kluge, Annette | 1 |
| Leu, Donald J. | 1 |
| Maykel, Cheryl | 1 |
| Miller, Timothy R. | 1 |
| Rhoads, Chris | 1 |
| Spray, Judith A. | 1 |
| Sykes, Robert C. | 1 |
| Timbrell, Nicole | 1 |
Publication Type

| Publication Type | Results |
| --- | --- |
| Reports - Evaluative | 4 |
| Journal Articles | 2 |
| Reports - Research | 2 |
| Speeches/Meeting Papers | 2 |
Education Level

| Education Level | Results |
| --- | --- |
| Elementary Education | 1 |
| Grade 7 | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Location

| Location | Results |
| --- | --- |
| Germany | 1 |
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
Leu, Donald J.; Forzani, Elena; Rhoads, Chris; Maykel, Cheryl; Kennedy, Clint; Timbrell, Nicole – Reading Research Quarterly, 2015
Is there an achievement gap for online reading ability based on income inequality that is separate from the achievement gap in traditional, offline reading? This possibility was examined by comparing students in two pseudonymous school districts: West Town (economically advantaged) and East Town (economically challenged; N = 256). Performance-based…
Descriptors: Reading Achievement, Achievement Gap, Electronic Learning, School Districts
Kluge, Annette – Applied Psychological Measurement, 2008
The use of microworlds (MWs), or complex dynamic systems, in educational testing and personnel selection is hampered by systematic measurement errors because the difficulty of these new and innovative item formats is not adequately controlled. This empirical study introduces a way to operationalize an MW's difficulty and demonstrates the…
Descriptors: Personnel Selection, Self Efficacy, Educational Testing, Computer Uses in Education
Chang, Yu-Wen; Davison, Mark L. – 1992
Standard errors and bias of unidimensional and multidimensional ability estimates were compared in a factorial, simulation design with two item response theory (IRT) approaches, two levels of test correlation (0.42 and 0.63), two sample sizes (500 and 1,000), and a hierarchical test content structure. Bias and standard errors of subtest scores…
Descriptors: Comparative Testing, Computer Simulation, Correlation, Error of Measurement
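
The kind of IRT simulation this entry describes can be illustrated with a short, self-contained sketch. Everything below is an assumption chosen for illustration (a 2PL model, 20 items, 500 examinees, grid-search maximum-likelihood scoring); none of the parameter values or design choices are taken from the study itself.

```python
# A minimal sketch (not the study's code) of simulating item responses
# under a unidimensional 2PL IRT model and summarizing bias and standard
# error of the ability estimates. All design values below (test length,
# sample size, parameter distributions) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items = 500, 20                  # assumed design values
theta_true = rng.normal(0.0, 1.0, n_persons)  # true abilities
a = rng.uniform(0.8, 2.0, n_items)            # item discriminations
b = rng.normal(0.0, 1.0, n_items)             # item difficulties

def p_correct(theta, a, b):
    """2PL item response function, persons x items."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

# Simulate dichotomous (0/1) responses.
resp = (rng.random((n_persons, n_items)) < p_correct(theta_true, a, b)).astype(int)

# Maximum-likelihood ability estimates via a simple grid search.
grid = np.linspace(-4.0, 4.0, 321)
P = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))          # grid x items
loglik = resp @ np.log(P).T + (1 - resp) @ np.log(1 - P).T  # persons x grid
theta_hat = grid[np.argmax(loglik, axis=1)]

bias = np.mean(theta_hat - theta_true)
se = np.std(theta_hat - theta_true, ddof=1)
print(f"bias = {bias:.3f}, empirical SE = {se:.3f}")
```

Repeating this across crossed conditions (test correlation, sample size) and against a multidimensional estimator would mirror the factorial comparison the abstract describes.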
Sykes, Robert C.; And Others – 1992
A part-form methodology was used to study the effect of varying degrees of multidimensionality on the consistency of pass/fail classification decisions obtained from simulated unidimensional item response theory (IRT) based licensure examinations. A control on the degree of form multidimensionality permitted an assessment throughout the range of…
Descriptors: Classification, Comparative Testing, Computer Simulation, Decision Making
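
The outcome measure this entry turns on, consistency of pass/fail decisions from simulated IRT examinations, can be sketched under assumed values: score the same simulated examinees on two parallel Rasch forms against a fixed cut score and count agreement. This illustrates the consistency index only, not the paper's part-form methodology or its multidimensionality manipulation.

```python
# A minimal sketch, under assumed values, of pass/fail classification
# consistency across two simulated parallel Rasch forms. The model, test
# length, and cut score are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_persons, n_items, cut = 2000, 40, 24        # illustrative values
theta = rng.normal(0.0, 1.0, n_persons)       # examinee abilities
b1 = rng.normal(0.0, 1.0, n_items)            # form 1 item difficulties
b2 = rng.normal(0.0, 1.0, n_items)            # form 2 item difficulties

def simulate(theta, b):
    """Simulate 0/1 Rasch responses for all persons on one form."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
    return (rng.random((len(theta), len(b))) < p).astype(int)

pass1 = simulate(theta, b1).sum(axis=1) >= cut
pass2 = simulate(theta, b2).sum(axis=1) >= cut

print(f"pass/fail classification consistency: {np.mean(pass1 == pass2):.3f}")
```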
Spray, Judith A.; Miller, Timothy R. – 1992
A popular method of analyzing test items for differential item functioning (DIF) is to compute a statistic that conditions samples of examinees from different populations on an estimate of ability. This conditioning or matching by ability is intended to produce an appropriate statistic that is sensitive to true differences in item functioning,…
Descriptors: Blacks, College Entrance Examinations, Comparative Testing, Computer Simulation
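
The abstract does not name a specific statistic, but the best-known DIF statistic that conditions examinees from different populations on an ability estimate is the Mantel-Haenszel common odds ratio, matched on total (here, rest) score. The sketch below assumes that choice and uses simulated data with no true DIF, so the ratio should land near 1.0; nothing here is taken from the study itself.

```python
# A minimal sketch of one widely used statistic of the kind the abstract
# describes: the Mantel-Haenszel common odds ratio for a studied item,
# matching reference- and focal-group examinees on rest score. Simulated
# data, no true DIF; all values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def mantel_haenszel_or(item, total, group):
    """MH common odds ratio across strata of the matching variable.

    item  : 0/1 responses to the studied item
    total : matching score (here, rest score)
    group : 0 = reference group, 1 = focal group
    """
    num = den = 0.0
    for k in np.unique(total):
        m = total == k
        A = np.sum((item == 1) & (group == 0) & m)  # reference correct
        B = np.sum((item == 0) & (group == 0) & m)  # reference incorrect
        C = np.sum((item == 1) & (group == 1) & m)  # focal correct
        D = np.sum((item == 0) & (group == 1) & m)  # focal incorrect
        n = A + B + C + D
        if n == 0:
            continue
        num += A * D / n
        den += B * C / n
    return num / den if den > 0 else float("nan")

# Simulate two groups answering a 10-item Rasch test with no DIF.
n = 1000
group = rng.integers(0, 2, n)
theta = rng.normal(0.0, 1.0, n)
b = np.linspace(-1.5, 1.5, 10)
resp = (rng.random((n, 10)) < 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))).astype(int)

studied = resp[:, 0]
rest = resp[:, 1:].sum(axis=1)   # condition on rest score, excluding the studied item
print(f"MH odds ratio: {mantel_haenszel_or(studied, rest, group):.3f}")
```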
