Showing all 4 results
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
Sutton, Rosemary E. – Equity & Excellence in Education, 1997
Considers equity issues of high-stakes tests conducted by computer, including whether this new form of assessment actually helps level the playing field for students or represents a new cycle of assessment inequality. Two computer tests are assessed: Praxis I: Academic Skills Assessment and the computerized version of the Graduate Record…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Assessment, Educational Testing
Mislevy, Robert J.; Almond, Russell G. – 1997
This paper synthesizes ideas from the fields of graphical modeling and educational testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and for disentangling evidence from complex performances. IRT-CAT can offer…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Testing, Higher Education
Peer reviewed
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation