Showing all 4 results
Peer reviewed
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
Peer reviewed
Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.; Park, Ryoungsun – Educational and Psychological Measurement, 2012
This study compared various panel designs of the multistage test (MST) using mixed-format tests in the context of classification testing. Simulations varied the design of the first-stage module. The first stage was constructed according to three levels of test information functions (TIFs) with three different TIF centers. Additional computerized…
Descriptors: Test Format, Comparative Analysis, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Ponsoda, Vicente; And Others – Educational and Psychological Measurement, 1997
A study involving 209 Spanish high school students compared computer-based English vocabulary tests: (1) a self-adapted test (SAT); (2) a computerized adaptive test (CAT); (3) a conventional test; and (4) a test combining SAT and CAT. No statistically significant differences were found among test types for estimated ability or posttest anxiety.…
Descriptors: Ability, Adaptive Testing, Anxiety, Comparative Analysis
Peer reviewed
Styles, Irene; Andrich, David – Educational and Psychological Measurement, 1993
This paper describes the use of the Rasch model to help implement computerized administration of the standard and advanced forms of Raven's Progressive Matrices (RPM), to compare relative item difficulties, and to convert scores between the standard and advanced forms. The sample consisted of 95 girls and 95 boys in Australia. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education