Showing all 7 results
Meyer, J. Patrick; Hu, Ann; Li, Sylvia – NWEA, 2023
The Content Proximity Project was designed to improve the content validity of the MAP® Growth™ assessments while retaining the ability for the test to adapt off-grade and meet students wherever they are in their learning. Two main features of the project were the development of an enhanced item selection algorithm and a spring pilot study…
Descriptors: Achievement Tests, Mathematics Achievement, Content Validity, Mathematics Tests
Peer reviewed
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity beyond educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Peer reviewed
Shapiro, Alexander – Psychometrika, 1982
Minimum trace factor analysis has been used to find the greatest lower bound to reliability. This technique, however, fails to be scale free. A solution to the scale problem is proposed through the maximization of the greatest lower bound as the function of weights. (Author/JKS)
Descriptors: Algorithms, Estimation (Mathematics), Factor Analysis, Psychometrics
van der Linden, Wim J.; Boekkooi-Timminga, Ellen – 1986
In order to estimate the classical coefficient of test reliability, parallel measurements are needed. H. Gulliksen's matched random subtests method, which is a graphical method for splitting a test into parallel test halves, has practical relevance because it maximizes the alpha coefficient as a lower bound of the classical test reliability…
Descriptors: Algorithms, Computer Assisted Testing, Computer Software, Difficulty Level
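The alpha coefficient that the entry above treats as a lower bound to classical test reliability is Cronbach's alpha, α = (k/(k−1))·(1 − Σσ²ᵢ/σ²ₓ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₓ the variance of total scores. A minimal sketch (the data matrix and function names are illustrative, not from the cited work):

```python
# Cronbach's alpha for a score matrix: one row of item scores per examinee.
# Toy illustration of the coefficient described in the abstract above,
# not the authors' matched random subtests method.

def variance(xs):
    """Sample variance (n - 1 in the denominator)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(scores):
    """scores: list of rows; each row holds one examinee's item scores."""
    k = len(scores[0])                                   # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])   # total-score variance
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 responses: 5 examinees x 4 items.
scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
alpha = cronbach_alpha(scores)  # 0.8 for this matrix
```

Splitting a test into parallel halves, as in Gulliksen's method, amounts to choosing the split that makes this quantity as large as possible.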
Peer reviewed
Armstrong, Ronald D.; And Others – Journal of Educational Statistics, 1994
A network-flow model is formulated for constructing parallel tests based on classical test theory while using test reliability as the criterion. Practitioners can specify a test-difficulty distribution for values of item difficulties as well as test-composition requirements. An empirical study illustrates the reliability of generated tests. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
Roid, Gale; Finn, Patrick – 1978
The feasibility of generating multiple-choice test questions by transforming sentences from prose instructional materials was examined. A computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were then transformed into multiple-choice items by four writers who…
Descriptors: Algorithms, Criterion Referenced Tests, Difficulty Level, Form Classes (Languages)
Eignor, Daniel R.; And Others – 1993
The extensive computer simulation work done in developing the computer adaptive versions of the Graduate Record Examinations (GRE) Board General Test and the College Board Admissions Testing Program (ATP) Scholastic Aptitude Test (SAT) is described in this report. Both the GRE General and SAT computer adaptive tests (CATs), which are fixed length…
Descriptors: Adaptive Testing, Algorithms, Case Studies, College Entrance Examinations