Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 11 |
Descriptor
Classification | 15 |
Computer Assisted Testing | 15 |
Test Length | 15 |
Adaptive Testing | 12 |
Cutting Scores | 6 |
Test Items | 6 |
Accuracy | 4 |
Comparative Analysis | 4 |
Higher Education | 4 |
Item Response Theory | 4 |
Probability | 4 |
Source
Applied Psychological Measurement | 3 |
Educational and Psychological Measurement | 3 |
ProQuest LLC | 2 |
Educational Research and Evaluation | 1 |
Grantee Submission | 1 |
Measurement: Interdisciplinary Research and Perspectives | 1 |
Author
Liu, Chen-Wei | 2 |
Wang, Wen-Chung | 2 |
Bao, Yu | 1 |
Batinic, Bernad | 1 |
Bradshaw, Laine | 1 |
Cheng, Ying | 1 |
David J. Weiss | 1 |
Diao, Qi | 1 |
Eggen, Theo J. H. M. | 1 |
Finkelman, Matthew David | 1 |
Frick, Theodore W. | 1 |
Publication Type
Reports - Research | 9 |
Journal Articles | 8 |
Reports - Evaluative | 4 |
Dissertations/Theses -… | 2 |
Speeches/Meeting Papers | 2 |
Numerical/Quantitative Data | 1 |
Education Level
Elementary Education | 1 |
Audience
Researchers | 1 |
Jing Ma – ProQuest LLC, 2024
This study investigated the impact of scoring polytomous items later on measurement precision, classification accuracy, and test security in mixed-format adaptive testing. Utilizing the shadow test approach, a simulation study was conducted across various test designs, test lengths, and numbers and locations of polytomous items. Results showed that while…
Descriptors: Scoring, Adaptive Testing, Test Items, Classification
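The shadow-test approach referenced in the Ma (2024) entry above repeatedly assembles a full-length test that satisfies every content constraint and maximizes information at the current ability estimate, then administers a single item from that assembly. The sketch below is a deliberately simplified greedy stand-in for one such step (the published method solves a mixed-integer program); the 2PL information function, constraint names, and data layout are illustrative assumptions rather than details of the study.

import math

def info_2pl(a, b, theta):
    # Fisher information of a dichotomous 2PL item at ability theta
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item_shadow(pool, administered, theta, test_length, area_caps):
    # pool: item_id -> (a, b, content_area); administered: set of item_ids already given
    # area_caps: content_area -> maximum number of items allowed on the full test
    shadow = set(administered)
    counts = {}
    for i in shadow:
        counts[pool[i][2]] = counts.get(pool[i][2], 0) + 1
    # Fill the remaining slots with the most informative eligible items
    # (a greedy stand-in for the optimal shadow-test assembly).
    for i in sorted((j for j in pool if j not in shadow),
                    key=lambda j: info_2pl(pool[j][0], pool[j][1], theta), reverse=True):
        if len(shadow) >= test_length:
            break
        area = pool[i][2]
        if counts.get(area, 0) < area_caps.get(area, test_length):
            shadow.add(i)
            counts[area] = counts.get(area, 0) + 1
    # Administer the most informative not-yet-administered item in the shadow test.
    free = [i for i in shadow if i not in administered]
    return max(free, key=lambda i: info_2pl(pool[i][0], pool[i][1], theta)) if free else None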
Bao, Yu; Bradshaw, Laine – Measurement: Interdisciplinary Research and Perspectives, 2018
Diagnostic classification models (DCMs) can provide multidimensional diagnostic feedback about students' mastery levels of knowledge components or attributes. One advantage of using DCMs is the ability to accurately and reliably classify students into mastery levels with a relatively small number of items per attribute. Combining DCMs with…
Descriptors: Test Items, Selection, Adaptive Testing, Computer Assisted Testing
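As a concrete illustration of the mastery classification that diagnostic classification models provide, the sketch below performs a Bayes update over all 2^K attribute profiles after one response under a DINA-type model. The Q-matrix row, slip/guess values, and two-attribute setup are hypothetical and are not taken from the article.

from itertools import product

def dina_prob(profile, q_row, slip, guess):
    # P(correct) is (1 - slip) if the profile masters every required attribute, else guess.
    mastered = all(p >= q for p, q in zip(profile, q_row))
    return (1.0 - slip) if mastered else guess

def update_posterior(posterior, response, q_row, slip, guess):
    # One Bayes step over all attribute-mastery profiles.
    new = {}
    for profile, prior in posterior.items():
        p = dina_prob(profile, q_row, slip, guess)
        new[profile] = prior * (p if response == 1 else 1.0 - p)
    total = sum(new.values())
    return {k: v / total for k, v in new.items()}

# Hypothetical two-attribute example: uniform prior over the four profiles,
# one correct response to an item requiring attribute 1 only (Q-row (1, 0)).
posterior = {p: 0.25 for p in product((0, 1), repeat=2)}
posterior = update_posterior(posterior, 1, (1, 0), slip=0.10, guess=0.20)
print(max(posterior, key=posterior.get), posterior)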
Mark L. Davison; David J. Weiss; Ozge Ersan; Joseph N. DeWeese; Gina Biancarosa; Patrick C. Kennedy – Grantee Submission, 2021
MOCCA is an online assessment of inferential reading comprehension for students in 3rd through 6th grades. It can be used to identify good readers and, for struggling readers, identify those who overly rely on either a Paraphrasing process or an Elaborating process when their comprehension is incorrect. Here a propensity to over-rely on…
Descriptors: Reading Tests, Computer Assisted Testing, Reading Comprehension, Elementary School Students
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi – Applied Psychological Measurement, 2013
Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Length, Ability
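The standard-error termination rule that Patton et al. analyze can be stated compactly: under a 2PL model the test information at theta is the sum of the item informations, the standard error of the ability estimate is the reciprocal square root of that sum, and the CAT stops once the standard error drops below a target. A minimal sketch with made-up item parameters and an illustrative 0.30 target:

import math

def p_2pl(a, b, theta):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def standard_error(items, theta):
    # SE of the ability estimate = 1 / sqrt(test information) under the 2PL model.
    info = sum(a * a * p_2pl(a, b, theta) * (1.0 - p_2pl(a, b, theta)) for a, b in items)
    return 1.0 / math.sqrt(info)

administered = [(1.2, -0.5), (0.9, 0.3), (1.5, 0.1)]   # (a, b) pairs, illustrative
theta_hat = 0.2
print("stop" if standard_error(administered, theta_hat) < 0.30 else "administer another item")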
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
Eggen, Theo J. H. M. – Educational Research and Evaluation, 2011
If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
Descriptors: Test Length, Adaptive Testing, Classification, Item Analysis
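Wald's SPRT, as used in the computerized classification tests Eggen discusses, compares the likelihood of the observed responses at two ability values placed on either side of the cut score and continues testing until the log-likelihood ratio crosses a bound derived from the nominal error rates. A minimal Rasch-model sketch; the cut score, indifference band, and error rates are illustrative choices, not values from the article.

import math

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_decision(responses, difficulties, cut, delta=0.5, alpha=0.05, beta=0.05):
    # Classify a dichotomous response pattern as master / nonmaster, or continue testing.
    theta0, theta1 = cut - delta, cut + delta          # indifference region around the cut
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p1, p0 = rasch_p(theta1, b), rasch_p(theta0, b)
        llr += math.log(p1 / p0) if x == 1 else math.log((1.0 - p1) / (1.0 - p0))
    if llr >= math.log((1.0 - beta) / alpha):          # accept theta1: master
        return "master"
    if llr <= math.log(beta / (1.0 - alpha)):          # accept theta0: nonmaster
        return "nonmaster"
    return "continue"

print(sprt_decision([1, 1, 0, 1, 1], [-0.2, 0.1, 0.4, 0.0, 0.3], cut=0.0))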
Gnambs, Timo; Batinic, Bernad – Educational and Psychological Measurement, 2011
Computer-adaptive classification tests focus on classifying respondents in different proficiency groups (e.g., for pass/fail decisions). To date, adaptive classification testing has been dominated by research on dichotomous response formats and classifications in two groups. This article extends this line of research to polytomous classification…
Descriptors: Test Length, Computer Assisted Testing, Classification, Test Items
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
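For reference, the unfolding response function these item selection methods operate on is usually written as below (with z = 0, ..., C the observed category, M = 2C + 1, and \tau_{i0} = 0); the ability confidence interval method then classifies an examinee once the interval \hat{\theta} \pm z_{1-\alpha/2}\,SE(\hat{\theta}) falls entirely on one side of the cut score.

P(Z_i = z \mid \theta) =
  \frac{\exp\{\alpha_i[z(\theta - \delta_i) - \sum_{k=0}^{z}\tau_{ik}]\}
        + \exp\{\alpha_i[(M - z)(\theta - \delta_i) - \sum_{k=0}^{z}\tau_{ik}]\}}
       {\sum_{w=0}^{C}\big(\exp\{\alpha_i[w(\theta - \delta_i) - \sum_{k=0}^{w}\tau_{ik}]\}
        + \exp\{\alpha_i[(M - w)(\theta - \delta_i) - \sum_{k=0}^{w}\tau_{ik}]\}\big)}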
Kim, Jiseon – ProQuest LLC, 2010
Classification testing has been widely used to make categorical decisions by determining whether an examinee has a certain degree of ability required by established standards. As computer technologies have developed, classification testing has become more computerized. Several approaches have been proposed and investigated in the context of…
Descriptors: Test Length, Computer Assisted Testing, Classification, Probability
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
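Stochastic curtailment, as described in this abstract, halts a variable-length mastery test once the eventual classification is essentially determined by the responses already observed. A minimal Monte Carlo sketch under a Rasch model; the item difficulties, provisional score, and 0.95/0.05 thresholds are illustrative assumptions.

import math, random

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def prob_final_pass(score_so_far, remaining_bs, theta_hat, pass_score, sims=5000):
    # Estimate P(final number-correct score >= pass_score) by simulating the remaining items.
    hits = 0
    for _ in range(sims):
        total = score_so_far + sum(random.random() < rasch_p(theta_hat, b) for b in remaining_bs)
        hits += total >= pass_score
    return hits / sims

p = prob_final_pass(score_so_far=14, remaining_bs=[0.0, 0.2, -0.3, 0.5],
                    theta_hat=0.8, pass_score=15)
# Curtail when the pass/fail outcome is all but decided in either direction.
print("curtail, P(pass) = %.3f" % p if p >= 0.95 or p <= 0.05 else "continue, P(pass) = %.3f" % p)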
Weissman, Alexander – Educational and Psychological Measurement, 2007
A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered with using other local-…
Descriptors: Scientific Concepts, Probability, Test Length, Item Analysis
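The mutual-information criterion Weissman describes can be computed directly from a discrete posterior over theta and a partition of the theta scale into classification categories: form the joint distribution of (item response, category) and take its Kullback-Leibler distance from the product of the margins. A small numerical sketch for a dichotomous 2PL item with two categories; the grid, flat posterior, cut score, and item parameters are illustrative.

import math

def p_2pl(a, b, theta):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mutual_information(post, grid, cut, a, b):
    # MI between a candidate item's response X in {0, 1} and category C = 1{theta >= cut}.
    joint = {(x, c): 0.0 for x in (0, 1) for c in (0, 1)}
    for w, theta in zip(post, grid):
        p, c = p_2pl(a, b, theta), int(theta >= cut)
        joint[(1, c)] += w * p
        joint[(0, c)] += w * (1.0 - p)
    px = {x: joint[(x, 0)] + joint[(x, 1)] for x in (0, 1)}
    pc = {c: joint[(0, c)] + joint[(1, c)] for c in (0, 1)}
    return sum(j * math.log(j / (px[x] * pc[c]))
               for (x, c), j in joint.items() if j > 0)

grid = [-2.0 + 0.1 * i for i in range(41)]          # theta grid
post = [1.0 / len(grid)] * len(grid)                # flat posterior, for illustration only
print(mutual_information(post, grid, cut=0.0, a=1.3, b=0.1))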
Vos, Hans J. – 1997
The purpose of this paper is to derive optimal rules for variable-length mastery tests for the case in which three mastery classification decisions (nonmastery, partial mastery, and mastery) are distinguished. In a variable-length or adaptive mastery test, the decision is to classify a subject as a master, a partial master, or a nonmaster, or to continue sampling…
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, Concept Formation
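One common way to operationalize the three-way decision Vos derives rules for is Bayesian expected-loss minimization: given posterior probabilities of the three true states and a loss for each (decision, state) pair, take the terminal decision with the smallest expected loss, or continue sampling when the extra information is worth its cost. The posterior, loss table, and per-item cost below are purely illustrative, and the continuation check is a simplified stand-in for the full backward-induction rule.

# Posterior over true states after the items seen so far (illustrative numbers).
posterior = {"nonmastery": 0.15, "partial": 0.55, "mastery": 0.30}

# Loss for each (decision, true state): zero on the diagonal, larger for worse errors.
loss = {
    "declare_nonmaster": {"nonmastery": 0, "partial": 2, "mastery": 4},
    "declare_partial":   {"nonmastery": 1, "partial": 0, "mastery": 1},
    "declare_master":    {"nonmastery": 4, "partial": 2, "mastery": 0},
}
cost_per_item = 0.4   # cost charged for administering one more item (illustrative)

expected = {d: sum(posterior[s] * l for s, l in table.items()) for d, table in loss.items()}
best_decision, best_loss = min(expected.items(), key=lambda kv: kv[1])

# Continue only if every terminal decision costs more than another item would.
print("continue sampling" if best_loss > cost_per_item else best_decision)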
Reckase, Mark D. – 1981
This report describes a study comparing the classification results obtained from a one-parameter and three-parameter logistic based tailored testing procedure used in conjunction with Wald's sequential probability ratio test (SPRT). Eighty-eight college students were classified into four grade categories using achievement test results obtained…
Descriptors: Adaptive Testing, Classification, Comparative Analysis, Computer Assisted Testing
Reshetar, Rosemary A.; And Others – 1992
This study examined the performance of a simulated computerized adaptive test that was designed to help direct the development of a medical recertification examination. The item pool consisted of 229 single-best-answer items calibrated with the two-parameter logistic model on a random sample of 3,000 examinees. Examinees' responses were known. For…
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, Computer Simulation
Frick, Theodore W. – 1986
The sequential probability ratio test (SPRT), developed by Abraham Wald, is one statistical model available for making mastery decisions during computer-based criterion referenced tests. The predictive validity of the SPRT was empirically investigated with two different and relatively large item pools with heterogeneous item parameters. Graduate…
Descriptors: Achievement Tests, Adaptive Testing, Classification, Comparative Analysis