Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 3
  Since 2016 (last 10 years): 4
  Since 2006 (last 20 years): 11
Descriptor
  Classification: 14
  Monte Carlo Methods: 14
  Test Items: 14
  Accuracy: 7
  Item Response Theory: 7
  Simulation: 5
  Bayesian Statistics: 4
  Comparative Analysis: 4
  Computer Assisted Testing: 4
  Diagnostic Tests: 4
  Test Construction: 4
Source
  Applied Psychological…: 2
  Educational and Psychological…: 2
  Practical Assessment,…: 2
  Applied Measurement in…: 1
  Grantee Submission: 1
  Interactive Learning…: 1
  Journal of Educational…: 1
  ProQuest LLC: 1
Author
  Allan S. Cohen: 1
  Batinic, Bernad: 1
  Bilan Liang: 1
  Cohen, Allan S.: 1
  David J. Weiss: 1
  DeCarlo, Lawrence T.: 1
  Douglas, Jeff: 1
  Gina Biancarosa: 1
  Gnambs, Timo: 1
  Hambleton, Ronald K.: 1
  Han, Kyung T.: 1
Publication Type
  Journal Articles: 9
  Reports - Research: 9
  Reports - Evaluative: 4
  Speeches/Meeting Papers: 2
  Dissertations/Theses -…: 1
  Numerical/Quantitative Data: 1
Education Level
  Elementary Education: 1
  High Schools: 1
  Junior High Schools: 1
  Middle Schools: 1
  Secondary Education: 1
Location
  China (Shanghai): 1
Assessments and Surveys
  National Assessment of…: 1
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
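For reference, the first four indices named above are simple functions of the maximized log-likelihood, the number of free parameters, and the sample size. The sketch below (generic Python with illustrative argument names, not anything taken from the article) shows their standard textbook forms; in a latent class comparison, the candidate model with the smallest value of a given index would be selected.

    import math

    def information_criteria(log_lik, n_params, n_obs):
        """Standard fit indices; smaller values indicate a better-fitting model."""
        aic = -2 * log_lik + 2 * n_params
        # AICc adds a small-sample correction to the AIC penalty
        aicc = aic + (2 * n_params * (n_params + 1)) / (n_obs - n_params - 1)
        bic = -2 * log_lik + n_params * math.log(n_obs)
        # CAIC penalizes each parameter by log(n) + 1 instead of log(n)
        caic = -2 * log_lik + n_params * (math.log(n_obs) + 1)
        return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}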
Wanxue Zhang; Lingling Meng; Bilan Liang – Interactive Learning Environments, 2023
With the continuous development of education, personalized learning has attracted great attention. How to evaluate students' learning effects has become increasingly important. In information technology courses, the traditional academic evaluation focuses on the student's learning outcomes, such as "scores" or "right/wrong,"…
Descriptors: Information Technology, Computer Science Education, High School Students, Scoring
Mark L. Davison; David J. Weiss; Ozge Ersan; Joseph N. DeWeese; Gina Biancarosa; Patrick C. Kennedy – Grantee Submission, 2021
MOCCA is an online assessment of inferential reading comprehension for students in 3rd through 6th grades. It can be used to identify good readers and, for struggling readers, identify those who overly rely on either a Paraphrasing process or an Elaborating process when their comprehension is incorrect. Here a propensity to over-rely on…
Descriptors: Reading Tests, Computer Assisted Testing, Reading Comprehension, Elementary School Students
Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K. – Practical Assessment, Research & Evaluation, 2015
In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…
Descriptors: Item Response Theory, Monte Carlo Methods, Scaling, Test Items
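As background for the abstract above, here is a minimal sketch of the three-parameter logistic (3PL) item response function and the usual linear scale transformation; the lower asymptote c is simply carried over unchanged, which is exactly the behavior the study scrutinizes. Function and variable names are illustrative.

    import math

    def p_3pl(theta, a, b, c, D=1.7):
        """3PL probability of a correct response at ability theta."""
        return c + (1 - c) / (1 + math.exp(-D * a * (theta - b)))

    def rescale_3pl_item(a, b, c, A, B):
        """Linear scaling theta* = A*theta + B: discrimination and difficulty
        are transformed, but the c-parameter is left as-is."""
        return a / A, A * b + B, c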
Koziol, Natalie A. – Applied Measurement in Education, 2016
Testlets, or groups of related items, are commonly included in educational assessments due to their many logistical and conceptual advantages. Despite their advantages, testlets introduce complications into the theory and practice of educational measurement. Responses to items within a testlet tend to be correlated even after controlling for…
Descriptors: Classification, Accuracy, Comparative Analysis, Models
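One common way to represent the within-testlet dependence described here is to add an examinee-by-testlet random effect to a standard IRT model. A minimal sketch of that generic testlet-model idea follows; it is offered as context, not necessarily the exact formulation compared in the article.

    import math

    def p_testlet_2pl(theta, a, b, gamma):
        """2PL with a testlet effect: gamma is the examinee's random effect for
        the testlet containing this item, inducing correlation among responses
        to items in the same testlet even after conditioning on theta."""
        return 1 / (1 + math.exp(-a * (theta - b - gamma)))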
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
Md Desa, Zairul Nor Deana – ProQuest LLC, 2012
In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, the multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores, to obtain subscores reliability, and subscores classification. Both the compensatory and partially compensatory MIRT…
Descriptors: Item Response Theory, Computation, Reliability, Classification
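In a bi-factor compensatory MIRT formulation of the kind described above, each item loads on one general factor plus one specific (subscore) factor. A minimal sketch, with illustrative names and a single specific dimension per item:

    import math

    def p_bifactor(theta_g, theta_s, a_g, a_s, d):
        """Compensatory bi-factor 2PL: the general ability theta_g and one
        specific ability theta_s contribute additively to the logit."""
        return 1 / (1 + math.exp(-(a_g * theta_g + a_s * theta_s + d)))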
Gnambs, Timo; Batinic, Bernad – Educational and Psychological Measurement, 2011
Computer-adaptive classification tests focus on classifying respondents in different proficiency groups (e.g., for pass/fail decisions). To date, adaptive classification testing has been dominated by research on dichotomous response formats and classifications in two groups. This article extends this line of research to polytomous classification…
Descriptors: Test Length, Computer Assisted Testing, Classification, Test Items
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
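Computerized classification testing is often implemented with a sequential likelihood-ratio termination rule; purely as an illustration of that general idea (not a claim about what the truncated sentence above goes on to say), here is a minimal SPRT-style pass/fail decision function:

    import math

    def sprt_decision(log_lr, alpha=0.05, beta=0.05):
        """Wald SPRT bounds for a pass/fail classification.
        log_lr: accumulated log-likelihood ratio of 'ability above the cut'
        versus 'ability below the cut' over the items administered so far."""
        upper = math.log((1 - beta) / alpha)
        lower = math.log(beta / (1 - alpha))
        if log_lr >= upper:
            return "pass"
        if log_lr <= lower:
            return "fail"
        return "continue"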
DeCarlo, Lawrence T. – Applied Psychological Measurement, 2011
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
Descriptors: Bayesian Statistics, Computation, Cognitive Tests, Diagnostic Tests
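The DINA model named in this abstract has a simple closed form: an examinee answers correctly with probability 1 - s_j (one minus the slip parameter) if they possess every attribute the item requires, and with probability g_j (the guess parameter) otherwise. A minimal sketch under those standard definitions is below; the logistic version referred to in the abstract amounts to writing the same two probabilities on the logit scale as a function of the latent indicator eta.

    def p_dina(alpha, q_row, slip, guess):
        """DINA model probability of a correct response.
        alpha: examinee attribute mastery vector (0/1 per attribute),
        q_row: Q-matrix row marking which attributes the item requires."""
        eta = all(a >= q for a, q in zip(alpha, q_row))  # masters every required attribute
        return 1 - slip if eta else guess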
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory
Lau, C. Allen; Wang, Tianyou – 2000
This paper proposes a new Information-Time index as the basis for item selection in computerized classification testing (CCT) and investigates how this new item selection algorithm can help improve test efficiency for item pools with mixed item types. It also investigates how practical constraints such as item exposure rate control, test…
Descriptors: Algorithms, Classification, Computer Assisted Testing, Elementary Secondary Education
Kim, Seock-Ho; Cohen, Allan S. – 1997
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) were investigated using Monte Carlo simulations. The graded response model with five ordered categories was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. All DIF comparisons were simulated by…
Descriptors: Ability, Classification, Computer Simulation, Estimation (Mathematics)
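A Type I error study of this kind boils down to generating many data sets under the null hypothesis of no DIF, applying the likelihood ratio test to each, and counting rejections. A minimal sketch of that counting logic follows; simulate_null_data and lr_test_p_value are placeholders for the data-generation and test routines, not functions from the article.

    def estimate_type1_error(simulate_null_data, lr_test_p_value, n_reps=1000, alpha=0.05):
        """Monte Carlo Type I error rate: the proportion of no-DIF replications
        in which the likelihood ratio test nonetheless flags DIF."""
        rejections = 0
        for _ in range(n_reps):
            data = simulate_null_data()          # reference and focal groups, no DIF
            if lr_test_p_value(data) < alpha:
                rejections += 1
        return rejections / n_reps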
Papa, Frank J.; Schumacker, Randall E. – 1995
Measures of the robustness of disease class-specific diagnostic concepts could play a central role in training programs designed to assure the development of diagnostic competence. In the pilot study, the authors used disease/sign-symptom conditional probability estimates, Monte Carlo procedures, and artificial intelligence (AI) tools to create…
Descriptors: Adaptive Testing, Artificial Intelligence, Classification, Clinical Diagnosis
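The abstract mentions combining disease/sign-symptom conditional probability estimates; one standard way to do that (offered here only as an illustration of the general idea, not as the authors' procedure) is a naive-Bayes style posterior over disease classes:

    def disease_posterior(priors, cond_probs, findings):
        """Posterior over disease classes given observed signs/symptoms,
        assuming findings are conditionally independent within each disease.
        priors[d] = P(disease d); cond_probs[d][s] = P(sign s present | d);
        findings[s] = True/False for observed presence/absence."""
        unnormalized = {}
        for d, prior in priors.items():
            likelihood = 1.0
            for s, present in findings.items():
                p = cond_probs[d][s]
                likelihood *= p if present else (1 - p)
            unnormalized[d] = prior * likelihood
        total = sum(unnormalized.values())
        return {d: v / total for d, v in unnormalized.items()}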