Franz Classe; Christoph Kern – Educational and Psychological Measurement, 2024
We develop a "latent variable forest" (LV Forest) algorithm for the estimation of latent variable scores with one or more latent variables. LV Forest estimates unbiased latent variable scores based on "confirmatory factor analysis" (CFA) models with ordinal and/or numerical response variables. Through parametric model…
Descriptors: Algorithms, Item Response Theory, Artificial Intelligence, Factor Analysis
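The LV Forest algorithm itself is not reproduced here; as background, the sketch below shows conventional regression-method (Thurstone) factor scoring from a fitted CFA model, i.e., the standard way latent variable scores are computed from CFA parameters. All inputs are assumed to come from a prior CFA fit, and the names are illustrative.

```python
import numpy as np

def regression_factor_scores(x, Lambda, Theta, Phi=None):
    """Thurstone regression-method factor scores from a fitted CFA.

    x      : (n_obs, n_items) centered responses
    Lambda : (n_items, n_factors) loading matrix
    Theta  : (n_items,) unique (residual) variances
    Phi    : (n_factors, n_factors) factor covariance; identity if None
    """
    if Phi is None:
        Phi = np.eye(Lambda.shape[1])
    # Model-implied covariance of the observed items.
    Sigma = Lambda @ Phi @ Lambda.T + np.diag(Theta)
    # Regression scores: F_hat = x * Sigma^{-1} * Lambda * Phi
    return x @ np.linalg.solve(Sigma, Lambda @ Phi)
```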
Kim, Hyung Jin; Lee, Won-Chan – Journal of Educational Measurement, 2022
Orlando and Thissen (2000) introduced the S-X² item-fit index for testing goodness-of-fit with dichotomous item response theory (IRT) models. This study considers and evaluates an alternative approach for computing S-X² values and other factors associated with collapsing tables of observed…
Descriptors: Goodness of Fit, Test Items, Item Response Theory, Computation
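For reference, below is a minimal sketch of the standard S-X² computation for a single item, assuming the observed and model-expected proportions correct at each summed-score level have already been obtained (in practice via the Orlando-Thissen recursion) and that sparse score levels have already been collapsed; the variable names are illustrative.

```python
import numpy as np

def s_x2(N, O, E):
    """Orlando-Thissen S-X2 statistic for one item.

    N : (n_levels,) examinee counts at each summed-score level
    O : (n_levels,) observed proportions correct per level
    E : (n_levels,) model-expected proportions correct per level
    """
    # Pearson-type sum over score levels; compared to a chi-square reference.
    return np.sum(N * (O - E) ** 2 / (E * (1.0 - E)))
```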
Fatih Orcan – International Journal of Assessment Tools in Education, 2023
Among the available coefficients, Cronbach's alpha and McDonald's omega are the most commonly used for reliability estimation. Alpha uses inter-item correlations, while omega is based on a factor analysis result. This study uses simulated ordinal data sets to test whether alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
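A minimal sketch of the two estimators being compared, assuming a one-factor model for omega; the `loadings` and `uniquenesses` arrays would come from a factor analysis of the same items.

```python
import numpy as np

def cronbach_alpha(X):
    """Alpha from a (n_persons, n_items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

def mcdonald_omega(loadings, uniquenesses):
    """One-factor omega from standardized loadings and unique variances."""
    common = loadings.sum() ** 2
    return common / (common + uniquenesses.sum())
```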
Emily A. Brown – ProQuest LLC, 2024
Previous research has been limited regarding the measurement of computational thinking, particularly as a learning progression in K-12. This study proposes to apply a multidimensional item response theory (IRT) model to a newly developed measure of computational thinking, utilizing both selected-response and open-ended polytomous items, to establish…
Descriptors: Models, Computation, Thinking Skills, Item Response Theory
Diaz, Emily; Brooks, Gordon; Johanson, George – International Journal of Assessment Tools in Education, 2021
This Monte Carlo study assessed Type I error in differential item functioning analyses using Lord's chi-square (LC), the likelihood ratio test (LRT), and the Mantel-Haenszel (MH) procedure. Two research interests were investigated: item response theory (IRT) model specification in LC and the LRT, and the continuity correction in the MH procedure. This study…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Comparative Analysis
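As a reference for the MH procedure and the continuity correction the study manipulates, here is a sketch of the Mantel-Haenszel chi-square built from one 2x2 table per matched score level, in the spirit of Holland and Thayer (1988); the array names are illustrative.

```python
import numpy as np

def mh_chi_square(a, nR, nF, m1, correction=True):
    """Mantel-Haenszel chi-square across K matched score levels.

    a  : (K,) correct counts in the reference group
    nR : (K,) reference-group sizes;  nF : (K,) focal-group sizes
    m1 : (K,) total correct counts (both groups combined)
    """
    T = nR + nF                      # total examinees per level
    m0 = T - m1                      # total incorrect per level
    expected = nR * m1 / T
    variance = nR * nF * m1 * m0 / (T ** 2 * (T - 1))
    num = abs(a.sum() - expected.sum())
    if correction:
        num -= 0.5                   # the continuity correction under study
    return num ** 2 / variance.sum()
```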
Köhler, Carmen; Robitzsch, Alexander; Hartig, Johannes – Journal of Educational and Behavioral Statistics, 2020
Testing whether items fit the assumptions of an item response theory model is an important step in evaluating a test. In the literature, numerous item fit statistics exist, many of which show severe limitations. The current study investigates the root mean squared deviation (RMSD) item fit statistic, which is used for evaluating item fit in…
Descriptors: Test Items, Goodness of Fit, Statistics, Bias
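A sketch of the RMSD statistic under its common operational definition in large-scale assessments: the density-weighted root mean squared difference between a (pseudo-)observed item characteristic curve and the model-implied curve over a grid of ability values; all inputs are assumed precomputed.

```python
import numpy as np

def rmsd_item_fit(p_obs, p_model, weights):
    """RMSD item fit over Q quadrature points.

    p_obs   : (Q,) observed proportions correct at each theta node
    p_model : (Q,) model-implied probabilities at each node
    weights : (Q,) density weights for the nodes (summing to 1)
    """
    return np.sqrt(np.sum(weights * (p_obs - p_model) ** 2))
```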
Torre, Jimmy de la; Akbay, Lokman – Eurasian Journal of Educational Research, 2019
Purpose: Well-designed assessment methodologies and various cognitive diagnosis models (CDMs) have been developed to extract diagnostic information about examinees' individual strengths and weaknesses. Owing to this novelty, as well as educational specialists' lack of familiarity with CDMs, their applications are not widespread. This article aims at…
Descriptors: Cognitive Measurement, Models, Computer Software, Testing
Gorgun, Guher; Bulut, Okan – Educational and Psychological Measurement, 2021
In low-stakes assessments, some students may not reach the end of the test and leave some items unanswered for various reasons (e.g., lack of test-taking motivation, poor time management, and test speededness). Not-reached items are often treated as incorrect or not-administered in the scoring process. However, when the proportion of…
Descriptors: Scoring, Test Items, Response Style (Tests), Mathematics Tests
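A toy illustration of the two common treatments the abstract contrasts, with `np.nan` marking not-reached items; the data are invented for the example.

```python
import numpy as np

responses = np.array([1, 0, 1, 1, np.nan, np.nan])  # last two not reached

as_incorrect = np.nan_to_num(responses, nan=0.0)       # scored as wrong
as_not_administered = responses[~np.isnan(responses)]  # simply dropped

print(as_incorrect.mean())          # proportion correct over all 6 items
print(as_not_administered.mean())   # proportion correct over 4 reached items
```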
Dai, Ting; Du, Yang; Cromley, Jennifer G.; Fechter, Tia M.; Nelson, Frank – AERA Online Paper Repository, 2019
Certain planned-missing designs (e.g., simple matrix sampling) cause zero covariances between variables not jointly observed, making it impossible to go beyond mean estimation without specialized methods. We tested a multigroup confirmatory factor analysis (CFA) approach by Cudeck (2000), which obtains a model-estimated…
Descriptors: Factor Analysis, Educational Research, Research Design, Data Analysis
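A toy demonstration of the underlying problem: under simple matrix sampling, two variables that never appear in the same booklet have no jointly observed cases, so their covariance cannot be estimated from the data. The setup below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))
# Booklet A administers only variable 0; booklet B only variable 1.
data[:50, 1] = np.nan
data[50:, 0] = np.nan

jointly_observed = (~np.isnan(data[:, 0]) & ~np.isnan(data[:, 1])).sum()
print(jointly_observed)  # 0 -> the pairwise covariance is inestimable
```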
DiStefano, Christine; McDaniel, Heather L.; Zhang, Liyun; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2019
A simulation study was conducted to investigate the model size effect when confirmatory factor analysis (CFA) models include many ordinal items. CFA models including between 15 and 120 ordinal items were analyzed with mean- and variance-adjusted weighted least squares to determine how varying sample size, number of ordered categories, and…
Descriptors: Factor Analysis, Effect Size, Data, Sample Size
Sinharay, Sandip – Grantee Submission, 2018
Tatsuoka (1984) suggested several extended caution indices and their standardized versions that have been used as person-fit statistics by researchers such as Drasgow, Levine, and McLaughlin (1987), Glas and Meijer (2003), and Molenaar and Hoijtink (1990). However, these indices are only defined for tests with dichotomous items. This paper extends…
Descriptors: Test Format, Goodness of Fit, Item Response Theory, Error Patterns
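The extended caution indices themselves follow a different formula; as one concrete example from the person-fit literature the abstract cites, here is a sketch of the standardized log-likelihood statistic l_z of Drasgow, Levine, and McLaughlin (1987) for dichotomous items.

```python
import numpy as np

def lz_statistic(u, p):
    """Standardized log-likelihood person-fit statistic l_z.

    u : (n_items,) 0/1 response vector for one examinee
    p : (n_items,) model probabilities of a correct response at theta-hat
    """
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))   # observed log-lik
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # its expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)     # its variance
    return (l0 - e) / np.sqrt(v)
```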
DeMars, Christine E. – Educational and Psychological Measurement, 2016
Partially compensatory models may capture the cognitive skills needed to answer test items more realistically than compensatory models, but estimating the model parameters may be a challenge. Data were simulated to follow two different partially compensatory models, a model with an interaction term and a product model. The model parameters were…
Descriptors: Item Response Theory, Models, Thinking Skills, Test Items
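To make the contrast concrete, below is a sketch of the two kinds of item response functions: a compensatory model, where high ability on one dimension can offset low ability on another, versus a product model in the spirit of Sympson (1978), where every skill is needed. The parameter names are illustrative, not the article's.

```python
import numpy as np

def compensatory_p(theta, a, d):
    """P(correct): abilities combine additively, so dimensions trade off."""
    return 1.0 / (1.0 + np.exp(-(theta @ a + d)))

def product_p(theta, a, b):
    """P(correct): product of per-dimension curves; no trade-off allowed."""
    terms = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.prod(terms)
```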
Ranger, Jochen; Kuhn, Jörg-Tobias – Journal of Educational and Behavioral Statistics, 2015
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, such as the proportional hazards model and the proportional odds model. At the core of the model is the assumption that an unspecified monotone…
Descriptors: Psychological Testing, Reaction Time, Statistical Analysis, Models
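A sketch of the model class under the simplest concrete assumption: the linear transformation model posits h(T) = -lambda * theta + error for an unspecified monotone h, and taking h = log with normal errors yields the familiar log-normal response-time model simulated below. Parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rt(theta, beta, lam, sigma):
    """Log-normal special case of the linear transformation model:
    log T = beta_j - lam * theta_i + sigma * eps, with eps ~ N(0, 1)."""
    mu = beta - lam * theta
    return np.exp(mu + sigma * rng.normal(size=np.shape(mu)))

# One person (theta = 0.5) on three items of increasing time intensity.
print(simulate_rt(0.5, np.array([3.0, 3.5, 4.0]), 1.0, 0.4))
```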
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Cai, Li – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
Lord and Wingersky's (1984) recursive algorithm for creating summed-score-based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature, because the recursions can be defined…
Descriptors: Mathematics, Scores, Item Response Theory, Computation
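For context, a sketch of the unidimensional Lord-Wingersky (1984) recursion that the passage extends: given each item's correct-response probability at a set of quadrature nodes, it builds the likelihood of every summed score one item at a time.

```python
import numpy as np

def summed_score_likelihoods(P):
    """Lord-Wingersky recursion for dichotomous items.

    P : (n_items, n_quad) correct-response probabilities at each node.
    Returns L with L[s, q] = Pr(summed score = s | theta at node q).
    """
    n_items, n_quad = P.shape
    L = np.zeros((n_items + 1, n_quad))
    L[0] = 1.0                       # zero items answered: score 0 for sure
    for i in range(n_items):
        new = np.zeros_like(L)
        for s in range(i + 2):       # scores 0..i+1 possible after item i
            new[s] = L[s] * (1.0 - P[i])
            if s > 0:
                new[s] += L[s - 1] * P[i]
        L = new
    return L
```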