Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 4
Since 2016 (last 10 years): 11
Since 2006 (last 20 years): 19
Descriptor
Computation: 20
Item Response Theory: 20
Statistical Inference: 20
Data Analysis: 6
Models: 6
Monte Carlo Methods: 6
Sampling: 6
Simulation: 6
Bayesian Statistics: 5
Error of Measurement: 5
Maximum Likelihood Statistics: 5
Author
Cai, Li: 3
Xu, Gongjun: 3
Wang, Chun: 2
Köhler, Carmen: 2
Ames, Allison: 1
Carstensen, Claus H.: 1
Cheng, Ying: 1
Cho, April E.: 1
Chung, Seungwon: 1
Dunson, David B.: 1
DeCarlo, Lawrence T.: 1
Publication Type
Journal Articles: 15
Reports - Research: 15
Reports - Evaluative: 3
Reports - Descriptive: 2
Numerical/Quantitative Data: 1
Education Level
Secondary Education: 4
High Schools: 2
Higher Education: 2
Junior High Schools: 2
Middle Schools: 2
Postsecondary Education: 2
Grade 8: 1
Grade 9: 1
Two Year Colleges: 1
Location
Germany: 1
North Carolina: 1
Assessments and Surveys
Program for International…: 2
Law School Admission Test: 1
National Assessment of…: 1
National Merit Scholarship…: 1
Preliminary Scholastic…: 1
Yuqi Gu; Elena A. Erosheva; Gongjun Xu; David B. Dunson – Grantee Submission, 2023
Mixed Membership Models (MMMs) are a popular family of latent structure models for complex multivariate data. Instead of forcing each subject to belong to a single cluster, MMMs incorporate a vector of subject-specific weights characterizing partial membership across clusters. With this flexibility come challenges in uniquely identifying,…
Descriptors: Multivariate Analysis, Item Response Theory, Bayesian Statistics, Models
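As background on the model family this entry discusses: in a Mixed Membership Model, each subject's membership vector is typically given a Dirichlet prior, so the subject-specific weights are nonnegative and sum to one. A minimal sketch of drawing such a vector (the three-cluster setup and concentration values are illustrative, not from the paper):

```python
import random

def dirichlet(alphas, rng):
    """Draw a membership-weight vector from a Dirichlet distribution
    by normalizing independent Gamma(alpha, 1) draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
# Illustrative: one subject's partial memberships across 3 clusters.
# Small concentrations (< 1) push weights toward the simplex corners,
# i.e., near-single-cluster membership.
pi = dirichlet([0.5, 0.5, 0.5], rng)
```

The Gamma-normalization construction is the standard way to sample from a Dirichlet without a dedicated library routine.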
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2022
Takers of educational tests often receive proficiency levels instead of or in addition to scaled scores. For example, proficiency levels are reported for the Advanced Placement (AP®) and U.S. Medical Licensing examinations. Technical difficulties and other unforeseen events occasionally lead to missing item scores and hence to incomplete data on…
Descriptors: Computation, Data Analysis, Educational Testing, Accuracy
Tianci Liu; Chun Wang; Gongjun Xu – Grantee Submission, 2022
Multidimensional Item Response Theory (MIRT) is widely used in educational and psychological assessment and evaluation. With the increasing size of modern assessment data, many existing estimation methods become computationally demanding and hence they are not scalable to big data, especially for the multidimensional three-parameter and…
Descriptors: Item Response Theory, Computation, Monte Carlo Methods, Algorithms
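For context on the three-parameter logistic (3PL) model that appears in this entry's estimation problem: the probability of a correct response is c + (1 − c)/(1 + exp(−a(θ − b))), with discrimination a, difficulty b, and guessing parameter c. A minimal sketch of the response function and a single examinee's log-likelihood, with illustrative item parameters (not the authors' scalable method):

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of one examinee's response pattern
    (1 = correct, 0 = incorrect) at ability theta."""
    ll = 0.0
    for (a, b, c), u in zip(items, responses):
        p = p_3pl(theta, a, b, c)
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

# Illustrative items: (discrimination a, difficulty b, guessing c)
items = [(1.2, -0.5, 0.2), (0.8, 0.0, 0.25), (1.5, 1.0, 0.2)]
```

Maximizing this log-likelihood over θ (and, jointly, over item parameters across many examinees) is the computation that becomes demanding at scale.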
Köhler, Carmen; Robitzsch, Alexander; Hartig, Johannes – Journal of Educational and Behavioral Statistics, 2020
Testing whether items fit the assumptions of an item response theory model is an important step in evaluating a test. In the literature, numerous item fit statistics exist, many of which show severe limitations. The current study investigates the root mean squared deviation (RMSD) item fit statistic, which is used for evaluating item fit in…
Descriptors: Test Items, Goodness of Fit, Statistics, Bias
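As background on the RMSD statistic under investigation: it is commonly computed as the square root of the weighted mean squared difference between observed and model-implied item response probabilities over a grid of ability values, with weights from the ability distribution. A minimal sketch with made-up values (not the study's data):

```python
import math

def rmsd_item_fit(weights, p_observed, p_model):
    """RMSD item fit on a discrete ability grid:
    sqrt( sum_k w_k * (P_obs_k - P_model_k)^2 ),
    where the weights w_k sum to 1."""
    return math.sqrt(sum(w * (po - pm) ** 2
                         for w, po, pm in zip(weights, p_observed, p_model)))

# Illustrative 5-point ability grid with its distribution weights
weights = [0.1, 0.2, 0.4, 0.2, 0.1]
p_obs = [0.25, 0.40, 0.60, 0.80, 0.90]   # observed proportions correct
p_mod = [0.20, 0.38, 0.62, 0.78, 0.92]   # model-implied probabilities
fit = rmsd_item_fit(weights, p_obs, p_mod)
```

A perfectly fitting item yields RMSD = 0; larger values indicate greater misfit between the empirical and model-implied item characteristic curves.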
Sainan Xu; Jing Lu; Jiwei Zhang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
With the growing attention on large-scale educational testing and assessment, the ability to process substantial volumes of response data becomes crucial. Current estimation methods within item response theory (IRT), despite their high precision, often pose considerable computational burdens with large-scale data, leading to reduced computational…
Descriptors: Educational Assessment, Bayesian Statistics, Statistical Inference, Item Response Theory
Cho, April E.; Wang, Chun; Zhang, Xue; Xu, Gongjun – Grantee Submission, 2020
Multidimensional Item Response Theory (MIRT) is widely used in assessment and evaluation of educational and psychological tests. It models individual response patterns by specifying a functional relationship between individuals' multiple latent traits and their responses to test items. One major challenge in parameter estimation in MIRT is that…
Descriptors: Item Response Theory, Mathematics, Statistical Inference, Maximum Likelihood Statistics
Oranje, Andreas; Kolstad, Andrew – Journal of Educational and Behavioral Statistics, 2019
The design and psychometric methodology of the National Assessment of Educational Progress (NAEP) is constantly evolving to meet the changing interests and demands stemming from a rapidly shifting educational landscape. NAEP has been built on strong research foundations that include conducting extensive evaluations and comparisons before new…
Descriptors: National Competency Tests, Psychometrics, Statistical Analysis, Computation
Chung, Seungwon; Cai, Li – Grantee Submission, 2019
The use of item responses from questionnaire data is ubiquitous in social science research. One side effect of using such data is that researchers must often account for item level missingness. Multiple imputation (Rubin, 1987) is one of the most widely used missing data handling techniques. The traditional multiple imputation approach in…
Descriptors: Computation, Statistical Inference, Structural Equation Models, Goodness of Fit
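For context on multiple imputation (Rubin, 1987) as cited in this entry: point estimates from the m completed data sets are averaged, and the pooled variance combines within-imputation variance W and between-imputation variance B as T = W + (1 + 1/m)B. A minimal sketch of this pooling step with illustrative numbers (not the authors' approach to item-level missingness):

```python
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Pool m point estimates and their variances with Rubin's rules:
    Q_bar = mean(Q_i);  T = W + (1 + 1/m) * B."""
    m = len(estimates)
    q_bar = mean(estimates)
    w = mean(variances)          # within-imputation variance
    b = variance(estimates)      # between-imputation (sample) variance
    t = w + (1.0 + 1.0 / m) * b  # total variance
    return q_bar, t

# Illustrative: 5 imputed-data estimates of one parameter
est = [0.52, 0.48, 0.55, 0.50, 0.47]
var = [0.010, 0.012, 0.011, 0.009, 0.010]
q, t = pool_rubin(est, var)
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations.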
Ames, Allison; Myers, Aaron – Educational Measurement: Issues and Practice, 2019
Drawing valid inferences from modern measurement models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. As Bayesian estimation is becoming more common, understanding the Bayesian approaches for evaluating model-data fit models…
Descriptors: Bayesian Statistics, Psychometrics, Models, Predictive Measurement
Maeda, Hotaka; Zhang, Bo – International Journal of Testing, 2017
The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…
Descriptors: Cheating, Test Items, Mathematics, Statistics
Kim, YoungKoung; DeCarlo, Lawrence T. – College Board, 2016
Because of concerns about test security, different test forms are typically used across different testing occasions. As a result, equating is necessary in order to get scores from the different test forms that can be used interchangeably. In order to assure the quality of equating, multiple equating methods are often examined. Various equity…
Descriptors: Equated Scores, Evaluation Methods, Sampling, Statistical Inference
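As context for equating-method comparisons like this one, the simplest method is linear equating, which maps Form X scores onto the Form Y scale by matching means and standard deviations: y(x) = μ_Y + (σ_Y/σ_X)(x − μ_X). A minimal sketch on synthetic scores (not the study's data or its full set of methods):

```python
from statistics import mean, pstdev

def linear_equate(x_scores, y_scores):
    """Return a function mapping Form X scores to the Form Y scale
    by matching means and standard deviations (linear equating)."""
    mx, my = mean(x_scores), mean(y_scores)
    sx, sy = pstdev(x_scores), pstdev(y_scores)
    return lambda x: my + (sy / sx) * (x - mx)

# Synthetic example: Form Y runs uniformly 2 points lower than Form X
form_x = [10, 12, 14, 16, 18]
form_y = [8, 10, 12, 14, 16]
to_y = linear_equate(form_x, form_y)
```

In practice the two score distributions come from (random or anchor-linked) samples of examinees, which is what makes the resulting equated scores subject to sampling error.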
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi – Educational and Psychological Measurement, 2014
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Descriptors: Sampling, Statistical Inference, Maximum Likelihood Statistics, Computation
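For context on the standard error being corrected in this entry: the uncorrected asymptotic SE of a maximum likelihood ability estimate treats item parameters as known and equals 1/√I(θ), where I(θ) is the test information; under the 2PL model, I(θ) = Σ aᵢ²Pᵢ(1 − Pᵢ). A minimal sketch of that baseline quantity (illustrative items; this is the uncorrected SE, not the authors' correction for calibration error):

```python
import math

def info_2pl(theta, items):
    """Test information at theta under the 2PL model:
    I(theta) = sum_i a_i^2 * P_i * (1 - P_i)."""
    total = 0.0
    for a, b in items:
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        total += a * a * p * (1.0 - p)
    return total

def asymptotic_se(theta, items):
    """Uncorrected asymptotic SE of the ML ability estimate,
    treating item parameters as known: 1 / sqrt(I(theta))."""
    return 1.0 / math.sqrt(info_2pl(theta, items))

# 20 identical illustrative items: (discrimination a, difficulty b)
items = [(1.0, 0.0)] * 20
```

Because item parameters are themselves estimated from a calibration sample, the true SE exceeds this value, which is the motivation for the correction studied in the paper.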
Michaelides, Michalis P.; Haertel, Edward H. – Applied Measurement in Education, 2014
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Descriptors: Equated Scores, Test Items, Sampling, Statistical Inference
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Educational and Psychological Measurement, 2015
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…
Descriptors: Competence, Tests, Evaluation Methods, Adults
Johnson, Timothy R. – Applied Psychological Measurement, 2013
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Descriptors: Item Response Theory, Scores, Computation, Bayesian Statistics
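As context for the sum-score versus latent-score distinction this entry draws: the two are linked by the test characteristic curve, the expected sum score T(θ) = Σ Pᵢ(θ). A minimal 2PL sketch with illustrative items:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def expected_sum_score(theta, items):
    """Test characteristic curve: expected sum score at theta,
    T(theta) = sum of the item response probabilities."""
    return sum(p_2pl(theta, a, b) for a, b in items)

# Illustrative items: (discrimination a, difficulty b)
items = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]
```

Because T(θ) is monotone in θ, it provides a one-to-one mapping between latent scores and expected sum scores, which is what lets results from one framework be expressed in the other.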