Publication Date
In 2025: 1
Since 2024: 4
Since 2021 (last 5 years): 11
Since 2016 (last 10 years): 20
Since 2006 (last 20 years): 27
Descriptor
Accuracy: 27
Error of Measurement: 27
Monte Carlo Methods: 27
Comparative Analysis: 12
Item Response Theory: 9
Models: 9
Sample Size: 9
Computation: 6
Test Items: 6
Classification: 5
Correlation: 5
Author
Jin, Ying: 2
McCoach, D. Betsy: 2
Ahn, Soyeon: 1
AlGhamdi, Hannan M.: 1
Arel-Bundock, Vincent: 1
Bellara, Aarti: 1
Ben Kelcey: 1
Fatih Orçan: 1
Finch, Holmes: 1
Finch, W. Holmes: 1
Finch, William Holmes: 1
Publication Type
Journal Articles: 25
Reports - Research: 23
Reports - Evaluative: 3
Dissertations/Theses -…: 1
Speeches/Meeting Papers: 1
Education Level
Higher Education: 1
Postsecondary Education: 1
Location
Japan: 1
Saudi Arabia: 1
Assessments and Surveys
Cognitive Abilities Test: 1
Early Childhood Longitudinal…: 1
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
Fatih Orçan – International Journal of Assessment Tools in Education, 2025
Factor analysis is a statistical method to explore the relationships among observed variables and identify latent structures. It is crucial in scale development and validity analysis. Key factors affecting the accuracy of factor analysis results include the type of data, sample size, and the number of response categories. While some studies…
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Sample Size
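A minimal sketch of this kind of accuracy study, assuming a one-factor model with invented loadings and sample sizes (none of these values come from the article): generate data from known loadings, refit the model, and track how well the loadings are recovered as the sample size grows.

```python
# Illustrative only: recover known loadings from simulated one-factor data
# at several sample sizes. Loadings and sample sizes are invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
true_loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65])

for n in (100, 250, 1000):                        # hypothetical sample sizes
    eta = rng.normal(size=n)                      # latent factor scores
    unique = rng.normal(size=(n, 5)) * np.sqrt(1 - true_loadings**2)
    X = np.outer(eta, true_loadings) + unique     # observed indicators
    fa = FactorAnalysis(n_components=1).fit(X)
    est = np.abs(fa.components_.ravel())          # sign of a factor is arbitrary
    print(n, round(float(np.abs(est - true_loadings).mean()), 3))
```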
Shu, Tian; Luo, Guanzhong; Luo, Zhaosheng; Yu, Xiaofeng; Guo, Xiaojun; Li, Yujun – Journal of Educational and Behavioral Statistics, 2023
Cognitive diagnosis models (CDMs) are the statistical framework for cognitive diagnostic assessment in education and psychology. They generally assume that subjects' latent attributes are dichotomous--mastery or nonmastery, which seems quite deterministic. As an alternative to dichotomous attribute mastery, attention is drawn to the use of a…
Descriptors: Cognitive Measurement, Models, Diagnostic Tests, Accuracy
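For readers unfamiliar with CDMs, the sketch below shows the item response probabilities of a basic DINA model under dichotomous (mastery or nonmastery) attributes; it is a generic textbook illustration, not the model this article proposes, and the Q-matrix and guessing/slipping values are invented.

```python
# Generic DINA-style response probabilities with dichotomous attributes.
import numpy as np

q_matrix = np.array([[1, 0],   # item 1 requires attribute 1
                     [0, 1],   # item 2 requires attribute 2
                     [1, 1]])  # item 3 requires both attributes
guess, slip = 0.2, 0.1         # hypothetical item parameters

def p_correct(alpha):
    """P(correct) per item for a dichotomous mastery profile alpha."""
    has_all = np.all(alpha >= q_matrix, axis=1)   # all required attributes mastered?
    return np.where(has_all, 1 - slip, guess)

print(p_correct(np.array([1, 0])))   # masters attribute 1 only
print(p_correct(np.array([1, 1])))   # masters both attributes
```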
Yuanfang Liu; Mark H. C. Lai; Ben Kelcey – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance holds when a latent construct is measured in the same way across different levels of background variables (continuous or categorical) while controlling for the true value of that construct. Using Monte Carlo simulation, this paper compares the multiple indicators, multiple causes (MIMIC) model and MIMIC-interaction to a…
Descriptors: Classification, Accuracy, Error of Measurement, Correlation
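A minimal data-generating sketch of the MIMIC idea, not the authors' simulation design: indicators load on a latent factor, the factor is regressed on a background covariate, and a direct covariate effect on one indicator stands in for a lack of invariance. All coefficients below are invented.

```python
# Toy MIMIC-style generator: a covariate shifts the latent factor, and a
# direct effect on indicator 3 mimics non-invariance. Values are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                    # background covariate
eta = 0.5 * x + rng.normal(size=n)        # latent factor regressed on x
loadings = np.array([0.8, 0.7, 0.6])
direct = np.array([0.0, 0.0, 0.4])        # nonzero entry = non-invariant indicator
y = np.outer(eta, loadings) + np.outer(x, direct) + rng.normal(size=(n, 3))

# Crude check: only indicator 3 relates to x beyond what eta explains.
for j in range(3):
    resid = y[:, j] - loadings[j] * eta   # remove the factor's contribution
    print(j + 1, round(float(np.corrcoef(resid, x)[0, 1]), 2))
```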
Rüttenauer, Tobias – Sociological Methods & Research, 2022
Spatial regression models provide the opportunity to analyze spatial data and spatial processes. Yet, several model specifications can be used, all assuming different types of spatial dependence. This study summarizes the most commonly used spatial regression models and offers a comparison of their performance by using Monte Carlo experiments. In…
Descriptors: Models, Monte Carlo Methods, Social Science Research, Data Analysis
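As one concrete instance of the Monte Carlo logic, data can be drawn from a spatial autoregressive (lag) model, y = ρWy + Xβ + ε, and then fit with competing specifications. The sketch below only generates such data on a toy ring of neighbours, with invented ρ and β, and then fits a naive OLS regression to it.

```python
# Simulate one draw from a spatial lag (SAR) model:
#   y = rho*W*y + x*beta + e  =>  y = (I - rho*W)^(-1) (x*beta + e)
# Toy neighbour structure and parameter values are invented.
import numpy as np

rng = np.random.default_rng(2)
n, rho, beta = 100, 0.5, 1.0
idx = np.arange(n)
W = np.zeros((n, n))
W[idx, (idx + 1) % n] = 1.0            # each unit neighbours the next...
W[idx, (idx - 1) % n] = 1.0            # ...and the previous unit on a ring
W /= W.sum(axis=1, keepdims=True)      # row-standardize the weights

x = rng.normal(size=n)
e = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho * W, x * beta + e)

# Naive OLS on this data ignores the spatial dependence in y.
b_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
print(np.round(b_ols, 2))              # [intercept, slope]
```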
Ke-Hai Yuan; Zhiyong Zhang – Grantee Submission, 2024
Data in social and behavioral sciences typically contain measurement errors and also do not have predefined metrics. Structural equation modeling (SEM) is commonly used to analyze such data. This article discusses issues in latent-variable modeling as compared to regression analysis with composite scores. Via logical reasoning and analytical results…
Descriptors: Error of Measurement, Measurement Techniques, Social Science Research, Behavioral Science Research
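A toy numerical illustration of the general point, not the article's own analysis: when the predictor is an error-contaminated composite score, its regression slope is attenuated roughly by the composite's reliability. The reliability and slope values below are invented.

```python
# Attenuation of a regression slope when the predictor is a composite
# score containing measurement error. All values are made up.
import numpy as np

rng = np.random.default_rng(3)
n, true_slope, reliability = 100_000, 1.0, 0.7
xi = rng.normal(size=n)                               # true (latent) score
composite = xi + rng.normal(size=n) * np.sqrt((1 - reliability) / reliability)
y = true_slope * xi + rng.normal(size=n)

slope_hat = np.cov(composite, y)[0, 1] / np.var(composite, ddof=1)
print(round(slope_hat, 2), "vs. expected", round(true_slope * reliability, 2))
```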
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
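To make the weighting idea concrete, here is a rough sketch of a characteristic-curve linking criterion for the 2PL in which squared differences between the transformed and target test characteristic curves are weighted by test information. It is only a schematic of that idea, not the authors' IWCC or TWCC implementation, and all item parameters are invented.

```python
# Schematic information-weighted characteristic-curve linking for the 2PL:
# choose scale constants (A, B) to minimize an information-weighted gap
# between the transformed and target test characteristic curves.
import numpy as np
from scipy.optimize import minimize

a_new = np.array([1.2, 0.8, 1.5]); b_new = np.array([-0.5, 0.0, 0.7])
a_old = np.array([1.1, 0.9, 1.4]); b_old = np.array([-0.3, 0.2, 0.9])
theta = np.linspace(-4, 4, 81)                   # grid on the old-form scale

def icc(a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # (theta, items)

def loss(x):
    A, B = x
    a_t, b_t = a_new / A, A * b_new + B          # put new form on the old scale
    gap = icc(a_t, b_t).sum(axis=1) - icc(a_old, b_old).sum(axis=1)
    p = icc(a_old, b_old)
    info = (a_old**2 * p * (1 - p)).sum(axis=1)  # 2PL test information weights
    return float(np.sum(info * gap**2))

print(minimize(loss, x0=[1.0, 0.0]).x)           # estimated [A, B]
```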
Arel-Bundock, Vincent – Sociological Methods & Research, 2022
Qualitative comparative analysis (QCA) is an influential methodological approach motivated by set theory and Boolean logic. QCA proponents have developed algorithms to analyze quantitative data, in a bid to uncover necessary and sufficient conditions where causal relationships are complex, conditional, or asymmetric. This article uses computer…
Descriptors: Comparative Analysis, Qualitative Research, Attribution Theory, Computer Simulation
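For readers new to QCA, the toy check below spells out what "necessary" and "sufficient" mean in the crisp-set case (every case with the condition shows the outcome, or every case with the outcome shows the condition). The cases are invented and this is not the article's simulation.

```python
# Crisp-set QCA in miniature: test necessity and sufficiency of one
# condition for an outcome across invented cases.
cases = [  # (condition present, outcome present)
    (True, True), (True, True), (False, False), (True, False), (False, False),
]

sufficient = all(outcome for cond, outcome in cases if cond)   # condition => outcome?
necessary = all(cond for cond, outcome in cases if outcome)    # outcome => condition?
print("sufficient:", sufficient, "necessary:", necessary)
```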
Paulsen, Justin; Valdivia, Dubravka Svetina – Journal of Experimental Education, 2022
Cognitive diagnostic models (CDMs) are a family of psychometric models designed to provide categorical classifications for multiple latent attributes. CDMs provide more granular evidence than other psychometric models and have potential for guiding teaching and learning decisions in the classroom. However, CDMs have primarily been conducted using…
Descriptors: Psychometrics, Classification, Teaching Methods, Learning Processes
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using the simulation study protocol put forth by Han (2018a). For the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
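As background for the expected-versus-observed distinction, the sketch below computes both quantities for the ability parameter of a tiny 3PL test (for the 2PL the two coincide for ability, so the 3PL makes the difference visible). It is a generic numerical illustration, not Monroe's procedures, and the item parameters and response pattern are invented.

```python
# Expected vs. observed information for ability theta in a 3-item 3PL test.
import numpy as np

a = np.array([1.2, 0.9, 1.5])
b = np.array([-0.5, 0.3, 0.8])
c = np.array([0.2, 0.2, 0.2])
u = np.array([1, 0, 1])               # one hypothetical response pattern
theta0 = 0.0

def loglik(theta, resp):
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

def neg_second_deriv(f, x, h=1e-3):   # -d2f/dx2 by central differences
    return -(f(x + h) - 2 * f(x) + f(x - h)) / h**2

observed = neg_second_deriv(lambda t: loglik(t, u), theta0)

# Expected information = average of the observed information over all
# 2^3 response patterns, weighted by their probabilities at theta0.
p0 = c + (1 - c) / (1 + np.exp(-a * (theta0 - b)))
expected = 0.0
for i in (0, 1):
    for j in (0, 1):
        for k in (0, 1):
            pat = np.array([i, j, k])
            prob = float(np.prod(np.where(pat == 1, p0, 1 - p0)))
            expected += prob * neg_second_deriv(lambda t: loglik(t, pat), theta0)

print(round(observed, 3), round(expected, 3))
```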
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard three-parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
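For reference, the standard three-parameter logistic model named here gives the probability of a correct response in terms of the person ability \(\theta\) and the item's discrimination \(a_j\), difficulty \(b_j\), and pseudo-chance \(c_j\):

\[
P(U_j = 1 \mid \theta) = c_j + \frac{1 - c_j}{1 + \exp\{-a_j(\theta - b_j)\}}
\]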
Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki – Physical Review Physics Education Research, 2021
As a method to shorten the test time of the Force Concept Inventory (FCI), we suggest the use of computerized adaptive testing (CAT). CAT is the process of administering a test on a computer, with items (i.e., questions) selected based upon the responses of the examinee to prior items. In so doing, the test length can be significantly shortened.…
Descriptors: Foreign Countries, College Students, Student Evaluation, Computer Assisted Testing
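The item-selection step described in this abstract is usually operationalized as maximum-information selection. The sketch below is a generic version of that rule for 2PL items, not the authors' FCI-based CAT, and the item parameters are invented.

```python
# Generic maximum-information CAT step for 2PL items: given the current
# ability estimate, administer the unused item with the highest information.
import numpy as np

a = np.array([1.4, 0.7, 1.1, 1.8, 0.9])      # discriminations (invented)
b = np.array([-1.0, -0.3, 0.2, 0.6, 1.2])    # difficulties (invented)

def next_item(theta_hat, administered):
    p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    info = a**2 * p * (1 - p)                # 2PL item information
    info[list(administered)] = -np.inf       # never repeat an item
    return int(np.argmax(info))

print(next_item(theta_hat=0.0, administered={0, 3}))
```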
Finch, W. Holmes; Shim, Sungok Serena – Educational and Psychological Measurement, 2018
Collection and analysis of longitudinal data is an important tool in understanding growth and development over time in a whole range of human endeavors. Ideally, researchers working in the longitudinal framework are able to collect data at more than two points in time, as this will provide them with the potential for a deeper understanding of the…
Descriptors: Comparative Analysis, Computation, Time, Change
McCoach, D. Betsy; Rifenbark, Graham G.; Newton, Sarah D.; Li, Xiaoran; Kooken, Janice; Yomtov, Dani; Gambino, Anthony J.; Bellara, Aarti – Journal of Educational and Behavioral Statistics, 2018
This study compared five common multilevel software packages (HLM 7, Mplus 7.4, R lme4 v1.1-12, Stata 14.1, and SAS 9.4) via Monte Carlo simulation to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. Simulated data…
Descriptors: Hierarchical Linear Modeling, Computer Software, Comparative Analysis, Monte Carlo Methods
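A minimal sketch of the simulate-and-fit cycle the study describes, restricted to a single software target (statsmodels in Python, which is not one of the five packages compared); group counts, effect sizes, and variance components are invented.

```python
# Simulate two-level data with a randomly varying slope, then fit a
# mixed model with statsmodels. Sizes and variances are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_groups, n_per = 50, 20
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=g.size)
u0 = rng.normal(scale=0.5, size=n_groups)    # random intercepts
u1 = rng.normal(scale=0.3, size=n_groups)    # random slopes
y = 1.0 + (0.5 + u1[g]) * x + u0[g] + rng.normal(size=g.size)

df = pd.DataFrame({"y": y, "x": x, "g": g})
fit = smf.mixedlm("y ~ x", df, groups=df["g"], re_formula="~x").fit()
print(fit.params[["Intercept", "x"]])        # fixed-effect estimates
```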