Publication Date
In 2025 | 1 |
Since 2024 | 2 |
Since 2021 (last 5 years) | 7 |
Since 2016 (last 10 years) | 14 |
Since 2006 (last 20 years) | 38 |
Descriptor
Error of Measurement | 42 |
Scores | 42 |
Simulation | 42 |
Item Response Theory | 18 |
Test Items | 14 |
Models | 10 |
Regression (Statistics) | 10 |
Evaluation Methods | 9 |
Comparative Analysis | 8 |
Computation | 8 |
Correlation | 8 |
Author
Lee, Won-Chan | 3 |
Emons, Wilco H. M. | 2 |
Kolen, Michael J. | 2 |
Shang, Yi | 2 |
Sijtsma, Klaas | 2 |
AlGhamdi, Hannan M. | 1 |
Andrews, Benjamin James | 1 |
Benítez, Isabel | 1 |
Betebenner, Damian W. | 1 |
Bogaert, Jasper | 1 |
Brennan, Robert L. | 1 |
Publication Type
Journal Articles | 30 |
Reports - Research | 25 |
Reports - Evaluative | 9 |
Dissertations/Theses -… | 5 |
Reports - Descriptive | 3 |
Speeches/Meeting Papers | 3 |
Education Level
Secondary Education | 3 |
Grade 9 | 2 |
High Schools | 2 |
Grade 10 | 1 |
Grade 4 | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Assessments and Surveys
Armed Forces Qualification… | 1 |
Cognitive Abilities Test | 1 |
Emma Somer; Carl Falk; Milica Miocevic – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Factor Score Regression (FSR) is increasingly employed as an alternative to structural equation modeling (SEM) in small samples. Despite its popularity in psychology, the performance of FSR in multigroup models with small samples remains relatively unknown. The goal of this study was to examine the performance of FSR, namely Croon's correction and…
Descriptors: Scores, Structural Equation Models, Comparative Analysis, Sample Size
Manuel T. Rein; Jeroen K. Vermunt; Kim De Roover; Leonie V. D. E. Vogelsmeier – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Researchers often study dynamic processes of latent variables in everyday life, such as the interplay of positive and negative affect over time. An intuitive approach is to first estimate the measurement model of the latent variables, then compute factor scores, and finally use these factor scores as observed scores in vector autoregressive…
Descriptors: Measurement Techniques, Factor Analysis, Scores, Validity
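The two-step pipeline described in this abstract (estimate the measurement model, compute factor scores, then treat the scores as observed data in a vector autoregressive model) can be sketched with a toy simulation. All specifics below are illustrative assumptions, not taken from the article: an AR(1) latent process with coefficient 0.7, four unit-loading items with standard normal error, and the item mean as a crude factor-score proxy.

```python
import random, statistics

random.seed(1)

# Assumed setup: latent AR(1) process, phi = 0.7; four items per occasion
# with unit loadings and N(0, 1) error; factor score = item mean.
PHI, T, N_ITEMS = 0.7, 5000, 4

eta = [0.0]
for _ in range(T - 1):
    eta.append(PHI * eta[-1] + random.gauss(0, 1))

# Step 1-2: "measure" the latent state, then compute factor scores.
scores = [statistics.mean(e + random.gauss(0, 1) for _ in range(N_ITEMS))
          for e in eta]

def ar1_slope(x):
    """OLS slope of x[t] on x[t-1] (lag-1 autoregression)."""
    x0, x1 = x[:-1], x[1:]
    m0, m1 = statistics.mean(x0), statistics.mean(x1)
    num = sum((a - m0) * (b - m1) for a, b in zip(x0, x1))
    den = sum((a - m0) ** 2 for a in x0)
    return num / den

phi_latent = ar1_slope(eta)      # recovers phi well
phi_scores = ar1_slope(scores)   # attenuated by measurement error in the scores
print(phi_latent, phi_scores)
```

The lag-1 coefficient estimated from the error-contaminated scores is systematically smaller than the one estimated from the latent states, which is one reason naive factor-score-as-observed analyses of dynamics can mislead.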
Miyazaki, Yasuo; Kamata, Akihito; Uekawa, Kazuaki; Sun, Yizhi – Educational and Psychological Measurement, 2022
This paper investigated consequences of measurement error in the pretest on the estimate of the treatment effect in a pretest-posttest design with the analysis of covariance (ANCOVA) model, focusing on both the direction and magnitude of its bias. Some prior studies have examined the magnitude of the bias due to measurement error and suggested…
Descriptors: Error of Measurement, Pretesting, Pretests Posttests, Statistical Bias
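A small simulation can illustrate the mechanism this abstract points to: when groups are not randomized and the pretest covariate is measured with error, ANCOVA only partially adjusts for the true pretest, biasing the treatment-effect estimate. The data-generating process below (reliability 0.8, true effect 0.5, selection on the true pretest) is an assumed illustration, not the paper's design.

```python
import random

random.seed(2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def ols(rows, y):
    """OLS coefficients via the normal equations."""
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(XtX, Xty)

# Assumed DGP: true pretest X confounds a nonrandom treatment T; observed
# pretest W = X + error (reliability 0.8); true treatment effect = 0.5.
N, TAU = 20000, 0.5
X = [random.gauss(0, 1) for _ in range(N)]
T = [1.0 if x + random.gauss(0, 1) > 0 else 0.0 for x in X]
W = [x + random.gauss(0, 0.5) for x in X]
Y = [TAU * t + 1.0 * x + random.gauss(0, 1) for t, x in zip(T, X)]

_, tau_true_ctrl, _ = ols([[1.0, t, x] for t, x in zip(T, X)], Y)  # near 0.5
_, tau_fallible, _ = ols([[1.0, t, w] for t, w in zip(T, W)], Y)   # inflated
print(tau_true_ctrl, tau_fallible)
```

With the error-free pretest as covariate the true effect is recovered; with the fallible pretest the residual confounding inflates the estimate, matching the direction-of-bias question the study addresses.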
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis; it is also essential in scale development, because the fit results guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing responses are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
Bogaert, Jasper; Loh, Wen Wei; Rosseel, Yves – Educational and Psychological Measurement, 2023
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error…
Descriptors: Factor Analysis, Regression (Statistics), Structural Equation Models, Error of Measurement
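The attenuation problem this abstract describes, and the flavor of correction it studies, can be shown in a few lines. The setup is an assumed illustration (true slope 0.6, four unit-loading items, item-mean factor scores, known error variance), and the correction shown is a simplified method-of-moments disattenuation in the spirit of Croon's approach, not its full implementation.

```python
import random

random.seed(3)

# Assumed setup: latent xi (unit variance) predicts eta with slope B = 0.6;
# xi is measured by K items with unit loadings and N(0, 1) error, and the
# factor score is the item mean, so its error variance is 1/K.
N, K, B = 20000, 4, 0.6
xi = [random.gauss(0, 1) for _ in range(N)]
fs = [x + sum(random.gauss(0, 1) for _ in range(K)) / K for x in xi]
eta = [B * x + random.gauss(0, 1) for x in xi]

def mean(v):
    return sum(v) / len(v)

mf, me = mean(fs), mean(eta)
cov = sum((f - mf) * (e - me) for f, e in zip(fs, eta)) / len(fs)
var_fs = sum((f - mf) ** 2 for f in fs) / len(fs)

naive = cov / var_fs                  # attenuated: about B * reliability
err_var = 1.0 / K                     # known error variance of the item mean
corrected = cov / (var_fs - err_var)  # disattenuated structural slope
print(naive, corrected)
```

The naive slope is pulled toward zero by the factor scores' measurement error; dividing by the error-free part of the score variance recovers the structural slope, which is the core idea behind correcting FSR estimates.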
Mansolf, Maxwell; Jorgensen, Terrence D.; Enders, Craig K. – Grantee Submission, 2020
Structural equation modeling (SEM) applications routinely employ a trilogy of significance tests that includes the likelihood ratio test, Wald test, and score test or modification index. Researchers use these tests to assess global model fit, evaluate whether individual estimates differ from zero, and identify potential sources of local misfit,…
Descriptors: Structural Equation Models, Computation, Scores, Simulation
Ozdemir, Burhanettin; Gelbal, Selahattin – Education and Information Technologies, 2022
Computerized adaptive tests (CAT) apply an adaptive process in which the items are tailored to individuals' ability scores. Multidimensional CAT (MCAT) designs differ in the item selection, ability estimation, and termination methods being used. This study aims to investigate the performance of the MCAT designs used to…
Descriptors: Scores, Computer Assisted Testing, Test Items, Language Proficiency
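The adaptive loop these CAT studies build on (select the most informative unused item at the current ability estimate, score the response, update the estimate) can be sketched for the unidimensional Rasch (1PL) case. The pool size, difficulties, test length, and Newton-based MLE update below are illustrative assumptions; operational MCAT systems use more elaborate selection and (often Bayesian) estimation methods.

```python
import math, random

random.seed(4)

# Assumed item pool: 40 Rasch items with difficulties in [-2.5, 2.5].
pool = [random.uniform(-2.5, 2.5) for _ in range(40)]

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def info(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = p_correct(theta, b)
    return p * (1 - p)

def run_cat(true_theta, n_items=15):
    theta, used, responses = 0.0, set(), []
    for _ in range(n_items):
        # Select the unused item with maximum information at the current theta.
        j = max((i for i in range(len(pool)) if i not in used),
                key=lambda i: info(theta, pool[i]))
        used.add(j)
        u = 1 if random.random() < p_correct(true_theta, pool[j]) else 0
        responses.append((pool[j], u))
        # A few Newton steps on the log-likelihood update theta (MLE),
        # clamped to [-4, 4] to keep early all-correct/all-wrong runs finite.
        for _ in range(10):
            g = sum(u - p_correct(theta, b) for b, u in responses)
            h = -sum(info(theta, b) for b, _ in responses)
            theta = min(4.0, max(-4.0, theta - g / h))
    return theta

est = run_cat(1.0)
print(est)
```

Even this minimal loop shows the defining CAT behavior: item difficulties chase the provisional ability estimate, so precision concentrates where the examinee actually is.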
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating results in accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
Yesiltas, Gonca; Paek, Insu – Educational and Psychological Measurement, 2020
A log-linear model (LLM) is a well-known statistical method to examine the relationship among categorical variables. This study investigated the performance of LLM in detecting differential item functioning (DIF) for polytomously scored items via simulations where various sample sizes, ability mean differences (impact), and DIF types were…
Descriptors: Simulation, Sample Size, Item Analysis, Scores
Greifer, Noah – ProQuest LLC, 2018
There has been some research on the use of propensity scores in the context of measurement error in the confounding variables; one recommended method is to generate estimates of the mis-measured covariate using a latent variable model and to use those estimates (i.e., factor scores) in place of the covariate. I describe a simulation study…
Descriptors: Evaluation Methods, Probability, Scores, Statistical Analysis
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han, K. T. (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Perry, Thomas – Research Papers in Education, 2019
A compositional effect occurs when pupil attainment is associated with the characteristics of their peers, over and above their own individual characteristics. Pupils at academically selective schools, for example, tend to out-perform similar-ability pupils who are educated with mixed-ability peers. Previous methodological studies, however, have shown…
Descriptors: Value Added Models, Correlation, Individual Characteristics, Peer Influence
Leventhal, Brian – ProQuest LLC, 2017
More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…
Descriptors: Psychometrics, Item Response Theory, Simulation, Models
Hidalgo, Ma Dolores; Benítez, Isabel; Padilla, Jose-Luis; Gómez-Benito, Juana – Sociological Methods & Research, 2017
The growing use of scales in survey questionnaires warrants addressing how polytomous differential item functioning (DIF) affects observed scale score comparisons. The aim of this study is to investigate the impact of DIF on the Type I error and effect size of the independent-samples t-test on the observed total scale scores. A…
Descriptors: Test Items, Test Bias, Item Response Theory, Surveys
Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W. – Educational Measurement: Issues and Practice, 2015
In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and found that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…
Descriptors: Error of Measurement, Regression (Statistics), Achievement Gains, Students