Source: Educational and Psychological Measurement (64 results)
Showing 1 to 15 of 64 results
Peer reviewed
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
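For readers who want to see the core idea behind the Nguyen and Waller study, the sketch below runs an orthogonal rotation from many random starting positions and tallies the distinct criterion values the optimizer reaches; more than one value signals local solutions. It is a minimal illustration using a varimax criterion and a made-up loading matrix, not the authors' oblique M2PL setup, and the function names are our own.

    import numpy as np

    def varimax_criterion_from_start(A, T0, max_iter=500, tol=1e-8):
        """Rotate loading matrix A orthogonally from starting rotation T0 (Kaiser's varimax)."""
        p, k = A.shape
        T, d_old = T0.copy(), 0.0
        for _ in range(max_iter):
            L = A @ T
            # Gradient of the varimax criterion, followed by an SVD projection step
            G = A.T @ (L**3 - L @ np.diag(np.mean(L**2, axis=0)))
            U, s, Vt = np.linalg.svd(G)
            T, d = U @ Vt, s.sum()
            if d_old > 0 and d / d_old < 1 + tol:
                break
            d_old = d
        # Varimax criterion: sum over columns of the variance of squared loadings
        return np.sum(np.var((A @ T)**2, axis=0))

    rng = np.random.default_rng(1)
    A = rng.normal(size=(12, 3))                  # hypothetical unrotated loadings
    crits = []
    for _ in range(200):                          # 200 random orthogonal starts
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        crits.append(round(varimax_criterion_from_start(A, Q), 4))
    print(sorted(set(crits)))                     # several distinct values => local solutions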
Peer reviewed
Zhichen Guo; Daxun Wang; Yan Cai; Dongbo Tu – Educational and Psychological Measurement, 2024
Forced-choice (FC) measures, which employ comparative rather than absolute judgments, have been widely used in many personality and attitude tests as an alternative to rating scales. This format can effectively reduce several response biases, such as social desirability, response styles, and acquiescence bias. Another type of data linked with comparative…
Descriptors: Item Response Theory, Models, Reaction Time, Measurement Techniques
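As a toy illustration of the comparative judgments underlying forced-choice formats (not the Guo et al. model, which also incorporates response times), the snippet below simulates pairwise preferences under a simple Thurstonian-style latent-utility rule: each statement's utility is a loading times the trait plus noise, and the respondent endorses whichever statement in the pair has the higher utility. All loadings and sample sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(7)
    n_persons, n_pairs = 1000, 10
    theta = rng.normal(size=n_persons)                  # latent trait
    lam_a = rng.uniform(0.5, 1.5, size=n_pairs)         # loadings of statement A in each pair
    lam_b = rng.uniform(0.5, 1.5, size=n_pairs)         # loadings of statement B in each pair

    # Latent utilities: loading * trait + normal noise; choose the larger one.
    u_a = lam_a * theta[:, None] + rng.normal(size=(n_persons, n_pairs))
    u_b = lam_b * theta[:, None] + rng.normal(size=(n_persons, n_pairs))
    choice = (u_a > u_b).astype(int)                    # 1 = statement A preferred

    print(choice.mean(axis=0))                          # endorsement rate for A in each pair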
Peer reviewed
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are either included in or excluded from the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
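Classification quality in latent class analysis is often summarized with relative entropy computed from the posterior class probabilities. The helper below computes that index from simulated posteriors; it is a generic formula sketch, not the specific criteria examined by Jang and Hong.

    import numpy as np

    def relative_entropy(post):
        """Entropy-based classification quality from an N x K posterior probability matrix.
        Values near 1 indicate clear class assignment; values near 0 indicate poor separation."""
        n, k = post.shape
        p = np.clip(post, 1e-12, 1.0)                   # avoid log(0)
        return 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(k))

    rng = np.random.default_rng(0)
    post = rng.dirichlet(alpha=[5, 1, 1], size=500)     # hypothetical 3-class posteriors
    print(round(relative_entropy(post), 3))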
Peer reviewed
Lee, Sooyong; Han, Suhwa; Choi, Seung W. – Educational and Psychological Measurement, 2022
Response data containing an excessive number of zeros are referred to as zero-inflated data. When differential item functioning (DIF) detection is of interest, zero-inflation can attenuate DIF effects in the total sample and lead to underdetection of DIF items. The current study presents a DIF detection procedure for response data with excess…
Descriptors: Test Bias, Monte Carlo Methods, Simulation, Models
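To make "zero-inflated" concrete: responses come from a mixture in which some observations are structural zeros regardless of the latent trait, which can mask group differences on the measurement part. The snippet below generates zero-inflated count responses under a hypothetical mixing proportion; it illustrates the data structure only, not the DIF detection procedure proposed in the article.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    theta = rng.normal(size=n)                      # latent trait
    pi_zero = 0.30                                  # hypothetical structural-zero proportion
    lam = np.exp(0.2 + 0.8 * theta)                 # Poisson rate tied to the trait

    structural_zero = rng.random(n) < pi_zero
    counts = rng.poisson(lam)
    y = np.where(structural_zero, 0, counts)        # zero-inflated responses

    print((y == 0).mean(), (counts == 0).mean())    # excess zeros vs. Poisson-only zeros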
Peer reviewed
Lee, Bitna; Sohn, Wonsook – Educational and Psychological Measurement, 2022
A Monte Carlo study was conducted to compare the performance of a level-specific (LS) fit evaluation with that of a simultaneous (SI) fit evaluation in multilevel confirmatory factor analysis (MCFA) models. We extended previous studies by examining their performance under MCFA models with different factor structures across levels. In addition,…
Descriptors: Goodness of Fit, Factor Structure, Monte Carlo Methods, Factor Analysis
Peer reviewed
Wang, Yan; Kim, Eunsook; Ferron, John M.; Dedrick, Robert F.; Tan, Tony X.; Stark, Stephen – Educational and Psychological Measurement, 2021
Factor mixture modeling (FMM) has been increasingly used to investigate unobserved population heterogeneity. This study examined the issue of covariate effects with FMM in the context of measurement invariance testing. Specifically, the impact of excluding and misspecifying covariate effects on measurement invariance testing and class enumeration…
Descriptors: Role, Error of Measurement, Monte Carlo Methods, Models
Peer reviewed
Zopluoglu, Cengiz – Educational and Psychological Measurement, 2020
A mixture extension of Samejima's continuous response model for continuous measurement outcomes is introduced, along with its estimation through a heuristic approach based on limited-information factor analysis. Using an empirical data set, it is shown that two groups of respondents that differ both qualitatively and quantitatively in their response…
Descriptors: Item Response Theory, Measurement, Models, Heuristics
Peer reviewed
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C. – Educational and Psychological Measurement, 2018
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Descriptors: Error of Measurement, Testing, Scores, Models
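The point of the Hsiao, Kwok, and Lai entry, that treating composites as error-free biases interaction estimates, can be demonstrated with a few lines of simulation. The sketch below regresses an outcome on two predictors and their product, once with error-free scores and once with noisy ones; the interaction coefficient shrinks when measurement error is added. All quantities are made up for illustration, and the article's SEM-based corrections are not shown.

    import numpy as np

    rng = np.random.default_rng(11)
    n = 5000
    x, z = rng.normal(size=n), rng.normal(size=n)
    y = 0.3 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)    # true interaction = 0.4

    def ols_interaction(xo, zo, y):
        """OLS slope of the product term in y ~ 1 + x + z + x*z."""
        X = np.column_stack([np.ones_like(y), xo, zo, xo * zo])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[3]

    print(ols_interaction(x, z, y))                              # close to 0.4
    xo = x + rng.normal(scale=0.7, size=n)                       # composites observed with error
    zo = z + rng.normal(scale=0.7, size=n)
    print(ols_interaction(xo, zo, y))                            # noticeably attenuated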
Peer reviewed
Jang, Yoonsun; Cohen, Allan S. – Educational and Psychological Measurement, 2020
A nonconverged Markov chain can potentially lead to invalid inferences about model parameters. The purpose of this study was to assess the effect of a nonconverged Markov chain on the estimation of parameters for mixture item response theory models using a Markov chain Monte Carlo algorithm. A simulation study was conducted to investigate the…
Descriptors: Markov Processes, Item Response Theory, Accuracy, Inferences
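Convergence of the Markov chains discussed by Jang and Cohen is the kind of thing typically checked with the potential scale reduction factor (R-hat). The function below computes the classic Gelman-Rubin statistic from several chains for a single parameter; values close to 1 suggest convergence. It is a generic diagnostic sketch, not the specific criteria used in the article.

    import numpy as np

    def gelman_rubin(chains):
        """Potential scale reduction factor for one parameter.
        `chains` is an (m, n) array: m chains, n post-burn-in draws each."""
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        W = chains.var(axis=1, ddof=1).mean()           # mean within-chain variance
        B = n * chain_means.var(ddof=1)                 # between-chain variance
        var_hat = (n - 1) / n * W + B / n               # pooled variance estimate
        return np.sqrt(var_hat / W)

    rng = np.random.default_rng(5)
    chains = rng.normal(size=(4, 2000))                 # four well-mixed chains
    print(gelman_rubin(chains))                         # close to 1.0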
Peer reviewed
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Peer reviewed
Jobst, Lisa J.; Auerswald, Max; Moshagen, Morten – Educational and Psychological Measurement, 2022
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor analytic structure of the data, which is defined as the sum of latent variables and errors, so it is unclear whether previous results hold if the source of…
Descriptors: Goodness of Fit, Structural Equation Models, Error of Measurement, Factor Analysis
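The distinction drawn by Jobst, Auerswald, and Moshagen, namely whether non-normality enters through the latent variables or through the errors, can be mimicked in a small data-generation sketch. Below, indicators are built as loadings times a factor plus error, with the skew placed either in the factor or in the errors; loadings and sample size are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    n, loadings = 10000, np.array([0.8, 0.7, 0.6, 0.7])

    def skewed(size):
        # Standardized chi-square(2) draws: mean 0, variance 1, positive skew.
        return (rng.chisquare(2, size=size) - 2) / 2

    # Case 1: non-normal factor, normal errors
    f = skewed(n)
    x1 = f[:, None] * loadings + rng.normal(scale=0.6, size=(n, 4))

    # Case 2: normal factor, non-normal errors
    f = rng.normal(size=n)
    x2 = f[:, None] * loadings + 0.6 * skewed((n, 4))

    for x in (x1, x2):
        z = (x - x.mean(axis=0)) / x.std(axis=0)
        print(np.round((z**3).mean(axis=0), 2))         # indicator skewness in each case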
Peer reviewed
Falk, Carl F.; Monroe, Scott – Educational and Psychological Measurement, 2018
Lagrange multiplier (LM) or score tests have seen renewed interest for the purpose of diagnosing misspecification in item response theory (IRT) models. LM tests can also be used to test whether parameters differ from a fixed value. We argue that the utility of LM tests depends on both the method used to compute the test and the degree of…
Descriptors: Item Response Theory, Matrices, Models, Statistical Analysis
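For reference, the generic form of the Lagrange multiplier (score) statistic discussed by Falk and Monroe is shown below in LaTeX: the score vector and information matrix are evaluated at the restricted (null) estimate, and the statistic is referred to a chi-square distribution with degrees of freedom equal to the number of restrictions. The symbols are generic; the article compares particular ways of computing the information matrix.

    LM = s(\tilde{\theta})^{\top}\, I(\tilde{\theta})^{-1}\, s(\tilde{\theta}),
    \qquad s(\theta) = \frac{\partial \ell(\theta)}{\partial \theta},
    \qquad LM \sim \chi^{2}_{r} \ \text{asymptotically under } H_{0}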
Peer reviewed
da Silva, Marcelo A.; Liu, Ren; Huggins-Manley, Anne C.; Bazán, Jorge L. – Educational and Psychological Measurement, 2019
Multidimensional item response theory (MIRT) models use data from individual item responses to estimate multiple latent traits of interest, making them useful in educational and psychological measurement, among other areas. When MIRT models are applied in practice, it is not uncommon to see that some items are designed to measure all latent traits…
Descriptors: Item Response Theory, Matrices, Models, Bayesian Statistics
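As background for the MIRT entries in this list, a compensatory multidimensional two-parameter logistic model typically writes the probability of a correct response to item i as a logistic function of a vector of slopes and an intercept. This is the standard textbook form, not a reproduction of the authors' notation.

    P(X_{ij} = 1 \mid \boldsymbol{\theta}_j)
      = \frac{1}{1 + \exp\!\left[-\left(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j + d_i\right)\right]}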
Kara, Yusuf; Kamata, Akihito; Potgieter, Cornelis; Nese, Joseph F. T. – Educational and Psychological Measurement, 2020
Oral reading fluency (ORF), used by teachers and school districts across the country to screen and progress monitor at-risk readers, has been documented as a good indicator of reading comprehension and overall reading competence. In traditional ORF administration, students are given one minute to read a grade-level passage, after which the…
Descriptors: Oral Reading, Reading Fluency, Reading Rate, Accuracy
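The traditional ORF score mentioned in the Kara et al. abstract is usually reported as words correct per minute (WCPM). The one-line helper below shows that arithmetic for an arbitrary reading time; it is a scoring convention, not the model-based approach the article develops.

    def wcpm(words_correct, seconds):
        """Words correct per minute from a timed oral reading."""
        return words_correct * 60.0 / seconds

    print(wcpm(words_correct=112, seconds=60))   # 112.0 WCPM for a one-minute read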
Peer reviewed
Gonzalez, Oscar; MacKinnon, David P. – Educational and Psychological Measurement, 2018
Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Identifying specific mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to…
Descriptors: Statistical Analysis, Monte Carlo Methods, Measurement, Models
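The single-mediator building block behind the Gonzalez and MacKinnon entry is the product-of-coefficients estimate of an indirect effect: a is the slope of M on X, b is the slope of Y on M controlling for X, and ab is the mediated effect. The sketch below computes it by ordinary least squares on simulated data; the numbers and variable names are illustrative only.

    import numpy as np

    rng = np.random.default_rng(9)
    n = 2000
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)               # true a = 0.5
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)     # true b = 0.4, direct effect = 0.2

    def ols(X, y):
        """OLS slopes (intercept dropped) of y on the columns of X."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]

    a = ols(x, m)[0]
    b = ols(np.column_stack([x, m]), y)[1]
    print(a * b)                                    # indirect effect, near 0.5 * 0.4 = 0.2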