Showing 1 to 15 of 32 results
Peer reviewed
James Ohisei Uanhoro – Educational and Psychological Measurement, 2024
Accounting for model misspecification in Bayesian structural equation models is an active area of research. We present a uniquely Bayesian approach to misspecification that models the degree of misspecification as a parameter, one akin to the correlation root mean squared residual. The misspecification parameter can be interpreted on its…
Descriptors: Bayesian Statistics, Structural Equation Models, Simulation, Statistical Inference
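The correlation root mean squared residual referenced in this abstract has a standard form: the root mean square of the off-diagonal residuals between the observed and model-implied correlation matrices. A minimal sketch of that conventional formula (the matrix names S and Sigma are illustrative, not from the paper):

```python
import numpy as np

def crmr(S, Sigma):
    """Root mean square of the residuals between the observed (S) and
    model-implied (Sigma) correlation matrices, off-diagonal only."""
    iu = np.triu_indices(S.shape[0], k=1)   # unique off-diagonal cells
    resid = S[iu] - Sigma[iu]
    return np.sqrt(np.mean(resid ** 2))
```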
Peer reviewed
Han, Yuting; Zhang, Jihong; Jiang, Zhehan; Shi, Dexin – Educational and Psychological Measurement, 2023
In the literature of modern psychometric modeling, mostly related to item response theory (IRT), the fit of a model is evaluated through known indices, such as χ², M2, and root mean square error of approximation (RMSEA) for absolute assessments as well as Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian…
Descriptors: Goodness of Fit, Psychometrics, Error of Measurement, Item Response Theory
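The indices this abstract names have standard textbook forms, sketched below. These are the conventional definitions (RMSEA from the chi-square statistic, information criteria from the log-likelihood), not anything specific to the paper; the example values are fabricated.

```python
import numpy as np

def fit_indices(chi2, df, loglik, k, n):
    """Conventional index formulas: k = free parameters, n = sample size."""
    rmsea = np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    aic = -2 * loglik + 2 * k
    bic = -2 * loglik + k * np.log(n)
    caic = -2 * loglik + k * (np.log(n) + 1)   # consistent AIC
    return {"RMSEA": rmsea, "AIC": aic, "BIC": bic, "CAIC": caic}

# Hypothetical fit results for a model with 13 parameters, n = 400
print(fit_indices(chi2=48.3, df=24, loglik=-512.7, k=13, n=400))
```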
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Peer reviewed
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are either included or are not included in the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
Peer reviewed
Wang, Yan; Kim, Eunsook; Ferron, John M.; Dedrick, Robert F.; Tan, Tony X.; Stark, Stephen – Educational and Psychological Measurement, 2021
Factor mixture modeling (FMM) has been increasingly used to investigate unobserved population heterogeneity. This study examined the issue of covariate effects with FMM in the context of measurement invariance testing. Specifically, the impact of excluding and misspecifying covariate effects on measurement invariance testing and class enumeration…
Descriptors: Role, Error of Measurement, Monte Carlo Methods, Models
Peer reviewed
Zhan, Peida – Educational and Psychological Measurement, 2020
Timely diagnostic feedback is helpful for students and teachers, enabling them to adjust their learning and teaching plans according to a current diagnosis. Motivated by the practical concern that the simultaneity estimation strategy currently adopted by longitudinal learning diagnosis models does not provide timely diagnostic feedback, this study…
Descriptors: Markov Processes, Formative Evaluation, Evaluation Methods, Feedback (Response)
Peer reviewed
Yesiltas, Gonca; Paek, Insu – Educational and Psychological Measurement, 2020
A log-linear model (LLM) is a well-known statistical method to examine the relationship among categorical variables. This study investigated the performance of LLM in detecting differential item functioning (DIF) for polytomously scored items via simulations where various sample sizes, ability mean differences (impact), and DIF types were…
Descriptors: Simulation, Sample Size, Item Analysis, Scores
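A log-linear DIF test of the kind this abstract describes compares nested models for the ability-stratum x group x item-score contingency table; DIF appears as a direct group-by-score association beyond what the ability strata explain. A hedged sketch using statsmodels' Poisson GLM as the log-linear engine, with fabricated cell counts (the paper's exact design is not reproduced):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical cell counts: ability stratum x group x polytomous score
rng = np.random.default_rng(7)
grid = pd.DataFrame(
    [(s, g, y) for s in range(4) for g in (0, 1) for y in range(3)],
    columns=["stratum", "group", "score"],
)
grid["count"] = rng.poisson(30.0, len(grid))

poisson = sm.families.Poisson()
# No-DIF model: score relates to ability stratum; group affects
# only the margins, not the score distribution within strata.
m0 = smf.glm("count ~ C(stratum)*C(score) + C(stratum)*C(group)",
             data=grid, family=poisson).fit()
# DIF model adds a direct group-by-score association.
m1 = smf.glm("count ~ C(stratum)*C(score) + C(stratum)*C(group)"
             " + C(group)*C(score)",
             data=grid, family=poisson).fit()

# Likelihood-ratio test for uniform DIF
lr = 2 * (m1.llf - m0.llf)
ddf = m1.df_model - m0.df_model
print("LR =", lr, " p =", stats.chi2.sf(lr, ddf))
```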
Peer reviewed
Grice, James W.; Yepez, Maria; Wilson, Nicole L.; Shoda, Yuichi – Educational and Psychological Measurement, 2017
An alternative to null hypothesis significance testing is presented and discussed. This approach, referred to as observation-oriented modeling, is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. In terms of analysis, this novel approach complements traditional methods…
Descriptors: Hypothesis Testing, Models, Observation, Statistical Inference
Peer reviewed
Li, Ming; Harring, Jeffrey R. – Educational and Psychological Measurement, 2017
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
Descriptors: Simulation, Comparative Analysis, Monte Carlo Methods, Guidelines
Peer reviewed
Lamprianou, Iasonas – Educational and Psychological Measurement, 2018
It is common practice for assessment programs to organize qualifying sessions during which the raters (often known as "markers" or "judges") demonstrate their consistency before operational rating commences. Because of the high-stakes nature of many rating activities, the research community tends to continuously explore new…
Descriptors: Social Networks, Network Analysis, Comparative Analysis, Innovation
Peer reviewed
Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo – Educational and Psychological Measurement, 2012
A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
Descriptors: Monte Carlo Methods, Factor Structure, Data Analysis, Psychometrics
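For reference, the baseline procedure the authors revise is classic Horn-style parallel analysis: retain leading factors whose observed eigenvalues exceed those obtained from random data of the same dimensions. A sketch of that classic form (the paper's proposed revision is not implemented here; the example data are fabricated):

```python
import numpy as np

def parallel_analysis(X, n_reps=100, q=0.95, seed=0):
    """Classic parallel analysis: keep leading factors whose observed
    eigenvalues beat the q-quantile of eigenvalues from random
    normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_reps, p))
    for r in range(n_reps):
        Z = rng.standard_normal((n, p))
        sims[r] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    thresh = np.quantile(sims, q, axis=0)
    k = 0
    while k < p and obs[k] > thresh[k]:      # stop at first failure
        k += 1
    return k

# Hypothetical data with a two-factor structure
rng = np.random.default_rng(1)
F = rng.standard_normal((500, 2))
X = F @ rng.standard_normal((2, 8)) + 0.8 * rng.standard_normal((500, 8))
print(parallel_analysis(X))   # typically recovers 2 factors
```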
Peer reviewed
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
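The abstract gives only the idea behind PSER, so the sketch below illustrates the general mechanism rather than the paper's exact criterion: under a 2PL model with a grid posterior, stop when even the best remaining item is predicted to reduce the posterior standard error by less than some threshold. All function names, the item pool, and the min_reduction default are assumptions for illustration.

```python
import numpy as np

GRID = np.linspace(-4, 4, 161)                 # theta quadrature grid
PRIOR = np.exp(-0.5 * GRID ** 2)
PRIOR /= PRIOR.sum()                           # standard normal prior

def p2pl(a, b):
    """2PL probability of a correct response across the theta grid."""
    return 1.0 / (1.0 + np.exp(-a * (GRID - b)))

def sd(post):
    m = np.sum(GRID * post)
    return np.sqrt(np.sum((GRID - m) ** 2 * post))

def predicted_se(post, a, b):
    """Posterior SD expected after giving item (a, b), averaged over
    the predictive distribution of the response."""
    p = p2pl(a, b)
    pc = np.sum(post * p)                      # predictive P(correct)
    post1 = post * p
    post1 /= post1.sum()
    post0 = post * (1 - p)
    post0 /= post0.sum()
    return pc * sd(post1) + (1 - pc) * sd(post0)

def should_stop(post, pool, min_reduction=0.02):
    """Stop when even the most informative remaining item is expected
    to shrink the posterior SE by less than min_reduction."""
    best = min(predicted_se(post, a, b) for a, b in pool)
    return sd(post) - best < min_reduction

# One stopping decision with a hypothetical posterior and item pool
pool = [(1.2, -0.5), (0.9, 0.3), (1.6, 1.1)]
print(should_stop(PRIOR.copy(), pool))
```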
Peer reviewed
Xu, Ting; Stone, Clement A. – Educational and Psychological Measurement, 2012
It has been argued that item response theory trait estimates should be used in analyses rather than number right (NR) or summated scale (SS) scores. Thissen and Orlando postulated that IRT scaling tends to produce trait estimates that are linearly related to the underlying trait being measured. Therefore, IRT trait estimates can be more useful…
Descriptors: Educational Research, Monte Carlo Methods, Measures (Individuals), Item Response Theory
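The contrast this abstract draws is easy to make concrete: a number-right score is the simple sum of correct responses, while an IRT trait estimate weights responses by item parameters. A sketch using an EAP estimate under a 2PL model (a common choice, not necessarily the paper's); the item parameters and responses below are fabricated.

```python
import numpy as np

GRID = np.linspace(-4, 4, 121)
W = np.exp(-0.5 * GRID ** 2)
W /= W.sum()                            # N(0, 1) prior quadrature weights

def eap_theta(u, a, b):
    """EAP trait estimate under a 2PL model; u is a 0/1 response vector,
    a and b are item discriminations and difficulties."""
    P = 1.0 / (1.0 + np.exp(-a * (GRID[:, None] - b)))   # grid x items
    like = np.prod(np.where(u == 1, P, 1 - P), axis=1)
    post = W * like
    post /= post.sum()
    return np.sum(GRID * post)

# Hypothetical 10-item test taken by an examinee with theta = 0.5
rng = np.random.default_rng(3)
a, b = rng.uniform(0.8, 2.0, 10), rng.normal(0, 1, 10)
u = (rng.random(10) < 1 / (1 + np.exp(-a * (0.5 - b)))).astype(int)
print("number right:", u.sum(), " EAP theta:", round(eap_theta(u, a, b), 3))
```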
Peer reviewed
Finch, Holmes; Davis, Andrew; Dean, Raymond S. – Educational and Psychological Measurement, 2010
The current study examined the measurement invariance of the Dean-Woodcock Sensory-Motor Battery (DWSMB) for children diagnosed with attention deficit hyperactivity disorder (ADHD) and an age- and gender-matched nonclinical sample. The DWSMB is a promising new instrument for assessing a wide range of cortical and subcortical sensory and motor…
Descriptors: Attention Deficit Hyperactivity Disorder, Comparative Analysis, Screening Tests, Neurological Impairments
Peer reviewed
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis