Showing 1 to 15 of 44 results
Peer reviewed
Huang, Sijia; Luo, Jinwen; Cai, Li – Educational and Psychological Measurement, 2023
Random item effects item response theory (IRT) models, which treat both person and item effects as random, have received much attention for more than a decade. The random item effects approach has several advantages in many practical settings. The present study introduced an explanatory multidimensional random item effects rating scale model. The…
Descriptors: Rating Scales, Item Response Theory, Models, Test Items
Peer reviewed
Raykov, Tenko; Anthony, James C.; Menold, Natalja – Educational and Psychological Measurement, 2023
The population relationship between coefficient alpha and scale reliability is studied in the widely used setting of unidimensional multicomponent measuring instruments. It is demonstrated that for any set of component loadings on the common factor, regardless of the extent of their inequality, the discrepancy between alpha and reliability can be…
Descriptors: Correlation, Evaluation Research, Reliability, Measurement Techniques
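The alpha-versus-reliability discrepancy studied in the entry above can be illustrated numerically for a congeneric (single-factor) instrument. This is a sketch only: the loadings and error variances below are hypothetical values chosen for illustration, not figures from the article. Under the congeneric model, the sum score's true reliability is coefficient omega, and Cronbach's alpha falls below it when loadings are unequal.

```python
import numpy as np

# Hypothetical congeneric model: unequal loadings on one common factor.
# (Values chosen for illustration; they are not taken from the article.)
loadings = np.array([0.9, 0.7, 0.5, 0.3])      # factor loadings
uniques  = np.array([0.19, 0.51, 0.75, 0.91])  # error (uniqueness) variances

k = len(loadings)
# Model-implied covariance matrix: Sigma = L L' + diag(psi)
Sigma = np.outer(loadings, loadings) + np.diag(uniques)

total = Sigma.sum()                                    # variance of the sum score
alpha = k / (k - 1) * (1 - np.trace(Sigma) / total)    # Cronbach's alpha
omega = loadings.sum() ** 2 / total                    # true composite reliability

print(f"alpha = {alpha:.3f}, reliability = {omega:.3f}")
# → alpha = 0.677, reliability = 0.709
```

With equal loadings the two coincide; the more unequal the loadings, the larger the gap, which is the population relationship the abstract refers to.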
Peer reviewed
Raykov, Tenko; Calvocoressi, Lisa – Educational and Psychological Measurement, 2021
A procedure for evaluating the average R-squared index for a given set of observed variables in an exploratory factor analysis model is discussed. The method can be used as an effective aid in the process of model choice with respect to the number of factors underlying the interrelationships among studied measures. The approach is developed within…
Descriptors: Factor Analysis, Structural Equation Models, Statistical Analysis, Selection
Peer reviewed
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices is discussed that evaluates the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
Peer reviewed
Ferrando, Pere Joan; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2019
Many psychometric measures yield data that are compatible with (a) an essentially unidimensional factor analysis solution and (b) a correlated-factor solution. Deciding which of these structures is the most appropriate and useful is of considerable importance, and various procedures have been proposed to help in this decision. The only fully…
Descriptors: Validity, Models, Correlation, Factor Analysis
Peer reviewed
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing
Peer reviewed
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal alpha (α) proposed in Zumbo et al. as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
Peer reviewed
Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M. – Educational and Psychological Measurement, 2015
This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…
Descriptors: Factor Analysis, Open Source Technology, Models, Structural Equation Models
Peer reviewed
Paek, Insu; Cui, Mengyao; Öztürk Gübes, Nese; Yang, Yanyun – Educational and Psychological Measurement, 2018
The purpose of this article is twofold. The first is to provide evaluative information on the recovery of model parameters and their standard errors for the two-parameter item response theory (IRT) model using different estimation methods by Mplus. The second is to provide easily accessible information for practitioners, instructors, and students…
Descriptors: Item Response Theory, Computation, Factor Analysis, Statistical Analysis
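The two-parameter logistic (2PL) model whose parameter recovery the entry above evaluates has a simple closed-form item response function. A minimal sketch (the parameter values passed in are hypothetical, for illustration only):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function:
    probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is 0.5 regardless of discrimination
print(p_correct(0.0, 1.2, 0.0))  # → 0.5
```

Estimation methods such as those compared in the article recover the `a` and `b` parameters (and their standard errors) from observed 0/1 response patterns.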
Peer reviewed
Marmolejo-Ramos, Fernando; Cousineau, Denis – Educational and Psychological Measurement, 2017
The number of articles expressing dissatisfaction with the null hypothesis statistical testing (NHST) framework has increased steadily over the years. Alternatives to NHST have been proposed, and the Bayesian approach seems to have achieved the greatest visibility. In this last part of the special issue, a few alternative…
Descriptors: Hypothesis Testing, Bayesian Statistics, Evaluation Methods, Statistical Inference
Peer reviewed
Liu, Ren – Educational and Psychological Measurement, 2018
Attribute structure is an explicit way of presenting the relationship between attributes in diagnostic measurement. The specification of attribute structures directly affects the classification accuracy resulting from psychometric modeling. This study provides a conceptual framework for understanding misspecifications of attribute structures. Under…
Descriptors: Diagnostic Tests, Classification, Test Construction, Relationship
Peer reviewed
McNeish, Daniel – Educational and Psychological Measurement, 2017
In the behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat the small samples common in longitudinal data. Although Mplus is becoming an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…
Descriptors: Models, Bayesian Statistics, Statistical Analysis, Computer Software
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2015
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
Descriptors: Computation, Statistical Analysis, Reliability, Models
Peer reviewed
Ames, Allison J.; Samonte, Kelli – Educational and Psychological Measurement, 2015
Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian…
Descriptors: Item Response Theory, Bayesian Statistics, Computation, Computer Software
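As a toy counterpart to the Bayesian IRT estimation surveyed in the entry above: the cited software (WinBUGS, SAS PROC MCMC, etc.) relies on MCMC, but the core idea of combining a prior over ability with an IRT likelihood can be shown with a deliberately simpler grid approximation. All item parameters and responses below are hypothetical, chosen only for illustration.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item parameters and one examinee's scored (0/1) responses
a = np.array([1.0, 1.5, 0.8])    # discriminations
b = np.array([-0.5, 0.0, 1.0])   # difficulties
x = np.array([1, 1, 0])          # correct, correct, incorrect

grid = np.linspace(-4.0, 4.0, 801)   # ability grid
dx = grid[1] - grid[0]
prior = np.exp(-0.5 * grid**2)       # standard normal prior (unnormalized)
P = p2pl(grid[:, None], a, b)        # 801 x 3 response probabilities
like = np.prod(P**x * (1.0 - P)**(1 - x), axis=1)
post = prior * like
post /= post.sum() * dx              # normalize to a density on the grid
eap = np.sum(grid * post) * dx       # expected a posteriori (EAP) ability
print(round(eap, 3))
```

MCMC samplers replace the grid with draws from the posterior, which scales to the multidimensional models the article discusses; the grid version is only feasible here because ability is one-dimensional.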
Peer reviewed
Andrich, David – Educational and Psychological Measurement, 2016
This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…
Descriptors: Statistics, Item Response Theory, Rating Scales, Mathematical Models