Showing all 12 results
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Peer reviewed
Levy, Roy – Educational Psychologist, 2016
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Descriptors: Bayesian Statistics, Models, Educational Research, Innovation
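A minimal conjugate-updating sketch of the Bayesian/frequentist contrast the abstract draws; the numbers and the Beta-Binomial setup are invented for illustration, not taken from the article.

```python
def beta_posterior(successes, trials, a_prior=1.0, b_prior=1.0):
    """Beta(a, b) posterior parameters after binomial data under a Beta prior."""
    return a_prior + successes, b_prior + trials - successes

# Suppose 14 of 20 examinees answer an item correctly.
a, b = beta_posterior(14, 20)        # flat Beta(1, 1) prior
posterior_mean = a / (a + b)         # 15 / 22, shrunk slightly toward 0.5
freq_estimate = 14 / 20              # maximum-likelihood estimate: 0.70
print(posterior_mean, freq_estimate)
```

The posterior mean blends prior and data, whereas the frequentist estimate uses the data alone; with a flat prior the two differ only by the mild shrinkage visible here.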
Peer reviewed
Blömeke, Sigrid; Dunekacke, Simone; Jenßen, Lars – European Early Childhood Education Research Journal, 2017
This study examined the level, structure and cognitive, educational and psychological determinants of beliefs about the relevance and nature of mathematics, about gender stereotypes with respect to mathematics abilities, and about enjoyment of mathematics. Prospective preschool teachers from programs at vocational schools and higher education…
Descriptors: Foreign Countries, Preservice Teachers, Beliefs, Mathematics Education
Peer reviewed
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
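A pure-NumPy sketch (not the authors' code) of the design this abstract describes: a cluster randomized trial where the observed cluster mean of a Level-1 covariate is entered as a cluster-level covariate. All parameter values and the data-generating model are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
J, n = 40, 25                                 # clusters, units per cluster
tau = 0.5                                     # true treatment effect

treat_j = rng.permutation([0, 1] * (J // 2))  # cluster-level assignment
treat = np.repeat(treat_j, n)
x = rng.normal(size=(J, n))                   # Level-1 covariate
xbar_j = x.mean(axis=1)                       # observed cluster means
x_mean = np.repeat(xbar_j, n)
# Cluster effects are tied to the cluster means, so the covariate can
# absorb cluster-level variance (invented for this illustration).
u_j = 0.8 * xbar_j + rng.normal(scale=0.3, size=J)
y = tau * treat + np.repeat(u_j, n) + rng.normal(size=J * n)

def ols(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    return beta, rss

ones = np.ones(J * n)
b0, rss0 = ols(np.column_stack([ones, treat]), y)          # no covariate
b1, rss1 = ols(np.column_stack([ones, treat, x_mean]), y)  # + cluster mean
print(b0[1], rss0)   # treatment estimate without the covariate
print(b1[1], rss1)   # the cluster mean soaks up cluster-level variance
```

The smaller residual variance in the second fit is the mechanism behind the power gain the study investigates; the paper's actual comparison uses multilevel models and Monte Carlo replication rather than this single OLS run.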
Peer reviewed
DeMars, Christine E. – Structural Equation Modeling: A Multidisciplinary Journal, 2012
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
Descriptors: Item Response Theory, Structural Equation Models, Computation, Computer Software
Peer reviewed
Kim, Doyoung; De Ayala, R. J.; Ferdous, Abdullah A.; Nering, Michael L. – Applied Psychological Measurement, 2011
To realize the benefits of item response theory (IRT), one must have model-data fit. One facet of a model-data fit investigation involves assessing the tenability of the conditional item independence (CII) assumption. In this Monte Carlo study, the comparative performance of 10 indices for identifying conditional item dependence is assessed. The…
Descriptors: Item Response Theory, Monte Carlo Methods, Error of Measurement, Statistical Analysis
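A hedged sketch of one common index for conditional item dependence, Yen's Q3, computed for two simulated 2PL items that are in fact conditionally independent. The item parameters and the choice of index are illustrative; the paper itself compares 10 such indices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
theta = rng.normal(size=N)                 # latent abilities
a = np.array([1.2, 0.8])                   # discriminations (invented)
b = np.array([-0.3, 0.5])                  # difficulties (invented)
P = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # 2PL probabilities
X = (rng.random((N, 2)) < P).astype(float)            # simulated responses
resid = X - P                              # residuals given the true model
q3 = np.corrcoef(resid[:, 0], resid[:, 1])[0, 1]      # Q3 for the item pair
print(q3)   # hovers near 0 when conditional independence holds
```

Large |Q3| values flag item pairs whose responses remain correlated after conditioning on ability, i.e., violations of the CII assumption the abstract refers to.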
Peer reviewed
De Naeghel, Jessie; Van Keer, Hilde; Vansteenkiste, Maarten; Rosseel, Yves – Journal of Educational Psychology, 2012
Research indicates the need to further examine the dimensions of reading motivation. A clear theoretical basis is necessary for conceptualizing reading motivation and considering contextual differences therein. The present study develops and validates the SRQ-Reading Motivation, a questionnaire measuring recreational and academic reading…
Descriptors: Validity, Measurement, Reading Comprehension, Construct Validity
Peer reviewed
Willse, John T.; Goodman, Joshua T. – Educational and Psychological Measurement, 2008
This research provides a direct comparison of effect size estimates based on structural equation modeling (SEM), item response theory (IRT), and raw scores. Differences between the SEM, IRT, and raw score approaches are examined under a variety of data conditions (IRT models underlying the data, test lengths, magnitude of group differences, and…
Descriptors: Test Length, Structural Equation Models, Effect Size, Raw Scores
Peer reviewed
Woods, Carol M. – Multivariate Behavioral Research, 2009
Differential item functioning (DIF) occurs when an item on a test or questionnaire has different measurement properties for one group of people versus another, irrespective of mean differences on the construct. This study focuses on the use of multiple-indicator multiple-cause (MIMIC) structural equation models for DIF testing, parameterized as item…
Descriptors: Test Bias, Structural Equation Models, Item Response Theory, Testing
Peer reviewed
Song, Xin-Yuan; Lee, Sik-Yum – Structural Equation Modeling: A Multidisciplinary Journal, 2006
Structural equation models are widely used in social-psychological and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation…
Descriptors: Structural Equation Models, Maximum Likelihood Statistics, Item Response Theory, Error of Measurement
Peer reviewed
Lu, Irene R. R.; Thomas, D. Roland; Zumbo, Bruno D. – Structural Equation Modeling: A Multidisciplinary Journal, 2005
This article reviews the problems associated with using item response theory (IRT)-based latent variable scores for analytical modeling, discusses the connection between IRT and structural equation modeling (SEM)-based latent regression modeling for discrete data, and compares regression parameter estimates obtained using predicted IRT scores and…
Descriptors: Least Squares Statistics, Item Response Theory, Structural Equation Models, Comparative Analysis
Peer reviewed
PDF available on ERIC
von Davier, Alina A.; Carstensen, Claus H.; von Davier, Matthias – ETS Research Report Series, 2006
Measuring and linking competencies require special instruments, special data collection designs, and special statistical models. The measurement instruments are tests or test forms, which can be used in the following situations: The same test can be given repeatedly; two or more parallel test forms (i.e., forms intended to be similar in…
Descriptors: Scores, Measurement Techniques, Competence, Comparative Analysis