Showing 1 to 15 of 105 results
Peer reviewed
Direct link
Philipp Sterner; Kim De Roover; David Goretzko – Structural Equation Modeling: A Multidisciplinary Journal, 2025
When comparing relations and means of latent variables, it is important to establish measurement invariance (MI). Most methods to assess MI are based on confirmatory factor analysis (CFA). Recently, new methods have been developed based on exploratory factor analysis (EFA); most notably, as extensions of multi-group EFA, researchers introduced…
Descriptors: Error of Measurement, Measurement Techniques, Factor Analysis, Structural Equation Models
Peer reviewed
Direct link
Abdolvahab Khademi; Craig S. Wells; Maria Elena Oliveri; Ester Villalonga-Olives – SAGE Open, 2023
The most common effect sizes when using a multiple-group confirmatory factor analysis approach to measurement invariance are ΔCFI and ΔTLI, with a cutoff value of 0.01. However, this recommended cutoff value may not be ubiquitously appropriate and may be of limited application for some tests (e.g., measures using dichotomous items or…
Descriptors: Factor Analysis, Factor Structure, Error of Measurement, Test Items
Peer reviewed
Direct link
Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
Peer reviewed
Direct link
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
Peer reviewed
PDF on ERIC Download full text
Kannan, Priya; Zapata-Rivera, Diego; Bryant, Andrew D. – Practical Assessment, Research & Evaluation, 2021
Individual-student score reports sometimes include information about the precision of scores (i.e., measurement error). In this study, we specifically investigated whether parents understand this information when it is presented. We conducted an online experimental study in which 196 parents of middle school children, from various parts of the country, were…
Descriptors: Comprehension, Parents, Error of Measurement, Test Interpretation
Peer reviewed
Direct link
Zhong Jian Chee; Anke M. Scheeren; Marieke de Vries – Autism: The International Journal of Research and Practice, 2024
Despite several psychometric advantages over the 50-item Autism Spectrum Quotient, an instrument used to measure autistic traits, the abridged AQ-28 and its cross-cultural validity have not been examined as extensively. Therefore, this study aimed to examine the factor structure and measurement invariance of the AQ-28 in 818 Dutch…
Descriptors: Autism Spectrum Disorders, Questionnaires, Factor Structure, Factor Analysis
Wang, Huan; Kim, Dong-In – Online Submission, 2022
A fundamental premise in assessment is that the underlying construct is equivalent across different groups of students and that this structure does not vary over years. The pandemic has potentially impacted opportunity to learn and disrupted the internal structure of assessments in various ways. Past research has suggested that students tended to…
Descriptors: Measurement, Error of Measurement, COVID-19, Pandemics
Peer reviewed
PDF on ERIC Download full text
Mustafa, Faisal; Anwar, Samsul – Online Submission, 2018
Paper-based TOEFL scores have been used to determine the level of English proficiency of EFL learners for various purposes. However, scores fluctuate across repeated tests, sometimes decreasing despite no additional classroom learning, and thus cannot be used to judge the English level of those taking the test. There is limited research into the lowest score…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scores
Peer reviewed
Direct link
Willse, John T. – Measurement and Evaluation in Counseling and Development, 2017
This article provides a brief introduction to the Rasch model. Motivation for using Rasch analyses is provided. Important Rasch model concepts and key aspects of result interpretation are introduced, with major points reinforced using a simulation demonstration. Concrete guidelines are provided regarding sample size and the evaluation of items.
Descriptors: Item Response Theory, Test Results, Test Interpretation, Simulation
Peer reviewed
Direct link
Hidalgo, Ma Dolores; Benítez, Isabel; Padilla, Jose-Luis; Gómez-Benito, Juana – Sociological Methods & Research, 2017
The growing use of scales in survey questionnaires warrants attention to how polytomous differential item functioning (DIF) affects observed scale score comparisons. The aim of this study is to investigate the impact of DIF on the Type I error and effect size of the independent samples t-test on the observed total scale scores. A…
Descriptors: Test Items, Test Bias, Item Response Theory, Surveys
Peer reviewed
PDF on ERIC Download full text
Monroe, Scott; Cai, Li – Grantee Submission, 2015
This research is concerned with two topics in assessing model fit for categorical data analysis. The first topic involves the application of a limited-information overall test, introduced in the item response theory literature, to Structural Equation Modeling (SEM) of categorical outcome variables. Most popular SEM test statistics assess how well…
Descriptors: Structural Equation Models, Test Interpretation, Goodness of Fit, Item Response Theory
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J. – ACT, Inc., 2016
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
Descriptors: Scores, Classification, College Entrance Examinations, Error of Measurement
Peer reviewed
Direct link
Fan, Xitao; Sun, Shaojing – Journal of Early Adolescence, 2014
In adolescence research, the treatment of measurement reliability is often fragmented, and it is not always clear how different reliability coefficients are related. We show that generalizability theory (G-theory) is a comprehensive framework of measurement reliability, encompassing all other reliability methods (e.g., Pearson "r,"…
Descriptors: Generalizability Theory, Measurement, Reliability, Correlation
Peer reviewed
Direct link
Kane, Michael – Journal of Educational Measurement, 2011
Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…
Descriptors: Error of Measurement, Scores, Test Interpretation, Testing
Peer reviewed
PDF on ERIC Download full text
Moses, Tim – ETS Research Report Series, 2013
The purpose of this report is to review ETS psychometric contributions that focus on test scores. Two major sections review contributions on assessing the measurement characteristics of test scores and on using test scores as predictors in correlational and regression relationships. A further section reviews additional…
Descriptors: Psychometrics, Scores, Correlation, Regression (Statistics)