Showing 421 to 435 of 3,316 results
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2018
This note extends the results in the 2016 article by Raykov, Marcoulides, and Li to the case of correlated errors in a set of observed measures subjected to principal component analysis. It is shown that when at least two measures are fallible, the probability is zero for any principal component--and in particular for the first principal…
Descriptors: Factor Analysis, Error of Measurement, Correlation, Reliability
Peer reviewed
Gomes, Hugo S.; Farrington, David P.; Krohn, Marvin D.; Maia, Ângela – International Journal of Social Research Methodology, 2023
Although research on sensitive topics has produced a large body of knowledge on how to improve the quality of self-reported data, little is known regarding the sensitivity of offending questions, and much less is known regarding how topic sensitivity is affected by recall periods. In this study, we developed a multi-dimensional assessment of item…
Descriptors: Self Disclosure (Individuals), Error of Measurement, Recall (Psychology), Crime
Peer reviewed
Lesly Yahaira Rodriguez-Martinez; Paul Hernandez-Martinez; Maria Guadalupe Perez-Martinez – Journal on Mathematics Education, 2023
This paper aims to describe the development process of the Observation Protocol for Teaching Activities in Mathematics (POAEM) and to report the findings from the qualitative and statistical analyses used to provide evidence of validity and reliability of the information collected with the first version of the POAEM. As part of this development…
Descriptors: Thinking Skills, Mathematics Skills, Validity, Error of Measurement
Peer reviewed
Lin, Lifeng – Research Synthesis Methods, 2019
Assessing publication bias is a critical step in meta-analyses for rating the synthesized overall evidence. Because statistical tests for publication bias usually have low power and yield only "P" values indicating the presence or absence of bias, examining the asymmetry of funnel plots has been a popular way to investigate…
Descriptors: Meta Analysis, Sample Size, Graphs, Bias
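Funnel-plot asymmetry is often checked statistically with Egger's regression test, which regresses each study's standardized effect on its precision; an intercept far from zero suggests asymmetry. A minimal NumPy sketch follows — the effect sizes and standard errors are made up for illustration and are not from the article.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry.
# The input data below are hypothetical, purely for illustration.
import numpy as np

def egger_regression(effects, ses):
    """Regress effect/SE on 1/SE; an intercept far from zero
    suggests funnel-plot asymmetry (possible publication bias)."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses              # standardized effects
    precision = 1.0 / ses
    X = np.column_stack([np.ones_like(precision), precision])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope

# A perfectly symmetric funnel: every study estimates the same
# effect (0.3), so the intercept is ~0 and the slope recovers 0.3.
b0, b1 = egger_regression([0.3] * 5, [0.05, 0.10, 0.15, 0.20, 0.25])
```

With real meta-analytic data the interesting case is a nonzero intercept, typically accompanied by a significance test on it.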
Peer reviewed
Pustejovsky, James E.; Rodgers, Melissa A. – Research Synthesis Methods, 2019
Publication bias and other forms of outcome reporting bias are critical threats to the validity of findings from research syntheses. A variety of methods have been proposed for detecting selective outcome reporting in a collection of effect size estimates, including several methods based on assessment of asymmetry of funnel plots, such as the…
Descriptors: Effect Size, Regression (Statistics), Statistical Analysis, Error of Measurement
Peer reviewed
Gönülates, Emre – Educational and Psychological Measurement, 2019
This article introduces the Quality of Item Pool (QIP) Index, a novel approach to quantifying the adequacy of an item pool of a computerized adaptive test for a given set of test specifications and examinee population. This index ranges from 0 to 1, with values close to 1 indicating the item pool presents optimum items to examinees throughout the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Error of Measurement
Peer reviewed
De Raadt, Alexandra; Warrens, Matthijs J.; Bosker, Roel J.; Kiers, Henk A. L. – Educational and Psychological Measurement, 2019
Cohen's kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen's kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data…
Descriptors: Interrater Reliability, Data, Statistical Analysis, Statistical Bias
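Cohen's kappa itself is straightforward to compute from paired nominal ratings. As a hedged illustration, the sketch below handles missing data by listwise deletion (dropping units where either rating is missing) — one simple possibility, not necessarily any of the three variants studied in the article.

```python
# Cohen's kappa for two raters on a nominal scale, with listwise
# deletion of units where either rating is missing (None). This
# deletion rule is an illustrative choice, not the article's method.
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    # Keep only units rated by both raters.
    pairs = [(a, b) for a, b in zip(ratings1, ratings2)
             if a is not None and b is not None]
    n = len(pairs)
    # Observed agreement: proportion of units with identical ratings.
    p_obs = sum(a == b for a, b in pairs) / n
    # Chance agreement: product of each rater's marginal proportions.
    c1 = Counter(a for a, _ in pairs)
    c2 = Counter(b for _, b in pairs)
    p_exp = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

r1 = ["x", "x", "y", "y", None, "x"]
r2 = ["x", "y", "y", "y", "x",  "x"]
kappa = cohens_kappa(r1, r2)  # computed on the 5 complete pairs
```

The question the article addresses — how well such variants estimate the complete-data kappa, and how biased they are — depends on the missingness mechanism, which a toy example like this cannot show.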
Peer reviewed
Taylor, John M. – Practical Assessment, Research & Evaluation, 2019
Although frequentist estimators can effectively fit ordinal confirmatory factor analysis (CFA) models, their assumptions are difficult to establish and estimation problems may at times prohibit their use. Consequently, researchers may also want to look to Bayesian analysis to fit their ordinal models. Bayesian methods offer researchers an…
Descriptors: Bayesian Statistics, Factor Analysis, Least Squares Statistics, Error of Measurement
Peer reviewed
Bais, Frank; Schouten, Barry; Lugtig, Peter; Toepoel, Vera; Arends-Tòth, Judit; Douhou, Salima; Kieruj, Natalia; Morren, Mattijn; Vis, Corrie – Sociological Methods & Research, 2019
Item characteristics can have a significant effect on survey data quality and may be associated with measurement error. Literature on data quality and measurement error is often inconclusive. This could be because item characteristics used for detecting measurement error are not coded unambiguously. In our study, we use a systematic coding…
Descriptors: Foreign Countries, National Surveys, Error of Measurement, Test Items
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2019
Previous work showing that revised parallel analysis can be effective with dichotomous items has used a two-parameter model and normally distributed abilities. In this study, both two- and three-parameter models were used with normally distributed and skewed ability distributions. Relatively minor skew and kurtosis in the underlying ability…
Descriptors: Item Analysis, Models, Error of Measurement, Item Response Theory
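For context, classic (Horn's) parallel analysis retains components whose observed eigenvalues exceed those expected from random data of the same dimensions. The sketch below is that baseline procedure for continuous data — the revised variant for dichotomous items studied in the article is more involved.

```python
# Bare-bones Horn's parallel analysis for continuous data: retain
# components whose correlation-matrix eigenvalues exceed the mean
# eigenvalues of same-shaped random normal data. Illustrative only;
# not the revised variant for dichotomous items from the article.
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        sims[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    # Retention rule: observed eigenvalue above the mean random one.
    return int(np.sum(obs > sims.mean(axis=0)))
```

On data generated from a single common factor, this procedure should recover exactly one component; skewed abilities and guessing (the three-parameter case in the article) complicate that picture.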
Peer reviewed
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
Peer reviewed
Jobst, Lisa J.; Auerswald, Max; Moshagen, Morten – Educational and Psychological Measurement, 2022
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor analytic structure of the data, which is defined as the sum of latent variables and errors, so it is unclear whether previous results hold if the source of…
Descriptors: Goodness of Fit, Structural Equation Models, Error of Measurement, Factor Analysis
Peer reviewed
Kane, Michael T.; Mroch, Andrew A. – ETS Research Report Series, 2020
Ordinary least squares (OLS) regression and orthogonal regression (OR) address different questions and make different assumptions about errors. The OLS regression of Y on X yields predictions of a dependent variable (Y) contingent on an independent variable (X) and minimizes the sum of squared errors of prediction. It assumes that the independent…
Descriptors: Regression (Statistics), Least Squares Statistics, Test Bias, Error of Measurement
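The contrast the authors draw — OLS minimizes vertical (prediction) errors while orthogonal regression minimizes perpendicular distances and treats the two variables symmetrically — can be made concrete in a few lines. The toy data below are hypothetical.

```python
# Contrast between OLS and orthogonal (total least squares) slopes.
# Toy illustration of the distinction described in the abstract.
import numpy as np

def ols_slope(x, y):
    """Minimizes sum of squared vertical errors of Y given X."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def orthogonal_slope(x, y):
    """Minimizes squared perpendicular distances to the line,
    treating X and Y symmetrically (assumes a non-vertical line)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    M = np.column_stack([x - x.mean(), y - y.mean()])
    # The right singular vector with the smallest singular value is
    # normal to the fitted line; convert that normal to a slope.
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    nx, ny = vt[-1]
    return -nx / ny
```

On exactly linear data both slopes agree; once X carries error, the OLS slope attenuates toward zero while the orthogonal slope does not, which is why the two address different questions about bias.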
Peer reviewed
Nguyen, Trang Quynh; Stuart, Elizabeth A. – Journal of Educational and Behavioral Statistics, 2020
We address measurement error bias in propensity score (PS) analysis due to covariates that are latent variables. In the setting where latent covariate X is measured via multiple error-prone items W, PS analysis using several proxies for X--the W items themselves, a summary score (mean/sum of the items), or the conventional factor score (i.e.,…
Descriptors: Error of Measurement, Statistical Bias, Error Correction, Probability
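The problem the authors describe can be simulated directly: estimate a propensity score from an error-prone proxy (here the mean of noisy items W) for a latent confounder X, then weight. Everything below is a hypothetical sketch — the settings are invented, and this shows the bias problem, not the authors' correction methods.

```python
# Simulation of PS analysis with a latent confounder X measured by
# noisy items W. Uses the summary-score (item-mean) proxy for the
# propensity model. All settings are made up for illustration.
import numpy as np

def fit_logistic(x, t, iters=30):
    """Newton-Raphson logistic regression of t on [1, x];
    returns fitted propensity scores."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(2)
        beta += np.linalg.solve(H, X.T @ (t - p))
    return 1.0 / (1.0 + np.exp(-X @ beta))

rng = np.random.default_rng(42)
n = 5000
x_latent = rng.standard_normal(n)                          # confounder X
w_items = x_latent[:, None] + 0.5 * rng.standard_normal((n, 3))
proxy = w_items.mean(axis=1)                               # summary score
t = (rng.random(n) < 1.0 / (1.0 + np.exp(-x_latent))).astype(float)
y = 1.0 * t + x_latent + rng.standard_normal(n)            # true effect = 1

naive = y[t == 1].mean() - y[t == 0].mean()                # confounded
ps = fit_logistic(proxy, t)
ipw_w = t / ps + (1 - t) / (1 - ps)                        # IP weights
ipw = (np.average(y[t == 1], weights=ipw_w[t == 1])
       - np.average(y[t == 0], weights=ipw_w[t == 0]))
```

Weighting on the proxy-based score removes much, but not all, of the confounding: measurement error in W leaves residual bias in the IPW estimate, which is the motivation for the corrections the article develops.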
Peer reviewed
Jones, Andrew T.; Kopp, Jason P.; Ong, Thai Q. – Educational Measurement: Issues and Practice, 2020
Studies investigating invariance have often been limited to measurement or prediction invariance. Selection invariance, wherein the use of test scores for classification results in equivalent classification accuracy between groups, has received comparatively little attention in the psychometric literature. Previous research suggests that some form…
Descriptors: Test Construction, Test Bias, Classification, Accuracy