Showing 406 to 420 of 3,295 results
Peer reviewed
De Raadt, Alexandra; Warrens, Matthijs J.; Bosker, Roel J.; Kiers, Henk A. L. – Educational and Psychological Measurement, 2019
Cohen's kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen's kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data…
Descriptors: Interrater Reliability, Data, Statistical Analysis, Statistical Bias
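The kappa variants above build on the standard coefficient. A minimal sketch of the baseline computation, assuming listwise deletion of units with a missing rating (the function name, labels, and data are hypothetical; this is not any of the article's three specific variants):

```python
# Illustrative only: Cohen's kappa after dropping units where either rating is missing.
def cohens_kappa_listwise(ratings_a, ratings_b):
    """Cohen's kappa over units where both raters supplied a rating."""
    pairs = [(a, b) for a, b in zip(ratings_a, ratings_b)
             if a is not None and b is not None]
    n = len(pairs)
    cats = sorted({c for pair in pairs for c in pair})
    # Observed agreement: proportion of retained units classified identically.
    p_o = sum(a == b for a, b in pairs) / n
    # Expected agreement under independence of the two raters' marginals.
    p_e = sum(
        (sum(a == c for a, _ in pairs) / n) * (sum(b == c for _, b in pairs) / n)
        for c in cats
    )
    return (p_o - p_e) / (1 - p_e)

rater_1 = ["yes", "no", "yes", None, "no", "yes"]
rater_2 = ["yes", "no", "no", "yes", None, "yes"]
print(cohens_kappa_listwise(rater_1, rater_2))  # 0.5 on this toy data
```

The article's question is precisely how well such missing-data handling recovers the kappa that the complete data would have produced.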
Peer reviewed
Taylor, John M. – Practical Assessment, Research & Evaluation, 2019
Although frequentist estimators can effectively fit ordinal confirmatory factor analysis (CFA) models, their assumptions are difficult to establish and estimation problems may prohibit their use at times. Consequently, researchers may want to also look to Bayesian analysis to fit their ordinal models. Bayesian methods offer researchers an…
Descriptors: Bayesian Statistics, Factor Analysis, Least Squares Statistics, Error of Measurement
Peer reviewed
Bais, Frank; Schouten, Barry; Lugtig, Peter; Toepoel, Vera; Arends-Tòth, Judit; Douhou, Salima; Kieruj, Natalia; Morren, Mattijn; Vis, Corrie – Sociological Methods & Research, 2019
Item characteristics can have a significant effect on survey data quality and may be associated with measurement error. Literature on data quality and measurement error is often inconclusive. This could be because item characteristics used for detecting measurement error are not coded unambiguously. In our study, we use a systematic coding…
Descriptors: Foreign Countries, National Surveys, Error of Measurement, Test Items
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2019
Previous work showing that revised parallel analysis can be effective with dichotomous items has used a two-parameter model and normally distributed abilities. In this study, both two- and three-parameter models were used with normally distributed and skewed ability distributions. Relatively minor skew and kurtosis in the underlying ability…
Descriptors: Item Analysis, Models, Error of Measurement, Item Response Theory
Peer reviewed
Gomes, Hugo S.; Farrington, David P.; Krohn, Marvin D.; Maia, Ângela – International Journal of Social Research Methodology, 2023
Although research on sensitive topics has produced a large body of knowledge on how to improve the quality of self-reported data, little is known regarding the sensitivity of offending questions, and much less is known regarding how topic sensitivity is affected by recall periods. In this study, we developed a multi-dimensional assessment of item…
Descriptors: Self Disclosure (Individuals), Error of Measurement, Recall (Psychology), Crime
Peer reviewed
Lesly Yahaira Rodriguez-Martinez; Paul Hernandez-Martinez; Maria Guadalupe Perez-Martinez – Journal on Mathematics Education, 2023
This paper aims to describe the development process of the Observation Protocol for Teaching Activities in Mathematics (POAEM) and to report the findings from the qualitative and statistical analyses used to provide evidence of validity and reliability of the information collected with the first version of the POAEM. As part of this development…
Descriptors: Thinking Skills, Mathematics Skills, Validity, Error of Measurement
Peer reviewed
Kane, Michael T.; Mroch, Andrew A. – ETS Research Report Series, 2020
Ordinary least squares (OLS) regression and orthogonal regression (OR) address different questions and make different assumptions about errors. The OLS regression of Y on X yields predictions of a dependent variable (Y) contingent on an independent variable (X) and minimizes the sum of squared errors of prediction. It assumes that the independent…
Descriptors: Regression (Statistics), Least Squares Statistics, Test Bias, Error of Measurement
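The OLS/OR contrast in this abstract can be made concrete with closed-form slopes: OLS minimizes vertical errors in Y, while orthogonal regression minimizes perpendicular distances and treats X and Y symmetrically. A hypothetical sketch (the data are invented, and the orthogonal slope uses the standard Deming formula with equal error variances):

```python
import math

def ols_slope(x, y):
    """OLS slope of Y on X: minimizes vertical (prediction) errors only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def orthogonal_slope(x, y):
    """Orthogonal-regression slope: minimizes perpendicular distances,
    assuming equal error variances in X and Y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

x = [1, 2, 3, 4, 5]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
print(ols_slope(x, y), orthogonal_slope(x, y))
```

With positively correlated data the orthogonal slope is at least as steep as the OLS slope, which is one way the two methods answer different questions about the same scatter.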
Peer reviewed
Nguyen, Trang Quynh; Stuart, Elizabeth A. – Journal of Educational and Behavioral Statistics, 2020
We address measurement error bias in propensity score (PS) analysis due to covariates that are latent variables. In the setting where latent covariate X is measured via multiple error-prone items W, PS analysis using several proxies for X--the W items themselves, a summary score (mean/sum of the items), or the conventional factor score (i.e.,…
Descriptors: Error of Measurement, Statistical Bias, Error Correction, Probability
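The measurement-error bias this abstract targets can be illustrated outside the propensity score setting with a simpler attenuation example. A hypothetical sketch (all quantities invented; it shows a regression slope shrinking when an error-prone summary score stands in for the latent covariate, not the article's PS-specific correction):

```python
import random

random.seed(0)
n = 100_000

# Latent covariate X and three error-prone items W measuring it (all hypothetical).
x = [random.gauss(0, 1) for _ in range(n)]
items = [[xi + random.gauss(0, 1) for xi in x] for _ in range(3)]
w = [sum(vals) / 3 for vals in zip(*items)]      # summary score: mean of the items
y = [2 * xi + random.gauss(0, 0.5) for xi in x]  # outcome depends on latent X

def slope(a, b):
    """Simple regression slope of b on a."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = sum((ai - ma) ** 2 for ai in a)
    return num / den

print(slope(x, y))  # near the true effect of 2
print(slope(w, y))  # attenuated toward ~1.5, since var(W) = var(X) + 1/3
```

A propensity score model fit on the summary score inherits the same attenuation, which is the kind of bias the abstract describes.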
Peer reviewed
Jones, Andrew T.; Kopp, Jason P.; Ong, Thai Q. – Educational Measurement: Issues and Practice, 2020
Studies investigating invariance have often been limited to measurement or prediction invariance. Selection invariance, wherein the use of test scores for classification results in equivalent classification accuracy between groups, has received comparatively little attention in the psychometric literature. Previous research suggests that some form…
Descriptors: Test Construction, Test Bias, Classification, Accuracy
Peer reviewed
Phillippo, David M.; Dias, Sofia; Ades, A. E.; Welton, Nicky J. – Research Synthesis Methods, 2020
Indirect comparisons are used to obtain estimates of relative effectiveness between two treatments that have not been compared in the same randomized controlled trial, but have instead been compared against a common comparator in separate trials. Standard indirect comparisons use only aggregate data, under the assumption that there are no…
Descriptors: Comparative Analysis, Outcomes of Treatment, Patients, Randomized Controlled Trials
Peer reviewed
Eames, Cheryl L.; Barrett, Jeffrey E.; Cullen, Craig J.; Rutherford, George; Klanderman, David; Clements, Douglas H.; Sarama, Julie; Van Dine, Douglas W. – School Science and Mathematics, 2020
This study explored children's area estimation performance. Two groups of fourth grade children completed area estimation tasks with rectangles ranging from 5 to 200 square units. A randomly assigned treatment group completed instructional sessions that involved a conceptual area measurement strategy along with numerical feedback. Children tended…
Descriptors: Elementary School Mathematics, Elementary School Students, Grade 4, Computation
Peer reviewed
Koçak, Duygu – Pedagogical Research, 2020
The number of iterations in the Monte Carlo simulation method, which is commonly used in educational research, has an effect on Item Response Theory test and item parameters. Related studies show that the number of iterations is left to the discretion of the researcher. Similarly, no specific number of iterations is suggested in the related literature.…
Descriptors: Monte Carlo Methods, Item Response Theory, Educational Research, Test Items
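The iteration-count question can be made concrete with a generic Monte Carlo example (hypothetical and unrelated to IRT parameter recovery; it only illustrates why the error of a simulation estimate shrinks roughly as one over the square root of the number of iterations, so the choice of iteration count matters):

```python
import random

random.seed(1)

def mc_estimate(n_iter):
    """Monte Carlo estimate of E[X] for X ~ Uniform(0, 1); true value is 0.5."""
    return sum(random.random() for _ in range(n_iter)) / n_iter

# The absolute error shrinks roughly as 1/sqrt(n_iter): each extra
# digit of accuracy costs about 100x more iterations.
for r in (100, 10_000, 1_000_000):
    print(r, abs(mc_estimate(r) - 0.5))
```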
Peer reviewed
Goodman, Joshua T.; Dallas, Andrew D.; Fan, Fen – Applied Measurement in Education, 2020
Recent research has suggested that re-setting the standard for each administration of a small-sample examination, in addition to being costly, does not adequately maintain similar performance expectations from year to year. Small-sample equating methods have shown promise with samples between 20 and 30. For groups that have fewer than 20 students,…
Descriptors: Equated Scores, Sample Size, Sampling, Weighted Scores
Nese, Joseph F. T.; Kamata, Akihito – Grantee Submission, 2020
Curriculum-based measurement of oral reading fluency (CBM-R) is widely used across the country as a quick measure of reading proficiency that also serves as a good predictor of comprehension and overall reading achievement, but has several practical and technical inadequacies, including a large standard error of measurement (SEM). Reducing the SEM…
Descriptors: Curriculum Based Assessment, Oral Reading, Reading Fluency, Error of Measurement
Peer reviewed
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations