Publication Date
  In 2025: 1
  Since 2024: 1
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 3
  Since 2006 (last 20 years): 9
Descriptor
  Bias: 10
  Error of Measurement: 10
  Models: 10
  Computation: 4
  Data Analysis: 3
  Equated Scores: 3
  Classification: 2
  Equations (Mathematics): 2
  Evaluation Methods: 2
  Evidence: 2
  Hierarchical Linear Modeling: 2
Author
  Beretvas, S. Natasha: 1
  Bernstein, Lawrence: 1
  Boyer, Michelle: 1
  Burstein, Nancy: 1
  Cao, Yi: 1
  Chen, Qi: 1
  Culpepper, Steven Andrew: 1
  Kwok, Oi-Man: 1
  Luo, Wen: 1
  Meyers, Jason L.: 1
  Westine, Carl: 1
Publication Type
  Journal Articles: 9
  Reports - Research: 7
  Reports - Evaluative: 2
  Reports - Descriptive: 1
  Speeches/Meeting Papers: 1
Wu, Tong; Kim, Stella Y.; Westine, Carl; Boyer, Michelle – Journal of Educational Measurement, 2025
While significant attention has been given to test equating to ensure score comparability, limited research has explored equating methods for rater-mediated assessments, where human raters inherently introduce error. If not properly addressed, these errors can undermine score interchangeability and test validity. This study proposes an equating…
Descriptors: Item Response Theory, Evaluators, Error of Measurement, Test Validity
Tao, Wei; Cao, Yi – Applied Measurement in Education, 2016
Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…
Descriptors: Item Response Theory, Equated Scores, Test Format, Models
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample size. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
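The data-generating model behind such a Monte Carlo study can be sketched as a random-intercept logistic model. The cluster counts, parameter values, and variable names below are illustrative assumptions, not values taken from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

n_clusters = 100      # level-2 units (e.g., schools)
cluster_size = 30     # level-1 units per cluster (e.g., students)
gamma0, gamma1 = -0.5, 0.8   # fixed intercept and slope (assumed)
tau = 0.5             # SD of the random intercept (variance component)

u = rng.normal(0.0, tau, size=n_clusters)          # random intercepts
x = rng.normal(size=(n_clusters, cluster_size))    # level-1 predictor
eta = gamma0 + gamma1 * x + u[:, None]             # linear predictor
p = 1.0 / (1.0 + np.exp(-eta))                     # inverse-logit link
y = rng.binomial(1, p)                             # binary outcomes

# Intraclass correlation on the latent scale: tau^2 / (tau^2 + pi^2/3)
icc = tau**2 / (tau**2 + np.pi**2 / 3)
```

A full study would fit the model back to many such replicated data sets and compare recovered estimates of `gamma0`, `gamma1`, and `tau` against the generating values.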
Culpepper, Steven Andrew – Applied Psychological Measurement, 2012
Measurement error significantly biases interaction effects and distorts researchers' inferences regarding interactive hypotheses. This article focuses on the single-indicator case and shows how to accurately estimate group slope differences by disattenuating interaction effects with errors-in-variables (EIV) regression. New analytic findings were…
Descriptors: Evidence, Test Length, Interaction, Regression (Statistics)
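The core correction behind errors-in-variables regression is disattenuation: an observed slope shrinks toward zero by the reliability of the predictor, so dividing by the reliability recovers the true slope. This minimal single-predictor sketch (a simplification of the article's interaction-effect setting, with assumed values) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_beta = 1.0
reliability = 0.7   # rho_xx: Var(true score) / Var(observed score)

x_true = rng.normal(size=n)
# Add error so that Var(observed) = Var(true) / reliability
err_sd = np.sqrt((1 - reliability) / reliability)
x_obs = x_true + rng.normal(0, err_sd, size=n)
y = true_beta * x_true + rng.normal(0, 1, size=n)

naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)   # attenuated slope ~ 0.7
corrected = naive / reliability                  # disattenuated slope ~ 1.0
```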
Chen, Qi; Kwok, Oi-Man; Luo, Wen; Willson, Victor L. – Structural Equation Modeling: A Multidisciplinary Journal, 2010
Growth mixture modeling (GMM) is a relatively new technique for analyzing longitudinal data. However, when applying GMM, researchers might assume that the higher level (nonrepeated measure) units (e.g., students) are independent of each other even though this might not always be true. This article reports the results of a simulation study…
Descriptors: Longitudinal Studies, Data Analysis, Models, Monte Carlo Methods
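The kind of data GMM targets can be generated from a small number of latent trajectory classes. This sketch draws two classes with different intercept and slope means; all class proportions and parameter values are illustrative assumptions, and (as the article examines) it ignores any higher-level clustering among individuals:

```python
import numpy as np

rng = np.random.default_rng(7)
n, waves = 200, 5
t = np.arange(waves)                      # measurement occasions

# Latent class membership (assumed 40% in class 1)
cls = rng.binomial(1, 0.4, size=n)

# Class-specific growth factors with individual variation
intercept = np.where(cls == 1, 2.0, 0.0) + rng.normal(0, 0.5, n)
slope = np.where(cls == 1, 0.8, 0.2) + rng.normal(0, 0.1, n)

# Repeated measures: linear growth plus occasion-level noise
y = intercept[:, None] + slope[:, None] * t + rng.normal(0, 0.3, (n, waves))
```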
de Vries, Jannes; de Graaf, Paul M. – Social Indicators Research, 2008
In this article we study the bias, introduced by conventional retrospective measurement of parental high cultural activities, in the estimated effects of parental high cultural activities and educational attainment on a son's or daughter's high cultural activities. Multi-informant data show that there is both random measurement error and correlated error in the…
Descriptors: Cultural Activities, Age Differences, Educational Attainment, Measurement
Sullivan, Paul – Journal of Human Resources, 2009
This paper develops an empirical occupational choice model that corrects for misclassification in occupational choices and measurement error in occupation-specific work experience. The model is used to estimate the extent of measurement error in occupation data and quantify the bias that results from ignoring measurement error in occupation codes…
Descriptors: Computation, Models, Career Choice, Error Correction
Meyers, Jason L.; Beretvas, S. Natasha – Multivariate Behavioral Research, 2006
Cross-classified random effects modeling (CCREM) is used to model multilevel data from nonhierarchical contexts. These models are widely discussed but infrequently used in social science research. Because little research exists assessing when it is necessary to use CCREM, 2 studies were conducted. A real data set with a cross-classified structure…
Descriptors: Social Science Research, Computation, Models, Data Analysis
van der Linden, Wim J. – Applied Psychological Measurement, 2006
Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and population of test takers. But it is argued that if the goal of equating is to adjust the scores of test takers on one version of the test to make…
Descriptors: Equated Scores, Evaluation Criteria, Models, Error of Measurement
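The quantile-matching transformation referred to here is the basis of equipercentile equating: a score on one form is mapped to the score on the other form with the same percentile rank. A minimal sketch with hypothetical number-correct score distributions (the sample sizes and difficulty parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical number-correct scores on two 40-item test forms
scores_x = rng.binomial(40, 0.60, size=5000)   # form X (harder)
scores_y = rng.binomial(40, 0.65, size=5000)   # form Y (easier)

def equipercentile(x_score, x_sample, y_sample):
    """Map x_score to the form-Y score at the same percentile rank."""
    p = np.mean(x_sample <= x_score)       # percentile rank on form X
    return np.quantile(y_sample, p)        # matching quantile on form Y

eq = equipercentile(24, scores_x, scores_y)
```

Because form Y is easier here, a form-X score equates to a somewhat higher form-Y score.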
Bernstein, Lawrence; Burstein, Nancy – 1994
The inherent methodological problem in conducting research at multiple sites is how best to derive an overall estimate of program impact across sites, "best" being the estimate that minimizes the mean square error, that is, the expected squared difference between the estimated and true values. An empirical example illustrates the use of the…
Descriptors: Bias, Comprehensive Programs, Data Analysis, Data Collection
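One standard way to combine site-level impact estimates is inverse-variance weighting: among unbiased weighted averages, these weights minimize the variance (and hence, for an unbiased estimator, the mean square error) of the pooled estimate. The site estimates and standard errors below are hypothetical, and this is a sketch of the general principle, not of the article's specific method:

```python
import numpy as np

# Hypothetical site-level impact estimates and their standard errors
estimates = np.array([0.30, 0.10, 0.25, 0.05])
ses = np.array([0.10, 0.15, 0.08, 0.20])

# Precision (inverse-variance) weights
w = 1.0 / ses**2

# Pooled impact estimate and its standard error
overall = np.sum(w * estimates) / np.sum(w)
overall_se = np.sqrt(1.0 / np.sum(w))
```

Sites measured more precisely (smaller standard errors) dominate the pooled estimate, which is why it lands closer to the 0.25 and 0.30 sites than a simple mean would.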