Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 4
Since 2006 (last 20 years): 20
Descriptor
Data Analysis: 30
Error of Measurement: 30
Measurement Techniques: 30
Computation: 8
Models: 7
Research Methodology: 7
Simulation: 7
Evaluation Methods: 6
Longitudinal Studies: 5
Regression (Statistics): 5
Research Problems: 5
Publication Type
Journal Articles: 21
Reports - Research: 16
Reports - Evaluative: 7
Reports - Descriptive: 2
Speeches/Meeting Papers: 2
Dissertations/Theses -…: 1
Information Analyses: 1
Numerical/Quantitative Data: 1
Education Level
Elementary Secondary Education: 2
Grade 9: 1
High Schools: 1
Junior High Schools: 1
Middle Schools: 1
Secondary Education: 1
Audience
Researchers: 1
Location
Germany: 1
United Kingdom (England): 1
Assessments and Surveys
ACT Assessment: 1
Ryan Derickson – ProQuest LLC, 2022
Item Response Theory (IRT) models are a popular analytic method for self-report data. We show how traditional IRT models can be vulnerable to specific kinds of asymmetric measurement error (AME) in self-report data, because the models spread the error to all estimates, even those of items that do not contribute error. We quantify the impact of…
Descriptors: Item Response Theory, Measurement Techniques, Error of Measurement, Models
Zhang, Zhonghua – Journal of Experimental Education, 2022
Reporting standard errors of equating has been advocated as a standard practice when conducting test equating. The two most widely applied procedures for estimating standard errors of equating, the bootstrap method and the delta method, are either computationally intensive or confined to the derivation of complicated formulas. In the current study,…
Descriptors: Error of Measurement, Item Response Theory, True Scores, Equated Scores
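The bootstrap approach to standard errors of equating mentioned in the abstract above can be sketched for a simple linear equating function. The data, function names, and score scale below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_equate(x, scores_x, scores_y):
    """Map a score x on form X to the form-Y scale via linear equating."""
    sx = scores_x.std(ddof=1)
    sy = scores_y.std(ddof=1)
    return scores_y.mean() + (sy / sx) * (x - scores_x.mean())

def bootstrap_se(x, scores_x, scores_y, n_boot=1000):
    """Bootstrap standard error of the equated score at x: resample each
    form's scores with replacement and take the standard deviation of the
    re-estimated equated scores across replications."""
    reps = np.empty(n_boot)
    for b in range(n_boot):
        bx = rng.choice(scores_x, size=scores_x.size, replace=True)
        by = rng.choice(scores_y, size=scores_y.size, replace=True)
        reps[b] = linear_equate(x, bx, by)
    return reps.std(ddof=1)

# Illustrative simulated form scores (hypothetical data)
scores_x = rng.normal(50, 10, 500)
scores_y = rng.normal(52, 9, 500)
se = bootstrap_se(55.0, scores_x, scores_y)
```

The computational cost the abstract alludes to is visible here: each bootstrap replication repeats the whole equating, which is why delta-method formulas are the usual alternative.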
da Silva, M. A. Salgueiro; Seixas, T. M. – Physics Teacher, 2017
Measuring one physical quantity as a function of another often requires making some choices prior to the measurement process. Two of these choices are the data range on which measurements should focus and the number (n) of data points to acquire in the chosen data range. Here, we consider data range as the interval of variation of the independent…
Descriptors: Physics, Regression (Statistics), Measurement, Measurement Techniques
Soysal, Sumeyra; Karaman, Haydar; Dogan, Nuri – Eurasian Journal of Educational Research, 2018
Purpose of the Study: Missing data are a common problem encountered when implementing measurement instruments. Yet the extent to which the reliability, validity, average discrimination, and difficulty of test results are affected by missing data has received little study. Since it is inevitable that missing data have an impact on the…
Descriptors: Sample Size, Data Analysis, Research Problems, Error of Measurement
Rhemtulla, Mijke; Jia, Fan; Wu, Wei; Little, Todd D. – International Journal of Behavioral Development, 2014
We examine the performance of planned missing (PM) designs for correlated latent growth curve models. Using simulated data from a model where latent growth curves are fitted to two constructs over five time points, we apply three kinds of planned missingness. The first is item-level planned missingness using a three-form design at each wave such…
Descriptors: Data Analysis, Error of Measurement, Models, Longitudinal Studies
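The three-form planned-missingness design described in the abstract above can be sketched as an item-to-form assignment: every respondent answers a common block X, and each form omits one of three rotating blocks. The item counts and form labels here are hypothetical:

```python
def three_form_assignment(items, n_common):
    """Three-form planned-missingness design: every respondent sees the
    common block X; each form then omits one of the rotating blocks
    A, B, C, so every item pair is still observed on at least one form."""
    x = items[:n_common]
    rest = items[n_common:]
    third = len(rest) // 3
    a, b, c = rest[:third], rest[third:2 * third], rest[2 * third:]
    return {
        "form1": x + a + b,  # block C is planned-missing
        "form2": x + a + c,  # block B is planned-missing
        "form3": x + b + c,  # block A is planned-missing
    }

# 12 hypothetical items: 3 common, 9 split across rotating blocks
forms = three_form_assignment(list(range(12)), n_common=3)
```

Because the omissions are assigned by design, the resulting missingness is missing completely at random, which is what lets likelihood-based or imputation-based estimation recover the full covariance structure.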
Reardon, Sean F.; Ho, Andrew D. – Journal of Educational and Behavioral Statistics, 2015
In an earlier paper, we presented methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. We demonstrated that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
Reardon, Sean F.; Ho, Andrew D. – Grantee Submission, 2015
Ho and Reardon (2012) present methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. They demonstrate that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Educational and Psychological Measurement, 2015
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…
Descriptors: Competence, Tests, Evaluation Methods, Adults
Lang, Kyle M.; Little, Todd D. – International Journal of Behavioral Development, 2014
We present a new paradigm that allows simplified testing of multiparameter hypotheses in the presence of incomplete data. The proposed technique is a straightforward procedure that combines the benefits of two powerful data-analytic tools: multiple imputation and nested-model χ2 difference testing. A Monte Carlo simulation study was conducted to…
Descriptors: Hypothesis Testing, Data Analysis, Error of Measurement, Computation
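The nested-model chi-square difference step combined in the approach above can be sketched as a likelihood-ratio comparison of two nested models. The fit statistics below are made-up illustrations, and this sketch omits the multiple-imputation pooling that is the paper's actual contribution:

```python
from scipy.stats import chi2

def chi2_difference_test(chisq_restricted, df_restricted, chisq_full, df_full):
    """Likelihood-ratio (chi-square difference) test for nested models.
    The restricted model (more constraints, higher df) must be nested in
    the full model; the difference in fit statistics is referred to a
    chi-square distribution with the difference in df."""
    delta_chisq = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value

# Hypothetical fit statistics for two nested models
d_chisq, d_df, p = chi2_difference_test(120.5, 40, 110.2, 38)
```

A small p-value indicates the constraints imposed by the restricted model significantly worsen fit, so the full model is retained.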
Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D. – International Journal of Behavioral Development, 2014
We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…
Descriptors: Longitudinal Studies, Data Analysis, Error of Measurement, Research Problems
Pelanek, Radek – Journal of Educational Data Mining, 2015
Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
Descriptors: Models, Data Analysis, Data Processing, Evaluation Criteria
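Two of the metrics commonly compared in this literature, RMSE and AUC, can be computed directly from binary correctness outcomes and a student model's predicted probabilities. This is a minimal sketch with made-up inputs, not code from the paper:

```python
import numpy as np

def rmse(y_true, p_pred):
    """Root mean squared error between 0/1 outcomes and predicted probabilities."""
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(p_pred, dtype=float)
    return float(np.sqrt(np.mean((y - p) ** 2)))

def auc(y_true, p_pred):
    """AUC via its pairwise (Mann-Whitney) definition: the fraction of
    (positive, negative) pairs the model ranks correctly, ties counting
    as one half."""
    y = np.asarray(y_true)
    p = np.asarray(p_pred, dtype=float)
    pos, neg = p[y == 1], p[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (pos.size * neg.size))
```

The two metrics reward different things: RMSE is sensitive to probability calibration, while AUC depends only on the ranking of predictions, which is one reason such papers caution against relying on a single metric.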
Burt, Keith B.; Obradovic, Jelena – Developmental Review, 2013
The purpose of this paper is to review major statistical and psychometric issues impacting the study of psychophysiological reactivity and discuss their implications for applied developmental researchers. We first cover traditional approaches such as the observed difference score (DS) and the observed residual score (RS), including a review of…
Descriptors: Measurement Techniques, Psychometrics, Data Analysis, Researchers
Khawand, Christopher – Society for Research on Educational Effectiveness, 2012
Instrumental variables (IV) methods allow for consistent estimation of causal effects, but suffer from poor finite-sample properties and data availability constraints. IV estimates also tend to have relatively large standard errors, often inhibiting the interpretability of differences between IV and non-IV point estimates. Lastly, instrumental…
Descriptors: Least Squares Statistics, Labor Supply, Measurement Techniques, Error of Measurement
Murphy, Richard; Weinhardt, Felix – Centre for Economic Performance, 2013
We find an individual's rank within their reference group has effects on later objective outcomes. To evaluate the impact of local rank, we use a large administrative dataset tracking over two million students in England from primary through to secondary school. Academic rank within primary school has sizable, robust and significant effects on…
Descriptors: Foreign Countries, Class Rank, Progress Monitoring, Effect Size
Hutchison, Dougal – Oxford Review of Education, 2008
There is a degree of instability in any measurement, so that if it is repeated, a different result may be obtained. Such instability, generally described as "measurement error", may affect the conclusions drawn from an investigation, and methods exist for allowing for it. It is less widely known that different disciplines, and…
Descriptors: Measurement Techniques, Data Analysis, Error of Measurement, Test Reliability