Showing 1 to 15 of 18 results
Peer reviewed
Onyura, Betty; Lass, Elliot; Lazor, Jana; Zuccaro, Laura; Hamza, Deena M. – Advances in Health Sciences Education, 2022
As curricular reforms are implemented, there is often urgency among scholars to swiftly evaluate curricular outcomes and establish whether desired impacts have been realized. Consequently, many evaluative studies focus on summative program outcomes without accompanying evaluations of implementation. This runs the risk of Type III errors, whereby…
Descriptors: Curriculum Implementation, Allied Health Occupations Education, Evaluation Research, Curriculum Evaluation
Peer reviewed; full text available on ERIC
Jewsbury, Paul A. – ETS Research Report Series, 2019
When an assessment undergoes changes to the administration or instrument, bridge studies are typically used to try to ensure comparability of scores before and after the change. Among the most common and powerful is the common population linking design, with the use of a linear transformation to link scores to the metric of the original…
Descriptors: Evaluation Research, Scores, Error Patterns, Error of Measurement
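The linear transformation mentioned in the entry above has a standard textbook form; as a hedged sketch (the report may parameterize it differently), scores on the changed instrument are placed on the original metric by matching the first two moments in the common linking population:

\[
\mathrm{lin}(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X),
\]

where \(\mu_X, \sigma_X\) and \(\mu_Y, \sigma_Y\) are the means and standard deviations of the new and original instruments in that population.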
Peer reviewed
Mather, Nancy; Wendling, Barbara J. – Journal of Psychoeducational Assessment, 2017
We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…
Descriptors: Achievement Tests, Educational Research, Evaluation Research, Error Patterns
Peer reviewed
Montgomery, Alyssa; Dumont, Ron; Willis, John O. – Journal of Psychoeducational Assessment, 2017
The articles presented in this Special Issue provide evidence for many statistically significant relationships among error scores obtained from the Kaufman Test of Educational Achievement, Third Edition (KTEA-3) between various groups of students with and without disabilities. The data reinforce the importance of examiners looking beyond the…
Descriptors: Evidence, Validity, Predictive Validity, Error Patterns
Peer reviewed
Kim, Eun Sook; Kwok, Oi-man; Yoon, Myeongsun – Structural Equation Modeling: A Multidisciplinary Journal, 2012
Testing factorial invariance has recently gained more attention in different social science disciplines. Nevertheless, when examining factorial invariance, it is generally assumed that the observations are independent of each other, which might not always be true. In this study, we examined the impact of testing factorial invariance in multilevel…
Descriptors: Monte Carlo Methods, Testing, Social Science Research, Factor Structure
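The factorial invariance tests referenced in this entry are conventionally written as cross-group equality constraints on a common factor model; the following is a generic sketch with assumed notation, not taken from the article:

\[
\mathbf{x}_{ig} = \boldsymbol{\tau}_g + \boldsymbol{\Lambda}_g \boldsymbol{\eta}_{ig} + \boldsymbol{\delta}_{ig},
\qquad
\text{metric: } \boldsymbol{\Lambda}_1 = \cdots = \boldsymbol{\Lambda}_G,
\qquad
\text{scalar: } \boldsymbol{\tau}_1 = \cdots = \boldsymbol{\tau}_G .
\]

The study's concern is that these tests assume independent observations, an assumption that multilevel (clustered) data violate.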
Peer reviewed
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun – Educational and Psychological Measurement, 2012
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Descriptors: Test Items, Simulation, Testing, Statistical Analysis
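As background for this entry, a MIMIC model in its usual form (notation assumed here, not taken from the article) regresses the indicators and the latent variable on a grouping covariate \(z\):

\[
\mathbf{x}_i = \boldsymbol{\tau} + \boldsymbol{\lambda}\,\eta_i + \boldsymbol{\kappa} z_i + \boldsymbol{\delta}_i,
\qquad
\eta_i = \gamma z_i + \zeta_i,
\]

where \(\gamma\) carries the latent group mean difference. The study's caution is that \(\gamma\) is estimated under the assumption of equal loadings \(\boldsymbol{\lambda}\) across groups, an assumption the MIMIC test itself does not check.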
Peer reviewed
Moses, Tim; Zhang, Wenmin – Journal of Educational and Behavioral Statistics, 2011
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
Descriptors: Equated Scores, Error Patterns, Evaluation Research, Statistical Analysis
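In the kernel equating literature on which this article builds, the SEED for two equating functions \(\hat e_1\) and \(\hat e_2\) evaluated at score \(x\) is the standard error of their difference; as a hedged sketch of the usual definition:

\[
\mathrm{SEED}(x) = \sqrt{\operatorname{Var}[\hat e_1(x)] + \operatorname{Var}[\hat e_2(x)] - 2\operatorname{Cov}[\hat e_1(x), \hat e_2(x)]}.
\]

The article's contribution is computing this quantity when the equating functions are traditional linear and equipercentile functions rather than kernel functions.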
Peer reviewed
Li, Ying; Rupp, Andre A. – Educational and Psychological Measurement, 2011
This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…
Descriptors: Test Length, Item Response Theory, Statistical Analysis, Error Patterns
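The simulation logic behind studies like this one is straightforward to sketch. The following is a minimal, generic illustration of estimating an empirical Type I error rate, not the authors' S-χ² item-fit computation: generate data under the null hypothesis many times and tally how often a nominal α = .05 test rejects.

```python
import numpy as np
from scipy import stats

def empirical_type1_rate(n=200, reps=5000, alpha=0.05, seed=0):
    """Draw both samples from the same distribution (so the null is true)
    and record how often a two-sample t test rejects at level alpha."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha
    return rejections / reps

print(empirical_type1_rate())  # should land near the nominal 0.05
```

In the article's design, each manipulated factor (test length, sample size, and so on) would define a cell of such a simulation, with rejection rates tallied per cell.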
Peer reviewed
Iamarino, Danielle L. – Current Issues in Education, 2014
This paper explores the methodology and application of an assessment philosophy known as standards-based grading, via a critical comparison of standards-based grading to other assessment philosophies commonly employed at the elementary, secondary, and post-secondary levels of education. Evidenced by examples of increased student engagement and…
Descriptors: Grading, Evaluation Methods, Evaluation Criteria, Evaluation Research
Peer reviewed
Attali, Yigal – Applied Psychological Measurement, 2011
Recently, Attali and Powers investigated the usefulness of providing immediate feedback on the correctness of answers to constructed response questions and the opportunity to revise incorrect answers. This article introduces an item response theory (IRT) model for scoring revised responses to questions when several attempts are allowed. The model…
Descriptors: Feedback (Response), Item Response Theory, Models, Error Correction
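For readers unfamiliar with the baseline such a model extends, the standard two-parameter logistic (2PL) IRT response function is shown below; the model for revised responses is a generalization whose exact form is given in the article, not here:

\[
P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]},
\]

where \(\theta_i\) is examinee ability and \(a_j, b_j\) are the discrimination and difficulty of item \(j\).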
Peer reviewed
Gardner, John – Oxford Review of Education, 2013
Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes; events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…
Descriptors: Educational Assessment, Public Opinion, Error of Measurement, Foreign Countries
Peer reviewed
Haardorfer, Regine; Gagne, Phill – Focus on Autism and Other Developmental Disabilities, 2010
Some researchers have argued for the use of or have attempted to make use of randomization tests in single-subject research. To address this tide of interest, the authors of this article describe randomization tests, discuss the theoretical rationale for applying them to single-subject research, and provide an overview of the methodological…
Descriptors: Research Design, Researchers, Evaluation Methods, Research Methodology
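A minimal sketch of the randomization-test logic the authors discuss, with hypothetical data; real single-case applications restrict the permutations to those the design's randomization scheme actually allows (for example, admissible intervention start points):

```python
import numpy as np

def randomization_test(phase_a, phase_b, reps=10000, seed=0):
    """Two-sided randomization test for a mean difference between
    baseline (A) and intervention (B) observations: shuffle the phase
    labels and count how often the shuffled difference is at least as
    extreme as the observed one."""
    rng = np.random.default_rng(seed)
    a = np.asarray(phase_a, float)
    b = np.asarray(phase_b, float)
    observed = abs(b.mean() - a.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = abs(pooled[len(a):].mean() - pooled[:len(a)].mean())
        hits += diff >= observed
    return hits / reps

baseline = [3, 4, 3, 5, 4]      # hypothetical phase A data
treatment = [6, 7, 5, 8, 7]     # hypothetical phase B data
print(randomization_test(baseline, treatment))
```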
Peer reviewed
Mrazik, Martin; Janzen, Troy M.; Dombrowski, Stefan C.; Barford, Sean W.; Krawchuk, Lindsey L. – Canadian Journal of School Psychology, 2012
A total of 19 graduate students enrolled in a graduate course conducted 6 consecutive administrations of the Wechsler Intelligence Scale for Children, 4th edition (WISC-IV, Canadian version). Test protocols were examined to obtain data describing the frequency of examiner errors, including administration and scoring errors. Results identified 511…
Descriptors: Intelligence Tests, Intelligence, Statistical Analysis, Scoring
Peer reviewed; full text available on ERIC
Hathcoat, John D.; Penn, Jeremy D. – Research & Practice in Assessment, 2012
Critics of standardized testing have recommended replacing standardized tests with more authentic assessment measures, such as classroom assignments, projects, or portfolios rated by a panel of raters using common rubrics. Little research has examined the consistency of scores across multiple authentic assignments or the implications of this…
Descriptors: Generalizability Theory, Performance Based Assessment, Writing Across the Curriculum, Standardized Tests
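The score-consistency question raised in this entry is what generalizability theory quantifies. Below is a minimal sketch, on assumed toy data, of a relative generalizability coefficient for a fully crossed persons × assignments design estimated from ANOVA mean squares; it illustrates the technique, not the authors' analysis:

```python
import numpy as np

def g_coefficient(scores):
    """Relative G coefficient for a crossed persons x tasks design
    with one score per cell. Rows are persons, columns are tasks."""
    scores = np.asarray(scores, float)
    n_p, n_t = scores.shape
    grand = scores.mean()
    # Mean squares for persons and for the residual (p x t interaction + error).
    ms_p = n_t * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    resid = scores - scores.mean(axis=1, keepdims=True) - scores.mean(axis=0) + grand
    ms_e = (resid ** 2).sum() / ((n_p - 1) * (n_t - 1))
    var_p = max((ms_p - ms_e) / n_t, 0.0)  # person variance component
    return var_p / (var_p + ms_e / n_t)    # E-rho^2 over n_t tasks

ratings = np.array([[4, 5, 4],            # hypothetical rubric scores:
                    [2, 3, 2],            # 4 students x 3 assignments
                    [5, 5, 4],
                    [3, 2, 3]])
print(g_coefficient(ratings))
```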
Peer reviewed
Lazar, Ann A.; Zerbe, Gary O. – Journal of Educational and Behavioral Statistics, 2011
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…
Descriptors: Statistical Analysis, Evaluation Research, Error Patterns, Bias
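The "significance region" in this entry is the classical Johnson-Neyman construction in ANCOVA: with \(\hat d(x)\) denoting the estimated difference between the two groups' fitted curves at covariate value \(x\), the region collects the covariate values at which that difference is statistically distinguishable from zero. As a generic sketch,

\[
\mathcal{R} = \left\{ x : \frac{|\hat d(x)|}{\widehat{\mathrm{SE}}[\hat d(x)]} \ge t_{\mathrm{crit}} \right\},
\]

with the choice of \(t_{\mathrm{crit}}\), and hence control of the error rate over the whole region, being the methodological question such articles address.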