Showing all 10 results
Peer reviewed
Debeer, Dries; Buchholz, Janine; Hartig, Johannes; Janssen, Rianne – Journal of Educational and Behavioral Statistics, 2014
In this article, the change in examinee effort during an assessment, which we will refer to as persistence, is modeled as an effect of item position. A multilevel extension is proposed to analyze hierarchically structured data and decompose the individual differences in persistence. Data from the 2009 Programme for International Student Assessment…
Descriptors: Reading Tests, International Programs, Testing Programs, Individual Differences
Peer reviewed
Guo, Hongwen; Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2011
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Descriptors: Testing Programs, Measurement, Item Analysis, Error of Measurement
Peer reviewed
von Davier, Alina A. – ETS Research Report Series, 2012
Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…
Descriptors: Quality Control, Data Analysis, Trend Analysis, Scaling
Peer reviewed
Skorupski, William P.; Carvajal, Jorge – Educational and Psychological Measurement, 2010
This study is an evaluation of the psychometric issues associated with estimating objective level scores, often referred to as "subscores." The article begins by introducing the concepts of reliability and validity for subscores from statewide achievement tests. These issues are discussed with reference to popular scaling techniques, classical…
Descriptors: Testing Programs, Test Validity, Achievement Tests, Scores
Doorey, Nancy A. – Council of Chief State School Officers, 2011
The work reported in this paper reflects a collaborative effort of many individuals representing multiple organizations. It began during a session at the October 2008 meeting of TILSA, when a representative of a member state asked the group whether any of their programs had experienced unexpected fluctuations in annual state assessment scores, and…
Descriptors: Testing, Sampling, Expertise, Testing Programs
Peer reviewed
Rock, Donald A. – ETS Research Report Series, 2012
This paper provides a history of ETS's role in developing assessment instruments and psychometric procedures for measuring change in large-scale national assessments funded by the Longitudinal Studies branch of the National Center for Education Statistics. It documents the innovations developed during more than 30 years of working with…
Descriptors: Models, Educational Change, Longitudinal Studies, Educational Development
Peer reviewed
Wang, Shudong; Jiao, Hong – Educational and Psychological Measurement, 2009
In practice, vertical scales have long been used to measure students' achievement progress across several grade levels, and constructing them is considered a very challenging psychometric procedure. Recently, such practices have drawn many criticisms. The major criticisms focus on the dimensionality and construct equivalence of the latent trait or…
Descriptors: Reading Comprehension, Elementary Secondary Education, Measures (Individuals), Psychometrics
Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg – Online Submission, 2005
Year-to-year rater variation may result in constructed-response (CR) parameter changes, making CR items inappropriate for use in anchor sets for linking or equating. This study demonstrates how rater severity affected writing and reading scores. Rater adjustments were made to statewide results using an item response theory (IRT) methodology…
Descriptors: Test Items, Writing Tests, Reading Tests, Measures (Individuals)
Peer reviewed
Zwick, Rebecca – Educational Measurement: Issues and Practice, 1991
Item parameter estimates derived through item response theory methods have been considered relatively robust to changes in item position and context, but the anomaly in reading scores from the 1986 National Assessment of Educational Progress (NAEP) illustrates problems with common population equating procedures when there are test form changes.…
Descriptors: Achievement Tests, Context Effect, Equated Scores, Estimation (Mathematics)
Isham, Steven P.; Allen, Nancy L. – 1992
As a result of the dual roles of the National Assessment of Educational Progress (NAEP), measuring trends in academic achievement over time and measuring what students know and can do, a scale anchoring procedure was developed. Although NAEP provides norm-referenced information about student proficiency, the scale anchoring procedure gives…
Descriptors: Academic Achievement, Criterion Referenced Tests, Elementary School Students, Elementary Secondary Education