Lewis, Jennifer; Sireci, Stephen G. – Educational Measurement: Issues and Practice, 2022
This module is designed for educators, educational researchers, and psychometricians who would like to develop an understanding of the basic concepts of validity theory, test validation, and documenting a "validity argument." It also describes how an in-depth understanding of the purposes and uses of educational tests sets the foundation…
Descriptors: Test Validity, Tests, Testing Problems, Faculty Development
Student, Sanford R.; Gong, Brian – Educational Measurement: Issues and Practice, 2022
We address two persistent challenges in large-scale assessments of the Next Generation Science Standards: (a) the validity of score interpretations that target the standards broadly and (b) how to structure claims for assessments of this complex domain. The NGSS pose a particular challenge for specifying claims about students that evidence from…
Descriptors: Science Tests, Test Validity, Test Items, Test Construction
Ing, Marsha; Chinen, Starlie; Jackson, Kara; Smith, Thomas M. – Educational Measurement: Issues and Practice, 2021
Despite the ease of accessing a wide range of measures, little attention is given to validity arguments when considering whether to use the measure for a new purpose or in a different context. Making a validity argument has historically focused on the intended interpretation and use. There has been a press to consider both the intended and actual…
Descriptors: Instructional Improvement, Measures (Individuals), Test Validity, Test Interpretation
Barry, Carol L.; Jones, Andrew T.; Ibáñez, Beatriz; Grambau, Marni; Buyske, Jo – Educational Measurement: Issues and Practice, 2022
In response to the COVID-19 pandemic, the American Board of Surgery (ABS) shifted from in-person to remote administrations of the oral certifying exam (CE). Although the overall exam architecture remains the same, there are a number of differences in administration and staffing costs, exam content, security concerns, and the tools used to give the…
Descriptors: COVID-19, Pandemics, Computer Assisted Testing, Verbal Tests
Angela Johnson; Elizabeth Barker; Marcos Viveros Cespedes – Educational Measurement: Issues and Practice, 2024
Educators and researchers strive to build policies and practices on data and evidence, especially on academic achievement scores. When assessment scores are inaccurate for specific student populations or when scores are inappropriately used, even data-driven decisions will be misinformed. To maximize the impact of the research-practice-policy…
Descriptors: Equal Education, Inclusion, Evaluation Methods, Error of Measurement
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
Mislevy, Robert J.; Oliveri, Maria Elena – Educational Measurement: Issues and Practice, 2019
In this digital ITEMS module, Dr. Robert [Bob] Mislevy and Dr. Maria Elena Oliveri introduce and illustrate a sociocognitive perspective on educational measurement, which focuses on a variety of design and implementation considerations for creating fair and valid assessments for learners from diverse populations with diverse sociocultural…
Descriptors: Educational Testing, Reliability, Test Validity, Test Reliability
Yamamoto, Kentaro; Shin, Hyo Jeong; Khorramdel, Lale – Educational Measurement: Issues and Practice, 2018
A multistage adaptive testing (MST) design was implemented for the Programme for the International Assessment of Adult Competencies (PIAAC) starting in 2012 for about 40 countries and has been implemented for the 2018 cycle of the Programme for International Student Assessment (PISA) for more than 80 countries. Using examples from PISA and PIAAC,…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Test Validity
Jonson, Jessica L.; Trantham, Pamela; Usher-Tate, Betty Jean – Educational Measurement: Issues and Practice, 2019
One of the substantive changes in the 2014 Standards for Educational and Psychological Testing was the elevation of fairness in testing as a foundational element of practice in addition to validity and reliability. Previous research indicates that testing practices often do not align with professional standards and guidelines. Therefore, to raise…
Descriptors: Culture Fair Tests, Test Validity, Test Reliability, Intelligence Tests
Bejar, Isaac I. – Educational Measurement: Issues and Practice, 2012
The scoring process is critical in the validation of tests that rely on constructed responses. Documenting that readers carry out the scoring in ways consistent with the construct and measurement goals is an important aspect of score validity. In this article, rater cognition is approached as a source of support for a validity argument for scores…
Descriptors: Scores, Inferences, Validity, Scoring
Nichols, Paul D.; Williams, Natasha – Educational Measurement: Issues and Practice, 2009
This article has three goals. The first goal is to clarify the role that the consequences of test score use play in validity judgments by reviewing the role that modern writers on validity have ascribed for consequences in supporting validity judgments. The second goal is to summarize current views on who is responsible for collecting evidence of…
Descriptors: Tests, Test Validity, Scores, Data Collection
Lu, Ying; Sireci, Stephen G. – Educational Measurement: Issues and Practice, 2007
Speededness refers to the situation where the time limits on a standardized test do not allow substantial numbers of examinees to fully consider all test items. When tests are not intended to measure speed of responding, speededness introduces a severe threat to the validity of interpretations based on test scores. In this article, we describe…
Descriptors: Test Items, Timed Tests, Standardized Tests, Test Validity

Citron, Christiane H. – Educational Measurement: Issues and Practice, 1983
The Debra P. versus Turlington case marked the first major inquiry into the content validity of a student competency testing program. The Florida federal district court determined that the material assessed on the test had been taught in Florida's classrooms. Schools may deny regular diplomas to students who fail the test. (DWH)
Descriptors: Court Litigation, Graduation Requirements, High Schools, Minimum Competency Testing
Haladyna, Thomas M.; Downing, Steven M. – Educational Measurement: Issues and Practice, 2004
There are many threats to validity in high-stakes achievement testing. One major threat is construct-irrelevant variance (CIV). This article defines CIV in the context of the contemporary, unitary view of validity and presents logical arguments, hypotheses, and documentation for a variety of CIV sources that commonly threaten interpretations of…
Descriptors: Student Evaluation, Evaluation Methods, High Stakes Tests, Construct Validity
Cizek, Gregory J.; Crocker, Linda; Frisbie, David A.; Mehrens, William A.; Stiggins, Richard J. – Educational Measurement: Issues and Practice, 2006
The authors describe the significant contributions of Robert Ebel to educational measurement theory and its applications. A biographical sketch details Ebel's roots and professional resume. His influence on classroom assessment views and procedures is explored. Classic publications associated with validity, reliability, and score interpretation…
Descriptors: Test Theory, Educational Assessment, Psychometrics, Test Reliability