Showing 1 to 15 of 146 results
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Peer reviewed
Direct link
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Assessment in Education: Principles, Policy & Practice, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Geiser, Saul – Center for Studies in Higher Education, 2020
One of the major claims of the report of University of California's Task Force on Standardized Testing is that SAT and ACT scores are superior to high-school grades in predicting how students will perform at UC. This finding has been widely reported in the news media and cited in several editorials favoring UC's continued use of SAT/ACT scores in…
Descriptors: College Entrance Examinations, Grade Point Average, Standardized Tests, College Admission
Peer reviewed
Direct link
Woods, Isaac L., Jr.; Niileksela, Christopher; Floyd, Randy G. – Contemporary School Psychology, 2021
Racial/ethnic bias in the prediction of students' educational potential was questioned in the Larry P. v. Riles case. The construct and predictive validity of the Woodcock-Johnson IV Tests of Cognitive Abilities (WJ IV; Schrank et al. 2014b) have not been examined for racial/ethnic bias. This study extended Keith's (1999) examination of bias…
Descriptors: Cognitive Ability, Cognitive Tests, Predictor Variables, Reading Achievement
Treadway, Meagan Nichole – ProQuest LLC, 2019
The number of applications to postsecondary institutions continues to increase year over year, and in most cases, the number of applications exceeds the number of students admitted. The use of standardized tests continues to grow to help in these admissions decisions. Due to both high usage rates and the changing demographics of our nation's…
Descriptors: College Entrance Examinations, Science Tests, Scores, Predictive Validity
Peer reviewed
PDF on ERIC: Download full text
Dorans, Neil J. – ETS Research Report Series, 2013
Quantitative fairness procedures have been developed and modified by ETS staff over the past several decades. ETS has been a leader in fairness assessment, and its efforts are reviewed in this report. The first section deals with differential prediction and differential validity procedures that examine whether test scores predict a criterion, such…
Descriptors: Test Bias, Statistical Analysis, Test Validity, Scores
Peer reviewed
Direct link
Hauser, Peter C.; Lukomski, Jennifer; Samar, Vince – Journal of Psychoeducational Assessment, 2013
This study investigated the reliability and validity of the Behavior Rating Inventory of Executive Functions-Adult Form (BRIEF-A) when used with deaf college students. The BRIEF-A was administered to 176 deaf and 184 hearing students of whom 25 deaf students and 56 hearing students self-identified as having an Attention Deficit Hyperactivity…
Descriptors: College Students, Deafness, Executive Function, Test Reliability
Rizzo, Monica Ellen – Online Submission, 2012
Most American colleges and universities require standardized entrance exams when making admissions decisions. Scores on these exams help determine if, when and where students will be allowed to pursue higher education. These scores are also used to determine eligibility for merit based financial aid. This testing persists even though half of the…
Descriptors: College Entrance Examinations, Standardized Tests, Test Bias, Scores
Peer reviewed
PDF on ERIC: Download full text
Gill, Brian; Shoji, Megan; Coen, Thomas; Place, Kate – Regional Educational Laboratory Mid-Atlantic, 2016
School districts and states across the Regional Educational Laboratory Mid-Atlantic Region and the country as a whole have been modifying their teacher evaluation systems to identify more effective and less effective teachers and provide better feedback to improve instructional practice. The new systems typically include components related to…
Descriptors: Predictive Validity, Test Bias, Test Content, School Districts
Goldhaber, Dan; Chaplin, Duncan – Center for Education Data & Research, 2012
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value added models (VAMs) suggest implausible future teacher effects on past student achievement, a finding that obviously cannot be viewed as causal. This is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM…
Descriptors: School Effectiveness, Teacher Effectiveness, Achievement Gains, Statistical Bias
Peer reviewed
Direct link
You, Jianing; Leung, Freedom; Lai, Ching-man; Fu, Kei – Assessment, 2011
This study used item response theory (IRT) to examine the Impulsive Behaviors Checklist for Adolescents (IBCL-A) among 6,276 (67.7% girls) Chinese secondary school students. The IBCL-A included 15 maladaptive impulsive behaviors adapted from the Revised Diagnostic Interview for Borderlines. The authors obtained the severity and discrimination…
Descriptors: Check Lists, Test Bias, Construct Validity, Predictive Validity
Hardison, Chaitra M.; Sims, Carra S.; Wong, Eunice C. – RAND Corporation, 2010
The Air Force has long recognized the importance of selecting the most qualified officers possible. For more than 60 years, it has relied on the Air Force Officer Qualifying Test (AFOQT) as one measure of those qualifications. A variety of concerns have been raised about whether the AFOQT is biased, too expensive, or even valid for predicting…
Descriptors: Test Bias, Test Validity, Aptitude Tests, Military Personnel
Peer reviewed
Direct link
Worrell, Frank C. – Gifted Child Quarterly, 2009
There is a fallacy about identifying gifted and talented children and youth that refuses to go away: It is the notion that a single score is "sufficient" for determining giftedness. In this article, the author addresses several reasons for the longevity and ubiquity of this myth, as well as the data that call the myth into question. These include…
Descriptors: Talent, Predictive Validity, Scores, Academically Gifted
Peer reviewed
Direct link
Sackett, Paul R.; Borneman, Matthew J.; Connelly, Brian S. – American Psychologist, 2009
We are pleased that our article prompted this series of four commentaries and that we have this opportunity to respond. We address each in turn. Duckworth and Kaufman and Agars discussed, respectively, two broad issues concerning the validity of selection systems, namely, the expansion of the predictor domain to include noncognitive predictors of…
Descriptors: High Stakes Tests, Reader Response, Error of Measurement, Test Bias
Sander, Paul – Psychology Teaching Review, 2009
Using published findings and further analyses of existing data, the structure, validity, and utility of the Academic Behavioural Confidence scale (ABC) are critically considered. Validity is primarily assessed through the scale's relationship with other existing scales as well as by looking for predicted differences. The utility of the ABC scale…
Descriptors: Undergraduate Students, Measures (Individuals), Confidence Testing, Item Analysis