Showing 1 to 15 of 140 results
Peer reviewed
Direct link
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Peer reviewed
Direct link
Zengilowski, Allison; Schuetze, Brendan A.; Nash, Brady L.; Schallert, Diane L. – Educational Psychologist, 2021
Refutation texts, rhetorical tools designed to reduce misconceptions, have garnered attention across four decades and many studies. Yet, the ability of a refutation text to change a learner's mind on a topic needs to be qualified and modulated. In this critical review, we bring attention to sources of constraints often overlooked by refutation…
Descriptors: Misconceptions, Instructional Materials, Research Problems, Research Methodology
Peer reviewed
Direct link
Corcoran, Stephanie – Contemporary School Psychology, 2022
With iPad-mediated cognitive assessment gaining popularity with school districts and the need for alternative modes of training and instruction during the COVID-19 pandemic, school psychology training programs will need to adapt to effectively train their students to be competent in administering, scoring, and interpreting cognitive…
Descriptors: School Psychologists, Professional Education, Job Skills, Cognitive Tests
Peer reviewed
PDF on ERIC Download full text
Della-Piana, Gabriel M.; Gardner, Michael K.; Mayne, Zachary M. – Journal of Research Practice, 2018
The authors describe challenges of following professional standards for educational achievement testing due to the complexity of gathering appropriate evidence to support demanding test interpretation and use. Validity evidence has been found to be low for some individual testing standards, leading to the possibility of faulty or impoverished test…
Descriptors: Achievement Tests, Standards, Educational Assessment, Testing
Peer reviewed
PDF on ERIC Download full text
Kabuto, Bobbie; Harmey, Sinead – Language and Literacy Spectrum, 2020
This article explores how the general term of assessment literacy can be specified to the area of reading through the lens of an informal reading inventory, the Qualitative Reading Inventory-5. It investigates the common misconceptions that teachers, who were enrolled in a graduate program to become state-certified specialized literacy…
Descriptors: Assessment Literacy, Reading Teachers, Graduate Students, Specialists
Peer reviewed
PDF on ERIC Download full text
Patrick Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Report Series, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Assessment Literacy, Testing, Test Bias, Test Construction
Peer reviewed
Direct link
Haertel, Edward H. – Educational Psychologist, 2018
In the service of educational accountability, student achievement tests are being used to measure constructs quite unlike those envisioned by test developers. Scores are compared to cut points to create classifications like "proficient"; scores are combined over time to measure growth; student scores are aggregated to measure the…
Descriptors: Achievement Tests, Scores, Test Validity, Test Interpretation
Peer reviewed
PDF on ERIC Download full text
Cormier, Damien C.; Bulut, Okan; McGrew, Kevin S.; Kennedy, Kathleen – Journal of Intelligence, 2022
Consideration of the influence of English language skills during testing is an understandable requirement for fair and valid cognitive test interpretation. Several professional standards and expert recommendations exist to guide psychologists as they attempt to engage in best practices when assessing English learners (ELs). Nonetheless, relatively…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Culture Fair Tests
Peer reviewed
Direct link
Karren, Benjamin C. – Journal of Psychoeducational Assessment, 2017
The Gilliam Autism Rating Scale-Third Edition (GARS-3) is a norm-referenced tool designed to screen for autism spectrum disorders (ASD) in individuals between the ages of 3 and 22 (Gilliam, 2014). The GARS-3 test kit consists of three different components and includes an "Examiner's Manual," summary/response forms (50), and the…
Descriptors: Autism, Pervasive Developmental Disorders, Rating Scales, Norm Referenced Tests
Peer reviewed
Direct link
Reynolds, Matthew R.; Niileksela, Christopher R. – Journal of Psychoeducational Assessment, 2015
"The Woodcock-Johnson IV Tests of Cognitive Abilities" (WJ IV COG) is an individually administered measure of psychometric intellectual abilities designed for ages 2 to 90+. The measure was published by Houghton Mifflin Harcourt-Riverside in 2014. Frederick Shrank, Kevin McGrew, and Nancy Mather are the authors. Richard Woodcock, the…
Descriptors: Cognitive Tests, Testing, Scoring, Test Interpretation
Peer reviewed
Direct link
Dickens, Rachel H.; Meisinger, Elizabeth B.; Tarar, Jessica M. – Canadian Journal of School Psychology, 2015
The Comprehensive Test of Phonological Processing-Second Edition (CTOPP-2; Wagner, Torgesen, Rashotte, & Pearson, 2013) is a norm-referenced test that measures phonological processing skills related to reading for individuals aged 4 to 24. According to its authors, the CTOPP-2 may be used to identify individuals who are markedly below their…
Descriptors: Norm Referenced Tests, Phonology, Test Format, Testing
Peer reviewed
Direct link
Sireci, Stephen G. – Journal of Educational Measurement, 2013
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
Descriptors: Validity, Theories, Test Interpretation, Test Use
Peer reviewed
Direct link
Fraccaro, Rebecca L.; Stelnicki, Andrea M.; Nordstokke, David W. – Canadian Journal of School Psychology, 2015
Anxiety disorders are among the most prevalent mental disorders among school-age children and can lead to impaired academic and social functioning (Keeley & Storch, 2009). Unfortunately, anxiety disorders in this population are often undetected (Herzig-Anderson, Colognori, Fox, Stewart, & Warner, 2012). The availability of psychometrically…
Descriptors: Anxiety, Measures (Individuals), Symptoms (Individual Disorders), Testing
Peer reviewed
Direct link
Kopriva, Rebecca J.; Thurlow, Martha L.; Perie, Marianne; Lazarus, Sheryl S.; Clark, Amy – Educational Psychologist, 2016
This article argues that test takers are as integral to determining validity of test scores as defining target content and conditioning inferences on test use. A principled sustained attention to how students interact with assessment opportunities is essential, as is a principled sustained evaluation of evidence confirming the validity or calling…
Descriptors: Tests, Testing, Test Interpretation, Scores
Peer reviewed
Direct link
Hall, Anna H.; Tannebaum, Rory P. – Journal of Psychoeducational Assessment, 2013
The first edition of the Gray Oral Reading Tests (GORT, 1963) was written by Dr. William S. Gray, a founding member and the first president of the International Reading Association. The GORT was designed to measure oral reading abilities (i.e., Rate, Accuracy, Fluency, and Comprehension) of students in Grades 2 through 12 due to the noteworthy…
Descriptors: Oral Reading, Reading Tests, Children, Testing