Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 10 |
| Since 2017 (last 10 years) | 16 |
| Since 2007 (last 20 years) | 50 |
Education Level
| Level | Count |
| --- | --- |
| Elementary Secondary Education | 10 |
| Higher Education | 7 |
| Elementary Education | 6 |
| Postsecondary Education | 5 |
| Secondary Education | 2 |
| Grade 1 | 1 |
| Grade 4 | 1 |
| Grade 5 | 1 |
| Grade 6 | 1 |
| Grade 7 | 1 |
| Grade 8 | 1 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 40 |
| Researchers | 16 |
| Teachers | 15 |
| Administrators | 8 |
| Policymakers | 4 |
| Counselors | 3 |
| Students | 3 |
| Parents | 1 |
Location
| Location | Count |
| --- | --- |
| Canada | 7 |
| Australia | 4 |
| California | 4 |
| New York | 4 |
| Pennsylvania | 3 |
| United States | 3 |
| Indiana | 2 |
| New York (New York) | 2 |
| United Kingdom | 2 |
| United Kingdom (Great Britain) | 2 |
| Austria | 1 |
Laws, Policies, & Programs
| Law / Policy / Program | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 5 |
| Elementary and Secondary… | 3 |
| Education for All Handicapped… | 2 |
| Individuals with Disabilities… | 2 |
| Elementary and Secondary… | 1 |
| Improving America's Schools… | 1 |
| Race to the Top | 1 |
Karren, Benjamin C. – Journal of Psychoeducational Assessment, 2017
The Gilliam Autism Rating Scale-Third Edition (GARS-3) is a norm-referenced tool designed to screen for autism spectrum disorders (ASD) in individuals between the ages of 3 and 22 (Gilliam, 2014). The GARS-3 test kit consists of three different components and includes an "Examiner's Manual," summary/response forms (50), and the…
Descriptors: Autism, Pervasive Developmental Disorders, Rating Scales, Norm Referenced Tests
Reynolds, Matthew R.; Niileksela, Christopher R. – Journal of Psychoeducational Assessment, 2015
"The Woodcock-Johnson IV Tests of Cognitive Abilities" (WJ IV COG) is an individually administered measure of psychometric intellectual abilities designed for ages 2 to 90+. The measure was published by Houghton Mifflin Harcourt-Riverside in 2014. Fredrick Schrank, Kevin McGrew, and Nancy Mather are the authors. Richard Woodcock, the…
Descriptors: Cognitive Tests, Testing, Scoring, Test Interpretation
Dickens, Rachel H.; Meisinger, Elizabeth B.; Tarar, Jessica M. – Canadian Journal of School Psychology, 2015
The Comprehensive Test of Phonological Processing-Second Edition (CTOPP-2; Wagner, Torgesen, Rashotte, & Pearson, 2013) is a norm-referenced test that measures phonological processing skills related to reading for individuals aged 4 to 24. According to its authors, the CTOPP-2 may be used to identify individuals who are markedly below their…
Descriptors: Norm Referenced Tests, Phonology, Test Format, Testing
Fraccaro, Rebecca L.; Stelnicki, Andrea M.; Nordstokke, David W. – Canadian Journal of School Psychology, 2015
Anxiety disorders are among the most prevalent mental disorders among school-age children and can lead to impaired academic and social functioning (Keeley & Storch, 2009). Unfortunately, anxiety disorders in this population are often undetected (Herzig-Anderson, Colognori, Fox, Stewart, & Warner, 2012). The availability of psychometrically…
Descriptors: Anxiety, Measures (Individuals), Symptoms (Individual Disorders), Testing
Sireci, Stephen G. – Journal of Educational Measurement, 2013
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
Descriptors: Validity, Theories, Test Interpretation, Test Use
Kopriva, Rebecca J.; Thurlow, Martha L.; Perie, Marianne; Lazarus, Sheryl S.; Clark, Amy – Educational Psychologist, 2016
This article argues that test takers are as integral to determining validity of test scores as defining target content and conditioning inferences on test use. A principled sustained attention to how students interact with assessment opportunities is essential, as is a principled sustained evaluation of evidence confirming the validity or calling…
Descriptors: Tests, Testing, Test Interpretation, Scores
Hall, Anna H.; Tannebaum, Rory P. – Journal of Psychoeducational Assessment, 2013
The first edition of the Gray Oral Reading Tests (GORT, 1963) was written by Dr. William S. Gray, a founding member and the first president of the International Reading Association. The GORT was designed to measure oral reading abilities (i.e., Rate, Accuracy, Fluency, and Comprehension) of students in Grades 2 through 12 due to the noteworthy…
Descriptors: Oral Reading, Reading Tests, Children, Testing
Borsboom, Denny – Measurement: Interdisciplinary Research and Perspectives, 2012
Paul E. Newton provides an insightful and scholarly overview of central issues in validity theory. As he notes, many of the conceptual problems in validity theory derive from the fact that the word "validity" has two meanings. First, it indicates "whether a test measures what it purports to measure." This is a factual claim about the psychometric…
Descriptors: Validity, Psychometrics, Test Interpretation, Scores
Kranzler, John H.; Benson, Nicholas; Floyd, Randy G. – International Journal of School & Educational Psychology, 2016
This article briefly reviews the history of intellectual assessment of children and youth in the United States of America, as well as current practices and future directions. Although administration of intelligence tests in the schools has been a longstanding practice in the United States, their use has also elicited sharp controversy over time.…
Descriptors: Intelligence Tests, Children, Youth, Test Construction
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J. – ACT, Inc., 2016
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
Descriptors: Scores, Classification, College Entrance Examinations, Error of Measurement
Henig, Jeffrey R. – Teachers College Record, 2013
Background/Context: Validity issues are often discussed in technical terms, but the context changes when measures enter broad public debate, and a wider range of interests come into play. Purpose: This article, part of a special section of TCR, considers the political dimensions of validity questions as raised by a keynote address and panel…
Descriptors: Testing, Politics of Education, Test Validity, Expertise
Kane, Michael – Journal of Educational Measurement, 2011
Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…
Descriptors: Error of Measurement, Scores, Test Interpretation, Testing
Reed, Deborah K.; Sturges, Keith M. – Remedial and Special Education, 2013
Researchers have expressed concern about "implementation" fidelity in intervention research but have not extended that concern to "assessment" fidelity, or the extent to which pre-/posttests are administered and interpreted as intended. When studying reading interventions, data gathering heavily influences the identification of…
Descriptors: Reading Tests, Fidelity, Pretests Posttests, Intervention
Davies, Alan – Language Testing, 2012
In this article, the author begins by discussing four challenges on the concept of validity. These challenges are: (1) the appeal to logic and syllogistic reasoning; (2) the claim of reliability; (3) the local and the universal; and (4) the unitary and the divisible. In language testing validity cannot be achieved directly but only through a…
Descriptors: Language Tests, Test Validity, Test Reliability, Testing
Jordan, Sally – Computers & Education, 2012
Students were observed directly, in a usability laboratory, and indirectly, by means of an extensive evaluation of responses, as they attempted interactive computer-marked assessment questions that required free-text responses of up to 20 words and as they amended their responses after receiving feedback. This provided more general insight into…
Descriptors: Learner Engagement, Feedback (Response), Evaluation, Test Interpretation