Showing 1 to 15 of 69 results
Peer reviewed
Schoenfeld, Alan H. – Assessment in Education: Principles, Policy & Practice, 2017
The challenge of "educational" assessments--assessments that advance the purposes of learning and instruction--is to provide useful information regarding students' progress towards the goals of instruction in ways that are reliable and not idiosyncratic. In this commentary, the author indicates that the challenges are actually more…
Descriptors: Educational Assessment, Learning, Student Evaluation, Psychometrics
Peer reviewed
Zumbo, Bruno D.; Hubley, Anita M. – Assessment in Education: Principles, Policy & Practice, 2016
Ultimately, measures in research, testing, assessment and evaluation are used, or have implications, for ranking, intervention, feedback, decision-making or policy purposes. Explicit recognition of this fact brings the often-ignored and sometimes maligned concept of consequences to the fore. Given that measures have personal and social…
Descriptors: Testing Programs, Testing Problems, Measurement Techniques, Student Evaluation
Peer reviewed
Massey, Chris L.; Gambrell, Linda B. – Literacy Research and Instruction, 2014
Literacy educators and researchers have long recognized the importance of increasing students' writing proficiency across age and grade levels. With the release of the Common Core State Standards (CCSS), a new and greater emphasis is being placed on writing in the K-12 curriculum. Educators, as well as the authors of the CCSS, agree that…
Descriptors: Writing Evaluation, State Standards, Instructional Effectiveness, Writing Ability
Peer reviewed
RiCharde, R. Stephen – Assessment Update, 2009
This article presents the author's response to Arend Flick. The author states that Flick is correct that the issue of rubrics is broader than interrater reliability, though it is the assessment practitioner's primary armament against what the author has heard dubbed "refried bean counting" (insinuating that assessment statistics are not just bean…
Descriptors: Interrater Reliability, Scoring Rubrics, Critical Thinking, Student Evaluation
Peer reviewed
Flick, Arend – Assessment Update, 2009
This article presents the author's critique of R. Stephen RiCharde's argument in his essay on the humanities and interrater reliability in the July-August 2008 issue of "Assessment Update." RiCharde suggests that the humanities' historical commitment to a dialectical pedagogy, a "nonlinear" process that values disagreement and debate, is at odds…
Descriptors: Interrater Reliability, Humanities, Scoring Rubrics, Student Evaluation
Peer reviewed
Sinharay, Sandip; Haberman, Shelby J. – Measurement: Interdisciplinary Research and Perspectives, 2009
In this commentary, the authors discuss some of the issues regarding the use of diagnostic classification models that practitioners should keep in mind. In the authors' experience, these issues are not as well known as they should be. The authors then provide recommendations on diagnostic scoring.
Descriptors: Scoring, Reliability, Validity, Classification
Peer reviewed
Suen, Hoi K.; And Others – Journal of Early Intervention, 1995
This paper suggests that in addressing the issue of parent-professional congruence in child assessment, researchers should avoid focusing on the conventional aspects of interrater reliability and rater interchangeability, but rather should focus on the reliability of the pooled assessment information from parents and professionals. A…
Descriptors: Disabilities, Early Childhood Education, Early Intervention, Evaluation Methods
Hoachlander, E. Gareth – Techniques: Making Education and Career Connections, 1998
Discusses state testing, various types of tests, and whether the increased attention to assessment is contributing to improved student learning. Describes uses of standardized multiple-choice, open-ended constructed response, essay, performance event, and portfolio methods. (JOW)
Descriptors: Academic Achievement, Student Evaluation, Test Format, Test Reliability
Peer reviewed
Cresswell, M. J. – Educational Review, 1988
The author suggests combining grades from component assessments to provide an overall student assessment. He explores the concept of reliability and concludes that the overall assessment will be reliable only if the number of grades used to report component achievements equals or exceeds the number used to report overall achievement. (Author/CH)
Descriptors: Evaluation Problems, Grades (Scholastic), Holistic Evaluation, Reliability
Thurlow, Martha; Ysseldyke, James – Diagnostique, 1983
The response to a previous article by Mardell-Czudnowski and Lessen notes the lack of common criteria for evaluating the technical adequacy of assessment instruments used with handicapped children and stresses the need to evaluate instruments with the specific populations to which they are administered. (DB)
Descriptors: Disabilities, Disability Identification, Elementary Secondary Education, Student Evaluation
Peer reviewed
Feifer, Irwin – Journal of Cooperative Education, 1980
Critiques two central issues of measurement as applied to the evaluation of student progress in cooperative education: reliability and validity. Urges the synthesis of measurement and professional judgment in all phases of the evaluation process. (SK)
Descriptors: Behavioral Objectives, Cooperative Education, Measurement Objectives, Program Evaluation
Ysseldyke, James E. – 1977
The author traces reasons to support his contention that the state of the art in assessing learning disabled students is not good. Among issues examined are the following: use of tests for purposes other than those for which they were intended; technical adequacy of currently used tests (standardization, reliability, validity); the use of deficit…
Descriptors: Evaluation Methods, Learning Disabilities, Student Evaluation, Test Bias
Peer reviewed
Kinnealey, Moya; Royeen, Charlotte Brasic – Occupational Therapy Journal of Research, 1989
Kinnealey reports on a study comparing tactile functions of 30 learning-disabled and 30 normal eight-year-olds as measured by the Southern California Sensory Integration Tests and the Luria-Nebraska Neuropsychological Battery. Reliability and validity of the two measures were examined. Results showed a significant difference between the tactile…
Descriptors: Learning Disabilities, Sensory Integration, Student Evaluation, Tactual Perception
Peer reviewed
Paratore, Jeanne R. – Topics in Language Disorders, 1995
This article provides a framework for portfolio assessment in which common benchmarks and rubrics provide explicit and shared criteria for judging both the collection of work in the portfolio and individual performance samples. Also addressed are efforts to achieve validity and reliability in teacher, student, and parent judgments while…
Descriptors: Elementary Secondary Education, Evaluation Criteria, Individualized Programs, Literacy
Peer reviewed
Cooper, Eileen – Journal of Creative Behavior, 1991
This paper critiques the following tests of creativity: (1) the Torrance Test of Creative Thinking; (2) the Creativity Assessment Packet; (3) subtests of the Structure of the Intellect Learning Abilities Test; (4) Thinking Creatively with Sounds and Words; (5) Thinking Creatively in Action and Movement; and (6) the Khatena-Torrance Creative…
Descriptors: Creativity, Creativity Tests, Divergent Thinking, Elementary Secondary Education