Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 8 |
Author
Herman, Joan L. | 3 |
Bauer, Malcolm I. | 2 |
Jin, Hui | 2 |
Moore, John C. | 2 |
Pressler, Yamina | 2 |
Yestness, Nissa | 2 |
van Rijn, Peter | 2 |
Baker, Eva L. | 1 |
Bennett, Jessica G. | 1 |
Bracey, Gerald W. | 1 |
Burling, Kelly S. | 1 |
Education Level
Elementary Secondary Education | 2 |
Higher Education | 1 |
Postsecondary Education | 1 |
Audience
Practitioners | 5 |
Teachers | 3 |
Administrators | 1 |
Community | 1 |
Students | 1 |
Location
United Kingdom | 1 |
Vermont | 1 |
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 2 |
Assessments and Surveys
Clinical Evaluation of… | 1 |
Expressive One Word Picture… | 1 |
National Assessment of… | 1 |
Peabody Picture Vocabulary… | 1 |
SAT (College Admission Test) | 1 |
Jin, Hui; van Rijn, Peter; Moore, John C.; Bauer, Malcolm I.; Pressler, Yamina; Yestness, Nissa – International Journal of Science Education, 2019
This article provides a validation framework for research on the development and use of science Learning Progressions (LPs). The framework describes how evidence from various sources can be used to establish an interpretive argument and a validity argument at five stages of LP research--development, scoring, generalisation, extrapolation, and use.…
Descriptors: Sequential Approach, Educational Research, Science Education, Validity
Bennett, Jessica G.; Gardner, Ralph, III; Rizzi, Gleides Lopes – American Annals of the Deaf, 2013
Strong correlations exist between signed and/or spoken English and the literacy skills of deaf and hard of hearing students. Assessments that are both valid and reliable are key for researchers and practitioners investigating the signed and/or spoken English skills of signing populations. The authors conducted a literature review to explore which…
Descriptors: Deafness, Hearing Impairments, Sign Language, Language Skills
American Educational Research Association (AERA), 2014
Developed jointly by the American Educational Research Association, American Psychological Association, and the National Council on Measurement in Education, "Standards for Educational and Psychological Testing" (Revised 2014) addresses professional and technical issues of test development and use in education, psychology, and…
Descriptors: Standards, Educational Testing, Psychological Testing, Test Construction
Stenlund, Tova – Assessment & Evaluation in Higher Education, 2010
The process of giving official acknowledgment to formal, informal and non-formal prior learning is commonly labelled as assessment, accreditation or recognition of prior learning (APL), representing a practice that is expanding in higher education in many countries. This paper focuses specifically on the assessment part of APL, which undoubtedly…
Descriptors: Higher Education, Validity, Prior Learning, Program Effectiveness
Nichols, Paul D.; Meyers, Jason L.; Burling, Kelly S. – Educational Measurement: Issues and Practice, 2009
Assessments labeled as formative have been offered as a means to improve student achievement. But labels can be a powerful way to miscommunicate. For an assessment use to be appropriately labeled "formative," both empirical evidence and reasoned arguments must be offered to support the claim that improvements in student achievement can be linked…
Descriptors: Academic Achievement, Tutoring, Student Evaluation, Evaluation Methods
Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald – Assessment and Accountability Comprehensive Center, 2010
This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias, accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…
Descriptors: Multiple Choice Tests, Test Items, Benchmarking, Educational Assessment
Shepard, Lorrie A. – Educational Measurement: Issues and Practice, 2009
In many school districts, the pressure to raise test scores has created overnight celebrity status for formative assessment. Its powers to raise student achievement have been touted, however, without attending to the research on which these claims were based. Sociocultural learning theory provides theoretical grounding for understanding how…
Descriptors: Learning Theories, Validity, Student Evaluation, Evaluation Methods
Ediger, Marlow – 2001
It is difficult to know what information should be included on state report cards to enable comparisons among school districts and among different states. There may be many problems with such report cards, ranging from the possibility of computer error to the chance of reporting test scores that are not reliable or valid, or the use of tests…
Descriptors: Academic Achievement, Comparative Analysis, Elementary Secondary Education, Reliability
Redfield, Doris – 2001
The purpose of this document is to provide practical guidance and support for the design, development, and implementation of large-scale assessment systems that are grounded in research and best practice. Information is included about existing large-scale testing efforts, including national testing programs, state testing programs, and…
Descriptors: Educational Assessment, Elementary Secondary Education, Reliability, State Programs
Lissitz, Robert W., Ed.; Schafer, William D., Ed. – 2002
This book originated from a conference arranged to honor the professional contributions of William D. Schafer on his retirement. In recognition of his interest in the subject, each chapter is an effort by one or more professionals to describe what they think the future of assessment should be. The chapters are: (1) What Will Teachers Know about…
Descriptors: Data Collection, Disabilities, Educational Assessment, Educational Change
Baker, Eva L. – 2000
With regard to testing, technology has already played significant roles in the form of scoring technology, analytical practices, and development strategies. Two common functions of technology and testing are currently at work. First is the use of technology to meet existing goals more efficiently, and second is the use of technology to expand the…
Descriptors: Accountability, Educational Quality, Educational Research, Educational Technology
Wilde, Sandra – 2002
This reference guide contains clear and concise explanations of concepts related to educational testing and standards. The book may be read straight through as a primer on educational assessment or may be used as a reference for particular topics. The sections are: (1) Accountability (Consumers, Taxpayers, and Citizens); (2) Authenticity in…
Descriptors: Academic Standards, Accountability, Achievement Tests, Educational Testing
Linn, Robert L.; Gronlund, Norman E. – 2000
This book is intended to introduce the classroom teacher and prospective teacher to the elements of measurement and assessment that are essential to good teaching. The main theme is that assessment plays an important role in the instructional process. This edition has been revised to reflect major changes in educational assessment since the last…
Descriptors: Educational Assessment, Elementary Secondary Education, Instructional Effectiveness, Measurement Techniques

Gredler, Margaret E. – Studies in Educational Evaluation, 1995
Different meanings of portfolio assessment are reviewed, and potential applications to program evaluation are explored. At present, portfolio assessments are not recommended as the primary source of evidence about the attainment of program goals in evaluations that compare curricula or programs because of the lack of validity and reliability…
Descriptors: Alternative Assessment, Comparative Analysis, Curriculum, Educational Assessment