Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 5
Since 2006 (last 20 years) | 9
Descriptor
Scoring | 26
Test Use | 26
Testing | 26
Test Construction | 12
Language Tests | 10
Test Items | 9
Test Validity | 9
Foreign Countries | 6
Test Reliability | 6
Test Results | 6
Standardized Tests | 5
Education Level
Early Childhood Education | 3
Elementary Education | 3
Grade 3 | 3
Grade 4 | 3
Grade 5 | 3
Grade 6 | 3
Grade 7 | 3
Grade 8 | 3
Intermediate Grades | 3
Junior High Schools | 3
Middle Schools | 3
Audience
Practitioners | 4
Administrators | 1
Policymakers | 1
Teachers | 1
Laws, Policies, & Programs
Individuals with Disabilities… | 1
No Child Left Behind Act 2001 | 1
Assessments and Surveys
National Assessment of… | 1
Preschool and Kindergarten… | 1
Raven Progressive Matrices | 1
Test of English as a Foreign… | 1
Schmidgall, Jonathan E.; Getman, Edward P.; Zu, Jiyun – Language Testing, 2018
In this study, we define the term "screener test," elaborate key considerations in test design, and describe how to incorporate the concepts of practicality and argument-based validation to drive an evaluation of screener tests for language assessment. A screener test is defined as a brief assessment designed to identify an examinee as a…
Descriptors: Test Validity, Test Use, Test Construction, Language Tests
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
New York State Education Department, 2018
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 English Language Arts (ELA) and Mathematics 2018 Operational Tests. This report includes information about test content and test development, item (i.e., individual…
Descriptors: English, Language Arts, Language Tests, Mathematics Tests
International Journal of Testing, 2019
These guidelines describe considerations relevant to the assessment of test takers in or across countries or regions that are linguistically or culturally diverse. The guidelines were developed by a committee of experts to help inform test developers, psychometricians, test users, and test administrators about fairness issues in support of the…
Descriptors: Test Bias, Student Diversity, Cultural Differences, Language Usage
New York State Education Department, 2017
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 English Language Arts (ELA) and Mathematics 2017 Operational Tests. This report includes information about test content and test development, item (i.e., individual…
Descriptors: English, Language Arts, Language Tests, Mathematics Tests
Cheng, Liying; DeLuca, Christopher – Educational Assessment, 2011
Test-takers' interpretations of validity as related to test constructs and test use have been widely debated in large-scale language assessment. This study contributes further evidence to this debate by examining 59 test-takers' perspectives in writing large-scale English language tests. Participants wrote about their test-taking experiences in…
Descriptors: Language Tests, Test Validity, Test Use, English
National Council on Measurement in Education, 2012
Testing and data integrity on statewide assessments is defined as the establishment of a comprehensive set of policies and procedures for: (1) the proper preparation of students; (2) the management and administration of the test(s) that will lead to accurate and appropriate reporting of assessment results; and (3) maintaining the security of…
Descriptors: State Programs, Integrity, Testing, Test Preparation
Marshall, Robert C.; Karow, Colleen M. – American Journal of Speech-Language Pathology, 2008
Purpose: The Rapid Assessment of Problem Solving test (RAPS) is a clinical measure of problem solving based on the 20 Questions Test. This article updates clinicians on the RAPS, addresses questions raised about the test in an earlier article (R. C. Marshall, C. M. Karow, C. Morelli, K. Iden, & J. Dixon, 2003a), and discusses the clinical…
Descriptors: Problem Solving, Cognitive Processes, Cognitive Tests, Clinical Diagnosis
New York State Education Department, 2014
This technical report provides an overview of the New York State Alternate Assessment (NYSAA), including a description of the purpose of the NYSAA, the processes used to develop and implement the NYSAA program, and stakeholder involvement in those processes. The purpose of this report is to document the technical aspects of the 2013-14 NYSAA.…
Descriptors: Alternative Assessment, Educational Assessment, State Departments of Education, Student Evaluation
Bennett, Randy Elliot – 1990
A new assessment conception is described that integrates constructed-response testing, artificial intelligence, and model-based measurement. The conception incorporates complex constructed-response items for their potential to increase the validity, instructional utility, and credibility of standardized tests. Artificial intelligence methods are…
Descriptors: Artificial Intelligence, Constructed Response, Educational Assessment, Measurement Techniques
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests

Gray, T. G. F. – European Journal of Engineering Education, 1987
Reports on a study in which students in a first-year engineering materials class were asked to grade their own examinations and those of others. These grades were then compared with the grades assigned to the same papers by their teachers. (TW)
Descriptors: College Science, Engineering Education, Higher Education, Measurement Objectives
Moss, Jerome, Jr.; Jensrud, Qetler – 1996
This booklet is intended for practitioners interested in administering, hand scoring, and providing individualized feedback reports on the Leader Attributes Inventory (LAI), an inventory designed to provide assessment data on 37 leader attributes. The following topics are discussed in the booklet's six sections: preparing the instruments for use;…
Descriptors: Data Analysis, Feedback, Leadership Effectiveness, Leadership Qualities
Moss, Jerome, Jr.; Jensrud, Qetler – 1996
This booklet is intended for practitioners interested in administering, hand scoring, and providing individualized feedback reports on the Leader Effectiveness Index (LEI), a seven-item instrument designed to provide assessment data on leader effectiveness. The following topics are discussed in the booklet's six sections: preparing the instruments…
Descriptors: Data Analysis, Feedback, Leadership Effectiveness, Leadership Qualities
Northeastern Local School District, Springfield, OH. – 1986
This manual provides screening guidelines for rural preschool and kindergarten programs and describes the development of a model Early Screening Program (ESP) that involves nine child stations and two parent stations. Screening stations developed for assessment included: (1) gross motor; (2) visual motor; (3) visual perception; (4) auditory…
Descriptors: Early Childhood Education, Early Identification, Guidelines, Kindergarten