| Publication Date | Records |
| --- | --- |
| In 2026 | 7 |
| Since 2025 | 690 |
| Since 2022 (last 5 years) | 3191 |
| Since 2017 (last 10 years) | 7432 |
| Since 2007 (last 20 years) | 15070 |
| Descriptor | Records |
| --- | --- |
| Test Reliability | 15055 |
| Test Validity | 10290 |
| Reliability | 9763 |
| Foreign Countries | 7150 |
| Test Construction | 4828 |
| Validity | 4192 |
| Measures (Individuals) | 3880 |
| Factor Analysis | 3826 |
| Psychometrics | 3532 |
| Interrater Reliability | 3126 |
| Correlation | 3040 |
| Audience | Records |
| --- | --- |
| Researchers | 709 |
| Practitioners | 451 |
| Teachers | 208 |
| Administrators | 122 |
| Policymakers | 66 |
| Counselors | 42 |
| Students | 38 |
| Parents | 11 |
| Community | 7 |
| Support Staff | 6 |
| Media Staff | 5 |
| Location | Records |
| --- | --- |
| Turkey | 1329 |
| Australia | 436 |
| Canada | 379 |
| China | 368 |
| United States | 271 |
| United Kingdom | 256 |
| Indonesia | 253 |
| Taiwan | 234 |
| Netherlands | 224 |
| Spain | 218 |
| California | 215 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 8 |
| Meets WWC Standards with or without Reservations | 9 |
| Does not meet standards | 6 |
Wang, Ning; Wiser, Randall F.; Newman, Larry S. – 1999
Job analysis has played a fundamental role in developing and validating licensure and certification examinations, but research on what constitutes reliable and valid job analysis data is lacking. This paper examines the reliability and validity of job analysis survey results. Generalizability theory and the multi-facet Rasch item response theory…
Descriptors: Generalizability Theory, Goodness of Fit, Item Response Theory, Job Analysis
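As background on the generalizability-theory framework this abstract names (standard notation, not taken from the paper itself), the relative generalizability coefficient for a simple one-facet persons-by-raters design is commonly written as:

```latex
% Relative generalizability coefficient for a persons x raters (p x r) design.
% \sigma^2_p      : universe-score (person) variance component
% \sigma^2_{pr,e} : person-by-rater interaction confounded with residual error
% n_r             : number of raters whose ratings are averaged
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e} / n_r}
```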
McCowan, Richard J. – Online Submission, 1999
Item writing is a major responsibility of trainers. Too often, qualified staff who prepare lessons carefully and teach conscientiously use inadequate tests that do not validly reflect the true level of trainee achievement. This monograph describes techniques for constructing multiple-choice items that measure student performance accurately. It…
Descriptors: Multiple Choice Tests, Item Analysis, Test Construction, Test Items
Kulinna, Pamela Hodges; Silverman, Stephen – 1997
A multiple-phase study was conducted to develop a reliable and valid instrument to examine teachers' attitudes toward teaching physical activity and fitness. Thirty-one subjects participated in the preliminary study involving the development of an attitude instrument. Subjects for the content validity study were 28 experts in physical education…
Descriptors: Attitude Measures, Physical Activities, Physical Education Teachers, Physical Fitness
Salaba, Athena – 1996
The purpose of this study was to examine the inter-indexer consistency in the assignment of selected access points between OCLC and RLIN (Research Libraries Information Network). Both databases are used by many libraries as bibliographic sources for copy cataloging. Access points were compared for differences in wording, MARC coding, and…
Descriptors: Bibliographic Databases, Bibliographic Records, Cataloging, Classification
Schulz, E. Matthew; Kolen, Michael J.; Nicewander, W. Alan – 1997
This paper compares modified Guttman and item response theory (IRT) based procedures for classifying examinees in ordered levels when each level is represented by several multiple choice test items. In the modified Guttman procedure, within-level number correct scores are mapped to binary level mastery scores. Examinees are then assigned to levels…
Descriptors: Classification, Comparative Analysis, Item Response Theory, Mathematics Tests
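The truncated abstract describes the modified Guttman procedure only partially; the sketch below fills the gaps with illustrative assumptions (a per-level mastery cutoff and a "highest level with all lower levels mastered" rule) that are not the authors' actual specification:

```python
def classify_modified_guttman(within_level_scores, level_cutoffs):
    """Sketch of a modified-Guttman level assignment (assumptions noted below).

    within_level_scores : number-correct scores, one per ordered level
    level_cutoffs       : minimum score counted as 'mastery' at each level

    The abstract only says that within-level number-correct scores are mapped
    to binary mastery scores and examinees are then assigned to levels; the
    cutoffs and the stop-at-first-nonmastered-level rule used here are
    illustrative assumptions, not the authors' procedure.
    """
    mastery = [score >= cut for score, cut in zip(within_level_scores, level_cutoffs)]
    assigned = 0  # level 0 = below the first level
    for level, mastered in enumerate(mastery, start=1):
        if mastered:
            assigned = level
        else:
            break  # Guttman-style: stop at the first non-mastered level
    return assigned


# Example: three ordered levels, five items each, mastery = 4 of 5 correct.
print(classify_modified_guttman([5, 4, 2], [4, 4, 4]))  # -> 2
```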
Chevalier, Shirley A. – 1998
In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…
Descriptors: Ability, Algorithms, Aptitude Tests, Cognitive Tests
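The abstract does not list which alternative scoring models the paper discusses; as one widely used example of a model that adjusts for guessing, the classical formula-scoring rule can be sketched as follows (illustrative, not the paper's method):

```python
def formula_score(num_right, num_wrong, options_per_item):
    """Classical correction-for-guessing ('formula') score.

    Under random guessing among k options an examinee is expected to get one
    item right for every k-1 wrong, so wrong answers are penalized by 1/(k-1);
    omitted items are simply not counted.
    """
    return num_right - num_wrong / (options_per_item - 1)


# Example: 30 right and 8 wrong on 4-option multiple-choice items.
print(formula_score(30, 8, 4))  # -> 27.33...
```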
Ediger, Marlow – 2001
It is difficult to know the information that should be included on state report cards to enable comparisons among school districts and among different states. There may be many problems with such report cards, ranging from the possibility of computer error to the chance of reporting test scores that are not reliable or valid or the use of tests…
Descriptors: Academic Achievement, Comparative Analysis, Elementary Secondary Education, Reliability
Obiekwe, Jerry C. – 2000
This study examined the internal consistency of the Student Satisfaction Inventory (SSI) (L. Schreiner, S. Juillerat, and Noel-Levitz Group, 1997) and the latent structures of the instrument. The SSI is an instrument that measures students' satisfaction with their college experiences. The SSI was administered to 251 students randomly selected from…
Descriptors: Educational Experience, Factor Analysis, Factor Structure, Higher Education
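The abstract does not name the internal-consistency statistic used; Cronbach's alpha is the standard coefficient for this kind of analysis, and a minimal computation (shown here with simulated data, not the SSI responses) looks like this:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of item scores."""
    k = scores.shape[1]                            # number of items
    item_var = scores.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)


# Example with simulated data: 251 respondents, 10 items driven by one factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(251, 1))
items = latent + rng.normal(size=(251, 10))
print(round(cronbach_alpha(items), 3))
```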
Watson, Freda S.; Lang, Thomas R.; Kromrey, Jeffrey D. – 2002
A team of researchers at the University of South Florida is developing a multimedia program to identify and help students with statistics anxiety. This program, EncStat, includes tests that provide information about a student's level of anxiety and negative attitudes toward statistics, computer anxiety, and study skills, and it contains…
Descriptors: Anxiety, Focus Groups, Graduate Students, Graduate Study
Pietrowiak, Diana; Schibanoff, Sara L. – 2003
To better monitor children and families served by state child welfare agencies, Congress authorized matching funds for the development of statewide automated child welfare information systems (SACWIS) and required that the U.S. Department of Health and Human Services (HHS) compile information on children served by state agencies. This report to…
Descriptors: Adoption, Child Abuse, Child Neglect, Child Welfare
Dings, Jonathan; Childs, Ruth; Kingston, Neal – 2002
This study examined matrix sampling of test content, the practice of giving various students in the same school differing test questions. This often-used approach to large-scale assessment allows for relatively broad coverage of the curriculum, but with fewer comparable individual student scores than a conventional test. One can be sure that…
Descriptors: Junior High School Students, Junior High Schools, Performance Based Assessment, Reliability
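As a toy illustration of the matrix-sampling idea described above (not the study's actual design), an item pool can be split into short forms that are rotated across students, so the group covers the whole pool while each student answers only part of it:

```python
import random

def matrix_sample(item_ids, num_forms, students, seed=0):
    """Toy matrix-sampling assignment (illustrative only).

    Splits the item pool into `num_forms` roughly equal forms and rotates the
    forms across students, so every item is administered to someone even
    though no single student sees the full pool.
    """
    rng = random.Random(seed)
    pool = item_ids[:]
    rng.shuffle(pool)
    forms = [pool[i::num_forms] for i in range(num_forms)]  # spread items across forms
    return {s: forms[i % num_forms] for i, s in enumerate(students)}


items = [f"item{i:02d}" for i in range(1, 31)]       # 30-item pool
students = [f"student{i}" for i in range(1, 7)]
assignment = matrix_sample(items, num_forms=3, students=students)
print(len(assignment["student1"]))  # each student sees 10 of the 30 items
```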
Meehan, Merrill L.; Orletsky, Sandra R.; Sattes, Beth – 1997
A new instrument was designed by S. Hord (1996) to assess globally the maturity of a school's professional staff as a learning community. The Appalachia Educational Laboratory agreed to field test this instrument to determine its reliability and validity and to draw conclusions about its use in educational improvement. The instrument consisted of…
Descriptors: Educational Improvement, Elementary Secondary Education, Field Tests, Learning
Henerson, Marlene E.; Morris, Lynn Lyons; Fitz-Gibbon, Carol Taylor – 1987
The "CSE Program Evaluation Kit" is a series of nine books intended to assist people conducting program evaluations. This volume, sixth in the kit, is designed to help an evaluator select or design credible instruments to measure attitudes. The book discusses problems involved in measuring attitudes, including people's sensitivity about this kind…
Descriptors: Attitude Change, Attitude Measures, Evaluation Methods, Interviews
Wiersma, William – 2001
The Continuous School Improvement Questionnaire (CSIQ) is a comprehensive inventory measuring educators' perceptions of factors that affect success with school improvement. A pilot test of the CSIQ was conducted in Spring 2000, one purpose of which was to reduce the length of the CSIQ. The instrument was reduced from 147 to 72 items. The pilot…
Descriptors: Educational Improvement, Elementary Secondary Education, Field Tests, Institutional Characteristics
White, Sheida; Smith, Connie; Vanneman, Alan – Focus on NAEP, 2000
The National Center for Education Statistics (NCES) has been conducting the National Assessment of Educational Progress (NAEP) since 1969. In addition to conducting regular assessments in reading, mathematics, science, and writing, the NAEP conducts assessments in other subjects, such as geography, U.S. history, civics, and the arts. Each national…
Descriptors: Elementary Secondary Education, National Competency Tests, National Surveys, Reliability


