Publication Date
| Publication date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Kim, Sewhan; And Others – Journal of Drug Education, 1985 (peer reviewed)
Describes the theoretical framework, measurement properties, predictive power, and reliability and validity of the Drug Education School Evaluation Instrument (DESEI), used to determine the effectiveness of the North Carolina Drug Education Schools. The DESEI is included. (BH)
Descriptors: Drug Education, Evaluation Methods, Predictive Measurement, Program Effectiveness
Troyka, Lynn Quitman – Writing Program Administration, 1984
Defends the CUNY-WAT against the charges made by Fishman (CS 731 865). Offers suggestions for those wishing to undertake research into the choice of topics for writing assessment tests. (FL)
Descriptors: Essay Tests, Higher Education, Test Format, Test Items
Hammer, Carol – Clearing House, 1986 (peer reviewed)
Explains that a good essay question should be as specific in what it asks for as the students are expected to be in their answers. (HOD)
Descriptors: Essay Tests, Questioning Techniques, Secondary Education, Sentence Structure
Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1983 (peer reviewed)
The purpose of this study was to investigate further the effect of differential item performance by males and females on tests which have different item arrangements. The study allows for a more accurate evaluation of whether differential sensitivity to reinforcement strategies is a factor in performance discrepancies for males and females.…
Descriptors: Feedback, Higher Education, Performance Factors, Quantitative Tests
Green, Kathy – Educational and Psychological Measurement, 1984 (peer reviewed)
Two factors, language difficulty and option set convergence, were experimentally manipulated and their effects on item difficulty assessed. Option convergence was found to have a significant effect on item difficulty while the effect of language difficulty was not significant. (Author/BW)
Descriptors: Difficulty Level, Error Patterns, Higher Education, Multiple Choice Tests
Katz, Irvin R.; Xi, Xiaoming; Kim, Hyun-Joo; Cheng, Peter C. H. – Educational Testing Service, 2004
This research applied a cognitive model to identify item features that lead to irrelevant variance on the Test of Spoken English™ (TSE®). The TSE is an assessment of English oral proficiency and includes an item that elicits a description of a statistical graph. This item type sometimes appears to tap graph-reading skills--an irrelevant…
Descriptors: Test Format, English, Test Items, Language Proficiency
von Davier, Matthias; von Davier, Alina A. – Educational Testing Service, 2004
This paper examines item response theory (IRT) scale transformations and IRT scale linking methods used in the Non-Equivalent Groups with Anchor Test (NEAT) design to equate two tests, X and Y. It proposes a unifying approach to the commonly used IRT linking methods: mean-mean, mean-var linking, concurrent calibration, Stocking and Lord and…
Descriptors: Measures (Individuals), Item Response Theory, Item Analysis, Models
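The linking methods named in this abstract can be hard to picture from the citation alone; the sketch below illustrates the simplest of them, mean-mean linking of common (anchor) items in a NEAT design. The 2PL parameter values are assumptions made for illustration and do not come from the paper.

```python
# A minimal, assumed sketch of mean-mean IRT scale linking in the NEAT design.
# The 2PL anchor-item parameter estimates below are made up for illustration.
import numpy as np

# Anchor-item discriminations (a) and difficulties (b), estimated separately
# on the new-form scale (X) and the reference scale (Y).
a_x = np.array([1.10, 0.85, 1.30, 0.95])
b_x = np.array([-0.40, 0.20, 0.75, -1.10])
a_y = np.array([0.95, 0.74, 1.12, 0.83])
b_y = np.array([-0.15, 0.55, 1.18, -0.66])

# Mean-mean method: slope A from the ratio of mean discriminations,
# intercept B from the mean difficulties, for the transformation
# theta_Y = A * theta_X + B.
A = a_x.mean() / a_y.mean()
B = b_y.mean() - A * b_x.mean()

# Place every new-form item on the reference scale:
# b_Y = A * b_X + B  and  a_Y = a_X / A.
b_linked = A * b_x + B
a_linked = a_x / A
print(A, B)
```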
National Center for Education Statistics (ED), Washington, DC. – 2003
This report, the fifth annual report on the Colorado Student Assessment Program (CSAP), provides policymakers, educators, parents, and the community with a general accounting of the performance of Colorado's third- through tenth-grade students relative to the Colorado State Model…
Descriptors: Academic Achievement, Educational Assessment, Educationally Disadvantaged, Elementary Secondary Education
Zhang, Yanling; Matthews-Lopez, Joy; Dorans, Neil J. – 2003
Statistical procedures for detecting differential item functioning (DIF) are often used to screen items for construct-irrelevant variance. Standard DIF detection procedures focus on only one categorical variable at a time, at an aggregated-group or one-way level, such as gender or ethnicity/race. Building on previous work by P. Hu and N. Dorans (1998), N.…
Descriptors: Classification, College Entrance Examinations, High School Students, High Schools
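As background for the "standard DIF detection procedures" this abstract refers to, the sketch below shows a generic one-way Mantel-Haenszel screen stratified on total score. It is an assumed illustration of the conventional approach, not the procedure this paper investigates.

```python
# An assumed, generic sketch of a one-way Mantel-Haenszel DIF screen,
# stratified on total test score.
import numpy as np

def mantel_haenszel_alpha(item, group, total):
    """Common odds ratio for one studied item.

    item  : 0/1 responses to the studied item
    group : 1 = reference group, 0 = focal group
    total : total test score used as the matching (stratifying) variable
    """
    num = den = 0.0
    for k in np.unique(total):
        mask = total == k
        a = np.sum((group == 1) & (item == 1) & mask)  # reference, correct
        b = np.sum((group == 1) & (item == 0) & mask)  # reference, incorrect
        c = np.sum((group == 0) & (item == 1) & mask)  # focal, correct
        d = np.sum((group == 0) & (item == 0) & mask)  # focal, incorrect
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else np.nan

def mh_d_dif(alpha):
    """ETS delta-metric effect size; absolute values of 1.5 or more flag large DIF."""
    return -2.35 * np.log(alpha)
```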
Michaelides, Michalis P. – 2003
The delta-plot method is used to identify which common items in a common item nonequivalent groups design for test equating show large changes in their p-values across administrations. Outliers in that plot denote differential item behavior and are candidates for exclusion from the common item pool. This study investigated whether keeping or…
Descriptors: Achievement Gains, Equated Scores, High School Students, High Schools
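The delta-plot logic summarized in this abstract can be made concrete with a short sketch: convert each administration's common-item p-values to the ETS delta scale, fit the principal-axis line through the paired deltas, and flag items lying far from it. The data and the 1.5-delta flagging threshold below are illustrative assumptions, not values from the study.

```python
# A rough sketch of the delta-plot method, assuming SciPy is available.
import numpy as np
from scipy.stats import norm

def deltas(p):
    """ETS delta scale: delta = 13 + 4 * inverse-normal(1 - p)."""
    return 13.0 + 4.0 * norm.ppf(1.0 - np.asarray(p, dtype=float))

def delta_plot_outliers(p_admin1, p_admin2, threshold=1.5):
    """Flag common items whose deltas fall far from the principal-axis line."""
    d1, d2 = deltas(p_admin1), deltas(p_admin2)
    s1, s2 = d1.std(ddof=1), d2.std(ddof=1)
    cov = np.corrcoef(d1, d2)[0, 1] * s1 * s2
    # Principal-axis (major-axis) slope and intercept through the delta scatter.
    slope = (s2**2 - s1**2 + np.sqrt((s2**2 - s1**2) ** 2 + 4 * cov**2)) / (2 * cov)
    intercept = d2.mean() - slope * d1.mean()
    # Perpendicular distance of each item from that line.
    dist = (slope * d1 - d2 + intercept) / np.sqrt(slope**2 + 1)
    return np.where(np.abs(dist) > threshold)[0]

p_old = [0.62, 0.45, 0.71, 0.38, 0.55, 0.66, 0.50, 0.30]
p_new = [0.60, 0.44, 0.35, 0.37, 0.56, 0.64, 0.51, 0.29]  # item 2 drifts sharply
print(delta_plot_outliers(p_old, p_new))  # -> [2]
```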
Virginia Department of Education, 2005
The purpose of this report is to inform users and other interested parties about the development, content and technical characteristics of the Virginia Standards of Learning (SOL) assessments. It provides information for the 2003 "SOL" cycle that comprises the fall 2003 and spring 2004 administrations. The report is divided into three…
Descriptors: Research Reports, Testing Programs, State Standards, Program Effectiveness
Neustel, Sandra – 2001
As a continuing part of its validity studies, the Association of American Medical Colleges commissioned a study of the speediness of the Medical College Admission Test (MCAT). If speed is a hidden part of the test, it is a threat to its construct validity. As a general rule, the criterion used to indicate lack of speediness is that 80% of the…
Descriptors: College Applicants, College Entrance Examinations, Higher Education, Medical Education
Reese, Lynda M. – 1999
This study extended prior Law School Admission Council (LSAC) research related to the item response theory (IRT) local item independence assumption into the realm of classical test theory. Initially, results from the Law School Admission Test (LSAT) and two other tests were investigated to determine the approximate state of local item independence…
Descriptors: College Entrance Examinations, Item Response Theory, Law Schools, Test Construction
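One common check of the local item independence assumption mentioned in this abstract is Yen's Q3 statistic: correlate item residuals after subtracting the probabilities implied by the fitted IRT model. The sketch below assumes 2PL parameters and ability estimates are already in hand; it is illustrative only and not the analysis reported here.

```python
# A rough sketch of Yen's Q3 check for local item dependence under an
# assumed 2PL model; responses, abilities, and item parameters are taken
# as given inputs and are not from the LSAT analyses.
import numpy as np

def q3_matrix(responses, theta, a, b):
    """Correlations between item residuals after removing the modeled part.

    responses : (n_persons, n_items) matrix of 0/1 scores
    theta     : (n_persons,) ability estimates
    a, b      : (n_items,) 2PL discrimination and difficulty estimates
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))  # expected P(correct)
    residuals = responses - p
    return np.corrcoef(residuals, rowvar=False)

# Item pairs with Q3 well above the average off-diagonal value (for example,
# items sharing a reading passage) are candidates for local dependence.
```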
Reese, Lynda M.; Pashley, Peter J. – 1999
This study investigated the practical effects of local item dependence (LID) on item response theory (IRT) true-score equating. A scenario was defined that emulated the Law School Admission Test (LSAT) preequating model, and data were generated to assess the impact of different degrees of LID on final equating outcomes. An extreme amount of LID…
Descriptors: College Entrance Examinations, Equated Scores, Item Response Theory, Law Schools
Reese, Lynda M. – 1995
This study explored the impact of various degrees of violations of the item response theory (IRT) local independence assumption on the Law School Admission Test (LSAT) calibration and score distribution estimates. Initially, results from the LSAT and two other tests were investigated to determine the approximate state of local item dependence…
Descriptors: College Applicants, College Entrance Examinations, Higher Education, Item Response Theory


