Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 6 |
Descriptor
Comparative Analysis | 9 |
Item Banks | 9 |
Testing | 9 |
Test Items | 5 |
Adaptive Testing | 3 |
Computer Assisted Testing | 3 |
Scores | 3 |
Simulation | 3 |
Ability | 2 |
Computer Oriented Programs | 2 |
Difficulty Level | 2 |
Source
Educational and Psychological Measurement | 2 |
ETS Research Report Series | 1 |
Independent School | 1 |
Journal of Educational Measurement | 1 |
New Meridian Corporation | 1 |
Author
Dodd, Barbara G. | 2 |
Weiss, David J. | 2 |
Betz, Nancy E. | 1 |
Choi, Seung W. | 1 |
Grady, Matthew W. | 1 |
Hembry, Ian | 1 |
Kim, Sooyeon | 1 |
Leroux, Audrey J. | 1 |
Li, Jie | 1 |
Lopez, Myriam | 1 |
Lu, Ru | 1 |
Publication Type
Reports - Research | 6 |
Journal Articles | 5 |
Guides - General | 1 |
Reports - Descriptive | 1 |
Reports - Evaluative | 1 |
Tests/Questionnaires | 1 |
Education Level
Elementary Secondary Education | 1 |
Kim, Sooyeon; Lu, Ru – ETS Research Report Series, 2018
The purpose of this study was to evaluate the effectiveness of linking test scores by using test takers' background data to form pseudo-equivalent groups (PEG) of test takers. Using 4 operational test forms that each included 100 items and were taken by more than 30,000 test takers, we created 2 half-length research forms that had either 20…
Descriptors: Test Items, Item Banks, Difficulty Level, Comparative Analysis
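The pseudo-equivalent groups (PEG) idea lends itself to a small illustration. In the Python sketch below, one group of test takers is reweighted so that its background-variable mix matches the other group's before the two forms are compared; the strata, scores, and form effect are all fabricated stand-ins for the background data and operational forms described in the abstract, and this is an illustration of the general weighting idea rather than the authors' procedure.

```python
# Minimal post-stratification sketch of forming pseudo-equivalent groups (PEG).
# All data here are made up; strata stand in for test takers' background variables.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, stratum_probs, stratum_means, form_shift):
    """Simulate scores for one form: stratum = background profile, shift = form effect."""
    strata = rng.choice(len(stratum_probs), size=n, p=stratum_probs)
    scores = rng.normal(loc=np.array(stratum_means)[strata] + form_shift, scale=5.0)
    return strata, scores

# Group X took form X, group Y took form Y; their background mixes differ.
strata_x, scores_x = simulate_group(5000, [0.5, 0.3, 0.2], [45, 55, 65], form_shift=0.0)
strata_y, scores_y = simulate_group(5000, [0.2, 0.3, 0.5], [45, 55, 65], form_shift=-2.0)

# Reweight group Y so its stratum proportions match group X's ("pseudo-equivalent" groups).
props_x = np.bincount(strata_x, minlength=3) / len(strata_x)
props_y = np.bincount(strata_y, minlength=3) / len(strata_y)
weights_y = (props_x / props_y)[strata_y]

naive_diff = scores_x.mean() - scores_y.mean()
peg_diff = scores_x.mean() - np.average(scores_y, weights=weights_y)
print(f"naive form difference: {naive_diff:.2f}, PEG-weighted difference: {peg_diff:.2f}")
```

With these made-up numbers, the weighted comparison recovers the simulated 2-point form effect that the naive group comparison obscures, which is the point of forming pseudo-equivalent groups before linking.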
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
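For a concrete picture of item monitoring under an IRT model, the Python sketch below compares an item's observed number correct with the number predicted from its banked three-parameter logistic (3PL) parameters using a simple z-test; the parameters and responses are invented, and this is not the sequential procedure developed in the article.

```python
# Hedged sketch: flag a possibly drifted item by comparing observed number correct
# with the number predicted from its banked 3PL parameters. Illustrative only.
import numpy as np
from scipy.stats import norm

def p_3pl(theta, a, b, c):
    """Three-parameter logistic probability of a correct response."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

rng = np.random.default_rng(1)
a, b, c = 1.2, 0.3, 0.2          # banked item parameters (hypothetical)
theta = rng.normal(size=2000)    # abilities of examinees who saw the item

# Simulate responses from a *shifted* item (difficulty drifted from 0.3 to 0.8).
responses = rng.random(2000) < p_3pl(theta, a, 0.8, c)

# z-test of observed vs. expected number correct under the banked parameters.
expected_p = p_3pl(theta, a, b, c)
exp_correct = expected_p.sum()
var_correct = (expected_p * (1 - expected_p)).sum()
z = (responses.sum() - exp_correct) / np.sqrt(var_correct)
p_value = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p_value:.4f}  -> flag the item if p is small")
```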
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
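For readers unfamiliar with the procedures being compared, a minimal example of randomesque exposure control under the 3PL model is sketched below in Python: rather than always administering the single most informative item, the selector picks at random among the k most informative ones. The item pool is fabricated, and the PR-SE and Sympson-Hetter procedures themselves are not implemented here.

```python
# Minimal sketch of "randomesque" exposure control under a 3PL model.
# Item parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))
    q = 1 - p
    return (1.7 * a) ** 2 * (q / p) * ((p - c) / (1 - c)) ** 2

# Hypothetical 300-item pool.
a = rng.uniform(0.5, 2.0, 300)
b = rng.normal(0, 1, 300)
c = rng.uniform(0.1, 0.25, 300)

def randomesque_select(theta, administered, k=5):
    """Choose randomly among the k most informative unadministered items."""
    info = info_3pl(theta, a, b, c)
    info[list(administered)] = -np.inf   # exclude items already given
    top_k = np.argsort(info)[-k:]
    return int(rng.choice(top_k))

item = randomesque_select(theta=0.0, administered=set())
print("next item:", item)
```

Spreading selection over the top k items caps how often any single highly informative item is seen, which is the exposure-control trade-off these procedures manage in different ways.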
Lyons, Douglas; Niblock, Andrew W. – Independent School, 2014
Independent schools are, for the most part, exempt from mandatory participation in standardized tests designed for state and federal comparisons, and they are not required to take part in comparative international assessments. The anxiety in the broader culture, however, is driving a growing interest among independent school parents (and prospective…
Descriptors: Global Approach, Comparative Analysis, Comparative Education, Educational Practices
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
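The general mechanism behind such a stopping rule can be sketched with a normal approximation in which the posterior variance is the reciprocal of accumulated Fisher information: stop once administering the most informative remaining item would barely reduce the standard error. The Python sketch below illustrates that mechanism with made-up 2PL items; it is not the PSER rule as specified in the article.

```python
# Sketch of a variance-reduction stopping rule for CAT (illustrative, not PSER itself).
# Posterior SE is approximated as 1/sqrt(total Fisher information at the current theta).
import numpy as np

def item_info(theta, a, b):
    """2PL Fisher information (a simpler stand-in for the operational model)."""
    p = 1.0 / (1.0 + np.exp(-1.7 * a * (theta - b)))
    return (1.7 * a) ** 2 * p * (1 - p)

def should_stop(theta, administered_info, remaining_items, min_reduction=0.01, max_items=30):
    """Stop if the best remaining item would reduce the SE by less than min_reduction."""
    if len(administered_info) >= max_items:
        return True
    current_se = 1.0 / np.sqrt(sum(administered_info))
    best_info = max(item_info(theta, a, b) for a, b in remaining_items)
    predicted_se = 1.0 / np.sqrt(sum(administered_info) + best_info)
    return (current_se - predicted_se) < min_reduction

# Toy usage with hypothetical item parameters (a, b) and information already collected.
pool = [(1.0, -1.0), (1.2, 0.0), (0.8, 0.5), (1.5, 1.0)]
print(should_stop(theta=0.2, administered_info=[0.9, 1.1, 1.3], remaining_items=pool))
```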
Sachar, Jane; Suppes, Patrick – 1977
It is sometimes desirable to obtain an estimated total-test score for an individual who was administered only a subset of the items in a total test. The present study compared six methods, two of which utilize the content structure of items, to estimate total-test scores using 450 students in grades 3-5 and 60 items of the 110-item Stanford Mental…
Descriptors: Comparative Analysis, Elementary Education, Item Analysis, Item Banks
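One simple projection method (not necessarily among the six compared in the report) is to estimate ability from the administered subset under an IRT model and then sum the model-predicted probabilities over all items in the full test. The Python sketch below does this with a Rasch-style model and invented item difficulties.

```python
# Sketch: estimate a total-test score from a subset of items via a Rasch-style model.
# Ability is estimated from the administered subset, then the expected number-correct
# score on the full test is the sum of predicted probabilities. Parameters are made up.
import numpy as np
from scipy.optimize import minimize_scalar

def prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

rng = np.random.default_rng(3)
b_full = rng.normal(0, 1, 110)               # difficulties for a hypothetical 110-item test
subset = rng.choice(110, 60, replace=False)   # the 60 items actually administered

true_theta = 0.7
responses = (rng.random(60) < prob(true_theta, b_full[subset])).astype(int)

# Maximum-likelihood ability estimate from the 60 administered items.
def neg_loglik(theta):
    p = prob(theta, b_full[subset])
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta_hat = minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x

# Expected total-test (110-item) number-correct score.
estimated_total = prob(theta_hat, b_full).sum()
print(f"theta_hat = {theta_hat:.2f}, estimated total score = {estimated_total:.1f} / 110")
```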
Vale, C. David; Weiss, David J. – 1975
A conventional test and two forms of a stradaptive test were administered to thousands of simulated subjects by minicomputer. Characteristics of the three tests using several scoring techniques were investigated while varying the discriminating power of the items, the lengths of the tests, and the availability of prior information about the…
Descriptors: Ability, Branching, Comparative Analysis, Computer Oriented Programs
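A stradaptive (stratified adaptive) test groups the item bank into difficulty strata and branches up a stratum after a correct answer, down after an incorrect one. The Python sketch below shows only that branching mechanic with a fabricated bank and response model; it does not reproduce the scoring techniques compared in the report.

```python
# Sketch of stradaptive branching: items are grouped into difficulty strata;
# a correct answer moves the examinee up a stratum, an incorrect one moves down.
# The bank, examinee, and response model are all hypothetical.
import numpy as np

rng = np.random.default_rng(4)
n_strata = 9
strata_difficulty = np.linspace(-2, 2, n_strata)   # mean difficulty of each stratum

def answer_correct(theta, b):
    """Rasch-style response simulation for one item of difficulty b."""
    return rng.random() < 1.0 / (1.0 + np.exp(-(theta - b)))

def stradaptive_test(theta, n_items=40, start=n_strata // 2):
    stratum, path = start, []
    for _ in range(n_items):
        correct = answer_correct(theta, strata_difficulty[stratum])
        path.append((stratum, correct))
        stratum = min(stratum + 1, n_strata - 1) if correct else max(stratum - 1, 0)
    return path

path = stradaptive_test(theta=0.8)
print("final stratum:", path[-1][0],
      "proportion correct:", np.mean([c for _, c in path]))
```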
Betz, Nancy E.; Weiss, David J. – 1975
A 40-item flexilevel test and a 40-item conventional test were compared using data obtained through (1) computer-administration of the two tests to three groups of college students, and (2) monte carlo simulation of test response patterns. Results indicated the flexilevel score distribution better reflected the underlying normal distribution of…
Descriptors: Ability, College Students, Comparative Analysis, Computer Oriented Programs
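The Monte Carlo side of such a comparison can be sketched briefly: simulate abilities, administer both a fixed conventional form and a flexilevel form (difficulty-ordered items, branching to the next harder unanswered item after a correct response and to the next easier one after an incorrect response), and compare the resulting scores. Everything in the Python sketch below is simulated, and number correct is used as a simple stand-in for the operational flexilevel scoring.

```python
# Monte Carlo sketch comparing a conventional 40-item test with a 40-item flexilevel test
# drawn from 79 difficulty-ordered items. Abilities, items, and responses are all simulated.
import numpy as np

rng = np.random.default_rng(5)

def p_correct(theta, b):
    """Rasch-style probability of a correct response (hypothetical model)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

b_conventional = np.linspace(-2, 2, 40)   # fixed 40-item form
b_flexi = np.linspace(-2.5, 2.5, 79)      # difficulty-ordered flexilevel item set

def conventional_score(theta):
    return int((rng.random(40) < p_correct(theta, b_conventional)).sum())

def flexilevel_score(theta, n_items=40):
    idx = len(b_flexi) // 2               # start at the median-difficulty item
    up, down = idx + 1, idx - 1
    correct = 0
    for _ in range(n_items):
        if rng.random() < p_correct(theta, b_flexi[idx]):
            correct += 1
            idx, up = up, up + 1          # branch to next harder unanswered item
        else:
            idx, down = down, down - 1    # branch to next easier unanswered item
    return correct

thetas = rng.normal(size=2000)            # simulated examinee abilities
conv = np.array([conventional_score(t) for t in thetas])
flex = np.array([flexilevel_score(t) for t in thetas])
print("correlation with true ability:",
      round(np.corrcoef(thetas, conv)[0, 1], 3),
      round(np.corrcoef(thetas, flex)[0, 1], 3))
```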