Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 2
Descriptor
Computer Assisted Testing | 15 |
Test Items | 11 |
Test Construction | 10 |
Adaptive Testing | 8 |
Item Response Theory | 5 |
Test Format | 5 |
Testing Problems | 5 |
Item Banks | 4 |
Psychometrics | 4 |
Test Length | 4 |
Test Validity | 4 |
Source
Journal of Educational Measurement | 4
Journal of Educational and Behavioral Statistics | 3
ETS Research Report Series | 1 |
Educational Measurement: Issues and Practice | 1
Journal of College Admissions | 1 |
Author
Wainer, Howard | 15 |
Kiely, Gerard L. | 2 |
Bradlow, Eric T. | 1 |
Lewis, Charles | 1 |
Robinson, Daniel H. | 1 |
Thissen, David | 1 |
Wang, Xiaohui | 1 |
Publication Type
Journal Articles | 10 |
Reports - Evaluative | 6 |
Reports - Research | 4 |
Reports - Descriptive | 3 |
Opinion Papers | 2 |
Guides - Non-Classroom | 1 |
Information Analyses | 1 |
Audience
Researchers | 2 |
Assessments and Surveys
Advanced Placement… | 1 |
Armed Services Vocational… | 1 |
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2010
In this essay, the author tries to look forward into the 21st century to divine three things: (i) What skills will researchers in the future need to solve the most pressing problems? (ii) What are some of the most likely candidates to be those problems? and (iii) What are some current areas of research that seem mined out and should not distract…
Descriptors: Research Skills, Researchers, Internet, Access to Information
Wainer, Howard; Robinson, Daniel H. – Journal of Educational and Behavioral Statistics, 2007
This article presents an interview with Susan E. Embretson. Embretson attended the University of Minnesota where she received her bachelor's degree in 1967 and earned a PhD in 1973 in psychology. She became an assistant professor at the University of Kansas in 1974 and was promoted to associate professor and full professor. In 2004, she accepted a…
Descriptors: Educational Research, Psychometrics, Cognitive Psychology, Item Response Theory

Wainer, Howard – Journal of Educational and Behavioral Statistics, 2000
Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…
Descriptors: Computer Assisted Testing, Standardized Tests, Test Items, Testing Problems

Wainer, Howard – Journal of Educational Measurement, 1989
This paper reviews the role of the item in test construction, and suggests some new methods of item analysis. A look at dynamic, graphical item analysis is provided that uses the advantages of modern, high-speed, highly interactive computing. Several illustrations are provided. (Author/TJH)
Descriptors: Computer Assisted Testing, Computer Graphics, Graphs, Item Analysis
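
One classical display that newer graphical methods extend is the empirical item-characteristic curve: the proportion answering an item correctly within successive total-score groups. A minimal sketch with invented 0/1 data follows; the function empirical_icc is a hypothetical helper, and this is not the dynamic, interactive analysis the paper itself describes.

    import numpy as np

    def empirical_icc(responses, item, n_groups=5):
        """Proportion correct on one item within total-score groups."""
        total = responses.sum(axis=1)               # each examinee's total score
        order = np.argsort(total)                   # examinees from low to high score
        groups = np.array_split(order, n_groups)    # roughly equal-sized score groups
        return [responses[g, item].mean() for g in groups]

    rng = np.random.default_rng(0)
    fake = (rng.random((200, 20)) < 0.6).astype(int)   # invented 0/1 response matrix
    print(empirical_icc(fake, item=3))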

Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks
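
The 15-item, 16-class figures follow from a four-level hierarchy in which each answer routes the examinee to one of two items at the next level; the short sketch below spells out that arithmetic (the binary branching is inferred from the totals given in the abstract, not quoted from it).

    # Arithmetic for a binary-branching, four-level hierarchical testlet.
    levels = 4
    items_per_level = [2 ** k for k in range(levels)]   # [1, 2, 4, 8] items per level
    total_items = sum(items_per_level)                  # 15 items in the testlet
    score_classes = 2 ** levels                         # 16 distinct right/wrong paths
    print(items_per_level, total_items, score_classes)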

Wainer, Howard – Educational Measurement: Issues and Practice, 1993
Some cautions are sounded for converting a linearly administered test to an adaptive format. Four areas are identified in which practices broadly used in traditionally constructed tests can have adverse effects if thoughtlessly adopted when a test is administered in an adaptive mode. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Practices, Test Construction

Wainer, Howard; And Others – 1991
A series of computer simulations was run to measure the relationship between testlet validity and the factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Results confirmed the generality of earlier empirical findings of H. Wainer and others (1991) that making a testlet adaptive yields only marginal…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks

Wainer, Howard; Thissen, David – 1985
Using simulated item response data, the performance of several "robust" and conventional schemes for ability estimation was evaluated in conjunction with logistic item response theory models (one, two, and three parameter models). The simulated item response data were generated using a model that is more complex than are the usual…
Descriptors: Adaptive Testing, Adults, Computer Assisted Testing, Error of Measurement
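
For orientation, the three-parameter logistic (3PL) model referenced above, together with a crude grid-search maximum-likelihood ability estimate, is sketched below with invented item parameters; it is only a stand-in for the robust and conventional estimation schemes the report actually compares.

    import numpy as np

    def p_correct(theta, a, b, c):
        """Probability of a correct response under the 3PL model."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    def ml_theta(responses, a, b, c, grid=np.linspace(-4, 4, 801)):
        """Grid-search maximum-likelihood ability estimate for one response vector."""
        p = p_correct(grid[:, None], a, b, c)               # ability grid x items
        loglik = (responses * np.log(p)
                  + (1 - responses) * np.log(1 - p)).sum(axis=1)
        return grid[np.argmax(loglik)]

    a = np.array([1.2, 0.8, 1.5])     # invented discriminations
    b = np.array([-0.5, 0.0, 1.0])    # invented difficulties
    c = np.array([0.2, 0.2, 0.25])    # invented guessing parameters
    print(ml_theta(np.array([1, 1, 0]), a, b, c))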

Wainer, Howard; And Others – Journal of Educational Measurement, 1992
Computer simulations were run to measure the relationship between testlet validity and factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Making a testlet adaptive yields only modest increases in aggregate validity because of the peakedness of the typical proficiency distribution. (Author/SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation

Wainer, Howard; Lewis, Charles – Journal of Educational Measurement, 1990
Three different applications of the testlet concept are presented, and the psychometric models most suitable for each application are described. Difficulties that testlets can help overcome include (1) context effects; (2) item ordering; and (3) content balancing. Implications for test construction are discussed. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Elementary Secondary Education, Item Response Theory

Wainer, Howard; And Others – 1991
When an examination consists, in whole or in part, of constructed response items, it is a common practice to allow the examinee to choose among a variety of questions. This procedure is usually adopted so that the limited number of items that can be completed in the allotted time does not unfairly affect the examinee. This results in the de facto…
Descriptors: Adaptive Testing, Chemistry, Comparative Analysis, Computer Assisted Testing

Wang, Xiaohui; Bradlow, Eric T.; Wainer, Howard – ETS Research Report Series, 2005
SCORIGHT is a very general computer program for scoring tests. It models tests made up of dichotomously or polytomously rated items, or any combination of the two, through a generalized item response theory (IRT) formulation. The items can be presented independently or grouped into clumps of allied items (testlets) or in…
Descriptors: Computer Assisted Testing, Statistical Analysis, Test Items, Bayesian Statistics
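
SCORIGHT's own interface is not reproduced here; the sketch below only illustrates the kind of testlet-based IRT term such a program estimates, a two-parameter logistic model with a person-by-testlet effect gamma that absorbs the dependence among items in the same testlet. All names and parameter values are illustrative assumptions.

    import math

    def p_correct(theta, a, b, gamma):
        """P(correct) for one person on one item in a testlet (2PL plus testlet effect)."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b - gamma)))

    # One person (theta = 0.5) answering a three-item testlet whose shared
    # effect gamma = -0.3 shifts all three probabilities together.
    items = [(1.1, -0.2), (0.9, 0.4), (1.4, 1.0)]   # invented (a, b) pairs
    print([round(p_correct(0.5, a, b, -0.3), 3) for a, b in items])
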
Wainer, Howard; And Others – 1990
The initial development of a testlet-based algebra test was previously reported (Wainer and Lewis, 1990). This account provides the details of this excursion into the use of hierarchical testlets and validity-based scoring. A pretest of two 15-item hierarchical testlets was carried out in which examinees' performance on a 4-item subset of each…
Descriptors: Adaptive Testing, Algebra, Comparative Analysis, Computer Assisted Testing

Wainer, Howard – Journal of College Admissions, 1983
Discusses changes in testing as a result of the availability of extensive inexpensive computing and some recent developments in statistical test theory. Describes the role of the Computerized Adaptive Test (CAT) and modern Item Response Theory (IRT) in ability testing tailored to each student's knowledge and ability. (JAC)
Descriptors: Cognitive Ability, College Entrance Examinations, Computer Assisted Testing, Higher Education
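
A computerized adaptive test of the sort described repeatedly picks the most informative unused item at the current ability estimate and then re-estimates ability from the responses so far. The bare-bones loop below, under a two-parameter logistic model with invented item parameters, is a sketch of that idea rather than any operational CAT.

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.uniform(0.7, 1.8, 50)           # invented discriminations for a 50-item pool
    b = rng.uniform(-2.0, 2.0, 50)          # invented difficulties
    true_theta, grid = 0.8, np.linspace(-4, 4, 401)

    def prob(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    used, responses, theta_hat = [], [], 0.0
    for _ in range(10):                                  # administer a 10-item adaptive test
        p = prob(theta_hat, a, b)
        info = a ** 2 * p * (1 - p)                      # Fisher information at theta_hat
        if used:
            info[used] = -np.inf                         # never reuse an item
        j = int(np.argmax(info))
        used.append(j)
        responses.append(int(rng.random() < prob(true_theta, a[j], b[j])))
        # Grid maximum-likelihood update of the ability estimate.
        pg = prob(grid[:, None], a[used], b[used])
        r = np.array(responses)
        ll = (r * np.log(pg) + (1 - r) * np.log(1 - pg)).sum(axis=1)
        theta_hat = float(grid[np.argmax(ll)])
    print(used, round(theta_hat, 2))
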
Wainer, Howard; Kiely, Gerard L. – 1986
Recent experience with the Computerized Adaptive Test (CAT) has raised a number of concerns about its practical applications. The concerns principally involve having the computer construct the test from a precalibrated item pool, substituting statistical characteristics for the test developer's skills. Problems with…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Construct Validity