Mazor, Kathleen M.; And Others – 1993
The Mantel-Haenszel (MH) procedure has become one of the most popular procedures for detecting differential item functioning (DIF). One of the most troublesome criticisms of this procedure is that while detection rates for uniform DIF are very good, the procedure is not sensitive to non-uniform DIF. In this study, examinee responses were generated…
Descriptors: Comparative Testing, Computer Simulation, Item Bias, Item Response Theory
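For context, the Mantel-Haenszel procedure the abstract refers to pools a single 2x2 table (group by correct/incorrect) from each total-score stratum into one common odds-ratio estimate. A minimal sketch follows; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def mantel_haenszel_odds_ratio(correct, group, total_score):
    """Common odds-ratio estimate for one studied item.

    correct     : 0/1 responses to the item
    group       : 0 = reference group, 1 = focal group
    total_score : matching variable (e.g., number-correct score)

    Values far from 1.0 suggest uniform DIF.
    """
    correct = np.asarray(correct)
    group = np.asarray(group)
    total_score = np.asarray(total_score)
    num = den = 0.0
    for k in np.unique(total_score):        # one 2x2 table per score stratum
        s = total_score == k
        a = np.sum((group[s] == 0) & (correct[s] == 1))  # reference, correct
        b = np.sum((group[s] == 0) & (correct[s] == 0))  # reference, incorrect
        c = np.sum((group[s] == 1) & (correct[s] == 1))  # focal, correct
        d = np.sum((group[s] == 1) & (correct[s] == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else float("nan")
```

Because the statistic aggregates one odds ratio across all strata, effects that reverse direction between low- and high-ability strata can cancel, which is why detection is strong for uniform DIF but weak for non-uniform DIF.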
Youngjohn, James R.; And Others – 1991
Test-retest reliabilities and practice effect magnitudes were considered for nine computer-simulated tasks of everyday cognition and five traditional neuropsychological tests. The nine simulated everyday memory tests were from the Memory Assessment Clinic battery as follows: (1) simple reaction time while driving; (2) divided attention (driving…
Descriptors: Adults, Comparative Testing, Computer Assisted Testing, Computer Simulation
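The two quantities reported per task reduce to simple paired-sample statistics. A minimal sketch, assuming two aligned score vectors from the test and retest sessions:

```python
import numpy as np

def test_retest_summary(time1, time2):
    """Test-retest reliability (Pearson r) and practice effect
    (mean retest gain) for one task; illustrative only."""
    time1, time2 = np.asarray(time1), np.asarray(time2)
    r = np.corrcoef(time1, time2)[0, 1]   # reliability coefficient
    practice_effect = np.mean(time2 - time1)  # positive = improvement
    return r, practice_effect
```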
Chang, Yu-Wen; Davison, Mark L. – 1992
Standard errors and bias of unidimensional and multidimensional ability estimates were compared in a factorial simulation design with two item response theory (IRT) approaches, two levels of test correlation (0.42 and 0.63), two sample sizes (500 and 1,000), and a hierarchical test content structure. Bias and standard errors of subtest scores…
Descriptors: Comparative Testing, Computer Simulation, Correlation, Error of Measurement
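In simulation studies of this kind, bias and standard error are computed against the known generating abilities within each design cell. A minimal sketch (the factorial cell structure is assumed, not shown):

```python
import numpy as np

def bias_and_se(theta_hat, theta_true):
    """Bias and empirical standard error of ability estimates
    over replications with known generating abilities."""
    theta_hat = np.asarray(theta_hat)
    bias = np.mean(theta_hat - np.asarray(theta_true))
    se = np.std(theta_hat, ddof=1)        # spread of the estimates
    return bias, se
```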
Sykes, Robert C.; And Others – 1992
A part-form methodology was used to study the effect of varying degrees of multidimensionality on the consistency of pass/fail classification decisions obtained from simulated unidimensional item response theory (IRT) based licensure examinations. A control on the degree of form multidimensionality permitted an assessment throughout the range of…
Descriptors: Classification, Comparative Testing, Computer Simulation, Decision Making
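Consistency of pass/fail decisions across part-forms is typically summarized by raw agreement and a chance-corrected index. A sketch under that assumption (the paper's exact indices are not specified in the abstract):

```python
import numpy as np

def classification_consistency(pass1, pass2):
    """Agreement of pass/fail decisions from two part-forms:
    proportion agreement and chance-corrected kappa."""
    pass1, pass2 = np.asarray(pass1), np.asarray(pass2)
    p_agree = np.mean(pass1 == pass2)
    p1, p2 = pass1.mean(), pass2.mean()
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)  # expected agreement by chance
    kappa = (p_agree - p_chance) / (1 - p_chance)
    return p_agree, kappa
```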
De Ayala, R. J.; Koch, William R. – 1987
A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
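The three-parameter logistic (3PL) model named in the abstract is standard; a brief sketch of its response probability and of generating simulated responses for a known ability (item parameter values here are illustrative):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response:
    P = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b))),
    with discrimination a, difficulty b, and guessing c."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

rng = np.random.default_rng(0)
theta_true = 0.5                       # known generating ability
a, b, c = 1.2, 0.0, 0.2                # illustrative item parameters
responses = rng.random(50) < p_3pl(theta_true, a, b, c)
```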
De Ayala, R. J. – 1992
One important and promising application of item response theory (IRT) is computerized adaptive testing (CAT). The implementation of a nominal response model-based CAT (NRCAT) was studied. Item pool characteristics for the NRCAT as well as the comparative performance of the NRCAT and a CAT based on the three-parameter logistic (3PL) model were…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
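The nominal response model underlying the NRCAT assigns a probability to every response category, not just correct/incorrect. A minimal sketch of Bock's category probabilities (parameter names are illustrative):

```python
import numpy as np

def nominal_response_probs(theta, a, c):
    """Bock's nominal response model: for category k,
    P_k = exp(a_k * theta + c_k) / sum_j exp(a_j * theta + c_j),
    where a and c are per-category slope and intercept vectors."""
    z = np.asarray(a) * theta + np.asarray(c)
    z -= z.max()                       # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

Because every distractor carries information under this model, an NRCAT can in principle estimate ability from response patterns that a dichotomous 3PL CAT would score only as wrong.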
Kim, Haeok; Plake, Barbara S. – 1993
A two-stage testing strategy is one method of adapting the difficulty of a test to an individual's ability level in an effort to achieve more precise measurement. A routing test provides an initial estimate of ability level, and a second-stage measurement test then evaluates the examinee further. The measurement accuracy and efficiency of item…
Descriptors: Ability, Adaptive Testing, Comparative Testing, Computer Assisted Testing
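The routing step the abstract describes amounts to mapping a routing-test score to one of several second-stage forms. A minimal sketch; the cutoffs and form labels are purely hypothetical:

```python
def route_examinee(routing_score, cutoffs=(10, 20)):
    """Assign a second-stage measurement test from a routing-test
    score; cutoffs here are illustrative, not from the study."""
    if routing_score < cutoffs[0]:
        return "easy form"
    elif routing_score < cutoffs[1]:
        return "medium form"
    return "hard form"
```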
Du Bose, Pansy; Kromrey, Jeffrey D. – 1993
Empirical evidence is presented of the relative efficiency of two potential linkage plans to be used when equivalent test forms are being administered. Equating is a process by which scores on one form of a test are converted to scores on another form of the same test. A Monte Carlo study was conducted to examine equating stability and statistical…
Descriptors: Art Education, Comparative Testing, Computer Simulation, Equated Scores
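The conversion the abstract defines (scores on one form expressed on the scale of another) is, in its simplest linear form, a matter of matching means and standard deviations. A sketch of that baseline case, which a Monte Carlo equating study might use as a reference point:

```python
import numpy as np

def linear_equate(x_scores, y_scores):
    """Linear equating: map Form X scores onto the Form Y scale via
    y(x) = mu_Y + (sd_Y / sd_X) * (x - mu_X)."""
    mu_x, sd_x = np.mean(x_scores), np.std(x_scores, ddof=1)
    mu_y, sd_y = np.mean(y_scores), np.std(y_scores, ddof=1)
    return lambda x: mu_y + (sd_y / sd_x) * (x - mu_x)
```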