Showing 1,036 to 1,050 of 1,333 results
Peer reviewed
Chen, Shu-Ying; Ankenman, Robert D. – Journal of Educational Measurement, 2004
The purpose of this study was to compare the effects of four item selection rules--(1) Fisher information (F), (2) Fisher information with a posterior distribution (FP), (3) Kullback-Leibler information with a posterior distribution (KP), and (4) completely randomized item selection (RN)--with respect to the precision of trait estimation and the…
Descriptors: Test Length, Adaptive Testing, Computer Assisted Testing, Test Selection
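The first of the selection rules compared above, maximum Fisher information, is the classical CAT criterion: at each step, administer the pool item with the greatest information at the current ability estimate. A minimal sketch under the three-parameter logistic (3PL) model follows; the item pool and the ability estimate are hypothetical values invented for illustration, not data from the study.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta (Lord, 1980)."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    return (a ** 2) * (q / p) * ((p - c) / (1 - c)) ** 2

# Hypothetical item pool: one row per item, columns are (a, b, c).
pool = np.array([
    [1.2, -0.5, 0.20],
    [0.8,  0.0, 0.25],
    [1.5,  0.3, 0.15],
])

theta_hat = 0.2  # current provisional ability estimate
info = [fisher_information(theta_hat, *item) for item in pool]
best = int(np.argmax(info))  # index of the most informative item
```

The randomized rule (RN) in the abstract would simply replace the `argmax` with a uniform draw from the remaining pool.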
Peer reviewed
Yeh, Stuart S. – Educational Policy, 2006
The No Child Left Behind Act (NCLB) assumes that state-mandated tests provide useful information to school administrators and teachers. However, interviews with administrators and teachers suggest that Minnesota's tests, which are representative of the current generation of state-mandated tests, fail to provide useful information to administrators…
Descriptors: Federal Legislation, Educational Policy, Outcomes of Education, Accountability
Bridgeman, Brent; Rock, Donald A. – 1993
Three new computer-administered item types for the analytical scale of the Graduate Record Examination (GRE) General Test were developed and evaluated. One item type was a free-response version of the current analytical reasoning item type. The second item type was a somewhat constrained free-response version of the pattern identification (or…
Descriptors: Adaptive Testing, College Entrance Examinations, College Students, Computer Assisted Testing
Stocking, Martha L. – 1988
The construction of parallel editions of conventional tests for purposes of test security while maintaining score comparability has always been a recognized and difficult problem in psychometrics and test construction. The introduction of new modes of test construction, e.g., adaptive testing, changes the nature of the problem, but does not make…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Identification
Wang, Xiang-bo; And Others – 1993
An increasingly popular test format allows examinees to choose the items they will answer from among a larger set. When examinee choice is allowed, fairness requires that the different test forms thus formed be equated for their possible differential difficulty. For this equating to be possible, it is necessary to know how well examinees would have…
Descriptors: Adaptive Testing, Advanced Placement, Difficulty Level, Equated Scores
Kim, Seock-Ho; Cohen, Allan S. – 1996
Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, three methods for developing a common metric under item response theory are compared: (1) linking separate…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Difficulty Level
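The truncated abstract does not name the three methods compared, but one widely used way to place separately calibrated item parameters onto a common IRT metric is mean-sigma linking: a linear transformation estimated from the difficulty values of items common to both calibrations. The sketch below uses made-up difficulty estimates and is offered only as an illustration of the general linking idea, not of the study's specific methods.

```python
import numpy as np

def mean_sigma_link(b_source, b_target):
    """Mean-sigma linking: find A, B such that A*b_source + B places the
    source-metric difficulties onto the target metric. Inputs are
    difficulty estimates for the same (common) items from two separate
    calibration runs."""
    A = np.std(b_target) / np.std(b_source)
    B = np.mean(b_target) - A * np.mean(b_source)
    return A, B

# Hypothetical common-item difficulties; the target metric is shifted by 0.5.
b_src = np.array([-1.0, 0.0, 1.0])
b_tgt = np.array([-0.5, 0.5, 1.5])
A, B = mean_sigma_link(b_src, b_tgt)
# A*b_src + B recovers b_tgt exactly when the difference is purely linear
```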
Gershon, Richard C.; And Others – 1994
A 1992 study by R. Gershon found discrepancies when comparing the theoretical Rasch item characteristic curve with the average empirical curve for 1,304 vocabulary items administered to 7,711 students. When person-item mismatches were deleted (for any person-item interaction where the ability of the person was much higher or much lower than the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
Zwick, Rebecca; And Others – 1994
A previous simulation study of methods for assessing differential item functioning (DIF) in computer-adaptive tests (CATs) showed that modified versions of the Mantel-Haenszel and standardization methods work well with CAT data. In that study, data were generated using the three-parameter logistic (3PL) model, and this same model was assumed in obtaining item…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Lord, Frederic M. – 1980
The purpose of this book is to make it possible for measurement specialists to solve practical testing problems through the use of item response theory (IRT). The topics, organization, and presentation are those used in a 4-week seminar held each summer for the past several years. The material is organized to facilitate understanding; all related…
Descriptors: Adaptive Testing, Estimation (Mathematics), Evaluation Problems, Item Analysis
Grist, Susan; And Others – 1989
Computerized adaptive tests (CATs) make it possible to estimate the ability of each student during the testing process. The computer presents items to students at the appropriate level, and students take different versions of the same test. Computerized testing increases the flexibility of test management in that: (1) tests are given on demand and…
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Uses in Education
DeAyala, R. J.; Koch, William R. – 1987
A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
Weiss, David J.; Suhadolnik, Debra – 1982
The present Monte Carlo simulation study was designed to examine the effects of multidimensionality during the administration of computerized adaptive testing (CAT). It was assumed that multidimensionality existed in the individuals to whom test items were being administered, i.e., that the correct or incorrect responses given by an individual…
Descriptors: Adaptive Testing, Computer Assisted Testing, Factor Structure, Latent Trait Theory
Tatsuoka, Kikumi K. – 1982
This study introduced a probabilistic model utilizing item response theory (IRT) for dealing with a variety of misconceptions. The model can be used for evaluating the transition behavior of error types, advancement of learning stages, or the stability and persistence of particular misconceptions. Moreover, it apparently can be used for relating…
Descriptors: Adaptive Testing, Elementary Secondary Education, Error Patterns, Evaluation Methods
Eddins, John M. – 1984
Major efforts of this project fall into four categories: (1) investigations were performed on the relationship between the dimensionality of a dataset and its underlying cognitive processes; (2) two approaches for diagnosing erroneous rules of operation were developed (an "error vector" system for constructing error diagnostic programs…
Descriptors: Achievement Tests, Adaptive Testing, Arithmetic, Computer Assisted Testing
van der Linden, Wim J.; Zwarts, Michel A. – 1986
The use of item response theory (IRT) is a prerequisite to successful use of computerized test systems. In item response models, as opposed to classical test theory, the abilities of the examinees and the properties of the items are parameterized separately. Therefore, when measuring the abilities of examinees, the model implicitly corrects for…
Descriptors: Ability Identification, Adaptive Testing, Aptitude Tests, Computer Assisted Testing
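The separation of person and item parameters described above is what lets an IRT scoring routine "implicitly correct" for item properties: the likelihood of a response pattern is built from known item parameters, so the same ability scale applies whichever items were administered. A minimal sketch of maximum-likelihood ability estimation under the two-parameter logistic (2PL) model, with hypothetical item parameters:

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ml_ability(responses, a, b, n_iter=50):
    """Maximum-likelihood ability estimate via Newton-Raphson,
    treating item parameters a, b as known from prior calibration.
    Requires a mixed response pattern (not all 0s or all 1s)."""
    theta = 0.0
    for _ in range(n_iter):
        p = p_2pl(theta, a, b)
        grad = np.sum(a * (responses - p))    # score function
        info = np.sum(a**2 * p * (1 - p))     # Fisher information
        theta += grad / info
    return theta

a = np.ones(4)
b = np.array([-1.5, -0.5, 0.5, 1.5])  # hypothetical difficulties
theta_low  = ml_ability(np.array([1, 1, 0, 0]), a, b)
theta_high = ml_ability(np.array([1, 1, 1, 0]), a, b)
# an extra correct answer on a harder item raises the estimate
```

Because the item difficulties enter the likelihood directly, two examinees who take different item sets are still scored on the same theta scale.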