Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 3
Descriptor
Simulation: 14
College Entrance Examinations: 10
Adaptive Testing: 7
Computer Assisted Testing: 7
Item Response Theory: 7
Law Schools: 7
Estimation (Mathematics): 4
Matrices: 4
Test Items: 4
Ability: 3
Sample Size: 3
Source
Applied Psychological…: 2
Educational and Psychological…: 1
Journal of Educational…: 1
Psicologica: International…: 1
Publication Type
Reports - Research: 9
Journal Articles: 5
Reports - Evaluative: 4
Speeches/Meeting Papers: 2
Reports - Descriptive: 1
Assessments and Surveys
Law School Admission Test: 14
Program for International…: 1
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao – Educational and Psychological Measurement, 2013
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of the Supplemented EM algorithm for…
Descriptors: Item Response Theory, Computation, Matrices, Statistical Inference
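The role of the error covariance matrix can be illustrated numerically: for a model with an explicit likelihood, the parameter error covariance is the inverse of the observed information, which can be approximated by finite differences at the maximum likelihood estimate. A minimal single-parameter sketch (Rasch difficulty with known abilities; the data are invented, and this is generic numerical differentiation, not the Supplemented EM procedure discussed in the abstract):

```python
import math

def rasch_p(theta, b):
    # Rasch model: probability of a correct response
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# toy data: known abilities and 0/1 responses to one item
thetas = [-1.5, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, -1.0]
resps  = [0, 0, 1, 0, 1, 1, 1, 0]

def negloglik(b):
    return -sum(x * math.log(rasch_p(t, b)) + (1 - x) * math.log(1.0 - rasch_p(t, b))
                for t, x in zip(thetas, resps))

# crude MLE by grid search over b in [-3, 3]
b_hat = min((b / 100.0 for b in range(-300, 301)), key=negloglik)

# observed information by a central second difference; its inverse
# (here a scalar) is the error variance, and its square root the SE
h = 1e-4
info = (negloglik(b_hat + h) - 2.0 * negloglik(b_hat) + negloglik(b_hat - h)) / h**2
se = 1.0 / math.sqrt(info)
```

With more parameters the same idea yields a full Hessian whose inverse is the error covariance matrix; the point of the Supplemented EM work is obtaining that matrix efficiently within EM rather than by brute-force differentiation.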
Veldkamp, Bernard P. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…
Descriptors: Selection, Criteria, Bayesian Statistics, Computer Assisted Testing
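One common Bayesian item selection criterion of the kind discussed here is posterior-weighted information: update a posterior over ability on a grid after each response, then pick the item whose Fisher information, averaged over that posterior, is largest. A small 2PL sketch (the item pool and grid are invented for illustration):

```python
import math

def p2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    # 2PL Fisher information: a^2 * p * (1 - p)
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

grid = [i / 10.0 for i in range(-40, 41)]          # theta grid
prior = [math.exp(-t * t / 2.0) for t in grid]     # N(0,1) up to a constant

def posterior(responses):
    # responses: list of (a, b, x) for items already administered
    post = prior[:]
    for a, b, x in responses:
        for i, t in enumerate(grid):
            p = p2pl(t, a, b)
            post[i] *= p if x == 1 else (1.0 - p)
    s = sum(post)
    return [w / s for w in post]

def select_item(pool, responses):
    """Pick the pool item with maximum posterior-weighted information."""
    post = posterior(responses)
    return max(pool, key=lambda ab: sum(w * info(t, *ab)
                                        for w, t in zip(post, grid)))
```

The constrained-CAT question the abstract raises is how to keep such a criterion while also honoring many content constraints during selection.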
Ariel, Adelaide; van der Linden, Wim J.; Veldkamp, Bernard P. – Journal of Educational Measurement, 2006
Item-pool management requires a balancing act between the input of new items into the pool and the output of tests assembled from it. A strategy for optimizing item-pool management is presented that is based on the idea of a periodic update of an optimal blueprint for the item pool to tune item production to test assembly. A simulation study with…
Descriptors: Item Banks, Simulation, Interaction, Test Construction
Reese, Lynda M. – 1999
This study represented a first attempt to evaluate the impact of local item dependence (LID) for Item Response Theory (IRT) scoring in computerized adaptive testing (CAT). The most basic CAT design and a simplified design for simulating CAT item pools with varying degrees of LID were applied. A data generation method that allows the LID among…
Descriptors: College Entrance Examinations, Item Response Theory, Law Schools, Scoring
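A standard way to simulate local item dependence, broadly in the spirit of what the abstract describes (the study's exact generation method is not reproduced here), is a testlet-style shared random effect: items in the same group share a draw that shifts effective ability, inducing dependence beyond theta.

```python
import math
import random

random.seed(7)

def p_rasch(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_testlet(theta, b_list, sigma_gamma):
    """Simulate one examinee's responses to a testlet.

    A shared random effect gamma induces local dependence among the
    items; sigma_gamma = 0 recovers local independence.
    """
    gamma = random.gauss(0.0, sigma_gamma)
    return [1 if random.random() < p_rasch(theta + gamma, b) else 0
            for b in b_list]
```

Varying `sigma_gamma` gives item pools with varying degrees of LID, which is the kind of manipulation the simulation design above requires.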

van der Linden, Wim J.; Reese, Lynda M. – Applied Psychological Measurement, 1998
Proposes a model for constrained computerized adaptive testing in which the information in the test at the trait level (theta) estimate is maximized subject to a number of possible constraints on the content of the test. Test assembly relies on a linear-programming approach. Illustrates the approach through simulation with items from the Law…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
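The core optimization — maximize test information at the current theta estimate subject to content constraints — can be sketched on a toy pool by exhaustive search. Operational implementations solve this as a 0-1 linear program, as the abstract notes; the pool, content areas, and constraint values below are invented.

```python
import itertools
import math

def info_2pl(theta, a, b):
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# toy pool: (item_id, a, b, content_area)
pool = [
    (1, 1.2, -0.5, "logic"), (2, 0.9, 0.0, "logic"), (3, 1.5, 0.3, "logic"),
    (4, 1.1, -1.0, "reading"), (5, 1.4, 0.5, "reading"), (6, 0.8, 1.0, "reading"),
]

def assemble(theta, length, min_per_area):
    """Pick `length` items maximizing information at theta, subject to a
    minimum count per content area (a brute-force stand-in for the 0-1
    linear program used in shadow-test assembly)."""
    best, best_info = None, -1.0
    for combo in itertools.combinations(pool, length):
        counts = {}
        for _, _, _, area in combo:
            counts[area] = counts.get(area, 0) + 1
        if any(counts.get(area, 0) < k for area, k in min_per_area.items()):
            continue
        total = sum(info_2pl(theta, a, b) for _, a, b, _ in combo)
        if total > best_info:
            best, best_info = combo, total
    return sorted(item[0] for item in best)
```

Brute force is exponential in pool size; the LP formulation is what makes the approach feasible for realistic pools with hundreds of items and dozens of constraints.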
Zwick, Rebecca; Thayer, Dorothy T. – 2003
This study investigated the applicability to computerized adaptive testing (CAT) data of a differential item functioning (DIF) analysis that involves an empirical Bayes (EB) enhancement of the popular Mantel Haenszel (MH) DIF analysis method. The computerized Law School Admission Test (LSAT) assumed for this study was similar to that currently…
Descriptors: Adaptive Testing, Bayesian Statistics, College Entrance Examinations, Computer Assisted Testing
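The empirical Bayes enhancement studied here shrinks item-level Mantel-Haenszel statistics toward a common distribution; the underlying MH D-DIF index itself is straightforward to compute from per-score-level 2x2 tables. A sketch of that base statistic (the EB shrinkage step is not shown):

```python
import math

def mh_d_dif(tables):
    """Mantel-Haenszel D-DIF from per-score-level 2x2 tables.

    Each table is (A, B, C, D) = (reference correct, reference incorrect,
    focal correct, focal incorrect). Returns -2.35 * ln(alpha_MH), the
    ETS-style delta-scale index; values near 0 indicate little DIF.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    alpha_mh = num / den   # MH common odds ratio across score levels
    return -2.35 * math.log(alpha_mh)
```

In a CAT setting the score-level stratification is the complication: examinees see different items, so forming comparable matching groups is part of what the study investigates.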

Glas, Cees A. W.; van der Linden, Wim J. – Applied Psychological Measurement, 2003
Developed a multilevel item response theory (IRT) model that allows for differences between the distributions of item parameters of families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and adaptive testing procedures based on the model. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
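The item-clone idea can be sketched generatively: each family has a distribution over item parameters, and individual clones are draws from it. A toy 2PL version (family means/SDs are invented; in the multilevel model they would come from calibration):

```python
import math
import random

random.seed(1)

def draw_clone(mu_a, mu_b, sd_a, sd_b):
    """Draw 2PL parameters for one clone from its family distribution."""
    a = max(0.1, random.gauss(mu_a, sd_a))   # keep discrimination positive
    b = random.gauss(mu_b, sd_b)
    return a, b

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# five clones from one family, and their response probabilities at theta = 0
clones = [draw_clone(1.2, 0.0, 0.15, 0.3) for _ in range(5)]
probs = [p_correct(0.0, a, b) for a, b in clones]
```

Treating clones as exchangeable draws rather than separately calibrated items is what lets the pool grow without a full calibration per clone.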
Schnipke, Deborah L.; Roussos, Louis A.; Pashley, Peter J. – 2000
Differential item functioning (DIF) analyses are conducted to investigate how items function in various subgroups. The Mantel-Haenszel (MH) DIF statistic is used at the Law School Admission Council and other testing companies. When item functioning can be well-described in terms of a one- or two-parameter logistic item response theory (IRT) model…
Descriptors: College Entrance Examinations, Comparative Analysis, Item Bias, Item Response Theory
van der Linden, Wim J.; Reese, Lynda M. – 2001
A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum information at the current ability estimate fixing…
Descriptors: Ability, Adaptive Testing, College Entrance Examinations, Computer Assisted Testing
De Champlain, Andre – 1996
The usefulness of a goodness-of-fit index proposed by R. P. McDonald (1989) was investigated with regard to assessing the dimensionality of item response matrices. The m_k index, which is based on an estimate of the noncentrality parameter of the noncentral chi-square distribution, possesses several advantages over traditional tests of…
Descriptors: Chi Square, Cutting Scores, Goodness of Fit, Item Response Theory
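The general shape of a noncentrality-based fit index can be sketched without reproducing the m_k index itself: estimate the noncentrality parameter as the excess of the chi-square statistic over its degrees of freedom, then map it to (0, 1]. The form below is the commonly cited McDonald (1989) fit index, shown only as a representative of this family; whether it matches m_k exactly is not claimed here.

```python
import math

def mcdonald_fit_index(chi2, df, n):
    """Noncentrality-based fit index.

    Estimates the noncentrality parameter as max(chi2 - df, 0), scales by
    sample size n, and maps it to (0, 1]; 1.0 indicates exact fit.
    """
    ncp = max(chi2 - df, 0.0)
    return math.exp(-ncp / (2.0 * n))
```

The advantage the abstract alludes to is that such indices degrade gracefully with model size and sample size, unlike raw chi-square tests that reject any parsimonious model at large N.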
De Champlain, Andre F. – 1999
The purpose of this study was to examine empirical Type I error rates and rejection rates for three dimensionality assessment procedures with data sets simulated to reflect short tests and small samples. The TESTFACT G superscript 2 difference test suffered from an inflated Type I error rate with unidimensional data sets, while the approximate chi…
Descriptors: Admission (School), College Entrance Examinations, Item Response Theory, Law Schools
Schnipke, Deborah L. – 1999
When running out of time on a multiple-choice test such as the Law School Admission Test (LSAT), some test takers are likely to respond rapidly to the remaining unanswered items in an attempt to get some items right by chance. Because these responses will tend to be incorrect, the presence of rapid-guessing behavior could cause these items to…
Descriptors: College Entrance Examinations, Difficulty Level, Estimation (Mathematics), Guessing (Tests)
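Rapid-guessing behavior of the kind described is often operationalized with a response-time threshold: responses faster than some cutoff are flagged as likely guesses, and their accuracy is compared with that of the remaining responses. A minimal sketch (the threshold and data are invented, not taken from the study):

```python
def flag_rapid_guesses(times, threshold=3.0):
    """Flag responses faster than `threshold` seconds as likely rapid guesses."""
    return [t < threshold for t in times]

def accuracy_by_flag(times, correct, threshold=3.0):
    """Return (accuracy among flagged, accuracy among unflagged) responses."""
    flags = flag_rapid_guesses(times, threshold)
    rapid = [c for c, f in zip(correct, flags) if f]
    normal = [c for c, f in zip(correct, flags) if not f]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(rapid), rate(normal)
```

Because flagged responses tend to be correct at roughly the chance rate, including them in calibration biases item statistics, which is the concern the abstract raises.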
De Champlain, Andre; Gessaroli, Marc E. – 1996
The use of indices and statistics based on nonlinear factor analysis (NLFA) has become increasingly popular as a means of assessing the dimensionality of an item response matrix. Although the indices and statistics currently available to the practitioner have been shown to be useful and accurate in many testing situations, few studies have…
Descriptors: Adaptive Testing, Chi Square, Computer Assisted Testing, Factor Analysis
Wang, Xiang Bo; Pan, WeiQin; Harris, Vincent – 1999
A considerable amount of research on computerized adaptive testing (CAT) has been conducted using simulated data. However, most researchers would agree that simulations may not fully reflect the reality of examinee performance on a test. This study used maximum likelihood procedures to investigate the accuracy and efficiency of examinee ability…
Descriptors: Ability, Adaptive Testing, College Entrance Examinations, College Students
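Maximum likelihood ability estimation of the kind used in such studies can be sketched as Newton-Raphson under the 2PL model: the gradient of the log-likelihood in theta is a weighted sum of residuals, and the Hessian is minus the test information. A minimal version (items and responses are invented; operational estimators add safeguards for all-correct or all-incorrect patterns):

```python
import math

def p2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(items, responses, theta=0.0, iters=25):
    """Newton-Raphson ML ability estimate under the 2PL model.

    items: list of (a, b); responses: 0/1 in the same order.
    """
    for _ in range(iters):
        probs = [p2pl(theta, a, b) for a, b in items]
        # gradient: sum of a * (x - p); Hessian: -sum of a^2 * p * (1 - p)
        grad = sum(a * (x - p) for (a, _), p, x in zip(items, probs, responses))
        hess = -sum(a * a * p * (1.0 - p) for (a, _), p in zip(items, probs))
        theta = max(-4.0, min(4.0, theta - grad / hess))   # clamp to a sane range
    return theta
```

Comparing such estimates computed from real examinee records against simulation results is one way to check whether simulated CAT findings carry over to operational data, which is the question the abstract poses.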