Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 8 |
Descriptor
| Computer Assisted Testing | 26 |
| Error of Measurement | 26 |
| Adaptive Testing | 18 |
| Item Response Theory | 13 |
| Test Items | 10 |
| Estimation (Mathematics) | 9 |
| Simulation | 7 |
| Test Bias | 5 |
| Ability | 4 |
| Item Bias | 4 |
| Maximum Likelihood Statistics | 4 |
Author
| Zwick, Rebecca | 3 |
| Ban, Jae-Chun | 2 |
| De Ayala, R. J. | 2 |
| Green, Donald Ross | 2 |
| Yi, Qing | 2 |
| van der Linden, Wim J. | 2 |
| Amit Sevak | 1 |
| Bergstrom, Betty A. | 1 |
| Brick, J. Michael | 1 |
| Chang, Hua-Hua | 1 |
| Chang, Yuan-chin Ivan | 1 |
Publication Type
| Reports - Evaluative | 26 |
| Journal Articles | 14 |
| Speeches/Meeting Papers | 5 |
| Collected Works - Proceedings | 1 |
| Opinion Papers | 1 |
| Reports - Descriptive | 1 |
| Reports - Research | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Elementary Secondary Education | 2 |
| Grade 4 | 1 |
| Higher Education | 1 |
| Postsecondary Education | 1 |
Laws, Policies, & Programs
| No Child Left Behind Act 2001 | 1 |
| Race to the Top | 1 |
Assessments and Surveys
| Armed Forces Qualification… | 1 |
| Embedded Figures Test | 1 |
| Group Embedded Figures Test | 1 |
| National Household Education… | 1 |
Patrick C. Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Institute, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Performance Based Assessment, Evaluation Criteria, Evaluation Methods, Test Bias
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Lane, Suzanne; Leventhal, Brian – Review of Research in Education, 2015
This chapter addresses the psychometric challenges in assessing English language learners (ELLs) and students with disabilities (SWDs). The first section addresses some general considerations in the assessment of ELLs and SWDs, including the prevalence of ELLs and SWDs in the student population, federal and state legislation that requires the…
Descriptors: Psychometrics, Evaluation Problems, English Language Learners, Disabilities
Thomas, Michael L. – Assessment, 2011
Item response theory (IRT) and related latent variable models represent modern psychometric theory, the successor to classical test theory in psychological assessment. Although IRT has become prevalent in the measurement of ability and achievement, its contributions to clinical domains have been less extensive. Applications of IRT to clinical…
Descriptors: Item Response Theory, Psychological Evaluation, Reliability, Error of Measurement
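The entries above repeatedly invoke item response theory. As background, here is a minimal sketch of the three-parameter logistic (3PL) model that several of the listed studies assume; the parameter values are hypothetical, chosen only for illustration:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL IRT model:
    discrimination a, difficulty b, lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item's difficulty responds
# correctly with probability halfway between c and 1.
print(round(p_3pl(theta=0.0, a=1.2, b=0.0, c=0.2), 2))  # → 0.6
```

Setting `c = 0` recovers the two-parameter (2PL) model, and additionally fixing `a = 1` gives the Rasch model.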
Chang, Yuan-chin Ivan; Lu, Hung-Yi – Psychometrika, 2010
Item calibration is an essential issue in modern item response theory based psychological or educational testing. Given the popularity of computerized adaptive testing, methods to efficiently calibrate new items have become more important than in the era when paper-and-pencil test administration was the norm. There are many calibration…
Descriptors: Test Items, Educational Testing, Adaptive Testing, Measurement
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
Ferrao, Maria – Assessment & Evaluation in Higher Education, 2010
The Bologna Declaration brought reforms into higher education that imply changes in teaching methods, didactic materials and textbooks, infrastructures and laboratories, etc. Statistics and mathematics are disciplines that traditionally have the worst success rates, particularly in non-mathematics core curricula courses. This research project,…
Descriptors: Foreign Countries, Computer Assisted Testing, Educational Technology, Educational Assessment
Nering, Michael L. – Applied Psychological Measurement, 1997 (peer reviewed)
Evaluated the distribution of person fit within the computerized-adaptive testing (CAT) environment through simulation. Found that, within the CAT environment, these indexes tend not to follow a standard normal distribution. Person fit indexes had means and standard deviations that were quite different from those expected. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
Green, Donald Ross; And Others – Applied Measurement in Education, 1989 (peer reviewed)
Potential benefits of using item response theory in test construction are evaluated using the experience and evidence accumulated during nine years of using a three-parameter model in the development of major achievement batteries. Topics addressed include error of measurement, test equating, item bias, and item difficulty. (TJH)
Descriptors: Achievement Tests, Computer Assisted Testing, Difficulty Level, Equated Scores
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1996 (peer reviewed)
An item selection procedure for computerized adaptive testing based on average global information is proposed. Results from simulation studies comparing the approach with the usual maximum item information item selection indicate that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
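Chang and Ying compare their average-global-information criterion against the usual maximum-item-information rule. As a baseline for that comparison, here is a minimal sketch of maximum-information selection under a 2PL model; the item pool and its parameters are invented for illustration:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, pool):
    """Index of the pool item with maximum information at the current
    ability estimate (ignoring exposure control, for simplicity)."""
    return max(range(len(pool)), key=lambda i: info_2pl(theta, *pool[i]))

pool = [(1.0, -1.0), (1.0, 0.0), (1.5, 0.1)]  # (a, b) pairs, hypothetical
print(select_item(0.0, pool))  # → 2: high discrimination near theta wins
```

Because information peaks where `P ≈ 0.5` (for the 2PL), this rule favors highly discriminating items with difficulty near the interim ability estimate, which is exactly the behavior the paper's alternative criterion seeks to temper early in the test.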
Yi, Qing; Wang, Tianyou; Ban, Jae-Chun – 2000
Error indices (bias, standard error of estimation, and root mean square error) obtained on different scales of measurement under different test termination rules in a computerized adaptive test (CAT) context were examined. Four ability estimation methods were studied: (1) maximum likelihood estimation (MLE); (2) weighted likelihood estimation…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Error of Measurement
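The error indices Yi, Wang, and Ban examine (bias, standard error of estimation, and root mean square error) have standard definitions; the study's contribution is comparing them across measurement scales and estimators, which the sketch below does not attempt. A minimal computation of the three indices from true and estimated abilities, with made-up data:

```python
import math

def error_indices(true_thetas, estimates):
    """Bias, standard error, and RMSE of ability estimates against truth.
    The three satisfy RMSE^2 = bias^2 + SE^2."""
    n = len(true_thetas)
    errors = [est - tru for tru, est in zip(true_thetas, estimates)]
    bias = sum(errors) / n
    mse = sum(e * e for e in errors) / n
    se = math.sqrt(mse - bias * bias)
    return bias, se, math.sqrt(mse)

# Hypothetical replications for one examinee with true theta = 0.
bias, se, rmse = error_indices([0.0] * 4, [0.1, -0.1, 0.3, -0.3])
print(bias, round(se, 4), round(rmse, 4))  # unbiased here, so SE = RMSE
```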
Zwick, Rebecca; And Others – Applied Psychological Measurement, 1994 (peer reviewed)
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Error of Measurement
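The Mantel-Haenszel DIF statistic at the core of Zwick's studies pools 2x2 (group by correct/incorrect) tables across matched-score strata into a common odds ratio; the CAT twist investigated in the paper is what matching variable to stratify on. A minimal sketch of the standard statistic, with the conversion to the ETS delta scale (the -2.35 scaling is the usual ETS convention); the counts are hypothetical:

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across score strata.
    Each stratum is (ref_right, ref_wrong, focal_right, focal_wrong)."""
    num = den = 0.0
    for a, b, c, d in strata:
        t = a + b + c + d
        num += a * d / t
        den += b * c / t
    return num / den

def mh_delta(alpha):
    """ETS delta-scale DIF effect size, MH D-DIF = -2.35 * ln(alpha)."""
    return -2.35 * math.log(alpha)

# One balanced stratum with identical group performance: no DIF.
alpha = mh_odds_ratio([(50, 50, 50, 50)])
print(alpha, mh_delta(alpha))  # odds ratio 1, delta 0
```

Negative delta values indicate the item favors the reference group; ETS practice flags items whose absolute delta exceeds roughly 1.5 as showing substantial DIF.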
van der Linden, Wim J. – 1996
R. J. Owen (1975) proposed an approximate empirical Bayes procedure for item selection in adaptive testing. The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach, but…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computation
Zwick, Rebecca; And Others – 1993
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Correlation, Error of Measurement
Green, Donald Ross; And Others – 1988
Potential benefits of using item response theory in test construction are evaluated, based on the experience and evidence accumulated during 9 years of using a three-parameter model in the construction of major achievement batteries. Specific benefits covered include obtaining sample-free item calibrations and item-free person measurement,…
Descriptors: Achievement Tests, Computer Assisted Testing, Difficulty Level, Elementary Secondary Education
