Publication Date

| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 24 |
| Since 2022 (last 5 years) | 119 |
| Since 2017 (last 10 years) | 248 |
| Since 2007 (last 20 years) | 574 |
Audience

| Audience | Records |
| --- | --- |
| Researchers | 38 |
| Practitioners | 25 |
| Teachers | 8 |
| Administrators | 6 |
| Counselors | 3 |
| Policymakers | 2 |
| Parents | 1 |
| Students | 1 |
Location

| Location | Records |
| --- | --- |
| Taiwan | 12 |
| United Kingdom | 10 |
| Australia | 9 |
| Netherlands | 9 |
| California | 8 |
| New York | 8 |
| Turkey | 8 |
| Germany | 7 |
| Canada | 6 |
| Florida | 6 |
| Japan | 6 |
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
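The optimized real-time algorithm the abstract describes is not reproduced here, but the underlying idea, sampling the ability posterior given the observed responses, can be sketched with a plain random-walk Metropolis step under an assumed 2PL model and a standard normal prior; the item parameters below are invented for illustration:

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_posterior(theta, responses, a, b):
    """Log posterior of ability under a 2PL model with a N(0,1) prior.
    responses[j] is 1 (correct) or 0; a, b are treated as known."""
    lp = -0.5 * theta * theta  # standard normal prior (up to a constant)
    for u, aj, bj in zip(responses, a, b):
        p = logistic(aj * (theta - bj))
        lp += math.log(p) if u == 1 else math.log(1.0 - p)
    return lp

def metropolis_theta(responses, a, b, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for the ability posterior."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_posterior(theta, responses, a, b)
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, responses, a, b)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta)
    return draws

# Posterior-mean ability after a mostly correct response pattern
a = [1.2, 0.9, 1.5, 1.1]
b = [-0.5, 0.0, 0.4, 1.0]
draws = metropolis_theta([1, 1, 1, 0], a, b)
est = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
```

In a full joint-posterior treatment, the item parameters would also be sampled rather than fixed, which is exactly the error propagation the article addresses.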
Hula, William D.; Fergadiotis, Gerasimos; Swiderski, Alexander M.; Silkes, JoAnn P.; Kellough, Stacey – Journal of Speech, Language, and Hearing Research, 2020
Purpose: The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)-based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Severity (of Disability), Aphasia
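The abstract does not detail the CAT's item-selection rule; a common default, which the sketch below assumes, is choosing the unadministered item with maximum Fisher information at the current ability estimate under a 2PL model (the item bank here is invented):

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_hat, bank, administered):
    """Pick the most informative unadministered item at theta_hat.
    bank is a list of (a, b) tuples; indices in administered are skipped."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(bank):
        if idx in administered:
            continue
        info = fisher_info(theta_hat, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

bank = [(1.0, -1.0), (1.2, 0.0), (0.8, 0.5), (1.5, 1.2)]
choice = next_item(0.0, bank, administered={1})  # item 1 already given
```

Because selection depends only on the current ability estimate, two examinees with different response histories naturally receive nonoverlapping forms drawn from the same bank, which is the equivalence question the study tests.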
Ethan R. Van Norman; Emily R. Forcht – Journal of Education for Students Placed at Risk, 2024
This study evaluated the forecasting accuracy of trend estimation methods applied to time-series data from computer adaptive tests (CATs). Data were collected roughly once a month over the course of a school year. We evaluated the forecasting accuracy of two regression-based growth estimation methods (ordinary least squares and Theil-Sen). The…
Descriptors: Data Collection, Predictive Measurement, Predictive Validity, Predictor Variables
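The two regression-based growth estimators named in the abstract can be sketched directly; the monthly scale scores below are fabricated for illustration, with a forecast projected a few months past the observed data:

```python
from itertools import combinations
from statistics import median

def ols_fit(xs, ys):
    """Ordinary least squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def theil_sen_fit(xs, ys):
    """Theil-Sen: slope is the median of all pairwise slopes,
    intercept the median residual; robust to outlying scores."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)]
    slope = median(slopes)
    intercept = median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

# Roughly monthly CAT scale scores over part of a school year (fabricated)
months = [0, 1, 2, 3, 4, 5]
scores = [200, 204, 203, 210, 212, 215]
ols_m, ols_c = ols_fit(months, scores)
ts_m, ts_c = theil_sen_fit(months, scores)
ols_forecast = ols_m * 9 + ols_c  # projected score at month 9
ts_forecast = ts_m * 9 + ts_c
```

Forecasting accuracy in a study like this would then be judged by comparing such projections against the scores actually observed later in the year.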
Angelone, Anna Maria; Galassi, Alessandra; Vittorini, Pierpaolo – International Journal of Learning Technology, 2022
The adoption of computerised adaptive testing (CAT) instead of classical fixed-item testing (FIT) raises questions from both teachers' and students' perspectives. The scientific literature shows that teachers using CAT instead of FIT should experience shorter times to complete the assessment and obtain more precise evaluations. As for the students, adaptive…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Freshmen, Student Attitudes
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
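One widely used way to set such thresholds (not necessarily the variant this study evaluates) is a normative-threshold rule: a response is flagged as noneffortful when its time falls below a fixed fraction, often 10%, of that item's median response time. A sketch with invented response-time data:

```python
from statistics import median

def nt_thresholds(rt_by_item, fraction=0.10):
    """Normative-threshold method: per-item cutoff at a fraction
    of the item's median response time across examinees."""
    return {item: fraction * median(times)
            for item, times in rt_by_item.items()}

def flag_noneffortful(responses, thresholds):
    """responses maps item -> one examinee's response time (seconds);
    True means the response looks like rapid guessing."""
    return {item: rt < thresholds[item] for item, rt in responses.items()}

# Fabricated response times from five prior examinees per item
rts = {"item1": [22, 30, 25, 41, 28],
       "item2": [55, 60, 48, 52, 70]}
thr = nt_thresholds(rts)  # item1: 2.8 s, item2: 5.5 s
flags = flag_noneffortful({"item1": 1.5, "item2": 12.0}, thr)
```

The study's concern, how threshold choice affects downstream score use, follows directly: a stricter fraction flags fewer responses, a looser one risks discarding effortful ones.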
Han, Kyung T.; Dimitrov, Dimiter M.; Al-Mashary, Faisal – Educational and Psychological Measurement, 2019
The "D"-scoring method for scoring and equating tests with binary items proposed by Dimitrov offers some of the advantages of item response theory, such as item-level difficulty information and score computation that reflects the item difficulties, while retaining the merits of classical test theory such as the simplicity of number…
Descriptors: Test Construction, Scoring, Test Items, Adaptive Testing
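A minimal sketch of the core idea, assuming the simplest form of "D"-scoring in which each item's difficulty is one minus its proportion correct and an examinee's score is the difficulty-weighted share of items answered correctly, so harder items contribute more (consult Dimitrov's work for the exact definitions and the equating machinery):

```python
def d_scores(response_matrix):
    """Toy D-scoring sketch. response_matrix[i][j] is 1 if examinee i
    answered item j correctly, else 0. Returns one score per examinee
    in [0, 1], weighted by empirical item difficulty."""
    n_items = len(response_matrix[0])
    n_persons = len(response_matrix)
    # p_j: proportion correct per item; d_j = 1 - p_j is its difficulty
    p = [sum(row[j] for row in response_matrix) / n_persons
         for j in range(n_items)]
    d = [1.0 - pj for pj in p]
    total = sum(d)
    return [sum(dj * uj for dj, uj in zip(d, row)) / total
            for row in response_matrix]

data = [[1, 1, 0],
        [1, 0, 0],
        [1, 1, 1],
        [0, 1, 0]]
scores = d_scores(data)  # examinee 3, all correct, scores 1.0
```

Unlike a raw number-correct score, two examinees with the same count of correct answers can receive different D-scores when the items they answered correctly differ in difficulty.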
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Chun Wang; Ping Chen; Shengyu Jiang – Journal of Educational Measurement, 2020
Many large-scale educational surveys have moved from linear form design to multistage testing (MST) design. One advantage of MST is that it can provide more accurate latent trait [theta] estimates using fewer items than required by linear tests. However, MST generates incomplete response data by design; hence, questions remain as to how to…
Descriptors: Test Construction, Test Items, Adaptive Testing, Maximum Likelihood Statistics
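Latent trait estimation of the kind the abstract discusses can be illustrated with a maximum likelihood estimate of theta under a 2PL model, found here by simple grid search over a fabricated response pattern (real MST calibration must additionally handle the incompleteness induced by routing):

```python
import math

def log_lik(theta, responses, a, b):
    """2PL log-likelihood of one response pattern at ability theta."""
    ll = 0.0
    for u, aj, bj in zip(responses, a, b):
        p = 1.0 / (1.0 + math.exp(-aj * (theta - bj)))
        ll += math.log(p) if u else math.log(1.0 - p)
    return ll

def mle_theta(responses, a, b, lo=-4.0, hi=4.0, steps=801):
    """Maximum likelihood ability estimate by grid search.
    Finite for mixed response patterns; all-correct or all-wrong
    patterns would push the MLE to a grid boundary."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda t: log_lik(t, responses, a, b))

a = [1.0, 1.3, 0.8, 1.1]
b = [-1.0, 0.0, 0.5, 1.5]
theta_hat = mle_theta([1, 1, 0, 0], a, b)  # correct on the two easiest items
```

In an MST, each examinee sees only the items on their route, so the likelihood is built from the administered items alone; items never reached are missing by design rather than omitted.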
Goodwin, Amanda P.; Petscher, Yaacov; Tock, Jamie; McFadden, Sara; Reynolds, Dan; Lantos, Tess; Jones, Sara – Assessment for Effective Intervention, 2022
Assessment of language skills for upper elementary and middle schoolers is important due to the strong link between language and reading comprehension. Yet, currently few practical, reliable, valid, and instructionally informative assessments of language exist. This study provides validation evidence for Monster, P.I., which is a gamified,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Language Tests, Vocabulary
Cui, Zhongmin; Liu, Chunyan; He, Yong; Chen, Hanwei – Journal of Educational Measurement, 2018
Allowing item review in computerized adaptive testing (CAT) is getting more attention in the educational measurement field as more and more testing programs adopt CAT. The research literature has shown that allowing item review in an educational test could result in more accurate estimates of examinees' abilities. The practice of item review in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Wiseness
Vanbecelaere, Stefanie; Van den Berghe, Katrien; Cornillie, Frederik; Sasanguie, Delphine; Reynvoet, Bert; Depaepe, Fien – Journal of Computer Assisted Learning, 2020
For the training of academic skills, digital educational games with integrated adaptivity are promising. Adaptive games are considered superior to non-adaptive games because they continuously assess children's performance and adjust task difficulty to each child's individual level. However, empirical evidence…
Descriptors: Educational Games, Computer Games, Adaptive Testing, Kindergarten
Luo, Xiao; Wang, Xinrui – International Journal of Testing, 2019
This study introduced dynamic multistage testing (dy-MST) as an improvement to existing adaptive testing methods. dy-MST combines the advantages of computerized adaptive testing (CAT) and computerized adaptive multistage testing (ca-MST) to create a highly efficient and regulated adaptive testing method. In the test construction phase, multistage…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Psychometrics
Gandhi, S.; Hema, G. – Journal of Educational Technology, 2019
Computer-based tests can incorporate rich interactions and question types, such as simulations and direct measurement of skills, rather than relying solely on paper-and-pencil assessment. Computerized testing also offers greater standardization of test administration. The aim of this study is to seek out the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Undergraduate Students, Foreign Countries
Wise, Steven L. – Education Inquiry, 2019
A decision of whether to move from paper-and-pencil to computer-based tests is based largely on a careful weighing of the potential benefits of a change against its costs, disadvantages, and challenges. This paper briefly discusses the trade-offs involved in making such a transition, and then focuses on a relatively unexplored benefit of…
Descriptors: Computer Assisted Testing, Cheating, Test Wiseness, Scores
Lin, Chuan-Ju; Chang, Hua-Hua – Educational and Psychological Measurement, 2019
For item selection in cognitive diagnostic computerized adaptive testing (CD-CAT), ideally, a single item selection index should be created to simultaneously regulate precision, exposure status, and attribute balancing. For this purpose, in this study, we first proposed an attribute-balanced item selection criterion, namely, the standardized…
Descriptors: Test Items, Selection Criteria, Computer Assisted Testing, Adaptive Testing
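The authors' standardized weighted deviation criterion is not reproduced here; the toy composite below only illustrates the general shape of a single selection index that trades off measurement precision, exposure control, and attribute balance, with invented weights and item statistics:

```python
def composite_index(info, exposure_rate, attr_deficit, w=(1.0, 1.0, 1.0)):
    """Toy composite item-selection index (not the authors' criterion):
    reward information, penalize heavily exposed items, and reward items
    measuring attributes still under-represented so far in the test."""
    w_info, w_exp, w_attr = w
    return w_info * info - w_exp * exposure_rate + w_attr * attr_deficit

def select_item(candidates):
    """candidates: list of (item_id, info, exposure_rate, attr_deficit);
    returns the id of the item maximizing the composite index."""
    return max(candidates, key=lambda c: composite_index(c[1], c[2], c[3]))[0]

# Fabricated pool: a highly informative but overexposed item (i3) loses
# to a moderately informative, rarely used, attribute-balancing item (i2)
pool = [("i1", 0.40, 0.30, 0.0),
        ("i2", 0.35, 0.05, 0.2),
        ("i3", 0.50, 0.45, 0.0)]
chosen = select_item(pool)
```

Folding all three concerns into one index, as the abstract proposes, avoids the common alternative of applying exposure and balancing rules as separate post-hoc filters on an information-ranked list.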
