Publication Date
  In 2025: 1
  Since 2024: 3
  Since 2021 (last 5 years): 5
  Since 2016 (last 10 years): 8
  Since 2006 (last 20 years): 19
Descriptor
  Adaptive Testing: 47
  Computer Assisted Testing: 47
  Testing: 47
  Test Items: 16
  Test Construction: 13
  Comparative Analysis: 11
  Item Banks: 9
  Item Response Theory: 7
  Test Format: 7
  Test Reliability: 7
  Computer Software: 6
Author
  Weiss, David J.: 4
  Dodd, Barbara G.: 3
  Han, Kyung T.: 2
  Laurier, Michel: 2
  Wang, Wen-Chung: 2
  Ardoin, Scott P.: 1
  Bartroff, Jay: 1
  Becker, Kirk A.: 1
  Bergstrom, Betty A.: 1
  Blais, Jean-Guy: 1
  Chen, Po-Hsi: 1
Education Level
  Elementary Education: 2
  Elementary Secondary Education: 1
  Higher Education: 1
  Junior High Schools: 1
  Middle Schools: 1
  Secondary Education: 1
Audience
  Researchers: 5
  Administrators: 1
  Practitioners: 1
  Teachers: 1
Assessments and Surveys
  Armed Services Vocational…: 2
  ACT Assessment: 1
  Advanced Placement…: 1
  Graduate Record Examinations: 1
  Iowa Tests of Basic Skills: 1
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently depending on the position in which it is administered on a test. Most previous research has focused on IPE under linear testing; IPE under adaptive testing remains little studied. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Gardner, John; O'Leary, Michael; Yuan, Li – Journal of Computer Assisted Learning, 2021
Artificial Intelligence is at the heart of modern society with computers now capable of making process decisions in many spheres of human activity. In education, there has been intensive growth in systems that make formal and informal learning an anytime, anywhere activity for billions of people through online open educational resources and…
Descriptors: Artificial Intelligence, Educational Assessment, Formative Evaluation, Summative Evaluation
Nixi Wang – ProQuest LLC, 2022
Measurement errors attributable to cultural issues are complex and challenging for educational assessments. We need assessment tests sensitive to the cultural heterogeneity of populations, and psychometric methods appropriate to address fairness and equity concerns. Built on the research of culturally responsive assessment, this dissertation…
Descriptors: Culturally Relevant Education, Testing, Equal Education, Validity
New York State Education Department, 2024
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts, Mathematics, and Grades 5 & 8 Science Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed…
Descriptors: Testing Programs, Language Arts, Mathematics Tests, Science Tests
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
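The analytic derivation this entry describes rests on a standard IRT identity: the standard error of an ability estimate is the reciprocal square root of the accumulated test information, so it can be computed across theta levels without simulation. A minimal sketch of that identity under the 3PL model (the item parameters below are invented for illustration, not taken from the study):

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    return (a ** 2) * ((p - c) / (1.0 - c)) ** 2 * ((1.0 - p) / p)

def analytic_se(theta, items):
    """Standard error of the ability estimate: 1 / sqrt(test information)."""
    total = sum(item_info(theta, *it) for it in items)
    return 1.0 / math.sqrt(total)

# Hypothetical module of (a, b, c) item parameters.
module = [(1.2, -0.5, 0.2), (0.9, 0.0, 0.25), (1.5, 0.4, 0.2)]
print(round(analytic_se(0.0, module), 3))
```

Because information is additive over items, evaluating this at a grid of theta values yields the SE profile of a module directly, which is the kind of analytic comparison the authors contrast with simulation results.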
Hula, William D.; Kellough, Stacey; Fergadiotis, Gerasimos – Journal of Speech, Language, and Hearing Research, 2015
Purpose: The purpose of this study was to develop a computerized adaptive test (CAT) version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996), to reduce test length while maximizing measurement precision. This article is a direct extension of a companion article (Fergadiotis, Kellough, & Hula, 2015),…
Descriptors: Computer Assisted Testing, Adaptive Testing, Naming, Test Construction
Han, Kyung T. – Applied Psychological Measurement, 2013
Multistage testing, or MST, was developed as an alternative to computerized adaptive testing (CAT) for applications in which it is preferable to administer a test at the level of item sets (i.e., modules). As with CAT, the simulation technique in MST plays a critical role in the development and maintenance of tests. "MSTGen," a new MST…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computer Software, Simulation
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
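Of the procedures compared in this entry, the randomesque method is the simplest to illustrate: instead of always administering the single most informative item, the selector draws at random from the k most informative candidates, spreading exposure across the pool. A minimal sketch under the 3PL model (item parameters and pool are invented, and this is not the authors' code):

```python
import math
import random

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
    return (a ** 2) * ((p - c) / (1.0 - c)) ** 2 * ((1.0 - p) / p)

def randomesque_pick(theta, pool, k=5, rng=random):
    """Randomesque exposure control: choose uniformly at random
    among the k most informative remaining items."""
    ranked = sorted(pool, key=lambda it: info_3pl(theta, *it), reverse=True)
    return rng.choice(ranked[:k])

# Hypothetical pool of (a, b, c) item parameters.
pool = [(round(0.8 + 0.1 * i, 1), (i % 7 - 3) / 2.0, 0.2) for i in range(20)]
item = randomesque_pick(0.0, pool, k=5)
print(item in pool)
```

The design trade-off the study examines falls out of k: k = 1 reduces to maximum-information selection (best precision, worst exposure), while larger k trades a little measurement efficiency for a flatter exposure distribution.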
Han, Kyung T. – Applied Psychological Measurement, 2013
Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses because it could seriously deteriorate the efficiency of measurement and make tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Testing
Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying – Applied Psychological Measurement, 2013
Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
January, Stacy-Ann A.; Ardoin, Scott P. – Assessment for Effective Intervention, 2015
Curriculum-based measurement in reading (CBM-R) and the Measures of Academic Progress (MAP) are assessment tools widely employed for universal screening in schools. Although a large body of research supports the validity of CBM-R, limited empirical evidence exists supporting the technical adequacy of MAP or the acceptability of either measure for…
Descriptors: Curriculum Based Assessment, Achievement Tests, Educational Assessment, Screening Tests
Tsinakos, Avgoustos; Kazanidis, Ioannis – International Review of Research in Open and Distance Learning, 2012
Student testing and knowledge assessment is a significant aspect of the learning process. In a number of cases, it is expedient not to present the exact same test to all learners all the time (Pritchett, 1999). This may be desired so that cheating in the exam is made harder to carry out or so that the learners can take several practice tests on…
Descriptors: Information Retrieval, Educational Testing, Continuing Education, Mathematics
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
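The PSER rule itself involves predicting, from the posterior variance, how much the next item would reduce the standard error; that prediction step is beyond a snippet, but the family of precision-based stopping rules it refines can be sketched: administer the most informative remaining item until the estimated SE falls below a target or an item cap is hit. A hedged sketch under the 2PL model (parameters invented; this is the generic rule, not the authors' PSER algorithm):

```python
import math

def item_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return (a ** 2) * p * (1.0 - p)

def precision_stop(theta, pool, se_target=0.5, max_items=30):
    """Generic fixed-precision stopping rule: keep administering the
    most informative remaining item until SE(theta) <= se_target.
    (PSER additionally predicts the SE reduction the *next* item
    would yield before deciding whether to continue.)"""
    administered, info = [], 0.0
    remaining = list(pool)
    while remaining and len(administered) < max_items:
        best = max(remaining, key=lambda it: item_info_2pl(theta, *it))
        remaining.remove(best)
        administered.append(best)
        info += item_info_2pl(theta, *best)
        if 1.0 / math.sqrt(info) <= se_target:
            break
    return len(administered), 1.0 / math.sqrt(info)

# Hypothetical item pool: a from 0.8 to 1.7, b from -2.0 to 2.0.
pool = [(a / 10.0, b / 4.0) for a in range(8, 18) for b in range(-8, 9)]
n, se = precision_stop(0.0, pool)
print(n, round(se, 3))
```

Rules in this family stop as soon as the precision target is met, which is why test length varies across examinees; PSER's contribution is to also stop early when the remaining items cannot meaningfully reduce the SE, avoiding wasted administrations near the target.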