Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 25 |
| Since 2022 (last 5 years) | 121 |
| Since 2017 (last 10 years) | 250 |
| Since 2007 (last 20 years) | 576 |
Audience
| Audience | Results |
| --- | --- |
| Researchers | 38 |
| Practitioners | 25 |
| Teachers | 8 |
| Administrators | 6 |
| Counselors | 3 |
| Policymakers | 2 |
| Parents | 1 |
| Students | 1 |
Location
| Location | Results |
| --- | --- |
| Taiwan | 12 |
| United Kingdom | 10 |
| Australia | 9 |
| Netherlands | 9 |
| California | 8 |
| New York | 8 |
| Turkey | 8 |
| Germany | 7 |
| Canada | 6 |
| Florida | 6 |
| Japan | 6 |
Yan, Duanli; Lewis, Charles; Stocking, Martha – Journal of Educational and Behavioral Statistics, 2004
It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all the new and currently considered computer-based tests. In addition to developing new models, we also need to give attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized adaptive…
Descriptors: Nonparametric Statistics, Regression (Statistics), Adaptive Testing, Computer Assisted Testing
PEPNet 2, 2012
Beginning your college education means you'll be exploring a new place, making new friends, learning new things and setting your own priorities. You are going to face a lot of big changes in a short time. That's exciting--and challenging. The more prepared you are for college when you get there, the more ready you'll be to address these new…
Descriptors: Sign Language, Deafness, Hearing Impairments, Success
Daro, Phil; Stancavage, Frances; Ortega, Moreica; DeStefano, Lizanne; Linn, Robert – American Institutes for Research, 2007
In Spring 2006, the NAEP Validity Studies (NVS) Panel was asked by the National Center for Education Statistics (NCES) to undertake a validity study to examine the quality of the NAEP Mathematics Assessments at grades 4 and 8. Specifically, NCES asked the NVS Panel to address five questions: (1) Does the NAEP framework offer reasonable content…
Descriptors: National Competency Tests, Mathematics Achievement, Adaptive Testing, Quality Control
Kaburlasos, Vassilis G.; Marinagi, Catherine C.; Tsoukalas, Vassilis Th. – Computers & Education, 2008
This work presents innovative cybernetics (feedback) techniques based on Bayesian statistics for drawing questions from an Item Bank towards personalized multi-student improvement. A novel software tool, namely "Module for Adaptive Assessment of Students" (or, "MAAS" for short), implements the proposed (feedback) techniques. In conclusion, a pilot…
Descriptors: Feedback (Response), Student Improvement, Computer Science, Bayesian Statistics
Doherty, R. William; Hilberg, R. Soleste – Journal of Educational Research, 2008
The authors reported findings from 3 studies examining the efficacy of Five Standards pedagogy in raising student achievement. Studies 1 and 2 were randomized designs; Study 3 was a quasi-experimental design. Samples included 53 teachers and 622 predominantly low-income Latino students in Grades 1-4. Studies assessed model fidelity with the…
Descriptors: Quasiexperimental Design, Adaptive Testing, Academic Achievement, Second Language Learning
Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David – Online Submission, 2006
Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via an MST model) and non-adaptive designs were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…
Descriptors: Individual Testing, Test Reliability, Programming, Error of Measurement
Krass, Iosif A. – 1998
In the process of item calibration for a computerized adaptive test (CAT), many well-established calibrating packages show weakness in the estimation of item parameters. This paper introduces an on-line calibration algorithm based on the convexity of likelihood functions. This package consists of: (1) an algorithm that estimates examinee ability…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Cliff, Norman – Psychometrika, 1977 (peer reviewed)
Measures of consistency and completeness of order relationships derived from test data such as Guttman scales are proposed. The measures are generalized to apply to incomplete data such as data from tailored testing. (Author/JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Item Analysis
Cudeck, Robert A.; And Others – Educational and Psychological Measurement, 1977 (peer reviewed)
TAILOR, a FORTRAN computer program for tailored testing, is described. The procedure for a joint ordering of persons and items with no pretesting as the basis for the tailored test is given, and a brief discussion of the computer program is included. (Author/JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Test Construction
McCormick, Douglas J.; Cliff, Norman – Educational and Psychological Measurement, 1977 (peer reviewed)
An interactive computer program for tailored testing, called TAILOR, is presented. The program runs on the APL system. A cumulative file for each examinee is established and tests are then tailored to each examinee; extensive pretesting is not necessary. (JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Test Construction
Hambleton, Ronald K., Ed.; van der Linden, Wim J., Ed. – Applied Psychological Measurement, 1982 (peer reviewed)
Item response theory (IRT) is having a major impact on the field of testing. This special issue presents an introduction and seven papers concerning developments in IRT applications. Some important IRT research being conducted outside the United States is highlighted. (SLD)
Descriptors: Adaptive Testing, Equated Scores, Item Analysis, Latent Trait Theory
Kingsbury, G. Gage; Hauser, Carl – Northwest Evaluation Association, 2004
Among the changes in education called for under the No Child Left Behind Act is the need for states to test students in a number of grades and subject areas. Scores from these tests are to be used for a variety of purposes, from identifying whether individual students are proficient, to helping determine whether schools are causing adequate growth…
Descriptors: Federal Legislation, Computer Assisted Testing, Adaptive Testing, Educational Assessment
Baker, Eva L. – 2003
This paper examines multiple measures of performance in school accountability systems from two perspectives: laterally (different indicators of different domains) and vertically (indicators that are at different levels of depth of the same domain). From these perspectives, organizational responsibility and instructional sensitivity are examined.…
Descriptors: Accountability, Adaptive Testing, Adjustment (to Environment), Evaluation Methods
Schnipke, Deborah L.; Reese, Lynda M. – 1999
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test taker ability. This study incorporated testlets (bundles of items) into two-stage and multistage designs, and compared the precision of the ability estimates derived from these designs with those derived from a standard computerized adaptive test (CAT)…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
Bowles, Ryan; Pommerich, Mary – 2001
Many arguments have been made against allowing examinees to review and change their answers after completing a computer adaptive test (CAT). These arguments include: (1) increased bias; (2) decreased precision; and (3) susceptibility to test-taking strategies. Results of simulations suggest that the strength of these arguments is reduced or…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Review (Reexamination)