Showing 706 to 720 of 1,354 results
Peer reviewed
Walker, Cindy M.; Beretvas, S. Natasha; Ackerman, Terry – Applied Measurement in Education, 2001
Conducted a simulation study of differential item functioning (DIF) to compare power and Type I error rates under two conditions: using an examinee's ability estimate as the conditioning variable in the CATSIB program, either with or without CATSIB's regression correction. Discusses implications of the findings for DIF detection. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Bias
Peer reviewed
Nandakumar, Ratna; Roussos, Louis – Journal of Educational and Behavioral Statistics, 2004
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type I error inflation by employing a CAT version of the SIBTEST "regression correction." The…
Descriptors: Evaluation, Adaptive Testing, Computer Assisted Testing, Pretesting
Peer reviewed
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2003
The Sympson and Hetter (1985, 1997) method is a method of probabilistic item-exposure control in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be set conditional on a realistic…
Descriptors: Law Schools, Adaptive Testing, Admission (School), Computer Assisted Testing
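The probabilistic exposure-control scheme this abstract refers to, and the iterative parameter-setting process it calls time consuming, can be sketched briefly. This is a minimal illustration, not van der Linden's method: the function names, the toy selection rates, and the target rate `r_max` are assumptions for the example.

```python
import random

def adjust_k(selection_rates, r_max):
    """One Sympson-Hetter calibration step: an item whose selection
    probability P(S_i) exceeds the target exposure rate r_max gets a
    conditional administration probability k_i = r_max / P(S_i);
    all other items keep k_i = 1. Because changing the k's shifts
    the selection rates themselves, this step is repeated over
    successive simulated test administrations until the k's settle,
    which is the iterative process the abstract describes."""
    return {i: (r_max / p if p > r_max else 1.0)
            for i, p in selection_rates.items()}

def administer(ranked_items, k):
    """Walk the candidate items in order of information; administer
    the first one that passes its probabilistic exposure filter."""
    for item in ranked_items:
        if random.random() < k[item]:
            return item
    return ranked_items[-1]  # fallback: give the last candidate
```

For example, with observed selection rates `{'q1': 0.8, 'q2': 0.1}` and `r_max = 0.2`, `adjust_k` caps `q1` at `k = 0.25` while `q2` remains fully available.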
Peer reviewed
Kaburlasos, Vassilis G.; Marinagi, Catherine C.; Tsoukalas, Vassilis Th. – Computers & Education, 2008
This work presents innovative cybernetics (feedback) techniques based on Bayesian statistics for drawing questions from an Item Bank towards personalized multi-student improvement. A novel software tool, namely "Module for Adaptive Assessment of Students" (or, "MAAS" for short), implements the proposed (feedback) techniques. In conclusion, a pilot…
Descriptors: Feedback (Response), Student Improvement, Computer Science, Bayesian Statistics
Peer reviewed
Doherty, R. William; Hilberg, R. Soleste – Journal of Educational Research, 2008
The authors reported findings from 3 studies examining the efficacy of Five Standards pedagogy in raising student achievement. Studies 1 and 2 were randomized designs; Study 3 was a quasi-experimental design. Samples included 53 teachers and 622 predominantly low-income Latino students in Grades 1-4. Studies assessed model fidelity with the…
Descriptors: Quasiexperimental Design, Adaptive Testing, Academic Achievement, Second Language Learning
Daro, Phil; Stancavage, Frances; Ortega, Moreica; DeStefano, Lizanne; Linn, Robert – American Institutes for Research, 2007
In Spring 2006, the NAEP Validity Studies (NVS) Panel was asked by the National Center for Education Statistics (NCES) to undertake a validity study to examine the quality of the NAEP Mathematics Assessments at grades 4 and 8. Specifically, NCES asked the NVS Panel to address five questions: (1) Does the NAEP framework offer reasonable content…
Descriptors: National Competency Tests, Mathematics Achievement, Adaptive Testing, Quality Control
Peer reviewed
Hol, A. Michiel; Vorst, Harrie C. M.; Mellenbergh, Gideon J. – Applied Psychological Measurement, 2007
In a randomized experiment (n = 515), a conventional computerized test and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although items are carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible…
Descriptors: Student Motivation, Simulation, Adaptive Testing, Computer Assisted Testing
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
Ackerman, Terry A.; Davey, Tim C. – 1991
An adaptive test can usually match or exceed the measurement precision of conventional tests several times its length. This increased efficiency is not without costs, however, as the models underlying adaptive testing make strong assumptions about examinees and items. Most troublesome is the assumption that item pools are unidimensional. Truly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
de Gruijter, Dato N. M. – 1988
Many applications of educational testing have a missing data aspect (MDA). This MDA is perhaps most pronounced in item banking, where each examinee responds to a different subtest of items from a large item pool and where both person and item parameter estimates are needed. The Rasch model is emphasized, and its non-parametric counterpart (the…
Descriptors: Adaptive Testing, Educational Testing, Estimation (Mathematics), Foreign Countries
ERIC Clearinghouse on Tests, Measurement, and Evaluation, Princeton, NJ. – 1983
This brief overview notes that an adaptive test differs from standardized achievement tests in that it does not consist of a fixed set of items administered to every examinee. Instead, the test is individualized for each examinee. The items administered to the examinee are selected from a large pool of items on the basis of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Latent Trait Theory
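The selection logic this overview describes, in which each item is drawn from a large pool on the basis of the examinee's current performance, can be sketched under a Rasch model, where an item's Fisher information p(1 - p) peaks when its difficulty matches the ability estimate. The pool layout and names below are illustrative assumptions, not drawn from the overview.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model
    for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def select_item(theta_hat, pool, administered):
    """Pick the unadministered item whose Fisher information
    p * (1 - p) is largest at the current ability estimate,
    i.e. the item whose difficulty best matches theta_hat."""
    def info(b):
        p = rasch_prob(theta_hat, b)
        return p * (1.0 - p)
    candidates = [i for i in pool if i not in administered]
    return max(candidates, key=lambda i: info(pool[i]))
```

With an ability estimate of 0.0 and a pool of difficulties `{'easy': -2.0, 'medium': 0.1, 'hard': 3.0}`, the item closest in difficulty to the estimate ('medium') is selected first; after it is administered, the next-closest remaining item is chosen.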
Patience, Wayne M.; Reckase, Mark D. – 1978
The feasibility of implementing self-paced computerized tailored testing evaluation methods in an undergraduate measurement and evaluation course, and possible differences in achievement levels under a paced versus self-paced testing schedule were investigated. A maximum likelihood tailored testing procedure based on the simple logistic model had…
Descriptors: Academic Achievement, Achievement Tests, Adaptive Testing, Computer Assisted Testing
Peer reviewed
McBride, James R. – Educational Leadership, 1985
Describes a system in which questions tailored to the examinee's capabilities are administered by computer. Enumerates possible benefits of the system, reviews the "state of the art," and predicts potential applications of computerized adaptive testing. (MCG)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Program Design
Hambleton, Ronald K.; Sireci, Stephen G.; Swaminathan, H.; Xing, Dehui; Rizavi, Saba – 2003
The purposes of this research study were to develop and field test anchor-based judgmental methods for enabling test specialists to estimate item difficulty statistics. The study consisted of three related field tests. In each, researchers worked with six Law School Admission Test (LSAT) test specialists and one or more of the LSAT subtests. The…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Difficulty Level
Rabinowitz, Stanley; Brandt, Tamara – 2001
Computer-based assessment appears to offer the promise of radically improving both how assessments are implemented and the quality of the information they can deliver. However, as many states consider whether to embrace this new technology, serious concerns remain about the fairness of the new systems and the readiness of states (and districts and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Technology, Educational Testing