Showing 1 to 15 of 17 results
Peer reviewed
Mustafa Yildiz; Hasan Kagan Keskin; Saadin Oyucu; Douglas K. Hartman; Murat Temur; Mücahit Aydogmus – Reading & Writing Quarterly, 2025
This study examined whether an artificial intelligence-based automatic speech recognition system can accurately assess students' reading fluency and reading level. Participants were 120 fourth-grade students attending public schools in Türkiye. Students read a grade-level text out loud while their voice was recorded. Two experts and the artificial…
Descriptors: Artificial Intelligence, Reading Fluency, Human Factors Engineering, Grade 4
Peer reviewed
Qian, Hong; Staniewska, Dorota; Reckase, Mark; Woo, Ada – Educational Measurement: Issues and Practice, 2016
This article addresses the issue of how to detect item preknowledge using item response time data in two computer-based large-scale licensure examinations. Item preknowledge is indicated by an unexpectedly short response time and a correct response. Two samples were used for detecting item preknowledge for each examination. The first sample was from…
Descriptors: Reaction Time, Licensing Examinations (Professions), Computer Assisted Testing, Prior Learning
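The preknowledge indicator described in the Qian et al. abstract (an unexpectedly short response time paired with a correct response) can be sketched in a few lines. The sketch below is illustrative only, not the procedure from the article: it assumes hypothetical per-item response-time norms (mean and standard deviation of log response time from a reference sample), and the z-score cutoff of -2.0 is an arbitrary choice.

import math

# Hypothetical per-item norms: mean and standard deviation of
# log response time (seconds), estimated from a reference sample.
ITEM_NORMS = {
    "item_01": (math.log(45.0), 0.40),
    "item_02": (math.log(60.0), 0.35),
}

def flag_preknowledge(item_id, response_time, correct, z_cutoff=-2.0):
    # Flag possible preknowledge: the response time is unexpectedly
    # short (low z-score on the log scale) AND the answer is correct.
    mu, sigma = ITEM_NORMS[item_id]
    z = (math.log(response_time) - mu) / sigma
    return correct and z < z_cutoff

# A correct answer in 12 seconds on an item that typically takes about
# 45 seconds is flagged; the same time with a wrong answer is not.
print(flag_preknowledge("item_01", 12.0, correct=True))   # True
print(flag_preknowledge("item_01", 12.0, correct=False))  # False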
Peer reviewed
Lunz, Mary E.; Bergstrom, Betty – Journal of Educational Computing Research, 1995
Describes a study that was conducted to track the effect of candidate response patterns on a computerized adaptive test. The effect of altering responses on estimated candidate ability, test tailoring, and test precision across segments of adaptive tests and groups of candidates is examined. (Author/LRW)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Response Style (Tests)
Peer reviewed
Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Peer reviewed
Halkitis, Perry N. – Journal of Outcome Measurement, 1998
The precision of a computerized adaptive test (CAT) with a limited item pool was measured using test results from 4,494 nursing students. Regardless of the item pool size, CAT provides greater precision in measurement with a smaller number of items administered even when the choice of items is limited, but CAT fails to achieve equiprecision along…
Descriptors: Ability Identification, Adaptive Testing, College Students, Computer Assisted Testing
Sympson, James B.; And Others – 1982
Conventional Armed Services Vocational Aptitude Battery-7 (ASVAB) Arithmetic Reasoning and Word Knowledge tests were compared with computer-administered adaptive tests as predictors of performance in an Air Force Jet Engine Mechanic training course (n=495). Results supported earlier research in showing somewhat longer examinee response times for…
Descriptors: Ability Identification, Adaptive Testing, Aptitude Tests, Computer Assisted Testing
Peer reviewed
Henly, Susan J.; And Others – Applied Psychological Measurement, 1989
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive versions of the Differential Aptitude Test (DAT). Results for 332 students indicate that the computerized version of the DAT is an adequate representation of the conventional test battery. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed
De Ayala, R. J.; And Others – Journal of Educational Measurement, 1990
F. M. Lord's flexilevel computerized adaptive testing (CAT) procedure was compared to an item response theory-based CAT procedure that uses Bayesian ability estimation, with various standard errors of estimate used as the criterion for terminating the test. Ability estimates from flexilevel CATs were as accurate as those from Bayesian CATs. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Comparative Analysis
Wise, Steven L.; And Others – 1991
According to item response theory (IRT), examinee ability estimation is independent of the particular set of test items administered from a calibrated pool. Although the most popular application of this feature of IRT is computerized adaptive (CA) testing, a recently proposed alternative is self-adapted (SA) testing, in which examinees choose the…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
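The IRT property this abstract invokes, that ability estimates do not depend on which calibrated items happen to be administered, can be made concrete with a small simulation. A minimal sketch under the Rasch model (an assumption here; the abstract does not name a model): responses are simulated at a known ability, and theta is then estimated separately from two disjoint halves of the pool, with both estimates targeting the same value apart from sampling error.

import math, random

def p_correct(theta, b):
    # Rasch probability of a correct response to an item of difficulty b.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties):
    # Maximum-likelihood ability estimate by coarse grid search,
    # adequate for illustration.
    grid = [x / 100.0 for x in range(-400, 401)]
    def loglik(theta):
        return sum(
            math.log(p_correct(theta, b) if u else 1.0 - p_correct(theta, b))
            for u, b in zip(responses, difficulties)
        )
    return max(grid, key=loglik)

random.seed(1)
true_theta = 0.8
pool = [random.uniform(-2.0, 2.0) for _ in range(40)]  # calibrated difficulties
answers = [random.random() < p_correct(true_theta, b) for b in pool]

# Estimates from two disjoint item subsets target the same theta.
print(estimate_theta(answers[:20], pool[:20]))
print(estimate_theta(answers[20:], pool[20:]))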
Gugel, John F. – 1990
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
Descriptors: Ability Identification, Aptitude Tests, Chi Square, Comparative Analysis
Roos, Linda L.; And Others – 1992
Computerized adaptive (CA) testing uses an algorithm to match examinee ability to item difficulty, while self-adapted (SA) testing allows the examinee to choose the difficulty of his or her items. Research comparing SA and CA testing has shown that examinees experience lower anxiety and improved performance with SA testing. All previous research…
Descriptors: Ability Identification, Adaptive Testing, Algebra, Algorithms
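The "algorithm to match examinee ability to item difficulty" mentioned in this abstract has a standard minimal form under the Rasch model, where item information peaks when difficulty equals ability. The sketch below is a generic illustration, not the study's implementation: the next item is simply the unadministered item whose calibrated difficulty is nearest the current ability estimate.

def select_next_item(theta, difficulties, administered):
    # Rasch-model CA selection: information is maximal when item
    # difficulty b is closest to the ability estimate theta, so pick
    # the not-yet-administered item minimizing |theta - b|.
    candidates = {
        item: b for item, b in difficulties.items() if item not in administered
    }
    return min(candidates, key=lambda item: abs(theta - candidates[item]))

# Illustrative pool of calibrated difficulties, in logits.
pool = {"q1": -1.2, "q2": -0.3, "q3": 0.4, "q4": 1.1}
print(select_next_item(theta=0.5, difficulties=pool, administered={"q1"}))
# -> "q3", whose difficulty (0.4) is nearest theta = 0.5

Self-adapted testing replaces this selection step with the examinee's own choice of a difficulty category; the downstream ability estimation is unchanged.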
De Ayala, R. J.; And Others – 1988
To date, the majority of computerized adaptive testing (CAT) systems for achievement and aptitude testing have been based on dichotomous item response models. However, current research with polychotomous model-based CATs is yielding promising results. This study extends previous work on nominal response model-based CAT (NR CAT) and compares…
Descriptors: Ability Identification, Achievement Tests, Adaptive Testing, Aptitude Tests
Weiss, David J.; McBride, James R. – 1983
Monte Carlo simulation was used to investigate score bias and information characteristics of Owen's Bayesian adaptive testing strategy, and to examine possible causes of score bias. Factors investigated in three related studies included effects of item discrimination, effects of fixed vs. variable test length, and effects of an accurate prior…
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Lunz, Mary E.; And Others – 1990
This study explores the test-retest consistency of computer adaptive tests of varying lengths. The testing model used was designed as a mastery model to determine whether an examinee's estimated ability level is above or below a pre-established criterion expressed in the metric (logits) of the calibrated item pool scale. The Rasch model was used…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
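The mastery decision described here, classifying the ability estimate against a criterion on the logit scale of the calibrated pool, is easy to make concrete. A minimal sketch, assuming a common confidence-band stopping rule (the abstract does not specify the exact rule used): classify once the criterion falls outside a z-band around the current estimate, otherwise continue testing.

def mastery_decision(theta_hat, se, criterion, z=1.96):
    # Classify an examinee against a mastery criterion (in logits).
    # Returns 'master', 'nonmaster', or 'continue' when the estimate
    # is still too close to the criterion to decide.
    if theta_hat - z * se > criterion:
        return "master"
    if theta_hat + z * se < criterion:
        return "nonmaster"
    return "continue"

# An estimate of 0.9 logits with SE 0.25 against a criterion of 0.0
# is a confident 'master'; with SE 0.60 the test would continue.
print(mastery_decision(theta_hat=0.9, se=0.25, criterion=0.0))  # master
print(mastery_decision(theta_hat=0.9, se=0.60, criterion=0.0))  # continue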
Rizavi, Saba; Hariharan, Swaminathan – Online Submission, 2001
The advantages that computer adaptive testing offers over linear tests have been well documented. The computer adaptive test (CAT) design is more efficient than the linear test design, as fewer items are needed to estimate an examinee's proficiency to a desired level of precision. In the ideal situation, a CAT will result in examinees answering…
Descriptors: Guessing (Tests), Test Construction, Test Length, Computer Assisted Testing