Showing 1 to 15 of 32 results
Peer reviewed
Mustafa Yildiz; Hasan Kagan Keskin; Saadin Oyucu; Douglas K. Hartman; Murat Temur; Mücahit Aydogmus – Reading & Writing Quarterly, 2025
This study examined whether an artificial intelligence-based automatic speech recognition system can accurately assess students' reading fluency and reading level. Participants were 120 fourth-grade students attending public schools in Türkiye. Students read a grade-level text out loud while their voice was recorded. Two experts and the artificial…
Descriptors: Artificial Intelligence, Reading Fluency, Human Factors Engineering, Grade 4
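The abstract does not say how the system scores fluency. As a point of reference only, here is a minimal sketch of one common oral-reading-fluency metric, words correct per minute (WCPM), computed by aligning an ASR transcript against the reference passage; the function and variable names are illustrative, not taken from the study.

```python
# Minimal sketch (not the study's actual pipeline): score oral reading fluency
# as words correct per minute (WCPM) by aligning an ASR transcript against the
# reference passage with difflib's sequence matching.
from difflib import SequenceMatcher

def words_correct_per_minute(reference_text: str, asr_transcript: str,
                             reading_seconds: float) -> float:
    ref = reference_text.lower().split()
    hyp = asr_transcript.lower().split()
    # Words counted correct = reference words recovered, in order, in the ASR output.
    matcher = SequenceMatcher(a=ref, b=hyp)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return correct / (reading_seconds / 60.0)

# Illustrative use with made-up values:
print(words_correct_per_minute("the quick brown fox jumps over the lazy dog",
                               "the quick brown fox jump over lazy dog", 6.0))
```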
Peer reviewed
Qian, Hong; Staniewska, Dorota; Reckase, Mark; Woo, Ada – Educational Measurement: Issues and Practice, 2016
This article addresses the issue of how to detect item preknowledge using item response time data in two computer-based large-scale licensure examinations. Item preknowledge is indicated by an unexpected short response time and a correct response. Two samples were used for detecting item preknowledge for each examination. The first sample was from…
Descriptors: Reaction Time, Licensing Examinations (Professions), Computer Assisted Testing, Prior Learning
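The snippet defines the signal being sought: a correct response given in an unexpectedly short time. The sketch below flags such item-by-examinee pairs with a simple per-item z-score on log response time; it illustrates that idea only and is not the detection method used in the article.

```python
# Illustrative sketch: flag possible item preknowledge as the abstract describes it,
# i.e. a correct response given in an unexpectedly short time. A per-item z-score
# on log response time stands in for "unexpectedly short".
import math
from statistics import mean, stdev

def flag_preknowledge(records, z_cutoff=-2.0):
    """records: list of dicts with 'examinee', 'item', 'rt_seconds', 'correct'."""
    by_item = {}
    for r in records:
        by_item.setdefault(r["item"], []).append(math.log(r["rt_seconds"]))
    stats = {item: (mean(ts), stdev(ts)) for item, ts in by_item.items() if len(ts) > 1}
    flagged = []
    for r in records:
        mu, sd = stats.get(r["item"], (None, None))
        if mu is None or sd == 0:
            continue
        z = (math.log(r["rt_seconds"]) - mu) / sd
        if r["correct"] and z < z_cutoff:          # fast AND correct -> suspicious
            flagged.append((r["examinee"], r["item"], round(z, 2)))
    return flagged
```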
Peer reviewed
Colwell, Nicole Makas – Journal of Education and Training Studies, 2013
This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…
Descriptors: Test Anxiety, Computer Assisted Testing, Evaluation Methods, Standardized Tests
Peer reviewed
Lunz, Mary E.; Bergstrom, Betty – Journal of Educational Computing Research, 1995
Describes a study that was conducted to track the effect of candidate response patterns on a computerized adaptive test. The effect of altering responses on estimated candidate ability, test tailoring, and test precision across segments of adaptive tests and groups of candidates is examined. (Author/LRW)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Response Style (Tests)
Rizavi, Saba; Hariharan, Swaminathan – Online Submission, 2001
The advantages that computer adaptive testing offers over linear tests have been well documented. The computer adaptive test (CAT) design is more efficient than the linear test design, as fewer items are needed to estimate an examinee's proficiency to a desired level of precision. In the ideal situation, a CAT will result in examinees answering…
Descriptors: Guessing (Tests), Test Construction, Test Length, Computer Assisted Testing
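The efficiency claim rests on the usual CAT loop: estimate ability, administer the most informative remaining item, and stop once a target precision is reached. A minimal Rasch-model sketch follows; the item bank, stopping rule, and estimation details are assumptions for illustration, not drawn from the paper.

```python
# Sketch of why a CAT needs fewer items than a linear test: administer the most
# informative remaining item at the current ability estimate and stop once the
# standard error reaches a target. Rasch (1PL) model; all settings are illustrative.
import math, random

def p_correct(theta, b):                       # Rasch item response function
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(true_theta, bank, se_target=0.4, max_items=50):
    theta, administered, remaining = 0.0, [], list(bank)
    while remaining and len(administered) < max_items:
        item = min(remaining, key=lambda b: abs(b - theta))   # max Rasch information at current theta
        remaining.remove(item)
        u = 1 if random.random() < p_correct(true_theta, item) else 0
        administered.append((item, u))
        for _ in range(10):                                   # Newton steps toward the ML ability estimate
            g = sum(u_i - p_correct(theta, b_i) for b_i, u_i in administered)
            h = -sum(p_correct(theta, b_i) * (1 - p_correct(theta, b_i)) for b_i, _ in administered)
            theta = max(-4.0, min(4.0, theta - g / h))        # bounded to keep early estimates finite
        info = sum(p_correct(theta, b_i) * (1 - p_correct(theta, b_i)) for b_i, _ in administered)
        if 1.0 / math.sqrt(info) <= se_target:                # stop at the target precision
            break
    return theta, len(administered)

bank = [-3 + 6 * i / 99 for i in range(100)]                  # illustrative 100-item bank
print(run_cat(true_theta=1.0, bank=bank))
```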
Pond, Daniel J.; And Others – 1986
This directory represents the start of a research program directed toward the creation of a human abilities matrix that cross-references data on real-world jobs, laboratory performance tasks, and human performance models. The matrix will use the "abilities requirements approach" as the unifying element among these three dimensions. The…
Descriptors: Ability Identification, Classification, Computer Assisted Testing, Directories
Peer reviewed
Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
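For orientation, these are the standard models at issue (not necessarily the exact parameterization Folk and Green used): ability is estimated under a unidimensional two-parameter logistic model even though the data may follow a compensatory two-dimensional model.

```latex
% Unidimensional 2PL fitted to the data:
P(U_{ij}=1 \mid \theta_j) = \frac{1}{1 + \exp\!\left[-a_i(\theta_j - b_i)\right]}
% Compensatory two-dimensional 2PL that may actually generate the data:
P(U_{ij}=1 \mid \theta_{1j}, \theta_{2j}) =
  \frac{1}{1 + \exp\!\left[-(a_{1i}\theta_{1j} + a_{2i}\theta_{2j} + d_i)\right]}
```

When many items have nonzero loadings on the second dimension, the fitted discriminations and ability estimates reflect a blend of the two traits, which is one way to read the bias in parameter estimation, adaptive item selection, and ability estimation that the article reports.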
Zhang, Yanwei; Nandakumar, Ratna – Online Submission, 2006
Computer Adaptive Sequential Testing (CAST) is a test delivery model that combines features of the traditional conventional paper-and-pencil testing and item-based computerized adaptive testing (CAT). The basic structure of CAST is a panel composed of multiple testlets adaptively administered to examinees at different stages. Current applications…
Descriptors: Item Banks, Item Response Theory, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Halkitis, Perry N. – Journal of Outcome Measurement, 1998
The precision of a computerized adaptive test (CAT) with a limited item pool was measured using test results from 4,494 nursing students. Regardless of the item pool size, CAT provides greater precision in measurement with a smaller number of items administered even when the choice of items is limited, but CAT fails to achieve equiprecision along…
Descriptors: Ability Identification, Adaptive Testing, College Students, Computer Assisted Testing
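The precision and equiprecision statements can be read against the standard IRT relation between test information and the standard error of the ability estimate; this is a textbook identity, not a formula quoted from the article.

```latex
I(\theta) = \sum_{i \in A} I_i(\theta),
\qquad
\operatorname{SE}(\hat\theta) \approx \frac{1}{\sqrt{I(\theta)}}
\quad\text{where } A \text{ is the set of administered items.}
```

With a limited pool, item information tends to be concentrated near the middle of the ability scale, so the standard error typically grows toward the extremes and equal precision across the scale is not attained.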
Samejima, Fumiko – 1990
A method is proposed that increases the accuracy of estimation of the operating characteristics of discrete item responses, especially when the true operating characteristic is represented by a steep curve, and also at the lower and upper ends of the ability distribution where the estimation tends to be inaccurate because of the smaller number…
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
Sympson, James B.; And Others – 1982
Conventional Armed Services Vocational Aptitude Battery-7 (ASVAB) Arithmetic Reasoning and Word Knowledge tests were compared with computer-administered adaptive tests as predictors of performance in an Air Force Jet Engine Mechanic training course (n=495). Results supported earlier research in showing somewhat longer examinee response times for…
Descriptors: Ability Identification, Adaptive Testing, Aptitude Tests, Computer Assisted Testing
Peer reviewed
Henly, Susan J.; And Others – Applied Psychological Measurement, 1989
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive versions of the Differential Aptitude Test (DAT). Results for 332 students indicate that the computerized version of the DAT is an adequate representation of the conventional test battery. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Comparative Testing, Computer Assisted Testing
De Ayala, R. J.; And Others – 1990
Computerized adaptive testing procedures (CATPs) based on the graded response method (GRM) of F. Samejima (1969) and the partial credit model (PCM) of G. Masters (1982) were developed and compared. Both programs used maximum likelihood estimation of ability, and item selection was conducted on the basis of information. Two simulated data sets, one…
Descriptors: Ability Identification, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
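For reference, the two polytomous models being compared take the following standard forms (category indexing conventions vary across sources):

```latex
% Graded response model (Samejima, 1969): cumulative category probabilities,
% with P^{*}_{i0} = 1 and P^{*}_{i,m_i+1} = 0 by convention.
P^{*}_{ik}(\theta) = \frac{1}{1 + \exp\!\left[-a_i(\theta - b_{ik})\right]},
\qquad
P_{ik}(\theta) = P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta)

% Partial credit model (Masters, 1982)
P_{ik}(\theta) = \frac{\exp\sum_{j=0}^{k}(\theta - \delta_{ij})}
                      {\sum_{h=0}^{m_i}\exp\sum_{j=0}^{h}(\theta - \delta_{ij})},
\qquad \sum_{j=0}^{0}(\theta - \delta_{ij}) \equiv 0
```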
Peer reviewed
Wise, Steven L.; Plake, Barbara S. – Educational Measurement: Issues and Practice, 1989
Research dealing with the administration of tests via computer is reviewed. Several issues related to computerized testing are discussed, and areas in need of additional research are identified. The focus is on education-related ability and achievement testing; psychological tests and computer-based simulations are not addressed. (SLD)
Descriptors: Ability Identification, Achievement Tests, Computer Assisted Testing, Computer Uses in Education
De Ayala, R. J. – 1990
The effect of dimensionality on an adaptive test's ability estimation was examined. Two-dimensional data sets, which differed from one another in the interdimensional ability association, the correlation among the difficulty parameters, and whether the item discriminations were or were not confounded with item difficulty, were generated for 1,600…
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
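The abstract describes generating two-dimensional data sets that vary in the interdimensional ability association. A hedged sketch of that kind of data generation follows: correlated abilities drawn from a bivariate normal and responses from a compensatory two-dimensional 2PL. The 1,600 figure echoes the abstract; every other setting is an assumption, not De Ayala's design.

```python
# Illustrative sketch: generate two-dimensional abilities with a chosen correlation
# and simulate responses from a compensatory two-dimensional 2PL model.
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_items, rho = 1600, 40, 0.6        # rho = interdimensional ability correlation

# Correlated abilities (theta1, theta2) ~ bivariate normal with correlation rho.
cov = np.array([[1.0, rho], [rho, 1.0]])
theta = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_examinees)

# Item parameters: a discrimination on each dimension and an intercept.
a = rng.uniform(0.5, 1.5, size=(n_items, 2))
d = rng.normal(0.0, 1.0, size=n_items)

# Compensatory 2PL: P(u=1) = logistic(a1*theta1 + a2*theta2 + d).
logits = theta @ a.T + d
p = 1.0 / (1.0 + np.exp(-logits))
responses = (rng.random((n_examinees, n_items)) < p).astype(int)
print(responses.shape, responses.mean().round(3))
```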