Peer reviewed
Cudeck, Robert – Multivariate Behavioral Research, 1985
Twelve structural models of similarity were fitted to data from conventional and computer adaptive test (CAT) batteries measuring the same aptitude in a double cross-validation design. Three of the 12 models, including a multiplicative structure model, performed well, providing support for using CATs as replacements for conventional tests. (NSF)
Descriptors: Adaptive Testing, Aptitude Tests, Comparative Testing, Computer Assisted Testing
Peer reviewed
Duffelmeyer, Frederick A. – Reading Teacher, 1985
Argues that, while computers make the computation of readability formulas easy, teachers should not forget the need to apply judgment and common sense to the results. Discusses RIXRATE, a computerized version of the Rix index, and compares its performance to that of the Rauding Scale. (FL)
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Microcomputers, Readability
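The Duffelmeyer (1985) entry above mentions RIXRATE, a computerized version of Anderson's Rix index. As a minimal sketch of the underlying arithmetic only (long words of seven or more letters per sentence), not of the RIXRATE program itself, with deliberately crude tokenization:

```python
import re

def rix(text: str) -> float:
    """Anderson's Rix index: long words (7+ letters) per sentence.

    A simplified sketch -- real implementations need more careful
    sentence and word tokenization than these regexes provide.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        raise ValueError("text contains no sentences")
    words = re.findall(r"[A-Za-z]+", text)
    long_words = [w for w in words if len(w) >= 7]
    return len(long_words) / len(sentences)

sample = ("Readability formulas are convenient. "
          "Teachers should nevertheless apply professional judgment.")
print(f"Rix = {rix(sample):.2f}")  # long words per sentence
```

In practice the raw index is then mapped to a grade-equivalent band via a lookup table, which is the part a teacher's judgment would temper.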
Colbourne, Marlene – Australian Journal of Reading, 1983
Explores the potential for using computers in the diagnosis of learning disabilities. Describes a prototype computer-based system designed to assist teachers in the assessment of reading difficulties. (FL)
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Reading Diagnosis, Reading Difficulties
Luecht, Richard M. – 2003
This paper presents a multistage adaptive testing test development paradigm that promises to handle content balancing and other test development needs, psychometric reliability concerns, and item exposure. The bundled multistage adaptive testing (BMAT) framework is a modification of the computer-adaptive sequential testing framework introduced by…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Mastery Tests
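Luecht's abstract describes multistage adaptive testing only at a high level, so the routing sketch below is generic rather than the BMAT framework itself: after each stage, a provisional proportion-correct score selects the next preassembled module. The 1-3-3 panel layout and the cut points are illustrative assumptions.

```python
def route(number_correct: int, items_administered: int) -> str:
    """Route to the next-stage module on provisional proportion correct.

    Cut points (0.4, 0.7) are illustrative assumptions, not values
    from the BMAT framework.
    """
    p = number_correct / items_administered
    if p < 0.4:
        return "easy"
    elif p < 0.7:
        return "medium"
    return "hard"

# A hypothetical 1-3-3 design: one routing module, then three
# difficulty-targeted modules at each later stage.
panel = {
    "stage2": {"easy": "module_E1", "medium": "module_M1", "hard": "module_H1"},
    "stage3": {"easy": "module_E2", "medium": "module_M2", "hard": "module_H2"},
}

print(panel["stage2"][route(number_correct=4, items_administered=12)])  # module_E1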
Chang, Shu-Ren; Plake, Barbara S.; Ferdous, Abdullah A. – Online Submission, 2005
This study examined the time that examinees of different ability levels spend taking a CAT, which tailors demanding items to these examinees. It was also found that high-ability examinees spend more time on the pretest items, which are not tailored to the examinees' ability level, than do lower ability examinees. Higher ability examinees showed persistence with test…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Reaction Time
Wise, Steven L.; Kong, Xiaojing – Online Submission, 2005
When low-stakes assessments are administered to examinees, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method for measuring examinee test-taking effort on computer-based test items based on item response time. This…
Descriptors: Computer Assisted Testing, Reaction Time, Response Style (Tests), Measurement Techniques
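Wise and Kong's abstract describes an effort measure built from item response times: a response counts as solution behavior when its time meets an item-specific rapid-guessing threshold, and effort is the proportion of such responses. The sketch below assumes the per-item thresholds are already known (deriving them is the harder part of the method), and the data are invented.

```python
from typing import Sequence

def response_time_effort(response_times: Sequence[float],
                         thresholds: Sequence[float]) -> float:
    """Proportion of items on which the examinee showed solution behavior.

    A response is solution behavior when its time meets or exceeds the
    item's rapid-guessing threshold; thresholds here are assumed inputs.
    """
    if len(response_times) != len(thresholds):
        raise ValueError("one threshold per item is required")
    solution = sum(rt >= th for rt, th in zip(response_times, thresholds))
    return solution / len(response_times)

# Invented illustrative data: seconds spent on five items.
times = [42.0, 3.1, 55.4, 28.9, 2.0]
cutoffs = [10.0, 10.0, 12.0, 8.0, 10.0]
print(response_time_effort(times, cutoffs))  # 0.6
```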
Sheppard, Valarie A.; Baker, Todd A.; Gebhardt, Deborah L.; Leonard, Kristine M. – 2000
The purpose of this project was to develop valid evaluation procedures for the selection of Container Equipment Operators (CEOs) in the shipping industry. A job analysis was conducted to identify the essential tasks of the CEO job. Site visits, a task inventory, and the determination of essential tasks were used in the job analysis. The skills and…
Descriptors: Computer Assisted Testing, Employees, Equipment, Job Analysis
Patsula, Liane N.; Steffen, Manfred – 1997
One challenge associated with computerized adaptive testing (CAT) is the maintenance of test and item security while allowing for daily testing. An alternative to continually creating new pools containing an independent set of items would be to consider each CAT pool as a sample of items from a larger collection (referred to as a VAT) rather than…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Multiple Choice Tests
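Patsula and Steffen's "vat" idea treats each operational CAT pool as a sample from a larger item collection. Their actual assembly rules are not given in the abstract, so the sketch below only illustrates the general idea of rotating pools drawn from a vat with a cap on item reuse; every parameter is invented.

```python
import random
from collections import Counter

def draw_pools(vat, n_pools, pool_size, max_reuse, seed=0):
    """Sample overlapping item pools from a larger vat.

    Items already used `max_reuse` times are excluded from later
    draws, spreading exposure across the vat. All parameters are
    illustrative assumptions, not values from the paper.
    """
    rng = random.Random(seed)
    usage = Counter()
    pools = []
    for _ in range(n_pools):
        eligible = [item for item in vat if usage[item] < max_reuse]
        pool = rng.sample(eligible, pool_size)
        usage.update(pool)
        pools.append(pool)
    return pools

vat = [f"item_{i:03d}" for i in range(500)]
pools = draw_pools(vat, n_pools=10, pool_size=60, max_reuse=3)
print(len(pools), len(pools[0]))  # 10 60
```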
Patsula, Liane N.; Pashley, Peter J. – 1997
Many large-scale testing programs routinely pretest new items alongside operational (or scored) items to determine their empirical characteristics. If these pretest items pass certain statistical criteria, they are placed into an operational item pool; otherwise they are edited and re-pretested or simply discarded. In these situations, reliable…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Banks
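Patsula and Pashley refer to pretest items passing "certain statistical criteria" before entering the operational pool. Those criteria are not stated in the abstract, so the screen below, based on classical proportion-correct and item-total correlation thresholds, is purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PretestStats:
    item_id: str
    p_value: float         # proportion of examinees answering correctly
    point_biserial: float  # item-total correlation

def passes_screen(s: PretestStats,
                  p_range=(0.20, 0.90),
                  min_rpb=0.15) -> bool:
    """Illustrative acceptance rule; the thresholds are assumptions,
    not the criteria used in the paper."""
    return p_range[0] <= s.p_value <= p_range[1] and s.point_biserial >= min_rpb

candidates = [
    PretestStats("A", 0.55, 0.32),
    PretestStats("B", 0.96, 0.10),  # too easy, weak discrimination
]
operational = [s.item_id for s in candidates if passes_screen(s)]
print(operational)  # ['A']
```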
O'Neil, Harold F., Jr.; Klein, Davina C. D. – 1997
This report documents progress at the Center for Research on Evaluation, Standards, and Student Testing (CRESST) on the feasibility of scoring concept maps using technology. CRESST, in its integrated simulation approach to assessment, has assembled a suite of performance assessment tasks (the integrated simulation) onto which it has mapped the…
Descriptors: Automation, Computer Assisted Testing, Concept Mapping, Cooperation
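The CRESST report concerns technology-based scoring of concept maps, but the abstract does not spell out the rubric. A common baseline, shown here as a hypothetical stand-in rather than the CRESST method, scores the student's propositions (concept-link-concept triples) against an expert map.

```python
def proposition_score(student_map, expert_map):
    """Score a concept map as the share of expert propositions matched.

    Each map is a set of (concept, relation, concept) triples. This
    overlap measure is a generic baseline, not the CRESST rubric.
    """
    matched = student_map & expert_map
    return len(matched) / len(expert_map)

expert = {("evaporation", "causes", "condensation"),
          ("condensation", "forms", "clouds"),
          ("clouds", "produce", "rain")}
student = {("evaporation", "causes", "condensation"),
           ("clouds", "produce", "rain"),
           ("rain", "feeds", "rivers")}
print(f"{proposition_score(student, expert):.2f}")  # 0.67
```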
Schmidt, Peter – 2001
A conception of a dialog system for discussing mathematical material in the domain of calculus is outlined. Applications include university students using the dialog system to work on their knowledge and prepare for their oral examinations. The conception is based upon three pillars. One central pillar is a knowledge base containing the collections of…
Descriptors: Calculus, Computer Assisted Testing, Higher Education, Instructional Design
Hertz, Norman R.; Chinn, Roberta N. – 2003
This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…
Descriptors: Computer Assisted Testing, Item Banks, Item Response Theory, Licensing Examinations (Professions)
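Hertz and Chinn ask whether exposure shifts average item difficulty over an administrative cycle. One check consistent with the abstract's "conventional item statistics", sketched below with invented data, compares an item's classical p-value between early and late windows of the cycle; the midpoint split is an assumption.

```python
def p_value(responses):
    """Classical item difficulty: proportion of correct (1) responses."""
    return sum(responses) / len(responses)

# Invented 0/1 response vectors for one item, split at the cycle midpoint.
early = [1, 1, 0, 1, 1, 0, 1, 1]   # first half of the cycle
late  = [1, 0, 0, 1, 0, 1, 0, 1]   # second half
drift = p_value(late) - p_value(early)
print(f"early={p_value(early):.2f} late={p_value(late):.2f} drift={drift:+.2f}")
```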
Manalo, Jonathan R.; Wolfe, Edward W. – 2000
Recently, the Test of English as a Foreign Language (TOEFL) changed to include a writing section that gives examinees the option of composing their responses on computer or by hand. Unfortunately, this may introduce several potential sources of error that might reduce the reliability and validity of the scores. The seriousness…
Descriptors: Computer Assisted Testing, Essay Tests, Evaluators, Handwriting
Manalo, Jonathan R.; Wolfe, Edward W. – 2000
Recently, the Test of English as a Foreign Language (TOEFL) changed by including a direct writing assessment where examinees choose between computer and handwritten composition formats. Unfortunately, examinees may have differential access to and comfort with computers; as a result, scores across these formats may not be comparable. Analysis of…
Descriptors: Adults, Computer Assisted Testing, Essay Tests, Handwriting
van der Linden, Wim J. – 1999
A constrained computerized adaptive testing (CAT) algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived from a set of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
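Van der Linden's algorithm equates number-correct scores by imposing constraints on CAT item selection; the full method assembles a constrained "shadow test" at each step via 0-1 programming. The greedy sketch below is only a simplified stand-in showing constrained maximum-information selection under a 2PL model, with a hypothetical pool and quotas.

```python
import math

def info_2pl(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_next(items, theta, counts, quota):
    """Greedy stand-in for constrained selection: choose the most
    informative item whose content area is still under its quota.
    Van der Linden's method instead solves a shadow-test 0-1 program
    at each step; this greedy rule only illustrates the idea."""
    eligible = [it for it in items if counts.get(it["area"], 0) < quota[it["area"]]]
    return max(eligible, key=lambda it: info_2pl(it["a"], it["b"], theta))

# Hypothetical pool: discrimination a, difficulty b, content area.
pool = [
    {"id": 1, "a": 1.2, "b": -0.5, "area": "algebra"},
    {"id": 2, "a": 0.8, "b": 0.1, "area": "algebra"},
    {"id": 3, "a": 1.5, "b": 0.3, "area": "geometry"},
]
print(pick_next(pool, theta=0.0, counts={"algebra": 2},
                quota={"algebra": 2, "geometry": 3})["id"])  # 3
```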