Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 2
Since 2016 (last 10 years) | 3
Since 2006 (last 20 years) | 4
Descriptor
Adults | 16
Computer Assisted Testing | 16
Test Format | 16
Test Construction | 5
Test Validity | 5
Comparative Testing | 4
Test Items | 4
Adaptive Testing | 3
Comparative Analysis | 3
Difficulty Level | 3
Patients | 3
Source
Psychological Assessment | 3
Assessment | 2
Educational Measurement: Issues and Practice | 1
International Journal of Testing | 1
Journal of Educational Measurement | 1
Journal of Speech, Language, and Hearing Research | 1
Author
Anger, W. Kent | 1
Arslan, Burcu | 1
Berger, Steven G. | 1
Binder, Laurence M. | 1
Brunner, Martin | 1
Campbell, Keith A. | 1
Davis, Kelly L. | 1
Engdahl, Brian | 1
Foster, David F. | 1
Gollwitzer, Mario | 1
Gong, Tao | 1
Publication Type
Reports - Research | 15
Journal Articles | 9
Speeches/Meeting Papers | 6
Reports - Evaluative | 1
Education Level
Higher Education | 1
Postsecondary Education | 1
Audience
Researchers | 1
Laws, Policies, & Programs
Assessments and Surveys
Armed Forces Qualification Test | 1
Armed Services Vocational Aptitude Battery | 1
Test of English as a Foreign Language | 1
Wisconsin Card Sorting Test | 1
Shen, Jing; Wu, Jingwei – Journal of Speech, Language, and Hearing Research, 2022
Purpose: This study examined the performance difference between remote and in-laboratory test modalities with a speech recognition in noise task in older and younger adults. Method: Four groups of participants (younger remote, younger in-laboratory, older remote, and older in-laboratory) were tested on a speech recognition in noise protocol with…
Descriptors: Age Differences, Test Format, Computer Assisted Testing, Auditory Perception
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively; this inattentive behavior, known as careless responding, leads to unintended problems with data quality. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Arslan, Burcu; Jiang, Yang; Keehner, Madeleine; Gong, Tao; Katz, Irvin R.; Yan, Fred – Educational Measurement: Issues and Practice, 2020
Computer-based educational assessments often include items that involve drag-and-drop responses. There are different ways that drag-and-drop items can be laid out and different choices that test developers can make when designing these items. Currently, these decisions are based on experts' professional judgments and design constraints, rather…
Descriptors: Test Items, Computer Assisted Testing, Test Format, Decision Making
Steinmetz, Jean-Paul; Brunner, Martin; Loarer, Even; Houssemand, Claude – Psychological Assessment, 2010
The Wisconsin Card Sorting Test (WCST) assesses executive and frontal lobe function and can be administered manually or by computer. Despite the widespread application of the 2 versions, the psychometric equivalence of their scores has rarely been evaluated and only a limited set of criteria has been considered. The present experimental study (N =…
Descriptors: Computer Assisted Testing, Psychometrics, Test Theory, Scores

Campbell, Keith A.; Rohlman, Diane S.; Storzbach, Daniel; Binder, Laurence M.; Anger, W. Kent; Kovera, Craig A.; Davis, Kelly L.; Grossman, Sandra J. – Assessment, 1999
Administered 12 psychological and 7 neurobehavioral performance tests twice to nonclinical normative samples of 30 adults (computer format only) and 30 adults (computer and conventional administration) with one week between administrations. Results suggest that individual test-retest reliability is not affected when tests are administered as part…
Descriptors: Adults, Computer Assisted Testing, Neuropsychology, Psychological Testing
Manalo, Jonathan R.; Wolfe, Edward W. – 2000
Recently, the Test of English as a Foreign Language (TOEFL) changed by including a direct writing assessment where examinees choose between computer and handwritten composition formats. Unfortunately, examinees may have differential access to and comfort with computers; as a result, scores across these formats may not be comparable. Analysis of…
Descriptors: Adults, Computer Assisted Testing, Essay Tests, Handwriting
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine whether an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level

Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program for item response theory (IRT) information with the three-parameter and graded response models. Results for more than 3,000 adult examinees for 2 tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
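For context on the models named in this abstract: the three-parameter logistic (3PL) IRT model gives the probability that an examinee of ability \theta answers item i correctly as

    P_i(\theta) = c_i + (1 - c_i) \frac{1}{1 + e^{-a_i(\theta - b_i)}}

with discrimination a_i, difficulty b_i, and pseudo-guessing lower asymptote c_i; the graded response model extends the same idea to polytomously scored items. This is the standard textbook form, not notation taken from the study itself.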

Berger, Steven G.; And Others – Assessment, 1994
As part of a neuropsychological assessment, 95 adult patients completed either standard or computerized versions of the Category Test. Subjects who completed the computerized version exhibited more errors than those who completed the standard version, suggesting that it may be more difficult. (SLD)
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Demography

Kobak, Kenneth A.; And Others – Psychological Assessment, 1993
A newly developed computer-administered form of the Hamilton Anxiety Scale and the clinician-administered form of the instrument were given to 214 psychiatric outpatients and 78 community adults. Results support the reliability and validity of the computer-administered version as an alternative to the clinician-administered version. (SLD)
Descriptors: Adults, Anxiety, Clinical Diagnosis, Comparative Testing
Assessing the Effects of Computer Administration on Scores and Parameter Estimates Using IRT Models.
Sykes, Robert C.; And Others – 1991
To investigate the psychometric feasibility of replacing a paper-and-pencil licensing examination with a computer-administered test, a validity study was conducted. The computer-administered test (Cadm) was a common set of items for all test takers, distinct from computerized adaptive testing, in which test takers receive items appropriate to…
Descriptors: Adults, Certification, Comparative Testing, Computer Assisted Testing
Knapp, Deirdre J.; Pliske, Rebecca M. – 1986
A study was conducted to validate the Army's Computerized Adaptive Screening Test (CAST), using data from 2,240 applicants from 60 Army recruiting stations across the nation. CAST is a computer-assisted adaptive test used to predict performance on the Armed Forces Qualification Test (AFQT). AFQT scores are computed by adding four subtest scores of…
Descriptors: Adaptive Testing, Adults, Aptitude Tests, Comparative Testing
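As general background on adaptive testing (a sketch of common practice, not a description of CAST's actual algorithm): after each response the ability estimate \hat{\theta} is updated, and the next item is typically chosen to maximize Fisher information at that estimate. Under the two-parameter logistic model,

    I_i(\theta) = a_i^2 \, P_i(\theta) \, [1 - P_i(\theta)],

so the most informative items are highly discriminating ones whose difficulty b_i lies near the current \hat{\theta}.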

Rosenfeld, Rochelle; And Others – Psychological Assessment, 1992
A computer-administered version of the Yale-Brown Obsessive-Compulsive Scale was administered to 31 patients with obsessive-compulsive disorder, 16 with other anxiety disorders, and 23 nonpatient controls. The computer version correlated highly with the clinician-administered version and was well understood and liked by subjects. (SLD)
Descriptors: Adults, Anxiety, Behavior Patterns, Comparative Testing
Schwarz, Richard D.; Rich, Changhua; Podrabsky, Tracy – 2003
This paper studied the usefulness of differential item functioning (DIF) methodology for examining potential mode effects. Although the goal was not to validate the comparability of the assessments per se, it is of interest to speculate why some formats could give rise to differential performance. Data were obtained from two instruments on which…
Descriptors: Adult Basic Education, Adults, Computer Assisted Testing, Elementary School Students
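The abstract does not specify which DIF statistic was applied; a common choice for flagging mode effects is the Mantel-Haenszel common odds ratio, which matches examinees on total score. With reference-group counts A_k (correct) and B_k (incorrect) and focal-group counts C_k (correct) and D_k (incorrect) at each score level k containing N_k examinees,

    \hat{\alpha}_{MH} = \frac{\sum_k A_k D_k / N_k}{\sum_k B_k C_k / N_k},

and values far from 1 flag an item as performing differently across the matched groups (here, across administration modes).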
Sireci, Stephen G.; Foster, David F.; Robin, Frederic; Olsen, James – 1997
Evaluating the comparability of a test administered in different languages is a difficult, if not impossible, task. Comparisons are problematic because observed differences in test performance between groups who take different language versions of a test could be due to a difference in difficulty between the tests, to cultural differences in test…
Descriptors: Adaptive Testing, Adults, Certification, Comparative Analysis