| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 16 |
| Since 2022 (last 5 years) | 64 |
| Since 2017 (last 10 years) | 155 |
| Since 2007 (last 20 years) | 250 |
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 362 |
| Multiple Choice Tests | 362 |
| Foreign Countries | 109 |
| Test Items | 109 |
| Test Construction | 83 |
| Student Evaluation | 68 |
| Higher Education | 65 |
| Test Format | 64 |
| College Students | 57 |
| Scores | 54 |
| Comparative Analysis | 45 |
| Author | Records |
| --- | --- |
| Anderson, Paul S. | 6 |
| Clariana, Roy B. | 4 |
| Wise, Steven L. | 4 |
| Alonzo, Julie | 3 |
| Anderson, Daniel | 3 |
| Ben Seipel | 3 |
| Bridgeman, Brent | 3 |
| Kosh, Audra E. | 3 |
| Mark L. Davison | 3 |
| Nese, Joseph F. T. | 3 |
| Park, Jooyong | 3 |
| Location | Records |
| --- | --- |
| United Kingdom | 14 |
| Australia | 9 |
| Canada | 9 |
| Turkey | 9 |
| Germany | 5 |
| Spain | 4 |
| Taiwan | 4 |
| Texas | 4 |
| Arizona | 3 |
| Europe | 3 |
| Indonesia | 3 |
| Laws, Policies, & Programs | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 2 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Does not meet standards | 1 |
Peer reviewed: Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results with 349 students indicate which constructs the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
Peer reviewed: Russell, Michael; Haney, Walt – Education Policy Analysis Archives, 1997
The effect that mode of administration, computer versus paper and pencil, had on the performance of 120 middle school students on multiple choice and written test questions was studied. Results show that, for students accustomed to writing on computers, responses written on the computer were more successful. Implications for testing are discussed.…
Descriptors: Computer Assisted Testing, Essay Tests, Middle School Students, Middle Schools
Laird, Barbara B. – Inquiry, 2003
Laird examines two computerized nursing tests and finds that the two sets of scores are related. (Contains 2 tables.)
Descriptors: Nursing Education, Nurses, Computer Assisted Testing, Comparative Testing
Park, Jooyong – Journal of Educational Psychology, 2005
A new computerized testing system, which facilitates the use of short-answer-type testing, has been developed. In this system, the question of a multiple-choice problem is presented first, and the options appear briefly on the request of the test taker. The crux of this manipulation is to force students to solve the problem as if they were solving…
Descriptors: Experimental Groups, Control Groups, Computer Assisted Testing, Multiple Choice Tests
Handwerk, Phil – ETS Research Report Series, 2007
Online high schools are growing significantly in number, popularity, and function. However, little empirical data has been published about the effectiveness of these institutions. This research examined the frequency of group work and extended essay writing among online Advanced Placement Program® (AP®) students, and how these tasks may have…
Descriptors: Advanced Placement Programs, Advanced Placement, Computer Assisted Testing, Models
Gershon, Richard C.; And Others – 1994
A 1992 study by R. Gershon found discrepancies when comparing the theoretical Rasch item characteristic curve with the average empirical curve for 1,304 vocabulary items administered to 7,711 students. When person-item mismatches were deleted (for any person-item interaction where the ability of the person was much higher or much lower than the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
Singley, Mark K.; Bennett, Randy Elliot – 1995
One of the main limitations of the current generation of computer-based tests is its dependency on the multiple-choice item. This research was aimed at extending computer-based testing by bringing limited forms of performance assessment to it in the domain of mathematics. This endeavor involves not only building task types that better reflect…
Descriptors: Computer Assisted Testing, Item Analysis, Mathematics Tests, Multiple Choice Tests
Russell, Michael; Haney, Walt – 1996
The results of a small research project that studied the effect computer administration has on student performance for writing or essay tests are presented. The introduction of computer-administered tests has raised concern about the equivalence of scores generated by computer versus paper-and-pencil test versions. For this study a sample of…
Descriptors: Computer Assisted Testing, Essay Tests, High School Students, High Schools
PDF pending restoration: Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
McArthur, David; And Others – 1982
The Diagnostic Testing System (DX) is an integral system for developing and administering tests. The system can be utilized for testing in any subject matter, or any number of subject matters, at any level on scholastic or cognitive continuums. The major purpose of DX is to provide diagnostic data about the level at which a given student (or group…
Descriptors: Academic Achievement, Computer Assisted Testing, Computer Managed Instruction, Diagnostic Tests
Choppin, Bruce H. – 1983
In the answer-until-correct mode of multiple-choice testing, respondents are directed to continue choosing among the alternatives to each item until they find the correct response. There is no consensus as to how to convert the resulting pattern of responses into a measure because of two conflicting models of item response behavior. The first…
Descriptors: Computer Assisted Testing, Difficulty Level, Guessing (Tests), Knowledge Level
Newsom, Robert S.; And Others – Evaluation Quarterly, 1978
For the training and placement of professional workers, multiple-choice instruments are the norm for wide-scale measurement and evaluation efforts. These instruments contain fundamental problems. Computer-based management simulations may provide solutions to these problems, appear scoreable and reliable, offer increased validity, and are better…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Occupational Tests, Personnel Evaluation
Peer reviewed: Norcini, John J.; And Others – Evaluation and the Health Professions, 1986
This study compares physician performance on the Computer-Aided Simulation of the Clinical Encounter with peer ratings and performance on multiple choice questions and patient management problems. Results indicate that all formats are equally valid, although multiple choice is the most reliable method of assessment per unit of testing time.…
Descriptors: Certification, Competence, Computer Assisted Testing, Computer Simulation
Brightman, Harvey J.; And Others – Educational Technology, 1984
Describes the development and evaluation of interactive computer-based formative tests containing multiple choice questions based on Bloom's taxonomy and their use in a core-level higher education business statistics course prior to graded examinations to determine where students are experiencing difficulties. (MBR)
Descriptors: Cognitive Objectives, Computer Assisted Testing, Computer Software, Diagnostic Tests
Peer reviewed: Marshall, Thomas E.; And Others – Journal of Educational Technology Systems, 1996
Examines the strategies used in answering a computerized multiple-choice test where all questions on a semantic topic were grouped together or randomly distributed. Findings indicate that students grouped by performance on the test used different strategies in completing the test due to distinct cognitive processes between the groups. (AEF)
Descriptors: Academic Achievement, Cognitive Processes, Computer Assisted Testing, Higher Education
