Collins, Michael A. J. – Collegiate Microcomputer, 1986
Ten different methods of using computer-administered tests in college-level biology are described: evaluation (or posttest), student self-evaluation, student self-remediation, class remediation, individual student remediation, pretesting, identification of problem areas, immediate feedback, group testing, and remote-site testing. (Author/MBR)
Descriptors: Biology, Computer Assisted Testing, Computer Software, Higher Education
Anderson, Paul S.; Alexander, Diane – 1986
The Multi-Digit (MDT) testing procedure is a computer-scored testing innovation conceptualized in 1982. It is fully compatible with multiple-choice and true/false tests and is well suited for testing discrete terms and concepts, such as in fill-in-the-blank examinations. The student reads the question and selects the appropriate response from an…
Descriptors: Computer Assisted Testing, Computer Software, Criminal Law, Higher Education

Aiken, Lewis R. – Educational and Psychological Measurement, 1989
Two alternatives to traditional item analysis and reliability estimation procedures are considered for determining the difficulty, discrimination, and reliability of optional items on essay and other tests. A computer program to compute these measures is described, and illustrations are given. (SLD)
Descriptors: College Entrance Examinations, Computer Software, Difficulty Level, Essay Tests
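The item statistics Aiken's abstract mentions (difficulty, discrimination, reliability) come from classical test theory. As a minimal sketch only — the abstract does not reproduce Aiken's actual program, so the data and helper names below are illustrative assumptions — the standard computations look like this:

```python
# Classical test theory item statistics: a hedged sketch, not Aiken's program.
# responses: rows = examinees, columns = items; 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
]

n_examinees = len(responses)
n_items = len(responses[0])
totals = [sum(row) for row in responses]

# Item difficulty: proportion of examinees answering the item correctly.
difficulty = [sum(row[i] for row in responses) / n_examinees
              for i in range(n_items)]

def point_biserial(item_scores, totals):
    """Item discrimination: point-biserial correlation of item with total."""
    n = len(totals)
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    p = sum(item_scores) / n          # proportion correct
    cov = sum((x - p) * (t - mean_t)
              for x, t in zip(item_scores, totals)) / n
    var_i = p * (1 - p)               # variance of a 0/1 item
    if var_i == 0 or var_t == 0:
        return 0.0
    return cov / (var_i * var_t) ** 0.5

discrimination = [point_biserial([row[i] for row in responses], totals)
                  for i in range(n_items)]

# KR-20 internal-consistency reliability (can be low or negative
# for tiny toy samples like this one).
mean_t = sum(totals) / n_examinees
var_t = sum((t - mean_t) ** 2 for t in totals) / n_examinees
kr20 = (n_items / (n_items - 1)) * (
    1 - sum(p * (1 - p) for p in difficulty) / var_t)

print(difficulty)
```

With real data, items with difficulty near 0 or 1 or with low discrimination would be flagged for revision; handling *optional* items, as Aiken proposes, requires adjustments beyond this sketch.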
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Lunz, Mary E.; And Others – 1989
A method for understanding and controlling the multiple facets of an oral examination (OE) or other judge-intermediated examination is presented and illustrated. This study focused on determining the extent to which the facets model (FM) analysis constructs meaningful variables for each facet of an OE involving protocols, examiners, and…
Descriptors: Computer Software, Difficulty Level, Evaluators, Examiners