| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 16 |
| Since 2022 (last 5 years) | 64 |
| Since 2017 (last 10 years) | 155 |
| Since 2007 (last 20 years) | 250 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 362 |
| Multiple Choice Tests | 362 |
| Foreign Countries | 109 |
| Test Items | 109 |
| Test Construction | 83 |
| Student Evaluation | 68 |
| Higher Education | 65 |
| Test Format | 64 |
| College Students | 57 |
| Scores | 54 |
| Comparative Analysis | 45 |
| Author | Count |
| --- | --- |
| Anderson, Paul S. | 6 |
| Clariana, Roy B. | 4 |
| Wise, Steven L. | 4 |
| Alonzo, Julie | 3 |
| Anderson, Daniel | 3 |
| Bridgeman, Brent | 3 |
| Davison, Mark L. | 3 |
| Kosh, Audra E. | 3 |
| Nese, Joseph F. T. | 3 |
| Park, Jooyong | 3 |
| Seipel, Ben | 3 |
| Location | Count |
| --- | --- |
| United Kingdom | 14 |
| Australia | 9 |
| Canada | 9 |
| Turkey | 9 |
| Germany | 5 |
| Spain | 4 |
| Taiwan | 4 |
| Texas | 4 |
| Arizona | 3 |
| Europe | 3 |
| Indonesia | 3 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 2 |
| What Works Clearinghouse Rating | Count |
| --- | --- |
| Does not meet standards | 1 |
Bridgeman, Brent – Journal of Educational Measurement, 1992 (peer reviewed)
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
Fraser, Linda; Harich, Katrin; Norby, Joni; Brzovic, Kathy; Rizkallah, Teeanna; Loewy, Dana – Business Communication Quarterly, 2005
To assess students' business writing abilities upon entry into the business program and exit from the capstone course, a multitiered assessment package was developed that measures students' achievement of specific learning outcomes and provides "value-added" scores. The online segment of the test measures five competencies across three process…
Descriptors: Business Communication, Grading, Writing Ability, Achievement Gains
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data came from 118 matched (parallel) test items across 4 tests taken by 764 college geography students. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
Plake, Barbara S.; Wise, Steven L. – 1986
One question regarding the utility of adaptive testing concerns the effect of individualized item arrangements on examinee test scores. The purpose of this study was to analyze examinees' item difficulty choices as a function of their performance on previous items. The examination was a 25-item test of basic algebra skills given to 36 students in an…
Descriptors: Adaptive Testing, Algebra, College Students, Computer Assisted Testing
Thompson, Janet G.; Weiss, David J. – 1980
The relative validity of adaptive and conventional testing strategies was investigated, using non-test variables as one set of external criteria. A total of 101 college students completed both a variable-length stradaptive test and a peaked conventional test; a second group of 131 college students completed a variable-length Bayesian adaptive test…
Descriptors: Achievement Tests, Adaptive Testing, College Entrance Examinations, Computer Assisted Testing
Davies, Phil – ALT-J: Research in Learning Technology, 2004
This paper reports on a case study that evaluates the validity of assessing students via a computerized peer-marking process, rather than on their production of an essay in a particular subject area. The study assesses the higher-order skills shown by a student in marking and providing consistent feedback on an essay. In order to evaluate the…
Descriptors: Essays, Peer Evaluation, Evaluation Methods, Student Evaluation
Adams, Andrew; Williams, Shirley – Electronic Journal of e-Learning, 2006
Customer-Driven Development is a technique from the software development method called Extreme Programming (XP), in which customers (importantly, including end users at all levels) are closely involved in the software design and redesign process. This method of producing software suitable for customers has been adapted to help in the production of…
Descriptors: Computer Software, Computer Assisted Testing, Multiple Choice Tests, Questioning Techniques
Anderson, Paul S.; Kanzler, Eileen M. – 1985
Test scores were compared for two types of objective achievement tests--multiple choice tests and the recently developed Multi-Digit Test (MDT) procedure. MDT is an approximation of the fill-in-the-blank technique. Students select their answers from long lists of alphabetized terms, with each answer corresponding to a number from 001 to 999. The…
Descriptors: Achievement Tests, Cloze Procedure, Comparative Testing, Computer Assisted Testing
Skakun, Ernest N.; And Others – Educational and Psychological Measurement, 1979 (peer reviewed)
Factor analysis was used to determine whether computerized patient management problems had the same factor structure as multiple-choice examinations and rating scales. The factor structure was found to be similar to that of the examinations but not to that of the rating scales. (JKS)
Descriptors: Comparative Testing, Computer Assisted Testing, Computer Programs, Factor Structure
Velanoff, John – 1987
This report describes courseware for comprehensive computer-assisted testing and instruction. With this program, a personal computer can be used to: (1) generate multiple test versions to meet test objectives; (2) create study guides for self-directed learning; and (3) evaluate student and teacher performance. Numerous multiple-choice examples,…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Computer Uses in Education, Courseware
Wise, Steven L.; And Others – Journal of Educational Measurement, 1992 (peer reviewed)
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT)--students choose the difficulty level of their test items--was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
O'Neill, Paula N. – Journal of Dental Education, 1998 (peer reviewed)
Examines various methods for assessing dental students' learning in a problem-based curriculum, including objective structured clinical examination; clinical proficiency testing; triple jump evaluation (identifying facts, developing hypotheses, establishing learning needs to further evaluate the problem, solving the learning needs, presenting…
Descriptors: Allied Health Occupations Education, Clinical Teaching (Health Professions), Computer Assisted Testing, Curriculum Design
Wolfe, Edward W.; Manalo, Jonathan R. – ETS Research Report Series, 2005
This study examined scores from 133,906 operationally scored Test of English as a Foreign Language™ (TOEFL®) essays to determine whether the choice of composition medium has any impact on score quality for subgroups of test-takers. Results of analyses demonstrate that (a) scores assigned to word-processed essays are slightly more reliable than…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scores
Coniam, David – Computer Assisted Language Learning, 1997 (peer reviewed)
Describes a computer program that takes multiple-choice cloze passages and compiles them into proofreading exercises. Results reveal that such a computerized test type can be used to accurately measure the proficiency of students of English as a Second Language in Hong Kong. (14 references) (Author/CK)
Descriptors: Cloze Procedure, College Students, Computer Assisted Instruction, Computer Assisted Testing
Cohen, Allan S.; And Others – Journal of Educational Measurement, 1991 (peer reviewed)
Detection of differential item functioning (DIF) in test items constructed to favor one group over another was investigated using parameter estimates from two item response theory-based computer programs--BILOG and LOGIST--with data for 1,000 White and 1,000 Black college students. Use of prior distributions and marginal-maximum a posteriori estimation is…
Descriptors: Black Students, College Students, Computer Assisted Testing, Equations (Mathematics)

