Publication Date
| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 16 |
| Since 2022 (last 5 years) | 64 |
| Since 2017 (last 10 years) | 155 |
| Since 2007 (last 20 years) | 250 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 362 |
| Multiple Choice Tests | 362 |
| Foreign Countries | 109 |
| Test Items | 109 |
| Test Construction | 83 |
| Student Evaluation | 68 |
| Higher Education | 65 |
| Test Format | 64 |
| College Students | 57 |
| Scores | 54 |
| Comparative Analysis | 45 |
Author
| Author | Count |
| --- | --- |
| Anderson, Paul S. | 6 |
| Clariana, Roy B. | 4 |
| Wise, Steven L. | 4 |
| Alonzo, Julie | 3 |
| Anderson, Daniel | 3 |
| Ben Seipel | 3 |
| Bridgeman, Brent | 3 |
| Kosh, Audra E. | 3 |
| Mark L. Davison | 3 |
| Nese, Joseph F. T. | 3 |
| Park, Jooyong | 3 |
Location
| Location | Count |
| --- | --- |
| United Kingdom | 14 |
| Australia | 9 |
| Canada | 9 |
| Turkey | 9 |
| Germany | 5 |
| Spain | 4 |
| Taiwan | 4 |
| Texas | 4 |
| Arizona | 3 |
| Europe | 3 |
| Indonesia | 3 |
Laws, Policies, & Programs
| Laws, Policies, & Programs | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 2 |
What Works Clearinghouse Rating
| What Works Clearinghouse Rating | Count |
| --- | --- |
| Does not meet standards | 1 |
Tucker, Bill – Educational Leadership, 2009
New technology-enabled assessments offer the potential to understand more than just whether a student answered a test question right or wrong. Using multiple forms of media that enable both visual and graphical representations, these assessments present complex, multistep problems for students to solve and collect detailed information about an…
Descriptors: Research and Development, Problem Solving, Student Characteristics, Information Technology
Veldkamp, Bernard P. – International Journal of Testing, 2008
Integrity[TM], an online application for testing both the statistical integrity of the test and the academic integrity of the examinees, was evaluated for this review. Program features and the program output are described. An overview of the statistics in Integrity[TM] is provided, and the application is illustrated with a small simulation study.…
Descriptors: Simulation, Integrity, Statistics, Computer Assisted Testing
Chang, Shao-Hua; Lin, Pei-Chun; Lin, Zih-Chuan – Educational Technology & Society, 2007
This study investigates differences in the partial scoring performance of examinees in elimination testing and conventional dichotomous scoring of multiple-choice tests implemented on a computer-based system. Elimination testing that uses the same set of multiple-choice items rewards examinees with partial knowledge over those who are simply…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Scoring, Item Analysis
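A minimal sketch of the kind of partial-credit rule that elimination testing implies, assuming a common scheme of one point per correctly eliminated distractor and a penalty for eliminating the keyed answer; the function names, the four-option item, and the penalty value are illustrative and not taken from the study:

```python
# Hypothetical comparison of dichotomous vs. elimination-testing scoring for one
# multiple-choice item. Assumes one point per correctly eliminated distractor
# and a penalty (here -2) if the keyed answer is eliminated.

def dichotomous_score(chosen: str, key: str) -> int:
    """Conventional right/wrong scoring: 1 if the chosen option is the key."""
    return 1 if chosen == key else 0

def elimination_score(eliminated: set[str], options: set[str], key: str,
                      penalty: int = -2) -> int:
    """Partial-credit scoring: the examinee crosses out options believed wrong."""
    if key in eliminated:
        return penalty                       # misinformation is penalized
    return len(eliminated & (options - {key}))  # credit per eliminated distractor

# Example: four options A-D, keyed answer "C".
options = {"A", "B", "C", "D"}
print(dichotomous_score("C", "C"))                       # 1
print(elimination_score({"A", "B"}, options, "C"))       # 2 (partial knowledge)
print(elimination_score({"A", "B", "D"}, options, "C"))  # 3 (full knowledge)
print(elimination_score({"C"}, options, "C"))            # -2 (keyed answer eliminated)
```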
Jamgochian, Elisa; Park, Bitnara Jasmine; Nese, Joseph F. T.; Lai, Cheng-Fei; Saez, Leilani; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2010
In this technical report, we provide reliability and validity evidence for the easyCBM[R] Reading measures for grade 2 (word and passage reading fluency and multiple choice reading comprehension). Evidence for reliability includes internal consistency and item invariance. Evidence for validity includes concurrent, predictive, and construct…
Descriptors: Grade 2, Reading Comprehension, Testing Programs, Reading Fluency
Draper, Stephen W. – British Journal of Educational Technology, 2009
One technology for education whose adoption is currently expanding rapidly in UK higher education is that of electronic voting systems (EVS). As with all educational technology, whether learning benefits are achieved depends not on the technology but on whether an improved teaching method is introduced with it. EVS inherently relies on the…
Descriptors: Educational Technology, Teaching Methods, Higher Education, Foreign Countries
Bridge, Pete; Appleyard, Rob; Wilson, Rob – Online Submission, 2007
This paper reports undergraduate student feedback contrasting conventional "Long-answer" examinations with automated multiple-choice question (MCQ) assessment. Feedback was gathered after students had undertaken formative MCQ assessments as a revision aid. Feedback was generally supportive of MCQ summative tests, with 74% expressing a…
Descriptors: Undergraduate Students, Summative Evaluation, Multiple Choice Tests, Feedback (Response)
Park, Jooyong; Choi, Byung-Chul – British Journal of Educational Technology, 2008
A new computerised testing system was used at home to promote learning and also to save classroom instruction time. The testing system combined the features of short-answer and multiple-choice formats. The questions of the multiple-choice problems were presented without the options so that students had to generate answers for themselves; they…
Descriptors: Experimental Groups, Control Groups, Computer Assisted Testing, Instructional Effectiveness
Nicol, David – Journal of Further and Higher Education, 2007
Over the last decade, larger student numbers, reduced resources and increasing use of new technologies have led to the increased use of multiple-choice questions (MCQs) as a method of assessment in higher education courses. This paper identifies some limitations associated with MCQs from a pedagogical standpoint. It then provides an assessment…
Descriptors: Multiple Choice Tests, Student Evaluation, Higher Education, Formative Evaluation
Wieling, M. B.; Hofman, W. H. A. – Computers & Education, 2010
To what extent does a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures, and online quizzes with appropriate feedback have an additional positive impact on student performance compared to the traditional face-to-face course approach? In a between-subjects…
Descriptors: Feedback (Response), Grade Point Average, Predictor Variables, Lecture Method
Alexander, Cara J.; Crescini, Weronika M.; Juskewitch, Justin E.; Lachman, Nirusha; Pawlina, Wojciech – Anatomical Sciences Education, 2009
The goals of our study were to determine the predictive value and usability of an audience response system (ARS) as a knowledge assessment tool in an undergraduate medical curriculum. Over a three year period (2006-2008), data were collected from first year didactic blocks in Genetics/Histology and Anatomy/Radiology (n = 42-50 per class). During…
Descriptors: Feedback (Response), Medical Education, Audience Response, Genetics
Buchan, Janet F.; Swann, Michael – Australasian Journal of Educational Technology, 2007
The in-house development of an online assessment tool, OASIS, has provided a unique opportunity to research the use of online assessment in teaching and learning across the university. The developing relationship between IT staff, educational designers and academics serves as a model for integrated and evolving management systems which demonstrate…
Descriptors: Computer Assisted Testing, Models, Universities, Foreign Countries
Ko, C. C.; Cheng, C. D. – Computers & Education, 2008
Electronic examination systems, which include Internet-based systems, require extremely complicated installation, configuration and maintenance of software as well as hardware. In this paper, we present the design and development of a flexible, easy-to-use and secure examination system (e-Test), in which any commonly used computer can be used as a…
Descriptors: Computer Assisted Testing, Computers, Program Effectiveness, Examiners
Johannesen, Monica; Habib, Laurence – Technology, Pedagogy and Education, 2010
This article uses the notion of professional identity within the framework of actor network theory to understand didactic practices within three faculties in an institution of higher education. The study is based on a series of interviews with lecturers in each faculty and diaries of their didactic practices. The article focuses on the use of a…
Descriptors: Diaries, Nursing Education, Engineering Education, Teacher Education
Anderson, Richard Ivan – Journal of Computer-Based Instruction, 1982
Describes confidence testing methods (confidence weighting, probabilistic marking, multiple alternative selection) as alternatives to computer-based multiple-choice tests and explains potential benefits (increased reliability, improved examinee evaluation of alternatives, extended diagnostic information and remediation prescriptions, happier…
Descriptors: Computer Assisted Testing, Confidence Testing, Multiple Choice Tests, Probability
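As a rough illustration of one of the methods Anderson names, here is a hedged sketch of probabilistic marking under a logarithmic proper scoring rule; the choice of rule, the probability floor, and all names are assumptions for illustration rather than details from the article:

```python
# Hypothetical probabilistic-marking scorer: the examinee spreads probability
# over the options, and the score is the log of the probability assigned to the
# keyed answer. A proper scoring rule rewards reporting honest confidence.

import math

def probabilistic_mark(probabilities: dict[str, float], key: str,
                       floor: float = 0.01) -> float:
    """Score one item; a small floor keeps the score finite at probability 0."""
    total = sum(probabilities.values())
    if not math.isclose(total, 1.0, rel_tol=1e-6):
        raise ValueError("probabilities must sum to 1")
    return math.log(max(probabilities.get(key, 0.0), floor))

# A confident, correct examinee scores near 0; a hedging one scores lower.
print(probabilistic_mark({"A": 0.05, "B": 0.05, "C": 0.85, "D": 0.05}, "C"))  # ~ -0.16
print(probabilistic_mark({"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}, "C"))  # ~ -1.39
```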
David, Carl W. – College Board Review, 1985
A scheme of testing that will clearly discern whether a student has learned the material that has been set out as the goal is discussed. The fear of computer-assisted testing is seen as so widespread that perhaps the idea is too far ahead of our time. (MLW)
Descriptors: Computer Assisted Testing, Feedback, Higher Education, Learning Processes