Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 16 |
| Since 2022 (last 5 years) | 64 |
| Since 2017 (last 10 years) | 155 |
| Since 2007 (last 20 years) | 250 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 362 |
| Multiple Choice Tests | 362 |
| Foreign Countries | 109 |
| Test Items | 109 |
| Test Construction | 83 |
| Student Evaluation | 68 |
| Higher Education | 65 |
| Test Format | 64 |
| College Students | 57 |
| Scores | 54 |
| Comparative Analysis | 45 |
Author
| Author | Records |
| --- | --- |
| Anderson, Paul S. | 6 |
| Clariana, Roy B. | 4 |
| Wise, Steven L. | 4 |
| Alonzo, Julie | 3 |
| Anderson, Daniel | 3 |
| Bridgeman, Brent | 3 |
| Davison, Mark L. | 3 |
| Kosh, Audra E. | 3 |
| Nese, Joseph F. T. | 3 |
| Park, Jooyong | 3 |
| Seipel, Ben | 3 |
Location
| Location | Records |
| --- | --- |
| United Kingdom | 14 |
| Australia | 9 |
| Canada | 9 |
| Turkey | 9 |
| Germany | 5 |
| Spain | 4 |
| Taiwan | 4 |
| Texas | 4 |
| Arizona | 3 |
| Europe | 3 |
| Indonesia | 3 |
Laws, Policies, & Programs
| Law, Policy, or Program | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 2 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Does not meet standards | 1 |
Kingston, Neal M. – Applied Measurement in Education, 2009
There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study), the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Printed Materials, Effect Size
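The abstract does not show the computation, but mode-comparability studies of this kind are typically summarized with a standardized mean difference; a minimal sketch, assuming a two-group (computer vs. paper) comparison with a pooled standard deviation:

$$
d = \frac{\bar{X}_{\mathrm{computer}} - \bar{X}_{\mathrm{paper}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
$$

An effect size near zero across studies is the usual operational criterion for score comparability between administration modes.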
Wilson, Kathi; Boyd, Cleo; Chen, Liwen; Jamal, Sarosh – Computers & Education, 2011
The main objective of this paper is to examine the effectiveness of computer-assisted formative assessment in a large, first-year undergraduate geography course. In particular, the paper evaluates the impact of computer-assisted multiple-choice practice tests on student performance in the course as well as student opinions of this type of…
Descriptors: Feedback (Response), Student Evaluation, Student Attitudes, Formative Evaluation
Brantmeier, Cindy; Callender, Aimee; McDaniel, Mark – Reading in a Foreign Language, 2011
With 97 advanced second language (L2) learners of Spanish, the present study utilized domain specific texts to examine the effects of embedded "what" questions and elaborative "why" questions on reading comprehension. Participants read two different vignettes, either with or without the adjuncts, from a social psychology textbook, and then…
Descriptors: Reading Comprehension, Familiarity, Social Psychology, Second Language Learning
Rodrigues, Susan; Taylor, Neil; Cameron, Margaret; Syme-Smith, Lorraine; Fortuna, Colette – Science Education International, 2010
This paper reports on data collected via an audience response system, where a convenience sample of 300 adults aged 17-50 pressed a button to register their answers for twenty multiple choice questions. The responses were then discussed with the respondents at the time. The original dataset includes physics, biology and chemistry questions. The…
Descriptors: Audience Response, International Studies, Familiarity, Chemistry
Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee – Educational Technology & Society, 2011
The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests; it is heavily criticized for encouraging students to guess and for failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Computers, Scoring
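For context on the guessing criticism: the classical alternative to NR scoring is formula scoring (a correction for guessing); this is background knowledge, not necessarily the scheme the paper investigates. With $R$ right answers, $W$ wrong answers, and $k$ options per item, the corrected score is

$$
S = R - \frac{W}{k - 1}
$$

so a student who guesses blindly on every unknown item expects no net gain from guessing.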
Howard, Keith E.; Anderson, Kenneth A. – Middle Grades Research Journal, 2010
Stereotype threat research has demonstrated how presenting situational cues in a testing environment, such as raising the salience of negative stereotypes, can adversely affect test performance (Perry, Steele, & Hilliard, 2003; Steele & Aronson, 1995) and expectancy (Cadinu, Maass, Frigerio, Impagliazzo, & Latinotti, 2003; Stangor,…
Descriptors: Cues, Stereotypes, Standardized Tests, Foreign Countries
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examinations based on constructed-response questions (CRQs). Although MCQs have an advantage in objectivity of the grading process and speed in producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Pechenizkiy, Mykola; Trcka, Nikola; Vasilyeva, Ekaterina; van der Aalst, Wil; De Bra, Paul – International Working Group on Educational Data Mining, 2009
Traditional data mining techniques have been extensively applied to find interesting patterns and to build descriptive and predictive models from large volumes of data accumulated through the use of different information systems. The results of data mining can be used for getting a better understanding of the underlying educational processes, for…
Descriptors: Data Analysis, Methods, Computer Software, Computer Assisted Testing
Schultz, Madeleine – Journal of Learning Design, 2011
This paper reports on the development of a tool that generates randomised, non-multiple choice assessment within the BlackBoard Learning Management System interface. An accepted weakness of multiple-choice assessment is that it cannot elicit learning outcomes from upper levels of Biggs' SOLO taxonomy. However, written assessment items require…
Descriptors: Foreign Countries, Feedback (Response), Student Evaluation, Large Group Instruction
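The tool itself is not shown in the abstract; the following is a minimal sketch of the general idea of randomised, auto-gradable, non-multiple-choice items. The function name, question template, and seeding scheme are illustrative assumptions, not Schultz's implementation:

```python
import random

def make_item(seed):
    """Hypothetical sketch: generate a per-student randomised numeric item.
    Each student sees the same template with different numbers, so the
    answer is short-response (not multiple choice) yet still auto-gradable."""
    rng = random.Random(seed)           # seed per student, e.g. from an ID
    mass = rng.randint(10, 90)          # grams of NaCl, varies by student
    moles = mass / 58.44                # molar mass of NaCl is 58.44 g/mol
    question = f"How many moles are in {mass} g of NaCl? (3 d.p.)"
    return question, round(moles, 3)    # stem plus the keyed numeric answer

question, key = make_item(seed=20110042)  # illustrative student number
print(question, "->", key)
```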
Wang, Tzu-Hua – Computers & Education, 2011
This research refers to the self-regulated learning strategies proposed by Pintrich (1999) in developing a multiple-choice Web-based assessment system, the Peer-Driven Assessment Module of the Web-based Assessment and Test Analysis system (PDA-WATA). The major purpose of PDA-WATA is to facilitate learner use of self-regulatory learning behaviors…
Descriptors: Learning Strategies, Student Motivation, Internet, Junior High School Students
Lingard, Jennifer; Minasian-Batmanian, Laura; Vella, Gilbert; Cathers, Ian; Gonzalez, Carlos – Assessment & Evaluation in Higher Education, 2009
Effective criterion-referenced assessment requires grade descriptors to clarify to students what skills are required to gain higher grades. But do students and staff actually share the same perception of the grading system, and do students whose perceptions closely align with staff's perform better than those whose perceptions are less accurately aligned? Since…
Descriptors: Feedback (Response), Prior Learning, Physics, Difficulty Level
Abad, Francisco J.; Olea, Julio; Ponsoda, Vicente – Applied Psychological Measurement, 2009
This article deals with some of the problems that have hindered the application of Samejima's and Thissen and Steinberg's multiple-choice models: (a) parameter estimation difficulties owing to the large number of parameters involved, (b) parameter identifiability problems in the Thissen and Steinberg model, and (c) their treatment of omitted…
Descriptors: Multiple Choice Tests, Models, Computation, Simulation
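For readers unfamiliar with these models: both build on Bock's nominal-categories kernel, sketched below. The identifiability problems the abstract mentions arise because the category parameters are determined only up to constraints (commonly $\sum_j a_j = \sum_j c_j = 0$):

$$
P(u = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{j=1}^{m} \exp(a_j \theta + c_j)}
$$

Here $u$ is the chosen option among $m$ alternatives, $\theta$ is the latent ability, and each option $k$ carries a slope $a_k$ and intercept $c_k$; the Thissen and Steinberg extension adds a latent "don't know" category to model guessing.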
Hancock, Terence M. – Australasian Journal of Educational Technology, 2010
The audience response system, a technology that allows immediate compilation and display of a group's multiple-choice input, has been shown to be effective in the classroom both in engaging students and in providing real-time, formative assessment of comprehension. This paper looks at its further potential as an alternative for summative assessment,…
Descriptors: Educational Technology, Computer Assisted Testing, Computer Software, Computer System Design
Lau, Sie Hoe; Lau, Ngee Kiong; Hong, Kian Sam; Usop, Hasbee – Online Submission, 2009
Assessment is central to any educational process. The Number Right (NR) scoring method is a conventional scoring method for multiple-choice items, where students pick one option as the correct answer. One point is awarded for the correct response and zero for any other response. However, it has been heavily criticized for encouraging guessing and for its failure…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Adaptive Testing, Scoring
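A minimal sketch of the NR rule exactly as the abstract states it (one point for the keyed option, zero otherwise); the function and argument names are illustrative, not from the paper:

```python
def number_right(responses, key):
    """Number Right (NR) scoring: award 1 point when the chosen option
    matches the answer key, 0 for any other response (including omits)."""
    return sum(1 for chosen, keyed in zip(responses, key) if chosen == keyed)

# Example: four items, the student answers items 1, 2, and 4 correctly -> 3
print(number_right(["B", "C", "A", "D"], ["B", "C", "D", "D"]))
```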