Showing 1 to 15 of 24 results
Peer reviewed
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Meagan Karvonen; Russell Swinburne Romine; Amy K. Clark – Practical Assessment, Research & Evaluation, 2024
This paper describes methods and findings from student cognitive labs, teacher cognitive labs, and test administration observations as evidence evaluated in a validity argument for a computer-based alternate assessment for students with significant cognitive disabilities. Validity of score interpretations and uses for alternate assessments based…
Descriptors: Students with Disabilities, Intellectual Disability, Severe Disabilities, Student Evaluation
Peer reviewed
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open and close-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize close-ended activities (i.e. Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Peer reviewed
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice-multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Peer reviewed
Teneqexhi, Romeo; Kuneshka, Loreta; Naço, Adrian – International Association for Development of the Information Society, 2018
Organizing exams or competitions with multiple-choice questions and assessing them by technology is today common practice in many educational institutions around the world. As a rule, these exams or tests are taken by marking answers on a so-called answer-sheet form. In this form, each student or participant in the exam is obliged to…
Descriptors: Foreign Countries, Competition, Multiple Choice Tests, Computer Assisted Testing
Peer reviewed
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Teneqexhi, Romeo; Qirko, Margarita; Sharko, Genci; Vrapi, Fatmir; Kuneshka, Loreta – International Association for Development of the Information Society, 2017
Exam assessment is one of the most tedious tasks for university teachers all over the world. Multiple-choice tests make exam assessment a little easier, but the teacher cannot prepare more than 3-4 variants; in this case, the possibility of students cheating from one another becomes a risk to the "objective assessment outcome." On…
Descriptors: Testing, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Peer reviewed
Hannon, Brenda – Journal of Education and Training Studies, 2013
Recent studies show that a new strategy called differential-associative processing is effective for learning related concepts; however, our knowledge about differential-associative processing is still limited. Therefore the goals of the present study are to assess the duration of knowledge that is acquired from using differential-associative…
Descriptors: Learning Strategies, Comparative Analysis, Cognitive Processes, Associative Learning
Verbic, Srdjan; Tomic, Boris; Kartal, Vesna – Online Submission, 2010
On-line trial testing for fourth-grade students was an exploratory study carried out as part of the project "Developing annual test of students' achievement in Nature & Society," run by the Institute for Education Quality and Evaluation. The main aims of the study were to explore possibilities for on-line testing at the national level in…
Descriptors: Foreign Countries, Item Response Theory, High School Students, Computer Assisted Testing
Lin, Min-Jin; Guo, Chorng-Jee; Hsu, Chia-Er – Online Submission, 2011
This study designed and developed a CP-MCT (content-rich, photo-based multiple-choice online test) to assess whether college students can apply basic light concepts to interpret daily light phenomena. One hundred college students volunteered to take the CP-MCT, and the results were statistically analyzed by applying t-tests or ANOVA (Analysis of…
Descriptors: College Students, Testing, Multiple Choice Tests, Evaluation Methods
Peer reviewed
Tucker, Bill – Educational Leadership, 2009
New technology-enabled assessments offer the potential to understand more than just whether a student answered a test question right or wrong. Using multiple forms of media that enable both visual and graphical representations, these assessments present complex, multistep problems for students to solve and collect detailed information about an…
Descriptors: Research and Development, Problem Solving, Student Characteristics, Information Technology
Peer reviewed
Milligan, W. Lloyd – Educational and Psychological Measurement, 1978
A computer/tape-recorder interface was designed that permits automatic oral administration of "true-false" or "multiple-choice" type tests. This paper describes the hardware and the control-program software developed to implement the method on a DEC PDP-11 computer. (Author/JKS)
Descriptors: Audiotape Recordings, Auditory Stimuli, Computer Assisted Testing, Computer Programs
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests