Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 4
Since 2016 (last 10 years): 9
Since 2006 (last 20 years): 19
Descriptor
Computer Assisted Testing: 35
Multiple Choice Tests: 35
Scoring: 35
Test Items: 16
Test Format: 13
Foreign Countries: 9
College Students: 8
Higher Education: 8
Test Reliability: 8
Comparative Analysis: 6
Computer Software: 6
Publication Type
Journal Articles: 22
Reports - Research: 19
Reports - Evaluative: 8
Reports - Descriptive: 5
Speeches/Meeting Papers: 5
Guides - Non-Classroom: 1
Numerical/Quantitative Data: 1
Opinion Papers: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 7
Postsecondary Education: 7
Secondary Education: 5
Elementary Education: 1
Elementary Secondary Education: 1
Grade 8: 1
High Schools: 1
Junior High Schools: 1
Middle Schools: 1
Location
Malaysia: 2
Canada: 1
Czech Republic: 1
Iran: 1
Maryland: 1
Turkey: 1
United Kingdom: 1
Assessments and Surveys
National Assessment of Educational Progress: 2
Advanced Placement…: 1
Preliminary Scholastic…: 1
SAT (College Admission Test): 1
Test of English as a Foreign Language: 1
Kunal Sareen – Innovations in Education and Teaching International, 2024
This study examines the proficiency of ChatGPT, an AI language model, in answering questions on the Situational Judgement Test (SJT), a widely used assessment tool for evaluating the fundamental competencies of medical graduates in the UK. A total of 252 SJT questions from the "Oxford Assess and Progress: Situational Judgement Test"…
Descriptors: Ethics, Decision Making, Artificial Intelligence, Computer Software
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
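As a concrete illustration of what process data affords, the sketch below reconstructs per-item response times from timestamped interaction events. The log format here is hypothetical, not NAEP's actual schema.

```python
from collections import defaultdict

# Hypothetical event log: (student_id, item_id, event, timestamp_seconds).
# NAEP's real process-data schema is far richer; this is only an illustration.
events = [
    ("s1", "item1", "enter", 0.0), ("s1", "item1", "exit", 42.5),
    ("s1", "item2", "enter", 42.5), ("s1", "item2", "exit", 48.1),
]

def response_times(events):
    """Sum the time spent in each item across all visits."""
    times = defaultdict(float)
    entered = {}
    for student, item, event, t in events:
        key = (student, item)
        if event == "enter":
            entered[key] = t
        elif event == "exit" and key in entered:
            times[key] += t - entered.pop(key)
    return dict(times)

print({k: round(v, 1) for k, v in response_times(events).items()})
# {('s1', 'item1'): 42.5, ('s1', 'item2'): 5.6}
```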
Çinar, Ayse; Ince, Elif; Gezer, Murat; Yilmaz, Özgür – Education and Information Technologies, 2020
Worldwide, open-ended questions that require short answers are used in many science assessments, such as the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). However, multiple-choice questions are used for many national-level exams in Turkey, especially high school…
Descriptors: Foreign Countries, Computer Assisted Testing, Artificial Intelligence, Grading
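The abstract does not specify the authors' scoring model, so the following is only a naive baseline for automated short-answer grading: compare the student's answer against reference answers with a string-similarity ratio.

```python
from difflib import SequenceMatcher

def score_short_answer(student_answer, reference_answers, threshold=0.8):
    """Return (1, similarity) if the answer is close enough to any
    reference answer, else (0, similarity). A deliberately naive
    baseline: the ML-based graders this literature studies use far
    richer text features than raw string similarity."""
    normalized = student_answer.lower().strip()
    best = max(
        SequenceMatcher(None, normalized, ref.lower().strip()).ratio()
        for ref in reference_answers
    )
    return (1 if best >= threshold else 0), best

print(score_short_answer("Photosynthesis produces oxygen",
                         ["photosynthesis produces oxygen",
                          "oxygen is produced by photosynthesis"]))
# (1, 1.0)
```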
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
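Part of MOCCA's diagnostic value comes from which kind of wrong answer a reader tends to choose. The distractor categories below are invented for illustration (the published taxonomy is not reproduced here); the point is the pattern-counting idea.

```python
from collections import Counter

# Hypothetical distractor taxonomy: each wrong option is tagged with the
# comprehension behavior it suggests. Illustrative labels only.
ANSWER_KEY = {
    "item1": {"correct": "a", "b": "paraphrase", "c": "lateral"},
    "item2": {"correct": "c", "a": "paraphrase", "b": "lateral"},
    "item3": {"correct": "b", "a": "lateral", "c": "paraphrase"},
}

def diagnose(responses):
    """Tally the categories of chosen distractors to suggest a dominant
    (mis)comprehension pattern for one student."""
    counts = Counter()
    for item, choice in responses.items():
        key = ANSWER_KEY[item]
        if choice != key["correct"]:
            counts[key[choice]] += 1
    return counts

print(diagnose({"item1": "b", "item2": "a", "item3": "b"}))
# Counter({'paraphrase': 2}) -> mostly paraphrase-type errors
```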
Mehri Kamrood, Ali; Davoudi, Mohammad; Ghaniabadi, Saeed; Amirian, Seyyed Mohammad Reza – Computer Assisted Language Learning, 2021
Dynamic Assessment (DA) has been proposed as a workable diagnostic tool in second- and foreign-language contexts. Compared to traditional non-dynamic testing, DA gives a more comprehensive account of a learner's abilities by addressing both fully internalized abilities and abilities that are still in the process of being internalized. However,…
Descriptors: Language Tests, Computer Assisted Testing, Second Language Learning, Second Language Instruction
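Computerized DA commonly operationalizes "abilities in the process of being internalized" through graduated prompts: the more mediation a learner needs on an item, the lower the mediated score. A minimal sketch, with prompt weights assumed for illustration rather than taken from the study's instrument:

```python
def mediated_item_score(hints_used, full_credit=4):
    """Graduated-prompt scoring: full credit for an unassisted answer,
    one point less per hint needed, zero if the item is never solved.
    The weights are assumptions, not the study's actual scheme."""
    if hints_used is None:  # not solved even with all hints
        return 0
    return max(full_credit - hints_used, 1)

# One learner's items: hints needed before a correct answer
# (None = never answered correctly).
performance = [0, 2, 1, None, 3]
print([mediated_item_score(h) for h in performance])   # [4, 2, 3, 0, 1]
print(sum(mediated_item_score(h) for h in performance))  # mediated total: 10
```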
Sieke, Scott A.; McIntosh, Betsy B.; Steele, Matthew M.; Knight, Jennifer K. – CBE - Life Sciences Education, 2019
Understanding student ideas in large-enrollment biology courses can be challenging, because easy-to-administer multiple-choice questions frequently do not fully capture the diversity of student ideas. As part of the Automated Analysis of Constructed Responses (AACR) project, we designed a question prompting students to describe the possible…
Descriptors: Genetics, Scientific Concepts, Biology, Science Instruction
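AACR's scoring models are machine-learned classifiers, and the abstract gives no implementation detail. Purely to show the idea of bucketing constructed responses into idea categories, here is a toy keyword rubric (categories and cue phrases invented):

```python
# Toy rubric mapping idea categories to indicator phrases. Invented for
# illustration; AACR's real models are trained classifiers, not lookups.
RUBRIC = {
    "dominance": ["dominant", "recessive"],
    "segregation": ["segregate", "separate", "meiosis"],
    "probability": ["50%", "half", "chance", "probability"],
}

def detect_ideas(response: str) -> set[str]:
    """Return the rubric categories whose indicator phrases appear."""
    text = response.lower()
    return {idea for idea, cues in RUBRIC.items()
            if any(cue in text for cue in cues)}

print(detect_ideas("Each allele separates during meiosis, so there is a 50% chance."))
# {'segregation', 'probability'}
```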
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, because the amount and order of the content presented vary, the test-taking experience differs from one test taker to the next, introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
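In broad terms, a DOMC item shows the options one at a time and asks for a yes/no judgment on each, ending once the outcome is decided. The sketch below is a simplified single-key variant, not Foster and Miller's exact administration rules; it also makes the fairness concern visible, since examinees can see different numbers and orders of options.

```python
import random

def administer_domc(options, key, respond):
    """Simplified single-key DOMC administration. Options appear one at
    a time in random order; respond(option) returns True (YES) or False
    (NO). The item ends as soon as the score is decided: YES on the key
    scores 1, YES on a distractor scores 0, NO on the key scores 0."""
    for option in random.sample(options, len(options)):
        if respond(option):
            return 1 if option == key else 0
        if option == key:  # said NO to the correct option
            return 0
    return 0  # defensive; NO to the key already ends the item

# Example: a test taker who says YES only to "Paris".
score = administer_domc(["London", "Paris", "Rome"], key="Paris",
                        respond=lambda opt: opt == "Paris")
print(score)  # 1
```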
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
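Work in this vein typically flags a response as a rapid guess when its time falls below an item-specific threshold, then summarizes an examinee with response time effort (RTE): the proportion of items showing solution behavior. A sketch with assumed thresholds (real thresholds are estimated per item, not fixed):

```python
def response_time_effort(times, thresholds):
    """Proportion of items answered with solution behavior, i.e. with a
    response time at or above the item's rapid-guessing threshold."""
    flags = [t >= thresholds[item] for item, t in times.items()]
    return sum(flags) / len(flags)

# Per-item response times (seconds) for one examinee; assumed 5 s thresholds.
times = {"q1": 35.2, "q2": 1.4, "q3": 22.0, "q4": 2.1}
thresholds = {"q1": 5.0, "q2": 5.0, "q3": 5.0, "q4": 5.0}
print(response_time_effort(times, thresholds))  # 0.5: two items look like rapid guesses
```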
Jancarík, Antonín; Kostelecká, Yvona – Electronic Journal of e-Learning, 2015
Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. This paper focuses on one of these tools: matching questions, in which one…
Descriptors: Online Courses, Computer Assisted Testing, Test Items, Scoring Formulas
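One scoring subtlety specific to matching items: under blind guessing, a one-to-one matching of n prompts to n options yields an expected raw score of exactly one correct pair, whatever n is. This is a standard fact about random permutations, not a result from the paper; a quick simulation confirms it:

```python
import random

def expected_matches(n, trials=100_000):
    """Average number of correct pairs when n prompts are matched to
    n options uniformly at random (each option used exactly once)."""
    total = 0
    for _ in range(trials):
        perm = random.sample(range(n), n)
        total += sum(1 for i, p in enumerate(perm) if i == p)
    return total / trials

for n in (3, 5, 10):
    print(n, round(expected_matches(n), 2))  # ~1.0 for every n
```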
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine whether a "linguistic profile" could be formulated that would help identify items likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1,200 students, for whom demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
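The abstract does not name the DIF statistic used. The Mantel-Haenszel procedure is one standard choice, so purely as an illustration, here is its common odds ratio and the ETS delta scale computed from per-stratum 2x2 tables:

```python
import math

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel common odds ratio and ETS delta (MH D-DIF).

    strata: one 2x2 table per ability level, as tuples of
    (ref_right, ref_wrong, focal_right, focal_wrong). Roughly,
    |delta| >= 1.5 is the ETS cutoff for large DIF."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    alpha = num / den
    delta = -2.35 * math.log(alpha)
    return alpha, delta

# Invented counts, stratified by total test score.
strata = [(40, 10, 30, 20), (30, 5, 25, 10), (20, 2, 18, 5)]
alpha, delta = mantel_haenszel_dif(strata)
print(round(alpha, 2), round(delta, 2))  # odds ratio ~2.6, delta ~ -2.2
```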
Zahner, Doris; Steedle, Jeffrey T. – Council for Aid to Education, 2014
The Organisation for Economic Co-operation and Development (OECD) launched the Assessment of Higher Education Learning Outcomes (AHELO) in an effort to measure learning in international postsecondary education. This paper presents a study of scoring equivalence across nine countries for two translated and adapted performance tasks. Results reveal…
Descriptors: International Assessment, Performance Based Assessment, Postsecondary Education, Scoring
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater[TM]) on the speaking section of the TOEFL iBT[TM] were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring
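Validation of this kind ultimately reduces to agreement statistics between machine scores and the human criterion, for example a Pearson correlation. A self-contained sketch with invented scores (the paper's analyses are more involved):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: machine scores vs. a human communicative-competence criterion.
machine = [2.5, 3.0, 3.5, 4.0, 2.0, 3.0]
human = [2.0, 3.5, 3.0, 4.0, 2.5, 3.5]
print(round(pearson_r(machine, human), 3))  # ~0.77
```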
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
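The guessing tendency mentioned above is what classical formula scoring corrects for: with k options per item, the score R - W/(k-1) has an expected gain of zero under blind guessing. A small sketch (neither paper's exact grading scheme):

```python
def formula_score(right, wrong, options_per_item):
    """Classical correction for guessing: S = R - W/(k-1).

    Under blind guessing an examinee is right with probability 1/k and
    wrong with probability (k-1)/k, so the penalty cancels the expected
    lucky gains."""
    return right - wrong / (options_per_item - 1)

# 60 items answered, 4 options each: 30 right, 30 wrong.
print(formula_score(30, 30, 4))
# 20.0 -> consistent with truly knowing 20 items and blindly guessing the other 40
```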
Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee – Educational Technology & Society, 2011
The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests that is heavily criticized for encouraging students to guess and failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Computers, Scoring
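As a contrast to number-right scoring, elimination testing is one well-known way to credit partial knowledge: the examinee crosses out every option believed wrong and earns a point per correctly eliminated distractor, with a penalty for eliminating the key. The exact rules studied in the paper may differ; this is one common variant:

```python
def elimination_score(eliminated, key, options):
    """Elimination-testing partial credit (one common variant):
    +1 per distractor correctly crossed out, -(k-1) if the key is
    crossed out, where k is the number of options."""
    k = len(options)
    if key in eliminated:
        return -(k - 1)
    return len(eliminated)

# 4-option item, key "b": examinee is sure "a" and "d" are wrong.
print(elimination_score({"a", "d"}, key="b", options=["a", "b", "c", "d"]))  # 2

# Number-right scoring would give this partial knowledge no credit unless
# the examinee also happened to pick "b" from the two remaining options.
```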
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method to examination based on constructed-response questions (CRQs). Although MCQs have the advantage of objectivity in the grading process and speed in producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis