Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 16
Since 2006 (last 20 years): 23
Descriptor
Computer Assisted Testing: 28
Item Response Theory: 28
Multiple Choice Tests: 28
Test Items: 13
Test Format: 12
Foreign Countries: 7
Test Construction: 7
Scores: 6
Adaptive Testing: 5
Difficulty Level: 5
Science Tests: 5
Author
Biancarosa, Gina: 2
Carlson, Sarah E.: 2
Davison, Mark L.: 2
Liu, Bowen: 2
Seipel, Ben: 2
Abad, Francisco J.: 1
Abidin, Aang Zainul: 1
Adams, Raymond J.: 1
Ahmadi, Alireza: 1
Boyd, Aimee M.: 1
Bramley, Tom: 1
Publication Type
Journal Articles: 21
Reports - Research: 21
Reports - Evaluative: 5
Numerical/Quantitative Data: 3
Speeches/Meeting Papers: 2
Tests/Questionnaires: 2
Dissertations/Theses -…: 1
Guides - Non-Classroom: 1
Education Level
Higher Education: 6
Elementary Education: 5
Secondary Education: 5
Postsecondary Education: 4
High Schools: 3
Middle Schools: 2
Elementary Secondary Education: 1
Grade 10: 1
Grade 11: 1
Grade 4: 1
Grade 5: 1
Assessments and Surveys
ACT Assessment: 2
Program for International…: 1
Test of English as a Foreign…: 1
United States Medical…: 1
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found useful for reducing response biases in personality assessments. However, conventional scoring methods for MFC items result in ipsative data, hindering wider application of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
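The ipsative-data problem this entry refers to is easy to see in miniature: under conventional (classical) scoring, each forced-choice pick adds one point to exactly one trait, so every respondent's trait scores sum to the same constant. A minimal Python sketch, with hypothetical blocks and picks:

```python
# Minimal sketch (hypothetical blocks and picks): why conventional
# scoring of multidimensional forced-choice (MFC) blocks yields
# ipsative data. Each "most like me" pick adds one point to the
# chosen statement's trait.

BLOCKS = [
    ("openness", "conscientiousness", "extraversion"),
    ("openness", "agreeableness", "neuroticism"),
    ("conscientiousness", "agreeableness", "extraversion"),
]

def classical_score(picks):
    """picks[i] is the index of the statement chosen in block i."""
    totals = {}
    for block, choice in zip(BLOCKS, picks):
        trait = block[choice]
        totals[trait] = totals.get(trait, 0) + 1
    return totals

# Two respondents with different choice patterns:
for picks in [(0, 1, 2), (2, 0, 0)]:
    scores = classical_score(picks)
    # Every respondent's totals sum to the number of blocks (3 here),
    # so only relative trait standing within a person is recoverable.
    print(scores, "sum =", sum(scores.values()))
```

Because the totals always sum to the number of blocks, only within-person trait orderings are recoverable, which is the limitation the IRT models mentioned in the abstract aim to overcome.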
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
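To make the reduced-exposure mechanism concrete, here is a sketch of a DOMC administration loop under one commonly described stopping rule (the item ends as soon as the examinee endorses an option or rejects the key); the rule and responses are assumptions for illustration, not necessarily those used in this study:

```python
import random

# Sketch of a discrete-option multiple-choice (DOMC) administration.
# Options appear one at a time in random order and the examinee answers
# yes/no to each. Under the stopping rule assumed here (one common
# variant), the item ends as soon as the examinee endorses any option
# or rejects the key, so later options are never exposed.

def administer_domc(options, key, respond):
    """options: option ids; key: correct id;
    respond(opt) -> True if the examinee claims opt is correct."""
    order = random.sample(options, k=len(options))
    exposed = []
    for opt in order:
        exposed.append(opt)
        if respond(opt):
            return opt == key, exposed   # endorsed an option: score it
        if opt == key:
            return False, exposed        # rejected the key: incorrect
    return False, exposed                # never endorsed anything

# Hypothetical examinee who endorses only option "B":
correct, exposed = administer_domc(["A", "B", "C", "D"], key="B",
                                   respond=lambda o: o == "B")
print(correct, "options exposed:", exposed)
```

Whenever the loop terminates early, the remaining options stay hidden, which is the content-exposure property whose security implications the study tests.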
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often improved overall test scores. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Moon, Jung Aa; Sinharay, Sandip; Keehner, Madeleine; Katz, Irvin R. – International Journal of Testing, 2020
The current study examined the relationship between test-taker cognition and psychometric item properties in multiple-selection multiple-choice and grid items. In a study with content-equivalent mathematics items in alternative item formats, adult participants' tendency to respond to an item was affected by the presence of a grid and variations of…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Test Wiseness, Psychometrics
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention because they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
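As an illustration of the template-based approach such systems typically take, the following toy sketch fills a fixed stem from a small word pool to produce parallel multiple-choice items; the template and pool are invented, not the authors' system:

```python
import random

# Toy sketch of template-based automatic item generation: one stem
# template plus a word pool yields many parallel multiple-choice items.
# The template and pool are invented examples, not the authors' system.

STEM = "Choose the word closest in meaning to '{target}'."
POOL = {
    "rapid":   ("fast", ["slow", "loud", "heavy"]),
    "fragile": ("delicate", ["sturdy", "bright", "smooth"]),
}

def generate_item(target):
    key, distractors = POOL[target]
    options = distractors + [key]
    random.shuffle(options)
    return {"stem": STEM.format(target=target),
            "options": options,
            "answer": options.index(key)}

for word in POOL:
    item = generate_item(word)
    print(item["stem"], item["options"], "key:", item["answer"])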
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Kuo, Bor-Chen; Liao, Chen-Huei; Pai, Kai-Chih; Shih, Shu-Chuan; Li, Cheng-Hsuan; Mok, Magdalena Mo Ching – Educational Psychology, 2020
The current study explores students' collaboration and problem solving (CPS) abilities using a human-to-agent (H-A) computer-based collaborative problem solving assessment. Five CPS assessment units with 76 conversation-based items were constructed using the PISA 2015 CPS framework. In the experiment, 53,855 ninth and tenth graders in Taiwan were…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Item Response Theory
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
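A minimal example of the kind of comparison a mode study rests on is a standardized mean difference between paper and online score distributions; the sketch below uses fabricated scores and a generic statistic, not ACT's actual analysis:

```python
import statistics

# Generic sketch of a basic mode-effect check: the standardized mean
# difference between paper and online score distributions. The scores
# are fabricated for illustration; this is not ACT's actual analysis.

def smd(paper, online):
    """Cohen's d with a pooled standard deviation."""
    n1, n2 = len(paper), len(online)
    v1, v2 = statistics.variance(paper), statistics.variance(online)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(online) - statistics.mean(paper)) / pooled_sd

paper_scores = [20, 22, 19, 24, 21, 23, 18, 22]
online_scores = [21, 23, 20, 25, 22, 24, 19, 23]
print(f"standardized mode effect: {smd(paper_scores, online_scores):.2f}")
```

A mode effect near zero supports reporting paper and online scores on the same scale; a sizable effect would motivate the statistical adjustments the study evaluates.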
Abidin, Aang Zainul; Istiyono, Edi; Fadilah, Nunung; Dwandaru, Wipsar Sunu Brams – International Journal of Evaluation and Research in Education, 2019
Classical assessments that are not comprehensive and that do not account for students' initial abilities yield measurement results far from students' actual abilities. This study was conducted to produce a computerized adaptive test for physics critical thinking skills (CAT-PhysCriTS) that met the feasibility criteria. The test was presented for the physics…
Descriptors: Foreign Countries, High School Students, Grade 11, Physics
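The adaptive engine at the heart of any such CAT follows the same basic loop: estimate ability, administer the unused item most informative at that estimate, and repeat. A self-contained sketch under the Rasch model, with hypothetical item difficulties and responses (real systems add content balancing and exposure control):

```python
import math

# Sketch of the core loop of a computerized adaptive test (CAT) under
# the Rasch model: administer the unused item most informative at the
# current ability estimate, then update the estimate. Difficulties and
# the response pattern are hypothetical.

ITEM_BANK = [-1.5, -0.5, 0.0, 0.6, 1.2, 2.0]        # difficulties b_j

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_info(theta, b):
    p = p_correct(theta, b)
    return p * (1 - p)          # Rasch information, peaks at theta == b

def pick_item(theta, used):
    candidates = [j for j in range(len(ITEM_BANK)) if j not in used]
    return max(candidates, key=lambda j: item_info(theta, ITEM_BANK[j]))

def update_theta(theta, responses):
    # One Newton-Raphson step on the Rasch log-likelihood.
    grad = sum(u - p_correct(theta, b) for b, u in responses)
    info = sum(item_info(theta, b) for b, _ in responses)
    return theta + grad / info if info > 0 else theta

theta, used, responses = 0.0, set(), []
simulated = {0: 1, 1: 1, 2: 1, 3: 0, 4: 0, 5: 0}    # hypothetical answers
for _ in range(4):                                   # fixed test length
    j = pick_item(theta, used)
    used.add(j)
    responses.append((ITEM_BANK[j], simulated[j]))
    theta = update_theta(theta, responses)
    print(f"item {j} (b={ITEM_BANK[j]:+.1f}) -> theta = {theta:.2f}")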
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Grantee Submission, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Lee, Senyung; Shin, Sun-Young – Language Assessment Quarterly, 2021
Multiple test tasks are available for assessing L2 collocation knowledge. However, few studies have investigated the characteristics of a variety of recognition and recall tasks of collocation simultaneously, and most research on L2 collocations has focused on verb-noun and adjective-noun collocations. This study investigates (1) the relative…
Descriptors: Phrase Structure, Second Language Learning, Language Tests, Recall (Psychology)
Crabtree, Ashleigh R. – ProQuest LLC, 2016
The purpose of this research is to provide information about the psychometric properties of technology-enhanced (TE) items and the effects these items have on the content validity of an assessment. Specifically, this research investigated the impact that the inclusion of TE items has on the construct of a mathematics test, the technical properties…
Descriptors: Psychometrics, Computer Assisted Testing, Test Items, Test Format
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven – Applied Measurement in Education, 2013
This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
Descriptors: Computer Assisted Testing, Item Response Theory, Test Construction, Models
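For readers unfamiliar with exposure control, the general idea can be shown with the simple "randomesque" variant: select at random among the k most informative candidates instead of always taking the single best. This is a generic sketch only, not the specific within-0.10-logits procedures this study compared:

```python
import random

# Generic illustration of exposure control in CAT: rather than always
# administering the single most informative testlet (overexposing a
# few), pick at random among the top-k candidates ("randomesque"
# selection). Shows the general idea only, not the specific procedures
# compared in the study.

def randomesque_select(info_by_testlet, administered, k=3):
    """info_by_testlet: {testlet_id: information at current theta}."""
    available = {t: i for t, i in info_by_testlet.items()
                 if t not in administered}
    top_k = sorted(available, key=available.get, reverse=True)[:k]
    return random.choice(top_k)

# Hypothetical information values at one ability estimate:
info = {"T1": 0.92, "T2": 0.88, "T3": 0.85, "T4": 0.41, "T5": 0.30}
counts = {t: 0 for t in info}
for _ in range(1000):                     # simulate 1,000 examinees
    counts[randomesque_select(info, administered=set())] += 1
print(counts)  # exposure now spreads across T1-T3 instead of only T1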