Publication Date
| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 3 |
| Since 2022 (last 5 years) | 7 |
| Since 2017 (last 10 years) | 14 |
| Since 2007 (last 20 years) | 28 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Comparative Analysis | 64 |
| Test Format | 64 |
| Test Validity | 48 |
| Test Reliability | 25 |
| Test Items | 24 |
| Computer Assisted Testing | 19 |
| Test Construction | 19 |
| Foreign Countries | 18 |
| Language Tests | 17 |
| English (Second Language) | 13 |
| Multiple Choice Tests | 13 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 17 |
| Postsecondary Education | 12 |
| Elementary Education | 4 |
| Secondary Education | 4 |
| Adult Education | 1 |
| Grade 3 | 1 |
| Grade 5 | 1 |
| Grade 8 | 1 |
| High Schools | 1 |
Location
| Location | Count |
| --- | --- |
| Japan | 2 |
| United Kingdom (England) | 2 |
| Australia | 1 |
| Canada | 1 |
| China | 1 |
| Germany | 1 |
| Iran | 1 |
| Israel | 1 |
| Malawi | 1 |
| Missouri | 1 |
| Netherlands | 1 |
David Bell; Vikki O'Neill; Vivienne Crawford – Practitioner Research in Higher Education, 2023
We compared the influence of open-book, extended-duration versus closed-book, time-limited formats on the reliability and validity of written assessments of pharmacology learning outcomes within our medical and dental courses. Our dental cohort undertake a mid-year test (30 x free-response short answer to a question, SAQ) and an end-of-year paper (4 x SAQ,…
Descriptors: Undergraduate Students, Pharmacology, Pharmaceutical Education, Test Format
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. The report describes the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Martin-Raugh, Michelle P.; Anguiano-Carrasco, Cristina; Jackson, Teresa; Brenneman, Meghan W.; Carney, Lauren; Barnwell, Patrick; Kochert, Jonathan – International Journal of Testing, 2018
Single-response situational judgment tests (SRSJTs) differ from multiple-response SJTs (MRSJTs) in that they present test takers with edited critical incidents and simply ask them to read over the action described and evaluate it according to its effectiveness. Research comparing the reliability and validity of SRSJTs and MRSJTs is thus far…
Descriptors: Test Format, Test Reliability, Test Validity, Predictive Validity
Yangqiuting Li; Chandralekha Singh – Physical Review Physics Education Research, 2025
Research-based multiple-choice questions implemented in class with peer instruction have been shown to be an effective tool for improving students' engagement and learning outcomes. Moreover, multiple-choice questions that are carefully sequenced to build on each other can be particularly helpful for students to develop a systematic understanding…
Descriptors: Physics, Science Instruction, Science Tests, Multiple Choice Tests
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
Rogers, Angela – Mathematics Education Research Group of Australasia, 2021
Test developers are continually exploring the possibilities Computer Based Assessment (CBA) offers the Mathematics domain. This paper describes the trial of the Place Value Assessment Tool (PVAT) and its online equivalent, the PVAT-O. Both tests were administered using a counterbalanced research design to 253 Year 3-6 students across nine classes…
Descriptors: Mathematics Tests, Computer Assisted Testing, Number Concepts, Elementary School Students
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to interact with the test environment more fully than traditional multiple-choice items (MCIs). The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
Ford, Jeremy W.; Conoyer, Sarah J.; Lembke, Erica S.; Smith, R. Alex; Hosp, John L. – Assessment for Effective Intervention, 2018
In the present study, two types of curriculum-based measurement (CBM) tools in science, Vocabulary Matching (VM) and Statement Verification for Science (SV-S), a modified Sentence Verification Technique, were compared. Specifically, this study aimed to determine whether the format of information presented (i.e., SV-S vs. VM) produces differences…
Descriptors: Curriculum Based Assessment, Evaluation Methods, Measurement Techniques, Comparative Analysis
Davison, Christopher B.; Dustova, Gandzhina – Journal of Instructional Pedagogies, 2017
This research study describes the correlations between student performance and examination format in a higher education teaching and research institution. The researchers employed a quantitative, correlational methodology utilizing linear regression analysis. The data was obtained from undergraduate student test scores over a three-year time span.…
Descriptors: Statistical Analysis, Performance Based Assessment, Correlation, Higher Education
Quaid, Ethan Douglas – International Journal of Computer-Assisted Language Learning and Teaching, 2018
The present trend in developing and using semi-direct speaking tests has been supported by test developers' and researchers' claims of increased practicality, higher reliability, and concurrent validity with test scores from direct oral proficiency interviews. However, it is universally agreed within the language testing and assessment community…
Descriptors: Case Studies, Speech Communication, Language Tests, Comparative Analysis
Moshinsky, Avital; Ziegler, David; Gafni, Naomi – International Journal of Testing, 2017
Many medical schools have adopted multiple mini-interviews (MMI) as an advanced selection tool. MMIs are expensive and used to test only a few dozen candidates per day, making it infeasible to develop a different test version for each test administration. Therefore, some items are reused both within and across years. This study investigated the…
Descriptors: Interviews, Medical Schools, Test Validity, Test Reliability
Thompson, Gregory L.; Cox, Troy L.; Knapp, Nieves – Foreign Language Annals, 2016
While studies have been done to rate the validity and reliability of the Oral Proficiency Interview (OPI) and Oral Proficiency Interview-Computer (OPIc) independently, a limited amount of research has analyzed the interexam reliability of these tests, and studies have yet to be conducted comparing the results of Spanish language learners who take…
Descriptors: Comparative Analysis, Oral Language, Language Proficiency, Spanish
Öz, Hüseyin; Özturan, Tuba – Journal of Language and Linguistic Studies, 2018
This article reports the findings of a study that sought to investigate whether computer-based vs. paper-based test-delivery mode has an impact on the reliability and validity of an achievement test for a pedagogical content knowledge course in an English teacher education program. A total of 97 university students enrolled in the English as a…
Descriptors: Computer Assisted Testing, Testing, Test Format, Teaching Methods

