Davis-Berg, Elizabeth C.; Minbiole, Julie – School Science Review, 2020
The completion rates were compared for long-form questions where a large blank answer space is provided and for long-form questions where the answer space has bullet-point prompts corresponding to the parts of the question. It was found that students were more likely to complete a question when bullet points were provided in the answer space.…
Descriptors: Test Format, Test Construction, Academic Achievement, Educational Testing
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examination based on constructed-response questions (CRQs). Although MCQs have an advantage in terms of objectivity in the grading process and speed in producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
Liu, Yuming; Schulz, E. Matthew; Yu, Lei – Journal of Educational and Behavioral Statistics, 2008
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
Descriptors: Reading Comprehension, Test Format, Markov Processes, Educational Testing
Frey, Andreas; Hartig, Johannes; Rupp, Andre A. – Educational Measurement: Issues and Practice, 2009
In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…
Descriptors: Measures (Individuals), Test Construction, Theory Practice Relationship, Design
Craig, Pippa; Gordon, Jill; Clarke, Rufus; Oldmeadow, Wendy – Assessment & Evaluation in Higher Education, 2009
This study aimed to provide evidence to guide decisions on the type and timing of assessments in a graduate medical programme, by identifying whether students from particular degree backgrounds face greater difficulty in satisfying the current assessment requirements. We examined the performance rank of students in three types of assessments and…
Descriptors: Student Evaluation, Medical Education, Student Characteristics, Correlation
Whiting, Hal; Kline, Theresa J. B. – International Journal of Training and Development, 2006
This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those who had taken the test in the conventional…
Descriptors: Test Format, Adult Literacy, Computer Assisted Testing, College Students
Huang, Yi-Min; Trevisan, Mike; Storfer, Andrew – International Journal for the Scholarship of Teaching and Learning, 2007
Despite the prevalence of multiple choice items in educational testing, there is a dearth of empirical evidence for multiple choice item writing rules. The purpose of this study was to expand the base of empirical evidence by examining the use of the "all-of-the-above" option in a multiple choice examination in order to assess how…
Descriptors: Multiple Choice Tests, Educational Testing, Ability Grouping, Test Format

Snowman, Jack – Mid-Western Educational Researcher, 1993
A review of five recent studies concludes that on multiple-choice tests, changing uncertain answers improves results; testing plus feedback produces more learning than additional study time; students learn and retain more when they are tested more often; and question and completion formats are equally acceptable for multiple-choice items. (KS)
Descriptors: Academic Achievement, Educational Testing, Elementary Secondary Education, Test Construction

Carlson, Janet F. – Educational Research Quarterly, 1998
This article invokes a literal image of test givers as measurement devices and explores the psychometric properties of these test administrator instruments. Concurrent and content validation and test-retest and parallel-forms reliability are explored. (SLD)
Descriptors: Achievement Tests, Educational Testing, Examiners, Psychometrics
Zheng, Ying; Cheng, Liying; Klinger, Don A. – TESL Canada Journal, 2007
Large-scale testing in English affects second-language students not only strongly but also differently from first-language learners. The research literature reports that confounding factors in such large-scale testing, such as varying test formats, may differentially affect the performance of students from diverse backgrounds. An investigation of…
Descriptors: Reading Comprehension, Reading Tests, Test Format, Educational Testing

Weiten, Wayne – Journal of Experimental Education, 1982
A comparison of double as opposed to single multiple-choice questions yielded significant differences in regard to item difficulty, item discrimination, and internal reliability, but not concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests

Mentzer, Thomas L. – Educational and Psychological Measurement, 1982
Evidence of biases in the correct answers in multiple-choice test item files was found to include an "all of the above" bias, in which that answer was correct more than 25 percent of the time, and a bias toward the longest answer being correct too frequently. Seven bias types were studied. (Author/CM)
Descriptors: Educational Testing, Higher Education, Multiple Choice Tests, Psychology

Ioannidou, Mary Koutselini – Studies in Educational Evaluation, 1997
Student achievement was compared for open-book and closed-book examinations taken by 72 college students in Cyprus. There were no significant differences in total examination score between the two types of tests, although those who took the closed-book examination had slightly higher scores. (SLD)
Descriptors: Achievement Tests, College Students, Educational Testing, Foreign Countries
Griffiths, Sue – Use of English, 1989
Examines the format and nature of the two required writing tasks in the General Certificate of Secondary Education (GCSE) English Examination. Asserts that the test assignments do not properly assess writing ability and are problematic, tedious, and ineptly conceived ordeals for pupils. Discusses the use of writing folders as an alternative. (KEH)
Descriptors: Educational Testing, English Instruction, Foreign Countries, Secondary Education