Publication Date
In 2025 (0)
Since 2024 (0)
Since 2021 (last 5 years) (0)
Since 2016 (last 10 years) (0)
Since 2006 (last 20 years) (7)

Descriptor
Comparative Analysis (7)
Educational Testing (7)
Test Format (7)
Computer Assisted Testing (3)
Test Items (3)
College Students (2)
Correlation (2)
Educational Assessment (2)
Educational Technology (2)
Evaluation Methods (2)
Evaluation Research (2)
Source
Assessment & Evaluation in… (1)
Computers & Education (1)
International Journal for the… (1)
International Journal of… (1)
Journal of Educational and… (1)
Journal of Technology,… (1)
ProQuest LLC (1)
Author
Allen, Nancy (1)
Bennett, Randy Elliott (1)
Clarke, Rufus (1)
Craig, Pippa (1)
Gordon, Jill (1)
Horkay, Nancy (1)
Huang, Yi-Min (1)
Kaplan, Bruce (1)
Kline, Theresa J. B. (1)
Liu, Yuming (1)
Oldmeadow, Wendy (1)
Publication Type
Journal Articles (6)
Reports - Research (5)
Dissertations/Theses -… (1)
Reports - Evaluative (1)

Education Level
Postsecondary Education (4)
Higher Education (3)
Adult Education (1)
Elementary Education (1)
Elementary Secondary Education (1)
Grade 8 (1)
Location
Australia (1)
Assessments and Surveys
Iowa Tests of Basic Skills (1)
Tian, Feng – ProQuest LLC, 2011
There has been a steady increase in the use of mixed-format tests (that is, tests consisting of both multiple-choice and constructed-response items) in both classroom and large-scale assessments. This calls for appropriate equating methods for such tests. As Item Response Theory (IRT) has rapidly become mainstream as the theoretical basis for…
Descriptors: Item Response Theory, Comparative Analysis, Equated Scores, Statistical Analysis
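The Tian abstract refers to IRT-based equating of mixed-format tests. As a minimal sketch of the underlying idea (not the dissertation's actual method), an IRT true score is the expected number-correct score at a fixed ability level: the sum of item response probabilities, here under the common three-parameter logistic (3PL) model with invented item parameters.

```python
import math

def p_3pl(theta, a, b, c):
    """3PL item response probability at ability theta:
    c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def true_score(theta, items):
    """IRT true score: expected number-correct, i.e. the sum of
    item probabilities at a fixed ability level."""
    return sum(p_3pl(theta, a, b, c) for a, b, c in items)

# Three hypothetical items as (discrimination, difficulty, guessing);
# these values are illustrative, not drawn from the dissertation.
items = [(1.0, 0.0, 0.20), (1.2, -0.5, 0.25), (0.8, 1.0, 0.20)]
print(round(true_score(0.0, items), 3))
```

True score equating proceeds by mapping the true score on one form to the ability that produces it, then evaluating the other form's true score function at that ability.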
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examination based on constructed-response questions (CRQs). Although MCQs have an advantage in grading objectivity and in the speed with which results are produced, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
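The Ventouras et al. abstract notes that MCQs introduce an error into final grades, typically attributed to guessing. One standard remedy (shown here as a general illustration, not necessarily the correction used in that paper) is the classical formula score R - W/(k-1), under which a pure guess contributes zero on average.

```python
def formula_score(num_right, num_wrong, num_options):
    """Classical correction-for-guessing formula score:
    R - W / (k - 1), where k is the number of answer options.
    A random guess is expected to contribute zero on average."""
    return num_right - num_wrong / (num_options - 1)

# Hypothetical examinee: 30 right, 10 wrong on 4-option MCQs.
# The corrected score falls a bit below the raw score of 30.
print(formula_score(30, 10, 4))
```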
Liu, Yuming; Schulz, E. Matthew; Yu, Lei – Journal of Educational and Behavioral Statistics, 2008
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
Descriptors: Reading Comprehension, Test Format, Markov Processes, Educational Testing
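The Liu, Schulz and Yu abstract compares MCMC and bootstrap estimates of standard errors. A minimal sketch of the general bootstrap technique (not the paper's implementation, and using invented scores rather than the Iowa Tests data) is: resample with replacement, recompute the statistic, and take the standard deviation across replications.

```python
import random
import statistics

def bootstrap_se(values, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error: resample with replacement,
    recompute the statistic on each resample, and return the
    standard deviation of the replicated statistics."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(values) for _ in values]
        reps.append(statistic(resample))
    return statistics.stdev(reps)

# Hypothetical test scores (the paper's Iowa Tests of Basic Skills
# data are not reproduced here).
scores = [12, 15, 9, 20, 14, 17, 11, 16, 13, 18]
se_mean = bootstrap_se(scores, statistics.mean)
```

For the mean, the bootstrap estimate should land near the analytic standard error sd/sqrt(n); in equating, the same resampling loop wraps the whole equating procedure rather than a simple mean.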
Craig, Pippa; Gordon, Jill; Clarke, Rufus; Oldmeadow, Wendy – Assessment & Evaluation in Higher Education, 2009
This study aimed to provide evidence to guide decisions on the type and timing of assessments in a graduate medical programme, by identifying whether students from particular degree backgrounds face greater difficulty in satisfying the current assessment requirements. We examined the performance rank of students in three types of assessments and…
Descriptors: Student Evaluation, Medical Education, Student Characteristics, Correlation
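The Craig et al. abstract examines students' performance ranks across three assessment types, and "Correlation" appears among its descriptors. As an illustrative sketch of how agreement between two rankings is commonly quantified (not the paper's actual analysis), Spearman's rank correlation applies the formula 1 - 6*sum(d^2)/(n*(n^2-1)) to rank differences; the marks below are invented.

```python
def spearman_rho(x, y):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2-1)).
    Simplified sketch: assumes no tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_sq / (n * (n * n - 1))

# Hypothetical marks for the same five students on two assessments.
mcq_marks = [88, 72, 95, 60, 81]
essay_marks = [84, 70, 90, 65, 80]
print(spearman_rho(mcq_marks, essay_marks))
```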
Whiting, Hal; Kline, Theresa J. B. – International Journal of Training and Development, 2006
This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those who had taken the test in the conventional…
Descriptors: Test Format, Adult Literacy, Computer Assisted Testing, College Students
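The Whiting and Kline abstract compares scores from computer and conventional administrations of the TOWES. A common way to summarize such a mode comparison (shown here as a general sketch with invented scores, not the paper's analysis) is the standardized mean difference, Cohen's d with a pooled standard deviation.

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using the pooled
    standard deviation of two independent groups."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical score sets; the TOWES data are not reproduced here.
computer = [70, 75, 80, 85, 90]
paper = [68, 74, 79, 84, 88]
d = cohens_d(computer, paper)
```

A d near zero is the pattern an equivalency study hopes to see: the two modes yield essentially interchangeable score distributions.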
Huang, Yi-Min; Trevisan, Mike; Storfer, Andrew – International Journal for the Scholarship of Teaching and Learning, 2007
Despite the prevalence of multiple-choice items in educational testing, there is a dearth of empirical evidence for multiple-choice item-writing rules. The purpose of this study was to expand the base of empirical evidence by examining the use of the "all-of-the-above" option in a multiple-choice examination in order to assess how…
Descriptors: Multiple Choice Tests, Educational Testing, Ability Grouping, Test Format
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness