Davis-Berg, Elizabeth C.; Minbiole, Julie – School Science Review, 2020
Completion rates were compared for long-form questions where a large blank answer space was provided and for long-form questions where the answer space contained bullet-point prompts corresponding to the parts of the question. It was found that students were more likely to complete a question when bullet points were provided in the answer space.…
Descriptors: Test Format, Test Construction, Academic Achievement, Educational Testing
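The completion-rate comparison described in this entry reduces to a 2x2 contingency test. A minimal sketch in Python, with invented counts (the abstract reports neither the actual figures nor the statistic the authors used):

from scipy.stats import chi2_contingency

# Rows: answer-space format; columns: (completed, did not complete).
# All counts are hypothetical placeholders for illustration.
table = [[88, 12],   # bullet-point prompts in the answer space
         [70, 30]]   # large blank answer space
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")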
Arneson, Amy – ProQuest LLC, 2019
This three-paper dissertation explores item cluster-based assessments, first in general as they relate to modeling, and then through specific issues surrounding the design of a particular item cluster-based assessment. There should be a reasonable analogy between the structure of a psychometric model and the cognitive theory that the assessment is based upon.…
Descriptors: Item Response Theory, Test Items, Critical Thinking, Cognitive Tests
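The modeling analogy this entry mentions can be made concrete with the simplest item response model. As a generic illustration (the dissertation's own models are not given in this snippet), the Rasch model sets

$$ P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)} $$

where theta_i is person i's ability and b_j is item j's difficulty; cluster-based extensions typically add parameters shared by the items in a cluster, so that the model's structure mirrors the cognitive theory behind the clustering.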
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examination based on constructed-response questions (CRQs). Although MCQs have an advantage in the objectivity of the grading process and the speed with which results are produced, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
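The grading error this entry alludes to is usually attributed to guessing on MCQs. As a hedged illustration (the snippet does not state the study's own error model), the classical correction-for-guessing score is

$$ S_c = R - \frac{W}{k - 1} $$

where R is the number of right answers, W the number of wrong answers, and k the number of options per item; under blind guessing, the expected value of S_c equals the score the examinee would earn from knowledge alone.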
Craig, Pippa; Gordon, Jill; Clarke, Rufus; Oldmeadow, Wendy – Assessment & Evaluation in Higher Education, 2009
This study aimed to provide evidence to guide decisions on the type and timing of assessments in a graduate medical programme, by identifying whether students from particular degree backgrounds face greater difficulty in satisfying the current assessment requirements. We examined the performance rank of students in three types of assessments and…
Descriptors: Student Evaluation, Medical Education, Student Characteristics, Correlation
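Comparing students' performance ranks across assessment types, as this study does, is commonly done with a rank correlation. A minimal sketch with invented ranks (the abstract does not name the statistic the authors used):

import numpy as np
from scipy.stats import spearmanr

# Hypothetical ranks of the same eight students on two assessment types.
written_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8])
clinical_rank = np.array([2, 1, 4, 3, 6, 5, 8, 7])
rho, p = spearmanr(written_rank, clinical_rank)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")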
Whiting, Hal; Kline, Theresa J. B. – International Journal of Training and Development, 2006
This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those of students who had taken the test in the conventional…
Descriptors: Test Format, Adult Literacy, Computer Assisted Testing, College Students
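A minimal sketch of the cross-media score comparison this entry describes. The statistic (Welch's t-test) and all numbers are assumptions for illustration; the abstract does not say how the comparison was made:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
computer = rng.normal(72, 10, size=73)   # hypothetical computer-version scores
paper = rng.normal(71, 10, size=80)      # hypothetical paper-version scores

t, p = stats.ttest_ind(computer, paper, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.3f}")

Note that a non-significant p value is only weak evidence of equivalence; a formal equivalence claim would use a procedure such as two one-sided tests (TOST).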
Huang, Yi-Min; Trevisan, Mike; Storfer, Andrew – International Journal for the Scholarship of Teaching and Learning, 2007
Despite the prevalence of multiple choice items in educational testing, there is a dearth of empirical evidence for multiple choice item writing rules. The purpose of this study was to expand the base of empirical evidence by examining the use of the "all-of-the-above" option in a multiple choice examination in order to assess how…
Descriptors: Multiple Choice Tests, Educational Testing, Ability Grouping, Test Format
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
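The abstract does not state which DIF procedure the authors applied; the Mantel-Haenszel statistic, computed over strata of matched total scores, is one standard screen. A minimal sketch, implemented directly so no particular package API is assumed; all data are simulated placeholders:

import numpy as np

def mantel_haenszel_chi2(correct, group, total):
    """Continuity-corrected Mantel-Haenszel chi-square for one item.
    correct: 0/1 responses to the item; group: 1 = computer, 0 = paper;
    total: matching criterion (e.g., total test score)."""
    num, var = 0.0, 0.0
    for s in np.unique(total):
        m = total == s
        n1 = np.sum(group[m] == 1)          # computer examinees in stratum
        n0 = np.sum(group[m] == 0)          # paper examinees in stratum
        m1 = np.sum(correct[m] == 1)        # correct responses in stratum
        m0 = np.sum(correct[m] == 0)        # incorrect responses in stratum
        T = n1 + n0
        if min(n1, n0, m1, m0) == 0 or T < 2:
            continue                        # stratum carries no information
        a = np.sum((group[m] == 1) & (correct[m] == 1))
        num += a - n1 * m1 / T              # observed minus expected count
        var += n1 * n0 * m1 * m0 / (T**2 * (T - 1))
    return (abs(num) - 0.5) ** 2 / var if var > 0 else 0.0

# Demo on simulated data; chi-square values above ~3.84 (df = 1,
# alpha = .05) would flag an item for DIF.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 400)
total = rng.integers(0, 6, 400)
correct = rng.integers(0, 2, 400)
print(f"MH chi2 = {mantel_haenszel_chi2(correct, group, total):.2f}")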