Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Chris Jellis – Research Matters, 2024
The Centre for Evaluation and Monitoring (CEM), based in the North of England, recently celebrated its 40th birthday. It arose from an evaluation project at Newcastle University and, following a subsequent move to Durham University, rapidly grew in scope and influence, developing a series of highly regarded school assessments. For a relatively small…
Descriptors: Educational Assessment, Foreign Countries, Computer Assisted Testing, Adaptive Testing
Joanna Williamson – Research Matters, 2025
Teachers, examiners and assessment experts know from experience that some candidates annotate exam questions. "Annotation" includes anything the candidate writes or draws outside of the designated response space, such as underlining, jotting, circling, sketching and calculating. Annotations are of interest because they may evidence…
Descriptors: Mathematics, Tests, Documentation, Secondary Education
Tom Benton – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student answers to their level of ability. Most commonly, this benefit is used to justify shortening tests whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level