Showing 1 to 15 of 16 results
Peer reviewed
Okan Bulut; Guher Gorgun; Hacer Karamese – Journal of Educational Measurement, 2025
The use of multistage adaptive testing (MST) has gradually increased in large-scale testing programs as MST achieves a balanced compromise between linear test design and item-level adaptive testing. MST works on the premise that each examinee gives their best effort when attempting the items, and their responses truly reflect what they know or can…
Descriptors: Response Style (Tests), Testing Problems, Testing Accommodations, Measurement
Peer reviewed
Ata Jahangir Moshayedi; Atanu Shuvam Roy; Zeashan Hameed Khan; Hong Lan; Habibollah Lotfi; Xiaohong Zhang – Education and Information Technologies, 2025
In this paper, a secure exam proctoring assistant 'EMTIHAN' (which means exam in Arabic/Persian/Urdu/Turkish languages) is developed to address concerns related to online exams for handwritten topics by allowing students to submit their answers online securely via their mobile devices. This system is designed with an aim to lessen the student's…
Descriptors: Computer Assisted Testing, Distance Education, MOOCs, Virtual Classrooms
Peer reviewed
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
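The Acar et al. abstract mentions scoring figural drawings with a random forest classifier. As a rough illustration of that kind of machine-scoring setup, and not the study's actual pipeline, the Python sketch below trains a scikit-learn random forest on pre-extracted drawing features and checks agreement with human ratings on a held-out split; the features, labels, and sample sizes are placeholders.

```python
# Hypothetical sketch: machine-scoring drawings with a random forest classifier,
# in the spirit of the approach mentioned in the Acar et al. abstract.
# The feature matrix and rating labels are simulated stand-ins, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Assume each drawing has been reduced to a fixed-length feature vector
# (e.g., stroke counts, ink density, embedding dimensions): 500 drawings, 64 features.
X = rng.normal(size=(500, 64))
# Assume human raters placed each drawing into one of three originality bands.
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Agreement with human ratings on unseen drawings, analogous to the
# classification accuracy figures reported in the abstract.
print("held-out agreement:", accuracy_score(y_test, clf.predict(X_test)))
```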
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Peer reviewed
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about reliability and academic honesty regarding multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Peer reviewed
PDF full text available on ERIC
Robert N. Prince – Numeracy, 2025
One of the effects of the COVID-19 pandemic was the rapid shift to replacing traditional, paper-based tests with their computer-based counterparts. In many cases, these new modes of delivering tests will remain in place for the foreseeable future. In South Africa, the National Benchmark Quantitative Literacy (QL) test was impelled to make this…
Descriptors: Benchmarking, Numeracy, Multiple Literacies, Paper and Pencil Tests
Joanna Williamson – Research Matters, 2025
Teachers, examiners and assessment experts know from experience that some candidates annotate exam questions. "Annotation" includes anything the candidate writes or draws outside of the designated response space, such as underlining, jotting, circling, sketching and calculating. Annotations are of interest because they may evidence…
Descriptors: Mathematics, Tests, Documentation, Secondary Education
Peer reviewed
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Peer reviewed
De Van Vo; Geraldine Mooney Simmie – International Journal of Science and Mathematics Education, 2025
While national curricula in science education highlight the importance of inquiry-based learning, assessing students' capabilities in scientific inquiry remains a subject of debate. Our study explored the construction, developmental trends and validation techniques in relation to assessing scientific inquiry using a systematic literature review…
Descriptors: Science Education, Inquiry, Science Process Skills, Student Evaluation
Peer reviewed
Anthony Wambua Wambua; Peter Mutua Maurice; Martin Mati Munyao – Discover Education, 2025
This study explored learners' experience in an online problem-based learning (PBL) Moodle class. Additionally, the PBL approach was assessed for its effectiveness in enhancing collaborative learning in eLearning. Lastly, the researchers make recommendations for effective adoption of PBL in Moodle. To achieve this, an…
Descriptors: Learning Management Systems, Cooperative Learning, Problem Based Learning, Electronic Learning
Peer reviewed
PDF full text available on ERIC
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' ability estimates and their standard errors, as well as the psychometric properties of the test, across the two administration modes (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
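The Ismail et al. abstract compares examinees' ability estimates, standard errors, and test psychometrics across electronic and paper administrations. The sketch below is a hypothetical illustration, under a simple Rasch model with simulated responses rather than the study's method or data, of how per-person ability estimates and their standard errors could be computed for the two modes and compared.

```python
# Hypothetical sketch (not the study's code): Rasch ability estimates and standard
# errors for the same examinees under two administration modes. Item difficulties
# and response matrices are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_persons = 20, 74          # 74 examinees, matching the reported sample size
b = rng.normal(0.0, 1.0, n_items)    # assumed item difficulties (same test, both modes)
theta_true = rng.normal(0.0, 1.0, n_persons)

def simulate(theta, b):
    # Generate dichotomous responses under the Rasch model.
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

def rasch_ability(x, b, iters=20):
    """Newton-Raphson MLE of each person's theta; returns (theta_hat, se)."""
    theta = np.zeros(x.shape[0])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        score = (x - p).sum(axis=1)          # gradient of the log-likelihood
        info = (p * (1.0 - p)).sum(axis=1)   # test information at current theta
        theta = np.clip(theta + score / info, -4.0, 4.0)  # clip guards extreme scores
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    se = 1.0 / np.sqrt((p * (1.0 - p)).sum(axis=1))       # SE = 1 / sqrt(information)
    return theta, se

theta_paper, se_paper = rasch_ability(simulate(theta_true, b), b)
theta_elec, se_elec = rasch_ability(simulate(theta_true, b), b)

print("mean ability difference (electronic - paper):",
      round(float(np.mean(theta_elec - theta_paper)), 3))
print("mean SE, paper vs electronic:",
      round(float(se_paper.mean()), 3), round(float(se_elec.mean()), 3))
```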
Peer reviewed
PDF full text available on ERIC
National Assessment of Educational Progress (NAEP), 2025
Also known as The Nation's Report Card, the National Assessment of Educational Progress (NAEP) is the largest nationally representative, continuing assessment of students' achievement and learning experiences in various subjects for the nation, states, and 27 urban districts. The National Center for Education Statistics (NCES) is currently…
Descriptors: National Competency Tests, Innovation, Futures (of Society), Achievement Tests
Peer reviewed
Taehyeong Kim; Byungmin Lee – Language Assessment Quarterly, 2025
The Korean College Scholastic Ability Test (CSAT) aims to assess Korean high school students' scholastic ability required for college readiness. As a high-stakes test, the examination serves as a pivotal hurdle for university admission and exerts a strong washback effect on the educational system in Korea. The present study set out to investigate…
Descriptors: Reading Comprehension, Reading Tests, Language Tests, Multiple Choice Tests
Peer reviewed
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities in the form of new task formats, and equally a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Peer reviewed
Abdullah Al Fraidan – International Journal of Distance Education Technologies, 2025
This study explores vocabulary assessment practices in Saudi Arabia's hybrid EFL ecosystem, leveraging platforms like Blackboard and Google Forms. The focus is on identifying prevalent test formats and evaluating their alignment with modern pedagogical goals. To classify vocabulary assessment formats in hybridized EFL contexts and recommend the…
Descriptors: Vocabulary Development, English (Second Language), Second Language Learning, Second Language Instruction