Showing 76 to 90 of 3,128 results
Peer reviewed
Direct link
Zeynep Uzun; Tuncay Ögretmen – Large-scale Assessments in Education, 2025
This study aimed to evaluate the item model fit by equating the forms of the PISA 2018 mathematics subtest with concurrent common-item equating in samples from Türkiye, the UK, and Italy. The answers given in mathematics subtest Forms 2, 8, and 12 were used in this context. Analyses were performed using the Dichotomous Rasch Model in the WINSTEPS…
Descriptors: Item Response Theory, Test Items, Foreign Countries, Mathematics Tests
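As background for the record above (a standard statement of the model, not drawn from the article itself), the dichotomous Rasch model it applies gives the probability that person n answers item i correctly as

P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)},

where \theta_n is the person ability and b_i the item difficulty. In concurrent common-item equating, all forms are calibrated in a single run so that the shared items place each form's item difficulties on this common scale.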
Peer reviewed
Direct link
Meaghan McKenna; Hope Gerde; Nicolette Grasley-Boy – Reading and Writing: An Interdisciplinary Journal, 2025
This article describes the development and administration of the "Kindergarten-Second Grade (K-2) Writing Data-Based Decision Making (DBDM) Survey." The "K-2 Writing DBDM Survey" was developed to learn more about current DBDM practices specific to early writing. A total of 376 educational professionals (175 general education…
Descriptors: Writing Evaluation, Writing Instruction, Preschool Teachers, Kindergarten
Peer reviewed
Direct link
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Peer reviewed
Direct link
Lim, Alliyza; Brewer, Neil; Aistrope, Denise; Young, Robyn L. – Autism: The International Journal of Research and Practice, 2023
The Reading the Mind in the Eyes Test (RMET) is a purported theory of mind measure and one that reliably differentiates autistic and non-autistic individuals. However, concerns have been raised about the validity of the measure, with some researchers suggesting that the multiple-choice format of the RMET makes it susceptible to the undue influence…
Descriptors: Theory of Mind, Autism Spectrum Disorders, Test Validity, Multiple Choice Tests
Peer reviewed
PDF on ERIC Download full text
Narnaware, Yuwaraj; Cuschieri, Sarah – HAPS Educator, 2023
The effects of visualizing images on improving anatomical knowledge are evident in medical and allied health students, but this phenomenon has rarely been assessed in nursing students. To assess the visualizing effect of images on improving anatomical knowledge and to use images as one of the methods of gross anatomical knowledge assessment in nursing…
Descriptors: Nursing Students, Multiple Choice Tests, Anatomy, Science Tests
Peer reviewed
Direct link
Susan K. Johnsen – Gifted Child Today, 2024
The author provides a checklist for educators who are selecting technically adequate tests for identifying and referring students for gifted education services and programs. The checklist includes questions related to how the test was normed, reliability and validity studies as well as questions related to types of scores, administration, and…
Descriptors: Test Selection, Academically Gifted, Gifted Education, Test Validity
Peer reviewed
PDF on ERIC Download full text
Tugra Karademir Coskun; Ayfer Alper – Digital Education Review, 2024
This study aims to examine the potential differences between teacher evaluations and artificial intelligence (AI) tool-based assessment systems in university examinations. The research has evaluated a wide spectrum of exams including numerical and verbal course exams, exams with different assessment styles (project, test exam, traditional exam),…
Descriptors: Artificial Intelligence, Visual Aids, Video Technology, Tests
Peer reviewed
Direct link
Dambha, Tasneem; Swanepoel, De Wet; Mahomed-Asmail, Faheema; De Sousa, Karina C.; Graham, Marien A.; Smits, Cas – Journal of Speech, Language, and Hearing Research, 2022
Purpose: This study compared the test characteristics, test-retest reliability, and test efficiency of three novel digits-in-noise (DIN) test procedures to a conventional antiphasic 23-trial adaptive DIN (D23). Method: One hundred twenty participants with an average age of 42 years (SD = 19) were included. Participants were tested and retested…
Descriptors: Auditory Tests, Screening Tests, Efficiency, Test Format
Peer reviewed
PDF on ERIC Download full text
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
Peer reviewed
Direct link
Kunz, Tanja; Meitinger, Katharina – Field Methods, 2022
Although list-style open-ended questions generally help us gain deeper insights into respondents' thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic and a follow-up design to motivate respondents to give higher quality responses than with a static design, but without overburdening them. Our…
Descriptors: Online Surveys, Item Response Theory, Test Items, Test Format
Peer reviewed
Direct link
Christian Berggren; Bengt Gerdin; Solmaz Filiz Karabag – Journal of Academic Ethics, 2025
The exposure of scientific scandals and the increase of dubious research practices have generated a stream of studies on Questionable Research Practices (QRPs), such as failure to acknowledge co-authors, selective presentation of findings, or removal of data not supporting desired outcomes. In contrast to high-profile fraud cases, QRPs can be…
Descriptors: Test Construction, Test Bias, Test Format, Response Style (Tests)
Peer reviewed
Direct link
Beifang Ma; Maximilian Krötz; Viola Deutscher; Esther Winther – International Journal of Training and Development, 2025
The rapid digital transformation of vocational education and training (VET) has underscored the need to adapt traditional assessment methods to digital formats. However, when transitioning to digital modes, it is crucial to consider factors beyond mere technical implementation, particularly the potential impact of altered presentation formats on…
Descriptors: Job Skills, Competence, Test Format, Computer Assisted Testing
Peer reviewed
Direct link
Gita Revalde; Madi Zholdakhmet; Anda Abola; Aliya Murzagaliyeva – Technology, Knowledge and Learning, 2025
Since its launch in November 2022, chatbot ChatGPT has gained significant popularity worldwide. It performs the task of a search engine, analyzes the information, and generates the required output. ChatGPT is already recognized as a useful tool for educational purposes, but it also comes with some limitations and potential risks. In this case…
Descriptors: Artificial Intelligence, Physics, Science Instruction, Natural Language Processing
Peer reviewed
Direct link
Carvalho, Paulo F.; Goldstone, Robert L. – Applied Cognitive Psychology, 2021
Across three experiments featuring naturalistic concepts (psychology concepts) and naïve learners, we extend previous research showing an effect of the sequence of study on learning outcomes, by demonstrating that the sequence of examples during study changes the representation the learner creates of the study materials. We compared participants'…
Descriptors: Test Preparation, Test Format, Learning Processes, Test Coaching
Peer reviewed
Direct link
Spiegel, Tali; Nivette, Amy – Assessment & Evaluation in Higher Education, 2023
This study investigates the relationship between take-home (open-book) examinations (THE) and in-class (closed-book) examinations (ICE) on academic performance and student wellbeing. Two social science courses (one bachelor and one master) were included in the study. In the first cohort (2019), students from both courses performed an ICE, whereas…
Descriptors: Test Format, Tests, Academic Achievement, Retention (Psychology)