Publication Date: In 2025 (5); Since 2024 (10)
Publication Type: Reports - Evaluative (10); Journal Articles (7); Books (1); Information Analyses (1)
Showing all 10 results
Peer reviewed
Ata Jahangir Moshayedi; Atanu Shuvam Roy; Zeashan Hameed Khan; Hong Lan; Habibollah Lotfi; Xiaohong Zhang – Education and Information Technologies, 2025
In this paper, a secure exam proctoring assistant, 'EMTIHAN' (the word for 'exam' in Arabic, Persian, Urdu, and Turkish), is developed to address concerns related to online exams for handwritten topics by allowing students to securely submit their answers online via their mobile devices. The system is designed to lessen the student's…
Descriptors: Computer Assisted Testing, Distance Education, MOOCs, Virtual Classrooms
Emma Walland – Research Matters, 2024
GCSE examinations (taken by students aged 16 in England) are not intended to be speeded (i.e., to be partly a test of how quickly students can answer questions), yet there has been little research exploring this. The aim of this research was to explore the speededness of past GCSE written examinations, using only the data from scored…
Descriptors: Educational Change, Test Items, Item Analysis, Scoring
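A note on method, for orientation only: the abstract above is truncated before it names its approach, so the following is not necessarily Walland's analysis. One common way to quantify speededness from scored response data is the rate of "not reached" items by test position (unanswered items at the end of a script suggest the candidate ran out of time). A minimal sketch, with hypothetical column names and toy data:

```python
# Illustrative speededness index: per-item "not reached" rates.
# Data and column names are hypothetical, not from Walland (2024).
import pandas as pd

def not_reached_rate(responses: pd.DataFrame) -> pd.Series:
    """responses: one row per student, one column per item (in test order);
    NaN marks an unanswered item. An item counts as 'not reached' if it and
    every later item are unanswered. Rates rising toward the end of the
    test are a conventional sign of speededness."""
    answered = responses.notna()
    # An item was "reached" if it or any later item was answered.
    reached = answered.iloc[:, ::-1].cummax(axis=1).iloc[:, ::-1]
    return (~reached).mean(axis=0)

# Toy example: 3 students, 4 items.
df = pd.DataFrame(
    [[1, 0, 1, 1],        # finished the paper
     [1, 1, None, None],  # stopped after item 2
     [0, 1, 1, None]],    # stopped after item 3
    columns=["q1", "q2", "q3", "q4"],
)
print(not_reached_rate(df))  # q1 0.00, q2 0.00, q3 0.33, q4 0.67
```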
Peer reviewed
Goran Trajkovski; Heather Hayes – Digital Education and Learning, 2025
This book explores the transformative role of artificial intelligence in educational assessment, catering to researchers, educators, administrators, policymakers, and technologists involved in shaping the future of education. It delves into the foundations of AI-assisted assessment, innovative question types and formats, data analysis techniques,…
Descriptors: Artificial Intelligence, Educational Assessment, Computer Uses in Education, Test Format
Peer reviewed
Selcuk Acar; Yuyang Shen – Journal of Creative Behavior, 2025
Creativity tests, like creativity itself, vary widely in their structure and use. These differences include instructions, test duration, environments, prompt and response modalities, and the structure of test items. A key factor is task structure, referring to the specificity of the number of responses requested for a given prompt. Classic…
Descriptors: Creativity, Creative Thinking, Creativity Tests, Task Analysis
Peer reviewed
Muhammed Parviz; Masoud Azizi – Discover Education, 2025
This article offers a critical review of the Ministry of Science, Research, and Technology English Proficiency Test (MSRT), a high-stakes exam required for postgraduate graduation, scholarships, and certain employment positions in Iran. Despite its widespread use, the design and implementation of the MSRT raise concerns about its validity and…
Descriptors: Language Tests, Language Proficiency, English (Second Language), Second Language Learning
Peer reviewed
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
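As context for the two model families this abstract pairs (standard textbook forms, not necessarily DeCarlo's exact parameterization): the two-parameter IRT model gives the probability that person $i$ answers open-ended item $j$ correctly as

$$P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}},$$

where $\theta_i$ is ability and $a_j$, $b_j$ are item discrimination and difficulty. An equal-variance Gaussian SDT model for a binary choice yields hit rate $\Phi(d' - c)$ and false-alarm rate $\Phi(-c)$, with $d'$ the separation between signal and noise distributions and $c$ the response criterion. Both families locate respondents and items on a latent continuum, which is the shared conceptualization the abstract alludes to.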
Peer reviewed
Ali Khodi; Logendra Stanley Ponniah; Amir Hossein Farrokhi; Fateme Sadeghi – Language Testing in Asia, 2024
The current article evaluates a national English language proficiency test known as the "MSRT test," which is used to determine the eligibility of candidates for admission to and completion of higher education programs in Iran. Students in all majors take this standardized, high-stakes, criterion-referenced test to determine if they have…
Descriptors: Foreign Countries, Language Tests, Reading Tests, Language Proficiency
Peer reviewed
Yizhu Gao; Xiaoming Zhai; Min Li; Gyeonggeon Lee; Xiaoxiao Liu – Grantee Submission, 2025
The rapid evolution of generative artificial intelligence (GenAI) is transforming science education by facilitating innovative pedagogical paradigms while raising substantial concerns about scholarly integrity. One particularly pressing issue is the growing risk of student use of GenAI tools to outsource assessment tasks, potentially compromising…
Descriptors: Artificial Intelligence, Computer Software, Science Education, Integrity
Peer reviewed
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Grantee Submission, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-In-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this paper, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
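To make the "single test, to an adaptive test" step of the progression concrete (this is the textbook computerized-adaptive-testing rule, not the DIRTy framework itself): an adaptive test repeatedly administers the unused item that is most informative at the examinee's current ability estimate. A minimal sketch under a 2PL model, with a hypothetical item pool:

```python
# Illustrative CAT item selection: maximum Fisher information under 2PL.
# The item pool and ability value are hypothetical.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta: float, pool: list[tuple[float, float]], used: set[int]) -> int:
    """Return the index of the unused item (a, b) that is most
    informative at the current ability estimate."""
    return max((i for i in range(len(pool)) if i not in used),
               key=lambda i: fisher_info(theta, *pool[i]))

pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]  # (a, b) per item
print(next_item(theta=0.4, pool=pool, used={2}))  # -> 0, the best remaining item
```

An adaptive assessment "system," as the abstract frames it, would wrap this selection loop in broader design decisions (purpose, format, reporting) made responsive to the individual test taker.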
Peer reviewed
PDF available on ERIC
Patrick Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Report Series, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Assessment Literacy, Testing, Test Bias, Test Construction