Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 12 |
| Since 2017 (last 10 years) | 41 |
| Since 2007 (last 20 years) | 259 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Difficulty Level | 298 |
| Scores | 298 |
| Testing | 202 |
| Higher Education | 195 |
| Academic Achievement | 192 |
| Educational Trends | 185 |
| Racial Differences | 183 |
| Trend Analysis | 183 |
| High School Graduates | 182 |
| Benchmarking | 181 |
| Core Curriculum | 181 |
Author
| Author | Count |
| --- | --- |
| Kim, Sooyeon | 3 |
| Buzick, Heather M. | 2 |
| Cawthon, Stephanie W. | 2 |
| Gallas, Edwin J. | 2 |
| Laitusis, Cara Cahalan | 2 |
| Liu, Ou Lydia | 2 |
| Lord, Frederic M. | 2 |
| Moses, Tim | 2 |
| Pomplun, Mark | 2 |
| Ritchie, Timothy | 2 |
| Wise, Steven L. | 2 |
Audience
| Audience | Count |
| --- | --- |
| Policymakers | 64 |
| Practitioners | 64 |
| Researchers | 2 |
Location
| Location | Count |
| --- | --- |
| Ohio | 6 |
| California | 5 |
| China | 5 |
| Illinois | 5 |
| New Jersey | 5 |
| New Mexico | 5 |
| Pennsylvania | 5 |
| Rhode Island | 5 |
| Turkey | 5 |
| United States | 5 |
| Arkansas | 4 |
Laws, Policies, & Programs
| Law / Program | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 2 |
| Elementary and Secondary… | 1 |
Kim, Sooyeon; Walker, Michael – ETS Research Report Series, 2021
In this investigation, we used real data to assess potential differential effects associated with taking a test in a test center (TC) versus testing at home using remote proctoring (RP). We used a pseudo-equivalent groups (PEG) approach to examine group equivalence at the item level and the total score level. If our assumption holds that the PEG…
Descriptors: Testing, Distance Education, Comparative Analysis, Test Items
Mehri Izadi; Maliheh Izadi; Farrokhlagha Heidari – Education and Information Technologies, 2024
In today's environment of growing class sizes due to the prevalence of online and e-learning systems, providing one-to-one instruction and feedback has become a challenging task for teachers. However, the dialectical integration of instruction and assessment into a seamless and dynamic activity can provide a continuous flow of assessment…
Descriptors: Adaptive Testing, Computer Assisted Testing, English (Second Language), Second Language Learning
Julian Marvin Jörs; Ernesto William De Luca – Technology, Knowledge and Learning, 2025
The real-time availability of information and the intelligence of information systems have changed the way we deal with information. Current research is primarily concerned with the interplay between internal and external memory, i.e., how much and which forms of cognitively demanding processes we handle internally and when we use external storage…
Descriptors: Ethics, Learning Processes, Technology Uses in Education, Influence of Technology
Coutinho, Mariana V. C.; Papanastasiou, Elena; Agni, Stylianou; Vasko, John M.; Couchman, Justin J. – International Journal of Instruction, 2020
In this study, we examined monitoring accuracy during in-class exams for Emirati, American, and Cypriot college students. In experiment 1, 120 students made local confidence ratings for each multiple-choice question in a psychology exam and also estimated their performance at the end of the exam. In experiment 2, to investigate the effect of…
Descriptors: Metacognition, Foreign Countries, Cultural Differences, Accuracy
Designing Computer-Based Tests: Design Guidelines from Multimedia Learning Studied with Eye Tracking
Dirkx, K. J. H.; Skuballa, I.; Manastirean-Zijlstra, C. S.; Jarodzka, H. – Instructional Science: An International Journal of the Learning Sciences, 2021
The use of computer-based tests (CBTs), for both formative and summative purposes, has greatly increased over the past years. One major advantage of CBTs is the easy integration of multimedia. It is unclear, though, how to design such CBT environments with multimedia. The purpose of the current study was to examine whether guidelines for designing…
Descriptors: Test Construction, Computer Assisted Testing, Multimedia Instruction, Eye Movements
Andrés Christiansen; Rianne Janssen – Educational Assessment, Evaluation and Accountability, 2024
In international large-scale assessments, students may not be compelled to answer every test item: a student can decide to skip a seemingly difficult item or may drop out before the end of the test is reached. The way these missing responses are treated will affect the estimation of the item difficulty and student ability, and ultimately affect…
Descriptors: Test Items, Item Response Theory, Grade 4, International Assessment
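The Christiansen and Janssen abstract above notes that how missing responses are treated changes item difficulty estimates. A minimal classical-test-theory sketch with hypothetical data illustrates the point: the same item looks harder when omitted responses are scored as incorrect than when they are excluded from the estimate.

```python
import numpy as np

# Hypothetical responses to one item: 1 = correct, 0 = incorrect,
# nan = omitted or not reached.
resp = np.array([1, 0, 1, np.nan, 1, np.nan, 0, 1])

answered = ~np.isnan(resp)

# Treatment A: score missing responses as incorrect (0).
p_as_wrong = np.nan_to_num(resp, nan=0.0).mean()

# Treatment B: exclude missing responses and estimate difficulty
# from answered examinees only.
p_ignored = resp[answered].mean()

print(f"p-value, missing scored wrong: {p_as_wrong:.3f}")  # 4/8 = 0.500
print(f"p-value, missing excluded:     {p_ignored:.3f}")   # 4/6 = 0.667
```

The gap between the two estimates grows with the omission rate, which is why the treatment of missing data matters most for items late in a timed test.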
Camenares, Devin – International Journal for the Scholarship of Teaching and Learning, 2022
Balancing assessment of learning outcomes with the expectations of students is a perennial challenge in education. Difficult exams, in which many students perform poorly, exacerbate this problem and can inspire a wide variety of interventions, such as a grading curve. However, addressing poor performance can sometimes distort or inflate grades and…
Descriptors: College Students, Student Evaluation, Tests, Test Items
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors an array of options for varying testing parameters, but the pedagogical impacts of some of these variations has yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Pengelley, James; Whipp, Peter R.; Rovis-Hermann, Nina – Educational Psychology Review, 2023
The aim of the present study is to reconcile previous findings (a) that testing mode has no effect on test outcomes or cognitive load (Comput Hum Behav 77:1-10, 2017) and (b) that younger learners' working memory processes are more sensitive to computer-based test formats (J Psychoeduc Assess 37(3):382-394, 2019). We addressed key methodological…
Descriptors: Scores, Cognitive Processes, Difficulty Level, Secondary School Students
Metsämuuronen, Jari – International Journal of Educational Methodology, 2020
Pearson product-moment correlation coefficient between item g and test score X, known as item-test or item-total correlation ("Rit"), and item-rest correlation ("Rir") are two of the most used classical estimators for item discrimination power (IDP). Both "Rit" and "Rir" underestimate IDP caused by the…
Descriptors: Correlation, Test Items, Scores, Difficulty Level
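The Metsämuuronen abstract defines the two classical discrimination indices it compares: item-test correlation ("Rit", item against the total score including the item) and item-rest correlation ("Rir", item against the total with the item removed). A short sketch on a toy 0/1 response matrix (hypothetical data, not from the paper) shows both computations; note that Rit is inflated relative to Rir because the item contributes to its own total.

```python
import numpy as np

# Toy scored responses: rows = examinees, columns = items (1 = correct).
X = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
])

total = X.sum(axis=1)  # test score for each examinee

def rit(g):
    """Item-test correlation: item g vs. total score (item included)."""
    return np.corrcoef(X[:, g], total)[0, 1]

def rir(g):
    """Item-rest correlation: item g vs. total score with item g removed."""
    rest = total - X[:, g]
    return np.corrcoef(X[:, g], rest)[0, 1]

for g in range(X.shape[1]):
    print(f"item {g}: Rit = {rit(g):+.3f}, Rir = {rir(g):+.3f}")
```

On this toy matrix Rit exceeds Rir for every item, which is the inflation the article's critique of both estimators starts from.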
Peabody, Michael R.; Wind, Stefanie A. – Measurement: Interdisciplinary Research and Perspectives, 2019
Differential Item Functioning (DIF) detection procedures provide validity evidence for proposed interpretations of test scores that can help researchers and practitioners ensure that test scores are free from potential bias, and that individual items do not create an advantage for any subgroup of examinees over another. In this study, we use the…
Descriptors: Item Response Theory, Test Items, Scores, Testing
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Mehmet Mevlüt Odaci; Erman Uzun – Journal of Educational Technology and Online Learning, 2024
This study aimed to reveal the attitudes of the students who were subjected to measurement and evaluation in an online testing environment towards the computer based testing (CBT) platform and the factors that affect their attitudes. It also examined the students' cognitive loads in the exam designed with multimedia elements from message design…
Descriptors: Cognitive Processes, Difficulty Level, Preservice Teachers, Student Attitudes
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
