Showing 1 to 15 of 76 results
Peer reviewed
PDF on ERIC
Jiban Khadka; Dirgha Raj Joshi; Krishna Prasad Adhikari; Bishnu Khanal – Journal of Educators Online, 2025
This study aims to explore the impact of the fairness of semester-end e-assessment in terms of policy provision, monitoring, and authenticity. A cross-sectional online survey design was employed with 346 students at Nepal Open University (NOU). The results were analyzed using t-tests, analysis of variance, and structural equation modeling.…
Descriptors: Foreign Countries, College Students, Open Universities, Computer Assisted Testing
Peer reviewed
Direct link
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
Peer reviewed
PDF on ERIC
Anela Hrnjičić; Adis Alihodžić – International Electronic Journal of Mathematics Education, 2024
Understanding the concepts related to real functions is essential in learning mathematics. To determine how students understand these concepts, an appropriate measurement tool is necessary. In this paper, we have created a web application using 32 items from the conceptual understanding of real functions (CURF) item bank. We conducted a…
Descriptors: Mathematical Concepts, College Freshmen, Foreign Countries, Computer Assisted Testing
Peer reviewed
Direct link
Liou, Gloria; Bonner, Cavan V.; Tay, Louis – International Journal of Testing, 2022
With the advent of big data and advances in technology, psychological assessments have become increasingly sophisticated and complex. Nevertheless, traditional psychometric issues concerning the validity, reliability, and measurement bias of such assessments remain fundamental in determining whether score inferences of human attributes are…
Descriptors: Psychometrics, Computer Assisted Testing, Adaptive Testing, Data
Peer reviewed
Direct link
Christopher D. Wilson; Kevin C. Haudek; Jonathan F. Osborne; Zoë E. Buck Bracey; Tina Cheuk; Brian M. Donovan; Molly A. M. Stuhlsatz; Marisol M. Santiago; Xiaoming Zhai – Journal of Research in Science Teaching, 2024
Argumentation is fundamental to science education, both as a prominent feature of scientific reasoning and as an effective mode of learning--a perspective reflected in contemporary frameworks and standards. The successful implementation of argumentation in school science, however, requires a paradigm shift in science assessment from the…
Descriptors: Middle School Students, Competence, Science Process Skills, Persuasive Discourse
Peer reviewed
Direct link
Esteban Guevara Hidalgo – International Journal for Educational Integrity, 2025
The COVID-19 pandemic had a profound impact on education, forcing many teachers and students who were not used to online education to adapt to an unanticipated reality by improvising new teaching and learning methods. Within the realm of virtual education, the evaluation methods underwent a transformation, with some assessments shifting towards…
Descriptors: Foreign Countries, Higher Education, COVID-19, Pandemics
Peer reviewed
Direct link
A. Corinne Huggins-Manley; Brandon M. Booth; Sidney K. D'Mello – Journal of Educational Measurement, 2022
The field of educational measurement places validity and fairness as central concepts of assessment quality. Prior research has proposed embedding fairness arguments within argument-based validity processes, particularly when fairness is conceived as comparability in assessment properties across groups. However, we argue that a more flexible…
Descriptors: Educational Assessment, Persuasive Discourse, Validity, Artificial Intelligence
Peer reviewed
Direct link
Nikola Ebenbeck; Markus Gebhardt – Journal of Special Education Technology, 2024
Technologies that enable individualization for students have significant potential in special education. Computerized Adaptive Testing (CAT) refers to digital assessments that automatically adjust their difficulty level based on students' abilities, allowing for personalized, efficient, and accurate measurement. This article examines whether CAT…
Descriptors: Computer Assisted Testing, Students with Disabilities, Special Education, Grade 3
Peer reviewed
Direct link
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found to be useful to reduce response biases in personality assessments. However, conventional scoring methods for the MFC items result in ipsative data, hindering the wider applications of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
Peer reviewed
Direct link
Daniel R. Isbell; Benjamin Kremmel; Jieun Kim – Language Assessment Quarterly, 2023
In the wake of the COVID-19 boom in remote administration of language tests, it appears likely that remote administration will be a permanent fixture in the language testing landscape. Accordingly, language test providers, stakeholders, and researchers must grapple with the implications of remote proctoring for valid, fair, and just uses of tests.…
Descriptors: Distance Education, Supervision, Language Tests, Culture Fair Tests
Peer reviewed
Direct link
Blaženka Divjak; Petra Žugec; Katarina Pažur Aničić – International Journal of Mathematical Education in Science and Technology, 2024
Assessment is among the inevitable components of a curriculum and directs students' learning. E-assessment, prepared and administered using ICT, makes some aspects of the process easier but also brings certain challenges. This paper presents an e-assessment framework from a student perspective. Our study…
Descriptors: Student Evaluation, Computer Assisted Testing, Evaluation Methods, Student Attitudes
Peer reviewed
PDF on ERIC
Uysal, Ibrahim; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Scoring constructed-response items can be highly difficult, time-consuming, and costly in practice. Improvements in computer technology have enabled automated scoring of constructed-response items. However, the application of automated scoring without an investigation of test equating can lead to serious problems. The goal of this study was to…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Test Format
Peer reviewed
Direct link
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Peer reviewed
Direct link
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Assessment in Education: Principles, Policy & Practice, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Peer reviewed
Direct link
von Zansen, Anna; Hilden, Raili; Laihanen, Emma – International Journal of Listening, 2022
In this study, we used Rasch measurement to investigate the fairness of the listening section of a national computerized high-stakes English test for differential item functioning (DIF) across gender subgroups. The computerized test format inspired us to investigate whether the items measure listening comprehension differently for females and…
Descriptors: High Stakes Tests, Listening Comprehension Tests, Listening Comprehension, Gender Differences