Showing 46 to 60 of 4,240 results
Peer reviewed
Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantically rich textual representations, have been increasingly used to represent short answers and predict grades. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
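As a purely illustrative aside (not the method evaluated in this entry), the embedding-based grading idea can be sketched by embedding a student answer and a reference answer and turning their cosine similarity into a score; the sentence-transformers model and the 0-5 linear rubric below are arbitrary assumptions.

# Generic illustration of embedding-based short answer scoring (not the
# authors' specific method); model choice and rubric are assumptions.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
answer = "Plants turn sunlight into chemical energy that is stored as sugar."

ref_vec, ans_vec = model.encode([reference, answer])

# Cosine similarity in [-1, 1]; map to a 0-5 grade as a crude linear rubric.
cos = float(np.dot(ref_vec, ans_vec) / (np.linalg.norm(ref_vec) * np.linalg.norm(ans_vec)))
grade = round(max(cos, 0.0) * 5, 1)
print(f"similarity={cos:.2f}, predicted grade={grade}")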
Peer reviewed
Park, Seohee; Kim, Kyung Yong; Lee, Won-Chan – Journal of Educational Measurement, 2023
Multiple measures, such as multiple content domains or multiple types of performance, are used in various testing programs to classify examinees for screening or selection. Despite the widespread use of multiple measures, there is little research on their classification consistency and accuracy. Accordingly, this study introduces an…
Descriptors: Testing, Computation, Classification, Accuracy
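For readers unfamiliar with the two concepts, a minimal simulation (not the study's procedure; the reliability, cut score, and sample size are made-up assumptions) shows how classification consistency (agreement between two parallel measures) and classification accuracy (agreement with the true classification) can be estimated:

# Illustrative simulation of classification consistency and accuracy for a
# single cut score; all values are assumptions, not from the study.
import numpy as np

rng = np.random.default_rng(0)
n, cut, reliability = 100_000, 0.5, 0.8

true = rng.normal(size=n)                          # latent true scores
err_sd = np.sqrt(1 / reliability - 1)              # error SD implied by reliability
form_a = true + rng.normal(scale=err_sd, size=n)   # two parallel observed measures
form_b = true + rng.normal(scale=err_sd, size=n)

true_pass = true >= cut
pass_a, pass_b = form_a >= cut, form_b >= cut

consistency = np.mean(pass_a == pass_b)   # agreement across parallel measures
accuracy = np.mean(pass_a == true_pass)   # agreement with the true classification
print(f"consistency={consistency:.3f}, accuracy={accuracy:.3f}")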
Peer reviewed
Hattingh, Sherene; Northcote, Maria – Journal of Further and Higher Education, 2023
In the last few decades, the expansion of online learning and online assessment has attracted both negative and positive attention, some of which has celebrated the flexibility and individualised affordances of online learning contexts, while also lamenting the overuse of one-size-fits-all teaching approaches. Virtual learning contexts have been…
Descriptors: Individualized Instruction, Computer Assisted Testing, Literature Reviews, Online Courses
Peer reviewed
Kurdi, Ghader; Leo, Jared; Parsia, Bijan; Sattler, Uli; Al-Emari, Salam – International Journal of Artificial Intelligence in Education, 2020
While exam-style questions are a fundamental educational tool serving a variety of purposes, manual construction of questions is a complex process that requires training, experience, and resources. This, in turn, hinders and slows down the use of educational activities (e.g. providing practice questions) and new advances (e.g. adaptive testing)…
Descriptors: Computer Assisted Testing, Adaptive Testing, Natural Language Processing, Questioning Techniques
Peer reviewed
Bennett, Randy E. – Educational Measurement: Issues and Practice, 2022
This commentary focuses on one of the positive impacts of COVID-19, which was to tie societal inequity to testing in a manner that could motivate the reimagining of our field. That reimagining needs to account for our nation's dramatically changing demographics so that assessment generally, and standardized testing specifically, better fit the…
Descriptors: COVID-19, Pandemics, Social Justice, Testing
Peer reviewed
Caroline Larson; Hannah R. Thomas; Jason Crutcher; Michael C. Stevens; Inge-Marie Eigsti – Review Journal of Autism and Developmental Disorders, 2025
Autism Spectrum Disorder (ASD) is a heterogeneous condition associated with differences in functional neural connectivity relative to neurotypical (NT) peers. Language-based functional connectivity represents an ideal context in which to characterize connectivity because language is heterogeneous and linked to core features in ASD, and NT language…
Descriptors: Autism Spectrum Disorders, Brain, Brain Hemisphere Functions, Language Processing
Peer reviewed
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about the reliability and academic honesty of multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Peer reviewed
Shunji Wang; Katerina M. Marcoulides; Jiashan Tang; Ke-Hai Yuan – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A necessary step in applying bi-factor models is to evaluate the need for domain factors when a general factor is in place. Conventional null hypothesis testing (NHT) has commonly been used for this purpose. However, conventional NHT faces challenges when the domain loadings are weak or the sample size is insufficient. This article proposes…
Descriptors: Hypothesis Testing, Error of Measurement, Comparative Analysis, Monte Carlo Methods
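The conventional NHT referred to here is typically a chi-square difference (likelihood-ratio) test between a nested model without domain factors and the bi-factor model. A minimal sketch of that conventional test follows, with placeholder fit statistics rather than values from the article, and without attempting the article's proposed alternative:

# Conventional chi-square difference (likelihood-ratio) test between a nested
# constrained model and a bi-factor model; fit values are placeholders.
from scipy.stats import chi2

chisq_single, df_single = 312.4, 54       # hypothetical constrained-model fit
chisq_bifactor, df_bifactor = 268.9, 44   # hypothetical bi-factor-model fit

delta_chisq = chisq_single - chisq_bifactor
delta_df = df_single - df_bifactor
p_value = chi2.sf(delta_chisq, delta_df)

print(f"delta chi-square={delta_chisq:.1f}, df={delta_df}, p={p_value:.4f}")
# A small p-value is conventionally read as evidence that the domain factors
# improve fit beyond the general factor alone.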
Peer reviewed
Caspar J. Van Lissa; Eli-Boaz Clapper; Rebecca Kuiper – Research Synthesis Methods, 2024
The product Bayes factor (PBF) synthesizes evidence for an informative hypothesis across heterogeneous replication studies. It can be used when fixed- or random-effects meta-analysis falls short, for example when effect sizes are incomparable and cannot be pooled, or when studies diverge substantially in their populations, study designs, and…
Descriptors: Hypothesis Testing, Evaluation Methods, Replication (Evaluation), Sample Size
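At its core, the PBF aggregates per-study Bayes factors for the same informative hypothesis multiplicatively. A minimal arithmetic sketch with made-up per-study values (not the authors' implementation):

# Minimal sketch of combining per-study Bayes factors into a product Bayes
# factor; the per-study values are hypothetical placeholders.
import math

study_bfs = [2.4, 1.1, 3.8, 0.9, 5.2]   # hypothetical per-study Bayes factors

# Sum logs rather than multiplying directly, for numerical stability.
log_pbf = sum(math.log(bf) for bf in study_bfs)
pbf = math.exp(log_pbf)

print(f"product Bayes factor = {pbf:.2f}")
# PBF > 1 indicates that, aggregated over studies, the evidence favours the
# informative hypothesis over the comparison hypothesis.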
Peer reviewed
PDF on ERIC
Tan, Teck Kiang – Practical Assessment, Research & Evaluation, 2023
Researchers comparing group means often have hypotheses about the state of affairs in the population from which their data were sampled. The classical frequentist approach offers one way of testing such hypotheses: an ANOVA of the null hypothesis that the means do not differ, followed by multiple comparisons…
Descriptors: Comparative Analysis, Hypothesis Testing, Statistical Analysis, Guidelines
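The classical frequentist workflow this entry contrasts itself with can be sketched as an omnibus ANOVA followed by Bonferroni-corrected pairwise comparisons; the data below are simulated assumptions, and the article's own alternative approach is not attempted here:

# Classical frequentist workflow: omnibus ANOVA followed by Bonferroni-
# corrected pairwise t-tests; the data are simulated for illustration.
from itertools import combinations
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(1)
groups = {
    "A": rng.normal(0.0, 1, 50),
    "B": rng.normal(0.3, 1, 50),
    "C": rng.normal(0.8, 1, 50),
}

f_stat, p_omnibus = f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_omnibus:.4f}")

pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    t, p = ttest_ind(groups[g1], groups[g2])
    p_adj = min(p * len(pairs), 1.0)    # Bonferroni correction
    print(f"{g1} vs {g2}: t={t:.2f}, adjusted p={p_adj:.4f}")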
Doris Zahner; Jeffrey T. Steedle; James Soland; Catherine Welch; Qi Qin; Kathryn Thompson; Richard Phelps – Annenberg Institute for School Reform at Brown University, 2025
The "Standards for Educational and Psychological Testing" have served as a cornerstone for best practices in assessment. As the field evolves, so must these standards, with regular revisions ensuring they reflect current knowledge and practice. The National Council on Measurement in Education (NCME) conducted a survey to gather feedback…
Descriptors: Psychological Testing, Best Practices, National Standards, Review (Reexamination)
Peer reviewed
David Eubanks; Scott A. Moore – Assessment Update, 2025
Assessment and institutional research offices have too much data and too little time. Standard reporting often crowds out opportunities for innovative research. Fortunately, advancements in data science now offer a clear solution. It is equal parts technique and philosophy. The first and easiest step is to modernize data work. This column…
Descriptors: Higher Education, Educational Assessment, Data Science, Research Methodology
Santi Lestari – Research Matters, 2025
The ability to draw visual representations such as diagrams and graphs is considered fundamental to science learning. Science exams therefore often include questions which require students to draw a visual representation, or to augment a partially provided one. The design features of such questions (e.g., layout of diagrams, amount of answer…
Descriptors: Science Education, Secondary Education, Visual Aids, Foreign Countries
Peer reviewed
Nguyen, Hong Thu Thi – Issues in Educational Research, 2023
This study investigates the implementation of unproctored assignment-based assessment in an online teaching environment, compared with on-site assessment. A mixed-methods approach was used with 284 English-major students, 6 teachers, and 4 experts at a university in Vietnam. Data collection instruments included a…
Descriptors: Foreign Countries, Student Evaluation, Online Courses, Computer Assisted Testing
Peer reviewed
Qutaiba I. Ali – Discover Education, 2024
This paper contributes to the ongoing efforts aimed at enhancing Outcome-Based Education (OBE) assessment methodologies by addressing critical gaps and exploring new solutions. Our work focuses on two main areas: first, the study proposes an improved assessment method for OBE that refines traditional approaches by classifying course…
Descriptors: Outcome Based Education, Evaluation Methods, Student Evaluation, Artificial Intelligence