Showing 1 to 15 of 1,077 results
Peer reviewed
Lauren E. Bates; Sarah J. Myers; Edward L. DeLosh; Matthew G. Rhodes – Psychology Learning and Teaching, 2025
The present work assessed a quizzing method that combines the benefits of retrieval practice and feedback: learners retake quizzes, with feedback provided, until they achieve a perfect score (i.e., "mastery quizzing"). Across four experiments (n = 952; age 18-76, M = 37.10, SD = 11.61; 50% female, 48% male, 2% other…
Descriptors: Mastery Tests, Retention (Psychology), Evaluation Methods, Adults
Peer reviewed
Sinharay, Sandip – Journal of Educational Measurement, 2023
Technical difficulties and other unforeseen events occasionally lead to incomplete data on educational tests, which necessitates reporting imputed scores to some examinees. While several approaches exist for reporting imputed scores, there is no guidance on reporting their uncertainty. In this paper,…
Descriptors: Evaluation Methods, Scores, Standardized Tests, Simulation
Peer reviewed
Andrew Wass – Journal of Dance Education, 2024
Ensemble Thinking (ET) is a toolkit of movement scores developed by dance-maker Nina Martin in the 1980s in New York City. The scores of ET arose out of a confluence of Martin's choreographic and improvisational performance practices. The impetus for developing ET was to develop a technical language for creating and discussing improvised dance.…
Descriptors: Scores, Dance, Dance Education, Creative Activities
Peer reviewed
Wendy Chan – Asia Pacific Education Review, 2024
As evidence from evaluation and experimental studies continues to influence decision making and policymaking, applied researchers and practitioners require tools to derive valid and credible inferences. Over the past several decades, research in causal inference has progressed with the development and application of propensity scores. Since their…
Descriptors: Probability, Scores, Causal Models, Statistical Inference
Mohammad Ghulam Ali – Online Submission, 2025
This research article establishes the relationship between key performance indicators and the academic and research quality, performance assessment, and quality assurance of any large multidisciplinary academic or research institution or university. Both qualitative and quantitative indicators are proposed below and are…
Descriptors: Research Universities, Reputation, Educational Quality, Educational Assessment
Peer reviewed
Jingwen Wang; Xiaohong Yang; Dujuan Liu – International Journal of Web-Based Learning and Teaching Technologies, 2024
The large-scale expansion of online courses has created a crisis of course quality. In this study, we first established an evaluation index system for online courses using factor analysis, encompassing three key constructs: course resource construction, course implementation, and teaching effectiveness. Subsequently, we employed factor…
Descriptors: Educational Quality, Online Courses, Course Evaluation, Models
Peer reviewed
Deschênes, Marie-France; Dionne, Éric; Dorion, Michelle; Grondin, Julie – Practical Assessment, Research & Evaluation, 2023
The aggregate method for scoring concordance tests requires that test-item weights be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of the experts constituting the reference panel remains a critical issue in the use of these tests.…
Descriptors: Scoring, Tests, Evaluation Methods, Test Items
Peer reviewed
Mosquera, Jose Miguel Llanos; Suarez, Carlos Giovanny Hidalgo; Guerrero, Victor Andres Bucheli – Education and Information Technologies, 2023
This paper proposes to evaluate learning efficiency by implementing a flipped classroom with automatic source-code evaluation, based on the Kirkpatrick evaluation model, in a CS1 programming course. The experiment was conducted with 82 students from two CS1 courses: an experimental group (EG = 56) and a control group (CG = 26). Each…
Descriptors: Flipped Classroom, Coding, Programming, Evaluation Methods
Huan Liu – ProQuest LLC, 2024
In many large-scale testing programs, examinees are frequently categorized into different performance levels. These classifications are then used to make high-stakes decisions about examinees in contexts such as in licensure, certification, and educational assessments. Numerous approaches to estimating the consistency and accuracy of this…
Descriptors: Classification, Accuracy, Item Response Theory, Decision Making
Peer reviewed
Bouwer, Renske; Koster, Monica; van den Bergh, Huub – Assessment in Education: Principles, Policy & Practice, 2023
Assessing students' writing performance is essential to adequately monitor and promote individual writing development, but it is also a challenge. The present research investigates a benchmark rating procedure for assessing texts written by upper-elementary students. In two studies we examined whether a benchmark rating procedure (1) leads to…
Descriptors: Benchmarking, Writing Evaluation, Evaluation Methods, Elementary School Students
Peer reviewed
Baumgartner, Michael; Ambühl, Mathias – Sociological Methods & Research, 2023
Consistency and coverage are two core parameters of model fit used by configurational comparative methods (CCMs) of causal inference. Among causal models that perform equally well in other respects (e.g., robustness or compliance with background theories), those with higher consistency and coverage are typically considered preferable. Finding the…
Descriptors: Causal Models, Evaluation Methods, Goodness of Fit, Scores
Peer reviewed
Lauren Prather; Nancy Creaghead; Jennifer Vannest; Lisa Hunter; Amy Hobek; Tamika Odum; Mekibib Altaye; Juanita Lackey – Perspectives of the ASHA Special Interest Groups, 2025
Purpose: The lack of appropriate assessments affects populations presumed to be most at risk for speech and language concerns, one of them being children with a history of preterm birth. This study aims to examine whether cultural bias is present in two currently available language tests for Black children under 3 years of age: the Communication…
Descriptors: African American Children, Premature Infants, Evaluation Methods, Language Tests
Peer reviewed
Demir, Suleyman – International Journal of Assessment Tools in Education, 2022
This study compares normality tests across different sample sizes using simulated data with varying kurtosis and skewness coefficients. To this end, data were first generated with the MATLAB program for different skewness/kurtosis coefficients and different sample sizes. The normality analysis…
Descriptors: Sample Size, Comparative Analysis, Computer Software, Evaluation Methods
Peer reviewed
Corinne Huggins-Manley; Anthony W. Raborn; Peggy K. Jones; Ted Myers – Journal of Educational Measurement, 2024
The purpose of this study is to develop a nonparametric DIF method that (a) compares focal groups directly to the composite group that will be used to develop the reported test score scale, and (b) allows practitioners to explore for DIF related to focal groups stemming from multicategorical variables that constitute a small proportion of the…
Descriptors: Nonparametric Statistics, Test Bias, Scores, Statistical Significance
Peer reviewed
Jing Chen; Bei Fang; Hao Zhang; Xia Xue – Interactive Learning Environments, 2024
High dropout rates are universal in massive open online courses (MOOCs) owing to the separation of teachers and learners in space and time. Dropout prediction using machine learning methods is an essential prerequisite for identifying potential at-risk learners and improving learning. It has attracted much attention, and there have emerged…
Descriptors: MOOCs, Potential Dropouts, Prediction, Artificial Intelligence