Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 186 |
| Since 2022 (last 5 years) | 1065 |
| Since 2017 (last 10 years) | 2887 |
| Since 2007 (last 20 years) | 6172 |
Audience
| Audience | Records |
| --- | --- |
| Teachers | 480 |
| Practitioners | 358 |
| Researchers | 152 |
| Administrators | 122 |
| Policymakers | 51 |
| Students | 44 |
| Parents | 32 |
| Counselors | 25 |
| Community | 15 |
| Media Staff | 5 |
| Support Staff | 3 |
Location
| Location | Records |
| --- | --- |
| Australia | 183 |
| Turkey | 157 |
| California | 133 |
| Canada | 124 |
| New York | 118 |
| United States | 112 |
| Florida | 107 |
| China | 103 |
| Texas | 72 |
| United Kingdom | 72 |
| Japan | 70 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 5 |
| Meets WWC Standards with or without Reservations | 11 |
| Does not meet standards | 8 |
Bahi, Halima; Necibi, Khaled – International Journal of Computer-Assisted Language Learning and Teaching, 2020
Pronunciation teaching is an important stage in language learning activities. This article tackles the pronunciation scoring problem, where research has demonstrated relatively low human-human and human-machine agreement rates, which makes teachers skeptical about the relevance of automatic scores. To overcome these limitations, a fuzzy combination of two… (A minimal illustration of such a combination follows this entry.)
Descriptors: Oral Reading, Reading Fluency, Pronunciation, Learning Activities
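The abstract above is cut off before it names the two scorers being combined. Purely as an illustration of what a fuzzy combination of two pronunciation scores can look like (the inputs, membership functions, and rules below are invented, not taken from the article):

```python
# Illustrative sketch only: the article's actual scorers and fuzzy rules are not
# given in the truncated abstract, so the inputs (acoustic_score, fluency_score),
# membership functions, and rules below are invented.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_combine(acoustic_score, fluency_score):
    """Blend two scores in [0, 1] into a single rating via simple fuzzy rules."""
    low_a = tri(acoustic_score, -0.5, 0.0, 0.6)
    high_a = tri(acoustic_score, 0.4, 1.0, 1.5)
    low_f = tri(fluency_score, -0.5, 0.0, 0.6)
    high_f = tri(fluency_score, 0.4, 1.0, 1.5)
    # Rule strengths (min acts as AND), then a centroid-like defuzzification.
    good = min(high_a, high_f)
    fair = max(min(high_a, low_f), min(low_a, high_f))
    poor = min(low_a, low_f)
    total = (good + fair + poor) or 1.0
    return (1.0 * good + 0.5 * fair + 0.0 * poor) / total

print(fuzzy_combine(0.7, 0.3))  # 0.5: strong acoustics but weak fluency -> "fair"
```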
Dinnesen, Megan Schneider; Olszewski, Arnold; Breit-Smith, Allison; Guo, Ying – Communication Disorders Quarterly, 2020
Expanding on the extant body of research on content validity, this study applied the tenets of content validity research to the development of an expository book reading intervention focused on science for preschool students with language impairment. This example case explains how, guided by content validity research in healthcare interventions,…
Descriptors: Preschool Children, Language Impairments, Intervention, Content Validity
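The entry above does not say which content validity statistic was computed. One quantity that appears routinely in content validity work is the item-level content validity index (I-CVI); the sketch below shows the calculation with made-up expert ratings and is a generic illustration, not the authors' procedure.

```python
# Generic illustration of the item-level content validity index (I-CVI): the share
# of expert raters who score an item 3 or 4 on a 4-point relevance scale. The
# ratings and component names are hypothetical; the abstract does not state which
# index the authors computed.

def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item relevant (>= 3 of 4)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

# Hypothetical ratings from five expert reviewers for three intervention components.
components = {
    "vocabulary prompts": [4, 4, 3, 4, 3],
    "science read-aloud plan": [4, 3, 3, 4, 4],
    "comprehension check": [2, 3, 4, 3, 3],
}
for name, ratings in components.items():
    print(f"{name}: I-CVI = {i_cvi(ratings):.2f}")
```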
Lavi, Rea; Dori, Yehudit Judy; Wengrowicz, Niva; Dori, Dov – IEEE Transactions on Education, 2020
Contribution: A rubric for assessing the systems thinking expressed in conceptual models of technological systems has been constructed and assessed using a formal methodology. The rubric, a synthesis of prior findings in science and engineering education, forms a framework for improving communication between science and engineering educators.…
Descriptors: Models, Engineering Education, Teamwork, Scoring Rubrics
Chasteen, Stephanie V.; Scherr, Rachel E. – Physical Review Physics Education Research, 2020
Given the insufficient number of well-qualified future physics teachers in the U.S., physics programs often seek guidance for how to address this national need. Measurement tools can provide such guidance, by both defining excellence in physics teacher education (PTE) and providing a means to measure progress towards excellence. This paper…
Descriptors: Physics, Science Teachers, Teacher Education Programs, Scoring Rubrics
Burkholder, E. W.; Miles, J. K.; Layden, T. J.; Wang, K. D.; Fritz, A. V.; Wieman, C. E. – Physical Review Physics Education Research, 2020
We introduce a template to (i) scaffold the problem solving process for students in the physics 1 course, and (ii) serve as a generic rubric for measuring how expertlike students are in their problem solving. This template is based on empirical studies of the problem solving practices of expert scientists and engineers, unlike most existing…
Descriptors: Physics, Science Instruction, Teaching Methods, Introductory Courses
Provasnik, Stephen; Dogan, Enis; Erberber, Ebru; Zheng, Xiaying – National Center for Education Statistics, 2020
Large-scale assessment programs, such as the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS), employ item response theory (IRT) and marginal estimation methods to estimate student proficiency in specific subjects such as mathematics, science, or reading. Each of these…
Descriptors: Student Evaluation, Evaluation Methods, Academic Achievement, Item Response Theory
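As background to the IRT and marginal estimation methods mentioned above, the sketch below evaluates a basic item response model and the likelihood of one response pattern. It is deliberately simplified relative to the operational TIMSS and PIRLS procedures, and every numeric value in it is invented.

```python
# Simplified background sketch: the two-parameter logistic (2PL) item response
# function and the likelihood of one response pattern. Item parameters, responses,
# and theta are invented; operational TIMSS/PIRLS scaling combines several IRT
# models and integrates this likelihood over a population proficiency distribution
# (marginal estimation) rather than fixing theta.
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response for proficiency theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]  # hypothetical (a, b) pairs
responses = [1, 1, 0]                           # 1 = correct, 0 = incorrect
theta = 0.3

likelihood = 1.0
for (a, b), x in zip(items, responses):
    p = p_correct(theta, a, b)
    likelihood *= p if x == 1 else (1.0 - p)

print(f"L(responses | theta={theta}) = {likelihood:.3f}")
```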
Dinnesen, Megan Schneider; Olszewski, Arnold; Breit-Smith, Allison; Guo, Ying – Grantee Submission, 2020
Expanding on the extant body of research on content validity, this study applied the tenets of content validity research to the development of an expository book reading intervention focused on science for preschool students with language impairment. This example case explains how, guided by content validity research in healthcare interventions,…
Descriptors: Preschool Children, Language Impairments, Intervention, Content Validity
Sung, Kyung Hee; Noh, Eun Hee; Chon, Kyong Hee – Asia Pacific Education Review, 2017
With increased use of constructed response items in large-scale assessments, the cost of scoring has been a major consideration (Noh et al. in KICE Report RRE 2012-6, 2012; Wainer and Thissen in "Applied Measurement in Education" 6:103-118, 1993). In response to the scoring cost issues, various automated systems for scoring…
Descriptors: Automation, Scoring, Social Studies, Test Items
Rhone, Jeffrey – General Music Today, 2017
The physical, social, and music attributes inherent to folk dancing make it an ideal component of music education curricula. The communal experience of folk dancing is unprecedented for many adults and children. These experiences are unique because folk dancing can foster individual and group learning through music, and noncompetitive play. There…
Descriptors: Folk Culture, Physical Education, Evaluation Methods, Student Evaluation
Soh, Kaycheng – Journal of Higher Education Policy and Management, 2017
World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common-sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of… (A minimal sketch of weight-and-sum aggregation follows this entry.)
Descriptors: Reputation, Colleges, Evaluation Methods, Institutional Characteristics
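The weight-and-sum aggregation the article critiques is easy to state concretely. The sketch below uses invented institutions, indicators, and weights, and illustrates one of the listed problems, assumed mutual compensation.

```python
# Hypothetical illustration of the weight-and-sum approach the article critiques:
# indicator scores are multiplied by fixed weights and summed into one ranking
# score. Institutions, indicators, scores, and weights are all invented.

weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30, "international": 0.10}

scores = {
    "University A": {"teaching": 85, "research": 70, "citations": 95, "international": 60},
    "University B": {"teaching": 75, "research": 90, "citations": 80, "international": 90},
}

def weighted_sum(indicator_scores):
    return sum(weights[k] * v for k, v in indicator_scores.items())

# "Assumed mutual compensation" in action: University A's weaker research score
# is offset by its stronger citations score, so the aggregate hides the difference.
for name, s in sorted(scores.items(), key=lambda kv: -weighted_sum(kv[1])):
    print(f"{name}: {weighted_sum(s):.1f}")
```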
Blakeslee, Theron; Chandler, Denny; Roeber, Edward; Kintz, Tara – Learning Professional, 2017
Formative assessment is one of the most effective tools teachers use to promote student learning, and watching oneself teach on video is one of the most effective ways to improve teaching. As part of a project for the Michigan Department of Education, the authors worked with eight teachers in Michigan who are using videos of their teaching…
Descriptors: Formative Evaluation, Student Evaluation, Scoring Rubrics, Video Technology
Gates, Leslie – Art Education, 2017
Throughout this article, the voice of the author will weave with those of practicing art educators in order to illustrate the tension between objective and subjective assessment methods. The collages that appear throughout this article were created by students to illustrate their assessment dilemmas and are used with permission to visually…
Descriptors: Art Education, Art Teachers, Student Evaluation, Evaluation Methods
Nalbantoglu Yilmaz, Funda – Online Submission, 2017
This study aimed to investigate, with the many-facet Rasch model, the leniency/severity, bias, and halo effects of the raters who scored the diagnostic trees prepared by teacher candidates. The study group consists of 24 teacher candidates who are taking a measurement and evaluation course from the students…
Descriptors: Scoring, Item Response Theory, Preservice Teachers, Interrater Reliability
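For readers unfamiliar with the many-facet Rasch model, the sketch below shows its simplest dichotomous form, in which rater severity sits alongside examinee ability and item difficulty. The parameter values are invented, and the study itself would presumably use a polytomous formulation for diagnostic-tree scores.

```python
# Simplest dichotomous form of the many-facet Rasch model: the log-odds of a
# credited score equal examinee ability (theta) minus item difficulty (delta)
# minus rater severity. A more severe rater lowers every probability, which is
# the leniency/severity effect examined in the study. All values are invented.
import math

def p_credit(theta, delta, severity):
    logit = theta - delta - severity
    return 1.0 / (1.0 + math.exp(-logit))

theta, delta = 0.5, 0.0
for rater, severity in [("lenient rater", -0.8), ("neutral rater", 0.0), ("severe rater", 0.8)]:
    print(f"{rater}: P(credit) = {p_credit(theta, delta, severity):.2f}")
```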
International Journal of Testing, 2019
These guidelines describe considerations relevant to the assessment of test takers in or across countries or regions that are linguistically or culturally diverse. The guidelines were developed by a committee of experts to help inform test developers, psychometricians, test users, and test administrators about fairness issues in support of the…
Descriptors: Test Bias, Student Diversity, Cultural Differences, Language Usage
Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C. – Journal of Research in Science Teaching, 2016
Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…
Descriptors: Science Tests, Scoring, Automation, Validity
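Automated-scoring studies such as this one are usually judged by how closely machine scores agree with human scores. The sketch below computes quadratic weighted kappa, one widely used agreement statistic, on made-up score vectors; it is a generic illustration, not necessarily the metric the article reports.

```python
# Quadratic weighted kappa (QWK), a widely used human-machine agreement statistic
# in automated-scoring studies. The score vectors are made up, and QWK is shown as
# a representative metric, not necessarily the one reported in this article.
import numpy as np

def quadratic_weighted_kappa(human, machine, n_categories):
    human, machine = np.asarray(human), np.asarray(machine)
    # Observed joint count matrix of (human score, machine score).
    observed = np.zeros((n_categories, n_categories))
    for h, m in zip(human, machine):
        observed[h, m] += 1
    # Expected counts under independence of the two marginal score distributions.
    expected = np.outer(np.bincount(human, minlength=n_categories),
                        np.bincount(machine, minlength=n_categories)) / len(human)
    # Quadratic disagreement weights.
    i, j = np.indices((n_categories, n_categories))
    w = (i - j) ** 2 / (n_categories - 1) ** 2
    return 1.0 - (w * observed).sum() / (w * expected).sum()

human_scores = [0, 1, 2, 2, 3, 1, 0, 2]
machine_scores = [0, 1, 2, 3, 3, 1, 1, 2]
print(round(quadratic_weighted_kappa(human_scores, machine_scores, n_categories=4), 3))
```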

