Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 186 |
| Since 2022 (last 5 years) | 1065 |
| Since 2017 (last 10 years) | 2887 |
| Since 2007 (last 20 years) | 6172 |
Audience
| Audience | Records |
| --- | --- |
| Teachers | 480 |
| Practitioners | 358 |
| Researchers | 152 |
| Administrators | 122 |
| Policymakers | 51 |
| Students | 44 |
| Parents | 32 |
| Counselors | 25 |
| Community | 15 |
| Media Staff | 5 |
| Support Staff | 3 |
Location
| Location | Records |
| --- | --- |
| Australia | 183 |
| Turkey | 157 |
| California | 133 |
| Canada | 124 |
| New York | 118 |
| United States | 112 |
| Florida | 107 |
| China | 103 |
| Texas | 72 |
| United Kingdom | 72 |
| Japan | 70 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 5 |
| Meets WWC Standards with or without Reservations | 11 |
| Does not meet standards | 8 |
Liu, Ming; Wang, Yuqi; Xu, Weiwei; Liu, Li – International Journal of Distance Education Technologies, 2017
The number of Chinese engineering students has increased greatly since 1999. Rating the quality of these students' English essays has thus become time-consuming and challenging. This paper presents a novel automatic essay scoring algorithm called PSOSVR, based on a machine learning algorithm, Support Vector Machine for Regression (SVR), and a…
Descriptors: Essays, English (Second Language), College Second Language Programs, Engineering Education
St. Pierre, Nathan A.; Wuttke, Brian C. – Update: Applications of Research in Music Education, 2017
This study sought to describe the prevalence of Standards-based grading (SBG) among practicing music teachers and report the rationale teachers provided for or against its use. Participants were music educators (N = 96) responsible for grading students. Most participants (52.08%, n = 50) indicated that they were not familiar with SBG. Many…
Descriptors: Grading, Standards, Music Education, Music Teachers
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment that remains in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education focused, specifically, on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
DeSanto, Dan; Nichols, Aaron – College & Research Libraries, 2017
This article presents the results of a faculty survey conducted at the University of Vermont during academic year 2014-2015. The survey asked faculty about: familiarity with scholarly metrics, metric-seeking habits, help-seeking habits, and the role of metrics in their department's tenure and promotion process. The survey also gathered faculty…
Descriptors: College Faculty, Teacher Surveys, Knowledge Level, Use Studies
Ballard, Laura – ProQuest LLC, 2017
Rater scoring has an impact on writing test reliability and validity. Thus, there has been a continued call for researchers to investigate issues related to rating (Crusan, 2015). Investigating the scoring process and understanding how raters arrive at particular scores are critical "because the score is ultimately what will be used in making…
Descriptors: Evaluators, Schemata (Cognition), Eye Movements, Scoring Rubrics
McLaughlin, Tara W.; Snyder, Patricia A.; Algina, James – Grantee Submission, 2017
The Learning Target Rating Scale (LTRS) is a measure designed to evaluate the quality of teacher-developed learning targets for embedded instruction for early learning. In the present study, we examined the measurement dependability of LTRS scores by conducting a generalizability study (G-study). We used a partially nested, three-facet model to…
Descriptors: Generalizability Theory, Scores, Rating Scales, Evaluation Methods
Li, Haiying; Gobert, Janice; Dickler, Rachel – International Educational Data Mining Society, 2017
Scientific explanations, which include a claim, evidence, and reasoning (CER), are frequently used to measure students' deep conceptual understandings of science. In this study, we developed an automated scoring approach for the CER that students constructed as a part of virtual inquiry (e.g., formulating questions, analyzing data, and warranting…
Descriptors: Automation, Science Instruction, Inquiry, Educational Assessment
Menéndez-Varela, José-Luis; Gregori-Giralt, Eva – Assessment & Evaluation in Higher Education, 2016
Rubrics have attained considerable importance in the authentic and sustainable assessment paradigm; nevertheless, few studies have examined their contribution to validity, especially outside the domain of educational studies. This empirical study used a quantitative approach to analyse the validity of a rubrics-based performance assessment. Raters…
Descriptors: Scoring Rubrics, Validity, Performance Based Assessment, College Freshmen
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
Levine, William H.; Betzner, Michelle; Autry, Kevin S. – Discourse Processes: A multidisciplinary journal, 2016
Recent research has provided evidence that the information provided before a story--a spoiler--may increase the enjoyment of that story, perhaps by increasing the processing fluency experienced during reading. In one experiment, we tested the reliability of these findings by closely replicating existing methods and the generality of these findings…
Descriptors: Literary Genres, Reading Fluency, Reliability, Reading Processes
Miller, Janette K. – ProQuest LLC, 2016
This policy analysis project focused on states' policies regarding social media use in education. Currently, policies, practices and laws are not keeping pace with the rapidly changing nature of technology. As a result of the quick advancement of social media practices, the need exists for organic policies and practices within the educational…
Descriptors: State Policy, Social Media, Technology Uses in Education, Educational Technology
Moore, Kendall Ryan – ProQuest LLC, 2016
The purpose of this study was to develop a jazz improvisation rubric for the evaluation of collegiate jazz improvisation. To create this measure, research objectives were devised to investigate the aurally-observed performer-controlled components of improvisation, which aurally-observed components should be evaluated in an improvisatory…
Descriptors: Music Education, Creativity, Creative Activities, Measurement Techniques
Yuan, Min; Recker, Mimi M. – AERA Online Paper Repository, 2016
This paper investigates how people applied and perceived the utility of rubrics for evaluating the quality of Open Educational Resources, and whether teachers and non-teachers showed differences. Forty-four participants evaluated 20 OER using three quality rubrics (comprised of 17 indicators), and reported their perceptions. Results showed that…
Descriptors: Scoring Rubrics, Educational Resources, Shared Resources and Services, Educational Quality
Grunert, Megan L.; Raker, Jeffrey R.; Murphy, Kristen L.; Holme, Thomas A. – Journal of Chemical Education, 2013
The concept of assigning partial credit on multiple-choice test items is considered for items from ACS Exams. Because the items on these exams, particularly the quantitative items, use common student errors to define incorrect answers, it is possible to assign partial credits to some of these incorrect responses. To do so, however, it becomes…
Descriptors: Multiple Choice Tests, Scoring, Scoring Rubrics, Science Tests
Attali, Yigal; Lewis, Will; Steier, Michael – Language Testing, 2013
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This…
Descriptors: Scoring, Essay Tests, Reliability, High Stakes Tests
