Showing 1 to 15 of 58 results
Peer reviewed
Direct link
Kroc, Edward; Olvera Astivia, Oscar L. – Educational and Psychological Measurement, 2022
Setting cutoff scores is one of the most common practices when using scales for classification purposes. This process is usually done univariately, with each optimal cutoff value decided sequentially, subscale by subscale. While it is widely known that this process necessarily reduces the probability of "passing" such a test,…
Descriptors: Multivariate Analysis, Cutting Scores, Classification, Measurement
Peer reviewed
Direct link
Kelly, Anthony – British Educational Research Journal, 2021
The COVID pandemic and the cancellation of state examinations caused unprecedented turmoil in the education systems on both sides of the Irish Sea. As the policy of calculating grades using purpose-built algorithms came undone in the face of a barrage of appeal, protest and legal action, the context in which the policies had been devised…
Descriptors: Grades (Scholastic), Scoring Formulas, Testing, COVID-19
Feldman, Jo – Educational Leadership, 2018
Have teachers become too dependent on points? This article explores educators' dependency on their points systems, and the ways that points can distract teachers from really analyzing students' capabilities and achievements. Feldman argues that using a more subjective grading system can help illuminate crucial information about students and what…
Descriptors: Grading, Evaluation Methods, Evaluation Criteria, Achievement Rating
Peer reviewed
Direct link
Peterson, Claudette M.; Peterson, Tim O. – Journal of Management Education, 2016
As professors, we each have our own approach to grading, one that allows us to assess learning and provide useful feedback to our students yet is not too onerous. This article explains one approach we have used that differs from the standard grading scales we often hear about from our colleagues. Rather than being based on 100 points or 100% over the…
Descriptors: Grading, Student Evaluation, Evaluation Criteria, Evaluation Methods
Peer reviewed
Direct link
Wise, Steven L.; Kingsbury, G. Gage – Journal of Educational Measurement, 2016
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. Using data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent…
Descriptors: Achievement Tests, Student Motivation, Test Wiseness, Adaptive Testing
Peer reviewed
Direct link
Martin, Jeremy P. – Change: The Magazine of Higher Learning, 2015
Rankings are a powerful force in higher education, swaying the enrollment decisions of prospective students and affecting the opinions of parents, board members, and policymakers. In the words of one provost, "The rankings matter to our university because they matter to people who matter to us." Rankings are also a business--one that is…
Descriptors: Higher Education, Achievement Rating, Institutional Characteristics, Reputation
Peer reviewed
PDF on ERIC
Ryan, Brendan M. – Higher Education Studies, 2017
This paper is a discussion and critical review of grading practices at a large flagship public university. In this paper, I examine the rights a student has when calling into question the authority and decision-making abilities of teachers in a classroom setting. Following my recent experience with a professor (noted at the beginning of this…
Descriptors: Protocol Analysis, Higher Education, Learning Experience, Grading
Guskey, Thomas R.; Jung, Lee Ann – Educational Leadership, 2016
Many educators consider grades calculated from statistical algorithms more accurate, objective, and reliable than grades they calculate themselves. But in this research, the authors first asked teachers to use their professional judgment to choose a summary grade for hypothetical students. When the researchers compared the teachers' grade with the…
Descriptors: Grading, Computer Assisted Testing, Interrater Reliability, Grades (Scholastic)
Northwest Evaluation Association, 2016
Northwest Evaluation Association™ (NWEA™) is committed to providing partners with useful tools to help make inferences from Measures of Academic Progress® (MAP®) interim assessment scores. One important tool is the concordance table between MAP and state summative assessments. Concordance tables have been used for decades to relate scores on…
Descriptors: Tables (Data), Benchmarking, Scoring Formulas, Scores
Peer reviewed
Direct link
Van Hecke, Tanja – Teaching Mathematics and Its Applications, 2015
Optimal assessment tools should measure students' knowledge correctly and without bias within a limited time. One method for automating scoring is multiple-choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: number-right scoring, the initial correction (IC) and…
Descriptors: Multiple Choice Tests, Error Correction, Grading, Evaluation Methods
Northwest Evaluation Association, 2015
Concordance tables have been used for decades to relate scores on different tests measuring similar but distinct constructs. These tables, typically derived from statistical linking procedures, provide a direct link between scores on different tests and serve various purposes. Aside from describing how a score on one test relates to performance on…
Descriptors: Outcome Measures, Tables (Data), Language Arts, English Instruction
Peer reviewed
Direct link
Yancey, Kathleen Blake – Theory Into Practice, 2015
How does one grade an electronic portfolio? This is a question I have thought about, acted on, and written about, primarily in reference to ePortfolios used in writing classrooms (Yancey, McElroy, & Powers, 2013). But what happens when the content and developmental levels are changed, in this case from an undergraduate first-year…
Descriptors: Grading, Portfolio Assessment, Electronic Publishing, Teaching Methods
Peer reviewed
Direct link
Fleet, Wendy – Accounting Education, 2013
As academics, we often assume that allocating marks to a task will influence student decision-making when it comes to completing that task. Marks are used by lecturers to indicate the relative importance of each of the criteria used for marking the assessment task, and we expect students to respond to the mark allocation. This Postcard suggests…
Descriptors: Task Analysis, Decision Making, Evaluation Criteria, Student Attitudes
Peer reviewed
Direct link
Runco, Mark A.; Acar, Selcuk – Creativity Research Journal, 2012
Divergent thinking (DT) tests are very often used in creativity studies. Certainly DT does not guarantee actual creative achievement, but tests of DT are reliable and reasonably valid predictors of certain performance criteria. The validity of DT is described as reasonable because validity is not an all-or-nothing attribute, but is, instead, a…
Descriptors: Creativity, Creative Activities, Creative Thinking, Test Validity
Northwest Evaluation Association, 2014
Recently, Northwest Evaluation Association (NWEA) completed a study to connect the scale of the Minnesota Comprehensive Assessments (MCA) Testing Program, used for Minnesota's mathematics and reading assessments, with NWEA's RIT (Rasch Unit) scale. Information from the state assessments was used to establish performance-level scores on…
Descriptors: Alignment (Education), Testing Programs, State Programs, Mathematics Tests