Showing 1 to 15 of 28 results
Peer reviewed | Full text PDF on ERIC
Wan Fazwani Wan Mat; Lim Hooi Lian – Journal of Education and Learning (EduLearn), 2025
This bibliometric article examines the current state of publication in the field of classroom assessment, exploring the productivity and influence of countries, institutions, and authors. A search query on the Scopus database using the terms "classroom assessment" or "classroom-based assessment" or "assessment for…
Descriptors: Alternative Assessment, Student Evaluation, Bibliometrics, Formative Evaluation
Peer reviewed | Full text PDF on ERIC
Thippayacharoen, Thanakrit; Hoofd, Chonlatee; Pala, Napat; Sameephet, Banchakarn; Satthamnuwong, Bhirawit – LEARN Journal: Language Education and Acquisition Research Network, 2023
Research on English Medium Instruction (EMI) is rapidly increasing and well documented worldwide; however, recent studies have placed less emphasis on assessment in EMI classrooms. Indeed, assessment plays a significant role in informing teaching and learning competencies, but what to assess and how to assess are questions which have been…
Descriptors: Language of Instruction, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed | Full text PDF on ERIC
Robbins, Joy; Firth, Amanda; Evans, Maria – Practitioner Research in Higher Education, 2018
Work-based assessment (WBA) is a common but contentious practice increasingly used to grade university students on professional degrees. A key issue in WBA is the potentially low assessment literacy of the assessors, which can lead to a host of unintended results, including grade inflation. We identified grade inflation in the WBA of the clinical…
Descriptors: Grade Inflation, Weighted Scores, Evaluation Methods, Evaluation Research
Peer reviewed | Direct link
Fives, Helenrose; Barnes, Nicole; Dacey, Charity; Gillis, Anna – Teacher Educator, 2016
We conducted a content analysis of 27 assessment textbooks to determine how assessment planning was framed in texts for preservice teachers. We identified eight assessment planning themes: alignment, assessment purpose and types, reliability and validity, writing goals and objectives, planning specific assessments, unpacking, overall assessment…
Descriptors: Student Evaluation, Lesson Plans, Knowledge Base for Teaching, Textbook Evaluation
Peer reviewed | Direct link
Gardner, John – Oxford Review of Education, 2013
Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes, events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…
Descriptors: Educational Assessment, Public Opinion, Error of Measurement, Foreign Countries
Peer reviewed | Full text PDF on ERIC
Bowman, Nicholas A. – Research & Practice in Assessment, 2013
Asking college students how much they have learned or grown is a common assessment practice in student affairs and elsewhere. Unfortunately, recent research suggests that these self-reported gains do a very poor job of measuring actual student learning and growth. This paper provides an overview of the psychological process of how students likely…
Descriptors: College Students, Student Development, Student Improvement, Achievement Gains
Peer reviewed | Direct link
Mislevy, Robert J.; Haertel, Geneva; Cheng, Britte H.; Ructtinger, Liliana; DeBarger, Angela; Murray, Elizabeth; Rose, David; Gravel, Jenna; Colker, Alexis M.; Rutstein, Daisy; Vendlinski, Terry – Educational Research and Evaluation, 2013
Standardizing aspects of assessments has long been recognized as a tactic to help make evaluations of examinees fair. It reduces variation in irrelevant aspects of testing procedures that could advantage some examinees and disadvantage others. However, recent attention to making assessment accessible to a more diverse population of students…
Descriptors: Testing Accommodations, Access to Education, Testing, Psychometrics
Peer reviewed | Direct link
Camilli, Gregory – Educational Research and Evaluation, 2013
In the attempt to identify or prevent unfair tests, both quantitative analyses and logical evaluation are often used. For the most part, fairness evaluation is a pragmatic attempt at determining whether procedural or substantive due process has been accorded to either a group of test takers or an individual. In both the individual and comparative…
Descriptors: Alternative Assessment, Test Bias, Test Content, Test Format
Peer reviewed | Direct link
Martone, Andrea; Sireci, Stephen G. – Review of Educational Research, 2009
The authors (a) discuss the importance of alignment for facilitating proper assessment and instruction, (b) describe the three most common methods for evaluating the alignment between state content standards and assessments, (c) discuss the relative strengths and limitations of these methods, and (d) discuss examples of applications of each…
Descriptors: Teaching Methods, Alignment (Education), Student Evaluation, Curriculum Development
Cheyney, Donald A. – ProQuest LLC, 2010
Rubrics are a means to communicate the standards or criteria of an assignment and to assess student work formatively or summatively by faculty, peer, and/or self. Given that assessment is a necessary and mandated component of education, this study sought to summarize what is known about rubrics as an assessment tool for student learning. In this…
Descriptors: Higher Education, Student Evaluation, Program Effectiveness, Scoring Rubrics
Peer reviewed | Direct link
Mislevy, Robert J. – Educational Researcher, 2007
Lissitz and Samuelsen (2007) argue that the unitary conception of validity for educational assessments is too broad to guide applied work. They call for attention to considerations and procedures that focus on "test development and analysis of the test itself" and propose that those activities be collectively termed "content validity." The author…
Descriptors: Content Validity, Test Validity, Test Construction, Student Evaluation
Peer reviewed | Direct link
Embretson, Susan E. – Educational Researcher, 2007
Lissitz and Samuelsen (2007) have proposed a framework that seemingly deems construct validity evidence irrelevant to supporting educational test meaning. The author of this article agrees with Lissitz and Samuelsen that internal evidence establishes test meaning, but she argues that construct validity need not be removed from the validity sphere.…
Descriptors: Construct Validity, Test Validity, Evaluation Methods, Test Construction
Peer reviewed | Direct link
Moss, Pamela A. – Educational Researcher, 2007
In response to Lissitz and Samuelsen (2007), the author reconstructs the historical arguments for the more comprehensive unitary concept of validity and the principles of scientific inquiry underlying it. Her response is organized in terms of four questions: (a) How did validity in educational measurement come to be conceptualized as unitary, and…
Descriptors: Evaluators, Construct Validity, Test Validity, Measurement
Ramos, Mark Louie F. – Online Submission, 2008
The purpose of this study was to construct and evaluate an instrument for determining student preparedness in College Algebra. A 73-item instrument covering the prerequisite arithmetic and high school algebra knowledge for College Algebra was constructed. The instrument was pilot-tested on a freshman population of 595 students. Results of reliability…
Descriptors: Predictive Validity, Item Analysis, Foreign Countries, Algebra
Peer reviewed | Direct link
Gorin, Joanna S. – Educational Researcher, 2007
Lissitz and Samuelsen (2007) propose a new framework for validity theory and terminology, emphasizing a shift in theory and practice toward issues of test content rather than constructs. The author of this article argues that several of Lissitz and Samuelsen's critiques of validity theory focus on previously considered, but subsequently discarded,…
Descriptors: Test Content, Test Validity, Construct Validity, Test Construction