Showing 1 to 15 of 71 results
Peer reviewed
Pellegrino, James W. – Educational Measurement: Issues and Practice, 2020
Professor Gordon argues for a significant reorientation in the focus and impact of assessment in education. For the types of assessment activities that he advocates to prosper and positively impact education, serious attention must be paid to two important topics: (1) the conceptual underpinnings of the assessment practices we develop and use to…
Descriptors: Educational Assessment, Teaching Methods, Learning Processes, Validity
Peer reviewed
Jiangang Hao; Alina A. von Davier; Victoria Yaneva; Susan Lottridge; Matthias von Davier; Deborah J. Harris – Educational Measurement: Issues and Practice, 2024
The remarkable strides in artificial intelligence (AI), exemplified by ChatGPT, have unveiled a wealth of opportunities and challenges in assessment. Applying cutting-edge large language models (LLMs) and generative AI to assessment holds great promise in boosting efficiency, mitigating bias, and facilitating customized evaluations. Conversely,…
Descriptors: Evaluation Methods, Artificial Intelligence, Educational Change, Computer Software
Peer reviewed
Daniel Murphy; Sarah Quesen; Matthew Brunetti; Quintin Love – Educational Measurement: Issues and Practice, 2024
Categorical growth models describe examinee growth in terms of performance-level category transitions, which implies that some percentage of examinees will be misclassified. This paper introduces a new procedure for estimating the classification accuracy of categorical growth models, based on Rudner's classification accuracy index for item…
Descriptors: Classification, Growth Models, Accuracy, Performance Based Assessment
Peer reviewed
Rios, Joseph A.; Ihlenfeldt, Samuel D.; Dosedel, Michael; Riegelman, Amy – Educational Measurement: Issues and Practice, 2020
This systematic review investigated the topics studied and reporting practices of published meta-analyses in educational measurement. Our findings indicated that meta-analysis is not a highly utilized methodological tool in educational measurement; on average, less than one meta-analysis has been published per year over the past 30 years (28…
Descriptors: Meta Analysis, Educational Assessment, Test Format, Testing Accommodations
Peer reviewed
Terry A. Ackerman; Deborah L. Bandalos; Derek C. Briggs; Howard T. Everson; Andrew D. Ho; Susan M. Lottridge; Matthew J. Madison; Sandip Sinharay; Michael C. Rodriguez; Michael Russell; Alina A. von Davier; Stefanie A. Wind – Educational Measurement: Issues and Practice, 2024
This article presents the consensus of a National Council on Measurement in Education Presidential Task Force on Foundational Competencies in Educational Measurement. Foundational competencies are those that support future development of additional professional and disciplinary competencies. The authors develop a framework for foundational…
Descriptors: Educational Assessment, Competence, Skill Development, Communication Skills
Peer reviewed
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage tests (MSTs) is difficult, particularly when the test spans several grade levels, and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
Peer reviewed
Ji, Xuejun Ryan; Wu, Amery D. – Educational Measurement: Issues and Practice, 2023
The Cross-Classified Mixed Effects Model (CCMEM) has been demonstrated by measurement specialists to be a flexible framework for evaluating reliability. Reliability can be estimated based on the variance components of the test scores. Building on that work, this study extends the CCMEM to the evaluation of validity evidence.…
Descriptors: Measurement, Validity, Reliability, Models
Peer reviewed
Wolff, Fabian – Educational Measurement: Issues and Practice, 2021
The internal/external frame of reference (I/E) model describes the formation of students' math and verbal self-concepts by the joint effects of social comparisons (where students compare their subject-specific achievements with those of their classmates) and dimensional comparisons (where students compare their math and verbal achievements with…
Descriptors: Self Concept, Concept Formation, Mathematics Achievement, Verbal Ability
Peer reviewed
Yang, Li-Ping; Xin, Tao – Educational Measurement: Issues and Practice, 2022
The upgrade of educational information technology triggered by COVID-19 has shaped a new educational order and new educational forms. As a result, traditional educational measurement is now facing a systematic transformation: from Assessment of Learning (AoL) to Assessment for Learning (AfL), and finally to Assessment as Learning (AaL).…
Descriptors: Educational Assessment, Information Technology, Educational Technology, COVID-19
Peer reviewed
Lavery, Matthew Ryan; Bostic, Jonathan D.; Kruse, Lance; Krupa, Erin E.; Carney, Michele B. – Educational Measurement: Issues and Practice, 2020
Since it was formalized by Kane, the argument-based approach to validation has been promoted as the preferred method for validating interpretations and uses of test scores. Because validation is discussed in terms of arguments, and arguments are both interactive and social, the present review systematically examines the scholarly arguments which…
Descriptors: Persuasive Discourse, Validity, Research Methodology, Peer Evaluation
Peer reviewed
Leighton, Jacqueline P.; Lehman, Blair – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Jacqueline Leighton and Dr. Blair Lehman review differences between think-aloud interviews to measure problem-solving processes and cognitive labs to measure comprehension processes. Learners are introduced to historical, theoretical, and procedural differences between these methods and how to use and analyze…
Descriptors: Protocol Analysis, Interviews, Problem Solving, Cognitive Processes
Peer reviewed
Pepper, David – Educational Measurement: Issues and Practice, 2020
The Standards for Educational and Psychological Testing identify several strands of validity evidence that may be needed as support for particular interpretations and uses of assessments. Yet assessment validation often does not seem guided by these Standards, with validations lacking a particular strand even when it appears relevant to an…
Descriptors: Validity, Foreign Countries, Achievement Tests, International Assessment
Peer reviewed
Marion, Scott; Domaleski, Chris – Educational Measurement: Issues and Practice, 2019
This article offers a critique of the validity argument put forward by Camara, Mattern, Croft, and Vispoel (2019) regarding the use of college-admissions tests in high school assessment systems. We challenge their argument in two main ways. First, we illustrate why their argument fails to address broader issues related to consequences of using…
Descriptors: College Entrance Examinations, High School Students, Test Use, Validity
Peer reviewed
Kostal, Jack W.; Sackett, Paul R.; Kuncel, Nathan R.; Walmsley, Philip T.; Stemig, Melissa S. – Educational Measurement: Issues and Practice, 2017
Previous research has established that SAT scores and high school grade point average (HSGPA) differ in their predictive power and in the size of mean differences across racial/ethnic groups. However, the SAT is scaled nationally across all test takers while HSGPA is scaled locally within a school. In this study, the researchers propose that this…
Descriptors: College Entrance Examinations, Scaling, Grade Point Average, Differences
Peer reviewed
Gordon, Edmund W. – Educational Measurement: Issues and Practice, 2020
Drawing upon his experience, more than 60 years ago, as a psychometric support person to a very special teacher of brain-damaged children, the author of this article reflects on the productive use of educational assessments and the data they yield to educate: assessment in the service of learning. Findings from the Gordon Commission on the Future of…
Descriptors: Psychometrics, Student Evaluation, Special Education Teachers, Educational Assessment