Showing 1 to 15 of 46 results
Peer reviewed | Direct link
Russell, Michael – Educational Measurement: Issues and Practice, 2022
Despite agreement about the central importance of validity for educational and psychological testing, consensus regarding the definition of validity remains elusive. Differences in the definition of validity are examined, revealing that a potential cause of disagreement stems from differences in word use and the meanings given to key terms commonly…
Descriptors: Test Validity, Psychological Testing, Educational Testing, Vocabulary
Peer reviewed | Direct link
Daniel Murphy; Sarah Quesen; Matthew Brunetti; Quintin Love – Educational Measurement: Issues and Practice, 2024
Categorical growth models describe examinee growth in terms of performance-level category transitions, which implies that some percentage of examinees will be misclassified. This paper introduces a new procedure for estimating the classification accuracy of categorical growth models, based on Rudner's classification accuracy index for item…
Descriptors: Classification, Growth Models, Accuracy, Performance Based Assessment
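The Rudner-style classification accuracy index referenced in this abstract can be illustrated with a minimal Python sketch. It assumes normally distributed ability estimates with known standard errors and fixed performance-level cut scores; the function name, values, and aggregation are illustrative, not the authors' procedure.

from statistics import NormalDist

def rudner_accuracy(theta_hats, ses, cuts):
    """Average probability that each examinee's true ability falls in the
    category implied by their observed estimate (illustrative sketch)."""
    bounds = [float("-inf")] + list(cuts) + [float("inf")]
    total = 0.0
    for theta, se in zip(theta_hats, ses):
        # category assigned by the observed estimate
        k = sum(theta >= c for c in cuts)
        lo, hi = bounds[k], bounds[k + 1]
        dist = NormalDist(mu=theta, sigma=se)
        total += dist.cdf(hi) - dist.cdf(lo)
    return total / len(theta_hats)

# Example: three examinees, hypothetical cut scores at -0.5 and 0.5
print(rudner_accuracy([-1.0, 0.1, 1.2], [0.3, 0.3, 0.3], [-0.5, 0.5]))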
Peer reviewed | Direct link
Coggeshall, Whitney Smiley – Educational Measurement: Issues and Practice, 2021
The continuous testing framework, where both successful and unsuccessful examinees have to demonstrate continued proficiency at frequent prespecified intervals, is a framework that is used in noncognitive assessment and is gaining in popularity in cognitive assessment. Despite the rigorous advantages of this framework, this paper demonstrates that…
Descriptors: Classification, Accuracy, Testing, Failure
Peer reviewed | Direct link
Newton, Paul E. – Educational Measurement: Issues and Practice, 2020
Educational assessment involves eliciting, transmitting, and receiving information concerning the level of proficiency of a learner in a specified domain. With that in mind, it is perhaps surprising that the literature seems to make very little use of the signal processing metaphor. The present article begins by making a general case for greater…
Descriptors: Educational Assessment, Student Evaluation, Evaluative Thinking, Test Validity
Peer reviewed | Direct link
Pepper, David – Educational Measurement: Issues and Practice, 2020
The Standards for Educational and Psychological Testing identify several strands of validity evidence that may be needed as support for particular interpretations and uses of assessments. Yet assessment validation often does not seem guided by these Standards, with validations lacking a particular strand even when it appears relevant to an…
Descriptors: Validity, Foreign Countries, Achievement Tests, International Assessment
Peer reviewed | Direct link
O'Leary, Timothy M.; Hattie, John A. C.; Griffin, Patrick – Educational Measurement: Issues and Practice, 2017
Validity is the most fundamental consideration in test development. Understandably, much time, effort, and money is spent in its pursuit. Central to the modern conception of validity are the interpretations made, and uses planned, on the basis of test scores. There is, unfortunately, however, evidence that test users have difficulty understanding…
Descriptors: Test Interpretation, Scores, Test Validity, Evidence
Peer reviewed | Direct link
Marion, Scott; Domaleski, Chris – Educational Measurement: Issues and Practice, 2019
This article offers a critique of the validity argument put forward by Camara, Mattern, Croft, and Vispoel (2019) regarding the use of college-admissions tests in high school assessment systems. We challenge their argument in two main ways. First, we illustrate why their argument fails to address broader issues related to consequences of using…
Descriptors: College Entrance Examinations, High School Students, Test Use, Validity
Peer reviewed | Direct link
Gordon, Edmund W. – Educational Measurement: Issues and Practice, 2020
Drawing upon his experience, more than 60 years ago, as a psychometric support person to a very special teacher of brain-damaged children, the author of this article reflects on the productive use of educational assessments and the data they yield to educate: assessment in the service of learning. Findings from the Gordon Commission on the Future of…
Descriptors: Psychometrics, Student Evaluation, Special Education Teachers, Educational Assessment
Peer reviewed | Direct link
Sijtsma, Klaas – Educational Measurement: Issues and Practice, 2015
I discuss the contribution by Davenport, Davison, Liou, & Love (2015) in which they relate reliability represented by coefficient alpha to formal definitions of internal consistency and unidimensionality, both proposed by Cronbach (1951). I argue that coefficient alpha is a lower bound to reliability and that concepts of internal consistency and…
Descriptors: Reliability, Mathematics, Validity, Test Construction
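For context on the statistic under discussion, here is a minimal Python sketch of coefficient alpha, the quantity argued above to be a lower bound to reliability; the data and function name are illustrative only.

def cronbach_alpha(scores):
    """scores: list of examinee rows, each a list of k item scores."""
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Example: five examinees, three items (hypothetical data)
print(cronbach_alpha([[1, 2, 2], [2, 3, 3], [0, 1, 1], [3, 3, 2], [2, 2, 3]]))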
Peer reviewed | Direct link
Gaertner, Matthew N.; McClarty, Katie Larsen – Educational Measurement: Issues and Practice, 2016
This rejoinder provides a reply to comments on a middle school college readiness index, which was devised to generate earlier and more nuanced readiness diagnoses for K-12 students. Issues of reliability and validity (including construct underrepresentation and construct-irrelevant variance) are discussed in detail. In addition, comments from…
Descriptors: Middle School Students, College Readiness, Measures (Individuals), Reliability
Peer reviewed | Direct link
Lazowski, Rory A.; Barron, Kenneth E.; Kosovich, Jeff J.; Hulleman, Chris S. – Educational Measurement: Issues and Practice, 2016
In an article published in "Educational Measurement: Issues and Practice," Gaertner and McClarty (2015) discuss a college readiness index based, in part, on nonacademic or noncognitive factors measured in middle school. Such an index is laudable as it incorporates important constructs beyond academic achievement measures that may be…
Descriptors: College Readiness, Measures (Individuals), Student Motivation, Validity
Peer reviewed | Direct link
Camara, Wayne J.; Mattern, Krista; Croft, Michelle; Vispoel, Sara; Nichols, Paul – Educational Measurement: Issues and Practice, 2019
In 2018, 26 states administered a college admissions test to all public school juniors. Nearly half of those states proposed to use those scores as their academic achievement indicators for federal accountability under the Every Student Succeeds Act (ESSA); many others are planning to use those scores for other accountability purposes.…
Descriptors: College Entrance Examinations, Accountability, Scores, Academic Achievement
Peer reviewed | Direct link
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
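One common way to operationalize the very short response times mentioned in this abstract is to flag responses below a fixed rapid-guessing threshold and summarize an examinee's effort as the proportion of unflagged responses. The minimal Python sketch below illustrates this idea; the 3-second threshold and names are assumptions, not the article's method.

THRESHOLD_SECONDS = 3.0  # illustrative rapid-guessing threshold

def response_time_effort(response_times):
    """Proportion of item responses at or above the rapid-guessing threshold."""
    solution_behavior = [t >= THRESHOLD_SECONDS for t in response_times]
    return sum(solution_behavior) / len(response_times)

# Example: one examinee's per-item response times in seconds
print(response_time_effort([12.4, 1.1, 8.9, 0.8, 15.2]))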
Peer reviewed | Direct link
Camara, Wayne – Educational Measurement: Issues and Practice, 2014
This article reviews the intended uses of these college- and career-readiness assessments with the goal of articulating an appropriate validity argument to support such uses. These assessments differ fundamentally from today's state assessments employed for state accountability. Current assessments are used to determine if students have…
Descriptors: College Readiness, Career Readiness, Aptitude Tests, Test Use
Peer reviewed | Direct link
Suto, Irenka – Educational Measurement: Issues and Practice, 2012
Internationally, many assessment systems rely predominantly on human raters to score examinations. Arguably, this facilitates the assessment of multiple sophisticated educational constructs, strengthening assessment validity. It can introduce subjectivity into the scoring process, however, engendering threats to accuracy. The present objectives…
Descriptors: Evaluation Methods, Scoring, Qualitative Research, Protocol Analysis