Peer reviewed
Langenfeld, Thomas; Thomas, Jay; Zhu, Rongchun; Morris, Carrie A. – Journal of Educational Measurement, 2020
An assessment of graphic literacy was developed by articulating and subsequently validating a skills-based cognitive model intended to substantiate the plausibility of score interpretations. Model validation involved the use of multiple sources of evidence derived from large-scale field testing and cognitive lab studies. Data from large-scale field…
Descriptors: Evidence, Scores, Eye Movements, Psychometrics
Peer reviewed
Shin, Hyo Jeong; Wilson, Mark; Choi, In-Hee – Journal of Educational Measurement, 2017
This study proposes a structured constructs model (SCM) to examine measurement in the context of a multidimensional learning progression (LP). The LP is assumed to have features that go beyond a typical multidimensional IRT model, in that there are hypothesized to be certain cross-dimensional linkages that correspond to requirements between the…
Descriptors: Middle School Students, Student Evaluation, Measurement Techniques, Learning Processes
Peer reviewed
Embretson, Susan E. – Journal of Educational Measurement, 1992
New developments for solving the validation problem are applied to measuring and validating spatial modifiability. Results from 582 Air Force recruits support construct and criterion-related validities for the cognitive modifiability of spatial visualization items. Results also support modifiability as a direct measurement of learning ability.…
Descriptors: Cognitive Ability, Cognitive Measurement, Concurrent Validity, Construct Validity
Peer reviewed
Koehler, Roger A. – Journal of Educational Measurement, 1974
The purposes of the study were to develop a measure of overconfidence on probabilistic tests, to assess the measurement characteristics of such a measure, and to investigate the relationship of overconfidence on tests to knowledge and to risk-taking propensity. (Author/BB)
Descriptors: Confidence Testing, Measurement Techniques, Multiple Choice Tests, Risk
Peer reviewed
Secolsky, Charles – Journal of Educational Measurement, 1987
For measuring the face validity of a test, Nevo suggested that test takers and nonprofessional users rate items on a five-point scale. This article questions the ability of those raters and the credibility of the aggregated judgment as evidence of the validity of the test. (JAZ)
Descriptors: Content Validity, Measurement Techniques, Rating Scales, Test Items
Peer reviewed
Hanna, Gerald S. – Journal of Educational Measurement, 1977
The effects of providing total and partial immediate feedback to pupils in multiple-choice testing were investigated with fifth- and sixth-grade pupils. The split-half reliability was higher with total feedback than with no feedback. Concurrent validity with a completion test showed all three settings to be nearly identical. (Author/JKS)
Descriptors: Elementary Education, Elementary School Students, Feedback, Forced Choice Technique
Peer reviewed
Stenner, A. Jackson; And Others – Journal of Educational Measurement, 1983
In an attempt to restore the symmetry and balance between the study of person and item variation, this paper presents a novel methodology, construct specification equations, which allows one to ascertain from the lawful behavior of items what an instrument is measuring. (Author/PN)
Descriptors: Measurement Objectives, Measurement Techniques, Research Methodology, Test Construction
Peer reviewed
Ebel, Robert L. – Journal of Educational Measurement, 1982
Reasonable and practical solutions to two major problems confronting the developer of any test of educational achievement (what to measure and how to measure it) are proposed, defended, and defined. (Author/PN)
Descriptors: Measurement Techniques, Objective Tests, Test Construction, Test Items
Peer reviewed
Farr, Roger; Roelke, Patricia – Journal of Educational Measurement, 1971
Descriptors: Classroom Observation Techniques, Comparative Analysis, Measurement Techniques, Rating Scales
Peer reviewed
Homan, Susan; And Others – Journal of Educational Measurement, 1994
A study was conducted with 782 elementary school students to determine whether the Homan-Hewitt Readability Formula could identify the readability of a single-sentence test item. Results indicate that a relationship exists between students' reading grade levels and responses to test items written at higher readability levels. (SLD)
Descriptors: Difficulty Level, Elementary Education, Elementary School Students, Identification
Peer reviewed
Jaradat, Derar; Sawaged, Sari – Journal of Educational Measurement, 1986
The impact of the Subset Selection Technique (SST) for multiple-choice items on certain properties of a test was compared with that of two other methods, the Number Right and the Correction for Guessing formulas. Results indicated that SST outperformed the other two, producing higher reliability and validity without favoring high-risk takers.…
Descriptors: Foreign Countries, Grade 9, Guessing (Tests), Measurement Techniques
Peer reviewed
Sykes, Robert C.; Ito, Kyoko; Fitzpatrick, Anne R.; Ercikan, Kadriye – Journal of Educational Measurement, 1997
The five chapters of this report provide resources that deal with the validity, generalizability, comparability, performance standards, and fairness, equity, and bias of performance assessments. The book is written for experienced educational measurement practitioners, although extensive familiarity with performance assessment is not required.…
Descriptors: Educational Assessment, Measurement Techniques, Performance Based Assessment, Standards
Peer reviewed
Kirsch, Irwin S. – Journal of Educational Measurement, 1980
The construct validity of reading comprehension test items was studied in a two-stage process. Five characteristics of task difficulty were defined, and a heterogeneous set of 52 items was rated for these characteristics. Then correlations were obtained between ratings and item difficulty data. (CTM)
Descriptors: Adults, Cognitive Processes, Difficulty Level, Evaluation Criteria
Peer reviewed
Shavelson, Richard J.; And Others – Journal of Educational Measurement, 1993
Evidence is presented on the generalizability and convergent validity of performance assessments using data from six studies of student achievement that sampled a wide range of measurement facets and methods. Results at individual and school levels indicate that task-sampling variability is the major source of measurement error. (SLD)
Descriptors: Academic Achievement, Educational Assessment, Error of Measurement, Generalizability Theory
Peer reviewed
Messick, Samuel – Journal of Educational Measurement, 1984
Comprehensive assessment in context focuses on the processes and structures involved in subject matter competence as moderated in performance by personal and environmental influences. This article addresses in detail both the nature of developing competence and its measurement in terms of context-dependent task performance. (Author/EGS)
Descriptors: Academic Achievement, Achievement Tests, Cognitive Ability, Cognitive Development