Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 2 |
Source
Applied Measurement in… | 1 |
Center for Research on… | 1 |
Educational Psychologist | 1 |
Educational and Psychological… | 1 |
Evaluation and Program… | 1 |
Journal of Educational… | 1 |
Author
Shavelson, Richard J. | 7 |
Yin, Yue | 2 |
Cronbach, Lee J. | 1 |
Ruiz-Primo, Maria Araceli | 1 |
Solano-Flores, Guillermo | 1 |
Publication Type
Journal Articles | 5 |
Reports - Evaluative | 3 |
Reports - Descriptive | 2 |
Information Analyses | 1 |
Reports - Research | 1 |
Education Level
Grade 8 | 1 |
Audience
Researchers | 1 |
Location
California | 1 |
Shavelson, Richard J. – Educational Psychologist, 2013
E. L. Thorndike contributed significantly to the field of educational and psychological testing as well as more broadly to psychological studies in education. This article follows in his testing legacy. I address the escalating demand, across societal sectors, to measure individual and group competencies. In formulating an approach to measuring…
Descriptors: Competence, Psychology, Psychological Testing, Psychological Studies
Yin, Yue; Shavelson, Richard J. – Applied Measurement in Education, 2008
In the first part of this article, the use of Generalizability (G) theory in examining the dependability of concept map assessment scores and designing a concept map assessment for a particular practical application is discussed. In the second part, the application of G theory is demonstrated by comparing the technical qualities of two frequently…
Descriptors: Generalizability Theory, Concept Mapping, Validity, Reliability
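For readers unfamiliar with the method named in this record, here is a sketch of the standard G-theory setup for a fully crossed persons x tasks x raters (p x t x r) design of the kind typically used for concept map scores; these are the generic formulas, not reproduced from the article:

\sigma^2(X_{ptr}) = \sigma^2_p + \sigma^2_t + \sigma^2_r + \sigma^2_{pt} + \sigma^2_{pr} + \sigma^2_{tr} + \sigma^2_{ptr,e}

E\rho^2 = \sigma^2_p \Big/ \left( \sigma^2_p + \sigma^2_{pt}/n'_t + \sigma^2_{pr}/n'_r + \sigma^2_{ptr,e}/(n'_t n'_r) \right)

Increasing the number of tasks n'_t or raters n'_r in a decision study shrinks the corresponding error terms, which is how G theory guides the design of an assessment for a particular practical application.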
Cronbach, Lee J.; Shavelson, Richard J. – Educational and Psychological Measurement, 2004
In 1997, noting that the 50th anniversary of the publication of "Coefficient Alpha and the Internal Structure of Tests" was fast approaching, Lee Cronbach planned what have become the notes published here. His aim was to point out the ways in which his views on coefficient alpha had evolved, doubting now that the coefficient was the best way of…
Descriptors: Generalizability Theory, Reliability, Statistical Analysis
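For context (the formula itself is not restated in the abstract), coefficient alpha for k items with item variances \sigma^2_i and total-score variance \sigma^2_X is

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_X} \right)

In generalizability theory, alpha corresponds to the generalizability coefficient for relative decisions in a persons x items design with items as the single facet; this standard result is noted here only to connect the record to the generalizability-theory descriptor.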
Yin, Yue; Shavelson, Richard J. – Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2004
In the first part of this paper we discuss the feasibility of using Generalizability (G) Theory to examine the dependability of concept map assessments and to design a concept map assessment for a particular practical application. In the second part, we apply G theory to compare the technical qualities of two frequently used mapping techniques:…
Descriptors: Formative Evaluation, Generalizability Theory, Concept Mapping, Comparative Analysis

Shavelson, Richard J.; Solano-Flores, Guillermo; Ruiz-Primo, Maria Araceli – Evaluation and Program Planning, 1998
Research on developing technology for large-scale performance assessments in science is reported briefly, and a conceptual framework is presented for defining, generating, and evaluating science performance assessments. Types of tasks are described, and the technical qualities of performance assessments are discussed in the context of…
Descriptors: Educational Technology, Generalizability Theory, Models, Performance Based Assessment

Shavelson, Richard J.; And Others – Journal of Educational Measurement, 1993
Evidence is presented on the generalizability and convergent validity of performance assessments using data from six studies of student achievement that sampled a wide range of measurement facets and methods. Results at individual and school levels indicate that task-sampling variability is the major source of measurement error. (SLD)
Descriptors: Academic Achievement, Educational Assessment, Error of Measurement, Generalizability Theory
Shavelson, Richard J.; And Others – 1993
In this paper, performance assessments are cast within a sampling framework. A performance assessment score is viewed as a sample of student performance drawn from a complex universe defined by a combination of all possible tasks, occasions, raters, and measurement methods. Using generalizability theory, the authors present evidence bearing on the…
Descriptors: Academic Achievement, Educational Assessment, Error of Measurement, Evaluators
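As an illustration of the sampling framework described in this record, the following minimal sketch (with made-up scores, not data from the studies) estimates variance components for a persons x tasks design from the crossed-design mean squares; the size of the person x task component relative to the person component shows how much task sampling contributes to measurement error.

import numpy as np

# Hypothetical score matrix: rows = students (persons), columns = tasks.
scores = np.array([
    [4.0, 2.0, 3.0, 1.0],
    [5.0, 3.0, 4.0, 2.0],
    [3.0, 1.0, 4.0, 2.0],
    [4.0, 2.0, 2.0, 3.0],
    [5.0, 4.0, 3.0, 2.0],
])
n_p, n_t = scores.shape

grand = scores.mean()
person_means = scores.mean(axis=1)
task_means = scores.mean(axis=0)

# Mean squares for a fully crossed p x t design with one observation per cell.
ms_p = n_t * np.sum((person_means - grand) ** 2) / (n_p - 1)
ms_t = n_p * np.sum((task_means - grand) ** 2) / (n_t - 1)
resid = scores - person_means[:, None] - task_means[None, :] + grand
ms_pt = np.sum(resid ** 2) / ((n_p - 1) * (n_t - 1))

# Estimated variance components via the expected-mean-square equations.
var_pt_e = ms_pt                         # person x task interaction, confounded with error
var_p = max((ms_p - ms_pt) / n_t, 0.0)   # universe-score (person) variance
var_t = max((ms_t - ms_pt) / n_p, 0.0)   # task difficulty variance

# Relative generalizability coefficient for a decision based on n_t tasks.
g_coef = var_p / (var_p + var_pt_e / n_t)
print(f"sigma^2_p = {var_p:.3f}, sigma^2_t = {var_t:.3f}, sigma^2_pt,e = {var_pt_e:.3f}")
print(f"Relative G coefficient with {n_t} tasks: {g_coef:.3f}")

Adding raters or occasions as further facets follows the same pattern with additional variance components, which is how the cited studies traced measurement error to task sampling rather than to raters.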